standard deviation centered around the centroid of the sampled city;
3) ensure the patch does not overlap with any existing sampled patches;
4) ensure that there exist 4 patches of imagery from 4 different seasons—each selected from
a 60-day window centered about the vernal and autumnal equinoxes and the summer and
winter solstices (within a 2-year window)—with less than 20% cloud coverage;
5) ensure that none of these patches contain nodata pixels;
6) if the previous three criteria are met, download the imagery corresponding to the patch.
If any step in this algorithm fails (there is overlap, or a location does not have a set of 4 cloud-free,
nodata-free images), the sample is skipped and we start over at step 1. This algorithm is designed to
maximize the diversity of images in the dataset, relying on the assumption that most of the diversity
in land cover is centered around large cities, with a gradual transition between urban, suburban,
farmland, and forest. Uniform sampling would instead yield images that are 70% ocean, 10%
desert, and 9% forest, with very little dataset diversity [12]. Note that this sampling strategy
does result in decreased sampling from regions with persistent cloud cover (tropical rainforests) or
lower populations (desert, taiga, tundra, and polar biomes). By sampling different points in time, we
allow seasonal differences to act as natural forms of data augmentation during contrastive learning.
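For concreteness, steps 1–2 can be sketched as follows. The city list and the 50 km standard deviation are illustrative assumptions for this sketch, not the exact values used in our pipeline.

```python
import numpy as np

# Hypothetical city centroids (lat, lon) and an assumed 50 km standard
# deviation; the real sampler draws from a much larger list of populous cities.
CITIES = np.array([(40.71, -74.01), (19.08, 72.88), (-23.55, -46.63)])
STD_KM = 50.0
KM_PER_DEG_LAT = 111.0  # rough km-to-degree conversion

def sample_patch_center(rng: np.random.Generator) -> tuple[float, float]:
    """Steps 1-2: pick a random city, then perturb its centroid by a Gaussian."""
    lat, lon = CITIES[rng.integers(len(CITIES))]
    std_lat = STD_KM / KM_PER_DEG_LAT
    std_lon = std_lat / np.cos(np.radians(lat))  # meridians converge with latitude
    return float(rng.normal(lat, std_lat)), float(rng.normal(lon, std_lon))

center = sample_patch_center(np.random.default_rng(0))
```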
Differences between our sampling strategy and the one used by SSL4EO-S12 are as follows.
SSL4EO-S12 used Euclidean distance between patch centroids and a grid heuristic to detect overlap
between patches. This method has an $O(N^2/M)$ average run-time complexity, where $N$ is the
total number of samples and $M$ is the number of grid cells. We replace this with an $O(N \log N)$
R-tree [56], removing the 1–3% overlap reported by Wang et al. [14] due to use of this grid heuristic.
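A minimal sketch of this rejection test using the Python rtree package (an R-tree over axis-aligned patch bounding boxes; coordinate details are simplified):

```python
from rtree import index

idx = index.Index()  # R-tree over previously accepted patch bounding boxes

def try_add_patch(i: int, bounds: tuple[float, float, float, float]) -> bool:
    """Accept patch i only if its (minx, miny, maxx, maxy) box is overlap-free.

    Each query/insert is O(log N) on average, so N samples cost O(N log N)
    overall, versus O(N^2/M) for pairwise checks within grid cells.
    """
    if next(idx.intersection(bounds), None) is not None:
        return False  # overlap detected: skip this sample and resample
    idx.insert(i, bounds)
    return True

assert try_add_patch(0, (0.0, 0.0, 1.0, 1.0))
assert not try_add_patch(1, (0.5, 0.5, 1.5, 1.5))  # overlaps patch 0
```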
Among the cloud-free images in the aforementioned time windows, we sort by cloud cover instead
of date to provide the best possible image patches. We also skip patches containing nodata pixels
due to sampling near the border of a scene, which we found to be prevalent (on the order of 25%) in
prior datasets. We found it necessary to increase the cloud coverage threshold from 10% to 20% due
to the larger patch size (Sentinel-2 has a 10 m resolution, but Landsat has a 30 m resolution, resulting
in patches that cover 9× the area) and avoidance of nodata pixels. Finally, since the resolution
of most bands is the same, we resample all thermal and panchromatic bands to a 30 m resolution,
allowing all bands to be concatenated into a single file.
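The resampling step might look roughly like the following rasterio sketch; the band filenames are hypothetical, and bilinear resampling is one plausible choice (the exact method is not specified here).

```python
import numpy as np
import rasterio
from rasterio.enums import Resampling

# Hypothetical single-band files for one patch; thermal and panchromatic
# bands may be distributed at other resolutions and must be brought to 30 m.
band_files = ["B1.TIF", "B6.TIF", "B8.TIF"]
target_shape = (264, 264)  # 264 x 264 px at 30 m/px

bands = []
for path in band_files:
    with rasterio.open(path) as src:
        # Resample each band onto the common 30 m grid while reading.
        bands.append(
            src.read(1, out_shape=target_shape, resampling=Resampling.bilinear)
        )

stack = np.stack(bands)  # (bands, 264, 264), ready to write as a single file
```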
We download all data from Google Earth Engine (GEE) [57], with a total of 250K locations, each
sampled at 4 different seasons, for a total of 1M unlabeled image patches per sensor/product and
5M in total. Each image is 264 × 264 px, corresponding to 7.92 × 7.92 km at a 30 m/px resolution.
There are separate datasets for TM TOA, ETM+ TOA, ETM+ SR, OLI/TIRS TOA, and OLI SR.
We decided not to include the RBV and MSS sensors due to limited data availability on GEE and
because their age makes it impossible to create benchmark datasets for them. Since
TM and ETM+ use the same sensor for SR bands, we did not create a separate dataset for TM SR.
For similar reasons, there is a single dataset for OLI/TIRS and OLI-2/TIRS-2. TM data is collected
from 4 different seasons in 2009–2010, as the TM sensor failed in November 2011. ETM+ data is
collected from 2001–2002, as the scan line corrector (SLC) failed in May 2003, resulting in images
with significant nodata pixels. OLI/TIRS data is collected from 2021–2022. See Figure 3 for a map
of the geographical distribution for each sensor. Note that it is not possible to sample high latitudes
due to lack of winter imagery.
All TOA and SR datasets represent a parallel corpus (the TOA and SR images are taken at the same
locations and dates). Due to differences in collection years and cloud coverage/nodata pixels, it was
not possible to create a parallel corpus between sensors. However, approximately 50% of TM and
ETM+, 40% of TM and OLI/TIRS, and 40% of ETM+ and OLI/TIRS images are sampled from
the same location, allowing for multimodal data fusion studies.

Figure 3: Geographical distribution of the SSL4EO-L dataset, including the (a) Landsat 8–9
OLI/TIRS, (b) Landsat 4–5 TM, and (c) Landsat 7 ETM+ splits. Surface reflectance (SR) and
top of atmosphere (TOA) products are sampled from the same locations per sensor.

The official scale factors suggested by the USGS to map between Level-1 and Level-2 Landsat
imagery² and the visualization range recommended by GEE for each sensor are used to map from
float32 to uint8. The resulting datasets are 274–385 GB when compressed and can be downloaded
from Hugging Face³ using TorchGeo.

²https://www.usgs.gov/faqs/how-do-i-use-scale-factor-landsat-level-2-science-products
³https://huggingface.co/torchgeo
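As an illustrative sketch of this conversion for Level-2 surface reflectance bands: the USGS scale factors map raw digital numbers to reflectance, which is then clipped to a visualization range and quantized. The 0–0.3 range below is an assumed, typical reflectance range; the actual per-sensor ranges recommended by GEE may differ.

```python
import numpy as np

# USGS Collection 2 Level-2 surface reflectance scale factors.
SCALE, OFFSET = 0.0000275, -0.2

def to_uint8(dn: np.ndarray, vmin: float = 0.0, vmax: float = 0.3) -> np.ndarray:
    """Map raw DNs to reflectance, then linearly rescale to uint8."""
    reflectance = dn.astype(np.float32) * SCALE + OFFSET
    scaled = np.clip((reflectance - vmin) / (vmax - vmin), 0.0, 1.0)
    return (scaled * 255).astype(np.uint8)

patch = to_uint8(np.array([[7273, 18182]], dtype=np.uint16))  # ~0.0 and ~0.3
```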
2.2 Dataset archaeology
In order to benchmark the ability of our learned representations to transfer to downstream appli-
cations, we require curated benchmark datasets for evaluation. Although there exist ∼10 semantic
segmentation datasets for OLI/TIRS TOA, an extensive literature review found almost no benchmark
datasets for other sensors, products, or tasks. This is due to both their age (deep learning was not
commonplace in the field of remote sensing until recently) and the fact that semantic segmentation
is the primary task for which lower resolution satellite imagery is used.
A single classification dataset, Statlog [58], was found for the MSS sensor. However, this dataset
is composed of 3 × 3 px images, making it unsuitable for evaluation of CNN and ViT backbones.
For the task of semantic segmentation for cloud cover, three ETM+ TOA datasets were found: L7
SPARCS [59], L7 Irish [60, 61], and L7 Scaramuzza [62]. Each of these datasets also has a
corresponding dataset for OLI/TIRS TOA (L8 SPARCS [63, 64], L8 Biome [65, 66], and L8
Scaramuzza [67]), making it possible to compare learned representations across sensors. No benchmark
datasets for TM or ETM+ SR were ever found. The L7 SPARCS dataset, while thought to be lost to
time, was eventually recovered from a hard drive found in the closet of one of the dataset’s authors.
The majority of the aforementioned cloud segmentation datasets are official datasets used by the
USGS to validate their cloud detection algorithms. Among these datasets, we chose to use L7 Irish
and L8 Biome due to their larger size and greater number of citations.
2.2.1 L7 Irish dataset
The L7 Irish dataset, originally selected by Irish et al. [68] and later digitized by Scaramuzza et al.
[61], is a validation dataset for cloud cover assessment algorithms composed of 206 Landsat 7
ETM+ Level-1G scenes and manually generated cloud masks divided among 9 unique biomes.
Each scene is a 9-band, roughly 8000 × 8000 px multispectral image with a 30 m/px resolution.
Cloud masks consist of 5 classes: 1) fill, 2) cloud shadow, 3) clear, 4) thin cloud, and 5) cloud.
There are 2015 [69] and 2019 [60] versions of this dataset available for download. Unfortunately,
both versions have numerous issues that make them difficult to use for evaluation. The 2015 version
contains 1 scene with a corrupted thermal band file, 2 scenes that are missing masks, 1 scene with an
inconsistent filename format, and documented class labels that do not match the actual class labels
used. Additionally, there is no way to programmatically download the entire dataset. All 206 files
must be manually downloaded by clicking each link individually, with a limit of 6 parallel downloads,
requiring 3–4 hrs of constant supervision and clicking a new link every 5 min. The 2019 version has even more issues,
including 5 scenes with corrupted thermal band files, 1 scene missing geolocation, 6 scenes with
inconsistent filename formats, and inconsistent thermal band resolutions. Although 17% of masks
match the documented labels, the other 83% use a completely different mapping, with
both the clear and fill classes mapped to the same value.
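Correcting such a mismapped mask amounts to a value-for-value relabeling plus restoring fill pixels, sketched below; the numeric class values here are hypothetical, since the actual mappings had to be determined by inspection.

```python
import numpy as np

# Hypothetical relabeling: keys are values found in a mismapped mask, values
# are the documented classes (fill, cloud shadow, clear, thin cloud, cloud).
REMAP = {0: 128, 64: 64, 128: 192, 192: 255}

def fix_mask(mask: np.ndarray, nodata: np.ndarray) -> np.ndarray:
    """Relabel mask values, then copy fill (nodata) pixels from the image."""
    out = mask.copy()
    for old, new in REMAP.items():
        out[mask == old] = new
    out[nodata] = 0  # fill class, taken from the image's nodata footprint
    return out
```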
In order to use this dataset for evaluation, we start with the 2015 version and use scenes from the
2019 version to replace corrupted images and missing masks. We correct the class mapping of
copied masks and copy the fill pixels from the images to the masks. We convert all images to Cloud