We convert all images to Cloud Optimized GeoTIFFs (COGs), resample them to 30 m resolution, and stack them into single multi-band files with consistent filenames. The compression algorithm used by COGs results in a dataset that is 33% of the original size and is therefore faster to download and load from disk. The final ML-ready dataset is available on Hugging Face and can be automatically downloaded using TorchGeo.
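The conversion and stacking step can be sketched with rasterio and GDAL's COG driver (GDAL >= 3.1); the filenames and choice of resampling below are illustrative assumptions, not the exact pipeline:

    import rasterio
    from rasterio.enums import Resampling

    # Hypothetical per-band input files for one Landsat 7 ETM+ scene.
    band_paths = [f"LE07_example_B{b}.TIF" for b in range(1, 9)]

    # Use the first band as the reference 30 m grid.
    with rasterio.open(band_paths[0]) as ref:
        height, width = ref.height, ref.width
        crs, transform, dtype = ref.crs, ref.transform, ref.dtypes[0]

    # Resample every band to the reference grid.
    bands = []
    for path in band_paths:
        with rasterio.open(path) as src:
            bands.append(src.read(1, out_shape=(height, width),
                                  resampling=Resampling.bilinear))

    # Write a single compressed, multi-band Cloud Optimized GeoTIFF.
    profile = {
        "driver": "COG", "dtype": dtype, "count": len(bands),
        "height": height, "width": width, "crs": crs,
        "transform": transform, "compress": "deflate",
    }
    with rasterio.open("LE07_example_all_bands.tif", "w", **profile) as dst:
        for i, band in enumerate(bands, start=1):
            dst.write(band, i)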
2.2.2 L8 Biome dataset

The L8 Biome dataset, created by Foga et al. [65], is a validation dataset for cloud cover assessment algorithms consisting of 96 Landsat 8 OLI/TIRS Level-1T scenes and manually generated cloud masks, evenly divided between 8 unique biomes. Each scene is an 11-band, roughly 9000 × 9000 px multispectral image at 30 m/px resolution. Cloud masks consist of the same 5 classes as L7 Irish.

Comparatively, L8 Biome has fewer issues than L7 Irish. The masks lack geolocation, but we can copy it from the image files. While the dataset can be downloaded programmatically, doing so requires scraping a webpage for 96 different URLs, one per scene. We convert the raw uint16 images to uint8 to match L7 Irish and create compressed COGs of all files, resulting in a dataset that is 9% of the original size. We resample all images to 30 m/px resolution and stack them into single multi-band files. The dataset is available on Hugging Face and can be automatically downloaded using TorchGeo.
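Assuming TorchGeo's L7Irish and L8Biome dataset classes (the root directories below are placeholders), downloading and loading both datasets might look like:

    from torchgeo.datasets import L7Irish, L8Biome

    # Download (if necessary) the ML-ready datasets from Hugging Face and
    # index them as geospatial datasets.
    irish = L7Irish("data/l7irish", download=True)
    biome = L8Biome("data/l8biome", download=True)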
2.3 SSL4EO-L benchmark dataset

As there are no existing benchmark datasets for TM or ETM+ SR, we need to design our own. Crucially, we want a single benchmark dataset that can be used for a consistent comparison across all 5 sensors/products for which we are pre-training models. We create our own land cover classification datasets based on NLCD [70] and CDL [71] masks, described in more detail below. They are the only large, Landsat-based semantic segmentation masks with a long enough history to benchmark foundation models for historical satellites.
Our sampling strategy is similar to the one used for our pre-training dataset, with a few differences. As CDL only exists for the continental U.S. (CONUS), we restrict our sampling to CONUS. To achieve maximum coverage, especially in lower population regions where agriculture is most prevalent, we replace the city-centered Gaussian distribution with a uniform sampling distribution. We choose a single 60-day window centered around August 1st, when crop types are easiest to distinguish. As CDL data is not available before the ETM+ SLC failure, we do not exclude nodata pixels for this sensor. Additionally, nodata masks are copied from SLC-off imagery to the label masks so as to avoid penalizing models for making incorrect predictions where there is no data (see the sketch below). The 2019 NLCD and CDL datasets are used for ETM+ and OLI/TIRS evaluation, since 2019 is the most recent year for which both datasets exist. The 2011 datasets are used for TM, since 2011 is the most recent year in which Landsat 5 and NLCD/CDL overlap. These years differ from the years collected for our pre-training dataset, allowing us to accurately measure performance on images that the pre-trained models have never seen before.
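A minimal numpy sketch of this nodata propagation (array names and the choice of 0 as the nodata/background value are assumptions):

    import numpy as np

    # image: (C, H, W) uint8 Landsat patch; mask: (H, W) uint8 label patch.
    image = np.random.randint(0, 256, (7, 264, 264), dtype=np.uint8)  # example
    mask = np.random.randint(0, 5, (264, 264), dtype=np.uint8)        # example

    # Treat pixels that are 0 in every band (e.g., SLC-off gaps) as nodata.
    nodata = (image == 0).all(axis=0)

    # Copy the nodata footprint into the label mask so models are not
    # penalized for predictions made where there is no valid imagery.
    mask[nodata] = 0  # 0 = background/nodata class (assumed)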
The resulting dataset consists of 25K Landsat, NLCD, and CDL triplets, converted from float32 to uint8 using the same scaling as above. All images have the same resolution and dimensions as the pre-training dataset. The datasets form a parallel corpus between TOA and SR products and have approximately 85% spatial overlap across sensors, although not necessarily during the same year, allowing for multimodal data fusion studies. All datasets are available for download from Hugging Face using the TorchGeo library, making it easy for other researchers to compare against our preliminary benchmark results.
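TorchGeo also ships a dataset class for this benchmark; a hedged sketch of loading one sensor/product combination (argument names and values follow our reading of the TorchGeo documentation and may differ across versions):

    from torchgeo.datasets import SSL4EOLBenchmark

    # Landsat 8 OLI/TIRS TOA imagery paired with CDL masks.
    ds = SSL4EOLBenchmark(
        root="data/ssl4eo_l_benchmark",
        sensor="oli_tirs_toa",
        product="cdl",
        split="train",
        download=True,
    )
    sample = ds[0]  # dict with "image" and "mask" tensors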
NLCD The National Land Cover Database (NLCD) [70] is a land cover product produced every 2–3 years by the USGS in collaboration with the Multi-Resolution Land Characteristics (MRLC) consortium. The dataset spans the entire U.S. from 2001–2019. The final products are generated at a 30 m resolution by random forest models trained on spectral, spatial, temporal, and ancillary data [72, 73, 74]. We use the 21-class version, with an estimated overall accuracy of 77.5 ± 1.0% [75].
CDL The Cropland Data Layer (CDL) [32] is an annual land cover product produced by the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS), focusing on crop classification. Although the dataset is available starting in 1997, full CONUS coverage is not available until 2008. The dataset consists of 134 classes, primarily agricultural crops grown in the U.S. Labels are generated at a 30 m resolution using a decision tree classifier. The most common crop classes are estimated to have an accuracy of 85–95% [32]. All non-agricultural classes are taken from NLCD and should be considered to have a similar accuracy.
3 Experimental setup

For pre-training, we conduct experiments similar to those performed in SSL4EO-S12 [13] for each sensor/product in the dataset described in Section 2.1. We pre-train various ResNet [52] and ViT [53] backbones initialized with ImageNet weights using the SimCLR v1 [21] and MoCo v2 [17] SSL methods. RGB ImageNet weights are repeated (RGBRGB...) and scaled by 3/C for C channels in the first convolutional layer in order to handle multispectral images.
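A minimal sketch of this weight inflation for a ResNet-50 first convolution (variable names are ours; the 3/C scaling keeps the summed response over input channels comparable to the RGB original):

    import torch
    import torchvision

    num_channels = 7  # e.g., a 7-band Landsat product

    model = torchvision.models.resnet50(weights="IMAGENET1K_V1")
    rgb_weight = model.conv1.weight.data  # shape (64, 3, 7, 7)

    # Repeat the RGB filters (RGBRGB...) to cover all C channels,
    # then scale by 3/C.
    repeats = (num_channels + 2) // 3
    weight = rgb_weight.repeat(1, repeats, 1, 1)[:, :num_channels]
    weight = weight * (3 / num_channels)

    model.conv1 = torch.nn.Conv2d(num_channels, 64, kernel_size=7,
                                  stride=2, padding=3, bias=False)
    model.conv1.weight.data = weight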
During pre-training we use the same default augmentations and hyperparameters as SimCLR and MoCo, with a couple of exceptions. As saturation and hue are undefined for multispectral imagery, we skip these parts of color jitter. Instead, we use the random season contrast technique proposed by Manas et al. [12], utilizing 2 randomly sampled multitemporal images from the same location as the augmented views. Additionally, although grayscale is undefined for multispectral imagery, we take the average of all bands to compute random grayscale images, as sketched below. We pre-train each model for 200 epochs using a batch size of 1024. All pre-training experiments are performed on a GPU cluster with 80 GB of memory per GPU. Each experiment takes anywhere from 15–40 hrs depending on the number of spectral bands and model size, each trained in parallel on 4 GPUs, for a total of ∼4K GPU hours including hyperparameter tuning.
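The band-averaged random grayscale can be written as a small transform; a sketch (the module name and probability are illustrative):

    import torch
    from torch import nn

    class RandomBandGrayscale(nn.Module):
        """Randomly replace every band with the per-pixel band average."""

        def __init__(self, p: float = 0.2) -> None:
            super().__init__()
            self.p = p

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (C, H, W) multispectral image
            if torch.rand(1).item() < self.p:
                gray = x.mean(dim=0, keepdim=True)  # average over all bands
                x = gray.expand_as(x).clone()
            return x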
For benchmarking, we freeze the encoder and fine-tune a U-Net [76] decoder for all cloud detection and land cover classification datasets mentioned above (see the sketch below). For the L7 Irish and L8 Biome datasets, we use a random 60-20-20 train-val-test split. For the NLCD and CDL datasets, we use a random 70-15-15 train-val-test split. NLCD and CDL classes are limited to those covering > 1% of the area, with remaining classes mapped to the background class. Splits are defined using a fixed random seed for reproducibility. Random horizontal and vertical flips and random resized crops are used as data augmentation during training. Models are trained for a minimum of 20 epochs and a maximum of 100 epochs using early stopping and a learning rate schedule with a patience of 6 epochs. Only the learning rate undergoes hyperparameter tuning, with the most common optimal learning rate being 3e-3. All benchmarking experiments are conducted on NVIDIA RTX A6000 (2.5 hr/experiment) and A100 (1 hr/experiment) GPUs for a total of ∼200 GPU hours. Configuration files and training scripts for reproducing all experiments are made available in the TorchGeo library [54].
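A sketch of the frozen-encoder setup, here using the segmentation_models_pytorch package (our choice for illustration; the exact training code lives in TorchGeo's configuration files):

    import segmentation_models_pytorch as smp

    # U-Net with a ResNet-50 encoder; in_channels and classes are examples.
    model = smp.Unet(
        encoder_name="resnet50",
        encoder_weights=None,  # load SSL4EO-L pre-trained weights instead
        in_channels=7,
        classes=5,
    )

    # Freeze the pre-trained encoder so only the decoder is fine-tuned.
    for param in model.encoder.parameters():
        param.requires_grad = False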
4 Benchmark results

In order to evaluate the effectiveness of our pre-trained models, we report overall accuracy and mean intersection over union (mIoU) on four semantic segmentation datasets. Table 1 demonstrates substantial gains over ImageNet, with up to an 18.43% accuracy and 24.25 mIoU improvement for MoCo and up to a 14.43% accuracy and 18.69 mIoU improvement for SimCLR. Although MoCo outperforms ImageNet in 5 out of 6 experiments, SimCLR shows mixed results, outperforming ImageNet in only 2 out of 6 experiments. Our SimCLR models suffered from convergence issues with the smaller batch size we used, and may improve with better hyperparameter tuning.
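Overall accuracy and mIoU can be computed with torchmetrics (our choice of implementation; the paper does not prescribe one):

    import torch
    from torchmetrics.classification import (
        MulticlassAccuracy,
        MulticlassJaccardIndex,
    )

    num_classes = 5  # e.g., the 5 cloud mask classes

    overall_accuracy = MulticlassAccuracy(num_classes=num_classes, average="micro")
    mean_iou = MulticlassJaccardIndex(num_classes=num_classes, average="macro")

    # preds: (N, C, H, W) logits; target: (N, H, W) class indices (random here)
    preds = torch.randn(2, num_classes, 64, 64)
    target = torch.randint(0, num_classes, (2, 64, 64))

    print(overall_accuracy(preds, target), mean_iou(preds, target))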
Note that both our sampling method and pretext task are explicitly designed to ignore clouds. During sampling, we only select patches from scenes with < 20% cloud cover, decreasing the frequency of clouds in our pre-training dataset. Our pretext task involves mapping patches taken from 2 different seasons to the same representation. If one patch contains partial cloud cover, the model must learn to ignore the clouds in order to map both views to the same representation.
Table 1: Cloud detection benchmark results. Overall accuracy and mean intersection over union (mIoU) are reported for the test splits of the L7 Irish (Landsat 7 ETM+ TOA) and L8 Biome (Landsat 8 OLI/TIRS TOA) datasets for a range of backbones and pre-training techniques. All predictions are made by U-Nets with frozen backbones. Three random seeds are used to compute mean ± standard deviation.