---
pretty_name: Geolayers
language: en
language_creators:
- found
license: cc-by-4.0
multilinguality: monolingual
size_categories:
- 10K<n<100K
task_categories:
- image-classification
- image-segmentation
source_datasets:
- SustainBench
- USAVars
- BigEarthNetv2.0
- EnviroAtlas
homepage: https://huggingface.co/datasets/arjunrao2000/geolayers
repository: https://huggingface.co/datasets/arjunrao2000/geolayers
download_size: 25570000000
tags:
- climate
- remote-sensing
# 1) Index all JPG previews under huggingface_preview/
data_files:
- "huggingface_preview/**/*.jpg"
# 2) Also index metadata.csv so DuckDB can read it
- "metadata.csv"
# 3) Tell HF that metadata.csv is your preview table,
# and explicitly name which columns are images.
preview:
path: metadata.csv
images:
- rgb
- osm
- dem
- mask
configs:
- config_name: benchmark
data_files:
- split: sustainbench
path: metadata.csv
---
# Geolayers-Data
<img src="osm_usavars.png" alt="Sample Geographic Inputs with the USAVars Dataset" width="800"/>
This dataset card contains usage instructions and metadata for all data products released with our paper:
*Using Multiple Input Modalities can Improve Data-Efficiency and O.O.D. Generalization for ML with Satellite Imagery.* We release modified versions of three benchmark datasets spanning land-cover segmentation, tree-cover regression, and multi-label land-cover classification tasks, each augmented with auxiliary geographic inputs. A full list of the contributed data products is shown in the table below.
<table>
<thead>
<tr>
<th>Dataset</th>
<th>Task Description</th>
<th>Multispectral Input</th>
<th>Model</th>
<th>Additional Data Layers</th>
<th colspan="2">Dataset Size</th>
<th>OOD Test Set Present?</th>
</tr>
<tr>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th>Compressed</th>
<th>Uncompressed</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://arxiv.org/abs/2111.04724">SustainBench</a></td>
<td>Farmland boundary delineation</td>
<td>Sentinel-2 RGB</td>
<td>U-Net</td>
<td>OSM rasters, EU-DEM</td>
<td>1.76 GB</td>
<td>1.78 GB</td>
<td>✓</td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2202.14000">EnviroAtlas</a></td>
<td>Land-cover segmentation</td>
<td>NAIP RGB + NIR</td>
<td>FCN</td>
<td><a href="https://arxiv.org/abs/2202.14000">Prior</a>, OSM rasters</td>
<td>N/A</td>
<td>N/A</td>
<td>✓</td>
</tr>
<tr>
<td><a href="https://bigearth.net/static/documents/Description_BigEarthNet_v2.pdf">BigEarthNet v2.0</a></td>
<td>Land-cover classification</td>
<td>Sentinel-2 (10 bands)</td>
<td>ViT</td>
<td><a href="https://arxiv.org/abs/2311.17179">SatCLIP</a> embeddings</td>
<td>120 GB (raw), 91 GB (H5)</td>
<td>205 GB (raw), 259 GB (H5) </td>
<td>✓</td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2010.08168">USAVars</a></td>
<td>Tree-cover regression</td>
<td>NAIP RGB + NIR</td>
<td>ResNet-50</td>
<td>OSM rasters</td>
<td> 23.56 GB </td>
<td> 167 GB</td>
<td>✓</td>
</tr>
</tbody>
</table>
## Usage Instructions
* Download the `.h5.gz` files in `data/<source dataset name>`. Our source datasets include SustainBench, USAVars, and BigEarthNetv2.0. Each dataset with its augmented geographic inputs is detailed in [this section 📦](#geolayersused).
* You may use [pigz](https://linux.die.net/man/1/pigz) to decompress the archives with `pigz -d <file>.h5.gz`. This is especially recommended for the USAVars train split, which is 117 GB when uncompressed.
* Datasets with auxiliary geographic inputs can be read with h5py (see the sketch after this list).
* If you use these datasets, please cite our paper *Using Multiple Input Modalities can Improve Data-Efficiency and O.O.D. Generalization for ML with Satellite Imagery*. BibTeX can be found at the bottom of this README.
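For example, a minimal h5py sketch for inspecting one decompressed split (the file name below is a placeholder, and key names differ per dataset, so the sketch enumerates them rather than assuming a schema):
```python
import h5py

# Minimal sketch: open one decompressed split and list what it contains.
# "train_split.h5" is a placeholder name; keys differ per dataset, so
# enumerate them rather than assuming a schema.
with h5py.File("train_split.h5", "r") as f:
    for name, dset in f.items():  # assumes a flat file of HDF5 datasets
        print(name, dset.shape, dset.dtype)
```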
### Usage Instructions for the BigEarthNetv2.0 Dataset (Clasen et al., 2025)
We detail usage instructions for training on the BigEarthNetv2.0 dataset as a separate section due to the unique fusion mechanism we use for this input modality. Our training code for fusing and training a ViT on the BigEarthNetv2.0 dataset, using an auxiliary SatCLIP token fused with `TOKEN-FUSE`, lives in a distinct GitHub repository: [GeoViTTokenFusion](https://github.com/UCBoulder/GeoViTTokenFusion). This repository also uses torchgeo and timm as *submodules*.
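The linked repository contains the full implementation. As a rough illustration of the idea only (the class name and dimensions below are our assumptions, not the repository's code), `TOKEN-FUSE` amounts to projecting the SatCLIP embedding to the ViT width and appending it to the token sequence:
```python
import torch
import torch.nn as nn

class SatClipTokenFuse(nn.Module):
    """Illustrative sketch of TOKEN-FUSE: project a SatCLIP location
    embedding to the ViT width and append it as one extra input token.
    Names and dimensions are assumptions, not the repository's exact code."""

    def __init__(self, satclip_dim: int = 256, vit_dim: int = 768):
        super().__init__()
        self.proj = nn.Linear(satclip_dim, vit_dim)

    def forward(self, tokens: torch.Tensor, satclip: torch.Tensor) -> torch.Tensor:
        # tokens:  (B, N, vit_dim) patch (+ class) tokens from the ViT embedding
        # satclip: (B, satclip_dim) one location embedding per image
        loc_token = self.proj(satclip).unsqueeze(1)   # (B, 1, vit_dim)
        return torch.cat([tokens, loc_token], dim=1)  # (B, N + 1, vit_dim)
```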
We use the original [BigEarthNetv2.0](https://bigearth.net/) dataset, which is processed with spatially-buffered train-test splits. We release two **processed** versions of the dataset introduced in Clasen et al. (2025).
The first version is stored in the directory `data/bigearthnet/raw/`. Although called `raw`, this dataset is a pre-processed version of the raw BigEarthNetv2.0 dataset. We follow the instructions listed in [this repository](https://git.tu-berlin.de/rsim/reben-training-scripts/-/tree/main?ref_type=heads#data). Steps performed:
1. We download the raw `BigEarthNet-S2.tar.zst` Sentinel-2 BigEarthNet dataset.
2. We extract and process the raw S2 tiles into an LMDB (Lightning Memory-Mapped Database), which allows for faster reads during training. We use the [rico-hdl](https://github.com/kai-tub/rico-hdl) tool to accomplish this.
3. We download the reference maps and Sentinel-2 tile metadata along with snow- and cloud-cover rasters.
4. The final dataset is compressed into several chunks stored as `data/bigearthnet/raw/bigearthnet.tar.gz.part-a<x>`. Each chunk is 5 GB; there are 24 chunks in total.
To uncompress and re-assemble the compressed files in `data/bigearthnet/raw/`, download all the parts and run:
```
cat bigearthnet.tar.gz.part-* \
| pigz -dc \
| tar -xpf -
```
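Once reassembled, you can sanity-check the LMDB from Python. A minimal sketch (the value encoding is produced by rico-hdl, so this only peeks at keys and sizes; the path is a placeholder):
```python
import lmdb

# Peek at the rebuilt LMDB without assuming its value encoding,
# which is determined by the rico-hdl tool referenced above.
env = lmdb.open("data/bigearthnet/raw/lmdb", readonly=True, lock=False)
with env.begin() as txn:
    for i, (key, value) in enumerate(txn.cursor()):
        print(key.decode(), len(value), "bytes")
        if i == 4:  # show the first five entries only
            break
```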
Note that if you use this version of the dataset, SatCLIP embeddings must be computed on the fly. To use the dataset with pre-computed SatCLIP embeddings instead, refer to the note below.
#### 💡 Do you want to try your own input fusion mechanism with BigEarthNetv2.0?
The second version of the BigEarthNetv2.0 dataset is stored in `data/bigearthnet/` as three HDF5 files (`.h5`), one per split.
This version of the processed dataset comes with (i) raw location coordinates and (ii) pre-computed SatCLIP embeddings (L=10, ResNet50 image-encoder backbone).
You may access these embeddings and location metadata with keys `location` and `satclip_embedding`.
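For example, a small h5py sketch (the file name is a placeholder for whichever split you downloaded):
```python
import h5py

# Read the pre-computed SatCLIP inputs from one split file. The
# "location" and "satclip_embedding" keys are documented above;
# the file name is a placeholder.
with h5py.File("bigearthnet_train.h5", "r") as f:
    locations = f["location"][:]             # raw location coordinates
    embeddings = f["satclip_embedding"][:]   # SatCLIP embeddings (L=10, ResNet50)
print(locations.shape, embeddings.shape)
```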
### Usage Instructions for the SustainBench Farmland Boundary Delineation Dataset (Yeh et al., 2021)
1. Unzip the archive in `data/sustainbench-field-boundary-delineation` with `unzip sustainbench.zip`
2. You should see a directory structure as follows:
```
dataset_release/
├── id_augmented_test_split_with_osm_new.h5.gz.zip
├── id_augmented_train_split_with_osm_new.h5.gz.zip
├── id_augmented_val_split_with_osm_new.h5.gz.zip
├── raw_id_augmented_test_split_with_osm_new.h5.gz.zip
├── raw_id_augmented_train_split_with_osm_new.h5.gz.zip
└── raw_id_augmented_val_split_with_osm_new.h5.gz.zip
```
3. Extract each inner archive with `unzip`, then decompress with `pigz -d <path to .h5.gz file>`
There are two released versions of the data. Datasets that begin with `id_augmented` contain the OSM and DEM rasters pre-processed into RGB space after applying a Gaussian blur. Datasets that begin with `raw_id_augmented` contain the RGB imagery alongside 19 categorical OSM rasters and 1 DEM raster.
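A small h5py sketch for comparing the two releases (the file names assume you have already run `unzip` and `pigz -d`; the exact keys are not listed on this card, so the sketch enumerates them):
```python
import h5py

# Compare the two SustainBench releases side by side. File names assume
# the archives have been unzipped and pigz-decompressed as described above.
for path in [
    "id_augmented_train_split_with_osm_new.h5",      # OSM/DEM pre-blurred into RGB space
    "raw_id_augmented_train_split_with_osm_new.h5",  # 19 categorical OSM rasters + 1 DEM raster
]:
    with h5py.File(path, "r") as f:
        print(path)
        for name, dset in f.items():
            print(" ", name, dset.shape, dset.dtype)
```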
## 📦 <a name="geolayersused"></a> Datasets & Georeferenced Auxiliary Layers
### SustainBench – Farmland Boundary Delineation
* **Optical input:** Sentinel-2 RGB patches (224×224 px, 10 m GSD) covering French cropland in 2017; ≈1.6k training images.
* **Auxiliary layers (all geo-aligned):**
  * 19-channel OpenStreetMap (OSM) raster stack (roads, waterways, buildings, biome classes, …)
  * EU-DEM (20 m GSD, down-sampled to 10 m)
* **Why:** OSM + DEM give an 8% Dice boost when labels are scarce; gains appear once the training set drops below ≈700 images.
---
### EnviroAtlas – Land-Cover Segmentation
* **Optical input:** NAIP 4-band RGB-NIR aerial imagery at 1 m resolution.
* **Auxiliary layers:**
  * OSM rasters (roads, waterbodies, waterways)
  * **Prior** raster – a hand-crafted fusion of NLCD land-cover and OSM layers (PROC-STACK)
* **Splits:** Train = Pittsburgh; OOD validation/test = Austin & Durham. Auxiliary layers raise OOD overall accuracy by ~4 pp without extra fine-tuning.
---
### BigEarthNet v2.0 – Multi-Label Land-Cover Classification
* **Optical input:** 10-band Sentinel-2 tiles; ≈550k patch/label pairs over 19 classes.
* **Auxiliary layer:**
  * **SatCLIP** location embedding (256-D), one per image center, injected as an extra ViT token (TOKEN-FUSE).
* **Splits:** Grid-based; val/test tiles lie outside the training footprint (spatial OOD by design). SatCLIP token lifts macro-F1 by ~3 pp across *all* subset sizes.
---
### USAVars – Tree-Cover Regression
* **Optical input:** NAIP RGB-NIR images (1 km² tiles); ≈100k samples with tree-cover % labels.
* **Auxiliary layers:**
  * Extended OSM raster stack (roads, buildings, land-use, biome classes, …)
* **Notes:** Stacking the OSM rasters boosts R² by 0.16 in the low-data regime (<250 images); the DEM is provided raw for flexibility. A sketch of this channel-stacking fusion follows below.
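As an illustration of this kind of channel stacking (not the paper's exact training code; the OSM channel count below is an assumption), one can widen a ResNet-50's first convolution to accept the concatenated NAIP and OSM channels:
```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Illustrative early fusion by channel stacking: widen the first conv so the
# network accepts NAIP RGB-NIR (4 channels) plus an OSM raster stack. The
# OSM channel count here is an assumption, not the release's exact schema.
naip_channels, osm_channels = 4, 19
model = resnet50(num_classes=1)  # single output: tree-cover %
model.conv1 = nn.Conv2d(
    naip_channels + osm_channels, 64,
    kernel_size=7, stride=2, padding=3, bias=False,
)

x = torch.cat([torch.randn(2, naip_channels, 224, 224),
               torch.randn(2, osm_channels, 224, 224)], dim=1)
print(model(x).shape)  # torch.Size([2, 1])
```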
## Citation
```
@inproceedings{
rao2025using,
title={Using Multiple Input Modalities can Improve Data-Efficiency and O.O.D. Generalization for {ML} with Satellite Imagery},
author={Arjun Rao and Esther Rolf},
booktitle={TerraBytes - ICML 2025 workshop},
year={2025},
url={https://openreview.net/forum?id=p5nSQMPUyo}
}
```
---