
BigEarthNet

BigEarthNet is a large-scale benchmark dataset for multi-label land cover classification, derived from Sentinel-1 (radar) and Sentinel-2 (optical) satellite imagery.

We pre-process the dataset by upsampling all Sentinel-2 channels to 120x120 pixels and concatenating them into a single multi-band image. Please see TorchGeo's BigEarthNet implementation for more information about the pre-processing. In addition, we map the original 43 land cover classes to 19 broader categories using a predefined conversion scheme.
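As a rough illustration of the upsampling step, the sketch below bilinearly resizes each band to 120x120 and stacks the results. This is a minimal sketch with made-up tensors, not the exact pre-processing code used to build this dataset.

# Minimal sketch of the Sentinel-2 upsampling step (illustrative only,
# not the actual pre-processing pipeline).
import torch
import torch.nn.functional as F

def upsample_and_stack(bands, size=120):
    # `bands` is assumed to be a list of 2-D tensors (H, W), one per band,
    # possibly at different native resolutions.
    upsampled = [
        F.interpolate(b[None, None].float(), size=(size, size),
                      mode="bilinear", align_corners=False)[0, 0]
        for b in bands
    ]
    return torch.stack(upsampled)  # shape: (num_bands, size, size)

# Example with three dummy bands at 120, 60 and 20 pixels per side.
bands = [torch.rand(120, 120), torch.rand(60, 60), torch.rand(20, 20)]
print(upsample_and_stack(bands).shape)  # torch.Size([3, 120, 120])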

How to Use This Dataset

from datasets import load_dataset

# Downloads and loads all available splits (train / val / test).
dataset = load_dataset("GFM-Bench/BigEarthNet")

Also, please see our GFM-Bench repository for more information about how to use the dataset! 🤗

Dataset Metadata

The following metadata provides details about the Sentinel-1 and Sentinel-2 imagery used in the dataset:

  • Number of Sentinel-1 Bands: 2
  • Sentinel-1 Bands: VV, VH
  • Number of Sentinel-2 Bands: 12
  • Sentinel-2 Bands: B01 (Coastal aerosol), B02 (Blue), B03 (Green), B04 (Red), B05 (Vegetation red edge), B06 (Vegetation red edge), B07 (Vegetation red edge), B08 (NIR), B8A (Narrow NIR), B09 (Water vapour), B11 (SWIR), B12 (SWIR)
  • Image Resolution: 120 x 120 pixels
  • Spatial Resolution: 10 meters
  • Number of Classes: 19
  • Class Labels:
    • Urban fabric
    • Industrial or commercial units
    • Arable land
    • Permanent crops
    • Pastures
    • Complex cultivation patterns
    • Land principally occupied by agriculture, with significant areas of natural vegetation
    • Agro-forestry areas
    • Broad-leaved forest
    • Coniferous forest
    • Mixed forest
    • Natural grassland and sparsely vegetated areas
    • Moors, heathland and sclerophyllous vegetation
    • Transitional woodland, shrub
    • Beaches, dunes, sands
    • Inland wetlands
    • Coastal wetlands
    • Inland waters
    • Marine waters

Dataset Splits

The BigEarthNet dataset consists of the following splits; a sketch for loading a single split follows the list:

  • train: 269,695 samples
  • val: 123,723 samples
  • test: 125,866 samples
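A single split can be loaded by name, as in this minimal sketch using the split names listed above:

from datasets import load_dataset

# Load only the training split; "val" and "test" work the same way.
train_ds = load_dataset("GFM-Bench/BigEarthNet", split="train")
print(len(train_ds))  # expected: 269695 samples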

Dataset Features

The BigEarthNet dataset provides the following features for each sample (see the access sketch after this list):

  • radar: the Sentinel-1 image.
  • optical: the Sentinel-2 image.
  • label: the classification label.
  • radar_channel_wv: the central wavelength of each Sentinel-1 band.
  • optical_channel_wv: the central wavelength of each Sentinel-2 band.
  • spatial_resolution: the spatial resolution of the images (in meters).
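A minimal sketch of reading these fields from one sample is shown below; the field names follow the list above, while the exact array shapes and types are assumptions rather than the published schema.

from datasets import load_dataset

# Inspect one validation sample; field names follow the feature list above.
dataset = load_dataset("GFM-Bench/BigEarthNet", split="val")
sample = dataset[0]

radar = sample["radar"]      # Sentinel-1 image (VV, VH)
optical = sample["optical"]  # Sentinel-2 image (12 bands, 120 x 120)
label = sample["label"]      # multi-label target over the 19 classes
print(sample["optical_channel_wv"], sample["spatial_resolution"])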

Citation

If you use the BigEarthNet dataset in your work, please cite the original paper:

@inproceedings{sumbul2019bigearthnet,
  title={Bigearthnet: A large-scale benchmark archive for remote sensing image understanding},
  author={Sumbul, Gencer and Charfuelan, Marcela and Demir, Beg{\"u}m and Markl, Volker},
  booktitle={IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium},
  pages={5901--5904},
  year={2019},
  organization={IEEE}
}

If you also find our benchmark useful, please consider citing our paper:

@misc{si2025scalablefoundationmodelmultimodal,
      title={Towards Scalable Foundation Model for Multi-modal and Hyperspectral Geospatial Data}, 
      author={Haozhe Si and Yuxuan Wan and Minh Do and Deepak Vasisht and Han Zhao and Hendrik F. Hamann},
      year={2025},
      eprint={2503.12843},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.12843}, 
}