
Dataset Card for DASP

Dataset Description

The DASP (Distributed Analysis of Sentinel-2 Pixels) dataset consists of cloud-free satellite images captured by the Sentinel-2 satellites. Each tile is the most recent complete, cloudless capture, selected from over 30 million Sentinel-2 images, and is provided in every spectral band. The dataset offers a near-complete cloudless view of Earth's surface, suitable for a wide range of geospatial applications. Images were converted from JPEG2000 to JPEG-XL to improve storage efficiency while maintaining high quality.

Huggingface page: https://huggingface.co/datasets/RichardErkhov/DASP

Github repository: https://github.com/nicoboss/DASP

Points of Contact:

Dataset Summary

  • Full cloudless satellite coverage of Earth.
  • Sourced from Sentinel-2 imagery, selecting the most recent cloud-free images.
  • JPEG2000 images transcoded into JPEG-XL for efficient storage.
  • Cloudless determination based on B1 band black pixel analysis.
  • Supports AI-based image stitching, classification, and segmentation.

Use cases

  • Image Stitching: Combines individual images into a seamless global mosaic.
  • Enables high-resolution satellite mosaics for academic and commercial applications.
  • Supports AI-driven Earth observation projects.
  • Facilitates urban planning, climate research, and environmental monitoring.
  • Land Use Classification: Enables categorization of land cover types.

Download a band (folder)

huggingface-cli download RichardErkhov/DASP --include "TCI/*" --local-dir DASP --repo-type dataset

Dataset Structure

Data Instances

The resulting images are stored in separate folders named after their band. Image names can be matched against the provided metadata. The metadata files are compressed with the Zstandard algorithm.

File: Sentinel_B1_black_pixel_measurements.txt

Header:

URL, total black pixels, black pixels top, black pixels right, black pixels bottom, black pixels left, average grayscale value of all non-black pixels

Sample data:

http://storage.googleapis.com/gcp-public-data-sentinel-2/tiles/43/N/CA/S2A_MSIL1C_20220401T051651_N0400_R062_T43NCA_20220401T075429.SAFE/GRANULE/L1C_T43NCA_A035380_20220401T053643/IMG_DATA/T43NCA_20220401T051651_B01.jp2: 62262 0,747,166,0 20
http://storage.googleapis.com/gcp-public-data-sentinel-2/tiles/36/M/XD/S2B_MSIL1C_20190716T074619_N0208_R135_T36MXD_20190716T104338.SAFE/GRANULE/L1C_T36MXD_A012316_20190716T080657/IMG_DATA/T36MXD_20190716T074619_B01.jp2: 0 0,0,0,0 20
http://storage.googleapis.com/gcp-public-data-sentinel-2/tiles/20/V/LJ/S2A_MSIL1C_20200629T154911_N0209_R054_T20VLJ_20200629T193223.SAFE/GRANULE/L1C_T20VLJ_A026220_20200629T155413/IMG_DATA/T20VLJ_20200629T154911_B01.jp2: 2293175 876,1830,1630,0 35
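Each line of the measurements file follows the header above: the URL, the total black pixel count, the per-edge counts (top, right, bottom, left) as a comma-separated group, and the average grayscale value of non-black pixels. A minimal parsing sketch (the function name and returned field names are illustrative, not part of the dataset):

```python
def parse_measurement(line: str) -> dict:
    """Parse one line of Sentinel_B1_black_pixel_measurements.txt.

    Format: "<url>: <total> <top>,<right>,<bottom>,<left> <avg>"
    """
    url, rest = line.strip().split(': ', 1)
    total, edges, avg = rest.split(' ')
    top, right, bottom, left = (int(n) for n in edges.split(','))
    return {
        'url': url,
        'total_black': int(total),
        'edge_black': {'top': top, 'right': right, 'bottom': bottom, 'left': left},
        'avg_gray': int(avg),
    }

# First sample line from the file
sample = ("http://storage.googleapis.com/gcp-public-data-sentinel-2/tiles/43/N/CA/"
          "S2A_MSIL1C_20220401T051651_N0400_R062_T43NCA_20220401T075429.SAFE/GRANULE/"
          "L1C_T43NCA_A035380_20220401T053643/IMG_DATA/"
          "T43NCA_20220401T051651_B01.jp2: 62262 0,747,166,0 20")
rec = parse_measurement(sample)
print(rec['total_black'], rec['edge_black']['right'])  # → 62262 747
```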

File: index_Sentinel.csv

Header:

GRANULE_ID,PRODUCT_ID,DATATAKE_IDENTIFIER,MGRS_TILE,SENSING_TIME,TOTAL_SIZE,CLOUD_COVER,GEOMETRIC_QUALITY_FLAG,GENERATION_TIME,NORTH_LAT,SOUTH_LAT,WEST_LON,EAST_LON,BASE_URL

Sample data:

L1C_T42UWG_A041401_20230527T062703,S2A_MSIL1C_20230527T062631_N0509_R077_T42UWG_20230527T071710,GS2A_20230527T062631_041401_N05.09,42UWG,2023-05-27T06:33:56.700000Z,764715852,0.597667731340191,,2023-05-27T07:17:10.000000Z,55.94508401564941,54.947111902793566,68.99952976138768,70.75711635116411,gs://gcp-public-data-sentinel-2/tiles/42/U/WG/S2A_MSIL1C_20230527T062631_N0509_R077_T42UWG_20230527T071710.SAFE
L1C_T33XWB_A021112_20190708T105646,S2A_MSIL1C_20190708T105621_N0208_R094_T33XWB_20190708T113743,GS2A_20190708T105621_021112_N02.08,33XWB,2019-07-08T11:00:35.000000Z,197594271,0.0,,2019-07-08T11:37:43.000000Z,73.86991541093971,72.88068077877183,16.368773276100033,18.540242190343452,gs://gcp-public-data-sentinel-2/tiles/33/X/WB/S2A_MSIL1C_20190708T105621_N0208_R094_T33XWB_20190708T113743.SAFE
L1C_T23LLJ_A028635_20201215T132230,S2A_MSIL1C_20201215T132231_N0209_R038_T23LLJ_20201215T151022,GS2A_20201215T132231_028635_N02.09,23LLJ,2020-12-15T13:25:11.367000Z,721319047,62.8896,,2020-12-15T15:10:22.000000Z,-9.946873284601002,-10.942725175756962,-46.83018842375086,-45.82296488039833,gs://gcp-public-data-sentinel-2/tiles/23/L/LJ/S2A_MSIL1C_20201215T132231_N0209_R038_T23LLJ_20201215T151022.SAFE
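The index is a plain CSV and can be read with `csv.DictReader`. A minimal sketch using the header and the second sample row above (here fed from an in-memory string rather than the file):

```python
import csv
import io

# Header and one sample row from index_Sentinel.csv
header = ("GRANULE_ID,PRODUCT_ID,DATATAKE_IDENTIFIER,MGRS_TILE,SENSING_TIME,"
          "TOTAL_SIZE,CLOUD_COVER,GEOMETRIC_QUALITY_FLAG,GENERATION_TIME,"
          "NORTH_LAT,SOUTH_LAT,WEST_LON,EAST_LON,BASE_URL")
row_txt = ("L1C_T33XWB_A021112_20190708T105646,"
           "S2A_MSIL1C_20190708T105621_N0208_R094_T33XWB_20190708T113743,"
           "GS2A_20190708T105621_021112_N02.08,33XWB,2019-07-08T11:00:35.000000Z,"
           "197594271,0.0,,2019-07-08T11:37:43.000000Z,"
           "73.86991541093971,72.88068077877183,"
           "16.368773276100033,18.540242190343452,"
           "gs://gcp-public-data-sentinel-2/tiles/33/X/WB/"
           "S2A_MSIL1C_20190708T105621_N0208_R094_T33XWB_20190708T113743.SAFE")

reader = csv.DictReader(io.StringIO(header + '\n' + row_txt))
row = next(reader)
print(row['MGRS_TILE'], float(row['CLOUD_COVER']))  # → 33XWB 0.0
```

For the real file, replace the `io.StringIO` object with `open('index_Sentinel.csv')`, as in the selection script further down this card.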

Dataset Creation

Collection and Processing

The dataset was curated by selecting the latest cloud-free images from Sentinel-2 data archives. The B1 spectrum black pixel count was analyzed to determine partial or full images. Images with black pixels exceeding a threshold were discarded. The selected images were then transcoded from JPEG2000 to JPEG-XL for optimized storage.

Source Data

  • Satellite: Sentinel-2 (ESA)
  • Selection Criteria:
    • Cloud coverage < 1% (from metadata)
    • Most recent full image per tile (based on B1 black pixel analysis)
      • Less than 10000 total black pixels and no more than 6 black pixels on each side of the image
  • Data Transformation: JPEG2000 → JPEG-XL conversion

Annotation Process

No additional annotations are provided beyond the supplied metadata and the B1 black pixel measurements.

Sensitive Information

The dataset contains only satellite images and does not include personal or sensitive data.

Code used to filter images

Filtering out partial images based on the B1 black pixel measurements:

# Parse the black pixel measurements and keep only URLs of complete tiles
def parse_and_filter_data(file_path, output_path):
    with open(file_path, 'r') as file:
        with open(output_path, 'w') as output_file:
            for line in file:
                # Skip lines recording decode failures or missing bands
                if "Error decoding JPEG2000 image" in line:
                    continue
                if "manifest.safe does not contain B01.jp2" in line:
                    continue
                url, data = line.split(': ', 1)
                # data is "<total> <top>,<right>,<bottom>,<left> <avg>"
                first_number, comma_separated, _ = data.split(' ')
                first_number = int(first_number)
                comma_separated_numbers = list(map(int, comma_separated.split(',')))

                # Keep only complete tiles: fewer than 10000 black pixels
                # in total and at most 6 black pixels on each edge
                if first_number < 10000 and all(num <= 6 for num in comma_separated_numbers):
                    output_file.write(url + '\n')

# Example usage
file_path = 'Sentinel_B1_black_pixel_measurements.txt'
output_path = 'filteredUrls.txt'
parse_and_filter_data(file_path, output_path)

Extracting URLs of Cloudless Images

import csv
from datetime import datetime

# For every MGRS tile, keep the most recent granule with < 1% cloud cover
data = {}
print("Reading index_Sentinel.csv...")
with open('index_Sentinel.csv', 'r') as csvfile:
    reader = csv.DictReader(csvfile)
    for row in reader:
        try:
            cloud_cover = float(row['CLOUD_COVER'])
        except ValueError:
            continue
        if cloud_cover < 1:
            mgrs_tile = row['MGRS_TILE']
            sensing_time = datetime.fromisoformat(row['SENSING_TIME'].replace('Z', '+00:00'))
            if mgrs_tile not in data or sensing_time > data[mgrs_tile]['SENSING_TIME']:
                data[mgrs_tile] = {
                    'SENSING_TIME': sensing_time,
                    'GRANULE_ID': row['GRANULE_ID']
                }
print("Finished reading index_Sentinel.csv.")

# Granule IDs selected as the most recent cloud-free capture of their tile
selected_granules = {entry['GRANULE_ID'] for entry in data.values()}

# Keep full images belonging to a selected granule, and point the URLs
# at the TCI (true color) band instead of B1
filtered_urls = []
with open('filteredUrls.txt', 'r') as urlfile:
    for line in urlfile:
        granule_id = line.split('/')[10]  # granule ID is the 11th path segment
        if granule_id in selected_granules:
            filtered_urls.append(line.strip().replace('_B01.jp2', '_TCI.jp2'))

print(f"Number of filtered URLs: {len(filtered_urls)}")
with open('noCloudURLs.txt', 'w') as outfile:
    outfile.write('\n'.join(filtered_urls))
print("Filtered URLs saved.")

Citation

If you use this dataset, please cite:

@misc{DASP,
  author    = {Richard Erkhov and Nico Bosshard},
  title     = {DASP},
  year      = {2025},
  url       = {https://huggingface.co/datasets/RichardErkhov/DASP}
}