Last update: 09-04-2025


Atmospheric Radiative Transfer Emulation Challenge
Introduction
Atmospheric Radiative Transfer Models (RTMs) are crucial in Earth and climate sciences, with applications such as synthetic scene generation, satellite data processing, and numerical weather forecasting. However, their increasing complexity results in a computational burden that limits direct use in operational settings. A practical solution is to interpolate look-up tables (LUTs) of pre-computed RTM simulations generated from long and costly model runs. However, large LUTs are still needed to achieve accurate results, requiring significant time to generate and demanding high memory capacity. Alternative, ad hoc solutions make data processing algorithms mission-specific and lack generalization. These problems are exacerbated for hyperspectral satellite missions, where the data volume of LUTs can increase by one or two orders of magnitude, limiting the applicability of advanced data processing algorithms. In this context, emulation offers an alternative, allowing for real-time satellite data processing algorithms while providing high prediction accuracy and adaptability across atmospheric conditions. Emulation replicates the behavior of a deterministic and computationally demanding model using statistical regression algorithms. This approach facilitates the implementation of physics-based inversion algorithms, yielding accurate and computationally efficient model predictions compared to traditional look-up table interpolation methods.
RTM emulation is challenging due to the high-dimensional nature of both input (~10 dimensions) and output (several thousand) spaces, and the complex interactions of electromagnetic radiation with the atmosphere. The research implications are vast, with potential breakthroughs in surrogate modeling, uncertainty quantification, and physics-aware AI systems that can significantly contribute to climate and Earth observation sciences.
This challenge will contribute to reducing computational burdens in climate and atmospheric research, enabling (1) faster satellite data processing for applications in remote sensing and weather prediction, (2) improved accuracy in atmospheric correction of hyperspectral imaging data, and (3) more efficient climate simulations, allowing broader exploration of emission pathways aligned with sustainability goals.
Challenge Tasks and Data
Participants in this challenge will develop emulators trained on provided datasets to predict spectral magnitudes (atmospheric transmittances and reflectances) based on input atmospheric and geometric conditions. The challenge is structured around three main tasks: (1) training ML models using predefined datasets, (2) predicting outputs for given test conditions, and (3) evaluating emulator performance based on accuracy and runtime.
Proposed Experiments
The challenge includes two primary application test scenarios:
- Atmospheric Correction (`A`): This scenario focuses on the atmospheric correction of hyperspectral satellite imaging data. Emulators will be tested on their ability to reproduce key atmospheric transfer functions that influence radiance measurements, including path radiance, direct/diffuse solar irradiance, and transmittance properties. Full spectral range simulations (400-2500 nm) will be provided at a resolution of 5 cm-1.
- CO2 Column Retrieval (`B`): This scenario is set in the context of atmospheric CO2 retrieval, modeling how radiation interacts with various gas layers. The emulators will be evaluated on their accuracy in predicting top-of-atmosphere radiance, particularly within the spectral range sensitive to CO2 absorption (2000-2100 nm) at high spectral resolution (0.1 cm-1).
For both scenarios, two test datasets (tracks) will be provided to evaluate (1) interpolation and (2) extrapolation performance.
Each scenario-track combination will be identified using an alphanumeric ID `Sn`, where `S` = {`A`, `B`} denotes the scenario and `n` = {1, 2} represents the test dataset type (i.e., track). For example, `A2` refers to predictions for the atmospheric correction scenario using the extrapolation dataset.
Participants may choose their preferred scenario(s) and tracks; however, we encourage submitting predictions for all test conditions.
Data Availability and Format
Participants will have access to multiple training datasets of atmospheric RTM simulations varying in sample sizes, input parameters, and spectral range/resolution. These datasets will be generated using Latin Hypercube Sampling to ensure a comprehensive input space coverage and minimize issues related to ill-posedness and unrealistic results.
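For intuition, Latin Hypercube samples over a bounded input space can be drawn with SciPy as shown below. This is an illustration only (the dimensionality, bounds, and sample size are placeholders), not the exact generator used to build the challenge datasets:

```python
import numpy as np
from scipy.stats import qmc

n_param, n_comb = 9, 2000                        # hypothetical input dimensionality and sample size
sampler = qmc.LatinHypercube(d=n_param, seed=0)
unit_samples = sampler.random(n=n_comb)          # samples in the unit hypercube [0, 1]^d

# Rescale to hypothetical physical bounds for each input parameter
l_bounds = np.zeros(n_param)
u_bounds = np.ones(n_param) * 10.0
X = qmc.scale(unit_samples, l_bounds, u_bounds)  # shape (n_comb, n_param)
```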
The training data (i.e., inputs and outputs of RTM simulations) will be stored in HDF5 format with the following structure:
Dimensions

| Name | Description |
|---|---|
| `n_wvl` | Number of wavelengths for which spectral data is provided |
| `n_funcs` | Number of atmospheric transfer functions |
| `n_comb` | Number of data points at which spectral data is provided |
| `n_param` | Dimensionality of the input variable space |
Data Components

| Name | Description | Dimensions | Datatype |
|---|---|---|---|
| `LUTdata` | Atmospheric transfer functions (i.e., outputs) | `n_funcs*n_wvl x n_comb` | single |
| `LUTHeader` | Matrix of input variable values for each combination (i.e., inputs) | `n_param x n_comb` | double |
| `wvl` | Wavelength values associated with the atmospheric transfer functions (i.e., spectral grid) | `n_wvl` | double |
Note: Participants may choose to predict the spectral data either as a single vector of length `n_funcs*n_wvl` or as `n_funcs` separate vectors of length `n_wvl`.
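For illustration, the sketch below splits the stacked spectral output into per-function blocks. It assumes `LUTdata` is loaded as a NumPy array of shape `(n_funcs*n_wvl, n_comb)` with the transfer functions stacked consecutively along the first axis; the actual memory layout should be checked against the provided files:

```python
import numpy as np

n_funcs, n_wvl, n_comb = 4, 1000, 2000         # hypothetical sizes
LUTdata = np.zeros((n_funcs * n_wvl, n_comb))  # stand-in for the array read from the .h5 file

# Reshape the stacked outputs into one block per atmospheric transfer function
blocks = LUTdata.reshape(n_funcs, n_wvl, n_comb)
func0 = blocks[0]  # first transfer function, shape (n_wvl, n_comb)
```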
Testing input datasets (i.e., the inputs for predictions) will be stored in tabulated `.csv` format with dimensions `n_param x n_comb`.
The training and testing datasets will be organized into scenario-specific folders: `scenarioA` (Atmospheric Correction) and `scenarioB` (CO2 Column Retrieval).
Each folder will contain:
- A `train` subfolder with multiple `.h5` files corresponding to different training sample sizes (e.g., `train2000.h5` contains 2000 samples).
- A `reference` subfolder containing the test files (`refInterp` and `refExtrap`) referring to the two aforementioned tracks (i.e., interpolation and extrapolation).

Additionally, a global attribute (`scenario`) will be included in the training data files to indicate the relevant challenge scenario (see Proposed Experiments).
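For instance, this attribute can be read with h5py as follows (a minimal sketch assuming the attribute is stored at the file root):

```python
import h5py

with h5py.File('train2000.h5', 'r') as h5_file:
    scenario = h5_file.attrs['scenario']  # e.g., 'A' for Atmospheric Correction
print(scenario)
```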
Here is an example of how to load each dataset in Python:
```python
import h5py
import pandas as pd
import numpy as np

# Replace with the actual paths to your training and testing data
trainFile = 'train2000.h5'
testFile = 'refInterp.csv'

# Open the H5 file and read the training data
with h5py.File(trainFile, 'r') as h5_file:
    Ytrain = h5_file['LUTdata'][:]    # atmospheric transfer functions (outputs)
    Xtrain = h5_file['LUTHeader'][:]  # input variable values
    wvl = h5_file['wvl'][:]           # spectral grid

# Read testing data
df = pd.read_csv(testFile)
Xtest = df.to_numpy()
```
in MATLAB:

```matlab
% Replace with the actual paths to your training and testing data
trainFile = 'train2000.h5';
testFile = 'refInterp.csv';

% Open the H5 file and read the training data
Ytrain = h5read(trainFile, '/LUTdata');
Xtrain = h5read(trainFile, '/LUTHeader');
wvl = h5read(trainFile, '/wvl');

% Read testing data
Xtest = importdata(testFile);
```
and in R:

```r
library(rhdf5)

# Replace with the actual paths to your training and testing data
trainFile <- "train2000.h5"
testFile <- "refInterp.csv"

# Open the H5 file and read the training data
lut_data <- h5read(trainFile, "LUTdata")
lut_header <- h5read(trainFile, "LUTHeader")
wavelengths <- h5read(trainFile, "wvl")

# Read testing data
Xtest <- as.matrix(read.table(testFile, sep = ",", header = TRUE))
```
All data will be shared through a dedicated Zenodo repository. Participants will also have access to the evaluation scripts on this GitLab to ensure transparency and reproducibility.
Evaluation methodology
The evaluation will focus on three key aspects: prediction accuracy, computational efficiency, and extrapolation performance.
Prediction Accuracy
For the atmospheric correction scenario (`A`), the predicted atmospheric transfer functions will be used to retrieve surface reflectance from the top-of-atmosphere (TOA) radiance simulations in the testing dataset. The evaluation will proceed as follows:
- The relative difference between retrieved and reference reflectance will be computed for each spectral channel and sample from the testing dataset.
- The mean relative error (MRE) will be calculated over the entire reference dataset to assess overall emulator bias.
- The spectrally-averaged MRE (MREλ) will be computed, excluding wavelengths in the deep H2O absorption regions, to ensure direct comparability between participants.
For the CO2 retrieval scenario (`B`), evaluation will follow the same steps, comparing predicted TOA radiance spectral data against the reference values in the testing dataset.
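As an illustration of these accuracy metrics, the sketch below computes the relative differences, the MRE, and the spectrally-averaged MREλ from arrays of retrieved and reference spectra. The array shapes, the exclusion mask, and the use of reference values in the denominator are assumptions; the official evaluation scripts remain the authoritative definition:

```python
import numpy as np

def mre_metrics(retrieved, reference, exclude_mask=None):
    """Sketch of the accuracy metrics under assumed conventions.

    retrieved, reference : arrays of shape (n_comb, n_wvl)
    exclude_mask         : boolean array of shape (n_wvl,) marking wavelengths
                           to exclude (e.g., deep H2O absorption regions)
    """
    # Relative difference per sample and spectral channel
    rel_diff = (retrieved - reference) / reference

    # Mean relative error over the entire reference dataset (one value per wavelength)
    mre = np.mean(rel_diff, axis=0)

    # Spectrally-averaged MRE over the retained wavelengths
    # (absolute values are used here for ranking; see the official scripts for the exact definition)
    keep = np.ones(reference.shape[1], dtype=bool) if exclude_mask is None else ~exclude_mask
    mre_lambda = np.mean(np.abs(mre[keep]))
    return mre, mre_lambda
```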
Since each participant/model can contribute to up to four scenario-track combinations, we will consolidate results into a single final ranking using the following process:
- Individual ranking: For each of the four combinations, submissions will be ranked based on their MREλ values. Lower MREλ values correspond to better performance. In the unlikely case of ties, these will be handled by averaging the tied ranks.
- Final ranking: Rankings will be aggregated into a single final score using a weighted average. The following weights will be applied: 0.325 for the interpolation tracks and 0.175 for the extrapolation tracks. That is: Final score = (0.325 × AC-Interp Rank) + (0.175 × AC-Extrap Rank) + (0.325 × CO2-Interp Rank) + (0.175 × CO2-Extrap Rank)
- Missing Submissions: If a participant does not submit results for a particular scenario-track combination, they will be placed in the last position for that track.
To ensure fairness in the final ranking, we will use the standard competition ranking method in the case of ties. If two or more participants achieve the same weighted average rank, they will be assigned the same final position, and the subsequent rank(s) will be skipped accordingly. For example, if two participants are tied for 1st place, they will both receive rank 1, and the next participant will be ranked 3rd (not 2nd).
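The aggregation described above can be summarized in a few lines. The sketch below is illustrative only (the per-track ranks and model names are hypothetical), with the official evaluation scripts defining the exact behavior:

```python
# Per-track ranks for each submission (1 = best); missing tracks receive the last position.
WEIGHTS = {"A1": 0.325, "A2": 0.175, "B1": 0.325, "B2": 0.175}
ranks = {
    "model_x": {"A1": 1, "A2": 2, "B1": 1, "B2": 1},
    "model_y": {"A1": 2, "A2": 1, "B1": 2, "B2": 2},
}

# Weighted average of the per-track ranks (lower final score = better).
# Equal final scores would share a position under standard competition ranking.
final_score = {
    name: sum(WEIGHTS[track] * r[track] for track in WEIGHTS)
    for name, r in ranks.items()
}
for name, score in sorted(final_score.items(), key=lambda kv: kv[1]):
    print(name, score)
```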
Computational efficiency
Participants must report the runtime required to generate predictions across different emulator configurations. To facilitate fair comparisons, they should also provide a report with hardware specifications, including CPU model, parallelization settings (e.g., multi-threading, GPU acceleration), and available RAM. Additionally, participants should report key model characteristics, such as the number of operations required for a single prediction and the number of trainable parameters in their ML models.
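For example, the prediction runtime could be measured as follows (a minimal sketch; `my_emulator_predict` and the array sizes are placeholders for your own model and data):

```python
import time
import numpy as np

def my_emulator_predict(X):
    # Placeholder for your trained emulator's prediction step
    return np.zeros((X.shape[0], 10))

Xtest = np.random.rand(1000, 9)  # hypothetical testing inputs (n_comb x n_param)

start = time.perf_counter()
Ypred = my_emulator_predict(Xtest)
runtime = time.perf_counter() - start  # seconds, to be reported in the 'runtime' attribute
print(f"Prediction runtime: {runtime:.3f} s")
```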
All evaluation scripts will be publicly available on GitLab and Hugging Face to ensure fairness, trustworthiness, and transparency.
Proposed Protocol
Participants must generate emulator predictions on the provided testing datasets before the submission deadline. Multiple emulator models may be submitted.
The submission will be made via a pull request to this repository.
Each submission MUST include the prediction results in HDF5 format and a `metadata.json` file.
- The predictions should be stored in a `.h5` file with the same format as the training data. Note that only the `LUTdata` matrix (i.e., the predictions) is needed. A baseline example of this file is available for participants (`baseline_Sn.h5`); see also the sketch after the metadata specification below.
- Each prediction file must be stored in a `predictions` subfolder within the corresponding scenario folder (e.g., `/scenarioA/predictions`). The prediction files should be named using the emulator/model name followed by the scenario-track ID (e.g., `/scenarioA/predictions/mymodel_A1.h5`).
- A global attribute named `scenario` must be included to specify the corresponding scenario-track (e.g., `A1`, see Proposed Experiments).
- A global attribute named `runtime` must be included to report the computational efficiency of your model (value expressed in seconds).
- Note that all predictions for different scenario-tracks should be stored in separate files.
- The metadata file (`metadata.json`) shall contain the following information:
```json
{
    "name": "model_name",
    "authors": ["author1", "author2"],
    "affiliations": ["affiliation1", "affiliation2"],
    "description": "A brief description of the emulator",
    "url": "[OPTIONAL] URL to the model repository if it is open-source",
    "doi": "DOI to the model publication (if available)"
}
```
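As an illustration, a prediction file following the requirements above could be written with h5py as follows (a minimal sketch; the array shape, model name, and runtime value are placeholders):

```python
import h5py
import numpy as np

Ypred = np.zeros((4000, 500), dtype=np.float32)  # hypothetical predictions (n_funcs*n_wvl x n_comb)

with h5py.File('scenarioA/predictions/mymodel_A1.h5', 'w') as h5_file:
    h5_file.create_dataset('LUTdata', data=Ypred)  # only the predictions are needed
    h5_file.attrs['scenario'] = 'A1'               # scenario-track ID
    h5_file.attrs['runtime'] = 12.3                # prediction runtime in seconds
```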
Emulator predictions will be evaluated once per day at 12:00 CET based on the defined metrics.
After the deadline, teams will be contacted with their evaluation results. If any issues are identified, teams will have up to two weeks to provide the necessary corrections.
Questions and discussions will be held in the discussion section of this repository.
After all the participants have provided the necessary corrections, the results will be published in the discussion section of this repository.
Expected Outcomes
- No clear superiority of any methodology across all metrics is expected.
- Participants will benefit from the analysis of the scenarios/tracks, which will help them improve their models.
- A research publication will be submitted to a remote sensing journal with the top three winners.
- An overview paper of the challenge will be published in the ECML-PKDD 2025 proceedings.
- The winner will have their registration cost for ECML-PKDD 2025 covered.
- We are exploring the possibility of providing monetary prizes for the top three winners. Stay tuned!