can significantly improve model performance (Rao et al., 2020; Hengl et al., 2017). We therefore pre-trained Presto on a diverse set of directly-sensed and derived Earth observation products which we pre-processed and exported using Google Earth Engine (Gorelick et al., 2017).
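As a rough illustration of this export step, the snippet below sketches how a single monthly Sentinel-2 composite might be pulled with the Google Earth Engine Python API; the dataset ID, region, dates and export parameters are placeholders rather than the authors' actual pipeline.

```python
# Minimal sketch (not the authors' pipeline) of exporting one monthly Sentinel-2 composite
# from Google Earth Engine; all identifiers and parameters are illustrative.
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([36.5, -1.5, 37.0, -1.0])  # arbitrary example bounding box

s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
      .filterBounds(region)
      .filterDate("2020-01-01", "2020-02-01")
      .median())  # one monthly composite

task = ee.batch.Export.image.toDrive(
    image=s2.clip(region),
    description="s2_2020_01",
    scale=10,        # 10 m resolution for the visible/NIR bands
    region=region,
)
task.start()
```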
A pre-training batch contained several pixel-timeseries samples, each of which is a concatenation of dynamic-in-time datapoints, with each timestep representing a month (yielding T = 12 timesteps in total). The following dynamic-in-time data products were used, yielding 15 channels: (i) Sentinel-2 (S2) multispectral data, (ii) Sentinel-1 (S1) radar data, (iii) ERA5 climate reanalysis data, (iv) NDVI (Rouse et al., 1974) derived from Sentinel-2 data and (v) land cover classes V from Dynamic World. To every pixel-timeseries we appended two static-in-time products: (i) topography data from the SRTM digital elevation model (90m Digital Elevation Data, 2003) and (ii) the location coordinates of each pixel. Hence, one pre-training sample x, comprising a pixel-timeseries t ∈ [R^{T×15}; V^{T×1}] and static variables s ∈ R^{1×5}, is summarized as follows:
x = [ { t_i^{S1}; t_i^{S2}; t_i^{ERA5}; t_i^{NDVI}; t_i^{DW} | i = 1, ..., 12 }; s^{TG}; s^{Loc} ]    (1)
From now on, we use “pixel-timeseries” to refer to both the dynamic and the static variables.
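For concreteness, the sketch below assembles one such sample with NumPy. The per-product channel split (2 S1 bands, 10 S2 bands, 2 ERA5 variables, 1 NDVI band) and the 5 static values (2 topographic, 3 location) are assumptions chosen to match the shapes above, not the exact pipeline.

```python
# Minimal sketch (assumed channel breakdown) of one pre-training sample x from Eq. (1):
# T = 12 monthly timesteps of 15 real-valued dynamic channels, a categorical Dynamic World
# class per timestep, and 5 static values (topography + location).
import numpy as np

T = 12
t_s1   = np.random.randn(T, 2)    # Sentinel-1 backscatter (e.g., VV, VH)
t_s2   = np.random.randn(T, 10)   # Sentinel-2 surface-reflectance bands
t_era5 = np.random.randn(T, 2)    # ERA5 climate variables (e.g., temperature, precipitation)
t_ndvi = np.random.randn(T, 1)    # NDVI derived from Sentinel-2

t_real = np.concatenate([t_s1, t_s2, t_era5, t_ndvi], axis=1)   # t in R^(T x 15)
t_dw   = np.random.randint(0, 9, size=(T, 1))                   # t in V^(T x 1), 9 DW classes

s_tg  = np.array([412.0, 3.7])          # SRTM elevation (m) and slope (illustrative values)
s_loc = np.array([0.61, -0.55, 0.57])   # location coordinates of the pixel (illustrative)

x = {"dynamic_real": t_real, "dynamic_dw": t_dw, "static": np.concatenate([s_tg, s_loc])}
print(x["dynamic_real"].shape, x["dynamic_dw"].shape, x["static"].shape)  # (12, 15) (12, 1) (5,)
```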
3.2 Encoding and tokenization
We transformed the pixel-timeseries x into a number of tokens (each represented by an embedding e) to be processed by the Presto transformer. Per timestep 0 ≤ i < T, we split the input variables into channel groups C according to their type of sensor or source: e.g., the S1 bands form one channel group. We describe these groups in more detail in Appendix A.1.3. Each real-valued channel group represents a different sensor, native spatial resolution or (in the case of the Sentinel-2 channel groups) region of the electromagnetic spectrum. We projected each channel group to a common latent space of dimension d_e by separate learned linear projections h_c: e.g., e_i^{S1} = h^{S1}(t_i^{S1}). The Dynamic World classes are categorical, so we embedded them by indexing them into an embedding matrix.
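A minimal sketch of this tokenization step follows, assuming illustrative channel-group names and sizes (the real grouping is described in Appendix A.1.3) and a hypothetical encoder dimension.

```python
# Minimal sketch (assumed group names/sizes) of per-channel-group tokenization: one learned
# linear projection h_c per real-valued group, plus an embedding lookup for the categorical
# Dynamic World class.
import torch
import torch.nn as nn

d_e = 128  # encoder embedding dimension (illustrative)
groups = {"S1": 2, "S2_RGB": 3, "ERA5": 2, "NDVI": 1}  # illustrative subset of channel groups

projections = nn.ModuleDict({name: nn.Linear(n, d_e) for name, n in groups.items()})
dw_embedding = nn.Embedding(num_embeddings=9, embedding_dim=d_e)  # 9 Dynamic World classes

def tokenize_timestep(t_i: dict, dw_class: torch.Tensor) -> dict:
    """Map one timestep's channel groups to d_e-dimensional token embeddings e."""
    tokens = {name: projections[name](t_i[name]) for name in groups}   # e.g. e_i^S1 = h^S1(t_i^S1)
    tokens["DW"] = dw_embedding(dw_class)                              # categorical -> embedding row
    return tokens

# Example: one timestep of dummy inputs.
example = {name: torch.randn(n) for name, n in groups.items()}
tokens = tokenize_timestep(example, torch.tensor(3))
print({k: v.shape for k, v in tokens.items()})  # every token has shape (128,)
```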
Table 1: We evaluated Presto on a wide variety of downstream tasks, including segmentation (seg.), multi-label (ml) scene classification (class.) and regression (reg.) tasks. There is diversity in terms of data composition, geographic area and training set size. Input shape describes the shape of a single sample, in terms of [Height, Width, Timesteps, Channels]; the temporal dimension (third entry) distinguishes time-series from single-timestep inputs.

Dataset         Task          Region    Input shape [H, W, T, C]           Train samples
CropHarvest     Seg.          Kenya     [1, 1, 12, 18]                     1,345
                              Brazil    [1, 1, 12, 18]                     203
                              Togo      [1, 1, 12, 18]                     1,319
S2-Agri 100     Class.        France    [5, 5, 24, 10]                     1,500
TreeSatAI       Class. (ml)   Germany   [6, 6, 1, 2] / [6, 6, 1, 11]       45,337
EuroSat         Class.        Europe    [64, 64, 1, 3] / [64, 64, 1, 11]   21,600
Fuel Moisture   Reg.          USA       [1, 1, 3, 19]                      1,578
Algae Blooms    Reg.          USA       [1, 1, 12, 19]                     777

Unlike natural images, in which the data and its label are self-contained, remote sensing labels are inherently associated with a place and time on Earth (i.e., a latitude/longitude and timestamp). In addition, while natural images contain RGB channels from the same camera sensor, Presto's pixel-timeseries input contains channels from multiple remote sensing instruments and data products. We therefore wanted to communicate to the model: (i) the location of the datapoint (already present in
the input as a static variable through the coordinates s^{Loc}) and a variable's (ii) timestamp and (iii) channel group. We did this by adding encodings to the previously described embeddings e. The complete encoding has dimension d_e and contains a concatenation of the positional, month, and learned channel encodings described below.
• Positional: We used the sinusoidal positional encoding originally used by Vaswani et al. (2017).

• Month: We added an encoding representing the month captured by each token, because we expect timesteps from similar months to have similar features even if they are from different years. We assign an integer to each month ranging from 0 to 11, yielding (see the sketch after this list):

p_{month, 2i}   = sin((2π × month) / 12)    (2)
p_{month, 2i+1} = cos((2π × month) / 12)    (3)

For static-in-time variables, the positional and month encodings were set to zero.

• Channel Group: Each token is associated with a set of input channels. In multispectral SatMAE (Cong et al., 2022), a fixed encoding was used to communicate input-band information, with different channels representing different wavelengths; this is possible because only input data from one sensor (Sentinel-2) is used. However, since Presto's input data includes multiple remote sensing products, we applied a learnable encoding for each channel group from the set of possible channel groups C = {S1, S2_RGB, ..., ERA5, TG, Loc}.
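The sketch below illustrates the month encoding of Eqs. (2)-(3); treating it as interleaved sin/cos values filling a fixed-size slice of the full encoding is an assumption of the sketch.

```python
# Minimal sketch of the month encoding in Eqs. (2)-(3): each month 0-11 maps to a point on the
# unit circle, so the same calendar month in different years gets the same encoding. The size of
# the month slice within the full d_e-dimensional encoding is an assumption here.
import numpy as np

def month_encoding(month: int, dim: int = 32) -> np.ndarray:
    angle = 2 * np.pi * month / 12
    enc = np.empty(dim)
    enc[0::2] = np.sin(angle)   # p_month,2i
    enc[1::2] = np.cos(angle)   # p_month,2i+1
    return enc

# December (11) and January (0) are close on the circle, reflecting seasonal similarity.
print(np.linalg.norm(month_encoding(11) - month_encoding(0)))   # small
print(np.linalg.norm(month_encoding(5)  - month_encoding(0)))   # large
```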
The transformer input E ∈ R^{(T·|C_dynamic| + |C_static|) × d_e} (for encoder dimension d_e) is a concatenation of:

• Dynamic variables, for timesteps i < T and channel groups c ∈ C: e_i^c = h^c(t_i^c) + [p_{channel}^c ; p_{sin}(i) ; p_{month}(i)]
• Topographical data: e^{TG} = h^{TG}(s^{TG}) + [p_{channel}^{TG} ; 0 ; 0]
• Coordinates: e^{Loc} = h^{Loc}(s^{Loc})
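The following sketch assembles this token sequence under simplifying assumptions: the channel, positional and month slices are taken to be equal thirds of d_e (so d_e is assumed divisible by 3), and all names are illustrative.

```python
# Minimal sketch (assumed equal-sized encoding slices, illustrative names) of assembling the
# transformer input E: dynamic tokens receive channel + positional + month encodings, the
# topography token receives only its channel encoding, and the location token none.
import torch

def build_transformer_input(dynamic_tokens, months, channel_enc, pos_enc, month_enc, e_tg, e_loc):
    """
    dynamic_tokens: dict {group: (T, d_e) tensor} of projected dynamic variables
    months:         length-T list of calendar months (0-11), one per timestep
    channel_enc:    dict {group: (d_e // 3,) learned channel encoding}
    pos_enc:        (T, d_e // 3) sinusoidal positional encodings
    month_enc:      (12, d_e // 3) month encodings from Eqs. (2)-(3)
    """
    tokens = []
    for group, toks in dynamic_tokens.items():
        for i in range(toks.shape[0]):
            # e_i^c = h^c(t_i^c) + [p_channel^c ; p_sin(i) ; p_month(i)]
            enc = torch.cat([channel_enc[group], pos_enc[i], month_enc[months[i]]])
            tokens.append(toks[i] + enc)
    zeros = torch.zeros_like(pos_enc[0])
    tokens.append(e_tg + torch.cat([channel_enc["TG"], zeros, zeros]))   # static topography token
    tokens.append(e_loc)                                                 # coordinates, no extra encoding
    return torch.stack(tokens)   # (T * |C_dynamic| + |C_static|, d_e)
```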
3.3 Pre-training via Structured Masking

A key requirement for Presto was to perform well even with incomplete inputs (i.e., when there are missing timesteps, channels, or both). When masking out part of the input x, we therefore tailored the masking strategies to encourage the model to learn representations that perform well when given a subset of bands or timesteps for downstream tasks. For a T × D input of T timesteps and D total input channels, we used the following masking techniques (illustrated in Figure 1), where Presto considers a token to be a 1 × d input (a single timestep of d grouped channels). The coordinates were never masked, but the static topographic tokens can be.

1. Random: (t × d) masked values, with t < T and d < D (a sketch of this strategy follows).
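A minimal sketch of the random strategy, under assumed tensor conventions (True marks values hidden from the encoder; the location token is excluded by construction):

```python
# Minimal sketch (assumed conventions) of the "Random" masking strategy: hide a random subset
# of the T x D grid of values. True marks masked entries; the location token is never masked,
# while topography tokens may be.
import torch

def random_mask(T: int, D: int, ratio: float = 0.5) -> torch.Tensor:
    """Return a boolean (T, D) mask with roughly `ratio` of the entries masked."""
    return torch.rand(T, D) < ratio

mask = random_mask(T=12, D=15)
print(mask.float().mean())   # close to 0.5
```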
Table 2: Mean F1 score across all CropHarvest tasks. Presto outperforms TIML (Tseng et al., 2022) and MOSAIKS-1D while requiring the adaptation of far fewer parameters. The TIML and MOSAIKS-1D models did not receive Dynamic World as input, so we measured Presto's performance both with and without it.

                  # parameters
Model             Total     Adapted    Mean F1
Random Forest     -         -          0.441