Table 2: Total and trainable parameter counts, with AUC ROC scores.

Model               Total params   Trainable params   AUC ROC
MOSAIKS-1D R        418K           8193               0.738
TIML                91K            91K                0.802
Presto R            402K           129                0.835
Presto R (no DW)                                      0.836
Figure 3: Presto is robust to incomplete inputs. We measured the AUC ROC score of Presto with
linear probing (Presto R) on the CropHarvest dataset when no Dynamic World input is passed, and
with a subset of input months (the x-axis). We plot the performance of MOSAIKS-1D and TIML
when they receive the full 12 months of input (dashed horizontal lines); Presto R recovered the
performance of these models given only a subset of input months.
2. Channel-groups: (T × d) masked values, with d < D
3. Contiguous timesteps: (t × D) masked values, with t < T
4. Timesteps: (t × D) masked values, with t < T
For each training instance, we randomly sampled from the above strategies to construct a mask.
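As a rough illustration, the sampling could look like the sketch below. The (T, D) shape, the channel groupings, and the masking ratio are assumptions for illustration rather than Presto's actual values, and the first (random) strategy, listed before this excerpt, is assumed to mask independently sampled values.

```python
import numpy as np

# Illustrative shapes and channel groupings; not Presto's actual values.
T, D = 12, 17  # e.g. 12 months of a pixel-timeseries with 17 channels
CHANNEL_GROUPS = [range(0, 6), range(6, 12), range(12, 17)]  # hypothetical groups

def sample_mask(rng: np.random.Generator, mask_ratio: float = 0.75) -> np.ndarray:
    """Return a boolean (T, D) mask; True marks values to hide and reconstruct."""
    mask = np.zeros((T, D), dtype=bool)
    strategy = rng.choice(
        ["random", "channel_groups", "contiguous_timesteps", "timesteps"]
    )
    if strategy == "random":  # 1. independently sampled values (assumed)
        mask = rng.random((T, D)) < mask_ratio
    elif strategy == "channel_groups":  # 2. (T x d) values, d < D
        group = CHANNEL_GROUPS[rng.integers(len(CHANNEL_GROUPS))]
        mask[:, list(group)] = True
    elif strategy == "contiguous_timesteps":  # 3. (t x D) values, t < T
        t = int(rng.integers(1, T))
        start = int(rng.integers(0, T - t + 1))
        mask[start:start + t, :] = True
    else:  # 4. (t x D) values at randomly chosen timesteps, t < T
        t = int(rng.integers(1, T))
        mask[rng.choice(T, size=t, replace=False), :] = True
    return mask

mask = sample_mask(np.random.default_rng(0))
```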
To handle both the categorical and continuous inputs we used the following loss function, which
balances the continuous and categorical losses for every batch so that each reconstructed value
receives the same weighting in the final loss:

$$\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{MSE}} + \lambda \frac{N_{\text{cat}}}{N_{\text{cont}}} \mathcal{L}_{\text{CE}}$$

$\mathcal{L}_{\text{MSE}}$ is the mean squared error reconstruction loss used for the continuous values, $\mathcal{L}_{\text{CE}}$ is the cross-entropy loss used for the categorical values, $N_{\text{cont}}$ is the number of masked continuous values, and $N_{\text{cat}}$ is the number of masked categorical values in the batch. $\lambda$ is a hyperparameter, which we set to 2.
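A minimal sketch of this loss in PyTorch, assuming the masked predictions and targets have already been gathered into flat tensors (all variable names are hypothetical):

```python
import torch.nn.functional as F

def reconstruction_loss(pred_cont, target_cont, cat_logits, target_cat, lam=2.0):
    n_cont = target_cont.numel()  # N_cont: masked continuous values in the batch
    n_cat = target_cat.numel()    # N_cat: masked categorical values in the batch
    l_mse = F.mse_loss(pred_cont, target_cont)      # L_MSE over continuous values
    l_ce = F.cross_entropy(cat_logits, target_cat)  # L_CE over categorical values
    return l_mse + lam * (n_cat / n_cont) * l_ce    # lambda set to 2 here
```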
4 Experiments
In all experiments described below, we used a Presto model with identical encoder and decoder
configurations (2 attention layers with 8 heads, an embedding size of 128, and an MLP ratio of 4). We
investigated the effect of different encoder configurations in Table 8.
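For concreteness, this configuration corresponds roughly to the following generic PyTorch stand-in; Presto's transformer blocks are its own implementation, so this is only an illustrative sketch. The MLP ratio of 4 gives a feed-forward width of 128 × 4 = 512.

```python
import torch.nn as nn

# Generic stand-in for the stated encoder configuration.
layer = nn.TransformerEncoderLayer(
    d_model=128, nhead=8, dim_feedforward=512, batch_first=True
)
encoder = nn.TransformerEncoder(layer, num_layers=2)  # 2 attention layers
```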
For downstream evaluation, we took the encoder-decoder model learned during pre-training and
discarded the decoder. As in He et al. (2022), we passed a global pool of all the encoder’s output
tokens to a downstream classifier. We evaluated the performance of three different models: Presto R,
Presto RF, and Presto FT, defined below.
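A sketch of this downstream head follows; the tensor shapes and the 2-class head are illustrative assumptions.

```python
import torch
import torch.nn as nn

tokens = torch.randn(32, 12, 128)  # stand-in for per-token encoder outputs
pooled = tokens.mean(dim=1)        # global pool over all output tokens
head = nn.Linear(128, 2)           # e.g. a binary task; fine-tuned in Presto FT
logits = head(pooled)              # (32, 2)
```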
Figure 4: We obtained per-image predictions using Presto by computing the mean and standard deviation
of Presto's per-pixel outputs and passing this concatenated vector to a downstream classifier. We
illustrate this for the EuroSat task.
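A sketch of this per-image featurization, assuming Presto's per-pixel embeddings for one image have already been computed (shapes are illustrative):

```python
import numpy as np

# Stand-in for Presto's per-pixel outputs for a 64x64-pixel image.
pixel_embeddings = np.random.default_rng(0).normal(size=(64 * 64, 128))
image_feature = np.concatenate(
    [pixel_embeddings.mean(axis=0), pixel_embeddings.std(axis=0)]
)  # (256,) vector passed to the downstream classifier
```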
• Feature extraction. Rolf et al. (2021) demonstrated the utility of neural networks as feature-
extractors on top of which computationally efficient classifiers can be trained. Presto R and
Presto RF consist respectively of linear or logistic regressions and random forests trained on
Presto's embeddings (see the sketch after this list). Since only the regression/random forest is
trained, this is a computationally efficient method for adapting Presto to a wide range of tasks.
• Fine-tuning. Presto FT consists of the Presto encoder, followed by a linear transformation of the
pooled tokens to the desired outputs. This entire model (the encoder and the linear transformation)
is fine-tuned on the training data from each evaluation task. We used a subset of the (downstream)
training data for validation.
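A sketch of the feature-extraction setup (Presto R / Presto RF), with random arrays standing in for the frozen encoder's pooled embeddings and the downstream labels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 128))  # stand-in for frozen-encoder embeddings
labels = rng.integers(0, 2, size=500)     # e.g. a binary classification task

presto_r = LogisticRegression(max_iter=1000).fit(embeddings, labels)  # Presto R
presto_rf = RandomForestClassifier().fit(embeddings, labels)          # Presto RF
```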
During pre-training, we used a validation task consisting of classifying all points in the CropHarvest
dataset (Tseng et al., 2021) according to their FAO indicative crop classifications. For this validation
task, we excluded points used for evaluation (Section 5.1).
For evaluation, we compared Presto to state-of-the-art task-specific baselines (Section 5). Because
there are no other global self-supervised models for pixel-timeseries, we adapted MOSAIKS (Rolf
et al., 2021) for timeseries data by performing convolutions over the temporal rather than spatial
dimension (MOSAIKS-1D). We used the output features with random forests (MOSAIKS-1D RF)
and regressions (MOSAIKS-1D R).
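A sketch of this temporal adaptation, using random (untrained) 1D filters convolved over the time axis; MOSAIKS samples its filters from data patches, and all sizes here are illustrative.

```python
import torch
import torch.nn.functional as F

batch, channels, timesteps = 32, 17, 12
series = torch.randn(batch, channels, timesteps)  # pixel-timeseries inputs
filters = torch.randn(512, channels, 3)           # 512 random temporal filters
features = F.relu(F.conv1d(series, filters))      # (batch, 512, timesteps - 2)
features = features.mean(dim=-1)                  # pool over time -> (batch, 512)
# These features feed MOSAIKS-1D R (regression) or MOSAIKS-1D RF (random forest).
```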
5 Evaluation Tasks & Results
We evaluated Presto using six evaluation tasks spanning diverse task types, geographic locations (4
continents and 38 countries), input data modalities, and fine-tuning dataset sizes (Table 1). Whenever
possible, we benchmarked Presto against the state-of-the-art model for that task.
Applying Presto to downstream tasks is computationally efficient. While other methods require a
cluster of GPUs for fine-tuning (Cong et al., 2022), we fine-tuned Presto on a single GPU or CPU.
For the fuel moisture task described in Section 5.1, fine-tuning Presto took under 6 minutes on a 2017
MacBook Pro's CPU. When Presto is used as a feature extractor, simple models with few learnable
parameters can be trained, as we show in Table 2. Even when Presto is fully fine-tuned, its
small size means that relatively few parameters need to be trained (Tables 5 and 6). This makes
Presto accessible to practitioners, especially those lacking significant computational resources.
Below, we describe the tasks used to evaluate Presto and discuss Presto’s performance on these tasks.
5.1 Timeseries Tasks
• Crop type Segmentation: The CropHarvest (Tseng et al., 2021) evaluation datasets consist of
binary pixel classification of (i) maize in Kenya, (ii) coffee in Brazil, and (iii) cropland in Togo. We
compared Presto to the baselines provided by CropHarvest and to Task-Informed Meta-Learning
(TIML, Tseng et al., 2022), which achieved state-of-the-art results on these datasets.
Table 3: RMSE results on the regression tasks. The literature baselines are not directly comparable,
since they use different input datasets or private test data (or both). Rao et al. (2020) reported an
RMSE of 25 on the fuel moisture dataset with a physics-assisted neural network, and the algae bloom
competition winner reported an RMSE of 0.761, indicating that our results are within a useful range.
Best results are highlighted in blue, with second-best results in bold. Because model performance
varies widely across tasks, we calculated the mean difference in RMSE from the linear regression
baseline across both tasks. Presto performed most consistently, both when used as a feature-extractor
and when fine-tuned.
Model                      Fuel Moisture   Algae Blooms   Mean difference
Linear Regression          28.20           0.850            0%
Random Forest              23.84           1.249           15.7%
MOSAIKS-1D RF              28.75           0.972            8.15%
Presto FT (random init.)   26.07           0.955            2.40%
Presto FT                  25.28           0.815           -7.24%
Presto RF                  25.98           0.884           -1.94%
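The mean-difference column can be reproduced directly from the table entries; for example, for Presto FT:

```python
# Reproducing the "Mean difference" column for Presto FT from the table:
fuel = (25.28 - 28.20) / 28.20    # -10.35% relative to linear regression
algae = (0.815 - 0.850) / 0.850   #  -4.12% relative to linear regression
print(f"{100 * (fuel + algae) / 2:.2f}%")  # -7.24%, matching Table 3
```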
Table 4: Results on the TreeSatAI dataset. We compared Presto to the dataset's benchmark models.
The MLPs contain 3 layers (with 563K and 723K parameters, respectively) and are tuned for this task. We
froze the Presto encoder's 402K parameters and trained a random forest on its outputs with default
scikit-learn hyperparameters.
                   Weighted          Micro
Model       Data   F1      mAP      F1      mAP
MLP         S1     10.09   29.42    12.82   33.09
LightGBM    S1     11.86   32.79    14.07   35.11
Presto RF   S1     38.34   35.45    40.79   38.64
MLP         S2     51.97   64.19    54.59   65.83
LightGBM    S2     48.17   61.99    52.52   61.66
Presto RF   S2     55.29   61.53    58.29   63.31
• Fuel Moisture: The live fuel moisture dataset (Rao et al., 2020) measures live fuel moisture content