A.6 Presto’s failure modes
Presto processes pixel-timeseries independently, without spatial context from other pixels or locations. This means that when we make image-based predictions (such as for scene classification), Presto’s independent pixel representations must be aggregated into a single prediction. We opt for a simple concatenation of the element-wise mean and standard deviation of the representations, from which a classifier makes a prediction. Information gets lost in such a simple aggregation, which impacts Presto’s performance on such tasks.
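As a concrete illustration, the mean-and-standard-deviation aggregation described above can be sketched as follows (a minimal numpy sketch; the function name and shapes are hypothetical and not Presto’s actual code):

```python
import numpy as np

def aggregate_pixel_representations(reps: np.ndarray) -> np.ndarray:
    """Collapse independent per-pixel embeddings into one image-level vector.

    reps: array of shape (num_pixels, embedding_dim). Hypothetical helper
    illustrating the mean/std concatenation described in the text.
    """
    mean = reps.mean(axis=0)            # (embedding_dim,)
    std = reps.std(axis=0)              # (embedding_dim,)
    return np.concatenate([mean, std])  # (2 * embedding_dim,)

# e.g. a 64x64 image -> 4096 pixel embeddings of dimension 128
pixel_reps = np.random.rand(4096, 128)
image_vector = aggregate_pixel_representations(pixel_reps)  # shape (256,)
```

A downstream classifier (e.g. kNN or a random forest) is then fit on this image-level vector; note that any spatial arrangement of the pixels is discarded by this pooling.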
Table 13: Additional results for the EuroSat task. Results for the ScaleMAE, SatMAE, and ConvMAE models are from Reed et al. (2022). We report kNN classifier results for different values of k, and at varying input resolutions.
Resolution         16                    32                    64
k                  5     20    100       5     20    100       5     20    100
SatMAE             0.729 0.727 0.695     0.871 0.876 0.854     0.934 0.931 0.913
ScaleMAE           0.751 0.744 0.699     0.912 0.901 0.869     0.960 0.956 0.935
ConvMAE            0.835 0.826 0.788     0.909 0.898 0.863     0.947 0.940 0.914
Presto (RGB)       0.869 0.828 0.713     0.869 0.829 0.712     0.869 0.829 0.713
Presto (MS)        0.916 0.892 0.844     0.920 0.892 0.846     0.921 0.893 0.846
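The kNN evaluations in Tables 13 and 14 follow the standard frozen-feature protocol: each test image’s k nearest training embeddings are found, and the majority label wins. A minimal numpy sketch of this protocol (the random data here is purely illustrative; the real experiments use the aggregated Presto embeddings):

```python
import numpy as np

def knn_accuracy(train_X, train_y, test_X, test_y, k):
    """Majority-vote kNN accuracy on frozen embeddings (Euclidean distance)."""
    # (num_test, num_train) pairwise distances
    dists = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=-1)
    nearest = np.argsort(dists, axis=1)[:, :k]  # indices of the k neighbours
    preds = np.array([np.bincount(train_y[row]).argmax() for row in nearest])
    return float((preds == test_y).mean())

rng = np.random.default_rng(0)
train_X = rng.normal(size=(200, 256))
train_y = rng.integers(0, 10, size=200)
test_X, test_y = rng.normal(size=(50, 256)), rng.integers(0, 10, size=50)
# evaluate at the same values of k as the tables
scores = {k: knn_accuracy(train_X, train_y, test_X, test_y, k) for k in (5, 20, 100)}
```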
Table 14: Additional results for the EuroSat task for Presto when run at reduced resolutions (compared to those used by Reed et al. (2022) and reported in Table 13). We report kNN classifier results for different values of k, and at varying input resolutions.
Resolution         2                     4                     8
k                  5     20    100       5     20    100       5     20    100
Presto (RGB)       0.843 0.811 0.699     0.860 0.820 0.706     0.869 0.826 0.710
Presto (MS)        0.873 0.852 0.799     0.895 0.874 0.824     0.911 0.886 0.838
Table 15: Additional results for the TreeSatAI task (as in Ahlswede et al. (2023), we report precision and recall in addition to F1 score and mAP). In addition, we report the results of finetuning Presto (Presto FT) from the pre-trained weights and from a random initialization.
Data  Aggregation  Model                      F1            mAP           Precision     Recall
S1    Weighted     MLP                        10.09         29.42         33.29          7.13
S1    Weighted     LightGBM                   11.86         32.79         37.96          8.06
S1    Weighted     Presto FT (random init.)   40.36±0.77    39.77±0.79    30.69±0.82    64.69±1.09
S1    Weighted     Presto FT                  38.69±0.78    37.41±0.58    30.09±0.74    61.20±0.85
S1    Weighted     Presto RF                  38.34±0.07    35.45±0.03    29.67±0.07    57.23±0.06
S1    Micro        MLP                        12.82         33.09         63.01          7.13
S1    Micro        LightGBM                   14.07         35.11         55.49          8.06
S1    Micro        Presto FT (random init.)   42.04±0.73    43.00±0.80    31.20±1.00    64.69±1.09
S1    Micro        Presto FT                  41.65±0.46    40.75±0.69    31.58±0.47    61.20±0.85
S1    Micro        Presto RF                  40.79±0.04    38.64±0.02    31.69±0.03    57.23±0.06
S2    Weighted     MLP                        51.97         64.19         74.59         42.23
S2    Weighted     LightGBM                   48.17         61.99         74.27         40.04
S2    Weighted     Presto FT (random init.)   52.74±0.50    57.24±0.64    45.87±1.17    64.29±1.51
S2    Weighted     Presto FT                  53.63±0.42    59.16±1.24    47.15±1.40    65.11±3.21
S2    Weighted     Presto RF                  55.29±0.08    61.53±0.09    56.93±0.07    58.56±0.09
S2    Micro        MLP                        54.49         65.83         77.18         42.23
S2    Micro        LightGBM                   52.52         61.66         76.27         40.04
S2    Micro        Presto FT (random init.)   52.56±0.41    58.08±0.66    44.56±1.03    64.29±1.51
S2    Micro        Presto FT                  53.31±0.18    59.77±1.13    45.51±1.46    65.11±3.21
S2    Micro        Presto RF                  58.29±0.06    63.31±0.06    58.04±0.05    58.56±0.09
Table 16: Full results on the S2-Agri100 dataset, including standard errors obtained from 3 runs. To obtain standard errors for SITS-Former, we run the official code (https://github.com/linlei1214/SITS-Former) with 3 seeds. Best results are highlighted.
Model        Params (M)   Pre-trained?   OA            κ            F1
SITS-Former  2.5                         65.13±3.01    0.55±0.03    42.12±0.52
SITS-Former  2.5          ✓              67.03±2.24    0.56±0.02    42.83±0.30
Presto       0.4                         45.98±2.74    0.35±0.02    27.45±0.64
Presto       0.4          ✓              68.89±1.05    0.58±0.01    40.41±0.25
Figure 7: Accuracy of a kNN@5 classifier with Presto RGB representations on the EuroSat dataset vs. the input resolution, for different categories. Some categories have been left out for clarity.
Figure 8: The RGB bands of example images from four EuroSat classes: (a) Forest, (b) Annual Crop, (c) Highway, (d) River.
For example, Presto’s performance on the EuroSat dataset reaches a plateau when increasing the input resolution. As Figure 7 shows, this is mainly caused by a failure to accurately predict specific classes (for example, the Highway and River classes). Figure 8 shows example images for these classes, as well as for the Forest and Annual Crop classes, on which Presto achieves higher accuracies. While in the Forest and Annual Crop images most pixels of the image actually represent the labelled class, in the Highway and River images only a relatively small part of the image actually contains the label (a highway or river). We hypothesize that since many pixels in the Highway and River images do not actually represent that class, the crude token-aggregation method we use to represent images is insufficiently discriminative to accurately classify these images.
Other pre-trained remote sensing models use much more powerful mechanisms for aggregating spatial information. For example, ViT models convolve over patches and then apply an attention mechanism between spatial patches. If image-based predictions are needed and these predictions are highly dependent on the occurrence of objects in subregions of the image, models which natively process this important spatial information may be better suited.
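For intuition, the benefit of attention over plain mean/std pooling can be sketched with a single learned query that attends over the spatial tokens, so a few discriminative pixels (e.g. the handful of highway pixels) can dominate the image vector instead of being averaged away. A minimal numpy sketch; the `query` vector and all shapes are hypothetical and not any model’s actual implementation:

```python
import numpy as np

def attention_pool(tokens: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Pool spatial tokens using attention weights from a (learned) query.

    tokens: (num_tokens, dim), query: (dim,). Tokens that align with the
    query receive large softmax weights and dominate the pooled vector.
    """
    scores = tokens @ query / np.sqrt(tokens.shape[-1])  # (num_tokens,)
    scores = scores - scores.max()                       # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()      # softmax over tokens
    return weights @ tokens                              # (dim,)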
We plan on exploring techniques to mitigate this difficulty with Presto in future work.
Geography-Aware Self-Supervised Learning

Kumar Ayush* (Stanford University), Burak Uzkent* (Stanford University), Chenlin Meng* (Stanford University), Kumar Tanmay (IIT Kharagpur), Marshall Burke (Stanford University), David Lobell (Stanford University), Stefano Ermon (Stanford University)

Abstract

Contrastive learning methods have significantly narrowed the gap between supervised and unsupervised learning on computer vision tasks. In this paper, we explore their