application to geo-located datasets, e.g. remote sensing,
where unlabeled data is often abundant but labeled data
is scarce. We first show that due to their different char-
acteristics, a non-trivial gap persists between contrastive
and supervised learning on standard benchmarks. To close
the gap, we propose novel training methods that exploit the
spatio-temporal structure of remote sensing data. We lever-
age spatially aligned images over time to construct tempo-
ral positive pairs in contrastive learning and geo-location
to design pre-text tasks. Our experiments show that our
proposed method closes the gap between contrastive and
supervised learning on image classification, object detec-
tion and semantic segmentation for remote sensing. More-
over, we demonstrate that the proposed method can also be
applied to geo-tagged ImageNet images, improving down-
stream performance on various tasks. The project webpage is available at geography-aware-ssl.github.io.
1. Introduction
Inspired by the success of self-supervised learning meth-
ods [3, 13], we explore their application to large-scale re-
mote sensing datasets (satellite images) and geo-tagged nat-
ural image datasets. It has been recently shown that self-
supervised learning methods perform comparably well or
even better than their supervised learning counterparts on image classification, object detection, and semantic segmentation on traditional computer vision datasets [21, 10, 13, 3,
2]. However, their application to remote sensing images is
largely unexplored, despite the fact that collecting and labeling remote sensing images is particularly costly, as annotations often require domain expertise [37, 38, 36, 16, 5].
*Equal Contribution. Contact: {kayush, buzkent, chenlin}@cs.stanford.edu
In this direction, we first experimentally evaluate the per-
formance of an existing self-supervised contrastive learning
method, MoCo-v2 [13], on remote sensing datasets, finding
a performance gap with supervised learning using labels.
For instance, on the Functional Map of the World (fMoW)
image classification benchmark [5], we observe an 8% gap
in top-1 accuracy between supervised and self-supervised
methods.
To bridge this gap, we propose geography-aware con-
trastive learning to leverage the spatio-temporal structure
of remote sensing data. In contrast to typical computer vi-
sion images, remote sensing data are often geo-located and
might provide multiple images of the same location over
time. Contrastive methods encourage closeness of represen-
tations of images that are likely to be semantically similar
(positive pairs). Unlike contrastive learning for traditional
computer vision images where different views (augmenta-
tions) of the same image serve as a positive pair, we pro-
pose to use temporal positive pairs from spatially aligned
images over time. Utilizing such information allows the
representations to be invariant to subtle variations over time
(e.g., due to seasonality). This can in turn result in more
discriminative features for tasks focusing on spatial vari-
ation, such as object detection or semantic segmentation
(but not necessarily for tasks involving temporal variation
such as change detection). In addition, we design a novel
unsupervised learning method that leverages geo-location
information, i.e., knowledge about where the images were
taken. More specifically, we consider the pretext task of
predicting where in the world an image comes from, similar
to [11, 12]. This can complement the information-theoretic
objectives typically used by self-supervised learning meth-
ods by encouraging representations that reflect geograph-
ical information, which is often useful in remote sensing
tasks [31]. Finally, we integrate the two proposed methods
arXiv:2011.09980v7 [cs.CV] 8 Mar 2022
Figure 1: Left shows the original MoCo-v2 [13] framework. Right shows the schematic overview of our approach.
into a single geography-aware contrastive learning objec-
tive.
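The combined objective can be sketched as follows. This is a minimal NumPy sketch, not the paper's implementation: the loss weight `lam`, the geo-cluster classification head `W`, the cluster index `cluster_id`, and all tensor shapes are illustrative assumptions. The first term is a standard InfoNCE loss where the positive key comes from a spatially aligned image at a different time; the second is a cross-entropy pretext loss for predicting which geographic cluster an image came from.

```python
import numpy as np

def info_nce_loss(query, key_pos, key_negs, tau=0.07):
    """InfoNCE loss for one query against a temporal positive and a
    bank of negatives. query: (d,), key_pos: (d,), key_negs: (n, d);
    all vectors are assumed L2-normalized."""
    l_pos = query @ key_pos                        # similarity to the temporal positive
    l_neg = key_negs @ query                       # similarities to negatives, shape (n,)
    logits = np.concatenate(([l_pos], l_neg)) / tau
    logits -= logits.max()                         # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

def geo_loss(features, W, cluster_id):
    """Cross-entropy for a hypothetical geo-location pretext head:
    predict which geo-cluster the image was taken in."""
    logits = W @ features
    logits -= logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[cluster_id])

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d, n_neg, n_clusters = 128, 4096, 100

q = normalize(rng.normal(size=d))              # encoder output: image at (lat, lon), time t1
k_pos = normalize(rng.normal(size=d))          # momentum encoder: same location, time t2
k_negs = normalize(rng.normal(size=(n_neg, d)))
W = rng.normal(size=(n_clusters, d)) * 0.01    # assumed linear geo-classification head

lam = 1.0                                      # assumed weighting between the two objectives
total = info_nce_loss(q, k_pos, k_negs) + lam * geo_loss(q, W, cluster_id=7)
print(float(total))
```

In a real training loop, `q` and `k_pos` would be produced by the query and momentum encoders from two spatially aligned satellite images of the same location at different times, so that the representation becomes invariant to seasonal and other temporal variation.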
Our experiments on the Functional Map of the World [5] dataset, consisting of high-spatial-resolution satellite images, show that we improve the MoCo-v2 baseline significantly on target applications including image recognition [5], object detection [39, 1], and semantic segmentation [46]. In particular, the learned representations improve accuracy by ∼8% on image classification, ∼2% AP on object detection, ∼1% mIoU on semantic segmentation, and ∼3% top-1 accuracy on land cover classification. Interestingly, our geography-aware learning can even outperform its supervised learning counterpart on temporal data classification by ∼2%. To further demonstrate the effectiveness of our geography-aware learning approach, we
extract the geo-location information of ImageNet images
using the Flickr API, similar to [7], which provides us with
a subset of 543,435 geo-tagged ImageNet images. We ex-
tend the proposed approaches to geo-located ImageNet, and
show that geography-aware learning can improve the performance of MoCo-v2 by ∼2% on image classification,
showing the potential application of our approach to any
geo-tagged dataset. Figure 1 shows our contributions in de-
tail.
2. Related Work
Self-supervised methods use unlabeled data to learn representations that are transferable to downstream tasks (e.g., image classification). Two commonly used self-supervised approaches are pre-text tasks and contrastive learning.
Pre-text tasks. Pre-text-task-based learning [22, 41, 29, 49, 43, 28] can be used to learn feature representations when