randomly perturbed temporal positive pair $x_i^{t_1}, x_i^{t_2}$. Similar to Equation 1, $N$ is the number of negative samples, $\{k_j\}_{j=1}^{N}$ are the encoded negative pairs, and $\lambda \in \mathbb{R}^{+}$ is the temperature hyperparameter. Again, we refer readers to [13] for details on the construction of these negative pairs.
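To make the objective concrete, below is a minimal PyTorch sketch of how such a temporal InfoNCE loss could be computed; the function name, tensor shapes, and the normalization step are our own illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def temporal_info_nce(q, k_pos, queue, temperature=0.2):
    """Illustrative TemporalInfoNCE sketch (not the authors' code).

    q:      (B, D) query features encoded from x_i^{t1}
    k_pos:  (B, D) key features encoded from the temporal positive x_i^{t2}
    queue:  (N, D) encoded negative pairs {k_j}_{j=1}^{N}
    """
    q = F.normalize(q, dim=1)
    k_pos = F.normalize(k_pos, dim=1)
    queue = F.normalize(queue, dim=1)

    # Positive logits (B, 1): agreement between the temporal pair.
    l_pos = torch.einsum("bd,bd->b", q, k_pos).unsqueeze(-1)
    # Negative logits (B, N): agreement with the queue of negatives.
    l_neg = torch.einsum("bd,nd->bn", q, queue)

    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    # The positive is always at index 0, so labels are all zeros.
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```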
Note that, compared to Equation 1, we use two real images of the same area captured at different times to create positive pairs. We believe that relying on real images for positive pairs encourages the network to learn better representations for real data than relying on synthetically augmented images. On the other hand, our objective in Equation 2 enforces the representations to be invariant to changes over time. Depending on the target task, such an inductive bias can be desirable or undesirable. For example, for a change detection task, learning representations that are highly sensitive to temporal changes may be preferable. However, for image classification or object detection tasks, learning temporally invariant features should not degrade downstream performance.
4.2. Geo-location Classification as a Pre-text Task
In this section, we explore another aspect of remote sensing images, geo-location metadata, to further improve the quality of the learned representations. In this direction, we design a pre-text task for unsupervised learning. In our pre-text task, we cluster the images in the dataset using their coordinates $(lat_i, lon_i)$. We use a clustering method to construct $K$ clusters and assign an area with coordinates $(lat_i, lon_i)$ a categorical geo-label $c_i \in \mathcal{C} = \{1, \dots, K\}$. Using the cross-entropy loss function, we then optimize a geo-location predictor network $f_c$ as
$$\mathcal{L}_g = -\sum_{k=1}^{K} p(c_i = k) \log \hat{p}\left(c_i = k \mid f_c(x_i^t)\right), \quad (3)$$
where $p$ represents the probability of cluster $k$ being the true cluster and $\hat{p}$ represents the predicted probabilities over the $K$ clusters. In our experiments, we represent $f_c$ with a CNN parameterized by $\theta_c$. With this objective, our goal is to learn location-aware representations that can potentially transfer well to different downstream tasks.
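As a concrete illustration, a minimal PyTorch sketch of this geo-classification objective follows, assuming hard cluster assignments (so $p(c_i = k)$ in Eq. 3 is one-hot and the loss reduces to standard cross entropy); the `GeoClassifier` module, its feature dimension, and the backbone interface are hypothetical.

```python
import torch.nn as nn
import torch.nn.functional as F

class GeoClassifier(nn.Module):
    """Hypothetical geo-location predictor f_c: a CNN backbone
    followed by a classification head over K geo-clusters."""

    def __init__(self, backbone, feat_dim=2048, num_clusters=100):
        super().__init__()
        self.backbone = backbone   # any CNN mapping images -> (B, feat_dim)
        self.head = nn.Linear(feat_dim, num_clusters)

    def forward(self, x):
        return self.head(self.backbone(x))  # logits over the K clusters

def geo_loss(logits, geo_labels):
    # With hard k-Means assignments, p(c_i = k) in Eq. 3 is one-hot,
    # so the loss reduces to standard cross entropy on the true cluster.
    return F.cross_entropy(logits, geo_labels)
```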
4.3. Combining Geo-location and Contrastive Learning Losses
In the previous section, we designed a pre-text task leveraging the geo-location metadata of the images to learn location-aware representations in a standalone setting. In this section, we combine the geo-location prediction and contrastive learning tasks in a single objective to improve over the contrastive-learning-only and geo-location-learning-only settings. In this direction, we first integrate the geo-location learning task into the contrastive learning framework using the cross-entropy loss function, where the geo-location prediction network uses features $z_i^t$ from the query encoder as
$$\mathcal{L}_g = -\sum_{k=1}^{K} p(c_i = k) \log \hat{p}\left(c_i = k \mid f_c(z_i^t)\right). \quad (4)$$
In the combined framework (see Fig. 1), $f_c$ is represented by a linear layer parameterized by $\theta_c$. Finally, we propose the objective for joint learning as the linear combination of the TemporalInfoNCE and geo-classification losses, with coefficients $\alpha$ and $\beta$ representing the importance of the contrastive learning and geo-location learning losses:
$$\arg\min_{\theta_q, \theta_k, \theta_c} \mathcal{L}_f = \alpha \mathcal{L}_{z^{t_1}} + \beta \mathcal{L}_g. \quad (5)$$
By combining the two tasks, we learn representations that jointly maximize agreement between spatio-temporal positive pairs, minimize agreement between negative pairs, and predict the geo-labels of the images in the positive pairs.
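A sketch of one training step under the joint objective in Eq. 5 is shown below; `query_encoder`, `key_encoder`, and `temporal_info_nce` are assumed to follow the earlier sketch, and the default values of $\alpha$ and $\beta$ are placeholders, not values specified here.

```python
import torch
import torch.nn.functional as F

# `query_encoder` (f_q), `key_encoder` (f_k, the momentum encoder) and
# `temporal_info_nce` are assumed to be defined as in the earlier sketch.
geo_head = torch.nn.Linear(2048, 100)  # linear f_c over K = 100 clusters

def joint_step(x_t1, x_t2, geo_labels, queue, alpha=1.0, beta=1.0):
    z_t1 = query_encoder(x_t1)           # query features z_i^{t1}
    with torch.no_grad():
        z_t2 = key_encoder(x_t2)         # momentum-encoded key features

    loss_cl = temporal_info_nce(z_t1, z_t2, queue)           # Eq. 2
    loss_geo = F.cross_entropy(geo_head(z_t1), geo_labels)   # Eq. 4
    return alpha * loss_cl + beta * loss_geo                 # Eq. 5
```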
5. Experiments
In this study, we perform unsupervised representation
learning on fMoW and GeoImageNet datasets. We then
evaluate the learned representations/pre-trained models on
a variety of downstream tasks including image recognition,
object detection and semantic segmentation benchmarks on
remote sensing and conventional images.
Figure 7: Left shows the number of clusters per label and Right shows the number of unique labels per cluster in fMoW and GeoImageNet. Labels represent the original classes in fMoW and GeoImageNet.

Implementation Details for Unsupervised Learning. For contrastive learning, similar to MoCo-v2 [3], we use ResNet-50 to parameterize the query and key encoders, $f_q$ and $f_k$, in all experiments. We use the following hyperparameters in the MoCo-v2 pre-training step: learning rate of 1e-3, batch size of 256, dictionary queue of size 65536, temperature scaling of 0.2, and the SGD optimizer. We use similar setups for both the fMoW and GeoImageNet datasets. Finally, for each downstream experiment, we report results for the representations learned after 200 epochs.
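For reference, the stated pre-training hyperparameters could be collected into a single configuration such as the following; the key names are our own, not the authors' configuration schema.

```python
# MoCo-v2 pre-training hyperparameters as stated above; the key
# names are illustrative, not the authors' configuration schema.
moco_v2_config = {
    "arch": "resnet50",       # query/key encoders f_q and f_k
    "lr": 1e-3,
    "batch_size": 256,
    "queue_size": 65536,      # dictionary queue of negatives
    "temperature": 0.2,
    "optimizer": "SGD",
    "epochs": 200,            # representations evaluated after 200 epochs
}
```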
For the geo-location classification task, we run the k-Means clustering algorithm to cluster fMoW and GeoImageNet into $K = 100$ geo-clusters given their latitude and longitude pairs. We show the clusters in Fig. 8.
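A minimal sketch of this clustering step using scikit-learn is shown below; the coordinate array and file name are hypothetical placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

# coords: (num_images, 2) array of (lat_i, lon_i) pairs; the file
# name is a hypothetical placeholder for the dataset's metadata.
coords = np.load("image_coordinates.npy")

# K = 100 geo-clusters, as in the text.
kmeans = KMeans(n_clusters=100, random_state=0).fit(coords)
geo_labels = kmeans.labels_  # categorical geo-label c_i per image
```

Note that k-Means on raw latitude/longitude pairs uses Euclidean distance, which ignores the Earth's sphericity; this is a common simplification when most images lie far from the poles and the antimeridian.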
As seen in the figure, while both datasets have similar clusters, there are some differences, particularly in North America and Europe. In Fig. 7 we analyze the clusters in GeoImageNet and fMoW. The figure shows that the number of clusters per class on GeoImageNet tends to be skewed towards smaller numbers than in fMoW, whereas the number of unique classes per clus-