ter on GeoImageNet has more of a uniform distribution. For
fMoW, we can conclude that each cluster contains samples
from most of the classes. Finally, when adding the geo-location
classification task into the contrastive learning, we
tune α and β to be 1.
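As a sketch, the joint objective can be written as a weighted sum of the two terms; with α = β = 1, both tasks contribute equally. A minimal numpy illustration follows (the function names are ours, not the paper's released code; we assume the MoCo-v2 convention of placing the positive key at index 0, so InfoNCE reduces to cross-entropy with target 0):

```python
import numpy as np

def cross_entropy(logits, target):
    """Cross-entropy of one example; logits: (C,), target: int class id."""
    z = logits - logits.max()                 # subtract max for stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

def joint_loss(contrast_logits, geo_logits, geo_label, alpha=1.0, beta=1.0):
    """Hypothetical sketch of the combined objective: a weighted sum of
    the MoCo-v2 InfoNCE term (positive key at index 0) and the auxiliary
    geo-location classification cross-entropy term."""
    return (alpha * cross_entropy(contrast_logits, 0)
            + beta * cross_entropy(geo_logits, geo_label))
```

Setting alpha or beta to 0 recovers either task in isolation, which is how the ablations without one of the terms can be read.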
Methods We compare our unsupervised learning approach
to supervised learning for the image recognition task. For object
detection and semantic segmentation, we compare them
Figure 8: Top and Bottom show the distributions of the
fMoW and GeoImageNet clusters.
to pre-trained weights obtained using (a) supervised learning
and (b) random initialization while fine-tuning on the
target-task dataset. Finally, for the ablation analysis we provide
results using different combinations of our methods. When
appending only the geo-location classification task or temporal
positives to MoCo-v2, we use MoCo-v2+Geo and MoCo-v2+TP.
When adding both of our approaches to MoCo-v2, we use
MoCo-v2+Geo+TP.
5.1. Experiments on fMoW
We first perform experiments on the fMoW image recognition
task. Following the common protocol for evaluating unsupervised
pre-training methods [3, 13], we freeze the features
and train a supervised linear classifier. However, in
practice it is more common to finetune the features end-to-end
on a downstream task. For completeness and a better
comparison, we also report end-to-end finetuning results for the
62-class fMoW classification. We report both top-1
accuracy and F1-scores for this task.
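The frozen-feature protocol amounts to fitting a softmax linear classifier on fixed backbone outputs. A minimal numpy sketch of this linear probe, assuming full-batch gradient descent (the helper name and hyperparameters are our illustration, not the paper's code):

```python
import numpy as np

def train_linear_probe(features, labels, n_classes, lr=0.5, epochs=300):
    """Linear evaluation: the backbone features are frozen, and only the
    weights W and bias b of a softmax classifier are learned by
    full-batch gradient descent on the cross-entropy loss."""
    n, d = features.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n                  # dL/dlogits
        W -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b
```

End-to-end finetuning differs only in that the gradient is also propagated into the backbone parameters instead of stopping at the features.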
Method                           Backbone   F1-Score↑ (Frozen/Finetune)   Accuracy↑ (Frozen/Finetune)
Sup. Learning (IN wts. init.) *  ResNet50   -/64.72                       -/69.07
Sup. Learning (Scratch) *        ResNet50   -/64.71                       -/69.05
Geoloc. Learning *               ResNet50   48.96/52.23                   52.40/56.59
MoCo-V2 (pre. on IN)             ResNet50   31.55/57.36                   37.05/62.90
MoCo-V2                          ResNet50   55.47/60.61                   60.69/64.34
MoCo-V2+Geo                      ResNet50   61.60/66.60                   64.07/69.04
MoCo-V2+TP                       ResNet50   64.53/67.34                   68.32/71.55
MoCo-V2+Geo+TP                   ResNet50   63.13/66.56                   66.33/70.60
Table 1: Experiments on fMoW on classifying single images.
* indicates a model trained up to the epoch with the highest
accuracy on the validation set. We use the same setup
for Sup. Learning and Geoloc. Learning in the remaining
experiments. Frozen corresponds to linear classification on
frozen features; Finetune corresponds to end-to-end finetuning
results for the fMoW classification.
Classifying Single Images In Table 1, we report the results
on single-image classification on fMoW. We would like to
highlight that in this case we classify each image individually.
In other words, we do not use the prior information
that multiple images over the same area (x_i^1, x_i^2, ..., x_i^{T_i})
have the same labels (y_i, y_i, ..., y_i). For evaluation, we
use 53,041 images. Our results on this task (linear classification
on frozen features) show that MoCo-v2 performs
reasonably well on a large-scale dataset with 60.69% accuracy,
8% less than the supervised learning methods. Sup.
Learning (IN wts. init.) and Sup. Learning (Scratch) correspond
to the supervised learning method starting from ImageNet
pre-trained weights and random weights, respectively.
This result aligns with MoCo-v2's performance on the ImageNet
dataset [3]. Next, by incorporating the geo-location classification
task into MoCo-v2, we improve top-1 classification accuracy
by 3.38%. We further improve the results
to 68.32% using temporal positives, bridging the gap between
the MoCo-v2 baseline and supervised learning to less
than 1%. However, when we perform end-to-end finetuning
for the classification task, we observe that our method
surpasses the supervised learning methods by more than
2%. For completeness, we also include results for MoCo-v2
pre-trained on the ImageNet dataset (4th row in Table 1) and
find that the distribution shift between ImageNet and the
downstream dataset leads to suboptimal performance.
Classifying Temporal Data In the next step, we change
how we perform testing across multiple images over an
area at different times. In this case, we predict labels
from the images over an area, i.e., make a prediction for each
t ∈ {1, ..., T_i}, and average the predictions from that area.
We then use the most confident class prediction to get area-specific
class predictions. In this case, we evaluate the performance
on 11,231 unique areas that are represented by
multiple images at different times. Our results in Table 2
show that area-specific inference improves the classification
accuracies by 4-8% over image-specific inference.
Even after incorporating temporal positives, we can improve the
accuracy by 6.1% by switching from image classification to
temporal data classification. Overall, our methods outperform
the baseline MoCo-v2 by 4-6% and supervised learning
by 1-2%. Here we only report temporal classification
on top of frozen features.
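The area-level inference step described above can be sketched as follows: average the per-image softmax outputs over the T images of an area, then take the most confident class (a minimal numpy illustration; the function name is ours):

```python
import numpy as np

def area_prediction(per_image_probs):
    """per_image_probs: array of shape (T, C) holding the softmax output
    for each of the T images of one area. Predictions are averaged over
    time, and the most confident class is returned for the whole area."""
    mean_probs = np.asarray(per_image_probs).mean(axis=0)
    return int(mean_probs.argmax())
```

Averaging before the argmax lets a few confident views outvote ambiguous ones, which is consistent with the 4-8% gain over per-image inference reported above.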
Method                           Backbone   F1-Score↑       Accuracy↑
Sup. Learning (IN wts. init.) *  ResNet50   68.72 (+4.01)   73.22 (+4.15)
Sup. Learning (Scratch) *        ResNet50   68.73 (+4.02)   73.24 (+4.19)
Geoloc. Learning *               ResNet50   52.01 (+3.05)   56.12 (+3.72)
MoCo-V2 (pre. on IN)             ResNet50   35.93 (+4.38)   42.56 (+5.51)
MoCo-V2                          ResNet50   63.96 (+8.49)   68.64 (+7.95)
MoCo-V2+Geo                      ResNet50   66.93 (+5.33)   70.48 (+6.41)
MoCo-V2+TP                       ResNet50   70.11 (+5.58)   74.42 (+6.10)