5.3. Experiments on GeoImageNet
Next, we adapt our methods for unsupervised learning on fMoW to improve representation learning on GeoImageNet. Unfortunately, since ImageNet does not contain images of the same area over time, we are not able to integrate temporal positive pairs into the MoCo-v2 objective. However, in our GeoImageNet experiments we show that we can still improve MoCo-v2 by introducing the geo-location classification pre-text task.
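The combined objective can be sketched as the MoCo-style InfoNCE contrastive loss plus a cross-entropy term for predicting the image's geographic cluster. This is a minimal numpy illustration, not the paper's implementation; the weighting `lam` and function names are assumptions.

```python
import numpy as np

def info_nce_loss(q, k_pos, k_negs, tau=0.2):
    """MoCo-style InfoNCE loss for one query embedding.

    q: (d,) query; k_pos: (d,) positive key; k_negs: (n, d) negative keys
    (the queue in MoCo). All embeddings are assumed L2-normalized.
    """
    logits = np.concatenate([[q @ k_pos], k_negs @ q]) / tau
    logits = logits - logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

def geoloc_ce_loss(geo_logits, cluster_id):
    """Cross-entropy for the geo-location classification pre-text task:
    predict which geographic cluster the image came from."""
    z = geo_logits - geo_logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[cluster_id]

def combined_loss(q, k_pos, k_negs, geo_logits, cluster_id, lam=1.0):
    # `lam` is a hypothetical weighting; the paper's exact balance may differ.
    return info_nce_loss(q, k_pos, k_negs) + lam * geoloc_ce_loss(geo_logits, cluster_id)
```

For GeoImageNet (where temporal pairs are unavailable), the query and positive key would come from two augmentations of the same image, as in standard MoCo-v2.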
Quantitative Analysis Table 6 shows the top-1 and top-5 classification accuracy scores on the test set of GeoImageNet. Surprisingly, with only the geo-location classification task we achieve 22.26% top-1 accuracy. With the MoCo-v2 baseline, we get 38.51% top-1 accuracy, about 3.47% more than the supervised learning method. With the addition of geo-location classification, we further improve the top-1 accuracy by 1.45%. These results are interesting in that MoCo-v2 (200 epochs) performs 8% worse than supervised learning on ImageNet-1k, whereas it outperforms supervised learning on our highly imbalanced GeoImageNet with 5150 class categories, about 5× more than ImageNet-1k.
Method                    Backbone  Top-1 (Accuracy)↑  Top-5 (Accuracy)↑
Sup. Learning (Scratch)   ResNet50  35.04              54.11
Geoloc. Learning          ResNet50  22.26              39.33
MoCo-V2                   ResNet50  38.51              57.67
MoCo-V2+Geo               ResNet50  39.96              58.71

Table 6: Experiments on GeoImageNet. We divide the dataset into 443,435 training and 100,000 test images across 5150 classes. We train MoCo-V2 and MoCo-V2+Geo for 200 epochs, whereas Sup. and Geoloc. Learning are trained until they converge.
6. Conclusion

In this work, we provide a self-supervised learning framework for remote sensing data, where unlabeled data is often plentiful but labeled data is scarce. By leveraging spatially aligned images over time to construct temporal positive pairs in contrastive learning, and geo-location in the design of pre-text tasks, we are able to close the gap between self-supervised and supervised learning on image classification, object detection, and semantic segmentation on remote sensing and other geo-tagged image datasets.
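The temporal positive pairs described above can be sketched as follows: group images by location, then pair two acquisitions of the same place from different timestamps as contrastive positives. This is an illustrative sketch; the tuple layout and function name are assumptions, not the paper's data format.

```python
import random
from collections import defaultdict

def make_temporal_positive_pairs(images, seed=0):
    """Build contrastive positive pairs from spatially aligned images.

    `images` is a list of (location_id, timestamp, image) tuples
    (hypothetical layout). Two shots of the same location taken at
    different times form one positive pair.
    """
    rng = random.Random(seed)
    by_location = defaultdict(list)
    for loc, t, img in images:
        by_location[loc].append((t, img))

    pairs = []
    for loc, shots in by_location.items():
        if len(shots) < 2:
            continue  # a temporal pair needs at least two acquisitions
        (t1, a), (t2, b) = rng.sample(shots, 2)
        pairs.append((a, b))
    return pairs
```

Locations with a single acquisition fall back to having no temporal pair, which is the situation on ImageNet noted in Section 5.3, where standard augmentation-based positives must be used instead.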
Acknowledgement

This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via 2021-2011000004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.

This research was also supported by the Stanford Data for Development Initiative, HAI, IARPA SMART, ONR (N00014-19-1-2145), and NSF grants #1651565 and #1733686.