pre-train        backbone  Single Image Top-1 ↑  Temporal Top-1 ↑
MoCo-V2+Geo+TP   ResNet50  69.56 (+6.43)         72.76 (+6.43)

Table 2: Experiments on fMoW on classifying temporal data. In the table, we compare the results to those on single-image classification. Here we present results corresponding to linear classification on frozen features only.
5.2. Transfer Learning Experiments

Previously, we performed pre-training experiments on the fMoW dataset and quantified the quality of the learned representations by training a supervised linear layer for image recognition on fMoW. In this section, we perform transfer learning experiments on different downstream tasks.
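A minimal sketch of this linear evaluation protocol in PyTorch, assuming a MoCo-v2-style ResNet-50 checkpoint; the checkpoint path, the data loader name, and the 62-category count for fMoW are illustrative assumptions, not details taken from the paper's code:

```python
import torch
import torch.nn as nn
import torchvision

# Linear probing: freeze the encoder, train a single linear layer on top.
backbone = torchvision.models.resnet50(weights=None)
backbone.fc = nn.Identity()                # expose the 2048-d pooled features
# backbone.load_state_dict(torch.load("moco_v2_geo_tp.pth"), strict=False)  # hypothetical checkpoint
backbone.eval()
for p in backbone.parameters():            # freeze all encoder weights
    p.requires_grad = False

linear = nn.Linear(2048, 62)               # 62 fMoW categories (assumed)
optimizer = torch.optim.SGD(linear.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for images, labels in train_loader:        # assumed (image, label) batches
    with torch.no_grad():
        feats = backbone(images)           # frozen features
    loss = criterion(linear(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```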
5.2.1 Object Detection

For object detection, we use the xView dataset [16], consisting of high-resolution satellite images captured with sensors similar to those used for the fMoW dataset. The xView dataset consists of 846 very large (∼2000×2000 pixels) satellite images with bounding-box annotations for 60 class categories, including airplane, passenger vehicle, maritime vessel, and helicopter.

pre-train                      AP50 ↑
Random Init.                   10.75
Sup. Learning (IN wts. init.)  14.44
Sup. Learning (Scratch)        14.42
MoCo-V2                        15.45 (+4.70)
MoCo-V2-Geo                    15.63 (+4.88)
MoCo-V2-TP                     17.65 (+6.90)
MoCo-V2-Geo+TP                 17.74 (+6.99)

Table 3: Object detection results on the xView dataset.
Implementation Details. We first divide the set of large images into 700 training and 146 test images. Next, we process the large images into 416×416 pixel crops by randomly sampling crop coordinates within each large image, repeating this sampling 100 times per large image. In this process, we ensure that any two sampled crops from the same image overlap by less than 25%.
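A minimal sketch of this sampling step, assuming the 25% constraint is measured as intersection-over-union between crop boxes (the text above does not specify the overlap metric):

```python
import random

def sample_crops(img_w, img_h, size=416, n=100, max_overlap=0.25, max_tries=10000):
    """Sample up to `n` size-by-size crop boxes whose pairwise IoU stays below `max_overlap`."""
    def iou(a, b):
        # a, b are (x0, y0, x1, y1) boxes, each of area size*size
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
        return inter / (2 * size * size - inter)

    boxes, tries = [], 0
    while len(boxes) < n and tries < max_tries:
        tries += 1
        x0 = random.randint(0, img_w - size)
        y0 = random.randint(0, img_h - size)
        cand = (x0, y0, x0 + size, y0 + size)
        if all(iou(cand, b) < max_overlap for b in boxes):
            boxes.append(cand)
    return boxes

# e.g. sample_crops(2000, 2000) for one ~2000x2000 xView image
```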
We then use RetinaNet [18] with a pre-trained ResNet-50 backbone and fine-tune the full network on the xView training set. To train RetinaNet, we use the Adam optimizer with a learning rate of 1e-5 and a batch size of 4.
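A minimal sketch of this fine-tuning loop, assuming torchvision's RetinaNet implementation (the specific implementation is not stated above, and the step that loads the pre-trained backbone weights is elided):

```python
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

# Fine-tuning sketch; torchvision's RetinaNet is an assumed stand-in, and
# folding the self-supervised ResNet-50 weights into the backbone is omitted.
model = retinanet_resnet50_fpn(weights=None, weights_backbone=None, num_classes=60)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

model.train()
for images, targets in train_loader:    # batches of 4; targets hold boxes and labels
    loss_dict = model(images, targets)  # torchvision returns a dict of losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```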
Quantitative Analysis. Table 3 shows the object detection performance on the xView test set. We achieve the best results when both the temporal positive pairs and the geo-location classification pre-text task are added to MoCo-v2. With our final model, we outperform randomly initialized weights by 7% AP and supervised learning on fMoW by 3.3% AP.
5.2.2 Image Segmentation

In this section, we perform downstream experiments on the task of semantic segmentation on the SpaceNet dataset [40]. The SpaceNet dataset consists of 5,000 high-resolution satellite images with segmentation masks for buildings.
Implementation Details. We divide the SpaceNet dataset into training and test sets of 4,000 and 1,000 images, respectively. We use the PSANet [50] network with a ResNet-50 backbone to perform semantic segmentation. We train PSANet for 100 epochs with the SGD optimizer, using a batch size of 16 and a learning rate of 0.01.
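A minimal sketch of this training recipe; since PSANet is not bundled with torchvision, an FCN with a ResNet-50 backbone stands in for it here, and the data loader is assumed:

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

# FCN-ResNet-50 as a stand-in for PSANet; `train_loader` is assumed to
# yield batches of 16 (image, mask) pairs.
model = fcn_resnet50(weights=None, weights_backbone=None, num_classes=2)  # building vs. background
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(100):
    for images, masks in train_loader:
        logits = model(images)["out"]    # (N, 2, H, W) per-pixel class scores
        loss = criterion(logits, masks)  # masks: (N, H, W) integer labels
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```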
Quantitative Analysis. Table 4 shows the segmentation performance of differently initialized backbone weights on the SpaceNet test set. As in object detection, we achieve the best IoU scores with the addition of temporal positives and the geo-location classification task. Our final model outperforms randomly initialized weights and supervised learning by 3.58% and 2.94% IoU, respectively. We observe that the gap between the best and worst models shrinks going from image recognition to object detection and semantic segmentation. This aligns with the behavior of MoCo-v2 pre-trained on ImageNet and fine-tuned on Pascal-VOC object detection and semantic segmentation [13, 3].
pre-train                      mIoU ↑
Random Init.                   74.93
Imagenet Init.                 75.23
Sup. Learning (IN wts. init.)  75.61
Sup. Learning (Scratch)        75.57
MoCo-V2                        78.05 (+3.12)
MoCo-V2-Geo                    78.42 (+3.49)
MoCo-V2-TP                     78.48 (+3.55)
MoCo-V2-Geo+TP                 78.51 (+3.58)

Table 4: Semantic segmentation results on SpaceNet.
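For reference, the IoU scores in Table 4 follow the standard intersection-over-union definition, sketched here (the exact evaluation protocol may differ in detail):

```python
import numpy as np

def class_iou(pred, gt, cls):
    """IoU for one class, given integer label maps `pred` and `gt`."""
    p, g = (pred == cls), (gt == cls)
    union = np.logical_or(p, g).sum()
    if union == 0:
        return float("nan")  # class absent from both prediction and ground truth
    return np.logical_and(p, g).sum() / union

# mIoU averages class_iou over all classes
# (for SpaceNet: background and building).
```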
5.2.3 Land Cover Classification

Finally, we perform transfer learning experiments on land cover classification across 66 land cover classes, using high-resolution remote sensing images from the USDA's National Agricultural Imagery Program (NAIP). We use images of California's Central Valley from 2016. Our final dataset consists of 100,000 training and 50,000 test images. Table 5 shows that our method outperforms randomly initialized weights by 6.34% and supervised learning by 3.77%.

pre-train                      Top-1 Accuracy ↑
Random Init.                   51.89
Imagenet Init.                 53.46
Sup. Learning (IN wts. init.)  54.67
Sup. Learning (Scratch)        54.46
MoCo-V2                        55.18 (+3.29)
MoCo-V2-Geo                    58.23 (+6.34)
MoCo-V2-TP                     57.10 (+5.21)
MoCo-V2-Geo+TP                 57.63 (+5.74)

Table 5: Land cover classification results on the NAIP dataset.
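A minimal sketch of the top-1 accuracy evaluation behind Table 5, with the classifier and data loader as illustrative stand-ins for the fine-tuned model and the 50,000-image test split:

```python
import torch
import torch.nn as nn
import torchvision

# ResNet-50 with a 66-way head for the NAIP land cover classes (stand-in).
model = torchvision.models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 66)
model.eval()

correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:   # assumed (image, label) batches
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"Top-1 accuracy: {100.0 * correct / total:.2f}%")
```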