(Figure caption, truncated: the bottom shows the distribution of GeoImageNet.)
Contrastive learning methods learn representations in a self-supervised way. The training objective encourages representations corresponding to pairs of images that are known a priori to be semantically similar (positive pairs) to be closer to each other than typical unrelated pairs (negative pairs). With similarity measured by the dot product, recent approaches in contrastive learning differ in the type of contrastive loss and in how positive and negative pairs are generated. In this work, we focus on the state-of-the-art contrastive learning framework MoCo-v2 [3], an improved version of MoCo [13], and study improved methods for constructing positive and negative pairs tailored to remote sensing applications.

The contrastive loss function used in the MoCo-v2 framework is InfoNCE [27], which is defined as follows for a given data sample:
$$\mathcal{L}_{z} = -\log \frac{\exp(z \cdot \hat{z} / \lambda)}{\exp(z \cdot \hat{z} / \lambda) + \sum_{j=1}^{N} \exp(z \cdot k_j / \lambda)}, \qquad (1)$$
where $z$ and $\hat{z}$ are the query and key representations obtained by passing the two augmented views of $x_i^t$ (denoted $v$ and $v'$ in Fig. 1) through the query and key encoders $f_q$ and $f_k$, parameterized by $\theta_q$ and $\theta_k$ respectively. Here $z$ and $\hat{z}$ form a positive pair. The $N$ negative samples, $\{k_j\}_{j=1}^{N}$, come from a dictionary of representations built as a queue; we refer readers to [13] for details. $\lambda \in \mathbb{R}^{+}$ is the temperature hyperparameter.
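For concreteness, Eq. 1 can be computed as a cross-entropy over one positive and $N$ negative logits, as is common in MoCo-style implementations. The PyTorch sketch below is ours, not the authors' code; it assumes L2-normalized representations, and the names `info_nce` and `temperature` are placeholders.

```python
import torch
import torch.nn.functional as F

def info_nce(z, z_hat, queue, temperature=0.2):
    """Sketch of Eq. 1 (assumed interface, not the authors' code).

    z:      (B, D) query representations, assumed L2-normalized
    z_hat:  (B, D) key representations forming positive pairs with z
    queue:  (N, D) dictionary of negative keys maintained as a queue
    """
    # Positive logits z . z_hat, one per sample -> (B, 1)
    l_pos = torch.einsum("bd,bd->b", z, z_hat).unsqueeze(-1)
    # Negative logits z . k_j against every key in the queue -> (B, N)
    l_neg = torch.einsum("bd,nd->bn", z, queue)
    # Index 0 is the positive; cross-entropy then reproduces the
    # -log softmax form of Eq. 1 with temperature lambda.
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=z.device)
    return F.cross_entropy(logits, labels)
```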
The key idea is to encourage representations of positive (semantically similar) pairs to be close to each other, and those of negative (semantically unrelated) pairs to be far apart, as measured by the dot product. The construction of positive and negative pairs therefore plays a crucial role in this contrastive learning framework. MoCo and MoCo-v2 both use perturbations (also called "data augmentation") of the same image to create a positive example and perturbations of different images to create a negative example. Commonly used perturbations include random color jittering, random horizontal flips, and random grayscale conversion.
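As an illustration, these perturbations can be composed with standard torchvision transforms. This is only a sketch of the kind of pipeline described: the probabilities and jitter strengths below are assumptions loosely following the public MoCo-v2 recipe, not values stated here.

```python
from torchvision import transforms

# Hypothetical pipeline for the perturbations listed above; the probabilities
# and jitter strengths are assumptions, not values taken from this paper.
perturb = transforms.Compose([
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),  # random color jittering
    transforms.RandomGrayscale(p=0.2),        # random grayscale conversion
    transforms.RandomHorizontalFlip(p=0.5),   # random horizontal flip
    transforms.ToTensor(),
])

# Two independent applications of `perturb` to the same image give a MoCo-style
# positive pair; applying it to two different images gives a negative pair.
```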
Figure 6: Demonstration of temporal positives in Eq. 2. An image from an area is paired with the other images (including itself) from the same area captured at different times. We show the time stamp for each image underneath the images. Note the color changes in the stadium seating and the surrounding areas.
Temporal Positive Pairs. Unlike many commonly used natural image datasets, remote sensing datasets often come with extra temporal information, meaning that for a given location $(lat_i, lon_i)$ there exists a sequence of spatially aligned images $X_i = (x_i^1, \dots, x_i^{T_i})$ over time. Unlike traditional videos, where nearby frames can experience large changes in content (e.g. from a cat to a tree), in remote sensing the content is often more stable across time due to the fixed viewpoint. For instance, a place on the ocean is likely to remain ocean for months or years, in which case satellite images taken at the same location across time should share high semantic similarity. Even for locations where non-trivial changes do occur over time, certain semantic similarities can still remain. For instance, key features of a construction site are likely to remain the same even as its appearance changes due to seasonality.
Given these observations, it is natural to leverage temporal information when constructing positive or negative pairs for remote sensing, since it provides extra semantically meaningful information about a place over time. More specifically, given an image $x_i^{t_1}$ collected at time $t_1$, we can randomly select another image $x_i^{t_2}$ that is spatially aligned with $x_i^{t_1}$ (i.e. $x_i^{t_2} \in X_i$). We then apply the perturbations (e.g. random color jittering) used in MoCo-v2 to the spatially aligned image pair $x_i^{t_1}$ and $x_i^{t_2}$, providing us with a temporal positive pair (denoted $v$ and $v'$ in Figure 1) that can be used to train the contrastive learning framework by passing the two views through the query and key encoders, $f_q$ and $f_k$ respectively (see Fig. 1). Note that when $t_1 = t_2$, the temporal positive pair is the same as the positive pair used in MoCo-v2.
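A minimal sketch of this sampling procedure is given below, assuming `images_over_time` holds the spatially aligned sequence $X_i$ and `perturb` is an augmentation pipeline such as the one sketched earlier; the function name is hypothetical.

```python
import random

def temporal_positive_pair(x_t1, images_over_time, perturb):
    """Build a temporal positive pair for one location i (hypothetical helper).

    x_t1:             image x_i^{t1} collected at time t1
    images_over_time: spatially aligned sequence X_i = (x_i^1, ..., x_i^{T_i})
    perturb:          MoCo-v2 style augmentation, e.g. the pipeline above
    """
    x_t2 = random.choice(images_over_time)      # x_i^{t2} in X_i; t2 may equal t1
    v, v_prime = perturb(x_t1), perturb(x_t2)   # views later fed to f_q and f_k
    return v, v_prime
```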
Given a data sample $x_i^{t_1}$, our TemporalInfoNCE objective function can be formulated as follows:

$$\mathcal{L}_{z_i^{t_1}} = -\log \frac{\exp(z_i^{t_1} \cdot z_i^{t_2} / \lambda)}{\exp(z_i^{t_1} \cdot z_i^{t_2} / \lambda) + \sum_{j=1}^{N} \exp(z_i^{t_1} \cdot k_j / \lambda)}, \qquad (2)$$

where $z_i^{t_1}$ and $z_i^{t_2}$ are the encoded representations of the augmented views of $x_i^{t_1}$ and $x_i^{t_2}$, obtained from the query and key encoders $f_q$ and $f_k$ respectively.
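Putting the pieces together, one possible training step with temporal positives, reusing the hypothetical `info_nce` and `temporal_positive_pair` helpers from the sketches above, could look as follows. The MoCo-v2 momentum update of $f_k$ and the queue maintenance are only indicated in comments; see [13] for the actual mechanism.

```python
import torch
import torch.nn.functional as F

def temporal_info_nce_step(x_t1, images_over_time, f_q, f_k, queue, perturb, temperature=0.2):
    """One (unbatched) sketch of Eq. 2; names and interface are assumptions."""
    v, v_prime = temporal_positive_pair(x_t1, images_over_time, perturb)
    # Query representation z_i^{t1} from the query encoder f_q
    z_t1 = F.normalize(f_q(v.unsqueeze(0)), dim=1)
    # Key representation z_i^{t2} from the key encoder f_k (no gradient)
    with torch.no_grad():
        z_t2 = F.normalize(f_k(v_prime.unsqueeze(0)), dim=1)
    loss = info_nce(z_t1, z_t2, queue, temperature)
    # In MoCo-v2, f_k is then updated as a momentum average of f_q, and z_t2 is
    # enqueued into `queue` while the oldest keys are dequeued; see [13].
    return loss, z_t2
```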