title (stringlengths 28-135) | abstract (stringlengths 0-12k) | introduction (stringlengths 0-12k)
---|---|---|
Zhang_TokenHPE_Learning_Orientation_Tokens_for_Efficient_Head_Pose_Estimation_via_CVPR_2023
|
Abstract
Head pose estimation (HPE) has been widely used in
the fields of human machine interaction, self-driving, and
attention estimation. However, existing methods cannot
deal with extreme head pose randomness and serious oc-
clusions. To address these challenges, we identify three
cues from head images, namely, neighborhood similari-
ties, significant facial changes, and critical minority rela-
tionships. To leverage the observed findings, we propose
a novel critical minority relationship-aware method based
on the Transformer architecture in which the facial part
relationships can be learned. Specifically, we design sev-
eral orientation tokens to explicitly encode the basic ori-
entation regions. Meanwhile, a novel token guide multi-
loss function is designed to guide the orientation tokens as
they learn the desired regional similarities and relation-
ships. We evaluate the proposed method on three chal-
lenging benchmark HPE datasets. Experiments show that
our method achieves better performance compared with
state-of-the-art methods. Our code is publicly available at
https://github.com/zc2023/TokenHPE .
|
1. Introduction
Head pose estimation (HPE) is a popular research area in
computer vision and has been widely applied to driver assis-
tance [29], human–computer interaction [36], virtual reality
[22], and attention detection [5]. In recent years, HPE has
been actively studied and the accuracy has been consider-
ably improved in terms of utilizing extra facial landmark in-
formation [2,20], extra RGB-depth information [13,26–28],
extra temporal information [16], stage-wise regression strat-
egy [42], multi-task learning [1, 38], and alternative param-
eterization of orientation [3, 15, 18, 19, 24].
*: Corresponding author (Hai Liu, Youfu Li)
Figure 1. Left part: Missing facial parts and our finding on critical minority relationships. Although some of the facial parts are missing or occluded (marked with a red rectangle), the pose orientation still can be inferred from the existing critical minority facial parts (marked with a green circle). Right part: different head orientation images in a panoramic overview. The rectangular boxes highlight several significant facial changes, such as i) appearance of the eye on one side, ii) appearance of the nostril, and iii) overlapping of the nose and mouth. The circled areas show some regions in which the facial part features are similar.
Currently, many methods focus on the representation of the head pose orientation and have achieved impressive performance, but the
intrinsic facial part relationships are usually neglected. A
possible reason is that these relationships are difficult to
learn by existing CNN architectures. However, in some
challenging scenarios, as shown in the left part of Fig. 1,
many remarkable facial parts are missing. Consequently,
the remaining facial parts and their geometric relationships
must be leveraged to achieve robust and high-accuracy pre-
diction. Therefore, how to leverage the facial part relation-
ships for high-accuracy HPE is an attractive research topic.
To begin with, we identify three implicit facial part relationships in head poses. First, a local similarity in specific spatial orientations exists. Inside the circled region in
Fig. 1, the facial appearances are similar. Second, sev-
eral significant facial part changes are observed in specific
orientations. For example, in Fig. 1, the two circled fa-
cial regions can be distinguished by a significant facial part
change, which is the appearance of the right eye. Third,
critical minority relationships of facial parts exist, and they
can determine the orientation of a head pose despite pos-
sible occlusions. As Fig. 1 shows, if a person’s mouth is
occluded, the head pose can be determined by the geomet-
ric spatial relationships of the eyes, nose, and the outline of
the face. In these scenarios, the remaining minor facial parts
and their relationships are crucial for high-accuracy HPE.
Given the aforementioned facial part relationships, the
question is how to design a model that can utilize this
heuristic knowledge. The traditional CNN architecture can-
not easily learn these relationships. In contrast, the Trans-
former architecture can effectively address this drawback of
CNN. Recently, Vision Transformer (ViT) [11] emerged as
a new choice for various computer vision tasks. The Trans-
former architecture is known for its extraordinary ability to
learn long-distance, high-level relationships between image
patches. Therefore, using Transformer to learn the rela-
tionships among critical minority facial parts is reasonable.
Moreover, the basic orientation regions can be well repre-
sented by learnable tokens in Transformer.
Inspired by the three findings and Transformer’s prop-
erties, we propose TokenHPE, a method that can discover
and leverage facial part relationships and regional similari-
ties via the Transformer architecture. The proposed method
can discover facial part geometric relationships via self-
attention among visual tokens, and the orientation tokens
can encode the characteristics of the basic orientation re-
gions. The latent relationships between visual and orienta-
tion tokens can be learned from large HPE datasets. Then,
the learned information is encoded into the orientation to-
kens, which can be visualized by vector similarities. In
addition, a special token guide multi-loss function is con-
structed to help the orientation token learn the general in-
formation. Our main contributions can be summarized as
follows:
(1) Three findings are derived on head images, including
facial part relationships and neighborhood orientation simi-
larities. Furthermore, to leverage our findings and cope with
challenging scenarios, a novel token-learning model based
on Transformer for HPE is presented for the first time.
(2) We find that the head pose panoramic overview can
be partitioned into several basic regions according to the
orientation characteristics. The same number of learnable
orientation tokens are utilized to encode this general infor-
mation. Moreover, a novel token guide multi-loss function
is designed to train the model.
(3) We conduct experiments on three widely used HPE
datasets. TokenHPE achieves state-of-the-art performance
with a novel token-learning concept compared with its existing CNN-based counterparts. Abundant visualizations
are also provided to illustrate the effectiveness of the pro-
posed orientation tokens.
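To make the token-learning idea above concrete, the following is a minimal PyTorch sketch of how learnable orientation tokens could be appended to ViT patch tokens and refined by a standard Transformer encoder. The embedding width, the number of orientation regions, and the regression head below are illustrative assumptions, not the paper's exact configuration, and the token guide multi-loss is omitted.

```python
import torch
import torch.nn as nn

class OrientationTokenEncoder(nn.Module):
    """Sketch: learnable orientation tokens, one per assumed basic orientation
    region, attend jointly with visual patch tokens in a Transformer encoder."""

    def __init__(self, dim=768, num_regions=9, depth=6, heads=8):
        super().__init__()
        self.orient_tokens = nn.Parameter(torch.randn(1, num_regions, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(num_regions * dim, 3)  # e.g., yaw, pitch, roll

    def forward(self, patch_tokens):
        # patch_tokens: (B, N, dim) visual tokens from a ViT backbone.
        b = patch_tokens.shape[0]
        tokens = torch.cat([self.orient_tokens.expand(b, -1, -1), patch_tokens], dim=1)
        tokens = self.encoder(tokens)
        orient = tokens[:, : self.orient_tokens.shape[1]]  # refined orientation tokens
        return self.head(orient.flatten(1))                # head-pose prediction
```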
|
Yao_Hi-LASSIE_High-Fidelity_Articulated_Shape_and_Skeleton_Discovery_From_Sparse_Image_CVPR_2023
|
Abstract
Automatically estimating 3D skeleton, shape, camera
viewpoints, and part articulation from sparse in-the-wild
image ensembles is a severely under-constrained and chal-
lenging problem. Most prior methods rely on large-scale
image datasets, dense temporal correspondence, or human
annotations like camera pose, 2D keypoints, and shape tem-
plates. We propose Hi-LASSIE, which performs 3D articu-
lated reconstruction from only 20-30 online images in the
wild without any user-defined shape or skeleton templates.
We follow the recent work of LASSIE that tackles a similar
problem setting and make two significant advances. First,
instead of relying on a manually annotated 3D skeleton,
we automatically estimate a class-specific skeleton from the
selected reference image. Second, we improve the shape
reconstructions with novel instance-specific optimization
strategies that allow reconstructions to faithfully fit each
instance while preserving the class-specific priors learned
across all images.
*Work done as a student researcher at Google.
Experiments on in-the-wild image en-
sembles show that Hi-LASSIE obtains higher fidelity state-
of-the-art 3D reconstructions despite requiring minimum
user input. Project page: chhankyao.github.io/
hi-lassie/
|
1. Introduction
3D assets of articulated animals enable numerous appli-
cations in games, movies, AR/VR, etc. However, building
high-fidelity 3D models of articulated shapes like animal
bodies is labor intensive either via manual creation or 3D
scanning. Recent advances in deep learning and 3D rep-
resentations have significantly improved the quality of 3D
reconstruction from images. Much of the success depends
on the availability of either rich 3D annotations or multi-
view captures, both of which are not always available in a
real-world scenario. A more practical and scalable alter-
native is automatic 3D reconstruction from online images
as it is straightforward to obtain image ensembles of any
animal category (e.g., image search results).
Figure 2. User inputs across different techniques for articulated animal reconstruction. Contrary to prior methods that leverage detailed 3D shapes or skeletons, Hi-LASSIE only requires the user to select a reference image where most animal body parts are visible.
In this work, we tackle a practical problem setting introduced in a recent
work, LASSIE [38], where the aim is to automatically esti-
mate articulated 3D shapes with only a few (20-30) in-the-
wild images of an animal species, without any image-level
2D or 3D annotations.
This problem is highly under-constrained and challeng-
ing due to a multitude of variations within image ensembles.
In-the-wild images usually have diverse backgrounds, light-
ing, and camera viewpoints. Moreover, different animal in-
stances can have distinct 2D appearances due to pose ar-
ticulations, shape variations, and surface texture variations
(skin colors, patterns, and lighting). As shown in Fig. 2,
early approaches in this space try to simplify the problem
with some user-defined 3D templates, hurting the general-
ization of those techniques to classes where such templates
are not always readily available. In addition, most of these
methods (except LASSIE [38]) assume either large-scale
training images of each animal species [16, 17, 43] or per-
image 2D keypoint annotations [44], which also limits their
scalability.
In this work, we propose a more practical technique that
does not require any 3D shape or skeleton templates. In-
stead, as illustrated in Fig. 2, the user simply has to se-
lect a reference image in the ensemble where all the ani-
mal parts are visible. We achieve this by providing two key
technical advances over LASSIE: 1) 3D skeleton discovery
and 2) instance-specific optimization strategies. Our over-
all framework, named Hi-LASSIE, can produce Higher-
fidelity articulated shapes than LASSIE [38] while requir-
ing minimal human input. Our key insight is to exploit
the 2D part-level correspondences for 3D skeleton discov-
ery. Recent works [1, 32] observe that the deep features ex-
tracted from a self-supervised vision transformer (ViT) [11]
like DINO-ViT [7] can provide good co-part segmentation
across images. We further exploit such features to rea-
son about part visibility and their symmetry. At a high
level, we first obtain a 2D skeleton using the animal silhou-
ette and part clusters [1] obtained from DINO features [7].
We then uplift this 2D skeleton into 3D using symmet-
ric part information that is present in the deep DINO fea-
tures. Fig. 1 shows the skeleton for zebra images discovered by Hi-LASSIE. Similar to LASSIE [38], we leverage
3D part priors (learned from and shared across instances)
and the discovered 3D skeleton to regularize the articu-
lated shape learning. Furthermore, we design three novel
modules to increase the quality of output shapes: 1) High-
resolution optimization by zooming in on individual parts,
2) Surface feature MLPs to densely supervise the neural
part surface learning, and 3) Frequency-based decomposi-
tion of part surfaces for shared and instance-specific com-
ponents. Note that Hi-LASSIE can generalize to diverse
animal species easily as it does not require any image anno-
tations or category-specific templates.
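As a rough illustration of the DINO-based co-part step referenced above, the sketch below clusters DINO-ViT patch features into per-image part maps. The model variant, feature layer, and clustering settings are assumptions; the actual Hi-LASSIE pipeline (silhouettes, symmetry reasoning, and 2D-to-3D skeleton uplifting) is considerably more involved.

```python
import torch
from sklearn.cluster import KMeans

def copart_clusters(images, num_parts=8):
    """Cluster DINO-ViT patch features into coarse part maps (a sketch).

    images: (B, 3, H, W) ImageNet-normalized RGB crops, with H and W
            divisible by the ViT patch size (8 for dino_vits8) and H == W.
    Returns integer part maps of shape (B, H // 8, W // 8).
    """
    dino = torch.hub.load("facebookresearch/dino:main", "dino_vits8").eval()
    with torch.no_grad():
        # Patch tokens from the last block; drop the CLS token.
        feats = dino.get_intermediate_layers(images, n=1)[0][:, 1:]  # (B, N, C)
    b, n, c = feats.shape
    labels = KMeans(n_clusters=num_parts, n_init=10).fit_predict(
        feats.reshape(b * n, c).cpu().numpy())
    side = int(n ** 0.5)  # assumes square inputs
    return torch.from_numpy(labels).view(b, side, side)
```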
We conduct extensive experiments on the Pascal-Part [8]
and LASSIE [38] image ensembles, which contain in-the-
wild images of various animal species like horse, elephant,
and penguin. Compared with LASSIE and other baselines,
we achieve higher reconstruction accuracy in terms of key-
point transfer, part transfer, and 2D IOU metrics. Quali-
tatively, Hi-LASSIE reconstructions show considerable im-
provement on 3D geometric and texture details as well as
faithfulness to input images. Finally, we demonstrate sev-
eral applications like animation and motion re-targeting en-
abled by our 3D part representation. Fig. 1 (right) shows
some Hi-LASSIE 3D reconstructions for different animal
species. The main contributions of this work are:
• To our best knowledge, Hi-LASSIE is the first approach
to discover 3D skeletons of articulated animal bodies from
in-the-wild image ensembles without using any image-level
annotations. We show that the discovered 3D skeleton can
faithfully fit all instances in the same class and effectively
regularize the 3D shape optimization.
• Hi-LASSIE includes several novel optimization strategies
that make the output shapes richer in 3D details and more
faithful to each image instance than prior methods.
• Extensive results on multiple animal classes and datasets
demonstrate the state-of-the-art performance of Hi-LASSIE
while requiring less user input than prior works.
|
Zhang_Decoupling_MaxLogit_for_Out-of-Distribution_Detection_CVPR_2023
|
Abstract
In machine learning, it is often observed that standard
training outputs anomalously high confidence for both in-
distribution (ID) and out-of-distribution (OOD) data. Thus,
the ability to detect OOD samples is critical to the model de-
ployment. An essential step for OOD detection is post-hoc
scoring. MaxLogit is one of the simplest scoring functions
which uses the maximum logits as OOD score. To provide a
new viewpoint to study the logit-based scoring function, we
reformulate the logit into cosine similarity and logit norm
and propose to use MaxCosine and MaxNorm. We empir-
ically find that MaxCosine is a core factor in the effectiveness of MaxLogit, and that the performance of MaxLogit is encumbered by MaxNorm. To tackle this problem, we propose
the Decoupling MaxLogit (DML) for flexibility to balance
MaxCosine and MaxNorm. To further embody the core of
our method, we extend DML to DML+ based on the new
insights that fewer hard samples and compact feature space
are the key components to make logit-based methods effec-
tive. We demonstrate the effectiveness of our logit-based
OOD detection methods on CIFAR-10, CIFAR-100 and Im-
ageNet and establish state-of-the-art performance.
|
1. Introduction
In real-world applications, the closed-world assumption, namely that all the classes encountered in the test phase are available in the training phase, does not always hold. Out-of-distribution (OOD) detection [11] is a natural and challenging setting in which there is an open space containing outliers that do not belong to any training class. When the model is deployed in
practice, OOD data often come from the open world [17].
Thus, it is crucial for a trustworthy model to not only pro-
duce accurate predictions on in-distribution (ID) data, but
also distinguish the OOD data and reject them. However,
the machine-learning model easily produces over-confident
*Equal contribution, co-first author; also with Nat. Key Lab of MSIIPT. Correspondence to Xiang Xiang ([email protected])
Figure 1. AUROC and FPR95 (in percentage) of previous OOD detection methods on ImageNet and CIFAR-100. (a) AUROC (higher is better) of our methods, DML and DML+ (orange pentagons), and other methods, MSP, MaxLogit, Energy, GradNorm, ODIN, and ViM (blue rectangles), plotted as CIFAR-100 mean AUROC against ImageNet mean AUROC. (b) FPR95 (lower is better) of our methods and SOTA methods w/ (LogitNorm [37]) and w/o (ViM [36]) training on CIFAR-100, evaluated on Textures, SVHN, LSUN-C, LSUN-R, iSUN, and Places365.
wrong predictions on OOD data [25]. For instance, a model
may wrongly detect the zebra as a horse with high confi-
dence when the zebra is not in the training set.
A key of the OOD detection algorithm is a scoring func-
tion that maps the input to the OOD score, indicating to
what extent the sample is an OOD sample. Various scor-
ing functions have been proposed to seek the properties
that better distinguish OOD samples. The OOD score is
calculated mainly from the output of the model, includ-
ing features [20, 30, 36] and logits [10, 11, 23]. For example,
MSP [11] uses the maximum Softmax probabilities, and
MaxLogit [10] uses the maximum logits as the OOD score.
MaxLogit and MSP are two of the simplest scoring functions
that do not require extra computational costs. In contrast,
other methods require extra storage [30], or extra compu-
tational cost [14]. However, the logit-based methods MSP
and MaxLogit are not state-of-the-art (SOTA). Intuitively,
the simple logit-based method could achieve comparable
performance to other complex scoring methods, because logits contain high-level semantic information. We hypothesize that some underlying reasons limit the performance.
To revitalize the simple logit-based method, we start the
work by analyzing the reasons which cause the performance
gap between MSP and MaxLogit. The gap may be due to
the softmax operation normalizing out much feature norm
information of the logits. To delve into the effect of feature
norm, we divide the logit into two parts: (1) the cosine sim-
ilarity between the features and classifier; (2) the feature
norm. We discard the classifier weight norm because the
norm is identical after the model coverage [27]. We use the
top value of the two as the OOD score, named MaxCosine
and MaxNorm. Therefore, MaxLogit is a coupled form.
We find that MaxCosine outperforms MaxLogit with
the same model and MaxNorm performs much worse
than MaxLogit. Thus, MaxLogit (1) is encumbered by
MaxNorm, (2) suppresses the effectiveness of MaxCo-
sine, and (3) restricts the flexibility to balance MaxCosine
and MaxNorm. The three problems are the bottleneck of
MaxLogit. To tackle the problem, we propose Decoupling
MaxLogit (DML) for flexibility to balance MaxCosine and
MaxNorm. DML decouples the MaxCosine from the equal
coefficient with MaxNorm by replacing it with a constant.
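The decomposition and decoupling described above fit in a few lines. The sketch below is a hedged PyTorch illustration: the constant `lam` and the exact placement of the weighting are assumptions for flexibility, not the paper's precise formulation.

```python
import torch
import torch.nn.functional as F

def ood_scores(features, classifier_weight, lam=1.0):
    """Compute MaxCosine, MaxNorm, MaxLogit, and a decoupled score (sketch).

    features: (N, D) penultimate-layer features.
    classifier_weight: (C, D) linear-classifier weights.
    lam: assumed constant balancing the two decoupled terms.
    """
    # Cosine similarity between each feature and each class weight.
    cos = F.normalize(features, dim=1) @ F.normalize(classifier_weight, dim=1).t()  # (N, C)
    feat_norm = features.norm(dim=1)                                                 # (N,)

    max_cosine = cos.max(dim=1).values                       # MaxCosine
    max_norm = feat_norm                                      # MaxNorm
    max_logit = (feat_norm.unsqueeze(1) * cos).max(dim=1).values  # MaxLogit (coupled form)
    dml = lam * max_cosine + max_norm                         # decoupled combination (assumed form)
    return max_cosine, max_norm, max_logit, dml
```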
The decoupling method solves the second and third
problems but still leaves the first problem unsolved. Al-
though MaxNorm helps DML to outperform MaxCosine,
the improvement is marginal due to the low performance
of MaxNorm. Therefore, we study the role of model train-
ing and show that a simple modification to standard train-
ing could significantly boost MaxNorm and MaxCosine for
OOD detection. Specifically, a feature space with fewer
hard samples benefits MaxCosine and a compact feature
space benefits MaxNorm. Also, the normalized feature and
classifier are the key to the success of the logit-based meth-
ods. These findings are not discussed in prior works. We
extend DML to DML+ based on the above new insights to
further boost the DML performance as shown in Fig. 1.
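For reference, a cosine classifier of the kind mentioned here simply normalizes both the features and the class weights before the inner product; the scale value below is an assumed hyperparameter, not the paper's setting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Sketch of replacing a standard linear head with a cosine classifier."""

    def __init__(self, in_dim, num_classes, scale=30.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, in_dim))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
        self.scale = scale

    def forward(self, features):
        # Logits are scaled cosine similarities between normalized features and weights.
        return self.scale * F.linear(F.normalize(features, dim=1),
                                     F.normalize(self.weight, dim=1))
```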
We summarize our contributions as follows.
• To overcome the limitations of MaxLogit, we propose
a post-hoc scoring method DML, which decouples
MaxLogit for flexibility to balance MaxCosine and
MaxNorm. DML outperforms MaxLogit and achieves
comparable performance with SOTA methods.
• We offer new insights into the key components to make
MaxCosine, MaxNorm and DML effective, including
replacing the standard linear classifier with a cosine
classifier and different training losses. The findings are
supported by empirical results and theoretical analysis.
We also prove that the findings could greatly boost the
performance of existing OOD scoring methods.
• Based on the insights, we extend DML to DML+
which changes the standard training. Significant im-
provements on CIFAR and ImageNet have shown its
effectiveness.
|
Yang_BiCro_Noisy_Correspondence_Rectification_for_Multi-Modality_Data_via_Bi-Directional_Cross-Modal_CVPR_2023
|
Abstract
As one of the most fundamental techniques in multi-
modal learning, cross-modal matching aims to project
various sensory modalities into a shared feature space. To
achieve this, massive and correctly aligned data pairs are
required for model training. However, unlike unimodal
datasets, multimodal datasets are much harder to
collect and annotate precisely. As an alternative, the
co-occurred data pairs (e.g., image-text pairs) collected
from the Internet have been widely exploited in the area.
Unfortunately, the cheaply collected dataset unavoidably
contains many mismatched data pairs, which have been
proven to be harmful to the model’s performance. To
address this, we propose a general framework called BiCro
(Bidirectional Cross-modal similarity consistency), which
can be easily integrated into existing cross-modal matching
models and improve their robustness against noisy data.
Specifically, BiCro aims to estimate soft labels for noisy
data pairs to reflect their true correspondence degree.
The basic idea of BiCro is motivated by that – taking
image-text matching as an example – similar images should
have similar textual descriptions and vice versa. Then the
consistency of these two similarities can be recast as the
estimated soft labels to train the matching model. The ex-
periments on three popular cross-modal matching datasets
demonstrate that our method significantly improves the
noise-robustness of various matching models, and surpasses the state-of-the-art by a clear margin. The code is available at https://github.com/xu5zhao/BiCro .
|
1. Introduction
As a key step towards general intelligence, multimodal
learning aims to endow an agent with the ability to extract
and understand information from various sensory modali-
ties. One of the most fundamental techniques in multimodal
*Corresponding Author([email protected]).
Figure 1. Illustration of the Bidirectional Cross-modal similarity consistency (BiCro). In an image-text shared feature space, assume (I1, T1) and (I2, T2) are two clean positive data pairs. The image ˜I is very close to the image I1, but their corresponding texts ˜T and T1 are far away from each other. Similarly, the text ˜T is very close to the text T2 while their corresponding images ˜I and I2 are irrelevant. Therefore the data pair (˜I, ˜T) is possibly noisy (mismatched).
learning is cross-modal matching. Cross-modal matching
tries to project different modalities into a shared feature
space, and it has been successfully applied to many areas,
e.g., image/video captioning [1, 21, 23], cross-modal hash-
ing [14, 18], and visual question answering [15, 58].
Like many other tasks in deep learning, cross-modal
matching also requires massive and high-quality labeled
training data. But even worse, the multimodal datasets ( e.g.,
in image-text matching) are significantly harder to manually annotate than unimodal datasets (e.g., in image classification). This is because the textual description of an image is very subjective, while the categorical label of an image is easier to determine. Alternatively, many works
collect the co-occurred image-text pairs from the Internet to
train the cross-modal matching model, e.g., CLIP [32] was
trained on over 400 million image-text pairs collected from the
Internet. Furthermore, one of the largest public datasets in
cross-modal matching, Conceptual Caption [36], was also
harvested automatically from the Internet. Such cheap data
collection methods inevitably introduce noise into the col-
lected data, e.g., the Conceptual Caption dataset [36] con-
tains 3%∼20% mismatched image-text pairs. The noisy
data pairs undoubtedly hurt the generalization of the cross-
modal matching models due to the memorization effect of
deep neural networks [55].
Different from the noisy labels in classification, which
refer to categorical annotation errors, the noisy labels in
cross-modal matching refer to alignment errors in paired
data. Therefore most existing noise-robust learning meth-
ods developed for classification cannot be directly applied
to the cross-modal matching problem. Generally, all the
data pairs in a noisily-collected cross-modal dataset can
be categorized into the following three types based on
their noise level: well-matched, weakly-matched, and mismatched. The well-matched data pairs can be treated as clean examples, and the weakly-matched and mismatched
data pairs are noisy examples since they are all labeled as
positive (y = 1) in the dataset. We illustrate some noisy
data pairs, including both weakly-matched and mismatched
pairs in Fig. 2. The key challenge in noise-robust cross-
modal matching is how to estimate accurate soft correspon-
dence labels for those noisy data pairs. The estimated soft
label is expected to be able to depict the true correspon-
dence degree between different modalities in a data pair.
In this work, we propose a general framework for
soft correspondence label estimation given only noisily-
collected data pairs. Our framework is based on the Bidirec-
tional Cross-modal similarity consistency (BiCro) , which
inherently exists in paired data. An intuitive explanation
of the BiCro is – taking image-text matching task as an
example – similar images should have similar textual de-
scriptions and vice versa. In other words, if two images
are highly similar but their corresponding texts are irrele-
vant, there possibly exists image-text mismatch issue (see
Fig. 1). Specifically, we first select a set of clean positive
data pairs as anchor points out of the given noisy data. This
step is achieved by modeling the per-sample loss distribu-
tion of the dataset using a Beta-Mixture-Model (BMM) and
selecting those samples with a high probability of being
clean. Then, with the help of the selected anchor points,
we compute the Bidirectional Cross-modal similarity con-
sistency (BiCro) scores for every noisy data pair as their
soft correspondence labels. Finally, the estimated soft cor-
respondence labels are recast as soft margins to train the
cross-modal matching model in a co-teaching manner [11]
to avoid the self-sample-selection error accumulation prob-
lem.
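To illustrate the bidirectional consistency idea (not the paper's exact formula), the sketch below scores one noisy pair by checking whether its image is close to an anchor image whose text is also close to its text, and symmetrically in the other direction. The nearest-anchor retrieval, the averaging, and the clamping are all assumptions.

```python
import torch
import torch.nn.functional as F

def bicro_soft_label(img_emb, txt_emb, anchor_img, anchor_txt, k=1):
    """Hypothetical bidirectional similarity-consistency score in [0, 1].

    img_emb, txt_emb: (D,) embeddings of one noisy image-text pair.
    anchor_img, anchor_txt: (M, D) embeddings of clean anchor pairs.
    """
    # Image-to-image: how similar is this image to its nearest anchor images?
    sim_i = F.cosine_similarity(img_emb.unsqueeze(0), anchor_img)            # (M,)
    top_i = sim_i.topk(k).indices
    # ...and how similar is this text to the texts of those anchors?
    cons_i2t = F.cosine_similarity(txt_emb.unsqueeze(0), anchor_txt[top_i]).mean()

    # Text-to-text direction, symmetrically.
    sim_t = F.cosine_similarity(txt_emb.unsqueeze(0), anchor_txt)            # (M,)
    top_t = sim_t.topk(k).indices
    cons_t2i = F.cosine_similarity(img_emb.unsqueeze(0), anchor_img[top_t]).mean()

    # Consistency of the two directions, squashed to [0, 1] as a soft label.
    return torch.clamp((cons_i2t + cons_t2i) / 2, 0.0, 1.0)
```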
Before delving into details, we summarize our main con-
tributions below:
• This paper tackles a widely existing but rarely explored
problem in cross-modal matching, i.e., the noisy correspondence issue. We identify and explore the key
challenge of noise-robust cross-modal learning: how
to rectify the binary noisy correspondence labels into
more accurate soft correspondence labels for noisy
data pairs.
• We propose a general framework called Bidirectional
Cross-modal similarity consistency (BiCro) for soft
correspondence label estimation given only noisy data
pairs. The BiCro is motivated by a rational assumption
that – taking the image-text matching as an example
–similar images should have similar textual descrip-
tions and vice versa. The BiCro can be easily adapted
to any cross-modal matching models to improve their
noise robustness.
• To compute the BiCro score for noisy data pairs, we
propose to first select some clean positive data pairs
as anchor points out of the noisy dataset by mod-
eling the per-sample loss distribution of the dataset
through a Beta-Mixture-Model (BMM) . Then a set
of examples with a high probability of being clean can be se-
lected. The BMM has a better modeling ability on
the skewed clean data loss distribution compared to the
Gaussian-Mixture-Model (GMM) used in the previous
method [12].
• Extensive experiments on three cross-modal match-
ing datasets, including both synthetic and real-world
noisy labels, demonstrate that our method surpasses
the state-of-the-art by a large margin. The visualiza-
tion results also indicate that our estimated soft corre-
spondence labels are highly-aligned with the correla-
tion degree between different modalities in data pairs.
|
Zhang_Efficient_RGB-T_Tracking_via_Cross-Modality_Distillation_CVPR_2023
|
Abstract
Most current RGB-T trackers adopt a two-stream structure to extract unimodal RGB and thermal features and complex fusion strategies to achieve multi-modal feature fusion, which require a huge number of parameters, thus hindering their real-life applications. On the other hand, a compact RGB-T tracker may be computationally efficient but encounters non-negligible performance degradation due to the weakening of feature representation ability. To remedy this situation, a cross-modality distillation framework is presented to bridge the performance gap between a compact tracker and a powerful tracker. Specifically, a specific-common feature distillation module is proposed to transform the modality-common information as well as the modality-specific information from a deeper two-stream network to a shallower single-stream network. In addition, a multi-path selection distillation module is proposed to instruct a simple fusion module to learn more accurate multi-modal information from a well-designed fusion mechanism by using multiple paths. We validate the effectiveness of our method with extensive experiments on three RGB-T benchmarks, on which it achieves state-of-the-art performance while consuming much fewer computational resources.
|
1. Introduction
RGB-T tracking is the task of estimating the state of
an arbitrary target in each frame of an RGB-T video sequence [35]. Due to the affordability of thermal infrared (TIR) sensors, RGB-T tracking draws more and more research interest.
As shown in Fig. 1(a), most existing RGB-T tracking models first adopt a two-stream structure to extract multi-level unimodal RGB and TIR features, respectively, and then employ elaborately designed multi-modal feature fusion modules to exploit complementary information within the
*Corresponding author.
Figure 1. Architectures of different RGB-T tracking models. (a) Two-stream structure with complex fusion modules. (b) Single-stream structure with simple fusion. (c) Our proposed method: a teacher (two-stream feature extractor with complex fusion) guides a student (shared single-stream feature extractor with simple fusion) via cross-modality distillation.
multi-modal data. Finally, they deduce the target state, often represented by a bounding box, from the fused features. Although great progress has been made, these powerful RGB-T tracking models usually require high computational costs and large model sizes to handle the information of two modalities in the stages of unimodal feature extraction and multi-modal feature fusion.
There are two straightforward solutions to tackle the complexity and efficiency issues. One is to adopt a single-stream feature extractor with fewer convolutional layers, and the other is to employ simpler multi-modal feature fusion modules, as shown in Fig. 1(b). Although such compact models can reduce computational complexity, they inevitably bring non-negligible performance degradation due to the weakening of unimodal feature representation ability and multi-modal complementary information exploration ability. For instance, a powerful RGB-T tracker [35] with a two-stream structure and complicated multi-modal feature fusion modules suffers from severe performance degradation after the above model simplification operations (84.4% precision rate vs. 78.1% precision rate on the RGBT234 dataset [11]), as shown in Fig. 2.
Now, the research question becomes: can we shrink
the RGB-T tracker without sacrificing performance? This
paper answers this question using knowledge distillation,which allows a compact model to obtain a similar abil-ity of a complex model at little cost. We call this com-
plex but powerful model the teacher model, and call this
Figure 2. Experimental results of different RGB-T tracking structures on the RGBT234 dataset [11], plotted as precision rate (%) against model speed (FPS), with marker size indicating model size (10M/20M/40M). Student1 = Student + SCFD; Student2 = Student + SCFD + MPSD; Student3 = Student + SCFD + MPSD + HFRD. Teacher denotes the two-stream structure with complex fusion modules; Student denotes a single-stream structure with simple fusion operations. The teacher model employs ResNet50 [7] for feature extraction and the fusion modules in [35] for multi-modal feature fusion; the student model employs ResNet18 [7] for feature extraction and concatenation for multi-modal feature fusion.
compact model the student model. Although some works [15, 19, 20, 29] have made considerable progress on knowledge distillation in multi-modal tasks, they fail to conduct a deep investigation of the huge feature differences between teacher and student in the unimodal feature extraction stage as well as in the multi-modal feature fusion stage, thereby resulting in suboptimal efficiency of the knowledge transformation. For that, a novel teacher-student knowledge distillation training framework, named Cross-Modality Distillation (CMD), is proposed to elaborately guide efficient imitation in three stages: unimodal feature extraction, multi-modal feature fusion, and target state estimation, as shown in Fig. 1(c).
Specifically, in the stage of unimodal feature extraction, as pointed out by many previous works [9, 17], the shallower layers of unimodal features usually contain abundant low-level spatial details, which are usually modality-dependent. In contrast, the deeper layers of unimodal features often contain many high-level semantic cues, which tend to be strongly modality-consistent. The student model uses a compact single-stream network to extract both RGB and TIR features, and thus not only lacks the ability to extract modality-specific information in the shallower layers, but also insufficiently explores the modality-common information in the deeper layers. These observations inspire us to design a Specific-common Feature Distillation (SCFD) module, which transforms the modality-specific information as well as the modality-common information from a deeper two-stream network to a shallower single-stream network.
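The following is a hedged sketch of what such specific-common feature distillation could look like, assuming the student runs RGB and TIR through the shared backbone separately and that feature shapes already match the teacher's (e.g., via adapter layers). It illustrates the idea, not the paper's exact loss.

```python
import torch.nn.functional as F

def scfd_like_loss(stu_rgb, stu_tir, tea_rgb, tea_tir):
    """Hypothetical specific-common distillation loss (names and weighting assumed).

    Each argument is a list of per-stage feature maps of matching shape:
    stu_* come from the shared single-stream student (one pass per modality),
    tea_* come from the teacher's RGB and TIR streams.
    """
    # Shallow stage: spatial details are modality-specific, so each student pass
    # imitates the corresponding teacher stream.
    specific = F.mse_loss(stu_rgb[0], tea_rgb[0]) + F.mse_loss(stu_tir[0], tea_tir[0])

    # Deep stage: semantics are modality-common, so both student passes imitate
    # the averaged deep teacher features.
    common_target = 0.5 * (tea_rgb[-1] + tea_tir[-1])
    common = F.mse_loss(stu_rgb[-1], common_target) + F.mse_loss(stu_tir[-1], common_target)

    return specific + common
```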
Second, in the stage of multi-modal feature fusion, the complex multi-modal feature fusion modules in the teacher model show great advantages in various scenarios, while the simple fusion strategies in the student model are usually effective only in some specific scenarios. It is difficult for a student model with a single simple fusion strategy to learn more effective complementary information mining capabilities from a complex teacher model due to the large feature differences. Therefore, we design a fusion module with multiple simple fusion strategies in the student model, denoted as the Multi-path Selection Distillation (MPSD) module. In the process of learning from the teacher model, the student model can adaptively combine different types of fusion features to make up for the lack of complementary information mining capabilities of a single simple fusion strategy.
Finally, in the stage of target state estimation, with the weakening of the feature representation ability of the student model, the discriminative ability of the tracker for distractors is also reduced. For that, we further present a Hard-focused Response Distillation (HFRD) module to improve the student model's discriminative ability by alleviating the problem of data imbalance between the targets and the backgrounds; it employs the response maps generated by the teacher model to instruct the student to focus on distinguishing targets from hard negative samples.
As shown in Fig. 2, each of our proposed modules continuously reduces the performance gap between the student model and the teacher model without obviously increasing the number of parameters. To sum up, our work improves an RGB-T tracker dramatically because of the following two contributions:
• A Cross-Modality Distillation (CMD) framework is presented to bridge the performance gap between a compact student model and a powerful teacher model through three stages, i.e., unimodal feature extraction, multi-modal feature fusion, and target state estimation. To the best of our knowledge, we are the first to introduce knowledge distillation for multi-modal tracking.
• Experimental results show that our proposed approach helps a student model achieve state-of-the-art performance on the challenging GTOT [10], RGBT234 [11], and LasHeR [14] benchmarks, while reducing the number of parameters and computational complexity.
|
Zeng_Deep_Fair_Clustering_via_Maximizing_and_Minimizing_Mutual_Information_Theory_CVPR_2023
|
Abstract
Fair clustering aims to divide data into distinct clusters
while preventing sensitive attributes (e.g., gender, race,
RNA sequencing technique) from dominating the clustering.
Although a number of works have been conducted and
achieved huge success recently, most of them are heuristic, and a unified theory for algorithm design is still lacking.
In this work, we fill this gap by developing a mutual
information theory for deep fair clustering and accordingly
designing a novel algorithm, dubbed FCMI. In brief,
through maximizing and minimizing mutual information,
FCMI is designed to achieve four characteristics highly ex-
pected by deep fair clustering, i.e., compact, balanced, and
fair clusters, as well as informative features. Besides the
contributions to theory and algorithm, another contribution
of this work is proposing a novel fair clustering metric built
upon information theory as well. Unlike existing evalua-
tion metrics, our metric measures the clustering quality
and fairness as a whole rather than in a separate manner. To
verify the effectiveness of the proposed FCMI, we conduct
experiments on six benchmarks including a single-cell
RNA-seq atlas compared with 11 state-of-the-art methods
in terms of five metrics. The code could be accessed from
https://pengxi.me .
|
1. Introduction
Clustering plays an important role in machine learning
[19, 27–29, 34, 42, 43], which could partition data into dif-
ferent clusters without any label information. It has been
widely used in many real-world applications such as multi-
view learning [35,39], image segmentation [24], and bioin-
formatics [20]. In practice, however, the data might be con-
founded with sensitive attributes ( e.g., gender, race, etc.,
also termed as group information ) that probably over-
∗Equal contribution
†Corresponding author
Figure 1. Illustration of our basic idea using information diagrams, where X, G, and C denote the inputs, sensitive attributes, and clustering assignments, respectively. To cluster data, we expect the overlap between non-group information and cluster information (i.e., the conditional mutual information I(X;C|G)) to be maximized. Meanwhile, to prevent sensitive attributes from dominating the clustering results, we expect the overlap between group information and cluster information (i.e., the mutual information I(G;C)) to be minimized.
whelm the intrinsic semantic of samples (also termed as
cluster information ). Taking single-cell RNA clustering
as a showcase, standard methods would partition data based
on sequencing techniques (group information) instead of
intrinsic cell types (cluster information), since cells se-
quenced by different techniques would result in different
expression levels [36] and most clustering methods cannot
distinguish these two kinds of information. The case is sim-
ilar in many automatic learning systems where the clus-
tering results are biased toward sensitive attributes, which
would interfere with the decision-making [9, 12, 18]. No-
tably, even though these sensitive attributes are known in
prior, it is daunting to alleviate or even eliminate their in-
fluence, e.g., removing the “gender” information from the
photos of users.
As a feasible solution, fair clustering aims to hide sen-
sitive attributes from the clustering results. Commonly, a
clustering result is considered fair when samples of different
sensitive attributes are uniformly distributed in clusters so
that the group information is protected. However, it would
lead to a trivial solution if the fairness is over-emphasized,
i.e., all samples are assigned to the same cluster. Hence,
in addition to fairness, balance and compactness are also
highly expected in fair clustering. Specifically, a balanced
clustering could avoid the aforementioned trivial solution
brought by over-emphasized fairness, and the compactness
refers to a clear cluster boundary.
To achieve fair clustering, many studies have been con-
ducted to explore how to incorporate fairness into cluster-
ing [3,4,8,22,23,38,44,46]. Their main differences lie in i)
the stage of fairness learning, and ii)the depth of the model.
In brief, [3, 8] incorporate the fairness in a pre-processing
fashion by packing data points into so-called fairlets with
balanced demographic groups and then partitioning them
with classic clustering algorithms. [22,46] are in-processing
methods that formulate fairness as a constraint for cluster-
ing. As a representative of post-processing methods, [4]
first performs classic clustering and then transforms the
clustering result into a fair one by linear programming. Dif-
ferent from the above shallow models, [23, 38, 44] propose
performing fair clustering in the latent space learned by
different deep neural networks to boost performance. Al-
though promising results have been achieved by these meth-
ods, almost all of them are heuristically and empirically de-
signed, with few theoretical explanations and supports. In
other words, it still lacks a unified theory to guide the algo-
rithm design.
In this work, we unify the deep fair clustering task un-
der the mutual information theory and propose a novel
theoretical-grounded deep fair clustering method accord-
ingly. As illustrated in Fig. 1, we theoretically show that
clustering could be achieved by maximizing the condi-
tional mutual information (CMI) I(X;C|G)between in-
putsXand cluster assignments Cgiven sensitive attributes
G. Meanwhile, we prove that the fairness learning could
be formulated as the minimization of the mutual informa-
tion (MI) I(G;C). In this case, sensitive attributes will be
hidden in the cluster assignments and thus fair clustering
could be achieved. To generalize our theory to deep neu-
ral networks, we additionally show a deep variant could be
developed by maximizing the mutual information I(X;X′)
between the input Xand its approximate posterior X′. No-
tably, some deep clustering methods [17,26] have been pro-
posed based on the information theory. However, they are
remarkably different from this work. To be exact, they ig-
nored the group information. As a result, the group infor-
mation will leak into cluster assignments, leading to unfair
partitions. In addition, we prove that our mutual informa-
tion objectives intrinsically correspond to four characteris-
tics highly expected in deep fair clustering, namely, com-
pact, balanced, and fair clusters, as well as informative fea-
tures.
Besides the above contributions to theory and algorithm,
this work also contributes to the performance evaluation.
To be specific, we notice that almost all existing methods
evaluate clustering quality and fairness separately. How-
ever, as fair clustering methods usually make a trade-off be-
tween these two aspects, such an evaluation protocol might
be partial and inaccurate. As an improvement, we design
a new evaluation metric based on the information theory,
which simultaneously measures the clustering quality and
fairness. The contribution of this work could be summa-
rized as follows:
• We formulate deep fair clustering as a unified mu-
tual information optimization problem. Specifically,
we theoretically show that fair clustering could be
achieved by maximizing CMI between inputs and clus-
ter assignments given sensitive attributes while mini-
mizing MI between sensitive attributes and cluster as-
signments. Moreover, the informative feature extrac-
tion could be achieved by maximizing MI between the
input and its approximate posterior (see the sketch following this list).
• Driven by our unified mutual information theory, we
propose a deep fair clustering method and carry out ex-
tensive experiments to show its superiority on six fair
clustering benchmarks, including a single-cell RNA at-
las.
• To evaluate the performance of fair clustering more
comprehensively, we design a novel metric that mea-
sures the clustering quality and fairness as a whole
from the perspective of information theory.
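A minimal sketch of the two mutual-information objectives referenced in the first contribution above is given below, using standard batch estimators: I(G;C) from the empirical joint p(g, c), and I(X;C|G) = H(C|G) - H(C|X) from soft assignments. The paper's full objective, network, and regularizers are more elaborate than this illustration.

```python
import torch

def fair_clustering_mi_terms(assign_probs, group_onehot, eps=1e-8):
    """Batch estimates of I(G;C) (to minimize) and I(X;C|G) (to maximize).

    assign_probs: (N, K) soft cluster assignments p(c|x).
    group_onehot: (N, G) one-hot sensitive-attribute indicators.
    """
    # Joint distribution p(g, c) estimated from the mini-batch.
    p_gc = group_onehot.t() @ assign_probs / assign_probs.shape[0]   # (G, K)
    p_g = p_gc.sum(dim=1, keepdim=True)                               # (G, 1)
    p_c = p_gc.sum(dim=0, keepdim=True)                               # (1, K)

    # I(G;C): overlap between group and cluster information.
    mi_gc = (p_gc * (torch.log(p_gc + eps) - torch.log(p_g * p_c + eps))).sum()

    # I(X;C|G) = H(C|G) - H(C|X), since C depends on X only.
    p_c_given_g = p_gc / (p_g + eps)
    h_c_given_g = -(p_g.squeeze(1) *
                    (p_c_given_g * torch.log(p_c_given_g + eps)).sum(dim=1)).sum()
    h_c_given_x = -(assign_probs * torch.log(assign_probs + eps)).sum(dim=1).mean()
    cmi = h_c_given_g - h_c_given_x

    return mi_gc, cmi
```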
|
Zhang_NeuralDome_A_Neural_Modeling_Pipeline_on_Multi-View_Human-Object_Interactions_CVPR_2023
|
Abstract
Humans constantly interact with objects in daily life
tasks. Capturing such processes and subsequently conduct-
ing visual inferences from a fixed viewpoint suffers from
occlusions, shape and texture ambiguities, motions, etc.
To mitigate the problem, it is essential to build a train-
ing dataset that captures free-viewpoint interactions. We
construct a dense multi-view dome to acquire a complex
human object interaction dataset, named HODome, that
consists of ∼71M frames on 10 subjects interacting with
23 objects.
* These authors contributed equally.
†Corresponding author.
To process the HODome dataset, we develop NeuralDome, a layer-wise neural processing pipeline tailored for multi-view video inputs to conduct accurate track-
ing, geometry reconstruction and free-view rendering, for
both human subjects and objects. Extensive experiments
on the HODome dataset demonstrate the effectiveness of
NeuralDome on a variety of inference, modeling, and ren-
dering tasks. Both the dataset and the NeuralDome tools
will be disseminated to the community for further devel-
opment, which can be found at https://juzezhang.
github.io/NeuralDome
|
1. Introduction
A key task of computer vision is to understand how hu-
mans interact with the surrounding world, by faithfully cap-
turing and subsequently reproducing the process via mod-
eling and rendering. Successful solutions benefit broad ap-
plications ranging from sports training to vocational educa-
tion, digital entertainment to tele-medicine.
Early solutions [11, 12, 51] that reconstruct dynamic
meshes with per-frame texture maps are time-consuming
and vulnerable to occlusions or lack of textures. Recent
advances in neural rendering [37, 62, 71] bring huge po-
tential for human-centric modeling. Most notably, the
variants of Neural Radiance Field (NeRF) [37] achieve
compelling novel view synthesis, which can enable real-
time rendering performance [38, 59, 67] even for dynamic
scenes [45, 63, 77], and can be extended to the generative
setting without per-scene training [23, 69, 80]. However,
less attention is paid to the rich and diverse interactions be-
tween humans and objects, mainly due to the severe lack
of dense-view human-object datasets. Actually, existing
datasets of human-object interactions are mostly based on
optical markers [61] or sparse RGB/RGBD sensors [6, 26],
without sufficient appearance supervision for neural render-
ing tasks. As a result, the literature on neural human-object
rendering [21,57] is surprisingly sparse, let alone further ex-
ploring the real-time or generative directions. Besides, ex-
isting neural techniques [52,77] suffer from tedious training
procedures due to the human-object occlusion, and hence
infeasible for building a large-scale dataset. In a nutshell,
despite the recent tremendous thriving of neural rendering,
the lack of both a rich dataset and an efficient reconstruction
scheme constitute barriers in human-object modeling.
In this paper, we present NeuralDome , a neural pipeline
that takes multi-view dome capture as inputs and conducts
accurate 3D modeling and photo-realistic rendering of com-
plex human-object interaction. As shown in Fig. 1, Neural-
Dome exploits layer-wise neural modeling to produce rich
and multi-modality outputs including the geometry of dy-
namic human, object shapes and tracked poses, as well as a
free-view rendering of the sequence.
Specifically, we first capture a novel human-object dome
(HODome) dataset that consists of 274 human-object inter-
acting sequences, covering 23 diverse 3D objects and 10
human subjects (5 males and 5 females) in various appar-
els. We record multi-view video sequences of natural inter-
actions between the human subjects and the objects where
each sequence is about 60s in length using a dome with 76
RGB cameras, resulting in 71 million video frames. We
also provide an accurate pre-scanned 3D template for each
object and utilize sparse optical markers to track individual
objects throughout the sequences.
To process the HODome dataset, we adopt an extended
Neural Radiance Field (NeRF) pipeline. The brute-force
adoption of off-the-shelf neural techniques such as Instant-
NSR [79] and Instant-NGP [38], although effective, does not separate objects from human subjects and therefore lacks sufficient fidelity to model their interactions. We instead introduce a layer-wise neural processing pipeline. Specif-
ically, we first perform a joint optimization based on the
dense inputs for accurately tracking human motions using
the parametric SMPL-X model [43] as well as localizing
objects using template meshes. We then propose an ef-
ficient layer-wise neural rendering scheme where the hu-
mans and objects are formulated as a pose-embedded dy-
namic NeRF and a static NeRF with tracked 6-DoF rigid
poses, respectively. Such a layer-wise representation ef-
fectively exploits temporal information and robustly tack-
les the occluded regions under interactions. We further in-
troduce an object-aware ray sampling strategy to mitigate
artifacts during layer-wise training, as well as template-
aware geometry regularizers to enforce contact-aware de-
formations. Through weak segmentation supervision, we
obtain the decoupled and occlusion-free appearances for
both the humans and the objects at a high fidelity, amenable to training models for a variety of tasks, from monocular motion capture to free-view rendering from sparse multi-view inputs.
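As a rough sketch of the layer-wise formulation, the code below composites a pose-conditioned human field and a rigidly posed static object field along one ray. The signatures of the two fields and the density-weighted color blending are assumptions for illustration, not the paper's implementation.

```python
import torch

def composite_two_nerfs(ray_pts, ray_dirs, deltas, human_nerf, object_nerf,
                        obj_R, obj_t, pose_code):
    """Layer-wise rendering sketch for a dynamic human NeRF plus a tracked static object NeRF.

    ray_pts: (S, 3) samples along one ray; ray_dirs: (S, 3); deltas: (S,) segment lengths.
    obj_R, obj_t: tracked object-to-world rotation (3, 3) and translation (3,).
    pose_code: body-pose embedding conditioning the human field.
    """
    # Query the dynamic human field in world space, conditioned on the body pose.
    sigma_h, rgb_h = human_nerf(ray_pts, ray_dirs, pose_code)        # (S,), (S, 3)

    # Query the static object field in its canonical frame via the rigid pose.
    obj_pts = (ray_pts - obj_t) @ obj_R                               # world -> object frame
    sigma_o, rgb_o = object_nerf(obj_pts, ray_dirs @ obj_R)           # (S,), (S, 3)

    # Layer-wise composition: densities add, colors are blended by relative density.
    sigma = sigma_h + sigma_o
    rgb = (sigma_h[:, None] * rgb_h + sigma_o[:, None] * rgb_o) / (sigma[:, None] + 1e-8)

    # Standard volume rendering along the ray.
    alpha = 1.0 - torch.exp(-sigma * deltas)
    ones = torch.ones(1, dtype=alpha.dtype, device=alpha.device)
    trans = torch.cumprod(torch.cat([ones, 1.0 - alpha + 1e-8])[:-1], dim=0)
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)                        # composited ray color
```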
To summarize, our main contributions include:
• We introduce NeuralDome, a neural pipeline, to ac-
curately track humans and objects, conduct layer-wise
geometry reconstruction, and enable novel-view syn-
thesis, from multi-view HOI video inputs.
• We collect a comprehensive dataset that we call
HODome that will be disseminated to the community,
with both raw data and the output modalities including
separated geometry and rendering of individual objects
and human subjects, their tracking results, free-view
rendering results, etc.
• We demonstrate using the dataset to train networks for
a variety of visual inference tasks with complex human
object interactions.
|
Zhang_Blind_Image_Quality_Assessment_via_Vision-Language_Correspondence_A_Multitask_Learning_CVPR_2023
|
Abstract
We aim at advancing blind image quality assessment
(BIQA), which predicts the human perception of image
quality without any reference information. We develop a
general and automated multitask learning scheme for BIQA
to exploit auxiliary knowledge from other tasks, in a way
that the model parameter sharing and the loss weighting
are determined automatically. Specifically, we first describe
all candidate label combinations (from multiple tasks) us-
ing a textual template, and compute the joint probability
from the cosine similarities of the visual-textual embed-
dings. Predictions of each task can be inferred from the
joint distribution, and optimized by carefully designed loss
functions. Through comprehensive experiments on learn-
ing three tasks - BIQA, scene classification, and distor-
tion type identification, we verify that the proposed BIQA
method 1) benefits from the scene classification and dis-
tortion type identification tasks and outperforms the state-
of-the-art on multiple IQA datasets, 2) is more robust in
the group maximum differentiation competition, and 3) re-
aligns the quality annotations from different IQA datasets
more effectively. The source code is available at https:
//github.com/zwx8981/LIQE .
|
1. Introduction
As a fundamental computational vision task, blind im-
age quality assessment (BIQA) [63] aims to predict the
visual quality of a digital image with no access to the
underlying pristine-quality counterpart (if any). In the
age of deep learning, the development of BIQA can be
mainly characterized by strategies to mitigate the conflict
between the large number of trainable parameters and the
*Corresponding author.
Figure 1. (a) A “parrots” image of pristine quality. (b) A distorted version of (a) by global Gaussian blurring. (c) A distorted “cityscape” image by the same level of Gaussian blurring. Humans are able to “see through” the Gaussian blur, and recognize the two parrots in (b) with no effort, suggesting the internal representations for the task of visual recognition should be distortion-insensitive. This makes it conceptually conflicting to BIQA, which relies on distortion-sensitive representations for quality prediction.
small number of human quality annotations in the form
of mean opinion scores (MOSs). When synthetic distor-
tions ( e.g., Gaussian noise and JPEG compression) are
of primary concern, patchwise training [4], quality-aware
pre-training [32, 37, 76], and learning from noisy pseudo-
labels [2, 38, 67] are practical training tricks with less (or
no) reliance on MOSs. Here the underlying assumptions
are that 1) the pristine-quality images exist and are acces-
sible, 2) the visual distortions can be simulated efficiently
and automatically, and 3) full-reference IQA models [64]
are applicable and provide adequate quality approxima-
tions. However, all these assumptions do not hold when
it comes to realistic camera distortion ( e.g., sensor noise,
motion blurring or a combination of both). A different
set of training tricks have been explored, including trans-
fer learning [17, 76], meta learning [80], and contrastive
learning [40]. Emerging techniques that combine multiple
datasets for joint training [78] and that identify informative
samples for active fine-tuning [66] can also be seen as ways
to confront the data challenge in BIQA.
In this paper, we aim to accomplish something in the
same spirit, but from a different multitask learning perspec-
tive. We ask the key question:
Can BIQA benefit from auxiliary knowledge pro-
vided by other tasks in a multitask learning setting?
This question is of particular interest because many high-
level computer vision tasks ( e.g., object recognition [8] and
scene classification [5]), with easier-to-obtain ground-truth
labels, seem to be conceptually conflicting to BIQA. This
is clearly illustrated in Fig. 1. Humans are able to “see
through” the Gaussian blur, and recognize effortlessly the
two parrots in (b). That is, if we would like to develop
computational methods for the same purpose, they should
rely on distortion-insensitive features, and thus be robust
to such corruptions. This is also manifested by the com-
mon practice in visual recognition that treats synthetic dis-
tortions as forms of data augmentation [16]. In stark con-
trast, BIQA relies preferentially on distortion-sensitive fea-
tures to quantify the perceptual quality of images of vari-
ous semantic content. Ma et al. [37] proposed a cascaded
multitask learning scheme for BIQA, but did not investigate
the relationships between BIQA and high-level vision tasks.
Fang et al. [9] included scene classification as one task, but
required manually specifying the parameters ( i.e., computa-
tions) to share across tasks, which is difficult and bound to
be suboptimal.
Taking inspiration from recent work on vision-language
pre-training [49], we propose a general and automated mul-
titask learning scheme for BIQA, with an attempt to answer
the above-highlighted question. Here, “automated” means
that the model parameter sharing for all tasks and the loss
weighting assigned to each task are determined automati-
cally. We consider two additional tasks, scene classification
and distortion type identification, the former of which is
conceptually conflicting to BIQA, while the latter is closely
related. We first summarize the scene category, distortion
type, and quality level of an input image using a textual
template. For example, Fig. 1 (c) may be described as “a
photo of a cityscape with Gaussian blur artifacts, which is
of bad quality.” We then employ the contrastive language-
image pre-training (CLIP) [49], a joint vision and language
model trained with massive image-text pairs, to obtain the
visual and textual embeddings. The joint probability over
the three tasks can be computed from the cosine similarities
between the image embedding and all candidate textual em-
beddings1. We marginalize the joint distribution to obtain
the marginal probability for each task, and further convert
the discretized quality levels to a continuous quality score
using the marginal distribution as the weighting.
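To make the inference pipeline above concrete, the following is a minimal sketch of the joint-probability and marginalization steps in PyTorch; the tensor names, temperature, and the mapping of the five quality levels to the scores 1–5 are illustrative assumptions rather than the authors' implementation.

```python
import torch

def multitask_quality_score(image_emb, text_embs, tau=0.07,
                            num_scenes=9, num_distortions=11, num_levels=5):
    """image_emb: (D,) visual embedding; text_embs: (495, D) textual embeddings,
    one per candidate (scene, distortion, quality-level) description.
    Both are assumed L2-normalized so that dot products are cosine similarities."""
    logits = image_emb @ text_embs.t() / tau                 # cosine similarities / temperature
    joint = torch.softmax(logits, dim=-1)                    # joint probability over all combinations
    joint = joint.view(num_scenes, num_distortions, num_levels)
    p_scene = joint.sum(dim=(1, 2))                          # marginal over scene categories
    p_dist = joint.sum(dim=(0, 2))                           # marginal over distortion types
    p_quality = joint.sum(dim=(0, 1))                        # marginal over quality levels
    levels = torch.arange(1, num_levels + 1, dtype=p_quality.dtype)
    score = (p_quality * levels).sum()                       # expectation -> continuous quality score
    return score, p_scene, p_dist
```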
We supplement existing IQA datasets [7, 11, 17, 22, 27,
54] with scene category and distortion type labels, and
jointly optimize the entire method on a combination of them
by minimizing a weighted sum of three fidelity losses [58],
where the loss weightings are adjusted automatically based
on the training dynamics [31].
1We specify nine scene categories, eleven distortion types, and five
quality levels, giving rise to 495 textual descriptions/embeddings in total.
From extensive experimen-
tal results, we arrive at a positive answer to the highlighted
question: BIQA can indeed benefit from both scene clas-
sification and distortion type identification. The resulting
model, which we name Language-Image Quality Evaluator
(LIQE), not only outperforms state-of-the-art BIQA meth-
ods [13,55,76,78] in terms of prediction accuracy on multi-
ple IQA datasets, but also exhibits improved generalizabil-
ity in the group maximum differentiation (gMAD) competi-
tion [35]. In addition, we provide quantitative evidence that
LIQE better realigns MOSs from different IQA datasets in
a common perceptual scale [48].
|
Yu_MAGVIT_Masked_Generative_Video_Transformer_CVPR_2023
|
Abstract
We introduce the MAsked Generative VIdeo Transformer,
MAGVIT , to tackle various video synthesis tasks with a
single model. We introduce a 3D tokenizer to quantize a
video into spatial-temporal visual tokens and propose an
embedding method for masked video token modeling to fa-
cilitate multi-task learning. We conduct extensive experi-
ments to demonstrate the quality, efficiency, and flexibility
of MAGVIT. Our experiments show that (i) MAGVIT per-
forms favorably against state-of-the-art approaches and es-
tablishes the best-published FVD on three video generation
benchmarks, including the challenging Kinetics-600. (ii)
MAGVIT outperforms existing methods in inference time by
two orders of magnitude against diffusion models and by
60× against autoregressive models. (iii) A single MAGVIT
model supports ten diverse generation tasks and general-
izes across videos from different visual domains. The source
code and trained models will be released to the public at
https://magvit.cs.cmu.edu .
∗Work partially done during a research internship at Google Research.
|
1. Introduction
Recent years have witnessed significant advances in
image and video content creation based on learning
frameworks ranging from generative adversarial networks
(GANs) [15, 43, 48, 59, 66], diffusion models [25, 33, 35,
47, 65], to vision transformers [44, 45, 69]. Inspired by
the recent success of generative image transformers such as
DALL·E [46] and other approaches [12,18,20,73], we pro-
pose an efficient and effective video generation model by
leveraging masked token modeling and multi-task learning.
We introduce the MAsked Generative VIdeo Trans-
former ( MAGVIT ) for multi-task video generation. Specif-
ically, we build and train a single MAGVIT model to per-
form a variety of diverse video generation tasks and demon-
strate the model’s efficiency, effectiveness, and flexibility
against state-of-the-art approaches. Fig. 1(a) shows the
quality metrics of MAGVIT on a few benchmarks with ef-
ficiency comparisons in (b), and generated examples under
different task setups such as frame prediction/interpolation,
out/in-painting, and class conditional generation in (c).
MAGVIT models a video as a sequence of visual tokens
in the latent space and learns to predict masked tokens with
BERT [17]. There are two main modules in the proposed
framework. First, we design a 3D quantization model to
tokenize a video, with high fidelity, into a low-dimensional
spatial-temporal manifold [21, 71]. Second, we propose an
effective masked token modeling (MTM) scheme for multi-
task video generation. Unlike conventional MTM in image
understanding [67] or image/video synthesis [12,26,28], we
present an embedding method to model a video condition
using a multivariate mask and show its efficacy in training.
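As a rough illustration of the training objective, the snippet below sketches generic BERT-style masked token modeling on tokenized video; it does not reproduce MAGVIT's specific multivariate-mask embedding method, and the transformer interface, mask ratio, and reserved [MASK] id are assumptions.

```python
import torch
import torch.nn.functional as F

def masked_token_loss(transformer, tokens, mask_ratio=0.5, mask_id=0):
    """tokens: (B, L) integer ids produced by the 3D video tokenizer.
    `transformer` maps (B, L) ids to (B, L, V) logits over a codebook of size V;
    id 0 is assumed to be reserved for [MASK]."""
    mask = torch.rand(tokens.shape, device=tokens.device) < mask_ratio
    inputs = tokens.clone()
    inputs[mask] = mask_id                                   # hide the selected tokens
    logits = transformer(inputs)                             # (B, L, V)
    return F.cross_entropy(logits[mask], tokens[mask])       # predict only masked positions
```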
We conduct extensive experiments to demonstrate the
quality, efficiency, and flexibility of MAGVIT against state-
of-the-art approaches. Specifically, we show that MAGVIT
performs favorably on two video generation tasks across
three benchmark datasets, including UCF-101 [55], BAIR
Robot Pushing [19, 61], and Kinetics-600 [10]. For the
class-conditional generation task on UCF-101, MAGVIT
reduces state-of-the-art FVD [61] from 332 [21] to 76
(↓77%). For the frame prediction task, MAGVIT performs
best in terms of FVD on BAIR (84 [35] → 62, ↓26%) and
Kinetics-600 (16 [33] → 9.9, ↓38%).
Aside from the visual quality, MAGVIT’s video synthe-
sis is highly efficient. For instance, MAGVIT generates a
16-frame 128×128 video clip in 12 steps, which takes 0.25
seconds on a single TPUv4i [36] device. On a V100 GPU, a
base variant of MAGVIT runs at 37 frames per second (fps)
at 128×128 resolution. When compared at the same res-
olution, MAGVIT is two orders of magnitude faster than
the video diffusion model [33]. In addition, MAGVIT is 60
|
Zhang_Coaching_a_Teachable_Student_CVPR_2023
|
Abstract
We propose a novel knowledge distillation framework for
effectively teaching a sensorimotor student agent to drive
from the supervision of a privileged teacher agent. Cur-
rent distillation for sensorimotor agents methods tend to re-
sult in suboptimal learned driving behavior by the student,
which we hypothesize is due to inherent differences between
the input, modeling capacity, and optimization processes of
the two agents. We develop a novel distillation scheme that
can address these limitations and close the gap between the
sensorimotor agent and its privileged teacher. Our key in-
sight is to design a student which learns to align their input
features with the teacher’s privileged Bird’s Eye View (BEV)
space. The student then can benefit from direct supervision
by the teacher over the internal representation learning. To
scaffold the difficult sensorimotor learning task, the student
model is optimized via a student-paced coaching mecha-
nism with various auxiliary supervision. We further propose
a high-capacity imitation learned privileged agent that sur-
passes prior privileged agents in CARLA and ensures the
student learns safe driving behavior. Our proposed sen-
sorimotor agent results in a robust image-based behavior
cloning agent in CARLA, improving over current models
by over 20.6% in driving score without requiring LiDAR,
historical observations, ensemble of models, on-policy data
aggregation or reinforcement learning.
|
1. Introduction
Learning internal representations for making intricate
driving decisions from images involves a complex optimiza-
tion task [4, 43, 54]. The inherent challenge for end-to-end
training of driving agents lies in the immense complexity of
learning to map high-dimensional visual observations into
general and safe navigational decisions [13, 20, 70]. Even
given millions of training examples [4], today’s agents still
fail to reliably learn an internal representation that can be
used for robust processing of complex visual scenarios (e.g.,
dense urban settings with intricate layouts and dynamic ob-
stacles) in a safe and task-driven manner [13, 20, 70].
To ease the challenging sensorimotor training task, re-
Figure 1. Effective Knowledge Distillation for Sensorimotor
Agents. Our proposed CaT (Coaching a Teachable student) frame-
work enables highly effective knowledge transfer between a priv-
ileged teacher and a sensorimotor (i.e., image-based) student. An
alignment module learns to transform image-based features to the
teacher’s BEV feature space, where the student can then lever-
age extensive and direct supervision on its learned intermediate
representations. The student model is optimized via a coaching
mechanism with extensive auxiliary supervision in order to fur-
ther scaffold the difficult sensorimotor learning task.
cent approaches decompose the task into stages, e.g., by
first training a high-capacity privileged network with com-
plete knowledge of the world and distilling its knowl-
edge into a less capable vision-based student network [11,
13, 23, 47, 79]. However, due to the inherent differences
between the inputs and architectures of the two agents,
current methods rely on limited supervisory mechanisms
from the teacher, i.e., exclusively through the teacher’s
output [11, 13] or knowledge distillation of a single final
fully-connected layer [71, 79, 81]. Moreover, the privileged
teacher’s demonstration targets may be noisy or difficult for
the student to imitate, given the limited perspective [27]. In
this work, we sought to develop a more effective knowledge
distillation paradigm for training a sensorimotor agent to
drive. Our key insight is to enable more extensive supervi-
sion from the teacher by reducing the gap between internal
modeling and learning capabilities between the two agents.
Our proposed approach for holistic knowledge distilla-
tion is informed by human instruction, which often involves
structured supervision in addition to high-level demonstra-
tions, e.g., providing various hints to scaffold information
in a way that the student can better understand [30]. When
teaching others new and challenging skills, i.e., where a stu-
dent may not be able to replicate the demonstration such as
riding a bicycle or driving a vehicle, teachers may provide
additional supervision regarding the underlying task struc-
ture and their own internal reasoning [64]. Analogously to
our settings, the privileged teaching agent can potentially
provide richer supervision when teaching a limited capacity
sensorimotor student, i.e., through more careful and direct
guidance of the underlying representation learning.
In our work, we introduce CaT, a novel method for teach-
ing a sensorimotor student to drive using supervision from a
privileged teacher. Our key insights are threefold: 1) Effec-
tive Teacher: We propose to incorporate explicit safety-
aware cues into the BEV space that facilitate a surpris-
ingly effective teacher agent design. While prior privileged
agents struggle to learn to drive in complex urban driving
scenes, we demonstrate our learned agent to match expert-
level decision-making. 2) Teachable Student via Align-
ment: An IPM-based transformer alignment module can
facilitate direct distillation of most of the teacher’s features
and better guide the student learning process. 3) Student-
paced Coaching: A coaching mechanism for managing
difficult samples can scaffold knowledge and lead to im-
proved model optimization by better considering the ability
of the student. By holistically tackling the complex knowl-
edge distillation task with extensive teacher and auxiliary
supervision, we are able to train a state-of-the-art image-
based agent in CARLA [21]. Through ablation studies, in-
put design, and interpretable feature analysis, we also pro-
vide critical insights into current limitations in learning ro-
bust and generalized representations for driving agents.
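The alignment idea can be illustrated with a simple feature-level distillation term computed in the teacher's BEV space; the snippet below is a sketch under assumed tensor shapes and an assumed MSE objective, not the paper's exact formulation.

```python
import torch.nn.functional as F

def bev_distillation_loss(student_img_feat, teacher_bev_feat, align):
    """student_img_feat: (B, C_s, H, W) features from the image branch;
    teacher_bev_feat:  (B, C_t, Hb, Wb) features from the frozen privileged teacher;
    `align` is the student's alignment module projecting image features to BEV."""
    student_bev = align(student_img_feat)                      # (B, C_t, Hb, Wb)
    return F.mse_loss(student_bev, teacher_bev_feat.detach())  # supervise internal representations
```

In practice such a term would be combined with the behavior-cloning and auxiliary losses, with the coaching mechanism modulating how strongly difficult samples contribute.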
|
Zancato_TrainTest-Time_Adaptation_With_Retrieval_CVPR_2023
|
Abstract
We introduce Train/Test-Time Adaptation with Re-
trieval ( T3AR), a method to adapt models both at train and
test time by means of a retrieval module and a searchable
pool of external samples. Before inference, T3AR adapts a
given model to the downstream task using refined pseudo-
labels and a self-supervised contrastive objective function
whose noise distribution leverages retrieved real samples
to improve feature adaptation on the target data manifold.
The retrieval of real images is key to T3AR since it does
not rely solely on synthetic data augmentations to com-
pensate for the lack of adaptation data, as typically done
by other adaptation algorithms. Furthermore, thanks to
the retrieval module, our method gives the user or service
provider the possibility to improve model adaptation on the
downstream task by incorporating further relevant data or
to fully remove samples that may no longer be available
due to changes in user preference after deployment. First,
we show that T3AR can be used at training time to im-
prove downstream fine-grained classification over standard
fine-tuning baselines, and the fewer the adaptation data the
higher the relative improvement (up to 13%). Second, we
apply T3AR for test-time adaptation and show that exploit-
ing a pool of external images at test-time leads to more ro-
bust representations over existing methods on DomainNet-
126 and VISDA-C, especially when few adaptation data are
available (up to 8%).
|
1. Introduction
While Deep Learning models are evolving rapidly, ma-
chine learning systems used in production are updated
rarely, as each deployment requires the provider to engage
in a complex process of scaling, securitization, certification
of new model and dataset cards, bias evaluation, and re-
*Work done during an internship at AWS AI Labs.
Figure 1. Adaptation with retrieval from an external data pool.
Illustration of how T3AR exploits target data T and the external
data pool A to adapt the decision boundary after pre-training on
the source datasets S. For new test queries from the target dataset,
T3AR approximates the local data manifold around T by retriev-
ing similar unlabelled examples from A. Then, it updates the de-
cision boundary with a contrastive self-supervised objective.
gression tests. It is now common for users to adapt trained
models to their specific use cases, or to the changed con-
text as time goes by [ 16,39,60]. Such adaptation can be
performed by fine-tuning on a specific dataset S owned by
the user [ 1,18]. However, on an even finer time-scale, users
may want to adapt their models based on data they observe
at test time, bypassing the time-consuming annotation pro-
cess [ 8,35,53,57]. Test-Time Adaptation (TTA) refers to
the problem of adapting a source model to a target task T
represented by test data, for which no ground-truth labels
are given.
This trend is exacerbated by the advent of Foundation
Models [ 2,10,38,59], at least in the visual domain where
tasks can be antagonistic and models are sensitive to even
subtle changes in the data distribution. At the same time,
both users and providers typically have access to ever-
growing pools of auxiliary data, albeit often heterogeneous
(pertaining to concepts other than the one of interest at test-
time), and without annotations. Yet it seems plausible that,
somewhere within these large pools of data, there may be
information useful for the task at hand.
In this paper, we tackle the problem of performing test-
time adaptation by retrieving information from a large, un-
labeled, heterogeneous, and evolving dataset. The same
procedure could also be followed by the provider, if they
have access to auxiliary internal data and wish to adapt the
production model based on trends observed in test data. We
refer to our method as Train/Test-Time Adaptation with Re-
trieval, or T3AR.
T3AR, if solved, would enable a number of real-world
tasks that have thus far frustrated practitioners. For in-
stance, it would allow a user to select, among a vast data
lake A, which samples to use for a training, based on la-
beled and unlabeled samples [ 61]. It would also enable nim-
ble inference, by adapting a modest-size model to specific
tasks, rather than relying on an unwieldy model to master
all trades. Finally, it would enable reversible adaptation:
While in the case of language models tasks are generally
synergistic [ 44], in vision tasks can be antagonistic.1There-
fore, a model adapted to one task may behave poorly on
another, and a model that encompasses both would require
significantly higher capacity [ 2,10,38,59], to the detriment
of inference efficiency. In T3AR, changing the target data
T changes the subset of the data pool A that is retrieved,
with no impact on other models, instantiating smaller inde-
pendent models for antagonistic tasks, rather than coercing
them into a larger one, likely multiplying inference costs.
T3AR can be used in a continual setting, where at each
time t one has a different target Tt, and the auxiliary task A
is composed of the union of all prior targets T0, ..., Tt. The
retrieval system should automatically determine what infor-
mation from whatever past targets is relevant to the present,
and what information is redundant in Aand can be elimi-
nated. The important difference compared to ordinary con-
tinual learning is that each step starts with the base model,
so there is no catastrophic forgetting, and what is updated is
the auxiliary task. In other words, the integration of infor-
mation occurs in A, not in the trained model f.
1.1. Related problems
T3AR relates to unsupervised domain adaptation (UDA)
[26,28,45], since the target dataset is not annotated. How-
ever, in UDA one assumes that the source dataset S is avail-
able along with the target T, which is not necessarily the
case in T3AR since users may want to bypass annotation
altogether, and directly adapt the pre-trained model using
the auxiliary dataset A, based on the target task T, without
having direct access to S.
1E.g., localization requires marginalizing identity, whereas recognition
requires marginalizing location, making the features that are informative
for one detrimental to the other [ 2,38].
T3AR also relates to semi-supervised learning (SSL)
[32,34,51], since the target dataset T and the auxiliary
dataset A are not annotated. However, in SSL one assumes
that labeled S and unlabeled data are drawn from the same
joint distribution, which is not the case for T and A in
T3AR, and, in any case we do not aim to infer labels of
A, and just use it to improve the model on the target task.
T3AR is also related to open-set domain adaptation [ 6,
49], since the auxiliary dataset A is heterogeneous and does
not share the same label space as the source and target task.
It is also related to out-of-distribution detection (OOD) [ 20,
62], since one needs to decide whether to add samples from
the auxiliary dataset, and to active learning [ 50], since one
needs to decide what samples to add.
Naturally, T3AR closely relates to test-time adaptation
(TTA) [ 8,35,53,57,65], and to memory-augmented or
retrieval-based architectures [ 3,11,36], widely developed
in the language domain [ 4,33,63], where the hypotheses
live in the same space of the data and nuisance variability is
limited to paraphrasing.
In summary, T3AR lies at the intersection of UDA, SSL,
OOD, TTA, Active Learning, and Retrieval, yet it does not
fit neatly into any of them, making both the survey of related
literature (Sect. 2) and experimental assessment (Sect. 4)
non-straightforward.
1.2. Key ideas and contributions
We propose a method to solve T3AR, based on a target
unlabeled dataset T, that selects samples from an auxiliary
dataset A, using a retrieval model R.
Starting from any model fS pre-trained by the provider
on a dataset D and later fine-tuned by the user on a labelled
dataset S, our method finds subsets of an auxiliary dataset
A that are relevant for the target dataset T, using nearest
neighbors in A to samples in T, measured in a representa-
tion space computed by a retrieval model R (in our case, a
CLIP embedding [ 48]).
The key technical contribution is a contrastive loss used
for updating the model fS to a new model fA|T, whereby
negative pairs are selected by retrieving samples from the
external dataset A that are informative of T using the re-
triever R. Furthermore, to improve training stability, we
exclude same-class negative pairs from T by exploiting as-
signed pseudo-labels obtained by averaging predicted log-
its on different data augmentations. Our method can be
thought of as a form of contrastive “dataset augmentation”
by enlarging the user data with samples drawn from a dif-
ferent (unlabeled) dataset A, based on guidance provided by
a retriever R. This procedure can be followed by both the
user and the provider, thus empowering them to adapt the
core model (train-time adaptation) or a sequence of disjoint
custom models (test-time adaptation).
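Two of the ingredients described above, retrieval of informative samples from A and pseudo-labeling by logit averaging, can be sketched as follows; the embedding interfaces, the value of k, and the shapes are assumptions, and the full contrastive objective is omitted.

```python
import torch
import torch.nn.functional as F

def retrieve_negatives(target_emb, pool_emb, k=32):
    """target_emb: (B, D) retrieval embeddings (e.g. CLIP) of target samples T;
    pool_emb: (N, D) embeddings of the external pool A.
    Returns indices of the k most similar pool samples per target sample."""
    sim = F.normalize(target_emb, dim=-1) @ F.normalize(pool_emb, dim=-1).t()
    return sim.topk(k, dim=-1).indices                        # (B, k)

def pseudo_labels(model, augmented_views):
    """augmented_views: list of (B, C, H, W) tensors, different augmentations of T.
    Pseudo-labels are the argmax of the logits averaged across views."""
    logits = torch.stack([model(v) for v in augmented_views]).mean(dim=0)
    return logits.argmax(dim=-1)                              # (B,)
```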
We show that applying T3AR improves downstream
classification accuracy over the paragon supervised fine-
tuning [ 1,18] for train-time and test-time adaptation meth-
ods [ 8,35,57] for test-time. In particular, as the number of
data available during adaptation decreases, T3AR improves
by up to 13% and 8% in relative Top1 accuracy at train and
test time, respectively.
|
Xu_PIDNet_A_Real-Time_Semantic_Segmentation_Network_Inspired_by_PID_Controllers_CVPR_2023
|
Abstract
Two-branch network architecture has shown its effi-
ciency and effectiveness in real-time semantic segmentation
tasks. However, direct fusion of high-resolution details and
low-frequency context has the drawback of detailed features
being easily overwhelmed by surrounding contextual infor-
mation. This overshoot phenomenon limits the improvement
of the segmentation accuracy of existing two-branch mod-
els. In this paper, we make a connection between Convolu-
tional Neural Networks (CNN) and Proportional-Integral-
Derivative (PID) controllers and reveal that a two-branch
network is equivalent to a Proportional-Integral (PI) con-
troller, which inherently suffers from similar overshoot is-
sues. To alleviate this problem, we propose a novel three-
branch network architecture: PIDNet, which contains three
branches to parse detailed, context and boundary informa-
tion, respectively, and employs boundary attention to guide
the fusion of detailed and context branches. Our family of
PIDNets achieve the best trade-off between inference speed
and accuracy and their accuracy surpasses all the existing
models with similar inference speed on the Cityscapes and
CamVid datasets. Specifically, PIDNet-S achieves 78.6%
mIOU with inference speed of 93.2 FPS on Cityscapes and
80.1% mIOU with speed of 153.7 FPS on CamVid.
|
1. Introduction
Proportional-Integral-Derivative (PID) Controller is a
classic concept that has been widely applied in modern dy-
namic systems and processes such as robotic manipulation
[3], chemical processes [24], and power systems [25]. Even
though many advanced control strategies with better con-
trol performance have been developed in recent years, PID
controller is still the go-to choice for most industry applica-
tions due to its simplicity and robustness. Furthermore, the
idea of PID controller has been extended to many other ar-
1Work supported in part by NSF grants ECCS-1923803 and CCF-
2007527.
Figure 1. The trade-off between inference speed and accuracy (re-
ported) for real-time models on the Cityscapes [12] test set. Blue
stars refer to our models while green triangles represent others.
eas. For example, researchers introduced the PID concept to
image denoising [32], stochastic gradient decent [1] and nu-
merical optimization [50] for better algorithm performance.
In this paper, we devise a novel architecture for real-time se-
mantic segmentation tasks by employing the basic concept
of PID controller and demonstrate that the performance of
our model surpasses all the previous works and achieves the
best trade-off between inference speed and accuracy, as il-
lustrated in Figure 1, by extensive experiments.
Semantic segmentation is a fundamental task for visual
scene parsing with the objective of assigning each pixel in
the input image to a specific class label. With the increas-
ing demand of intelligence, semantic segmentation has be-
come the basic perception component for applications such
as autonomous driving [16], medical imaging diagnosis [2]
and remote sensing imagery [54]. Starting from FCN [31],
which achieved great improvement over traditional meth-
ods, deep convnets gradually dominated the semantic seg-
mentation field and many representative models have been
proposed [4, 6, 40, 48, 59, 60]. For better performance, var-
ious strategies were introduced to equip these models with
the capability of learning contextual dependencies among
pixels in large scale without missing important details. Even
though these models achieve encouraging segmentation ac-
curacy, too much computational cost is required, which
significantly hinders their application in real-time scenarios,
such as autonomous vehicle [16] and robot surgery [44].
To meet real-time or mobile requirements, researchers
have come up with many efficient and effective models in
the past for semantic segmentation. Specifically, ENet [36]
adopted lightweight decoder and downsampled the feature
maps in early stages. ICNet [58] encoded small-size in-
puts in complex and deep path to parse the high-level se-
mantics. MobileNets [21, 42] replaced traditional convolu-
tions with depth-wise separable convolutions. These early
works reduced the latency and memory usage of segmen-
tation models, but low accuracy significantly limits their
real-world application. Recently, many novel and promis-
ing models based on Two-Branch Network (TBN) architec-
ture have been proposed in the literature and achieve SOTA
trade-off between speed and accuracy [15, 20, 38, 39, 52].
In this paper, we view the architecture of TBNs from the
prospective of PID controller and point out that a TBN is
equivalent to a PI controller, which suffers from the over-
shoot issue as illustrated in Figure 2. To alleviate this
problem, we devise a novel three-branch network architec-
ture, namely PIDNet, and demonstrate its superiority on
Cityscapes [12], CamVid [5] and PASCAL Context [33]
datasets. We also provide ablation study and feature visual-
ization for better understanding of the functionality of each
module in PIDNet. The source code can be accessed via:
https://github.com/XuJiacong/PIDNet
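For readers less familiar with control theory, a textbook discrete PID update is shown below purely to ground the analogy; mapping the P, I, and D terms to the detail, context, and boundary branches follows the intuition above and is not the paper's code.

```python
def pid_step(error, prev_error, integral, kp, ki, kd, dt):
    """One update of a discrete PID controller."""
    integral = integral + error * dt              # I: accumulated, low-frequency context
    derivative = (error - prev_error) / dt        # D: fast changes, analogous to boundaries
    output = kp * error + ki * integral + kd * derivative  # P: the current (detailed) error
    return output, integral
```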
The main contributions of this paper are three-fold:
• We make a connection between deep CNN and PID
controller and propose a family of three-branch net-
works based on the PID controller architecture.
• Efficient modules, such as Bag fusion module de-
signed to balance detailed and context features, are
proposed to boost the performance of PIDNets.
• PIDNet achieves the best trade-off between inference
speed and accuracy among all the existing models.
In particular, PIDNet-S achieves 78.6% mIOU with
speed of 93.2 FPS and PIDNet-L presents the high-
est accuracy (80.6% mIOU) in the real-time domain on
Cityscapes test set without acceleration tools.
|
Yang_Relational_Space-Time_Query_in_Long-Form_Videos_CVPR_2023
|
Abstract
Egocentric videos are often available in the form of un-
interrupted, uncurated long videos capturing the camera
wearers’ daily life activities. Understanding these videos re-
quires models to be able to reason about activities, objects,
and their interactions. However, current video benchmarks
study these problems independently and under short, cu-
rated clips. In contrast, real-world applications, e.g. AR
assistants, require bundling these problems for both model
development and evaluation. In this paper, we propose to
study these problems in a joint framework for long video
understanding. Our contributions are three-fold. First,
we propose an integrated framework, namely Relational
Space- Time Query (ReST), for evaluating video under-
standing models via templated spatiotemporal queries. Sec-
ond, we introduce two new benchmarks, ReST-ADL and
ReST-Ego4D1, which augment the existing egocentric video
datasets with abundant query annotations generated by the
ReST framework. Finally, we present a set of baselines and
in-depth analysis on the two benchmarks and provide in-
sights about the query tasks.
1The latest version of our benchmark and models will be available here.
We view our integrated frame-
work and benchmarks as a step towards comprehensive,
multi-step reasoning in long videos, and believe it will fa-
cilitate the development of next generations of video under-
standing models.
|
1. Introduction
Thanks to the advances of modern, massive parallel
hardware, e.g. GPUs, and the availability of large datasets,
significant progress has been made in the last few years
with large language models ( e.g., GPT-3 [6], BERT [11])
and image / video generative models ( e.g., DALLE [40],
Imagen [41], Make-A-Video [46]). Meanwhile, current
video understanding models mostly focus on processing
short video clips [12, 13, 52] and solving basic perception
tasks such as action recognition [16, 28, 31, 44, 48] and de-
tection [7, 19]. One may ask the questions for video under-
standing research: “How far are current models progressing
to a human-level performance on video understanding?”, or
“What is blocking us from building models that can under-
stand complex relationships in long videos?”
Of course, there exists multiple blockers in practice such
as GPU memory limitation and inefficient hardware support
for processing long videos. Yet the first and most important
reason is always the lack of the right research problem and
theright benchmark . One drawback of current video un-
derstanding benchmarks [19, 28] is that they handle analy-
sis of activities, objects and their interactions in a separate
manner. However, understanding long-form videos usually
requires a unified analysis of these factors because activi-
ties manifesting within these uncurated videos are primar-
ily in the form of human-object interaction, especially for
egocentric recordings of a camera wearer’s daily lives [17].
In recent years, video-QA [18, 33, 49, 66] and video cap-
tioning [10, 50] have been proposed as alternative tasks for
video understanding. These tasks require models to under-
stand both visual and text modalities and perform cross-
modal reasoning. On one hand, such vision-language based
tasks have the benefit of bypassing the pre-defined taxon-
omy and closed-world assumptions by leveraging language
as input and/or output. On the other hand, using language
for vision tasks, either in the form of input query or output
prediction, brings additional ambiguity in text generation
and requires use of uninterpretable evaluation metrics ( e.g.,
BLEU, ROUGE). Language priors also introduce bias to the
task as observed in prior work that the language-only model
achieves comparable results with the VQA ones [4, 18].
In this paper, we present a holistic framework, Relational
Space- Time Query (ReST), for evaluating video under-
standing models via templated spatiotemporal queries. By
combining analysis of activities, objects, and their interac-
tions, our ReST framework captures much of the rich ex-
pressivity of natural language query while avoiding the am-
biguity and bias introduced by the language input / output.
Figure 1 illustrates an example of our ReST framework.
Given a long video spanning up to 30 minutes, we evalu-
ate a video understanding model by asking various queries
about the activities and human-object interactions occurred
in the video. Unlike VQA that relies on language-based
questions and answers, all of our queries and answers are
constructed in the form of pre-defined templates with vi-
sual or categorical input. Such a design helps the evaluation
remain pure vision-focused and enjoy well-defined, well-
posed evaluation metrics. Queries constructed in ReST can
cover various questions with different levels of complexity.
As shown in the examples in Figure 1, the questions can
be: “what activities did I do with the coffee mug?”, “what
objects did I interact with when I was washing dishes?” or
“when did I take the pills stored in this specific bottle?”. We
note that these questions are templated and only the time
in square brackets, the activities in the double quotes, and
the image crops in the curly brackets are allowed to vary
to form different questions. In order to perform well on
these query tasks, a model needs not only be able to process
the long video efficiently, but is also required to understand
the temporal dependencies of activities and the fine-grained
human-object interactions across the video.
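One possible programmatic representation of such a templated query is sketched below; the field names, template identifier, and example values are hypothetical and may differ from the benchmark's actual schema.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ReSTQuery:
    template: str                                      # which templated question is being asked
    time_window: Optional[Tuple[float, float]] = None  # [start, end] in seconds, if used by the template
    activity: Optional[str] = None                     # categorical activity label, if used by the template
    object_crop: Optional[object] = None               # image crop of the queried object, if used by the template

# e.g. "what objects did I interact with while doing {activity} in {time_window}?"
query = ReSTQuery(template="objects_during_activity",
                  time_window=(120.0, 180.0), activity="washing dishes")
```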
We summarize our contributions as follows. We present
Relational Space- Time Query (ReST), a holistic frame-
work and benchmarks for long video understanding with
detailed-reasoning tasks. We provide annotations for our
ReST queries on two existing benchmarks, ADL [37] and
Ego4D [17]. We conduct a set of comprehensive baselines
and analysis for ReST to gain insights into the tasks. We
find that even with the initial set of the three basic tasks,
current state-of-the-art models are not meeting desired per-
formance, which indicates the need for further research and
opportunities in this field.
|
Ye_Improving_Commonsense_in_Vision-Language_Models_via_Knowledge_Graph_Riddles_CVPR_2023
|
Abstract
This paper focuses on analyzing and improving the com-
monsense ability of recent popular vision-language (VL)
models. Despite the great success, we observe that existing
VL-models still lack commonsense knowledge/reasoning
ability (e.g., “Lemons are sour”), which is a vital com-
ponent towards artificial general intelligence. Through
our analysis, we find one important reason is that exist-
ing large-scale VL datasets do not contain much common-
sense knowledge, which motivates us to improve the com-
monsense of VL-models from the data perspective. Rather
than collecting a new VL training dataset, we propose
a more scalable strategy, i.e., “Data Augmentation with
kNowledge graph linearization for CommonsensE capabil-
ity” (DANCE). It can be viewed as one type of data aug-
mentation technique, which can inject commonsense knowl-
edge into existing VL datasets on the fly during training.
More specifically, we leverage the commonsense knowl-
edge graph (e.g., ConceptNet) and create variants of text
description in VL datasets via bidirectional sub-graph se-
quentialization. For better commonsense evaluation, we
further propose the first retrieval-based commonsense di-
agnostic benchmark. By conducting extensive experiments
on some representative VL-models, we demonstrate that
our DANCE technique is able to significantly improve the
commonsense ability while maintaining the performance on
vanilla retrieval tasks. The code and data are available at
https://github.com/pleaseconnectwifi/DANCE.
|
1. Introduction
Many vision-based problems in our daily life go beyond
perception and recognition. For example, when we hear
people say “ It tastes sour ”, we need to identify they are talk-
ing about lemons on the table instead of the chocolate cake.
Therefore, it is essential for artificial general intelligence to
*This work was partially supported by a GRF grant (Project No. CityU
11216122) from the Research Grants Council (RGC) of Hong Kong.
Figure 1. Illustration of the commonsense lacking problem of
various popular VL-models, including CLIP [45] pre-trained with
contrastive supervision, ViLT [24] with matching supervision, and
BLIP [28] with the both. The bar plots suggest the alignment
scores of the images to the text. All models fail in retrieving the
correct image with lemon (in blue).
develop commonsense capability. Vision-Language (VL)
models [45] recently show promising signals on mimick-
ing the core cognitive activities of humans by understand-
ing the visual and textual information in the same latent
space [74]. However, we observed that VL-models, e.g.,
CLIP [45], still struggle when minor commonsense knowl-
edge is needed. For example, as shown in figure 1, none of
the existing models correctly identify the lemon with text
input “ It tastes sour ”.
In this work, we take a step towards injecting the VL-
models with commonsense capability. More specifically,
we find one important reason for the commonsense lacking
issue is that existing large-scale VL datasets do not con-
tain much commonsense knowledge. On the one hand, reg-
ular VL data, e.g., COCO [32] and CC 12M [9] contain
more nouns and descriptive adjectives, with much fewer
verbs and particles compared to regular texts. This distribu-
tion difference suggests that it might be infeasible for VL-
models to gain commonsense ability by purely enlarging the
dataset, unlike language-only models [22, 44]. Also, other
objectives like visual question answering or generation are
not widely applicable for training and have limited data size.
Inspired by the aforementioned findings, we propose
Data Augmentation with kNowledge graph linearization for
CommonsensE capability (DANCE). The main idea is to
generate commonsense-augmented image-text pairs. To do
so, one natural idea is to leverage the rich commonsense
knowledge in knowledge graphs [5, 56]. However, it is not
trivial to inject the knowledge into image-text pairs. On the
one hand, structured data like graphs usually require spe-
cific architectures [59, 75] to embed, which is troublesome.
On the other hand, if we associate the external knowledge
with the text in the training stage, we will need the exter-
nal knowledge-augmentation process in the inference stage
as well to avoid domain shift [51]. This is not desirable,
since the corresponding knowledge is usually not available
for the inference tasks. To address these challenges, we first
re-organize the commonsense knowledge graph into entries
with (entity, relation, entity) format, and pair them to the
images that contain one of the entities. We then hide the
name of entities in that image with demonstrative pronouns,
e.g., “ this item ”. The generated descriptions are in tex-
tual form and therefore readily applicable for the training of
most VL-models. More importantly, by forcing the model
to memorize the relationships between entities in the train-
ing stage, such data augmentation is not needed in the infer-
ence stage. The data pair generation pipeline is automatic
and scalable, leveraging the existing consolidated common-
sense knowledge base and the large and various collections
of image-language supervision.
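A rough sketch of this augmentation step is given below; the relation templates, the entity-matching rule, and the function interface are illustrative assumptions rather than the exact DANCE pipeline.

```python
import random

# Hypothetical linearization templates; one per supported relation type.
TEMPLATES = {
    "HasProperty": "this item is {tail}",
    "UsedFor": "this item is used for {tail}",
    "IsA": "this item is a kind of {tail}",
}

def augment_caption(image_entities, kg_triplets):
    """image_entities: set of entity names present in the image;
    kg_triplets: iterable of (head, relation, tail) from a commonsense KG."""
    candidates = [(h, r, t) for (h, r, t) in kg_triplets
                  if h in image_entities and r in TEMPLATES]
    if not candidates:
        return None
    _, rel, tail = random.choice(candidates)
    # the entity name is hidden behind a demonstrative pronoun ("this item")
    return TEMPLATES[rel].format(tail=tail)
```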
In addition, existing VL commonsense evaluations are
restricted to visual question answering and generation
which are not a good fit or well received in the majority of
VL-models. Therefore, we propose a new diagnostic test set
in a wider adaptable form, i.e., Image-Text and Text-Image
Retrieval, to achieve a fair evaluation of the pre-trained VL-
models. The set is upgraded by neighborhood hard-negative
filtering to further ensure data quality.
The effectiveness of the DANCE is validated by not
only our diagnostic test set, but also the most popular vi-
sual question answering benchmark for commonsense [36].
Moreover, we show the commonsense ability of the models
trained with DANCE even generalize to unseen knowledge.
We show the potential of the new train strategy and the test
dataset by deep content analysis and baseline performance
measurements across various cutting-edge VL-models. We
summarize our main findings and contributions as follows:
1. We propose a novel commonsense-aware training
strategy DANCE, which is compatible with the most
of VL-models. The inference stage needs no change.
2. We propose a new retrieval-based well-received com-
monsense benchmark to analyze a suite of VL-models
and discover weaknesses that are not widely known: commonsense easy for humans (83%) is hard for cur-
rent state-of-the-art VL-models ( <42%).
3. We conduct extensive experiments to demonstrate the
effectiveness of the proposed strategy and diagnostic
test set. The datasets and all the code will be made
publicly available.
|
Xu_Side_Adapter_Network_for_Open-Vocabulary_Semantic_Segmentation_CVPR_2023
|
Abstract
This paper presents a new framework for open-
vocabulary semantic segmentation with the pre-trained
vision-language model, named Side Adapter Network
(SAN). Our approach models the semantic segmentation
task as a region recognition problem. A side network is
attached to a frozen CLIP model with two branches: one
for predicting mask proposals, and the other for predicting
attention bias which is applied in the CLIP model to rec-
ognize the class of masks. This decoupled design benefits CLIP in recognizing the class of mask proposals.
Since the attached side network can reuse CLIP features,
it can be very light. In addition, the entire network can be
trained end-to-end, allowing the side network to be adapted
to the frozen CLIP model, which makes the predicted mask
proposals CLIP-aware. Our approach is fast, accurate, and
only adds a few additional trainable parameters. We evalu-
ate our approach on multiple semantic segmentation bench-
marks. Our method significantly outperforms other coun-
terparts, with up to 18 times fewer trainable parameters and
19 times faster inference speed. Fig. 1 shows some visual-
ization results on ImageNet. We hope our approach will
serve as a solid baseline and help ease future research in
open-vocabulary semantic segmentation.
|
1. Introduction
Recognizing and segmenting the visual elements of any
category is the pursuit of semantic segmentation. Mod-
ern semantic segmentation methods [5, 7, 23] rely on large
amounts of labeled data, but typically datasets often only
consist of tens to hundreds of categories, and expensive
data collection and annotation limit our possibilities to fur-
ther expand the categories. Recently, large-scale vision-
language models [17, 27, 36,37], represented by CLIP [27],
have enabled arbitrary category recognition at the image
level, i.e., open-vocabulary image classification , and this
great success encourages us to explore its adaptation in se-
mantic segmentation.
Applying the CLIP model in open-vocabulary seman-
tic segmentation is challenging because the CLIP model is
trained by image-level contrastive learning. Its learned rep-
resentation lacks the pixel-level recognition capability that
is required for semantic segmentation. One solution [12,19]
to remedy the granularity gap of representation is fine-
tuning the model on the segmentation dataset. However,
the data sizes of segmentation datasets are much less than
the vision-language pre-training dataset, so the capability of
fine-tuned models on open-vocabulary recognition is often
compromised.
Modeling semantic segmentation as a region recogni-
tion problem bypasses the above difficulties. Early at-
tempts [9, 33] adopt a two-stage training framework. In the
first stage, a stand-alone model is trained to generate a set
of masked image crops as mask proposals. In the second
stage, the vision-language pre-training model ( e.g. CLIP) is
used to recognize the class of masked image crops. How-
ever, since the mask prediction model is completely inde-
pendent of the vision-language pre-training model, it misses
the opportunity to leverage the strong features of the vision-
language pre-training model and the predicted masked im-
age crops may be unsuitable for recognition, which leads to
a heavy, slow, and low-performing model.
This work seeks to fully unleash the capabilities of the
vision-language pre-training model in open vocabulary se-
mantic segmentation. To reach this goal, we present a new
framework (Fig. 2), called side adapter network (SAN). Its
mask prediction and recognition are CLIP-aware because of
end-to-end training, and it can be lightweight due to lever-
aging the features of CLIP.
The side adapter network has two branches: one predict-
ing mask proposals, and one predicting attention biases that
are applied to the self-attention blocks of CLIP for mask
class recognition. We show this decoupled design improves
Figure 2. Overview of our SAN. The red dotted lines indicate the
gradient flow during training. In our framework, the frozen CLIP
model still serves as a classifier, and the side adapter network gen-
erates mask proposals and attention bias to guide the deeper layers
of the CLIP model to predict proposal-wise classification logits.
During inference, the mask proposals and the proposal logits are
combined to get final predictions through Matmul .
the segmentation performance because the region used for
CLIP to recognize the mask may be different from the mask
region itself. To minimize the cost of CLIP, we further
present a single-forward design: the features of shallow
CLIP blocks are fused to SAN, and other deeper blocks are
combined with attention biases for mask recognition. Since
the training is end-to-end, the side adapter network can be
maximally adapted to the frozen CLIP model.
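As an illustration of how the two branch outputs could be combined at inference (the Matmul step in Fig. 2), the snippet below turns mask proposals and proposal-wise logits into a per-pixel prediction; the tensor shapes and the sigmoid/softmax choices are assumptions, not the released code.

```python
import torch

def combine_proposals(mask_proposals, proposal_logits):
    """mask_proposals: (B, N, H, W) mask logits for N proposals;
    proposal_logits: (B, N, K) class logits per proposal, K candidate classes."""
    cls_prob = proposal_logits.softmax(dim=-1)                 # (B, N, K)
    mask_prob = mask_proposals.sigmoid()                       # (B, N, H, W)
    seg = torch.einsum("bnk,bnhw->bkhw", cls_prob, mask_prob)  # per-pixel class scores
    return seg.argmax(dim=1)                                   # (B, H, W) label map
```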
With the aim of fairness and reproducibility, our study is
based on officially released CLIP models. We focus on the
released ViT CLIP models because the vision transformer
has de facto substituted ConvNet as the dominant backbone
in the computer vision community, and for conceptual con-
sistency and simplicity, the side adapter network is also im-
plemented by the vision transformer.
Accurate semantic segmentation needs high-resolution
images, but the released ViT CLIP models are designed
for low-resolution images ( e.g.224×224) and directly ap-
ply to high-resolution images giving a poor performance.
To alleviate the conflicts in input resolutions, we use low-
resolution images in the CLIP model and high-resolution
images in the side adapter network. We show this asym-
metric input resolution is very effective. In addition, we
also explore only fine-tuning the positional embedding of
the ViT model and note improvements.
We evaluate our method on various benchmarks. Fol-
lowing the setting of previous works [22, 33], the COCO
Stuff [4] dataset is used for training, and Pascal VOC [11],
Pascal Context-59 [25], Pascal Context-459 [25], ADE20K-
150 [41], and ADE20K-847 [41] are used for testing. With-
out bells and whistles, we report state-of-the-art perfor-
mance on all benchmarks: with the CLIP ViT-L/14 model,
our method achieves 12.4 mIoU on ADE-847, 15.7 mIoU
on PC-459, 32.1 mIoU on ADE-150, 57.7 mIoU on PC-
59, and 94.6 mIoU on VOC. Compared to the previous best
method, our method has an average of +1.8 mIoU improve-
ments on 5 datasets for ViT-B/16, and +2.3 mIoU improve-
ments for ViT-L/14, respectively. By further applying en-
semble trick, the average performance gap increases to +2.9
mIoU and +3.7 mIoU for ViT-B/16 and ViT-L/14.
Along with the excellent performance, our approach re-
quires only 8.4M trainable parameters with 64.3 GFLOPs,
which is only 13% and 20% of [10], 6% and less than 1%
of [22], respectively.
|
Zeng_Real-Time_Multi-Person_Eyeblink_Detection_in_the_Wild_for_Untrimmed_Video_CVPR_2023
|
Abstract
Real-time eyeblink detection in the wild can widely serve
for fatigue detection, face anti-spoofing, emotion analysis,
etc. The existing research efforts generally focus on single-
person cases towards trimmed video. However, multi-
person scenario within untrimmed videos is also impor-
tant for practical applications, which has not been well
concerned yet. To address this, we shed light on this re-
search field for the first time with essential contributions on
dataset, theory, and practices. In particular, a large-scale
dataset termed MPEblink that involves 686 untrimmed
videos with 8748 eyeblink events is proposed under multi-
person conditions. The samples are captured from uncon-
strained films to reveal “in the wild” characteristics. Mean-
while, a real-time multi-person eyeblink detection method
is also proposed. Being different from the existing counter-
parts, our proposition runs in a one-stage spatio-temporal
way with end-to-end learning capacity. Specifically, it si-
multaneously addresses the sub-tasks of face detection, face
tracking, and human instance-level eyeblink detection. This
paradigm holds 2 main advantages: (1) eyeblink features
can be facilitated via the face’s global context (e.g., head
pose and illumination condition) with joint optimization
and interaction, and (2) addressing these sub-tasks in par-
allel instead of sequential manner can save time remarkably
to meet the real-time running requirement. Experiments on
MPEblink verify the essential challenges of real-time multi-
person eyeblink detection in the wild for untrimmed video.
Our method also outperforms existing approaches by large
margins and with a high inference speed.
†Yang Xiao is corresponding author (Yang [email protected]).
Figure 1. The illustration on multi-person eyeblink detection. This
task aims at being aware of the presence of people and detecting
their eyeblink activities at instance level.
|
1. Introduction
Real-time eyeblink detection in the wild is a recently
emerged challenging research task [19] that can widely
serve for fatigue detection [2], face anti-spoofing [34], af-
fective analysis [7], etc. Although remarkable progress has
been made [9, 10, 19], the existing methods generally fo-
cus on single-person cases within trimmed videos. Multi-
person scenario within untrimmed videos has not been well
concerned yet. However, detecting long-term eyeblink be-
haviors at multi-instance level is more preferred for some
practical application scenarios. For example, it can be used
to estimate attendees’ attention level and emotional state
change during social interaction [9, 33]. Thus, effective
and real-time multi-person eyeblink detection in the wild
for untrimmed video is indeed required.
To this end, we shed the light on this research problem
with essential contributions on dataset, theory, and prac-
tices. First, a challenging labeled multi-person eyeblink de-
tection benchmark termed MPEblink is built under in-the-
wild conditions. It consists of 686 untrimmed long videos
captured from unconstrained movies to reveal the “in the
wild” characteristics. The contained scenarios are realistic
and diverse, such as social interactions and group activi-
ties. To our knowledge, MPEblink is the first multi-person
eyeblink detection dataset that focuses on in-the-wild long
videos. Fig. 1 illustrates a sample video with ground truth
annotations within it. Different from the existing eyeblink
detection benchmarks [10,12,19], the proposed benchmark
aims at being aware of all the attendees along the whole
video and detecting their eyeblinks at instance level. In
summary, MPEblink is featured with multi-instance, uncon-
strained, and untrimmed, which makes it more challenging
and realistic than previous formulations.
To perform eyeblink detection, previous methods [9, 10,
12, 19, 40] generally take a sequential approach contain-
ing face detection, face tracking, and classification on the
pre-extracted local eye clues within a temporal window.
Whereas such a pipeline seems reasonable, it has several
critical drawbacks. First, the use of isolated components
may lead to sub-optimal results as they are not jointly op-
timized and it is inefficient due to the inability to reuse the
features of each stage. Second, the eyeblink features only
contain the information from the pre-extracted local eye
clues (i.e., a small part of face), lacking the useful infor-
mation from global face context such as head pose and illu-
mination condition that are crucial for detecting eyeblinks
in the wild. Moreover, the pre-extracted local eye clues are
also unreliable due to the localization challenge towards un-
constrained scenarios. Third, the sequential approach leads
to computational cost being sensitive to the subject amount,
which is hard to meet the real-time running requirement.
To tackle these issues, we propose a one-stage multi-
person eyeblink detection framework called InstBlink,
which can simultaneously detect human faces, track them
over time, and do eyeblink detection at instance level. Inst-
Blink takes inspiration from the existing query-based meth-
ods [41, 45, 47, 48] and models the spatio-temporal face
as well as eyeblink representations at instance level within
each query. The insight is that the features can be effectively
shared among these sub-tasks and the eyeblink features can
be facilitated via face’s global contexts (e.g., head pose and
illumination condition) with joint optimization and interac-
tion, especially in unconstrained in-the-wild cases. Exper-
iments verify the superiority of InstBlink in both effective-
ness and efficiency, while also highlighting the critical chal-
lenges of real-time multi-person eyeblink detection in the
wild for untrimmed video.
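To make the query-based design concrete, the sketch below shows how a single spatio-temporal instance query could be decoded into the three sub-task outputs (detection, tracking, eyeblink). It is only an illustrative sketch under our own assumptions about module names, feature width, and output parameterization, not the authors' implementation.

```python
import torch
import torch.nn as nn

class InstanceQueryHead(nn.Module):
    """Toy per-query head: one instance query -> face box, tracking embedding, blink logit."""
    def __init__(self, dim=256, track_dim=128):
        super().__init__()
        self.box_head = nn.Linear(dim, 4)            # (cx, cy, w, h), normalized
        self.track_head = nn.Linear(dim, track_dim)  # embedding for cross-frame association
        self.blink_head = nn.Linear(dim, 1)          # per-frame eyeblink logit

    def forward(self, queries):                      # queries: (T, N, dim), frames x instances
        boxes = self.box_head(queries).sigmoid()
        track = self.track_head(queries)
        blink = self.blink_head(queries).squeeze(-1)
        return boxes, track, blink

# toy usage: decode 8 frames x 10 instance queries of width 256
boxes, track, blink = InstanceQueryHead()(torch.randn(8, 10, 256))
```

Because all three heads read the same query embedding, the face context and the blink cue naturally share features, which is the joint-optimization benefit argued above.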
The main contributions of this work are threefold:
•To our knowledge, it is the first time that instance-level
multi-person eyeblink detection in untrimmed videos is for-
mally defined and explored;
•We introduce an unconstrained multi-person eyeblink
detection dataset, MPEblink, which contains 686 untrimmed
videos with 8748 eyeblink events and is more realistic and
challenging than existing benchmarks;
•We propose a one-stage multi-person eyeblink detec-
tion method that can jointly perform face detection, track-
ing, and instance-level eyeblink detection. Such a task-joint
paradigm can benefit the sub-tasks uniformly.
|
Yu_Task_Residual_for_Tuning_Vision-Language_Models_CVPR_2023
|
Abstract
Large-scale vision-language models (VLMs) pre-trained
on billion-level data have learned general visual represen-
tations and broad visual concepts. In principle, the well-
learned knowledge structure of the VLMs should be inherited
appropriately when being transferred to downstream tasks
with limited data. However, most existing efficient transfer
learning (ETL) approaches for VLMs either damage or are
excessively biased towards the prior knowledge, e.g., prompt
tuning (PT) discards the pre-trained text-based classifier and
builds a new one while adapter-style tuning (AT) fully relies
on the pre-trained features. To address this, we propose a
new efficient tuning approach for VLMs named Task Resid-
ual Tuning (TaskRes), which performs directly on the text-
based classifier and explicitly decouples the prior knowledge
of the pre-trained models and new knowledge regarding a
target task. Specifically, TaskRes keeps the original classifier
weights from the VLMs frozen and obtains a new classifier
for the target task by tuning a set of prior-independent pa-
rameters as a residual to the original one, which enables re-
liable prior knowledge preservation and flexible task-specific
knowledge exploration. The proposed TaskRes is simple
yet effective, which significantly outperforms previous ETL
methods (e.g., PT and AT) on 11 benchmark datasets while
requiring minimal effort for the implementation. Our code is
available at https://github.com/geekyutao/TaskRes.
|
1. Introduction
Over the past decade, deep learning-based visual recogni-
tion models [10, 18, 28, 55, 58] have achieved great success.
These state-of-the-art models are often trained on a large
amount of image and discrete label pairs. The discrete la-
bel is generated by converting a detailed textual description,
e.g., “American curl cat”, into a simple scalar, which strik-
ingly eases the computation of the loss function. However, this
also results in two evident limitations: (i) the rich semantics
*Equal contribution.
†Corresponding author.
Figure 1. Performance comparison between Zero-shot CLIP [49], CoOp [73], CLIP-Adapter [15], Tip-Adapter-F [71] and our
TaskRes on ImageNet with few-shot settings (1, 2, 4, 8, and 16 shots; x-axis: number of tunable parameters (M), y-axis: score (%)).
in the textual description are underused, and (ii) the trained
models are limited to recognizing closed-set classes only.
Recent large-scale vision-language model (VLM) pre-
training [1, 23, 33, 49, 69] eliminates those limitations by
learning visual representations via textual supervision. For
instance, texts and images are encoded and mapped into a
unified space via a contrastive loss during pre-training [49].
The pre-trained text encoder can then be used to synthesize
a text-based classifier for image recognition given the corre-
sponding natural language descriptions as shown in Figure 2
(a). Those pre-trained VLMs have demonstrated a powerful
transferability on a variety of downstream tasks in a zero-shot
manner. However, the effectiveness of the aforementioned
models heavily relies on their large-scale architectures and
training datasets. For instance, CLIP [49] has up to 428 mil-
lion parameters and is trained on 0.4 billion text-image pairs,
while Flamingo [1] boasts up to 80 billion parameters and is
trained on a staggering 2.1 billion pairs. This makes it im-
practical to fully fine-tune the model on downstream tasks in
a low-data regime.
For that reason, efficient transfer learning (ETL) [15,
71–73] on pre-trained VLMs has gained popularity. ETL
represents transfer learning to downstream tasks in both
Figure 2. Illustration of (a) Zero-shot CLIP, (b) prompt tuning, (c) adapter-style tuning and (d) our proposed Task Residual Tuning (TaskRes).
Our method introduces a prior-independent task residual to the fixed pre-trained classifier ( i.e., text embeddings of CLIP), being free of
running the text encoder every time or extra architecture design.
parameter- and data-efficient manner. The core of ETL is
twofold: (i) properly inheriting the well-learned knowledge
structure of VLMs, which is already transferable; (ii) effec-
tively exploring the task-specific knowledge given limited
data. However, most existing ETL approaches, i.e., prompt
tuning (PT) [72, 73] and adapter-style tuning (AT) [15, 71],
either damage the prior knowledge of VLMs or learn the
new knowledge of a task in an inappropriate/insufficient
way. For example, instead of using the pre-trained text-
based classifier, CoOp [73] (in Figure 2 (b)) is proposed
to learn a continuous prompt for synthesizing a completely
new one, which inevitably causes the loss of previous knowl-
edge. Consequently, CoOp underperforms Zero-shot CLIP
by 1.03%/0.37% in 1-/2-shot learning on ImageNet (see Fig-
ure 1). In contrast, CLIP-Adapter [15] preserves the pre-
trained classifier, but is excessively biased towards the prior
knowledge when learning a new task, i.e., it transforms the
pre-trained classifier weights to be task-specific as illustrated
in Figure 2 (c). This results in an inferior new knowledge
exploration, thereby a lower accuracy as shown in Figure 1.
For better ETL on pre-trained VLMs, we propose a
new efficient tuning approach named Task Residual Tuning
(TaskRes), which performs directly on the text-based classi-
fier and explicitly decouples the old knowledge of the pre-
trained models and the new knowledge for a target task.
The rationale is that the decoupling enables a better old
knowledge inheritance from VLMs and a more flexible task-
specific knowledge exploration ,i.e., the learned knowledge
w.r.t. the task is independent of the old knowledge. Specif-
ically, TaskRes keeps the original classifier weights frozen
and introduces a set of prior-independent parameters that are
added to the weights. These additive parameters, tuned for
adaptation to the target task, are thus named “task residual”.
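The following minimal sketch illustrates this decoupling, assuming a CLIP-style text-based classifier and a scaling factor for the residual; the class and parameter names are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskResClassifier(nn.Module):
    """Frozen text-based classifier plus a learnable, prior-independent residual."""
    def __init__(self, text_weights, alpha=0.5):
        super().__init__()
        # text_weights: (num_classes, dim) class embeddings from the pre-trained text encoder
        self.register_buffer("base", text_weights)                    # frozen prior knowledge
        self.residual = nn.Parameter(torch.zeros_like(text_weights))  # task residual
        self.alpha = alpha

    def forward(self, image_features):
        w = self.base + self.alpha * self.residual   # adapted classifier: prior + residual
        w = F.normalize(w, dim=-1)
        x = F.normalize(image_features, dim=-1)
        return x @ w.t()                             # cosine-similarity logits

# toy usage: 100 classes, 512-d embeddings, a batch of 8 image features
logits = TaskResClassifier(torch.randn(100, 512))(torch.randn(8, 512))
```

In this reading, gradients flow only into the residual, so the pre-trained classifier is preserved exactly while the task-specific knowledge lives in an independent set of parameters.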
To gain insight into how TaskRes works, we perform extensive experiments across 11 benchmark datasets [73] and
conduct a systematic investigation of learned task residu-
als. The experimental results demonstrate that introducing
task residual can significantly enhance the transfer perfor-
mance. We visualize the correlation between the magnitude
of learned task residual and the difficulty of transferring a
pre-trained model to a downstream task, and observe that the
magnitude increases with the transfer difficulty. This sug-
gests the residual is automatically adapted to the task to fully
explore the new knowledge, thereby achieving a new state-
of-the-art performance on 11 diverse datasets. Furthermore,
it is worth noting that our method requires minimal effort for
the implementation, i.e., technically adding one line of code
only. Our contributions are summarized below:
• We for the first time emphasize the necessity of a
proper knowledge inheritance from pre-trained VLMs
to downstream tasks via ETL, reveal the pitfalls of exist-
ing tuning paradigms, and conduct an in-depth analysis
to manifest that decoupling the old pre-trained knowl-
edge and the new task-specific knowledge is the key.
• We propose a new efficient tuning approach named Task
Residual Tuning (TaskRes), which achieves a better old
knowledge inheritance from VLMs and a more flexible
task-specific knowledge exploration.
• TaskRes is convenient to use, requiring only a few tuning
parameters and an effortless implementation.
|
Zhang_Prototypical_Residual_Networks_for_Anomaly_Detection_and_Localization_CVPR_2023
|
Abstract
Anomaly detection and localization are widely used in
industrial manufacturing for their efficiency and effectiveness.
Anomalies are rare and hard to collect, and supervised mod-
els easily over-fit to these seen anomalies with a handful of
abnormal samples, producing unsatisfactory performance.
On the other hand, anomalies are typically subtle, hard to
discern, and of various appearance, making it difficult to
detect anomalies and let alone locate anomalous regions.
To address these issues, we propose a framework called
Prototypical Residual Network (PRN), which learns feature
residuals of varying scales and sizes between anomalous
and normal patterns to accurately reconstruct the segmen-
tation maps of anomalous regions. PRN mainly consists of
two parts: multi-scale prototypes that explicitly represent the
residual features of anomalies to normal patterns; a multi-
size self-attention mechanism that enables variable-sized
anomalous feature learning. Besides, we present a variety of
anomaly generation strategies that consider both seen and
unseen appearance variance to enlarge and diversify anoma-
lies. Extensive experiments on the challenging and widely
used MVTec AD benchmark show that PRN outperforms cur-
rent state-of-the-art unsupervised and supervised methods.
We further report SOTA results on three additional datasets
to demonstrate the effectiveness and generalizability of PRN.
|
1. Introduction
The human cognition and visual system has an inherent
ability to perceive anomalies [53]. Not only can humans
distinguish between defective and non-defective images, but
they can also point to the location of anomalies even if
they have seen none or only a limited number of anomalies.
Anomaly detection (image-level binary classification) and
anomaly localization (pixel-level binary classification) are
introduced for the same purpose, and have been widely used
∗Corresponding author.
Figure 1. Anomaly detection and localization examples (columns: input, GT, PatchCore, DRA, ours) on MVTec
[4]. Compared with the unsupervised method PatchCore [41] and
the supervised method DRA [13], the proposed PRN is able to
locate the anomalous regions more accurately.
in various scenarios due to their efficiency and remarkable
accuracy, including industrial defect detection [4, 7, 34, 61],
medical image analysis [52] and video surveillance [32].
Given its importance, a significant amount of work has
been devoted to anomaly detection and anomaly localization,
but few have addressed both detection and localization prob-
lems well at the same time. We argue that real-world anoma-
lous data weaken these models mainly in three aspects: I) the
amount of abnormal samples is limited and significant fewer
than normal samples, producing data distributions that lead
to a naturally imbalanced learning problem; II) anomalies
are typically subtle and hard to discern, since normal patterns
still dominate the anomalous image; identifying abnormal
regions out of the whole image is the key to anomaly detec-
tion and localization; III) the appearance of anomalies varies
significantly, i.e., abnormal regions can take on a variety of
sizes, shapes and numbers, and such appearance variations
Figure 2. Indecipherable problem of supervised methods DevNet
[35] and DRA [13]. Both images are detected as anomalous. Other
methods mistakenly highlight normal regions rather than defect
regions, whereas PRN correctly pinpoints the defect regions.
make it challenging to well localize all the anomalies.
Without adequate anomalies for training, unsupervised
models become the de facto dominant approaches, which
get rid of the imbalance problem by learning the distribu-
tion of normal samples [5, 9, 10, 12, 18, 25, 41–43, 47, 67] or
generating sufficient synthetic anomalies [26, 28, 50, 63, 68].
However, these methods are opaque to genuine anomalies,
resulting in implicit decisions that may induce many false
negatives and false positives. Besides, unsupervised methods
rely heavily on the quality of normal samples, and thus are
not robust enough and perform poorly on uncalibrated or
noisy datasets [19]. As shown in Fig. 1, unsupervised models
predict broad regions around the anomaly. We attribute this
problem to less discriminative abilities of these methods.
Recently, several supervised methods [13, 35, 46] are in-
troduced. DeepSAD [46] enlarges the margin between the
anomaly and the one-class center in the latent space to obtain
more compact one-class descriptors from limited seen anomalies.
DRA [13] and DevNet [35] formulate anomaly detection as
a multi-instance learning (MIL) problem, scoring an image
as anomaly if any image patch is a defect region. MIL-based
methods enforce the learning at fine-grained image patch
level, which effectively reduces the interference of normal
patches in the anomalous images. Yet, these approaches typi-
cally struggle to accurately locate all anomalous regions with
image-level supervision, as shown in Fig. 1. In particular,
when the anomalous regions only occupy a tiny part of image
patches, the image-level representation may be dominated by
the normal regions and disregard tiny anomalies, which
may cause inconsistent image-level and pixel-level perfor-
mance as shown in Table 1. Furthermore, as shown in Fig. 2,
these methods also produce decisions that are difficult to
interpret.
In this paper, we propose a framework called Prototypical
Residual Network (PRN) as an effective remedy for afore-
said issues on anomaly detection and localization. First, we
propose multi-scale prototypes to represent normal patterns.
In contrast to previous methods that construct normal patterns from concatenated feature memory [41] or randomly
sampled feature maps [63], we construct normal patterns
with prototypes of intermediate feature maps of different
scales, thereby preserving the spatial information and provid-
ing precise and representative normal patterns. Further, we
obtain the feature map residuals via the deviation between
the anomalous image and the closest prototype at each scale,
and we add multi-scale fusion blocks to exchange informa-
tion across different scales. Second, since the appearance of
anomaly regions varies a lot, it is necessary to learn relation-
ships among patches from multiple receptive fields. Thus, we
introduce a multi-size self-attention [33, 55, 58, 59] mecha-
nism, which operates on patches of different receptive fields
to detect patch-level inconsistencies at different sizes. Fi-
nally, unlike previous methods [13, 35] that use image-level
supervision for training, our model learns to reconstruct
the anomaly segmentation map with pixel-level supervision,
which focuses more on the anomalous regions and preserves
better generalization. Besides, we put forward a variety of
anomaly generation strategies that efficiently mitigate the
impact of data imbalance and enrich the anomaly appearance.
With the proposed modules, our method achieves more accu-
rate localization than previous unsupervised and supervised
methods, as shown in Fig. 1 and Fig. 2.
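As a rough, single-scale illustration of the residual idea described above, the sketch below matches an input feature map to its closest normal prototype and returns their deviation; the matching metric, tensor shapes, and the omission of multi-scale fusion and self-attention are simplifications of ours, not the paper's exact design.

```python
import torch

def prototype_residual(feat, prototypes):
    """feat: (C, H, W) feature map of the query image at one scale.
    prototypes: (K, C, H, W) feature maps of K normal prototypes at the same scale.
    Returns the residual to the closest prototype, preserving spatial layout."""
    dist = ((prototypes - feat.unsqueeze(0)) ** 2).flatten(1).sum(dim=1)  # (K,) squared distances
    nearest = prototypes[dist.argmin()]                                   # closest normal pattern
    return feat - nearest                                                 # residual for later blocks

# toy usage with random tensors standing in for backbone features
res = prototype_residual(torch.randn(64, 32, 32), torch.randn(10, 64, 32, 32))
```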
The main contributions of this paper are summarized as
follows:
•We propose a novel Prototypical Residual Networks
for anomaly detection and localization. Equipped with
multi-scale prototypes and the multi-size self-attention
mechanism, PRN learns residual representations among
multi-scale feature maps and within the multi-size re-
ceptive fields at each scale.
•We present a variety of anomaly generation strategies
that consider both seen and unseen appearance vari-
ance to enlarge and diversify anomalies.
•We perform extensive experiments on four datasets to
show that our approach achieves new SOTA anomaly
detection performance and outperforms current SOTA
in anomaly localization performance by a large margin.
|
Yun_IFSeg_Image-Free_Semantic_Segmentation_via_Vision-Language_Model_CVPR_2023
|
Abstract
Vision-language (VL) pre-training has recently gained
much attention for its transferability and flexibility in novel
concepts (e.g., cross-modality transfer) across various visual
tasks. However, VL-driven segmentation has been under-
explored, and the existing approaches still have the bur-
den of acquiring additional training images or even seg-
mentation annotations to adapt a VL model to downstream
segmentation tasks. In this paper, we introduce a novel
image-free segmentation task where the goal is to perform
semantic segmentation given only a set of the target seman-
tic categories, but without any task-specific images and an-
notations. To tackle this challenging task, our proposed
method, coined IFSeg, generates VL-driven artificial image-
segmentation pairs and updates a pre-trained VL model
to a segmentation task. We construct this artificial train-
ing data by creating a 2D map of random semantic cat-
egories and another map of their corresponding word to-
kens. Given that a pre-trained VL model projects visual and
text tokens into a common space where tokens that share
the semantics are located closely, this artificially generated
word map can replace the real image inputs for such a VL
model. Through an extensive set of experiments, our model
not only establishes an effective baseline for this novel task
but also demonstrates strong performances compared to ex-
isting methods that rely on stronger supervision, such as
task-specific images and segmentation masks. Code is avail-
able at https://github.com/alinlab/ifseg .
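The data-construction step described above can be pictured with the following sketch, which builds one artificial training pair from nothing but the category word embeddings; the grid size and the uniform random sampling of categories are illustrative assumptions of ours.

```python
import torch

def make_artificial_pair(word_embeddings, grid=16):
    """word_embeddings: (num_categories, dim) embeddings of the target category words
    from the pre-trained VL model. Returns (inputs, labels): a 2D map of word-token
    embeddings standing in for image tokens, and the matching map of category indices
    used as the segmentation target."""
    num_cat = word_embeddings.size(0)
    labels = torch.randint(num_cat, (grid, grid))   # random semantic-category map
    inputs = word_embeddings[labels]                # (grid, grid, dim) word-token map
    return inputs, labels

# toy usage: 4 categories embedded in a 512-d shared vision-language space
inputs, labels = make_artificial_pair(torch.randn(4, 512))
```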
|
1. Introduction
Understanding a new concept with less cost ( e.g., col-
lecting data, annotations, or training) is a challenging yet
essential problem in machine learning [41]. The most com-
mon practice is fine-tuning a foundation model, pre-trained
on a large amount of data [3,6,12,18], for downstream tasks.
*Equal contribution
†Work was done while at KAIST
Figure 1. Visualization of image-free segmentation results via
IFSeg on a web image. Here, we present a web image (Top) and
its segmentation results (Middle and Bottom) of our image-free
segmentation approach. Note that our model is not trained with any
task-specific images and annotations, but only the text words (e.g.,
“grass”, “cat”, “dog” and “other”) as semantic categories.
In particular, such large-scale models have shown success-
ful adaptation to downstream tasks with only little supervi-
sion across vision [6] and language [3] domains. Recently,
pre-training approaches in the vision-language (VL) domain
have also achieved remarkable results in transferring to novel
tasks ( e.g., few-shot or zero-shot transfer [37]) with various
elaborate designs, including modality interaction between
the dual encoders [20, 32], the multi-modal encoder [22, 43],
and the encoder-decoder [1, 8, 39, 42, 44, 49].
Semantic segmentation is one of the crucial tasks in com-
puter vision that requires understanding dense representa-
tions for pixel-wise classifications. Inspired by the success
of the contrastive VL pre-training, CLIP [32], several re-
cent attempts [15, 25, 27, 48, 53] have explored CLIP-based
segmentation approaches for better transferability ( e.g., zero-
shot [4, 45] and open-vocabulary segmentation [51]). How-
ever, the existing zero-shot or open-vocabulary segmentation
approaches still suffer from a burden of training on addi-
tional image data, segmentation annotations [15, 25, 48, 53],
or natural language supervision [27,47], to adapt pre-trained
VL models to downstream segmentation tasks. In the wild,
however, such training data is not readily available; e.g.,
there would be no task-specific training images or labels for
novel web images like Fig. 1. This limitation inspires us to
investigate how to fully utilize the VL models for seman-
tic segmentation in a lightweight manner, even without any
image data or human-annotated supervision.
Meanwhile, the recent encoder-decoder VL models [1, 8,
39, 42, 44, 49] also have gained popularity with their unique
characteristics of image-to-text generation via the VL de-
coder network. Motivated by this, we explore the potential
usability of the VL decoder to segment pixels in the text
generation manner as an alternative to traditional vision
segmentation decoders,
|
Zhang_Differentiable_Architecture_Search_With_Random_Features_CVPR_2023
|
Abstract
Differentiable architecture search (DARTS) has signif-
icantly promoted the development of NAS techniques be-
cause of its high search efficiency and effectiveness but suf-
fers from performance collapse. In this paper, we make
efforts to alleviate the performance collapse problem for
DARTS from two aspects. First, we investigate the expres-
sive power of the supernet in DARTS and then derive a
new setup of DARTS paradigm with only training Batch-
Norm. Second, we theoretically find that random features
dilute the auxiliary connection role of skip-connection in
supernet optimization and enable search algorithm focus on
fairer operation selection, thereby solving the performance
collapse problem. We instantiate DARTS and PC-DARTS
with random features to build an improved version for each
named RF-DARTS and RF-PCDARTS respectively. Experi-
mental results show that RF-DARTS obtains 94.36% test ac-
curacy on CIFAR-10 (which is the nearest optimal result in
NAS-Bench-201), and achieves the newest state-of-the-art
top-1 test error of 24.0% on ImageNet when transferring
from CIFAR-10. Moreover, RF-DARTS performs robustly
across three datasets (CIFAR-10, CIFAR-100, and SVHN)
and four search spaces (S1-S4). Besides, RF-PCDARTS
achieves even better results on ImageNet, that is, 23.9%
top-1 and 7.1% top-5 test error, surpassing representative
methods like single-path, training-free, and partial-channel
paradigms directly searched on ImageNet.
|
1. Introduction
Differentiable architecture search (DARTS) [27] has
demonstrated both higher search efficiency and better
search efficacy than early pioneering neural architecture
search (NAS) [1,50,51] attempts in the image classification
task. In the past few years, many following works further
*Equal contributions. This work is done during Yonggang Li’s internship at MEGVII Technology. This work is supported by Science and Technology Innovation 2030-New Generation Artificial Intelligence (2020AAA0104401).
improve DARTS by introducing additional modules, such
as Gumbel-softmax [13], early stop criterion [23], auxiliary
skip-connection [10], etc. We have witnessed tremendous
improvements in the image recognition task, but the field is
moving farther away from exploring how DARTS works. In
this work, we instead intend to demystify DARTS by disassembling
its key modules rather than making it more complex.
We overview the vanilla DARTS paradigm and summarize
three key modules, namely dataset, evaluation metric, and
supernet, as follows:
•Dataset. DARTS [27] searches on proxy dataset
and then transfers to target dataset due to huge
requirements for GPU memory. PC-DARTS [36]
proves that proxy datasets inhibit the effectiveness of
DARTS and directly searching on target dataset ob-
tains more promising architectures. UnNAS [25] and
RLNAS [46] ablate the role of labels in DARTS, and
further conclude that ground truth labels are not neces-
sary for DARTS.
•Evaluation metric. DARTS [27] introduces architec-
ture parameters to reflect the strengths of the candidate
operations. PT-DARTS [32] suspects the effectiveness
of architecture parameters and shows that the magni-
tude of architecture parameters does not necessarily
indicate how much the operation contributes to the su-
pernet’s performance. FreeNAS [45] and TE-NAS [7]
further put forward training-free evaluation metrics to
predict the performance of candidate architectures.
•Supernet. DARTS encodes all candidate architectures
in search space into the supernet. The search cell of
supernet will change as the search space changes. R-
DARTS [41] proposes four challenging search spaces
S1-S4 where DARTS obtains inferior performance
than the Random-search baseline. R-DARTS attributes
the failure of vanilla DARTS to the dominant skip-
connections. Thus R-DARTS concludes that the topol-
ogy of supernet has great influence on the efficacy
of DARTS. P-DARTS [9] finds that the depth gap
(a) Results on CIFAR-10
(b) Results on CIFAR-100
(c) Results on ImageNet16-120
Figure 1. The correlation between the DARTS supernet performance (blue histograms) and the searched architecture performance (orange
histograms) in NAS-Bench-201 [14] search space (best viewed in color). We directly searched on target datasets CIFAR-10, CIFAR-100,
and ImageNet16-120 in three runs and plot average results. Histograms on three datasets reflect a consistent phenomenon. Supernet
performance and the expressive power are positively correlated. However, supernet with higher performance can not search for better
architectures, and supernet optimized with only BN has the best search effect.
between search and evaluation architecture prevents
DARTS from achieving better search results.
Option (state)    Convolution    BN Affine    Expressive Power
A (unexplored)         ✓              ✓             strong
B (explored)           ✓              ✗               ↓
C (unexplored)         ✗              ✓               ↓
D (unexplored)         ✗              ✗              weak
Table 1. The four combinations of learnable modules in the supernet. We keep the composition of the supernet unchanged and only change which weights are optimizable. ✓ means the weights are updated with the optimizer; ✗ means the weights are frozen at initialization.
In this paper, we take a further step to investigate the su-
pernet in DARTS from two respects. First, we ablate the
expressive power of supernet in DARTS. Specifically, each
convolution operation (like separable convolutions) consists
of two learnable modules: convolution layers and Batch-
Norm (BN) [21] layers. Each convolution layer is followed
by a private BN layer. To study the expressive power in iso-
lation, we thoroughly traverse four combinations (named A,
B, C, and D for simplicity) of learnable modules, as shown in
Tab. 1. Existing DARTS variants [9, 27, 36] adopt option
B by default, which disables BN affine weights and only
trains convolution weights during the supernet training. On
the contrary, option A, C, and D are still unexplored, thus
it is a mystery what will happen when supernet is equipped
with different expressive power, that is, trained with option
of A, C, and D. Hence, we make a sanity check between su-
pernet performance and searched architecture performance
across the above four combinations. As shown in Fig. 1, the
relative ranking of supernet performance is A ≈ B > C > D,
which is consistent with the ranking of the supernet's expressive
power. However, the relative ranking of searched architecture
performance is C ≫ D > A ≈ B, which sounds counter-intuitive.
This result implies that the performance of the supernet is not
that significant, and that scaling random features with BN affine
weights is good enough for architecture search. Therefore, we propose a new extension of
DARTS with random features driven by the surprising re-
sults of only training BN. Second, we explore the work-
ing mechanism of random features by considering skip-
connection roles in DARTS. Skip-connection in DARTS
plays two roles [10]: 1) as a shortcut to help the optimization
of supernet, and 2) a candidate operation for architecture
search. Random features dilute the role of auxiliary con-
nection that skip-connection plays in supernet training and
enable DARTS to focus on fairer operation selection, thus
implicitly solving performance collapse of DARTS [10,41].
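A minimal sketch of the resulting supernet training setup (option C in Table 1) is given below: convolution weights stay at their random initialization while only the BatchNorm affine parameters remain trainable. It assumes a standard PyTorch supernet whose BN layers are constructed with affine parameters enabled; the architecture parameters of DARTS are handled by the search algorithm as usual.

```python
import torch.nn as nn

def set_random_feature_mode(supernet: nn.Module):
    """Freeze convolution weights at their random initialization and train only the
    BatchNorm affine parameters (option C in Table 1). Architecture parameters are
    assumed to be handled by the search algorithm elsewhere."""
    for m in supernet.modules():
        if isinstance(m, (nn.Conv1d, nn.Conv2d, nn.Conv3d)):
            for p in m.parameters():
                p.requires_grad = False           # random features: never updated
        elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)) and m.affine:
            m.weight.requires_grad = True         # gamma
            m.bias.requires_grad = True           # beta
```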
Based on the versatility of supernet optimization, we
arm popular DARTS [27] and PC-DARTS [36] with ran-
dom features to build more effective algorithms RF-DARTS
and RF-PCDARTS. On CIFAR-10, RF-DARTS obtains
94.36% test accuracy that is the nearest optimal re-
sults (94.37%) in NAS-Bench-201. RF-DARTS achieves
the state-of-the-art 24.0% top-1 test error on ImageNet
when transferred from CIFAR-10 in DARTS search space.
RF-DARTS also performs robustly on CIFAR-10, CIFAR-
100, and SVHN across S1-S4. RF-PCDARTS directly
searches on ImageNet and achieves 23.9% top-1 test er-
ror, which surpasses representative methods from single-
path, training-free, and partial channel paradigms. Overall,
comprehensive results reveal that the expressive power of
the DARTS supernet is stronger than necessary, and random features
are sufficient for DARTS. We hope these essential analyses
and results will inspire new understandings of NAS.
|
Xu_MM-3DScene_3D_Scene_Understanding_by_Customizing_Masked_Modeling_With_Informative-Preserved_CVPR_2023
|
Abstract
Masked Modeling (MM) has demonstrated widespread
success in various vision challenges, by reconstructing
masked visual patches. Yet, applying MM for large-scale
3D scenes remains an open problem due to the data spar-
sity and scene complexity. The conventional random mask-
ing paradigm used in 2D images often causes a high risk of
ambiguity when recovering the masked region of 3D scenes.
To this end, we propose a novel informative-preserved re-
construction, which explores local statistics to discover and
preserve the representative structured points, effectively en-
hancing the pretext masking task for 3D scene understand-
ing. Integrated with a progressive reconstruction man-
ner, our method can concentrate on modeling regional ge-
ometry and enjoy less ambiguity for masked reconstruc-
tion. Besides, such scenes with progressive masking ra-
*Equal contribution.
†Corresponding authors.
tios can also serve to self-distill their intrinsic spatial con-
sistency, requiring the model to learn consistent representations
from unmasked areas. By elegantly combining informative-
preserved reconstruction on masked areas and consistency
self-distillation from unmasked areas, a unified framework
called MM-3DScene is yielded. We conduct comprehensive
experiments on a host of downstream tasks. The consistent
improvement ( e.g., +6.1% [email protected] on object detection
and +2.2% mIoU on semantic segmentation) demonstrates
the superiority of our approach.
|
1. Introduction
3D scene understanding plays an essential role in vari-
ous visual applications, such as virtual reality, robot naviga-
tion, and autonomous driving. Over the past few years, deep
learning has dominated 3D scene parsing tasks [26, 40, 73].
However, traditional supervised learning methods require
massive annotation of 3D scene data that are extremely la-
borious to obtain [9], where millions of points or mesh ver-
tices per scene need to be labeled.
To solve this, self-supervised learning (SSL) becomes
a favorable choice since it can extract rich representa-
tions without any annotation [10, 16, 17]. Masked Mod-
eling (MM) [16, 57], as one of the representative meth-
ods in SSL, has recently drawn significant attention in the vision
community. It has also been explored in 3D vision
[31,39,51,67,69,70], where these 3D MM approaches
randomly mask local regions of point clouds, and pre-train
neural networks to reconstruct the masked areas. Neverthe-
less, such random masking paradigms are not feasible for
large-scale 3D scenes, which often causes a high risk of re-
construction ambiguity. As illustrated in Fig. 1 (a), a chair
and a TV are totally masked, which are extremely difficult
to be recovered without any context guidance. Such ambi-
guity often makes MM difficult to learn informative repre-
sentation for 3D scenes. Hence, we ask a natural question:
can we customize a better way of masked modeling for 3D
scene understanding?
To tackle this question, we propose a novel informative-
preserved masked reconstruction scheme in this paper.
Specifically, we leverage local statistics of each point ( i.e.,
the difference between each point and its neighboring points
in terms of color and shape) as guidance to discover the rep-
resentative structured points which are usually located at the
boundary regions in the 3D scene. We denote these points
as ‘Informative Points ’ since they provide highly useful
information hints and rich semantic context for assisting
masked reconstruction. To this end, our mask strategy is
definite: to preserve Informative Points in a scene and mask
other points. In this way, the basic geometric information
of a scene is explicitly retained , which effectively simpli-
fies the pretext task and reduces ambiguity.
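The sketch below shows one way such local statistics could be turned into a mask that preserves Informative Points; the exact statistic, neighborhood size, and keep ratio used by MM-3DScene may differ, so this is an assumption-laden illustration rather than the paper's procedure.

```python
import torch

def informative_point_mask(xyz, rgb, k=16, keep_ratio=0.3):
    """xyz: (N, 3) coordinates, rgb: (N, 3) colors. Scores each point by how much it
    differs from its k nearest neighbours in color and geometry, then keeps the
    highest-scoring ('Informative') points and masks the rest."""
    dist = torch.cdist(xyz, xyz)                              # (N, N) pairwise distances
    knn = dist.topk(k + 1, largest=False).indices[:, 1:]      # neighbour indices, excluding self
    geo_var = (xyz[knn] - xyz[:, None]).norm(dim=-1).mean(1)  # local shape variation
    col_var = (rgb[knn] - rgb[:, None]).norm(dim=-1).mean(1)  # local color variation
    score = geo_var + col_var
    keep = score.topk(int(keep_ratio * xyz.size(0))).indices  # indices of preserved points
    mask = torch.ones(xyz.size(0), dtype=torch.bool)
    mask[keep] = False                                        # True = masked, False = preserved
    return mask

# toy usage on a cloud of 1024 points
mask = informative_point_mask(torch.rand(1024, 3), torch.rand(1024, 3))
```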
Based on our mask strategy, a progressive masked recon-
struction manner is integrated, to better model the masked
areas. As illustrated in Fig. 1 (b), during each iteration, our
method concentrates on reconstructing the local regional
geometric patterns rather than rebuilding the original intact
scene. In doing so, it enjoys less ambiguity and is able to
restore accurate geometric information.
Moreover, we realize the information of unmasked ar-
eas ( i.e., Informative Points) is underexplored. We find
that there exists point correspondence in the unmasked ar-
eas under progressive masking ratios. Accordingly, we in-
troduce a dual-branch encoding scheme for learning such
intrinsic consistency, with the ultimate goal of unearthing
the consistent ( i.e., masking-invariant) representations from
unmasked areas. This leads to a more powerful SSL frame-
work on 3D scenes, called MM-3DScene, which elegantly
combines the masked modeling on the masked and un-
masked areas in 3D scenes together, while complements
each other. It achieves superior performance in Table 7 (v).

Datasets     Complexity            Task           Gain (from scratch)
S3DIS        Entire floor, office  segmentation   +1.5% mIoU
                                   segmentation   +2.2% mIoU
ScanNet      Large rooms           detection      +4.4 [email protected]
                                   detection      +6.1 [email protected]
SUN-RGBD     Cluttered rooms       detection      +2.9 [email protected]
                                   detection      +4.4 [email protected]

Table 1. Summary of fine-tuning MM-3DScene on various downstream tasks and datasets for 3D understanding. Our MM-3DScene conspicuously boosts the performance of the baseline model trained from scratch.
Our contributions are motivated and comprehensive:
• We raise the concept of Informative Points – the points
providing significant information hints, and indicate
that preserving them is critical for assisting masked
modeling on 3D scenes (Table 8).
• For masked areas, we propose an informative-
preserved reconstruction scheme to focus on restoring
the regional geometry in a novel progressive manner,
which explicitly simplifies the pretext task.
• For unmasked areas, we introduce a self-distillation
branch, which is encouraged to learn spatially consistent
representations under progressive masking ratios.
• A unified self-supervised framework, called MM-
3DScene, delivers performance improvements across
various downstream tasks and datasets (Table 1).
|
Zhang_GrowSP_Unsupervised_Semantic_Segmentation_of_3D_Point_Clouds_CVPR_2023
|
Abstract
We study the problem of 3D semantic segmentation from
raw point clouds. Unlike existing methods which primarily
rely on a large amount of human annotations for training
neural networks, we propose the first purely unsupervised
method, called GrowSP , to successfully identify complex se-
mantic classes for every point in 3D scenes, without need-
ing any type of human labels or pretrained models. The key
to our approach is to discover 3D semantic elements via
progressive growing of superpoints. Our method consists
of three major components, 1) the feature extractor to learn
per-point features from input point clouds, 2) the superpoint
constructor to progressively grow the sizes of superpoints,
and 3) the semantic primitive clustering module to group
superpoints into semantic elements for the final semantic
segmentation. We extensively evaluate our method on mul-
tiple datasets, demonstrating superior performance over all
unsupervised baselines and approaching the classic fully-
supervised PointNet. We hope our work could inspire more
advanced methods for unsupervised 3D semantic learning.
|
1. Introduction
Giving machines the ability to automatically discover se-
mantic compositions of complex 3D scenes is crucial for
many cutting-edge applications. In the past few years, there
has been tremendous progress in fully-supervised semantic
segmentation for 3D point clouds [14]. From the seminal
works PointNet [40] and SparseConv [12] to a plethora of
*Corresponding author
recent neural models [21, 27, 41, 55, 60], both the accuracy
and efficiency of per-point semantic estimation have been
greatly improved. Unarguably, the success of these meth-
ods primarily relies on large-scale human annotations for
training deep neural networks. However, manually annotat-
ing real-world 3D point clouds is extremely costly due to
the unstructured data format [3, 20]. To alleviate this prob-
lem, a number of recent methods start to use fewer 3D point
labels [19, 69], cheaper 2D image labels [59, 77], or active
annotations [22,63] in training. Although achieving promis-
ing results, they still need tedious human efforts to annotate
or align 3D points across images for particular datasets, thus
being inapplicable to novel scenes without training labels.
In this paper, we make the first step towards unsuper-
vised 3D semantic segmentation of real-world point clouds.
To tackle this problem, there could be two strategies: 1) to
na¨ıvely adapt existing unsupervised 2D semantic segmen-
tation techniques [4, 7, 24] to 3D domain, and 2) to apply
existing self-supervised 3D pretraining techniques [17, 66]
to learn discriminative per-point features followed by clas-
sic clustering methods to obtain semantic categories. For
unsupervised 2D semantic methods, although achieving en-
couraging results on color images, they can be hardly ex-
tended to 3D point clouds primarily because: a) there is
no general pretrained backbone to extract high-quality fea-
tures for point clouds due to the lack of representative 3D
datasets akin to ImageNet [46] or COCO [29], b) they are
usually designed to group pixels with similar low-level fea-
tures, e.g. colors or edges, as a semantic class, whereas such
a heuristic is normally not satisfied in 3D point clouds due
to point sparsity and spatial occlusions. For self-supervised
3D pretraining methods, although the pretrained per-point
features could be discriminative, they fundamentally lack
semantic meaning because the commonly adopted
data augmentation techniques do not explicitly capture cat-
egorical information. Section 4 clearly demonstrates that all
these methods fail catastrophically on 3D point clouds.
Given a sparse point cloud composed of multiple seman-
tic categories, we can easily observe that a relatively small
local point set barely contains distinctive semantic infor-
mation. Nevertheless, when the size of a local point set is
gradually growing, that surface patch naturally emerges as
a basic element or primitive for a particular semantic class,
and then it becomes much easier for us to identify the cat-
egories just by combining those basic primitives. For ex-
ample, two individual 3D points sampled from a spacious
room are virtually meaningless, whereas two patches might
be easily identified as the back and/or arm of chairs.
Inspired by this, we introduce a simple yet effective
pipeline to automatically discover per-point semantics, sim-
ply by progressively growing the size of per-point neigh-
borhood, without needing any human labels or pretrained
backbone. In particular, our architecture consists of three
major components: 1) a per-point feature extractor which
is flexible to adopt an existing (untrained) neural network
such as the powerful SparseConv [12]; 2) a superpoint con-
structor which progressively creates larger and larger su-
perpoints during training to guide semantic learning; 3) a
semantic primitive clustering module which aims to group
basic elements of semantic classes via an existing clustering
algorithm such as K-means. The key to our pipeline is the
superpoint constructor together with a progressive growing
strategy in training. Basically, this component drives the
feature extractor to progressively learn similar features for
3D points within a particular yet growing superpoint, while
the features of different superpoints tend to be pushed as
distinct elements of semantic classes. Our method is called
GrowSP, and Figure 1 shows qualitative results on an indoor
3D scene; a minimal sketch of the clustering step appears
after the contribution list below. Our contributions are:
• We introduce the first purely unsupervised 3D semantic
segmentation pipeline for real-world point clouds, with-
out needing any pretrained models or human labels.
• We propose a simple strategy to progressively grow su-
perpoints during network training, allowing meaningful
semantic elements to be learned gradually.
• We demonstrate promising semantic segmentation results
on multiple large-scale datasets, being clearly better than
baselines adapted from unsupervised 2D methods and
self-supervised 3D pretraining methods. Our code is at:
https://github.com/vLAR-group/GrowSP
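As referenced above, the following is a greatly simplified sketch of one pseudo-labelling round in the GrowSP pipeline; the mean-pooling of features, the use of scikit-learn K-means, and the number of semantic primitives are illustrative assumptions rather than the exact recipe of the paper.

```python
import torch
from sklearn.cluster import KMeans

def growsp_pseudo_labels(point_feats, superpoint_ids, num_primitives=300):
    """One (greatly simplified) pseudo-labelling round: average per-point features inside
    each current superpoint, cluster the superpoint descriptors into semantic primitives
    with K-means, and broadcast each primitive id back to its points as a pseudo-label.
    'Growing' simply means superpoint_ids come from progressively coarser groupings."""
    ids = superpoint_ids.unique()
    sp_feats = torch.stack([point_feats[superpoint_ids == i].mean(0) for i in ids])
    primitive_of_sp = KMeans(n_clusters=num_primitives).fit_predict(
        sp_feats.detach().cpu().numpy())
    pseudo = torch.empty_like(superpoint_ids)
    for sp, prim in zip(ids.tolist(), primitive_of_sp):
        pseudo[superpoint_ids == sp] = int(prim)
    return pseudo  # classification targets for the next training round
```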
|
Yu_PEAL_Prior-Embedded_Explicit_Attention_Learning_for_Low-Overlap_Point_Cloud_Registration_CVPR_2023
|
Abstract
Learning distinctive point-wise features is critical for
low-overlap point cloud registration. Recently, it has
achieved huge success in incorporating Transformer into
point cloud feature representation, which usually adopts
a self-attention module to learn intra-point-cloud features
first, then utilizes a cross-attention module to perform fea-
ture exchange between input point clouds. The advan-
tage of Transformer models mainly benefits from the use
of self-attention to capture the global correlations in fea-
ture space. However, these global correlations may in-
volve ambiguity for point cloud registration task, espe-
cially in indoor low-overlap scenarios, because the corre-
lations with an extensive range of non-overlapping points
may degrade the feature distinctiveness. To address this is-
sue, we present PEAL, a Prior-embedded Explicit Attention
Learning model. By incorporating prior knowledge into
the learning process, the points are divided into two parts.
One includes points lying in the putative overlapping region
and the other includes points located in the putative non-
overlapping region. Then PEAL explicitly learns one-way
attention with the putative overlapping points. This simple
design attains surprisingly strong performance, significantly re-
lieving the aforementioned feature ambiguity. Our method
improves the Registration Recall by 6+% on the challenging
3DLoMatch benchmark and achieves state-of-the-art per-
formance on Feature Matching Recall, Inlier Ratio, and
Registration Recall on both 3DMatch and 3DLoMatch.
|
1. Introduction
Rigid point cloud registration has always been a foun-
dational yet challenging task in 3D vision and robotics
[2, 3, 10, 25], which aims to estimate an optimal rigid trans-
formation to align two point clouds.
Benefiting from the superior feature representation of
deep networks, keypoints-based point cloud registration
methods have become dominant in recent years [4, 9, 12,
∗Corresponding author: Wenhui Zhou, [email protected]; Yu
Zhang, [email protected]
Figure 1. Given two low-overlap point clouds, PEAL adopts an
explicit attention learning fashion and learns discriminative su-
perpoint (patch) features (c), which results in significantly higher
patch and point inlier ratios. In contrast, GeoTransformer learns
ambiguous patch features (a). For example, PEAL is able to ac-
curately identify corresponding chairs among multiple chairs and
distinguish them from the floor and table, while GeoTransformer
mismatches them. (Panels, all at 15.6% overlap: (a) implicit attention,
patch features and correspondences, 256 patch correspondences, inlier
ratio 10.9%; (b) implicit attention, 500 point correspondences, inlier
ratio 16.4%; (c) explicit attention, patch features and correspondences,
256 patch correspondences, inlier ratio 74.6%; (d) explicit attention,
500 point correspondences, inlier ratio 85.4%.) Zoom in for details.
34, 37]. The core idea is to learn to match the learned key-
points across different point clouds. Recently, the keypoint-
free methods [25, 36] demonstrate promising performance
following the coarse-to-fine fashion. They seek correspon-
dences between downsampled point clouds (superpoints),
which are then propagated to individual points to yield
dense correspondences. Thus, the accuracy of superpoint
matching is crucial to the overall performance of point
cloud registration. GeoTransformer [25] proposes a geo-
metric self-attention module that encodes the distance of
point pairs and the angle of triplet to extract transformation-
invariant features. This approach significantly improves the
accuracy of superpoint matching.
However, GeoTransformer may still suffer from am-
biguous matching in certain scenarios with numerous similar
structures or patches (superpoints) of low geometric
discriminativeness [25]. Moreover, the self-attention mechanism
may exacerbate matching ambiguity, especially for low-
overlap registration tasks. Prior works [3, 25] advocate that
modeling geometrically consistent correlations among overlap-
ping superpoints/points is the key to the success of super-
points/points matching, while the global correlation learning
via geometric self-attention inevitably suffers interference
from numerous superpoints in the non-overlapping region. In
other words, the correlations with non-overlapping super-
points may disrupt the inter-frame geometric consistency
learning and degrade the feature distinctiveness for registra-
tion, which makes the resultant learned superpoint features
ambiguous and leads to numerous outlier matches (Fig. 1
(a)).
To address the aforementioned issues, we design a Prior-
embedded Explicit Attention Learning model (PEAL). It
first leverages an overlap prior to divide the superpoints into
anchor ones (the superpoints lying in putative overlapping
region) and non-anchor ones (the superpoints located in pu-
tative non-overlapping region). Then it alleviates the inter-
ference of non-anchor superpoints by introducing a one-way
attention mechanism, which solely models the correlations
from non-anchor superpoints to anchor ones. Benefiting
from the promising overlap ratio in the anchor region,
anchor superpoints can be reckoned as existing simultaneously
in both frames; thus the one-way attention is capable of
acquiring the essential locally geometric-consistent correlations
from the anchor region, which helps the non-anchor superpoints
encode the inter-frame local geometric consistency and relieves
the global feature ambiguity (Fig. 1 (c)). Furthermore, the
prior-embedding design of PEAL makes it possible to refine
the transformation in an iterative fashion.
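The sketch below gives one plausible reading of the one-way attention described above, assuming that non-anchor superpoints act as queries while anchor superpoints supply keys and values; the module layout, feature width, and per-sample loop are our simplifications, not the paper's exact design.

```python
import torch
import torch.nn as nn

class OneWayAttention(nn.Module):
    """Cross-attention in which non-anchor superpoints (queries) attend only to anchor
    superpoints (keys/values), so information flows one way from the putative
    overlapping region to the rest."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats, anchor_mask):
        # feats: (B, N, dim) superpoint features; anchor_mask: (B, N) bool, True = anchor
        out = feats.clone()
        for b in range(feats.size(0)):                       # per-sample for clarity
            anchors = feats[b, anchor_mask[b]].unsqueeze(0)  # keys/values from anchors only
            others = feats[b, ~anchor_mask[b]].unsqueeze(0)  # queries from non-anchors
            updated, _ = self.attn(others, anchors, anchors)
            out[b, ~anchor_mask[b]] = updated.squeeze(0)
        return out
```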
In this paper, we introduce two models depending on how
the prior is obtained, with extensive experiments on
indoor benchmarks demonstrating the superiority of PEAL.
Compared to state-of-the-art methods, both models achieve
significant improvements in Registration Recall
on the challenging 3DLoMatch benchmark. In summary,
our contributions are summarized as follows:
• To the best of our knowledge, we are the first to explic-
itly inject overlap prior into Transformer to facilitate
low-overlap point cloud registration, and various over-
lap priors can be integrated into this framework, such
as 3D overlap prior, 2D overlap prior, and self-overlap-
prior.
• An explicit one-way attention module, which can sig-
nificantly relieve the feature ambiguity generated by
self-attention. It can be plugged into other transformer-
based point cloud registration networks.
• A novel iterative pose refinement scheme for low-overlap
point cloud registration.
|
Yang_TopDiG_Class-Agnostic_Topological_Directional_Graph_Extraction_From_Remote_Sensing_Images_CVPR_2023
|
Abstract
Rapid development in automatic vector extraction from
remote sensing images has been witnessed in recent years.
However, the vast majority of existing works concentrate
on a specific target, are fragile to category variety, and hardly
achieve stable performance across different categories. In
this work, we propose an innovative class-agnostic model,
namely TopDiG, to directly extract topological directional
graphs from remote sensing images and solve these issues.
Firstly, TopDiG employs a topology-concentrated node de-
tector (TCND) to detect nodes and obtain compact percep-
tion of topological components. Secondly, we propose a
dynamic graph supervision (DGS) strategy to dynamical-
ly generate adjacency graph labels from unordered nodes.
Finally, the directional graph (DiG) generator module is
designed to construct topological directional graphs from
predicted nodes. Experiments on the Inria ,CrowdAI ,
GID ,GF2 andMassachusetts datasets empirically
demonstrate that TopDiG is class-agnostic and achieves
competitive performance on all datasets.
|
1. Introduction
Vector maps that are represented as topological direc-
tional graphs act as the foundation to various remote sens-
ing applications, such as property mapping, cartographic
generalization and disaster assessment [21, 25]. Traditional
manual or semi-automatic vector map generation from re-
mote sensing images is extremely time-consuming and ex-
pensive. In contrast, state-of-the-art approaches, including
segmentation-based [4, 9, 13, 30], contour-based [1, 16, 22,
29, 35, 39] and graph generation [3, 26, 31–34, 41] methods
have typically developed to achieve automation. Howev-
er, these works are concentrated on a specific category and
can hardly achieve satisfactory performance when applied
to other classes.
Among aforementioned approaches, a dominant
paradigm is the segmentation-based method. It follows the
(a) PolyWorld
(b) Enhanced iCurb
(c) Ours
Figure 1. Visual illustrations of current and our approaches on
different targets. In contrast with PolyWorld and Enhanced iCurb
which only serve a specific category, TopDiG is class-agnostic and
can tackle both polygon-shape and line-shape targets. Yellow dots
refer to detected nodes while blue arrowed lines indicate the direc-
tional topological connection between node pairs (best viewed by
zooming in).
segmentation-vectorization pipeline and requires sophisticated
vectorization procedures on binary masks. Typical
examples that fall in this scope include PolyMapper [14],
Frame Field [9] and ASIP [13]. These methods retrieve
coarse raster maps with missing details and unavoidably de-
mand elaborate post-processing. Another paradigm mainly
adopts contour-based instance segmentation approaches,
which usually refine initial contours to obtain vector maps.
For instance, Polygon-RNN [6], Polygon-RNN++ [1],
Deep Snake [22], SharpContour [39] and E2EC [35] can
delineate the outlines of polygon-shape targets, such as
buildings and water bodies. However, these methods depend on
the quality of initial contours and are barely reliable on
line-shape targets, such as roads. The flourishing graph
generation methods construct the topological graph based
on nodes and their connectivity. A few of such approaches,
including RoadTracer [3], VecRoad [26], iCurb [33] and
RNGDet [32] focus on line-shape targets by iteratively
predicting nodes in a one-by-one manner. Nevertheless,
these methods suffer from low efficiency, accumulated
node-connectivity errors, and poor reliability on
polygon-shape targets. Alternatively, the connectivity of
the nodes also can be recovered from adjacency matrix as
introduced in PolyWorld [41] and csBoundary [31]. These
workflows successfully produce visually pleasing vector
topology without irregular edges and overly smoothed
corners. Unfortunately, for intricate or sinuous structures,
such methods lead to severe topological errors.
Given their class-dependent characteristics, existing works
can hardly be applied to other classes, as illustrated in Fig-
ure 1. For example, PolyWorld [41] (Figure 1(a)) is able
to extract well-vectorized buildings but fails to delineate
road networks. By contrast, Enhanced iCurb [34], orig-
inally designed for line-shape road curbs, is challenged by
polygon-shape buildings (Figure 1(b)). They adopt a sim-
ilar scheme in which topological graphs are constructed by
connecting detected nodes, yet each targets only polygon-shape
or line-shape objects, respectively. Nevertheless, neither of
them can achieve stable and reliable topological directional
graphs across varying categories. In this work, we
propose a class-agnostic approach named TopDiG which
can robustly obtain precise topological directional graphs
both for polygon-shape and line-shape targets (Figure 1(c)).
The underlying innovation is that TopDiG formulates
diverse topological structures as directional graphs and nar-
rows the gap among categorical varieties. Besides, we fur-
ther develop a dynamic graph supervision strategy that en-
ables flexible arrangement of the predicted nodes and stabi-
lizes performance across different categories. Our contri-
butions are summarized as follows:
A Dynamic Graph Supervision (DGS) strategy is designed
to generate the ground truth of the adjacency matrix
in an on-the-fly manner during training. Instead of utiliz-
ing the adjacency matrix labels established from ordered
ground truth nodes [31, 36, 41], we dynamically generate
such labels according to real-time unordered predicted nodes
in each training epoch. Our strategy alleviates the compul-
sory assumption that the sequence of predicted nodes must be
consistent with the real ones, as in PolyWorld [41]. Conse-
quently, DGS can facilitate the connectivity of unordered
nodes and ease the demand for the accurate positions of
nodes. We further propose a novel topology-concentrated
node detector (TCND) to guarantee an appropriate density
of predicted nodes. Unlike PolyWorld [41] and csBound-
ary [31] that mainly employ semantic contexts, TCND con-
centrates on compact geometric textures via the meticu-
lous perception of topological components, which boosts
the topological APLS score by approximately 8.06 %.
A Class-agnostic Topological Directional Graph Extrac-
tion (TopDiG) approach is proposed to extract polygon-
shape and line-shape targets, i.e., buildings, water bodies
and roads, from remote sensing images. In contrast with
existing approaches that can only serve a specific catego-
ry, TopDiG directly performs class-independent vector map
generation from diverse targets. We introduce TCND and
directional graph (DiG) generator module to retain the ge-
ometrical shapes, i.e., polygon-shape and line-shape tar-
gets. Our method is performed in an end-to-end manner
and does not require initial contours or additional post-
processing. The TopDiG outperforms the segmentation-
based, contour-based and previous graph generation ap-
proaches, achieving a competitive performance with bound-
ary mIoU scores of 68.39 %, 72.51 %, 74.51 % and 75.28 %
on Inria, CrowdAI, GID and GF2 datasets, respective-
ly. Moreover, TopDiG can construct reliable topological di-
rectional graphs, with the application to Massachusetts
dataset, achieving an average path length score ( APLS ) of
64.60 %.
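As referenced above, the DGS idea of building adjacency labels from whatever nodes the detector currently predicts can be pictured with a minimal sketch. Everything below is an assumption-laden toy (NumPy inputs, greedy nearest-node matching, a hypothetical function name), not the authors' implementation:
# Minimal sketch of DGS-style on-the-fly adjacency label generation.
# Assumptions (not from the paper): NumPy inputs, greedy nearest-node
# assignment of ground-truth nodes to predicted nodes; the function name
# dynamic_adjacency_labels is hypothetical.
import numpy as np

def dynamic_adjacency_labels(pred_nodes, gt_nodes, gt_edges):
    # pred_nodes: (N, 2) predicted node coordinates (unordered)
    # gt_nodes:   (M, 2) ground-truth node coordinates
    # gt_edges:   iterable of (i, j) directed edges between gt node indices
    # returns an (N, N) directed adjacency label matrix for this iteration
    dists = np.linalg.norm(gt_nodes[:, None, :] - pred_nodes[None, :, :], axis=-1)
    gt_to_pred = dists.argmin(axis=1)          # nearest predicted node per GT node
    n = len(pred_nodes)
    labels = np.zeros((n, n), dtype=np.float32)
    for i, j in gt_edges:                      # transfer GT connectivity
        pi, pj = gt_to_pred[i], gt_to_pred[j]
        if pi != pj:                           # skip edges collapsed onto one node
            labels[pi, pj] = 1.0               # keep the edge direction
    return labels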
|
Yang_Directional_Connectivity-Based_Segmentation_of_Medical_Images_CVPR_2023
|
Abstract
Anatomical consistency in biomarker segmentation is crucial for many medical image analysis tasks. A promising paradigm for achieving anatomically consistent segmentation via deep networks is incorporating pixel connectivity, a basic concept in digital topology, to model inter-pixel relationships. However, previous works on connectivity modeling have ignored the rich channel-wise directional information in the latent space. In this work, we demonstrate that effective disentanglement of directional sub-space from the shared latent space can significantly enhance the feature representation in the connectivity-based network. To this end, we propose a directional connectivity modeling scheme for segmentation that decouples, tracks, and utilizes the directional information across the network. Experiments on various public medical image segmentation benchmarks show the effectiveness of our model as compared to the state-of-the-art methods. Code is available at https://github.com/Zyun-Y/DconnNet.
|
1. Introduction
Maintaining anatomical consistency in the segmentation of medical images is important but challenging, as minor geometric errors may change the global topology [1, 2] and cause functional mistakes in downstream clinical decision-making [3]. Anatomical consistency in images can be expressed with topological properties, such as pixel connectivity and adjacency [4, 5]. As such, by directly modeling the mutual information between pixels or regions, graph-based methods have long been used to correct topological and geometrical errors [6-8]. However, such classic machine vision techniques usually depend on manually defined priors and thus are not easily generalizable for a wide variety of applications. Alternative to the classic approaches, deep learning-based segmentation methods utilized an encoder-decoder architecture [9] to learn from a group of pixels in a particular receptive field at each layer. More recently, significant progress has been made in capturing the inter-pixel dependency inside a network's latent space [10-12]; however, very few studies have been conducted on the problem modeling side of the networks.
Figure 1. The latent space differences between traditional pixel-classification-based and connectivity-based models. In the former, only categorical features, e.g., boundaries, are highlighted; while in the latter, the feature map also contains directional information (e.g., the horizontal connections between boundary pixels).
Figure 2. The flows of the two groups of latent features (categorical and directional) in the latent space of DconnNet are visualized by t-SNE [13]. They were first disentangled (Sec 3.2) and then effectively fused in a projected shared manifold (Sec 3.3). The colors are rendered based on the results of clustering.
A typical segmentation network models the problem as a pure pixel-wise classification task and uses a segmentation mask as the only label. Yet, this pixel-wise modeling scheme is suboptimal as it does not directly exploit inter-pixel relationships and geometrical properties [14, 15]. Thus, these models may result in low spatial coherence (i.e., inconsistent predictions for neighboring pixels that share similar spatial features) in their prediction [16]. Especially, when applied to high noise/artifacts medical data, the lower spatial consistency may lead to topological issues [17]. The concept of pixel connectivity has long been used to ensure the basic topological duality of separation and connectedness in digital images [18]. More recently, in the context of deep learning, the connectivity masks, reviewed
in Section 2.1, have been introduced as a topological extension of the segmentation mask [15]. Using connectivity masks as training labels has several advantages over segmentation masks. In terms of problem modeling, using a connectivity mask inherently changes the problem from pixel-wise classification to connectivity prediction, which models and enhances the topological representation between pixels of interest. In terms of label representation, a connectivity mask is more informative in three ways: first, a connectivity mask stores the categorical information among the connections of pixels and it is inter-pixel relation-aware; second, it sparsely represents edge pixels [16]; third, it contains rich directional information channel-wise. Thus, a network trained with connectivity masks has both categorical (reflected by connectivity) and directional features in its latent space, each of which forms a specific sub-latent space, as shown in Fig. 1. In previous studies [15, 16, 19-22], these two groups of features were learned simultaneously through a shared network path which may result in highly coupled latent space and introduce redundancy [23]. Further, effectively disentangling meaningful subspaces from the shared latent space has been shown effective in accounting for the dependencies/independencies between features [24, 25]. Inspired by the idea of latent space disentanglement, in this paper, we propose a novel directional connectivity-based segmentation network (DconnNet) to disentangle the directional subspace from the shared latent space and utilize the extracted directional features to enhance the overall data representation, as in Fig. 2. The disentangling process is conducted by a sub-path slicing-based module called Sub-path Direction Excitation (SDE). The directional-based feature enhancement is applied in a coarse-to-fine manner using an Interactive Feature-space Decoder (IFD) with two top-down interactive decoding flows. Finally, we propose a novel Size Density loss (SDL) that alleviates the common data imbalance problem in medical datasets with a label size distribution-based weighting scheme. With experiments on different public medical image analysis benchmarks, we demonstrate the superiority of DconnNet against other state-of-the-art methods.
Figure 3. Illustration of generating the connectivity mask from the segmentation mask by traversing 8-neighbor pixel connectivity. For each pixel, channel C_i's binary value in the converted connectivity vector carries the categorical information (connected or not connected) while i encodes the directional information (direction of connection).
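The 8-neighbor conversion of Figure 3 can be pictured with a short sketch; the code below is only an illustration of the idea (NumPy arrays, a hypothetical connectivity_mask function), not the DconnNet implementation:
# Toy conversion of a binary segmentation mask into an 8-channel
# connectivity mask (one channel per neighbour direction).
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
           (0, -1),           (0, 1),
           (1, -1),  (1, 0),  (1, 1)]

def connectivity_mask(seg):
    # seg: (H, W) binary integer array (0/1). Returns (8, H, W) labels.
    H, W = seg.shape
    conn = np.zeros((8, H, W), dtype=np.uint8)
    for c, (dy, dx) in enumerate(OFFSETS):
        shifted = np.zeros_like(seg)
        dst_y = slice(max(-dy, 0), H + min(-dy, 0))
        dst_x = slice(max(-dx, 0), W + min(-dx, 0))
        src_y = slice(max(dy, 0), H + min(dy, 0))
        src_x = slice(max(dx, 0), W + min(dx, 0))
        shifted[dst_y, dst_x] = seg[src_y, src_x]   # neighbour value at each pixel
        conn[c] = (seg & shifted).astype(np.uint8)  # connected iff both are foreground
    return conn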
|
Zhang_PointCert_Point_Cloud_Classification_With_Deterministic_Certified_Robustness_Guarantees_CVPR_2023
|
Abstract
Point cloud classification is an essential component in
many security-critical applications such as autonomous
driving and augmented reality. However, point cloud classi-
fiers are vulnerable to adversarially perturbed point clouds.
Existing certified defenses against adversarial point clouds
suffer from a key limitation: their certified robustness guar-
antees are probabilistic, i.e., they produce an incorrect cer-
tified robustness guarantee with some probability. In this
work, we propose a general framework, namely PointCert,
that can transform an arbitrary point cloud classifier to be
certifiably robust against adversarial point clouds with de-
terministic guarantees. PointCert certifiably predicts the
same label for a point cloud when the number of arbitrarily
added, deleted, and/or modified points is less than a thresh-
old. Moreover, we propose multiple methods to optimize the
certified robustness guarantees of PointCert in three appli-
cation scenarios. We systematically evaluate PointCert on
ModelNet and ScanObjectNN benchmark datasets. Our re-
sults show that PointCert substantially outperforms state-
of-the-art certified defenses even though their robustness
guarantees are probabilistic.
|
1. Introduction
Point cloud classification [ 11,29,30,38,44,50] has many
safety-critical applications, including but not limited to, au-
tonomous driving and augmented reality. However, vari-
ous studies [ 12,16,22,32,40,43,46,51,52] showed that
point cloud classification is vulnerable to adversarial point
clouds. In particular, an attacker can carefully add, delete,
and/or modify a small number of points in a point cloud to
make it misclassified by a point cloud classifier.
Existing defenses against adversarial point clouds can be
categorized into empirical defenses [7,21,33,41,46,49,53]
and certified defenses [5,6,23]. The key limitation of em-
pirical defenses is that they cannot provide formal guaran-tees, and thus are often broken by advanced, adaptive at-
tacks [ 34]. Therefore, we focus on certified defenses in
this work. Randomized smoothing [ 5] and PointGuard [ 23]
are two state-of-the-art certified defenses against adversar-
ial point clouds. In particular, randomized smoothing adds
random noise (e.g., Gaussian noise) to a point cloud, while
PointGuard randomly subsamples a point cloud. Due to the
randomness, their certified robustness guarantees are prob-
abilistic, i.e., they produce incorrect robustness guarantees
with some probability (called error probability). For in-
stance, when the error probability is 0.001, they produce in-
correct robustness guarantees for 1 out of 1,000 point-cloud
classifications on average. Such probabilistic guarantees are
insufficient for security-critical applications that frequently
classify point clouds.
In this work, we propose PointCert, the first certified de-
fense that has deterministic robustness guarantees against
adversarial point clouds. PointCert can transform an arbi-
trary point cloud classifier f (called base point cloud classi-
fier) to be certifiably robust against adversarial point clouds.
Specifically, given a point cloud and a base point cloud clas-
sifier f, PointCert first divides the point cloud into multiple
disjoint sub-point clouds using a hash function, then uses f
to predict a label for each sub-point cloud, andfinally takes
a majority vote among the predicted labels as the predicted
label for the original point cloud. We prove that PointCert
certifiably predicts the same label for a point cloud when the
number of arbitrarily added, deleted, and/or modified points
is no larger than a threshold, which is known as certified
perturbation size. Moreover, we also prove that our derived
certified perturbation size is tight, i.e., without making as-
sumptions on the base point cloud classifier f, it is theoret-
ically impossible to derive a certified perturbation size for
PointCert that is larger than ours.
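A toy sketch of the divide-classify-vote procedure described above follows; the hash construction, the coordinate rounding and the function names are assumptions made for illustration rather than PointCert's exact design:
# Illustrative PointCert-style inference: hash every point into one of m
# disjoint sub-point clouds, classify each sub-point cloud with the base
# classifier f, and take a majority vote.
import hashlib
from collections import Counter

def hash_index(point, m, decimals=4):
    key = ",".join(f"{c:.{decimals}f}" for c in point)   # deterministic per-point key
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % m

def pointcert_predict(points, f, m=16):
    # points: iterable of (x, y, z) tuples; f: maps a list of points to a label
    groups = [[] for _ in range(m)]
    for p in points:
        groups[hash_index(p, m)].append(p)               # disjoint sub-point clouds
    votes = Counter(f(g) for g in groups if g)           # one vote per non-empty group
    return votes.most_common(1)[0][0]                    # majority label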
We consider three scenarios about how PointCert could
be applied in practice and propose methods to optimize the
performance of PointCert in these scenarios. In particular,
we consider two parties: model provider and customer. A
model provider (e.g., Google, Meta) has enough labeled
data and computation resource to train a base point cloud
classifier f and shares it with customers (e.g., a less re-
sourceful company). Given f, a customer uses PointCert
to classify its (adversarial) point clouds. We note that the
model provider and customer can be the same entity, e.g., a
company trains and uses f itself. We consider three scenar-
ios, in which f is trained by the model provider differently
and/or used by a customer differently.
Scenario I represents a naive application of PointCert, in
which the base point cloud classifier f is trained using a
standard training algorithm and a customer directly applies
PointCert to classify its point clouds based onf. PointCert
achieves suboptimal performance in Scenario I because f,
trained on point clouds, is not accurate at classifying sub-
point clouds as they have different distributions. Therefore,
in Scenario II, we consider a model provider trains f to
optimize the performance of PointCert. In particular, the
model provider divides each training point cloud into multi-
ple sub-point clouds following PointCert and trains f based
on sub-point clouds. In Scenario III, we consider the model
provider has trained f using a standard training algorithm
(like Scenario I). However, instead of directly applying f to
classify sub-point clouds, a customer prepends a Point Com-
pletion Network (PCN) [48] to f. Specifically, a PCN takes
a sub-point cloud as input and outputs a completed point
cloud, which is then classified by f. Moreover, we propose
a new loss function to train the PCN such that its completed
point clouds are classified by f with higher accuracy, which
further improves the performance of PointCert.
We perform systematic evaluation on ModelNet40
dataset [ 1] and two variants of ScanObjectNN dataset [ 2].
Our experimental results show that PointCert significantly
outperforms the state-of-the-art certified defenses (random-
ized smoothing [ 5] and PointGuard [ 23]) even though their
robustness guarantees are probabilistic. For instance, on
ModelNet40 dataset, PointCert achieves a certified accu-
racy of 79% when an attacker can arbitrarily perturb at most
50 points in a point cloud, where certified accuracy is a
lower bound of testing accuracy. Under the same setting,
the certified accuracy of both randomized smoothing and
PointGuard is 0. We also extensively evaluate PointCert in
the three application scenarios.
In summary, we make the following contributions: (1)
We propose PointCert, the first certified defense with de-
terministic robustness guarantees against adversarial point
clouds. (2) We design multiple methods to optimize the
performance of PointCert in multiple application scenarios.
(3) We extensively evaluate PointCert and compare it with
state-of-the-art certified defenses.
|
Yu_Learning_Procedure-Aware_Video_Representation_From_Instructional_Videos_and_Their_Narrations_CVPR_2023
|
Abstract
The abundance of instructional videos and their narra-
tions over the Internet offers an exciting avenue for un-
derstanding procedural activities. In this work, we pro-
pose to learn video representation that encodes both ac-
tion steps and their temporal ordering, based on a large-
scale dataset of web instructional videos and their narra-
tions, without using human annotations. Our method jointly
learns a video representation to encode individual step con-
cepts, and a deep probabilistic model to capture both tem-
poral dependencies and immense individual variations in
the step ordering. We empirically demonstrate that learn-
ing temporal ordering not only enables new capabilities
for procedure reasoning, but also reinforces the recognition
of individual steps. Our model significantly advances the
state-of-the-art results on step classification (+2.8%/+3.3%
on COIN / EPIC-Kitchens) and step forecasting (+7.4%
on COIN). Moreover, our model attains promising re-
sults in zero-shot inference for step classification and fore-
casting, as well as in predicting diverse and plausible
steps for incomplete procedures. Our code is available at
https://github.com/facebookresearch/ProcedureVRL.
|
1. Introduction
Many of our daily activities ( e.g. cooking or crafting)
are highly structured, comprising a set of action steps con-
ducted in a certain ordering. Yet how these activities are
performed varies among individuals. Consider the exam-
ple of making scrambled eggs as shown in Fig. 1. While
most people tend to whisk eggs in a bowl, melt butter in a
pan, and cook eggs under medium heat, expert chefs have
recommended to crack eggs into the pan, add butter, and
stir them under high heat. Imagine a vision model that can
account for the individual variations and reason about the
temporal ordering of action steps in a video, so as to infer
prior missing steps, recognize the current step, and forecast
*Work done while Yiwu Zhong was an intern at Meta.
†Co-corresponding authors.
Figure 1. Top: During training, our model learns from procedu-
ral videos and step descriptions to understand individual steps and
capture temporal ordering and variations among steps. Bottom :
Once trained, our model supports zero-shot step classification and
forecasting, yielding multiple credible predictions.
a future step. Such a model will be immensely useful for
a wide range of applications including augmented reality,
virtual personal assistant, and human-robot interaction.
Understanding complex procedural activities has been a
long-standing challenge in the vision community [7, 18, 22,
39, 41, 46]. While many prior approaches learn from anno-
tated videos following a fully supervised setting [13,27,69],
this paradigm is difficult to scale to a plethora of activ-
ities and their variants among individuals. A promising
solution is offered by the exciting advances in vision-and-
language pre-training, where models learn from visual data
(images or videos) and their paired text data (captions or
narrations) [30, 43, 54, 70] in order to recognize a variety of
concepts. This idea has recently been explored to analyze
instructional videos [33, 37], yet existing methods are lim-
ited to recognize single action steps in procedural activities.
In this paper, we present a first step towards modeling
temporal ordering of action steps in procedural activities by
learning from instructional videos and their narrations. Our
key innovation lies in the joint learning of a video repre-
sentation aiming to encode individual step concepts, and a
deep probabilistic model designed to capture temporal de-
pendencies and variations among steps. The video represen-
tation, instantiated as a Transformer network, is learned by
matching a video clip to its corresponding narration. The
probabilistic model, built on a diffusion process, is tasked
to predict the distribution of the video representation for a
missing step, given steps in its vicinity. With the help of
a pre-trained vision-and-language model [43], our model is
trained using only videos and their narrations from auto-
matic speech recognition (ASR), and thus does not require
any manual annotations.
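The clip-to-narration matching component can be approximated with a standard symmetric contrastive loss; the snippet below is a generic stand-in (PyTorch, InfoNCE with a temperature) and is not claimed to be the loss actually used by the paper:
# Generic clip-to-narration contrastive matching (symmetric InfoNCE).
import torch
import torch.nn.functional as F

def clip_narration_loss(video_emb, text_emb, temperature=0.07):
    # video_emb, text_emb: (B, D) embeddings of paired clips and narrations
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    # symmetric cross-entropy over video-to-text and text-to-video directions
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))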
Once learned, our model celebrates two unique bene-
fits thanks to our model design and training framework.
First, our model supports zero-shot inference given an input
video, including the recognition of single steps and fore-
casting of future steps, and can be further fine-tuned on
downstream tasks. Second, our model allows sampling mul-
tiple video representations when predicting a missing action
step, with each presenting a possibly different hypothesis of
the step ordering. Instead of predicting a single represen-
tation with the highest probability, sampling from a proba-
bilistic model provides access to additional high-probability
solutions that might be beneficial to prediction tasks with
high ambiguity or requiring user interactions.
We train our models on a large-scale instructional video
dataset collected from YouTube (HowTo100M [38]), and
evaluate them on two public benchmarks (COIN [55] and
EPIC-Kitchens-100 [10]) covering a wide range of pro-
cedural videos and across the tasks of step classification
and step forecasting. Through extensive experiments, we
demonstrate that (1) our temporal model is highly effec-
tive in forecasting future steps, outperforming state-of-the-
art methods by a large margin of +7.4% in top-1 accu-
racy on COIN; (2) modeling temporal ordering reinforces
video representation learning, leading to improved clas-
sification results ( +2.8% /+3.3% for step classification on
COIN/EPIC-Kitchens) when probing the learned represen-
tations; (3) our training framework offers strong results for
zero-shot step classification and forecasting; and (4) sam-
pling from our probabilistic model yields diverse and plau-
sible predictions of future steps.
Contributions . Our work presents the first model that
leverages video-and-language pre-training to capture the
temporal ordering of action steps in procedural activities.
Our key technical innovation lies in the design of a deep
probabilistic model using a diffusion process, in tandem
with video-and-language representation learning. The re-
sult is a model and a training framework that establish new
state-of-the-art results on both step classification and fore-
casting tasks across the major benchmarks. Besides, our
model is capable of generating diverse step predictions and
supports zero-shot inference.
|
Yin_GIVL_Improving_Geographical_Inclusivity_of_Vision-Language_Models_With_Pre-Training_Methods_CVPR_2023
|
Abstract
A key goal for the advancement of AI is to develop
technologies that serve the needs not just of one group
but of all communities regardless of their geographical re-
gion. In fact, a significant proportion of knowledge is lo-
cally shared by people from certain regions but may not
apply equally in other regions because of cultural dif-
ferences. If a model is unaware of regional character-
istics, it may lead to performance disparity across re-
gions and result in bias against underrepresented groups.
We propose GIVL, a Geographically Inclusive Vision-and-
Language Pre-trained model. There are two attributes of
geo-diverse visual concepts which can help to learn geo-
diverse knowledge: 1) concepts under similar categories
have unique knowledge and visual characteristics, 2) con-
cepts with similar visual features may fall in completely
different categories. Motivated by the attributes, we de-
sign new pre-training objectives Image-Knowledge Match-
ing (IKM) and Image Edit Checking (IEC) to pre-train
GIVL. Compared with similar-size models pre-trained with
similar scale of data, GIVL achieves state-of-the-art (SOTA)
and more balanced performance on geo-diverse V&L tasks. Code and data are released at https://github.com/
WadeYin9712/GIVL .
|
1. Introduction
Vision-Language Pre-trained Models (VLPs) [ 9,23,24,
29,53] have achieved remarkable performance on Vision-
Language (V&L) tasks including visual question answer-
ing [ 11,12,15], image-text retrieval [ 22], and image cap-
tioning [ 19,27]. Pre-trained with large-scale corpora of
image-text pairs, e.g., COCO [27] and OpenImages [21], VLPs
are capable of learning multi-modal representations and can
be effectively fine-tuned on downstream V&L tasks.
While VLPs can solve a broad range of V&L tasks, to
deploy VLPs in real-world applications, it is essential to
consider the geographical inclusivity1 of VLPs. Because of
geographic differences, images from different regions em-
body a large amount of knowledge that is locally shared but
cannot be applied in other regions, i.e. geographically di-
verse. For example, in Figure 1, the festivals in different
1We use regions as a proxy to estimate inclusivity of V&L models.
People in the same regions may have different cultures and traditions.
regions look different.
Ideally, a geographically inclusive VLP should be capa-
ble of achieving comparable performance over all the im-
ages, regardless of their origins. However, current VLPs
do not perform equally well on data from different re-
gions. For example, prior works [ 28,49] show that on geo-
diverse V&L tasks, there is nearly a 20% performance dis-
crepancy between Western and East Asian images when
current VLPs are applied. To combat such geographical
bias, we aim to design methods to make VLPs achieve more
balanced performance across regions.
One solution to mitigating bias is to obtain diverse task-
specific annotations for each region and fine-tune VLPs on
the new annotations. However, according to [ 17], most
Amazon MTurk annotators are from US and India, and may
be unfamiliar with the cultures of other regions. Thus, it
is unrealistic to obtain large-scale geo-diverse annotations
even in such a popular crowdsourcing platform.
Pre-training a unified VLP with large-scale unannotated
geo-diverse images and corresponding knowledge could
make the VLP a foundation to provide more generalizable
representations and help to transfer on comprehending im-
ages from various regions easier. In this paper, we propose
GIVL, a Geographically Inclusive Vision-and-Language
Pre-trained model. We focus on how to encourage GIVL
to better learn geo-diverse knowledge on images from
different regions during its pre-training stage.
We observe two attributes of geo-diverse visual concepts
that can contribute to learning geo-diverse knowledge:
A1: Concepts under similar categories have unique
knowledge and visual characteristics. For example, tra-
ditional Western and Chinese festivals, like Christmas and
Chinese New Year in Figure 1, are held with different rit-
uals and their decoration style differs as well. It is neces-
sary for GIVL to learn the difference between their corre-
sponding knowledge and precisely distinguish these visual
concepts. On the other hand, Christmas and Chinese New
Year are both festivals. Learning the commonalities of vi-
sual concepts (e.g., both images in Figure 1 belong to the
same category "festival") would help the model connect West-
ern and non-Western concepts and contribute to more effec-
tive transfer on geo-diverse images.
A2: Concepts with similar visual features may lie in
completely different categories. In Figure 2, Chinese pa-
per cuttings share visual features (e.g., color, shape) with
red frisbee. Similarly, sugar cane and flute share visual fea-
tures. However, these concepts are not related to each other.
Since geo-diverse images cover a broader range of visual
concepts, differentiating visually similar concepts given vi-
sual contexts is also essential.
To this end, besides common objectives Masked Lan-
guage Modeling (MLM) and Image-Text Matching (ITM)
for pre-training VLPs, we propose two additional pre-
Figure 2. Example of Chinese paper cuttings and red frisbee (left),
sugar cane and flute (right). Different concepts may be visually
similar, but they may have completely different functionalities.
training objectives, Image-Knowledge Matching (IKM)
and Image Edit Checking (IEC). IKM is used to learn
the alignment between images and corresponding textual
knowledge in Wikipedia. It requires GIVL to not only judge
if the input textual knowledge matches input images, but
also identify whether the visual concepts described in input
knowledge fall into similar categories of the concepts in in-
put images. This encourages GIVL to learn corresponding
relationship between knowledge and images as well as rec-
ognize similarity among geo-diverse visual concepts. IEC
is proposed to identify whether a visual concept in the input
image is replaced by another concept that is visually similar
but lies in an irrelevant category (see Fig. 3 for an example).
It enables GIVL to capture nuances between visually simi-
lar concepts after the replacement given visual contexts.
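One plausible, unofficial way to picture the two extra objectives is as small classification heads on top of the fused multimodal representation; the class counts and layer shapes below are assumptions for illustration only:
# Plausible reading of the two extra pre-training heads: IKM as a 3-way
# classification over (image, knowledge) pairs (matched; unmatched but from a
# similar category; unmatched and unrelated) and IEC as a binary check of
# whether a visually similar concept has been swapped in.
import torch.nn as nn

class GeoDiversePretrainHeads(nn.Module):
    def __init__(self, hidden_dim=768):
        super().__init__()
        self.ikm_head = nn.Linear(hidden_dim, 3)   # Image-Knowledge Matching
        self.iec_head = nn.Linear(hidden_dim, 2)   # Image Edit Checking

    def forward(self, fused_cls):
        # fused_cls: (B, hidden_dim) pooled multimodal representation
        return self.ikm_head(fused_cls), self.iec_head(fused_cls)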
Our contributions and empirical results are as follows:
•By considering the attributes of geo-diverse visual con-
cepts, we propose two novel V&L pre-training objec-
tives Image-Knowledge Matching (IKM) and Image
Edit Checking (IEC) that can greatly improve the geo-
graphical inclusivity of VLPs.
•Compared with similar-size VLPs pre-trained with
similar scale of data, GIVL achieves state-of-the-art
(SOTA) and more balanced performance over dif-
ferent regions on geo-diverse V&L tasks including
MaRVL [ 28], GD-VCR [ 49] and WIT Image-Text Re-
trieval [ 38]. For geo-diverse zero-shot image classi-
fication on the Dollar Street dataset2, GIVL outperforms
VinVL [53] by 26%.
|
Zhang_Seeing_a_Rose_in_Five_Thousand_Ways_CVPR_2023
|
Abstract
What is a rose, visually? A rose comprises its intrinsics,
including the distribution of geometry, texture, and material
specific to its object category. With knowledge of these in-
trinsic properties, we may render roses of different sizes and
shapes, in different poses, and under different lighting condi-
tions. In this work, we build a generative model that learns
to capture such object intrinsics from a single image, such
as a photo of a bouquet. Such an image includes multiple
instances of an object type. These instances all share the
same intrinsics, but appear different due to a combination of
variance within these intrinsics and differences in extrinsic
factors, such as pose and illumination. Experiments show
that our model successfully learns object intrinsics (distri-
bution of geometry, texture, and material) for a wide range
of objects, each from a single Internet image. Our method
achieves superior results on multiple downstream tasks, in-
cluding intrinsic image decomposition, shape and image
generation, view synthesis, and relighting.
“Nature!... Each of her works has an essence of its own;
each of her phenomena a special characterisation; and
yet their diversity is in unity. ” – Georg Christoph Tobler
|
1. Introduction
The bouquet in Figure 1 contains many roses. Although
each rose has different pixel values, we recognize them as
individual instances of the same type of object. Such un-
derstanding is based on the fact that these instances share
the same object intrinsics —the distributions of geometry,
texture, and material that characterize a rose. The difference
in appearance arises from both the variance within these
distributions and extrinsic factors such as object pose and
environment lighting. Understanding these aspects allows us
to imagine and draw additional, new instances of roses with
varying shape, pose, and illumination.
In this work, our goal is to build a model that captures
such object intrinsics from a single image , and to use this
model for shape and image generation under novel view-
points and illumination conditions, as illustrated in Figure 1.
This problem is challenging for three reasons. First, we
only have a single image. This makes our work fundamen-
tally different from existing works on 3D-aware image gen-
eration models [8, 9, 27, 28], which typically require a large
dataset of thousands of instances for training. In comparison,
1“People where you live ... grow five thousand roses in one garden ...
And yet what they’re looking for could be found in a single rose.” — Quote
from The Little Prince by Antoine de Saint-Exupéry.
the single image contains at most a few dozen instances,
making the inference problem highly under-constrained.
Second, these already limited instances may vary signif-
icantly in pixel values. This is because they have different
poses and illumination conditions, but neither of these factors
are annotated or known. We also cannot resort to existing
tools for pose estimation based on structure from motion,
such as COLMAP [35], because the appearance variations
violate the assumptions of epipolar geometry.
Finally, the object intrinsics we aim to infer are proba-
bilistic, not deterministic: no two roses in the natural world
are identical, and we want to capture a distribution of their
geometry, texture, and material to exploit the underlying
multi-view information. This is therefore in stark contrast to
existing multi-view reconstruction or neural rendering meth-
ods for a fixed object or scene [25, 26, 40]. These challenges
all come down to having a large hypothesis space for this
highly under-constrained problem, with very limited visual
observations or signals. Our solution to address these chal-
lenges is to design a model with its inductive biases guided
byobject intrinsics . Such guidance is two-fold: first, the
instances we aim to present share the same object intrinsics,
or the same distribution of geometry, texture, and material;
second, these intrinsic properties are not isolated, but inter-
weaved in a specific way as defined by a rendering engine
and, fundamentally, by the physical world.
Specifically, our model takes the single input image and
learns a neural representation of the distribution over 3D
shape, surface albedo, and shininess of the object, factoring
out pose and lighting variations, based on a set of instance
masks and a given pose distribution of the instances. This ex-
plicit, physically-grounded disentanglement helps us explain
the instances in a compact manner, and enables the model to
learn object intrinsics without overfitting the limited obser-
vations from only a single image.
The resulting model enables a range of applications. For
example, random sampling from the learned object intrinsics
generates novel instances with different identities from the
input. By modifying extrinsic factors, the synthesized in-
stances can be rendered from novel viewpoints or relit with
different lighting configurations.
Our contributions are three-fold:
1. We propose the problem of recovering object intrinsics,
including both 3D geometry, texture, and material prop-
erties, from just a single image of a few instances with
instance masks.
2. We design a generative framework that effectively
learns such object intrinsics.
3. Through extensive evaluations, we show that the model
achieves superior results in shape reconstruction and
generation, novel view synthesis, and relighting.
Method | Object Variance | Unknown Poses | Relighting | 3D-Aware
SinGAN [36] ✓ ✓
NeRF [26] ✓
D-NeRF [32] ✓ ✓
GNeRF [25] ✓ ✓
Neural-PIL [7] ✓ ✓
NeRD [6] ✓ ✓
EG3D [9] ✓ ✓
Ours ✓ ✓ ✓ ✓
Table 1. Comparisons with prior works. Unlike existing 3D-aware
generative models, our method learns from a very limited number
of observations. Unlike multi-view reconstruction methods, our
method models variance among observations from training inputs.
|
Xu_Low-Light_Image_Enhancement_via_Structure_Modeling_and_Guidance_CVPR_2023
|
Abstract
This paper proposes a new framework for low-light im-
age enhancement by simultaneously conducting the appear-
ance as well as structure modeling. It employs the struc-
tural feature to guide the appearance enhancement, lead-
ing to sharp and realistic results. The structure model-
ing in our framework is implemented as the edge detection
in low-light images. It is achieved with a modified gen-
erative model via designing a structure-aware feature ex-
tractor and generator. The detected edge maps can accu-
rately emphasize the essential structural information, and
the edge prediction is robust towards the noises in dark ar-
eas. Moreover, to improve the appearance modeling, which
is implemented with a simple U-Net, a novel structure-
guided enhancement module is proposed with structure-
guided feature synthesis layers. The appearance model-
ing, edge detector, and enhancement module can be trained
end-to-end. The experiments are conducted on represen-
tative datasets (sRGB and RAW domains), showing that
our model consistently achieves SOTA performance on all
datasets with the same architecture. The code is available
at https://github.com/xiaogang00/SMG-LLIE.
|
1. Introduction
The low-light enhancement aims to recover normal-light
and noise-free images from dark and noisy pictures, which
is a long-standing and significant computer vision topic.
It has broad application fields, including low-light imag-
ing [11, 27, 50], and also benefits many downstream vision
tasks, e.g., nighttime detection [28, 48, 64]. Some meth-
ods have been proposed to tackle the low-light enhancement
problem. They design networks that learn to manipulate
color, tone, and contrast [7, 10, 45, 63], and some recent
works also account for noise in images [24, 55]. Most of
these works optimize the appearance distance between the
output and the ground truth. However, they ignore the ex-
plicit modeling of structural details in dark areas and thus
*Corresponding author.
(Figure 1 panels: PSNR vs. SSIM on LOL-synthetic (sRGB), LOL-real (sRGB), SID (sRGB), and SID (RAW).)
Figure 1. Our method consistently achieves SOTA performance on
different sRGB/RAW datasets with the same network architecture.
resulting in blurry outcomes and low SSIM [51] values, as
shown in Fig. 2. Some works [34, 74] have noticed the ef-
fect of employing structural information, e.g., edge, to pro-
mote the enhancement. Edge guides the enhancement by
distinguishing between different parts in the dark regions.
Moreover, adding sensible edge priors into dark regions re-
duces the ill-posed degree in optimizing the appearance re-
construction. These frameworks [34, 74] perform the struc-
ture modeling with encoder-decoder-based networks and a
regression loss. However, the corresponding structure mod-
eling results are not satisfying due to the uncertainty in the
dark areas caused by severely poor visibility and noise. Fur-
thermore, the strategy of using the extracted structural infor-
mation needs to be improved from the existing straightfor-
ward concatenation approach [34, 74].
In this paper, we propose to utilize a generative model S
trained with a GAN loss to perform the structure modeling
with the form of edges. Then, we design a new mechanism
E to facilitate the initial low-light appearance enhancement
(the module is denoted as A) with structure-guided feature
synthesis. With effective structure modeling and guidance,
our framework can output sharp and realistic results with
satisfactory reconstruction quality as shown in Fig. 2.
Compared with previous structure modeling networks,
(a) Input
(b) Structure of (a)
(c) SNR (CVPR 2022)
(d) Structure Modeling
(e) Ours
(f) Ground Truth
Figure 2. A challenging low-light frame (a), from SID-sRGB [3],
enhanced by a SOTA method (c) and our method (e). Our method
can synthesize the structure map (d) from the input image, leading
to clearer details, more distinct contrast, and more vivid color. Al-
though (c) has a high PSNR of 28.17 dB, its SSIM is low at 0.75. Ours
achieves high scores on both metrics, at 28.60 dB and 0.80.
the proposed generative model S has two significant modi-
fications. First, we notice the impact of providing structure-
aware descriptors into both the encoder and decoder of S,
disentangling the appearance representation and underlin-
ing the structural information. Thus, we design a Structure-
Aware Feature Extractor (SAFE) as the encoder part, which
extracts structure-aware features from the dark image and
its gradients via spatially-varying operations (achieved with
adaptive long-range and short-range computations). The
extracted structure-aware tensors are then fed into the de-
coder part to generate the desired structure maps. Moreover,
different from current approaches, which employ the struc-
ture maps of normal-light images to conduct the regression
learning, we find the nice property of using a GAN loss. The
GAN loss can reduce the artifacts in the generated structure
maps that are caused by the noise and invisibility, highlight-
ing the essential structure required for enhancement. The
backbone of S is implemented as a modified StyleGAN.
To boost the appearance by leveraging the obtained
structure maps, we design a Structure-Guided Enhancement
Module (SGEM) as E. The main target of SGEM is to learn
the residual, which can improve the initial appearance mod-
eling results. In SGEM, spatially-adaptive kernels and nor-
malization parameters are generated according to the struc-
ture maps. Then, the features in each layer of the SGEM’s
decoder will be processed with spatially-adaptive convolu-
tions and normalization. Although the overall architecture
of SGEM takes the form of a simple U-Net [38], it can ef-
fectively enhance the original appearance.
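The "normalization parameters generated from the structure maps" can be pictured with a SPADE-style sketch; channel sizes and the conditioning network below are assumptions, not the SGEM implementation:
# SPADE-style sketch of structure-guided, spatially-adaptive normalization.
import torch.nn as nn
import torch.nn.functional as F

class StructureGuidedNorm(nn.Module):
    def __init__(self, feat_channels, struct_channels=1, hidden=64):
        super().__init__()
        self.norm = nn.InstanceNorm2d(feat_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(struct_channels, hidden, 3, padding=1), nn.ReLU())
        self.to_gamma = nn.Conv2d(hidden, feat_channels, 3, padding=1)
        self.to_beta = nn.Conv2d(hidden, feat_channels, 3, padding=1)

    def forward(self, feat, structure_map):
        # resize the structure map to the feature resolution, then predict
        # per-pixel scale (gamma) and shift (beta) parameters
        s = F.interpolate(structure_map, size=feat.shape[-2:], mode="nearest")
        h = self.shared(s)
        return self.norm(feat) * (1 + self.to_gamma(h)) + self.to_beta(h)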
S, A, and E can be trained end-to-end simultane-
ously. Extensive experiments are conducted on representa-
tive benchmarks. Experimental results show that our frame-
work achieves SOTA performance on both PSNR and SSIM metrics with the same architecture on all datasets, as shown
in Fig. 1. In summary, our work’s contribution is four-fold.
1. We propose a new framework for low-light enhance-
ment by conducting structure modeling and guidance
simultaneously to boost the appearance enhancement.
2. We design a novel structure modeling method, where
structure-aware features are formulated and trained
with a GAN loss.
3. A novel structure-guided enhancement approach is
proposed for appearance improvement guided by the
restored structure maps.
4. Extensive experiments are conducted on different
datasets in both sRGB and RAW domains, showing the
effectiveness and generalization of our framework.
|
Yang_Improving_Visual_Grounding_by_Encouraging_Consistent_Gradient-Based_Explanations_CVPR_2023
|
Abstract
We propose a margin-based loss for tuning joint vision-
language models so that their gradient-based explanations
are consistent with region-level annotations provided by
humans for relatively smaller grounding datasets. We re-
fer to this objective as Attention Mask Consistency (AMC)
and demonstrate that it produces superior visual ground-
ing results than previous methods that rely on using vision-
language models to score the outputs of object detectors.
Particularly, a model trained with AMC on top of standard
vision-language modeling objectives obtains a state-of-the-
art accuracy of 86.49% in the Flickr30k visual grounding
benchmark, an absolute improvement of 5.38% when com-
pared to the best previous model trained under the same
level of supervision. Our approach also performs exceed-
ingly well on established benchmarks for referring expres-
sion comprehension where it obtains 80.34% accuracy in
the easy test of RefCOCO+, and 64.55% in the difficult split.
AMC is effective, easy to implement, and is general as it can
be adopted by any vision-language model, and can use any
type of region annotations.
|
1. Introduction
Vision-language pretraining using images paired with
captions has led to models that can transfer well to an ar-
ray of tasks such as visual question answering, image-text
retrieval and visual commonsense reasoning [6,18,22]. Re-
markably, some of these models are also able to perform vi-
sual grounding by relying on gradient-based explanations.
While Vision-Language Models (VLMs) take advantage of
the vast amounts of images and text that can be found on
the web, carefully curated data with grounding annotations
in the form of boxes, regions, or segments is consider-
ably more limited. Our work aims to improve the ground-
ing or localization capabilities of vision-language models
further by tuning them under a training objective that en-
courages their gradient-based explanations to be consistent
with human-provided region-based annotations from visu-
ally grounded data when those are available.
Figure 1. Gradient-based methods can generate heatmaps that ex-
plain the match between images and text for a Vision-language
model (VLM). Our work aims to improve their ability to produce
visual groundings by directly optimizing their gradient-based ex-
planations so that they are consistent with human annotations pro-
vided for a reduced set of images.
Vision-language transformers extend the success of
masked language modeling (MLM) to multi-modal prob-
lems. In vision-language transformers, objectives such
as image-text matching (ITM), and image-text contrastive
losses (ITC) are used in addition to MLM to exploit com-
monalities between images and text [6, 17, 18, 22]. We fur-
ther extend these objectives to include our proposed Atten-
tion Mask Consistency (AMC) objective. Our formulation
is based on the observation that gradient-based explanation
maps obtained using methods such as GradCAM [30], can
be used to explain the image-text matching of a VLM. Our
AMC objective explicitly optimizes these explanations dur-
ing training so that they are consistent with region annota-
tions. Figure 1 illustrates an example input image and text
Figure 2. Overview of our method. Among other objectives, standard vision-language models are trained to produce a matching score
y given an input image-text pair (V, T). For inputs containing an extra level of supervision in the form of region annotations (e.g. a
triplet (V, T, M )), where M is a binary mask indicating the regions annotated by a human, we optimize the GradCAM [30] gradient-based
explanations of the model so that the produced explanations are consistent with region annotations using L_amc by maximizing the energy
in the heatmap that falls inside the region annotation and minimizing what falls outside. We accomplish this through soft margin losses as
described in Sec. 3.2.
pair along with a gradient-based explanation obtained from
a VLM model, a region annotation provided by a human,
and an improved gradient-based explanation after the VLM
model was tuned under our proposed objective.
Our work builds particularly upon the ALBEF
model [17] which incorporates a vision-language model
architecture based on transformers [36] and has already
demonstrated off-the-shelf grounding capabilities using
GradCAM. Gradient-based explanations in the form of
heatmaps have been used extensively to explain the areas
of the input images that most impact an output value of
a model. In our formulation we actively leverage these
heatmaps by designing a loss function that encourages most
of the energy in the heatmaps to fall within the areas of the
input image that most align with human provided region
annotations. Figure 2 shows a detailed overview of our
method and objective function. Given an input image and
text pair, our goal is to maximize a soft margin between
the energy of the heatmap inside the region annotation and
the energy of the heatmap outside the region annotation.
A soft-margin is important since typical human region
annotations in the form of boxes do not exactly outline
objects of different shapes, and in many cases models
should still be able to ground an input text with multiple
regions across the image.
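The soft-margin objective sketched above can be written compactly; the mean-energy formulation and the margin value below are illustrative assumptions rather than the exact AMC definition:
# Hedged sketch of the soft-margin "energy inside vs. outside the region" idea.
import torch.nn.functional as F

def attention_mask_consistency(heatmap, region_mask, margin=0.1):
    # heatmap: (B, H, W) non-negative GradCAM maps
    # region_mask: (B, H, W) binary human region annotations
    eps = 1e-6
    inside = (heatmap * region_mask).sum(dim=(1, 2)) / (region_mask.sum(dim=(1, 2)) + eps)
    outside = (heatmap * (1 - region_mask)).sum(dim=(1, 2)) / ((1 - region_mask).sum(dim=(1, 2)) + eps)
    # penalize cases where mean energy inside does not exceed outside by a margin
    return F.relu(margin + outside - inside).mean()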
We compare AMC extensively against other methods
that use the same level of supervision but instead use an
object detector such as Faster-RCNN [10, 11, 17, 23]. Our
method obtains state-of-the-art pointing game accuracy on
both Flickr30k and RefCOCO+. Our contributions can be
summarized as follows: (1) We introduce a new training
objective, AMC, which is effective, simple to implement
and can handle multiple types of region annotations, (2) We show that AMC can improve the grounding capabilities of
an existing vision-language model – ALBEF, and (3) the
resulting model is state-of-the-art in two benchmarks for
phrase grounding and referring expression comprehension.
|
Yan_GCFAgg_Global_and_Cross-View_Feature_Aggregation_for_Multi-View_Clustering_CVPR_2023
|
Abstract
Multi-view clustering can partition data samples into
their categories by learning a consensus representation in
unsupervised way and has received more and more atten-
tion in recent years. However, most existing deep clustering
methods learn consensus representation or view-specific
representations from multiple views via view-wise aggre-
gation way, where they ignore structure relationship of all
samples. In this paper, we propose a novel multi-view clus-
tering network to address these problems, called Global
and Cross-view Feature Aggregation for Multi-View Clus-
tering (GCFAggMVC). Specifically, the consensus data rep-
resentation from multiple views is obtained via cross-sample
and cross-view feature aggregation, which fully explores the
complementary of similar samples. Moreover, we align the
consensus representation and the view-specific representa-
tion by the structure-guided contrastive learning module,
which makes the view-specific representations from differ-
ent samples with high structure relationship similar. The
proposed module is a flexible multi-view data representa-
tion module, which can be also embedded to the incomplete
multi-view data clustering task via plugging our module
into other frameworks. Extensive experiments show that the
proposed method achieves excellent performance in both
complete multi-view data clustering tasks and incomplete
multi-view data clustering tasks.
|
1. Introduction
With the rapid development of informatization, data is
often collected by various social media or views. For in-
stance, a 3D object can be described from different angles;
a news event is reported from different sources; and an im-
age can be characterized by different types of feature sets,
e.g., SIFT, LBP, and HoG. Such an instance object, which is
*Corresponding author ([email protected])
described from multiple views, is referred to as multi-view
data. Multi-view clustering (MVC) [6], i.e., unsupervisedly
fusing the multi-view data to aid differentiate crucial group-
ing, is a fundamental task in the fields of data mining, pat-
tern recognition, etc, but it remains a challenging problem.
Traditional multi-view clustering methods [7] include
matrix decomposition methods, graph-based multi-view
methods, and subspace-based multi-view methods. The
goal of these methods is to obtain a high-quality consensus
graph or subspace self-representation matrix by various reg-
ularization constraints in order to improve the performance
of clustering. However, most of them directly operate on
the original multiview features or specified kernel features,
which usually include noise and redundant information in-
troduced during the collection or kernel space selection pro-
cesses and are therefore harmful to the clustering tasks.
Deep neural networks have demonstrated excellent per-
formance in data feature representation for many vision
tasks. Deep clustering methods also draw more attention
to researchers [1, 9, 40, 50–52]. These methods efficiently
learn the feature presentation of each view using a view-
specific encoder network, and fuse these learnt representa-
tions from all views to obtain a consensus representation
that can be divided into different categories by a cluster-
ing module. To reduce the influence of view-private in-
formation on clustering, these methods designed different
alignment models. For example, some methods align the
representation distributions or label distributions from dif-
ferent views by KL divergence [14]. They might be hard
to distinguish between clusters, since a category from one
view might be aligned with a different category in another
view. Some methods align the representation from different
views by contrastive learning. Despite these models have
achieved significant improvement in MVC task, the fol-
lowing issues still exist: 1) Almost all existing deep MVC
methods (such as [38,40,49]) are based on view-wise fusion
models, such as weighted-sum fusion of all views or con-
catenating fusion of all views, which makes it difficult to
obtain discriminative consensus representations from mul-
tiple views, since a view or several views of a sample might
contain too much noise or be missing
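The view-wise fusion baseline criticized in issue 1) can be written in a few lines; the learnable softmax weighting below is only the simplest illustrative instance of such fusion, not any specific prior method:
# Simplest form of view-wise fusion: a learnable weighted sum of
# view-specific representations.
import torch
import torch.nn as nn

class WeightedSumFusion(nn.Module):
    def __init__(self, num_views):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_views))

    def forward(self, view_feats):
        # view_feats: list of (B, D) view-specific representations
        w = torch.softmax(self.logits, dim=0)
        return sum(w[v] * z for v, z in enumerate(view_feats))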
|
Zhang_Transforming_Radiance_Field_With_Lipschitz_Network_for_Photorealistic_3D_Scene_CVPR_2023
|
Abstract
Recent advances in 3D scene representation and novel
view synthesis have witnessed the rise of Neural Radiance
Fields (NeRFs). Nevertheless, it is not trivial to exploit
NeRF for the photorealistic 3D scene stylization task, which
aims to generate visually consistent and photorealistic styl-
ized scenes from novel views. Simply coupling NeRF with
photorealistic style transfer (PST) will result in cross-view
inconsistency and degradation of stylized view syntheses.
Through a thorough analysis, we demonstrate that this non-
trivial task can be simplified in a new light: When trans-
forming the appearance representation of a pre-trained
NeRF with Lipschitz mapping, the consistency and pho-
torealism across source views will be seamlessly encoded
into the syntheses. That motivates us to build a concise and
flexible learning framework namely LipRF , which upgrades
arbitrary 2D PST methods with Lipschitz mapping tailored
for the 3D scene. Technically, LipRF first pre-trains a ra-
diance field to reconstruct the 3D scene, and then emulates
the style on each view by 2D PST as the prior to learn a Lip-
schitz network to stylize the pre-trained appearance. In view
of that Lipschitz condition highly impacts the expressivity of
the neural network, we devise an adaptive regularization to
balance the reconstruction and stylization. A gradual gra-
dient aggregation strategy is further introduced to optimize
LipRF in a cost-efficient manner. We conduct extensive ex-
periments to show the high quality and robust performance
of LipRF on both photorealistic 3D stylization and object
appearance editing.
|
1. Introduction
Photorealistic style transfer (PST) [52] is one of the im-
portant tasks for visual content creation, which aims to au-
tomatically apply the color style of a reference image to an-
*Corresponding author
other input (e.g., image [38] or video [65]). In this task,
the stylized result is required to look like a camera shot
and preserve the input structure ( e.g., edges and regions).
Benefiting from the launch of deep learning, a series of so-
phisticated deep PST methods [19,30,38,59,65] have been
developed for practical usage. Recent progress in 3D scene
representation has featured Neural Radiance Field [43, 67]
(NeRF) with efficient training and high-quality view syn-
thesis. This inspires us to go one step further to explore
a more challenging task of photorealistic 3D scene styl-
ization, which is to generate visually consistent and pho-
torealistic stylized syntheses from arbitrary views. Such a
task enables an automatic modification of 3D scene appear-
ance with different lighting, time of day, weather, or other
effects, thereby enhancing user experience and stimulating
emotions for virtual reality [57].
Nevertheless, it is not trivial to build an effective frame-
work for photorealistic 3D scene stylization. The diffi-
culty mainly originates from the fact that there is no valid
photorealistic style loss tailored for training NeRF. In gen-
eral, the image-based PST is commonly tackled via either
the neural style transfer [11] combined with complicated
post-processing [30, 38, 41], or particular network struc-
tures [29, 61, 65]. However, none of them can be directly
applied to the learning of NeRF. As shown in Figure 1, sim-
ply employing the state-of-the-art 2D PST on each view
might result in noise, disharmony and even inconsistency
across views, since the PST methods rely on the size or ob-
ject masks of the inputs. Such downsides will be further
amplified after reconstructing the 3D scene with NeRF.
To alleviate these limitations, we start with a basic un-
derstanding of this task: Though preserving the photoreal-
ism and consistency seems to be different in the context of
2D images, they do have the same essence when moving to
3D volume rendering [40]. From this standpoint, the task
is simplified as a problem to regulate the volume rendering
variance of the radiance field before and after stylization.
According to the studies of color mapping [48,51,52], some
Figure 1. Illustrations of different strategies for photorealistic 3D scene stylization. The 2D PST method [9] generates disharmonious
orange color on the stem and petaline edge. The regions bounded by the dotted ellipses are inconsistent due to the white spot in the first
view. When employing a radiance field to reconstruct the results of 2D PST method, the disharmony and inconsistency are still retained.
Our LipRF successfully eliminates these downsides, and renders high-quality stylized view syntheses.
specific linear mappings of image pixels can nicely preserve
the image structures with photorealistic effect. Motivated
by this, we theoretically demonstrate that a simple yet ef-
fective design of Lipschitz-constrained linear mapping over
appearance representation can elegantly control the volume
rendering variance. Furthermore, we prove that replac-
ing linear mapping with a Lipschitz multilayer perceptron
(MLP) also holds these properties under extra assumptions
which can be relaxed in practice. Such a way completely
eliminates the drawbacks of 2D PST when transforming the
radiance field with a Lipschitz MLP (see Figure 1). In a
nutshell, our analysis verifies that Lipschitz MLP can be in-
terpreted as an implicit regularization to safeguard the 3D
photography of stylized scenes.
By consolidating the idea of transforming the radiance
field with the Lipschitz network, we propose a novel NeRF-
based architecture (namely LipRF) for photorealistic 3D
scene stylization. Technically, LipRF contains two stages:
1) training a radiance field to reconstruct the source 3D
scene; 2) learning a Lipschitz network to transform the pre-
trained appearance representation to the stylized 3D scene
with the guidance of style emulation on each view by ar-
bitrary 2D PST. We adopt the Plenoxels [67] as the base
radiance field due to its advanced reconstruction quality
and compressed appearance representation by spheric har-
monics. Considering that the Lipschitz condition greatly
impacts the expressivity of neural networks, we design an
adaptive regularization based on spectral normalization [44]
to allow a mild relaxation of the Lipschitz constant in each
linear layer. Finally, we capitalize on gradual gradient ag-
gregation to optimize LipRF in a cost-efficient fashion.
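To make the Lipschitz constraint concrete, the following is a minimal sketch (not the authors' exact LipRF implementation) of an appearance MLP whose per-layer Lipschitz constants are bounded with PyTorch's spectral normalization; the layer widths, the 27-dimensional spherical-harmonic input, and the learnable relaxation factor are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class LipschitzMLP(nn.Module):
    """Sketch of an MLP whose per-layer Lipschitz constant is bounded
    via spectral normalization (layer widths are illustrative)."""
    def __init__(self, dim_in=27, dim_hidden=64, dim_out=27, scale=1.0):
        super().__init__()
        # spectral_norm divides each weight by its largest singular value,
        # so every linear map is (approximately) 1-Lipschitz before scaling.
        self.net = nn.Sequential(
            spectral_norm(nn.Linear(dim_in, dim_hidden)), nn.ReLU(),
            spectral_norm(nn.Linear(dim_hidden, dim_hidden)), nn.ReLU(),
            spectral_norm(nn.Linear(dim_hidden, dim_out)),
        )
        # a mild, learnable relaxation of the overall Lipschitz bound
        self.log_scale = nn.Parameter(torch.log(torch.tensor(scale)))

    def forward(self, appearance):
        return torch.exp(self.log_scale) * self.net(appearance)

# usage: transform pre-trained per-voxel appearance coefficients
coeffs = torch.randn(1024, 27)          # e.g. spherical-harmonic coefficients
stylized = LipschitzMLP()(coeffs)
```

Because ReLU is itself 1-Lipschitz, the composed network inherits a bound that is the product of the per-layer scales, which is the kind of implicit regularization discussed above.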
In summary, we have made the following contributions:
(I) We present a thorough and insightful analysis of photo-
realistic 3D scene stylization on the basis of volume render-
ing and Lipschitz transformation. ( II) We build a concise
and flexible framework (LipRF) for photorealistic 3D scene
stylization by novelly transforming the appearance repre-
sentation of a pre-trained NeRF with the Lipschitz Network.
(III) Under the Lipschitz condition, we design adaptive reg-
ularization and gradual gradient aggregation to seek a better trade-off among the reconstruction, stylization quality, and
computational cost. We evaluate LipRF on both photoreal-
istic 3D stylization and object appearance editing tasks to
validate the effectiveness of our proposal.
|
Yu_Semi-Supervised_Domain_Adaptation_With_Source_Label_Adaptation_CVPR_2023
|
Abstract
Semi-Supervised Domain Adaptation (SSDA) involves
learning to classify unseen target data with a few labeled
and lots of unlabeled target data, along with many labeled
source data from a related domain. Current SSDA ap-
proaches usually aim at aligning the target data to the la-
beled source data with feature space mapping and pseudo-
label assignments. Nevertheless, such a source-oriented
model can sometimes align the target data to source data
of the wrong classes, degrading the classification per-
formance. This paper presents a novel source-adaptive
paradigm that adapts the source data to match the tar-
get data. Our key idea is to view the source data as a
noisily-labeled version of the ideal target data. Then, we
propose an SSDA model that cleans up the label noise dy-
namically with the help of a robust cleaner component de-
signed from the target perspective. Since the paradigm is
very different from the core ideas behind existing SSDA ap-
proaches, our proposed model can be easily coupled with
them to improve their performance. Empirical results on
two state-of-the-art SSDA approaches demonstrate that the
proposed model effectively cleans up the noise within the
source labels and exhibits superior performance over those
approaches across benchmark datasets. Our code is avail-
able at https://github.com/chu0802/SLA .
|
1. Introduction
Domain Adaptation (DA) focuses on a general machine
learning scenario where training and test data may originate
from two related but distinct domains: the source domain
and the target domain. Many works have extensively stud-
ied unsupervised DA (UDA), where labels in the target do-
main cannot be accessed, from both theoretical [2, 19, 36]
and algorithmic [5, 8, 15, 16, 22, 37] perspectives. Recently,
Semi-Supervised Domain Adaptation (SSDA), another DA
setting that allows access to a few target labels, has received
more research attention because it is a simple yet realistic
setting for application needs.
The most naïve strategy for SSDA, commonly known as
Figure 1. Top. Training the model with the original source labels
might lead to the misalignment of the target data. Bottom. After
cleaning up the noisy source labels with our SLA framework, the
target data can be aligned with the correct classes.
S+T [21, 33], aims to train a model using the source data
and labeled target data with a standard cross entropy loss.
This strategy often suffers from a well-known domain shift
issue, which stems from the gap between different data dis-
tributions. To address this issue, many state-of-the-art al-
gorithms attempt to explore better use of the unlabeled tar-
get data so that the target distribution can be aligned with
the source distribution. Recently, several Semi-Supervised
Learning (SSL) algorithms have been applied for SSDA
[12, 21, 30] to regularize the unlabeled data, such as en-
tropy minimization [6], pseudo-labeling [11,24] and consis-
tency regularization [1, 24]. These classic source-oriented
strategies have prevailed for a long time. However, these
algorithms typically require the target data to closely match
some semantically similar source data in the feature space.
Therefore, if the S+T space has been misaligned, it can be
challenging to recover from the misalignment, as illustrated
in Figure 1.
We take a deeper look into a specific example from the
Office-Home dataset [27] to confirm the abovementioned is-
sue. Figure 2 visualizes the feature space trained by S+T us-
ing t-SNE [3]. We observed that the misalignment between
Figure 2. Feature visualizations with t-SNE for an example of the
misalignment on the Office-Home A→C dataset with ResNet34.
The model is trained by S+T. Left: 0-th iteration. Right: 5000-th iteration. We observe that the misalignment has already happened
at a very early stage. Guided by source labels and a few target
labels, a portion of the target data from the 59th class misaligns
with the source data from the 7th class.
the source and the target data has happened at a very early
stage. For instance, in the beginning, a portion of the target
data from the 59th class is close to the source data from the
7th class. Since we only have access to source labels and
a few target labels, without proper guidance from enough
target labels, such misalignment becomes more severe after
being trained by S+T. Table 1 shows the partial confusion
matrix of S+T. Roughly 40% of the target data in the 59th
class is mispredicted to the 7th class, and only around 20%
of the data is classified correctly.
From the case study above, we argue that relying on
the source labels like S+T can misguide the model to learn
wrong classes for some target data. That is, source labels
can be viewed as a noisy version of the ideal labels for target
classification. Based on the argument, the setting of SSDA
is more like a Noisy Label Learning (NLL) problem, with a
massive amount of noisy labels (source labels) and a small
number of clean labels (target labels).
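To make this noisy-label view concrete, the following hypothetical sketch softens the one-hot source labels with predictions from a model reflecting the target view; the mixing weight alpha and the function names are illustrative assumptions, not the exact SLA component introduced below.

```python
import torch
import torch.nn.functional as F

def adapt_source_labels(target_view_logits, source_onehot, alpha=0.3):
    """Treat source labels as noisy and soften them with the belief of a
    target-view model (illustrative only)."""
    pseudo = F.softmax(target_view_logits, dim=1)            # target-view belief
    return (1.0 - alpha) * source_onehot + alpha * pseudo    # adapted soft label

def source_loss(model_logits, adapted_labels):
    """Cross-entropy against the adapted (soft) source labels."""
    log_prob = F.log_softmax(model_logits, dim=1)
    return -(adapted_labels * log_prob).sum(dim=1).mean()
```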
Learning with noisy labels is a widely studied machine
learning problem. A popular solution is to clean up the
noisy labels with the help of another model, which can also
be referred to as label correction [28]. To approach Do-
main Adaptation as an NLL problem, we borrow the idea
from label correction and propose a Source Label Adapta-
tion (SLA) framework, as shown in Figure 1. We construct
a label adaptation component that provides the view from
the target data and dynamically cleans up the noisy source
labels at each iteration. Unlike other earlier works that study
how to leverage the unlabeled target data, we mainly inves-
tigate how to train the source data with the adapted labels
to better suit the ideal target space. This source-adaptive
paradigm is entirely orthogonal to the core ideas behind ex-
isting SSDA algorithms. Thus, we can combine our frame-
work with those algorithms to get superior results. We sum-
marize our contributions as follows.
Table 1. A partial confusion matrix of S+T on the 3-shot Office-Home A→C dataset with ResNet34.
True\Pred    Class 7    Class 59    Class 41    Others
Class 59     38.5%      19.8%       13.5%       28.2%
• We argue that the classic source-oriented methods
might still suffer from the biased feature space de-
rived from S+T. To escape this predicament, we pro-
pose adapting the source data to the target space by
modifying the original source labels.
• We address DA as a particular case of NLL problems
and present a novel source-adaptive paradigm. Our
SLA framework can be easily coupled with other ex-
isting algorithms to boost their performance.
• We demonstrate the usefulness of our proposed SLA
framework when coupled with state-of-the-art SSDA
algorithms. The framework significantly improved ex-
isting algorithms on two major benchmarks, inspiring
a new direction for solving DA problems.
|
Zhang_Gradient_Norm_Aware_Minimization_Seeks_First-Order_Flatness_and_Improves_Generalization_CVPR_2023
|
Abstract
Recently, flat minima have been proven to be effective for im-
proving generalization and sharpness-aware minimization
(SAM) achieves state-of-the-art performance. Yet the cur-
rent definitions of flatness discussed in SAM and its follow-ups are limited to the zeroth-order flatness (i.e., the worst-
case loss within a perturbation radius). We show that the
zeroth-order flatness can be insufficient to discriminate min-
ima with low generalization error from those with high gen-
eralization error both when there is a single minimum or
multiple minima within the given perturbation radius. Thus
we present first-order flatness, a stronger measure of flat-
ness focusing on the maximal gradient norm within a per-
turbation radius which bounds both the maximal eigenvalue
of Hessian at local minima and the regularization function
of SAM. We also present a novel training procedure named
Gradient norm Aware Minimization (GAM) to seek minima
with uniformly small curvature across all directions. Ex-
perimental results show that GAM improves the generaliza-
tion of models trained with current optimizers such as SGD
and AdamW on various datasets and networks. Further-
more, we show that GAM can help SAM find flatter minima
and achieve better generalization. The code is available at
https://github.com/xxgege/GAM .
|
1. Introduction
Current neural networks have achieved promising results
in a wide range of fields [36, 53, 55, 67, 73–75, 78], yet they
are typically heavily over-parameterized [2, 3]. Such heavy
overparameterization leads to severe overfitting and poor
generalization to unseen data when the model is learned
simply with common loss functions (e.g., cross-entropy)
[27]. Thus effective training algorithms are required to limit
the negative effects of overfitting training data and find gen-
†Equal contribution, *Corresponding author
Figure 1. The comparison of the zeroth-order flatness (ZOF) and
first-order flatness (FOF). Given a perturbation radius ρ, ZOF can
fail to indicate generalization error both when there are multi-
ple minima (1a) and a single minimum (1b) in the radius while
FOF remains discriminative. The height of blue rectangles in
curly brackets is the value of ZOF and the height of gray trian-
gles (which indicates the slope) is the value of FOF. In Figure 1a,
when ρ is large enough to cover multiple minima, ZOF cannot measure the fluctuation frequency, while FOF prefers the flatter valley, which has a smaller gradient norm. When ρ is small and covers only a single minimum, the maximum loss within ρ can be
misleading as it can be misaligned with the uptrend of loss. As
shown in Figure 1b, ZOF prefers the valley on the right, which has
a larger generalization error (the orange dotted line), while FOF
prefers the left one.
eralizable solutions.
Many studies try to improve model generalization by
modifying the training procedure, such as batch normal-
ization [26], dropout [24], and data augmentation [13, 68,
72]. Especially, some works discuss the connection be-
tween the geometry of the loss landscape and general-
ization [19, 22, 27]. A branch of effective approaches,
sharpness-Aware Minimization (SAM) [19] and its variants
[16,17,34,43,48,77], minimizes the worst-case loss within
a perturbation radius, which we call zeroth-order flatness. It
is proven that optimizing the zeroth-order flatness leads to
lower generalization error and achieves state-of-the-art per-
formance on various image classification tasks [19, 39, 80].
Optimizing the worst case, however, relies on a reason-
able choice of perturbation radius ρ. As a prefixed hyper-
parameter in SAM or a hyperparameter under parameter re-
scaling in its variants, such as ASAM [39], ρ cannot always be a perfect choice throughout the training process. We show that the zeroth-order flatness may fail to indicate the generalization error for a given ρ. As in Figure 1a, when ρ covers multiple minima, the zeroth-order flatness (SAM) cannot measure the fluctuation frequency. When there is a single minimum within ρ, as in Figure 1b, the observation radius is limited and the maximum loss in ρ can be misaligned with the uptrend of loss. So the zeroth-order flatness can be misleading, and knowledge of the loss gradient is required for generalization error minimization.
To address this problem, we introduce first-order flat-
ness, which controls the maximum gradient norm in the
neighborhood of minima. We show that the first-order flat-
ness is stronger than the zeroth-order flatness, as the intensity of the loss fluctuation can be bounded by the max-
imum gradient. When the perturbation radius covers mul-
tiple minima, which we show is quite common in prac-
tice, the first-order flatness discriminates more drastic jit-
ters from real flat valleys, as in Figure 1a. When the per-
turbation radius is small and covers only one minimum, the
first-order flatness demonstrates the trend of loss gradient
and can help indicate generalization error. We further show
that the first-order flatness directly controls the maximal
eigenvalue of Hessian of the training loss, which is a proper
sharpness/flatness measure indicating the loss uptrend un-
der an adversarial perturbation to the weights [31–33].
To optimize the first-order flatness in deep model
training, we propose Gradient norm Aware Minimization
(GAM), which approximates the maximum gradient norm
with stochastic gradient ascent and Hessian-vector products
to avoid the materialization of the Hessian matrix.
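As a rough illustration (not the authors' exact GAM update), the sketch below penalizes the gradient norm at the current weights as a simplified stand-in for the maximal gradient norm within a perturbation radius; differentiating through the norm with create_graph=True implicitly performs the Hessian-vector products mentioned above, and the penalty weight lam is an assumption.

```python
import torch

def gradient_norm_penalized_step(model, loss_fn, x, y, optimizer, lam=0.1):
    """One illustrative training step that adds a gradient-norm penalty
    (a first-order-flatness proxy); not the authors' exact GAM procedure."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    params = [p for p in model.parameters() if p.requires_grad]
    # create_graph=True lets autograd differentiate through the gradient norm,
    # which internally amounts to Hessian-vector products.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads) + 1e-12)
    (loss + lam * grad_norm).backward()
    optimizer.step()
    return loss.item(), grad_norm.item()
```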
We summarize our contributions as follows.
• We present first-order flatness, which measures the largest
gradient norm in the neighborhood of minima. We show
that the first-order flatness is stronger than current zeroth-
order flatness and it controls the maximum eigenvalue of
Hessian.
• We propose a novel training procedure, GAM, to simulta-
neously optimize prediction loss and first-order flatness.
We analyze the generalization error and the convergence
of GAM.
• We empirically show that GAM considerably improves
model generalization when combined with current opti-
mizers such as SGD and AdamW across a wide range of
datasets and networks. We show that GAM further im-
proves the generalization of models trained with SAM.
• We empirically validate that GAM indeed finds flatter op-
tima with lower Hessian spectra.
|
Yang_TINC_Tree-Structured_Implicit_Neural_Compression_CVPR_2023
|
Abstract
Implicit neural representation (INR) can describe the target
scenes with high fidelity using a small number of param-
eters, and is emerging as a promising data compression
technique. However, limited spectrum coverage is intrinsic
to INR, and it is non-trivial to remove redundancy in diverse
complex data effectively. Preliminary studies can only
exploit either global or local correlation in the target data
and thus of limited performance. In this paper, we propose
a Tree-structured Implicit Neural Compression (TINC)
to conduct compact representation for local regions and
extract the shared features of these local representations
in a hierarchical manner. Specifically, we use Multi-Layer
Perceptrons (MLPs) to fit the partitioned local regions, and
these MLPs are organized in tree structure to share pa-
rameters according to the spatial distance. The parameter
sharing scheme not only ensures the continuity between
adjacent regions, but also jointly removes the local and
non-local redundancy. Extensive experiments show that
TINC improves the compression fidelity of INR, and has
shown impressive compression capabilities over commer-
cial tools and other deep learning based methods. Besides,
the approach is of high flexibility and can be tailored for
different data and parameter settings. The source code can
be found at https://github.com/RichealYoung/TINC.
|
1. Introduction
In the big data era, there exists a massive and continuously growing amount of visual data from different fields, ranging from surveillance and entertainment to biology and medical diagnosis. The urgent need for efficient data storage, sharing, and transmission requires effective compression techniques.
Although image and video compression have been studied
for decades and there are a number of widely used commer-
cial tools, compression tools tailored for medical and biological data are not readily available. (This work was done in collaboration with Tingxiong Xiao and Yuxiao Cheng, under the supervision of Prof. Jinli Suo and Prof. Qionghai Dai, affiliated with the Department of Automation, Tsinghua Univ.)
Implicit neural representation (INR) is becoming
widely-used for scene rendering [19, 21, 23, 31, 44], shape
estimation [8, 11, 25, 29], and dynamics modeling [15, 26,
28]. Due to its powerful representation capability, INR can
describe nature scenes at high fidelity with a much smaller
number of parameters than raw discrete grid representation,
and thus serves as a promising data compression technique.
In spite of its big potential, INR-based compression is quite limited when confronted with large-sized data [20], since
INR is intrinsically of limited spectrum coverage and can-
not envelop the spectrum of the target data, as analyzed
in [41]. Two pioneering works using INR for data com-
pression, including NeRV [7] and SCI [41], have attempted
to handle this issue in their respective ways. Specifically,
NeRV introduces the convolution operation into INR, which
can reduce the required number of parameters using the
weight sharing mechanism. However, convolution is spa-
tially invariant and thus limits NeRV’s representation accu-
racy on complex data with spatially varying feature distributions. Differently, to obtain high fidelity on complex data,
SCI adopts divide-and-conquer strategy and partitions the
data into blocks within INR’s concentrated spectrum en-
velop. This improves the local fidelity, but cannot remove
non-local redundancies for a higher compression ratio, and tends to cause blocking artifacts.
To compress large and complex data with INR, in this pa-
per we propose to build a tree-structured Multi-Layer Per-
ceptrons (MLPs), which consists of a set of INRs to rep-
resent local regions in a compact manner and organizes
them under a hierarchical architecture for parameter sharing
and higher compression ratio. Specifically, we first draw
on the idea of ensemble learning [12] and use a divide-
and-conquer strategy [20, 41] to compress different regions
with multiple MLPs separately. Then we incorporate these
MLPs with a tree structure to extract their shared param-
eters hierarchically, following the rule that spatially closer
regions are of higher similarity and share more parameters.
Such a joint representation strategy can remove the redun-
dancy both within each local region and among non-local
regions, and also ensures the continuity between adjacent
regions. The scheme of the proposed network is illustrated
in Fig. 1, and we name our approach TINC (Tree-structured
Implicit Neural Compression).
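As a hedged sketch of this hierarchical parameter-sharing idea (not TINC's exact architecture), the module below routes each local region through the MLP layers along its root-to-leaf path, so spatially closer regions share more layers; the tree depth, widths, and leaf indexing are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TreeINR(nn.Module):
    """Illustrative tree of MLP layers: each leaf (local region) runs the
    layers along its root-to-leaf path, so nearby regions share the upper
    layers. Depth and width are assumptions, not TINC's configuration."""
    def __init__(self, depth=3, width=64, dim_in=3, dim_out=1):
        super().__init__()
        self.depth = depth
        self.embed = nn.Linear(dim_in, width)
        # level l has 2**l nodes; a node's parameters are shared by all leaves below it
        self.levels = nn.ModuleList([
            nn.ModuleList([nn.Linear(width, width) for _ in range(2 ** l)])
            for l in range(depth)
        ])
        self.heads = nn.ModuleList([nn.Linear(width, dim_out) for _ in range(2 ** depth)])

    def forward(self, coords, leaf_idx):
        h = torch.relu(self.embed(coords))
        for l in range(self.depth):
            node = leaf_idx >> (self.depth - l)      # ancestor index at level l
            h = torch.relu(self.levels[l][node](h))
        return self.heads[leaf_idx](h)

# usage: fit the block of the volume assigned to leaf region 5
model = TreeINR()
pred = model(torch.rand(4096, 3), leaf_idx=5)
```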
Using the massive and diverse biomedical data, we con-
duct extensive experiments to validate that TINC greatly
improves the capability of INR and even outperforms the
commercial compression tools (H.264 and HEVC) under
high compression ratios. The proposed TINC is also a gen-
eral framework, which can be flexibly adapted to diverse
data and varying settings.
|
Zhang_Unlearnable_Clusters_Towards_Label-Agnostic_Unlearnable_Examples_CVPR_2023
|
Abstract
There is a growing interest in developing unlearnable
examples (UEs) against visual privacy leaks on the Internet.
UEs are training samples added with invisible but unlearn-
able noise, which have been found can prevent unauthorized
training of machine learning models. UEs typically are gen-
erated via a bilevel optimization framework with a surrogate
model to remove (minimize) errors from the original sam-
ples, and then applied to protect the data against unknown
target models. However, existing UE generation methods
all rely on an ideal assumption called label-consistency ,
where the hackers and protectors are assumed to hold the
same label for a given sample. In this work, we propose
and promote a more practical label-agnostic setting, where
the hackers may exploit the protected data quite differently
from the protectors. E.g., an m-class unlearnable dataset held by the protector may be exploited by the hacker as an n-class dataset. Existing UE generation methods are ren-
dered ineffective in this challenging setting. To tackle this
challenge, we present a novel technique called Unlearn-
able Clusters (UCs) to generate label-agnostic unlearn-
able examples with cluster-wise perturbations. Furthermore,
we propose to leverage Vision-and-Language Pre-trained
Models (VLPMs) like CLIP as the surrogate model to im-
prove the transferability of the crafted UCs to diverse do-
mains. We empirically verify the effectiveness of our pro-
posed approach under a variety of settings with different
datasets, target models, and even commercial platforms Mi-
crosoft Azure and Baidu PaddlePaddle . Code is avail-
able at https://github.com/jiamingzhang94/
Unlearnable-Clusters .
|
1. Introduction
While the huge amount of “free” data available on the
Internet has been key to the success of deep learning and
computer vision, this has also raised public concerns on the
unauthorized exploitation of personal data uploaded to the
*This work was done while the author interned at Peng Cheng Lab.
†Corresponding authors.
Figure 1. An illustration of two different data protection assumptions: label-consistency vs. label-agnostic, where the hacker ex-
tions: label-consistency vs.label-agnostic , where the hacker ex-
ploits the protected data in different manners.
Internet to train commercial or even malicious models [16].
For example, a company named Clearview AI has been
found to have scraped billions of personal images from Face-
book, YouTube, Venmo and millions of other websites to
construct a commercial facial recognition application [44].
This has motivated the proposal of Unlearnable Examples
(UEs) [17] to make data unlearnable (or unusable) to ma-
chine learning models/services. Similar techniques are also
known as availability attacks [2, 41] or indiscriminate poi-
soning attacks [14] in the literature. These techniques allow
users to actively add protective noise into their private data to avoid unauthorized exploitation, rather than putting their trust in the hands of large corporations.
The original UE generation method generates error-
minimizing noise via a bilevel min-min optimization frame-
work with a surrogate model [17]. The noise can then be
added to samples in a training set in either a sample-wise
or class-wise manner to make the entire dataset unlearnable
to different DNNs. It has been found that this method can-
not survive adversarial training, which has been addressed
by a recent method [11]. In this work, we identify one
common assumption made by existing UE methods: label-
consistency , where the hackers will exploit the protected
dataset in the same way as the protector including the labels.
This means that, for the same image, the hacker and protec-
tor hold the same label. We argue that this assumption is
too ideal, and it is possible that the hackers will collect the
protected (unlearnable) samples into a dataset for a different
task and label the dataset into different number of classes.
As illustrated in Figure 1, an image can be labelled with
different annotated labels (cat or animal), showing that an m-class (e.g., 10-class) unlearnable dataset may be exploited by the hacker as an n-class (e.g., 5-class or 20-class) dataset
depending on its actual needs. We term this more generic
assumption as label-agnostic and propose a novel method
Unlearnable Clusters (UCs) to generate more effective and
transferable unlearnable examples under this harsh setting.
In Figure 2 (a), we show that this more generic label-
agnostic setting poses a unique transferability challenge
for the noise generated by existing methods like Error-
Minimizing Noise (EMinN) [17], Adversarial Poisoning
(AdvPoison) [10], Synthetic Perturbations (SynPer) [41] and
DeepConfuse [9]. This indicates that the protective noise
generated by these methods are label-dependent and are ren-
dered ineffective when presented with different number of
classes. As such, we need more fundamental approaches
to make a dataset unlearnable regardless of the annotations.
To this end, we start by analyzing the working mechanism
of UEs generated by EMinN, AdvPoison as they are very
representative under the label-consistency setting. Through
a set of visual analyses, we find that the main reason why
they could break supervised learners is that the generated
noise tends to disrupt the distributional uniformity and dis-
crepancy in the deep representation space. Uniformity refers
to the property that the manifold of UEs in the deep rep-
resentation space does not deviate much from that of the
clean examples, while discrepancy refers to the property
that examples belonging to the same class are richly diverse
in the representation space. Inspired by the above obser-
vation, we propose a novel approach called Unlearnable
Clusters (UCs) to generate label-agnostic UEs using cluster-
wise (rather than class-wise) perturbations. This allows us
to achieve a simultaneous disruption of the uniformity and
discrepancy without knowing the label information.
Arguably, the choice of a proper surrogate model also
plays an important role in generating effective UEs. Previ-
ous methods generate UEs by directly attacking a surrogate
model and then transfer the generated UEs to fight against
a diverse set of target models [10, 17]. This may be easily
achievable under the label-consistency setting, but may fail
badly under the label-agnostic setting. However, even un-
der the label-consistency setting, few works have studied
the impact of the surrogate model on the final unlearnable
performance. To generate effective, and more importantly,
transferable UEs under the label-agnostic setting, we need
to explore more generic surrogate model selection strategies,
especially those that can be tailored to a wider range of unknown target models. Intuitively, the surrogate model should
be a classification DNN that contains as many classes as
possible so as to facilitate the recognition and protection of
billions of images on the Internet. In this paper, we propose
to leverage the large-scale Vision-and-Language Pre-trained
Models (VLPMs) [22,23,30] like CLIP [30] as the surrogate
model. Pre-trained on over 400 million text-to-image pairs,
CLIP has the power to extract the representation of extremely
diverse semantics. Meanwhile, VLPMs are pre-trained with
a textual description rather than a one-hot label to align with
the image, making them less overfit to the actual class “la-
bels”. In this work, we leverage the image encoder of CLIP
to extract the embeddings of the input images and then use
the embeddings to generate more transferable UCs.
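The following is a minimal sketch of such a cluster-wise pipeline under stated assumptions: images are already CLIP-preprocessed, the per-cluster noise is initialized randomly for brevity (the actual UC generator optimizes it), and the number of clusters and the L-infinity budget are illustrative.

```python
import torch
import clip                                  # OpenAI CLIP package
from sklearn.cluster import KMeans

def cluster_wise_noise(images, n_clusters=8, eps=8 / 255, device="cpu"):
    """Sketch: embed images with CLIP, cluster the embeddings, and assign one
    shared (here random) perturbation per cluster."""
    model, _ = clip.load("ViT-B/32", device=device)
    with torch.no_grad():
        emb = model.encode_image(images.to(device)).float().cpu().numpy()
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)
    # one perturbation per cluster, bounded by an L_inf budget
    deltas = (torch.rand(n_clusters, *images.shape[1:]) * 2 - 1) * eps
    return images + deltas[torch.as_tensor(labels)], labels
```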
We evaluate our UC approach with different backbones
and datasets, all in a black-box setting (the protector does
not know the attacker’s network architecture or the class
labels). Cluster-wise unlearnable noise can also prevent un-
supervised exploitation against contrastive learning to certain
extent, proving its superiority to existing UEs. We also com-
pare UC with existing UE methods against two commercial
machine learning platforms: Microsoft Azure and Baidu PaddlePaddle. To the best of our knowledge, this is the first physical-world attack on commercial APIs in this line of
work. Our main contributions are summarized as follows:
•We promote a more generic data protection assumption
called label-agnostic , which allows the hackers to ex-
ploit the protected dataset differently (in terms of the
annotated class labels) as the protector. This opens up
a more practical and challenging setting against unau-
thorized training of machine learning models.
•We reveal the working mechanism of existing UE gener-
ation methods: they all disrupt the distributional unifor-
mity and discrepancy in the deep representation space.
•We propose a novel approach called Unlearnable Clus-
ters(UCs) to generate label-agnostic UEs with cluster-
wise perturbations without knowing the label informa-
tion. We also leverage VLPMs like CLIP as the surro-
gate model to craft more transferable UCs.
•We empirically verify the effectiveness of our proposed
approach with different backbones on different datasets.
We also show its effectiveness in protecting private data
against commercial machine learning platforms Azure
andPaddlePaddle .
|
Yang_ContraNeRF_Generalizable_Neural_Radiance_Fields_for_Synthetic-to-Real_Novel_View_Synthesis_CVPR_2023
|
Abstract
Although many recent works have investigated general-
izable NeRF-based novel view synthesis for unseen scenes,
they seldom consider the synthetic-to-real generalization,
which is desired in many practical applications. In this
work, we first investigate the effects of synthetic data in
synthetic-to-real novel view synthesis and surprisingly ob-
serve that models trained with synthetic data tend to pro-
duce sharper but less accurate volume densities. For pix-
els where the volume densities are correct, fine-grained de-
tails will be obtained. Otherwise, severe artifacts will be
produced. To maintain the advantages of using synthetic
data while avoiding its negative effects, we propose to intro-
duce geometry-aware contrastive learning to learn multi-
view consistent features with geometric constraints. Mean-
while, we adopt cross-view attention to further enhance
the geometry perception of features by querying features
across input views. Experiments demonstrate that under
the synthetic-to-real setting, our method can render images
with higher quality and better fine-grained details, outper-
forming existing generalizable novel view synthesis meth-
ods in terms of PSNR, SSIM, and LPIPS. When trained on
real data, our method also achieves state-of-the-art results.
https://haoy945.github.io/contranerf/
|
1. Introduction
Novel view synthesis is a classical problem in computer
vision, which aims to produce photo-realistic images for un-
seen viewpoints [2,5,10,36,40]. Recently, Neural Radiance
Fields (NeRF) [25] proposes to achieve novel view synthesis via continuous scene modeling with a neural network, which quickly attracts widespread attention due to
its surprising results. However, the vanilla NeRF is actu-
ally designed to fit the continuous 5D radiance field of a
given scene, which often fails to generalize to new scenes
and datasets. How to improve the generalization ability of
neural scene representation is a challenging problem.
Recent works, such as pixelNeRF [46], IBRNet [37],
MVSNeRF [4] and GeoNeRF [19], investigate how to
achieve generalizable novel view synthesis based on neu-
ral radiance fields. However, these works mainly focus on
the generalization of NeRF to unseen scenes and seldom
consider the synthetic-to-real generalization, i.e., training
NeRF with synthetic data while testing it on real data. On
the other hand, synthetic-to-real novel view synthesis is de-
sired in many practical applications where the collection of
dense view 3D data is expensive (e.g., autonomous driv-
ing, robotics, and unmanned aerial vehicle [34]). Although
some works directly use synthetic data such as Google
Scanned Objects [14] in model training, they usually over-
look the domain gaps between the synthetic and real data
as well as possible negative effects of using synthetic data.
In 2D computer vision, it is common sense that synthetic
training data usually hurts the model’s generalization abil-
ity to real-world applications [1, 8, 44]. Will synthetic data
be effective in novel view synthesis?
In this work, we first investigate the effectiveness of syn-
thetic data in NeRF’s training via extensive experiments.
Specifically, we train generalizable NeRF models using a
synthetic dataset of indoor scenes called 3D-FRONT [15],
and test the models on a real indoor dataset called Scan-
Net [9]. Surprisingly, we observe that the use of synthetic
data tends to result in more artifacts on one hand but bet-
ter fine-grained details on the other hand (see Fig.1 and
Sec.3.2 for more details). Moreover, we observe that mod-
els trained on synthetic data tend to predict sharper but less
Figure 1. Fig.1a is the ground truth of the target image to be rendered. Fig.1b and Fig.1c are the rendered images when models are trained
on ScanNet [9] and 3D-FRONT [15], respectively. Compare to Fig.1b, Fig.1c is more detailed (see purple box) but has more artifacts (see
pink box). Fig.1d and Fig.1e further show the volume density (the redder the color, the higher the density) along the epipolar line projected
from the orange points in Fig.4a to the source view. The model trained on 3D-FRONT prefers to predict the volume density with a sharper
distribution, but sometimes the predicted volume density is not accurate (see line 1 in Fig.1e), resulting in severe artifacts.
Figure 2. Deviation and error of predicted depth when trained
with synthetic and real data, respectively. We count the devi-
ation and error of the predicted depth for each pixel in the test
dataset and plot them as the histogram. The depth is calculated by
aggregating depth of the sampled points along the rendering ray,
similar to the process of color rendering. Compared to the model
trained with the real data, the model trained with the synthetic data
tends to predict depths with small deviations but large errors, i.e.,
density distributions that are sharper but less geometrically accu-
rate.
accurate volume densities (see Fig.2). In this case, better
fine-grained details can be obtained once the prediction of
geometry (i.e., volume density) is correct, while severe arti-
facts will be produced otherwise. This motivates us to con-
sider one effective way to generalize from synthetic data to
real data in a geometry-aware manner.
To improve the synthetic-to-real generalization ability of
NeRF, we propose ContraNeRF, a novel approach that gen-
eralizes well from synthetic data to real data via contrastive
learning with geometry consistency. In many 2D vision
tasks, contrastive learning has been shown to improve the
generalization ability of models [20, 42, 43] by enhancing
the consistency of positive pairs. In 3D scenes, geometry is
related to multi-view appearance consistency [24, 37], and
contrastive learning may help models predict accurate ge-
ometry by enhancing multi-view consistency. In this paper,
we propose geometry-aware contrastive learning to learn
a multi-view consistent features representation by compar-
ing the similarities of local features for each pair of source views (see Fig.3). Specifically, for pixels of each source
view, we first aggregate information along the ray projected
to other source views to get the geometry-enhanced fea-
tures. Then, we sample a batch of target pixels from each
source view as the training batch for contrastive learning
and project them to other views to get positive and neg-
ative samples. The InfoNCE loss [35] is calculated in a
weighted manner. Finally, we render the ray by learning
a general view interpolation function following [37]. Ex-
periments show that when trained on the synthetic data,
our method outperforms the recent concurrent generalizable
NeRF works [4, 19, 24, 37, 46] and can render high-quality
novel view while preserving fine-grained details for unseen
scenes. Moreover, under the real-to-real setting, our method
also performs better than existing neural radiance field gen-
eralization methods. In summary, our contributions are:
1. Investigate the effects of synthetic data in NeRF-based
novel view synthesis and observe that models trained
on synthetic data tend to predict sharper but less accu-
rate volume densities when tested on real data;
2. Propose geometry-aware contrastive learning to learn
multi-view consistent features with geometric con-
straints, which significantly improves the model’s
synthetic-to-real generalization ability;
3. Our method achieves state-of-the-art results for gener-
alizable novel view synthesis under both synthetic-to-
real and real-to-real settings.
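Returning to the geometry-aware contrastive objective described above, the snippet below shows one plausible form of a weighted InfoNCE loss over pixel features projected across views; the weighting scheme, masks, and temperature are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def weighted_info_nce(anchor, candidates, pos_mask, weights, tau=0.07):
    """Sketch of an InfoNCE-style loss where positives (pixels whose projections
    agree geometrically across views) can be weighted.
    anchor: (N, C), candidates: (M, C), pos_mask/weights: (N, M)."""
    anchor = F.normalize(anchor, dim=-1)
    candidates = F.normalize(candidates, dim=-1)
    logits = anchor @ candidates.t() / tau                 # (N, M) similarities
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # weighted average of the log-likelihood of the positive candidates
    pos = (weights * pos_mask * log_prob).sum(1) / (weights * pos_mask).sum(1).clamp(min=1e-8)
    return -pos.mean()
```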
|
Zhang_Towards_Unsupervised_Object_Detection_From_LiDAR_Point_Clouds_CVPR_2023
|
Abstract
In this paper, we study the problem of unsupervised ob-
ject detection from 3D point clouds in self-driving scenes.
We present a simple yet effective method that exploits (i)
point clustering in near-range areas where the point clouds
are dense, (ii) temporal consistency to filter out noisy unsu-
pervised detections, (iii) translation equivariance of CNNs
to extend the auto-labels to long range, and (iv) self-
supervision for improving on its own. Our approach, OYS-
TER (Object Discovery via Spatio-Temporal Refinement),
does not impose constraints on data collection (such as
repeated traversals of the same location), is able to de-
tect objects in a zero-shot manner without supervised fine-
tuning (even in sparse, distant regions), and continues to
self-improve given more rounds of iterative self-training. To
better measure model performance in self-driving scenar-
ios, we propose a new planning-centric perception metric
based on distance-to-collision. We demonstrate that our
unsupervised object detector significantly outperforms un-
supervised baselines on PandaSet and Argoverse 2 Sen-
sor dataset, showing promise that self-supervision com-
bined with object priors can enable object discovery in
the wild. For more information, visit the project website:
https://waabi.ai/research/oyster .
|
1. Introduction
When a large set of annotations is available, supervised
learning can solve the task of 3D object detection remark-
ably well thanks to the power of neural networks, as proven
by many successful object detectors developed in the past
decade [24,36,47,70]. However, since most existing data is
unlabeled, human annotations are currently the bottleneck
for data-driven learning algorithms, as they require tedious
manual effort that is very costly in practice. While there has
been some effort on using weaker supervision to train ob-
ject detectors [4, 57, 68], it is worth noting that human and
animal brains are able to perceive objects without explicit
†Work done at Waabi. Mengye is now at New York University.
Figure 1. Visualization of a sequence of five 3D point clouds. At
near range, the point clouds are very dense and we can clearly
distinguish clusters, motivating us to use prior knowledge. At far
range, the point clouds are quite sparse and the objects are not as apparent, leading us to explore zero-shot generalization.
labels at all [2]. This naturally inspires us to ask whether
we can design unsupervised learning algorithms that dis-
cover objects from raw streams of sensor data on their own.
Unsupervised object detection has long been studied in
computer vision, albeit in various forms. For instance, nu-
merous ways of unsupervised object proposals were consid-
ered as the first stage of an object detector [20]. These meth-
ods leverage a variety of cues including colors and edge
boundaries [1], graph structures [18], and motion cues [50].
While those methods are no longer popular in today’s object
detectors due to end-to-end supervised training, they offer
important intuitions about what an object is.
In recent years, unsupervised object detection has made
a comeback under the name of object-centric models [6,15,
33, 34, 38, 39, 58]. The essential idea is to train an auto-
encoder with a structured decoder such that the network
is forced to decompose the scene into a set of individual
objects during reconstruction. Most of those models only
show experiments on synthetic toy datasets, where the back-
ground lacks any fine-grained details, and the foreground
objects have simplistic shapes and distinct colors. The rea-
son why object-centric models have struggled to scale to re-
alistic data is that their mechanism of object decomposition
is based on careful balancing of model capacities between
the foreground and background modules. Consequently, an
increase in model capacity, which is needed for real-world
data, breaks the brittle balance between modules and results
in a failure in scene decomposition. Unsupervised object
discovery in the wild remains an open challenge.
In this work, we study unsupervised object detection
from point clouds in the context of self-driving vehicles
(SDVs). This is a challenging task due to occlusion as well
as the sparsity of the observations particularly at range. We
refer the reader to Fig. 1 for an example. Despite its appeal,
unsupervised object detection from LiDAR has received lit-
tle attention. Recent work [66] exploits repeated traver-
sals of the same region to understand the persistence of a
point over time and discover mobile objects from that in-
formation. However, the assumption of repeated traversals
in acquiring point clouds restricts the applicability of the
method. Instead, we study the most generic setting of unsu-
pervised detection given any raw sequence of point clouds.
Our method OYSTER (Object Discovery via Spatio-Temporal Refinement) carefully combines key ideas from
density-based spatial clustering, temporal consistency,
equivariance and self-supervised learning in a unified
framework that exploits their strengths while overcoming
their shortcomings. Firstly, we exploit point clustering to
obtain initial pseudo-labels to bootstrap an object detector
in the near range, where point density is high (see Fig. 1).
We then employ unsupervised tracking to filter out tempo-
rally inconsistent objects. Since point clustering does not
work well in the long-range where observations are sparse,
we exploit the translation equivariance of CNNs to train on
high-quality, near-range pseudo-labels and zero-shot gen-
eralize to long range. To bridge the density gap between
short and long-range at training vs. inference, we propose
a novel random LiDAR ray dropping strategy. Finally, we
design a self-improvement loop in which this bootstrapped
model can self-train. At every round of self-improvement,
we utilize the temporal consistency of objects to automati-
cally refine the detections from the model at the previous
iteration, and use these refined outputs as pseudo-labels
for training. Our experiments on Pandaset [63] and Ar-
goverse V2 Sensor [60] demonstrate that OYSTER clearly
outperforms other unsupervised methods, both under stan-
dard metrics based on intersection-over-union (IoU) and our
proposed metric based on distance-to-collision (DTC). We
hope that our work serves as a step towards building percep-
tion systems that automatically improve with more data and
compute without being bottlenecked by human supervision.
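For intuition, here is a hedged sketch of two of the bootstrapping ingredients described above: DBSCAN clustering of dense near-range points into axis-aligned pseudo-boxes, and a simple point-dropping routine standing in for the LiDAR ray dropping strategy (the real strategy drops whole rays, and all thresholds here are illustrative assumptions).

```python
import numpy as np
from sklearn.cluster import DBSCAN

def initial_pseudo_boxes(points, max_range=30.0, eps=0.7, min_pts=10):
    """Sketch of the bootstrapping step: cluster dense, near-range points
    (points: (N, >=3) array with x, y, z first) and turn each cluster into
    an axis-aligned pseudo-box. Thresholds are illustrative."""
    near = points[np.linalg.norm(points[:, :2], axis=1) < max_range]
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(near[:, :3])
    boxes = []
    for k in set(labels) - {-1}:                      # -1 marks noise points
        cluster = near[labels == k]
        lo, hi = cluster[:, :3].min(0), cluster[:, :3].max(0)
        boxes.append(np.concatenate([(lo + hi) / 2, hi - lo]))  # center + size
    return np.array(boxes)

def random_point_drop(points, drop_prob=0.3):
    """Illustrative stand-in for ray dropping: randomly discard points to mimic
    long-range sparsity at training time."""
    keep = np.random.rand(len(points)) > drop_prob
    return points[keep]
```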
|
Zhang_Generating_Human_Motion_From_Textual_Descriptions_With_Discrete_Representations_CVPR_2023
|
Abstract
In this work, we investigate a simple and well-established con-
ditional generative framework based on Vector Quantised-
Variational AutoEncoder (VQ-VAE) and Generative Pre-
trained Transformer (GPT) for human motion generation
from textural descriptions. We show that a simple CNN-
based VQ-VAE with commonly used training recipes (EMA
and Code Reset) allows us to obtain high-quality discrete rep-
resentations. For GPT, we incorporate a simple corruption
strategy during the training to alleviate training-testing dis-
crepancy. Despite its simplicity, our T2M-GPT shows better
performance than competitive approaches, including recent
diffusion-based approaches. For example, on HumanML3D,
which is currently the largest dataset, we achieve compara-
ble performance on the consistency between text and gener-
ated motion (R-Precision), but with FID 0.116 largely outper-
forming MotionDiffuse of 0.630. Additionally, we conduct
analyses on HumanML3D and observe that the dataset size
is a limitation of our approach. Our work suggests that VQ-
VAE still remains a competitive approach for human motion
generation. Our implementation is available on the project
page: https://mael-zys.github.io/T2M-GPT/ .
|
1. Introduction
Generating motion from textual descriptions can be used
in numerous applications in the game industry, film-making,
and animating robots. For example, a typical way to access
new motion in the game industry is to perform motion cap-
ture, which is expensive. Therefore automatically generating
motion from textual descriptions, which allows producing
meaningful motion data, could save time and be more eco-
nomical.
Motion generation conditioned on natural language is
challenging, as motion and text are from different modali-
ties. The model is expected to learn precise mapping from
Figure 1. Visual results on HumanML3D [22]. Our approach is
able to generate precise and high-quality human motion consistent
with challenging text descriptions. More visual results are on the
project page.
the language space to the motion space. To this end, many
works propose to learn a joint embedding for language and
motion using auto-encoders [3, 21, 65] and VAEs [52, 53]. MotionCLIP [65] aligns the motion space to the CLIP [55] space. ACTOR [52] and TEMOS [53] propose transformer-based VAEs for action-to-motion and text-to-motion, respectively.
These works show promising performances with simple de-
scriptions and are limited to producing high-quality motion
when textual descriptions become long and complicated.
Guo et al. [22] and TM2T [23] aim to generate motion
sequences with more challenging textual descriptions. How-
ever, both approaches are not straightforward, involve three
stages for text-to-motion generation, and sometimes fail to
generate high-quality motion consistent with the text (See
Figure 4 and more visual results on the project page). Re-
cently, diffusion-based models [32] have shown impressive
results on image generation [60], which are then introduced
to motion generation by MDM [66] and MotionDiffuse [74]
and dominates text-to-motion generation task. However,
we find that compared to classic approaches, such as VQ-
VAE [68], the performance gain of the diffusion-based ap-
proaches [66, 74] might not be that significant. In this work,
we are inspired by recent advances from learning the discrete
representation for generation [5,15,16,19,45,58,68,70] and
investigate a simple and classic framework based on Vector
Quantized Variational Autoencoders (VQ-VAE) [68] and
Generative Pre-trained Transformer (GPT) [56, 69] for text-
to-motion generation.
Precisely, we propose a two-stage method for motion
generation from textual descriptions. In stage 1, we use a
standard 1D convolutional network to map motion sequences
to discrete code indices. In stage 2, a standard GPT-like
model [56, 69] is learned to generate sequences of code in-
dices from pre-trained text embedding. We find that the naive
training of VQ-V AE [68] suffers from code collapse. One
effective solution is to leverage two standard recipes during
the training: EMA andCode Reset . We provide a full anal-
ysis of different quantization strategies. For GPT, the next
token prediction brings inconsistency between the training
and inference. We observe that simply corrupting sequences
during the training alleviates this discrepancy. Moreover,
throughout the evolution of image generation, the size of
the dataset has played an important role. We further explore
the impact of dataset size on the performance of our model.
The empirical analysis suggests that the performance of our
model can potentially be improved with larger datasets.
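A minimal sketch of such a training-time corruption is given below, assuming a tensor of VQ code indices; the corruption probability, padding handling, and function names are illustrative choices rather than the exact recipe.

```python
import torch

def corrupt_code_indices(codes, vocab_size, corrupt_prob=0.1, pad_id=None):
    """Sketch: randomly replace a fraction of the ground-truth motion-code
    tokens fed to the GPT, so that training better matches inference, where
    earlier predictions may be wrong."""
    mask = torch.rand_like(codes, dtype=torch.float) < corrupt_prob
    random_codes = torch.randint_like(codes, 0, vocab_size)
    corrupted = torch.where(mask, random_codes, codes)
    if pad_id is not None:                       # never corrupt padding tokens
        corrupted = torch.where(codes == pad_id, codes, corrupted)
    return corrupted

# usage inside the stage-2 loop: the GPT still predicts the clean targets
# logits = gpt(text_embedding, corrupt_code_indices(codes, vocab_size=512))
# loss = F.cross_entropy(logits.flatten(0, 1), codes.flatten())
```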
Despite its simplicity, our approach can generate high-
quality motion sequences that are consistent with challeng-
ing text descriptions (Figure 1 and more on the project
page). Empirically, we achieve comparable or even better
performances than concurrent diffusion-based approaches
MDM [66] and MotionDiffuse [74] on two widely used
datasets: HumanML3D [22] and KIT-ML [54]. For example,
on HumanML3D, which is currently the largest dataset, we
achieve comparable performance on the consistency between
text and generated motion (R-Precision), but with FID 0.116
largely outperforming MotionDiffuse of 0.630. We conduct
comprehensive experiments to explore this area, and hope
that these experiments and conclusions will contribute to
future developments.
In summary, our contributions include:
•We present a simple yet effective approach for mo-
tion generation from textual descriptions. Our ap-proach achieves state-of-the-art performance on Hu-
manML3D [22] and KIT-ML [54] datasets.
•We show that GPT-like models incorporating discrete
representations still remain a very competitive approach
for motion generation.
•We provide a detailed analysis of the impact of quanti-
zation strategies and dataset size. We show that a larger
dataset might still offer a promising prospect to the
community.
Our implementation is available on the project page.
|
Yu_X-Pruner_eXplainable_Pruning_for_Vision_Transformers_CVPR_2023
|
Abstract
Recently vision transformer models have become promi-
nent models for a range of tasks. These models, how-
ever, usually suffer from intensive computational costs and
heavy memory requirements, making them impractical for
deployment on edge platforms. Recent studies have pro-
posed to prune transformers in an unexplainable manner,
which overlook the relationship between internal units of
the model and the target class, thereby leading to infe-
rior performance. To alleviate this problem, we propose
a novel explainable pruning framework dubbed X-Pruner,
which is designed by considering the explainability of the
pruning criterion. Specifically, to measure each prunable
unit’s contribution to predicting each target class, a novel
explainability-aware mask is proposed and learned in an
end-to-end manner. Then, to preserve the most informative
units and learn the layer-wise pruning rate, we adaptively
search the layer-wise threshold that differentiates between
unpruned and pruned units based on their explainability-
aware mask values. To verify and evaluate our method,
we apply the X-Pruner on representative transformer mod-
els including the DeiT and Swin Transformer. Comprehen-
sive simulation results demonstrate that the proposed X-
Pruner outperforms the state-of-the-art black-box methods
with significantly reduced computational costs and slight
performance degradation. Code is available at https:
//github.com/vickyyu90/XPruner .
|
1. Introduction
Over the last few years, transformers have attracted in-
creasing attention in various challenging domains, such as
natural language processing, vision, and graphs [3, 9]. A transformer is composed of two key modules, namely Multi-Head Attention (MHA) and the Multi-Layer Perceptron (MLP). However, similar to CNNs, the major limitations of transformers include gigantic model sizes and intensive computational costs, which severely restrict their deployment on resource-constrained devices such as edge platforms.
*Corresponding author.
To
compress and accelerate transformer models, a variety of
techniques naturally emerge. Popular approaches include
weight quantization [30], knowledge distillation [28], filter
compression [24], and model pruning [21]. Among them,
model pruning especially structured pruning has gained
considerable interest that removes the least important pa-
rameters in pre-trained models in a hardware-friendly man-
ner, which is thus the focus of our paper.
Due to the significant structural differences between
CNNs and transformers, although there is prevailing suc-
cess in CNN pruning methods, the research on pruning
transformers is still in the early stage. Existing studies
could empirically be classified into three categories. (1)
Criterion-based pruning resorts to preserving the most im-
portant weights/attentions by employing pre-defined crite-
ria, e.g., the L1/L2 norm [20], or activation values [6]. (2)
Training-based pruning retrains models with hand-crafted
sparse regularizations [31] or resource constraints [28, 29].
(3) Architecture-search pruning methods directly search for
an optimal sub-architecture based on pre-defined policies
[3, 10]. Although these studies have made considerable
progress, two fundamental issues have not been fully ad-
dressed, i.e., the optimal layer-wise pruning ratio and the
weight importance measurement.
For the first issue, the final performance is notably af-
fected by the selection of pruning rates for different layers.
To this end, some relevant works have proposed a series of
methods for determining the optimal per-layer rate [7, 11].
For instance, Michel et al. [18] investigate the effectiveness
of attention heads in transformers for NLP tasks and pro-
pose to prune attention heads with a greedy algorithm. Yu et
al. [28] develop a pruning algorithm that removes attention
scores below a learned per-layer threshold while preserving
the overall structure of the attention mechanism. However,
the proposed methods do not take into account the interdependencies between weights. Recently, Zhu et al. [31] in-
troduce the method VTP with a sparsity regularization to
identify and remove unimportant patches and heads from
the vision transformers. However, VTP needs to try the
thresholds manually for all layers.
For the second issue, previous studies resort to identi-
fying unimportant weights by various importance metrics,
including magnitude-based, gradient-based [12, 19], and
mask-based [27]. Among them, the magnitude-based ap-
proaches usually lead to suboptimal results as they do not
take into account the potential correlation between weights
[24]. In addition, gradient-based methods often tend to
prune weights with small values, as they have small gra-
dients and may not be identified as important by the back-
ward propagation. Finally, the limitations of current mask-based pruning are two-fold: (1) Most mask-based prun-
ing techniques manually assign a binary mask w.r.t. a unit
according to a per-layer pruning ratio, which is inefficient
and sub-optimal. (2) Most works use a non-differentiable
mask, which results in an unstable training process and poor
convergence.
In this paper, we propose a novel explainable structured
pruning framework for vision transformer models, termed
X-Pruner, by considering the explainability of the prun-
ing criterion to solve the above two problems. As stated
in the eXplainable AI (XAI) field [2], important weights
in a model typically capture semantic class-specific infor-
mation. Inspired by this theory, we propose to effectively
quantitate the importance of each weight in a class-wise
manner. Firstly, we design an explainability-aware mask
for each prunable unit (e.g., an attention head or matrix in
linear layers), which measures the unit’s contribution to pre-
dicting every class and is fully differentiable. Secondly, we
use each input’s ground-truth label as prior knowledge to
guide the mask learning, thus the class-level information
w.r.t. each input will be fully utilized. Our intuition is that if
one unit generates feature representations that make a pos-
itive contribution to a target class, its mask value w.r.t. this
class would be positively activated, and deactivated other-
wise. Thirdly, we propose a differentiable pruning oper-
ation along with a threshold regularizer. This enables the
search of thresholds through gradient-based optimization,
and is superior to most previous studies that prune units
with hand-crafted criteria. Meanwhile, the proposed prun-
ing process can be done automatically, i.e., discriminative
units that are above the learned threshold are retained. In
this way, we implement our layer-wise pruning algorithm
in an explainable manner automatically and efficiently. In
summary, the major contributions of this paper are:
• We propose a novel explainable structured pruning
framework dubbed X-Pruner, which prunes units that
make less contributions to identifying all the classes in
terms of explainability. To the best knowledge of the
authors, this is the first work to develop an explainable
pruning framework for vision transformers;
• We propose to assign each prunable unit an
explainability-aware mask, with the goal of quantifying its contribution to predicting each class. Specifi-
cally, the proposed mask is fully differentiable and can
be learned in an end-to-end manner;
• Based on the obtained explainability-aware masks, we
propose to learn the layer-wise pruning thresholds that
differentiate the important and less-important units via
a differentiable pruning operation. Therefore, this pro-
cess is done in an explainable manner;
• Comprehensive simulation results are presented to
demonstrate that the proposed X-Pruner outperforms
a number of state-of-the-art approaches, and shows its
superiority in gaining the explainability for the pruned
model.
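To illustrate the differentiable thresholding idea in isolation, the following is a minimal sketch (not the actual X-Pruner implementation): prunable units are gated by a sigmoid of the gap between a mask value and a learnable layer-wise threshold, so the threshold can be searched by gradient descent. The class-wise explainability objective and the threshold regularizer are omitted, and the temperature is an assumed hyper-parameter.

```python
import torch
import torch.nn as nn

class SoftThresholdGate(nn.Module):
    """Units whose mask value falls below a learnable per-layer threshold
    are softly suppressed; the sigmoid keeps the operation differentiable."""
    def __init__(self, num_units: int, temperature: float = 0.05):
        super().__init__()
        self.mask = nn.Parameter(torch.ones(num_units))  # stand-in for explainability-aware mask values
        self.threshold = nn.Parameter(torch.zeros(1))    # learnable layer-wise threshold
        self.temperature = temperature

    def forward(self, unit_outputs: torch.Tensor) -> torch.Tensor:
        # unit_outputs: (batch, num_units, feature_dim); gate is ~0 below threshold, ~1 above
        gate = torch.sigmoid((self.mask - self.threshold) / self.temperature)
        return unit_outputs * gate.view(1, -1, 1)

# usage: gate the outputs of 12 attention heads in one layer
layer_gate = SoftThresholdGate(num_units=12)
head_outputs = torch.randn(8, 12, 64)
pruned_outputs = layer_gate(head_outputs)
```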
|
Xu_Video_Dehazing_via_a_Multi-Range_Temporal_Alignment_Network_With_Physical_CVPR_2023
|
Abstract
Video dehazing aims to recover haze-free frames with
high visibility and contrast. This paper presents a novel
framework to effectively explore the physical haze priors
and aggregate temporal information. Specifically, we de-
sign a memory-based physical prior guidance module to
encode the prior-related features into long-range memory.
Besides, we formulate a multi-range scene radiance recov-
ery module to capture space-time dependencies in multiple
space-time ranges, which helps to effectively aggregate tem-
poral information from adjacent frames. Moreover, we con-
struct the first large-scale outdoor video dehazing bench-
mark dataset, which contains videos in various real-world
scenarios. Experimental results on both synthetic and real
conditions show the superiority of our proposed method.
|
1. Introduction
Haze largely degrades the visibility and contrast of the
outdoor scenes, which adversely affects the performance of
downstream vision tasks, such as the detection and segmen-
tation in autonomous driving and surveillance. According
to the atmospheric scattering model [18, 38], the formation
of a hazy image is described as:
I(x) = J(x)t(x) + A(1 − t(x)), (1)
where I, J, A, and t denote the observed hazy image, scene radiance, atmospheric light, and transmission, respectively, and x is the pixel index. The transmission t = e^{−βd(x)} describes the scene radiance attenuation caused by the light scattering, where β is the scattering coefficient of the atmosphere, and d denotes the scene depth.
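For illustration, Eq. (1) can be applied directly to synthesize a hazy frame from a clean frame and a depth map, which is also how synthetic haze data are commonly rendered; the atmospheric light and scattering coefficient below are arbitrary example values, not the settings used for the benchmark in this paper.

```python
import numpy as np

def synthesize_haze(clean: np.ndarray, depth: np.ndarray,
                    A: float = 0.9, beta: float = 1.2) -> np.ndarray:
    """Apply the atmospheric scattering model I = J*t + A*(1 - t),
    with transmission t = exp(-beta * d)."""
    t = np.exp(-beta * depth)[..., None]   # per-pixel transmission, broadcast over channels
    hazy = clean * t + A * (1.0 - t)
    return np.clip(hazy, 0.0, 1.0)

# usage with toy data: a clean RGB frame in [0, 1] and a depth map
J = np.random.rand(240, 320, 3)
d = np.linspace(0.1, 3.0, 240 * 320).reshape(240, 320)
I = synthesize_haze(J, d)
```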
Video dehazing benefits from temporal clues, such as
highly correlated haze thickness and lighting conditions, as
⋆: This work was done during Jiaqi Xu’s internship at Shanghai Artificial
Intelligence Laboratory.
: Corresponding authors ([email protected]; [email protected]).
Figure 1. Visual comparison on a real-world hazy video: (a) hazy frame, (b) VDH [46], (c) CG-IDN [59], (d) our method. Our method trained on our outdoor dehazing dataset clearly removes haze without color distortion.
well as the moving foreground objects and backgrounds.
Early deep learning-based video dehazing methods lever-
age temporal information by simply concatenating input
frames or feature maps [27, 46]. Recently, CG-IDN [59]
proposes to use cost volume and confidence to align and ag-
gregate temporal information. However, existing video de-
hazing methods suffer from several limitations. First, these
approaches either obtain haze-free frames from the phys-
ical model-based component estimation [27, 46] or ignore
the explicit physical prior embedded in the haze imaging
model [59]. The former suffers from inaccurate interme-
diate prediction, thus leading to error accumulation in the
final results, while the latter overlooks the physical prior in-
formation, which plays an important role in haze estimation
and scene recovery. Second, these methods aggregate tem-
poral information by using input/feature stacking or frame-
to-frame alignment in a local sliding window, which makes it hard to capture global and long-range temporal information.
In this work, we present a novel video dehazing frame-
work via a Multi-range temporal Alignment network with
Physical prior (MAP-Net) to address the aforementioned is-
sues. First, we design a memory-based physical prior guid-
ance module, which aims to inject the physical prior to help
the scene radiance recovery. Specifically, we perform fea-
ture disentanglement according to the physical model with
two decoders, where one estimates the transmission and
atmospheric light, and the other recovers scene radiance.
The feature extracted from the first decoder is leveraged as
the physical haze prior, which is integrated into the second
decoder for scene radiance recovery. To infer the global
physical prior in a long-range video, we design a physical
prior token memory that effectively encodes prior-related
features into compact tokens for efficient memory reading.
Second, we introduce a multi-range scene radiance re-
covery module to capture space-time dependencies in mul-
tiple space-time ranges. This module first splits the adjacent
frames into multiple ranges, then aligns and aggregates the
corresponding recurrent range features, and finally recov-
ers the scene radiance. Unlike CG-IDN [59], which aligns
the adjacent features frame-by-frame, we align the features
of adjacent frames into multiple sets with different ranges,
which helps to explore the temporal haze clues in various
time intervals. We further design a space-time deformable
attention to warp the features of multiple ranges to the target
frame, followed by a guided multi-range complementary in-
formation aggregation. Also, we use an unsupervised flow
loss to encourage the network to focus on the aligned areas
and train the whole network in an end-to-end manner.
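As a rough sketch of the multi-range grouping step only (the deformable attention and aggregation are omitted), adjacent frames can be partitioned by temporal distance into sets that are aligned to the target frame separately; the number of ranges and the indices below are illustrative.

```python
def split_into_ranges(neighbor_indices, num_ranges=3):
    """Partition adjacent-frame indices (sorted by temporal distance) into
    `num_ranges` sets, from short-range to long-range."""
    chunk = max(1, len(neighbor_indices) // num_ranges)
    ranges = [neighbor_indices[i * chunk:(i + 1) * chunk] for i in range(num_ranges - 1)]
    ranges.append(neighbor_indices[(num_ranges - 1) * chunk:])  # remainder goes to the longest range
    return ranges

# usage: six neighbours of a target frame
print(split_into_ranges([1, 2, 3, 4, 5, 6]))   # [[1, 2], [3, 4], [5, 6]]
```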
In addition, the existing learning-based video dehaz-
ing methods are mainly trained and evaluated on indoor
datasets [27, 46, 59], which suffer from performance degra-
dation in real-world outdoor scenarios. Thus, we construct
an outdoor video dehazing benchmark dataset, HazeWorld,
which has three main properties. First, it is a large-scale
synthetic dataset with 3,588 training videos and 1,496 test-
ing videos. Second, we collect videos from diverse out-
door scenarios, e.g., autonomous driving and life scenes.
Third, the dataset has various downstream tasks for eval-
uation, such as segmentation and detection. Various ex-
periments on both synthetic and real datasets demonstrate
the effectiveness of our approach, which clearly outper-
forms the existing image and video dehazing methods; see
Fig. 1. The code and dataset are publicly available at
https://github.com/jiaqixuac/MAP-Net .
Our main contributions are summarized as follows:
• We present a novel framework, MAP-Net, for video
dehazing. A memory-based physical prior guidance
module is designed to enhance the scene radiance re-
covery, which encodes haze-prior-related features into
long-range memory.
• We introduce a recurrent multi-range scene radiance
recovery module with the space-time deformable at-
tention and the guided multi-range aggregation, which
effectively captures long-range temporal haze and
scene clues from the adjacent frames.
• We construct a large-scale outdoor video dehazing
dataset with diverse real-world scenarios and labels for
downstream task evaluation.
• Extensive experiments on both synthetic and real con-
ditions demonstrate our superior performance against
the recent state-of-the-art methods.
|
Zeng_Distilling_Focal_Knowledge_From_Imperfect_Expert_for_3D_Object_Detection_CVPR_2023
|
Abstract
Multi-camera 3D object detection has blossomed in recent years, and most state-of-the-art methods are built upon bird's-eye-view (BEV) representations. Albeit remarkable performance, these works suffer from low efficiency. Typically, knowledge distillation can be used for model compression. However, due to unclear 3D geometry reasoning, expert features usually contain some noisy and confusing areas. In this work, we investigate how to distill knowledge from an imperfect expert. We propose FD3D, a Focal Distiller for 3D object detection. Specifically, a set of queries is leveraged to locate the instance-level areas for masked feature generation, to intensify feature representation ability in these areas. Moreover, these queries search out representative fine-grained positions for refined distillation. We verify the effectiveness of our method by applying it to two popular detection models, BEVFormer and DETR3D. The results demonstrate that our method achieves improvements of 4.07 and 3.17 points, respectively, in terms of the NDS metric on the nuScenes benchmark. Code is hosted at https://github.com/OpenPerceptionX/BEVPerception-Survey-Recipe.
|
1. Introduction
Accurate 3D object detection is a vital component in au-
tonomous driving. To achieve this, most methods [14, 37] resort to LiDAR sensors and dominate the public benchmarks [1, 27]. Despite the performance gap, pure vision approaches are still worthy of in-depth inquiry, since cameras can provide rich semantic information and are low-cost and easy to deploy. Among these, bird's-eye-view (BEV) detection has drawn extensive attention from both industry and academia, and has shown great potential to narrow down the performance gap [15, 21]. However, such models tend to be computationally consuming.
*Corresponding author at [email protected]
(a) Global distillation; (b) Attentive distillation; (c) Generative distillation with random mask; (d) Generative and focal distillation (ours).
Figure 1. Illustration of the proposed generative and focal distillation method. Compared with others, the proposed manner in 1(d) leverages queries to generate instance masks for masked generative distillation, rather than random masks in 1(c). Moreover, the queries meanwhile search for representative positions to perform refined distillation, where the distillation region selection is more fine-grained and flexible than in 1(b).
In common practice, knowledge distillation can com-
press the model and is usually applied to alleviate computation overhead. One possible solution is to utilize a LiDAR-based model as the expert [4, 17], but this requires complex spatial-temporal calibration and also needs to handle heterogeneous problems from different modalities. An intuitive question is: can we distill these models solely based on camera sensors? In this work, we intend to address this problem and focus on the camera-only distillation setting. To the best of our knowledge, our work is the first solution tailored for this setting.
Distillation methods in 2D object detection have derived
Figure 2. Visualization of the predicted bounding boxes and bird’s
eye view feature of BEVFormer [ 19]. At the center is the au-
tonomous vehicle. In the left subfigure, green and blue boxes de-
note the ground truth and predictions, respectively. In the right subfigure, the BEV feature is ray-shaped and contains a lot of noise. Areas with incorrect high activation appear behind objects due to occlusion, which easily introduces false positives, e.g., the region circled by the ellipse.
various types as depicted in Fig. 1(a)-(c), but their effec-
tiveness is not verified in 3D object detection. The main challenge in camera-to-camera distillation for 3D object detection comes from the imperfect expert features. Due to the lack of accurate 3D information, expert features drawn from 2D images usually contain some noisy and confusing areas. To better illustrate this point, we visualize the BEV features from BEVFormer [19] in Fig. 2. The unclear occlusion reasoning makes the BEV features suffer from ray-shape artifacts. The imperfect BEV features result in many false positives. Directly mimicking these features from the expert, as in Fig. 1(a), may exacerbate the drawback. Some 2D detection distillation methods [29] propose to focus on foreground-oriented regions, as shown in Fig. 1(b). However, background regions are also important, as proved in [7, 35, 13]. We categorize these methods as attentive distillation. Compared with 2D object detection, the imbalance between foreground and background in 3D areas is much more severe, and balancing these weights is not a general solution. The latest study, MGD [36], demonstrates the effectiveness of masked image modeling (MIM) distillation, as depicted in Fig. 1(c). However, the global random mask cannot greatly enhance 3D object detection, which is validated in Tab. 4a.
To this end, we propose a Focal Distiller for 3D ob-
ject detection, shortened as FD3D. The schematic diagram of FD3D is shown in Fig. 1(d). Specifically, a set of queries is leveraged to locate the instance-level focal regions, and masked generative distillation is performed within these regions. Moreover, these queries dynamically search fine-grained representative positions for focal distillation. Two complementary modules guide the apprentice network to generate enhanced feature representations on focal regions. In summary, our work makes three-fold contributions:
1. To the best of our knowledge, this is the first work to explore knowledge distillation for camera-only 3D object detection. We reveal that the challenge lies in how to distill focal knowledge from an imperfect 3D object detector expert.
2. We propose FD3D, which utilizes a set of queries to distill focal knowledge. With these queries, coarse-grained focal regions are selected for masked generation, and fine-grained focal regions are searched out for instance-oriented refinement distillation.
3. FD3D serves as a plug-and-play module. It can be easily extended to various detectors. Improvements of 4.07 and 3.17 NDS can be obtained with FD3D assembled into BEVFormer and DETR3D, respectively.
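To make the focal-region idea more concrete, below is a much-simplified sketch of masked generative distillation restricted to selected regions: apprentice features inside a focal mask are dropped, regenerated by a small generator, and matched to the expert features only inside those regions. The convolutional generator and the way the mask is produced (here given directly rather than derived from queries) are stand-ins, not the exact FD3D design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedGenerativeDistill(nn.Module):
    """Simplified masked generative distillation on a BEV feature map.
    A binary focal mask selects regions whose apprentice features are
    masked out, regenerated, and matched to the expert features."""
    def __init__(self, channels: int):
        super().__init__()
        self.generator = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, apprentice_feat, expert_feat, focal_mask):
        # focal_mask: (B, 1, H, W) with 1 inside focal regions
        masked = apprentice_feat * (1.0 - focal_mask)     # drop features in focal regions
        regenerated = self.generator(masked)              # regenerate them
        # distill only inside the focal regions
        return F.mse_loss(regenerated * focal_mask, expert_feat * focal_mask)

# usage on toy tensors
distill = MaskedGenerativeDistill(channels=64)
a = torch.randn(2, 64, 50, 50)
e = torch.randn(2, 64, 50, 50)
m = (torch.rand(2, 1, 50, 50) > 0.7).float()
loss = distill(a, e, m)
```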
|
Zhang_Learning_To_Generate_Language-Supervised_and_Open-Vocabulary_Scene_Graph_Using_Pre-Trained_CVPR_2023
|
Abstract
Scene graph generation (SGG) aims to abstract an im-
age into a graph structure, by representing objects as graph
nodes and their relations as labeled edges. However, two
knotty obstacles limit the practicability of current SGG
methods in real-world scenarios: 1) training SGG mod-
els requires time-consuming ground-truth annotations, and
2) the closed-set object categories make the SGG mod-
els limited in their ability to recognize novel objects out-
side of training corpora. To address these issues, we nov-
elly exploit a powerful pre-trained visual-semantic space
(VSS) to trigger language-supervised and open-vocabulary
SGG in a simple yet effective manner. Specifically, cheap
scene graph supervision data can be easily obtained by
parsing image language descriptions into semantic graphs.
Next, the noun phrases on such semantic graphs are di-
rectly grounded over image regions through region-word
alignment in the pre-trained VSS. In this way, we enable
open-vocabulary object detection by performing object cat-
egory name grounding with a text prompt in this VSS. On
the basis of visually-grounded objects, the relation repre-
sentations are naturally built for relation recognition, pur-
suing open-vocabulary SGG. We validate our proposed ap-
proach with extensive experiments on the Visual Genome
benchmark across various SGG scenarios (i.e., supervised
/ language-supervised, closed-set / open-vocabulary). Con-
sistent superior performances are achieved compared with
existing methods, demonstrating the potential of exploiting
pre-trained VSS for SGG in more practical scenarios.
|
1. Introduction
Scene graph [10] is a structured representation for de-
scribing image semantics. It abstracts visual objects as
graph nodes and represents their relations as labeled graph
edges. The task of scene graph generation ( SGG ) [6, 14,
*Corresponding author
Figure 1. An illustration of exploiting a pre-trained visual-
semantic space (VSS) to trigger language-supervised and open-
vocabulary scene graph generation (SGG). (a) We acquire weak
scene graph supervision by semantically parsing the image lan-
guage description and grounding noun phrases on image re-
gions via VSS. (b) At SGG inference time, thanks to the open-
vocabulary generalization naturally rooted in VSS, the novel ob-
ject name (e.g., player) in the text prompt input can be well aligned
to one image region, which is regarded as its detection.
20, 26, 40, 47, 48, 50, 51, 57, 60, 63, 64] plays an important
role for fine-grained visual understanding, which has shown
promising results in facilitating various downstream appli-
cations, such as image-text retrieval [24,38,49], image cap-
tioning [2,22,32,35,52,54,55,66], cross-media knowledge
graph construction [18, 45] and robot planning [1].
Though great effort has been made, SGG of the current
stage still faces two knotty obstacles that limit its practi-
cability in real-world scenarios. 1) Training SGG mod-
els requires massive ground-truth scene graphs that are ex-
pensive for manual annotation. Annotators have to draw
bounding boxes for all objects in an image and connect
possible interacted object pairs, and assign object/relation
labels. Since assigned labels might be ambiguous, further
verification and canonicalization processing are usually re-
quired [14]. Finally, a scene graph in the form of a set of
⟨subject, predicate, object ⟩triplets with subject and ob-
ject bounding boxes is constructed. Such annotating pro-
cess is time-consuming and tedious, costing much human
labor and patience. 2) Almost all existing SGG methods
[20,21,26,47,48,50,51,60] involve a pre-defined closed set
of object categories, making them limited in recognizing
novel objects outside of training corpora. However, real-
world scenes contain a broader set of visual concepts than
any pre-defined category pool. It is very likely to encounter
unseen/novel categories. When this happens, current SGG
models either classify novel objects to a known category or
fail to detect them like background regions. Accordingly,
the prediction of their interactions/relations with other ob-
jects is negatively affected or just neglected. This may lead
to problems. For example, a real-world robot may take inap-
propriate actions using such closed-set SGG models [1,42].
Recently, there is a trend of leveraging free-form lan-
guage supervision for benefiting visual recognition tasks
via large-scale language-image pre-training [7, 15, 17, 36,
53, 59, 67]. These methods (e.g., CLIP [36]) perform
pre-training on massive easily-obtained image-text pairs
to learn a visual-semantic space (VSS), and have demon-
strated great zero-shot transferability. Especially, the re-
cent grounded language-image pre-training (GLIP) [17] has
learned an object-level and semantic-rich VSS. Based on
the learned VSS, it has established new state-of-the-art per-
formances in phrase grounding and zero-shot object de-
tection. This indicates such pre-trained VSS has power-
ful multi-modal alignment ability (i.e., image regions and
text phrases that have similar semantics get close embed-
dings) and open-vocabulary generalization ability (i.e., cov-
ering virtually any concepts in the pre-training image-text
corpus). This inspires our thought of addressing the afore-
mentioned obstacles in SGG using the pre-trained VSS. On
the one hand, taking advantage of its multi-modal align-
ment ability, we can cheaply acquire scene graph supervi-
sion from an image description (e.g., retrieving image re-
gions aligned with noun phrases and re-arranging the de-
scription into a scene-graph-like form). On the other hand,
by leveraging its open-vocabulary generalization ability, it
is promising to enable novel category prediction in SGG.
In this work, we investigate the opportunity of fully ex-
ploiting the VSS learned by language-image pre-training
to trigger language-supervised and open-vocabulary SGG.
Specifically, we obtain weak scene graph supervision by se-
mantically parsing an image language description into a se-
mantic graph, then grounding its noun phrases over image
regions through region-word alignment in the pre-trained
VSS (Figure 1 (a)). Moreover, we propose a novel SGG
model, namely Visual-Semantic Space for Scene graph gen-
eration ( VS3). It takes a raw image and a text prompt con-
taining object category names as inputs, and projects them
into the shared VSS as embeddings. Next, VS3 performs object detection by aligning the embeddings of category names and image regions. Based on high-confidence detected objects, VS3 builds relation representations for object
pairs with a devised relation embedding module that fully
mines relation patterns from visual and spatial perspec-
tives. Finally, a relation prediction module takes relation
representations to infer relation labels. The predicted scene
graph is composed by combining object detections and in-
ferred relation labels. During training, visually-grounded
semantic graphs parsed from image descriptions could be
used as weak scene graph supervision, achieving language-
supervised SGG. At SGG inference time, when using a text
prompt input containing novel categories, VS3 manages to detect novel objects thanks to the open-vocabulary generalization ability naturally rooted in VSS, hence allowing for
open-vocabulary SGG (Figure 1 (b)).
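A minimal sketch of the region-word alignment step is given below: parsed noun phrases, or the category names in the text prompt, are matched to image regions by cosine similarity in the shared space. The random embeddings stand in for what a GLIP/CLIP-style image and text encoder would produce; this is not the exact VS3 implementation.

```python
import torch
import torch.nn.functional as F

def ground_phrases(region_embs: torch.Tensor, phrase_embs: torch.Tensor):
    """Align each phrase to the image region with the highest cosine
    similarity in the shared visual-semantic space.
    region_embs: (num_regions, dim), phrase_embs: (num_phrases, dim)."""
    region_embs = F.normalize(region_embs, dim=-1)
    phrase_embs = F.normalize(phrase_embs, dim=-1)
    sim = phrase_embs @ region_embs.t()        # (num_phrases, num_regions)
    scores, region_ids = sim.max(dim=-1)       # best-matching region per phrase
    return region_ids, scores

# usage: 10 candidate regions, 3 parsed noun phrases (e.g. "man", "baseball bat", "ground")
regions = torch.randn(10, 256)
phrases = torch.randn(3, 256)
ids, scores = ground_phrases(regions, phrases)
```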
In summary, we have made the following contributions:
(1) the exploitation of a pre-trained VSS provides an el-
egant solution for addressing obstacles to triggering both
language-supervised and open-vocabulary SGG, making a
solid step toward real-world usage of SGG. (2) The pro-
posed VS3 model is a new and versatile framework, which
effectively transfers language-image pre-training knowl-
edge for benefiting SGG. (3) We fully validate the effective-
ness of our approach through extensive experiments on the
Visual Genome benchmark, and have set new state-of-the-
art performances spanning across all settings (i.e., super-
vised / language-supervised, closed-set / open-vocabulary).
|
Yu_Overcoming_the_Trade-Off_Between_Accuracy_and_Plausibility_in_3D_Hand_CVPR_2023
|
Abstract
Direct mesh fitting for 3D hand shape reconstruction is
highly accurate. However, the reconstructed meshes are prone to artifacts and do not appear as plausible hand shapes. Conversely, parametric models like MANO ensure plausible hand shapes but are not as accurate as the non-parametric methods. In this work, we introduce a novel weakly-supervised hand shape estimation framework that integrates non-parametric mesh fitting with the MANO model in an end-to-end fashion. Our joint model overcomes the tradeoff in accuracy and plausibility to yield well-aligned and high-quality 3D meshes, especially in challenging two-
hand and hand-object interaction scenarios.
|
1. Introduction
State-of-the-art monocular RGB-based 3D hand recon-
struction methods [ 6,10,17,21,22,28] focus on recover-
ing highly accurate 3D hand meshes. As accuracy is mea-
sured by an average joint or vertex position error, recovered hand meshes may be well-aligned in 3D space but still be physically implausible. The 3D mesh surface may have irregular protrusions or collapsed regions (see Fig. 1), especially around the fingers. The meshes may also suffer from incorrect contacts or penetrations when there are hand-object or two-hand interactions. Yet methods that prioritize physical plausibility, especially in interaction settings [3, 8, 10, 14, 20, 21], are significantly less accurate in 3D alignment. In summary, the current body of work predominantly favours either 3D alignment accuracy or physical plausibility, but cannot achieve both.
A closer examination reveals that the trade-off between
3D alignment and plausibility is also split methodology-
wise. Methods that use the MANO model [ 30] produce
plausible hand poses and hand shapes [ 2,7,38,40,42] due
to the statistical parameterization of MANO. However, it
is challenging to directly regress these parameters, since
Figure 1. The vertex error vs. edge length error indicates that
existing methods trade-off alignment accuracy with plausibility.
MANO-based methods (triangles) vs. non-parametric model-
based methods (circles) have a trade-off between vertex error and edge length error; our combined method (stars) can overcome this trade-off to yield well-aligned and plausible meshes. Plot of results from InterHand2.6M [27]; visualization from FreiHand [42].
the mapping from an image to the MANO parameter space
is highly non-linear. As a consequence, MANO-based methods lag in 3D alignment accuracy compared to non-parametric methods.
Non-parametric methods [ 6,10,11,17,18,21,22,28] di-
rectly fit a 3D mesh to image observations. Direct mesh fitting is accurate but is prone to surface artifacts. In scenarios with hand-object or hand-hand interactions, mesh penetrations cannot be resolved meaningfully even with regularizers such as contact losses [14] due to the unconstrained
optimization. Attention mechanisms [21, 32] can mitigate some penetrations and artifacts, but the inherent problem remains. As such, the favoured approaches for hand-object and hand-hand interactions are still driven by MANO models [3, 8, 13, 14, 38].
In this work, we aim to recover high-quality hand meshes
that are accurately aligned and have minimal artifacts and
penetrations. To avoid a trade-off, we leverage direct mesh fitting for alignment accuracy and guidance from MANO for plausibility. Combining the non-parametric and parametric models is straightforward in terms of motivation. However, merging the two is non-trivial because it requires a mapping from non-parametric mesh vertices to parametric model parameters. This mapping, analogous to the mapping from an RGB image, is highly non-linear and difficult
to achieve directly [ 16].
One of our key contributions in this work is a method
to accurately map non-parametric hand mesh vertices to MANO joint parameters θ. To do so, we perform a two-step mapping, from mesh vertices to joint coordinates, and from joint coordinates to θ. In the literature, the common practice for the former is to leverage the J matrix in MANO and linearly regress the joints from the mesh [17, 20, 21]. Yet the J matrix was designed to only map MANO-derived meshes to joints in a rest pose (see Eq. 10 in [25]). As we show in our experiments, applying J to non-rest poses introduces a gap of around 2 mm. Furthermore, we postulate that there is a domain gap between the estimated non-parametric meshes and MANO-derived meshes, even if both meshes have the same topology. To close this gap, we propose a VAE correction module, to be applied after the linear regression with J. To map the recovered joints from the mesh to θ, we use a twist-swing decomposition and analytically compute θ. It has been shown previously in [20] that decomposing joint rotations into twist-swing rotations [1] can simplify the estimation of human SMPL [25] model pose parameters. Inspired by [20], we also leverage the decomposition and further verify that the twist angle has minimal impact on the hand.
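For reference, the first step of this two-step mapping, linear joint regression from mesh vertices, amounts to a single matrix product; the sketch below uses MANO-like sizes, and the random regressor is only a placeholder for the actual J matrix (the VAE correction and twist-swing step are not shown).

```python
import numpy as np

def regress_joints(vertices: np.ndarray, J_regressor: np.ndarray) -> np.ndarray:
    """Linearly regress 3D joints from mesh vertices.
    vertices: (num_vertices, 3), J_regressor: (num_joints, num_vertices)."""
    return J_regressor @ vertices   # (num_joints, 3)

# usage with MANO-like sizes (778 vertices, 16 joints); values are random placeholders
V = np.random.rand(778, 3)
J = np.random.rand(16, 778)
J = J / J.sum(axis=1, keepdims=True)   # rows of a joint regressor sum to 1
joints = regress_joints(V, J)          # (16, 3)
```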
Note that obtaining ground truth labels for hand mesh
vertices is non-trivial. Our framework lends itself well to weak supervision. Since the estimated 3D mesh from the non-parametric decoder is regressed into 3D joints, it can also be supervised with 3D joints as weak labels (see Fig. 2). At the same time, the parametric mesh estimated from these joints can be used as a pseudo-label for learning the non-parametric mesh vertices. Such a procedure distills the knowledge from the parametric model and is effective without ground truth mesh annotations. We name our method
WSIM 3D Hand , in reference to Weakly-supervised Self-
distillation Integration Model for 3D hand shape recon-
struction. Our contributions include:
• A novel framework that integrates a parametric and a non-parametric mesh model for accurate and plausible
3D hand reconstruction.
• A VAE correction module that closes the overlooked
gap between non-parametric and MANO 3D poses;
• A weakly-supervised pipeline, competitive to a fully-
supervised counterpart, using only 3D joint labels to
learn 3D meshes.
• Significant improvements over state-of-the-art on
hand-object or two-hand interaction benchmark
datasets, especially in hand-object interaction on
DexYCB.
|
Zhang_Exploring_Intra-Class_Variation_Factors_With_Learnable_Cluster_Prompts_for_Semi-Supervised_CVPR_2023
|
Abstract
Semi-supervised class-conditional image synthesis is
typically performed by inferring and injecting class labels into a conditional Generative Adversarial Network (GAN). The supervision in the form of class identity may be inadequate to model classes with diverse visual appearances. In this paper, we propose a Learnable Cluster Prompt-based GAN (LCP-GAN) to capture class-wise characteristics and intra-class variation factors with a broader source of supervision. To exploit partially labeled data, we perform soft partitioning on each class, and explore the possibility of associating intra-class clusters with learnable visual concepts in the feature space of a pre-trained language-vision model, e.g., CLIP. For class-conditional image generation, we design a cluster-conditional generator by injecting a combination of intra-class cluster label embeddings, and further incorporate a real-fake classification head on top of CLIP to distinguish real instances from the synthesized ones, conditioned on the learnable cluster prompts. This significantly strengthens the generator with more semantic language supervision. LCP-GAN not only possesses superior generation capability but also matches the performance of the fully supervised version of the base models, BigGAN and StyleGAN2-ADA, on multiple standard benchmarks.
|
1. Introduction
Generative Adversarial Networks (GANs) have achieved
considerable success in modeling complex data distribu-
tions and generating high-fidelity images from random vec-
tors [ 2,18,25]. To control class semantics in the generation
∗Joint first authors.
†Corresponding author.
Learnable prompt: “? ? ? ? [class_name]”; “? ? ? ?” ≈ “Wavy Hair”, “? ? ? ?” ≈ “Smiling” (class vs. intra-class cluster).
Figure 1. Different from generic class-conditional GANs conditioned on discrete class labels, LCP-GAN learns intra-class cluster-specific prompts to guide the generation process and capture underlying variation factors.
process, object category is typically represented in the form
of a discrete label, which is injected into both generator and discriminator through learnable embedding layers. However, sufficient labeled training data may be difficult to collect in real-world applications. Significant efforts have been devoted to semi-supervised generative learning that aims to reduce the dependence of class-conditional GANs on labeled
training data [ 6,15,29,33].
In the semi-supervised setting, the amount of unlabeled
training samples can be significantly greater than that of labeled ones. As one of the early attempts, CatGAN [44]
trained a discriminator to infer the class labels of real im-
ages with high confidence, but not for the synthesized
ones. Both TripleGAN [29] and Δ-GAN [15] incorpo-
rated an auxiliary classifier to focus on class label pre-
diction in the adversarial training process. The unlabeled
images with pseudo labels were used to train the class-
Figure 2. An overview of the proposed LCP-GAN. The generator $G$ synthesizes class-specific images, conditioned on the combination of intra-cluster label embeddings $\sum_i u_z^{(i)} E(c_{y_i})$. In addition to class-conditional adversarial training via the discriminator $D_{class}$, we incorporate an additional discriminator $D_{prompt}$ to distinguish real images from the synthesized ones, conditioned on the combination of the learnable cluster-specific prompts $\{t_{y_1}, \ldots, t_{y_{k_y}}\}$. By competing with $D_{class}$ and $D_{prompt}$, $G$ learns to associate the cluster label embeddings with the underlying visual concepts described by the prompts.
conditional discriminators. There have also been some at-
tempts at improving GANs via unsupervised data partition-
ing [ 14,16,21,31,39]. SphericGAN [ 6] imposed hard par-
titioning on the training data, and aligned the real and synthesized data clusters in a hyper-spherical latent space. The
existing semi-supervised GANs are conditioned on class i-
dentity, while the semantics encapsulated in the class names
is overlooked. This impedes the generative model from
using prior knowledge in natural language as humans do.
Furthermore, only one single label embedding is learnt per
class, which is insufficient to account for large intra-class variance. To address these issues, we explore intra-class variation factors by performing soft and finer partitions on
each class and learning cluster-specific prompts to represent
underlying visual concepts as shown in Figure 1.
More specifically, we propose a Learnable Cluster
Prompt-based GAN to facilitate semi-supervised class-conditional image generation, and our model is referred to as LCP-GAN. To better match class-specific data distributions, the generator learns to synthesize high-fidelity images, conditioned on a combination of intra-class cluster label embeddings. Capturing intra-class variation factors is a non-trivial task, since the semantics reflected by the clusters
may not be well-defined. Considering that natural language
can express a wide range of visual concepts, we make an attempt to learn from CLIP [42], which provides an effec-
tive way to understand the content of images. Inspired by
CoOp [ 56], we can model the cluster-specific context words
with learnable vectors, and further learn a mapping to adapt
the CLIP representation to our generation task. As a re-
sult, the generator is guided to capture cluster semantics in
the adversarial training process. The framework of LCP-GAN is illustrated in Figure 2. We adopt the state-of-the-
art architectures: BigGAN [ 2] and StyleGAN2-ADA [ 23],
and achieve significant improvements over them, suggesting that semi-supervised image generation can benefit from modeling intra-class variation with the language-vision pre-training.
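A minimal sketch of the cluster-conditional input is shown below: instead of a single class embedding, the generator condition is a soft combination of learnable intra-class cluster embeddings. The soft assignment weights here are random placeholders; in LCP-GAN they come from the soft partitioning of each class.

```python
import torch
import torch.nn as nn

class ClusterConditionEmbedding(nn.Module):
    """Map soft cluster-assignment weights to a conditioning vector by
    combining learnable per-cluster embeddings, instead of using a
    single embedding per class."""
    def __init__(self, num_clusters: int, embed_dim: int):
        super().__init__()
        self.cluster_embed = nn.Embedding(num_clusters, embed_dim)

    def forward(self, weights: torch.Tensor) -> torch.Tensor:
        # weights: (batch, num_clusters), rows sum to 1 (soft partition of the class)
        return weights @ self.cluster_embed.weight   # (batch, embed_dim)

# usage: a class split into 4 intra-class clusters, 128-d conditioning vectors
cond = ClusterConditionEmbedding(num_clusters=4, embed_dim=128)
w = torch.softmax(torch.randn(8, 4), dim=-1)
c = cond(w)    # fed to the generator together with the noise vector z
```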
The main contributions of this work are summarized as
follows: (a) We associate the intra-class cluster label embeddings with the cluster semantics, and the expressiveness of their combination is higher than that of a single class label embedding for capturing multiple underlying modes with diverse visual appearances. (b) To address the issue that the visual concepts reflected by the clusters may not be well defined, we leverage language-vision pre-training and represent the clusters with learnable prompts. (c) To guide the generator to capture intra-class variation factors, the cluster prompts serve as conditional information and are jointly learnt in the adversarial training process.
|
Yan_CIMI4D_A_Large_Multimodal_Climbing_Motion_Dataset_Under_Human-Scene_Interactions_CVPR_2023
|
Abstract
Motion capture is a long-standing research problem. Al-
though it has been studied for decades, the majority of re-
search focus on ground-based movements such as walking,
sitting, dancing, etc. Off-grounded actions such as climb-
ing are largely overlooked. As an important type of action
in sports and firefighting field, the climbing movements is
challenging to capture because of its complex back poses,
intricate human-scene interactions, and difficult global lo-
calization. The research community does not have an in-
depth understanding of the climbing action due to the lack
of specific datasets. To address this limitation, we collect
CIMI4D, a large rock ClImbing MotIon dataset from 12
persons climbing 13 different climbing walls. The dataset
consists of around 180,000 frames of pose inertial mea-
surements, LiDAR point clouds, RGB videos, high-precision
static point cloud scenes, and reconstructed scene meshes.
Moreover, we annotate the touched rock holds frame by frame to facilitate a detailed exploration of human-scene interaction.
∗Equal contribution.
†Corresponding author.
The core of this dataset is a blending optimization pro-
cess, which corrects for the pose as it drifts and is af-
fected by the magnetic conditions. To evaluate the merit
of CIMI4D, we perform four tasks which include human
pose estimations (with/without scene constraints), pose pre-
diction, and pose generation. The experimental results
demonstrate that CIMI4D presents great challenges to ex-
isting methods and enables extensive research opportuni-
ties. We share the dataset with the research community in
http://www.lidarhumanmotion.net/cimi4d/.
|
1. Introduction
Capturing human motions can benefit many downstream
applications, such as AR/VR, games, movies, robotics,
etc. However, it is a challenging and long-standing prob-
lem [1, 7,35,37,75] due to the diversity of human poses and
complex interactive environment. Researchers have pro-
posed various approaches to estimate human poses from
images [15, 16,19,30,67], point clouds [33], inertial mea-
surement units (IMUs) [21, 69], etc. Although the prob-
lem of human pose estimation (HPE) has been studied for
decades [6, 58,63], most of the existing solutions focus
on upright frontal poses on the ground (such as walking,
sitting, jumping, dancing and yoga) [54]. Different from
daily activities (such as walking and running) that are on
the ground, climbing is an activity off the ground with back
poses, which is also an important type for sports [22, 50],
entertainment [31, 32,55], and firefighting.
Climbing is an activity that involves ascending geo-
graphical objects using hands and feet, such as hills, rocks,
or walls. Estimating the pose of a climbing human is chal-
lenging due to severe self-occlusion and the human body’s
closely contact with the climbing surface. These issues
are primarily caused by complex human-scene interactions.
Moreover, understanding the climbing activities requires
both accurate captures of the complex climbing poses and
precise localization of the climber within scenes, which is
especially challenging. Many pose/mesh estimation meth-
ods are data-driven methods [24, 45,54,65], relying on huge
climbing motion data for training networks. So a large-scale
climbing dataset is necessary for the holistic understanding
of human poses. Publicly available human motion datasets
are mostly in upright frontal poses [2, 36,47], which are
significantly different from climbing poses. Albeit some
researchers collected RGBD-based climbing videos [4] or
used marker-based systems [22], their data is private and
the scale of dataset is very limited.
To address the limitations of current datasets and boost
related research, we collect a large-scale multimodal climb-
ing dataset, CIMI4D, under complex human-scene inter-
action, as depicted in Fig. 1. CIMI4D consists of around
180,000 frames of time-synchronized and well-annotated
LiDAR point clouds, RGB videos, and IMU measurements
from 12 actors climbing 13 rock-climbing walls. 12 ac-
tors include professional athletes, rock climbing enthusi-
asts, and beginners. In total, we collect 42 rock climb-
ing motion sequences, which enable CIMI4D to cover a
wide diversity of climbing behaviors. To facilitate deep un-
derstanding for human-scene interactions, we also provide
high-quality static point clouds using a high-precision de-
vice for seven rock-climbing walls. Furthermore, we anno-
tate the rock holds (holds) on climbing walls and manually
label the contact information between the human body and
the holds. To obtain accurate pose and global trajectory of
the human body, we devise an optimization method to an-
notate IMU data, as it drifts over time [10, 59] and is subject
to magnetic conditions in the environment.
The comprehensive annotations in CIMI4D provide the
opportunity for benchmarking various 3D HPE tasks. In
this work, we focus on four tasks: human pose estima-
tion with or without scene constraints, human pose predic-
tion and generation. To assess the effectiveness of existing methods on these tasks, we perform both quantitative and
qualitative experiments. However, most of the existing ap-
proaches are unable to capture accurately the climbing ac-
tion. Our experimental results demonstrate that CIMI4D
presents new challenges for current computer vision algo-
rithms. We hope that CIMI4D could provide more opportu-
nities for a deep understanding of human-scene interactions
and further benefit the digital reconstruction for both. In
summary, our contributions can be listed as below:
• We present the first 3D climbing motion dataset,
CIMI4D, for understanding the interaction between
complex human actions with scenes. CIMI4D consists
of RGB videos, LiDAR point clouds, IMU measure-
ments, and high-precision reconstructed scenes.
• We design an annotation method which uses multiple
constraints to obtain natural and smooth human poses
and trajectories.
• We perform an in-depth analysis of multiple methods
for four tasks. CIMI4D presents a significant challenge
to existing methods.
|
Zhang_UniDAformer_Unified_Domain_Adaptive_Panoptic_Segmentation_Transformer_via_Hierarchical_Mask_CVPR_2023
|
Abstract
Domain adaptive panoptic segmentation aims to miti-
gate data annotation challenge by leveraging off-the-shelf
annotated data in one or multiple related source domains.
However, existing studies employ two separate networks
for instance segmentation and semantic segmentation which
lead to excessive network parameters as well as compli-
cated and computationally intensive training and inference
processes. We design UniDAformer, a unified domain adap-
tive panoptic segmentation transformer that is simple but
can achieve domain adaptive instance segmentation and
semantic segmentation simultaneously within a single net-
work. UniDAformer introduces Hierarchical Mask Calibra-
tion (HMC) that rectifies inaccurate predictions at the level
of regions, superpixels and pixels via online self-training
on the fly. It has three unique features: 1) it enables uni-
fied domain adaptive panoptic adaptation; 2) it mitigates
false predictions and improves domain adaptive panoptic
segmentation effectively; 3) it is end-to-end trainable with
a much simpler training and inference pipeline. Exten-
sive experiments over multiple public benchmarks show that
UniDAformer achieves superior domain adaptive panoptic
segmentation as compared with the state-of-the-art.
|
1. Introduction
Panoptic segmentation [30] performs instance segmenta-
tion for things and semantic segmentation for stuff, which
assigns each image pixel with a semantic category and
a unique identity simultaneously. With the advance of
deep neural networks [5, 17–19, 31, 43], panoptic segmen-
tation [4, 7–9, 29, 30, 33, 35, 54, 55] has achieved very
impressive performance under the supervision of plenty
of densely-annotated training data. However, collecting
densely-annotated panoptic data is prohibitively laborious
and time-consuming [10,11,37] which has become one ma-
jor constraint along this line of research. One alternative is
*Equal contribution, {jingyi.zhang, jiaxing.huang }@ntu.edu.sg.
†Corresponding author, [email protected].
Figure 1. Existing domain adaptive panoptic segmentation [20]
adapts things and stuff separately with two isolated networks
(for instance segmentation and semantic segmentation) and fuses
their outputs to produce the final panoptic segmentation as in (a),
leading to excessive network parameters as well as complicated
and computationally intensive training and inference. Differently,
UniDAformer employs a single unified network to jointly adapt
things and stuff as in (b), which involves much less parameters
and simplifies the training and inference pipeline greatly.
to leverage off-the-shelf labeled data from one or multiple
source domains . Nevertheless, the source-trained models
often experience clear performance drop while applied to
various target domains that usually have different data dis-
tributions as compared with the source domains [20].
Domain adaptive panoptic segmentation can mitigate the
inter-domain discrepancy by aligning one or multiple la-
beled source domains and an unlabeled target domain [20].
To the best of our knowledge, CVRN [20] is the only work
that tackles domain adaptive panoptic segmentation chal-
lenges by exploiting the distinct natures of instance segmen-
tation and semantic segmentation. Specifically, CVRN in-
troduces cross-view regularization to guide the two segmen-
tation tasks to complement and regularize each other and
achieves very impressive performance. However, CVRN re-
                   | Multi-branch      |                 Unified Architecture
                   | PSN [30]          | Panoptic FCN [35] | MaskFormer [9]    | DETR [4]
                   | mSQ   mRQ   mPQ   | mSQ   mRQ   mPQ   | mSQ   mRQ   mPQ   | mSQ   mRQ   mPQ
Supervised Setup   | 75.5  60.2  47.7  | 79.7  73.1  59.6  | 79.1  62.6  51.1  | 79.1  64.1  51.9
Adaptation Setup   | 59.0  27.8  20.1  | 47.5  19.7  15.8  | 56.6  19.2  16.2  | 56.4  21.8  18.3
Performance Drop   | -16.5 -32.4 -27.6 | -32.2 -53.4 -43.8 | -22.5 -43.4 -34.9 | -22.7 -42.3 -33.6
Table 1. Panoptic segmentation with traditional multi-branch architecture [30] and recent unified architectures [4, 9, 35]: The Supervised
Setup trains with the Cityscapes [10] and tests on the same dataset. The UDA Setup trains with the SYNTHIA [45] and tests on Cityscapes.
It can be seen that the performance drops between the two learning setups come more from mRQ than from mSQ consistently across
different architectures. In addition, such a phenomenon is more severe for unified architectures. This demonstrates a clear false prediction
issue in unified domain adaptive panoptic segmentation as mRQ is computed with false positives and false negatives.
lies on a multi-branch architecture that adopts a two-phase
pipeline with two separate networks as illustrated in Fig. 1
(a). This sophisticated design directly doubles network pa-
rameters, slows down the training, and hinders it from being
end-to-end trainable. It is desirable to have a unified panop-
tic adaptation network that can effectively handle the two
segmentation tasks with a single network.
We design a unified domain adaptive panoptic segmenta-
tion transformer (UniDAformer) as shown in Fig. 1 (b). Our
design is based on the observation that one major issue in
unified panoptic adaptation comes from a severe false pre-
diction problem. As shown in Table 1, most recent unified
panoptic segmentation architectures [4, 9, 35] outperform
traditional multi-branch architectures [30] by large margins
under the supervised setup. However, the situation inverts
completely under unsupervised domain adaptation setup.
Such contradictory results are more severe for the recog-
nition quality in mRQ. This shows that the panoptic quality
drop of unified architecture mainly comes from False Posi-
tives (FP) and False Negatives (FN) as mRQ is computed
from all predictions (True Positives, False Negatives and
False Positives) while the segmentation quality in mSQ is
computed with True Positives (TP) only.
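For reference, mSQ, mRQ and mPQ follow the standard panoptic quality decomposition, which makes explicit why false positives and false negatives only enter the recognition quality term; the sketch below computes the three quantities for a single class and is an illustration, not the authors' evaluation code.

```python
def panoptic_quality(tp_ious, num_fp, num_fn):
    """Standard decomposition PQ = SQ * RQ for one class.
    tp_ious: IoU values of matched (true-positive) segments."""
    tp = len(tp_ious)
    sq = sum(tp_ious) / tp if tp > 0 else 0.0            # segmentation quality: mean IoU over TP only
    rq = tp / (tp + 0.5 * num_fp + 0.5 * num_fn + 1e-9)  # recognition quality: penalised by FP and FN
    return sq, rq, sq * rq

# usage: three matched segments, two false positives, one false negative
sq, rq, pq = panoptic_quality([0.9, 0.8, 0.7], num_fp=2, num_fn=1)
```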
In the proposed UniDAformer, we mitigate the false
prediction issue by introducing Hierarchical Mask Cali-
bration (HMC) that calibrates inaccurate predictions at the
level of regions, superpixels, and pixels. With the cor-
rected masks, UniDAformer re-trains the network via an
online self-training process on the fly. Specifically, HMC
treats both things and stuff predictions as masks uniformly
and corrects each predicted pseudo mask hierarchically in
a coarse-to-fine manner, i.e., from region level that cali-
brates the overall category of each mask to superpixel and
pixel levels that calibrate the superpixel and pixels around
the boundary of each mask (which are more susceptible
to prediction errors). UniDAformer has three unique fea-
tures. First , it achieves unified panoptic adaptation by treat-
ing things and stuff as masks and adapting them uniformly.
Second , it mitigates the false prediction issue effectivelyby calibrating the predicted pseudo masks iteratively and
progressively. Third , it is end-to-end trainable with much
less parameters and simpler training and inference pipeline.
Besides, HMC introduces little computation overhead and
could be used as a plug-in.
The contributions of this work can be summarized in
three aspects. First, we propose UniDAformer that en-
ables concurrent domain adaptive instance segmentation
and semantic segmentation within a single network. It is
the first end-to-end unified domain adaptive panoptic seg-
mentation transformer to the best our knowledge. Sec-
ond, we design Hierarchical Mask Calibration with online
self-training, which allows to calibrate the predicted pseudo
masks on the fly during self-training. Third, extensive ex-
periments over multiple public benchmarks show that the
proposed UniDAformer achieves superior segmentation ac-
curacy and efficiency as compared with the state-of-the-art.
|
Zhao_MetaFusion_Infrared_and_Visible_Image_Fusion_via_Meta-Feature_Embedding_From_CVPR_2023
|
Abstract
Fusing infrared and visible images can provide more tex-
ture details for the subsequent object detection task. Conversely, the detection task furnishes object semantic information to improve infrared and visible image fusion. Thus, joint fusion and detection learning that exploits their mutual promotion is attracting increasing attention. However, the feature gap be-
tween these two different-level tasks hinders the progress.
To address this issue, this paper proposes an infrared and visible image fusion method via meta-feature embedding from object detection. The core idea is that a meta-feature embedding model is designed to generate object semantic features according to the fusion network's ability, so that the semantic features are naturally compatible with fusion features. It is op-
timized by simulating a meta learning. Moreover, we further
implement a mutual promotion learning between fusion and
detection tasks to improve their performances. Comprehen-
sive experiments on three public datasets demonstrate the
effectiveness of our method. Code and model are available
at: https://github.com/wdzhao123/MetaFusion.
|
1. Introduction
Multi-modality sensor technology has promoted the ap-
plication of multi-modality images in different areas. A-
mong them, infrared images and visible images have been
utilized commonly, as the information contained in these
two modalities is complementary. Specifically, infrared im-
ages can supply object thermal structures without being af-
fected by illumination. But they are short of texture de-
tails. On the contrary, visible images can catch the texture
information for the scene. But they are severely affected by
light. Thus, many methods [15,25,35,43–45,47] focus on studying pixel-level infrared and visible image fusion (IVIF), thereby helping high-level tasks improve performance, e.g., object detection (OD) [20, 34].
*Corresponding author
Figure 1. Different joint learning methods of infrared and visible image fusion (IVIF) and object detection (OD). (a) Separate optimization method: the IVIF network F is firstly optimized by the fusion loss Lf; then the fusion result z is generated by F from the input infrared and visible image pair x, y; finally, the OD network D is optimized by the detection loss Ld using z. (b) Cascaded optimization method: the OD network is treated as a constraint to optimize the IVIF network by the losses Lf and Ld. (c) Meta-feature embedding method: the meta-feature embedding network MFE is optimized to learn how to guide F to attain a low Lf; MFE then generates meta features from the detection feature F_i^d and the fusion feature F_i^f; finally, the meta features are used to guide F by the loss Ld.
IVIF and OD can greatly benefit from each other. IVIF
generates the fused image that contains more information
than any single modality image to improve OD. Howev-
er, IVIF mainly focuses on the pixel relationship between
an image pair, and there is little consideration for object
semantic. In contrast, OD can provide rich object seman-
tic information to IVIF, as its aim is locating the objects.
Therefore, this paper studies a joint learning framework be-
tween IVIF and OD to improve their performances.
Existing joint learning methods of IVIF and OD can
be divided into two categories: separate optimization and
cascaded optimization. Separate optimization firstly train-
s IVIF network, and then trains OD network using IVIF
results, as shown in Figure 1(a). Thus, most methods fo-
cus on improving the fusion effect, e.g., designing network-
s [13, 23, 35, 36] and introducing constraints [10, 26, 45].
Obviously, separate optimization neglects the help of OD.
Cascaded optimization adopts OD network as a constraint
to train IVIF network, and thus forces the IVIF network to
generate fusion images with easily detected objects [20], as
shown in Figure 1(b). However, directly utilizing the high-
level OD constraint to guide the pixel-level IVIF will result
in limited effect. Therefore, we leverage OD feature maps
that guide IVIF feature maps to obtain more semantic infor-
mation. Unfortunately, OD features are mismatched with
IVIF features due to their task-level difference. Addressing
this issue, we propose a meta-feature embedding network
(MFE ), as shown in Figure 1(c). The idea is that if MFE
generates OD features according to the IVIF network abil-
ity, the OD features are naturally compatible with the IVIF
network, and the optimization can be achieved by simulat-
ing a meta learning.
Specifically, an infrared and visible image fusion via
meta-feature embedding from object detection is proposed,
which is named as MetaFusion. MetaFusion includes IVIF
network (F), OD network (D), and MFE. In particular, MFE is expected to generate meta features to bridge the gap between F and D, which is optimized by two alternate steps: inner update and outer update. In the inner update process, we firstly optimize F using the meta training set S_mtr to obtain its updated network F'. Then, F' calculates the fusion loss on the meta testing set S_mts to optimize MFE. The motivation is that if MFE successfully generates meta features which are compatible with F, F' will produce better fused images, i.e., the fusion loss should be lower. In the outer update process, F is optimized under the guidance of the meta features generated by the fixed MFE on S_mtr and S_mts. In this way, F can learn how to extract semantic information to improve fusion quality. In the above two alternate steps, D is fixed to offer detection semantic information. Thus, we further implement a mutual promotion learning, where we use the improved F to generate fusion results to finetune D, and then the improved D offers better
semantic information to optimize F. In summary, our contributions are as follows. (1) We explore the joint learning framework of IVIF and OD, and propose MetaFusion to obtain superior performance on these two tasks. (2) A meta-feature embedding network is designed to generate meta features that bridge the gap between F and D. (3) Subsequently, mutual promotion learning between F and D is introduced to improve their performances. (4) Ex-
tensive experiments on image fusion and object detection
validate the effectiveness of the proposed method.
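To make the alternating optimization above concrete, here is a schematic (first-order) sketch of one round of the inner/outer update; the interfaces D.features, F.features and the way meta features are injected into the fusion network are assumptions for illustration, not the released MetaFusion code:

import copy
import torch

def metafusion_round(F, MFE, D, S_mtr, S_mts, fusion_loss, lr=1e-4):
    """One inner/outer alternation. D (the detector) stays fixed throughout."""
    # Inner update, step 1: virtually update a copy F' of the fusion network on S_mtr,
    # using the meta features produced by MFE.
    F_prime = copy.deepcopy(F)
    opt_Fp = torch.optim.SGD(F_prime.parameters(), lr=lr)
    for x, y in S_mtr:
        meta = MFE(D.features(x, y), F_prime.features(x, y))
        loss = fusion_loss(F_prime(x, y, meta), x, y)
        opt_Fp.zero_grad(); loss.backward(); opt_Fp.step()

    # Inner update, step 2: if MFE's meta features are compatible with the fusion
    # network, F' should fuse S_mts well, so the fusion loss on S_mts trains MFE.
    # (First-order approximation: the dependence through F' 's earlier steps is ignored.)
    opt_MFE = torch.optim.Adam(MFE.parameters(), lr=lr)
    for x, y in S_mts:
        meta = MFE(D.features(x, y), F_prime.features(x, y))
        loss = fusion_loss(F_prime(x, y, meta), x, y)
        opt_MFE.zero_grad(); loss.backward(); opt_MFE.step()

    # Outer update: optimize F itself under the guidance of the (now fixed) MFE.
    opt_F = torch.optim.Adam(F.parameters(), lr=lr)
    for x, y in list(S_mtr) + list(S_mts):
        with torch.no_grad():
            meta = MFE(D.features(x, y), F.features(x, y))
        loss = fusion_loss(F(x, y, meta), x, y)
        opt_F.zero_grad(); loss.backward(); opt_F.step()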
|
Zheng_HS-Pose_Hybrid_Scope_Feature_Extraction_for_Category-Level_Object_Pose_Estimation_CVPR_2023
|
Abstract
In this paper, we focus on the problem of category-level
object pose estimation, which is challenging due to the large
intra-category shape variation. 3D graph convolution (3D-
GC) based methods have been widely used to extract local
geometric features, but they have limitations for complex
shaped objects and are sensitive to noise. Moreover, the
scale and translation invariant properties of 3D-GC restrict
the perception of an object’s size and translation informa-
tion. In this paper, we propose a simple network structure,
the HS-layer, which extends 3D-GC to extract hybrid scope
latent features from point cloud data for category-level ob-
ject pose estimation tasks. The proposed HS-layer: 1) is
able to perceive local-global geometric structure and global
information, 2) is robust to noise, and 3) can encode size
and translation information. Our experiments show that the
simple replacement of the 3D-GC layer with the proposed
HS-layer on the baseline method (GPV-Pose) achieves a
significant improvement, with the performance increased by
14.5% on5◦2cm metric and 10.3% on IoU 75. Our method
outperforms the state-of-the-art methods by a large margin
(8.3% on5◦2cm,6.9% on IoU 75) on REAL275 dataset and
runs in real-time (50 FPS)1.
|
1. Introduction
Accurate and efficient estimation of an object’s pose and
size is crucial for many real-world applications [48], in-
cluding robotic manipulation [15], augmented reality [36],
and autonomous driving, among others. In these appli-
*The corresponding author.
1Code is available: https://github.com/Lynne-Zheng-Linfang/HS-Pose
Figure 1. Illustration of the hybrid scope feature extraction of the
HS-layer . As shown in the right figure, the proposed HS-layer possesses
various advantages, including the capability of capturing both local and
global geometric information, robustness to outliers, and the encoding of
scale and translation information. Building upon the GPV-pose, the HS-
layer is employed to develop a category-level pose estimation framework,
namely HS-Pose . Upon receiving an input point cloud, HS-Pose outputs
the estimated 6D pose and 3D size of the object, as shown in the left fig-
ure. Given the strengths of the HS-layer, HS-Pose is capable of handling
complex object shapes, exhibits robustness to outliers, and achieves better
performance compared with existing methods.
cations, it is essential that pose estimation algorithms can
handle the diverse range of objects encountered in daily
life. While many existing works [3,13,29,50] have demon-
strated impressive performance in estimating an object’s
pose, they typically focus on only a limited set of ob-
jects with known shapes and textures, aided by CAD mod-
els. In contrast, category-level object pose estimation al-
gorithms [7, 22, 34, 45, 49] address all objects within a
given category and enable pose estimation of unseen objects
during inference without the target objects’ CAD models,
which is more suitable for daily-life applications. However,
developing such algorithms is more challenging due to the
shape and texture diversity within each category.
In recent years, category-level object pose estimation re-
search [5, 55, 56] has advanced rapidly by adopting state-
of-the-art deep learning methods. [2, 46] gain the ability
to generalize by mapping the input shape to normalized or
metric-scale canonical spaces and then recovering the ob-
jects’ poses via correspondence matching. Better handling
of intra-category shape variation is also achieved by lever-
aging shape priors [4, 42, 56], symmetry priors [20], or do-
main adaptation [17,21]. Additionally, [5] enhances the per-
ceptiveness of local geometry, and [7,55] exploit geometric
consistency terms to improve the performance further.
Despite the remarkable progress of existing methods,
there is still room for improvement in the performance of
the category-level object pose estimation. Reconstruction
and matching-based methods [17, 42, 46] are usually lim-
ited in speed due to the time-consuming correspondence-
matching procedure. Recently, various methods [5, 7,
20, 55, 56] built on 3D graph convolution (3D-GC) [23]
have achieved impressive performance and run in real-time.
They show outstanding local geometric sensitivity and the
ability to generalize to unseen objects. However, only look-
ing at small local regions impedes their ability to leverage
the global geometric relationships that are essential for han-
dling complex geometric shapes and makes them vulnerable
to outliers. In addition, the scale and translation invariant
nature of 3D-GC restrict the perception of object size and
translation information.
To overcome the limitations of 3D-GC in category-level
object pose estimation, we propose the hybrid scope latent
feature extraction layer (HS-layer), which can perceive both
local and global geometric relationships and has a better
awareness of translation and scale. Moreover, the proposed
HS-layer is highly robust to outliers. To demonstrate the ef-
fectiveness of the HS-layer, we replace the 3D-GC layers in
GPV-Pose [7] to construct a new category-level object pose
estimation framework, HS-pose. This framework signifi-
cantly outperforms the state-of-the-art method and runs in
real time. Our approach extends the perception of 3D-GC to
incorporate other essential information by using two paral-
lel paths for information extraction. The first path encodes
size and translation information (STE), which is missing in
3D-GC due to its invariance property. The second path ex-
tracts outlier-robust geometric features using the receptive
field with the feature distance metric (RF-F) and the outlier-
robust feature extraction layer (ORL).
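A minimal, hypothetical sketch of this two-path idea (not the released HS-layer code): one branch keeps raw coordinates so size and translation are not normalized away, while the other aggregates neighbors selected by feature distance, which is what makes the receptive field less sensitive to outliers:

import torch
import torch.nn as nn

class TwoPathPointLayer(nn.Module):
    """Illustrative two-path per-point feature layer (simplified sketch)."""
    def __init__(self, in_dim, out_dim, k=16):
        super().__init__()
        self.k = k
        self.ste = nn.Linear(3, out_dim)            # path 1: raw xyz keeps scale/translation cues
        self.geo = nn.Linear(2 * in_dim, out_dim)   # path 2: local geometric aggregation
        self.fuse = nn.Linear(2 * out_dim, out_dim)

    def forward(self, xyz, feat):
        # xyz: (B, N, 3) coordinates; feat: (B, N, C) current per-point features.
        ste = self.ste(xyz)

        # Receptive field defined by *feature* distance: outliers, being far away in
        # feature space, rarely end up in any point's neighborhood.
        idx = torch.cdist(feat, feat).topk(self.k, largest=False).indices          # (B, N, k)
        nbr = torch.gather(feat.unsqueeze(1).expand(-1, feat.size(1), -1, -1), 2,
                           idx.unsqueeze(-1).expand(-1, -1, -1, feat.size(-1)))    # (B, N, k, C)
        local = self.geo(torch.cat([feat, (nbr - feat.unsqueeze(2)).max(dim=2).values], dim=-1))

        return self.fuse(torch.cat([ste, local], dim=-1))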
The main contribution of this paper is as follows:
• We propose a network architecture, the hybrid scope
latent feature extraction layer (HS-layer), that can
simultaneously perceive local and global geometric
structure, encode translation and scale information, and extract outlier-robust feature information. Our
proposed HS-layer balances all these critical aspects
necessary for category-level pose estimation.
• We use the HS-layer to develop a category-level pose
estimation framework, HS-Pose, based on GPV-Pose.
The HS-Pose, when compared to its parent frame-
work, has an advantage in handling complex geometric
shapes, capturing object size and translation while be-
ing robust to noise.
• We conduct extensive experiments and show that the
proposed method can handle complex shapes and out-
performs the state-of-the-art methods by a large mar-
gin while running in real-time (50FPS).
|
Zhu_R2Former_Unified_Retrieval_and_Reranking_Transformer_for_Place_Recognition_CVPR_2023
|
Abstract
Visual Place Recognition (VPR) estimates the location of
query images by matching them with images in a reference
database. Conventional methods generally adopt aggre-
gated CNN features for global retrieval and RANSAC-based
geometric verification for reranking. However, RANSAC
only employs geometric information but ignores other pos-
sible information that could be useful for reranking, e.g. lo-
cal feature correlations, and attention values. In this paper,
we propose a unified place recognition framework that han-
dles both retrieval and reranking with a novel transformer
model, named R2Former. The proposed reranking module
takes feature correlation, attention value, and xy coordi-
nates into account, and learns to determine whether the
image pair is from the same location. The whole pipeline
is end-to-end trainable and the reranking module alone
can also be adopted on other CNN or transformer back-
bones as a generic component. Remarkably, R2Former
significantly outperforms state-of-the-art methods on ma-
jor VPR datasets with much less inference time and mem-
ory consumption. It also achieves the state-of-the-art on
the hold-out MSLS challenge set and could serve as a sim-
ple yet strong solution for real-world large-scale applica-
tions. Experiments also show vision transformer tokens
are comparable and sometimes better than CNN local fea-
tures on local matching. The code is released at https:
//github.com/Jeff-Zilence/R2Former .
|
1. Introduction
Visual Place Recognition (VPR) aims to localize query
images from unknown locations by matching them with a
set of reference images from known locations. It has great
potential for robotics [47], navigation [41], autonomous
driving [9], augmented reality (AR) applications. Previ-
ous works [26, 53] generally formulate VPR as a retrieval
problem with two stages, i.e. global retrieval, and rerank-
ing. The global retrieval stage usually applies aggregation
†This work was done during the first author’s internship at ByteDance
Figure 1. An overview of the conventional pipeline and our unified
framework. Both our global retrieval and reranking modules are
end-to-end trainable transformer layers, which achieve state-of-
the-art performance with much less computational cost.
methods ( e.g. NetVLAD [3] and GeM [43]) on top of CNN
(Convolutional Neural Network) to retrieve top candidates
from a large reference database. While some works [3, 6]
only adopt global retrieval, current state-of-the-art meth-
ods [26, 53] conduct reranking ( i.e. geometric verification
with RANSAC [19]) on the top-k (e.g., k = 100) candidates
to further confirm the matches, typically leading to a signif-
icant performance boost. However, geometric information
is not the only information that could be useful for rerank-
ing, and task-relevant information could be learned with a
data-driven module to further boost performance. Besides,
the current reranking process requires a relatively large in-
ference time and memory footprint (typically over 1s and
1MB per image), which cannot scale to real-world appli-
cations with large QPS (Queries Per Second) and reference
database ( >1M images).
Recently vision transformer [17] has achieved signifi-
cant performance on a wide range of vision tasks, and has
been shown to have a high potential for VPR [53] (Sec. 2).
However, the predominant local features [26, 53] are still
based on CNN backbones, due to the built-in feature lo-
cality with limited receptive fields. Although vision trans-
former [17] considers each patch as an input token and natu-
rally encodes local information, the local information might
be overwritten with global information by the strong global
correlation between all tokens in every layer. Therefore, it is
still unclear how vision transformer tokens perform in local
matching as compared to CNN local features in this field.
In this paper, we integrate the global retrieval and rerank-
ing into one unified framework employing only transform-
ers (Fig. 1), abbreviated as R2Former, which is simple, ef-
ficient, and effective. The global retrieval is tackled based
on the class token without additional aggregation modules
[3, 43] and the other image tokens are adopted as local fea-
tures. Different from geometric verification which focuses
on geometric information between local feature pairs, we
feed the correlation between the tokens (local features) of
a pair of images, xy coordinates of tokens, and their atten-
tion information to transformer modules, so that the mod-
ule can learn task-relevant information that could be use-
ful for reranking. The global retrieval and reranking parts
can be either trained in an end-to-end manner or tuned al-
ternatively with a more stable convergence. The proposed
reranking module can also be adopted on other CNN or
transformer backbones, and our comparison shows that vi-
sion transformer tokens are comparable to CNN local fea-
tures in terms of reranking performance (Table 6).
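As a rough illustration of what such a learned reranker consumes per query-candidate pair (correlations, coordinates and attention values rather than geometry alone); the pair-selection rule and the small transformer below are simplified assumptions, not the released R2Former code:

import torch
import torch.nn as nn

def build_pair_tokens(q_feat, q_xy, q_attn, c_feat, c_xy, c_attn, top_m=500):
    """Per-pair reranking input from two sets of local tokens.

    q_feat, c_feat: (N, D) local token features; q_xy, c_xy: (N, 2) patch coordinates;
    q_attn, c_attn: (N,) attention values of the tokens.
    """
    corr = q_feat @ c_feat.t()                              # (N, N) feature correlations
    best, match = corr.max(dim=1)                           # best candidate token per query token
    keep = best.topk(min(top_m, best.numel())).indices      # keep the most correlated pairs
    return torch.cat([best[keep, None],                     # correlation value
                      q_xy[keep], c_xy[match[keep]],        # xy of both tokens
                      q_attn[keep, None], c_attn[match[keep], None]], dim=1)   # (top_m, 7)

class PairReranker(nn.Module):
    """Tiny transformer that scores whether the pair comes from the same place."""
    def __init__(self, dim=32):
        super().__init__()
        self.embed = nn.Linear(7, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.score = nn.Linear(dim, 1)

    def forward(self, pair_tokens):                         # (B, top_m, 7)
        h = self.encoder(self.embed(pair_tokens))
        return self.score(h.mean(dim=1)).squeeze(-1)        # one matching score per image pair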
Without bells and whistles, the proposed method out-
performs both retrieval-only and retrieval+reranking state-
of-the-art methods on a wide range of VPR datasets. The
proposed method follows a very efficient design and ap-
plies linear layers for dimension reduction: i.e., only 256 and 500×131 for the global and local feature dimensions and only 32 for the transformer dimension of the reranking module; thus it is significantly faster (>4.7× QPS) with much less (<22%)
memory consumption than previous methods [26,53]. Both
the global and local features are extracted from the same
backbone model only once, and the reranking of top-k can-
didates is finished with only one forward pass by computing
the reranking scores of all candidate pairs in parallel within
one batch. The reranking speed can be further boosted by
parallel computing on multiple GPUs with a >20× speedup
over previous methods [26, 53]. We demonstrate that the
proposed reranking module also learns to focus on good lo-
cal matches like RANSAC [19]. We summarize our contri-
butions as follows:
• A unified retrieval and reranking framework for place
recognition employing pure transformers, which demon-
strates that vision transformer tokens are comparable and
sometimes better than CNN local features in terms of
reranking or local matching.
• A novel transformer-based reranking module that learns to attend to the correlation of informative local feature
pairs. It can be combined with either CNN or transformer
backbones with better performance and efficiency than
other reranking methods, e.g. RANSAC.
• Extensive experiments showing state-of-the-art perfor-
mance on a wide range of place recognition datasets with
significantly less ( <22%) inference latency and memory
consumption than previous reranking-based methods.
|
Zhao_OmniAL_A_Unified_CNN_Framework_for_Unsupervised_Anomaly_Localization_CVPR_2023
|
Abstract
Unsupervised anomaly localization and detection is cru-
cial for industrial manufacturing processes due to the lack
of anomalous samples. Recent unsupervised advances on
industrial anomaly detection achieve high performance by
training separate models for many different categories. The
model storage and training time cost of this paradigm is
high. Moreover, the setting of one-model-N-classes leads
to severe degradation of existing methods. In this pa-
per, we propose a unified CNN framework for unsuper-
vised anomaly localization, named OmniAL. This method
conquers aforementioned problems by improving anomaly
synthesis, reconstruction and localization. To prevent the
model learning identical reconstruction, it trains the model
with proposed panel-guided synthetic anomaly data rather
than directly using normal data. It increases anomaly re-
construction error for multi-class distribution by using a
network that is equipped with proposed Dilated Channel
and Spatial Attention (DCSA) blocks. To better localize
the anomaly regions, it employs proposed DiffNeck between
reconstruction and localization sub-networks to explore
multi-level differences. Experiments on 15-class MVTecAD
and 12-class VisA datasets verify the advantage of proposed
OmniAL that surpasses the state-of-the-art of unified mod-
els. On 15-class-MVTecAD/12-class-VisA, its single unified model achieves 97.2/87.8 image-AUROC, 98.3/96.6 pixel-
AUROC and 73.4/41.7 pixel-AP for anomaly detection and
localization respectively. Besides that, we make the first at-
tempt to conduct a comprehensive study on the robustness
of unsupervised anomaly localization and detection meth-
ods against different level adversarial attacks. Experiential
results show OmniAL has good application prospects for its
superior performance.
|
1. Introduction
In real industrial scenarios, the location of anomaly
[22, 26] reveals important information, such as defective
types and degrees. It is essential not only to inspect whether
a sample is defective but also to know where the specific
anomaly regions are. Since anomaly appearance is inex-
haustible, it is almost impossible and infeasible to collect
and manually annotate all kinds of abnormal data. Thus,
only normal samples are available for training a detector
that is robust enough to find out unseen anomalies during in-
ference phase. Considering the diversity of classes and var-
ious types of one class, the conventional training paradigm
of N models for N classes, as shown in Fig.1a, may not
be the best solution. The model storage and training time
cost increase with the number of classes. As shown in
Fig.1b, existing method severely degrades anomaly local-
Figure 2. Problem analysis. The final failure may be caused by either reconstruction or localization.
ization performance if the training paradigm changes to one
model for N classes. Therefore, a robust unified framework
for unsupervised anomaly localization is highly demanded
for intelligent industrial applications.
With the limitation of available training data, many ap-
pealing unsupervised approaches [14, 18, 35, 38] using syn-
thesized anomaly data are proposed. These approaches gen-
erate anomalous instances to inspire the anomaly detector to
learn discriminative features. Their experiments show that
the realism of the generated anomalous instances has a
strong impact on the quality of anomaly localization. How-
ever, none of these methods consider the training paradigm
of one model for N classes. When switching to the unified
training paradigm, they are more prone to learn an identical
short-cut and fail to discriminate the anomaly.
With the normal and synthesized anomalous sam-
ples, recent unsupervised learning methods train a deep
anomaly detector by either a distance-based [6, 14, 19–21,
27] or reconstruction-based [2, 9, 35, 36, 38] way. The
reconstruction-based architectures [35, 38] are supposed to
reconstruct normal images more accurately than the un-
seen anomalous. The anomaly localization is then cal-
culated from the reconstruction error between the original
and reconstructed versions of the input image, as shown in
Fig.2a. The prediction of anomaly location is not only based
on the reconstruction quality but also the ability of spot-
ting the reconstruction error. The typical reconstruction-
based method JNLD [38] learns a joint representation of
an anomalous image and its anomaly-free reconstruction,
while simultaneously learning a decision boundary between
normal and simulated anomalous examples. As shown in
Fig.2b and Fig.2c, under the unified setting, JNLD [38] fails
to produce correct results either because of the reconstruc-
tion failure or the localization failure.
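For context, the plain reconstruction-error baseline referred to here fits in a few lines (a hedged sketch; OmniAL itself replaces the simple difference with a learned localization sub-network operating on multi-level differences):

import numpy as np
from scipy.ndimage import gaussian_filter

def reconstruction_anomaly_map(image, reconstruction, sigma=4.0, threshold=0.3):
    """Per-pixel anomaly score from smoothed reconstruction error (baseline only)."""
    err = np.abs(image.astype(np.float32) - reconstruction.astype(np.float32))
    if err.ndim == 3:                           # average over color channels
        err = err.mean(axis=-1)
    amap = gaussian_filter(err, sigma)          # suppress pixel-level noise
    amap = (amap - amap.min()) / (amap.max() - amap.min() + 1e-8)
    return amap, amap > threshold               # score map and binary anomaly mask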
To conquer aforementioned problems, we propose a
novel unified framework OmniAL for effectively localiz-ing anomaly pixels of different classes only by using a sin-
gle model. OmniAL uses a panel-guided anomaly synthe-
sis method that controls the portion of normal and anomaly
regions for each training sample. By doing this, OmniAL
blocks the chance of learning identical shortcut from the
source. To increase the anomaly reconstruction error for
multi-class distribution, OmniAL constructs a reconstruc-
tion and a localization sub-networks that are equipped with
proposed Dilated Channel and Spatial Attention (DCSA)
blocks. To better localize the anomaly regions, OmniAL
employs a DiffNeck module between the reconstruction and
localization sub-networks to explore multi-level reconstruc-
tion errors. As shown in Fig.1 and Fig.2, OmniAL learns
a single unified model for multiple classes that produces
high quality reconstruction and precise anomaly localiza-
tion. Furthermore, we conduct an exhaustive evaluation
of reconstruction and localization performance against to
multi-level adversarial attacks.
In summary, we make the following main contributions:
• We construct a unified CNN framework, OmniAL, for
unsupervised anomaly localization that is equipped
with proposed panel-guided anomaly synthesis, DCSA
block, and DiffNeck module. OmniAL achieves supe-
rior performance for anomaly localization on challeng-
ing MVTecAD [1] and VisA [39] datasets compared to
the state-of-the-art.
• By preventing the model from learning identical reconstruction, our proposed panel-guided anomaly synthesis method also brings substantial improve-
ment for existing methods under the unified setting.
It boosts the image-AUROC/pixel-AUROC/pixel-AP
from 88.7/87.1/49.4 to 92.5/94.5/57.4 for Draem [35].
• We make a comprehensive study on the robustness of
separate/unified anomaly localization methods against
different level adversarial attacks. Our synthesized
Figure 3. Framework of OmniAL. It consists of panel-guided anomaly synthesis, reconstruction and localization. Anomaly synthesis is
based on anomaly panel and three variants of the Just Noticeable Distortion (JND) map. The synthetic anomaly is reconstructed into normal
image and corresponding JND map by the Dilated Channel Spatial Attention (DCSA) modules equipped reconstruction sub-network. The
localization sub-network with a DiffNeck module localizes the anomaly regions by exploring the difference between reconstructed and
original data.
adversarial datasets exhibit strong attack capability
against anomaly detection, reconstruction and local-
ization, also helping to analyse the risks of existing
methods.
|
Zheng_Prototype-Based_Embedding_Network_for_Scene_Graph_Generation_CVPR_2023
|
Abstract
Current Scene Graph Generation (SGG) methods ex-
plore contextual information to predict relationships among
entity pairs. However, due to the diverse visual appearance
of numerous possible subject-object combinations, there
is a large intra-class variation within each predicate cat-
egory, e.g., “man-eating -pizza, giraffe-eating -leaf”, and
the severe inter-class similarity between different classes,
e.g., “man-holding -plate, man-eating -pizza”, in model’s la-
tent space. The above challenges prevent current SGG
methods from acquiring robust features for reliable rela-
tion prediction. In this paper, we claim that the predi-
cate’s category-inherent semantics can serve as class-wise
prototypes in the semantic space for relieving the chal-
lenges. To the end, we propose the Prototype-based Embed-
ding Network (PE-Net) , which models entities/predicates
with prototype-aligned compact and distinctive representa-
tions and thereby establishes matching between entity pairs
and predicates in a common embedding space for relation
recognition. Moreover, Prototype-guided Learning (PL)
is introduced to help PE-Net efficiently learn such entity-
predicate matching, and Prototype Regularization (PR) is
devised to relieve the ambiguous entity-predicate match-
ing caused by the predicate’s semantic overlap. Exten-
sive experiments demonstrate that our method gains su-
perior relation recognition capability on SGG, achieving
new state-of-the-art performances on both Visual Genome
and Open Images datasets. The codes are available at
https://github.com/VL-Group/PENET .
|
1. Introduction
Scene Graph Generation (SGG) is a fundamental com-
puter vision task that involves detecting the entities and
predicting their relationships in an image to generate a
scene graph, where nodes indicate entities and edges in-
*Equal contribution.
†Corresponding author.
(Figure panels: (a) man-holding-plate, (b) person-holding-rocket, (c) man-eating-pizza, (d) man-eating-pizza, (e) giraffe-eating-leaf.)
Figure 1. The illustration of relation representations with large
intra-class variation and severe inter-class similarity. Left: the
feature distribution of “eating” (in red) and “holding” (in blue)
obtained by Motifs [39]. Right: some instances of “eating” and
“holding”. Examples (c) and (e) illustrate that relation instances
from the same class have diverse appearance. Moreover, examples
(a) and (c) demonstrate that similar-looking relation instances may
belong to different categories.
dicate relationships between entity pairs. Such a graph-
structured representation is helpful for downstream tasks
such as Visual Question Answering [5, 15, 41], Image Cap-
tioning [4, 38, 42, 45], and Image Retrieval [10, 25, 40].
Existing SGG models [1, 3, 7, 13, 18, 29, 39] typically
start with an object detector that generates a set of entity
proposals and corresponding features. Then, entity fea-
tures are enhanced by exploring the contextual informa-
tion taking advantage of message-passing modules. Fi-
nally, these refined entity features are used to predict pair-
wise relations. Although many works have made great ef-
forts to explore the contextual information for robust re-
lation recognition, they still suffer from biased-prediction
problems, preferring common predicates ( e.g.,“on”, “of”)
instead of fine-grained ones ( e.g.,“walking on”, “cover-
ing”). To address the problem, various de-biasing frame-
works [6, 14, 21, 28, 34, 37, 46, 47] have been proposed to
obtain balanced prediction results. While alleviating the
long-tailed issue to some extent, most of them only achieve
a trade-off between head and tail predicates. In other words,
they sacrifice the robust representations learned on head
predicates for unworthy improvements in the tail ones [46],
which do not truly improve the model’s holistic recognition
ability for most of the relations.
The origin of the issue lies in the fact that current SGG
methods fail to capture compact and distinctive represen-
tations for relations. For instance, as shown in Fig. 1, the
relation representation, derived from Motifs’ latent space,
is heavily discrete and intersecting. Hence, it makes exist-
ing SGG models hard to learn perfect decision boundaries
for accurate predicate recognition. Accordingly, we sum-
marize the issue as two challenges: large Intra-class vari-
ation within the same relation class and severe Inter-class
similarity between different categories.
Intra-class variation. The intra-class variation arises from
the diverse appearance of entities and various subject-
object combinations. Specifically, entities’ visual appear-
ances change greatly even though they belong to the same
class. Thus, represented as the union feature containing
subject and object entities, relation representations signif-
icantly vary with the appearances of entity instances, e.g.,
various visual representations for “pizza” in Fig. 1(c) vs.
Fig. 1(d). Besides, the numerous subject-object combina-
tions of predicate instances further increase the variation
within each predicate class, e.g., “man-eating-pizza” vs.
“giraffe-eating-leaf” in Fig. 1(c) and Fig. 1(e).
Inter-class similarity. The inter-class similarity of rela-
tions originates from similar-looking interactions but be-
longs to different predicate classes. For instance, as shown
in Fig. 1(a) and Fig. 1(c), the similar visual appearance of
interactions between “man-pizza” and “man-plate” make
current SGG models hard to distinguish “eating” from
“holding”, even if they are semantic irrelevant to each other.
The above challenges motivate us to study two problems:
1) For the intra-class variation, how to capture category-
inherent features, producing compact representations for
entity/predicate instances from the same category. More-
over, 2) for the inter-class similarity, how to derive distinc-
tive representations for effectively distinguishing similar-
looking relation instances between different classes. Our
key intuition is that semantics is more reliable than visual
appearance when modeling entities/predicates. Intuitively,
although entities/predicates of the same class significantly
vary in visual appearance, they all share the representa-
tive semantics, which can be easily captured from their
class labels. Dominated by the representative semantics,
the representations of entities and predicates have smaller
variations within their classes in the semantic space. Be-
sides, the class-inherent semantics is discriminative enough
for visual-similar instances between different categories.
Therefore, in conjunction with the above analysis, model-
ing entities and predicates in the semantic space can provide
highly compact and distinguishable representations against
intra-class variation and inter-class similarity challenges.Inspired by that, we propose a simple but effective
method, Prototype-based Embedding Network (PE-Net),
which produces compact and distinctive entity/predicate
representations for relation recognition. To achieve that,
the PE-Net models entity and predicate instances with com-
pact and distinguishable representations in the semantic
space, which are closely aligned to their semantic proto-
types. Practically, the prototype is defined as the represen-
tative embedding for a group of instances from the same
entity/predicate class. Then, the PE-Net establishes match-
ing between entity pairs (i.e., subject-object (s, o)) and their corresponding predicates (p) for relation recognition (i.e., F(s, o) ≈ p). Besides, a Prototype-guided Learning strat-
egy (PL) is proposed to help PE-Net efficiently learn this
entity-predicate matching. Additionally, to alleviate the am-
biguous entity-predicate matching caused by the semantic
overlap between predicates ( e.g., “walking on” and “stand-
ing on”), Prototype Regularization (PR) is proposed to en-
courage inter-class separation between predicate prototypes
for precise entity-predicate matching. Finally, we introduce
two metrics, i.e., Intra-class Variance (IV) and Intra-class to
Inter-class Variance Ratio (IIVR), to measure the compact-
ness and distinctiveness of entity/predicate representations,
respectively.
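A minimal sketch of the entity-predicate matching idea described above; the fusion of subject and object, the prototype initialization, and the cosine scoring are illustrative assumptions rather than the released PE-Net:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeMatcher(nn.Module):
    """Match a (subject, object) pair against predicate prototypes in a shared space."""
    def __init__(self, ent_dim, embed_dim, num_predicates):
        super().__init__()
        self.proj_s = nn.Linear(ent_dim, embed_dim)
        self.proj_o = nn.Linear(ent_dim, embed_dim)
        # One prototype per predicate class; could be initialized from label word embeddings.
        self.prototypes = nn.Parameter(torch.randn(num_predicates, embed_dim))

    def forward(self, subj_feat, obj_feat):
        # f(s, o): a simple fusion of the projected subject and object representations.
        rel = F.normalize(self.proj_s(subj_feat) + self.proj_o(obj_feat), dim=-1)
        proto = F.normalize(self.prototypes, dim=-1)
        return rel @ proto.t()   # cosine similarity to every predicate prototype

# Training would pull rel towards its ground-truth prototype (e.g. cross-entropy over the
# similarities) while a separate regularizer pushes different prototypes apart.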
In summary, the main contributions of our work are threefold:
• We propose a simple yet effective method, i.e.,
Prototype-based Embedding Network (PE-Net), which
produces compact and distinctive entity/predicate rep-
resentations and then establishes matching between
entity pairs and predicates for relation recognition.
• Moreover, Prototype-guided Learning (PL) is intro-
duced to help PE-Net efficiently learn such entity-
predicate matching, and Prototype Regularization (PR)
is devised to relieve the ambiguous entity-predicate
matching caused by the predicate’s semantic overlap.
• Evaluated on the Visual Genome and Open Images
datasets, our method significantly increases the rela-
tion recognition ability for SGG, achieving new state-
of-the-art performances.
|
Zhou_Shifted_Diffusion_for_Text-to-Image_Generation_CVPR_2023
|
Abstract
We present Corgi, a novel method for text-to-image gen-
eration. Corgi is based on our proposed shifted diffusion
model, which achieves better image embedding generation
from input text. Unlike the baseline diffusion model used in
DALL-E 2, our method seamlessly encodes prior knowledge
of the pre-trained CLIP model in its diffusion process by
designing a new initialization distribution and a new tran-
sition step of the diffusion. Compared to the strong DALL-E
2 baseline, our method performs better in generating im-
age embedding from the text in terms of both efficiency
and effectiveness, resulting in better text-to-image genera-
tion. Extensive large-scale experiments are conducted and
evaluated in terms of both quantitative measures and hu-
man evaluation, indicating a stronger generation ability of
our method compared to existing ones. Furthermore, our
model enables semi-supervised and language-free training
for text-to-image generation, where only part or none of the
images in the training dataset have an associated caption.
Trained with only 1.7% of the images being captioned, our
semi-supervised model obtains FID results comparable to
DALL-E 2 on zero-shot text-to-image generation evaluated
on MS-COCO. Corgi also achieves new state-of-the-art re-
sults across different datasets on downstream language-free
text-to-image generation tasks, outperforming the previous
method, Lafite, by a large margin.
|
1. Introduction
“AI-generated content” has attracted increasingly more
public awareness thanks to the significant progress in re-
cent research on high-fidelity, text-aligned image synthesis tasks [7, 20–23, 26, 32]. Particularly, models trained on
web-scale datasets have demonstrated their impressive abil-
ity to generate out-of-distribution images from arbitrary text
inputs that describe unseen combinations of visual concepts.
*Performed this work during internship at ByteDance; code is available at https://github.com/drboog/Shifted_Diffusion. The research of the first and last author was supported in part by NSF through grants IIS-1910492 and CCF-2200173 and by KAUST CRG10-4663.2.
Starting from DALL-E [21], researchers have proposed a
variety of approaches to further advance the state-of-the-art
(SOTA) of text-to-image generation in terms of both gen-
eration quality and efficiency. Latent Diffusion Model [22]
trains a diffusion model in the latent space of auto-encoder
instead of pixel space, leading to better generation effi-
ciency. GLIDE [15] adopts a hierarchical architecture,
which consists of diffusion models at different resolutions.
Such a model design strategy has shown to be effective
and has been adopted by many follow-up works. DALL-E
2 [20] further introduces an extra image embedding input.
Such an image embedding not only improves the model per-
formance in text-to-image generation but also enables ap-
plications, including image-to-image generation and gener-
ation under multi-modal conditions. Imagen [23] makes use
of a rich pre-trained text encoder [19], demonstrating that a
frozen text encoder pre-trained on the large-scale text-only
dataset can help text-to-image generation models in under-
standing the semantics of text descriptions. Parti [32] shows
a further successful scale-up of the generative model, lead-
ing to impressive improvement in text-to-image consistency
with transformer structure.
The aforementioned approaches focus on improv-
ing text-to-image generation by either scaling up train-
able/frozen modules or designing better model architec-
tures. In this work, we explore an orthogonal direction,
where we propose novel techniques to improve the diffu-
sion process itself and make it more suitable and effective
for text-to-image generation.
Specifically, we propose Corgi (Clip-based shifted diffusiOn model bRidGIng the gap), a novel diffusion model
designed for a flexible text-to-image generation. Our model
can perform text-to-image generation under all the super-
vised, semi-supervised, and language-free settings. By
“bridging the gap,” we emphasize two key novelties in our
method: (1) our model tries to bridge the image-text modal-
ity gap [12] so as to train a better generative model. Modal-
ity gap is a critical concept discovered in pre-trained mod-
els such as CLIP [17], which captures the phenomenon that
multi-modality representations do not align in the joint em-
|
Zhou_RepMode_Learning_to_Re-Parameterize_Diverse_Experts_for_Subcellular_Structure_Prediction_CVPR_2023
|
Abstract
In biological research, fluorescence staining is a key
technique to reveal the locations and morphology of subcel-
lular structures. However, it is slow, expensive, and harm-
ful to cells. In this paper, we model it as a deep learning
task termed subcellular structure prediction (SSP), aiming
to predict the 3D fluorescent images of multiple subcellu-
lar structures from a 3D transmitted-light image. Unfortu-
nately, due to the limitations of current biotechnology, each
image is partially labeled in SSP . Besides, naturally, sub-
cellular structures vary considerably in size, which causes
the multi-scale issue of SSP . To overcome these challenges,
we propose Re-parameterizing Mixture-of-Diverse-Experts
(RepMode), a network that dynamically organizes its pa-
rameters with task-aware priors to handle specified single-
label prediction tasks. In RepMode, the Mixture-of-Diverse-
Experts (MoDE) block is designed to learn the generalized
parameters for all tasks, and gating re-parameterization
(GatRep) is performed to generate the specialized param-
eters for each task, by which RepMode can maintain a com-
pact practical topology exactly like a plain network, and
meanwhile achieves a powerful theoretical topology. Com-
prehensive experiments show that RepMode can achieve
state-of-the-art overall performance in SSP .
|
1. Introduction
Recent years have witnessed great progress in biological
research at the subcellular level [ 1,2,7,21,23,59,61], which
plays a pivotal role in deeply studying cell functions and
behaviors. To address the difficulty of observing subcellu-
*Corresponding Author: [email protected]
(Figure legend: transmitted-light image, fluorescent image, neural network; structures shown include microtubule, Golgi, DNA, cell membrane, mitochondrion, nucleolus, and nuclear envelope.)
Figure 1. (a) Illustration of subcellular structure prediction (SSP),
which aims to predict the 3D fluorescent images of multiple sub-
cellular structures from a 3D transmitted-light image. This task
faces two challenges, i.e. (b) partial labeling and (c) multi-scale.
lar structures, fluorescence staining was invented and has
become a mainstay technology for revealing the locations
and morphology of subcellular structures [ 26]. Specifically,
biologists use the antibodies coupled to different fluores-
cent dyes to ªstainº cells, after which the subcellular struc-
tures of interest can be visualized by capturing distinct flu-
orescent signals [ 64]. Unfortunately, fluorescence staining
is expensive and time-consuming due to the need for ad-
vanced instrumentation and material preparation [ 29]. Be-
sides, phototoxicity during fluorescent imaging is detrimen-
tal to living cells [ 28]. In this paper, we model fluorescence
staining as a deep learning task, termed subcellular struc-
ture prediction (SSP), which aims to directly predict the 3D
fluorescent images of multiple subcellular structures from a
3D transmitted-light image (see Fig. 1(a)). The adoption of
SSP can significantly reduce the expenditure on subcellular
research and free biologists from this demanding workflow.
Such an under-explored and challenging bioimage prob-
lem deserves the attention of the computer vision commu-
nity due to its high potential in biology. Specifically, SSP
is a dense regression task where the fluorescent intensities
of multiple subcellular structures need to be predicted for
each transmitted-light voxel. However, due to the limita-
tions of current biotechnology, each image can only obtain
partial labels . For instance, some images may only have the
annotations of nucleoli, and others may only have the anno-
tations of microtubules (see Fig. 1(b)). Moreover, different
subcellular structures would be presented at multiple scales
under the microscope, which also needs to be taken into ac-
count. For example, the mitochondrion is a small structure
inside a cell, while obviously the cell membrane is a larger
one since it surrounds a cell (see Fig. 1(c)).
Generally, there are two mainstream solutions: 1) Multi-
Net [5,32,33,47]: divides SSP into several individual prediction tasks and employs multiple networks; 2) Multi-Head [6,9,46]: designs a partially-shared network composed of
a shared feature extractor and multiple task-specific heads
(see Fig. 2(a)). However, these traditional approaches or-
ganize network parameters in an inefficient and inflexible
manner, which leads to two major issues. First, they fail to
make full use of partially labeled data in SSP, resulting in
label-inefficiency . In Multi-Net, only the images contain-
ing corresponding labels would be selected as the training
set for each network and thus the other images are wasted,
leading to an unsatisfactory generalization ability. As for
Multi-Head, although all images are adopted for training,
only partial heads are updated when a partially labeled im-
age is input and the other heads do not get involved in train-
ing. Second, to deal with the multi-scale nature of SSP, they
require exhausting pre-design of the network architecture,
and the resultant one may not be suitable for all subcellular
structures, which leads to scale-inflexibility .
In response to the above issues, herein we propose Re-
parameterizing Mixture-of-Diverse-Experts (RepMode ), an
all-shared network that can dynamically organize its pa-
rameters with task-aware priors to perform specified single-
label prediction tasks of SSP (see Fig. 2(b)). Specifically,
RepMode is mainly constructed of the proposed Mixture-of-
Diverse-Experts (MoDE )blocks . The MoDE block contains
the expert pairs of various receptive fields, where these task-
agnostic experts with diverse configurations are designed to
learn the generalized parameters for all tasks . Moreover,
gating re-parameterization (GatRep ) is proposed to conduct
the task-specific combinations of experts to achieve efficient expert utilization, which aims to generate the specialized
parameters for each task . With such a parameter organiz-
ing manner (see Fig. 2(c)), RepMode can maintain a practi-
cal topology exactly like a plain network, and meanwhile
achieves a theoretical topology with a better representa-
tional capacity. Compared to the above solutions, RepMode
can fully learn from all training data, since the experts are
shared with all tasks and thus participate in the training of
each partially labeled image. Besides, RepMode can adap-
tively learn the preference of each task for the experts with
different receptive fields, thus no manual intervention is re-
quired to handle the multi-scale issue. Moreover, by fine-
tuning few newly-introduced parameters, RepMode can be
easily extended to an unseen task without any degradation
of the performance on the previous tasks. Our main contri-
butions are summarized as follows:
• We propose a stronger baseline for SSP, named Rep-
Mode, which can switch different "modes" to predict
multiple subcellular structures and also shows its po-
tential in task-incremental learning.
• The MoDE block is designed to enrich the generalized
parameters and GatRep is adopted to yield the special-
ized parameters, by which RepMode achieves dynamic
parameter organizing in a task-specific manner.
• Comprehensive experiments show that RepMode can
achieve state-of-the-art (SOTA) performance in SSP.
Moreover, detailed ablation studies and further analy-
sis verify the effectiveness of RepMode.
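The re-parameterized gating at the heart of GatRep can be sketched as a gated sum of expert convolution kernels followed by a single convolution; the kernel sizes, gating network, and padding below are illustrative assumptions, not the released RepMode code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedExpertConv3d(nn.Module):
    """Task-gated re-parameterization of several conv 'experts' into one kernel."""
    def __init__(self, channels, num_tasks, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.k_max = max(kernel_sizes)
        # Experts with different receptive fields, later zero-padded to a common size.
        self.experts = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(channels, channels, k, k, k)) for k in kernel_sizes])
        self.gate = nn.Linear(num_tasks, len(kernel_sizes))   # task prior -> expert weights

    def forward(self, x, task_onehot):
        # x: (B, C, D, H, W); task_onehot: (num_tasks,) encodes which structure to predict.
        gates = torch.softmax(self.gate(task_onehot), dim=-1)
        kernel = 0.0
        for g, expert in zip(gates, self.experts):
            pad = (self.k_max - expert.shape[-1]) // 2
            kernel = kernel + g * F.pad(expert, [pad] * 6)    # gated sum of padded kernels
        # A single convolution with the merged kernel keeps the practical topology plain.
        return F.conv3d(x, kernel, padding=self.k_max // 2)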
|
Zhu_VDN-NeRF_Resolving_Shape-Radiance_Ambiguity_via_View-Dependence_Normalization_CVPR_2023
|
Abstract
We propose VDN-NeRF , a method to train neural ra-
diance fields (NeRFs) for better geometry under non-
Lambertian surface and dynamic lighting conditions that
cause significant variation in the radiance of a point when
viewed from different angles. Instead of explicitly model-
ing the underlying factors that result in the view-dependent
phenomenon, which could be complex yet not inclusive, we
develop a simple and effective technique that normalizes
the view-dependence by distilling invariant information al-
ready encoded in the learned NeRFs. We then jointly train
NeRFs for view synthesis with view-dependence normal-
ization to attain quality geometry. Our experiments show
that even though shape-radiance ambiguity is inevitable,
the proposed normalization can minimize its effect on geom-
etry, which essentially aligns the optimal capacity needed
for explaining view-dependent variations. Our method ap-
plies to various baselines and significantly improves geom-
etry without changing the volume rendering pipeline, even
if the data is captured under a moving light source. Code is
available at: https://github.com/BoifZ/VDN-NeRF.
|
1. Introduction
Reconstructing the geometry and appearance of a 3D
scene from a set of 2D images is one of the fundamen-
tal tasks in computer vision and graphics. Recently, based
on volume rendering, neural radiance fields (NeRFs) have
shown great potential in capturing detailed appearance of
the scene, evidenced by their high-quality view synthesis.
However, the reconstructed geometry is still far from sat-
isfying. Since geometry is critical for physical interaction
leveraging such scene representations, many recent works
have started to improve the geometric reconstruction within
the context of volume rendering, e.g., with surface render-
ing or external signals that are more informative of the ge-
*Equal contributions.
†Corresponding authors ([email protected], [email protected]).
‡The State Key Lab of CAD&CG.
§The department of EEE and the Institute of Data Science.
(Figure labels: input images, rendered image, geometry; NeuS, Ours, Ground Truth.)
Figure 1. We aim for quality geometry with NeRF reconstruction
under inconsistency when viewing the same 3D point, for exam-
ple, with images captured under a dynamic light field (top row),
shown by the light spots on the truck cast from a torch moving
with the camera. The middle two rows compare the reconstructed
geometry from NeuS [41] (first column) and our method (second
column). As observed, our method produces more details and bet-
ter estimates the truck’s structure. The last row shows a novel view
rendering from both methods. Even though our method normal-
izes the view-dependence, it does not lose details on the synthe-
sized images thanks to the regularity induced by better geometry.
ometry, like depth scans.
Nevertheless, most improvements do not explicitly study
the shape-radiance ambiguity that could induce degener-
ated geometric reconstructions. This ambiguity persists as
long as some capacity is needed to account for the direc-
tional variations in the radiance, which is further amplified
if the ambient light field changes according to the observer’s
viewpoint. For example, the NeRF’s Multi-Layer Percep-
trons (MLPs) takes in a 3D location and a 2D direction vec-
tor and outputs the observed color of this point from this
specific viewing angle. If the MLP has sufficient capacity
to model the directional phenomenon, a perfect photomet-
ric reconstruction could be achieved even if the learned ge-
ometry is entirely wrong. In other words, wrong geometry
incurs more directional variations, which the MLP’s view-
dependent branch can still encode, thus, rendering the pho-
tometric reconstruction loss incapable of constraining the
solutions.
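The capacity being discussed sits in the directional branch of the radiance MLP; a stripped-down sketch (layer widths are arbitrary assumptions, positional encoding omitted) makes the knob explicit:

import torch
import torch.nn as nn

class ToyRadianceField(nn.Module):
    """Minimal NeRF-style MLP: density from position only, color also from view direction."""
    def __init__(self, width=256, dir_width=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(3, width), nn.ReLU(),
                                   nn.Linear(width, width), nn.ReLU())
        self.sigma = nn.Linear(width, 1)     # view-independent density
        # The size of this branch is the "directional capacity": too small and genuine
        # view-dependent effects cannot be fit; too large and wrong geometry can be
        # explained away (shape-radiance ambiguity).
        self.color = nn.Sequential(nn.Linear(width + 3, dir_width), nn.ReLU(),
                                   nn.Linear(dir_width, 3), nn.Sigmoid())

    def forward(self, x, d):
        # x: (N, 3) sample positions, d: (N, 3) unit viewing directions.
        h = self.trunk(x)
        return self.sigma(h), self.color(torch.cat([h, d], dim=-1))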
On the other hand, directional capacity is needed to pre-
vent the (view-dependent) photometric loss from distorting
the geometry (confirmed through our experiments). With
quality geometry as the central goal, we are now facing a
tradeoff between the capacity of the view-dependent radi-
ance function and the shape-radiance ambiguity. Namely,
the MLP’s capacity has to be increased to explain the direc-
tional variations, but if the capacity gets too large, shape-
radiance ambiguity will permit degenerated geometric re-
constructions. One can thus tune the capacity to achieve the
best geometry for each scene, but it is time-consuming and
infeasible since different scenes come with different levels
of view dependence.
Instead of tuning the directional capacity, we adjust the
view-dependence. More explicitly, we propose to normalize
the directional variations by encoding invariant features dis-
tilled from the same scene representation. By doing so, the
view-dependence of different scenes is aligned to the same
level so that a single optimal capacity can ensure good view
synthesis and prevent shape-radiance ambiguity from de-
grading the geometric estimation. Given the self-distillation
property, the proposed view-dependence normalization can
be jointly trained with the commonly used photometric re-
construction loss. It can also be easily plugged into any
method relying on volume rendering for geometric scene
reconstruction.
We demonstrate the effectiveness of the proposed view-
dependence normalization on several datasets. Especially
we verify that the level of view-dependence affects the min-
imality of the shape-radiance ambiguity with tunable direc-
tional capacity. And we show that our method effectively
aligns the optimal capacity for each scene. To evaluate how
our method works under a dynamically changing light field,
we propose a new benchmark where a light source is mov-
ing conditioned on the camera pose. We validate that the
geometry obtained from our method degrades gracefully
as the significance of the induced directional variations in-
creases. In summary, we make the following contributions:
• We perform a detailed study of the coupling between
the capacity of the view-dependent radiance function
and the shape-radiance ambiguity in the context of volume rendering.
• We propose a view-dependence normalization method
that effectively aligns the optimality of the direc-
tional function capacity under shape-radiance ambigu-
ity for each scene and achieves state-of-the-art geome-
try compared to various baselines.
• We quantitatively verify the robustness of our method
when the light field changes to further increase the di-
rectional variation, ensuring applicability to scenarios with unfavorable lighting conditions.
|
Zhao_Improved_Distribution_Matching_for_Dataset_Condensation_CVPR_2023
|
Abstract
Dataset Condensation aims to condense a large dataset
into a smaller one while maintaining its ability to train a
well-performing model, thus reducing the storage cost and
training effort in deep learning applications. However, con-
ventional dataset condensation methods are optimization-
oriented and condense the dataset by performing gradient
or parameter matching during model optimization, which is
computationally intensive even on small datasets and mod-
els. In this paper, we propose a novel dataset condensation
method based on distribution matching, which is more ef-
ficient and promising. Specifically, we identify two impor-
tant shortcomings of naive distribution matching ( i.e., im-
balanced feature numbers and unvalidated embeddings for
distance computation) and address them with three novel
techniques ( i.e., partitioning and expansion augmentation,
efficient and enriched model sampling, and class-aware dis-
tribution regularization). Our simple yet effective method
outperforms most previous optimization-oriented methods
with much fewer computational resources, thereby scal-
ing data condensation to larger datasets and models. Ex-
tensive experiments demonstrate the effectiveness of our
method. Codes are available at https://github.com/uitrbn/IDM
|
1. Introduction
Deep learning [23, 25, 57] is notoriously data-hungry,
which poses challenges for both its training and data stor-
age. To improve data storage efficiency, Dataset Conden-
sation (DC) [47, 56] aims to condense large datasets into
smaller ones while retaining their validity for model train-
ing. Unlike traditional coreset selection methods [38, 44,
49,50], such dataset condensation is often achieved through
image synthesis and yields better performance. If prop-
erly condensed, the resulting datasets not only consume less
storage space, but can also benefit various downstream tasks
such as network architecture search and continual learning by reducing their computational costs.
*Corresponding authors are Guanbin Li and Yizhou Yu.
Figure 1. Illustration of optimization-oriented methods and distribution matching methods. L: classification loss; g: gradient; O: output of models; L_matching: the matching loss for condensation.
The pioneering DC approach [56] guarantees the va-
lidity of a condensed dataset by imposing a strong assump-
tion that a model trained on it should be identical to that
trained on the real dataset. However, naive matching be-
tween these converged models can be too challenging due to
their large parameter space and long optimization towards
convergence. To this end, they impose an even stronger as-
sumption that the two models should share an identical or
similar optimization path, which can be achieved by match-
ing either their gradients [53,56] or their intermediate model
parameters [8] during training. We therefore refer to them
as optimization-oriented methods.
However, despite their success, the unique characteris-
tics of optimization-oriented DC methods imply that they
inevitably suffer from high computational costs and thus
scale poorly to large datasets and models. Specifically, they
all involve the optimization of models (either randomly-
initialized [53, 56] or initialized with parameters of pre-
trained models [8]) against the condensed dataset, and thus
rely on a nested loop that optimizes the condensed dataset
and model parameters in turn. Note that the use of pre-
trained models requires additional computation and storage
space [8]. As a result, existing optimization-oriented DC
methods are only applicable to “toy” networks ( e.g., three-
layer convolutional networks) and small datasets ( e.g., CI-
FAR10, CIFAR100). Whether they can be scaled to real-
world scenarios is still an open question.
To scale DC to large models and datasets, distribution
matching (DM) [54] proposes to match the output feature
distributions of the real and condensed datasets extracted
by randomly-initialized models. This stems from the fact
that the validity of a condensed dataset can also be guar-
anteed if it produces the same feature distribution as the
real dataset. Since DM does not involve the optimization of
models against the condensed dataset, it avoids the expen-
sive nested loops in optimization-oriented methods, and is
thus highly efficient and scalable. However, despite being
promising, experimental results show that its performance
still lags behind that of the state-of-the-art optimization-
oriented methods.
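For reference, the core of this distribution matching objective, when the MMD is taken with a linear kernel, reduces to matching the mean features of a real and a condensed batch under a randomly initialized extractor. The sketch below illustrates this reduced form; the toy extractor, batch shapes, and names are our assumptions, not the exact formulation of [54] or of this paper.

```python
import torch
import torch.nn as nn

def dm_loss(feat_extractor: nn.Module,
            real_images: torch.Tensor,
            syn_images: torch.Tensor) -> torch.Tensor:
    """Linear-kernel MMD estimate: squared distance between the mean
    embeddings of a real batch and a synthetic batch of the same class."""
    f_real = feat_extractor(real_images).mean(dim=0)   # [D]
    f_syn = feat_extractor(syn_images).mean(dim=0)     # [D]
    return ((f_real - f_syn) ** 2).sum()

# Illustrative usage with a randomly initialized extractor.
extractor = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
real = torch.rand(64, 3, 32, 32)
syn = torch.rand(10, 3, 32, 32, requires_grad=True)    # condensed images being learned
loss = dm_loss(extractor, real, syn)
loss.backward()                                        # gradients flow into the synthetic images
```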
In this paper, we perform an in-depth analysis on DM’s
unsatisfactory performance and propose a set of remedies
that can significantly improve it, namely improved distribu-
tion matching (IDM). Specifically, we analyze the output feature distributions of DM and observe that although their
means match, the features of the condensed dataset scatter
around, causing severe class misalignment problems, which
accounts for its impaired performance. We ascribe such
scattered features to two shortcomings of DM as follows:
i) DM suffers from the imbalanced number of features. In-
tuitively, DM uses the features of small condensed datasets
to match those of large real datasets, which is inherently
intractable. Addressing this shortcoming, we propose Par-
titioning and Expansion augmentation , which augments the
condensed dataset by evenly splitting each image into l×l
parts and expanding each part to the size of the original im-
age, resulting in l2features per image and a better match to
the features of the real dataset.
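A minimal sketch of this partition-and-expand augmentation is shown below; the choice of l, the interpolation mode, and the function name are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def partition_and_expand(images: torch.Tensor, l: int = 2) -> torch.Tensor:
    """Split each image into an l x l grid and resize every part back to the
    original resolution, so one condensed image yields l**2 augmented views
    (and hence l**2 features once passed through the extractor)."""
    b, c, h, w = images.shape
    ph, pw = h // l, w // l
    parts = []
    for i in range(l):
        for j in range(l):
            crop = images[:, :, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            parts.append(F.interpolate(crop, size=(h, w),
                                       mode="bilinear", align_corners=False))
    return torch.cat(parts, dim=0)   # [b * l**2, c, h, w]

x = torch.rand(10, 3, 32, 32)
print(partition_and_expand(x, l=2).shape)   # torch.Size([40, 3, 32, 32])
```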
ii) Randomly initialized models are not valid embedding
functions for the Maximum Mean Discrepancy (MMD) [18]
estimation used in DM. Specifically, DM justifies the valid-
ity of randomly-initialized models by their intrinsic clas-
sification power observed in tasks such as deep cluster-
ing [3,5,6,36]. However, we believe that this does not apply
to DM as randomly-initialized models do not satisfy the re-
quirement of embedding functions used in MMD and make
it an invalid measure of distribution distance. Since it is
too challenging to design neural network based embedding
functions that are valid for MMD, we propose two simple
yet effective remedies: 1) Efficient and enriched model sam-
pling . We enrich the embedding functions in MMD with
semi-trained models as additional feature extractors, and
develop a memory-efficient model queue to facilitate their
sampling. 2) Class-aware distribution regularization . We
explicitly regularize the feature distributions of condensed
datasets to further alleviate class misalignment.
These three novel techniques together help to extract better
feature distributions for DM. Our contributions include:
• We deeply analyze the shortcomings of the Distribution Matching [54] algorithm and reveal that the root of its impaired performance lies in the problem of class mis-
alignment.
• We propose improved distribution matching (IDM),
consisting of three novel techniques that address the
shortcomings of DM and help to learn better feature
distributions.
• Experimental results show that our IDM achieves sig-
nificant improvement over DM and surpasses the per-
formance of most optimization-oriented methods.
• We show that our IDM method is highly efficient and
scalable, and can be applied to large datasets such as
ImageNet Subset [10, 43].
|
Zuo_Natural_Language-Assisted_Sign_Language_Recognition_CVPR_2023
|
Abstract
Sign languages are visual languages which convey in-
formation by signers’ handshape, facial expression, body
movement, and so forth. Due to the inherent restric-
tion of combinations of these visual ingredients, there ex-
ist a significant number of visually indistinguishable signs
(VISigns) in sign languages, which limits the recognition
capacity of vision neural networks. To mitigate the problem,
we propose the Natural Language-Assisted Sign Language
Recognition (NLA-SLR) framework, which exploits seman-
tic information contained in glosses (sign labels). First,
for VISigns with similar semantic meanings, we propose
language-aware label smoothing by generating soft labels
for each training sign whose smoothing weights are com-
puted from the normalized semantic similarities among the
glosses to ease training. Second, for VISigns with distinct
semantic meanings, we present an inter-modality mixup
technique which blends vision and gloss features to further
maximize the separability of different signs under the super-
vision of blended labels. Besides, we also introduce a novel
backbone, video-keypoint network, which not only models
both RGB videos and human body keypoints but also de-
rives knowledge from sign videos of different temporal re-
ceptive fields. Empirically, our method achieves state-of-
the-art performance on three widely-adopted benchmarks:
MSASL, WLASL, and NMFs-CSL. Codes are available at
https://github.com/FangyunWei/SLRT.
|
1. Introduction
Sign languages are the primary languages for communi-
cation among deaf communities. On the one hand, sign lan-
guages have their own linguistic properties as most natural
languages [1,52,64]. On the other hand, sign languages are
visual languages that convey information by the movements
of the hands, body, head, mouth, and eyes, making them
completely separate and distinct from natural languages
[6,69,71]. This work is dedicated to sign language recognition
(SLR), which requires models to classify the isolated signs
from videos into a set of glosses1.
†Corresponding author.
Figure 1. Vision neural networks are demonstrated to be less effective at recognizing visually indistinguishable signs (VISigns) [2, 26, 34]. (a) VISigns may have similar semantic meanings (glosses: "Cold" vs. "Winter"). (b) VISigns may have distinct semantic meanings (glosses: "Table" vs. "Afternoon"). We observe that VISigns may have similar or distinct semantic meanings, inspiring us to leverage this characteristic to facilitate sign language recognition as illustrated in Figure 2.
Despite its fundamen-
tal capacity of recognizing signs, SLR has a broad range of
applications including sign spotting [36, 42, 58], sign video
retrieval [8, 11], sign language translation [6, 35, 54], and
continuous sign language recognition [1, 6].
Since the lexical items of sign languages are defined
by the handshape, facial expression, and movement, the
combinations of these visual ingredients are restricted in-
herently, yielding plenty of visually indistinguishable signs
termed VISigns. VISigns are those signs with similar hand-
shape and motion but varied semantic meanings. We show
two examples (“Cold” vs. “Winter” and “Table” vs. “Af-
ternoon”) in Figure 1. Unfortunately, it has been demon-
strated that vision neural networks are less effective at
accurately recognizing VISigns [2, 26, 34].
1Gloss is a unique label for a single sign. Each gloss is identified by a word which is associated with the sign's semantic meaning.
Figure 2. We incorporate natural language modeling into sign language recognition to promote recognition capacity. (a) Language-aware label smoothing generates a soft label for each training video, whose smoothing weights are the normalized semantic similarities of the ground truth gloss and the remaining glosses within the sign language vocabulary. (b) Inter-modality mixup yields the blended features (denoted by orange rectangles) with the corresponding mixed labels to maximize the separability of signs in a latent space.
Due to the
intrinsic connections between sign languages and natural
languages, the glosses, i.e., labels of signs, are seman-
tically meaningful in contrast to the one-hot labels used
in traditional classification tasks [27, 51]. Thus, although
it is challenging to classify the VISigns from the vision
perspective, their glosses provide serviceable semantics,
which is, however, less taken into consideration in previ-
ous works [18–20,23,24,26,34,36]. Our work is built upon
the following two findings.
Finding-1: VISigns may have similar semantic meanings
(Figure 1a). Due to the observation that VISigns may have
higher visual similarities, assigning hard labels to them may
hinder the training since it is challenging for vision neu-
ral networks to distinguish each VISign apart. A straight-
forward way to ease the training is to replace the hard la-
bels with soft ones as in well-established label smooth-
ing [15, 56]. However, how to generate proper soft labels
is non-trivial. The vanilla label smoothing [15, 56] assigns
equal smoothing weights to all negative terms, which ig-
nores the semantic information contained in labels. In light
of the finding-1 that VISigns may have similar semantic
meanings and the intrinsic connections between sign lan-
guages and natural languages, we consider the semantic
similarities among the glosses when generating soft labels.
Concretely, for each training video, we adopt an off-the-
shelf word representation framework, i.e., fastText [39], to
pre-compute the semantic similarities of its gloss and the re-
maining glosses within the sign language vocabulary. Then
we can properly generate a soft label for each training sam-
ple whose smoothing weights are the normalized semantic
similarities. In this way, negative terms with similar seman-
tic meanings to the ground truth gloss are assigned higher
values in the soft label. As shown in Figure 2a, we term this
process as language-aware label smoothing, which injects
prior knowledge into the training.
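The following sketch illustrates one way such language-aware soft labels can be built from precomputed gloss word vectors (e.g., fastText embeddings); the softmax normalization, the smoothing strength eps, the temperature tau, and all names are our assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def language_aware_soft_labels(gloss_emb: torch.Tensor,
                               target: torch.Tensor,
                               eps: float = 0.2,
                               tau: float = 0.1) -> torch.Tensor:
    """Build soft labels whose smoothing weights come from the semantic
    similarity between the ground-truth gloss and every other gloss.
    gloss_emb: [V, D] word vectors (e.g., precomputed with fastText);
    target: [B] ground-truth gloss indices. eps/tau are assumed knobs."""
    g = F.normalize(gloss_emb, dim=-1)                         # [V, D]
    sim = g[target] @ g.t()                                    # [B, V] cosine similarities
    sim.scatter_(1, target.unsqueeze(1), float("-inf"))        # exclude the GT term itself
    weights = torch.softmax(sim / tau, dim=1)                  # normalized similarities
    soft = eps * weights
    soft.scatter_(1, target.unsqueeze(1), 1.0 - eps)           # GT keeps most of the mass
    return soft                                                # rows sum to 1

emb = torch.randn(1000, 300)        # toy vocabulary of 1000 glosses
y = torch.tensor([3, 42])
print(language_aware_soft_labels(emb, y).sum(dim=1))  # ~tensor([1., 1.])
```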
Finding-2: VISigns may have distinct semantic mean-
ings (Figure 1b). Although the VISigns are challenging
to classify from the vision perspective, the semantic
meanings of their glosses may be distinguishable according
to finding-2. This inspires us to combine the vision fea-
imizing signs’ separability in a latent space. Specifically,
given a sign video, we first leverage our proposed backbone
to encode its vision feature and the well-established fast-
Text [39] to extract the feature of each gloss within the sign
language vocabulary. Then we independently integrate the
vision feature and each gloss feature to produce a blended
representation, which is further fed into a classifier to ap-
proximate its mixed label. We refer to this procedure as
inter-modality mixup as shown in Figure 2b. We empir-
ically find that our inter-modality mixup significantly en-
hances the model’s discriminative power.
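As a rough illustration of the inter-modality mixup described above, the sketch below blends one video's vision feature with every gloss feature and supervises a classifier with correspondingly mixed labels; the additive blending weight, the label construction, and the toy classifier are assumptions on our part, not the paper's exact design.

```python
import torch
import torch.nn as nn

V, D = 1000, 512                      # vocabulary size and feature dim (assumed)
classifier = nn.Linear(D, V)

def inter_modality_mixup(vision_feat, gloss_feats, target, lam=0.5):
    """Blend one video's vision feature with every gloss feature and build the
    corresponding mixed labels: lam * one_hot(target) + (1 - lam) * one_hot(j).
    vision_feat: [D]; gloss_feats: [V, D]; target: gloss index of the video."""
    blended = lam * vision_feat.unsqueeze(0) + (1.0 - lam) * gloss_feats   # [V, D]
    logits = classifier(blended)                                           # [V, V]
    mixed = (1.0 - lam) * torch.eye(V)
    mixed[:, target] += lam                                                # [V, V] soft targets
    log_prob = torch.log_softmax(logits, dim=-1)
    return -(mixed * log_prob).sum(dim=-1).mean()                          # soft cross-entropy

loss = inter_modality_mixup(torch.randn(D), torch.randn(V, D), target=7)
loss.backward()
```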
Our contributions can be summarized as follows:
• We are the first to incorporate natural language mod-
eling into sign language recognition based on the dis-
covery of VISigns. Language-aware label smoothing
and inter-modality mixup are proposed to take full ad-
vantage of the linguistic properties of VISigns and se-
mantic information contained in glosses.
• We take into account the unique characteristic of sign
languages and present a novel backbone named video-
keypoint network (VKNet), which not only models
both RGB videos and human keypoints, but also de-
rives knowledge from sign videos of various temporal
receptive fields.
• Our method, termed natural language-assisted sign
language recognition (NLA-SLR), achieves state-of-
the-art performance on the widely-used SLR datasets
including MSASL [26], WLASL [34], and NMFs-
CSL [20].
|
Zhou_ZegCLIP_Towards_Adapting_CLIP_for_Zero-Shot_Semantic_Segmentation_CVPR_2023
|
Abstract
Recently, CLIP has been applied to pixel-level zero-shot learning tasks via a two-stage scheme. The general idea is to first generate class-agnostic region proposals and then feed the cropped proposal regions to CLIP to utilize its image-level zero-shot classification capability. While effective, such a scheme requires two image encoders, one for proposal generation and one for CLIP, leading to a complicated pipeline and high computational cost. In this work, we pursue a simpler-and-efficient one-stage solution that directly extends CLIP's zero-shot prediction capability from image to pixel level. Our investigation starts with a straightforward extension as our baseline that generates semantic masks by comparing the similarity between text and patch embeddings extracted from CLIP. However, such a paradigm could heavily overfit the seen classes and fail to generalize to unseen classes. To handle this issue, we propose three simple-but-effective designs and figure out that they can significantly retain the inherent zero-shot capacity of CLIP and improve pixel-level generalization ability. Incorporating those modifications leads to an efficient zero-shot semantic segmentation system called ZegCLIP. Through extensive experiments on three public benchmarks, ZegCLIP demonstrates superior performance, outperforming the state-of-the-art methods by a large margin under both "inductive" and "transductive" zero-shot settings. In addition, compared with the two-stage method, our one-stage ZegCLIP is about 5 times faster during inference. We release the code at https://github.com/ZiqinZhou66/ZegCLIP.git.
|
1. Introduction
Semantic segmentation is one of the fundamental tasks in the computer vision field, which aims to predict the category of each pixel of an image [7, 15, 31, 41]. Extensive works have been proposed [8, 26, 30], e.g., Fully Convolutional Networks [29], U-net [38], the DeepLab family [4-6] and more recently Vision Transformer based methods [16, 53, 58].
*Corresponding author.
Figure 1. Quantitative improvements achieved by our proposed designs on the VOC dataset. (1)(2) represent our one-stage baseline model in different versions (fixed or fine-tuned CLIP image encoder), while (3)-(5) show the effectiveness of applying our proposed designs, i.e., Deep Prompt Tuning (DPT), Non-mutually Exclusive Loss (NEL), and Relationship Descriptor (RD), on the baseline model step by step. We highlight that our designs can dramatically increase the segmentation performance on unseen classes.
However, the success of deep semantic segmentation models heavily relies on the availability of a large amount of annotated training images, which involves a substantial amount of labor. This gives rise to a surging interest in low-supervision-based semantic segmentation approaches, including semi-supervised [7], weakly-supervised [48], few-shot [46], and zero-shot semantic segmentation [3, 34, 44].
Among them, zero-shot semantic segmentation is particularly challenging and attractive since it is required to directly produce the semantic segmentation results based on the semantic description of a given class. Recently, the pre-trained vision-language model CLIP [36] has been adopted for various dense prediction tasks, such as referring segmentation [43], semantic segmentation [33], and detection [14]. It also offers a new paradigm and has made a breakthrough for zero-shot semantic segmentation. Initially built for matching text and images, CLIP has demonstrated a remarkable capability for image-level zero-shot classification.
However, zsseg [49] and ZegFormer [12] follow the common strategy that needs two-stage processing: first generating region proposals and then feeding the cropped regions to CLIP for zero-shot classification. Such a strategy requires two image encoding processes, one for generating proposals and one for encoding each proposal via CLIP.
Table 1. Differences between our approach and related zero-shot semantic segmentation methods based on CLIP.
Methods | Need an extra image encoder? | CLIP as an image-level classifier? | Can do inductive?
zsseg [49] | Yes | Yes | Yes
ZegFormer [12] | Yes | Yes | Yes
MaskCLIP+ [56] | Yes | No | No
ZegCLIP (Ours) | No | No | Yes
This design creates additional computational overhead and
cannot leverage the knowledge of the CLIP encoder at the proposal generation stage. Besides, MaskCLIP+ [56] utilizes CLIP to generate pseudo labels of novel classes for self-training but becomes invalid if the unseen class names at inference are unknown in the training stage (the "inductive" zero-shot setting).
This paper pursues simplifying the pipeline by directly
extending the zero-shot capability of CLIP from image-
level to pixel-level. The basic idea of our method is straightforward: we use a lightweight decoder to match the text prompts against the local embeddings extracted from CLIP, which can be achieved via the self-attention mechanism in a transformer-based structure. We train the vanilla decoder and fix or fine-tune the CLIP image encoder on a dataset containing pixel-level annotations from a limited number of classes, expecting the text-patch matching capability to generalize to unseen classes. Unfortunately, this basic version tends to overfit the training set: while the segmentation results for seen classes generally improve, the model fails to produce reasonable segments on unseen classes. Surprisingly, we discover that such an overfitting issue can be dramatically alleviated by incorporating three modified design choices; we report the quantitative improvements in Fig. 1.
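For reference, the simplest form of this text-patch matching baseline can be sketched as below, where per-pixel class logits are obtained from the cosine similarity between CLIP patch embeddings and class text embeddings; the shapes, the temperature, and the absence of the lightweight decoder and of Designs 1-3 are our simplifications.

```python
import torch
import torch.nn.functional as F

def patch_text_masks(patch_emb: torch.Tensor,
                     text_emb: torch.Tensor,
                     hw: tuple, out_size: tuple, tau: float = 0.07):
    """patch_emb: [B, N, D] CLIP patch embeddings (N = h*w patches);
    text_emb: [C, D] CLIP text embeddings of the class prompts.
    Returns per-pixel class logits [B, C, H, W] from cosine similarity."""
    p = F.normalize(patch_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = torch.einsum("bnd,cd->bcn", p, t) / tau          # [B, C, N]
    h, w = hw
    logits = logits.view(logits.size(0), logits.size(1), h, w)
    return F.interpolate(logits, size=out_size, mode="bilinear", align_corners=False)

masks = patch_text_masks(torch.randn(2, 32 * 32, 512), torch.randn(21, 512),
                         hw=(32, 32), out_size=(512, 512))
print(masks.shape)   # torch.Size([2, 21, 512, 512])
```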
The following highlights our key discoveries:
Design 1: Using Deep Prompt Tuning (DPT) instead of
fine-tuning or fixing the CLIP image encoder. We find that fine-tuning could lead to overfitting to seen classes, while prompt tuning tends to retain the inherent zero-shot capacity of CLIP.
Design 2: Applying a Non-mutually Exclusive Loss (NEL) function that performs pixel-level classification but generates the posterior probability of one class independently of the logits of other classes.
Design 3: Most importantly, and our major innovation: introducing a Relationship Descriptor (RD) to incorporate the image-level prior into the text embeddings before matching text and patch embeddings from CLIP, which can significantly prevent the model from overfitting to the seen classes.
By incorporating those three designs into our one-stage
baseline, we create a simple-but-effective zero-shot semantic segmentation model named ZegCLIP. Tab. 1 summarizes the differences between our proposed method and existing approaches based on CLIP. More details can be found in the Appendix. We conduct extensive experiments on three public datasets and show that our method outperforms the state-of-the-art methods by a large margin in both the "inductive" and "transductive" settings.
|
Zhao_High-Frequency_Stereo_Matching_Network_CVPR_2023
|
Abstract
In the field of binocular stereo matching, remarkable
progress has been made by iterative methods like RAFT-
Stereo and CREStereo. However, most of these methods
lose information during the iterative process, making it
difficult to generate more detailed difference maps that take
full advantage of high-frequency information. We propose
the Decouple module to alleviate the problem of data
coupling and allow features containing subtle details to
transfer across the iterations, which our ablations show alleviates this problem significantly. To further capture
high-frequency details, we propose a Normalization Refine-
ment module that unifies disparities as a proportion of the image width, which addresses
the problem of module failure in cross-domain scenarios.
Further, with the above improvements, the ResNet-like
feature extractor that has not been changed for years
becomes a bottleneck. Towards this end, we propose a
multi-scale and multi-stage feature extractor that intro-
duces the channel-wise self-attention mechanism which
greatly addresses this bottleneck. Our method (DLNR)
ranks 1st on the Middlebury leaderboard, significantly
outperforming the next best method by 13.04%. Our
method also achieves SOTA performance on the KITTI-
2015 benchmark for D1-fg. Code and demos are available
at: https://github.com/David-Zhao-1997/High-frequency-Stereo-Matching-Network.
†These authors contributed equally.
⋆Corresponding author.
Email: [email protected].
⋆⋆Second Corresponding author.
Email: [email protected]
|
1. Introduction
Figure 1. Motivation. We aim to address the problems of blurry edges, missing thin objects, and textureless-region mismatches.
Stereo depth estimation is becoming the infrastructure
for 3D applications. Accurate depth perception is vital for
autonomous driving, drone navigation, robotics and other
related fields. The main point of the task is to estimate a
pixel-wise displacement map also known as disparity that
can be used to determine the depth of the pixels in the scene.
Traditional stereo matching algorithms [ 7,11,12] are mainly
divided into two types: global methods [ 6,16,17,26] and
local methods [ 1,13]. Both methods solve the optimiza-
tion problem by minimizing the objective function contain-
ing the data and smoothing terms; the former takes global information into account, whereas the latter only considers local information, so each has its own benefits in terms of accuracy and speed when solving the optimization problem. Traditional methods have ex-
cellent generalization performance and robustness in differ-
ent scenarios, but perform poorly on details such as weak
textures and repetitive texture regions. With the develop-
ment of convolutional neural networks, learning-based ap-
proaches [20, 28, 37] have lately demonstrated promising results in tackling the matching problem in challenging regions. Taking advantage of the strong regularization performance of 3D convolutions and 4D cost volumes, methods [2, 10, 15, 45] using 3D convolution perform well.
However, their practical applicability is limited by the high
computational cost. Subsequent methods [ 37,43] attempt to
use multiple adaptive aggregated and guided aggregated 2D
convolutions instead of 3D convolution, reducing computa-
tional cost and achieving better performance. The recent ap-
pearance of RAFT-Stereo [ 20] has given rise to a fresh con-
cept for the research of stereo matching. Derived from the
optical flow estimation method RAFT [29], RAFT-Stereo uses
the iterative refinement method for a coarse-to-fine pipeline.
It first calculates the correlation between all pixel pairs to
construct a 3D correlation pyramid. Then an update oper-
ator with a convolutional GRU as the core unit is used to
retrieve features from the correlation pyramid and update the disparity map [20].
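For reference, the all-pairs correlation that such iterative stereo methods build can be sketched as follows for a rectified pair, where each left-image pixel is correlated with every pixel in the same row of the right image and the volume is pooled into a pyramid; the shapes and the number of pyramid levels are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def stereo_correlation_pyramid(fmap_left, fmap_right, num_levels=4):
    """fmap_*: [B, C, H, W] features of a rectified stereo pair. Returns a list
    of correlation volumes; level 0 is [B, H, W, W] (each left pixel against
    every candidate in the same row of the right image), and coarser levels
    are average-pooled along the candidate axis."""
    b, c, h, w = fmap_left.shape
    corr = torch.einsum("bchw,bchx->bhwx", fmap_left, fmap_right) / c ** 0.5
    pyramid = [corr]
    vol = corr.reshape(b * h * w, 1, 1, w)          # pool over the candidate dim only
    for _ in range(num_levels - 1):
        vol = F.avg_pool2d(vol, kernel_size=(1, 2), stride=(1, 2))
        pyramid.append(vol.reshape(b, h, w, -1))
    return pyramid

pyr = stereo_correlation_pyramid(torch.randn(1, 64, 32, 64), torch.randn(1, 64, 32, 64))
print([p.shape[-1] for p in pyr])   # [64, 32, 16, 8]
```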
Although great progress has been made in learning-based
approaches, two major problems remain. (1) Most current
approaches fall short when it comes to the finer features of
the estimated disparity map, especially the edge performance of objects. In bokeh and rendering applications,
the edge performance of the disparity map is critical to the
final result. For example, technologies that require pixel-
level rendering, such as VR and AR, have high requirements
for fitting between the scene model and the image mapping,
which means we need a tight fit between the edges in the
disparity map and the original RGB image. (2) Mismatches in textureless regions and missing thin objects are also important factors that significantly deteriorate the disparity map. For example, mismatched weakly textured walls and missing thin electrical wires are fatal flaws
for obstacle avoidance applications.
To alleviate these problems, we propose DLNR (Stereo
Matching Network with Decouple LSTM and Normaliza-
tion Refinement), a new end-to-end data-driven method for
stereo matching.
We introduce several improvements based on the itera-
tive model:
Most of the current iterative methods usually apply the
original GRU structure as their iterative cell. The problem is that, in the original GRU structure, the information used to generate the update matrix of the disparity map is coupled with the value of the hidden state transferred be-
tween iterations, making it hard to keep subtle details in the
hidden state. Therefore, we designed the Decouple LSTM
module to decouple the hidden state from the update ma-
trix of the disparity map. Experiments and visualizations
prove that the module retains more subtle details in the hidden states.
Decouple LSTM keeps more high-frequency informa-
tion in the iterative stage through data decoupling. However, to balance performance and computational speed, the resolution of the iterative stage is at most 1/4 of the original resolution. To produce disparity maps
with sharp edges and subtle details, a subsequent refine-
ment module is still needed. In our refinement module, we
aim to sufficiently exploit the information from the upsam-
pled disparity maps and the original left and right images, which contain high-frequency information, to enhance edges and
details. However, due to the large differences in disparity
ranges between different images and different datasets, the
Refinement module often has poor generalization perfor-
mance when encountering images with different disparity
ranges. In particular, when fine-tuning, the mod-
ule may even fail when encountering disparity ranges that
differ greatly. To address this problem, we propose the Dis-
parity Normalization strategy. Experiments and visualiza-
tions prove that the module improves performance and alleviates the problem of domain differences.
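The normalization itself is simple to state; the sketch below only captures the scale-invariance idea of expressing disparities as a fraction of the image width (where exactly it is applied inside the refinement module is a design choice of the paper and is not shown here).

```python
import torch

def normalize_disparity(disp: torch.Tensor, image_width: int) -> torch.Tensor:
    """Express disparities as a fraction of the image width so that images and
    datasets with very different disparity ranges live on a comparable scale."""
    return disp / float(image_width)

def denormalize_disparity(disp_norm: torch.Tensor, image_width: int) -> torch.Tensor:
    """Map refined, normalized disparities back to pixel units."""
    return disp_norm * float(image_width)

d = torch.rand(1, 1, 480, 640) * 192                 # disparities in pixels
d_norm = normalize_disparity(d, image_width=640)     # now roughly in [0, 0.3]
assert torch.allclose(denormalize_disparity(d_norm, 640), d)
```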
After the above two improvements, we found that the
feature extractor became the performance bottleneck.
In the field of stereo matching, feature extraction has not
been improved significantly for years; most learning-based
methods still use ResNet-like feature extractors which fall
short when providing information for well-designed post-
stage structures. To alleviate the problem, we propose the
Channel-Attention Transformer feature extractor, which aims to
capture long-range pixel dependencies and preserve high-
frequency information.
|
Zhu_I2-SDF_Intrinsic_Indoor_Scene_Reconstruction_and_Editing_via_Raytracing_in_CVPR_2023
|
Abstract
In this work, we present I2-SDF , a new method for in-
trinsic indoor scene reconstruction and editing using dif-
ferentiable Monte Carlo raytracing on neural signed dis-
tance fields (SDFs). Our holistic neural SDF-based frame-
work jointly recovers the underlying shapes, incident ra-
diance and materials from multi-view images. We intro-
duce a novel bubble loss for fine-grained small objects and
error-guided adaptive sampling scheme to largely improve
the reconstruction quality on large-scale indoor scenes.
Further, we propose to decompose the neural radiance
field into spatially-varying material of the scene as a neu-
ral field through surface-based, differentiable Monte Carlo
raytracing and emitter semantic segmentations, which en-
ables physically based and photorealistic scene relighting
and editing applications. Through a number of qualita-
tive and quantitative experiments, we demonstrate the su-
perior quality of our method on indoor scene reconstruc-
tion, novel view synthesis, and scene editing compared to
state-of-the-art baselines. Our project page is at https:
//jingsenzhu.github.io/i2-sdf .
|
1. Introduction
Reconstructing 3D scenes from multi-view images is a
fundamental task in computer graphics and vision. Neu-
ral Radiance Field (NeRF) [16] and its follow-up research
leverage multi-layer perceptions (MLPs) as implicit func-
tions, taking as input the positional and directional coordi-
nates, to approximate the underlying geometry and appear-
ance of a 3D scene. Such methods have shown compelling
and high-fidelity results in novel view synthesis. However,
we argue that novel view synthesis itself is insufficient for
scene editing such as inserting virtual objects, relighting
and editing surface materials with global illumination.
*Corresponding author.
Figure 1. I2-SDF. Left: A state-of-the-art neural implicit surface representation method [42] fails to reconstruct small objects inside an indoor scene (e.g., lamps and chandeliers), which is resolved by our bubbling method. Middle and Right: Our intrinsic decomposition and raytracing method enable photo-realistic scene editing and relighting applications.
On the other hand, inverse rendering or intrinsic de-
composition , which reconstructs and decomposes the scene
into shape, shading and surface reflectance from single or
multiple images, enables photorealistic scene editing pos-
sibilities. It is a long-term challenge especially for large-
scale indoor scenes because they typically exhibit complex
geometry and spatially-varying global illumination appear-
ance. As intrinsic decomposition is an extremely ill-posed
task, a physically-based shading model will crucially af-
fect the decomposition quality. Existing neural rendering
methods [2, 18, 43, 46] rely on simple rendering algorithms
(such as pre-filtered shading) for the decomposition and use
a global lighting representation (e.g., spherical Gaussians).
Although these methods have demonstrated the effective-
ness on object-level inverse rendering, they are inapplicable
to complex indoor scenes. Moreover, indoor scene images
are usually captured from the inside out and most lighting
information has already presented inside the room. As a re-
sult, the reconstructed radiance field already provides suffi-
cient lighting information without the need of active, exter-
nal capture lighting setup.
To tackle the above challenges, we propose I2-SDF , a
new method to decompose a 3D scene into its underlying
shape, material, and incident radiance components using
implicit neural representations. We design a robust two-
stage training scheme that first reconstructs a neural SDF
with radiance field, and then conducts raytracing in the SDF
to decompose the radiance field into material and emission
fields. As complex indoor scenes typically contain many
fine-grained, thin or small structures with high-frequency
details that are difficult for an implicit SDF function to fit,
we propose a novel bubble loss and an error-guided adap-
tive sampling scheme that greatly improve the reconstruc-
tion quality on small objects in the scene. As a result,
our approach achieves higher reconstruction quality in both
geometry and novel view synthesis, outperforming previ-
ous state-of-the-art neural rendering methods in complex
indoor scenes. Further, we present an efficient intrinsic de-
composition method that decomposes the radiance field into
spatially-varying material and emission fields using surface-
based, differentiable Monte Carlo raytracing, enabling var-
ious scene editing applications.
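As background for the surface-based raytracing used in the decomposition stage, the sketch below shows a generic sphere-tracing routine for finding ray-surface intersections on a signed distance field; it is a standard building block and not the authors' actual differentiable Monte Carlo renderer.

```python
import torch
import torch.nn.functional as F

def sphere_trace(sdf, origins, dirs, num_steps=64, eps=1e-3, far=10.0):
    """Generic sphere tracing on a signed distance field.
    sdf: callable mapping [N, 3] points to [N] signed distances;
    origins, dirs: [N, 3] ray origins and unit directions.
    Returns candidate hit points [N, 3] and a boolean hit mask [N]."""
    t = torch.zeros(origins.shape[0], device=origins.device)
    for _ in range(num_steps):
        pts = origins + t.unsqueeze(-1) * dirs
        t = t + sdf(pts)               # safe step: march by the distance bound
    pts = origins + t.unsqueeze(-1) * dirs
    hit = (sdf(pts).abs() < eps) & (t < far)
    return pts, hit

# Toy SDF of a unit sphere at the origin.
unit_sphere = lambda p: p.norm(dim=-1) - 1.0
o = torch.tensor([[0.0, 0.0, -3.0]]).repeat(4, 1)
d = F.normalize(torch.randn(4, 3) * 0.05 + torch.tensor([0.0, 0.0, 1.0]), dim=-1)
print(sphere_trace(unit_sphere, o, d))
```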
In summary, our contributions include:
• We introduce I2-SDF1, a holistic neural SDF-based
framework for complex indoor scenes that jointly re-
covers the underlying shape, radiance, and material
fields from multi-view images.
• We propose a novel bubble loss and error-guided adap-
tive sampling strategy to effectively reconstruct fine-
grained small objects inside the scene.
• We are the first that introduce Monte Carlo raytracing
technique in scene-level neural SDF to enable photo-
realistic indoor scene relighting and editing.
• We provide a high-quality synthetic indoor scene
multi-view dataset, with ground truth camera pose and
geometry annotations.
|
Zheng_NeuralPCI_Spatio-Temporal_Neural_Field_for_3D_Point_Cloud_Multi-Frame_Non-Linear_CVPR_2023
|
Abstract
In recent years, there has been a significant increase in
focus on the interpolation task of computer vision. Despite
the tremendous advancement of video interpolation, point
cloud interpolation remains insufficiently explored. Mean-
while, the existence of numerous nonlinear large motions
in real-world scenarios makes the point cloud interpolation
task more challenging. In light of these issues, we present
NeuralPCI : an end-to-end 4D spatio-temporal Neural field
for 3D PointCloud Interpolation, which implicitly inte-
grates multi-frame information to handle nonlinear large
motions for both indoor and outdoor scenarios. Further-
more, we construct a new multi-frame point cloud interpo-
lation dataset called NL-Drive for large nonlinear motions
in autonomous driving scenes to better demonstrate the su-
periority of our method. Ultimately, NeuralPCI achieves
state-of-the-art performance on both DHB (Dynamic Hu-
man Bodies) and NL-Drive datasets. Beyond the interpola-
tion task, our method can be naturally extended to point
cloud extrapolation, morphing, and auto-labeling, which
indicates its substantial potential in other domains. Codes
are available at https://github.com/ispc-lab/NeuralPCI.
|
1. Introduction
In the field of computer vision, sequential point clouds
are frequently utilized in many applications, such as VR/AR
techniques [11, 38, 49] and autonomous driving [4, 32, 46].
The relatively low frequency of LiDAR compared to other
sensors, i.e., 10–20 Hz, impedes the exploration of high tem-
poral resolution point clouds [47]. Therefore, interpolation
tasks for point cloud sequences, which have not been sub-
stantially investigated, are receiving increasing attention.
With the similar goal of obtaining a smooth sequence
with high temporal resolution, we can draw inspiration from
the video frame interpolation (VFI) task. Several VFI meth-
ods [6,8,20,35,45,48] concentrate on nonlinear movements
in the real world. They take multiple frames as input and
generate explicit multi-frame fusion results based on flow estimation [6, 8, 20, 45, 48] or transformer [35]. Nonetheless, due to the unique structure of point clouds [31], it is non-trivial to extend VFI methods to the 3D domain.
∗Equal contribution. †Corresponding author.
Figure 1. Common cases of nonlinear motions in autonomous driving scenarios. Spatially uniform linear interpolation using the middle two frames of the point cloud differs significantly from the actual situation, so it is necessary to take multiple point clouds into consideration for nonlinear interpolation.
Some early works [18,19] rely on stereo images to gener-
ate pseudo-LiDAR point cloud interpolation. For pure point
cloud input, previous methods [22,47] take two consecutive
frames as input and output the point cloud at a given inter-
mediate moment. However, with only two input frames,
these approaches can only produce linear interpolation re-
sults [22], or perform nonlinear compensation by fusing in-
put frames in the feature dimension linearly [47], which is
inherently a data-driven approach to learning the dataset-
specified distribution of nonlinear motions rather than an
actual nonlinear interpolation. Only when the frame rate of
the input point cloud sequence is high enough or the object
motion is small enough, can the two adjacent point clouds
satisfy the linear motion assumption. Nonetheless, there
are numerous nonlinear motions in real-world cases. For
instance, as illustrated in Fig. 1, the result of linear interpo-
lation between two adjacent point cloud frames has a large
deviation from the actual situation. A point cloud sequence
rather than just two point cloud frames allows us to view
further into the past and future. Neighboring multiple point
clouds contain additional spatial-temporal cues, namely dif-
ferent perspectives, complementary geometry, and implicit
high-order motion information. Therefore, it is time to re-
think the point cloud interpolation task with an expanded
design space, which remains an open challenge.
Methods that explicitly fuse multiple point cloud frames
generally just approximate the motion model over time,
which actually simplifies real-world complex motion. The
neural field provides a more elegant way to parameterize
the continuous point cloud sequence implicitly. Inspired by
NeRF [25] whose view synthesis of images is essentially
an interpolation, we propose NeuralPCI, a neural field to
exploit the spatial and temporal information of multi-frame
point clouds. We build a 4D neural spatio-temporal field,
which takes sequential 3D point clouds and the indepen-
dent interpolation time as input, and predicts the in-between
or future point cloud at the given time. Moreover, Neu-
ralPCI is optimized at runtime in a self-supervised man-
ner, without relying on costly ground truths, which makes
it free from the out-of-the-distribution generalization prob-
lem. Our method can be flexibly applied to segmentation
auto-labeling and morphing. Besides, we newly construct
a challenging multi-frame point cloud interpolation dataset
called NL-Drive from public autonomous driving datasets.
Finally, we achieve state-of-the-art performance on indoor
DHB dataset and outdoor NL-Drive dataset.
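Since the field is fitted at runtime without ground-truth intermediate frames, a natural self-supervised signal is a Chamfer-style distance between a predicted cloud and the observed input frames; the sketch below shows such a loss and a toy runtime optimization loop, with the caveat that the exact objective and field architecture of NeuralPCI are not reproduced here.

```python
import torch

def chamfer_distance(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point clouds [N, 3] and [M, 3]:
    a common self-supervised fitting loss when only the observed frames are
    available, with no interpolation ground truth."""
    d = torch.cdist(pred, target)            # [N, M] pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Toy runtime optimization: nudge a predicted frame toward an observed frame.
observed = torch.rand(2048, 3)
pred = (observed + 0.05 * torch.randn_like(observed)).clone().requires_grad_(True)
opt = torch.optim.Adam([pred], lr=1e-2)
for _ in range(10):
    opt.zero_grad()
    loss = chamfer_distance(pred, observed)
    loss.backward()
    opt.step()
print(float(chamfer_distance(pred, observed)))
```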
Our main contributions are summarized as follows:
• We propose a novel multi-frame point cloud interpola-
tion algorithm to deal with the nonlinear complex mo-
tion in real-world indoor and outdoor scenarios.
• We introduce a 4D spatio-temporal neural field to in-
tegrate motion information implicitly over space and
time to generate the in-between point cloud frames at
the arbitrary given time.
• A flexible unified framework to conduct both the inter-
polation and extrapolation, facilitating several applica-
tions as well.
|
Zhu_PMatch_Paired_Masked_Image_Modeling_for_Dense_Geometric_Matching_CVPR_2023
|
Abstract
Dense geometric matching determines the dense pixel-
wise correspondence between a source and support image
corresponding to the same 3D structure. Prior works em-
ploy an encoder of transformer blocks to correlate the two-
frame features. However, existing monocular pretraining
tasks, e.g., image classification, and masked image model-
ing (MIM), can not pretrain the cross-frame module, yield-
ing less optimal performance. To resolve this, we reformu-
late the MIM from reconstructing a single masked image to
reconstructing a pair of masked images, enabling the pre-
training of the transformer module. Additionally, we incorpo-
rate a decoder into pretraining for improved upsampling
results. Further, to be robust to textureless areas, we pro-
pose a novel cross-frame global matching module (CFGM).
Since most textureless areas are planar surfaces, we pro-
pose a homography loss to further regularize its learning.
Combined, we achieve State-of-The-Art (SoTA)
performance on geometric matching. Codes and models are
available at https://github.com/ShngJZ/PMatch.
|
1. Introduction
When a 3D structure is viewed in both a source and a
support image, for a pixel (or keypoint) in the source image,
the task of geometric matching identifies its corresponding
pixel in the support image. This task is a cornerstone for
many downstream vision applications, e.g., homography es-
timation [18], structure-from-motion [45], visual odometry
estimation [21] and visual camera localization [7].
There exist both sparse and dense methods for geomet-
ric matching. The sparse methods [16, 19, 32, 33, 40, 42,
48, 48, 56] only yield correspondence on sparse or semi-
dense locations while the dense methods [20, 54, 55] es-
timate pixel-wise correspondence. They primarily differ
in that the sparse methods embed a keypoint detection or
a global matching on discrete coordinates, which underly-
ingly assumes a unique mapping between source and sup-
port frames. Yet, the existence of textureless surfaces in-
troduces multiple similar local patches, disabling keypoint detection or causing ambiguous matching results. Dense methods, though facing similar challenges at the coarse level, alleviate it with the additional fine-level local context and smoothness constraint. Only recently have dense methods demonstrated comparable or better geometric matching performance than sparse methods [20, 54, 55].
Figure 1. Most vision tasks start with a pretrained network. In geometric matching, the unique network components processing two-view features cannot benefit from monocular pretraining tasks, e.g., image classification and masked image modeling (MIM). As in the figure, this work enables the pretraining of a matching model by reformulating MIM from reconstructing a single masked image to reconstructing a pair of masked images.
A relevant task to dense geometric matching is the opti-
cal flow estimation [50]. Both tasks estimate dense corre-
spondences, whereas the optical flow is applied over con-
secutive frames with the constant brightness assumption.
In geometric matching [9, 48], apart from the encoder that encodes source and support frames into feature maps, there
exist transformer blocks which correlate two-frame fea-
tures, e.g., the LoFTR module [48]. Since these network
components consume two-frame inputs, the monocular pre-
training tasks, e.g., image classification and masked image modeling (MIM) defined on the ImageNet dataset, are un-
able to benefit the network. This limits both the geometric
matching performance and its generalization capability.
To address this, we reformulate the MIM from single
masked image reconstruction to paired masked images re-
construction, i.e., pMIM. Paired MIM benefits the geomet-
ric matching as both tasks rely on the cross-frame module
to correlate two-frame inputs for prediction.
With a pretrained encoder, the decoder in dense geomet-
ric matching is still randomly initialized. Following the idea
of pretraining the encoder, we extend pMIM pretraining to the decoder. As part of the decoder's functionality is to upsample
the coarse-scale initial prediction to the same resolution as
input, we also task the decoder in pMIM to upsample the
coarse-scale reconstruction to its original resolution. Correspondingly, we construct the decoder as stacks of depth-wise convolutions except for the last prediction head. With
the depth-wise decoder, when transferring from pMIM to
geometric matching, we duplicate the decoder along the
channel dimension to finish the initialization. In this way, only a small number of components in the decoder remain randomly initialized; we pretrain the rest of the network components using synthetic image pair augmentation [54].
To further improve the dense geometric matching perfor-
mance, we propose a cross-frame global matching module
(CFGM). In CFGM, we first compute the correlation vol-
ume. We model the correspondences of coarse scale pixels
as a summation over the discrete coordinates in the support
frame, weighted by the softmaxed correlation vector. How-
ever, this modeling fails when multiple similar local patches
exist. As a solution, we impose positional embeddings on
the discrete coordinates and decode with a deep architec-
ture to avoid ambiguity. Meanwhile, we notice that the tex-
tureless surfaces are mostly planar structures described by
a low-dimensional 8degree-of-freedom (DoF) homography
matrix. We thus design a homography loss to augment the
learning of the low DoF planar prior.
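The coarse correspondence modeling described above amounts to a soft-argmax over the support frame's coordinate grid; a minimal sketch follows, without the positional embeddings and deep decoder that are added to resolve ambiguity (shapes and names are assumptions).

```python
import torch

def soft_correspondence(corr: torch.Tensor, h: int, w: int) -> torch.Tensor:
    """corr: [N, h*w] correlation of N source pixels against all coarse support
    pixels. Each correspondence is the expectation of the support coordinates
    under the softmaxed correlation, i.e. a weighted sum over the discrete grid."""
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    grid = torch.stack([xs.reshape(-1), ys.reshape(-1)], dim=-1)   # [h*w, 2]
    weights = torch.softmax(corr, dim=-1)                          # [N, h*w]
    return weights @ grid                                          # [N, 2] (x, y)

corr = torch.randn(5, 60 * 80)
print(soft_correspondence(corr, h=60, w=80).shape)   # torch.Size([5, 2])
```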
We summarize our contributions as follows:
• We introduce the paired masked image modeling pretext
task, pretraining both the encoder and decoder of a dense
geometric matching network.
• We propose a novel cross-frame global matching module
that is robust to textureless local patches. Since the most
textureless patches are planar structures, we augment their
learning with a homography loss.
• We outperform dense and sparse geometric matching
methods on diverse datasets.
|
Zhao_Quality-Aware_Pre-Trained_Models_for_Blind_Image_Quality_Assessment_CVPR_2023
|
Abstract
Blind image quality assessment (BIQA) aims to auto-
matically evaluate the perceived quality of a single image,
whose performance has been improved by deep learning-
based methods in recent years. However, the paucity of la-
beled data somewhat restrains deep learning-based BIQA
methods from unleashing their full potential. In this pa-
per, we propose to solve the problem by a pretext task
customized for BIQA in a self-supervised learning manner,
which enables learning representations from orders of mag-
nitude more data. To constrain the learning process, we
propose a quality-aware contrastive loss based on a simple
assumption: the quality of patches from a distorted image
should be similar, but vary from patches from the same im-
age with different degradations and patches from different
images. Further, we improve the existing degradation pro-
cess and form a degradation space with the size of roughly
2×10^7. After being pre-trained on ImageNet using our method,
models are more sensitive to image quality and perform sig-
nificantly better on downstream BIQA tasks. Experimental
results show that our method obtains remarkable improve-
ments on popular BIQA datasets.
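To make the stated assumption concrete, the sketch below shows a quality-aware InfoNCE-style loss in which two patches from the same distorted image form the positive pair, while patches from other images (content-based negatives) and from the same image under a different degradation (degradation-based negatives) populate the denominator; the temperature, the projection shapes, and the batch construction are our assumptions rather than the exact QPT recipe.

```python
import torch
import torch.nn.functional as F

def quality_aware_infonce(anchor, positive, negatives, tau=0.2):
    """anchor/positive: [B, D] embeddings of two patches cropped from the same
    distorted image; negatives: [B, K, D] embeddings of content-based negatives
    (patches of other images) and degradation-based negatives (same image,
    different degradation). Standard InfoNCE over these quality-aware pairs."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negatives, dim=-1)
    pos = (a * p).sum(dim=-1, keepdim=True) / tau             # [B, 1]
    neg = torch.einsum("bd,bkd->bk", a, n) / tau               # [B, K]
    logits = torch.cat([pos, neg], dim=1)                      # positive is class 0
    return F.cross_entropy(logits, torch.zeros(logits.size(0), dtype=torch.long))

loss = quality_aware_infonce(torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 16, 128))
print(float(loss))
```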
|
1. Introduction
With the arrival of the mobile internet era, billions of im-
ages are generated, uploaded and shared on various social
media platforms, including Twitter, TikTok, etc. [30]. As
an essential indicator, image quality can help these service
providers filter and deliver high-quality images to users,
thereby improving Quality of Experience. Therefore, huge
efforts [12,20,22,24,52,73] have been devoted to establish-
ing an image quality assessment (IQA) method consistent
with human viewers. In real-world scenarios, there usually
exists no access to the reference images and the quality of
reference images is suspicious. Thus, blind IQA (BIQA)
methods are more attractive and applicable, even though full-reference IQA has achieved prospective results [33].
†Equal contribution.
Figure 1. The two images in the first row are sampled from BIQA
dataset CLIVE [20]. Although they have the same semantic mean-
ing, their perceptual qualities are quite different: their mean opin-
ion scores (MOS) are 31.83 and 86.35. The second row shows
modified versions of the first two images and their MOSs (rated
by 7 people). After different operations, the quality dramatically
changed, while semantic meaning remains unchanged.
Recently, deep learning-based BIQA methods have
made tremendous improvements on in-the-wild IQA bench-
marks [17, 29, 79]. However, this problem is far from re-
solved and is hindered by the paucity of labeled data [34].
The largest (by far) available BIQA dataset, FLIVE [79],
contains nearly 40,000 real-world distorted images. By
comparison, the very popular entry-level image recognition
dataset, CIFAR-100 [35], contains 60,000 labeled images.
Accordingly, existing BIQA datasets are too small to train
deep learning-based models effectively.
Researchers present several methods to tackle this chal-
lenge. A straight-forward way is to sample local patches
and assign the label of the whole image ( i.e., mean opinion
score, MOS) to the patches [4, 31, 32, 38, 60, 89, 90]. How-
ever, the perceived scores of local image patches tend to
differ from the score of the entire image [79, 89]. Another
common strategy is to leverage domain knowledge from
large-scale datasets ( e.g., ImageNet [14]) for other com-
puter vision tasks [6, 32]. Nevertheless, these pre-trained
models can be sub-optimal for BIQA tasks: images with
the same content share the same semantic label, whereas,
their quality may be different (Fig. 1). Some researchers
propose to train a model on synthetic images with artificial
degradation and then regress the model onto small-scale tar-
get BIQA datasets [43, 73, 89]. However, images generated
by a rather simple degradation process with limited distortion
types/levels are far from authentic. Further, synthetic im-
ages are regularly distorted from limited pristine images of
high quality, so the image content itself only has a marginal
effect on the image quality. Yet in real-world scenarios, the
image quality is closely related to its content, due to view-
ers’ preferences for divergent contents [38, 62].
Self-supervised learning (SSL) or unsupervised learning
is another potential choice to overcome the problem of lack-
ing adequate training data, thanks to its ability to utilize large amounts of unlabeled data. Such techniques have proven effective
in many common computer vision tasks [2]. However, dif-
ferent from models for these tasks which mainly focus on
high-level information, representations learned for BIQA
should be sensitive to all kinds of low-level distortions and
high-level contents, as well as interactions between them.
There is little research focused on this area in the litera-
ture, let alone a deep SSL designed for BIQA with state-of-
the-art performance [44].
In this work, we propose a novel SSL mechanism
that distinguishes between samples with different percep-
tual qualities, generating Quality-aware Pre-Trained (QPT)
models for downstream BIQA tasks. Specifically, we sup-
pose the quality of patches from a distorted image should be
similar, but vary from patches from different images ( i.e.,
content-based negative ) and the same image with different
degradations ( i.e.,degradation-based negative ). Moreover,
inspired by recent progress in image restoration, we intro-
duce shuffle order [84], high-order [70] and a skip oper-
ation to the image degradation process, to simulate real-
world distortions. In this way, models pre-trained on Im-
ageNet are expected to extract quality-aware features, and
boost downstream BIQA performances. We summarize the
contributions of this work as follows:
• We design a more complex degradation process suit-
able for BIQA. It not only considers a mixture of
multiple degradation types, but also incorporates shuffle order, high-order degradation, and a skip operation, resulting in a much larger degradation space. For a dataset with a size of 10^7, our method can generate more than 2×10^14 possible pairs for contrastive learning.
• To fully exploit the abundant information hidden be-
neath such an amount of data, we propose a novel SSL
framework to generate QPT models for BIQA based on
MoCoV2 [8]. By carefully designing positive/negative
samples and customizing quality-aware contrastive
loss, our approach enables models to learn quality-
aware information rather than regular semantic-aware
representation from massive unlabeled images.
• Extensive experiments on five BIQA benchmark
datasets (sharing the same pre-trained weight of QPT)
demonstrate that our proposed method significantly
outperforms other counterparts, which indicates the ef-
fectiveness and generalization ability of QPT. It is also
worth noting that the proposed method can be easily
integrated with current SOTA methods by replacing
their pre-trained weights.
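To make the pre-training objective described above concrete, the following is a minimal PyTorch sketch of a quality-aware InfoNCE loss in the spirit of QPT (not the released implementation): two crops of the same degraded image form a positive pair, while crops of other images (content-based negatives) and differently degraded versions of the same image (degradation-based negatives) fill the denominator. All tensor shapes and function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def quality_aware_infonce(anchor, positive, content_neg, degrade_neg, tau=0.2):
    """Toy quality-aware contrastive loss (illustrative, not the official QPT loss).

    anchor, positive: (B, D) embeddings of two crops from the SAME degraded image.
    content_neg:      (B, K1, D) embeddings of crops from OTHER images.
    degrade_neg:      (B, K2, D) embeddings of the SAME image under DIFFERENT degradations.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    content_neg = F.normalize(content_neg, dim=-1)
    degrade_neg = F.normalize(degrade_neg, dim=-1)

    # Positive similarity: crops of one degraded image should share the same quality.
    pos = (anchor * positive).sum(-1, keepdim=True) / tau             # (B, 1)

    # Negative similarities: different content OR different degradation.
    negs = torch.cat([content_neg, degrade_neg], dim=1)               # (B, K1+K2, D)
    neg = torch.einsum("bd,bkd->bk", anchor, negs) / tau              # (B, K1+K2)

    logits = torch.cat([pos, neg], dim=1)                             # (B, 1+K1+K2)
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)                            # positive is index 0

# Example with random features
B, D = 4, 128
loss = quality_aware_infonce(torch.randn(B, D), torch.randn(B, D),
                             torch.randn(B, 8, D), torch.randn(B, 8, D))
```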
|
Zhu_Visual_Prompt_Multi-Modal_Tracking_CVPR_2023
|
Abstract
Visible-modal object tracking gives rise to a series of
downstream multi-modal tracking tributaries. To inherit the
powerful representations of the foundation model, a natural
modus operandi for multi-modal tracking is full fine-tuning
on the RGB-based parameters. Albeit effective, this man-
ner is not optimal due to the scarcity of downstream data
and poor transferability, etc. In this paper, inspired by the
recent success of the prompt learning in language models,
we develop Visual Prompt multi-modal Tracking (ViPT),
which learns the modal-relevant prompts to adapt the frozen
pre-trained foundation model to various downstream multi-
modal tracking tasks. ViPT finds a better way to stimulate
the knowledge of the RGB-based model that is pre-trained
at scale, meanwhile only introducing a few trainable pa-
rameters (less than 1% of model parameters). ViPT outper-
forms the full fine-tuning paradigm on multiple downstream
tracking tasks including RGB+Depth, RGB+Thermal, and
RGB+Event tracking. Extensive experiments show the po-
tential of visual prompt learning for multi-modal tracking,
and ViPT can achieve state-of-the-art performance while
satisfying parameter efficiency. Code and models are avail-
able at https://github.com/jiawen-zhu/ViPT.
|
1. Introduction
RGB-based tracking, a foundation task of visual object
tracking, gains from large-scale benchmarks [9, 13, 27, 36,
39, 45] provided by the community, and many excellent
works [2, 3, 5, 7, 22] have spurted out over the past decades.
Despite the promising results, object tracking based on
pure RGB sequences is still prone to failure in some com-
plex and corner scenarios, e.g., extreme illumination, back-
ground clutter, and motion blur. Therefore, multi-modal
tracking is drawing increasing attention due to the ability to
achieve more robust tracking by utilizing inter-modal com-
plementarity, among which RGB+Depth (RGB-D) [47,60],
† Equal contribution.
Corresponding author: Dr. Dong Wang.
Figure 1. Existing multi-modal tracking paradigm vs. ViPT.
(a) Foundation model training on large-scale RGB sequences.
(a)→(b) Existing multi-modal methods extend the off-the-shelf
foundation model and conduct full fine-tuning on downstream
tracking tasks. (a)→(c) We investigate the design of a multi-modal
tracking method in the prompt-learning paradigm. Compared to (b),
the proposed ViPT has a more concise network structure, bene-
fits from parameter-friendly prompt-tuning, and narrows the gap
between the foundation and downstream models.
RGB+Thermal (RGB-T) [44, 56], and RGB+Event (RGB-
E) [51, 52] are represented.
However, as the downstream task of RGB-based track-
ing, the main issue encountered by multi-modal tracking is
the lack of large-scale datasets. For example, the widely
used RGB-based tracking datasets, GOT-10k [13], Track-
ingNet [36], and LaSOT [9], contain 9.3K, 30.1K, and
1.1K sequences, corresponding to 1.4M, 14M, and 2.8M
frames for training. Whereas the largest training datasets
in multi-modal tracking, DepthTrack [47], LasHeR [25],
VisEvent [43], contain 150, 979, 500 training sequences,
corresponding to 0.22M, 0.51M, 0.21M annotated frame
pairs, which is at least an order of magnitude less than the
former. Accounting for the above limitation, multi-modal
tracking methods [43, 47, 61] usually utilize pre-trained
RGB-based trackers and perform fine-tuning on their task-
oriented training sets (as shown in Figure 1 (a) !(b)).
DeT [47] adds a depth feature extraction branch to the orig-
inal ATOM [7] or DiMP [3] tracker and fine-tunes on RGB-
D training data. Zhang et al. [57] extend SiamRPN++ [21]
with dual-modal inputs for RGB-T tracking. They first con-
struct a unimodal tracking network trained on RGB data,
then tune the whole extended multi-modal network with
RGB-T image pairs. Similarly, Wang et al. [43] develop
dual-modal trackers by extending single-modal ones with
various fusion strategies for visible and event flows and per-
form extra model training on the RGB-E sequences. Al-
though effective, the task-oriented full-tuning approach has
some drawbacks. (i) Fully fine-tuning the model is time-
consuming and inefficient, and the burden of parameter stor-
age is large, which is unfriendly to numerous applications
and cumbersome for transfer and deployment. (ii) Full fine-tuning
is unable to obtain generalized representations due to the
limited annotated samples and the inability to utilize the pre-
trained knowledge of the foundation model trained on large-
scale datasets. Thus, a natural question arises: Is
there a more effective manner to adapt the RGB-based foun-
dation model to downstream multi-modal tracking?
More recently, in the Natural Language Processing (NLP)
field, researchers have injected textual prompts into down-
stream language models to effectively exploit the repre-
sentational potential of foundation models; this method
is known as prompt-tuning. After that, a few researchers
have tried to freeze the entire upstream model and add
only some learnable parameters to the input side to learn
valid visual prompts. Existing studies [1, 14, 38, 59]
show its great potential and visual prompt learning is ex-
pected to be an alternative to full fine-tuning. Intuitively,
there is a large inheritance between multi-modal and sin-
gle RGB-modal tracking, which should share most of prior
knowledge on feature extraction or attention patterns. In
this spirit, we present ViPT , a unified visual prompt-tuning
paradigm for downstream multi-modal tracking. Instead
of fully fine-tuning an RGB-based tracker combined with
an auxiliary-modal branch, ViPT freezes the whole foun-
dation model and only learns a few modal-specific visual
prompts, which inherits the RGB-based model parame-
ters trained at scale to the maximum extent (see Figure 1
(c)). Different from the prompt learning of other single-
modal vision tasks, ViPT introduces additional auxiliary-
modal inputs into the prompt-tuning process, adapting the
foundation model to downstream tasks while simultane-
ously learning the association between different modalities.
Specifically, ViPT inserts several simple and lightweight
modality-complementary prompter (MCP) blocks into the
frozen foundation model to effectively learn the inter-modal
complementarities. Notably, ViPT is a general framework
for various downstream multi-modal tracking tasks, includ-
ing RGB-D, RGB-T, and RGB-E tracking. We summarize
the contribution of our work as follows:
A visual prompt tracking framework is proposed to
achieve task-oriented multi-modal tracking. Facili-
tated by learned prompts, the off-the-shelf foundation
model can be effectively adapted from the RGB domain to downstream multi-modal tracking tasks. Besides,
ViPT is a general method that can be applied to vari-
ous tasks, i.e., RGB-D, RGB-T, and RGB-E tracking.
A modality-complementary prompter is designed to
generate valid visual prompts for the task-oriented
multi-modal tracking. The auxiliary-modal inputs are
streamlined to a small number of prompts instead of
designing an extra network branch.
Extensive experiments show that our method achieves
SOTA performance on multiple downstream multi-
modal tracking tasks while maintaining parameter-
efficient (<1% trainable parameters).
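As a rough illustration of the prompt-tuning idea described above, and not the actual ViPT/MCP implementation, the sketch below freezes a foundation backbone and trains only a lightweight prompter that injects auxiliary-modality information into the RGB token stream. Module names, shapes, and the additive fusion rule are assumptions.

```python
import torch
import torch.nn as nn

class ModalityPrompter(nn.Module):
    """Lightweight block that turns auxiliary-modal tokens into additive prompts."""
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)
        self.act = nn.GELU()

    def forward(self, rgb_tokens, aux_tokens):
        # The prompt is predicted from the combined modalities, then added to the RGB tokens.
        prompt = self.up(self.act(self.down(rgb_tokens + aux_tokens)))
        return rgb_tokens + prompt

class PromptTunedTracker(nn.Module):
    def __init__(self, foundation_blocks, dim):
        super().__init__()
        self.blocks = foundation_blocks                      # pre-trained, kept frozen
        for p in self.blocks.parameters():
            p.requires_grad = False
        self.prompters = nn.ModuleList(
            ModalityPrompter(dim) for _ in range(len(foundation_blocks)))

    def forward(self, rgb_tokens, aux_tokens):
        x = rgb_tokens
        for block, prompter in zip(self.blocks, self.prompters):
            x = prompter(x, aux_tokens)   # inject modal-relevant prompt
            x = block(x)                  # frozen foundation computation
        return x

# Only the prompters (a small fraction of all parameters) receive gradients.
blocks = nn.ModuleList(nn.TransformerEncoderLayer(256, 8, batch_first=True) for _ in range(4))
tracker = PromptTunedTracker(blocks, dim=256)
out = tracker(torch.randn(2, 100, 256), torch.randn(2, 100, 256))
```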
|
Zhu_ConQueR_Query_Contrast_Voxel-DETR_for_3D_Object_Detection_CVPR_2023
|
Abstract
Although DETR-based 3D detectors simplify the detec-
tion pipeline and achieve direct sparse predictions, their
performance still lags behind dense detectors with post-
processing for 3D object detection from point clouds. DE-
TRs usually adopt a larger number of queries than GTs
(e.g., 300 queries v.s. ∼40 objects in Waymo) in a scene,
which inevitably incurs many false positives during infer-
ence. In this paper, we propose a simple yet effective sparse
3D detector, named Query Contrast Voxel-DETR (Con-
QueR), to eliminate the challenging false positives, and
achieve more accurate and sparser predictions. We observe
that most false positives are highly overlapping in local re-
gions, caused by the lack of explicit supervision to discrimi-
nate locally similar queries. We thus propose a Query Con-
trast mechanism to explicitly enhance queries towards their
best-matched GTs over all unmatched query predictions.
This is achieved by the construction of positive and negative
GT-query pairs for each GT, and a contrastive loss to en-
hance positive GT-query pairs against negative ones based
on feature similarities. ConQueR closes the gap of sparse
and dense 3D detectors, and reduces ∼60% false positives.
Our single-frame ConQueR achieves 71.6 mAPH/L2 on the
challenging Waymo Open Dataset validation set, outper-
forming previous SOTA methods by over 2.0 mAPH/L2. Code
|
1. Introduction
3D object detection from point clouds has received much
attention in recent years [7, 32, 34, 47, 52] as its wide ap-
plications in autonomous driving, robots navigation, etc.
State-of-the-art 3D detectors [7, 31, 33, 53] still adopt dense
predictions with post-processing ( e.g., NMS [2]) to obtain
final sparse detections. This indirect pipeline usually in-
volves many hand-crafted components (e.g., anchors, center
masks) based on human experience, which involves much
effort for tuning, and prevents dense detectors from being
Figure 1. Comparison of our baseline Voxel-DETR and ConQueR.
GTs (green) and predictions (blue) of an example scene in the
WOD are visualized. Sparse predictions of Voxel-DETR still con-
tain many highly overlapped false positives (in the red dashed cir-
cle), while ConQueR can generate much sparser predictions.
optimized end-to-end to achieve optimal performance. Re-
cently, DETR-based 2D detectors [3, 39, 49, 57] show that
transformers with direct sparse predictions can greatly sim-
plify the detection pipeline, and lead to better performance.
However, although many efforts [1, 26, 27] have been made
towards direct sparse predictions for 3D object detection,
because of the different characteristics of images and point
clouds (i.e., dense and ordered images vs. sparse and irreg-
ular point clouds), the performance of sparse 3D object detec-
tors still largely lags behind state-of-the-art dense detectors.
To achieve direct sparse predictions, DETRs usually
adopt a set of object queries [1, 3, 27, 39, 49, 57], and re-
sort to the one-to-one Hungarian Matching [17] to assign
ground-truths (GTs) to object queries. However, to guaran-
tee a high recall rate, those detectors need to impose much
more queries than the actual number of objects in a scene.
For example, recent works [1, 27] select top-300 scored
query predictions to cover only ∼40 objects in each scene
of Waymo Open Dataset (WOD) [36], while 2D DETR de-
tectors [3,39,49,57] use 10× more predictions than the av-
erage GT number of MS COCO [22]. As shown in Fig. 1(a),
we visualize an example scene by a baseline DETR-based
3D detector, named Voxel-DETR, which shows its top-300
scored predictions. Objects are generally small and densely
populated in autonomous driving scenes, while 3D DETRs
adopt the same fixed top-N scored predictions as 2D DE-
TRs, and lack a mechanism to handle such small and dense
objects. Consequently, they tend to generate densely over-
lapped false positives (in the red-dashed circle), harming
both the accuracy and sparsity [29, 39] of final predictions.
We argue the key reason is that the Hungarian Match-
ing in existing 3D DETRs only assigns each GT to its best
matched query, while all other unmatched queries near this
GT are not effectively suppressed. For each GT, the one-
to-one matching loss solely forces all unmatched queries to
predict the same “no-object” label, and the best matched
query is supervised without considering its relative rank-
ing to its surrounding unmatched queries. This design
causes the detectors to be insufficiently supervised in dis-
criminating similar query predictions for each GT, leading
to duplicated false positives for scenes with densely popu-
lated objects.
To overcome the limitations of current supervision, we
introduce a simple yet novel Query Contrast strategy to ex-
plicitly suppress predictions of all unmatched queries for
each GT, and simultaneously enhance the best matched
query to generate more accurate predictions in a contrastive
manner. The Query Contrast strategy is integrated into our
baseline Voxel-DETR, which consists of a sparse 3D con-
volution backbone to extract features from voxel grids, and
a transformer encoder-decoder architecture with a bipar-
tite matching loss to directly generate sparse predictions.
Our Query Contrast mechanism involves the construction
of positive and negative GT-query pairs, and the contrastive
learning on all GT-query pairs to supervise both matched
and unmatched queries with knowledge of the states of their
surrounding queries. Such GT-query pairs are directly cre-
ated by reusing the Hungarian Matching results: each GT
and its best matched query form the positive pair, and all
other unmatched queries of the same GT then form nega-
tive pairs. To quantitatively measure the similarities of the
GT-query pairs, we formulate the object queries to be the
same as GT boxes ( i.e., using only box categories, loca-
tions, sizes and orientations), such that GTs and object
queries can be processed by the same transformer decoder,
and embedded into a unified feature space to properly cal-
culate their similarities. Given the GT-query similarities,
we adopt the contrastive learning loss [5, 12, 54] to effec-
tively enhance the positive (matched) query’s prediction for
each GT, and suppress those of all its negative queries at
the same time. Moreover, to further improve the contrastive
supervision, we construct multiple positive GT-query pairs
for each GT by adding small random noises to the original
GTs, which greatly boost the training efficiency and effec-
tiveness. The resulting sparse 3D detector Query Contrast Voxel-DETR (ConQueR) significantly improves the detec-
tion performance and sparsity of predictions, as shown in
Fig. 1(b). Besides, ConQueR abandons the fixed top-N pre-
diction scheme and achieves dynamic prediction numbers
across scenes. ConQueR reduces ∼60% false positives and
sets new records on the challenging Waymo Open Dataset
(WOD) [36]. Contributions are summarized as below:
1. We introduce a novel Query Contrast strategy into
DETR-based 3D detectors to effectively eliminate
densely overlapped false positives and achieve more
accurate predictions.
2. We propose to construct multi-positive contrastive
training, which greatly improves the effectiveness and
efficiency of our Query Contrast mechanism.
3. Our proposed sparse 3D detector ConQueR closes the
gap between sparse and dense 3D detectors, and sets
new records on the challenging WOD benchmark.
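To make the Query Contrast supervision more concrete, here is a simplified PyTorch sketch (not the authors' code) of an InfoNCE-style loss over GT-query pairs: for each GT, its Hungarian-matched query embedding serves as the positive and all remaining query embeddings act as negatives, with similarities computed in the shared decoder embedding space. The multi-positive variant with noised GT copies can be obtained by repeating the GT rows. Shapes and names are assumed.

```python
import torch
import torch.nn.functional as F

def query_contrast_loss(gt_embed, query_embed, match_idx, tau=0.07):
    """gt_embed:    (G, D) embeddings of GT boxes (processed by the same decoder as queries).
    query_embed: (Q, D) embeddings of all object queries in the scene.
    match_idx:   (G,)  index of the Hungarian-matched query for each GT.
    """
    gt = F.normalize(gt_embed, dim=-1)
    q = F.normalize(query_embed, dim=-1)
    logits = gt @ q.t() / tau          # (G, Q): similarity of every GT to every query
    # For each GT, the matched query is the positive; all other queries are negatives,
    # so unmatched queries near a GT are explicitly suppressed.
    return F.cross_entropy(logits, match_idx)

G, Q, D = 5, 300, 256
loss = query_contrast_loss(torch.randn(G, D), torch.randn(Q, D),
                           torch.randint(0, Q, (G,)))
```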
|
Zheng_EXIF_As_Language_Learning_Cross-Modal_Associations_Between_Images_and_Camera_CVPR_2023
|
Abstract
We learn a visual representation that captures informa-
tion about the camera that recorded a given photo. To
do this, we train a multimodal embedding between image
patches and the EXIF metadata that cameras automatically
insert into image files. Our model represents this meta-
data by simply converting it to text and then processing it
with a transformer. The features that we learn significantly
outperform other self-supervised and supervised features
on downstream image forensics and calibration tasks. In
particular, we successfully localize spliced image regions
“zero shot” by clustering the visual embeddings for all of
the patches within an image.
|
1. Introduction
A major goal of the computer vision community has
been to use cross-modal associations to learn concepts that
would be hard to glean from images alone [2]. A particular
focus has been on learning high level semantics, such as objects, from other rich sensory signals, like language and
sound [58, 62]. By design, the representations learned
by these approaches typically discard imaging properties,
such as the type of camera that shot the photo, its lens,
and the exposure settings, which are not useful for their
cross-modal prediction tasks [17].
We argue that obtaining a complete understanding of an
image requires both capabilities — for our models to per-
ceive not only the semantic content of a scene, but also
the properties of the camera that captured it. This type of
low level understanding has proven crucial for a variety of
tasks, from image forensics [33, 52, 80] to 3D reconstruc-
tion [34, 35], yet it has not typically been a focus of rep-
resentation learning. It is also widely used in image gen-
eration, such as when users of text-to-image tools specify
camera properties with phrases like “DSLR photo” [59,63].
We propose to learn low level imaging properties from
the abundantly available (but often neglected) camera meta-
data that is added to the image file at the moment of cap-
ture. This metadata is typically represented as dozens of
Exchangeable Image File Format (EXIF) tags that describe
the camera, its settings, and postprocessing operations that
were applied to the image: e.g., Model: “iPhone 4s”
or Focal Length: “35.0 mm”. We train a joint embed-
ding through contrastive learning that puts image patches
into correspondence with camera metadata (Fig. 1a). Our
model processes the metadata with a transformer [75] after
converting it to a language-like representation. To do this
conversion, we take advantage of the fact that EXIF tags
are typically stored in a human-readable (and text-based)
format. We convert each tag to text, and then concate-
nate them together. Our model thus closely resembles con-
trastive vision-and-language models, such as CLIP [62], but
with EXIF-derived text in place of natural language.
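The training objective described here is essentially a CLIP-style symmetric contrastive loss between patch embeddings and embeddings of the EXIF tags rendered as text. The snippet below is an illustration under assumed encoders and shapes, not the released model; it only shows how metadata might be serialized and how the symmetric InfoNCE term is formed.

```python
import torch
import torch.nn.functional as F

def exif_to_text(tags: dict) -> str:
    # Serialize EXIF tags into a language-like string,
    # e.g. "Model: iPhone 4s Focal Length: 35.0 mm".
    return " ".join(f"{k}: {v}" for k, v in tags.items())

def clip_style_loss(patch_feat, exif_feat, tau=0.07):
    """patch_feat, exif_feat: (B, D) embeddings of image patches and of their EXIF text."""
    patch_feat = F.normalize(patch_feat, dim=-1)
    exif_feat = F.normalize(exif_feat, dim=-1)
    logits = patch_feat @ exif_feat.t() / tau          # (B, B)
    labels = torch.arange(patch_feat.size(0), device=patch_feat.device)  # matched pairs on the diagonal
    # Symmetric InfoNCE: patch -> metadata and metadata -> patch.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

text = exif_to_text({"Model": "iPhone 4s", "Focal Length": "35.0 mm"})
loss = clip_style_loss(torch.randn(8, 512), torch.randn(8, 512))
```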
We show that our model can successfully estimate cam-
era properties solely from images, and that it provides a
useful representation for a variety of image forensics and
camera calibration tasks. Our approaches to these tasks do
not require camera metadata at test time. Instead, camera
properties are estimated implicitly from image content via
multimodal embeddings.
We evaluate the learned features of our model on two clas-
sification tasks that benefit from a low-level understanding
of images: estimating an image’s radial distortion param-
eter, and distinguishing real and manipulated images. We
find that our features significantly outperform alternative
supervised and self-supervised feature sets.
We also show that our embeddings can be used to de-
tect image splicing “zero shot” (i.e., without labeled data),
drawing on recent work [8, 33, 54] that detects inconsisten-
cies in camera fingerprints hidden within image patches.
Spliced images contain content from multiple real images,
each potentially captured with a different camera and imag-
ing pipeline. Thus, the embeddings that our model assigns
to their patches, which convey camera properties, will have
less consistency than those of real images. We detect ma-
nipulations by flagging images whose patch embeddings
do not fit into a single, compact cluster. We also localize
spliced regions by clustering the embeddings within an im-
age (Fig. 1b).
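A minimal version of the zero-shot splice localization procedure sketched above could look like the following: embed overlapping patches, cluster the embeddings into two groups with k-means, and flag the minority cluster as the candidate spliced region. This is an illustrative sketch using scikit-learn, with an assumed patch size, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def localize_splice(patch_embeddings, patch_coords, image_hw):
    """patch_embeddings: (N, D) per-patch embeddings from the image branch.
    patch_coords:      (N, 2) top-left (y, x) of each patch.
    Returns a boolean mask marking patches assigned to the minority cluster.
    """
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(patch_embeddings)
    minority = np.argmin(np.bincount(labels))     # spliced content usually covers less area
    mask = np.zeros(image_hw, dtype=bool)
    patch = 64                                    # assumed patch size in pixels
    for (y, x), lab in zip(patch_coords, labels):
        if lab == minority:
            mask[y:y + patch, x:x + patch] = True
    return mask

emb = np.random.randn(100, 512)
coords = np.random.randint(0, 448, size=(100, 2))
mask = localize_splice(emb, coords, image_hw=(512, 512))
```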
We show through our experiments that:
• Camera metadata provides supervision for self-
supervised representation learning.
• Image patches can be successfully associated with camera
metadata via joint embeddings.
• Image-metadata embeddings are a useful representation
for forensics and camera understanding tasks.
• Image manipulations can be identified “zero shot” by
identifying inconsistencies in patch embeddings.
|
Zhou_Query-Centric_Trajectory_Prediction_CVPR_2023
|
Abstract
Predicting the future trajectories of surrounding agents
is essential for autonomous vehicles to operate safely. This
paper presents QCNet, a modeling framework toward push-
ing the boundaries of trajectory prediction. First, we iden-
tify that the agent-centric modeling scheme used by existing
approaches requires re-normalizing and re-encoding the in-
put whenever the observation window slides forward, lead-
ing to redundant computations during online prediction.
To overcome this limitation and achieve faster inference,
we introduce a query-centric paradigm for scene encoding,
which enables the reuse of past computations by learning
representations independent of the global spacetime coordi-
nate system. Sharing the invariant scene features among all
target agents further allows the parallelism of multi-agent
trajectory decoding. Second, even given rich encodings of
the scene, existing decoding strategies struggle to capture
the multimodality inherent in agents’ future behavior, espe-
cially when the prediction horizon is long. To tackle this
challenge, we first employ anchor-free queries to generate
trajectory proposals in a recurrent fashion, which allows
the model to utilize different scene contexts when decod-
ing waypoints at different horizons. A refinement module
then takes the trajectory proposals as anchors and leverages
anchor-based queries to refine the trajectories further. By
supplying adaptive and high-quality anchors to the refine-
ment module, our query-based decoder can better deal with
the multimodality in the output of trajectory prediction. Our
approach ranks 1ston Argoverse 1 and Argoverse 2 motion
forecasting benchmarks, outperforming all methods on all
main metrics by a large margin. Meanwhile, our model can
achieve streaming scene encoding and parallel multi-agent
decoding thanks to the query-centric design ethos.
|
1. Introduction
Making safe decisions for autonomous vehicles requires
accurate predictions of surrounding agents’ future trajec-
tories. In recent years, learning-based methods have been
Figure 1. Illustration of our query-centric reference frame,
where we build a local coordinate system for each spatial-temporal
element, including map polygons and agent states at all time steps.
In the attention-based encoder, all scene elements’ queries are de-
rived and updated in their local reference frames.
widely used for trajectory prediction [14, 31, 37, 38, 46, 56].
Despite the considerable efforts made to enhance models’
forecasting ability, there is still a long way to go before fully
addressing the problem of trajectory prediction. Why is this
task so challenging, and what inability lies in existing ap-
proaches? We attempt to answer these questions from the
following two perspectives:
(i)While the flourishing forecasting models have
achieved impressive performance on trajectory prediction
benchmarks [7,13,49], today’s most advanced architectures
specialized for this task [37, 38, 46, 56] fail to process the
heterogeneous traffic scenes efficiently . In an autonomous
driving system, data frames arrive at the prediction module
sequentially as a stream of sparse scene context, including
the high-definition vector map and the surrounding agents’
kinematic states. A model must learn expressive representa-
tions of these scene elements to achieve accurate forecasts.
With the continuing development of modeling techniques
for sparse context encoding [14, 31, 50], the research com-
munity has witnessed rapid progress toward more powerful
trajectory predictors. Notably, factorized attention-based
Transformers [37,38,56] have recently raised prediction ac-
curacy to an unprecedented level. However, they require
learning attention-based representations for each spatial-
temporal scene element and suffer from prohibitively high
costs when processing dense traffic scenes. As every mini-
mal delay may lead to catastrophic accidents in autonomous
driving, the unmet need for real-time predictions has limited
the applicability of state-of-the-art approaches.
(ii)The immense uncertainty in the output of trajectory
prediction, which grows explosively as the prediction hori-
zon lengthens, has troubled the research community con-
stantly. For example, a vehicle at an intersection may turn
or go straight depending on the driver’s long-term goal. To
avoid missing any potential behavior, a model must learn to
capture the underlying multimodal distribution rather than
simply predicting the most frequent mode. This learning
task is challenging since only one possibility is logged in
each training sample. To ease the learning difficulty, a body
of works utilizes handcrafted anchors as guidance for mul-
timodal prediction [6, 12, 39, 53, 55]. Their effectiveness,
however, is subject to the quality of the anchors. Typically,
these methods fail to work well when few anchors can pre-
cisely cover the ground truth. This problem is exacerbated
in long-term prediction, where the search space for anchors
is much larger. Some other works [10, 31, 38, 46, 56] cir-
cumvent this issue by directly predicting multiple trajecto-
ries, albeit at the risk of mode collapse and training instabil-
ity [33, 41]. Due to the lack of spatial priors, these methods
also fail to produce accurate long-term forecasts.
The analysis above drives us to propose a trajectory pre-
diction framework, termed as QCNet, to overcome the lim-
itations of previous solutions. First , we note that it is possi-
ble to achieve faster online inference while also benefiting
from the power of factorized attention, but the agent-centric
encoding scheme [25, 27, 46, 56] used by existing methods
serves as an impediment. Each time a new data frame ar-
rives, the observation window slides one step forward and
overlaps with its predecessor substantially, which provides
opportunities for models to reuse the previously computed
encodings. However, agent-centric approaches require nor-
malizing the input based on the latest agent states’ positions,
necessitating the re-encoding of scene elements whenever
the observation window slides forward. To address this is-
sue, we introduce a query-centric paradigm for scene en-
coding (see Fig. 1). The crux of our design ethos lies in
processing all scene elements in their local spacetime ref-
erence frames and learning representations independent of
the global coordinates. This strategy enables us to cache
and reuse the previously computed encodings, spreading the
computation across all observation windows and thereby re-
ducing inference latency. The invariant scene features can
also be shared among all target agents in the scene to en-
able the parallelism of multi-agent decoding. Second , to
better utilize the scene encodings for multimodal and long-
term prediction, we use anchor-free queries to retrieve the
scene context recurrently and let them decode a short seg-ment of future waypoints at each recurrence. This recurrent
mechanism eases the modeling burden on the queries by al-
lowing them to focus on different scene contexts when pre-
dicting waypoints at different horizons. The high-quality
trajectories predicted by the recurrent decoder serve as dy-
namic anchors in the subsequent refinement module, where
we use anchor-based queries to refine the trajectory propos-
als based on the scene context. As a result, our query-based
decoding pipeline incorporates the flexibility of anchor-free
methods into anchor-based solutions, taking the best of both
worlds to facilitate multimodal and long-term prediction.
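To illustrate the computational benefit of the query-centric scheme (a conceptual sketch, not QCNet's code): because each element's encoding is computed in its own local frame, it can be keyed by element id and timestamp and simply reused when the observation window slides, instead of being re-normalized and re-encoded around the latest agent position. The rotation, cache key, and encoder below are assumptions.

```python
import torch
import torch.nn as nn

class QueryCentricEncoder(nn.Module):
    """Caches per-element encodings that are independent of the global coordinate frame."""
    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder
        self.cache = {}                      # (element_id, timestep) -> encoding

    def to_local_frame(self, feats, pose):
        # Express 2D features in the element's own frame (position, heading),
        # so the encoding does not depend on where the ego vehicle currently is.
        pos, heading = pose
        c, s = torch.cos(heading), torch.sin(heading)
        rot = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])
        return (feats - pos) @ rot

    def encode(self, element_id, timestep, feats, pose):
        key = (element_id, timestep)
        if key not in self.cache:            # reuse work from previous observation windows
            self.cache[key] = self.encoder(self.to_local_frame(feats, pose))
        return self.cache[key]

enc = QueryCentricEncoder(nn.Linear(2, 64))
pose = (torch.zeros(2), torch.tensor(0.3))
out1 = enc.encode("agent_7", 42, torch.randn(10, 2), pose)   # computed once
out2 = enc.encode("agent_7", 42, torch.randn(10, 2), pose)   # served from the cache
```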
Our proposed query-centric encoding paradigm is the
first that can exploit the sequential nature of trajectory pre-
diction to achieve fast online inference. Besides, our query-
based decoder exhibits superior performance for multi-
modal and long-term prediction. Experiments show that
our approach achieves state-of-the-art results, ranking 1st on
two large-scale motion forecasting benchmarks [7, 49].
|
Zhu_BiFormer_Vision_Transformer_With_Bi-Level_Routing_Attention_CVPR_2023
|
Abstract
As the core building block of vision transformers, atten-
tion is a powerful tool to capture long-range dependency.
However, such power comes at a cost: it incurs a huge
computation burden and heavy memory footprint as pair-
wise token interaction across all spatial locations is com-
puted. A series of works attempt to alleviate this problem
by introducing handcrafted and content-agnostic sparsity
into attention, such as restricting the attention operation to
be inside local windows, axial stripes, or dilated windows.
In contrast to these approaches, we propose a novel dy-
namic sparse attention via bi-level routing to enable a more
flexible allocation of computations with content awareness.
Specifically, for a query, irrelevant key-value pairs are first
filtered out at a coarse region level, and then fine-grained
token-to-token attention is applied in the union of remain-
ing candidate regions ( i.e., routed regions). We provide
a simple yet effective implementation of the proposed bi-
level routing attention, which utilizes the sparsity to save
both computation and memory while involving only GPU-
friendly dense matrix multiplications. Built with the pro-
posed bi-level routing attention, a new general vision trans-
former, named BiFormer, is then presented. As BiFormer
attends to a small subset of relevant tokens in a query adap-
tive manner without distraction from other irrelevant ones,
it enjoys both good performance and high computational
efficiency, especially in dense prediction tasks. Empirical
results across several computer vision tasks such as image
classification, object detection, and semantic segmentation
verify the effectiveness of our design. Code is available at
https://github.com/rayleizhu/BiFormer .
|
1. Introduction
Transformer has many properties that are suitable for
building powerful data-driven models. First, it is able to
capture long-range dependency in the data [27,40]. Second,
†Corresponding author.
it is almost inductive-bias-free and thus makes the model
more flexible to fit tons of data [14]. Last but not least, it
enjoys high parallelism, which benefits training and infer-
ence of large models [12, 31, 34, 40]. Hence, transformer
has not only revolutionized natural language processing but
also shown very promising progress in computer vision.
The computer vision community has witnessed an explo-
sion of vision transformers in the past two years [1, 13, 14,
27, 42, 44]. Among these works, a popular topic is to im-
prove the core building block, i.e., attention. In contrast to
convolution, which is intrinsically a local operator, a cru-
cial property of attention is the global receptive field, which
empowers vision transformers to capture long-range depen-
dency [40]. However, such a property comes at a cost: as
attention computes pairwise token affinity across all spatial
locations, it has a high computational complexity and incurs
heavy memory footprints.
To alleviate the problem, a promising direction is to in-
troduce sparse attention [5] to vision transformers, so that
each query attends to a small portion of key-value pairs
instead of all. In this fashion, several handcrafted sparse
patterns have been explored, such as restricting attention
in local windows [27], dilated windows [39, 44], or axial
stripes [44]. On the other hand, there are also works try-
ing to make the sparsity adaptive to data [4, 45]. However,
while they use different strategies to merge or select key/-
value tokens, these tokens are query-agnostic, i.e., they are
shared by all queries. Nonetheless, according to the visual-
ization of pretrained ViT1 [14] and DETR2 [1], queries in
different semantic regions actually attend to quite different
key-value pairs. Hence, forcing all queries to attend to the
same set of tokens may be suboptimal.
In this paper, we seek an attention mechanism with dy-
namic, query-aware sparsity. Basically, we aim for each
query to attend to a small portion of the most semantically
relevant key-value pairs. The first problem comes as how
1https://epfml.github.io/attention-cnn/
2https://colab.research.google.com/github/facebookresearch/detr/blob/colab/notebooks/detr_attention.ipynb
Figure 1. Vanilla attention and its sparse variants. (a) Vanilla attention operates globally and incurs high computational complexity
and a heavy memory footprint. (b)-(d) Several works attempt to alleviate the complexity by introducing sparse attention with different
handcrafted patterns, such as local window [27, 44], axial stripe [13], dilated window [39, 44]. (e) Deformable attention [45] enables
image-adaptive sparsity via deforming a regular grid. (f) We achieve dynamic, query-aware sparsity with bi-level routing attention, which
first searches top-k (k = 3 in this case) relevant regions, and then attends to the union of them.
to locate these key-value pairs to attend. For example, if we
select key-value pairs in a per-query manner as done in [16],
it still requires evaluation of pairwise affinity between all
queries and keys, and hence has the same complexity of
vanilla attention. Another possibility is to predict attention
offsets based on local context for each query [9, 45], and
hence pairwise affinity computation is avoided. However,
in this way, it is problematic to model long-range depen-
dency [45].
To locate valuable key-value pairs to attend globally with
high efficiency, we propose a region-to-region routing ap-
proach. Our core idea is to filter out the most irrelevant
key-value pairs at a coarse-grained region level, instead of
directly at the fine-grained token level. This is done by first
constructing a region-level affinity graph and then pruning it
to keep only top-k connections for each node. Hence, each
region only needs to attend to the top-k routed regions. With
the attending regions determined, the next step is to apply
token-to-token attention, which is non-trivial as key-value
pairs are now assumed to be spatially scattered. For this
case, while the sparse matrix multiplication is applicable,
it is inefficient in modern GPUs, which rely on coalesced
memory operations, i.e., accessing blocks of dozens of con-
tiguous bytes at once [29]. Instead, we propose a simple so-lution via gathering key/value tokens, where only hardware-
friendly dense matrix multiplications are involved. We refer
to this approach as Bi-level Routing Attention (BRA), as it
contains a region-level routing step and a token-level atten-
tion step.
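The following is a compact PyTorch sketch of the two-step procedure described above: region-level top-k routing followed by token-level attention over gathered key/value tokens using only dense matrix multiplications. It treats regions as contiguous chunks of the token sequence for simplicity and is an illustration under assumed shapes, not the official BiFormer implementation.

```python
import torch
import torch.nn.functional as F

def bi_level_routing_attention(q, k, v, num_regions, topk=4):
    """q, k, v: (B, N, D) token features; N must be divisible by num_regions."""
    B, N, D = q.shape
    n = N // num_regions                                  # tokens per region

    qr = q.view(B, num_regions, n, D)
    kr = k.view(B, num_regions, n, D)
    vr = v.view(B, num_regions, n, D)

    # 1) Region-level routing: coarse affinity between mean-pooled region descriptors,
    #    keeping only the top-k most relevant regions for each query region.
    q_region = qr.mean(dim=2)                             # (B, R, D)
    k_region = kr.mean(dim=2)                             # (B, R, D)
    affinity = q_region @ k_region.transpose(-1, -2)      # (B, R, R)
    idx = affinity.topk(topk, dim=-1).indices             # (B, R, topk)

    # 2) Token-level attention: gather key/value tokens of the routed regions
    #    so that only dense, GPU-friendly matrix multiplications are needed.
    idx_exp = idx[..., None, None].expand(-1, -1, -1, n, D)                     # (B, R, topk, n, D)
    k_gather = torch.gather(kr.unsqueeze(1).expand(-1, num_regions, -1, -1, -1), 2, idx_exp)
    v_gather = torch.gather(vr.unsqueeze(1).expand(-1, num_regions, -1, -1, -1), 2, idx_exp)
    k_gather = k_gather.reshape(B, num_regions, topk * n, D)
    v_gather = v_gather.reshape(B, num_regions, topk * n, D)

    attn = F.softmax(qr @ k_gather.transpose(-1, -2) / D**0.5, dim=-1)          # (B, R, n, topk*n)
    out = attn @ v_gather                                                        # (B, R, n, D)
    return out.reshape(B, N, D)

x = torch.randn(2, 64, 32)
y = bi_level_routing_attention(x, x, x, num_regions=8, topk=2)
```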
By using BRA as the core building block, we propose
BiFormer, a general vision transformer backbone that can
be used for many applications such as classification, object
detection, and semantic segmentation. As BRA enables Bi-
Former to attend to a small subset of the most relevant key/-
value tokens for each query in a content-aware manner, our
model achieves a better computation-performance trade-off.
For example, with 4.6G FLOPs computation, BiFormer-T
achieves 83.8% top-1 accuracy on ImageNet-1K classifi-
cation, which is the best as far as we know under similar
computation budgets without training with external data or
distillation [22,38]. The improvements are also consistently
shown in downstream tasks such as instance segmentation
and semantic segmentation.
To summarize, our contributions are as follows. We in-
troduce a novel bi-level routing mechanism to vanilla at-
tention, which enables content-aware sparse patterns in a
query-adaptive manner. Using the bi-level routing atten-
tion as the basic building block, we propose a general vi-
sion transformer named BiFormer. Experimental results on
various computer vision tasks including image classifica-
tion, object detection, and semantic segmentation show that
the proposed BiFormer achieves significantly better perfor-
mances over the baselines under similar model sizes.
|
Zhao_Search-Map-Search_A_Frame_Selection_Paradigm_for_Action_Recognition_CVPR_2023
|
Abstract
Despite the success of deep learning in video under-
standing tasks, processing every frame in a video is com-
putationally expensive and often unnecessary in real-time
applications. Frame selection aims to extract the most in-
formative and representative frames to help a model better
understand video content. Existing frame selection methods
either individually sample frames based on per-frame im-
portance prediction, without considering interaction among
frames, or adopt reinforcement learning agents to find rep-
resentative frames in succession, which are costly to train
and may lead to potential stability issues. To overcome the
limitations of existing methods, we propose a Search-Map-
Search learning paradigm which combines the advantages
of heuristic search and supervised learning to select the best
combination of frames from a video as one entity. By com-
bining search with learning, the proposed method can bet-
ter capture frame interactions while incurring a low infer-
ence overhead. Specifically, we first propose a hierarchical
search method conducted on each training video to search
for the optimal combination of frames with the lowest er-
ror on the downstream task. A feature mapping function is
then learned to map the frames of a video to the representa-
tion of its target optimal frame combination. During infer-
ence, another search is performed on an unseen video to se-
lect a combination of frames whose feature representation is
close to the projected feature representation. Extensive ex-
periments based on several action recognition benchmarks
demonstrate that our frame selection method effectively im-
proves performance of action recognition models, and sig-
nificantly outperforms a number of competitive baselines.
|
1. Introduction
Videos have proliferated online in recent years with the
popularity of social media, and have become a major form
of content consumption on the Internet. The abundant video
data has greatly encouraged the development of deep learn-
ing techniques for video content understanding. As one of
the most important tasks, action recognition aims to iden-
tify relevant actions described in videos, and plays a vital
role to other downstream tasks like video retrieval and rec-
ommendation.
Due to the high computational cost of processing frames
in a video, common practices of action recognition involve
sampling a subset of frames or clips uniformly [31] or
densely [26, 37] from a given video a serve as the input
to a content understanding model. However, since frames
in a video may contain redundant information and are not
equally important, simple sampling methods are often inca-
pable of capturing such knowledge and hence can lead to
sub-optimal action recognition results.
Prior studies attempt to actively select relevant video
frames to overcome the limitation of straightforward sam-
pling, achieving improvements to model performance.
Heuristic methods are proposed to rank and select frames
according to the importance score of each frame/clip cal-
culated by per-frame prediction [15, 19]. Despite the effec-
tiveness, these methods heavily rely on per-frame features,
without considering the interaction or diversity among se-
lected frames. Reinforcement learning (RL) has also been
proposed to identify informative frames by formulating
frame selection as a Markov decision process (MDP) [8, 9,
33,35,36]. However, existing RL-based methods may suffer
from training stability issues and rely on a massive amount
of training samples. Moreover, RL methods make an MDP
assumption that frames are selected sequentially depending
on observations of already selected frames, and thus cannot
adjust prior selections based on new observations.
In this work, we propose a new learning paradigm named
Search-Map-Search (SMS), which directly searches for the
best combination of frames from a video as one entity. SMS
formulates the problem of frame selection from the perspec-
tive of heuristic search in a large space of video frame com-
binations, which is further coupled with a learnable map-
ping function to generalize to new videos and achieve effi-
cient inference.
Specifically, we propose a hierarchical search algorithm
to efficiently find the most favorable frame combinations on
training videos, which are then used as explicit supervision
information to train a feature mapping function that maps
the feature vectors of an input video to the feature vector of
the desirable optimal frame combination. During inference
on an unseen query video, the learned mapping function
projects the query video onto a target feature vector for the
desired frame combination, where another search process
retrieves the actual frame combination that approximates
the target feature vector. By combining search with learn-
ing, the proposed SMS method can better capture frame in-
teractions while incurring a low inference cost.
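A highly simplified sketch of the Search-Map-Search recipe follows (illustrative only; the paper's hierarchical search and feature-mapping network are more involved): search each training video for a low-loss frame combination, learn a mapping from video features to the feature of that combination, and at inference search for the frame set whose pooled feature is closest to the mapped target. The greedy and random-restart searchers below are stand-in assumptions.

```python
import numpy as np

def greedy_search_best_frames(frame_feats, frame_losses, k):
    """Stand-in for the hierarchical search: pick the k frames with the lowest task loss."""
    idx = np.argsort(frame_losses)[:k]
    return idx, frame_feats[idx].mean(axis=0)           # target combination feature

def inference_search(frame_feats, target_feat, k, iters=50, rng=None):
    """Search for a k-frame combination whose mean feature approximates target_feat."""
    rng = rng or np.random.default_rng(0)
    best, best_dist = None, np.inf
    for _ in range(iters):                               # random restarts as a cheap searcher
        cand = rng.choice(len(frame_feats), size=k, replace=False)
        dist = np.linalg.norm(frame_feats[cand].mean(axis=0) - target_feat)
        if dist < best_dist:
            best, best_dist = cand, dist
    return np.sort(best)

# Toy usage: 100 frames with 256-d features and per-frame losses.
feats = np.random.randn(100, 256)
losses = np.random.rand(100)
_, target = greedy_search_best_frames(feats, losses, k=8)   # "Search" on a training video
# A learned mapping function would predict `target` from the video at test time ("Map");
# here we reuse it directly and only demonstrate the final "Search" step.
selected = inference_search(feats, target, k=8)
```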
The effectiveness of SMS is extensively evaluated on
both the long untrimmed action recognition benchmarks,
i.e., ActivityNet [2] and FCVID [18], and the short trimmed
UCF101 task [27]. Experimental results show that SMS
can significantly improve action recognition models and
precisely recognize and produce effective frame selections.
Furthermore, SMS significantly outperforms a range of
other existing frame selection methods for the same num-
ber of frames selected, while it can still achieve performance
higher than existing methods using only 10% of all labeled
video samples for training.
|
Zhao_CDDFuse_Correlation-Driven_Dual-Branch_Feature_Decomposition_for_Multi-Modality_Image_Fusion_CVPR_2023
|
Abstract
Multi-modality (MM) image fusion aims to render fused
images that maintain the merits of different modalities, e.g.,
functional highlight and detailed textures. To tackle the chal-
lenge in modeling cross-modality features and decomposing
desirable modality-specific and modality-shared features,
we propose a novel Correlation-Driven feature Decompo-
sition Fusion ( CDDFuse ) network. Firstly, CDDFuse uses
Restormer blocks to extract cross-modality shallow features.
We then introduce a dual-branch Transformer-CNN feature
extractor with Lite Transformer (LT) blocks leveraging long-
range attention to handle low-frequency global features and
Invertible Neural Networks (INN) blocks focusing on ex-
tracting high-frequency local information. A correlation-
driven loss is further proposed to make the low-frequency
features correlated while the high-frequency features un-
correlated based on the embedded information. Then, the
LT-based global fusion and INN-based local fusion layers
output the fused image. Extensive experiments demonstrate
that our CDDFuse achieves promising results in multiple
fusion tasks, including infrared-visible image fusion and
medical image fusion. We also show that CDDFuse can
boost the performance in downstream infrared-visible se-
mantic segmentation and object detection in a unified bench-
mark. The code is available at https://github.com/
Zhaozixiang1228/MMIF-CDDFuse .
|
1. Introduction
Image fusion is a basic image processing topic that aims
to generate informative fused images by combining the im-
portant information from source ones [47,75,78,79,87]. The
fusion targets include digital [45,84], multi-modal [36,70,88]
*Corresponding author.
Figure 1. Workflow and performance comparison of our proposed
CDDFuse with existing MM image fusion approaches. (a) Existing
MMIF methods vs. CDDFuse. The base and detail encoders are
responsible for extracting global and local features, respectively.
(b) CDDFuse (highlighted in yellow) achieves state-of-the-art
performance on MSRS [57] and RoadScene [71] in eight metrics.
and remote sensing [4, 76, 91] images, etc. The Infrared-
Visible image Fusion (IVF) and Medical Image Fusion (MIF)
are two challenging sub-categories of Multi-Modality Image
Fusion (MMIF), focusing on modeling the cross-modality
features from all the sensors and aggregating them into the
output images. Specifically, IVF targets fused images that
preserve thermal radiation information in the input infrared
images and detailed texture information in the input visible
images. The fused images can avoid the shortcomings of
visible images being sensitive to illumination conditions as
well as the infrared images being noisy and low-resolution.
Downstream recognition tasks, e.g., multi-modal saliency
detection [33, 52, 63], object detection [6, 34, 58] and se-
mantic segmentation [37, 49 –51] can then benefit from the
obtained clearer representations of scenes and objects in IVF
images. Similarly, MIF aims to clearly exhibit the abnor-
malities by fusing multiple medical imaging modalities to
reveal comprehensive information to assist diagnosis and
treatment [21].
Many methods have been developed to tackle the MMIF
challenges in recent years [35, 41, 44, 55, 73, 74, 82]. A
common pipeline that demonstrated promising results uti-
lizes CNN-based feature extraction and reconstruction in an
Auto-Encoder (AE) manner [28, 29, 32, 88]. The workflow
is illustrated in Fig. 1a. However, existing methods have
three main shortcomings. First, the internal working mecha-
nism of CNNs is difficult to control and interpret, causing
insufficient extraction of cross-modality features. For ex-
ample, in Fig. 1a, shared encoders in (I) and (II) cannot
distinguish modality-specific features, while the private en-
coders in (III) ignore features shared by modalities. Second,
the context-independent CNN only extracts local informa-
tion in a relatively small receptive field, which can hardly
extract global information for generating high-quality fused
images [31]. Thus, it is still unclear whether the inductive
biases of CNN are capable enough to extract features for
all modalities. Third, the forward propagation of fusion
networks often causes the loss of high-frequency informa-
tion [42,89]. Our work explores a more reasonable paradigm
to tackle the challenges in feature extraction and fusion.
First, we aim to add correlation restrictions to the ex-
tracted features and limit the solution space, which improves
the controllability and interpretability of feature extraction.
Our assumption is that, in the MMIF task, the input fea-
tures of the two modalities are correlated at low frequen-
cies, representing the modality-shared information, while
the high-frequency feature is irrelevant and represents the
unique characteristics of the respective modalities. Taking
IVF as an example, since infrared and visible images come
from the same scene, the low-frequency information of the
two modalities contains statistical co-occurrences, such as
background and large-scale environmental features. On the
contrary, the high-frequency information of the two modali-
ties is independent, e.g., the texture and detail information in
the visible image and the thermal radiation information in the
infrared image. Therefore, we aim to facilitate the extraction
of modality-specific and modality-shared features by increas-
ing and decreasing the correlation between low-frequency
and high-frequency features, respectively.
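One way to express the correlation constraint described above is a loss that rewards correlation between the two modalities' low-frequency (base) features and penalizes correlation between their high-frequency (detail) features. The sketch below uses a cosine similarity of mean-centered features as a simple correlation proxy; it is an assumption-laden illustration, not the exact CDDFuse decomposition loss.

```python
import torch
import torch.nn.functional as F

def correlation(a, b, eps=1e-6):
    """Cosine similarity of mean-centered, flattened feature maps (a simple correlation proxy)."""
    a = a.flatten(1) - a.flatten(1).mean(dim=1, keepdim=True)
    b = b.flatten(1) - b.flatten(1).mean(dim=1, keepdim=True)
    return F.cosine_similarity(a, b, dim=1, eps=eps).mean()

def decomposition_loss(base_ir, base_vis, detail_ir, detail_vis, eps=1e-6):
    """Encourage modality-shared (base) features to be correlated and
    modality-specific (detail) features to be decorrelated."""
    cc_base = correlation(base_ir, base_vis)         # want this large
    cc_detail = correlation(detail_ir, detail_vis)   # want this close to zero
    return (cc_detail ** 2) / (cc_base + 1.0 + eps)  # the shift keeps the denominator positive

b_ir, b_vis = torch.randn(4, 64, 32, 32), torch.randn(4, 64, 32, 32)
d_ir, d_vis = torch.randn(4, 64, 32, 32), torch.randn(4, 64, 32, 32)
loss = decomposition_loss(b_ir, b_vis, d_ir, d_vis)
```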
Second, from the architectural perspective, Vision Trans-
formers [14, 38, 80] recently shows impressive results in
computer vision, with self-attention mechanism and global
feature extraction. However, Transformer-based methods are
computationally expensive, which leaves room for further
improvement when considering the efficiency-performance
tradeoff of image fusion architectures. Therefore, we pro-pose integrating the advantages of local context extraction
and computational efficiency in CNN and the advantages
of global attention and long-range dependency modeling in
Transformer to complete the MMIF task.
Third, to solve the challenge of losing wanted high-
frequency input information, we adopt the building block of
Invertible Neural networks (INN) [13]. INN was proposed
with invertibility by design, which prevents information loss
through the mutual generation of input and output features
and aligns with our goal of preserving high-frequency fea-
tures in the fused images.
To this end, we proposed the Correlation-Driven feature
Decomposition Fusion (CDDFuse ) model, where modality-
specific and modality-shared feature extractions are realized
by a dual-branch encoder, with the fused image reconstructed
by the decoder. The workflow is shown in Figs. 1a and 2.
Our contributions can be summarized in four aspects:
•We propose a dual-branch Transformer-CNN frame-
work for extracting and fusing global and local features,
which better reflects the distinct modality-specific and
modality-shared features.
•We refine the CNN and Transformer blocks for a better
adaptation to the MMIF task. Specifically, we are the
first to utilize the INN blocks for lossless information
transmission and the LT blocks for trading-off fusion
quality and computational cost.
•We propose a correlation-driven decomposition loss
function to enforce the modality shared/specific feature
decomposition, which makes the cross-modality base
features correlated while decorrelates the detailed high-
frequency features in different modalities.
•Our method achieves leading image fusion performance
for both IVF and MIF. We also present a unified mea-
surement benchmark to justify how the IVF fusion im-
ages facilitate downstream MM object detection and
semantic segmentation tasks.
|
Zhu_Patch-Mix_Transformer_for_Unsupervised_Domain_Adaptation_A_Game_Perspective_CVPR_2023
|
Abstract
Endeavors have been recently made to leverage the vi-
sion transformer (ViT) for the challenging unsupervised
domain adaptation (UDA) task. They typically adopt the
cross-attention in ViT for direct domain alignment. However,
as the performance of cross-attention highly relies on the quality of pseudo labels for targeted samples, it becomes
less effective when the domain gap becomes large. We solve
this problem from a game theory’s perspective with the pro-
posed model dubbed as PMTrans , which bridges source and
target domains with an intermediate domain. Specifically,
we propose a novel ViT-based module called PatchMix that
effectively builds up the intermediate domain, i.e., proba-
bility distribution, by learning to sample patches from both
domains based on the game-theoretical models. This way,
it learns to mix the patches from the source and target do-
mains to maximize the cross entropy (CE), while exploiting
two semi-supervised mixup losses in the feature and label spaces to minimize it. As such, we interpret the process of
UDA as a min-max CE game with three players, including
the feature extractor, classifier, and PatchMix, to find the
Nash Equilibria. Moreover, we leverage attention maps from
ViT to re-weight the label of each patch by its importance,
making it possible to obtain more domain-discriminative
feature representations. We conduct extensive experiments
on four benchmark datasets, and the results show that
PMTrans significantly surpasses the ViT-based and CNN-
based SoTA methods by +3.6% on Office-Home, +1.4% on
Office-31, and +17.7% on DomainNet, respectively. https:
//vlis2022.github.io/cvpr23/PMTrans
|
1. Introduction
Convolutional neural networks (CNNs) have achieved
tremendous success on numerous vision tasks; however, they still suffer from the limited generalization capability to a new
domain due to the domain shift problem [ 50]. Unsupervised
domain adaptation (UDA) tackles this issue by transferring
*These authors contributed equally to this work.
†Corresponding Author.
Figure 1. The classification accuracy of our PMTrans surpasses the
SoTA methods by +17.7% on the most challenging DomainNet
dataset. Note that the target tasks treat one domain of DomainNet
as the target domain and the others as the source domains.
knowledge from a labeled source domain to an unlabeled
target domain [ 30]. A significant line of solutions reduces
the domain gap based on the category-level alignment which
produces pseudo labels for the target samples, such as metric learning [14, 53], adversarial training [12, 17, 34], and optimal
transport [ 44]. Furthermore, several works [ 11,36] explore
the potential of ViT for the non-trivial UDA task. Recently,
CDTrans [ 45] exploits the cross-attention in ViT for direct
domain alignment, buttressed by the crafted pseudo labels
for target samples. However, CDTrans has a distinct limita-
tion: as the performance of cross-attention highly depends
on the quality of pseudo labels, it becomes less effective
when the domain gap becomes large. As shown in Fig. 1,
due to the significant gap between the domain qdr and the
other domains, aligning distributions directly between them
performs poorly.
In this paper, we probe a new problem for UDA: how
to smoothly bridge the source and target domains by con-
structing an intermediate domain with an effective ViT-
based solution? The intuition behind this is that, compared
to directly aligning domains, alleviating the domain gap be-
tween the intermediate and source/target domain can facili-
Figure 2. PMTrans builds up the intermediate domain (green
patches) via a novel PatchMix module by learning to sample
patches from the source (blue patches) and target (pink patches)
domains. PatchMix tries to maximize the CE (↑) between the
intermediate domain and source/target domain, while the feature
extractor and classifier try to minimize it (↓) for aligning domains.
tate domain alignment. Accordingly, we propose a novel and
effective method, called PMTrans (PatchMix Transformer)
to construct the intermediate representations. Overall, PM-
Trans interprets the process of domain alignment as a min-
max cross entropy (CE) game with three players, i.e., the
feature extractor, a classifier, and a PatchMix module, to find
the Nash Equilibria. Importantly, the PatchMix module is
proposed to effectively build up the intermediate domain, i.e.,
probability distribution, by learning to sample patches from
both domains with weights generated from a learnable Beta
distribution based on the game-theoretical models [ 1,3,28],
as shown in Fig. 2. That is, we aim to learn to mix patches
from two domains to maximize the CE between the inter-
mediate domain and source/target domain. Moreover, two
semi-supervised mixup losses in the feature and label spaces are proposed to minimize the CE. Interestingly, we conclude
that the source and target domains are aligned if mixing
the patch representations from two domains is equivalent
to mixing the corresponding labels . Therefore, the domain
discrepancy can be measured based on the CE between the
mixed patches and mixed labels. Eventually, the three play-
ers have no incentive to change their parameters to disturb
CE, meaning the source and target domains are well aligned.
Unlike existing mixup methods [ 38,47,49], our proposed
PatchMix subtly learns to combine the element-wise global
and local mixture by mixing patches from the source and tar-
get domains for ViT-based UDA. Moreover, we leverage the
class activation mapping (CAM) from ViT to allocate the se-
mantic information to re-weight the label of each patch, thus
enabling us to obtain more domain-discriminative features.
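For illustration only, the following is a minimal sketch of the patch-mixing step described above (not the authors' implementation): the fixed Beta parameters, the tensor shapes, and the omission of the learnable Beta distribution and the CAM-based label re-weighting are all simplifying assumptions.

```python
import torch

def patchmix(src_patches, tgt_patches, src_labels, tgt_pseudo, alpha=1.0, beta=1.0):
    """Sketch: build intermediate-domain samples by mixing patches across domains.

    src_patches, tgt_patches: (B, N, D) patch embeddings from source/target images.
    src_labels, tgt_pseudo:   (B, C) source labels and target pseudo labels.
    In the paper the Beta parameters are learned and patch labels are
    re-weighted with CAM scores; both are omitted here.
    """
    B, N, _ = src_patches.shape
    lam = torch.distributions.Beta(alpha, beta).sample((B,))        # per-image mix ratio
    keep = (torch.rand(B, N) < lam.unsqueeze(1)).float()            # 1 = take the source patch
    mixed = keep.unsqueeze(-1) * src_patches + (1.0 - keep).unsqueeze(-1) * tgt_patches
    lam_eff = keep.mean(dim=1, keepdim=True)                        # realized source ratio
    mixed_labels = lam_eff * src_labels + (1.0 - lam_eff) * tgt_pseudo
    return mixed, mixed_labels
```

The mixed patches and labels can then be fed to the feature extractor and classifier, whose cross entropy against the source/target domains defines the min-max game described above.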
We conduct experiments on four benchmark datasets, in-
cluding Office-31 [ 33], Office-Home [ 40], VisDA-2017 [ 32],
and DomainNet [ 31]. The results show that the performance
of PMTrans significantly surpasses that of the ViT-based
[36,45,46] and CNN-based SoTA methods [18,29,35] by +3.6% on Office-Home, +1.4% on Office-31, and +17.7%
on DomainNet (See Fig. 1), respectively.
Our main contributions are four-fold: ( I) We propose a
novel ViT-based UDA framework, PMTrans, to effectively
bridge source and target domains by constructing the inter-
mediate domain. ( II) We propose PatchMix, a novel module
to build up the intermediate domain via the game-theoretical
models. ( III) We propose two semi-supervised mixup losses
in the feature and label spaces to reduce CE in the min-max
CE game. ( IV) Our PMTrans surpasses the prior methods
by a large margin on three benchmark datasets.
|
Zhou_UDE_A_Unified_Driving_Engine_for_Human_Motion_Generation_CVPR_2023
|
Abstract
Generating controllable and editable human motion se-
quences is a key challenge in 3D Avatar generation. It has
been labor-intensive to generate and animate human mo-
tion for a long time until learning-based approaches have
been developed and applied recently. However, these ap-
proaches are still task-specific or modality-specific [1] [6]
[5] [18]. In this paper, we propose “UDE”, the first uni-
fied driving engine that enables generating human motion
sequences from natural language or audio sequences (see
Fig. 1). Specifically, UDE consists of the following key
components: 1) a motion quantization module based on
VQVAE that represents continuous motion sequence as dis-
crete latent code [33], 2) a modality-agnostic transformer
encoder [34] that learns to map modality-aware driving
signals to a joint space, and 3) a unified token transformer
(GPT-like [24]) network to predict the quantized latent code
index in an auto-regressive manner, and 4) a diffusion motion
decoder that takes as input the motion tokens and decodes
them into motion sequences with high diversity. We evaluate
our method on HumanML3D [8] and AIST+ [19] bench-
marks, and the experiment results demonstrate our method
achieves state-of-the-art performance. Project website:
https://zixiangzhou916.github.io/UDE/
|
1. Introduction
Synthesizing realistic human motion sequences has been a pillar component in many real-world applications. Creating even a single motion sequence is labor-intensive and tedious and requires professional skills, which makes it hard to democratize for broad content generation. Recently, the emergence of mo-
tion capture and pose estimation [15] [38] [27] [36] have
made it possible to synthesize human motion sequences
from VTubers or source videos thanks to the advances of
deep learning. Although these approaches have simplified
the creation of motion sequences, actors or highly corre-
lated videos are still necessary, thus limiting the scalability
as well as the controllability.
The development of multi-modal machine learning paves
a new way to human motion synthesis [1] [6] [8] [12] [17]
[2]. For example, natural language descriptions could be
used to drive human motion sequences directly [1] [6] [8].
The language description is a straightforward representa-
tion for human users to control the synthesis. It provides
a semantic clue of what the synthesized motion sequence
should look like, and the editing could be conducted by sim-
ply changing the language description. Language, however,
does not cover the full domain of human motion sequences.
In terms of dancing motion synthesis, for example, the nat-
ural language is not sufficient to describe the dance rhythm.
For such scenarios, audio sequences are used as guidance for motion synthesis, so that the synthesized motion matches the music beat rhythmically and follows the choreography style.
However, these approaches are studied separately in prior
works. In many real-world applications, the characters are
likely to perform a complex motion sequence composed of
both rhythmic dances from music and certain actions described by language, and smooth transitions between mixed-modality inputs become vitally important. As a result, multi-modal motion consistency becomes an urgent issue to solve if siloed modality-specific models are employed.
To address the above-mentioned problems, in this work we propose a Unified Driving Engine (UDE), which unifies human motion generation driven by natural language and music clips in one shared model. Our model consists of
four key components. First, we train a codebook using
VQ-VAE. For the codebook, each code represents a certain
pattern of the motion sequence. Second, we introduce a
Modality-Agnostic Transformer Encoder ( MATE ). It takes
the input of different modalities and transforms them into
sequential embedding in one joint space. The third com-
ponent is a Unified Token Transformer ( UTT ). We feed
it with sequential embedding obtained by MATE and pre-
dict the motion token sequences in an auto-regressive man-
ner. The fourth component is a Diffusion Motion Decoder
(DMD ). Unlike recent modality-specific works [30] [37],
our DMD is modality-agnostic. Given the motion token se-
quences, DMD decodes them to motion sequences in con-
tinuous space by the reversed diffusion process.
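As a rough sketch of the first component only (standard VQ-VAE practice rather than the authors' code), the quantization that turns continuous motion features into the discrete tokens consumed by UTT can be written as follows; the tensor shapes are illustrative assumptions.

```python
import torch

def quantize_motion(latents, codebook):
    """Nearest-code quantization of per-frame motion latents.

    latents:  (T, D) continuous features from the motion encoder.
    codebook: (K, D) learnable code vectors.
    Returns the quantized latents and the discrete token indices that the
    unified token transformer later predicts auto-regressively.
    """
    dists = torch.cdist(latents, codebook)        # (T, K) pairwise distances
    indices = dists.argmin(dim=-1)                # (T,) token ids
    quantized = codebook[indices]                 # (T, D) looked-up codes
    # Straight-through estimator: copy gradients back to the encoder.
    quantized = latents + (quantized - latents).detach()
    return quantized, indices
```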
We summarize our contributions in four folds: 1) We
model the continuous human motion generation problem as
a discrete token prediction problem. 2) We unify the text-
driven and audio-driven motion generation into one sin-
gle unified model. By learning MATE , we can map in-
put sequences of different modalities into joint space. Then
we can predict motion tokens with UTT regardless of the
modality of input. 3) We propose DMD to decode the mo-
tion tokens to motion sequence. Compared to the decoder
in VQ-VAE, which generates deterministic samples, our
DMD can generate samples with high diversity. 4) We eval-
uate our method extensively and the results suggest that our
method outperforms existing methods in both text-driven
and audio-driven scenarios. More importantly, our experi-
ment also suggests that our UDE enables smooth transition
between mixed modality inputs.
|
Zhou_Non-Contrastive_Learning_Meets_Language-Image_Pre-Training_CVPR_2023
|
Abstract
Contrastive language-image pre-training (CLIP) serves
as a de-facto standard to align images and texts. Nonethe-
less, the loose correlation between images and texts of web-
crawled data renders the contrastive objective data ineffi-
cient and craving for a large training batch size. In this
work, we explore the validity of non-contrastive language-
image pre-training (nCLIP), and study whether nice proper-
ties exhibited in visual self-supervised models can emerge.
We empirically observe that the non-contrastive objective
benefits representation learning while noticeably underper-
forming under zero-shot recognition. Based on the above
study, we further introduce xCLIP, a multi-tasking frame-
work combining CLIP and nCLIP, and show that nCLIP
aids CLIP in enhancing feature semantics. The synergy
between two objectives lets xCLIP enjoy the best of both
worlds: superior performance in both zero-shot transfer
and representation learning. Systematic evaluation is con-
ducted spanning a wide variety of downstream tasks in-
cluding zero-shot classification, out-of-domain classifica-
tion, retrieval, visual representation learning, and textual
representation learning, showcasing a consistent perfor-
mance gain and validating the effectiveness of xCLIP. The
code and pre-trained models will be publicly available at
https://github.com/shallowtoil/xclip.
|
1. Introduction
Language-image pre-training which simultaneously
learns textual and visual representation from large-scale
image-text pairs has revolutionized the field of representa-
tion learning [25, 59], vision-language understanding [22],
and text-to-image generation [61]. Compared to traditional
visual models, language-instilled ones intrinsically inherit
the capability of zero-shot or few-shot learning prominently
demonstrated by large language models such as GPT-3 [12].
The precursor system, Contrastive Language- Image Pre-
Training [59] ( CLIP ) that explicitly aligns the projected
features of two modalities, has demonstrated surprising ca-
pabilities of zero-shot, representation learning, and robust-
ness, being applied to a wide range of fields [32,52,60,70].
Figure 1. Architecture comparison between CLIP and nCLIP.
We take base-size encoders for an instance. CLIP discriminates
instances within a batch using 512-dim projected embeddings.
nCLIP projects each modality into a 32768-dim probability dis-
tribution as the pseudo-label to supervise the prediction from the
other modality. Darker blocks of nCLIP depict a higher response
for cluster distribution, signifying the clusters to which text or im-
age instances may belong. xCLIP is a multi-tasking framework
with both CLIP and nCLIP.
To learn from noisy web-crawled image-text pairs, CLIP
adopts a formulation of the contrastive objective, where the
image and text within a pair are considered as unique in-
stances and are encouraged to be discriminated from all the
other negative instances. However, web-crawled image-text
pairs [65,69] are usually loosely correlated in the sense that
one caption (image) can match reasonably with multiple im-
ages (captions) besides the ground-truth one, as statistically
shown in [74]. Hence, it is inaccurate and data-inefficient
for representation learning to neglect other sensible matches
and overlook the semantics hidden inside the textual de-
scription. This is also solidified by the undesirable dis-
criminative ability or transferring performance of the visual
encoder pre-trained under the contrastive objective, as sug-
gested in [78]. Mining semantics from plentiful concepts
appearing in captions, therefore, solicits further exploration
beyond the vanilla contrastive objective.
Previously, some works resort to mining nearest neigh-
bor positive samples via intra-modality similarity [50, 74]
but require extra storage for auxiliary modules, i.e., the
teacher network [74] or the memory bank [50]. Other works
conduct multi-tasking with image-label supervision [57,78,
80], transforming the captions into tag format or the im-
age tag into the captioning format for unified contrastive
learning. Notwithstanding, such practice is still unable to
account for the loose assignment within the image-text cap-
tioning pairs. Intuitively, a caption for an image usually
identifies existing objects within the image, which can be
captured by a probabilistic distribution estimating how the
text is assigned to one or multiple object clusters [13, 14].
Such estimation depicts the semantic meanings for its con-
tained visual contents, and thus can be leveraged as a
pseudo-label to guide the learning of the visual encoder.
Inspired by the superiority of visual self-supervised
learning (SSL) models pre-trained with the non-contrastive
objective [14, 15], we explore whether the non-contrastive
objective can be used across modalities for pre-training
language-image models, and whether nice properties dis-
played on visual SSL models can be inherited. To this end,
we follow the same setup as CLIP except for the objective,
and study non-Contrastive Language- Image Pre-Training
(nCLIP ). For an image-text pair, we use the estimation of
the textual (visual) distribution as the target to supervise
the visual (textual) distribution, measured by cross-entropy.
Additional regularizers are applied to avoid trivial solutions.
Schematic comparison between CLIP and nCLIP is shown
in Fig. 1. Theoretically, such formulation takes one modal-
ity and the cluster centroid that the other modality belongs
to as the positive samples to contrast [14], as opposed to
direct features of two modalities as in contrastive learning,
such that a single image is tolerated to be aligned with mul-
tiple captions and concepts. Based on the above formula-
tion, we conduct a systematic study in terms of zero-shot
transfer and representation learning. We empirically ob-
serve that, while nCLIP demonstrates desirable represen-
tation learning capability for both modalities, it underper-
forms prominently in zero-shot classification and retrieval
tasks where models pre-trained with negative samples natu-
rally prevail.
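A minimal sketch of such a cross-modal non-contrastive objective is shown below, assuming K-dimensional projection heads; the regularizers used to avoid trivial solutions are omitted, so this is illustrative rather than the exact loss.

```python
import torch.nn.functional as F

def nclip_loss(img_logits, txt_logits):
    """Cross-entropy between the cluster distributions of the two modalities.

    img_logits, txt_logits: (B, K) projections of image/text features onto
    K cluster dimensions. Each modality's (detached) softmax distribution
    serves as the pseudo-label supervising the other modality's prediction.
    """
    img_p, txt_p = F.softmax(img_logits, dim=-1), F.softmax(txt_logits, dim=-1)
    img_logp, txt_logp = F.log_softmax(img_logits, dim=-1), F.log_softmax(txt_logits, dim=-1)
    loss_i = -(txt_p.detach() * img_logp).sum(dim=-1).mean()   # text supervises image
    loss_t = -(img_p.detach() * txt_logp).sum(dim=-1).mean()   # image supervises text
    return 0.5 * (loss_i + loss_t)
```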
Seeking for unique strengths of both worlds, we further
perform multi-tasking of CLIP and nCLIP, short as xCLIP ,
and seek synergy between two distinct pre-training objec-
tives. With the bearable add-on of computational resources
(e.g.,∼27% memory-wise and ∼30% time-wise), xCLIP
achieves consistent performance gain compared to CLIP on
a wide range of tasks spanning from zero-shot classification,
retrieval, linear probing, and fine-tuning, etc. An extensive
study with different pre-training datasets, evaluation met-
rics, and optimization configurations is conducted to val-
idate xCLIP’s effectiveness in mining semantics and im-
proving data efficiency. Particularly, the base-size model
pre-trained using xCLIP under 35-million publicly avail-
able image-text pairs achieves a performance gain of 3.3%
and 1.5% on an average of 27 classification tasks [59], in
terms of zero-shot classification and linear probing accu-
racy, respectively. Performance gain is also valid in terms
of zero-shot retrieval, semi-supervised learning, and fine-tuning, with 3.7 points of R@1 on Flickr30K [58], 1.7% accu-
accuracy on 1% subset of ImageNet [24], and 0.3% accu-
racy on ImageNet, respectively.
|
Zhao_Exploring_Incompatible_Knowledge_Transfer_in_Few-Shot_Image_Generation_CVPR_2023
|
Abstract
Few-shot image generation (FSIG) learns to generate di-
verse and high-fidelity images from a target domain using a
few ( e.g., 10) reference samples. Existing FSIG methods se-
lect, preserve and transfer prior knowledge from a source
generator (pretrained on a related domain) to learn the tar-
get generator. In this work, we investigate an underexplored
issue in FSIG, dubbed as incompatible knowledge transfer,
which would significantly degrade the realisticness of syn-
thetic samples. Empirical observations show that the issue
stems from the least significant filters from the source gen-
erator. To this end, we propose knowledge truncation to
mitigate this issue in FSIG, which is a complementary op-
eration to knowledge preservation and is implemented by a
lightweight pruning-based method. Extensive experiments
show that knowledge truncation is simple and effective, con-
sistently achieving state-of-the-art performance, including challenging setups where the source and target domains are
more distant. Project Page: yunqing-me.github.io/RICK.
|
1. Introduction
Over recent years, deep generative models [14,15,21,30,
52, 78] have made tremendous progress, enabling many in-
triguing tasks such as image generation [5, 27–29], image
editing [45, 63, 68, 80], and data augmentation [6, 12, 59].
In spite of their remarkable success, most research on gen-
erative models has been focusing on setups with sizeable
training datasets [24, 26, 59, 60, 69, 74], limiting its applica-
tions in many domains where data collection is difficult or
expensive [12, 13, 46] ( e.g., medicine).
To address such problems, FSIG has been proposed re-
cently [41, 50], which learns to generate images with ex-
tremely few reference samples ( e.g., 10 samples) from a tar-
get domain. In these regimes, learning a generator to cap-
ture the underlying target distribution is an undetermined
problem that requires some prior knowledge. The majority
of existing state-of-the-art (SOTA) FSIG methods rely on
transfer learning approaches [2, 23, 65, 71] to exploit prior
knowledge ( e.g., a source generator) learned from abundant
data of a different but related source domain, and then trans-
fer suitable source knowledge to learn the target generator
with fine-tuning [9, 26, 41, 48–50, 64, 65, 74, 75, 77]. Differ-
ent techniques have been proposed to effectively preserve
useful source knowledge, such as freezing [48], regulariza-
tion [41, 50, 77] and modulation [75] (details in Sec. 2).
Incompatible knowledge transfer. Despite the impres-
sive improvement achieved by different knowledge preser-
vation approaches [41,48,50,75,77], in this work, we argue
that preventing incompatible knowledge transfer is equally
crucial. This is revealed through a carefully designed in-
vestigation, where such incompatible knowledge transfer
is manifested in the presence of unexpected semantic fea-
tures. These features are inconsistent with the target do-
main, thereby degrading the realisticness of synthetic sam-
ples. As illustrated in Figure 1, trees and buildings are in-
compatible with the domain of Sailboat (as can be observed
by inspecting the 10 reference samples). However, they
appear in the synthetic images when applying the existing
SOTA methods [41, 75] with a source generator trained on
Church . This shows that the existing methods cannot effec-
tively prevent the transfer of incompatible knowledge.
Knowledge truncation. Based on our observations, we
propose Removing In-Compatible Knowledge (RICK), a
lightweight filter-pruning based method to remove filters
that encode incompatible knowledge ( i.e., filters with least
estimated importance for adaptation) during FSIG adapta-
tion. While filter pruning has been applied extensively to
achieve compact deep networks with reduced computation
[19,57,66], its application to prevent transfer of incompati-
ble knowledge is underexplored. We note that our proposed
knowledge truncation and pruning of incompatible filters
are orthogonal and complementary with existing knowledge
preservation methods in FSIG. In this way, our method ef-
fectively removes the incompatible knowledge compared to
prior works, and achieves noticeably improved quality ( e.g.,
FID [20]) of generated images.
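The pruning operation itself is lightweight; a simplified sketch is shown below, where the per-filter importance scores are assumed to be given (how they are estimated during adaptation is the method's design choice and is not shown).

```python
import torch

@torch.no_grad()
def truncate_filters(conv, importance, prune_ratio=0.1):
    """Zero out the filters least important for adapting to the target domain.

    conv:       a torch.nn.Conv2d layer of the source generator.
    importance: (out_channels,) estimated importance of each filter.
    """
    num_prune = int(conv.out_channels * prune_ratio)
    if num_prune == 0:
        return
    drop = importance.argsort()[:num_prune]   # indices of the least important filters
    conv.weight[drop] = 0.0
    if conv.bias is not None:
        conv.bias[drop] = 0.0
```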
Our contributions can be summarized as follows:
• We explore incompatible knowledge transfer in FSIG, reveal that SOTA methods fail to handle this issue, investigate the underlying cause, and disclose the inadequacy of fine-tuning in removing incompatible knowledge.
• We propose knowledge truncation to alleviate incom-
patible knowledge transfer, and realize it with a lightweight
filter-pruning based method.
• Extensive experiments show that our method effec-
tively removes incompatible knowledge and consistently
improves the generative quality, including challenging se-
tups where source and target domains are dissimilar.
|
Zhao_Towards_Better_Stability_and_Adaptability_Improve_Online_Self-Training_for_Model_CVPR_2023
|
Abstract
Unsupervised domain adaptation (UDA) in semantic
segmentation transfers the knowledge of the source domain
to the target one to improve the adaptability of the seg-
mentation model in the target domain. The need to access
labeled source data makes UDA unable to handle adap-
tation scenarios involving privacy, property rights protec-
tion, and confidentiality. In this paper, we focus on unsu-
pervised model adaptation (UMA), also called source-free
domain adaptation, which adapts a source-trained model
to the target domain without accessing source data. We find
that the online self-training method has the potential to be
deployed in UMA, but the lack of source domain loss will
greatly weaken the stability and adaptability of the method.
We analyze two reasons for the degradation of online self-
training, i.e. inopportune updates of the teacher model and
biased knowledge from the source-trained model. Based
on this, we propose a dynamic teacher update mechanism
and a training-consistency based resampling strategy to im-
prove the stability and adaptability of online self-training.
On multiple model adaptation benchmarks, our method ob-
tains new state-of-the-art performance, which is compara-
ble or even better than state-of-the-art UDA methods. The
code is available at https://github.com/DZhaoXd/DT-ST.
|
1. Introduction
Unsupervised Domain Adaptation (UDA) has received
extensive attention on semantic segmentation tasks [49, 59,
60, 63], which transfers the knowledge in the source do-
mains ( e.g. synthetic scene) to the target ones ( e.g. real
scene). UDA in semantic segmentation aims to alleviate
the dependence of deep neural network-based models on
dense annotations [18,46,61] and improve their generalization ability to target domains [5, 12, 15].
This work is supported by the National Natural Science Foundation of China (No.62271377, No.62201407), the Key Research and Development Program of Shaanxi (No.2021ZDLGY01-06, No.2022ZDLGY01-12), the National Key R&D Program of China (No. 2021ZD0110404), the China Postdoctoral Science Foundation (No. 2022M722496), and the Foreign Scholars in University Research and Teaching Program's 111 Project (B07048).
Figure 1. Under the unsupervised model adaptation (UMA) setting, the mIoU score (%) of different methods on the validation set throughout training in the GTA5 → Cityscapes adaptation task. The dashed lines represent the self-training UDA methods, and the solid lines represent the UMA methods.
However, in pro-
prietary, privacy, or profit-related concerns, source domain
data is often unavailable, which presents new challenges for
UDA [9, 27, 55]. To this end, the setting of Unsupervised
Model Adaptation (UMA) is proposed [6,9,21,30,35], aim-
ing to adapt the source-trained model to the unlabeled target
domain without using source domain data.
In UMA, the knowledge in the source-trained model
becomes the only available supervision signal, making
self-training on pseudo-labels the mainstream in the field.
Most existing UMA methods [26, 36, 57] adopt offline self-
training methods, which iteratively update the pseudo-labels and retrain the models. Although some improve-
ments have been made, iterative self-training requires ex-
pert intervention [1,59], as ill-suited rounds and termination
often make it under-adapted.
The recently proposed online self-training (ONST)
methods [1, 31, 59] in UDA avoid the iterative training by
online co-evolving pseudo labels, showing great potential.
Then can ONST be applied to UMA scenarios without ac-
cessing source data? We deploy the state-of-the-art ONST
methods ProDA [1], SAC [59] and CPST [31] to UMA
and draw the mIoU score curve on the validation set dur-
ing training, as shown in Fig. 1. These ONST methods
(dashed line) achieve more competitive performance than
existing UMA methods (solid line). Nevertheless, taking a
closer look at the curves in Fig. 1, these ONST methods
present different degrees of degradation and unstable adap-
tation process. Besides, their best performance in UMA de-
creased by 4% 5%mIoU scores on average than in UDA
(See Table 1 and 2 in detail). Consequently, we conclude
that existing ONST methods suffer from impaired stability
and adaptability when applied to UMA.
This paper is committed to improving the stability and
adaptability of ONST methods in UMA. To begin with, we
explore two reasons for the poor stability and adaptabil-
ity of ONST in UMA. (1) The inopportune update of the
teacher model causes the failure of co-evolution because the
teacher model will continuously aggregate unevolved stu-
dents. Concretely, as the teacher becomes the only supervi-
sor in UMA, rapid updating will make the student lose the
direction of evolution, and slow updating will make the stu-
dent overfit the historical supervision, both of which diminish the benefit of updating the teacher. (2) The bias towards
minority categories in the source-trained model results in
insufficient adaptation to those minorities as the bias is eas-
ily amplified in ONST, even with heuristic [1] or prototype
thresholding [59] being set.
Next, we present the explored solutions. For (1), we find
that the student model’s performance on historical samples
during evolution can provide feedback on whether the student has
evolved. Consequently, we propose a Dynamic Teacher
Update (DTU) mechanism. DTU explores two feedback
signals by information entropy [13] and soft neighborhood
density [45], which can assess the evolutionary state of stu-
dents. DTU then dynamically controls the update inter-
val of the teacher model according to the students’ feed-
back to aggregate more evolved students. For (2), we find
that resampling minority categories can effectively alleviate
the bias towards minorities in UMA. However, most exist-
ing resampling strategies [10, 11, 20, 50] rely on the source
data and cannot apply in UMA. To this end, we propose
a Training-Consistency based Resampling (TCR) strategy.
TCR adaptively estimates the biased categories from the
being-adapted model and selects reliable samples in biased
categories as resampling candidates. Through these efforts,
our method greatly improves the stability and adaptability
of ONST in UMA, as shown in Fig. 1 (red solid line). We
refer to our method as DT-ST, as DTU and TCR play critical
parts in online Self-Training under UMA.
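A bare-bones sketch of the dynamic-update idea is given below; the feedback signal is abstracted into a single feedback_score (the paper derives it from information entropy and soft neighborhood density), and the gating rule is a simplified stand-in for DTU rather than its exact schedule.

```python
import copy
import torch

class DynamicTeacher:
    """EMA teacher that only aggregates the student when it has evolved."""

    def __init__(self, student, momentum=0.999):
        self.teacher = copy.deepcopy(student).eval()
        self.momentum = momentum
        self.best_score = float("-inf")

    @torch.no_grad()
    def maybe_update(self, student, feedback_score):
        # Skip the update if the student shows no evolution on historical samples.
        if feedback_score <= self.best_score:
            return False
        self.best_score = feedback_score
        for t_p, s_p in zip(self.teacher.parameters(), student.parameters()):
            t_p.mul_(self.momentum).add_(s_p, alpha=1.0 - self.momentum)
        return True
```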
Sufficient experiments show that DT-ST further exploits
the potential of online self-training in UMA, towards bet-
ter stability and adaptability. Moreover, DT-ST obtains new
state-of-the-art performance on different UMA benchmarks and achieves comparable or even better performance than
advanced UDA methods.
|
Zhao_DiffSwap_High-Fidelity_and_Controllable_Face_Swapping_via_3D-Aware_Masked_Diffusion_CVPR_2023
|
Abstract
In this paper, we propose DiffSwap, a diffusion model
based framework for high-fidelity and controllable face
swapping. Unlike previous work that relies on carefully de-
signed network architectures and loss functions to fuse the
information from the source and target faces, we reformu-
late the face swapping as a conditional inpainting task, per-
formed by a powerful diffusion model guided by the desired
face attributes (e.g., identity and landmarks). An impor-
tant issue that makes it nontrivial to apply diffusion mod-
els to face swapping is that we cannot perform the time-
consuming multi-step sampling to obtain the generated im-
age during training. To overcome this, we propose a mid-
point estimation method to efficiently recover a reasonable
diffusion result of the swapped face with only 2 steps, which
enables us to introduce identity constraints to improve the
face swapping quality. Our framework enjoys several fa-
vorable properties more appealing than prior arts: 1) Con-
trollable. Our method is based on conditional masked diffu-
sion on the latent space, where the mask and the conditions
can be fully controlled and customized. 2) High-fidelity.
The formulation of conditional inpainting can fully exploit
the generative ability of diffusion models and can preserve
the background of target images with minimal artifacts. 3)
Shape-preserving. The controllability of our method en-
ables us to use 3D-aware landmarks as the condition during
generation to preserve the shape of the source face. Ex-
tensive experiments on both FF++ and FFHQ demonstrate
that our method can achieve state-of-the-art face swapping
results both qualitatively and quantitatively.
|
1. Introduction
There has been growing interest in face swapping tech-
nology from both vision and graphics communities [1, 2, 6,
21,36,39] because of its broad applications in creating digi-
tal twins, making films, protecting privacy, etc. The goal of
face swapping is to transfer the identity of the source face to
a target image or a video frame while keeping the attributes
(e.g., pose, expression, background) unchanged.
There are two essential steps in realizing high-quality
face swapping: encoding the identity information of the
source face effectively and blending identity and attributes
from different images seamlessly. Early work [4,24] on face
swapping adopts 3D models [5] to represent the source face
and directly replace the reconstructed faces in the target im-
age based on 3D structural priors, leading to recognizable
artifacts. The development of generative adversarial net-
works (GAN) [12] provides a strong tool to generate photo-
realistic face images. Many recent methods [2, 6, 21, 39]
perform face swapping by extracting the identity feature
from the source image and then injecting it into the gen-
erative models powered by adversarial training. However,
these methods tend to make minor modifications to the tar-
get image, which may fail in totally transferring the identity
information when the face shapes of the source and the tar-
get face largely differ.
Very recently, diffusion-based models (DM) [22, 25, 26]
have exhibited high customizability for various conditions
and impressive power in generating images with high res-
olution and complex scenes. It is natural to ask: whether
the strong generation ability of diffusion models can benefit
face swapping? However, we find it is nontrivial to apply
diffusion models to the task of face swapping. Since there is
no ground-truth data pair for face swapping, face swapping
models are usually trained in a weakly-supervised manner,
where several losses about image fidelity, identity, and fa-
cial attributes are imposed to guide the training. These su-
pervisory signals can be easily added to GAN-based models
but it is difficult for DMs. Different from previous genera-
tive models like GANs [12, 16] and VAEs [14, 19], DMs
learn a denoising autoencoder to gradually recover the data
density step-by-step. Although the autoencoder can be ef-
ficiently learned by performing score matching [15] at an
arbitrary step during training, image generation using an al-
ready trained DM requires executing the autoencoder se-
quentially for a large number of steps (typically, 200 steps),
which is computationally expensive.
To tackle these challenges, we propose the first diffusion
model based face swapping framework, which can produce
high-fidelity swapped faces with high controllability. Figure 2
shows the overview of our method. Different from existing
methods [1,6,21,36] that modify the target face to match the
identity of the source face, we reformulate face swapping
as a conditional inpainting task guided by the identity fea-
ture and facial landmarks. Our diffusion model is learned
to generate a face that shares the same identity as the source
face and is spatially aligned with the target face. In order to
introduce identity constraints during training, we propose
a midpoint estimation method that can efficiently generate
swapped faces with only 2 steps. Our framework is by de-
sign highly controllable, where both the conditioned land-
mark and the inpainting mask can be customized during in-
ference. Thanks to this property, we propose the 3D-aware
masked diffusion where we perform the inpainting inside
the 3D-aware mask conditioned on the 3D-aware landmark
that explicitly enforces the shape consistency between the
source face and the swapped face.
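At its core, the conditional-inpainting formulation only re-generates the masked face region while copying the target latent elsewhere; a minimal sketch of this composition step is given below, with names and shapes assumed for illustration and the identity/landmark conditioning left to the denoiser.

```python
def masked_diffusion_input(z_target, z_noisy, mask):
    """Compose the denoiser input for face swapping as conditional inpainting.

    z_target: (B, C, H, W) latent of the target image (kept outside the mask).
    z_noisy:  (B, C, H, W) noised latent at the current diffusion step.
    mask:     (B, 1, H, W) 3D-aware face region to re-generate (1 = swap area).
    The denoiser is additionally conditioned on the source identity feature
    and 3D-aware landmarks, which is not shown here.
    """
    return mask * z_noisy + (1.0 - mask) * z_target
```

Because the region outside the mask is never re-synthesized, the target background is preserved with minimal artifacts.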
We conducted extensive experiments on FaceForen-
sics++ [27] and FFHQ [16] to verify the effectiveness of
our model both qualitatively and quantitatively. On the FF++ dataset, our method outperforms previous methods in both ID retrieval (98.54%) and FID (2.16), while achieving comparable results on pose error and expression error. Qualitative results show that our method can generate high-fidelity swapped
faces that can better preserve the source face shape than the
previous method. Besides, we also demonstrate the scala-
bility and controllability of our method. Our model can be
easily extended to higher resolution, such as 512×512, with
affordable extra computational costs and allows region-
customizable face swapping by controlling the inpainting
mask. Our results demonstrate that DiffSwap is a very
promising face swapping framework that is distinct from
the existing methods and enjoys high fidelity, controllabil-
ity, and scalability.
|
Zhu_TryOnDiffusion_A_Tale_of_Two_UNets_CVPR_2023
|
Abstract
Given two images depicting a person and a garment
worn by another person, our goal is to generate a visu-
alization of how the garment might look on the input per-
son. A key challenge is to synthesize a photorealistic detail-
preserving visualization of the garment, while warping the
garment to accommodate a significant body pose and shape
change across the subjects. Previous methods either fo-
cus on garment detail preservation without effective pose
and shape variation, or allow try-on with the desired shape
and pose but lack garment details. In this paper, we pro-
pose a diffusion-based architecture that unifies two UNets
(referred to as Parallel-UNet), which allows us to preserve
garment details and warp the garment for significant pose
and body change in a single network. The key ideas behind
Parallel-UNet include: 1) garment is warped implicitly via
a cross attention mechanism, 2) garment warp and person
blend happen as part of a unified process as opposed to a se-
quence of two separate tasks. Experimental results indicate
that TryOnDiffusion achieves state-of-the-art performance
both qualitatively and quantitatively.
|
1. Introduction
Virtual apparel try-on aims to visualize how a garment
might look on a person based on an image of the person and
an image of the garment. Virtual try-on has the potential
to enhance the online shopping experience, but most try-on
methods only perform well when body pose and shape vari-
ation is small. A key open problem is the non-rigid warping
of a garment to fit a target body shape, while not introducing
distortions in garment patterns and texture [5, 12, 41].
When pose or body shape vary significantly, garments
need to warp in a way that wrinkles are created or flat-
tened according to the new shape or occlusions. Related
works [1,5,23] have been approaching the warping problem
via first estimating pixel displacements, e.g., optical flow,
followed by pixel warping, and postprocessing with percep-
tual loss when blending with the target person. Fundamen-
tally, however, the sequence of finding displacements, warp-
ing, and blending often creates artifacts, since occluded
parts and shape deformations are challenging to model ac-
curately with pixel displacements. It is also challenging to
remove those artifacts later in the blending stage even if it is
done with a powerful generative model. As an alternative,
TryOnGAN [24] showed how to warp without estimating
displacements, via a conditional StyleGAN2 [21] network
and optimizing in generated latent space. While the gener-
ated results were of impressive quality, outputs often lose
details especially for highly patterned garments due to the
low representation power of the latent space.
In this paper, we present TryOnDiffusion that can handle
large occlusions, pose changes, and body shape changes,
while preserving garment details at 1024×1024 resolution.
TryOnDiffusion takes as input two images: a target person
image, and an image of a garment worn by another person.
It synthesizes as output the target person wearing the gar-
ment. The garment might be partially occluded by body
parts or other garments, and requires significant deforma-
tion. Our method is trained on 4 Million image pairs. Each
pair has the same person wearing the same garment but ap-
pears in different poses.
TryOnDiffusion is based on our novel architecture called
Parallel-UNet consisting of two sub-UNets communicating
through cross attentions [40]. Our two key design elements
are implicit warping and combination of warp and blend (of
target person and garment) in a single pass rather than in a
sequential fashion. Implicit warping between the target per-
son and the source garment is achieved via cross attention
over their features at multiple pyramid levels which allows
to establish long range correspondence. Long range corre-
spondence performs well, especially under heavy occlusion
and extreme pose differences. Furthermore, using the same
network to perform warping and blending allows the two processes to exchange information at the feature level rather than at the color pixel level, which proves to be essential in
perceptual loss and style loss [19, 29]. We demonstrate the
performance of these design choices in Sec. 4.
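A minimal sketch of the implicit-warping idea (not the Parallel-UNet itself) is shown below: person features attend to garment features, so correspondences are learned instead of estimated as pixel displacements. The shapes and the residual fusion are illustrative assumptions.

```python
import torch.nn as nn

class ImplicitWarp(nn.Module):
    """Implicit garment warping via cross attention."""

    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, person_tokens, garment_tokens):
        # person_tokens:  (B, N_p, dim) flattened person feature map (queries)
        # garment_tokens: (B, N_g, dim) flattened garment feature map (keys/values)
        warped, _ = self.attn(query=person_tokens,
                              key=garment_tokens,
                              value=garment_tokens)
        return person_tokens + warped  # fuse warped garment cues at the feature level
```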
To generate high quality results at 1024×1024 resolu-
tion, we follow Imagen [35] and create cascaded diffusion
models. Specifically, Parallel-UNet based diffusion is used
for 128×128 and 256×256 resolutions. The 256×256 re-
sult is then fed to a super-resolution diffusion network to
create the final 1024×1024 image.
In summary, the main contributions of our work are:
1) try-on synthesis at 1024×1024 resolution for a variety
of complex body poses, allowing for diverse body shapes,
while preserving garment details (including patterns, text,
labels, etc.), 2) a novel architecture called Parallel-UNet,
which can warp the garment implicitly with cross atten-
tion, in addition to warping and blending in a single net-
work pass. We evaluated TryOnDiffusion quantitatively and
qualitatively, compared to recent state-of-the-art methods,
and performed an extensive user study. The user study was
done by 15 non-experts, ranking more than 2K distinct ran-
dom samples. The study showed that our results were cho-
sen as the best 92.72% of the time compared to three recent
state-of-the-art methods.
|
Zhou_Efficient_Second-Order_Plane_Adjustment_CVPR_2023
|
Abstract
Planes are generally used in 3D reconstruction for depth
sensors, such as RGB-D cameras and LiDARs. This paper
focuses on the problem of estimating the optimal planes and
sensor poses to minimize the point-to-plane distance. The
resulting least-squares problem is referred to as plane ad-
justment (PA) in the literature, which is the counterpart of
bundle adjustment (BA) in visual reconstruction. Iterative
methods are adopted to solve these least-squares problems.
Typically, Newton’s method is rarely used for a large-scale
least-squares problem, due to the high computational com-
plexity of the Hessian matrix. Instead, methods using an ap-
proximation of the Hessian matrix, such as the Levenberg-
Marquardt (LM) method, are generally adopted. This pa-
per adopts the Newton’s method to efficiently solve the PA
problem. Specifically, given the poses, the optimal planes have a closed-form solution. Thus we can eliminate planes from
the cost function, which significantly reduces the number
of variables. Furthermore, as the optimal planes are func-
tions of poses, this method actually ensures that the optimal
planes for the current estimated poses can be obtained at
each iteration, which benefits the convergence. The diffi-
culty lies in how to efficiently compute the Hessian matrix
and the gradient of the resulting cost. This paper provides
an efficient solution. Empirical evaluation shows that our
algorithm outperforms the state-of-the-art algorithms.
|
1. Introduction
Planes ubiquitously exist in man-made environments, as
demonstrated in Fig. 1. Thus they are generally used in
simultaneous localization and mapping (SLAM) systems
for depth sensors, such as RGB-D cameras [6, 9, 12–14]
and LiDARs [16, 21, 22, 24, 26]. Just as bundle adjust-
ment (BA) [3, 8, 20, 25] is important for visual reconstruc-
tion [1,5,18,19], jointly optimizing planes and depth sensor
poses, which is called plane adjustment (PA) [24,26], is crit-
ical for 3D reconstruction using depth sensors. This paper
focuses on efficiently solving the large-scale PA problem.
Figure 1. We use Gaussian noise to perturb the poses of dataset D in Fig. 3. The standard deviations for rotation and translation are 3° and 0.3 m, respectively. The resulting point cloud (a) is in a mess. Fig. (b) shows the result from our algorithm. Our algorithm can quickly align the planes, as shown in Fig. 5.
The BA and PA problems both involve jointly optimizing 3D structures and sensor poses. As the two problems are
similar, it is straightforward to apply the well-studied solu-
tions for BA to PA, as done in [23, 26]. However, planes
in PA can be eliminated, so that the cost function of the PA
problem only depends on sensor poses, which significantly
reduces the number of variables. This property provides
a promising direction to efficiently solve the PA problem.
However, it is difficult to compute the Hessian matrix and
the gradient vector of the resulting cost. Although this prob-
lem was studied in several previous works [10, 16], no effi-
cient solution has been proposed. This paper seeks to solve
this problem.
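For reference, the closed-form optimal plane that makes this elimination possible is the classic least-squares plane fit; a small sketch is given below, with the points assumed to be already transformed into the world frame by the current pose estimates.

```python
import numpy as np

def optimal_plane(points):
    """Closed-form plane minimizing the sum of squared point-to-plane distances.

    points: (N, 3) array of points assigned to one plane, in the world frame.
    The optimal plane passes through the centroid, its normal is the
    eigenvector of the scatter matrix with the smallest eigenvalue, and the
    residual cost equals that eigenvalue.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    centered = points - centroid
    scatter = centered.T @ centered                # 3x3 scatter matrix
    eigvals, eigvecs = np.linalg.eigh(scatter)     # ascending eigenvalues
    normal = eigvecs[:, 0]                         # direction of least spread
    d = -normal @ centroid                         # plane: n·x + d = 0
    return normal, d, eigvals[0]
```

Substituting this closed form back into the cost is what removes the plane parameters and leaves a function of the poses alone.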
The main contribution of this paper is an efficient PA
solution using Newton’s method. We derive a closed-form
solution for the Hessian matrix and the gradient vector for
the PA problem whose computational complexity is inde-
pendent of the number of points on the planes. Our experi-
mental results show that the proposed algorithm converges
faster than the state-of-the-art algorithms.
|
Zhuang_Deep_Semi-Supervised_Metric_Learning_With_Mixed_Label_Propagation_CVPR_2023
|
Abstract
Metric learning requires the identification of far-apart
similar pairs and close dissimilar pairs during training, and
this is difficult to achieve with unlabeled data because pairs
are typically assumed to be similar if they are close. We
present a novel metric learning method which circumvents
this issue by identifying hard negative pairs as those which
obtain dissimilar labels via label propagation (LP), when
the edge linking the pair of data is removed in the affinity
matrix. In so doing, the negative pairs can be identified
despite their proximity, and we are able to utilize this infor-
mation to significantly improve LP’s ability to identify far-
apart positive pairs and close negative pairs. This results
in a considerable improvement in semi-supervised metric
learning performance as evidenced by recall, precision and
Normalized Mutual Information (NMI) performance met-
rics on Content-based Information Retrieval (CBIR) appli-
cations.
|
1. Introduction
Image data have proliferated owing to ubiquitous cam-
era ownership, widespread internet connectivity, and broad
availability of software applications. The design of efficient
CBIR systems is imperative in managing information con-
tent. Traditional image retrieval systems use textual annota-
tions, but such an approach requires manual labor, which may not be cost-effective. On the other hand, CBIR retrieves
relevant content from an image query, alleviating the need
for human annotation.
Metric learning learns a transformation mapping similar
data closer than dissimilar ones. This makes it easier to
cluster data [37] and also to shortlist similar data by their
proximity to the query. As such, metric learning algorithms
are at the cornerstone of most state-of-the-art CBIR designs
[23]. As the amount of data handled in CBIR systems is
very large, there is interest in utilizing the vast quantities
of readily available unlabeled data to improve the metric
learning training process.
Recent advances have asserted the importance of retain-
ing within-class variance in training generalizable represen-
tations [27], particularly because the information encapsu-
lated in representations which are irrelevant in discriminat-
ing between training labels may be important in discrim-
inating between unseen test classes. Since close positive
pairs result in little loss, there is a need to identify far-
apart positive pairs for training. The paper [43] uses the
method from [9] to propagate labels along manifolds in
order to identify such points, by assuming data which are
nearest neighbor pairs but not mutual nearest neighbors as
close dissimilar pairs. However this method does not uti-
lize label information and may not accurately identify hard
negatives [3, 17, 39, 40], which are pairs of data which are
close but have different labels. Label propagation (LP) us-
ing mis-identified edges would lead to inaccuracies in the
model trained.
This paper proposes a novel method to identify hard neg-
ative pairs in a semi-supervised setting. Specifically, given a
few labeled points from each class and an abundance of un-
labeled data, we propose to identify hard negative pairs as
those which obtain dissimilar labels via LP when the edge
linking the two elements of the pair is removed in the affin-
ity matrix. We obtain the dissimilarity weights of edges
under this assumption quickly and efficiently, without the
costly use of repeated LP to calculate these weights.
We use this negative edge information in a novel mixed
LP algorithm which is able to utilize both positive and nega-
tive edge information. Specifically, the method encourages
data linked by positive edges to have the same pseudolabel
and data linked by negative edges to have different pseu-
dolabels. Like LP, our mixed LP optimization problem can
be solved with conjugate gradient (CG), allowing it to scale
to large datasets.
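For illustration, the standard LP linear system solved with conjugate gradient looks as follows; the mixed variant additionally folds the negative-edge weights into this system, which is omitted in this sketch, and the affinity matrix S is assumed to be symmetrically normalized.

```python
import numpy as np
from scipy.sparse import identity, csr_matrix
from scipy.sparse.linalg import cg

def propagate_labels(S, Y, alpha=0.99):
    """Label propagation: solve (I - alpha * S) F = Y with conjugate gradient.

    S: (n, n) symmetrically normalized affinity matrix (sparse or dense).
    Y: (n, c) one-hot labels for labeled rows, zeros for unlabeled rows.
    """
    n, c = Y.shape
    A = identity(n, format="csr") - alpha * csr_matrix(S)
    F = np.zeros((n, c))
    for k in range(c):
        F[:, k], _ = cg(A, Y[:, k])   # one CG solve per class column
    return F
```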
As our obtained pseudolabels capture information on far-
apart positive pairs and close negative pairs, they yield a
significant improvement in semi-supervised metric learn-
ing. We showcase this in a CBIR setting under recall, pre-
cision and Normalized Mutual Information (NMI) perfor-
mance metrics. This shows that our method is able to more
effectively rank the database in relation to the query and re-
turn relevant articles near the top of this list.
|
Zhou_HyperMatch_Noise-Tolerant_Semi-Supervised_Learning_via_Relaxed_Contrastive_Constraint_CVPR_2023
|
Abstract
Recent developments in the application of Contrastive Learning to Semi-Supervised Learning (SSL) have demonstrated significant advancements, as a result of its exceptional ability to learn class-aware cluster representations and the full exploitation of massive unlabeled data. However, mismatched instance pairs caused by inaccurate pseudo labels would assign an unlabeled instance to the incorrect class in feature space, hence exacerbating SSL’s renowned confirmation bias. To address this issue, we introduce a novel SSL approach, HyperMatch, which is a plug-in to several SSL designs enabling noise-tolerant utilization of unlabeled data. In particular, confidence predictions are combined with semantic similarities to generate a more objective class distribution, followed by a Gaussian Mixture Model to divide pseudo labels into a ’confident’ and a ’less confident’ subset. Then, we introduce a Relaxed Contrastive Loss by assigning the ’less confident’ samples to a hyper-class, i.e. the union of the top-K nearest classes, which effectively regularizes the interference of incorrect pseudo labels and even increases the probability of pulling a ’less confident’ sample close to its true class. Experiments and in-depth studies demonstrate that HyperMatch delivers remarkable state-of-the-art performance, outperforming FixMatch on CIFAR100 with 400 and 2500 labeled samples by 11.86% and 4.88%, respectively.
|
1. Introduction
Semi-supervised learning (SSL) [ 2,3,5,20,21,26,29,
38,39,42] has become a promising solution for leverag-
ing unlabeled data to save the expensive annotation cost and simultaneously improve model performance, especially
in applications where large amounts of annotated data are
required to obtain a model with high performance [ 4,18].
Figure 1. Illustration of our ideas. (a) Pseudo-label-based assignment: the instance is pulled close to the wrong pseudo label class and pushed away from the ground-truth class (green pentagon). (b) Hyper-class assignment: the instance is assigned to its hyper-class (the union of the top-K nearest classes), which includes the ground-truth class. (c) Top-K accuracy for clean and noisy pseudo labels (divided by our Gaussian Mixture Model) in the CIFAR100@400 experiment. As K grows, noisy labels benefit more than clean labels.
Modern SSL algorithms generally fall into two categories: the Pseudo Label-based [22,30,38] focuses on generating
reliable pseudo labels for unlabeled data, whereas the Con-
sistency Regularization-based [2,3,20,29] constrains the
model to make consistent predictions on perturbed samples.
Recently, a prominent advance is the combination of Contrastive Learning [6,8,11,13] with SSL techniques [19,21,24,25,39,42], which sets the remarkable state-of-the-art performance. The naive Self-supervised Contrastive Learning [6,13] used in pre-training tasks pushes instance-level features apart, thereby potentially driving samples within the same class apart; its class-agnostic nature has been proved to conflict with the class-clustering property of SSL [24,39], hence most recent studies [21,24,39] turn to Class-Aware Contrastive Learning [16]. The general routine is to first assign each unlabeled instance to the top-1 nearest class based on its pseudo label; then two unlabeled instances from the same class are randomly selected to form a positive pair, followed by a class-aware contrastive loss that compels instances from positive pairs to share similar features while pushing away the representations of different classes.
The precision of pseudo labels has a direct impact on the
class assignment of the aforementioned methods. By confidence thresholding, unlabeled data can be roughly grouped into ’confident’ and ’less confident’ [39]. For ’confident’
data that tends to yield accurate pseudo labels for class assignment, the contrastive loss can constrain the model to acquire better-clustered feature representations for each class, hence facilitating SSL learning. But for ’less confident’ data, which have a much higher probability of providing incorrect pseudo labels, mismatched instance pairs tend to be induced, and the contrastive constraint will pull the features of different classes closer while pushing the features from the same class further apart, which will inevitably degrade the learning. As shown in Fig. 1 (a), the ’less confident’ instance could be drawn close to a false class (i.e. the red circle) while being pushed away from the true class, represented by green pentagons. For convenience, we also refer to the two kinds of unlabeled data as ’clean’ and ’noisy’ data.
To mitigate the detrimental effects of ’less confident’ (i.e., noisy) data, existing studies generally fall into two categories: (1) discarding low-quality pseudo labels [21,24] with a low threshold, leaving some unlabeled data unused; (2) adopting re-weighting strategies [39,42] to lessen the effect of noisy labels. However, these approaches still closely adhere to the paradigm of assigning noisy data to a single error-prone class, so they can only reduce the errors to a limited extent. Also, the accuracy of pseudo labels suffers from confirmation bias [1] in SSL, which is the accumulation of false predictions during training.
The aforesaid interference of ’less confident’ data is caused by inaccurate class assignment, so it is more effective to devise a class assignment approach that can resist the distraction of wrong pseudo labels in order to mitigate their effects. In light of this, we propose to relax the conventional class assignment. Instead of assigning a noisy sample to a single class, we relax the assignment by grouping it into a ’hyper-class’, which is the union of the top-K (K>1) nearest classes. As depicted in Fig. 1 (b), the chance of the ground-truth class slipping into the hyper-class is dramatically enhanced by implementing the relaxation (as K steadily grows). It is also worth noting that the marginal gain brought by the relaxation for noisy data is significantly greater than that for clean data, as the top-1 pseudo label accuracy of clean data is already adequate. This suggests that the relaxation is more suitable for noisy data.
In conjunction with the hyper-class assignment, a Relaxed Contrastive Loss is designed to constrain the features of noisy samples to stay close to their corresponding hyper-class while increasing the distance from the remaining classes. This simple yet effective relaxation has two benefits: (1) the likelihood of ’less confident’ data being pushed away from its ground-truth class is lowered, and (2) the likelihood of data being pulled close to its ground-truth class is effectively raised. As seen in Fig. 1 (b), the ground-truth class for the noisy unlabeled instance falls within the hyper-class, so its features will no longer be driven away from the actual class, but rather drawn close to it.
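A simplified sketch of such a hyper-class relaxation is given below; the positive-pair rule, the temperature, and the handling of the clean/noisy split are illustrative assumptions rather than the exact HyperMatch formulation.

```python
import torch
import torch.nn.functional as F

def relaxed_contrastive_loss(feats, probs, topk=3, temperature=0.1):
    """Relaxed contrastive loss for 'less confident' samples.

    feats: (B, D) embeddings of noisy unlabeled samples.
    probs: (B, C) predicted class distributions (pseudo labels).
    Sample j counts as a positive for sample i if j's top-1 class lies in
    i's top-K hyper-class, so a noisy sample is pulled toward a set of
    classes instead of a single error-prone class.
    """
    feats = F.normalize(feats, dim=-1)
    logits = feats @ feats.t() / temperature                       # (B, B) similarities
    eye = torch.eye(feats.size(0), dtype=torch.bool, device=feats.device)
    logits = logits.masked_fill(eye, -1e9)                         # exclude self-similarity
    top1 = probs.argmax(dim=-1)                                    # (B,)
    hyper = probs.topk(topk, dim=-1).indices                       # (B, K) hyper-class
    pos = (hyper.unsqueeze(1) == top1.view(1, -1, 1)).any(-1).float()
    pos = pos.masked_fill(eye, 0.0)                                # no self-pairs
    log_prob = F.log_softmax(logits, dim=-1)
    denom = pos.sum(dim=-1).clamp(min=1.0)
    return -((pos * log_prob).sum(dim=-1) / denom).mean()
```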
To manage the effective exploitation of both clean and noisy unlabeled data, we propose HyperMatch. First, predicted class probabilities are integrated with semantic similarities to produce unbiased per-sample class distributions. Next, a Gaussian Mixture Model (GMM) is fitted on the calibrated distributions to separate clean unlabeled data from noisy data. The common class-aware contrastive loss is applied to clean data to constrain their features to approach the corresponding class. For noisy unlabeled data, a Relaxed Contrastive Loss is carefully developed to drive them toward their corresponding hyper-class. In summary, we contribute in three ways:
• We propose an enhanced contrastive learning method,
HyperMatch, to handle the effective separation and ex-ploitation of both clean and noisy pseudo labels forlearning better-clustered feature representations. It is
a plug-in that can be used to various SSL architecturesto increase resilience while utilizing noisy data.
• Unlike previous studies that assign a ’less confident’
sample to an error-prone class, we relax the assignment
by categorizing the noisy sample into a hyper-class (a
union of the top-K nearest classes), followed by the
proposed Relaxed Contrastive Loss, which is effective at
mitigating the problematic confirmation bias.
• With thorough experiments on SSL benchmarks, our
HyperMatch demonstrates competitive performance and
establishes the new state of the art on multiple
benchmarks. In-depth investigation reveals its
effectiveness in handling noisy pseudo labels.
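As referenced above, a minimal sketch of the clean/noisy separation step follows, assuming the maximum of each calibrated class distribution is used as a one-dimensional confidence score for the GMM; the paper's calibration and fitting details may differ.

```python
# Minimal sketch: fit a 2-component GMM on per-sample confidence and treat the
# component with the higher mean as the 'clean' group.
import numpy as np
from sklearn.mixture import GaussianMixture

def split_clean_noisy(calibrated_probs):
    """calibrated_probs: (N, C) array of per-sample class distributions."""
    conf = calibrated_probs.max(axis=1, keepdims=True)    # (N, 1) confidence proxy
    gmm = GaussianMixture(n_components=2, random_state=0).fit(conf)
    resp = gmm.predict_proba(conf)                        # (N, 2) component posteriors
    clean_comp = int(np.argmax(gmm.means_.ravel()))       # higher-mean component = clean
    return resp[:, clean_comp] > 0.5                      # boolean mask of clean samples
```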
|
Zhao_Comprehensive_and_Delicate_An_Efficient_Transformer_for_Image_Restoration_CVPR_2023
|
Abstract
Vision Transformers have shown promising performance
in image restoration, which usually conduct window- or
channel-based attention to avoid intensive computations.
Although promising performance has been achieved,
they contradict, to a certain extent, the biggest success
factor of Transformers by capturing local rather than global de-
pendency among pixels. In this paper, we propose a novel
efficient image restoration Transformer that first captures
the superpixel-wise global dependency, and then transfers
it into each pixel. Such a coarse-to-fine paradigm is imple-
mented through two neural blocks, i.e., condensed attention
neural block (CA) and dual adaptive neural block (DA). In
brief, CA employs feature aggregation, attention computa-
tion, and feature recovery to efficiently capture the global
dependency at the superpixel level. To embrace the pixel-
wise global dependency, DA takes a novel dual-way struc-
ture to adaptively encapsulate the globality from superpix-
els into pixels. Thanks to the two neural blocks, our method
achieves comparable performance while taking only ∼6%
FLOPs compared with SwinIR.
|
1. Introduction
Image restoration aims to recover the high-quality im-
age from its degraded version, and huge success has been
achieved by plentiful methods in the past years [22, 30, 35,
54–56, 60]. In the era of deep learning, Convolutional Neu-
ral Networks (CNNs) have shown promising performance
in image restoration [25, 52, 67] thanks to the inductive bi-
ases of weight sharing and spatial locality [12]. However,
although a number of studies have shown the effectiveness
of CNNs, it has suffered from the following limitations [12],
i.e., i) non-dynamic weights of CNNs limit the model ca-
pacity of instance adaption, and ii) the sparse connections
of CNNs limit the capture of global dependency.
*indicates equal contribution.
†Corresponding author.
Figure 1. Illustration of the dependency capture in existing vision
Transformers and ours. The red boxes refer to the dependency
capture range of a given pixel marked by a red point. We gener-
ally summarize the dependency capture in existing vision Trans-
formers as the three ways above the dashed line. Obviously, they
could only capture the dependency in a local range. In contrast,
our method could obtain a pixel-wise global dependency through
superpixel-wise dependency computation and distribution.
To overcome these limitations, some solutions [51,
60, 67, 68] have been specifically established, of which
Transformer-based methods [3, 5, 28, 47, 53] have achieved
huge success thanks to their high capacities of dynamic
weighting and global dependency capturing. Towards the
image-restoration-specific Transformers, the biggest road-
block might be the unacceptable cost caused by the global
attention computation. Therefore, some efficient attentions
have been proposed to trade-off the efficiency and depen-
dency range, e.g., local window attention [49], shifted win-
dow attention [22], and channel attention [56]. Although
promising performance has been achieved, these Trans-
formers still suffer from the following limitations.
First, the computation costs are still very high, thus limit-
ing their applications in mobile scenarios. Second, the
attention mechanisms can only capture the dependency
within a given range, and such locality might not fully
exploit the potential of Transformers.
In practice, it is daunting to develop an efficient image
restoration Transformer while directly capturing the global
dependency, since it necessarily introduces intensive com-
putations, which goes against efficiency. Different from ex-
isting studies focusing on efficient attention mechanisms,
this paper resolves the contradiction from a novel perspec-
tive. To be specific, our basic idea is to adaptively aggregate
the features at pixel level into a lower dimensional space of
superpixel to remove the redundancy in channel [13] and
space [50] domains. Through the feature aggregation, the
dimension is remarkably reduced, and thus the attention
could be computed in a global way with acceptable com-
putation costs. After that, the feature recovery is performed
to restore the feature distribution in channel and space do-
mains. Although the above paradigm shows a feasible so-
lution, it is far from the final goal since the obtained de-
pendency actually works at the superpixel level. Hence, we
need to transfer such a superpixel-wise global dependency
to a pixel-wise global dependency. As a result, pixel-wise
restoration can depend on the global information from su-
perpixels.
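A minimal sketch of this aggregate–attend–recover paradigm is given below. It is not the paper's CA/DA implementation: average pooling and bilinear upsampling stand in for the learned feature aggregation and recovery, and the grid size, channel width, and residual connection are assumptions made for illustration.

```python
# Sketch: aggregate pixels into a small superpixel grid, run global attention there,
# then recover/distribute the result back to pixel resolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CondensedAttentionSketch(nn.Module):
    def __init__(self, channels=64, superpixel_grid=8, heads=4):
        super().__init__()
        self.grid = superpixel_grid
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                                    # x: (B, C, H, W) pixel features
        b, c, h, w = x.shape
        sp = F.adaptive_avg_pool2d(x, self.grid)             # aggregation: (B, C, g, g)
        tokens = sp.flatten(2).transpose(1, 2)               # (B, g*g, C) superpixel tokens
        out, _ = self.attn(tokens, tokens, tokens)           # global attention among superpixels
        out = out.transpose(1, 2).reshape(b, c, self.grid, self.grid)
        up = F.interpolate(out, size=(h, w), mode="bilinear", align_corners=False)
        return x + up                                        # recovery back to pixels

x = torch.randn(1, 64, 64, 64)
y = CondensedAttentionSketch()(x)                            # same shape as x
```

With a g x g superpixel grid, attention costs scale with (g^2)^2 instead of (HW)^2, which is where the efficiency of the coarse-to-fine design comes from; DA would then adaptively encapsulate this globality into each pixel.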
Based on the above motivations, we propose an effi-
cient Transformer for COmprehensive and DElicate image
restoration (CODE). To be specific, CODE consists of a
condensed attention neural block (CA) and a dual adap-
tive neural block (DA). In brief, CA captures the global
dependency at the superpixel level with acceptable compu-
tations thanks to the aforementioned paradigm. To obtain
the pixel-wise global dependency, DA extracts the global-
ity from the CA output and then dynamically encapsulates
it into each pixel. Thanks to the two complementary neural
blocks, CODE could capture the pixel-wise global depen-
dency, while embracing high computational efficiency.
The contributions and novelty of this work could be sum-
marized as below.
• Unlike existing efficient Transformers that only capture
pixel-wise local dependency, our solution obtains
pixel-wise global dependency through superpixel-
wise dependency computation and transformation.
• The proposed image restoration Transformer (CODE)
consists of two neural blocks. In brief, CA employs
feature aggregation, attention computation, and fea-
ture recovery to efficiently capture the superpixel-wise
global dependency. To obtain pixel-wise global depen-
dency, DA takes a novel dual-way structure and a dy-
namic weighting fashion to distribute the superpixel-
wise globality into each pixel.
• Extensive experiments are conducted on four image
restoration tasks to demonstrate the efficiency and ef-
fectiveness of CODE, e.g., it achieves comparable per-
formance while taking only ∼6% FLOPs compared
with SwinIR [22].
|
Zhou_BEVDC_Birds-Eye_View_Assisted_Training_for_Depth_Completion_CVPR_2023
|
Abstract
Depth completion plays a crucial role in autonomous
driving, in which cameras and LiDARs are two complemen-
tary sensors. Recent approaches attempt to exploit spatial
geometric constraints hidden in LiDARs to enhance image-
guided depth completion. However, only low efficiency and
poor generalization can be achieved. In this paper, we pro-
pose BEV@DC , a more efficient and powerful multi-modal
training scheme, to boost the performance of image-guided
depth completion. In practice, the proposed BEV@DC
model comprehensively takes advantage of LiDARs with
rich geometric details in training, employing an enhanced
depth completion manner in inference, which takes only im-
ages (RGB and depth) as input. Specifically, the geometric-
aware LiDAR features are projected onto a unified BEV
space, combining with RGB features to perform BEV com-
pletion. By equipping a newly proposed point-voxel spatial
propagation network (PV-SPN), this auxiliary branch intro-
duces strong guidance to the original image branches via
3D dense supervision and feature consistency. As a result,
our baseline model demonstrates significant improvements
with the sole image inputs. Concretely, it achieves state-of-
the-art on several benchmarks, e.g., ranking Top-1 on the
challenging KITTI depth completion benchmark.
|
1. Introduction
Dense depth estimation plays an essential role in various
3D vision tasks and self-driving applications, e.g., 3D ob-
ject detection and tracking, simultaneous localization and
mapping (SLAM), and structure-from-motion (SFM) [14,
17,19,33,37]. With the aid of outdoor LiDAR sensors or in-
door RGBD cameras, 3D vision applications acquire depth
maps for further industrial usage. However, the depth sen-
sors cannot provide dense pixel-wise depth maps since their
output is sparse and has numerous blank regions, especially
*Corresponding author
Figure 1. BEV assisted training. (a) Previous camera-based
methods that take RGB and depth input. (b) Previous fusion-based
methods introduce extra inputs and computation in both training
and inference. (c) Our method takes additional LiDAR as input
for assisted training. Only the 2D inputs are used during the infer-
ence, which reduces the computational burden.
in outdoor scenes. Therefore, it is necessary to fill the void
areas of the depth maps in practice.
Recent depth completion methods [4, 12, 21, 47] lever-
age the RGB information as guidance since the RGB im-
ages contain scene structures, e.g., textures, and monoc-
ular features, e.g., vanishing points, to provide the cues
for the missing pixels. However, the camera-based meth-
ods apply the 2D convolution on the irregularly distributed
depth values, resulting in an implicit yet ineffective ex-
ploration of underlying 3D geometry, i.e., over-smooth at
the boundary of objects. Considering the deployment of
cameras and LiDAR in commercial cars and the recent
trend of cross-modal learning in the vision community,
some methods [2, 3, 28, 41] introduce explicit 3D represen-
tations, i.e., LiDAR point clouds generated by sparse depth,
to complement 2D appearance features with 3D structured
priors. Despite the improvements, the fusion-based ap-
proaches still have the following issues: 1) The 3D fea-
ture extraction and fusion are not efficacious, especially
the critical spatial correlations between a depth point and
its neighbors, which significantly affects the completion
performance. 2) Fusion-based methods are computation-
intensive while processing sparse depths, RGB images, and
additional 3D input such as LiDAR information, either oc-
cupying more memory storage or consuming more time in
inference, which hinders real-time applications.
To address the above issues, we seek to boost image-
guided depth completion performance by exploiting 3D
representations via a more efficient and effective cross-
representation training scheme. In training, we design
an auxiliary LiDAR branch consisting of LiDAR encoder,
cross-representation BEV decoder (CRBD) and point-voxel
spatial propagation network (PV-SPN). Initially, we prepro-
cess each LiDAR scan with the assigned voxel cells to al-
leviate the irregularity and sparseness and then extract its
multi-scale features. After that, these features will be pro-
jected onto a unified BEV space. The following CRBD uti-
lizes the above multi-scale BEV features and the ones from
the camera branch to perform BEV fusion and completion.
After that, the BEV completion is interpolated into the 3D
space, and a point-voxel spatial propagation network is pro-
posed to query the nearest neighbors of each coarse voxel
and aggregate features from all adjacent LiDAR points,
refining the 3D geometric shapes. Moreover,
to tackle the extra computational burden from the LiDAR
branch, this plug-and-play component is only exploited in
the training phase, enhancing the original camera branch
through feature consistency and end-to-end backpropaga-
tion. Consequently, the trained model is independent of ad-
ditional LiDAR inputs during the inference.
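A minimal training-step sketch of this scheme is given below, under stated assumptions: camera_branch and lidar_branch are hypothetical stand-ins for the image branch and the auxiliary LiDAR branch (LiDAR encoder, CRBD, PV-SPN), the loss weight is a placeholder, and the 3D dense supervision on the BEV/voxel completion is omitted for brevity. The point it illustrates is that the LiDAR branch contributes only losses during training and is absent at inference.

```python
# Sketch of BEV-assisted training: the auxiliary branch guides the camera branch
# through feature consistency, then is discarded for deployment.
import torch
import torch.nn.functional as F

def train_step(camera_branch, lidar_branch, batch, optimizer):
    rgb, sparse_depth, gt_depth, lidar_points = batch
    pred_depth, cam_bev_feat = camera_branch(rgb, sparse_depth)   # image branch
    lidar_bev_feat = lidar_branch(lidar_points)                   # auxiliary branch (training only)

    loss_depth = F.l1_loss(pred_depth, gt_depth)                  # 2D depth supervision
    loss_consist = F.mse_loss(cam_bev_feat, lidar_bev_feat.detach())  # feature consistency
    loss = loss_depth + 0.1 * loss_consist                        # 0.1 is a placeholder weight

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def infer(camera_branch, rgb, sparse_depth):
    pred_depth, _ = camera_branch(rgb, sparse_depth)              # no LiDAR input needed
    return pred_depth
```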
Compared with previous fusion-based methods, our pro-
posed framework has the following advantages: 1) Gener-
ality : Our plug-and-play solution can be incorporated into
several camera-based depth completion models; 2) Flexi-
bility : The processing module for LiDAR representations
only exists during training and is discarded in inference, as
shown in Fig. 1(c), compared with previous camera-based
models (a) and fusion-based models (b). There is no addi-
tional computational burden in the deployment. 3) Effec-
tiveness : It significantly boosts the performance upon the
baseline approach, achieving state-of-the-art results on sev-
eral benchmarks. To sum up, the main contributions are
summarized as follows:
- Bird’s-Eye View Assisted Training for Depth
Completion (BEV@DC) is proposed, which as-
sists camera-based depth completion with LiDAR
representation during the training phase.
- Cross-representation BEV decoder (CRBD) and point-voxel spatial propagation network (PV-SPN) are pro-
posed to gain fine-grained 3D geometric shapes and
provide strong guidance to the RGB branch.
- Our solution achieves state-of-the-art on both outdoor
KITTI depth completion benchmark and indoor NYU
Depth v2 dataset.
|
Zhou_Adaptive_Sparse_Pairwise_Loss_for_Object_Re-Identification_CVPR_2023
|
Abstract
Object re-identification (ReID) aims to find instances
with the same identity as the given probe from a large
gallery. Pairwise losses play an important role in training
a strong ReID network. Existing pairwise losses densely
exploit each instance as an anchor and sample its triplets
in a mini-batch. This dense sampling mechanism inevitably
introduces positive pairs that share few visual similarities,
which can be harmful to the training. To address this prob-
lem, we propose a novel loss paradigm termed Sparse Pair-
wise (SP) loss that only leverages few appropriate pairs
for each class in a mini-batch, and empirically demon-
strate that it is sufficient for the ReID tasks. Based on
the proposed loss framework, we propose an adaptive pos-
itive mining strategy that can dynamically adapt to diverse
intra-class variations. Extensive experiments show that SP
loss and its adaptive variant AdaSP loss outperform other
pairwise losses, and achieve state-of-the-art performance
across several ReID benchmarks. Code is available at
https://github.com/Astaxanthin/AdaSP.
|
1. Introduction
Object re-identification (ReID) is one of the most impor-
tant vision tasks in visual surveillance. It aims at associat-
ing person/vehicle images with the same identity captured
by different cameras in diverse scenarios. Recently, with
the prosperity of deep neural networks in computer vision
community, ReID tasks have rapidly progressed towards
more sophisticated feature extractors [8, 37, 40, 44, 60, 61]
and more elaborate losses [43, 45, 57, 58]. Benefiting from
quantifying the semantic similarity/distance of two images,
metric losses are widely employed with the identity loss to
improve ranking precision in the ReID task [35].
Generally, metric losses serve for metric learning that
aims to map raw signals into a low-dimensional embedding
space where instances of intra-class are clustered and that
of inter-class are separated. The pairwise framework, such
†Corresponding author.
Figure 1. Difference between dense pairwise losses and our sparse
pairwise (SP) loss. For simplicity, only two classes, marked by
circles and triangles, are displayed. The red circles represent an-
chors. Dense pairwise losses, such as triplet loss [19] and circle
loss [45], treat each instance as an anchor and mine its positive and
negative pairs to construct a loss item. SP loss treats each class as
a unit and separately excavates the most informative positive and
negative pairs without using anchors.
as triplet loss [19] and circle loss [45], employs anchors to
pull their positive pairs while pushing negatives apart. Most
existing pairwise losses [14, 19, 38, 45] devote to exploiting
all instances in a mini-batch, and densely anchoring each
of them (which we term dense pairwise method) to sam-
ple its triplets. Although some hard sample mining meth-
ods [7, 45, 50, 51] are developed to accelerate convergence
or improve performance, these losses are still computed in a
dense manner ( i.e. top rows in Fig. 1). This dense design in-
evitably introduces harmful positive pairs [1, 43] that share
few visual similarities and likely lead to bad local minima
of optimization [54] for metric learning, due to large intra-
class variations (Fig. 2a) caused by illumination changes,
occlusion, different viewpoints, etc.
In this work, we anticipate that it is not necessary to em-
ploy all instances within a mini-batch since most of them
Figure 2. Different levels of intra-class variations in person ReID
datasets. (a) Large intra-class variation caused by intensive oc-
clusion, illumination changes, and different viewpoints. In this
situation, mining the hardest positive pair, which forms the radius
of the largest intra-class hypersphere, is harmful to metric learn-
ing, while the least-hard pair that shapes the radius of the smallest
hypersphere is alternatively an appropriate one. (b) Small intra-
class variation with highly similar visual features. The instances
in both the hardest and the least-hard pairs share noticeable visual
similarities in this case.
are either trivial or harmful pairs. To this end, we propose a
novel loss paradigm termed Sparse Pairwise (SP) loss that
only leverages the most informative pairs for each class in a
mini-batch without relying on dense instance-level anchors,
as shown in Fig. 1 (bottom row). For the object ReID tasks,
we empirically discover that using only a few pairs of each
class is sufficient for the loss computation as long as the
appropriate ones are mined. Based on the proposed loss
framework, we conduct a rigorous investigation of the pos-
itive mining strategy.
Compared to negative mining which has been ac-
tively studied [14, 42, 54], positive mining remains under-
explored. Generally, an appropriate positive pair should be
capable of adapting to different levels of intra-class varia-
tions. For example, an occluded pedestrian in a side view-
point or dark illumination almost shares no visual similar-
ities with his front perspective in Fig. 2a, while the in-
stance (blue bounding box) in a clear illumination without
any occlusions can bridge them, thus forming an appropri-
ate pair for training. To obtain appropriate positive pairs,
some works [25, 43, 55] attempt to excavate “moderate” or
easy positive pairs, yet still rely on dense anchors.
In this work, we first propose a least-hard positive min-
ing strategy to address large intra-class variations. Inspired
by a geometric insight, we find that the hardest positive pair
in a class shapes the radius of the largest hypersphere cov-
ering all intra-class samples. Nevertheless, it can be exces-
sively large and thus introduces overwhelming visual differ-
ences on account of large intra-class variations. In this case,
the least-hard positive pair that constructs the smallest hy-
persphere can be utilized instead as a more appropriate one
sharing noticeable visual similarities (Fig. 2a). In addition,
to handle classes with diverse intra-class variations (Fig. 2),
we develop an adaptive mining approach that automatically
reweights these two kinds of positive pairs, shaping a
dynamic intra-class hypersphere according to the particular
situation of each class; a minimal sketch of this mining step
is given after the contribution list below. The main
contributions of this paper are summarized as follows:
• We propose a novel pairwise loss framework – Sparse
Pairwise loss – that only leverages few informative
positive/negative pairs for each class in a mini-batch.
• We propose a least-hard positive mining strategy to ad-
dress large intra-class variations and further endow it
with an adaptive mechanism according to different lev-
els of intra-class variations in each mini-batch.
• The proposed AdaSP loss is evaluated on various per-
son/vehicle ReID datasets and outperforms existing
dense pairwise losses across different benchmarks.
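As referenced above, here is a minimal sketch of the mining step, following the geometric description of the largest versus smallest covering hypersphere; the actual AdaSP weighting and loss form may differ from this simplified reading.

```python
# Sketch: per class, the hardest positive is the largest pairwise distance, and the
# least-hard positive is the smallest of the per-anchor hardest distances.
import torch

def mine_positives(embeddings, labels):
    """embeddings: (B, D); labels: (B,) integer class ids."""
    dist = torch.cdist(embeddings, embeddings)             # (B, B) pairwise distances
    mined = {}
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        if idx.numel() < 2:
            continue
        d = dist[idx][:, idx]                              # intra-class distances
        per_anchor_hardest = d.max(dim=1).values           # hardest positive per anchor
        hardest = per_anchor_hardest.max()                 # radius of the largest hypersphere
        least_hard = per_anchor_hardest.min()              # radius of the smallest hypersphere
        mined[int(c)] = (hardest, least_hard)
    return mined
```

The adaptive variant would then blend the two mined radii per class, leaning on the least-hard pair when intra-class variation is large and on the hardest pair when the instances are already visually consistent.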
|
Zhao_Representation_Learning_for_Visual_Object_Tracking_by_Masked_Appearance_Transfer_CVPR_2023
|
Abstract
Visual representation plays an important role in visual
object tracking. However, few works study the tracking-
specified representation learning method. Most trackers
directly use ImageNet pre-trained representations. In this
paper, we propose masked appearance transfer, a simple
but effective representation learning method for tracking,
based on an encoder-decoder architecture. First, we en-
code the visual appearances of the template and search
region jointly, and then we decode them separately. Dur-
ing decoding, the original search region image is recon-
structed. However, for the template, we make the decoder
reconstruct the target appearance within the search region.
By this target appearance transfer, the tracking-specified
representations are learned. We randomly mask out the
inputs, thereby making the learned representations more dis-
criminative. For sufficient evaluation, we design a simple
and lightweight tracker that can evaluate the representa-
tion for both target localization and box regression. Exten-
sive experiments show that the proposed method is effec-
tive, and the learned representations can enable the simple
tracker to obtain state-of-the-art performance on six datasets.
https://github.com/difhnp/MAT
|
1. Introduction
Visual object tracking is a computer vision task that
highly depends on the quality of visual representation [38].
On this basis, deep representations are adopted for track-
ing and successfully boost the development of tracking al-
gorithms in previous years. Unlike some primitive track-
ers (e.g., [10, 28, 37]) that use ready-made deep features,
SiamFC [1] integrates a convolutional neural network (CNN)
into the tracking model and learns task-specified representa-
tions by end-to-end tuning. From here on, end-to-end model
training becomes a common practice in siamese-based track-
ing methods. The assumption of the siamese tracker is that
*Corresponding author: Huchuan Lu, [email protected]
(a) Masked autoencoder. (b) Masked appearance transfer.
Figure 1. Comparison between the masked autoencoder [17] and
our masked appearance transfer that uses a nontrivial learning
objective.
different visual appearances of the same target can be em-
bedded to have similar representations, and tracking can be
achieved by similarity matching. Based on this assumption,
the learning objective aims to push the representations of the
same target to be close to each other.
Although we have a clear learning objective, research on
the effective representation learning methods for siamese
tracking is still lacking. The common practice is to sim-
ply fine-tune the ImageNet [33] pre-trained representations.
However, they are learned for classification rather than track-
ing. High ImageNet accuracy does not indicate good per-
formance in visual object tracking [43]. Without good rep-
resentations, simple similarity matching cannot bring good
tracking performance. Thus, most works focus on the design
of tracker neck [7, 36, 47, 50] and tracker head [25, 40, 46].
Within these works, we can notice that the entire model is
always trained end-to-end, and representation learning is cou-
pled to the box regression task and target localization task.
This means that representation learning is entirely driven by
other tracker components. To improve tracking performance,
we have to make the neck or head increasingly complex.
Based on the aforementioned observation, we attempt to im-
prove the tracking performance by learning good representa-
tions and propose a simple but effective representation learn-
ing method in this paper. This method aims to decouple the
representation learning from tracker training, thereby mak-
ing it a tracker-free method. It employs a transformer-based
autoencoder to learn more discriminative visual features for
object tracking. This is achieved by setting a nontrivial
learning objective for the autoencoder, as illustrated by Fig-
ure 1b. The embedded target appearance of the template
can be pushed to be close to that in the search region. We
further make the learned representations more discriminative
by masking out the input images.
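A minimal sketch of this training objective is given below, under stated assumptions: encoder and decoder are hypothetical modules, the random patch masking is a crude stand-in for token dropping, and target_in_search denotes the target appearance cropped from the search region with the ground-truth box. The exact losses and masking scheme of MAT may differ.

```python
# Sketch: jointly encode masked template and search region, decode them separately,
# and supervise the template branch with the target appearance inside the search region.
import torch
import torch.nn.functional as F

def random_patch_mask(images, ratio=0.75, patch=16):
    """Zero out a random fraction of non-overlapping patches (MAE-style stand-in)."""
    b, c, h, w = images.shape
    keep = (torch.rand(b, 1, h // patch, w // patch, device=images.device) > ratio).float()
    return images * F.interpolate(keep, size=(h, w), mode="nearest")

def mat_loss(encoder, decoder, template, search, target_in_search):
    z_t, z_s = encoder(random_patch_mask(template), random_patch_mask(search))  # joint encoding
    rec_search = decoder(z_s)              # reconstruct the original search region
    rec_target = decoder(z_t)              # transfer: template -> its appearance in the search region
    return F.mse_loss(rec_search, search) + F.mse_loss(rec_target, target_in_search)
```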
To evaluate the learned representations for target localiza-
tion and box regression, we design a very simple tracker with
a matching operator and a lightweight head. We evaluate
the proposed method with different model architectures and
initial weights. We also compare our simple tracker with
many state-of-the-art trackers on the popular LaSOT [14],
TrackingNet [31], GOT10k [20], and many other tracking
datasets [15, 30, 41]. The experiments demonstrate the effec-
tiveness and generalization ability of our proposed represen-
tation learning method. Comparison results show that our
simple tracker can obtain state-of-the-art performance with
the learned representations.
To summarize, our contributions are presented as follows:
•We propose masked appearance transfer (MAT), a novel
representation learning method for visual object track-
ing that jointly encodes the template and search region
images, and learns tracking-specified representation
with a simple encoder-decoder pipeline. A nontrivial
training objective is proposed to make this method ef-
fective.
•We design a simple and lightweight tracker for tracking-
specified evaluation that has few parameters and no
hyper-parameters. It can evaluate the representations
not only for target localization but also for box regres-
sion.
•Extensive experiments demonstrate the effectiveness
and generalization ability of the proposed method. Ex-
tensive comparisons show that the proposed simple
tracker can obtain state-of-the-art performance by us-
ing the learned tracking-specified representations.
|
Zhou_Unsupervised_Cumulative_Domain_Adaptation_for_Foggy_Scene_Optical_Flow_CVPR_2023
|
Abstract
Optical flow has achieved great success under clean
scenes, but suffers from restricted performance under foggy
scenes. To bridge the clean-to-foggy domain gap, the ex-
isting methods typically adopt the domain adaptation to
transfer the motion knowledge from clean to synthetic foggy
domain. However, these methods unexpectedly neglect the
synthetic-to-real domain gap, and thus are erroneous when
applied to real-world scenes. To handle the practical optical
flow under real foggy scenes, in this work, we propose a
novel unsupervised cumulative domain adaptation optical
flow (UCDA-Flow) framework: depth-association motion
adaptation and correlation-alignment motion adaptation.
Specifically, we discover that depth is a key ingredient to in-
fluence the optical flow: the deeper depth, the inferior optical
flow, which motivates us to design a depth-association mo-
tion adaptation module to bridge the clean-to-foggy domain
gap. Moreover, we figure out that the cost volume correlation
shares similar distribution of the synthetic and real foggy im-
ages, which enlightens us to devise a correlation-alignment
motion adaptation module to distill motion knowledge of the
synthetic foggy domain to the real foggy domain. Note that
synthetic fog is designed as the intermediate domain. Under
this unified framework, the proposed cumulative adaptation
progressively transfers knowledge from clean scenes to real
foggy scenes. Extensive experiments have been performed
to verify the superiority of the proposed method.
|
1. Introduction
Optical flow has made great progress under clean scenes,
but may suffer from restricted performance under foggy
scenes [15]. The main reason is that fog weakens scene
contrast, breaking the brightness and gradient constancy
assumptions, which most optical flow methods rely on.
To alleviate this, researchers start from the perspective
*Corresponding author.
Figure 1. Illustration of the main idea. We propose to transfer
motion knowledge from the source domain (clean scene) to the
target domain (real foggy scene) through two-stage adaptation.
We design the synthetic foggy scene as the intermediate domain.
As for the clean-to-foggy domain gap (fog), we transfer motion
knowledge from the source domain to the intermediate domain via
depth association. As for the synthetic-to-real domain gap (style),
we distill motion knowledge of the intermediate domain to the
target domain by aligning the correlation of both the domains.
of domain adaptation, which mainly seeks the degradation-
invariant features to transfer the motion knowledge from the
clean scene to the adverse weather scene [14 –16, 40]. For
example, Li [15,16] attempted to learn degradation-invariant
features to enhance optical flow under rainy scenes in a su-
pervised manner. Yan et al. [40] proposed a semi-supervised
framework for optical flow under dense foggy scenes, which
relies on the motion-invariant assumption between the paired
clean and synthetic foggy images. These pioneer works have
made a good attempt to handle the clean-to-foggy domain
gap with synthetic degraded images through one-stage do-
main adaptation. However, they lack the constraints to guide
the network to learn the motion pattern of real foggy domain,
and fail for real foggy scenes. In other words, they have un-
expectedly neglected the synthetic-to-real domain gap, thus
limiting their performances on real-world foggy scenes. In
this work, our goal is to progressively handle the two domain
gaps: the clean-to-foggy gap and the synthetic-to-real gap in
a cumulative domain adaptation framework in Fig. 1.
As for the clean-to-foggy gap, we discover that depth is a
key ingredient to influence the optical flow: the deeper the
depth, the inferior the optical flow. This observation inspires
us to explore the usage of depth as the key to bridging the
clean-to-foggy gap (see the fog gap in Fig. 1). On one
hand, depth physically associates the clean image with the
foggy image through atmospheric scattering model [26]; on
the other hand, there exists a natural 2D-3D geometry pro-
jection relationship between depth and optical flow, which is
used as a constraint to transfer motion knowledge from the
clean domain to the synthetic foggy domain.
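For reference, a short sketch of this fog synthesis with the atmospheric scattering model [26], I = J·t + A·(1 − t) with transmission t = exp(−β·d); the scattering coefficient β and airlight A below are illustrative choices, not values from the paper.

```python
# Synthesize a foggy image from a clean image and its depth map.
import numpy as np

def synthesize_fog(clean_img, depth, beta=0.05, airlight=0.9):
    """clean_img: (H, W, 3) in [0, 1]; depth: (H, W) metric depth."""
    t = np.exp(-beta * depth)[..., None]          # transmission: deeper depth -> smaller t
    return clean_img * t + airlight * (1.0 - t)   # attenuated radiance plus airlight
```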
As for the synthetic-to-real gap, we figure out that cost
volume correlation shares similar distribution of synthetic
and real foggy images. Cost volume stores correlation value,
which can physically measure the similarity between adja-
cent frames, regardless of synthetic and real foggy images.
Therefore, cost volume benefits to bridging the synthetic-to-
real domain gap (seeing the style gap in Fig. 1). We align
the correlation distributions to distill motion knowledge of
the synthetic foggy domain to the real foggy domain.
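A minimal sketch of this correlation alignment follows: the cost-volume construction is the standard local correlation between adjacent-frame features, while the moment-matching penalty is only a placeholder for the actual alignment objective used by the method.

```python
# Sketch: build cost volumes for synthetic and real foggy pairs and penalize the
# gap between their correlation distributions.
import torch

def cost_volume(f1, f2, max_disp=4):
    """f1, f2: (B, C, H, W) features of adjacent frames -> (B, D, H, W) correlations."""
    b, c, h, w = f1.shape
    vols = []
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            f2s = torch.roll(f2, shifts=(dy, dx), dims=(2, 3))
            vols.append((f1 * f2s).sum(dim=1, keepdim=True) / c)   # normalized correlation
    return torch.cat(vols, dim=1)

def correlation_alignment_loss(feat_syn, feat_real):
    """feat_syn, feat_real: tuples (f1, f2) of adjacent-frame features."""
    cv_syn, cv_real = cost_volume(*feat_syn), cost_volume(*feat_real)
    # Match first and second moments of the two correlation distributions.
    return (cv_syn.mean() - cv_real.mean()).abs() + (cv_syn.std() - cv_real.std()).abs()
```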
In this work, we propose a novel unsupervised cumulative
domain adaptation optical flow (UCDA-Flow) framework for
real foggy scene, including depth-association motion adapta-
tion (DAMA) and correlation-alignment motion adaptation
(CAMA). Specifically, in DAMA stage, we first estimate
optical flow, ego-motion and depth with clean stereo images,
and then project depth into optical flow space with 2D-3D
geometry formula between ego-motion and scene-motion
to enhance rigid motion. To bridge the clean-to-foggy gap,
we utilize atmospheric scattering model [26] to synthesize
the corresponding foggy images, and then transfer motion
knowledge from the clean domain to the synthetic foggy do-
main. In CAMA stage, to bridge the synthetic-to-real domain
gap, we transform the synthetic and real foggy images to the
cost volume space, in which we align the correlation distri-
bution to distill the motion knowledge of the synthetic foggy
domain to the real foggy domain. The proposed cumulative
domain adaptation framework could progressively transfer
motion knowledge from clean domain to real foggy domain
via depth association and correlation alignment. Overall, our
main contributions are summarized as follows:
•We propose an unsupervised cumulative domain adapta-
tion framework for optical flow under real foggy scene,
consisting of depth-association motion adaptation and
correlation-alignment motion adaptation. The proposed
method can transfer motion knowledge from clean domain
to real foggy domain through two-stage adaptation.
•We reveal that foggy scene optical flow deteriorates with
depth. The geometry relationship between depth and opti-
cal flow motivates us to design a depth-association motion
adaptation to bridge the clean-to-foggy domain gap.
•We illustrate that the cost volume correlation distributions
of synthetic and real foggy images are consistent. This
prior helps close the synthetic-to-real domain gap
through correlation-alignment motion adaptation.
|
Zhou_NeRFLix_High-Quality_Neural_View_Synthesis_by_Learning_a_Degradation-Driven_Inter-Viewpoint_CVPR_2023
|
Abstract
Neural radiance fields (NeRF) show great success in
novel view synthesis. However, in real-world scenes, re-
covering high-quality details from the source images is still
challenging for the existing NeRF-based approaches, due to
the potential imperfect calibration information and scene
representation inaccuracy. Even with high-quality train-
ing frames, the synthetic novel views produced by NeRF
models still suffer from notable rendering artifacts, such as
noise, blur, etc. To improve the synthesis qual-
ity of NeRF-based approaches, we propose NeRFLiX, a
general NeRF-agnostic restorer paradigm by learning a
degradation-driven inter-viewpoint mixer. Specifically, we de-
sign a NeRF-style degradation modeling approach and con-
struct large-scale training data, enabling the possibility of
effectively removing NeRF-native rendering artifacts for ex-
isting deep neural networks. Moreover, beyond the degra-
dation removal, we propose an inter-viewpoint aggregation
framework that is able to fuse highly related high-quality
training images, pushing the performance of cutting-edge
NeRF models to entirely new levels and producing highly
photo-realistic synthetic views.
|
1. Introduction
Neural radiance fields (NeRF) can generate photo-
realistic images from new viewpoints, playing a heated role
in novel view synthesis. In light of NeRF’s [37] success,
numerous approaches [2, 9, 11, 19, 34, 35, 38, 40, 41, 47, 53,
54,59,61,65] along these lines have been proposed, contin-
ually raising the performance to greater levels. In fact, one
prerequisite of NeRF is the precise camera settings of the
taken photos for training [22, 32, 61]. However, accurately
calibrating camera poses is exceedingly difficult in prac-
tice. Contrarily, the shape-radiance co-adaption issue [74]
reveals that while the learned radiance fields can perfectly
explain the training views with inaccurate geometry, they
poorly generalize to unseen views. On the other hand, the
capacity to represent sophisticated geometry, lighting, ob-
ject materials, and other factors is constrained by the simpli-
fied scene representation of NeRF [19,77,78]. On the basis
of such restrictions, advanced NeRF models may nonethe-
less result in notable artifacts (such as blur, noise, detail
missing, and more), which we refer to as NeRF-style degra-
dations in this article and are shown in Fig. 1.
To address the aforementioned limitations, numerous
works have been proposed. For example, some studies,
including [22, 59, 66, 70], jointly optimize camera param-
eters and neural radiance fields to refine camera poses as
precisely as possible in order to address the camera calibra-
tion issue. Another line of works [19, 73, 77, 78] presents
physical-aware models that simultaneously take into ac-
count the object materials and environment lighting, as op-
posed to using MLPs or neural voxels to implicitly encode
both the geometry and appearance. To meet the demands
for high-quality neural view synthesis, one has to carefully
examine all of the elements when building complex inverse
rendering systems. In addition to being challenging to op-
timize, they are also not scalable for rapid deployment with
hard re-configurations in new environments. Regardless of
the intricate physical-aware rendering models, is it possi-
ble to design a practical NeRF-agnostic restorer to directly
enhance synthesized views from NeRFs ?
In the low-level vision, it is critical to construct large-
scale paired data to train a deep restorer for eliminating
real-world artifacts [56, 72]. When it comes to NeRF-style
degradations, there are two challenges: (1) sizable paired
training data; (2) NeRF degradation analysis. First, it is un-
practical to gather large-scale training pairs (more specifi-
cally, raw outputs from well-trained NeRFs and correspond-
ing ground truths). Second, the modeling of NeRF-style
degradation has received little attention. Unlike real-world
images that generally suffer from JPEG compression, sen-
sor noise, and motion blur, the NeRF-style artifacts are
complex and differ from the existing ones. As far as we
know, no previous studies have ever investigated NeRF-
style degradation removal which effectively leverages the
ample research on image and video restoration.
In this work, we are motivated to conduct the first study on
the feasibility of simulating large-scale NeRF-style paired
data, opening the possibility of training a NeRF-agnostic
restorer for improving the NeRF rendering frames. To
this end, we present a novel degradation simulator for typical NeRF-style artifacts (e.g., rendering noise and blur)
considering the NeRF mechanism. We review the over-
all NeRF rendering pipeline and discuss the typical NeRF-
style degradation cases. Accordingly, we present three basic
degradation types to simulate the real rendered artifacts of
NeRF synthetic views and empirically evaluate the distri-
bution similarity between real rendered photos and our sim-
ulated ones. The feasibility of developing NeRF-agnostic
restoration models has been made possible by constructing
a sizable dataset that covers a variety of NeRF-style degra-
dations, over different scenes.
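As a toy illustration of such simulation, the sketch below applies the two artifact types named above, rendering blur and noise, to a clean training view; the parameters and the overall pipeline are placeholders rather than the actual NDS.

```python
# Degrade a ground-truth view to mimic a NeRF-rendered frame and form a training pair.
import torch
import torchvision.transforms.functional as TF

def degrade_view(img, blur_sigma=1.5, noise_std=0.03):
    """img: (3, H, W) float tensor in [0, 1]."""
    out = TF.gaussian_blur(img, kernel_size=[5, 5], sigma=[blur_sigma, blur_sigma])  # rendering blur
    out = out + noise_std * torch.randn_like(out)                                    # rendering noise
    return out.clamp(0.0, 1.0)

# Paired data for the restorer: (degrade_view(gt_view), gt_view) over many scenes.
```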
Next, we show the necessity of our simulated dataset and
demonstrate that existing state-of-the-art image restoration
frameworks can be used to eliminate NeRF visual artifacts.
Furthermore, we notice, in a typical NeRF setup, neigh-
boring high-quality views come for free, and they serve as
potential reference bases for video-based restoration with a
multi-frame aggregation and fusion module. However, this
is not straightforward because NeRF input views are taken
from a variety of very different angles and locations, mak-
ing the estimation of correspondence quite challenging. To
tackle this problem, we propose a degradation-driven inter-
viewpoint “mixer” that progressively aligns image contents
at the pixel and patch levels. In order to maximize efficiency
and improve performance, we also propose a fast view se-
lection technique to only choose the most pertinent refer-
ence training views for aggregation, as opposed to using the
entire NeRF input views.
In a nutshell, we present a NeRF-agnostic restorer
(termed NeRFLiX) which learns a degradation-driven inter-
viewpoint mixer. As illustrated in Fig. 1, given NeRF syn-
thetic frames with various rendering degradations, NeR-
FLiX successfully restores high-quality results. Our con-
tributions are summarized as
•Universal enhancer for NeRF models. NeRFLiX is
powerful and adaptable, removing NeRF artifacts and
restoring clearly details, pushing the performance of
cutting-edge NeRF models to entirely new levels.
•NeRF rendering degradation simulator. We develop
a NeRF-style degradation simulator (NDS), construct-
ing massive amounts of paired data and aiding the
training of deep neural networks to improve the quality
of NeRF-rendered images.
•Inter-viewpoint mixer. Based on our constructed
NDS, we further propose an inter-viewpoint baseline
that is able to mixhigh-quality neighboring views for
more effective restorations.
•Training time acceleration. We show how NeRFLiX
makes it possible for NeRF models to produce even
better results with a 50%reduction in training time.
|
Zhu_IPCC-TP_Utilizing_Incremental_Pearson_Correlation_Coefficient_for_Joint_Multi-Agent_Trajectory_CVPR_2023
|
Abstract
Reliable multi-agent trajectory prediction is crucial for
the safe planning and control of autonomous systems. Com-
pared with single-agent cases, the major challenge in si-
multaneously processing multiple agents lies in modeling
complex social interactions caused by various driving in-
tentions and road conditions. Previous methods typically
leverage graph-based message propagation or attention
mechanism to encapsulate such interactions in the format
of marginal probabilistic distributions. However, it is in-
herently sub-optimal. In this paper, we propose IPCC-
TP , a novel relevance-aware module based on Incremental
Pearson Correlation Coefficient to improve multi-agent in-
teraction modeling. IPCC-TP learns pairwise joint Gaus-
sian Distributions through the tightly-coupled estimation of
the means and covariances according to interactive incre-
mental movements. Our module can be conveniently em-
bedded into existing multi-agent prediction methods to ex-
tend original motion distribution decoders. Extensive ex-
periments on nuScenes and Argoverse 2 datasets demon-
strate that IPCC-TP improves the performance of baselines
by a large margin.
|
1. Introduction
Trajectory prediction refers to predicting the future tra-
jectories of one or several target agents based on past trajec-
tories and road conditions. It is an essential and emerging
subtask in autonomous driving [22,27,36,44] and industrial
robotics [20, 34, 47].
Previous methods [6, 14, 17, 32, 48] concentrate on
Single-agent Trajectory Prediction (STP), which leverages
past trajectories of other agents and surrounding road condi-
tions as additional cues to assist ego-motion estimation. De-
∗Equal contribution. †Corresponding author.
Figure 1. Schematic illustration of a driving scene with five agents
①−⑤. (a) IPCC-TP models future interactions for t steps based on
a joint Gaussian distribution among one-dimensional increments of
agents’ displacement d. (b) The values of IPCC, represented by ρ
on the axis, indicate the correlation between agents’ trajectories. In
the predicted future, ① and ② interact intensively, with ① yielding
to ② (ρ12 < 0). ④’s movement does not interfere with ⑤ (|ρ45| ≈ 0).
③ is closely following ④ (ρ34 ≤ +1).
spite the significant progress made in recent years, its appli-
cation is limited. The reason is that simultaneously process-
ing all traffic participants remains unreachable, although it
is a common requirement for safe autonomous driving.
A straightforward idea to deal with this issue is to di-
rectly utilize STP methods on each agent respectively and
finally attain a joint collision-free prediction via pairwise
collision detection. Such methods are far from reliably
handling Multi-agent Trajectory Prediction (MTP) since the
search space for collision-free trajectories grows exponen-
tially as the number of agents increases, i.e., N agents with
M modes yield M^N possible combinations, making such
search strategy infeasible with numerous agents. By in-
corporating graph-based message propagation or attention
mechanism into the framework to model the future interac-
tions among multiple agents, [16,30,42] aggregate all avail-
able cues to simultaneously predict the trajectories for mul-
tiple agents in the format of a set of marginal probability
distributions. While these frameworks improve the origi-
nal STP baseline to some extent, the estimation based on a
set of marginal distributions is inherently sub-optimal [33].
To address this issue, a predictor that can predict a joint
probability distribution for all agents in the scene is neces-
sary. An intuitive strategy for designing such a predictor
is to directly predict all parameters for means and covari-
ance matrices, which define the joint probability distribu-
tions [38]. To predict a trajectory of T time-steps with N
agents and M modes, up to 4N^2TM covariance parame-
ters are needed. However, such redundant parameterization
fails to analyze the physical meaning of covariance parame-
ters and also makes it hard to control the invertibility of the
covariance matrix.
In this paper, we take a further step to explicitly inter-
pret the physical meanings of the individual variances and
Pearson Correlation Coefficients (PCC) inside the covari-
ance matrix. Instead of directly modeling the locations of
agents at each sampled time, we design a novel movement-
based method which we term Incremental PCC (IPCC). It
helps to simulate the interactions among agents during driv-
ing. IPCC models the increment from the current time to a
future state at a specific step. Thereby individual variances
capture the local uncertainty of each agent in the Bird-Eye-
View without considering social interactions (Figure 1.a),
while IPCC indicates the implicit driving policies of two
agents, e.g., one agent follows or yields to another agent or
the two agents’ motions are irrelevant (Figure 1.b). Com-
pared to position-based modeling, IPCC has two advan-
tages. First, IPCC models social interactions during driving
in a compact manner. Second, since position-based model-
ing requires respective processes along the two axes, IPCC
can directly model the motion vector, which reduces mem-
ory cost for covariance matrix storage by a factor of four
and facilitates deployment on mobile platforms, such as au-
tonomous vehicles and service robots.
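To make the construction concrete, the sketch below assembles the joint covariance over the agents' one-dimensional displacement increments from per-agent standard deviations and pairwise IPCC values, for a single future step and mode; this is a simplification of the full model and the numbers are illustrative only.

```python
# Sigma_ij = rho_ij * sigma_i * sigma_j defines a joint Gaussian over increments.
import torch

def joint_covariance(sigma, rho):
    """sigma: (N,) per-agent std of the increment; rho: (N, N) symmetric PCC matrix, diag = 1."""
    cov = rho * sigma[:, None] * sigma[None, :]
    return cov + 1e-4 * torch.eye(len(sigma))            # small jitter keeps it invertible

sigma = torch.tensor([0.4, 0.3, 0.5])
rho = torch.tensor([[1.0, -0.6, 0.0],
                    [-0.6, 1.0, 0.2],
                    [0.0, 0.2, 1.0]])                    # e.g. agent 1 yields to agent 2 (rho < 0)
mvn = torch.distributions.MultivariateNormal(torch.zeros(3), joint_covariance(sigma, rho))
nll = -mvn.log_prob(torch.tensor([0.5, -0.2, 0.1]))      # joint likelihood of observed increments
```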
Based on IPCC, we design and implement IPCC-TP, a
module for the MTP task. IPCC-TP can be conveniently
embedded into existing methods by extending the original
motion decoders to boost their performance. Experiments
show that IPCC-TP effectively captures the pairwise rele-
vance of agents and predicts scene-compliant trajectories.
In summary, the main contributions of this work are
threefold:
• We introduce a novel movement-based method,
namely IPCC, that can intuitively reveal the physical
meaning of pairwise motion relevance and facilitate
deployment by reducing memory cost.
• Based on IPCC, we propose IPCC-TP, which is com-
patible with state-of-the-art MTP methods and possesses
the ability to model future interactions with joint Gaus-
sian distributions among multiple agents.
• Experiment results show that methods enhanced with
IPCC-TP outperform the original methods by a large
margin and become the new state-of-the-art methods.
|
Zhu_Transductive_Few-Shot_Learning_With_Prototype-Based_Label_Propagation_by_Iterative_Graph_CVPR_2023
|
Abstract
Few-shot learning (FSL) is popular due to its ability to
adapt to novel classes. Compared with inductive few-shot
learning, transductive models typically perform better as
they leverage all samples of the query set. The two exist-
ing classes of methods, prototype-based and graph-based,
have the disadvantages of inaccurate prototype estimation
and sub-optimal graph construction with kernel functions,
respectively. In this paper, we propose a novel prototype-
based label propagation to solve these issues. Specifically,
our graph construction is based on the relation between
prototypes and samples rather than between samples. As
prototypes are being updated, the graph changes. We also
estimate the label of each prototype instead of considering
a prototype be the class centre. On mini-ImageNet, tiered-
ImageNet, CIFAR-FS and CUB datasets, we show the pro-
posed method outperforms other state-of-the-art methods in
transductive FSL and semi-supervised FSL when some un-
labeled data accompanies the novel few-shot task.
|
1. Introduction
With the availability of large-scale datasets and the rapid
development of deep convolutional architectures, super-
vised learning exceeds in computer vision, voice, and ma-
chine translation [23]. However, lack of data makes the ex-
isting supervised models fail during the inference on novel
tasks. As the annotation process may necessitate expert
knowledge, annotations may be scarce and costly (e.g.,
annotation of medical images). In contrast, humans can
learn a novel concept from just a single example.
Few-shot learning (FSL) aims to mimic the capabilities
of biological vision [7] and it leverages metric learning,
meta-learning, or transfer learning. The purpose of metric-
based FSL is to learn a mapping from images to an embed-
ding space in which images from the same class are closer
*The corresponding author. Code: https://github.com/allenhaozhu/protoLP
Figure 1. Drawbacks of prototype-based and graph-based FSL.
(left) Some label assignments are incorrect due to the imperfect
decision boundary. ( right ) Some “strong” links in the fixed graph
are incorrect as they associate samples of different classes.
together and images from other classes are separated. Meta-
learning FSL performs task-specific optimisation with the
goal to generalize to other tasks well. Pre-training a fea-
ture extractor followed by adapting it for reuse on new class
samples is an example of transfer learning.
Several recent studies [6, 11, 13, 16, 22, 29, 34, 35] ex-
plored transductive inference for few-shot learning. At the
test time, transductive FSL infer the class label jointly for
all the unlabeled query samples, rather than for one sam-
ple/episode at a time. Thus, transductive FSL typically out-
performs inductive FSL. We categorise transductive FSL
into: (i) FSL that requires the use of unlabeled data to esti-
mate prototypes [2,26,27,43,44], and (ii) FSL that builds a
graph with some kernel function and then uses label prop-
agation to predict labels on query sets [22, 29, 61]. How-
ever, the above two paradigms have their own drawbacks.
For prototype-based methods, they usually use the nearest
neighbour classifier, which is based on the assumption that
there exists an embedding where points cluster around a
single prototype representation for each class. Fig. 1 (left)
shows a toy example which is sensitive to the large within-
class variance and low between-class variance. Thus, the
two prototypes cannot be estimated perfectly by the soft
label assignment alone. Fig. 1 (right) shows that Label-
Propagation (LP) and Graph Neural Network (GNN) based
methods depend on the graph construction which is com-
monly based on a specific kernel function determining the
final result. If some nodes are wrongly and permanently
linked, these connections will affect the propagation step.
In order to avoid the above pitfalls of transductive FSL,
we propose prototype-based Label-Propagation (protoLP).
Our transductive inference can work with a generic feature
embedding learned on the base classes. Fig. 2 shows how
to alternately optimize a partial assignment between pro-
totypes and the query set by (i) solving a kernel regression
problem (or optimal transport problem) and (ii) a label prob-
ability prediction by prototype-based label propagation. Im-
portantly, protoLP does not assume the uniform class distri-
bution prior while significantly outperforming other meth-
ods that assume the uniform prior, as shown in ablations on
the imbalanced benchmark [46] where methods relying on
the balanced class prior fail. Our model outperforms state-
of-the-art methods significantly, consistently providing im-
provements across different settings, datasets, and training
models. Our transductive inference is very fast, with run-
times that are close to the runtimes of inductive inference.
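A minimal sketch of the alternating scheme is given below; it replaces the kernel-regression/optimal-transport step and the parameterized propagation with a simple soft assignment between samples and prototypes, so it should be read as an illustration of the alternation rather than as protoLP itself.

```python
# Alternate between (i) propagating labels through sample-to-prototype affinities
# and (ii) re-estimating prototypes from the current soft labels.
import torch
import torch.nn.functional as F

def proto_label_propagation(support, support_labels, query, n_classes, iters=10, tau=10.0):
    """support: (S, D); support_labels: (S,) int; query: (Q, D) -> (Q, C) soft labels."""
    y_s = F.one_hot(support_labels, n_classes).float()
    protos = y_s.t() @ support / y_s.sum(0, keepdim=True).t().clamp(min=1)   # init from support
    for _ in range(iters):
        # (i) The graph is defined by prototype-sample relations and changes as prototypes move.
        affinity = F.normalize(query, dim=1) @ F.normalize(protos, dim=1).t()
        q_probs = torch.softmax(tau * affinity, dim=1)
        # (ii) Update prototypes from support labels plus current query predictions.
        weights = torch.cat([y_s, q_probs], dim=0)                           # (S+Q, C)
        feats = torch.cat([support, query], dim=0)                           # (S+Q, D)
        protos = weights.t() @ feats / weights.sum(0, keepdim=True).t().clamp(min=1e-6)
    return q_probs
```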
Our contributions are as follows:
i. We identify issues resulting from separation of proto-
type-based and label propagation methods. We propose
prototype-based Label Propagation (protoLP) for trans-
ductive FSL, which unifies both models into one frame-
work. Our protoLP estimates prototypes not only from
the partial assignment but also from the prediction of la-
bel propagation. The graph for label propagation is not
fixed as we alternately learn prototypes and the graph.
ii. By introducing parameterized label propagation step,
we remove the assumption of uniform class prior while
other methods highly depend on this prior.
iii. We showcase advantages of protoLP on four datasets
for transductive and semi-supervised learning. Our pro-
toLP outperforms the state of the art under various set-
tings including different backbones, unbalanced query
set, and data augmentation.
|
Zhou_Relightable_Neural_Human_Assets_From_Multi-View_Gradient_Illuminations_CVPR_2023
|
Abstract
Human modeling and relighting are two fundamental
problems in computer vision and graphics, where high-
quality datasets can largely facilitate related research.
However, most existing human datasets only provide multi-
view human images captured under the same illumination.
Although valuable for modeling tasks, they are not read-
ily used in relighting problems. To promote research in
both fields, in this paper, we present UltraStage, a new 3D
human dataset that contains more than 2,000 high-quality
human assets captured under both multi-view and multi-
illumination settings. Specifically, for each example, we
provide 32 surrounding views illuminated with one white
light and two gradient illuminations. In addition to regular
multi-view images, gradient illuminations help recover de-
tailed surface normal and spatially-varying material maps,
enabling various relighting applications. Inspired by re-
cent advances in neural representation, we further interpret
each example into a neural human asset which allows novel
view synthesis under arbitrary lighting conditions. We show
our neural human assets can achieve extremely high capture
performance and are capable of representing fine details
such as facial wrinkles and cloth folds. We also validate
UltraStage in single image relighting tasks, training neural
networks with virtual relighted data from neural assets and
demonstrating realistic rendering improvements over prior
arts. UltraStage will be publicly available to the community
to stimulate significant future developments in various hu-
man modeling and rendering tasks. The dataset is available
at https://miaoing.github.io/RNHA.
|
1. Introduction
Multi-view stereo (MVS) and photometric stereo (PS)
have long served as two complementary workhorses for re-
covering 3D objects, human performances, and environ-
ments [1,29]. Earlier MVS typically exploits feature match-
ing and bundle adjustment to find ray correspondences
across varying viewpoints [13, 43, 82] and subsequently
infer their corresponding 3D points [19, 72, 81]. More re-
cent neural modeling approaches have emerged as a more
effective solution by implicitly encoding both geometry and
appearance using neural networks [58]. PS, in contrast,
generally assumes a single (fixed) viewpoint and employs
appearance variations under illumination changes to infer
the surface normal and reflectance (e.g., albedo) [1,20,23–
25, 35, 53, 54, 62]. Shape recovery in PS essentially corre-
sponds to solving inverse rendering problems [25] where
recent approaches also move towards neural representa-
tions to encode the photometric information [55,56,68,90].
Most recently, in both MVS and PS neural representations
have demonstrated reduced data requirement [48, 88] and
increased accuracy [21, 22]. In the context of 3D human
scanning, MVS and PS exhibit drastically different benefits
and challenges, in synchronization, calibration, reconstruc-
tion, etc. For example, for high-quality performance cap-
ture, MVS has long relied on synchronized camera arrays
but assumes fixed illumination. The most popular and per-
haps effective apparatus is the camera dome with tens and
even hundreds of cameras [50]. Using classic or neural rep-
resentations, such systems can produce geometry with rea-
sonable quality. However, ultra-fine details such as facial
wrinkles or clothing folds are generally missing due to lim-
ited camera resolutions, small camera baselines, calibration
errors, etc. [87].
In comparison, a typical PS capture system uses a sin-
gle camera and hence eliminates the need for cross-camera
synchronization and calibrations. Yet the challenges shift
to synchronization across the light sources and between
the lights and the camera, as well as calibrating the light
sources. PS solutions are epitomized by the USC Light-
Stage [15, 30, 87] that utilizes thousands of light sources to
provide controllable illuminations [23,25,53,54,79], with a
number of recent extensions [2,27,34,35,41,65,73,95,96].
A key benefit of PS is that it can produce very high-quality
normal maps significantly surpassing MVS reconstruction.
Further, the appearance variations can be used to infer sur-
face materials and conduct high-quality appearance editing
and relighting applications [2, 65, 79, 95, 96]. However, re-
sults from a single camera cannot fully cover the complete
human geometry. Nor have they exploited multi-view re-
constructions as useful priors. A unified capture apparatus
that combines MVS and PS reconstructions has the poten-
tial to achieve unprecedented reconstruction quality, rang-
ing from recovering ultra-fine details such as clothes folds
and facial wrinkles to supporting free-view relighting in
metaverse applications. In particular, the availability of a
comprehensive MVS-PS dataset may enable new learning-
based approaches for reconstruction [58,66,77,89,92,100],
rendering [11,58,83,86,93,98], and generation [18,37–39,
71]. However, there is very little work or dataset available
to the public due to challenges on multiple fronts for constructing such imaging systems.
Figure 2. Data overview. Here we show 5 examples in our dataset. From front to back are the normal map, albedo map, color gradient illumination observation, and inverse color gradient illumination observation, respectively. In total, UltraStage Dataset contains more than 2,000 human-centric scenes of a single person or multiple people with various gestures, clothes, and interactions. For each scene, we provide 8K resolution images captured by 32 surrounding cameras under 3 illuminations. See Supp. for more examples.
To fill in the gap, we construct PlenOptic Stage Ultra,
an emerging hardware system that conducts simultaneous
MVS and PS acquisition of human performances. PlenOp-
tic Stage Ultra is built upon an 8-meter radius large-scale
capture stage with 22,080 light sources to illuminate a per-
former with controllable illuminations, as well as places 32
cameras on the cage to cover the 360° surrounding views of
the object. We present detailed solutions to obtain accurate
camera-camera and camera-light calibration and synchro-
nization as well as conduct camera ISP correction and light-
ing intensity rectification. For PS capture, PlenOptic Stage
Ultra adopts tailored illuminations [27,53,55,57]: for each
human model, we illuminate it with one white light and
two directional gradient illuminations. This produces ex-
tra high-quality surface normal and anisotropic reflectance
largely missing in existing MVS human reconstruction. A
direct result of our system is UltraStage, a novel human
dataset that provides more than 2,000 human models under
different body movements and with sophisticated clothing
like frock or cheongsam, to be disseminated to the commu-
nity.
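As background on how such gradient pairs are typically exploited (a standard two-shot relation from the gradient-illumination literature, stated here as an assumption rather than as this paper's derivation), per-pixel diffuse normals follow from a simple intensity ratio:

\[
  n_k \;\approx\; \frac{I_k^{\mathrm{grad}} - I_k^{\mathrm{inv}}}{I_k^{\mathrm{grad}} + I_k^{\mathrm{inv}}}, \qquad k \in \{x, y, z\},
\]

where $I^{\mathrm{grad}}$ and $I^{\mathrm{inv}}$ denote the observations under the gradient and inverse-gradient illuminations and each color channel encodes one axis $k$.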
We further demonstrate several neural modeling and ren-
dering techniques [11, 58, 59, 93, 98, 99] to process the
dataset for recovering ultra-fine geometric details, modeling
surface reflectance, and supporting relighting applications.
Specifically, we show neural human assets achieve signifi-
cantly improved rendering and reconstruction quality over
purely MVS or PS based methods [6,19,20,25,53,72], e.g.,
they can render exquisite details such as facial wrinkles and
cloth folds. To use the neural assets for relighting, we adopt
albedo and normal estimation networks for in-the-wild hu-
man full-body images and we show our dataset greatly en-
hances relighting quality over prior art [36,80]. The dataset
as well as the processing tools are expected to stimulate
significant future developments such as generation tasks in
both shape and appearance.
Figure 3. System Overview. The PlenOptic Stage Ultra is an 8-
meter lighting stage composed of 32 surrounding cameras and 22,080
controllable light sources. It supports both MVS and PS capture
settings, enabling high-quality geometry and material acquisition
for large-scale subjects and objects. Note that a human is in the middle of the picture.
|
Zheng_Curricular_Contrastive_Regularization_for_Physics-Aware_Single_Image_Dehazing_CVPR_2023
|
Abstract
Considering the ill-posed nature, contrastive regularization has been developed for single image dehazing, introducing the information from negative images as a lower bound. However, the contrastive samples are non-consensual, as the negatives are usually represented distantly from the clear (i.e., positive) image, leaving the solution space still under-constricted. Moreover, the interpretability of deep dehazing models is underexplored towards the physics of the hazing process. In this paper, we propose a novel curricular contrastive regularization targeted at a consensual contrastive space as opposed to a non-consensual one. Our negatives, which provide better lower-bound constraints, can be assembled from 1) the hazy image, and 2) corresponding restorations by other existing methods. Further, due to the different similarities between the embeddings of the clear image and negatives, the learning difficulty of the multiple components is intrinsically imbalanced. To tackle this issue, we customize a curriculum learning strategy to reweight the importance of different negatives. In addition, to improve the interpretability in the feature space, we build a physics-aware dual-branch unit according to the atmospheric scattering model. With the unit, as well as curricular contrastive regularization, we establish our dehazing network, named C2PNet. Extensive experiments demonstrate that our C2PNet significantly outperforms state-of-the-art methods, with extreme PSNR boosts of 3.94dB and 1.50dB, respectively, on SOTS-indoor and SOTS-outdoor datasets. Code is available at https://github.com/YuZheng9/C2PNet.
|
1. Introduction
As a common atmospheric phenomenon, haze noticeably
degrades the quality of photographed images, severely lim-
iting the performance of subsequent high-level visual tasks such as vehicle re-identification [7] and scene understanding [35]. Similar to the emergence of other image restoration task solvers [12,13,39,43], valid image dehazing techniques are required for handling vision-based applications.
Figure 1. Upper panel: Examination for contrastive regularization based on three difficulty levels of the negatives in the consensual contrastive space, reporting PSNR (dB) on SOTS-indoor with FFA-Net as the baseline: (a) baseline, rate E:H:U = 0:0:0, 36.39; (b) 1:0:0, 37.89; (c) 0:0:1, 33.96; (d) 0:1:0, 38.19; (e) 1:2:0, 38.57; (f) 1:0:2, 38.25; (g) 1:1:1, 38.68. Lower panel: Illustration of contrastive samples in the consensual and non-consensual spaces, where the positive (GT) pulls the anchor while negatives push it; non-consensual negatives leave the solution space under-constricted.
Deep learning based methods have achieved tremendous
success in single image dehazing and can be roughly cate-
gorized into two classes: physics-free methods [5,10,17,24] and physics-aware methods [4,8,11,34]. Regarding the former, most of them usually use ground-truth images with predicted restorations to enforce L1/L2 distance-based consistency and also involve various regularizations [29,42] as additional constraints to cope with the ill-posed property. Noticing that all of those regularizations ignore the information from negative images as a lower bound, contrastive regularization (CR) [40] was proposed to introduce different hazy
images as negatives and the ground-truth image as the pos-
itive and further uses contrastive learning [19,20] to guarantee a closed solution space. Moreover, it is shown that better performances can be achieved when using more negatives since diverse degraded patterns are included as cues. However, the issue is that the contents of those negatives are distinct from the positive, and their embeddings may be too
distant, leaving the solution space still under-constricted.
To remedy this issue, a natural idea is to use the negatives
in the consensual contrastive space¹ (see the lower panel in Fig. 1) as better lower-bound constraints, which can be easily assembled from the hazy input and the corresponding restorations by other existing methods. In such cases, the negatives can be "closer" to the positive than those in the non-consensual space since the diversity of such negatives is more associated with the haze (or haze residue) rather than any other semantics. However, an intrinsic dilemma arises when the embedding of a negative is too close to that of the positive, as its pushing force to an anchor (i.e., the prediction) may cancel out the pulling force of the positive. Such a learning difficulty can confuse the anchor to move towards the positive, especially in the early training stage.
This intuition is further examined in the upper panel of
Fig. 1. We use FFA-Net [33] as the baseline (row (a)) and SOTS-indoor [28] as the testing dataset to explore the impact of the negatives in the consensual space with diverse difficulty. Specifically, we define the difficulty of the negatives at three levels: easy (E), hard (H), and ultra-hard (U). We adopt the hazy input as the easy negative, and use a coarse strategy to distinguish between the latter two types, i.e., whether the PSNR of the negative is greater than 30. First, in the single-negative case (rows (b)-(d)), an interesting finding is that using a hard sample as the negative achieves the best performance compared to the other two settings, and using an ultra-hard negative is even worse than the baseline. This reveals that a "close" negative has the potential to promote the effectiveness of the dehazing model, but not the closer the better, due to the learning difficulty. While in the multi-negative case² (rows (e)-(g)), we have observed that comprehensively covering negatives with different difficulty levels, including ultra-hard samples, can lead to the best performance. It implies that negatives at different difficulty levels can all contribute to the training phase. These observations motivate us to explore how to wisely arrange the multiple negative pairs in a consensual space into the CR during training.
Moving on to the realm of physics-aware deep models, most of them utilize the atmospheric scattering model [31, 32] in the raw space, without fully exploring the beneficial feature-level information. PFDN [11] is the only work that attempts to express the physics model as a basic unit in the network. The unit is designed as a shared structure to predict the latent features corresponding to the atmospheric light and transmission map. Nevertheless, the former is usually assumed to be homogeneous while the latter is non-homogeneous, and thus their features cannot be approximated in the same way. Therefore, it is still an open problem how to accurately realize the interpretability of the feature space of the deep network using the physics model, which is another aspect we are interested in.
¹ In this space, the contents of the negatives are identical to the positive sample, except for the haze distribution. Here, we use the terms (non-)consensual contrastive space and (non-)consensual space interchangeably, and a negative in the consensual space is denoted as a consensual negative.
² We give each negative the same weight in the regularization under this case, and we omit the cases of E=0, which would drastically decrease the performance. We will discuss the reason for this in Sec. 3.
In this paper, we propose a curricular contrastive reg-
ularization using hazy or restored images as negatives in
the consensual space for image dehazing to address the first
issue. Informed by our analysis, which suggests that the difficulty of consensual negatives can impact the effectiveness of the regularization, we present a curriculum learning strategy to arrange these negatives to mitigate learning ambiguity. Specifically, we split the negatives into three types (i.e., easy, hard, and ultra-hard) and assign different weights to corresponding negative pairs in CR. Meanwhile, the difficulty levels of the negatives are dynamically adjusted as the anchor moves towards the positive in the representation space during training. In this way, the proposed regularization can facilitate the dehazing models to be stably optimized in a more compact solution space.
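As a rough PyTorch-style sketch of this idea (an illustration only, not the released C2PNet code; the L1 distances, the ratio form of the loss, and the weighting scheme below are assumptions):

import torch
import torch.nn.functional as F

def curricular_cr(anchor_feat, pos_feat, neg_feats, neg_levels, weights, eps=1e-7):
    # anchor_feat, pos_feat: (B, D) features of the prediction and of the clear image.
    # neg_feats: list of (B, D) features of consensual negatives (the hazy input and
    # restorations by other methods); neg_levels: 'easy' / 'hard' / 'ultra' per negative.
    # weights: level -> scalar, updated by the curriculum as difficulty levels are
    # dynamically re-assessed while the anchor approaches the positive.
    pull = F.l1_loss(anchor_feat, pos_feat)
    push = sum(weights[lvl] * F.l1_loss(anchor_feat, neg)
               for neg, lvl in zip(neg_feats, neg_levels))
    # Ultra-hard negatives sit very close to the positive; keeping their weights small
    # early in training prevents their pushing force from cancelling the pulling force.
    return pull / (push + eps)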
We propose a physics-aware dual-branch unit (PDU) re-
garding the second issue. The PDU approximates the features corresponding to the atmospheric light and the transmission map in dual branches, respectively considering the physical characteristics of each factor. The features of the latent clear image can thus be synthesized more precisely in line with the physics model. Finally, we establish C2PNet, our dehazing network that deploys PDUs into a cascaded backbone with curricular contrastive regularization.
In summary, our key contributions are as follows:
• We propose a novel C2PNet for haze removal that employs curricular contrastive regularization and enforces physics-based priors in the feature space. Our method outperforms SOTAs in both synthetic and real-world scenarios. In particular, we achieve significant PSNR boosts of 3.94dB and 1.50dB on the SOTS-indoor and SOTS-outdoor datasets, respectively.
• The proposed regularization adopts a unique consensual negative-based approach for dehazing and incorporates a self-contained curriculum learning strategy that dynamically calibrates the priority and difficulty levels of the negatives. It is also proven to enhance the performance of SOTAs as a generalized regularization technique, surpassing previous related strategies.
• With careful consideration of the characteristics of the factors involved, we build the PDU based on an unprecedented expression of the physics model. This innovative design promotes feature transmission and extraction in the feature space, guided by physics priors.
|
Zhu_STMT_A_Spatial-Temporal_Mesh_Transformer_for_MoCap-Based_Action_Recognition_CVPR_2023
|
Abstract
We study the problem of human action recognition using
motion capture (MoCap) sequences. Unlike existing tech-
niques that take multiple manual steps to derive standard-
ized skeleton representations as model input, we propose
a novel Spatial-Temporal Mesh Transformer (STMT) to di-
rectly model the mesh sequences. The model uses a hier-
archical transformer with intra-frame offset attention and
inter-frame self-attention. The attention mechanism allows
the model to freely attend between any two vertex patches
to learn non-local relationships in the spatial-temporal do-
main. Masked vertex modeling and future frame prediction
are used as two self-supervised tasks to fully activate the
bi-directional and auto-regressive attention in our hierar-
chical transformer. The proposed method achieves state-of-
the-art performance compared to skeleton-based and point-
cloud-based models on common MoCap benchmarks. Code
is available at https://github.com/zgzxy001/
STMT .
|
1. Introduction
Motion Capture (MoCap) is the process of digitally
recording the human movement, which enables the fine-
grained capture and analysis of human motions in 3D space
[40, 50]. MoCap-based human perception serves as key
elements for various research fields, such as action recog-
nition [15, 46–48, 50, 57], tracking [47], pose estimation
[1, 27], imitation learning [76], and motion synthesis [47].
Besides, MoCap is one of the fundamental technologies to
enhance human-robot interactions in various practical sce-
narios including hospitals and manufacturing environment
[22, 28, 41, 43, 45, 77]. For example, Hayes [22] classi-
fied automotive assembly activities using MoCap data of
humans and objects. Understanding human behaviors from
MoCap data is fundamentally important for robotics per-
ception, planning, and control.
Skeleton representations are commonly used to model
MoCap sequences. Some early works [3, 29] directly used
body markers and their connectivity relations to form a
skeleton graph. However, the marker positions depend on
each subject (person), which brings sample variances within
each dataset. Moreover, different MoCap datasets usu-
ally have different numbers of body markers. For exam-
ple, ACCAD [48], BioMotion [64], Eyes Japan [15], and
KIT [42] have 82, 41, 37, and 50 body markers respec-
tively. This prevents the model to be trained and tested on
a unified framework. To use standard skeleton representa-
tions such as NTU RGB+D [58], Punnakkal et al. [50] first
used Mosh++ to fit body markers into SMPL-H meshes, and
then predicted a 25-joint skeleton [33] from the mesh ver-
tices [54]. Finally, a skeleton-based model [60] was used
to perform action recognition. Although those methods
achieved advanced performance, they have the following
disadvantages. First, they require several manual steps to
map the vertices from mesh to skeleton. Second, skeleton
representations lose the information provided by original
MoCap data ( i.e., surface motion and body shape knowl-
edge). To overcome those disadvantages, we propose a
mesh-based action recognition method to directly model
dynamic changes in raw mesh sequences, as illustrated in
Figure 1.
Though mesh representations provide fine-grained body
information, it is challenging to classify high-dimensional
mesh sequences into different actions. First, unlike struc-
tured 3D skeletons which have joint correspondence across
frames, there is no vertex-level correspondence in meshes
(i.e., the vertices are unordered). Therefore, the local con-
nectivity of every single mesh can not be directly aggre-
gated in the temporal dimension. Second, mesh repre-
sentations encode local connectivity information, while ac-
tion recognition requires global understanding in the whole
spatial-temporal domain.
To overcome the aforementioned challenges, we pro-
pose a novel S patial-T emporal M esh T ransformer ( STMT ).
STMT leverages mesh connectivity information to build
patches at the frame level, and uses a hierarchical trans-
former which can freely attend to any intra- and inter-frame
patches to learn spatial-temporal associations. The hierar-
chical attention mechanism allows the model to learn patch
correlation across the entire sequence, and alleviate the re-
quirement of explicit vertex correspondence. We further de-
fine two self-supervised learning tasks, namely masked ver-
tex modeling and future frame prediction, to enhance the
global interactions among vertex patches. To reconstruct
masked vertices of different body parts, the model needs to
learn prior knowledge about the human body in the spatial
dimension. To predict future frames, the model needs to
understand meaningful surface movement in the temporal
dimension. To this end, our hierarchical transformer pre-
trained with those two objectives can further learn spatial-
temporal context across entire frames, which is beneficial
for the downstream action recognition task.
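A schematic PyTorch sketch of the hierarchical attention described above (a simplified reading for illustration; module names, dimensions, the use of plain multi-head attention in place of the offset attention, and the factorized intra-then-inter ordering are assumptions rather than the exact STMT design):

import torch
import torch.nn as nn

class HierarchicalMeshAttention(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.intra = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (B, T, P, D) tokens for T frames and P vertex patches per frame.
        B, T, P, D = x.shape
        s = x.reshape(B * T, P, D)
        s, _ = self.intra(s, s, s)                     # intra-frame (spatial) attention
        t = s.reshape(B, T, P, D).permute(0, 2, 1, 3).reshape(B * P, T, D)
        t, _ = self.inter(t, t, t)                     # inter-frame (temporal) attention
        return t.reshape(B, P, T, D).permute(0, 2, 1, 3)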
We evaluate our model on common MoCap benchmark
datasets. Our method achieves state-of-the-art performance
compared to skeleton-based and point-cloud-based models.
The contributions of this paper are three-fold:
• We introduce a new hierarchical transformer architec-
ture, which jointly encodes intrinsic and extrinsic rep-
resentations, along with intra- and inter-frame atten-
tion, for spatial-temporal mesh modeling.
• We design effective and efficient pretext tasks, namely
masked vertex modeling and future frame prediction,
to enable the model to learn from the spatial-temporal
global context.
• Our model achieves superior performance compared
to state-of-the-art point-cloud and skeleton models on
common MoCap benchmarks.
|
Zhang_Wide-Angle_Rectification_via_Content-Aware_Conformal_Mapping_CVPR_2023
|
Abstract
Despite the proliferation of ultra wide-angle lenses on
smartphone cameras, such lenses often come with severe
image distortion (e.g. curved linear structure, unnaturally
skewed faces). Most existing rectification methods adopt a
global warping transformation to undistort the input wide-
angle image, yet their performances are not entirely satis-
factory, leaving many unwanted residue distortions uncor-
rected or at the sacrifice of the intended wide FoV (field-
of-view). This paper proposes a new method to tackle these
challenges. Specifically, we derive a locally-adaptive polar-
domain conformal mapping to rectify a wide-angle image.
Parameters of the mapping are found automatically by an-
alyzing image contents via deep neural networks. Experi-
ments on a large number of photos have confirmed the su-
perior performance of the proposed method compared with
all available previous methods.
|
1. Introduction
It has become trendy to equip modern smartphone cam-
eras with ultra wide-angle lenses, to allow the user to shoot
photographs of natural landscapes or buildings with a wide
field-of-view (FoV), or capture a group-selfie in a tight
space. This trend can be easily seen on high-end phones
such as the iPhone 13, which features a rear camera with a 120° FoV, or the Samsung S21 with a 123° FoV.
While such lenses provide the user with an immersive vi-
sual experience, they also induce apparent and unavoidable
image distortions, resulting in e.g. curved straight lines or
sheared human faces. Traditional methods for lens distor-
tion removal solve this problem by finding a global para-
metric geometric transformation to warp the input image.
Their performances are however far from satisfactory, with
either obvious residual distortions on linear structure or lo-
cal shapes, or missing image contents at the image bound-
aries due to a much compromised field of view.
In contrast, human eyes, while enjoying a wide field of
view (about 120° in the monocular case [24]), are capable of
perceiving a wide environment without obvious distortions.
In our mind’s eye, lines appear to be straight, and objects
preserve their natural shapes. Our brain seems to be able
to intelligently “undistort” different parts of the view-field
by applying different content-aware transformations. More-
over, it is also recognized that human vision is most sensi-
tive to global linear structures as well as perceptually salient
regions in a scene. Given that distortions are unavoidable
when one projects a view-sphere onto a flat image plane, our
goal of this paper is not to eliminate all distortions (which
is impossible), but to minimize those most visually salient
distortions such that they become unnoticeable or tolerable.
To this end, this paper develops a content-aware image
projection method which focuses on correcting the most
salient distortions ( e.g. visual features), while preserving
local shapes in the scene as much as possible. Specifically,
our method searches for an optimal content-aware confor-
mal mapping which warps a wide-angle input image to a
rectified one, in a locally adaptive manner by respecting lo-
cal image contents. This way, it not only eliminates most
noticeable distortions in the scene, but also retains the wide
FoV , offering the user the intended immersive experience
endowed by the wide-angle lens.
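The objective implied by this design can be sketched as a weighted combination of content-aware energy terms (the notation and the λ weights here are illustrative assumptions, not the paper's exact formulation):

\[
  \min_{u}\; E(u) \;=\; \lambda_{c}\,E_{\mathrm{shape}}(u) \;+\; \lambda_{l}\,E_{\mathrm{line}}(u) \;+\; \lambda_{s}\,E_{\mathrm{smooth}}(u) \;+\; \lambda_{b}\,E_{\mathrm{boundary}}(u),
\]

where $u$ is the warp applied to the wide-angle image, $E_{\mathrm{shape}}$ is the locally adaptive polar-domain conformal term, $E_{\mathrm{line}}$ penalizes bending of salient straight lines, $E_{\mathrm{smooth}}$ regularizes the warp, $E_{\mathrm{boundary}}$ discourages cropping away the wide field of view, and the weights are modulated spatially by the detected image content.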
Specifically, key contributions of the paper are:
• An automatic content-aware wide-angle image rec-
tification method which preserves both local shapes
and global structures by analyzing image contents via
deep-learning.
• A new formulation for Least-Squares Conformal Map-
ping (LSCM) in the polar domain to achieve locally
adaptive shape-preserving transformation.
• A new optimization procedure which incorporates
multiple energy terms, each encodes a different prior
on local shapes, linear structures, smoothness and im-
age boundaries, respectively.
Our method strikes an excellent balance between local-
shape-preserving (e.g., “circles remain circular”) and global linear-structure-preserving (e.g., “straight lines must be straight”), making the rectified images look real, natu-
ral, and visually pleasing, while at the same time enjoying
the immersive wide-angle visual experience by retaining
the original wide field of view. Our method evidently out-
performs all previous methods for wide-angle rectification,
including both global warp based methods ( e.g. perspec-
tive correction, Mercator projection), and local optimization
method [4] that alters the orientation of local shapes, and
a method that is restricted to portrait photos only [22]. Our
method is fully automatic, without the need of human in-
tervention. It also runs fast, taking about 1-2 seconds per
image, and can be easily optimized on a mobile GPU to reach sub-second processing time, sufficient for real-world
photography applications.
|
Zhao_ARKitTrack_A_New_Diverse_Dataset_for_Tracking_Using_Mobile_RGB-D_CVPR_2023
|
Abstract
Compared with traditional RGB-only visual tracking,
few datasets have been constructed for RGB-D tracking. In
this paper, we propose ARKitTrack, a new RGB-D track-
ing dataset for both static and dynamic scenes captured
by consumer-grade LiDAR scanners equipped on Apple’s
iPhone and iPad. ARKitTrack contains 300 RGB-D se-
quences, 455 targets, and 229.7K video frames in total.
Along with the bounding box annotations and frame-level
attributes, we also annotate this dataset with 123.9K pixel-
level target masks. Besides, the camera intrinsic and cam-
era pose of each frame are provided for future develop-
ments. To demonstrate the potential usefulness of this
dataset, we further present a unified baseline for both box-
level and pixel-level tracking, which integrates RGB fea-
tures with bird’s-eye-view representations to better explore
cross-modality 3D geometry. In-depth empirical analy-
sis has verified that the ARKitTrack dataset can signif-
icantly facilitate RGB-D tracking and that the proposed
baseline method compares favorably against the state of
the art. The code and dataset are available at https:
//arkittrack.github.io .
|
1. Introduction
As a fundamental and longstanding problem in computer
vision, visual tracking has been studied for decades and
achieved significant progress in recent years with many ad-
vanced RGB trackers [6, 7, 9, 20, 37, 39] and datasets [11,
14, 29, 45] being developed. Nonetheless, there still exist
many challenging situations such as occlusion, distraction,
extreme illumination, etc., which have not been well ad-
dressed. With the wide application of commercially avail-
able RGB-D sensors, many recent works [15, 33, 42, 54, 55]
have focused on the RGB-D tracking problem, as depth can
provide additional 3D geometry cues for tracking in com-
plicated environments. The development of RGB-D track-
ing is always boosted by the emergence of RGB-D tracking
datasets. The early RGB-D datasets [36, 43] only have a
limited number of video sequences and can hardly meet the
requirement of sufficient training and evaluating sophisti-
cated RGB-D trackers. To alleviate this issue, two larger
datasets [26, 48] have been built recently and successfully
adopted in the VOT-RGBD challenges [16–18].
Though existing RGB-D tracking datasets strongly bene-
fit the development of RGB-D trackers, they are still limited
in the following two aspects. First, these datasets are col-
lected using Realsense or Kinect depth cameras, which re-
quire edge devices for onsite computing or post-processing
and are not easily portable. As a result, it severely re-
stricts the scene diversity, and most videos are captured un-
der static scenes, which causes a large domain gap between
the collected dataset and real-world applications. Second,
existing RGB-D tracking datasets only contain bounding
box-level annotations and mostly fail to provide pixel-level
mask labels. Therefore, they are not applicable for train-
ing/evaluating pixel-level tracking tasks (i.e., VOS).
With the recent launch of built-in depth sensors of mo-
bile phones (e.g., LiDAR, ToF, and stereo cameras) and the
release of AR frameworks (e.g., Apple’s ARKit [3] and
Google’s ARCore [2]), it becomes more convenient than
ever to capture depth data under diverse scenes using mobile
phones. Compared to prior depth devices, mobile phones
are highly portable and more widely used for daily video
recording. Besides, the depth maps captured by consumer-
grade sensors mounted on mobile phones are also different
from previous datasets in terms of resolution, accuracy, etc.
In light of the above observations, we present ARKit-
Track, a new RGB-D tracking dataset captured using
iPhone built-in LiDAR with the ARKit framework. The
dataset contains 300 RGB-D sequences, 229.7K video
frames, and 455 targets. Precise box-level target loca-
tions, pixel-level target masks, and frame-level attributes
are also provided for comprehensive model training and
evaluation. Compared to existing RGB-D tracking datasets,
ARKitTrack enjoys the following two distinct advantages.
First, ARKitTrack covers more diverse scenes captured un-
der both static and dynamic viewpoints. Camera intrinsic
and 6-DoF poses estimated using ARKit are also provided
for more effective handling of dynamic scenes. Therefore,
ARKitTrack is more consistent with real application scenar-
ios, particularly for mobile phones. Second, to our best
knowledge, ARKitTrack is one of the first RGB-D track-
ing datasets annotated with both box-level and pixel-level
labels, which is able to benefit both VOT and VOS.
To demonstrate the strong potential of ARKitTrack, we
design a general baseline RGB-D tracker, which effectively
narrows the gap between visual object tracking (VOT) and
segmentation (VOS). Most existing RGB-D trackers em-
ploy the low-level appearance cues (e.g., contours and regions) of depth maps but fail to explore the 3D geometry
information. To remedy this drawback, we propose to in-
tegrate RGB features with bird’s-eye-view (BEV) represen-
tations through a cross-view feature fusion scheme, where
RGB feature is mainly used for target appearance modeling
and BEV representations built from depth maps can better
capture 3D scene geometry. Experiments on our ARKit-
Track datasets demonstrate the merit of the baseline tracker.
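A rough PyTorch sketch of this cross-view fusion idea (one plausible instantiation written for illustration, not the released code; the cross-attention formulation and token shapes are assumptions):

import torch
import torch.nn as nn

class CrossViewFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb_feat, bev_feat):
        # rgb_feat: (B, H*W, D) image-plane appearance tokens from the RGB branch.
        # bev_feat: (B, X*Y, D) tokens pooled from the depth point cloud on a BEV grid.
        fused, _ = self.attn(query=rgb_feat, key=bev_feat, value=bev_feat)
        return self.norm(rgb_feat + fused)   # appearance tokens enriched with 3D geometry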
Our contributions are three-fold:
• A new RGB-D tracking dataset, ARKitTrack, contain-
ing diverse static and dynamic scenes with both box-
level and pixel-level precise annotations.
• A unified baseline method for RGB-D VOT and VOS,
combining both RGB and 3D geometry for effective
RGB-D tracking.
• In-depth evaluation and analysis of the new dataset and
the baseline method, providing new knowledge to pro-
mote future study in RGB-D tracking.
|
Zhou_Learning_Discriminative_Representations_for_Skeleton_Based_Action_Recognition_CVPR_2023
|
Abstract
Human action recognition aims at classifying the cate-
gory of human action from a segment of a video. Recently,
people have dived into designing GCN-based models to ex-
tract features from skeletons for performing this task, be-
cause skeleton representations are much more efficient and
robust than other modalities such as RGB frames. However,
when employing the skeleton data, some important clues
like related items are also discarded. It results in some
ambiguous actions that are hard to distinguish and
tend to be misclassified. To alleviate this problem, we pro-
pose an auxiliary feature refinement head (FR Head), which
consists of spatial-temporal decoupling and contrastive fea-
ture refinement, to obtain discriminative representations of
skeletons. Ambiguous samples are dynamically discovered
and calibrated in the feature space. Furthermore, FR Head
could be imposed on different stages of GCNs to build a
multi-level refinement for stronger supervision. Extensive
experiments are conducted on NTU RGB+D, NTU RGB+D
120, and NW-UCLA datasets. Our proposed models obtain
competitive results from state-of-the-art methods and can
help to discriminate those ambiguous samples. Codes are
available at https://github.com/zhysora/FR-Head.
|
1. Introduction
In human-to-human communication, action plays a par-
ticularly important role. The behaviors convey intrinsic in-
formation like emotions and potential intentions and thus
help to understand the person. Empowering intelligent ma-
chines with the same ability to understand human behaviors
is critical for natural human-computer interaction and many
other practical applications, and has been attracting much
attention recently.
Figure 1. There are some actions that are hard to recognize because the skeleton representations lack important interactive objects and contexts, which makes them easily confused with each other.
Nowadays, obtaining 2D/3D skeletons of humans has become much easier thanks to the advanced sensor technology and human pose estimation algorithms. Skeletons
are compact and robust representations that are immune to
viewpoint changes and cluttered backgrounds, making them
attractive for action recognition. A typical way to use skele-
tons for action recognition is to build Graph Convolutional
Networks (GCNs) [38]. The joints and bones in the human
body naturally form graphs, which make GCNs a perfect
tool to extract topological features of skeletons. GCN-based
methods have become more and more popular, with another
merit that the models can be built lightweight and have high
computational efficiency compared with models processing
video frames.
However, using skeletons to recognize actions has some
limitations. A major problem is that skeleton representation
lacks important interactive objects and contextual informa-
tion for distinguishing similar actions. As shown in Fig. 1, it
is hard to distinguish “Writing”, “Reading” and “Typing on
a keyboard” based on the skeleton view alone. In contrast,
a model can recognize them from RGB frames by focusing
on the related items. These actions are easily confused with
each other and should be given more attention.
To alleviate this drawback, we propose a feature re-
finement module using contrastive learning to lift the dis-
criminative ability of features between ambiguous actions.
We first decouple hidden features into spatial and temporal
components so that the network can better focus on discrim-
inative parts among ambiguous actions along the topologi-
cal and temporal dimensions. Then we identify the con-
fident and ambiguous samples based on the model predic-
tion during training. Confident samples are used to main-
tain a prototype for each class, which is achieved by a con-
trastive learning loss to constrain intra-class and inter-class
distances. Meanwhile, ambiguous samples are calibrated
by being closer to or far away from confident samples in
the feature space. Furthermore, the aforementioned feature
refinement module can be embedded into multiple types of
GCNs to improve hierarchical feature learning. It will pro-
duce a multi-level contrastive loss, which is jointly trained
with the classification loss to improve the performance of
ambiguous actions. Our main contributions are summarized
as follows:
• We propose a discriminative feature refinement mod-
ule to improve the performance of ambiguous actions
in skeleton based action recognition. It uses con-
trastive learning to constrain the distance between con-
fident samples and ambiguous samples. It also de-
couples the raw feature map into spatial and temporal
components in a lightweight way for efficient feature
enhancement.
• The feature refinement module is plug-and-play and
compatible with most GCN-based models. It can be
jointly trained with other losses but discarded in the
inference stage.
• We conduct extensive experiments on NTU RGB+D,
NTU RGB+D 120, and NW-UCLA datasets to com-
pare our proposed methods with the state-of-the-art
models. Experimental results demonstrate the signif-
icant improvement of our methods.
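The contrastive refinement described above can be sketched roughly as follows (a simplified PyTorch illustration, not the official FR Head implementation; the confidence threshold, EMA prototype update, and InfoNCE-style loss are assumptions):

import torch
import torch.nn.functional as F

def fr_head_loss(feats, logits, labels, prototypes, conf_thresh=0.9, momentum=0.9, temp=0.1):
    # feats: (B, D) refined features; logits: (B, C); prototypes: (C, D) gradient-free buffer.
    probs = logits.softmax(dim=1)
    conf, pred = probs.max(dim=1)
    confident = (conf > conf_thresh) & (pred == labels)
    # Confident samples maintain one prototype per class (EMA update, no gradient).
    with torch.no_grad():
        for c in labels[confident].unique():
            mean_c = feats[confident & (labels == c)].mean(0)
            prototypes[c] = momentum * prototypes[c] + (1 - momentum) * mean_c
    # Ambiguous samples are calibrated: pulled toward their own class prototype
    # and pushed away from the others.
    ambiguous = ~confident
    if ambiguous.sum() == 0:
        return feats.new_zeros(())
    sim = F.normalize(feats[ambiguous], dim=1) @ F.normalize(prototypes, dim=1).T / temp
    return F.cross_entropy(sim, labels[ambiguous])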
|
Zheng_TrojViT_Trojan_Insertion_in_Vision_Transformers_CVPR_2023
|
Abstract
Vision Transformers (ViTs) have demonstrated the state-
of-the-art performance in various vision-related tasks. The
success of ViTs motivates adversaries to perform back-
door attacks on ViTs. Although the vulnerability of tradi-
tional CNNs to backdoor attacks is well-known, backdoor
attacks on ViTs are seldom-studied. Compared to CNNs
capturing pixel-wise local features by convolutions, ViTs
extract global context information through patches and at-
tentions. Naïvely transplanting CNN-specific backdoor at-
tacks to ViTs yields only a low clean data accuracy and a
low attack success rate. In this paper, we propose a stealth
and practical ViT-specific backdoor attack TrojViT. Rather
than an area-wise trigger used by CNN-specific backdoor
attacks, TrojViT generates a patch-wise trigger designed to
build a Trojan composed of some vulnerable bits on the pa-
rameters of a ViT stored in DRAM memory through patch
salience ranking and attention-target loss. TrojViT further
uses parameter distillation to reduce the bit number of the
Trojan. Once the attacker inserts the Trojan into the ViT
model by flipping the vulnerable bits, the ViT model still
produces normal inference accuracy with benign inputs.
But when the attacker embeds a trigger into an input, the
ViT model is forced to classify the input to a predefined tar-
get class. We show that flipping only a few vulnerable bits
identified by TrojViT on a ViT model using the well-known
RowHammer can transform the model into a backdoored
one. We perform extensive experiments of multiple datasets
on various ViT models. TrojViT can classify 99.64% of test
images to a target class by flipping 345 bits on a ViT for
ImageNet.
|
1. Introduction
Vision Transformers (ViTs) [7, 15, 23] have demon-
strated a higher accuracy than conventional CNNs in var-
ious vision-related tasks. The unprecedented effectiveness
of recent ViTs motivates adversaries to perform malicious
attacks, among which backdoor (aka, Trojan) [4, 8] is one
of the most dangerous attacks.
Figure 1. The overview of our proposed TrojViT attack. The top part shows the normal inference of a clean model. The bottom part shows that after flipping a few critical bits of the clean model (marked in red), the generated trojaned model misclassifies the input with a trigger to the target output.
In a backdoor attack, a back-
door is injected into a neural network model, so that the
model behaves normally for benign inputs, yet induces a
predefined behavior for any inputs with a trigger. Although
it is well-known that CNNs are vulnerable to backdoor at-
tacks [1, 8, 14, 19, 24, 27–29, 31, 32], backdoor attacks on
ViTs are not well-studied. Recently, several backdoor at-
tacks including DBIA [17], BAVT [22], and DBAVT [6] are
proposed to abuse ViTs using an area-wise trigger designed
for CNN backdoor attacks, but they suffer from either a sig-
nificant accuracy degradation for benign inputs, or an ultra-
low attack success rate for inputs with a trigger. Differ-
ent from a CNN capturing pixel-wise local information, a
ViT spatially divides an image into small patches, and ex-
tracts patch-wise information by attention. Moreover, most
prior ViT backdoor attacks require a slow training phase to
achieve a reasonably high attack success rate. BAVT [22] and DBAVT [6] even assume training data is available for
attackers, which is not typically the real-world case.
In this paper, we aim to breach the security of ViTs by
creating a novel, stealthy, and practical ViT-specific back-
door attack TrojViT . The overview of TrojViT is shown in
Figure 1. A clean ViT model having no backdoor can ac-
curately classify an input image to its corresponding class
(e.g., a dog) by splitting the image into multiple patches.
However, a backdoored ViT model classifies an input into a
predefined target class (e.g., a shark) with high confidence
when a specially designed trigger is embedded in the input.
If the trigger is removed from the input, the backdoored ViT
model will still act normally with almost the same accuracy
as its clean counterpart. The ViT model can be backdoored
and inserted with a Trojan using the well-known RowHam-
mer method [18]. Unlike prior ViT-specific backdoor at-
tacks [6, 17, 22] directly using an area-wise trigger, we pro-
pose a patch-wise trigger for TrojViT to effectively high-
light the patch-wise information that the attacker wants the
backdoored ViT model to pay more attention to. Moreover,
during the generation of a patch-wise trigger, we present an
Attention-Target loss for TrojViT to consider both attention
scores and the predefined target class. At last, we create a
tuned parameter distillation technique to reduce the modi-
fied bit number of the ViT parameters during the Tro-
jan insertion, so that our TrojViT backdoor attack is more
practical. We perform extensive experiments of TrojViT on
various ViT architectures with multiple datasets. TrojViT
requires only 345 bit-flips out of 22 million on the ViT
model to successfully classify 99.64% test images to a tar-
get class on ImageNet.
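For intuition, a combined attention-and-target objective for optimizing the patch-wise trigger could look like the following (an illustrative sketch of the idea, not the authors' implementation; the CLS-token attention slice, the loss weighting, and tensor shapes are assumptions):

import torch
import torch.nn.functional as F

def attention_target_loss(logits, attn_maps, trigger_patch_idx, target_class, lam=1.0):
    # logits: (B, num_classes) outputs of the ViT on triggered inputs.
    # attn_maps: (B, heads, tokens, tokens) attention of one (or an averaged) layer.
    # trigger_patch_idx: indices of the trigger patches (0-based among the image patches).
    cls_to_patches = attn_maps[:, :, 0, 1:]                       # attention from the CLS token
    attn_on_trigger = cls_to_patches[..., trigger_patch_idx].sum(-1).mean()
    target = torch.full((logits.size(0),), target_class, device=logits.device)
    # Push the prediction toward the target class while concentrating attention
    # on the trigger patches (hence the negative sign on the attention reward).
    return F.cross_entropy(logits, target) - lam * attn_on_trigger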
|
Zhu_E2PN_Efficient_SE3-Equivariant_Point_Network_CVPR_2023
|
Abstract
This paper proposes a convolution structure for learn-
ing SE(3)-equivariant features from 3D point clouds. It can
be viewed as an equivariant version of kernel point convo-
lutions (KPConv), a widely used convolution form to pro-
cess point cloud data. Compared with existing equivari-
ant networks, our design is simple, lightweight, fast, and
easy to be integrated with existing task-specific point cloud
learning pipelines. We achieve these desirable properties
by combining group convolutions and quotient representa-
tions. Specifically, we discretize SO(3) to finite groups for
their simplicity while using SO(2) as the stabilizer subgroup
to form spherical quotient feature fields to save computa-
tions. We also propose a permutation layer to recover SO(3)
features from spherical features to preserve the capacity to
distinguish rotations. Experiments show that our method
achieves comparable or superior performance in various
tasks, including object classification, pose estimation, and
keypoint-matching, while consuming much less memory and
running faster than existing work. The proposed method
can foster the development of equivariant models for real-
world applications based on point clouds.
|
1. Introduction
Processing 3D data has become a vital task today as de-
mands for automated robots and augmented reality tech-
nologies emerge. In the past decade, computer vision has
significantly succeeded in image processing, but learning
from 3D data such as point clouds is still challenging. An
important reason is that 3D data presents more variations
than 2D images in several aspects. For example, the rigid
body transformations in 2D only have 3 degrees of freedom
(DoF) with 1 for rotations. In 3D space, the DoF is 6, with 3
for rotations. The 2D translation equivariance is a key fac-
tor in the success of convolutional neural networks (CNNs)
in image processing, but it is not enough for 3D tasks.
Figure 1. Our method achieves higher efficiency by working with smaller feature maps defined on S2′ × R3 rather than SO(3)′ × R3 (′ denotes discretization). R3 is omitted in the figure. The black arrows in each space represent elements. The top and bottom paths are equivalent, showing the relations among different representations.
Generally speaking, equivariance is a property for a map
such that given a transformation in the input, the output
changes in a predictable way determined by the input trans-
formation. It drastically improves generalization as the vari-
ance caused by the transformations is captured via the net-
work by design. Take CNNs as an example, the equiv-
ariance property refers to the fact that a translation in the
input image results in the same translation in the feature
map output from a convolution layer. However, conven-
tional convolutions are not equivariant to rotations, which
becomes problematic, especially when we deal with 3D
data where many rotational variations occur. In response,
on the one hand, data augmentations with 3D rotations are
frequently used. On the other hand, equivariant feature
learning emerges as a research area, aiming to generalize
the translational equivariance to broader transformations.
A lot of progress has been made in group-equivariant
feature learning. The term group encompasses the 3D rota-
tions and translations, which is called the special Euclidean
group of dimension 3, denoted SE(3), and also other more
general types of transformations that represent certain sym-
metries. While many methods have been proposed, equiv-
ariant feature learning has not yet become the default strat-
egy for 3D deep learning tasks. From our understanding,
two major reasons hinder the broader application of equiv-
ariant methods. First, networks dealing with continuous
groups typically require specially designed operations not
commonly used in neural networks, such as generalized
Fourier transform and Monte Carlo sampling. Thus, incor-
porating them into general neural networks for 3D learning
tasks is challenging. Second, for the strategy of working on
discretized (finite) groups [6, 19], while the network struc-
tures are simpler and closer to conventional networks, they
usually suffer from the high dimensionality of the feature
maps and convolutions, which causes much larger memory
usage and computational load, limiting their practical use.
This work proposes E2PN, a convolution structure for
processing 3D point clouds. Our proposed approach can
enable SE(3)-equivariance on any network with the KPConv [35] backbone by swapping KPConv with E2PN. The equivariance is up to a discretization on SO(3). We leverage a quotient representation to save computational and memory costs by reducing SE(3) feature maps to feature maps defined on S2 × R3 (where S2 stands for the 2-sphere). Nevertheless, we can recover the full 6 DoF information through a final permutation layer. As a result, our proposed network is SE(3)-equivariant and computationally efficient,
ready for practical point-cloud learning applications.
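The source of the saving can be summarized with a short counting argument (our paraphrase of the statement above, assuming the discretization is chosen compatibly with the quotient):

\[
  \mathrm{SO}(3)/\mathrm{SO}(2) \;\cong\; S^2
  \quad\Longrightarrow\quad
  \frac{\lvert \mathrm{SO}(3)' \rvert}{\lvert S^{2\prime} \rvert} \;=\; \lvert \mathrm{SO}(2)' \rvert,
\]

so a feature map on $S^{2\prime}\times\mathbb{R}^3$ is smaller than one on $\mathrm{SO}(3)'\times\mathbb{R}^3$ by a factor equal to the size of the discretized stabilizer subgroup, and the final permutation layer re-expands these quotient features when rotation-specific responses are needed.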
Overall, this work has the following contributions:
• We propose an efficient SE(3)-equivariant convolution
structure for 3D point clouds.
• We design a permutation layer to recover the full
SE(3) information from its quotient space.
• We achieve comparable or better performance with
significantly reduced computational cost compared to existing
equivariant models.
• Our implementation is open-sourced at https://
github.com/minghanz/E2PN .
Readers can find preliminary introductions to some re-
lated background concepts in the appendix.
|
Zhu_Taming_Diffusion_Models_for_Audio-Driven_Co-Speech_Gesture_Generation_CVPR_2023
|
Abstract
Animating virtual avatars to make co-speech gestures
facilitates various applications in human-machine interac-
tion. The existing methods mainly rely on generative adver-
sarial networks (GANs), which typically suffer from noto-
rious mode collapse and unstable training, thus making it
difficult to learn accurate audio-gesture joint distributions.
In this work, we propose a novel diffusion-based framework,
named Diffusion Co-Speech Gesture (DiffGesture) , to
effectively capture the cross-modal audio-to-gesture asso-
ciations and preserve temporal coherence for high-fidelity
audio-driven co-speech gesture generation. Specifically, we
first establish the diffusion-conditional generation process
on clips of skeleton sequences and audio to enable the
whole framework. Then, a novel Diffusion Audio-Gesture
Transformer is devised to better attend to the information
from multiple modalities and model the long-term temporal
dependency. Moreover, to eliminate temporal inconsistency,
we propose an effective Diffusion Gesture Stabilizer with
an annealed noise sampling strategy. Benefiting from the
architectural advantages of diffusion models, we further
incorporate implicit classifier-free guidance to trade off
between diversity and gesture quality. Extensive experi-
ments demonstrate that DiffGesture achieves state-of-the-
art performance, which renders coherent gestures with bet-
ter mode coverage and stronger audio correlations. Code is
available at https://github.com/Advocate99/DiffGesture.
|
1. Introduction
Making co-speech gestures is an innate human behavior
in daily conversations, which helps the speakers to express
their thoughts and the listeners to comprehend the mean-
ings [10, 32, 38]. Previous linguistic studies verify that
such non-verbal behaviors could liven up the atmosphere
and improve mutual intimacy [7, 8, 21].
Figure 1. Illustration of Conditional Generation Process in Co-Speech Gesture Generation. The diffusion process q gradually adds Gaussian noise to the gesture sequence (i.e., x0 sampled from the real data distribution). The generation process pθ learns to denoise the white noise (i.e., xT sampled from the normal distribution) conditioned on context information c. Note that xt denotes the corrupted gesture sequence at the t-th diffusion step.
Therefore, ani-
mating virtual avatars to gesticulate co-speech movements
is crucial in embodied AI. To this end, recent studies
focus on the problem of audio-driven co-speech gesture
generation [16, 25, 30, 41], which synthesizes human upper
body gesture sequences that are aligned to the speech audio.
Early attempts downgrade this task as a searching-and-
connecting problem, where they predefine the correspond-
ing gestures of each speech unit and stitch them together by
optimizing the transitions between consecutive motions for
coherent results [11,21,31]. In recent years, the compelling
performance of deep neural networks has prompted data-
driven approaches. Previous studies establish large-scale
speech-gesture corpus to learn the mapping from speech
audio to human skeletons in an end-to-end manner [4,5,25,
27,30,34,39]. To attain more expressive results, Ginosar et
al.[16] and Yoon et al. [41] propose GAN-based methods
to guarantee realism by adversarial mechanism, where the
discriminator is trained to distinguish real gestures from
the synthetic ones while the generator’s objective is to fool
the discriminator. However, such pipelines suffer from
the inherent mode collapse and unstable training, making
them difficult to capture the high-fidelity audio-conditioned
gesture distribution, resulting in dull or unreasonable poses.
The recent paradigm of diffusion probabilistic models
provides a new perspective for realistic generation [19, 37],
facilitating high-fidelity synthesis with desirable properties
such as good distribution coverage and stable training com-
pared to GANs. However, it is non-trivial to adapt existing
diffusion models for co-speech gesture generation. Most
existing conditional diffusion models deal with static data
and conditions [35, 36] ( e.g., the image-text pairs without
temporal dimension), while co-speech gesture generation
requires generating temporally coherent gesture sequences
conditioned on continual audio clips. Further, the com-
monly used denoising strategy in existing diffusion models
samples independently and identically distributed ( i.i.d.)
noises in latent space to increase diversity. However, this
strategy tends to introduce variation for each gesture frame
and lead to temporal inconsistency in skeleton sequences.
Therefore, how to generate high-fidelity co-speech gestures
with strong audio correlations and temporal consistency is
quite challenging within the diffusion paradigm.
To address the above challenges, we propose a tailored
Diffusion Co-Speech Gesture framework to capture the
cross-modal audio-gesture associations while maintain-
ing temporal coherence for high-fidelity audio-driven co-
speech gesture generation, named DiffGesture. As shown
in Figure 1, we formulate our task as a diffusion-conditional
generation process on clips of skeleton and audio, where
the diffusion phase is defined by gradually adding noise
to the gesture sequence, and the generation phase is referred to
as a parameterized Markov chain with conditional context
features of audio clips to denoise the corrupted gestures.
As we treat the multi-frame gesture clip as the diffusion
latent space, the skeletons can be efficiently synthesized in
a non-autoregressive manner to bypass error accumulation.
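To make this formulation concrete, below is a minimal sketch (not the authors' released code) of the forward corruption q and a single conditional reverse step p_θ over a gesture clip; the linear beta schedule, the `denoiser` interface, and all tensor shapes are illustrative assumptions.

```python
import torch

# Linear beta schedule (an assumption; the paper may use a different schedule).
T = 500
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def q_sample(x0, t, noise):
    """Forward diffusion: corrupt a gesture clip x0 of shape (frames, dims) to step t."""
    ab = alpha_bars[t]
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise

@torch.no_grad()
def p_step(denoiser, x_t, t, context):
    """One reverse step p_theta(x_{t-1} | x_t, c); `denoiser` predicts the added noise
    from the corrupted clip, the step index, and the audio context features c."""
    eps = denoiser(x_t, t, context)
    ab, a, b = alpha_bars[t], alphas[t], betas[t]
    mean = (x_t - b / (1.0 - ab).sqrt() * eps) / a.sqrt()
    if t == 0:
        return mean
    return mean + b.sqrt() * torch.randn_like(x_t)  # sigma_t^2 = beta_t variant
```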
To better attend to the sequential conditions from multiple
modalities and enhance the temporal coherence, we then
devise a novel Diffusion Audio-Gesture Transformer archi-
tecture to model audio-gesture long-term temporal depen-
dency. Particularly, the per-frame skeleton and contextual
features are concatenated in the aligned temporal dimension
and embedded as individual input tokens to a Transformer
block. Further, to eliminate the temporal inconsistency
caused by the naive denoising strategy in the inference
stage, we propose a new Diffusion Gesture Stabilizer
module to gradually anneal down the noise discrepancy in
the temporal dimension. Finally, we incorporate implicit
classifier-free guidance by jointly training the conditional
and unconditional models, which allows us to trade off
between the diversity and sample quality during inference.
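For the classifier-free guidance mentioned above, a common way to combine the jointly trained conditional and unconditional predictions at inference looks roughly like the following; the guidance scale `w` and the use of a zeroed context as the "null" condition are assumptions for illustration, not details taken from the paper.

```python
import torch

def guided_eps(denoiser, x_t, t, context, w=2.0):
    """Classifier-free guidance: blend conditional and unconditional noise estimates.
    w = 0 gives the unconditional model, w = 1 the purely conditional one, and
    larger w trades sample diversity for stronger adherence to the audio context."""
    eps_cond = denoiser(x_t, t, context)
    eps_uncond = denoiser(x_t, t, torch.zeros_like(context))  # assumed "null" context
    return eps_uncond + w * (eps_cond - eps_uncond)
```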
Extensive experiments on two benchmark datasets show
that our synthesized results are temporally coherent, exhibit stronger audio correlations, and outperform state-of-the-art methods on co-speech gesture generation. To
summarize, our main contributions are three-fold: 1) As an early attempt at taming diffusion models for co-speech gesture generation, we formally define the diffusion and de-
noising process in gesture space, which synthesizes audio-
aligned gestures of high-fidelity. 2) We devise the Diffusion
Audio-Gesture Transformer with implicit classifier-free dif-
fusion guidance to better deal with the input conditional
information from multiple sequential modalities. 3) We propose the Diffusion Gesture Stabilizer to eliminate temporal inconsistency with an annealed noise sampling strategy, sketched below.
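The contribution above does not spell out the annealing schedule; one plausible reading of the Diffusion Gesture Stabilizer, sketched purely as an assumption, is that early (high-noise) reverse steps share a single noise vector across all frames and the sampler anneals toward independent per-frame noise as denoising proceeds:

```python
import torch

def stabilized_noise(num_frames, dim, t, T):
    """Sampling noise for a gesture clip with annealed temporal discrepancy.
    gamma = t / T is a purely illustrative linear schedule: at large t all frames
    share one noise vector, and as t -> 0 the mix approaches i.i.d. per-frame noise.
    Mixing with sqrt weights keeps unit variance (gamma + (1 - gamma) = 1)."""
    gamma = t / T
    shared = torch.randn(1, dim).expand(num_frames, dim)
    iid = torch.randn(num_frames, dim)
    return gamma ** 0.5 * shared + (1.0 - gamma) ** 0.5 * iid
```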
|
Zhao_The_Resource_Problem_of_Using_Linear_Layer_Leakage_Attack_in_CVPR_2023
|
Abstract
Secure aggregation promises a heightened level of pri-
vacy in federated learning, maintaining that a server only
has access to a decrypted aggregate update. Within this
setting, linear layer leakage methods are the only data re-
construction attacks able to scale and achieve a high leak-
age rate regardless of the number of clients or batch size.
This is done through increasing the size of an injected fully-
connected (FC) layer. However, this results in a resource
overhead which grows larger with an increasing number of
clients. We show that this resource overhead is caused by
an incorrect perspective in all prior work that treats an at-
tack on an aggregate update in the same way as an individ-
ual update with a larger batch size. Instead, attacking the update from the perspective that aggregation combines multiple individual updates allows the application of sparsity to alleviate the resource overhead. We show that
the use of sparsity can decrease the model size overhead by
over 327× and the computation time by 3.34× compared to SOTA, while maintaining an equivalent total leakage rate of 77% even with 1000 clients in aggregation.
|
1. Introduction
Federated learning (FL) [17] has been hailed as a
privacy-preserving method of training. FL involves mul-
tiple clients which train their model on their private data
before sending the update back to a server. The promise is
that FL will keep the client data private from all (server as
well as other clients) as the update cannot be used to infer
information about client training data.
However, many recent works have shown that client gra-
dients are not truly privacy preserving. Specifically, data
reconstruction attacks [3,8,9,12,18,26,28,34] use a model
update to directly recover the private training data. These
methods typically consist of gradient inversion [9, 28, 34]
and analytic attacks [3, 8, 12, 14, 18, 26]. Gradient inver-
sion attacks observe an honest client gradient and iteratively optimize randomly initialized dummy data such that the
resulting gradient becomes closer to the honest gradient.
The goal is that dummy data that creates a similar gradi-
ent will be close to the ground truth data. These methods
have shown success on smaller batch sizes, but fail when
batch sizes become too large. Prior work has shown that
reconstruction on ImageNet is possible up to a batch size
of 48, although the reconstruction quality is low [28]. An-
alytic attacks cover a wide range of methods. Primarily,
they use a malicious modification of model architecture and
parameters [18,26], linear layer leakage methods [3,8], ob-
serve updates over multiple training rounds [14], or treat
images as a blind-source separation problem [12]. How-
ever, most of these approaches fail when secure aggrega-
tion is applied [4, 6, 7, 23, 24]. Particularly, when a server
can only access the updates aggregated across hundreds or
thousands of training images, the reconstruction process be-
comes very challenging. Gradient inversion attacks are im-
possible without additional model modifications or training
rounds. This is where linear layer leakage attacks [3,8] have
shown their superiority.
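As a concrete illustration of the gradient-inversion idea summarized above, the following is a generic DLG-style sketch (our own simplification, not the code of any cited attack): randomly initialized dummy data and soft labels are optimized so that the gradient they induce matches the observed honest gradient. The supplied `loss_fn` is assumed to accept soft targets.

```python
import torch

def invert_gradient(model, loss_fn, observed_grads, x_shape, num_classes,
                    steps=300, lr=0.1):
    """Optimize dummy inputs/labels so their gradient matches an observed update."""
    dummy_x = torch.randn(x_shape, requires_grad=True)
    dummy_y = torch.randn(x_shape[0], num_classes, requires_grad=True)  # soft labels
    opt = torch.optim.Adam([dummy_x, dummy_y], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(dummy_x), dummy_y.softmax(dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # L2 distance between the dummy gradient and the honest client gradient.
        match = sum(((g - og) ** 2).sum() for g, og in zip(grads, observed_grads))
        match.backward()
        opt.step()
    return dummy_x.detach(), dummy_y.detach()
```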
This sub-class of analytic data reconstruction attacks is
based on the server crafting maliciously modified models
that it sends to the clients. In particular, the server uses a
fully-connected (FC) layer to leak the input images. Com-
pared to any other attack, linear layer leakage attacks are
the only methods able to scale to an increasing number of
clients or batch size, maintaining a high total leakage rate.
This is done by continually increasing the size of an FC
layer used to leak the images. For example, with 100 clients
and a batch size of 64 on CIFAR-100, an attacker can leak
77.2%of all images in a single training round using an in-
serted FC layer of size 25,600. In this case, the number
of units in the layer is 4×the number of total images, and
maintaining this ratio when the number of clients or batch
size increases allows the attack to still achieve roughly the
same leakage rate. Despite the potential of linear layer leak-
age, however, an analysis of the limits of its scalability in FL
has been missing to date.
In this work, we dive into this question and explore the
potential of scaling linear layer leakage attacks to secure
aggregation. We particularly highlight the challenges in
resource overhead corresponding to memory, communica-
tion, and computation, which are the primary restrictions of
cross-device FL. We discover that while SOTA attacks can
maintain a high leakage rate regardless of aggregation size,
the overhead is massive. With 1,000 clients and a batch size
of 64, maintaining the same leakage rate as before would re-
sult in the added layers increasing the model size by 6 GB. There would also be an added computation time of 21.85 s for computing the update for a single batch (size 64), a 10× overhead compared to a baseline ResNet-50. This is a
massive problem for resource-constrained FL where clients
have limited communication or computation budgets.
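A rough back-of-the-envelope calculation makes the scaling behind these numbers visible. The 4-units-per-image ratio comes from the text; the CIFAR-sized input dimensionality, fp32 weights, and the assumption that the inserted block contributes roughly two weight matrices of that width are ours and are only meant to reproduce the order of magnitude.

```python
def dense_fc_overhead_gb(num_clients, batch_size, input_dim=3 * 32 * 32,
                         units_per_image=4, num_matrices=2, bytes_per_param=4):
    """Approximate size of a dense leakage FC block sized for ALL images at once."""
    total_images = num_clients * batch_size          # e.g. 1000 * 64 = 64,000
    units = units_per_image * total_images           # e.g. 256,000 FC units
    params = num_matrices * units * input_dim        # weight matrices dominate
    return params * bytes_per_param / 1e9

print(dense_fc_overhead_gb(1000, 64))   # ~6.3 GB, in line with the ~6 GB figure above
```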
However, this problem arises from an incorrect perspec-
tive from prior work where they treat the attack on an aggre-
gate update the same as an individual client update. Specifi-
cally, we argue that it is critical to treat an aggregation attack
not as an attack on a single large update, but as individual
client updates combined together. In the context of linear
layer leakage, this is the difference between separating the
scaling of the attack between batch size and the number of
clients or scaling to all images together.
Following this, we use the attack MANDRAKE [32] with
sparsity in the added parameters between the convolutional
output and the FC layer to highlight the difference in model
size compared to prior SOTA. The addition can decrease the
added model size by over 327× and decrease computation time by 3.34× compared to SOTA attacks while achieving the same total leakage rate. For a batch size of 64 and 1000 clients participating in training, the sparse MANDRAKE module adds only a little over 18 MB to the model while leaking 77.8% of the total data in a single training round (comparable to other SOTA attacks).
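Plugging the reported numbers back in shows the gap between the two perspectives; the per-client figure on the last line is our own reading of "separating the scaling between batch size and the number of clients", not a number stated in the paper.

```python
dense_bytes = 6.0e9       # reported dense overhead, 1000 clients, batch size 64
sparse_bytes = 18.33e6    # reported sparse MANDRAKE overhead, same setting

print(f"model size reduction : {dense_bytes / sparse_bytes:.0f}x")   # ~327x
print(f"added cost per client: {sparse_bytes / 1000 / 1e3:.1f} KB")  # ~18.3 KB
```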
We discuss other fundamental challenges for linear layer
leakage including the resource overhead of leaking larger
input data sizes. We also discuss that sparsity in the client
update fundamentally cannot be maintained through secure
aggregation and the client still accrues a communication
overhead when sending the update back to the server. All
other aspects of resource overhead such as communication
cost when the server sends the model to the client, computa-
tion time, and memory size, are decreased through sparsity.
Our contributions are as follows:
• We show the importance of sparsity in maintaining
a small model size overhead when scaling to a large
number of clients and the incorrect perspective prior
work has had when treating the aggregate update as
a single large update. By using sparsity with MANDRAKE and attacking 1000 clients with a batch size of 64, the added model size is only 18.33 MB. Compared to SOTA attacks, this is a decrease of over 327× in size and also results in a 3.3× decrease in computation time while maintaining the same leakage rate.
• We show the fundamental challenge of linear layer
leakage attacks in scaling towards leaking larger input image sizes and the resulting added resource overhead.
• We show the problem of maintaining sparsity in se-
cure aggregation when the encryption mask is ap-
plied, which adds to the communication overhead
when clients send updates back to the server.
|
Zhong_Identity-Preserving_Talking_Face_Generation_With_Landmark_and_Appearance_Priors_CVPR_2023
|
Abstract
Generating talking face videos from audio attracts lots of
research interest. A few person-specific methods can gener-
ate vivid videos but require the target speaker’s videos for
training or fine-tuning. Existing person-generic methods
have difficulty in generating realistic and lip-synced videos
while preserving identity information. To tackle this prob-
lem, we propose a two-stage framework consisting of audio-
to-landmark generation and landmark-to-video rendering
procedures. First, we devise a novel Transformer-based
landmark generator to infer lip and jaw landmarks from the
audio. Prior landmark characteristics of the speaker’s face
are employed to make the generated landmarks coincide
with the facial outline of the speaker. Then, a video ren-
dering model is built to translate the generated landmarks
into face images. During this stage, prior appearance in-
formation is extracted from the lower-half occluded target
face and static reference images, which helps generate real-
istic and identity-preserving visual content. For effectively
exploring the prior information of static reference images,
we align static reference images with the target face’s pose
and expression based on motion fields. Moreover, auditory
features are reused to guarantee that the generated face im-
ages are well synchronized with the audio. Extensive ex-
periments demonstrate that our method can produce more
realistic, lip-synced, and identity-preserving videos than ex-
isting person-generic talking face generation methods.
|
1. Introduction
Audio-driven talking face video generation is valuable
in a wide range of applications, such as visual dubbing
[19, 26, 37], digital assistants [32], and animation movies
[46]. Based on the training paradigm and data requirement,
*Corresponding author is Guanbin Li.
[Figure 1 panels: Audio, Masked Target, Sketch, Reference 1–3, Generated, Ground Truth.]
Figure 1. This paper is targeted at generating a talking face video
for a speaker which is coherent with input audio. We implement
this task by completing the lower-half face of the speaker’s origi-
nal video. The outline of the mouth and jaw is inferred from the
input audio and then used to guide the video completion process.
Moreover, multiple static reference images are used to supply prior
appearance information.
the talking face generation methods can generally be cate-
gorized as person-specific or person-generic types. Person-
specific methods [11, 20, 21, 28, 32, 41] can generate photo-
realistic talking face videos but need to be re-trained or fine-
tuned with the target speaker’s videos, which might be in-
accessible in some real-world scenarios. Hence, learning to
generate person-generic talking face videos is a more sig-
nificant and challenging problem in this field. This topic
also attracts lots of research attention [14,19,24,26,37,45].
In this paper, we focus on tackling the person-generic talk-
ing face video generation by completing the lower-half face
of the speaker’s original video under the guidance of audio
data and multiple reference images, as shown in Figure 1.
The main challenges of the person-generic talking face
video generation are two-fold: 1) How can the model
generate videos having facial motions, especially mouth
and jaw motions, which are coherent with the input au-
dio? 2) How can the model produce visually realistic frames
while preserving the identity information? To address the
first problem, many methods [3, 5, 6, 17, 37, 46] leverage fa-
cial landmarks as an intermediate representation when gener-
ating person-generic talking face videos. However, trans-
lation from audio to facial landmarks is an ambiguous task,
considering the same pronunciation may correspond to mul-
tiple facial shapes. A few landmark-based talking face gen-
eration methods [3, 42] tend to produce results having the
averaged lip shape of training samples, which may have
remarkable differences with the lip shape of the speaker.
A line of methods [6, 37] incorporate the prior information
from a reference image’s landmarks to generate landmarks
consistent with the speaker’s shape. However, they directly
fuse the features of audio and landmarks with simple con-
catenation or addition operations without modeling the un-
derlying correlation between them. For example, the rela-
tions between the reference landmarks and the audio clips
from different time intervals are different. Those methods are not adept at capturing such differences. Moreover, the temporal dependencies are also
valuable for predicting facial landmarks. Existing methods,
such as [6, 17, 38, 46], depend on long short-term memory
(LSTM) models to explore the temporal dependencies when
transforming audio clips to landmark sequences. However,
those models are limited in capturing long-range temporal
relationships.
Since the input audio and intermediate landmarks do not
contain visual content information intrinsically, it is very
challenging to hallucinate realistic facial videos from audio
and intermediate landmarks while preserving the identity
information. A few existing methods, such as [37,46], adopt
a single static reference image to supply visual appearance
and identity information. However, one static reference im-
age is insufficient to cover all facial details, e.g., the teeth
and the side content of cheeks. This makes these algorithms
struggle to synthesize unseen details, which is unreliable
and easily leads to generation artifacts. [3, 7] use multiple
reference images to provide more abundant details. How-
ever, they simply concatenate the reference images without
spatial alignment, which is limited in extracting meaningful
features from reference images.
To cope with the above problems, we devise a novel two-
stage framework composed of an audio-to-landmark gen-
erator and a landmark-to-video rendering network. The
goal of our framework is to complete the lower-half face
of the video with content coherent to the phonetic motions
of the audio. Specifically, we use pose prior landmarks of
the upper-half face and reference landmarks extracted from
static face images as extra inputs of the audio-to-landmark
generator. The access to the two kinds of landmarks helps
to prevent the generator from producing results that devi-
ate from the face outline of the speaker. Then, we build
up the network architecture of the generator based on the
multi-head self-attention modules [33]. Our design is better at capturing relationships between phonetic
units and landmarks compared to simple concatenation or
addition operations [6, 37]. It is also more helpful for mod-
eling temporal dependencies than LSTM used in previous
methods [6, 17, 38, 46]. Additionally, multiple static face images are used to extract prior appearance information
for generating realistic and identity-preserving face frames.
Inspired by [10], we set up the landmark-to-video rendering
network with a motion field based alignment module and a
face image translation module. The alignment module is
targeted at registering static reference images with the face
pose and expression delivered by the results of the landmark
generator. This target is achieved by inferring a motion field
for each static reference image and then warping the image
and its features. The alignment module can decrease the
difficulty in translating meaningful features of static refer-
ence images to the target image. The face image translation
module produces the final face images by combining multi-
source features from the inferred landmarks, the occluded
original images, the registered reference images, and the
audio. The inferred landmarks provide vital clues for con-
straining the facial pose and expression. Those images are
paramount for inferring the facial appearance. Besides, the
audio features are reused to guarantee that the generated lip
shapes are well synchronized with the audio. Extensive ex-
periments demonstrate that our method produces more re-
alistic and lip-synced talking face videos and preserves the
identity information better than existing methods. Our main
contributions are summarized as follows:
• We propose a two-stage framework composed of an
audio-to-landmark generator and a landmark-to-video
rendering model to address the person-generic talking
face generation task under the guidance of prior land-
mark and appearance information.
• We devise an audio-to-landmark generator that can ef-
fectively fuse prior landmark information with the au-
dio features. We also make an early effort to construct
the generator with multi-head self-attention modules.
• We design a landmark-to-video rendering model
which can make full use of multiple source signals,
including prior visual appearance information, land-
marks, and auditory features.
• Extensive experiments are conducted on LRS2 [1] and
LRS3 [2] dataset, demonstrating the superiority of our
method over existing methods in terms of realism,
identity preservation, and lip synchronization.
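As a concrete, hypothetical sketch of the audio-to-landmark generator described above: per-frame audio features and pose-prior landmarks are embedded as tokens alongside a reference-landmark token, processed by multi-head self-attention, and regressed to lip and jaw landmark coordinates. All dimensions, landmark counts, and the single-reference-token design are our assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AudioToLandmark(nn.Module):
    """Hypothetical sketch: fuse audio features with pose-prior and reference
    landmarks via self-attention and regress per-frame lip/jaw landmarks."""

    def __init__(self, audio_dim=80, n_pose=40, n_ref=25, n_out=40, d=256):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, d)
        self.pose_proj = nn.Linear(n_pose * 2, d)   # upper-half-face (x, y) landmarks
        self.ref_proj = nn.Linear(n_ref * 2, d)     # reference lip/jaw landmarks
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d, n_out * 2)         # predicted (x, y) coordinates

    def forward(self, audio, pose_prior, ref_lms):
        # audio: (B, T, audio_dim); pose_prior: (B, T, n_pose*2); ref_lms: (B, n_ref*2)
        frame_tok = self.audio_proj(audio) + self.pose_proj(pose_prior)
        ref_tok = self.ref_proj(ref_lms).unsqueeze(1)          # one prior token
        h = self.encoder(torch.cat([ref_tok, frame_tok], 1))   # attend over time + prior
        return self.head(h[:, 1:])                             # (B, T, n_out*2)
```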
|