Text-Based Occluded Person Re-identification via Multi-Granularity Contrastive Consistency Learning

Xinyi Wu1, Wentao Ma2*, Dan Guo3, Tongqing Zhou1*, Shan Zhao3, Zhiping Cai1*
1College of Computer, National University of Defense Technology, Changsha, China
2School of Information and Artificial Intelligence, Anhui Agricultural University, Hefei, China
3School of Computer Science and Information Engineering, HeFei University of Technology, Hefei, China
{wuxinyi17, zhoutongqing, zpcai}@nudt.edu.cn, [email protected], {guodan, zhaoshan}@hfut.edu.cn
*Corresponding Authors

Abstract
Text-based Person Re-identification (T-ReID), which aims at retrieving a specific pedestrian image from a collection of images via text-based information, has received significant attention. However, previous research has overlooked a challenging yet practical form of T-ReID: dealing with image galleries mixed with occluded and inconsistent personal visuals, instead of ideal visuals with a full-body and clear view. Its major challenges lie in the insufficiency of benchmark datasets and the enlarged semantic gap incurred by arbitrary occlusions, together with the modality gap between the text description and the visual representation of the target person. To alleviate these issues, we first design an Occlusion Generator (OGor) for the automatic generation of artificial occluded images from generic surveillance images. Then, a fine-granularity token selection mechanism is proposed to minimize the negative impact of occlusion for robust feature learning, and a novel multi-granularity contrastive consistency alignment framework is designed to leverage intra-/inter-granularity visual-text representations for semantic alignment of occluded visuals and query texts. Experimental results demonstrate that our method exhibits superior performance. We believe this work could inspire the community to investigate more dedicated designs for implementing T-ReID in real-world scenarios. The source code is available at https://github.com/littlexinyi/MGCC.

Introduction
Person Re-identification (ReID) is a fundamental yet challenging task in computer vision (CV), playing a paramount role in plenty of applications such as intelligent video surveillance, urban security, and smart retailing (Ye et al. 2021; Zeng et al. 2022). However, existing person ReID methods (Yao et al. 2019; Ding et al. 2020; Wang et al. 2021b) usually use images of a specific person as the probes, which is a limitation in urgent real-world scenarios. For instance, when police officers try to locate criminals (suspects) or lost children within a shopping mall, they typically lack any photographic references of the individuals. Fortunately, they can take verbal descriptions of the target person from witnesses and cross-check them with surveillance videos. To relieve such manual checking, a novel and feasible paradigm for person ReID, text-based person ReID (T-ReID) (Li et al. 2017; Zhu et al. 2021; Wang et al. 2020b; Ma et al. 2023), has been proposed recently.

Figure 1: Illustration of person ReID with a textual query (e.g., "man in a black hat and a black backpack. He has blue jeans and white plimsolls.") against one image gallery of pedestrian tracks: (a) traditional text-based person ReID, where only partial visuals are identified; (b) text-based occluded person ReID, which achieves comprehensive visual identification of the trajectory.
As shown in Figure 1(a), cross-modal correlations are established between verbal descriptions and widely monitored images, to remedy the absence of visual cues in real-world scenarios. Generally, T-ReID needs to process visual and textual modalities and aims at learning a common semantic representation space between them, so as to better align image and text. For this purpose, recent works first employ different models to extract feature representations of the two modalities at the local feature level (Chen et al. 2022), the global feature level (Chen et al. 2021b), and the multi-granularity feature level (Farooq et al. 2021; Wang et al. 2022a). They then focus on exploring image-text pairs for semantic alignment in the common representation space. Unfortunately, we observe that the model training process employed by these methods on existing public benchmark datasets (Li et al. 2017; Ding et al. 2021; Zhu et al. 2021) rests on a strong assumption: the image modality is holistic (without occlusions), so that the model can extract effective feature representations of the images. This is unrealistic, since person ReID techniques currently face a fundamental and long-standing challenge: occlusions. Although occluded person ReID has been widely investigated (Zhuo et al. 2018; He and Liu 2020; Wang et al. 2020a; Zheng et al. 2021), these methods are deployed only in visual-to-visual retrieval scenarios and are hard to adapt to complex cross-modal person ReID. Therefore, how to effectively tackle complex occlusion issues and adapt to real-world scenarios has become one of the key challenges in T-ReID.

In light of the above analysis, in this paper we attempt to explore a novel and advanced case of T-ReID, text-based occluded person ReID (TO-ReID), as shown in Figure 1(b). However, its implementation faces the following three challenges:
1. Absence of benchmark datasets. Most existing T-ReID datasets implicitly assume that the full-body appearance of a person is readily available, while ignoring person images with occlusions, hindering models from acquiring robust visual features suitable for the TO-ReID task.
2. Semantic gap from occlusion. Occlusion causes key feature loss and redundant feature interference, which introduces a huge semantic gap for image feature learning.
3. Modality gap for image-text pairs. Due to the inconsistent representation of images and text, their data reside in different distribution spaces, making it difficult to directly measure the similarity between image-text pairs.

To relieve the above challenges, we first design an Occlusion Generator (termed OGor) to support the benchmark dataset construction for TO-ReID, which automatically generates artificial occluded person images by an occlusion sample augmentation strategy (Challenge 1). Then, a fine-granularity token selection mechanism is proposed to eliminate redundant noisy tokens stemming from occlusions and meaningless auxiliary words. This mechanism serves two pivotal objectives: bridging the semantic gap induced by occlusions and realizing a trade-off between performance and training efficiency (Challenge 2). Finally, a multi-granularity contrastive consistency learning model is devised, enabling better alignment of text-image pairs (Challenge 3).
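For intuition, the common-embedding-space retrieval that underlies T-ReID can be sketched as follows. This is only a minimal illustration, assuming CLIP-style encoders that each output one global embedding per image and per sentence; it is not the MGCC model proposed in this paper, and the dimensions are placeholders.

```python
import torch
import torch.nn.functional as F

def rank_gallery(text_emb: torch.Tensor, image_embs: torch.Tensor) -> torch.Tensor:
    """Rank gallery images for one text query by cosine similarity.

    text_emb:   (d,)   global embedding of the query description
    image_embs: (G, d) global embeddings of the G gallery images
    Returns gallery indices sorted from best to worst match.
    """
    text_emb = F.normalize(text_emb, dim=-1)
    image_embs = F.normalize(image_embs, dim=-1)
    scores = image_embs @ text_emb           # (G,) cosine similarities
    return torch.argsort(scores, descending=True)

# Toy usage with random stand-ins for encoder outputs.
if __name__ == "__main__":
    torch.manual_seed(0)
    query = torch.randn(512)         # e.g. a text-encoder [CLS] embedding
    gallery = torch.randn(100, 512)  # e.g. image-encoder [CLS] embeddings
    print(rank_gallery(query, gallery)[:10])  # top-10 retrieved indices
```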
The main contributions of this paper can be summarized as follows: • We invent an OGor and reconstruct the original T-ReID dataset to simulate real-world surveillance occlusion scenarios. Additionally, the OGor is versatile and can be extended to other general datasets, enhancing the exploration of novel scenarios. • We propose a Multi-Granularity Contrastive Consistency model, dubbed MGCC, incorporating a token selection mechanism. This model not only bridges the semantic gap arising from occlusion but also narrows the modality gap between images and texts. • Extensive experiments demonstrate the effectiveness of our proposal on three TO-ReID datasets, i.e., Occluded-CUHK-PEDES (62.44 on R@1), OccludedICFG-PEDES (59.28 on R@1), and Occluded-RSTPReid (49.85 on R@1). Related Work Occlusion Person ReID Person ReID has been widely investigated; however, occlusions degrade the robustness of feature representation, leading to a decay in performance. Several works (Zhuo et al. 2018; Zheng et al. 2021; Zhou et al. 2023) attempt to tackle the complex visual occlusions, which can be roughly categorized into two groups: part-to-part matching (Sun et al. 2018; He et al. 2018, 2019; He and Liu 2020; Zhu et al. 2020) and matching by external tools assistance (Miao et al. 2019; Gao et al. 2020; Wang et al. 2020a; Zheng et al. 2021). For the former, it realizes matching by measuring the similarity between local features (e.g., body parts). Zhu et al. (2020) propose an identity-guided human semantic parsing approach, ISP, to generate the pseudo-labels of human body parts at pixel-level. For the latter, techniques such as pose estimation and human parsing are used to realize the alignment. Miao et al. (2019) propose a Pose-Guided Feature Alignment model, called PGFA, which leverages the human pose key points for matching. To achieve high accuracy while preserving low inference complexity, Zheng et al. (2021) propose a PGFL-KD model, which can distill human pose semantic knowledge into a local feature extractor to discard the dependency on a pose estimator. While these techniques have shown effectiveness in improving the performance of occluded person ReID, their application is still confined to image-based person ReID methods. Notably, in real-world scenarios that involve text-based person ReID, the challenge of occlusion has not been specifically addressed. Text-based Person ReID Considering that visual cues are not always available in realworld scenarios, we focus on the realistic problem of TReID. As given by the pioneering work (Li et al. 2017) that introduces the T-ReID task and releases a benchmark dataset named CUHK-PEDES, the main challenge for this task is how to efficiently align image and text features into a joint embedding space for fast retrieval. Solutions can be categorized into two groups: single-granularity feature alignment paradigm and multi-granularity feature alignment paradigm. In terms of the single-granularity, major improvements are shown by (Chen et al. 2022; Zhang and Lu 2018; Wang et al. 2019; Liu et al. 2019; Ge, Gao, and Liu 2019; Zheng et al. 2020b; Chen et al. 2021b) where researchers start exploiting local alignment and global alignment. (Chen, Xu, and Luo 2018) propose a patch-word level matching for TReID, to realize local feature alignment. To reduce the additional modules and complex evaluation strategies, (Chen et al. 2022) design a simple but effective framework, TIPCB, for T-ReID via learning visual and textual local representations. 
Other works employ global feature representations to realize global matching. For the multi-granularity paradigm (Jing et al. 2020; Wang et al. 2020b; Zheng et al. 2020a; Zhu et al. 2021; Wang et al. 2021a; Ding et al. 2021; Shao et al. 2022; Wang et al. 2022b; Jiang and Ye 2023), this line of work brings decent improvements compared to using a single granularity. Among them, Niu et al. (2020) propose an end-to-end multi-granularity image-text alignment model, MIA, to extract fine-grained features at three different granularities hierarchically and then adopt a cross-modal attention mechanism to determine affinities between visual and textual components. Wang et al. (2020b) present the ViTAA model, which learns to disentangle the feature space of a person into subspaces corresponding to attributes, addressing the T-ReID task from the perspective of attribute-specific alignment learning. Owing to the significant modality gap and the large intra-class variance in texts, a model called SSAN is designed by Ding et al. (2021), which can automatically extract semantically aligned features from the visual and textual modalities. Recently, a model called IRRA (Jiang and Ye 2023) was proposed, focusing on cross-modal implicit relation reasoning and aligning for the T-ReID task.

Vision-Language Pre-Training Models
The pre-training and fine-tuning paradigm has achieved great success and drives the development of CV (Dosovitskiy et al. 2020) and natural language processing (NLP) (Brown et al. 2020). Many efforts (Yan et al. 2022; Radford et al. 2021; Ma et al. 2022; Yao et al. 2021; Cao et al. 2022; Fang et al. 2021; Shu et al. 2022) have attempted to extend the pre-training paradigm to the multimodal field, and vision-language pre-training (VLP) has attracted growing attention. Among these models, CLIP (Radford et al. 2021) has gained surging popularity. As a leading pre-training model, and unlike traditional single-modality supervised pre-training models, CLIP leverages natural-language descriptions to supervise the learning. Given the great advantages of CLIP, many follow-ups (Luo et al. 2022; Fang et al. 2021; Ma et al. 2022; Zhao et al. 2022; Shu et al. 2022; Han et al. 2021; Yan et al. 2022) have transferred the knowledge of CLIP to visual-textual retrieval tasks and obtained new state-of-the-art (SOTA) results. As a specific application of image-text cross-modal retrieval, T-ReID can also benefit from CLIP. Accordingly, we explore leveraging CLIP for the TO-ReID task. To the best of our knowledge, this is the first attempt to harness CLIP to settle occlusions of pedestrians in T-ReID.

Dataset Design
To facilitate research on TO-ReID, we design an occlusion generator, termed OGor, which is applied to the three existing T-ReID datasets (Li et al. 2017; Ding et al. 2021; Zhu et al. 2021) to construct new occluded datasets for TO-ReID. Different from the existing Random Erase (Zhong et al. 2020) and Random Cropping (Chen et al. 2021a) methods, whose generation ability is weak in the face of diversified occlusions, our OGor adopts an occlusion sample augmentation strategy with realistic occlusion scenario simulation, which mainly contains the following two steps.

Occlusion Instance Library Collection. We establish an occlusion instance library (OIL) by detecting 60 common occlusion samples (4 samples per class, a total of 15 classes) in common outdoor scenes. Specifically, we utilize Mask R-CNN (He et al. 2017) to identify the occlusion instance bounding boxes and subsequently erase the extraneous background pixels to produce the occlusion instance masks. The 15 occlusion instance images (one representative sample per class) from OIL are shown in Appendix (§A).

Table 1: The list of occlusion instances in OIL.
Up Instance Set O_U: umbrella, kite.
Middle Instance Set O_M: bag, suitcase, post.
Bottom Instance Set O_B: car, bike, stone, motorbike, bench, road sign, chair, card, pedestrian, fire hydrant.

Figure 2: Sample occluded images generated via OGor from holistic images.

Occlusion Generation Process. Based on empirical observations, certain common occlusions tend to have position priors in detected person images (e.g., cars, bikes, and pedestrians are generally in the lower half of the image and are unlikely to occur elsewhere). As a result, according to the classes of occlusion, we divide the OIL into three subsets (as illustrated in Table 1). For the Bottom Instance Set, we align the bottom edge and place instances randomly in the horizontal direction. For the Up Instance Set, we align the upper edge and randomly place occlusions in the horizontal direction. Instances in the Middle Instance Set are not limited to any specific locations, but rather cover generalized intermediate regions. To generate occluded images, we randomly select 30% of the whole train, val, and test images within the same ID but different views from the T-ReID dataset. For each selected image, we paste the occlusion instances onto the corresponding regions of the image according to the positions of occlusions in Table 1. The detailed occlusion generation algorithm is described in Appendix (§B); a simplified sketch of the pasting step is given below.
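The following is a minimal sketch of that pasting step, assuming the occlusion instances are stored as RGBA masks; the subset names and the helper signature are illustrative, and the actual algorithm in Appendix (§B) may differ in detail.

```python
import random
from PIL import Image

# Position priors per subset, mirroring Table 1 (names here are illustrative).
BOTTOM, UP, MIDDLE = "bottom", "up", "middle"

def paste_occlusion(person_img: Image.Image, instance_rgba: Image.Image,
                    subset: str) -> Image.Image:
    """Paste one RGBA occlusion instance mask onto a person image.

    Bottom instances are aligned to the bottom edge, up instances to the top
    edge, and middle instances go to a generalized intermediate region; the
    horizontal offset is random in all cases.
    """
    W, H = person_img.size
    w, h = instance_rgba.size
    x = random.randint(0, max(W - w, 0))                 # random horizontal placement
    if subset == BOTTOM:
        y = H - h                                        # align bottom edges
    elif subset == UP:
        y = 0                                            # align top edges
    else:                                                # MIDDLE: intermediate region
        y = random.randint(H // 4, max(H // 4, 3 * H // 4 - h))
    out = person_img.copy()
    out.paste(instance_rgba, (x, y), mask=instance_rgba)  # alpha channel as mask
    return out
```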
Methodology
In this section, we elaborate the proposed MGCC model in the following five parts.

Figure 3: An overview of our MGCC model. First, the text encoder and image encoder (both initialized from CLIP) extract multi-granularity feature representations for image and text, respectively. Then, a fine-granularity token selection mechanism is applied to filter the Top-Z informative tokens. Building upon these feature representations, we leverage two types of granularity features to engage in contrastive learning within a common semantic space, followed by attention-based fusion of the similarity matrices into the final similarity S(I, T).

Multi-Granularity Feature Representations
Image feature representation. For a given image $I_k \in \mathcal{I}$, we first divide the image into $n$ patches and introduce a visual tokenization operation to generate a discrete token sequence. A learnable [CLS] token is attached to the beginning of the sequence as an image-level representation. Then, we fine-tune the standard CLIP image encoder with 12 layers to embed the discrete tokens into image-level and patch-level feature representations. Specifically, the [CLS] token is learned as the coarse-granularity global feature representation $I^g_k \in \mathbb{R}^{dim}$, and the other learned tokens are regarded as the fine-granularity local feature representations $P^l_k = \{p_{(k,1)}, p_{(k,2)}, p_{(k,3)}, \dots, p_{(k,n)}\} \in \mathbb{R}^{n \times dim}$.

Text feature representation. Given a text sentence $T_k \in \mathcal{T}$, the raw text is first tokenized by the CLIP tokenizer; the textual sequence is then padded with a [CLS] token at the beginning and fed into the text encoder. Similar to the image encoder, the text encoder is initialized from the public CLIP checkpoints to generate textual feature representations. As a result, we obtain the coarse-granularity global feature representation $T^g_k \in \mathbb{R}^{dim}$ and the fine-granularity local feature representations $W^l_k = \{w_{(k,1)}, w_{(k,2)}, w_{(k,3)}, \dots, w_{(k,m)}\} \in \mathbb{R}^{m \times dim}$, which are respectively the embeddings of the [CLS] token and the corresponding word tokens, where $m$ is the length of the word-token sequence.

Fine-Granularity Token Selection Mechanism
Considering the feature redundancy caused by occlusions in the person image and by punctuation marks or meaningless words in the sentence, we further design a token selection mechanism for image and text feature representations at the fine-granularity level, which prunes redundant tokens via information-importance ranking. Specifically, we utilize the Transformer blocks of the encoders: the self-attention of the last block generates an attention map $M \in \mathbb{R}^{(1+n/m)\times(1+n/m)}$ (with $n$ for images and $m$ for texts), which reflects the correlation among tokens. We select the first row of the attention map as the importance scores between the [CLS] token and all fine-granularity tokens, defined as $M' = M[0, 1{:}] \in \mathbb{R}^{n/m}$. A larger $M'[i]$ means a greater contribution from the $i$-th fine-granularity feature representation; we thus select the Top-Z informative tokens from the raw tokens to participate in training and inference, while masking the other, unimportant tokens, realizing a trade-off between complexity and performance. The token selection mechanism is applied separately to images and texts. For a given image $I_k$ with $n$ patches, the top $Z_i$ informative tokens are selected from the fine-granularity feature representation $P^l_k$; for a given text $T_k$ with $m$ words, the top $Z_t$ informative tokens are selected from the fine-granularity feature representation $W^l_k$, where $Z_i = \rho_i \times n$ and $Z_t = \rho_t \times m$, and $\rho_i$, $\rho_t$ denote the token selection ratios for images and texts, respectively. Detailed ablation studies of $\rho_i$ and $\rho_t$ are presented in Appendix (§F).
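A minimal sketch of this selection step is given below, assuming the last-block attention map is available with the [CLS] token at index 0; the tensor layout is an assumption for illustration and not the released implementation.

```python
import torch

def select_topz_tokens(local_feats: torch.Tensor, attn_map: torch.Tensor,
                       ratio: float) -> torch.Tensor:
    """Keep the Top-Z most informative local tokens.

    local_feats: (L, d)       fine-granularity tokens (patches or words)
    attn_map:    (1+L, 1+L)   self-attention of the encoder's last block,
                              with the [CLS] token at index 0
    ratio:       selection ratio rho_i / rho_t, so Z = ratio * L
    """
    scores = attn_map[0, 1:]                    # [CLS]-to-token importance, (L,)
    z = max(1, int(ratio * local_feats.shape[0]))
    keep = torch.topk(scores, k=z).indices      # indices of the Top-Z tokens
    return local_feats[keep]                    # (Z, d) selected tokens

# Example: keep 30% of 196 image patch tokens.
feats = torch.randn(196, 512)
attn = torch.softmax(torch.randn(197, 197), dim=-1)
selected = select_topz_tokens(feats, attn, ratio=0.3)   # -> (58, 512)
```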
Multi-Granularity Contrastive Consistency Alignment
To learn high-quality representation alignment, we perform contrastive learning at an intra-granularity parallel contrast level and an inter-granularity cross contrast level. For clarity and simplicity, we omit the indexes of all feature representations. Suppose that there is a batch of training pairs $B = \{(I_k, T_k)\}_{k=1}^{N}$, for which we have extracted coarse-to-fine granularity feature representations.

Intra-granularity parallel contrast alignment. We calculate the image-text similarity at fine granularity and at coarse granularity by matrix multiplication, respectively. The fine-granularity similarity is formulated as

$S^{P-W} = P W^{\top}$,  (1)

where $S^{P-W} \in \mathbb{R}^{n \times m}$ is the similarity score at fine granularity. Given the coarse-granularity image feature $I \in \mathbb{R}^{dim}$ and the coarse-granularity text feature $T \in \mathbb{R}^{dim}$, the coarse-granularity similarity is obtained by matrix multiplication:

$S^{I-T} = I^{\top} T$,  (2)

where $S^{I-T} \in \mathbb{R}^{1}$ is the similarity score at coarse granularity.

Inter-granularity cross contrast alignment. Given the fine-granularity image feature $P \in \mathbb{R}^{n \times dim}$ and the coarse-granularity text feature $T \in \mathbb{R}^{dim}$, we calculate the similarity between $P$ and $T$ by matrix multiplication:

$S^{P-T} = P T$,  (3)

where $S^{P-T} \in \mathbb{R}^{n \times 1}$ is the similarity vector between the text and each patch. Similar to the patch-text contrast $S^{P-T}$, we calculate the similarity between the coarse-granularity image feature $I \in \mathbb{R}^{dim}$ and the fine-granularity text feature $W \in \mathbb{R}^{m \times dim}$ by matrix multiplication:

$S^{I-W} = (W I)^{\top}$,  (4)

where $S^{I-W} \in \mathbb{R}^{1 \times m}$ is the similarity vector between one image and each word in one sentence.

Attention-based Fusion on Similarity Matrix
To realize semantic information coverage from coarse to fine granularity and obtain instance-level similarity, we fuse the cross-modal contrast similarities of each granularity, i.e., Eq. (1), Eq. (2), Eq. (3), and Eq. (4), into the final similarity. In order to distinguish the different importance of different patches and words in the fusion results, we propose an attention-based fusion module on the similarity matrix, dubbed AF, where the similarity scores are given different weights during aggregation, realizing differential fusion.

Intra-granularity level fusion. Since the fine-granularity similarity matrix $S^{P-W} \in \mathbb{R}^{n \times m}$ contains the similarity scores of the $n$ patches of one image and the $m$ words of one text, we apply attention operations on $S^{P-W}$ twice. The first attention obtains fine-granularity image-level and text-level similarity vectors:

$S^{img}_{P-W} = \sum_{i=1}^{n} \frac{\exp\big(S^{P-W}_{(i,\circledast)}/\tau\big)}{\sum_{j=1}^{n} \exp\big(S^{P-W}_{(j,\circledast)}/\tau\big)} S^{P-W}_{(i,\circledast)}$,  (5)

$S^{txt}_{P-W} = \sum_{i=1}^{m} \frac{\exp\big(S^{P-W}_{(\circledast,i)}/\tau\big)}{\sum_{j=1}^{m} \exp\big(S^{P-W}_{(\circledast,j)}/\tau\big)} S^{P-W}_{(\circledast,i)}$,  (6)

where $\tau$ is the temperature hyper-parameter of the softmax and $\circledast$ denotes all entries along that dimension. $S^{img}_{P-W} \in \mathbb{R}^{1 \times m}$ is the image-level similarity between the image and the $m$ words in the text, and $S^{txt}_{P-W} \in \mathbb{R}^{n \times 1}$ is the text-level similarity between the text and the $n$ patches in the image. To further obtain the fine-granularity instance-level similarity score, we carry out a second attention operation:

$S'^{img}_{P-W} = \sum_{i=1}^{m} \frac{\exp\big(S^{img}_{P-W}(1,i)/\tau\big)}{\sum_{j=1}^{m} \exp\big(S^{img}_{P-W}(1,j)/\tau\big)} S^{img}_{P-W}(1,i)$,  (7)

$S'^{txt}_{P-W} = \sum_{i=1}^{n} \frac{\exp\big(S^{txt}_{P-W}(i,1)/\tau\big)}{\sum_{j=1}^{n} \exp\big(S^{txt}_{P-W}(j,1)/\tau\big)} S^{txt}_{P-W}(i,1)$,  (8)

where $S'^{img}_{P-W} \in \mathbb{R}^{1}$ and $S'^{txt}_{P-W} \in \mathbb{R}^{1}$ are the instance-level similarities. We take the average of the instance-level similarities as the final fine-granularity similarity:

$S'_{P-W} = (S'^{img}_{P-W} + S'^{txt}_{P-W})/2$.  (9)
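A minimal sketch of this two-stage attention-based fusion for one image-text pair is given below; the tensor shapes are assumptions for illustration, not the released code.

```python
import torch

def fine_similarity(P: torch.Tensor, W: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Instance-level fine-granularity similarity S'_{P-W} (cf. Eqs. 1 and 5-9).

    P: (n, d) selected patch tokens of one image
    W: (m, d) selected word tokens of one text
    """
    S = P @ W.T                                   # (n, m) patch-word similarities, Eq. (1)

    # First attention: aggregate over patches / words (Eqs. 5-6).
    a_img = torch.softmax(S / tau, dim=0)         # weights over the n patches
    S_img = (a_img * S).sum(dim=0)                # (m,) image-level similarity
    a_txt = torch.softmax(S / tau, dim=1)         # weights over the m words
    S_txt = (a_txt * S).sum(dim=1)                # (n,) text-level similarity

    # Second attention: reduce to instance-level scalars (Eqs. 7-8).
    s_img = (torch.softmax(S_img / tau, dim=0) * S_img).sum()
    s_txt = (torch.softmax(S_txt / tau, dim=0) * S_txt).sum()

    return 0.5 * (s_img + s_txt)                  # Eq. (9)
```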
Inter-granularity level fusion. We leverage the softmax function to obtain fusion weights and then aggregate the similarity scores according to these weights:

$S'_{P-T} = \sum_{i=1}^{n} \frac{\exp\big(S^{P-T}_{(i,1)}/\tau\big)}{\sum_{j=1}^{n} \exp\big(S^{P-T}_{(j,1)}/\tau\big)} S^{P-T}_{(i,1)}$,  (10)

$S'_{I-W} = \sum_{i=1}^{m} \frac{\exp\big(S^{I-W}_{(1,i)}/\tau\big)}{\sum_{j=1}^{m} \exp\big(S^{I-W}_{(1,j)}/\tau\big)} S^{I-W}_{(1,i)}$.  (11)

Training and Inference
Similarity calculation. For a given pair $(I_k, T_k)$ in $B = \{(I_k, T_k)\}_{k=1}^{N}$, the final similarity, which combines the multi-granularity contrastive scores, is

$S(I_k, T_k) = (S'_{P-W} + S^{I-T} + S'_{P-T} + S'_{I-W})/4$.  (12)

Objective loss function. The InfoNCE loss is utilized to pull together positive instances and push away hard negative ones within a batch of $B$ image-text pairs:

$\mathcal{L}_{image2text} = -\frac{1}{B} \sum_{i=1}^{B} \log \frac{\exp(S(I_i, T_i))}{\sum_{j=1}^{B} \exp(S(I_i, T_j))}$,  (13)

$\mathcal{L}_{text2image} = -\frac{1}{B} \sum_{i=1}^{B} \log \frac{\exp(S(I_i, T_i))}{\sum_{j=1}^{B} \exp(S(I_j, T_i))}$,  (14)

$\mathcal{L} = \mathcal{L}_{image2text} + \mathcal{L}_{text2image}$.  (15)

Experiments
In this section, we present the experimental results and the corresponding analysis for TO-ReID. Concretely, four research questions (RQs) lead our discussion:
RQ1: Is the overall performance of MGCC superior to the other SOTA baselines under occlusion?
RQ2: Are the multi-granularity contrast modules effective and essential?
RQ3: How does the token selection mechanism affect the MGCC model?
RQ4: How can the quality of feature representation and retrieval ranking be evaluated?

Experiment Settings
Datasets. We construct three occluded datasets via OGor, called Occluded-CUHK-PEDES, Occluded-ICFG-PEDES, and Occluded-RSTPReid, based on the three existing T-ReID datasets.
Evaluation Metrics. For TO-ReID, we employ two evaluation metrics that are widely used in retrieval tasks: Recall at K (R@K, higher is better) and mean Average Precision (mAP, higher is better). We also adopt "Rsum" to measure the overall quality, defined as the sum of R@1, R@5, and R@10.

Comparison with SOTA Methods
To answer RQ1, we evaluate the performance of MGCC by comparing it with existing T-ReID models of two paradigms (single-granularity and multi-granularity) on the three occluded datasets; the detailed results are shown in Table 2.
Performance Comparisons on Occluded-CUHK-PEDES. MGCC achieves results comparable to recent state-of-the-art methods, with 62.44%, 82.44%, and 88.52% on R@1, R@5, and R@10, respectively. Although MGCC performs slightly worse than IRRA (Jiang and Ye 2023) on R@1 and R@5, it achieves the best Rsum among all baselines, which reflects the overall robust retrieval quality of the proposed MGCC model.
Performance Comparisons on Occluded-ICFG-PEDES. Our MGCC model outperforms the competitive candidates on all metrics, achieving 59.28% R@1 accuracy and significant Rsum improvements (+3.19% and +9.11%, respectively) over IRRA and CFine (Yan et al. 2022).
Performance Comparisons on Occluded-RSTPReid. The proposed MGCC dramatically surpasses the single-granularity feature learning paradigm Dual-Path (Zheng et al. 2020b) by 31.5% and 65.95% on R@1 and Rsum, respectively. Compared with the multi-granularity paradigm IRRA, MGCC also achieves strong performance, with increases of 1.2% and 3.67% on R@1 and Rsum, respectively. Meanwhile, it is worth noting that powerful transformer-based feature extraction backbones become increasingly important for better retrieval performance.
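For reference, the R@K and Rsum numbers reported above can be computed from a query-by-gallery similarity matrix as sketched below. This toy version assumes a single ground-truth gallery image per query, whereas the actual benchmark counts a hit if any correct identity appears among the top K results.

```python
import numpy as np

def recall_at_k(sim: np.ndarray, gt: np.ndarray, ks=(1, 5, 10)):
    """Compute R@K and Rsum from a query-by-gallery similarity matrix.

    sim: (Q, G) similarity scores between Q text queries and G gallery images
    gt:  (Q,)   index of the ground-truth gallery image for each query
    """
    order = np.argsort(-sim, axis=1)                  # gallery indices, best first
    ranks = np.argmax(order == gt[:, None], axis=1)   # rank of the true match
    recalls = {k: float(np.mean(ranks < k) * 100) for k in ks}
    recalls["Rsum"] = sum(recalls[k] for k in ks)
    return recalls

# Toy usage with random scores and labels.
rng = np.random.default_rng(0)
scores = rng.standard_normal((50, 200))
labels = rng.integers(0, 200, size=50)
print(recall_at_k(scores, labels))
```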
In order to search for the most effective feature extraction backbones, we conduct ablation studies among ResNet50, LSTM, BERT, and CLIP, as shown in the “Baseline” row in Table 2. The comparison clearly demonstrates the effectiveness of multi-modal vision-language pre-training backbones. Ablation Studies on Contrastive Modules To answer RQ2, we evaluate proposed modules in Table 3. We first investigate the influence of each independent contrastive modules. As a basic contrast, we could find that using only the coarse-granularity contrast SI−T is powerful enough to outperform many SOTA baselines. Additionally, each independent contrast module can realize competitive results, indicating the effectiveness of our multi-granularity contrast consistency framework. Based on the basic contrast SI−T , the SP −W , SI−W , and SP −T can enhance the model’s performance by providing additional indirect matching information from different perspectives, thus realizing full coverage of semantics from coarse-to-fine. Finally, we equip all the contrastive modules, our MGCC can yield 62.44%, 59.28%, and 49.85% of R@1 on three datasets, respectively. Therefore, we conclude that all types of granularity contrastive consistency learning modules are effective and complementary to improve retrieval performance. Influence of the Token Selection Mechanism To answer RQ3, we analyze the influence from two aspects: Robustness for feature representation. To further investigate the robustness on discriminate feature extraction, we visualize the token selection process on images and texts in Appendix (§E). It clearly proves that the token selection mechanism can enhance the effectiveness of the MGCC model by eliminating uninformative tokens (occlusions and backgrounds in images, etc.), thus promoting the model to focus on the most discriminative part. Improvement of training efficiency. As shown in Table 4, the no-selection baseline (a.k.a., (ρi, ρt) = (1.0, 1.0)) and the best trade-off experiments ((ρi, ρt) = (0.3, 0.4) on Occluded-CUHK-PEDES, (ρi, ρt) = (0.4, 0.5) on Occluded-ICFG-PEDES, and (ρi, ρt) = (0.5, 0.5) on Occluded-RSTPReid) are typically compared to make a conclusion: After equipping with the token selection mechanism, the computational memories (M) can be reduced by 16.04%∼16.83% and the inference time (T) can be accelerated by 28.10% ∼49.58%, with a slight influence of R@1 performance (-0.49%∼+3.10%). Detailed ablation on ρi and ρt are shown in Appendix (§F). Qualitative Results To answer RQ4, we illustrate from following two aspects: Feature representation visualization. We adopt the tSNE (Van der Maaten and Hinton 2008) to visualize the difference of feature representations before and after alignment, which aims at showing the model’s effectiveness in narrowing the modality gap. Detailed t-SNE visualization processes are shown in Appendix (§G). Retrieval ranking visualization. As shown in Figure 4, for each text query, the top-10 matches are displayed. On one hand, the orange highlighted words represent the selected high-informative fine-grained tokens, which dominants the retrieval process. On the other hand, although the candidate images are partially occluded, our MGCC model can still successfully retrieve the correct pedestrian, thereby showcasing the outstanding retrieval capability of MGCC. 
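A minimal sketch of the t-SNE visualization step mentioned above, using scikit-learn; the embedding inputs and plot layout are assumptions, and the actual protocol is given in Appendix (§G).

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne(image_feats: np.ndarray, text_feats: np.ndarray, path: str) -> None:
    """Project image and text embeddings to 2-D with t-SNE and plot them.

    image_feats: (N, d) image embeddings; text_feats: (N, d) text embeddings.
    A smaller gap between the two point sets after alignment indicates a
    narrower modality gap.
    """
    feats = np.concatenate([image_feats, text_feats], axis=0)
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(feats)
    n = len(image_feats)
    plt.scatter(emb[:n, 0], emb[:n, 1], s=5, label="image")
    plt.scatter(emb[n:, 0], emb[n:, 1], s=5, label="text")
    plt.legend()
    plt.savefig(path, dpi=200)
```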
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6167 Method (Type) Image-Text Encoder Occluded-CUHK-PEDES Occluded-ICFG-PEDES Occluded-RSTPReid R@1 R@5 R@10 mAP Rsum R@1 R@5 R@10 mAP Rsum R@1 R@5 R@10 mAP Rsum DPath (S) (2020b) RN50-RN50 33.69 60.48 71.30 30.12 165.47 29.03 53.29 64.50 14.03 146.82 18.35 42.65 56.35 14.80 117.35 CMPM (S) (2018) RN50-LSTM 40.57 65.38 70.54 32.60 176.49 35.43 57.22 69.43 162.08 TBPS (S) (2021) RN50-BERT 58.85 79.34 86.05 52.64 224.24 TIPCB (S) (2022) RN50-BERT 59.91 80.22 86.66 226.79 ViTAA (S) (2020b) RN50-LSTM 55.97 75.84 83.52 215.33 NAFS (M) (2021) RN50-BERT 56.85 75.39 90.27 49.58 222.51 SAF (M) (2022) ViT-BERT 59.47 79.65 84.90 50.27 224.02 SSAN (M) (2021) RN50-LSTM 59.76 78.98 85.48 59.35 224.22 53.01 71.83 78.90 30.00 203.74 39.85 66.45 76.50 29.85 182.80 CFine (M) (2022) CLIP-BERT 56.24 76.14 83.80 47.72 216.18 55.55 75.14 82.55 30.94 213.24 40.50 63.40 73.40 31.75 177.30 LGUR (M) (2022) DeiT-BERT 59.82 79.30 86.80 55.81 225.92 52.91 70.41 77.01 30.46 200.33 47.40 71.80 80.60 35.07 199.80 IRRA (M) (2023) CLIP-CLIP 64.41 82.63 85.43 57.56 232.47 57.24 78.07 83.85 30.42 219.16 48.65 75.43 80.50 39.48 204.58 Baseline1 (M) RN50-LSTM 52.80 67.91 75.80 49.21 196.51 52.40 73.30 80.80 30.22 206.50 31.50 46.55 55.95 22.23 134.00 Baseline2 (M) CLIP-BERT 61.00 80.51 86.76 54.74 228.27 55.68 75.50 82.35 32.75 213.53 41.58 66.75 75.65 31.86 183.98 MGCC (M) CLIP-CLIP 62.44 82.44 88.52 54.18 233.40 59.28 78.32 84.75 33.30 222.35 49.85 74.95 83.45 38.48 208.25 Table 2: Performance comparisons on three occluded datasets. “S” and “M” in “Type” stand for Single/Multi-granularity paradigm. Variants of MGCC Datasets intra-granularity parallel contrast inter-granularity cross contrast Occluded-CUHK-PEDES Occluded-ICFG-PEDES Occluded-RSTPReid SI−T SP −W SP −T SI−W R@1 R@5 R@10 mAP Rsum R@1 R@5 R@10 mAP Rsum R@1 R@5 R@10 mAP Rsum ✓ 57.33 77.70 84.45 49.13 219.48 57.88 76.36 82.72 31.57 216.96 41.30 64.60 74.95 31.52 180.85 ✓ 59.76 80.36 87.52 52.46 227.64 55.26 76.18 83.09 31.76 214.53 43.00 69.40 79.15 33.89 191.55 ✓ 59.55 80.75 87.41 52.76 227.71 56.57 76.83 84.05 32.13 217.45 41.05 69.40 78.45 33.23 188.90 ✓ 49.69 70.42 77.71 41.29 197.82 49.73 70.49 77.69 24.88 197.91 33.05 55.25 64.45 24.83 152.75 ✓ ✓ 62.23 82.68 88.86 54.55 233.77 58.52 77.99 84.76 32.77 221.27 48.80 72.90 82.90 37.29 204.60 ✓ ✓ 61.92 82.47 88.65 54.03 233.04 58.60 77.85 84.51 32.95 220.96 46.55 72.95 82.20 36.12 201.70 ✓ ✓ ✓ ✓ 62.44 82.44 88.52 54.18 233.40 59.28 78.32 84.75 33.30 222.35 49.85 74.95 83.45 38.48 208.25 Table 3: A series of ablation studies on three occluded datasets to investigate effects of different contrastive modules. Dataset (ρi, ρt) R@1↑mAP↑Rsum↑ T↓ M ↓ Occluded -CUHK-PEDES (1.0, 1.0) 62.75 54.53 234.49 3.63 15.27 (0.3, 0.4) 62.44 54.18 233.40 2.61 12.70 Occluded -ICFG-PEDES (1.0, 1.0) 58.80 33.12 221.45 8.33 15.27 (0.2, 0.4) 59.28 33.30 222.35 4.40 12.70 Occluded -RSTPReid (1.0, 1.0) 48.35 37.00 202.40 3.94 15.27 (0.5, 0.5) 49.85 38.48 208.25 3.24 12.82 Table 4: Comparison with different token selection ratios. “↑” means the higher, the better; “↓” means the lower, the better. Conclusion In this paper, we make the first attempt to tackle a complex and challenging problem, TO-ReID. To handle this tricky issue, we design an OGor to generate occluded persons for simulating the real-world scenario. Meanwhile, a novel MGCC framework is proposed, to narrow the semantic gap and modality gap. 
Experimental results show the effectiveness and superiority of our proposal. Query Text Rank-1 Rank-10 A woman with short black curly hair is seen wearing a violet hooded insulated jacket. She is also wearing grey trousers pants and black sneakers. The man wears a orange and white jacket with a pair of jeans. He is riding on the street. The man is wearing a plaid green black and white shirt. He is wearing gray pants and carrying a black shoulder bag. Figure 4: Retrieval ranking visualization. Matched and mismatched images are marked with green and red, respectively. Acknowledgments This work was supported by the National Natural Science Foundation of China (No. 62072465, No. 62302144) and the Science and Technology Innovation Program of Hunan Province (No. 2022RC3061, No. 2023RC3027). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6168 References Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33: 1877– 1901. Cao, M.; Yang, T.; Weng, J.; Zhang, C.; Wang, J.; and Zou, Y. 2022. Locvtp: Video-text pre-training for temporal localization. In Proc. of ECCV, 38–56. Springer. Chen, P.; Liu, W.; Dai, P.; Liu, J.; Ye, Q.; Xu, M.; Chen, Q.; and Ji, R. 2021a. Occlude them all: Occlusion-aware attention network for occluded person re-id. In Proc. of ICCV, 11833–11842. Chen, T.; Xu, C.; and Luo, J. 2018. Improving text-based person search by spatial matching and adaptive threshold. In Proc. of WACV, 1879–1887. Chen, Y.; Huang, R.; Chang, H.; Tan, C.; Xue, T.; and Ma, B. 2021b. Cross-modal knowledge adaptation for languagebased person search. IEEE Transactions on Image Processing, 30: 4057–4069. Chen, Y.; Zhang, G.; Lu, Y.; Wang, Z.; and Zheng, Y. 2022. TIPCB: A simple but effective part-based convolutional baseline for text-based person search. Neurocomputing, 494: 171–181. Ding, C.; Wang, K.; Wang, P.; and Tao, D. 2020. Multi-task learning with coarse priors for robust part-aware person reidentification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(3): 1474–1488. Ding, Z.; Ding, C.; Shao, Z.; and Tao, D. 2021. Semantically self-aligned network for text-to-image part-aware person reidentification. ArXiv:2107.12666. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In Proc. of ICLR. Fang, H.; Xiong, P.; Xu, L.; and Chen, Y. 2021. Clip2video: Mastering video-text retrieval via image clip. arXiv preprint arXiv:2106.11097. Farooq, A.; Awais, M.; Kittler, J.; and Khalid, S. S. 2021. AXM-Net: Implicit Cross-Modal Feature Alignment for Person Re-identification. In Proc. of AAAI. Gao, C.; Cai, G.; Jiang, X.; Zheng, F.; Zhang, J.; Gong, Y.; Peng, P.; Guo, X.; and Sun, X. 2021. Contextual non-local alignment over full-scale representation for text-based person search. arXiv preprint arXiv:2101.03036. Gao, S.; Wang, J.; Lu, H.; and Liu, Z. 2020. Pose-guided visible part matching for occluded person reid. In Proc. of CVPR, 11744–11752. Ge, J.; Gao, G.; and Liu, Z. 2019. Visual-textual association with hardest and semi-hard negative pairs mining for person search. arXiv preprint arXiv:1912.03083. Han, X.; He, S.; Zhang, L.; and Xiang, T. 2021. Text-Based Person Search with Limited Data. In Proc. of BMVC. 
He, K.; Gkioxari, G.; Doll´ar, P.; and Girshick, R. 2017. Mask r-cnn. In Proc. of ICCV, 2961–2969. He, L.; Liang, J.; Li, H.; and Sun, Z. 2018. Deep spatial feature reconstruction for partial person re-identification: Alignment-free approach. In Proc. of CVPR, 7073–7082. He, L.; and Liu, W. 2020. Guided saliency feature learning for person re-identification in crowded scenes. In Proc. of ECCV, 357–373. Springer. He, L.; Wang, Y.; Liu, W.; Zhao, H.; Sun, Z.; and Feng, J. 2019. Foreground-aware pyramid reconstruction for alignment-free occluded person re-identification. In Proc. of ICCV, 8450–8459. Jiang, D.; and Ye, M. 2023. Cross-Modal Implicit Relation Reasoning and Aligning for Text-to-Image Person Retrieval. In Proc. of CVPR, 2787–2797. Jing, Y.; Si, C.; Wang, J.; Wang, W.; Wang, L.; and Tan, T. 2020. Pose-guided multi-granularity attention network for text-based person search. In Proc. of AAAI, volume 34, 11189–11196. Li, S.; Cao, M.; and Zhang, M. 2022. Learning semanticaligned feature representation for text-based person search. In Proc. of ICASSP, 2724–2728. IEEE. Li, S.; Xiao, T.; Li, H.; Zhou, B.; Yue, D.; and Wang, X. 2017. Person search with natural language description. In Proc. of CVPR, 1970–1979. Liu, J.; Zha, Z.-J.; Hong, R.; Wang, M.; and Zhang, Y. 2019. Deep adversarial graph attention convolution network for text-based person search. In Proc. of ACM MM, 665–673. Luo, H.; Ji, L.; Zhong, M.; Chen, Y.; Lei, W.; Duan, N.; and Li, T. 2022. CLIP4Clip: An empirical study of CLIP for end to end video clip retrieval and captioning. Neurocomputing, 508: 293–304. Ma, W.; Wu, X.; Zhao, S.; Zhou, T.; Guo, D.; Gu, L.; Cai, Z.; and Wang, M. 2023. FedSH: Towards Privacy-preserving Text-based Person Re-Identification. IEEE Transactions on Multimedia, 1–13. Ma, Y.; Xu, G.; Sun, X.; Yan, M.; Zhang, J.; and Ji, R. 2022. X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval. In Proc. of ACM MM, 638–647. Miao, J.; Wu, Y.; Liu, P.; Ding, Y.; and Yang, Y. 2019. Pose-guided feature alignment for occluded person reidentification. In Proc. of ICCV, 542–551. Niu, K.; Huang, Y.; Ouyang, W.; and Wang, L. 2020. Improving description-based person re-identification by multigranularity image-text alignments. IEEE Transactions on Image Processing, 29: 5542–5556. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In Proc. of ICML, 8748–8763. PMLR. Shao, Z.; Zhang, X.; Fang, M.; Lin, Z.; Wang, J.; and Ding, C. 2022. Learning granularity-unified representations for text-to-image person re-identification. In Proc. of ACM MM, 5566–5574. Shu, X.; Wen, W.; Wu, H.; Chen, K.; Song, Y.; Qiao, R.; Ren, B.; and Wang, X. 2022. See finer, see more: Implicit The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6169 modality alignment for text-based person retrieval. In Proc. of ECCV, 624–641. Springer. Sun, Y.; Zheng, L.; Yang, Y.; Tian, Q.; and Wang, S. 2018. Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline). In Proc. of ECCV, 480–496. Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11). Wang, C.; Luo, Z.; Lin, Y.; and Li, S. 2021a. Text-based Person Search via Multi-Granularity Embedding Learning. In Proc. of IJCAI, 1068–1074. Wang, G.; Yang, S.; Liu, H.; Wang, Z.; Yang, Y.; Wang, S.; Yu, G.; Zhou, E.; and Sun, J. 
2020a. High-order information matters: Learning relation and topology for occluded person re-identification. In Proc. of CVPR, 6449–6458. Wang, K.; Wang, P.; Ding, C.; and Tao, D. 2021b. Batch coherence-driven network for part-aware person reidentification. IEEE Transactions on Image Processing, 30: 3405–3418. Wang, Y.; Bo, C.; Wang, D.; Wang, S.; Qi, Y.; and Lu, H. 2019. Language person search with mutually connected classification loss. In Proc. of ICASSP, 2057–2061. IEEE. Wang, Z.; Fang, Z.; Wang, J.; and Yang, Y. 2020b. Vitaa: Visual-textual attributes alignment in person search by natural language. In Proc. of ECCV, 402–420. Springer. Wang, Z.; Zhu, A.; Xue, J.; Jiang, D.; Liu, C.; Li, Y.; and Hu, F. 2022a. SUM: Serialized Updating and Matching for text-based person retrieval. Knowledge-Based Systems, 248: 108891. Wang, Z.; Zhu, A.; Xue, J.; Wan, X.; Liu, C.; Wang, T.; and Li, Y. 2022b. Caibc: Capturing all-round information beyond color for text-based person retrieval. In Proc. of ACM MM, 5314–5322. Yan, S.; Dong, N.; Zhang, L.; and Tang, J. 2022. CLIPDriven Fine-grained Text-Image Person Re-identification. arXiv preprint arXiv:2210.10276. Yao, H.; Zhang, S.; Hong, R.; Zhang, Y.; Xu, C.; and Tian, Q. 2019. Deep representation learning with part loss for person re-identification. IEEE Transactions on Image Processing, 28(6): 2860–2871. Yao, L.; Huang, R.; Hou, L.; Lu, G.; Niu, M.; Xu, H.; Liang, X.; Li, Z.; Jiang, X.; and Xu, C. 2021. FILIP: Fine-grained Interactive Language-Image Pre-Training. In Proc. of ICLR. Ye, M.; Shen, J.; Lin, G.; Xiang, T.; Shao, L.; and Hoi, S. C. 2021. Deep learning for person re-identification: A survey and outlook. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(6): 2872–2893. Zeng, H.; Zhou, T.; Wu, X.; and Cai, Z. 2022. Never Too Late: Tracing and Mitigating Backdoor Attacks in Federated Learning. In Proc. of SRDS, 69–81. Zhang, Y.; and Lu, H. 2018. Deep cross-modal projection learning for image-text matching. In Proc. of ECCV, 686– 701. Zhao, S.; Zhu, L.; Wang, X.; and Yang, Y. 2022. CenterCLIP: Token Clustering for Efficient Text-Video Retrieval. In Proc. of SIGIR. Zheng, K.; Lan, C.; Zeng, W.; Liu, J.; Zhang, Z.; and Zha, Z.-J. 2021. Pose-guided feature learning with knowledge distillation for occluded person re-identification. In Proc. of ACM MM, 4537–4545. Zheng, K.; Liu, W.; Liu, J.; Zha, Z.-J.; and Mei, T. 2020a. Hierarchical gumbel attention network for text-based person search. In Proc. of ACM MM, 3441–3449. Zheng, Z.; Zheng, L.; Garrett, M.; Yang, Y.; Xu, M.; and Shen, Y.-D. 2020b. Dual-path convolutional image-text embeddings with instance loss. ACM Transactions on Multimedia Computing, Communications, and Applications, 16(2): 1–23. Zhong, Z.; Zheng, L.; Kang, G.; Li, S.; and Yang, Y. 2020. Random erasing data augmentation. In Proc. of AAAI, volume 34, 13001–13008. Zhou, T.; Cai, Z.; Liu, F.; and Su, J. 2023. In Pursuit of Beauty: Aesthetic-Aware and Context-Adaptive Photo Selection in Crowdsensing. IEEE Transactions on Knowledge and Data Engineering. Zhu, A.; Wang, Z.; Li, Y.; Wan, X.; Jin, J.; Wang, T.; Hu, F.; and Hua, G. 2021. DSSL: Deep Surroundings-person Separation Learning for Text-based Person Retrieval. In Proc. of ACM MM, 209–217. Zhu, K.; Guo, H.; Liu, Z.; Tang, M.; and Wang, J. 2020. Identity-guided human semantic parsing for person reidentification. In Proc. of ECCV, 346–363. Springer. Zhuo, J.; Chen, Z.; Lai, J.; and Wang, G. 2018. Occluded person re-identification. In Proc. of ICME, 1–6. IEEE. 
CMG-Net: Robust Normal Estimation for Point Clouds via Chamfer Normal Distance and Multi-Scale Geometry

Yingrui Wu1,2*, Mingyang Zhao3*, Keqiang Li4, Weize Quan1,2, Tianqi Yu5, Jianfeng Yang5, Xiaohong Jia6,2, Dong-Ming Yan1,2†
1MAIS, Institute of Automation, Chinese Academy of Sciences, Beijing, China
2University of Chinese Academy of Sciences, Beijing, China
3Hong Kong Institute of Science & Innovation, Chinese Academy of Sciences, Hong Kong, China
4SenseTime Research, Shanghai, China
5School of Electronic and Information Engineering, Soochow University, Suzhou, China
6AMSS, Chinese Academy of Sciences, Beijing, China
[email protected], {migyangz, likeq98, qweizework, yandongming}@gmail.com, {tqyu, jfyang}@suda.edu.cn, [email protected]
*These authors contributed equally. †Corresponding author.

Abstract
This work presents an accurate and robust method for estimating normals from point clouds. In contrast to predecessor approaches that minimize the deviations between the annotated and the predicted normals directly, leading to direction inconsistency, we first propose a new metric termed Chamfer Normal Distance to address this issue. This not only mitigates the challenge but also facilitates network training and substantially enhances the network robustness against noise. Subsequently, we devise an innovative architecture that encompasses Multi-scale Local Feature Aggregation and Hierarchical Geometric Information Fusion. This design empowers the network to capture intricate geometric details more effectively and alleviates the ambiguity in scale selection. Extensive experiments demonstrate that our method achieves state-of-the-art performance on both synthetic and real-world datasets, particularly in scenarios contaminated by noise. Our implementation is available at https://github.com/YingruiWoo/CMG-Net_Pytorch.

Figure 1: (a) Comparison between the annotated and the proposed CND-modified normals (annotation error vs. modified prediction error), where the latter is more consistent with the underlying surface geometry. (b) Our method outperforms competitors (HSurf-Net, AdaFit, DeepFit, SHS-Net) with higher robustness to noise (σ = 0.12%) and intricate shape details, as indicated by the error heat map.

Introduction
Normal estimation is a fundamentally important task in the field of point cloud analysis, with a wide variety of applications in 3D vision and robotics, such as surface reconstruction (Fleishman, Cohen-Or, and Silva 2005; Kazhdan, Bolitho, and Hoppe 2006), denoising (Lu et al. 2020b), and semantic segmentation (Grilli, Menna, and Remondino 2017; Che and Olsen 2018). In recent years, many powerful methods have been developed to enhance the performance of normal estimation. However, these approaches, both traditional and learning-based, often suffer from heavy noise and struggle to attain high-quality results for point clouds with complex geometries.

Traditional methods (Hoppe et al. 1992; Levin 1998; Cazals and Pouget 2005) typically fit local planes or polynomial surfaces and infer normal vectors from the fitted outcomes. Although straightforward, these approaches are vulnerable to noise and encounter challenges when attempting to generalize to complex shapes.
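As a point of reference for these classical baselines, the PCA plane-fit estimator of Hoppe et al. (1992) can be sketched in a few lines; this is a minimal illustration of the traditional approach, not the method proposed in this paper.

```python
import numpy as np

def pca_normal(patch: np.ndarray) -> np.ndarray:
    """Estimate the normal of a local patch by plane fitting (PCA).

    patch: (k, 3) neighborhood of a query point.
    Returns the unit eigenvector of the covariance matrix with the
    smallest eigenvalue, i.e. the fitted plane's normal.
    """
    centered = patch - patch.mean(axis=0)
    cov = centered.T @ centered / len(patch)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # direction of least variance

# Toy usage: a noisy, nearly planar patch with ground-truth normal (0, 0, 1).
rng = np.random.default_rng(1)
pts = np.c_[rng.uniform(-1, 1, (64, 2)), 0.01 * rng.standard_normal(64)]
print(pca_normal(pts))   # approximately (0, 0, +/-1)
```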
Furthermore, their performance hinges significantly on meticulous parameter tuning. In comparison with traditional approaches, learning-based proposals (Guerrero et al. 2018; Ben-Shabat et al. 2019; Hashimoto and Saito 2019; Zhou et al. 2020; Wang and Prisacariu 2020; Lenssen, Osendorfer, and Masci 2020; Ben-Shabat et al. 2020; Cao et al. 2021; Zhu et al. 2021; Zhou et al. 2022b; Zhang et al. 2022; Li et al. 2022a,b; Du et al. 2023; Li et al. 2023a) have better generalization and less dependency on parameter tuning. There are two types of learning-based normal estimators, comprising deep
We conduct extensive experiments to validate the developed method and compare it with the state-of-the-art (SOTA) approaches on various benckmark datasets including PCPNet (Guerrero et al. 2018) and the indoor SceneNN dataset (Hua, Tran, and Yeung 2018). Results demonstrate that our method outperforms baselines by a large margin, especially on point clouds with noise, and those with intricate geometric details and various distribution density. To summarize, our main technical contributions are threefold as follows: • We propose a new method that integrates the CND metric for robust normal estimation, which solves the direction inconsistency problem effectively and significantly boosts network training and inference. • We design a novel network that incorporates multi-scale feature extraction along with hierarchical inference combined with intricate geometry information fusion, which is capable of capturing intricate geometric details and addressing the challenge of scale selection ambiguity. • We perform comprehensive experiments to demonstrate the enhancements brought by our proposed method, thereby pushing the boundaries of SOTA performance, especially on noisy normal estimation scenarios. Related Work Traditional Methods Principal Component Analysis (PCA) (Hoppe et al. 1992) stands as the most widely adopted point cloud normal estimation method, which fits a plane to the input surface patch. Subsequent variants involving Moving Least Squares (MLS) (Levin 1998), truncated Taylor expansion fitting (njet) (Cazals and Pouget 2005), local spherical surface fitting (Guennebaud and Gross 2007) and multi-scale kernel (Aroudj et al. 2017) are proposed to reduce the noisy influence through selecting larger patches and employing more intricate energy functions. Nevertheless, these approaches typically tend to oversmooth sharp features and geometric details. To circumvent these issues, Voronoi diagram (Amenta and Bern 1998; Alliez et al. 2007; M´erigot, Ovsjanikov, and Guibas 2010), Hough transform (Boulch and Marlet 2012), and plane voting (Zhang et al. 2018) are deployed in normal estimation. However, these techniques depend on manual parameter tuning heavily, which hinders their practical applications. Learning-based Methods With the powerful development of neural network, learningbased normal estimation achieves better performance and less dependence of parameter tuning than traditional approaches. They can be generally divided into two categories: deep Surface fitting and regression-based approaches. Deep surface fitting methods. These methods typically employ a deep neural network to predict point-wise weights and then fit a polynomial surface to input patches using WLS such as IterNet (Lenssen, Osendorfer, and Masci 2020) and DeepFit (Ben-Shabat et al. 2020). Analogously, Zhang et al. (2022) adopted the predicted weights as the guiding geometric information. AdaFit (Zhu et al. 2021) proposed a novel layer to aggregate features from multiple global scales and then predicted point-wise offset to improve the normal estimation accuracy. To learn richer geometric features, GraphFit (Li et al. 2022a) combined graph convolutional layers with adaptive modules, while Du et al. (2023) analyzed the approximation error of these methods and suggested two fundamental design principles to further improve the estimation accuracy. However, due to the constant order of the objective polynomial functions, deep surface fitting methods typically suffer from overfitting and underfitting. 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6172 Regression-based methods. This type casts the normal estimation problem as a regression process and predicts the point cloud normals via the network straightforward. For instance, HoughCNN (Boulch and Marlet 2016) transformed point clouds into a Hough space and then utilized Convolutional Neural Networks (CNN) to directly infer normal vectors, whereas Lu et al. (2020a) projected point clouds into a height map by computing distances between scatter points and the fitted plane. However, these approaches sacrifice the 3D geometry unavoidably when executing in 2D spaces. PCPNet (Guerrero et al. 2018) directly adopted the unstructured point clouds as input and then used the PointNet (Qi et al. 2017a) to capture multi-scale features instead. Hashimoto et al. (2019) combined PointNet with 3D-CNN to extract local and spatial features, and NestiNet (BenShabat et al. 2019) employed mixture-of-experts framework to determine the optimal normal estimation scale. To provide more information of the input patch, Refine-Net (Zhou et al. 2022a) additionally calculated the initial normals and the height map. Recent work involve HSurf-Net (Li et al. 2022b) and SHS-Net (Li et al. 2023a) first transformed point clouds into a hyper space through local and global feature extractions and then performed plane fitting in the constructed space. NeAF (Li et al. 2023b) inferred an angle field around the ground truth normal to make it learn more information of the input patch. Benefiting from the strong feature extraction abilities of the network architectures, recent regression-induced approaches demonstrate promising results on clean point clouds. However, they have yet made significant progress in normal estimation on noisy point clouds, which are often emerged in practical scenarios. Aiming at improving the robustness to noise, we identify a crucial inconsistency between the annotated normal and the neighborhood geometry of the noisy point and introduce CND to address this problem. Besides, compared with the recent regression methods, we propose a network that combines various geometric information extraction with a hierarchical architecture to make the complex information capture more effectively. Rethinking Noisy Normal Estimation Direction Inconsistency Previous learning-based approaches directly minimize the deviations between the predicted normals and the annotated ones for training and evaluation. This is reasonable for noise-free scenarios, however, for the noisy point clouds, due to the noise-caused relative coordinate changes, the annotated normals indeed are inconsistent with the neighborhood geometry of the query points. As presented in Fig. 2(a), given a set of noisy point clouds P, suppose the ground truth position locating on the surface of the noisy point pi is ˜pi. The annotated normal of pi is npi ∈R3, which is the same as the one of the point before adding noise, and the normal of ˜pi is n˜pi ∈R3. If we optimize the typically defined normal estimation loss ∥npi −ˆnpi∥2 2 as predecessors, where ˆnpi is the predicted normal, this will unavoidably lead to inconsistency between the annotated normal npi and the input patch P i. 
What’s worse, this inconsistency greatly decreases the quality of the training data and thus lowers the estimation ability of the network on noisy point clouds. Moreover, this inconsistency also degrades downstream tasks such as denoising and 3D reconstruction. For instance, Fig. 2(c) shows the denoising principle for point clouds. If we utilize the predicted normal vector \hat{n}_{p_i}, which closely resembles the annotated normal vector n_{p_i} (indicating a highly accurate estimation), then the introduced offset \hat{d}_{p_i} will not align or bring p_i closer to the noise-free underlying surface. Analogously, in the context of reconstruction tasks, as shown in Fig. 2(d), the regenerated mesh face \hat{F}_i in relation to the normal vector \hat{n}_{p_i} significantly deviates from the authentic mesh face F_i.

Figure 2: (a) The annotated normal n_{p_i} of the noisy point p_i, determined before the noisy disturbance, is inconsistent with the input patch (dashed red ellipse). (b) The direction of the normal n_{\tilde{p}_i} of the nearest clean point \tilde{p}_i is more consistent with the input patch. (c) The predicted offset \hat{d}_{p_i} cannot drag p_i to the noise-free underlying surface. (d) This inconsistency also arises for surface reconstruction tasks.

Scale Ambiguity
Another challenge in current normal estimation approaches is the ambiguity regarding the optimal scale in both local and global feature extraction. Concerning local structures, using large scales typically improves robustness against noise but can lead to oversmoothing of shape details and sharp features. Conversely, small scales preserve geometric details but are relatively sensitive to noise. When it comes to global features, large scales include more structure information from the underlying surface but may also incorporate irrelevant points, thus degrading the geometry information of the input patch. On the other hand, small scales reduce irrelevant points but are less robust to noise. Previous works have struggled to effectively extract and combine multi-scale local and global features, making them highly dependent on scale selection and resulting in unsatisfactory performance on both noisy point clouds and complex shape details.

Proposed Method
To solve the aforementioned issues, we propose a novel normal estimation approach that is robust against noise and less sensitive to scale selection. The concrete technical contributions are presented in the following.

Figure 3: Architecture of the proposed method. (a) Overall structure of CMG-Net (QSTN, multi-scale local feature extraction with attentional feature fusion, hierarchical geometric information fusion, position feature fusion, and weighted normal prediction). (b) Multi-scale Local Feature Aggregation. (c) Hierarchical Geometric Information Fusion. (d) Position Feature Fusion. (e) Weighted Normal Prediction.
Chamfer Normal Distance
To bridge the direction inconsistency between the annotated normal and the predicted one of the input patch, instead of using the conventional metric \|n_{p_i} - \hat{n}_{p_i}\|_2^2, we take inspiration from the Chamfer Distance (CD)

C(P, \hat{P}) = \frac{1}{N_1}\sum_{p_i \in P}\min_{\hat{p}_j \in \hat{P}}\|p_i - \hat{p}_j\|_2^2 + \frac{1}{N_2}\sum_{\hat{p}_j \in \hat{P}}\min_{p_i \in P}\|p_i - \hat{p}_j\|_2^2,   (1)

where N_1 and N_2 represent the cardinalities of the point clouds P and \hat{P}, and formulate the Chamfer Normal Distance (CND) as

\mathrm{CND}(P, \tilde{P}) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\arccos^2\langle n_{\tilde{p}_i}, \hat{n}_{p_i}\rangle},   (2)

where \langle\cdot,\cdot\rangle represents the inner product of two vectors and \tilde{p}_i is the closest point to p_i in the noise-free point cloud \tilde{P}. In contrast to previous approaches that rely on the annotated normal correspondence, our proposed CND formulation ensures consistency with the underlying geometric structure of the input patch (Fig. 2(b)). The CND metric not only faithfully captures the prediction errors in noisy point clouds, but also eliminates the direction inconsistency during network training, thus substantially improving the network robustness and facilitating the subsequent tasks.

CMG-Net
To capture richer multi-scale structure information and solve the scale ambiguity issue simultaneously, we develop a network, termed CMG-Net, that combines various geometric information extraction with a hierarchical architecture. Given a patch P = \{p_i \in \mathbb{R}^3\}_{i=1}^{N} centered at a query point p, as shown in Fig. 3(a), CMG-Net first normalizes the input points and rotates P by PCA and QSTN (Qi et al. 2017a; Du et al. 2023) to initialize the normal vectors. Then, we group the local features by k-nearest neighbors (k-NN) with different scales and aggregate them together. Besides, we design a hierarchical structure with intricate geometry information fusion, followed by the decoding of the embedded features. Our CND-modified loss function enables the network to escape the annotation inconsistency.

Multi-scale Local Feature Aggregation. Previous methods group the local features by k-NN and capture the geometric information by MLP and max-pooling (Li et al. 2022b). However, this manner often suffers from scale ambiguity and results in unsatisfactory robustness against noise. To solve this issue, as presented in Fig. 3(b), we construct graphs by k-NN with small and large scales and employ the skip-connection and max-pooling to capture the local structures. The Local Feature Extraction (LFE) can be formulated as

f_i^{n+1} = \mathrm{MaxPool}\Big\{\phi_1\big(\varphi_1(f_i^n), \varphi_1(f_{i,j}^n), \varphi_1(f_i^n - f_{i,j}^n)\big)\Big\}_{j=1}^{s_l},   (3)

where f_{i,j}^n is the neighbor feature of the feature f_i^n, \varphi_1 is the MLP layer, \phi_1 is the skip-connection layer, and s_l represents the scale of k-NN with l = 1, 2 by default. Moreover, we use an Attentional Feature Fusion (AFF) architecture to aggregate the features, which benefits both the small and large scales. The AFF can be formulated as

M(f_i^{s_1}, f_i^{s_2}) = \mathrm{sigmoid}\Big(\varphi_2\big(\mathrm{AvgPool}\{f_i^{s_1} + f_i^{s_2}\}_{i=1}^{N}\big)\Big),   (4)

f_i = \varphi_3\Big(f_i^{s_1}\cdot M(f_i^{s_1}, f_i^{s_2}) + f_i^{s_2}\cdot\big(1 - M(f_i^{s_1}, f_i^{s_2})\big)\Big),   (5)

where f_i^{s_1} and f_i^{s_2} are the local structures with different scales of the feature f_i, \varphi_2 and \varphi_3 are the MLP layers, and N represents the cardinality of the input point cloud patch.
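A simplified NumPy rendition of the two-scale grouping and attentional fusion in Eqs. (3)-(5) is sketched below. The random matrices stand in for the trained MLPs \varphi_1-\varphi_3 (the skip-connection is folded into the stand-in layer for brevity), and the patch size, feature width, and k-NN scales are illustrative assumptions; the snippet only demonstrates the data flow, not the actual CMG-Net layers.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 256, 32                      # patch size and feature width (assumed)
feats = rng.normal(size=(N, C))     # per-point features f_i
pts = rng.normal(size=(N, 3))       # patch coordinates used for the k-NN graphs

def mlp(x, w):                      # stand-in for an MLP layer
    return np.maximum(x @ w, 0.0)

def local_feature_extraction(feats, pts, k, w):
    """Eq. (3): group k-NN features, combine (f_i, f_ij, f_i - f_ij), max-pool."""
    d = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d, axis=1)[:, 1:k + 1]                  # k nearest neighbours
    f_i = np.repeat(feats[:, None, :], k, axis=1)            # (N, k, C)
    f_ij = feats[knn]                                        # neighbour features
    grouped = np.concatenate([f_i, f_ij, f_i - f_ij], -1)    # (N, k, 3C)
    return mlp(grouped, w).max(axis=1)                       # max-pool over neighbours

w_small = rng.normal(size=(3 * C, C)) * 0.1
w_large = rng.normal(size=(3 * C, C)) * 0.1
f_s1 = local_feature_extraction(feats, pts, k=16, w=w_small)  # small scale
f_s2 = local_feature_extraction(feats, pts, k=32, w=w_large)  # large scale

# Eqs. (4)-(5): attentional feature fusion of the two scales.
w2 = rng.normal(size=(C, C)) * 0.1
w3 = rng.normal(size=(C, C)) * 0.1
gate = 1.0 / (1.0 + np.exp(-mlp((f_s1 + f_s2).mean(axis=0, keepdims=True), w2)))
fused = mlp(f_s1 * gate + f_s2 * (1.0 - gate), w3)            # (N, C) fused local features
print(fused.shape)
```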
Hierarchical Geometric Information Fusion. Recent approaches have proven the effectiveness of multi-scale global feature extraction (Qi et al. 2017b; Li et al. 2022b; Qin et al. 2022); however, large-scale global information and local structures may be lost after point cloud downsampling. To alleviate this problem, as shown in Fig. 3(c), we propose a hierarchical architecture that combines the multi-scale global features with the local structures.

Table 1: Quantitative comparisons in terms of RMSE and CND on the PCPNet dataset. Bold values indicate the best estimator. Each group lists the noise levels (None, 0.12%, 0.6%, 1.2%), the density variations (Stripes, Gradient), and the average.
Method | RMSE: None, 0.12%, 0.6%, 1.2%, Stripes, Gradient, Ave. | CND: None, 0.12%, 0.6%, 1.2%, Stripes, Gradient, Ave.
PCA | 12.28 12.86 18.40 27.61 13.63 12.79 16.26 | 12.28 12.78 16.41 24.46 13.63 12.79 15.39
n-jet | 12.32 12.82 18.34 27.77 13.36 13.09 16.29 | 12.32 12.77 16.36 24.67 13.36 13.09 15.43
PCPNet | 9.62 11.36 18.89 23.32 11.15 11.69 14.34 | 9.62 11.23 17.28 20.16 11.15 11.69 13.52
Nesti-Net | 8.43 10.72 17.56 22.63 10.20 10.66 13.37 | 8.43 10.57 15.00 18.16 10.20 10.66 12.17
DeepFit | 6.51 9.21 16.73 23.12 7.93 7.31 11.80 | 6.51 8.98 13.98 19.00 7.93 7.31 10.62
AdaFit | 5.21 9.05 16.44 21.94 6.01 5.90 10.76 | 5.21 8.79 13.55 17.31 6.01 5.90 9.46
GraphFit | 4.49 8.69 16.04 21.64 5.40 5.20 10.24 | 4.49 8.43 13.00 16.93 5.40 5.20 8.91
HSurf-Net | 4.17 8.78 16.25 21.61 4.98 4.86 10.11 | 4.17 8.52 13.23 16.72 4.98 4.86 8.75
Du et al. | 4.11 8.66 16.02 21.57 4.89 4.83 10.01 | 4.11 8.43 13.10 17.08 4.89 4.83 8.74
SHS-Net | 3.95 8.55 16.13 21.53 4.91 4.67 9.96 | 3.95 8.29 13.13 16.60 4.91 4.67 8.59
Ours | 3.86 8.45 16.08 21.89 4.85 4.45 9.93 | 3.86 8.13 12.55 16.23 4.85 4.45 8.35

During the Hierarchical Geometric Information Fusion, the global feature G^{N_h} of the current scale N_h can be formulated as

G^{N_h} = \varphi_5\Big(\mathrm{MaxPool}\big\{\varphi_4(f_i^{N_h})\big\}_{i=1}^{N_h}\Big),   (6)

where \varphi_4 and \varphi_5 are MLP layers. Meanwhile, the local structures g_i^{N_{h+1}} are captured by

g_i^{N_{h+1}} = \mathrm{MaxPool}\big\{\varphi_6(g_{i,j}^{N_h})\big\}_{j=1}^{s} + g_i^{N_h}, \quad i = 1, \dots, N_{h+1},   (7)

where g_{i,j}^{N_h} is the neighborhood feature of point p_i within the scope of the scale N_{h+1}, s is the scale of the neighborhood features, and \varphi_6 represents the MLP layer. Then, we downsample the patch by decreasing the patch size. Moreover, we integrate the global features of the current scale and the last scale with the local structures by

f_i^{N_{h+1}} = \varphi_7\big(G^{N_h}, G^{N_{h-1}}, g_i^{N_{h+1}}\big) + f_i^{N_h}, \quad i = 1, \dots, N_{h+1},   (8)

where \varphi_7 is the MLP layer, and N_{h+1} \le N_h \le N_{h-1}.

Decoder. Note that the point coordinates are important basic attributes for point cloud processing, and the spatial relationships between them, such as distances, can guide the inference process of the network (Zhao et al. 2021; Zhang et al. 2022). To exploit this idea, we introduce two modules into the decoder: Position Feature Fusion (PFF) and Weighted Normal Prediction (WNP). As shown in Fig. 3(d), during the PFF, we embed the neighborhood coordinates of each point and fuse them with the extracted feature by skip-connections, which can be formulated as

F_i = \mathrm{MaxPool}\Big\{\phi_2\big(f_i,\ p_{i,j} - p_i,\ \varphi_8(p_{i,j} - p_i)\big)\Big\}_{j=1}^{s},   (9)

where p_{i,j} is the neighbor coordinate of the point p_i, f_i is the extracted feature of p_i, s represents the neighborhood scale, \varphi_8 is the MLP layer and \phi_2 is the skip-connection. As shown in Fig. 3(e), we predict weights based on the geometry information of each point and use the weighted features to predict the normal vector of the query point:

n = \varphi_{11}\Big(\mathrm{MaxPool}\big\{\varphi_{10}\big(F_i \cdot \mathrm{softmax}_M(\varphi_9(F_i))\big)\big\}_{i=1}^{M}\Big),   (10)

where \varphi_9, \varphi_{10} and \varphi_{11} are the MLP layers, and the normalized n is the final predicted unit normal vector.
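The weighted normal prediction head of Eq. (10) can be sketched as follows, with random matrices standing in for the trained MLPs \varphi_9-\varphi_{11} and an assumed patch size; this is a minimal illustration of the softmax-weighted pooling, not the released CMG-Net code.

```python
import numpy as np

rng = np.random.default_rng(0)
M, C = 128, 64                         # downsampled patch size, feature width (assumed)
F = rng.normal(size=(M, C))            # decoded per-point features F_i

w9 = rng.normal(size=(C, 1)) * 0.1     # stand-ins for the MLPs phi_9 .. phi_11
w10 = rng.normal(size=(C, C)) * 0.1
w11 = rng.normal(size=(C, 3)) * 0.1

scores = F @ w9                                    # phi_9(F_i): one score per point
weights = np.exp(scores - scores.max())
weights /= weights.sum()                           # softmax over the M points
weighted = np.maximum((F * weights) @ w10, 0.0)    # phi_10(F_i * softmax(phi_9(F_i)))
pooled = weighted.max(axis=0)                      # max-pool over the patch
n = pooled @ w11
n /= np.linalg.norm(n)                             # normalized unit normal
print(n, np.linalg.norm(n))
```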
Loss function. To bridge the gap between the annotated normal and the noise-caused neighborhood geometry variation of the query point, we reformulate the sine loss with CND, namely, taking the normal n_{\tilde{p}} of the nearest neighbor point \tilde{p} in the corresponding noise-free point cloud \tilde{P} as the ground truth:

L_1 = \|n_{\tilde{p}} \times \hat{n}_p\|.   (11)

Meanwhile, we use the transformation regularization loss and the z-direction transformation loss to constrain the output rotation matrix R \in \mathbb{R}^{3\times 3} of the QSTN (Du et al. 2023):

L_2 = \|I - RR^{\top}\|^2,   (12)
L_3 = \|n_{\tilde{p}} R \times z\|,   (13)

where I \in \mathbb{R}^{3\times 3} represents the identity matrix and z = (0, 0, 1). Additionally, to make full use of the spatial relationships between data points, we adopt a weight loss similar to Zhang et al. (2022):

L_4 = \frac{1}{M}\sum_{i=1}^{M}(w_i - \hat{w}_i)^2,   (14)

where \hat{w}_i are the predicted weights for each data point, M represents the cardinality of the downsampled patch, w_i = \exp\big(-(p_i \cdot n_{\tilde{p}})^2/\delta^2\big) and \delta = \max\big(0.05^2,\ 0.3\sum_{i=1}^{M}(p_i \cdot n_{\tilde{p}})^2/M\big), where p_i is a point in the downsampled patch. Therefore, our final loss function is defined as

L = \lambda_1 L_1 + \lambda_2 L_2 + \lambda_3 L_3 + \lambda_4 L_4,   (15)

where \lambda_1 = 0.1, \lambda_2 = 0.1, \lambda_3 = 0.5, and \lambda_4 = 1 are weighting factors.

Experimental Results
Datasets. Following previous approaches, we first adopt the synthetic PCPNet dataset (Guerrero et al. 2018) for comparison, in which we follow the same experimental setups, including the train-test split, added noise, and varying distribution density on the test data. To test the generalization capability of our method, we then evaluate the models trained on PCPNet on the real-world indoor SceneNN dataset (Hua, Tran, and Yeung 2018).

Table 2: Quantitative comparisons of CND on the PCPNet dataset with gradually increased noise.
Method | Noise (σ): 0.125%, 0.25%, 0.5%, 0.75%, 1.25% | Ave.
PCA | 14.46 15.09 17.75 21.40 31.81 | 20.10
n-jet | 14.40 14.98 17.67 21.50 32.08 | 20.12
PCPNet | 13.42 15.52 18.12 20.02 23.92 | 18.20
Nesti-Net | 13.34 14.33 16.63 18.34 22.31 | 16.99
DeepFit | 11.70 12.83 15.62 17.64 24.01 | 16.36
AdaFit | 11.42 12.95 15.52 17.02 21.74 | 15.73
GraphFit | 11.01 12.40 15.05 16.66 20.56 | 15.14
HSurf-Net | 11.04 12.67 15.33 16.79 20.67 | 15.30
Du et al. | 10.97 12.48 15.17 16.77 21.04 | 15.29
SHS-Net | 10.90 12.66 15.18 16.59 20.89 | 15.24
Ours | 10.60 12.56 14.89 16.31 19.62 | 14.79

Figure 4: Qualitative comparisons on the PCPNet dataset (Ours, SHS-Net, HSurf-Net, AdaFit, DeepFit at noise levels σ = 0.12% and σ = 0.6%). We use the heat map to visualize the CND error.

Implementation details. We set the input patch size N = 700 and the downsampling factors ρ = {2/3, 2/3, 2/3, 1}. The scales of k-NN in the LFE are set to 16 and 32, and s = {32, 32, 16, 16} in the Hierarchical Geometric Information Fusion. The number of neighbor points during the PFF is 16. We adopt the AdamW (Loshchilov and Hutter 2017) optimizer with an initial learning rate of 5 × 10^{-4} for training. The learning rate is decayed by a cosine schedule. Our model is trained with a batch size of 64 on an NVIDIA A100 GPU for 900 epochs. More implementation details are reported in the Supplementary Materials (SM).

Evaluation. The commonly used traditional methods, i.e., PCA (Hoppe et al. 1992) and n-jet (Cazals and Pouget 2005), and the latest learning-based methods, i.e., PCPNet (Guerrero et al. 2018), Nesti-Net (Ben-Shabat et al. 2019), DeepFit (Ben-Shabat et al. 2020), AdaFit (Zhu et al. 2021), GraphFit (Li et al. 2022a), HSurf-Net (Li et al. 2022b), Du et al. (2023) and SHS-Net (Li et al. 2023a), are taken as baselines. To make a thorough comparison, we adopt the proposed CND metric to assess the normal estimation results and compare it with the RMSE. Moreover, the error distribution analysis can be found in the SM.
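For clarity, the two evaluation measures can be sketched as below: the RMSE compares each prediction with the annotated normal of the same index, whereas the CND of Eq. (2) first looks up the nearest point in the noise-free cloud and compares with its normal. The brute-force nearest-neighbour search, the unit-sphere toy data, and the unoriented (absolute-cosine) convention are our own simplifying assumptions for illustration.

```python
import numpy as np

def rmse_angle(pred, gt):
    """Root-mean-square angular error (degrees) against the annotated normals."""
    cos = np.clip(np.abs((pred * gt).sum(-1)), 0.0, 1.0)  # abs: unoriented normals (assumption)
    return np.degrees(np.sqrt(np.mean(np.arccos(cos) ** 2)))

def cnd(pred, noisy_pts, clean_pts, clean_normals):
    """Chamfer Normal Distance, Eq. (2): compare against the normal of the
    nearest noise-free point instead of the annotated one."""
    d = ((noisy_pts[:, None, :] - clean_pts[None, :, :]) ** 2).sum(-1)
    nn = d.argmin(axis=1)                                  # nearest clean point index
    cos = np.clip(np.abs((pred * clean_normals[nn]).sum(-1)), 0.0, 1.0)
    return np.degrees(np.sqrt(np.mean(np.arccos(cos) ** 2)))

# Toy check on a noisy unit sphere: a geometry-consistent prediction scores
# noticeably better under CND than under the annotation-based RMSE.
rng = np.random.default_rng(0)
clean = rng.normal(size=(2000, 3)); clean /= np.linalg.norm(clean, axis=1, keepdims=True)
noisy = clean[:500] + 0.05 * rng.normal(size=(500, 3))
pred = noisy / np.linalg.norm(noisy, axis=1, keepdims=True)
print(rmse_angle(pred, clean[:500]), cnd(pred, noisy, clean, clean))
```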
Results on Synthetic Data
PCPNet. Table 1 reports the statistical results of all compared approaches on the PCPNet dataset, measured in terms of both the RMSE and CND metrics. As observed, our method achieves the overall highest normal estimation accuracy across different scenarios, particularly in scenarios with noise. In comparison to RMSE, the CND metric allows for more accurate and faithful prediction evaluations while mitigating the annotation inconsistency. Qualitative comparison results are presented in Fig. 4. Notably, our method exhibits the smallest errors in regions characterized by noise and intricate geometry.

Robustness to noise. Subsequently, we employ five representative models from the PCPNet dataset to assess the robustness against noise. We introduce varying levels of noise to these data, which encompass one CAD model and four scanned point clouds. The quantitative outcomes displayed in Table 2 indicate that our method exhibits superior performance compared to competitors, particularly in scenarios contaminated by high levels of noise.

Generalization to Real-world Data
Next, we investigate the generalization capability using the real-world indoor SceneNN dataset. Results in Table 3 suggest that our method has the highest normal estimation accuracy in an average sense. The qualitative results presented in Fig. 5 exhibit our superiority. It is noticeable that our method successfully preserves more geometric details, such as the handles of the refrigerators. Additionally, more results on different real-world datasets can be found in the SM.

Table 3: Statistical CND results on the SceneNN dataset.
Method | Ours | SHS-Net | HSurf-Net | AdaFit | DeepFit
Clean | 6.92 | 7.20 | 6.73 | 7.55 | 9.46
Noise | 10.82 | 11.30 | 11.30 | 11.82 | 12.27
Ave. | 8.87 | 9.25 | 9.02 | 9.97 | 10.86

Figure 5: Qualitative comparisons on the SceneNN dataset (Ours, SHS-Net, HSurf-Net, AdaFit, DeepFit; σ = 0.3%).

Ablation Study
Network architecture. CMG-Net comprises three key components: Multi-scale Local Feature Aggregation, Hierarchical Geometric Information Fusion, and the Decoder. We examine their functions on the PCPNet dataset.
(1) In the Multi-scale Local Feature Aggregation, we capture the local structure using two scales and integrate them by AFF. Table 4(a) reports the results of 1) without LFE; 2) with single-scale LFE; and 3) integrating multi-scale local features directly by MLP instead of AFF. As observed, compared with Ours, the multi-scale local features with AFF can effectively improve the network performance.

Table 4: Ablation studies with the (a) multi-scale local feature aggregation; (b) hierarchical architecture; (c) decoder. Columns: noise levels (None, 0.12%, 0.6%, 1.2%), density (Stripes, Gradient), and the average.
(a) w/o Local Feature Extraction (LFE) | 4.05 8.23 12.76 16.45 4.93 4.68 | 8.52
(a) w/ Single-scale Local Feature Extraction | 3.98 8.20 12.76 16.40 4.96 4.66 | 8.50
(a) w/o Attentional Feature Fusion (AFF) | 3.96 8.19 12.68 16.39 4.78 4.62 | 8.44
(b) w/o Hierarchical Architecture | 3.88 8.45 13.80 18.93 4.87 4.50 | 9.07
(b) w/o Multi-scale Global Feature | 3.87 8.27 12.56 16.24 4.94 4.45 | 8.39
(b) w/o Local Feature | 3.98 8.46 12.68 16.21 5.00 4.60 | 8.49
(c) w/o Position Feature Fusion (PFF) | 3.93 8.15 12.62 16.24 4.87 4.68 | 8.42
(c) w/o Weighted Normal Prediction (WNP) | 4.32 8.23 12.56 16.22 5.01 4.89 | 8.54
Ours | 3.86 8.13 12.55 16.23 4.85 4.45 | 8.35
Method Ours HSurf-Net DeepFit LCND ✓ ✓ ✓ No Noise 3.86 3.85 4.24 4.17 6.53 6.51 Low Noise 8.13 8.23 8.50 8.52 8.77 8.98 Med Noise 12.55 12.76 12.83 13.22 13.66 13.98 High Noise 16.23 16.46 16.47 16.71 18.69 19.00 Stripes 4.85 4.65 5.18 4.98 7.95 7.93 Gradients 4.45 4.51 4.94 4.86 7.31 7.31 Ave. 8.35 8.41 8.69 8.75 10.48 10.62 Table 5: Network training with or without the CNDmodified loss function on the PCPNet dataset. Ground-truth Point Cloud Ours 15.35 HSurf-Net 15.44 DeepFit 15.63 σ = 0.12% Figure 6: Comparisons on Poisson surface reconstruction. (2). To validate the effectiveness of the Hierarchical Geometric Information Fusion, we carry out experiments using the model with a fixed global scale that is equivalent to the output scale of CMG-Net. Additionally, we compare the results of the models without the global feature of the last scale or the local feature in the hierarchical architecture. Results shown in Table 4(b) demonstrate that the Hierarchical Geometric Information Fusion operation can also boost the normal estimation performance. (3). Table 4(c) shows the ablation studies of the Decoder part, suggesting the effectiveness of PFF and WNP. Besides, more ablation results on QSTN, the input patch sizes N, and the downsampling factors ρ can be found in SM. CND-modified loss function. To demonstrate the effectiveness and generalization of the newly introduced CNDmodified loss function, we conduct experiments on the PCPNet dataset, comparing the results with and without its incorporation. We employ representative methods, including the deep surface fitting method DeepFit (Ben-Shabat et al. 2020), as well as the regression methods Hsurf-Net (Li et al. 2022b), and Ours. Table 5 highlights the impact of the CND component, demonstrating its significant enhancement in normal estimation accuracy for both deep surface fitting and regression methods. Application of the Proposed Method We also demonstrate the application of our method on downstream tasks. Fig. 6 presents the Poisson surface reconstruction (Kazhdan, Bolitho, and Hoppe 2006) results using the normal vectors predicted by competing approaches. Compared with ground-truth surfaces, our method achieves the best reconstruction quality (quantified by the Symmetric Mean Hausdorff Distance (SMD)(×10−4)), especially in shape details of noisy regions, underscoring the higher accuracy of our normal estimation. We provide more reconstruction instances and highlight the application of our newly developed method to point cloud denoising in the SM. Limitations While our method has demonstrated remarkable normal estimation accuracy across diverse 3D models, it is not yet realtime capable and still depends on annotated training data, as is the case with previous approaches. Therefore, it is highly desirable in the future to reduce the computation time and delve into unsupervised frameworks. Conclusions We propose a novel method for robust normal estimation in unorganized point clouds, which shows superiority across various datasets and scenarios. We identify the issue of direction inconsistency in predecessor approaches and introduce the CND metric to address this concern. This not only boosts the network training and evaluation, but also greatly enhances the network robustness against noisy disturbance. Additionally, we design an innovative architecture that combines multi-scale local and global feature extraction with hierarchical information fusion to deal with scale selection ambiguity. 
Extensive experiments validate that our method outperforms competitors in terms of both accuracy and robustness for normal estimation. Moreover, we demonstrate its ability to generalize in real-world settings and downstream application tasks. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6177 Acknowledgements This work was partially funded by the National Natural Science Foundation of China (62172415, 12022117, 62102418), the CAS Project for Young Scientists in Basic Research (YSBR-034), and the Beijing Science and Technology Plan Project (Z231100005923033). References Alliez, P.; Cohen-Steiner, D.; Tong, Y.; and Desbrun, M. 2007. Voronoi-based variational reconstruction of unoriented point sets. In Proc. Symp. Geom. Process., 39–48. Amenta, N.; and Bern, M. 1998. Surface reconstruction by Voronoi filtering. In Proc. Ann. Symp. Comput. Geom., 39– 48. Aroudj, S.; Seemann, P.; Langguth, F.; Guthe, S.; and Goesele, M. 2017. Visibility-consistent thin surface reconstruction using multi-scale kernels. ACM Trans. Graph., 36(6): 1–13. Ben-Shabat; et al. 2020. DeepFit: 3D surface fitting via neural network weighted least squares. In Proc. Eur. Conf. Comput. Vis., 20–34. Ben-Shabat, Y.; et al. 2019. Nesti-Net: Normal estimation for unstructured 3D point clouds using convolutional neural networks. In Proc. IEEE Conf. Comput. Vis. Pattern Recog., 10112–10120. Boulch, A.; and Marlet, R. 2012. Fast and robust normal estimation for point clouds with sharp features. Comput. Graph. Forum, 31(5): 1765–1774. Boulch, A.; and Marlet, R. 2016. Deep learning for robust normal estimation in unstructured point clouds. Comput. Graph. Forum, 35(5): 281–290. Cao, J.; Zhu, H.; Bai, Y.; Zhou, J.; Pan, J.; and Su, Z. 2021. Latent tangent space representation for normal estimation. IEEE Trans. Ind. Electron., 69(1): 921–929. Cazals, F.; and Pouget, M. 2005. Estimating differential quantities using polynomial fitting of osculating jets. Comput. Aided. Geom. Des., 22(2): 121–146. Che, E.; and Olsen, M. J. 2018. Multi-scan segmentation of terrestrial laser scanning data based on normal variation analysis. ISPRS J. PhotoGramm., 143: 233–248. Du, H.; Yan, X.; Wang, J.; Xie, D.; and Pu, S. 2023. Rethinking the approximation error in 3d surface fitting for point cloud normal estimation. In Proc. IEEE Conf. Comput. Vis. Pattern Recog., 9486–9495. Fleishman, S.; Cohen-Or, D.; and Silva, C. T. 2005. Robust moving least-squares fitting with sharp features. ACM Trans. Graph., 24(3): 544–552. Grilli, E.; Menna, F.; and Remondino, F. 2017. A review of point clouds segmentation and classification algorithms. Int. arch. photogramm. remote sens. spat. inf. sci., 42: 339–344. Guennebaud, G.; and Gross, M. 2007. Algebraic point set surfaces. ACM Trans. Graph., 26: 23–es. Guerrero, P.; Kleiman, Y.; Ovsjanikov, M.; and Mitra, N. J. 2018. Pcpnet learning local shape properties from raw point clouds. Comput. Graph. Forum, 37(2): 75–85. Hashimoto, T.; and Saito, M. 2019. Normal Estimation for Accurate 3D Mesh Reconstruction with Point Cloud Model Incorporating Spatial Structure. In Proc. IEEE Conf. Comput. Vis. Pattern Recog. Worksh., 54–63. Hoppe, H.; DeRose, T.; Duchamp, T.; McDonald, J.; and Stuetzle, W. 1992. Surface reconstruction from unorganized points. In Proc. Ann. Conf. Comput. Graph. Interact. Tech., 71–78. Hua, B.-S.; Tran, M.-K.; and Yeung, S.-K. 2018. Pointwise Convolutional Neural Networks. In Proc. IEEE Conf. Comput. Vis. Pattern Recog., 984–993. Kazhdan, M.; Bolitho, M.; and Hoppe, H. 2006. 
Poisson surface reconstruction. In Proc. Symp. Geom. Process., volume 7. Lenssen, J. E.; Osendorfer, C.; and Masci, J. 2020. Deep iterative surface normal estimation. In Proc. IEEE Conf. Comput. Vis. Pattern Recog., 11247–11256. Levin, D. 1998. The approximation power of moving leastsquares. Math. Comput., 67(224): 1517–1531. Li, K.; Zhao, M.; Wu, H.; Yan, D.-M.; Shen, Z.; Wang, F.-Y.; and Xiong, G. 2022a. Graphfit: Learning multi-scale graphconvolutional representation for point cloud normal estimation. In Proc. Eur. Conf. Comput. Vis., 651–667. Li, Q.; Feng, H.; Shi, K.; Gao, Y.; Fang, Y.; Liu, Y.-S.; and Han, Z. 2023a. SHS-net: Learning signed hyper surfaces for oriented normal estimation of point clouds. In Proc. IEEE Conf. Comput. Vis. Pattern Recog., 13591–13600. Li, Q.; Liu, Y.-S.; Cheng, J.-S.; Wang, C.; Fang, Y.; and Han, Z. 2022b. HSurf-Net: Normal estimation for 3D point clouds by learning hyper surfaces. In Proc. Int. Conf. Neural Inf. Process. Syst., volume 35, 4218–4230. Li, S.; Zhou, J.; Ma, B.; Liu, Y.-S.; and Han, Z. 2023b. Neaf: Learning neural angle fields for point normal estimation. In Proc. AAAI Conf. Artif. Intell., volume 37, 1396–1404. Loshchilov, I.; and Hutter, F. 2017. Decoupled weight decay regularization. arXiv:1711.05101. Lu, D.; Lu, X.; Sun, Y.; and Wang, J. 2020a. Deep featurepreserving normal estimation for point cloud filtering. Comput. Aided. Des., 125: 102860. Lu, X.; Schaefer, S.; Luo, J.; Ma, L.; and He, Y. 2020b. Low rank matrix approximation for 3D geometry filtering. IEEE Trans. Vis. Comput. Graph., 28(4): 1835–1847. M´erigot, Q.; Ovsjanikov, M.; and Guibas, L. J. 2010. Voronoi-based curvature and feature estimation from point clouds. IEEE Trans. Vis. Comput. Graph., 17(6): 743–756. Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017a. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proc. IEEE Conf. Comput. Vis. Pattern Recog., 652–660. Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017b. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Proc. Int. Conf. Neural Inf. Process. Syst., volume 30. Qin, Z.; Yu, H.; Wang, C.; Guo, Y.; Peng, Y.; and Xu, K. 2022. Geometric transformer for fast and robust point The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6178 cloud registration. In Proc. IEEE Conf. Comput. Vis. Pattern Recog., 11143–11152. Wang, Z.; and Prisacariu, V. A. 2020. Neighbourhoodinsensitive point cloud normal estimation network. In Proc. Brit. Mach. Vis. Conf. Zhang, J.; Cao, J.; Liu, X.; Chen, H.; Li, B.; and Liu, L. 2018. Multi-normal estimation via pair consistency voting. IEEE Trans. Vis. Comput. Graph., 25(4): 1693–1706. Zhang, J.; Cao, J.-J.; Zhu, H.-R.; Yan, D.-M.; and Liu, X.-P. 2022. Geometry Guided Deep Surface Normal Estimation. Comput. Aided. Des., 142: 103119. Zhao, H.; Jiang, L.; Jia, J.; Torr, P. H.; and Koltun, V. 2021. Point transformer. In Proc. IEEE Int. Conf. Comput. Vis., 16259–16268. Zhou, H.; Chen, H.; Feng, Y.; Wang, Q.; Qin, J.; Xie, H.; Wang, F. L.; Wei, M.; and Wang, J. 2020. Geometry and learning co-supported normal estimation for unstructured point cloud. In Proc. IEEE Conf. Comput. Vis. Pattern Recog., 13238–13247. Zhou, H.; Chen, H.; Zhang, Y.; Wei, M.; Xie, H.; Wang, J.; Lu, T.; Qin, J.; and Zhang, X.-P. 2022a. Refine-Net: Normal Refinement Neural Network for Noisy Point Clouds. IEEE Trans. Pattern Anal. Mach. Intell. Zhou, J.; Jin, W.; Wang, M.; Liu, X.; Li, Z.; and Liu, Z. 2022b. 
Fast and accurate normal estimation for point clouds via patch stitching. Comput. Aided. Des., 142: 103121. Zhu, R.; Liu, Y.; Dong, Z.; Wang, Y.; Jiang, T.; Wang, W.; and Yang, B. 2021. AdaFit: Rethinking Learning-based Normal Estimation on Point Clouds. In Proc. IEEE Int. Conf. Comput. Vis., 6118–6127. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6179 | 2024 | 686 |
18,502 | WaveFormer: Wavelet Transformer for Noise-Robust Video Inpainting Zhiliang Wu1, Changchang Sun2, Hanyu Xuan3*, Gaowen Liu4, Yan Yan2 1 CCAI, Zhejiang University, China 2 Department of Computer Science, Illinois Institute of Technology, USA 3 School of Big Data and Statistics, Anhui University, China 4 Cisco Research, USA Abstract Video inpainting aims to fill in the missing regions of the video frames with plausible content. Benefiting from the outstanding long-range modeling capacity, the transformerbased models have achieved unprecedented performance regarding inpainting quality. Essentially, coherent contents from all the frames along both spatial and temporal dimensions are concerned by a patch-wise attention module, and then the missing contents are generated based on the attention-weighted summation. In this way, attention retrieval accuracy has become the main bottleneck to improve the video inpainting performance, where the factors affecting attention calculation should be explored to maximize the advantages of transformer. Towards this end, in this paper, we theoretically certificate that noise is the culprit that entangles the process of attention calculation. Meanwhile, we propose a novel wavelet transformer network with noise robustness for video inpainting, named WaveFormer. Unlike existing transformer-based methods that utilize the whole embeddings to calculate the attention, our WaveFormer first separates the noise existing in the embedding into high-frequency components by introducing the Discrete Wavelet Transform (DWT), and then adopts clean low-frequency components to calculate the attention. In this way, the impact of noise on attention computation can be greatly mitigated and the missing content regarding different frequencies can be generated by sharing the calculated attention. Extensive experiments validate the superior performance of our method over state-ofthe-art baselines both qualitatively and quantitatively. Introduction Video inpainting which aims to fill missing regions of videos with plausible contents is a fundamental yet challenging task in the computer vision field. It has great value in many practical applications, such as scratch restoration (Chang et al. 2019), undesired object removal (Seoung et al. 2019) and autonomous driving (Liao et al. 2020). Unlike image inpainting (Somani et al. 2023; Shukla et al. 2023; Bar et al. 2022) that usually focuses on the spatial dimension, video inpainting pays more attention to exploiting the temporal information. Therefore, naively extending the image inpainting algorithm on individual video frame will neglect *Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. the inter-frame motion continuity, resulting in flicker artifacts (Chang et al. 2019; Wu et al. 2023a). Recently, several deep learning-based video inpainting methods (Gao et al. 2020; Ji et al. 2022; Lee et al. 2019; Li et al. 2022; Wu et al. 2021; Zeng et al. 2019; Liu et al. 2020) have been proposed and achieved great progress in terms of the quality and speed. However, due to the limited receptive field along the temporal domain, these methods still suffer from limitations of blurry and misplacement artifacts in the completed video (Ren et al. 2022; Wu et al. 2023b). To address these issues, the state-of-the-art methods (Cai et al. 2022; Lee et al. 2019; Li et al. 2020; Liu et al. 2021; Ren et al. 2022; Seoung et al. 2019; Wu et al. 
2023c; Zhang, Wu, and Yan 2023) resort to the attention mechanism to explore the long-term correspondences between frames. In this way, the available content at distant frames can also be globally propagated into the missing regions. Notably, the representative technique transformer (Cai et al. 2022; Liu et al. 2021; Ren et al. 2022; Zeng, Fu, and Chao 2020; Cai et al. 2022; Zhang, Fu, and Liu 2022) has gained increasing attention from researchers of video inpainting field due to its remarkable advantage of long-range modeling capacity. Typically, these transformer-based methods first search coherent contents from all the frames along both spatial and temporal dimensions by a patch-wise attention mechanism, and then utilize the attention-weighted summation to generate the missing contents. It means that the attention retrieval accuracy has become the main bottleneck limiting the inpainting performance. Inaccurate attention retrieval will ignore relevant content that is essential in video inpainting and introduce more irrelevant content in the missing regions, resulting in generating blurry or compromised contents (Zhang et al. 2023; Zhang, Fu, and Liu 2022). In fact, due to the limitations of transmission media and recording equipment, digital images and videos will inevitably be polluted by noise during the transmission and recording process (Geng et al. 2022). Correspondingly, the learned embeddings always contain noise. Therefore, to improve the performance of video inpainting, it is promising and necessary to explore the impact of noise on attention computation. For this purpose, we theoretically certificate that noise-contained inputs are disadvantageous to transformers’ attention calculation. Then, to address above disadvantages caused by ubiquitous noise in video inpainting, we The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6180 propose a novel wavelet transformer network by introducing the Discrete Wavelet Transform (DWT) (Mallat 1989), dubbed as WaveFormer. Concretely, unlike existing transformer-based video inpainting methods that utilize the whole embedding to calculate attention (Cai et al. 2022; Liu et al. 2021; Ren et al. 2022; Zhang, Fu, and Liu 2022), our WaveFormer first adopts DWT to decompose the embedding used for the attention calculation into low-frequency and high-frequency components. By doing this, the noise existing in the embedding can be explicitly separated into the high-frequency components, making the low-frequency ones contain relatively clean basic features. In this way, the calculation of attention weight is only based on the low-frequency components and the missing content regarding different frequencies can be generated by sharing such attentions. Finally, the completed low-frequency and high-frequency components are aggregated to yield the final inpainting result through Inverse Discrete Wavelet Transform (IDWT). Substantial experiments show that our WaveFormer outperforms the state-of-the-arts by a significant margin in terms of PSNR and Ewarp (flow warping error) with relative improvements of 7.45% and 9.48%, respectively. Moreover, thanks to the robustness to noise, our method is able to fill missing regions using the visually-plausible and spatialtemporal coherent contents with fine-grained details. To sum up, our contributions are summarized as follows: • We theoretically demonstrate that noise always cause inferior effect when calculating attention. 
To the best of our knowledge, this is the first attempt to explore the factors that affect the transformers’ attention calculation in the video inpainting. • We propose a novel WaveFormer by introducing the DWT. It can calculate the attention on low-frequency components and share it with the high-frequency components, greatly mitigating the impact of noise on the attention calculation. • Experiments on two benchmark datasets, including Youtube-vos (Xu et al. 2018) and DAVIS (Perazzi et al. 2016), demonstrate the superiority of our proposed method in both quantitative and qualitative views. Related Work Video Inpainting With the rapid development of deep learning (Shang et al. 2023; Gu et al. 2023; Shang et al. 2022), several deep learning-based video inpainting methods have been proposed recently. For instance, Wang et al. (Wang et al. 2019), Kim et al. (Kim et al. 2019), and Chang et al. (Chang et al. 2019) employed the 3D temporal convolution and directly aggregate the temporal information of neighbor frames to reconstruct the missing contents. However, compared with 2D CNN, 3D CNN has relatively higher computational complexities, limiting the application of these methods in the real scenarios (Wu et al. 2023b; Ji et al. 2022; Liu, Li, and Zhu 2022). To alleviate this issue, treating the video inpainting as a pixel propagation problem has been explored by some works (Gao et al. 2020; Kang, Oh, and Kim 2022; Ke, Tai, and Tang 2021; Li et al. 2022; Xu et al. 2019; Zou et al. 2021). In particular, they first exploit a deep flow completion network to restore the flow sequence. Such a restored flow sequence is used to guide the relevant pixels of neighboring frames to fill in the missing regions. Overall, although these methods have shown promising results, they fail to capture the visible contents of long-distance frames, resulting in poor inpainting performance in the scene with large objects or slowly moving objects. To effectively model the long-distance correspondence, recent methods (Cai et al. 2022; Li et al. 2020; Ren et al. 2022; Seoung et al. 2019; Srinivasan et al. 2021) introduced the attention module to retrieve information from neighboring frames and adopted weighted summing operation to generate missing contents. Among these methods, benefiting from the advantages of long-range feature capture capacity, transformer has shed light to the video inpainting community. For example, Zeng et al. (Zeng, Fu, and Chao 2020) proposed the first transformer model for video inpainting by designing a multi-layer multi-head transformer. To improve the edge details of missing contents, Liu et al. (Liu et al. 2021) devised a new transformer model by introducing soft split and soft composition operations. In addition, Ren et al. (Ren et al. 2022) developed a novel Discrete Latent Transformer (DLFormer) by formulating video inpainting task into the discrete latent space. Meanwhile, Zhang (Zhang, Fu, and Liu 2022) leveraged the motion discrepancy exposed by optical flows to instruct the attention retrieval in the transformer for high-fidelity video inpainting. At the same time, Cai (Cai et al. 2022) designed a new Deformed Vision Transformer (DeViT) with emphasis on better patch-wise alignment and matching in video inpainting. It is worth noting that these transformer-based video inpainting methods ignore the impact of noise on attention calculation, which inevitably leads to inaccurate attention retrieval. 
In our work, we explore the mechanism by which noise affects the attention calculation and propose a novel wavelet transformer network with noise robustness to improve the accuracy of attention retrieval.

Discrete Wavelet Transform (DWT)
Thanks to the powerful time-frequency analysis capability of DWT, more and more researchers seek to combine it with deep learning to solve various computer vision tasks. For example, Liu et al. (Liu et al. 2018) presented a novel multi-level wavelet CNN to enlarge the receptive field for a better trade-off between efficiency and restoration performance. To preserve the original image details while reducing the computational cost of self-attention learning, Yao et al. (Yao et al. 2022) formulated an invertible downsampling based on wavelet transforms. Yu et al. (Yu et al. 2021) proposed a wavelet-based inpainting network that separately fills the missing regions of each frequency band. These works show that combining wavelets and CNNs is promising. However, to the best of our knowledge, the potential of using wavelets to mitigate the influence of noise on the attention calculation of transformers has not been well validated, which is the major concern of this paper.

Motivation
In this paper, we argue that noise is disadvantageous to the transformers’ attention calculation, which greatly limits the performance of video inpainting. Using noise-contained embeddings to calculate the attention disregards contents related to the missing regions and increases the amount of unrelated content filled into the missing regions during video completion, leading to blurred or compromised missing contents and hence inferior inpainting results.

Theorem: Given n noise-contained features f_i, whose dimension is h × w × c and whose values range from 0 to 1, each f_i can be denoted as the summation of a clean feature e_i ∈ [0, 1]^{h×w×c} and the noise o_i ∈ [0, 1]^{h×w×c} (Cheng et al. 2021; Jia, Wong, and Zeng 2021; Pang et al. 2021), i.e., f_i = e_i + o_i. Let r^f_{i,j} stand for the attention between the noise-contained features f_i and f_j, and r^e_{i,j} denote the attention between e_i and e_j, which can be obtained as follows:

r^f_{i,j} = \frac{\exp(s^f_{i,j})}{\sum_{t=1}^{n}\exp(s^f_{i,t})}, \qquad r^e_{i,j} = \frac{\exp(s^e_{i,j})}{\sum_{t=1}^{n}\exp(s^e_{i,t})},   (1)

where s^f_{i,j} = \frac{f_i \cdot f_j^{\top}}{\sqrt{h\times w\times c}} and s^e_{i,j} = \frac{e_i \cdot e_j^{\top}}{\sqrt{h\times w\times c}}. Essentially, the value of r_{i,j} represents the extent of correlation between two features. The correlation reaches its maximum when r_{i,j} = 1, representing that the i-th feature is completely related to the j-th feature, and vice versa. Based on the above theory regarding attention, the following theoretical statements hold:
• if r^e_{i,j} → 0, then r^e_{i,j} < r^f_{i,j}, i.e., the noise increases the attention between unrelated contents;
• if r^e_{i,j} → 1, then r^e_{i,j} > r^f_{i,j}, i.e., the noise decreases the attention between related contents.
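Before turning to the proof, a quick numerical sanity check of the two statements is given below. It is a toy setting of our own, and for simplicity it models the noise directly as a zero-mean perturbation of the similarity scores s_{i,j} rather than of the features; averaged over many draws, the attention on the strongly related patch shrinks while the attention on weakly related patches grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean similarity scores of one query against n = 6 patches:
# one strongly related patch (large score) and five weakly related ones.
s_clean = np.array([4.0, 0.5, 0.3, 0.2, 0.4, 0.1])

def softmax(s):
    w = np.exp(s - s.max())
    return w / w.sum()

r_clean = softmax(s_clean)

# Simplifying assumption: noise acts as a zero-mean perturbation of the scores.
trials = np.stack([softmax(s_clean + rng.normal(scale=1.5, size=s_clean.shape))
                   for _ in range(10000)])
r_noisy = trials.mean(axis=0)

print("related patch   : clean %.3f -> noisy %.3f" % (r_clean[0], r_noisy[0]))  # drops
print("unrelated patch : clean %.3f -> noisy %.3f" % (r_clean[1], r_noisy[1]))  # rises
```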
Proof: According to the definition of r^f_{i,j}, we have

r^f_{i,j} = \frac{\exp(s^f_{i,j})}{\sum_{t=1}^{n}\exp(s^f_{i,t})} = \frac{\exp(f_i \cdot f_j^{\top})}{\sum_{t=1}^{n}\exp(f_i \cdot f_t^{\top})}.   (2)

Simple algebraic computation then gives

\frac{r^e_{i,j}}{r^f_{i,j}} = r^e_{i,j}\left(\frac{\sum_{t=1, t\neq j}^{n}\exp(f_i \cdot f_t^{\top})}{\exp(f_i \cdot f_j^{\top})} + 1\right).   (3)

Besides, as exp(x) is a monotonically increasing function and its argument ranges from 0 to 1, we have

1 \le \exp(f_i \cdot f_t^{\top}) \le e \;\Rightarrow\; n-1 \le \sum_{t=1, t\neq j}^{n}\exp(f_i \cdot f_t^{\top}) \le (n-1)e \;\Rightarrow\; \frac{n-1}{e} \le \frac{\sum_{t=1, t\neq j}^{n}\exp(f_i \cdot f_t^{\top})}{\exp(f_i \cdot f_j^{\top})} \le (n-1)e.   (4)

Since n is a finite real number, \frac{n-1}{e} and (n-1)e are both finite real numbers. For convenience of expression, we denote the term \frac{\sum_{t=1, t\neq j}^{n}\exp(f_i \cdot f_t^{\top})}{\exp(f_i \cdot f_j^{\top})} revealed in Eqs. (3) and (4) as F_{ijt}. Each statement of the above theorem can then be proved as follows:

1) if r^e_{i,j} → 0, we have r^e_{i,j}(F_{ijt} + 1) → 0 \;\Rightarrow\; \frac{r^e_{i,j}}{r^f_{i,j}} → 0 < 1 \;\Rightarrow\; r^e_{i,j} < r^f_{i,j};   (5)

2) if r^e_{i,j} → 1, we have r^e_{i,j}(F_{ijt} + 1) > 1 \;\Rightarrow\; \frac{r^e_{i,j}}{r^f_{i,j}} > 1 \;\Rightarrow\; r^e_{i,j} > r^f_{i,j}.   (6)

Methodology
Formulation and Overview
Let X = {x_1, x_2, · · · , x_T} be a corrupted video sequence consisting of T frames with height H and width W. The corresponding frame-wise masks are denoted as M = {m_1, m_2, · · · , m_T}. For each mask m_i, "0" indicates that the corresponding pixel is valid, and "1" denotes that the pixel is missing or corrupted. The goal of video inpainting is to generate an inpainted video sequence \hat{Y} = {\hat{y}_1, \hat{y}_2, · · · , \hat{y}_T} that is spatially and temporally consistent with the original video sequence Y = {y_1, y_2, · · · , y_T}. Based on the fact that the contents of missing regions in one frame may exist in neighboring frames, existing transformer-based methods (Cai et al. 2022; Liu et al. 2021; Ren et al. 2022; Zhang, Fu, and Liu 2022; Yu, Fan, and Zhang 2023; Zhang et al. 2023) usually formulate the video inpainting task as a "multi-to-multi" conditional distribution prediction problem as follows:

p(\hat{Y}\,|\,X) = \prod_{t=1}^{T} p\big(\hat{Y}_{t-n}^{t+n} \,\big|\, X_{t-n}^{t+n}, M_{t-n}^{t+n}\big),   (7)

where X_{t-n}^{t+n} = {x_{t-n}, · · · , x_t, · · · , x_{t+n}} stands for a short clip of neighboring frames with a center moment t and a temporal radius n, and M_{t-n}^{t+n} denotes the mask clip corresponding to X_{t-n}^{t+n}. In practice, these transformer-based methods usually generate the missing contents by aggregating coherent contents, which are searched by a patch-based attention module from all the frames along both spatial and temporal dimensions. Therefore, the attention retrieval accuracy is a critical factor affecting the final inpainting performance. Inevitably, digital images and videos are polluted by noise during the transmission and recording process (Geng et al. 2022), so the learned embeddings always contain noise. In the previous section, we theoretically confirmed that noise has an adverse effect on transformer-based video inpainting. For this purpose, we propose a novel wavelet transformer network with noise robustness to mitigate this adverse effect. As shown in Fig. 1, the proposed WaveFormer mainly consists of three parts: a frame-level encoder, the wavelet spatial-temporal transformer, and a frame-level decoder. Specifically, the frame-level encoder is built by stacking multiple convolutional layers and residual blocks with ReLUs as activation functions, aiming to extract deep features from the low-level pixels of each frame. Similarly, the frame-level decoder is designed to decode inpainted features into frames.

Figure 1: Illustration of the proposed WaveFormer, consisting of 1) a frame-level encoder, 2) the wavelet spatial-temporal transformer and 3) a frame-level decoder. Instead of using queries (Q) and keys (K) to directly calculate attention as in existing transformer-based methods, our WaveFormer employs the Discrete Wavelet Transform (DWT) to separate the embedding into high-frequency and low-frequency components. The separated low-frequency components are relatively clean and are used to calculate the attention for video inpainting. In this way, the impact of noise on the attention weights is greatly mitigated.
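A much-simplified sketch of the data flow in Fig. 1 is given below, using a hand-written single-level Haar DWT and treating every low-frequency location as a 1 × 1 patch. The tensor shapes, the single patch size, and the absence of masks and learned projections are all simplifying assumptions; the snippet only illustrates how one set of attention weights, computed from the low-frequency bands, is shared by the low- and high-frequency values before the inverse transform. The following subsection details the actual embedding, matching, and aggregating steps.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar DWT of a (H, W, C) feature map -> (LL, (LH, HL, HH))."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return (a + b + c + d) / 2, ((a + b - c - d) / 2, (a - b + c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(ll, highs):
    """Inverse of haar_dwt2."""
    lh, hl, hh = highs
    H, W, C = ll.shape
    x = np.empty((2 * H, 2 * W, C))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll + lh - hl - hh) / 2
    x[1::2, 0::2] = (ll - lh + hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

rng = np.random.default_rng(0)
T, H, W, C = 4, 16, 16, 32
feats = rng.normal(size=(T, H, W, C))            # encoder features of T frames (toy)

# Decompose per-frame features; only the (cleaner) low-frequency bands form Q and K.
lows, highs = zip(*[haar_dwt2(f) for f in feats])
q = k = np.stack(lows).reshape(-1, C)            # 1x1 "patches" from all frames
v_low = np.stack(lows).reshape(-1, C)
v_high = [np.stack(h).reshape(-1, C) for h in zip(*highs)]   # three high-frequency bands

# Attention is computed once, on the low-frequency components only ...
s = q @ k.T / np.sqrt(C)
r = np.exp(s - s.max(axis=1, keepdims=True))
r /= r.sum(axis=1, keepdims=True)

# ... and the same weights aggregate both the low- and high-frequency values.
out_low = (r @ v_low).reshape(T, H // 2, W // 2, C)
out_high = [(r @ vh).reshape(T, H // 2, W // 2, C) for vh in v_high]
completed = np.stack([haar_idwt2(out_low[t], [oh[t] for oh in out_high]) for t in range(T)])
print(completed.shape)   # (4, 16, 16, 32)
```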
Wavelet Spatial-Temporal Transformer
As the core component of our WaveFormer, the wavelet spatial-temporal transformer is designed to search coherent contents from all the input frames, with the aim of learning spatial-temporal transformations for all missing regions in the wavelet domain of the deep encoding space. Specifically, in the process of attention calculation, we introduce DWT to separate the noise into high-frequency components, and then use the low-frequency components to calculate attention. Finally, the calculated attention is shared with the high-frequency components to generate the missing content at different frequencies. In this way, the impact of noise on attention computation can be greatly mitigated. Essentially, our wavelet spatial-temporal transformer also follows the general pipeline of transformer design, namely embedding, matching, and aggregating. We introduce each step in detail below.

Embedding: Embedding aims to map deep features into keys and memories, so as to establish deep correspondences for each region in different semantic spaces (Ren et al. 2022). Let F = {f_1, f_2, · · · , f_T} denote the deep features encoded by the frame-level encoder, where f_i ∈ R^{h×w×c}. The three basic elements of the attention mechanism, Q (query), K (key), and V (value), are extracted by 1 × 1 convolutions:

Q_i, (K_i, V_i) = M_q(f_i), (M_k(f_i), M_v(f_i)),   (8)

where 1 ≤ i ≤ T, and M_q(·), M_k(·) and M_v(·) denote 1 × 1 2D convolutions.

Matching: Having obtained these three basic elements, the coherent contents are searched by calculating the similarity between patches. Specifically, we first decompose Q_i, K_i and V_i into corresponding low-frequency and high-frequency components by DWT, individually:

Q^L_i, Q^H_i;\; K^L_i, K^H_i;\; V^L_i, V^H_i = \mathrm{DWT}(Q_i, K_i, V_i),   (9)

where Q^L_i, K^L_i, V^L_i ∈ R^{\frac{h}{2}×\frac{w}{2}×c} denote the low-frequency components corresponding to Q_i, K_i and V_i, mainly recording the principal information including the basic structures. Similarly, Q^H_i, K^H_i, V^H_i ∈ R^{3×\frac{h}{2}×\frac{w}{2}×c} denote the high-frequency components in the horizontal, vertical and diagonal directions, containing a very large proportion of the data noise. After obtaining the low-frequency components Q^L_i and K^L_i, we extract spatial patches of shape p_1 × p_2 × c from Q^L_i and K^L_i of each frame, denoted as q^L_i and k^L_i. Then, the patch-wise similarities can be calculated by matrix multiplication:

s_{i,j} = \frac{q^L_i \cdot (k^L_j)^{\top}}{\sqrt{p_1 × p_2 × c}},   (10)

where 1 ≤ i, j ≤ N and N = T × \frac{h}{p_1} × \frac{w}{p_2}. A softmax function is introduced to obtain the attention weights of all patches:

r_{i,j} = \begin{cases} \exp(s_{i,j}) / \sum_{t=1}^{N}\exp(s_{i,t}), & q^L_i \in \Omega, \\ 0, & q^L_i \in \bar{\Omega}, \end{cases}   (11)

where Ω and \bar{Ω} denote the visible regions and the missing regions, respectively. Naturally, we only borrow features from visible regions to fill the missing regions.

Aggregating: After modeling the deep correspondences of all spatial patches, we share the attention calculated on the low-frequency components with the high-frequency components.
The output of the query for the low-frequency and high-frequency components of each patch can be obtained by the attention-weighted summation of the values of related patches, separately, bvL i = N X j=1 ri,jvL j , bvH i = N X j=1 ri,jvH j , (12) where vL j and vH j denote the value of the low-frequency and high-frequency components of the j-th patch, respectively. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6183 Methods YouTube-VOS (Xu et al. 2018) DAVIS (Perazzi et al. 2016) PSNR↑ SSIM↑ Ewarp ↓ LPIPS↓ PSNR↑ SSIM↑ Ewarp ↓ LPIPS↓ TCCDS (Huang et al. 2016) 23.418 0.8119 0.3388 1.9372 28.146 0.8826 0.2409 1.0079 VINet (Kim et al. 2020) 26.174 0.8502 0.1694 1.0706 29.149 0.8965 0.1846 0.7262 DFVI (Xu et al. 2019) 28.672 0.8706 0.1479 0.6285 30.448 0.8961 0.1640 0.6857 FGVC (Gao et al. 2020) 24.244 0.8114 0.2484 1.5884 28.936 0.8852 0.2122 0.9598 CPVINet (Lee et al. 2019) 28.534 0.8798 0.1613 0.8126 30.234 0.8997 0.1892 0.6560 OPN (Seoung et al. 2019) 30.959 0.9142 0.1447 0.4145 32.281 0.9302 0.1661 0.3876 STTN (Zeng, Fu, and Chao 2020) 28.993 0.8761 0.1523 0.6965 28.891 0.8719 0.1844 0.8683 FuseFormer (Liu et al. 2021) 29.765 0.8876 0.1463 0.5481 29.627 0.8852 0.1767 0.6706 E2FGVI (Li et al. 2022) 30.064 0.9004 0.1490 0.5321 31.941 0.9188 0.4579 0.6344 FGT (Zhang, Fu, and Liu 2022) 30.811 0.9258 0.1308 0.4565 32.742 0.9272 0.1669 0.4240 WaveFormer 33.264 0.9435 0.1184 0.2933 34.169 0.9475 0.1504 0.3137 Table 1: Quantitative results of video inpainting on YouTube-VOS (Xu et al. 2018) and DAVIS (Perazzi et al. 2016) datasets. We piece all patches together to acquire bV L i ∈R h 2 × w 2 ×c and bV H i ∈R3× h 2 × w 2 ×c, and then generate the completed feature bf i by IDWT: bf i = IDWT (bV L i , bV H i ). (13) Note that the proposed wavelet spatial-temporal transformer adopts a multi-head design, where different heads are employed to calculate the attention weights of patches with various sizes. In this way, the patches with large size can apply global features to complete semantic background, while the patches with small size can utilize local features to generate detailed texture, thereby achieving high-quality video inpainting. Furthermore, to fully exploit the power of the proposed transformer, our WaveFormer stacks multiple layers of the wavelet spatial-temporal transformer. Such a design can use the updated region features in a single feedforward process to improve the results of attention to missing regions. The final inpainted frame byi can be obtained by decoding bf i with the frame-level decoder. Loss Function The total loss of our WaveFormer consists of three terms, i.e., the reconstruction term of the hole regions Lhole (Zeng, Fu, and Chao 2020), the reconstruction term of the valid regions Lval (Zeng, Fu, and Chao 2020) and the adversarial term Ladv by using Temporal PatchGAN (TPatchGAN) (Chang et al. 2019) as a discriminator: L = λholeLhole + λvalLval + λadvLadv, (14) where λhole, λval and λadv are the trade-off parameters. In real implementation, we empirically set these three parameters as 3, 5 and 0.01. Experiments Experimental Setting Datasets and Evaluation Metrics. Two most commonlyused datasets are taken to verify the effectiveness of the proposed method, including Youtube-vos dataset (Xu et al. 2018) and DAVIS dataset (Perazzi et al. 2016). The former contains 3,471, 474 and 508 video clips in training, validation and test set, respectively. The latter is composed of 60 video clips for training and 90 video clips for testing. 
Following previous works, we report quantitative results by four metrics, including PSNR (Haotian et al. 2019), SSIM (Zhang et al. 2022), LPIPS (Zhang et al. 2018) and flow warping error Ewarp (Lai et al. 2018). Mask Settings. In the real world, the applications of video inpainting mainly include undesired object removal, scratch restoration, watermark removal, etc. To simulate these applications, we evaluate the model with the following three types of masks: ◦Object mask: it is used to simulate applications like undesired object removal. Following FuseFormer (Liu et al. 2021), we employ the foreground object annotations in DAVIS dataset as the testing object masks, which have continuous motion and realistic appearance. ◦Curve mask: it is composed of curves with continuous motion, which is exploited to simulate applications like scratch restoration. In our experiment, these curve masks are sampled from FVI dataset (Chang et al. 2019). ◦Stationary mask: it has an arbitrary shapes but a relatively fixed position. The stationary mask is used to simulate applications such as watermark removal, and its generation process follows previous work (Chang et al. 2019; Zeng, Fu, and Chao 2020). Experimental Results and Analysis Quantitative Results. Quantitative results of video inpainting are reported on both YouTube-VOS and DAVIS. We select the most recent and the most competitive approaches as the baselines, including TCCDS, VINet, CPVINet, DFVI, FGVC, OPN, STTN, FuseFormer, E2FGVI and FGT. To ensure the comparability of experimental results, these baselines are fine-tuned several times based on their released codes, and report their best results in this section. As shown in Tab.1, the PSNR, SSIM, Ewarp and LPIPS of our model substantially surpass all previous state-of-the-art methods on YouTube-VOS and DAVIS. The superior results demonstrate that our WaveFormer can generate the videos with less distortion (PSNR and SSIM), more visually plausible content (LPIPS) and better spatial and temporal coherence (Ewarp). Such a commendable performance verifies the superiority of the proposed method. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6184 (a) Curve mask (b) Stationary mask (c) Object mask E2FGVI STTN FuseFormer WaveFormer FGT 20 20 Input Figure 2: Qualitative results compared with E2FGVI (Li et al. 2022), STTN (Zeng, Fu, and Chao 2020), FuseFormer (Liu et al. 2021) ,and FGT (Zhang, Fu, and Liu 2022). Better viewed at zoom level 400%. STTN FuseFormer WaveFormer Input frame bmx-bumps rollerblade Figure 3: Comparison of the feature maps before feeding into transformer blocks between STTN (Zeng, Fu, and Chao 2020), FuseFormer (Liu et al. 2021) and our WaveFormer. Qualitative Results. To visually inspect the visual results, we choose four competitive methods, including E2FGVI, STTN, FuseFormer and FGT, to conduct visual comparisons. Respectively, Fig.2 (a), Fig.2 (b) and Fig.2 (c) illustrates the scratch restoration case of curve masks, the watermark removal case of stationary masks and the object removal case of object masks. It can be observed that our WaveFormer generates the missing contents with more accurate structures and details than baselines in these three cases. Furthermore, we also visualize the feature maps of STTN, FuseFormer and WaveFormer before extracting spatial patches for attention computation. As shown in the second example (rollerblade) of Fig. 
3, the texture structure of text, windows and walls in the feature map generated by STTN is completely destroyed. Although the texture structure of text in the feature map produced by FuseFormer is retained, the texture structure of windows and walls has been totally broken by strong noise. Compared with these two most competitive approaches, our WaveFormer produces a feature map with a cleaner background and a more complete texture structure. It is easy to make out the text, windows and walls in our feature map. Such a distinct background texture leads to more accurate attention retrieval in the transformer block, thus naturally producing better visual quality. The above observations illustrate that noise accumulation destroys the texture structure used for attention retrieval, and our WaveFormer relieves this drawback to some extent. We believe that this is the reason why our WaveFormer achieves better inpainting performance.

Figure 4: User study ranking of STTN, FuseFormer, E2FGVI, FGT and Ours. "rank x" means the percentage of results from each model being chosen as the x-th best.

User Study. In order to make a further comprehensive comparison, we conduct a user study of the inpainting results of the five competitive approaches, including STTN, FuseFormer, E2FGVI, FGT and WaveFormer. We invited 20 volunteers to perform a questionnaire survey for 10 videos from the DAVIS dataset. In each inquiry, we asked the volunteers to choose the video for which they think the inpainting result is best. To ensure the reliability of the subjective evaluations, the inpainting results obtained by the five methods were shuffled in each inquiry, and each video could be played multiple times. The results of the user study are summarized in Fig. 4. As we can see, the volunteers clearly favor our results over those of the other competitors.

Figure 5: Visual comparison of the feature maps sourced from clean and noisy video frames for two examples, (a) Parkour and (b) Train (columns: Input frame, STTN, FuseFormer, WaveFormer), where the first, second and third rows are the clean frames, the frames with Gaussian noise, and the frames with salt & pepper noise, respectively. Best viewed with zoom.

Ablation Study
Noise-robustness. Fig. 5 shows the feature maps with noisy frames as inputs in two representative examples, where the first row reveals the feature maps produced when using the clean frames from the DAVIS dataset as inputs, and the next two rows display the feature maps generated when using the frames with added Gaussian and salt & pepper noise as inputs. As shown in Fig. 5, we find that it is difficult for STTN and FuseFormer to suppress noise, while WaveFormer can suppress the noise and maintain the background structure during its inference. For example, in Fig. 5(a), the building structure in the two feature maps generated by STTN and WaveFormer is complete when the clean parkour frame is fed. After the frame is superposed with Gaussian or salt & pepper noise, the feature map of STTN contains very strong noise and the building structure vanishes, while the basic structure can still be observed from our WaveFormer. Similarly, in Fig. 5(b), the feature map of FuseFormer also contains strong noise and the railway structure disappears, while WaveFormer can still preserve the railway structure. Such results indicate that WaveFormer is robust to different noises.
The Impact of Noise.
To further verify the impact of noise on attention calculation, we use DWT to separate the noise from the embedding used for attention calculation in the STTN and FuseFormer, and compare them with its original versions. Here, the improved STTN and FuseFormer are labeled STTN Wave and FuseFormer Wave. As shown in Tab.2, STTN Wave and FuseFormer Wave are obviously superior to original STTN and FuseFormer in all evaluation metrics. These results demonstrate the effectiveness and necessity of noise removal in attention calculation. Methods PSNR↑ SSIM↑ Ewarp ↓ LPIPS↓ STTN 28.993 0.8761 0.1523 0.6965 STTN Wave 30.012 0.8917 0.1509 0.6631 FuseFormer 29.765 0.8876 0.1463 0.5481 FuseFormer Wave 31.171 0.8995 0.1429 0.5236 w/o DWT 31.326 0.9259 0.1299 0.3471 Full model 33.264 0.9435 0.1184 0.2933 Table 2: Impact of noise on attention computation. Methods STTN FuseFormer E2FGVI FGT WaveFormer FLOPs 477.91G 579.82G 442.18G 455.91G 349.71G Time 0.22s 0.30s 0.26s 0.39s 0.18s Table 3: Efficiency analysis. Efficiency analysis. In addition, we compare the efficiency of WaveFormer with STTN, FuseFormer, E2FGVI and FGT by using FLOPs and inference time. Since the FLOPs in video inpainting are related to the simultaneous processing of the temporal size (number of frames), we set the temporal size to 20 following to previous works (Liu et al. 2021; Zeng, Fu, and Chao 2020; Zhang, Fu, and Liu 2022). And the runtime is measured on a single Titan RTX GPU. The compared results are shown in Tab. 3. The inference speed of the proposed method is the fastest, improving 0.04s over the optimal baseline—STTN. Besides, WaveFormer holds the lowest FLOPs in contrast to all other methods. Conclusion In this work, we theoretically proved that noise reduces the attention to relevant contents and increases the attention to irrelevant contents when generating the missing regions. Based on this fact, we propose a novel transformer network by introducing the DWT, named WaveFormer. Our WaveFormer uses DWT to separate the noise existing in the embedding into high-frequency components, and employs relatively clean low-frequency components to calculate attention weight, thereby mitigating the impact of noise on the calculation of attention weight to the greatest extent. Experiments demonstrate the superior performance of the proposed WaveFormer both quantitatively and qualitatively. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6186 References Bar, A.; Gandelsman, Y.; Darrell, T.; Globerson, A.; and Efros, A. 2022. Visual prompting via image inpainting. In Advances in Neural Information Processing Systems (NIPS), volume 35, 25005–25017. Cai, J.; Li, C.; Tao, X.; Yuan, C.; and Tai, Y.-W. 2022. DeViT: Deformed Vision Transformers in Video Inpainting. In Proceedings of the 30th ACM International Conference on Multimedia (ACMMM), 779–789. Chang, Y.-L.; Liu, Z. Y.; Lee, K.-Y.; and Hsu, W. 2019. Freeform Video Inpainting with 3D Gated Convolution and Temporal PatchGAN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 9066–9075. Cheng, S.; Wang, Y.; Huang, H.; Liu, D.; Fan, H.; and Liu, S. 2021. NBNet: Noise Basis Learning for Image Denoising With Subspace Projection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4896–4906. Gao, C.; Saraf, A.; Huang, J.-B.; and Kopf, J. 2020. Flowedge Guided Video Completion. In Proceedings of the European Conference on Computer Vision (ECCV), 713–729. 
Geng, M.; Meng, X.; Zhu, L.; Jiang, Z.; Gao, M.; Huang, Z.; Qiu, B.; Hu, Y.; Zhang, Y.; Ren, Q.; and Lu, Y. 2022. Triplet Cross-Fusion Learning for Unpaired Image Denoising in Optical Coherence Tomography. IEEE Transactions on Medical Imaging (TMI), 41(11): 3357–3372. Gu, B.; Yu, Y.; Fan, H.; and Zhang, L. 2023. FlowGuided Diffusion for Video Inpainting. arXiv preprint arXiv:2311.15368. Haotian, Z.; Long, M.; Hailin, W., JinZha ando wen; and Collomosse, N. X. J. 2019. An Internal Learning Approach to Video Inpainting. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2720– 2729. Huang, J.; Kang, S. B.; Ahuja, N.; and Kopf, J. 2016. Temporally coherent completion of dynamic video. ACM Transactions on Grapics (TOG), 35(6): 196.1–196.11. Ji, Z.; Hou, J.; Su, Y.; Pang, Y.; and Li, X. 2022. G2LPNet: Global to Local Progressive Video Inpainting Network. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 33(3): 1082–1092. Jia, F.; Wong, W. H.; and Zeng, T. 2021. DDUNet: Dense Dense U-Net With Applications in Image Denoising. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 354–364. Kang, J.; Oh, S. W.; and Kim, S. J. 2022. Error compensation framework for flow-guided video inpainting. In European Conference on Computer Vision, 375–390. Ke, L.; Tai, Y.-W.; and Tang, C.-K. 2021. OcclusionAware Video Object Inpainting. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 14468–14478. Kim, D.; Woo, S.; Lee, J.-Y.; and Kweon, I. S. 2019. Deep Blind Video Decaptioning by Temporal Aggregation and Recurrence. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4263– 4272. Kim, D.; Woo, S.; Lee, J.-Y.; and Kweon, I. S. 2020. Recurrent Temporal Aggregation Framework for Deep Video Inpainting. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 42(5): 1038–1052. Lai, W.-S.; Huang, J.-B.; Wang, O.; Shechtman, E.; Yumer, E.; and Yang, M.-H. 2018. Learning blind video temporal consistency. In Proceedings of the European conference on computer vision (ECCV), 179–195. Lee, S.; Oh, S. W.; Won, D.; and Kim, S. J. 2019. Copy-andpaste networks for deep video inpainting. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 4413–4421. Li, A.; Zhao, S.; Ma, X.; Gong, M.; Qi, J.; Zhang, R.; Tao, D.; and Kotagiri, R. 2020. Short-Term and Long-Term Context Aggregation Network for Video Inpainting. In Proceedings of the European Conference on Computer Vision (ECCV), 728–743. Li, Z.; Lu, C.-Z.; Qin, J.; Guo, C.-L.; and Cheng, M.-M. 2022. Towards an End-to-End Framework for Flow-Guided Video Inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 17562–17571. Liao, M.; Lu, F.; Zhou, D.; Zhang, S.; Li, W.; and Yang, R. 2020. Dvi: Depth guided video inpainting for autonomous driving. In Proceedings of the European Conference on Computer Vision (ECCV), 1–17. Liu, P.; Zhang, H.; Zhang, K.; Lin, L.; and Zuo, W. 2018. Multi-level wavelet-CNN for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) workshops, 773–782. Liu, R.; Deng, H.; Huang, Y.; Shi, X.; Lu, L.; Sun, W.; Wang, X.; Dai, J.; and Li, H. 2021. FuseFormer: Fusing FineGrained Information in Transformers for Video Inpainting. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 14040–14049. Liu, R.; Li, B.; and Zhu, Y. 
2022. Temporal Group Fusion Network for Deep Video Inpainting. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 32(6): 3539–3551. Liu, R.; Weng, Z.; Zhu, Y.; and Li, B. 2020. Temporal Adaptive Alignment Network for Deep Video Inpainting. In International Joint Conference on Artificial Intelligence (IJCAI), 927–933. Mallat, S. G. 1989. A theory for multiresolution signal decomposition: the wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 11(4): 674–693. Pang, T.; Zheng, H.; Quan, Y.; and Ji, H. 2021. Recorruptedto-Recorrupted: Unsupervised Deep Learning for Image Denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2043– 2052. Perazzi, F.; Pont-Tuset, J.; Mcwilliams, B.; Gool, L. V.; and Sorkine-Hornung, A. 2016. A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 724–732. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6187 Ren, J.; Zheng, Q.; Zhao, Y.; Xu, X.; and Li, C. 2022. DLFormer: Discrete Latent Transformer for Video Inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 3511–3520. Seoung, O., Wug; Sungho, L.; Joon-Young, L.; and Seon, K., Joo. 2019. Onion-Peel Networks for Deep Video Completion. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 4402–4411. Shang, Y.; Xu, D.; Zong, Z.; Nie, L.; and Yan, Y. 2022. Network binarization via contrastive learning. In Proceedings of the European conference on computer vision (ECCV), 586– 602. Shang, Y.; Yuan, Z.; Xie, B.; Wu, B.; and Yan, Y. 2023. Posttraining quantization on diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1972–1981. Shukla, T.; Maheshwari, P.; Singh, R.; Shukla, A.; Kulkarni, K.; and Turaga, P. 2023. Scene Graph Driven TextPrompt Generation for Image Inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 759–768. Somani, A.; Banerjee, P.; Rastogi, M.; Agarwal, K.; Prasad, D. K.; and Habib, A. 2023. Image Inpainting With Hypergraphs for Resolution Improvement in Scanning Acoustic Microscopy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 3112–3121. Srinivasan, V. S. R.; Ma, R.; Tang, Q.; Yi, Z.; and Xu, Z. 2021. Spatial-Temporal Residual Aggregation for High Resolution Video Inpainting. arXiv preprint arXiv:2111.03574. Wang, C.; Huang, H.; Han, X.; and Wang, J. 2019. Video inpainting by jointly learning temporal structure and spatial details. In Proceedings of the AAAI Conference on Artificial Intellignce (AAAI), 5232–5239. Wu, Z.; Sun, C.; Xuan, H.; and Yan, Y. 2023a. Deep Stereo Video Inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5693–5702. Wu, Z.; Sun, C.; Xuan, H.; Zhang, K.; and Yan, Y. 2023b. Divide-and-Conquer Completion Network for Video Inpainting. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 33(6): 2753–2766. Wu, Z.; Zhang, K.; Sun, C.; Xuan, H.; and Yan, Y. 2023c. Flow-Guided Deformable Alignment Network with SelfSupervision for Video Inpainting. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. Wu, Z.; Zhang, K.; Xuan, H.; Yang, J.; and Yan, Y. 2021. 
DAPC-Net: Deformable Alignment and Pyramid Context Completion Networks for Video Inpainting. IEEE Signal Processing Letters (SPL), 28: 1145–1149. Xu, N.; Yang, L.; Fan, Y.; Yang, J.; Yue, D.; Liang, Y.; Price, B.; Cohen, S.; and Huang, T. 2018. YouTube-VOS: Sequence-to-Sequence Video Object Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), 603–619. Xu, R.; Li, X.; Zhou, B.; and Loy, C. C. 2019. Deep FlowGuided Video Inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 3723–3732. Yao, T.; Pan, Y.; Li, Y.; Ngo, C.-W.; and Mei, T. 2022. Wavevit: Unifying wavelet and transformers for visual representation learning. In Proceedings of the European conference on computer vision (ECCV), 328–345. Yu, Y.; Fan, H.; and Zhang, L. 2023. Deficiency-Aware Masked Transformer for Video Inpainting. arXiv preprint arXiv:2307.08629. Yu, Y.; Zhan, F.; Lu, S.; Pan, J.; Ma, F.; Xie, X.; and Miao, C. 2021. WaveFill: A Wavelet-based Generation Network for Image Inpainting. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 14094– 14103. Zeng, Y.; Fu, J.; and Chao, H. 2020. Learning Joint SpatialTemporal Transformations for Video Inpainting. In Proceedings of the European Conference on Computer Vision (ECCV), 3723–3732. Zeng, Y.; Fu, J.; Chao, H.; and Guo, B. 2019. Learning Pyramid-Context Encoder Network for High-Quality Image Inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1486– 1494. Zhang, K.; Fu, J.; and Liu, D. 2022. Flow-Guided Transformer for Video Inpainting. In Proceedings of the European Conference on Computer Vision (ECCV), 74–90. Zhang, K.; Peng, J.; Fu, J.; and Liu, D. 2023. Exploiting Optical Flow Guidance for Transformer-Based Video Inpainting. arXiv preprint arXiv:2301.10048. Zhang, K.; Wu, S.; Wu, Z.; Yuan, X.; and Zhao, C. 2022. Fractional Optimization Model for Infrared and Visible Image Fusion. In Proceedings of the British Machine Vision Conference (BMVC), 1–12. Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 586– 595. Zhang, Y.; Wu, Z.; and Yan, Y. 2023. PFTA-Net: Progressive Feature Alignment and Temporal Attention Fusion Networks for Video Inpainting. In Proceedings of the IEEE International Conference on Image Processing (ICIP), 191– 195. Zou, X.; Yang, L.; Liu, D.; and Lee, Y. J. 2021. Progressive Temporal Feature Alignment Network for Video Inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 16448–16457. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6188 | 2024 | 687 |
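Before moving to the next entry, the WaveFormer row above only describes its key step in prose, so here is a minimal PyTorch sketch of that idea: compute the attention affinities from the low-frequency Haar band of a feature map so that noise pushed into the high-frequency bands stays out of attention retrieval. The function and module names are ours, the single attention layer stands in for the paper's transformer blocks, and the wavelet normalisation and the later re-use of the detail bands are assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

def haar_dwt2(x):
    """One-level separable Haar transform of a feature map x: [B, C, H, W] (H, W even).
    Returns the low-frequency band LL and the three high-frequency detail bands."""
    a = x[..., 0::2, 0::2]
    b = x[..., 0::2, 1::2]
    c = x[..., 1::2, 0::2]
    d = x[..., 1::2, 1::2]
    ll = (a + b + c + d) / 2.0          # low-pass in both directions
    details = ((a - b + c - d) / 2.0,   # three detail (high-frequency) bands
               (a + b - c - d) / 2.0,
               (a - b - c + d) / 2.0)
    return ll, details

class LowFreqAttention(nn.Module):
    """Self-attention whose affinity matrix is computed from the LL band only,
    so that noise concentrated in the detail bands does not disturb retrieval."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, feat):                        # feat: [B, C, H, W]
        ll, _details = haar_dwt2(feat)
        b, c, h, w = ll.shape
        tokens = ll.flatten(2).transpose(1, 2)      # [B, h*w, C]
        q, k, v = self.q(tokens), self.k(tokens), self.v(tokens)
        attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)
        out = attn @ v
        return out.transpose(1, 2).reshape(b, c, h, w)

x = torch.randn(2, 64, 32, 32)                      # a toy feature map
print(LowFreqAttention(64)(x).shape)                # torch.Size([2, 64, 16, 16])
```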
18,503 | FD3D: Exploiting Foreground Depth Map for Feature-Supervised Monocular 3D Object Detection Zizhang Wu1, Yuanzhu Gan2, Yunzhe Wu2, Ruihao Wang2, Xiaoquan Wang3, Jian Pu1* 1 Fudan University 2 ZongmuTech 3 ExploAI [email protected], {yuanzhu.gan, nelson.wu, ruihao.wang}@zongmutech.com, [email protected], [email protected] Abstract Monocular 3D object detection usually adopts direct or hierarchical label supervision. Recently, the distillation supervision transfers the spatial knowledge from LiDAR- or stereobased teacher networks to monocular detectors, but remaining the domain gap. To mitigate this issue and pursue adequate label manipulation, we exploit Foreground Depth map for feature-supervised monocular 3D object detection named FD3D, which develops the high-quality instructive intermediate features to conduct desirable auxiliary feature supervision with only the original image and annotation foreground object-wise depth map (AFOD) as input. Furthermore, we build up our instructive feature generation network to create instructive spatial features based on the sufficient correlation between image features and pre-processed AFOD, where AFOD provides the attention focus only on foreground objects to achieve clearer guidance in the detection task. Moreover, we apply the auxiliary feature supervision from the pixel and distribution level to achieve comprehensive spatial knowledge guidance. Extensive experiments demonstrate that our method achieves state-of-the-art performance on both the KITTI and nuScenes datasets, with no external data and no extra inference computational cost. We also conduct experiments to reveal the effectiveness of our designs. Introduction 3D object detection is crucial for the perception task in extensive applications such as autonomous driving and robotic manipulation (Reading et al. 2021; Liu, Wu, and T´oth 2020). Considering different scenarios, recent 3D detection approaches (Thomas et al. 2019; Nabati and Qi 2021; Huang et al. 2022; Sun et al. 2020) measure the objects’ precise location from the different-modalities inputs, such as 3D point clouds, radar signals, monocular images or stereo images. In particular, the monocular setting adopting the deployment of a single RGB camera has attracted increasing attention. The usual pipeline for monocular 3D object detection reveals to apply the direct label supervision with the welldesigned model and constrains (Zhang, Lu, and Zhou 2021; Reading et al. 2021; Huang et al. 2022), or deliver the hierarchical label supervision on different-layer features (Lu *Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Pixel-level & Distribution-level Auxiliary Feature Supervision Monocular 3D Detector Instructive Feature Generation Network Detection Results Primary Detection Network Instructive Feature Input Image AFOD Input Image & Instructive Feature Generation Figure 1: Illustration of our auxiliary feature supervision framework, where the instructive features are trained from the generation network with inputs of image and annotation foreground object-wise depth (AFOD). et al. 2021). Recently, researchers have explored the distillation schema to train the LiDAR- or stereo-based teacher networks to transfer the learned spatial features’ knowledge to monocular 3D detectors (Chong et al. 2022; Chen, Dai, and Ding 2022). 
It actually exhibits an auxiliary feature supervision to the original monocular baseline, with the elaborate manipulation of the labels, teacher models, and inputs like LiDAR or stereo sensors. Notably, these multi-modal sensors bring robust spatial guidance like feature-level (Chen, Dai, and Ding 2022) or object-level (Chong et al. 2022) adaptation but remain more expensive compared with the monocular image-only setting (Yin, Zhou, and Krahenbuhl 2021; Li and Zhao 2021). In addition, their teacher network’s training stays restricted by the domain gap, which reveals the other-modal input, the heavier feature extractor and more challenging cross-modal feature distillation. Furthermore, they utilize indirect label supervision to make inaccurate predictions under the model limitation, ignoring the potential of ground truth as a direct indication to generate reliable spatial features. To mitigate these issues and pursue adequate label manipulation, we exploit Foreground Depth map for featuresupervised monocular 3D object detection named FD3D, which develops the high-quality instructive intermediate The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6189 features to conduct desirable auxiliary feature supervision with only the original image and annotation foreground object-wise depth map (AFOD) as input. As shown in Fig. 2, it can be considered as an elegant auxiliary feature supervision with a novel input-label manipulation for excellent label absorbance. In specific, within the instructive feature generation network (IFGN), we cancel the heavy LiDAR or stereo feature extractor. Instead, we pre-process the foreground object-wise depth label to achieve alignment with the image domain, where AFOD provides the attention focus only on foreground objects to achieve clearer guidance in the detection task. Moreover, we propose a vision-depth association module (VDAM) to promote the semantic and spatial clues correlation with image and AFOD as input. Noticeably, the IFGN actually keeps the similar light monocular detection framework, and the VDAM fully manipulates the depth labels as a reliable indicator to generate more instructive intermediate features. Afterwards, we deliver the auxiliary feature supervision to release the cross-domain challenge and acquires efficient spatial knowledge migration with the channel-wise projection layer (pixel-level supervision) and adversarial scoring block (distribution-level supervision). After training, we reserve the primary detection network to inference tested images, with no external data and no extra inference computational cost. We summarize our contributions as follows: • We propose a new framework FD3D for monocular 3D object detection, which sufficiently manipulates the image and annotation foreground object-wise depth map (AFOD) as input to focus on the foreground objects, thus to produce instructive intermediate features for further domain-free feature supervision. • We develop a vision-depth association module to generate robust intermediate features, which projects features with semantic, depth, and geometric clues into 3D coordinates to deploy adequate feature fusion. • We propose the auxiliary feature supervision to reach efficient pixel-level and distribution-level feature guidance with channel-wise projection layer and adversarial scoring block. Our approach achieves state-of-the-art performance on the KITTI (Geiger, Lenz, and Urtasun 2012) and nuScenes (Caesar et al. 2019) datasets. Related Work Monocular 3D object detection. 
The objective of monocular 3D objection detection was to recognize objects of interest and recover the corresponding 3D bounding box information from monocular images. It was an ill-posed problem due to the lack of direct depth information measurements for solving 2D-3D projection ambiguity (Ma et al. 2021; Reading et al. 2021). Recent approaches (Zhang, Lu, and Zhou 2021; Reading et al. 2021; Park et al. 2021; Huang et al. 2022) adopted the convolutional neural networks to encode high-level semantic features from image inputs, and designed geometric constraints based on calibration projection or utilized additional depth supervision with LiDAR measuring to decode target-level responses. To mitigate the issue of depth measuring absence, PatchNet (Ma et al. 2020) adopted dense depth estimation pretraining (Fu et al. 2018) and performed the task of regression from patched depth maps. MonoDTR (Huang et al. 2022) proposed a depthaware transformer to encode long-range semantic and depth dependencies. MonoDistill (Chong et al. 2022) utilizes the projected LiDAR signals as the inputs for the teacher model to educate the student model with spatial information. Our approach reveals the monocular detector to further motivate the auxiliary spatial feature supervision. Auxiliary Learning. Auxiliary learning (Zhang, Tang, and Jia 2018; Liu, Davison, and Johns 2019; Ye et al. 2021) aimed at jointly training a primary task alongside auxiliary tasks to improve the primary model robustness to unseen data. Works (Flynn et al. 2016; Zhou et al. 2017) accomplished unsupervised monocular depth estimation via developing image synthesis networks that predicted the relative pose of multiple cameras for auxiliary learning. For the primary task of 2D object detection, Mordan et al. (Mordan et al. 2018) proposed the generic ROCK residual block to train auxiliary scene classification, depth estimation, and normal estimation. In terms of monocular 3D object detection, DD3D (Park et al. 2021) proposed to pre-train the detector with the auxiliary task of depth estimation to assist in monocular 3D localization. MonoCon (Liu, Xue, and Wu 2022) proposed to recover the 2D-3D relationship via applying geometric constraints, which trained the key-point estimations of foreground objects as the auxiliary task. Our auxiliary feature supervision can be regarded as an auxiliary learning task to improve the primary detection task. Methodology Overview As shown in Fig. 2, we illustrate the overview of our FD3D framework for monocular 3D object detection. Firstly, we propose an instructive feature generation network (IFGN) by developing a vision-depth association module (VDAM), which considers the long-range dependencies of foreground object-wise depth maps (AFOD) from labels and their encoded semantic features. We supervise it with labels and restore the instructive intermediate features for feature supervision. Afterwards, we adopt MonoDLE (Ma et al. 2021) monocular detector as our baseline model, and train it for the primary task of 3D detection, together with the auxiliary feature supervision between intermediate features, where we design the channel-wise projection layer and an adversarial scoring block to promote spatial knowledge migration. After training, we propose the tested images into the primary detection network to receive detection results with no external data. Next, we will introduce our framework with equations compared with the previous ones, and reveal more details of our contributions. 
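Before the formal comparison of label-manipulation schemes, a compact sketch of the training step this overview describes may be helpful: a frozen generation branch consumes the image together with the AFOD and yields instructive features, while the primary branch sees only the image and receives extra feature-level supervision, pixel-wise through a projection layer and distribution-wise through a discriminator. The tiny backbones, the 1x1-conv stand-in for the CPL, the convolutional score-map discriminator, and the 0.9 loss weights (taken from the implementation details reported later) are placeholders, not the released FD3D code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBackbone(nn.Module):
    """Placeholder encoder (image -> feature map); FD3D keeps a similar detector
    architecture for both branches, only the input channels differ."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(in_ch, 32, 3, 2, 1), nn.ReLU(),
                                 nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
    def forward(self, x):
        return self.net(x)

primary = TinyBackbone(3)          # primary detection branch: RGB only
ifgn    = TinyBackbone(4)          # generation branch: RGB + 1-channel AFOD
cpl     = nn.Conv2d(64, 64, 1)     # stand-in for the channel-wise projection layer
disc    = nn.Sequential(nn.Conv2d(64, 32, 1), nn.LeakyReLU(0.2),
                        nn.Conv2d(32, 1, 1), nn.Sigmoid())   # per-location score map

image = torch.randn(2, 3, 96, 320)
afod  = torch.rand(2, 1, 96, 320) * 60.0     # annotation foreground object-wise depth
l_det = torch.zeros(())                      # stand-in for L_cls + L_reg of the detector

with torch.no_grad():                        # instructive features from the frozen IFGN
    f_inst = ifgn(torch.cat([image, afod], dim=1))
f_pri = primary(image)

l_pix = 0.9 * F.mse_loss(cpl(f_pri), f_inst)                  # pixel-level supervision
score = disc(f_pri)                                           # distribution-level supervision
l_adv = F.binary_cross_entropy(score, torch.ones_like(score)) # try to look "instructive"
loss  = l_det + l_pix + 0.9 * l_adv                           # overall auxiliary-supervised loss
loss.backward()
```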
Label Manipulation for 3D Object Detection
In Fig. 3, we elaborate the label manipulation comparison. First, we introduce the usual label manipulation: Fig. 3(a) with direct label supervision and Fig. 3(b) with multi-layer losses that realize hierarchical label supervision, where X denotes the monocular image input and Fi denotes the i-th layer of the model.
Figure 2: Overview of our FD3D framework for monocular 3D object detection. (a) Channel-Wise Projection Layer (pixel-level supervision); (b) Adversarial Scoring Block (distribution-level supervision).
Figure 3: The usual label manipulation for monocular 3D object detection with the direct label (a) or hierarchical label (b) supervision. The distillation schema in (c) with other-modal input ˆX, which applies cross-domain intermediate feature supervision. Our strategy in (d) replaces ˆX with the image X, adopts a similar monocular model ˜F, and leverages the ground truth Y as a spatial indicator to achieve more instructive intermediate features.

L_1(I(F_N; X); Y) + L_{31}(I(G_N; \hat{X}); Y) + \sum_i L_{32}\big(O(I(F_i; X)); I(G_i; \hat{X})\big) = L_1(\cdot) + \sum_i L_3\big(I(F_i; X); I(T_i(G_N, \hat{X}; Y); \hat{X})\big)    (1)

Furthermore, Eq. (1) reflects the distillation schema in Fig. 3(c), which trains the guidance network G via L31 with an other-modal input ˆX such as a LiDAR point cloud, and applies the feature supervision loss L32 between intermediate features. Here, I(A; B) denotes the inference result of model layers A with input B, L(A; B) denotes the loss between prediction A and label B, and O denotes an operation on features. We can simplify L31 and L32 into L3, where T(GN, ˆX; Y) indicates the manipulation (model training) process of the three elements (GN, ˆX, Y), so that the i-th intermediate features Ti are obtained from the input ˆX. Considering the limitations of previous frameworks, we propose our new pipeline:

L_1(I(F_N; X); Y) + L_{41}(I(\tilde{F}_N; X, Y); Y) + \sum_i L_{42}\big(O(I(F_i; X)); I(\tilde{F}_i; X, Y)\big) = L_1(\cdot) + \sum_i L_4\big(I(F_i; X); I(T_i(\tilde{F}_N, X, Y; Y); X, Y)\big)    (2)

As shown in Fig. 3(d), we first replace ˆX with the same image-domain input X and replace the other-modal-based guidance network G with a similar monocular model ˜F to release the domain gap. In addition, we delve into label manipulation, leveraging the ground truth Y as a strong geometrical cue to achieve more instructive intermediate features I(Ti; X, Y). We regard our approach as a sufficient way to absorb and manipulate the three elements (˜F, X, Y), which decreases the training complexity.
Instructive Feature Generation Network
Prior to the details, we briefly introduce some important design considerations.
First of all, we shall avoid the usage The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6191 Semantic Feature Prior Coordinates MLP Sigmoid Cross-Attention Association × MLP Coordinates with 3D Embedding × Element-wise Multiplication + Element-wise Addition 𝐻𝐻×𝑊𝑊×C1 𝐻𝐻×𝑊𝑊×𝐶𝐶2 𝐻𝐻×𝑊𝑊×3 Vision-Depth Association Coordinate Generator AFOD + Cross-Attention Association Latent Semantic Latent Depth and Semantic CrossAttention As Query As Key & Value Prior Coordinates 2D coordinates with depth grid 𝐻𝐻×𝑊𝑊×1 Instructive Feature Coordinate Generator 𝐹𝐹Img 𝐹𝐹If Figure 4: The structure of our vision-depth fusion module is only employed in our teacher model. of additional inputs like LiDAR signals or paired stereoimages during the network training or inference, so that our results could be fairly compared against the prior monocular research. Secondly, noting that we aim at transferring the abundant spatial knowledge to the baseline, we shall ensure that the instructive intermediate features from the generation network are accurate and reliable. Thirdly, to alleviate the cost of performing dimensional alignments, despite the vision-depth association module, we maintain the generation network architecture as that of the baseline model. Specifically, as shown in Fig. 4, we create the annotation foreground object-wise depth map (AFOD), denoted as D, via applying the calibration projection of object-wise depth labels A. We adopt the AFOD for its overall superior performance with clearer instructive features about the foreground objects. LiDAR depth reaches the ambiguous feature guidance on the long-distance and occluded cases, which is affected by the disruption of background noise, such as buildings and traffic devices. More details are demonstrated in the experiment section. Then pixels within projected 2D boxes are assigned with center depth values of 3D boxes, while the occlusion areas are assigned with depth values of closer targets. As shown in Fig. 4, we designed the vision-depth association module based on the cross-attention association (Vaswani et al. 2017) and the coordinate embedding (Liu et al. 2022) mechanisms. While, instead of generating 3D coordinate feature maps with semantic features only, we further restrain the pixel localization error by multiplying it with encoded depth features from D. We denote the image feature extractor as FImg, the multi-layer-perceptron (MLP) operation as FM, the cross-attention operation as ψ, and the coordinate generator as ϕ. Hence in function, we formulate the vision-depth association module as FIf =σ(FM(ψ(FImg(X), D)) × FM(ϕ(X)) + ψ(FImg(X), D), (3) where FIf refers to the associated vision-depth features, i.e. our wanted instructive features, and σ refers to the sigmoid operation. Within the coordinate generator ϕ, we first create height and width arrays with lengths of the input image size, and leverage the LID distribution to create the depth array for each pixel location. Hence, we multiplied the cameraaxis coordinates with the inverse of the intrinsic parameters, together with the camera to LiDAR extrinsic parameters to obtain our initial 3D coordinates of shape H × W × 3. Our cross-attention association sets the encoded semantic features as query and the concatenation of depth and semantic map as key and value, so that the generator has access to the precise information of distance measuring. 
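To make Eq. (3) concrete, the sketch below wires together the pieces just described: a cross-attention whose query is the semantic tokens and whose key and value are the concatenation of semantic and AFOD tokens, two MLPs, a sigmoid gate over the product with the embedded prior coordinates, and a residual addition of the attention output. The coordinate helper simply builds a normalised grid with a constant depth channel, whereas the actual generator back-projects LID depth bins through the camera calibration; the exact gating order is our reading of Eq. (3).

```python
import torch
import torch.nn as nn

def prior_coordinates(h, w, depth=20.0):
    """Stand-in for the coordinate generator: a normalised 2D grid with a constant
    depth channel (the paper back-projects LID depth bins through the calibration)."""
    xs = torch.linspace(0, 1, w).view(1, w).expand(h, w)
    ys = torch.linspace(0, 1, h).view(h, 1).expand(h, w)
    d = torch.full((h, w), depth)
    return torch.stack([xs, ys, d], dim=-1).reshape(1, h * w, 3)

class VisionDepthAssociation(nn.Module):
    """Sketch of Eq. (3): F_if = sigmoid(MLP(psi) * MLP(phi)) + psi, where psi is the
    cross-attention of semantic tokens (query) with [semantic, AFOD] tokens (key/value)."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.kv_proj = nn.Linear(dim + 1, dim)     # concat(semantic, depth) -> dim
        self.mlp_feat = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.mlp_coord = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, f_img, afod, coords):
        # f_img: [B, HW, C] semantic tokens; afod: [B, HW, 1]; coords: [B, HW, 3]
        kv = self.kv_proj(torch.cat([f_img, afod], dim=-1))
        psi, _ = self.attn(query=f_img, key=kv, value=kv)       # cross-attention association
        gate = torch.sigmoid(self.mlp_feat(psi) * self.mlp_coord(coords))
        return gate + psi                                        # instructive feature

B, C, H, W = 2, 64, 24, 80
f_img = torch.randn(B, H * W, C)                   # flattened semantic feature map
afod = torch.rand(B, H * W, 1) * 60.0              # object-wise depth tokens (0 = background)
coords = prior_coordinates(H, W).expand(B, -1, -1)
print(VisionDepthAssociation(C)(f_img, afod, coords).shape)   # torch.Size([2, 1920, 64])
```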
Afterwards, we train the IFGN with 3D labels in a supervised manner and evaluate the depth-measuring quality of instructive intermediate features based on its 3D detection performance on the validation split. Auxiliary Feature Supervision In this subsection, we are going through the auxiliary feature supervision process of instructive intermediate features. Unlike previous auxiliary learning approaches for monocular 3D object detection (Park et al. 2021, 2023; Peng et al. 2022) that require the pre-training of a large number of samples with dense depth map labels, we train our auxiliary supervision on the detection dataset only. Channel-Wise Projection Layer (CPL) Motivated by the MLP-only architecture (Touvron et al. 2022), we conduct the channel-wise interaction in a residual manner with the MLP operations. As shown in Fig. 5, the proposed CPL consists of the residual structure with channel attention generation. Instead of directly adopting the proposed designs (Touvron et al. 2022), we replace the self-attention module with linear layers (Li.), affine transformation (Aff.) and other operations, which abandons the traditional multi-head attention computation and achieves GPU savings and stable training. Next, we deliver the global features through the channel attention generation, to obtain the channel attention weights. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6192 Figure 5: Overview of the channel-wise projection layer (CPL). Real Labels Fake Labels Discriminator Training Score Map Score Map Fixing Discriminator & Auxiliary Supervision Training Real Labels Instructive features …… …… …… Linear Layer Leaky ReLU Linear Layer Leaky ReLU Sigmoid Discriminator Score Map Primary features Primary features ADV Loss Discriminator Figure 6: Training of the discriminator for the adversarial scoring block (ASB). Therefore, the total pixel-wise loss Lpix can be denoted as: Lpix =λF mLmse CPL(FS), FT , (4) where λF m is the hyper-parameters for loss balancing the training. The MSE loss refers to the mean squared errors. Adversarial Scoring Block The above MSE loss cares about the pixel-wise diversity, which may mislead the auxiliary supervision with the unbalanced feature distribution: a small partition of foreground object targets but a large partition of background (Chong et al. 2022). Therefore, we adopt the training procedure of generative adversarial network (Goodfellow et al. 2020), and propose the adversarial scoring block (ASB), as shown in Fig. 6. It allocates the distribution consistency between paired features, and leverages the discriminator network to distinguish them. When the primary features succeed in fooling the discriminator, we receive a similar distribution between the primary and instructive features. Specifically, we design the discriminator network with linear layers. When training the discriminator, we assign the real labels to the auxiliary instructive features and fake labels to the primary ones. We minimize the binary cross-entropy loss LD of the score map from discriminator D on the features as follows: LF D = −1 N X i X h,w yilog D(Fin)(h,w) + (1 −yi)log D(Fpr)(h,w) (5) Method mAP↑ mATE↓ mAOE↓ NDS↑ CenterNet 0.338 0.658 0.629 0.400 FCOS3D 0.358 0.690 0.452 0.428 DD3D 0.418 0.572 0.368 0.477 PETR 0.391 0.647 0.433 0.455 BEVFormer 0.409 0.650 0.439 0.462 BEVDet 0.398 0.556 0.414 0.463 Ours 0.431 0.569 0.365 0.485 Imp. +1.3% +0.3% +0.3% +0.8% Table 1: Single-frame nuScenes detection test set evaluation. 
‘Imp.’ indicates our performance improvement over the base model DD3D. where yi=1 when the discriminator input is instructive intermediate features Fin, and yi=0 when the input reveals the primary features Fpr. After the discriminator’s training, we fix the discriminator and begin the auxiliary supervision training for the primary network. As shown in the bottom of Fig. 6, by constraining the score map from the auxiliary training and assigning real labels with binary cross-entropy loss, the auxiliary learning model can effectively generate appropriate features and prediction outputs with higher quality, which acts as one distribution agreement different from the pixel-wise MSE loss. The adversarial loss Ladv could be formulated as: LF adv = −1 N X i X h,w log D(Fpr)(h,w) , (6) As a result, features from the primary network can fool the discriminator by maximizing the probability of the feature or prediction similarity. Primary and Auxiliary Supervision For brevity, we use Lcls and Lreg to denote the detection loss. We first train our primary network with the loss: Lpr = Lcls + Lreg (7) Then, we train our auxiliary feature supervision with the pre-trained primary network and instructive feature generation network. To achieve end-to-end network training, within one batch training, we first train the discriminator with LF D (Equ. 5), then increase the auxiliary feature supervision (Equ. 8). LS = Lpr + Lpix + λF a LF adv (8) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6193 Monocular Method Extra Data Time(ms) AP3D (Car test) APBEV (Car test) Easy Mod. Hard Easy Mod. Hard MonoDTR (Huang et al. 2022) LiDAR 37 21.99 15.39 12.73 28.59 20.38 17.14 MonoDistill (Chong et al. 2022) LiDAR 40 22.97 16.03 13.60 31.87 22.59 19.72 DCD (Li et al. 2022) LiDAR 23.81 15.90 13.21 32.55 21.50 18.25 SGM3D (Zhou et al. 2022) Stereo 30 22.46 14.65 12.97 31.49 21.37 18.43 OPA-3D (Su et al. 2023) KITTI-Depth 40 24.60 17.05 14.25 33.54 22.53 19.22 NeurOCS (Min et al. 2023) G.T. Fore. Mask 29.89 18.94 15.90 37.27 24.49 20.89 PGD (Wang et al. 2021b) None 21 19.05 11.76 9.39 26.89 16.51 13.49 MonoDLE (Ma et al. 2021) None 40 17.23 12.26 10.29 24.79 18.89 16.00 MonoEF (Zhou et al. 2021) None 30 21.29 13.87 11.71 29.03 19.70 17.26 GUPNet (Lu et al. 2021) None 34 20.11 14.20 11.77 HomoLoss (Gu et al. 2022) None 21.75 14.94 13.07 29.60 20.68 17.81 MonoJSG (Lian, Li, and Chen 2022) None 42 24.69 16.14 13.64 32.59 21.26 18.18 MonoCon (Liu, Xue, and Wu 2022) None 26 22.50 16.46 13.95 31.12 22.10 19.00 Ours None 40 25.38 17.12 14.50 34.20 23.72 20.76 Improvements +0.69 +0.66 +0.55 +1.61 +1.62 +1.76 Table 2: Comparison on the KITTI test set. ‘Improvements’ indicates our performance gain over the previous best results without extra data. G.T. Fore. Mask denotes ground truth foreground mask (Min et al. 2023). Method Modal AP3D(Car val) APBEV (Car val) Easy Mod. Hard Easy Mod. Hard (i) MV3D (Chen et al. 2017) LiDAR 71.19 56.60 55.30 86.18 77.32 76.33 (ii) Instructive Fea. Gen. Network RGB + All LiDAR Depth 69.40 55.43 43.54 82.45 67.56 55.60 (iii) Instructive Fea. Gen. Network RGB+ Foreground LiDAR Depth 69.50 55.57 43.60 82.58 67.64 55.58 (iv) Instructive Fea. Gen. Network RGB+ AFOD (Ours) 68.39 60.47 52.24 78.39 73.05 64.33 Table 3: Evaluation of the instructive feature generation network on the KITTI validation split on ‘Car’ category. Our approach (iv) could achieve a comparable detection performance with the LiDAR-based detector (i) (Chen et al. 
2017) and obtains better overall performance compared with LiDAR-depth-related (ii, iii). where λF a denote the loss balancing hyper-parameters of the adversarial loss. Experiments Settings Dataset. We evaluate our approach on two widely used datasets: KITTI (Geiger, Lenz, and Urtasun 2012) and nuScenes (Caesar et al. 2020) benchmarks. The KITTI dataset consists of 7,481 samples for training and 7,518 for testing. Following (Reading et al. 2021), we divide training samples into a training set with 3,712 samples and a validation set with 3,769 samples. Ablation studies are all conducted on the validation split with models trained on the training split. There are three object classes (Car, Pedestrian and Cyclist) and each class is divided into three difficulty levels based on occlusion, truncation and size. The largescale dataset nuScenes (Caesar et al. 2020) contains a full 360-degree field of view provided by 6 cameras, 1 Lidar and 5 radars, which consists of 1000 driving scenes, with 700, 150 and 150 scenes for training, validation, and testing, respectively. The corresponding sequences are sampled to frames with the resolution of 1600 × 900 at 2Hz. Evaluation metric. For KITTI dataset, following prior works (Reading et al. 2021), the 3D Average Precision (AP3D) and BEV Average Precision (APBEV ) are two vital evaluation metrics. They are calculated using class-specific thresholds with 40 recall positions based on the intersectionover-union (IoU) of 2D BEV and 3D bounding boxes. The Car, Pedestrian and Cyclist categories have 0.7, 0.5, 0.5 IoU threshold. For nuScenes, we follow (Park et al. 2021) and adopt the evaluation metrics including nuScenes Detection Score (NDS) and mean Average Precision (mAP), along with two true-positive metrics ATE and AOE. Implementation details. We select monocular 3D detection method MonoDLE (Ma et al. 2021) as our base model for KITTI dataset following (Chong et al. 2022), and DD3D (Park et al. 2021) as the base model for nuScenes dataset, which both reveal one-stage center-based detection approaches. The weight performs λF m=0.9 for pixel-wise loss, and λF a =0.9 for the adversarial loss. The settings for the optimizer and batch size follow base models (Ma et al. 2021; Park et al. 2021). Evaluation of Our Framework Comparisons on KITTI dataset. In Table 2, we present the benchmark evaluation on the KITTI test split. Compared with the previous best results without extra data, our framework outperforms it with a certain margin. Furthermore, our framework realizes the inference time of 40ms, which does not introduce additional computational costs in the inference stage and is industrially implementable. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6194 Ablation AP3D@IoU=0.7 APBEV @IoU=0.7 Easy Mod. Hard Easy Mod. Hard (i) direct. 60.57 57.32 49.50 71.00 67.68 59.30 (ii) cro. 61.31 57.45 49.61 71.72 67.69 59.75 (iii) +coord. 62.03 58.54 50.10 72.39 68.72 60.83 (iv) +coord.×σ(FM(vis.)) 66.58 63.52 54.82 77.49 75.40 63.32 (v) +coord.×σ(FM(dep.)) 67.38 62.97 54.03 78.91 76.06 64.64 (vi) +coord.×σ(FM(cro.)) 73.48 67.10 56.99 83.93 78.76 69.39 Table 4: Ablation study of different instructive feature generation network designs on the KITTI validation set. ‘vis.’ denotes the visual features. ‘dep.’ denotes the object-wise annotation depth. ‘direct.’ denotes the direct association from the MLP layer with concatenation input of ‘vis.’ and ‘dep.’. ‘cro.’ denotes the our cross-attention association strategy. 
‘coord.’ denotes the preset 3D coordinates. FM and σ denotes the MLP and sigmoid operation. Feature C-P A-S AP3D@IoU=0.7 APBEV @IoU=0.7 Easy Mod. Hard Easy Mod. Hard (i) 17.45 13.66 11.68 24.97 19.33 17.01 (ii) ✓ ✓ 26.78 19.43 16.41 35.79 26.21 22.71 (iii) ✓ ✓ 24.72 18.16 15.47 34.48 24.47 21.16 (iv) ✓ ✓ ✓ 28.22 20.23 17.04 36.98 26.77 23.16 Table 5: Ablation study for auxiliary feature supervision. ‘C-P’ denotes the channel-wise projection layer for pixel-wise loss. ‘A-S’ denotes the adversarial scoring block for distribution-level loss. Comparisons on nuScenes dataset. In Table 1, we compare with monocular approaches CenterNet (Zhou, Wang, and Kr¨ahenb¨uhl 2019), FCOS3D (Wang et al. 2021a) and DD3D (Park et al. 2021) on the nuScenes dataset, where our approach outperforms the base model DD3D with a 1.3% improvement in mAP and 0.8% improvement in NDS. Evaluation of the instructive feature generation network. The generation network should achieve adequate detection accuracy to ensure reliable instructive intermediate features. As shown in Table 3, we conduct the evaluation comparison with (i) the LiDAR-based method MV3D (Chen et al. 2017), and (ii-iv) the IFGN with all LiDAR depth, foreground LiDAR depth and our AFOD as input. Our approach (iv) could achieve a comparable detection performance with the LiDAR-based detector (i) and generates clearer guidance features on the long-distance and occluded objects, i.e. the ‘Mod.’ and ‘Hard’ cases. The settings (ii) and (iii) gain higher score on the ‘Easy’ cases with denser and more refined LiDAR depth clues, but receive the mixed and ambiguous instructive features on the long-distance and occluded cases. In addition, the full-range LiDAR depth (ii) is disturbed by the background objects like buildings and road devices. Overall, our (iv) obtains better overall performance. Ablation Study In this section, we investigate the effects of each component of our framework on the KITTI validation split. Ablation study for instructive feature generation network. We explore some different designs of the instructive feature generation network, specifically for the prior coordinates with 3D embedding, as shown in Table 4. Firstly, the cross-attention association (setting (ii)) performs better compared to the direct association of concatenation and MLP (setting (i)), for the long-range dependencies consideration. Based on setting (ii), we add the fixed 3D grid coordinates, yet have a limited improvement (setting (iii)), as the fixed 3D grid is a presetting schema and lacks the contentaware and geometry-aware bias. If we insert the visual information (setting (iv)) or depth knowledge (setting (v)), as the attention weight, into the originally fixed coordinates, the performance advances. We hence inject both contents simultaneously (setting (vi)) and achieve our strongest features generation network for producing instructive intermediate features. Ablation study for auxiliary feature supervision. As shown in Table 5, we investigate our designs of auxiliary feature supervision: the channel-wise projection layer (CPL) and adversarial scoring block (ASB). Setting (ii) proves the effects of our CPL to enhance the pixel-wise guidance. Setting (iii) also improves the baseline, but it performs suboptimal results without applying the pixel-wise constraint. We hence further jointly take both supervisions, and it turns to generate convincing improvements in setting (iv). 
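For completeness, here is a hedged sketch of one ASB alternation as described around Eqs. (5) and (6): a discriminator made of linear layers applied at every spatial location produces a score map, it is updated with real labels on instructive features and fake labels on primary features, and it is then frozen while the primary branch minimises the adversarial term that pushes its features to be scored as instructive. Layer widths, the optimiser, and the toy tensors are placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScoreMapDiscriminator(nn.Module):
    """Linear layers applied at every spatial location, giving an (h, w) score map
    as in the Linear / LeakyReLU / Sigmoid stack sketched for the ASB."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.LeakyReLU(0.2),
                                 nn.Linear(hidden, 1), nn.Sigmoid())
    def forward(self, feat):                      # feat: [B, C, H, W]
        tokens = feat.flatten(2).transpose(1, 2)  # [B, HW, C]
        return self.net(tokens).squeeze(-1)       # [B, HW] score map

def asb_step(disc, d_opt, f_instructive, f_primary):
    """One alternation: update D with real/fake labels, then return the adversarial
    loss that pushes the primary features to fool the frozen D."""
    # 1) Discriminator update: instructive features -> real (1), primary -> fake (0).
    d_opt.zero_grad()
    real = disc(f_instructive.detach())
    fake = disc(f_primary.detach())
    d_loss = F.binary_cross_entropy(real, torch.ones_like(real)) + \
             F.binary_cross_entropy(fake, torch.zeros_like(fake))
    d_loss.backward()
    d_opt.step()
    # 2) Adversarial loss for the primary branch, with D fixed.
    for p in disc.parameters():
        p.requires_grad_(False)
    adv = F.binary_cross_entropy(disc(f_primary), torch.ones_like(real))
    for p in disc.parameters():
        p.requires_grad_(True)
    return adv

disc = ScoreMapDiscriminator(dim=64)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)
f_inst = torch.randn(2, 64, 24, 80)
f_pri = torch.randn(2, 64, 24, 80, requires_grad=True)
asb_step(disc, d_opt, f_inst, f_pri).backward()   # gradients flow only into f_pri
```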
Conclusion In this paper, we propose a new monocular 3D object detection framework named FD3D, which develops high-quality instructive intermediate features to conduct auxiliary feature supervision with only the image and annotation foreground object-wise depth map (AFOD) as input. To obtain representative instructive features with depth-positional cues, we develop a vision-depth association within the generation network that interacts with the AFOD with semantic features to realize long-range aware embedding. We proceed with auxiliary feature supervision from both the pixel and distribution levels. Our pipeline is shown effective and efficient on KITTI and nuScenes datasets. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6195 References Caesar, H.; Bankiti, V.; Lang, A. H.; Vora, S.; Liong, V. E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; and Beijbom, O. 2019. nuScenes: A multimodal dataset for autonomous driving. arXiv preprint arXiv:1903.11027. Caesar, H.; Bankiti, V.; Lang, A. H.; Vora, S.; Liong, V. E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; and Beijbom, O. 2020. nuScenes: A multimodal dataset for autonomous driving. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 11621–11631. IEEE. Chen, X.; Ma, H.; Wan, J.; Li, B.; and Xia, T. 2017. Multiview 3d object detection network for autonomous driving. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 1907–1915. Chen, Y.-N.; Dai, H.; and Ding, Y. 2022. Pseudo-Stereo for Monocular 3D Object Detection in Autonomous Driving. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 887–897. Chong, Z.; Ma, X.; Zhang, H.; Yue, Y.; Li, H.; Wang, Z.; and Ouyang, W. 2022. MonoDistill: Learning spatial features for monocular 3D object detection. In Int. Conf. on Learning Representations (ICLR). Flynn, J.; Neulander, I.; Philbin, J.; and Snavely, N. 2016. DeepStereo: Learning to predict new views from the world’s imagery. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 5515–5524. Fu, H.; Gong, M.; Wang, C.; Batmanghelich, K.; and Tao, D. 2018. Deep ordinal regression network for monocular depth estimation. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2002–2011. Geiger, A.; Lenz, P.; and Urtasun, R. 2012. Are we ready for autonomous driving? the KITTI vision benchmark suite. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 3354–3361. IEEE. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2020. Generative adversarial networks. Communications of the ACM, 63(11): 139–144. Gu, J.; Wu, B.; Fan, L.; Huang, J.; Cao, S.; Xiang, Z.; and Hua, X.-S. 2022. Homography Loss for Monocular 3D Object Detection. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR). Huang, K.-C.; Wu, T.-H.; Su, H.-T.; and Hsu, W. H. 2022. MonoDTR: Monocular 3D Object Detection with DepthAware Transformer. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR). Li, P.; and Zhao, H. 2021. Monocular 3d detection with geometric constraint embedding and semi-supervised training. IEEE Robotics and Automation Letters (RA-L), 6(3): 5565– 5572. Li, Y.; Chen, Y.; He, J.; and Zhang, Z. 2022. Densely Constrained Depth Estimator for Monocular 3D Object Detection. In European Conf. on Computer Vision (ECCV). Springer. Lian, Q.; Li, P.; and Chen, X. 2022. MonoJSG: Joint Semantic and Geometric Cost Volume for Monocular 3D Object Detection. In IEEE Conf. 
on Computer Vision and Pattern Recognition (CVPR), 1070–1079. Liu, S.; Davison, A.; and Johns, E. 2019. Self-supervised generalisation with meta auxiliary learning. In Conf. and Workshop on Neural Information Processing Systems (NeurIPS), 1677–1687. Liu, X.; Xue, N.; and Wu, T. 2022. Learning auxiliary monocular contexts helps monocular 3D object detection. In AAAI Conf. on Artificial Intell. (AAAI), 1810–1818. Liu, Y.; Wang, T.; Zhang, X.; and Sun, J. 2022. PETR: Position Embedding Transformation for Multi-View 3D Object Detection. In European Conf. on Computer Vision (ECCV). Liu, Z.; Wu, Z.; and T´oth, R. 2020. SMOKE: Single stage monocular 3D object detection via keypoint estimation. In IEEE Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW), 996–997. Lu, Y.; Ma, X.; Yang, L.; Zhang, T.; Liu, Y.; Chu, Q.; Yan, J.; and Ouyang, W. 2021. Geometry uncertainty projection network for monocular 3D object detection. In IEEE Int. Conf. on Computer Vision (ICCV), 3111–3121. Ma, X.; Liu, S.; Xia, Z.; Zhang, H.; Zeng, X.; and Ouyang, W. 2020. Rethinking pseudo-lidar representation. In European Conf. on Computer Vision (ECCV), 311–327. Springer. Ma, X.; Zhang, Y.; Xu, D.; Zhou, D.; Yi, S.; Li, H.; and Ouyang, W. 2021. Delving into localization errors for monocular 3D object detection. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 4721–4730. Min, Z.; Zhuang, B.; Schulter, S.; Liu, B.; Dunn, E.; and Chandraker, M. 2023. NeurOCS: Neural NOCS Supervision for Monocular 3D Object Localization. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 21404– 21414. Mordan, T.; Thome, N.; Henaff, G.; and Cord, M. 2018. Revisiting multi-task learning with rock: a deep residual auxiliary block for visual detection. In Conf. and Workshop on Neural Information Processing Systems (NeurIPS), 1317– 1329. Nabati, R.; and Qi, H. 2021. CenterFusion: Center-based radar and camera fusion for 3d object detection. In IEEE Winter Conf. on Applications of Computer Vision (WACV), 1527–1536. Park, D.; Ambrus, R.; Guizilini, V.; Li, J.; and Gaidon, A. 2021. Is pseudo-lidar needed for monocular 3D object detection? In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 3142–3152. Park, D.; Li, J.; Chen, D.; Guizilini, V.; and Gaidon, A. 2023. Depth is all you need for monocular 3d detection. In IEEE Int. Conf. on Robotics and Automation (ICRA). Peng, L.; Liu, F.; Yu, Z.; Yan, S.; Deng, D.; Yang, Z.; Liu, H.; and Cai, D. 2022. Lidar point cloud guided monocular 3d object detection. In European Conf. on Computer Vision (ECCV), 123–139. Springer. Reading, C.; Harakeh, A.; Chae, J.; and Waslander, S. L. 2021. Categorical depth distribution network for monocular 3D object detection. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 8555–8564. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6196 Su, Y.; Di, Y.; Zhai, G.; Manhardt, F.; Rambach, J.; Busam, B.; Stricker, D.; and Tombari, F. 2023. OPA-3D: Occlusionaware pixel-wise aggregation for monocular 3d object detection. IEEE Robotics and Automation Letters (RA-L), 8(3): 1327–1334. Sun, J.; Chen, L.; Xie, Y.; Zhang, S.; Jiang, Q.; Zhou, X.; and Bao, H. 2020. Disp R-CNN: Stereo 3D object detection via shape prior guided instance disparity estimation. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 10548–10557. Thomas, H.; Qi, C. R.; Deschaud, J.-E.; Marcotegui, B.; Goulette, F.; and Guibas, L. J. 2019. KPConv: Flexible and deformable convolution for point clouds. In IEEE Int. 
Conf. on Computer Vision (ICCV), 6411–6420. Touvron, H.; Bojanowski, P.; Caron, M.; Cord, M.; ElNouby, A.; Grave, E.; Izacard, G.; Joulin, A.; Synnaeve, G.; Verbeek, J.; et al. 2022. ResMLP: Feedforward networks for image classification with data-efficient training. IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI). Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Conf. and Workshop on Neural Information Processing Systems (NeurIPS), 30. Wang, T.; Zhu, X.; Pang, J.; and Lin, D. 2021a. FCOS3D: Fully convolutional one-stage monocular 3d object detection. In IEEE Int. Conf. on Computer Vision (ICCV), 913– 922. Wang, T.; Zhu, X.; Pang, J.; and Lin, D. 2021b. Probabilistic and geometric depth: Detecting objects in perspective. In Conf. on Robot Learning (CoRL). PMLR. Ye, J.; Batra, D.; Das, A.; and Wijmans, E. 2021. Auxiliary tasks and exploration enable objectgoal navigation. In IEEE Int. Conf. on Computer Vision (ICCV), 16117–16126. Yin, T.; Zhou, X.; and Krahenbuhl, P. 2021. Center-based 3d object detection and tracking. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 11784–11793. Zhang, Y.; Lu, J.; and Zhou, J. 2021. Objects are Different: Flexible monocular 3D object detection. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 3289– 3298. Zhang, Y.; Tang, H.; and Jia, K. 2018. Fine-grained visual categorization using meta-learning optimization with sample selection of auxiliary data. In European Conf. on Computer Vision (ECCV), 233–248. Zhou, T.; Brown, M.; Snavely, N.; and Lowe, D. G. 2017. Unsupervised learning of depth and ego-motion from video. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 1851–1858. Zhou, X.; Wang, D.; and Kr¨ahenb¨uhl, P. 2019. Objects as points. arXiv preprint arXiv:1904.07850. Zhou, Y.; He, Y.; Zhu, H.; Wang, C.; Li, H.; and Jiang, Q. 2021. Monocular 3D object detection: An extrinsic parameter free approach. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 7556–7566. Zhou, Z.; Du, L.; Ye, X.; Zou, Z.; Tan, X.; Zhang, L.; Xue, X.; and Feng, J. 2022. SGM3D: stereo guided monocular 3d object detection. IEEE Robotics and Automation Letters (RA-L), 7(4): 10478–10485. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6197 | 2024 | 688 |
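As a small companion to the FD3D entry above, the snippet below illustrates the AFOD pre-processing it describes: every pixel inside a projected 2D box receives the depth of that object's 3D box centre, and where boxes overlap the closer object wins. The function name, the box format, and the assumption that the 3D boxes have already been projected to 2D through the calibration are ours.

```python
import numpy as np

def build_afod(height, width, boxes_2d, center_depths):
    """Rasterise an annotation foreground object-wise depth map (AFOD).
    boxes_2d: list of (x1, y1, x2, y2) in pixels; center_depths: metres."""
    afod = np.zeros((height, width), dtype=np.float32)        # 0 = background
    order = np.argsort(center_depths)[::-1]                   # paint far objects first
    for i in order:
        x1, y1, x2, y2 = [int(round(v)) for v in boxes_2d[i]]
        x1, y1 = max(x1, 0), max(y1, 0)
        x2, y2 = min(x2, width), min(y2, height)
        if x2 > x1 and y2 > y1:
            afod[y1:y2, x1:x2] = center_depths[i]             # closer boxes overwrite
    return afod

# toy example: the nearer object (8 m) occludes the farther one (25 m) where they overlap
afod = build_afod(384, 1280, [(100, 150, 400, 350), (300, 180, 600, 340)], [25.0, 8.0])
print(afod[200, 350])   # 8.0 in the overlap region
```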
18,504 | Attention Disturbance and Dual-Path Constraint Network for Occluded Person Re-identification Jiaer Xia1*, Lei Tan1*, Pingyang Dai1†, Mingbo Zhao2, Yongjian Wu3, Liujuan Cao1 1Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China 2Donghua University, Shanghai, China 3Tencent Youtu Lab, Shanghai, China {xiajiaer, tanlei}@stu.xmu.edu.cn, [email protected], [email protected], [email protected], [email protected] Abstract Occluded person re-identification (Re-ID) aims to address the potential occlusion problem when matching occluded or holistic pedestrians from different camera views. Many methods use the background as artificial occlusion and rely on attention networks to exclude noisy interference. However, the significant discrepancy between simple background occlusion and realistic occlusion can negatively impact the generalization of the network. To address this issue, we propose a novel transformer-based Attention Disturbance and DualPath Constraint Network (ADP) to enhance the generalization of attention networks. Firstly, to imitate real-world obstacles, we introduce an Attention Disturbance Mask (ADM) module that generates an offensive noise, which can distract attention like a realistic occluder, as a more complex form of occlusion. Secondly, to fully exploit these complex occluded images, we develop a Dual-Path Constraint Module (DPC) that can obtain preferable supervision information from holistic images through dual-path interaction. With our proposed method, the network can effectively circumvent a wide variety of occlusions using the basic ViT baseline. Comprehensive experimental evaluations conducted on person re-ID benchmarks demonstrate the superiority of ADP over stateof-the-art methods. Introduction Person re-identification (Re-ID) refers to the process of matching pedestrian images captured by non-overlapping cameras. This technique has gained popularity in recent years as surveillance systems have become more advanced and widespread. With the rapid development of deep learning technology (He et al. 2016; Vaswani et al. 2017; Dosovitskiy et al. 2021), Re-ID has also achieved remarkable performance (Luo et al. 2019; Zhai et al. 2020; Zheng et al. 2015; Eom and Ham 2019; Chen et al. 2018; Wu et al. 2016) by meriting from its powerful feature extraction capabilities. However, most existing methods assume that the pedestrians in retrieved images are unobstructed, ignoring the possible occlusion problems that can occur in real-world sce*Equal contribution. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Significant discrepancy Functional Similar Train on Vanilla occlusion Test on Real occlusion Train on ADM occlusion Test on Vanilla occlusion Test on Real occlusion (a) Baseline (b) ADP Figure 1: Visualization of attention to baseline and proposed ADP. (a) The baseline trained with the assistance of background occlusion failed to avoid the realistic occlusion in the testing set. (b) The ADP trained by the proposed realitysimilar occlusion ADM performs well on both artificial and real occlusion. narios. Consequently, these methods significantly degrade when dealing with occluded images. While recent endeavors have facilitated person Re-ID under occlusion conditions (Yan et al. 2021; Wang et al. 2022b; Tan et al. 2022; Li et al. 2021; Shi et al. 2022; Jia et al. 
2022), two main problems associated with occlusions still need to be addressed. Firstly, the presence of obstacles causes some parts of the human body to disappear, leading to missing and misaligned extracted features. Traditional Re-ID methods cannot perform valid retrievals when some discriminative parts are obscured. Secondly, occlusions introduce noise into the extracted features, polluting the final feature representation of each image. When dealing with these polluted features, different identities may show high similarity due to the same obstacle, resulting in incorrect matches.

To address the aforementioned problems, some methods (Wang et al. 2020; Gao et al. 2020; Miao et al. 2019; Miao, Wu, and Yang 2022) use additional trained networks, such as human parsing and keypoint estimation, to align different human parts. With the aid of these extra networks, the occluded parts can be repaired by disseminating information from the visible parts. However, these approaches are severely limited due to the domain gap between the pre-trained network and the Re-ID dataset. Recently, as attention mechanisms have been explored for various vision tasks, they have also been adopted for occluded person Re-ID to eliminate the interference of noisy information (Zhao et al. 2021; Sun et al. 2019; He et al. 2021). During the process of attention learning, many data augmentation strategies (Chen et al. 2021; Wang et al. 2022b; Zhuo et al. 2018) generate artificial occlusion, which directs the attention to the person and forces it to avoid occluded regions. Currently, the most widely used artificial occlusion methods are random erasing (Zhong et al. 2020) or using the background as occlusion (Chen et al. 2021). Nevertheless, pre-trained attention networks are inherently more likely to focus on the semantically rich foreground than on the background. Therefore, the network will inevitably tend to ignore the occlusion constituted by the background, which results in a lack of generalization. To illustrate this point, we use the background as artificial occlusion on the ViT baseline of TransReID (He et al. 2021) and visualize the attention for both the training and testing sets in Fig. 1(a). The results demonstrate that while the baseline can avoid artificial occlusions well in the training set, its attention is still disturbed in the testing set due to the significant discrepancy between artificial occlusion and actual occlusion.

In this paper, we propose a solution to the challenges mentioned above by introducing an Attention Disturbance Mask (ADM) module that simulates real-world occlusions with greater fidelity. The primary way in which occlusions disrupt models is by impeding attention. However, obtaining enough occluded data to enable the model to avoid such disruptions is difficult. To surmount this problem, we utilize an attack-oriented methodology that produces noise masks with the capacity to simulate the interference effects of actual obstructions at the feature level. This enables us to construct occlusions that mirror the effects of those encountered in real-world scenarios. As illustrated in Figure 1(b), the proposed Attention Disturbance Mask (ADM) performs a similar role to real-world occlusions by introducing disruptions to the neural network's attention. This finding directly verifies the capability of our designed ADM to faithfully emulate occlusions at the feature level.
By training the network on such occlusions that closely resemble those encountered in real-world scenarios, we can effectively enhance its robustness against occlusions during testing. However, handling complex occlusions directly can pose optimization challenges for the network. To address this issue, we propose the Dual-Path Constraint Module (DPC) to handle both holistic and occluded images simultaneously, thus using holistic features as an extra supervisor to guide attention more towards the target pedestrian. Notably, the network parameters in the proposed DPC are shared by both paths, while the individual classifiers learn information about holistic and occluded images separately. The main contributions of our method can be summarized as follows:
• We first introduce a novel attack-based augmentation strategy called the Attention Disturbance Mask (ADM), which simulates real occlusion at the feature level and effectively diverts attention away from actual occlusions during testing.
• We propose a Dual-Path Constraint Module (DPC) that utilizes dual-path interactions to encourage the network to learn a more generalized attention mechanism. DPC is compatible with existing occlusion-based data augmentation methods and can provide significant performance improvements.
• The two proposed methods are both used to assist the training of the baseline and can be discarded in the inference stage, making them easily compatible with many existing methods and indicating the efficiency and wide applicability of our approach.
• Trained with our proposed ADP, the transformer baseline achieves new state-of-the-art performance on multiple benchmark datasets, e.g., 74.5% Rank-1 on the Occluded-Duke dataset.

Related Work
Two main issues of occluded person Re-ID are the missing information and the noisy information caused by various obstacles. Some methods have been proposed to address the missing information issue by attempting to remove the obstacle and reconstruct the missing visible human parts (Hou et al. 2019, 2022; Wang et al. 2020). Hou et al. (Hou et al. 2019) designed an auto-encoder to generate contents of the occluded parts at the pixel level. Wang et al. (Wang et al. 2020) utilize an additional pose estimation network to detect the keypoints and then find the high-order relation and human-topology information. Moreover, an adaptive direction graph convolutional layer is proposed to pass relation information from visible to occluded nodes. Hou et al. (Hou et al. 2022) divide the feature map into six regions and predict the features of occluded regions by adopting the long-range spatial contexts from non-occluded regions. Instead of reconstructing the missing parts, other methods choose to focus more on the visible parts. With the help of the attention mechanism, these methods generate attention maps in which occluded regions are given smaller weights to discard the noisy information. To better discover the occluded part, several data augmentation strategies are adopted. Zhuo et al. (Zhuo et al. 2018) design an occlusion simulator that uses a random patch from the background as the artificial occlusion to cover the full-body person image. Chen et al. (Chen et al. 2021) also use the background information and paste it at eight predefined locations as data augmentation. These artificial occlusion data augmentations can provide labels for the location of occlusions. This extra information helps train attention mechanisms to exclude noisy occlusions.
However, networks trained on artificial occlusions often cannot handle realistic occlusions well, due to the significant discrepancy between artificial and realistic occlusion. The proposed ADP method takes a different approach: it generates more realistic occlusions while also providing extra supervision from holistic features. The more realistic occlusions lead to more robust and generalized networks.

Figure 2: The overview of the proposed Attention Disturbance and Dual-Path Constraint Network (ADP). To create the corresponding occlusion images, the transformed background patch is used as the carrier of the Attention Disturbance Mask (ADM) and covers a random region of the original image. Then the Dual-Path Constraint Module (DPC) simultaneously deals with holistic and occluded images. In the Multi-Head Attention (MHA) stage, ADM maximizes the similarity between the class token and the masked patches to optimize the mask.

Proposed Method
Overview
In this section, we introduce the proposed Attention Disturbance and Dual-Path Constraint Network (ADP). The architecture of ADP is depicted in Fig. 2, where a pre-trained ViT (Dosovitskiy et al. 2021) is utilized as the backbone to extract image features. To generate occluded images, we use an artificial occlusion consisting of a background patch and an attention disturbance mask. Both holistic and occluded images are fed into a parameter-shared transformer to extract their respective features. Similar to TransReID (He et al. 2021), we divide the input images, which have a resolution of H × W, into N patches and then convert them into patch embeddings via a linear projection operation with a patch size of P and a stride size of S:
$$N = h \times w = \frac{H + S - P}{S} \times \frac{W + S - P}{S}. \quad (1)$$
Meanwhile, a learnable class embedding token denoted as $x_{cls}$ is attached to the input sequence to aggregate the image information during the process and act as the global feature map in the output stage. Besides, a learnable position embedding $P \in \mathbb{R}^{(N+1)\times d}$ is added to append spatial information to the transformer. The complete input sequence embeddings can be formulated as:
$$Z_0 = \left[ x_{cls};\, \mathcal{F}(x_p^1);\, \mathcal{F}(x_p^2);\, \cdots;\, \mathcal{F}(x_p^N) \right] + P, \quad (2)$$
where $Z_0$ represents the input sequence embeddings, $\mathcal{F}$ is the linear projection operation mapping the input image $x \in \mathbb{R}^{H \times W \times C}$ into the patch embedding $f_{pe} \in \mathbb{R}^{N \times d}$, $N$ is the number of flattened $h \times w$ patches, and $d$ denotes the embedding dimension. Then, the patch embedding $f_{pe}$ is combined with the class token as the input feature map of the transformer blocks.
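To make the tokenization step concrete, the sketch below implements the patch embedding of Eq. (1)–(2): overlapping patches produced by a strided convolution, a prepended class token, and a learnable position embedding. This is a minimal illustration in PyTorch under our own assumptions; the module name `PatchEmbedding` and the default hyper-parameters are ours and are not taken from any released implementation of ADP.

```python
import torch
import torch.nn as nn


class PatchEmbedding(nn.Module):
    """Overlapping patch embedding with class token and position embedding (Eq. 1-2)."""

    def __init__(self, img_size=(256, 128), patch_size=16, stride=16,
                 in_chans=3, embed_dim=768):
        super().__init__()
        H, W = img_size
        # Number of patches per spatial dimension, N = h * w (Eq. 1).
        self.h = (H + stride - patch_size) // stride
        self.w = (W + stride - patch_size) // stride
        self.num_patches = self.h * self.w
        # Linear projection F implemented as a strided convolution.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=stride)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(
            torch.zeros(1, self.num_patches + 1, embed_dim))

    def forward(self, x):
        B = x.shape[0]
        patches = self.proj(x).flatten(2).transpose(1, 2)   # (B, N, d)
        cls = self.cls_token.expand(B, -1, -1)               # (B, 1, d)
        z0 = torch.cat([cls, patches], dim=1) + self.pos_embed
        return z0                                             # (B, N + 1, d)


if __name__ == "__main__":
    x = torch.randn(2, 3, 256, 128)
    # With the default stride 16: h = 16, w = 8, so the output is (2, 129, 768).
    # The paper's sliding-window setting simply uses a smaller stride S.
    print(PatchEmbedding()(x).shape)
```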
Attention Disturbance Mask
The main idea of ADM is to generate a mask that makes the network's attention accidentally focus on the occlusion, which simulates the same effect as real-world occlusion. However, directly generating a mask that draws attention to a blank area is difficult. Therefore, the background information is adopted as a carrier of the mask to provide domain information and simplify the mask optimization process. To get the background region of the input image, we randomly select a corner patch of the image. Then the cropped background patch $P_b$ is resized so that its area is $s_o = r_o \times s$, where $r_o \sim U(0.1, 0.5)$ and $s = H \times W$. The height and width of $P_b$ are $H_b = \sqrt{s_o \times r_s}$ and $W_b = \sqrt{s_o / r_s}$, with $r_s \sim U(0.3, 3.3)$. The processed background patch is pasted at an arbitrary position in the image, and a mask $M \in \mathbb{R}^{H \times W}$ corresponding to the pasting position is saved for overlaying the attention disturbance mask.

To generate the ADM, we first initialize a random learnable parameter, which has the same shape as the input image, and superimpose it on the occlusion position according to $M$. During training, we dynamically update the ADM based on the weight matrix in the multi-head attention stage of each transformer block. The attention operation first calculates the dot-product between the queries Q and the keys K to measure their similarity, and uses this similarity to obtain the weight matrix (Vaswani et al. 2017). The whole attention process can be formulated as follows:
$$\mathrm{Attention}(Q, K, V) = WV, \quad (3)$$
$$W = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{c/N_{H}}}\right), \quad (4)$$
where $N_H$ denotes the number of heads in multi-head attention. Therefore, we can disturb the attention by maximizing the similarity between the class token and the occluded region, which increases the occluded region's weight and makes attention mistakenly focus on it. To be specific, in each transformer block we have the class token embedding $x^{i}_{cls}$ and the patch embedding $x^{i}_{p}$, where $i$ represents the i-th block of the transformer. Then the disturbance loss can be formulated as:
$$L_{adm} = \sum_{i=1}^{N} \sum_{n} \mathrm{softmax}\!\left(\frac{x^{i}_{cls}\, (x^{i}_{p})^{T}}{\sqrt{c/N_{H}}}\right) \odot \tilde{M}, \quad (5)$$
where $\tilde{M}$ represents the occlusion mask $M$ resized to the patch resolution and $\odot$ denotes the element-wise product. Then we adopt an additional optimizer to update the ADM alone based on the reversed gradient of the disturbance loss.
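The following sketch illustrates how such an ADM occlusion could be generated and scored, combining the background-carrier paste of this subsection with the disturbance loss of Eq. (5). It is a simplified, single-image reading of the description rather than the authors' implementation: the corner-crop size, the function names (`make_adm_occlusion`, `adm_disturbance_loss`), and the assumption that the per-block class-token attention rows are collected by the caller are all ours.

```python
import torch
import torch.nn.functional as F


def make_adm_occlusion(img, adm_noise, area_range=(0.1, 0.5), ratio_range=(0.3, 3.3)):
    """Paste a resized background corner patch plus the learnable ADM noise onto `img`.

    Returns the occluded image and a binary mask marking the pasted region.
    `img` and `adm_noise` are (C, H, W) tensors; the corner size is an assumption.
    """
    C, H, W = img.shape
    r_o = torch.empty(1).uniform_(*area_range).item()       # area ratio r_o
    r_s = torch.empty(1).uniform_(*ratio_range).item()      # aspect ratio r_s
    s_o = r_o * H * W
    h_b = min(H, max(1, int((s_o * r_s) ** 0.5)))
    w_b = min(W, max(1, int((s_o / r_s) ** 0.5)))
    corner = img[:, :H // 4, :W // 4]                        # background carrier
    patch = F.interpolate(corner[None], size=(h_b, w_b),
                          mode="bilinear", align_corners=False)[0]
    top = torch.randint(0, H - h_b + 1, (1,)).item()
    left = torch.randint(0, W - w_b + 1, (1,)).item()
    occluded = img.clone()
    occluded[:, top:top + h_b, left:left + w_b] = (
        patch + adm_noise[:, top:top + h_b, left:left + w_b])
    mask = torch.zeros(1, H, W)
    mask[:, top:top + h_b, left:left + w_b] = 1.0
    return occluded, mask


def adm_disturbance_loss(cls_to_patch_attn, patch_mask):
    """Eq. (5): accumulate the class-token attention that falls on masked patches.

    `cls_to_patch_attn`: one (B, N) softmax-normalised tensor per transformer block.
    `patch_mask`: the occlusion mask resized to patch resolution, flattened to (B, N).
    """
    loss = 0.0
    for attn in cls_to_patch_attn:
        loss = loss + (attn * patch_mask).sum(dim=1).mean()
    return loss


# In our reading, the separate ADM optimizer performs gradient ascent on Eq. (5),
# e.g.: adm_opt.zero_grad(); (-adm_disturbance_loss(attn_rows, m)).backward(); adm_opt.step()
```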
Dual-Path Constraint Module
By generating a corresponding occluded image for each input image, we obtain holistic-occluded paired data, which makes it possible to exploit the holistic image as additional supervision for occluded images and helps deal with complex occlusion cases. We separately use an identity loss and a metric loss in both the holistic and occluded paths to ensure that the features extracted from the respective images are reliable and discriminative. Meanwhile, a global metric loss and an information-passing classifier are adopted to convey information between the two paths. After extracting the features of the holistic and occluded images, we obtain the final class token of each path as the feature map, denoted as $x_h$ and $x_o$. In the holistic path, we use the cross-entropy loss as the ID loss and the triplet loss as the metric loss. The ID loss $L_{id\text{-}h}$ and metric loss $L_{tri\text{-}h}$ are given as follows:
$$L_{id\text{-}h} = -\frac{1}{B} \sum_{i=1}^{B} \log \frac{e^{(W_h^{y_i})^{T} x_h^{i}}}{\sum_{j=1}^{C} e^{(W_h^{y_j})^{T} x_h^{i}}}, \quad (6)$$
$$L_{tri\text{-}h} = \left[ \alpha + \lVert f_a - f_p \rVert_2^2 - \lVert f_a - f_n \rVert_2^2 \right]_{+}, \quad (7)$$
where $B$ and $C$ in the ID loss refer to the batch size and the number of classes, and $W_h$ represents the weight of the holistic classifier. $f_a$, $f_p$, and $f_n$ in the triplet loss refer to the anchor, positive, and negative features obtained with online hard mining (Schroff, Kalenichenko, and Philbin 2015), and $\alpha$ is the margin. The loss of the holistic path is:
$$L_h = L_{id\text{-}h} + L_{tri\text{-}h}. \quad (8)$$
In the occluded path, the existence of occlusion weakens the identity information and blurs the inter-class discrepancy when different identities share the same occlusion. The widely used softmax loss cannot achieve sufficient intra-class compactness under such difficult conditions. So we adopt an extra angular margin in the original softmax loss to increase intra-class compactness and inter-class discrepancy (Deng et al. 2019; Tan et al. 2022). The ID loss $L_{id\text{-}o}$ with the extra margin can be represented as:
$$L_{id\text{-}o} = -\frac{1}{B} \sum_{i=1}^{B} \log \frac{e^{s(\theta_i + m)}}{e^{s(\theta_i + m)} + \sum_{j=1, j \neq y_i}^{C} e^{s\theta_j}}, \quad \theta_i = (W_o^{y_i})^{T} x_o^{i}, \;\; \theta_j = (W_o^{y_j})^{T} x_o^{j}, \quad (9)$$
where $m$ denotes the angular margin and $s$ is the scale hyper-parameter. The metric loss $L_{tri\text{-}o}$ is also adopted, the same as in the holistic path. The loss of the occluded path is:
$$L_o = L_{id\text{-}o} + L_{tri\text{-}o}. \quad (10)$$
To connect the holistic and occluded paths, we adopt a global triplet loss $L_{tri}$ to close the distance between the two paths. Furthermore, since the classifier can be regarded as a prototype center of each identity, we use it as an anchor of the holistic identity feature to pull the same identity in the occluded path closer and mitigate the gap between them. Benefiting from the asymmetric classifier structure of the two paths, the interaction between them brings more information and provides stronger supervision. Specifically, we clone the parameters of the classifier in the holistic path as $\hat{W}_h$ and calculate the similarity with the occluded features as the interaction loss to eliminate the effect of occlusion. The interaction loss $L_{itr}$ is given as:
$$L_{itr} = -\frac{1}{B} \sum_{i=1}^{B} \log \frac{e^{(\hat{W}_h^{y_i})^{T} x_o^{i}}}{\sum_{j=1}^{C} e^{(\hat{W}_h^{y_j})^{T} x_o^{i}}}. \quad (11)$$
The whole loss function of the Dual-Path Constraint Module can be summarized as:
$$L_{dpc} = L_h + L_o + L_{tri} + \lambda L_{itr}, \quad (12)$$
where $\lambda$ is the coefficient of $L_{itr}$. In the testing phase, only the pure ViT baseline is used to extract feature maps without any artificial occlusion, which makes our network simple and efficient to implement.
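The sketch below assembles the DPC objective of Eq. (6)–(12) in PyTorch. It is a hedged illustration, not the authors' code: the batch-hard triplet variant, the ArcFace-style angular-margin formulation (following the Deng et al. 2019 reference), the bias-free linear classifiers, and the use of `detach()` to emulate cloning the holistic classifier weights are all assumptions on our part.

```python
import torch
import torch.nn.functional as F


def batch_hard_triplet(feats, labels, margin=0.3):
    """Triplet loss with online hard mining (Eq. 7), batch-hard variant."""
    dist = torch.cdist(feats, feats)                          # (B, B) pairwise L2
    same = labels[:, None].eq(labels[None, :])
    hardest_pos = (dist * same.float()).max(dim=1).values
    hardest_neg = (dist + same.float() * 1e6).min(dim=1).values
    return F.relu(margin + hardest_pos - hardest_neg).mean()


def arcface_logits(feats, weight, labels, m=0.3, s=30.0):
    """Additive angular-margin logits (Eq. 9), in the ArcFace style."""
    cos = F.linear(F.normalize(feats), F.normalize(weight)).clamp(-1 + 1e-7, 1 - 1e-7)
    theta = torch.acos(cos)
    target = F.one_hot(labels, cos.size(1)).bool()
    logits = torch.where(target, torch.cos(theta + m), cos)
    return s * logits


def dpc_loss(x_h, x_o, labels, clf_h, clf_o, lam=0.1):
    """Dual-Path Constraint objective (Eq. 12) for class tokens x_h, x_o of shape (B, d)."""
    # Holistic path: softmax ID loss + triplet loss (Eq. 6-8).
    l_h = F.cross_entropy(clf_h(x_h), labels) + batch_hard_triplet(x_h, labels)
    # Occluded path: angular-margin ID loss + triplet loss (Eq. 9-10).
    l_o = F.cross_entropy(arcface_logits(x_o, clf_o.weight, labels), labels) \
        + batch_hard_triplet(x_o, labels)
    # Global triplet over both paths pulls the two feature spaces together.
    l_tri = batch_hard_triplet(torch.cat([x_h, x_o]), labels.repeat(2))
    # Interaction loss (Eq. 11): occluded features scored by a frozen copy of
    # the holistic classifier weights.
    l_itr = F.cross_entropy(F.linear(x_o, clf_h.weight.detach()), labels)
    return l_h + l_o + l_tri + lam * l_itr
```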
Experiment
Datasets and Evaluation Setting
To validate the effectiveness of our proposed method, we perform extensive experiments on publicly available Re-ID datasets, including both occluded (Miao et al. 2019; Zhuo et al. 2018) and holistic (Zheng et al. 2015; Zheng, Zheng, and Yang 2017; Ristani et al. 2016) datasets.
Occluded-Duke (Miao et al. 2019) is a large-scale dataset selected from DukeMTMC for occluded person re-identification. It consists of 15,618 training images of 702 people, while the query and gallery sets contain 2,210 testing images of 519 people and 17,661 images of 1,110 persons, respectively. Until now, Occluded-Duke is still the most challenging dataset for occluded Re-ID due to the scale of occlusion.

Methods | Occluded-Duke R-1 / mAP | Occluded-REID R-1 / mAP
PCB (Sun et al. 2018) | 42.6 / 33.7 | 41.3 / 38.9
DSR (He et al. 2018) | 40.8 / 30.4 | 72.8 / 62.8
FPR (He et al. 2019) | – / – | 78.3 / 68.0
Ad-Occ (Huang et al. 2018) | 44.5 / 32.2 | – / –
PVPM (Gao et al. 2020) | 47.0 / 37.7 | 66.8 / 59.5
GASM (He and Liu 2020) | – / – | 74.5 / 65.6
HOReID (Wang et al. 2020) | 55.1 / 43.8 | 80.3 / 70.2
OAMN (Chen et al. 2021) | 62.6 / 46.1 | – / –
Part-Label (Yang et al. 2021) | 62.2 / 46.3 | 81.0 / 71.0
ISP (Zhu et al. 2020) | 62.8 / 52.3 | – / –
PRE-Net (Yan et al. 2023) | 68.3 / 55.2 | – / –
CAAO (Zhao et al. 2023) | 68.5 / 59.5 | 87.1 / 83.4
PAT (Li et al. 2021) | 64.5 / 53.6 | 81.6 / 72.1
TransReID (He et al. 2021) | 64.2 / 55.7 | – / –
PFD (Wang et al. 2022a) | 67.7 / 60.1 | 79.8 / 81.3
FED (Wang et al. 2022b) | 68.1 / 56.4 | 86.3 / 79.3
SAP (Jia et al. 2023) | 70.0 / 62.2 | 83.0 / 76.8
ADP (Ours) | 72.2 / 60.6 | 88.2 / 82.0
TransReID* (He et al. 2021) | 66.4 / 59.2 | – / –
PFD* (Wang et al. 2022a) | 69.5 / 61.8 | 81.5 / 83.0
DPM* (Tan et al. 2022) | 71.4 / 61.8 | 85.5 / 79.7
ADP (Ours)* | 74.5 / 63.8 | 89.2 / 85.1
Table 1: Comparison with state-of-the-art methods on Occluded-Duke and Occluded-REID. * indicates that the backbone has a sliding-window setting and a smaller stride.

Occluded-REID (Zhuo et al. 2018) is an occluded person dataset captured by mobile cameras. A total of 2,000 images were captured from 200 individuals, each consisting of five full-body images and five occluded images. Following the evaluation protocol of previous works (Gao et al. 2020; Wang et al. 2020), we train the model on the training set of Market-1501 (Zheng et al. 2015), while Occluded-REID is used only as a test set.
Market-1501 (Zheng et al. 2015) is a widely used holistic Re-ID dataset captured from 6 cameras. The training set contains 12,936 images of 751 people, while the query and gallery sets contain 3,368 images of 750 people and 19,732 images of 750 people, respectively.
DukeMTMC-reID (Zheng, Zheng, and Yang 2017; Ristani et al. 2016) contains 36,411 images of 1,812 persons captured by 8 cameras, with 16,522 images of 702 identities used as the training set, and 2,228 and 17,661 images of people who do not appear in the training set used as the query and gallery images, respectively.
Evaluation Protocol. For a fair comparison with other methods, we adopt the widely used Cumulative Matching Characteristic (CMC) and mean Average Precision (mAP) as evaluation metrics. All experiments are performed in the single-query setting.
Implementation details. We adopt the ViT (Dosovitskiy et al. 2021) pre-trained on ImageNet (Deng et al. 2009) as our backbone and use 12 transformer blocks with 8 heads for multi-head attention. The number of channels is set to 768.

Methods | Market-1501 R-1 / mAP | DukeMTMC R-1 / mAP
PCB (Sun et al. 2018) | 92.3 / 77.4 | 81.8 / 66.1
ISP (Zhu et al. 2020) | 95.3 / 88.6 | 89.6 / 80.0
BOT (Luo et al. 2019) | 94.1 / 85.7 | 86.4 / 76.4
DSR (He et al. 2018) | 50.7 / 70.0 | 58.8 / 67.2
STNReID (Luo et al. 2020) | 66.7 / 80.3 | 54.6 / 71.3
VPM (Sun et al. 2019) | 93.0 / 80.8 | 83.6 / 72.6
HOReID (Wang et al. 2020) | 94.2 / 84.9 | 86.9 / 75.6
OAMN (Chen et al. 2021) | 93.2 / 79.8 | 86.3 / 72.6
FPR (He et al. 2019) | 95.4 / 86.6 | 88.6 / 78.4
PAT (Li et al. 2021) | 95.4 / 88.0 | 88.8 / 78.2
FED (Wang et al. 2022b) | 95.0 / 86.3 | 89.4 / 78.0
TransReID* (He et al. 2021) | 95.2 / 88.9 | 90.7 / 82.0
PRE-Net (Yan et al. 2023) | 95.3 / 86.5 | 89.3 / 77.8
CAAO (Zhao et al. 2023) | 95.3 / 88.0 | 89.8 / 80.9
DPM* (Tan et al. 2022) | 95.5 / 89.7 | 91.0 / 82.6
PFD* (Wang et al. 2022a) | 95.5 / 89.7 | 91.2 / 83.2
SAP (Jia et al. 2023) | 96.0 / 90.5 | – / –
ADP (Ours)* | 95.6 / 89.5 | 91.2 / 83.1
Table 2: Comparison with state-of-the-art methods on Market-1501 and DukeMTMC-reID. * indicates that the backbone has a sliding-window setting and a smaller stride.

The input images are resized to 256 × 128 and augmented by the commonly used random horizontal flipping, padding, and random cropping. During the training phase, the batch size is set to 64 with 16 identities. We utilize SGD as the optimizer, with an initial learning rate of 0.004 and a cosine learning rate decay. The margin of each triplet loss is set to 0.3. The hyper-parameters m and s in Eq. (9) are set to 0.3 and 30, respectively, while λ in Eq. (12) is 0.1.

Comparison with State-of-the-art Methods
Result on Occluded Datasets. We compare our ADP with existing state-of-the-art (SOTA) methods on two occluded datasets, and the results are shown in Table 1. The comparison methods can be divided into CNN-based methods and transformer-based methods. As can be seen from Table 1, transformer-based methods outperform the CNN-based methods.
This improvement reaches approximately 15%, which demonstrates that the attention mechanism is beneficial for occlusion tasks. For example, on the most challenging Occluded-Duke dataset, our proposed ADP achieves 72.2% Rank-1. Furthermore, with a small-step sliding-window setting, the proposed ADP* further achieves 74.5% Rank-1 and 63.8% mAP, exceeding the transformer-based SOTA method DPM (Tan et al. 2022) by +3.1% Rank-1 and +2.0% mAP. On the Occluded-REID dataset, our ADP and ADP* also consistently outperform current SOTAs.
Result on Holistic Datasets. We also evaluate our proposed method on holistic person Re-ID datasets, including Market-1501 and DukeMTMC-reID, and compare it with state-of-the-art methods in three categories, i.e., holistic Re-ID methods (Zhu et al. 2020; Luo et al. 2019; Sun et al. 2018), partial Re-ID methods (Luo et al. 2020; Sun et al. 2019), and occluded Re-ID methods (Wang et al. 2020; Chen et al. 2021; He et al. 2019; Li et al. 2021; Wang et al. 2022b; He et al. 2021; Tan et al. 2022; Wang et al. 2022a; Jia et al. 2023; Yan et al. 2023; Zhao et al. 2023). The results are shown in Table 2. Though designed for occlusion problems, our proposed method achieves comparable performance on holistic datasets. For example, the proposed ADP achieves +0.3%/+1.6% improvement in Rank-1 and +0.9%/+3.1% in mAP on the Market-1501 and DukeMTMC-reID datasets, respectively, compared with the state-of-the-art method ISP (Zhu et al. 2020). Our ADP also obtains +2.6%/+7.6% improvement in Rank-1 and +8.7%/+10.5% in mAP compared with the SOTA partial Re-ID method VPM (Sun et al. 2019).

Figure 3: Visualization of the feature distribution on the Occluded-Duke dataset. Circles denote the features of images while the colors represent different identities. (a) Baseline refers to the model trained on images without extra occlusions. (b) The middle plot shows the results of the model trained on images occluded by the background. (c) Compared to the other two models, the model trained with our ADM can avoid the influence of obstacles well.

Method | R-1 | R-5 | R-10 | mAP
baseline | 59.7 | 75.3 | 80.7 | 49.8
+ADM | 66.2 | 81.6 | 86.3 | 57.7
+DPC | 72.2 | 85.1 | 88.0 | 60.6
baseline* | 63.2 | 78.8 | 83.6 | 53.3
+ADM | 69.8 | 82.8 | 87.5 | 60.3
+DPC | 74.5 | 86.4 | 89.6 | 63.8
Table 3: Ablation study of each proposed module in ADP on the Occluded-Duke dataset. * indicates that the backbone has a sliding-window setting and a smaller stride.

Ablation Studies
In this section, we conduct ablation studies on the Occluded-Duke dataset to analyze the influence of each module of the proposed ADP method. In our study, the baseline method adopts ViT as the backbone of the network, which is trained with the original softmax loss and triplet loss without any artificial occlusion. The results of the ablation studies are given in Table 3.

Method | R-1 | R-5 | R-10 | mAP
baseline | 59.7 | 75.3 | 80.7 | 49.8
+ADM | 66.2 | 81.6 | 86.3 | 57.7
+AM | 69.5 | 82.2 | 85.9 | 58.8
+DP | 70.5 | 83.4 | 87.0 | 59.3
+Litr | 71.8 | 83.3 | 87.1 | 59.7
+Ltri (full) | 72.2 | 85.1 | 88.0 | 60.6
Table 4: Ablation study of the dual-path loss used in DPC on the Occluded-Duke dataset. AM denotes using a shared angular softmax with a single-path structure, while DP represents the asymmetric dual-path structure.
From the results, we can observe that training with images occluded by the ADM can significantly improve the model performance, with Rank-1 increased by +6.5% and mAP by +7.9% over the baseline. Meanwhile, this improvement reaches +6.6% in Rank-1 and +7.0% in mAP over baseline*, which only has a smaller stride. Besides, with the assistance of DPC, the performance further increases from 66.2% to 72.2% in Rank-1 and from 57.7% to 60.6% in mAP over the baseline, and from 69.8% to 74.5% in Rank-1 and from 60.3% to 63.8% in mAP over baseline*.
We next conduct an ablation test to evaluate the influence of the structure and loss functions in the DPC module on the Occluded-Duke dataset. The results are given in Table 4, which shows the effectiveness of the proposed dual-path structure and the losses used for connecting the two paths. Specifically, with the single-path structure, the adopted angular margin softmax does improve the performance, but it is not suitable for dealing with holistic images and is prone to overfitting. In contrast, when we adopt the dual-path structure and separate the holistic and occluded images with asymmetric classifiers, the performance increases by +1.0% in Rank-1 and +0.5% in mAP. Meanwhile, benefiting from the proposed dual-path structure, we can add additional connections between the two paths to better leverage the advantages of different types of data. As a result, Litr and Ltri improve the Rank-1 performance by +1.3%/+0.4% and the mAP by +0.4%/+0.9%, respectively.

Index | RE | BG | ADM | DPC | Rank-1 | mAP
1 | | | | | 59.7 | 49.8
2 | ✓ | | | | 61.1 | 53.8
3 | | ✓ | | | 64.8 | 56.5
4 | | | ✓ | | 66.2 | 57.7
5 | ✓ | | | ✓ | 66.7 | 56.5
6 | | ✓ | | ✓ | 70.4 | 58.4
7 | | | ✓ | ✓ | 72.2 | 60.6
Table 5: Comparison with previous occlusion strategies. RE indicates the random erasing method, while BG denotes the occlusion strategy using the background.

Discussions
Effectiveness of ADM occlusion. To better demonstrate the advantages of ADM occlusion, we compare various occlusion schemes, including random erasing (Zhong et al. 2020) and directly using the background as occlusion (Chen et al. 2021). The results are shown in Table 5. From index-2 to index-4, ADM exhibits a prominent performance improvement over conventional occlusion schemes. In detail, compared with random erasing, the performance is significantly increased by +5.1% in Rank-1 and +3.9% in mAP; compared with the background occlusion, ADM also achieves a +1.4% increase in Rank-1 and +1.2% in mAP.
Meanwhile, we further visualize the distributions of features extracted by the model trained with different occlusion strategies. The results are shown in Fig. 3, where the circles denote the features of images randomly selected from the testing set of the Occluded-Duke dataset and visualized via t-SNE (Van der Maaten and Hinton 2008). In detail, Fig. 3(a) illustrates the distribution of features extracted by the baseline. It is evident that there are numerous outlier features caused by occlusions.
In Fig. 3(b), the widely used background occlusion is able to reduce the number of outlier features to some extent, but it still cannot completely eliminate them, indicating that the model is still affected by the obstacles. In contrast, with the model trained with our proposed ADM, the outlier features in Fig. 3(c) almost disappear due to the model's excellent ability to avoid obstacles. In a nutshell, Fig. 3 shows that ADM can help the model reduce the impact of obstacles.
Effectiveness of DPC module. We evaluate the effectiveness of our proposed DPC with different occlusion strategies, and the results are presented in index-5 to index-7 of Table 5. Compared with the results in index-2 to index-4, our experiments demonstrate that the DPC can be seamlessly integrated into each occlusion strategy and improves performance. For example, with the random erasing strategy, performance increases from 61.1% to 66.7% in Rank-1 and from 53.8% to 56.5% in mAP. Moreover, with the background occlusion, the DPC significantly increases the Rank-1 performance by +5.6% and the mAP by +1.9%. Our results indicate that the proposed DPC has strong compatibility and universality, making it capable of improving the performance of several previous methods.

Figure 4: Visualization of attention maps on the occlusion testing set of the Occluded-Duke dataset. (a) Occluded person images. (b) Attention maps of the baseline. (c) Attention maps of ADP.

Visualization of the Attention
To demonstrate the model's ability to process images with occlusions, we visualize the attention maps and show them in Fig. 4. The input images are from the testing set with diverse occlusions, and we apply Grad-CAM (Selvaraju et al. 2017) to visualize the attention heatmaps and show the areas the model focuses on. It is obvious that the baseline is easily interfered with by obstacles, which greatly limits the performance. In contrast, Fig. 4(c) shows that our model's attention mechanism largely avoids attending to occlusions and focuses more on the target pedestrian. Furthermore, the attention heatmaps show that the proposed model provides good performance when handling diverse occlusion types and locations.

Conclusion
In this research, we introduced a new approach to address the problem of occluded person re-identification by proposing two innovative modules. The ADM generates a more effective artificial occlusion that closely resembles real-world occlusions at the feature level, making the network robust to unseen occlusions and enhancing its generalization. The DPC handles both holistic and occluded images simultaneously, aligning the holistic and occluded features and guiding attention more toward the target pedestrian. Meanwhile, the two proposed modules, ADM and DPC, can be seamlessly integrated with various existing models to enhance their performance, demonstrating the wide applicability of our approach. Experimental results on two occluded datasets and two holistic datasets illustrate the effectiveness of the proposed method and its superiority over other state-of-the-art methods.

Acknowledgements
This work was supported by National Key R&D Program of China (No.2022ZD0118202), the National Science Fund for Distinguished Young Scholars (No.62025603), the National Natural Science Foundation of China (No. U21B2037, No. U22B2051, No. 62176222, No. 62176223, No.
62176226, No. 62072386, No. 620723 87, No. 62072389, No. 62002305 and No. 62272401), and the Natural Science Foundation of Fujian Province of China (No.2021J01002, No.2022J06001). References Chen, P.; Liu, W.; Dai, P.; Liu, J.; Ye, Q.; Xu, M.; Chen, Q.; and Ji, R. 2021. Occlude Them All: Occlusion-Aware Attention Network for Occluded Person Re-ID. In ICCV, 11813–11822. IEEE. Chen, Y.; Zhu, X.; Zheng, W.; and Lai, J. 2018. Person ReIdentification by Camera Correlation Aware FFeature Augmentation. IEEE Trans. Pattern Anal. Mach. Intell., 40(2): 392–408. Deng, J.; Dong, W.; Socher, R.; Li, L.; Li, K.; and Fei-Fei, L. 2009. ImageNet: A large-scale hierarchical image database. In CVPR, 248–255. IEEE Computer Society. Deng, J.; Guo, J.; Xue, N.; and Zafeiriou, S. 2019. ArcFace: Additive Angular Margin Loss for Deep Face Recognition. In CVPR, 4690–4699. Computer Vision Foundation / IEEE. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In ICLR. OpenReview.net. Eom, C.; and Ham, B. 2019. Learning Disentangled Representation for Robust Person Re-identification. In NeurIPS, 5298–5309. Gao, S.; Wang, J.; Lu, H.; and Liu, Z. 2020. Pose-Guided Visible Part Matching for Occluded Person ReID. In CVPR, 11741–11749. Computer Vision Foundation / IEEE. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In CVPR, 770–778. IEEE Computer Society. He, L.; Liang, J.; Li, H.; and Sun, Z. 2018. Deep Spatial Feature Reconstruction for Partial Person Re-Identification: Alignment-Free Approach. In CVPR, 7073–7082. Computer Vision Foundation / IEEE Computer Society. He, L.; and Liu, W. 2020. Guided Saliency Feature Learning for Person Re-identification in Crowded Scenes. In ECCV (28), volume 12373 of Lecture Notes in Computer Science, 357–373. Springer. He, L.; Wang, Y.; Liu, W.; Zhao, H.; Sun, Z.; and Feng, J. 2019. Foreground-Aware Pyramid Reconstruction for Alignment-Free Occluded Person Re-Identification. In ICCV, 8449–8458. IEEE. He, S.; Luo, H.; Wang, P.; Wang, F.; Li, H.; and Jiang, W. 2021. TransReID: Transformer-based Object ReIdentification. In ICCV, 14993–15002. IEEE. Hou, R.; Ma, B.; Chang, H.; Gu, X.; Shan, S.; and Chen, X. 2019. VRSTC: Occlusion-Free Video Person ReIdentification. In CVPR, 7183–7192. Computer Vision Foundation / IEEE. Hou, R.; Ma, B.; Chang, H.; Gu, X.; Shan, S.; and Chen, X. 2022. Feature Completion for Occluded Person ReIdentification. IEEE Trans. Pattern Anal. Mach. Intell., 44(9): 4894–4912. Huang, H.; Li, D.; Zhang, Z.; Chen, X.; and Huang, K. 2018. Adversarially Occluded Samples for Person ReIdentification. In CVPR, 5098–5107. Computer Vision Foundation / IEEE Computer Society. Jia, M.; Cheng, X.; Lu, S.; and Zhang, J. 2022. Learning disentangled representation implicitly via transformer for occluded person re-identification. IEEE Transactions on Multimedia. Jia, M.; Sun, Y.; Zhai, Y.; Cheng, X.; Yang, Y.; and Li, Y. 2023. Semi-attention Partition for Occluded Person Reidentification. In Williams, B.; Chen, Y.; and Neville, J., eds., Thirty-Seventh AAAI Conference on Artificial Intelligence, AAAI 2023, Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence, IAAI 2023, Thirteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2023, Washington, DC, USA, February 7-14, 2023, 998–1006. AAAI Press. 
Li, Y.; He, J.; Zhang, T.; Liu, X.; Zhang, Y.; and Wu, F. 2021. Diverse Part Discovery: Occluded Person Re-Identification With Part-Aware Transformer. In CVPR, 2898–2907. Computer Vision Foundation / IEEE. Luo, H.; Gu, Y.; Liao, X.; Lai, S.; and Jiang, W. 2019. Bag of Tricks and a Strong Baseline for Deep Person ReIdentification. In CVPR Workshops, 1487–1495. Computer Vision Foundation / IEEE. Luo, H.; Jiang, W.; Fan, X.; and Zhang, C. 2020. STNReID: Deep Convolutional Networks With Pairwise Spatial Transformer Networks for Partial Person Re-Identification. IEEE Trans. Multim., 22(11): 2905–2913. Miao, J.; Wu, Y.; Liu, P.; Ding, Y.; and Yang, Y. 2019. Pose-Guided Feature Alignment for Occluded Person ReIdentification. In ICCV, 542–551. IEEE. Miao, J.; Wu, Y.; and Yang, Y. 2022. Identifying Visible Parts via Pose Estimation for Occluded Person ReIdentification. IEEE Trans. Neural Networks Learn. Syst., 33(9): 4624–4634. Ristani, E.; Solera, F.; Zou, R. S.; Cucchiara, R.; and Tomasi, C. 2016. Performance Measures and a Data Set for Multitarget, Multi-camera Tracking. In ECCV Workshops (2), volume 9914 of Lecture Notes in Computer Science, 17–35. Schroff, F.; Kalenichenko, D.; and Philbin, J. 2015. FaceNet: A unified embedding for face recognition and clustering. In CVPR, 815–823. IEEE Computer Society. Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; and Batra, D. 2017. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In ICCV, 618–626. IEEE Computer Society. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6205 Shi, Y.; Ling, H.; Wu, L.; Zhang, B.; and Li, P. 2022. Attribute disentanglement and registration for occluded person re-identification. Neurocomputing, 470: 226–235. Sun, Y.; Xu, Q.; Li, Y.; Zhang, C.; Li, Y.; Wang, S.; and Sun, J. 2019. Perceive Where to Focus: Learning Visibility-Aware Part-Level Features for Partial Person ReIdentification. In CVPR, 393–402. Computer Vision Foundation / IEEE. Sun, Y.; Zheng, L.; Yang, Y.; Tian, Q.; and Wang, S. 2018. Beyond Part Models: Person Retrieval with Refined Part Pooling (and A Strong Convolutional Baseline). In ECCV (4), volume 11208 of Lecture Notes in Computer Science, 501–518. Springer. Tan, L.; Dai, P.; Ji, R.; and Wu, Y. 2022. Dynamic Prototype Mask for Occluded Person Re-Identification. In ACM Multimedia, 531–540. ACM. Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of machine learning research, 9(11). Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is All you Need. In NIPS, 5998–6008. Wang, G.; Yang, S.; Liu, H.; Wang, Z.; Yang, Y.; Wang, S.; Yu, G.; Zhou, E.; and Sun, J. 2020. High-Order Information Matters: Learning Relation and Topology for Occluded Person Re-Identification. In CVPR, 6448–6457. Computer Vision Foundation / IEEE. Wang, T.; Liu, H.; Song, P.; Guo, T.; and Shi, W. 2022a. Pose-Guided Feature Disentangling for Occluded Person Re-identification Based on Transformer. In AAAI, 2540– 2549. AAAI Press. Wang, Z.; Zhu, F.; Tang, S.; Zhao, R.; He, L.; and Song, J. 2022b. Feature Erasing and Diffusion Network for Occluded Person Re-Identification. In CVPR, 4744–4753. IEEE. Wu, S.; Chen, Y.; Li, X.; Wu, A.; You, J.; and Zheng, W. 2016. An enhanced deep feature representation for person re-identification. In WACV, 1–8. IEEE Computer Society. Yan, C.; Pang, G.; Jiao, J.; Bai, X.; Feng, X.; and Shen, C. 2021. 
Occluded Person Re-Identification with Single-scale Global Representations. In ICCV, 11855–11864. IEEE. Yan, G.; Wang, Z.; Geng, S.; Yu, Y.; and Guo, Y. 2023. PartBased Representation Enhancement for Occluded Person Re-Identification. IEEE Trans. Circuits Syst. Video Technol., 33(8): 4217–4231. Yang, J.; Zhang, J.; Yu, F.; Jiang, X.; Zhang, M.; Sun, X.; Chen, Y.; and Zheng, W. 2021. Learning to Know Where to See: A Visibility-Aware Approach for Occluded Person Re-identification. In ICCV, 11865–11874. IEEE. Zhai, Y.; Lu, S.; Ye, Q.; Shan, X.; Chen, J.; Ji, R.; and Tian, Y. 2020. AD-Cluster: Augmented Discriminative Clustering for Domain Adaptive Person Re-Identification. In CVPR, 9018–9027. Computer Vision Foundation / IEEE. Zhao, C.; Lv, X.; Dou, S.; Zhang, S.; Wu, J.; and Wang, L. 2021. Incremental Generative Occlusion Adversarial Suppression Network for Person ReID. IEEE Trans. Image Process., 30: 4212–4224. Zhao, C.; Qu, Z.; Jiang, X.; Tu, Y.; and Bai, X. 2023. Content-Adaptive Auto-Occlusion Network for Occluded Person Re-Identification. IEEE Trans. Image Process., 32: 4223–4236. Zheng, L.; Shen, L.; Tian, L.; Wang, S.; Wang, J.; and Tian, Q. 2015. Scalable Person Re-identification: A Benchmark. In ICCV, 1116–1124. IEEE Computer Society. Zheng, Z.; Zheng, L.; and Yang, Y. 2017. Unlabeled Samples Generated by GAN Improve the Person Reidentification Baseline in Vitro. In ICCV, 3774–3782. IEEE Computer Society. Zhong, Z.; Zheng, L.; Kang, G.; Li, S.; and Yang, Y. 2020. Random erasing data augmentation. In Proceedings of the AAAI conference on artificial intelligence, 13001–13008. Zhu, K.; Guo, H.; Liu, Z.; Tang, M.; and Wang, J. 2020. Identity-Guided Human Semantic Parsing for Person Reidentification. In ECCV (3), volume 12348 of Lecture Notes in Computer Science, 346–363. Springer. Zhuo, J.; Chen, Z.; Lai, J.; and Wang, G. 2018. Occluded Person Re-Identification. In ICME, 1–6. IEEE Computer Society. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6206 | 2024 | 689 |
18,505 | Efficient Spiking Neural Networks with Sparse Selective Activation for Continual Learning
Jiangrong Shen1, 2, Wenyao Ni2, Qi Xu3, Huajin Tang1, 2, 4*
1State Key Lab of Brain-Machine Intelligence, Zhejiang University, China 2The College of Computer Science and Technology, Zhejiang University, China 3School of Computer Science and Technology, Dalian University of Technology, China 4MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University, Hangzhou, China
[email protected], [email protected], [email protected], [email protected]
*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
The next generation of machine intelligence requires the capability of continual learning to acquire new knowledge without forgetting the old one while conserving limited computing resources. Spiking neural networks (SNNs), compared to artificial neural networks (ANNs), have more characteristics that align with biological neurons, which may be helpful as a potential gating function for knowledge maintenance in neural networks. Inspired by the selective sparse activation principle of context gating in biological systems, we present a novel SNN model with selective activation to achieve continual learning. The trace-based K-Winner-Take-All (K-WTA) and variable threshold components are designed to form sparse selective activation in the spatial and temporal dimensions of spiking neurons, which promotes subpopulations of neurons to perform specific tasks. As a result, continual learning can be maintained by routing different tasks via different populations of neurons in the network. The experiments are conducted on the MNIST and CIFAR10 datasets under the class-incremental setting. The results show that the proposed SNN model achieves competitive performance similar to, and even surpassing, other regularization-based methods deployed under traditional ANNs.

Introduction
Biological organisms are capable of learning continually from interactions with their environments throughout their lifespan (Kandel and Hawkins 1992). Machine intelligence with artificial neural networks (ANNs) has exhibited remarkable capabilities in computer vision and natural language processing applications (LeCun, Bengio, and Hinton 2015). However, the new generation of applications, such as self-driving cars and wearable devices, requires new machine intelligence that can acquire new knowledge without forgetting the old one while conserving limited computing resources (Kudithipudi et al. 2022). Inspired by biological systems, spiking neural networks (SNNs) have exhibited more complex spatiotemporal dynamics in comparison to ANNs (Maass 1997; Subbulakshmi Radhakrishnan et al. 2021; Yin, Corradi, and Bohte 2023; Bu et al. 2023), and have the potential to implement the next generation of machine intelligence with low power consumption by combining with neuromorphic hardware (Imam and Cleland 2020; Pei et al. 2019; Deng et al. 2021). Hence, in this paper, we explore how to implement continual learning using SNNs by leveraging the specific spatiotemporal dynamics of SNNs. In biological systems, the context has been found to have a significant influence on modulating, filtering, and assimilating new information (Kay and Laurent 1999; Levinson et al. 2020).
Context gating facilitates the selective activation of subpopulations of neurons, thereby encouraging the reduction of memory interference between similar experiences (Kudithipudi et al. 2022). Coincidentally, the SNN model exhibits properties related to the context gating mechanism. On the one hand, SNNs have significant sparsity owing to the expansion of the temporal dimension and the discrete nature of information transmission termed the "spike". This characteristic is advantageous in mitigating interference between neurons with distinct functionalities (Shen et al. 2023). On the other hand, the synaptic weight of each postsynaptic neuron is updated only when the corresponding connected presynaptic neuron's membrane potential reaches the firing threshold and emits a spike, whereas in ANNs even small activation values can trigger weight updates (Hammouamri, Masquelier, and Wilson 2022). Hence, the aforementioned sparsity at the two levels of neural activity and synaptic plasticity could help the SNN model reduce memory interference and mitigate catastrophic forgetting (McCloskey and Cohen 1989; French 1999) in the continual learning scenario.

In this paper, we explore how to alleviate catastrophic forgetting by enhancing the neural dynamics characteristics of SNNs. We propose the selective activation SNN (SA-SNN) model for continual learning with trace-based K-WTA and variable threshold mechanisms, which needs neither task labels nor memory replay. In the SA-SNN model, we first adopt a biologically plausible, temporal trace-based K-WTA method to reduce interference between different tasks. The trace-based K-WTA method itself converges with the connectivity of many brain regions that utilize inhibitory interneurons, and we further modify it to accommodate multi-step spiking neurons. Then we design a simple but effective variable threshold method to modify the threshold of spiking neurons, which encourages the participation of all neurons and thus enhances the effect of gating at the population level. The experiments are conducted in the class-incremental (Class-IL) setting.

Figure 1: The architecture of the proposed SA-SNN model, which incorporates trace-based K-WTA and variable threshold components. (A) The process of obtaining the Top-K mask in the SNN via the trace at a single time step and via the rate over the whole time window. The dark blue curves and the black vertical line sequences below represent the traces and the spike sequences throughout the entire time window, respectively. (B) The relationship between the variable firing threshold of the neurons and their firing counts.
Our main contributions are summarized as follows:
• We propose the SNN model with selective activation (SA-SNN) to reduce memory interference and mitigate catastrophic forgetting in continual learning scenarios without additional task information, inspired by the selective activation of the biological context gating mechanism.
• The trace-based K-WTA and variable threshold components are developed to induce sparsity in the selective activation of spiking neurons across both spatial and temporal dimensions, which in turn facilitates the activation of subpopulations of neurons that are specialized for different tasks.
• The Class-IL experiments are conducted on the MNIST and CIFAR10 datasets. The proposed SA-SNN model achieves performance comparable to, and even surpassing, other regularization-based methods deployed under ANNs.

Related Works
Continual learning aims to acquire knowledge in a sequential manner while ensuring that agents only have access to the data of the current task, without compromising their ability to recall previously learned tasks. The primary continual learning techniques can be broadly categorized into three distinct groups, namely regularization-based, rehearsal-based, and architecture-based techniques. Regularization-based techniques preserve synaptic connections by adding a regularization term to the loss function to consolidate the previously acquired knowledge, such as EWC (Kirkpatrick et al. 2017), MAS (Aljundi et al. 2018), and SI (Zenke, Poole, and Ganguli 2017), which calculate the importance of each parameter through a certain rule and generate a penalty term to limit the change of important parameters. Rehearsal-based techniques aim to enhance knowledge retention by preserving several key samples (or intermediate representations) from each task and propagating them forward through the network mixed with the current task's data. Most of these methods are oriented towards ANNs, such as (Van de Ven, Siegelmann, and Tolias 2020; Arani, Sarfraz, and Zonooz 2022; Rebuffi et al. 2017). By definition, such methods inevitably require additional storage space for extra information and extension of the model. The architecture-based methods (Kang et al. 2022; Yoon et al. 2017), which enhance the network's ability to perform different tasks by continuously adjusting its architecture, also encounter similar issues. As a result of the purposeful partitioning/expansion of the subnetworks, these methods often exhibit relatively stable performance between split tasks. However, in order to distinguish the usage range of different subnetworks, it tends to be necessary for these methods to have prior knowledge of task information.
In addition, ANN-oriented methods, such as (Bricken et al. 2023) and (Shen, Dasgupta, and Navlakha 2021), have investigated the mechanisms similar to the selective activated Top-K function and provided efficient solutions for continual learning with ANNs model. These models are all based on rate-coding so it is not possible to explain the continual learning mechanism of neural networks from the level of temporal neural dynamics, nor to fully utilize the specific characteristics of biological neurons. In brief, though the aforementioned techniques offer distinct performance benefits, they often necessitate intricate algorithms, additional storage usage, or task-specific knowledge, thereby deviating considerably from the innate sequential learning abilities of biological agents. Our approach attains the identical objective by only augmenting neural dynamics and neural-computational characteristics like context gating, which is more proximate to the possible biological learning process. By proposing meaningful neural computation features and temporal neural dynamics based on selective activation, we have developed a potential continual learning model of SNNs that is more akin to the learning processes of selective activation in biological systems. Methods The architecture of SNNs with selective activation is illustrated in Figure 1, which consists of a possible feature extractor for mining deep features in complex tasks and a multi-layer SNN to enable the network with the ability of continual learning. We apply the Temporary K-WTA mechanism in the dynamic of neurons in the hidden layer to reduce the mutual interference between different tasks. Meanwhile, the neurons with variable firing thresholds are employed to encourage silent neurons to participate in the learning process to some extent. Task Setup The task we perform in the is the continual learning task under the class-IL scenario. The following is the training pipeline for this type of task. Assume there are N learning phase (i.e. N sequential tasks in total). Task in each phase contains the data Di with all training samples of ci classes, where i devotes the index of the learning phase. During training stage, each learning phase (e.g. i-th phase, i <= N) can only access data related to the current task (i.e, all data belongs to class ci in Di) to train the model. When the training is done, the model will be evaluated on a test set with all classes Pi j=1 cj so far. The ultimate goal of the class-IL is to enable the model to distinguish all learned classes ever seen after training on all tasks in sequence. In the following, we introduce the components and trainning details of our solution. Trace-Based K-WTA Mechanism In neural circuit of animals, as mentioned in some researches (Lin et al. 2014) (Stevens 2015), there are inhibitory neurons which collect the excitation of some neurons and send feed-back inhibition and finally prevent most of the neurons from firing firing. This mechanism is often referred to as a winner-task-all(WTA) mechanism (Shen, Dasgupta, and Navlakha 2021), which is potentially helpful in maintaining network robustness (this mechanism is also refer to as K-WTA, where K devotes the number of winners). However, The homogeneous spike firing of neurons within a time window in SNNs makes it difficult to proceed the proper comparison among neurons directly. It may not be rational enough if we directly integrate the spikes in the entire time window and then deploy the KWTA mechanism. 
Trace-Based K-WTA Mechanism
In the neural circuits of animals, as reported in several studies (Lin et al. 2014; Stevens 2015), inhibitory neurons collect the excitation of some neurons, send feedback inhibition, and finally prevent most of the neurons from firing. This mechanism is often referred to as a winner-take-all (WTA) mechanism (Shen, Dasgupta, and Navlakha 2021), which is potentially helpful in maintaining network robustness (it is also referred to as K-WTA, where K denotes the number of winners). However, the homogeneous spike firing of neurons within a time window in SNNs makes it difficult to compare neurons directly. It may not be rational enough to directly integrate the spikes over the entire time window and then deploy the K-WTA mechanism. On the one hand, due to the discrete nature of the spikes, confusion can easily arise because many neurons may have the same number of spikes (especially with relatively small time steps). On the other hand, it seems unreasonable to use future information to generate gating signals for past events, which is lacking from the perspective of temporal rationality.

In the process of investigation, we noticed that Spike Timing-Dependent Plasticity (STDP) (Markram et al. 1997; Dan and Poo 2004), one kind of synaptic plasticity rule, updates the weight based on the spike time interval between the presynaptic and postsynaptic neurons. An internal variable called the "trace" is introduced in (Morrison, Diesmann, and Gerstner 2008) for neurons to bridge the gap between the time scale and the action potential in plasticity theory; it is updated with each spike and decays between spikes. Since this variable can give an online estimate of the mean firing rate of the spike train (Morrison, Diesmann, and Gerstner 2008), we can also use it as an indicator for temporal K-WTA. The trace is calculated as follows:
$$tr[t+1] = tr[t] - \frac{tr[t]}{\tau} + S[t+1], \quad (1)$$
where $\tau$ is the time constant, which decides the decay speed of the trace, $tr[t]$ is the trace of the neurons at time step t, and $S[t+1]$ denotes the neurons' spike output at step t + 1. This trace can be calculated at each time step and is relatively easy to compare to obtain the Top-K value, so we apply it to deploy the K-WTA computation step by step. Figure 1(A) shows an example of the difference in Top-K neuron activation between using the trace and using the rate. The rate-based K-WTA chooses the 5th neuron at both T = 5 and T = 15 (highest rate), while the trace-based K-WTA shows better selective activation diversity, activating the 5th and 3rd neurons at T = 5 and T = 15, respectively (highest trace). Another potential benefit of this trace-based K-WTA method is that it does not strictly adhere to the constraint of K from the perspective of the entire time window, which may improve the expression ability of the sub-networks under the Top-K function.
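The two pieces of this mechanism — the trace update of Eq. (1) and the per-step Top-K mask derived from it — are small enough to sketch directly; the snippet below is an illustrative PyTorch reading, with helper names (`update_trace`, `topk_mask_from_trace`) and the default time constant chosen by us.

```python
import torch


def update_trace(trace, spikes, tau=20.0):
    """Eq. (1): exponentially decaying spike trace, updated at every time step."""
    return trace - trace / tau + spikes


def topk_mask_from_trace(trace, k):
    """Keep only the k neurons with the largest trace (trace-based K-WTA)."""
    idx = trace.topk(k, dim=-1).indices
    mask = torch.zeros_like(trace)
    return mask.scatter_(-1, idx, 1.0)


# Per-step usage inside the simulation loop (spikes: (batch, hidden) in {0, 1}):
#   trace = update_trace(trace, spikes)
#   gated_spikes = spikes * topk_mask_from_trace(trace, k=10)
```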
Another potential benefit of this trace-based K-WTA method is that it does not strictly adhere to the constraint of K from the perspective of the entire time window, which may improve the expressive ability of the sub-networks under the Top-K function. Variable Threshold The Top-K activation function usually leads to large groups of dead neurons (Ahmad and Scheinkman 2019; Fedus, Zoph, and Shazeer 2022). This is because randomly initialized weights may allow one group of neurons to be easily activated and their synaptic weights to be continuously updated, while other neurons never activate and thus never receive feedback signals. Inspired by the threshold variability mentioned in (Izhikevich 2004), we propose to use variable firing thresholds for the spiking neurons in the hidden layers. As shown in Figure 1 (B), we let the firing threshold slowly increase as the number of activations increases. It is worth noting that, for convenience, the firing threshold does not decay in this paper, which enhances memory maintenance through irreversible threshold changes. In this way, neurons that were most frequently activated in the past have a decreasing probability of being reactivated, making neurons with relatively low activation frequencies more likely to be activated and thus gradually participate in the learning process of the network. By adding the above two features to the basic neuron model, we obtain the following neuronal dynamics:
H[t] = f(V[t−1], X[t]),
S[t] = Θ(H[t] − Vth),
Mask[t] = TopK(tr[t]),   (2)
S*[t] = S[t] · Mask[t],
V[t] = H[t] − Vth · S[t],
where X[t] is the input to the neurons at time step t, S[t] represents the neurons' original spiking output, and H[t] and V[t] are the membrane potentials after charging (before firing) and after firing, respectively. Θ(·) is the spiking function. TopK(·) is the function used in the trace-based K-WTA component to generate a mask that keeps the K largest traces defined in Eq. (1), and S* denotes the actual activation output after the Top-K function. f(·) represents the state update of the neuron, such as the integrate-and-fire (IF) neuron in Eq. (3) or the leaky integrate-and-fire (LIF) neuron in Eq. (4):
f(V[t−1], X[t]) = V[t−1] + X[t],   (3)
f(V[t−1], X[t]) = V[t−1] − (V[t−1] − Vreset)/τt + X[t].   (4)
We adopt a linear approach to implement the variable threshold, as shown in Figure 1 (B, left). Although it seems more biologically plausible for the firing threshold Vth to grow in a non-linear manner, we found the experimental results to be similar for the non-linear and linear approaches; hence, we choose the linear variable threshold for simplicity. The variable threshold is obtained by:
Vth = min(Thmin + C · (Thmax − Thmin)/p, Thmax),   (5)
where Thmax and Thmin are the upper and lower bounds of the firing threshold, C is a counter that records the number of accepted firings of a neuron, and p is a hyper-parameter that controls the rate of threshold change. This variable-threshold mechanism prevents certain neurons from always being selected for newly arriving classes while forgetting their responses to old classes.
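Putting Eqs. (1)–(5) together, one time step of the selectively activated hidden-layer neuron could be rendered as the NumPy sketch below. This is a hypothetical illustration rather than the authors' implementation: in particular, whether the trace and the firing counter accumulate the raw spikes S or the gated spikes S*, and the default parameter values, are our assumptions.

```python
import numpy as np

def sa_if_step(v, x, trace, counter, k=10,
               th_min=1.0, th_max=2.0, p=2_000_000, tau=10.0):
    # One time step of the selectively activated IF neuron, following Eqs. (2), (3), (5).
    # All array arguments hold one value per hidden neuron.
    v_th = np.minimum(th_min + counter * (th_max - th_min) / p, th_max)  # Eq. (5)
    h = v + x                                  # IF charging, Eq. (3)
    s = (h >= v_th).astype(float)              # spiking function Theta(.)
    mask = np.zeros_like(s)                    # trace-based Top-K gate, Eq. (2)
    mask[np.argsort(trace)[-k:]] = 1.0
    s_star = s * mask                          # accepted (gated) spikes S*
    v = h - v_th * s                           # soft reset with the raw spikes, Eq. (2)
    trace = trace - trace / tau + s            # trace update, Eq. (1) (raw spikes assumed)
    counter = counter + s_star                 # count "accepted" firings (our reading)
    return v, s_star, trace, counter
```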
Through this irreversible threshold increase, neurons activated in a previous task become harder to activate again when training on the following task. Although the above two components facilitate the formation of sub-networks and guide neurons to participate in continual learning, over-dense input may still lead to abnormal performance. This is because the greedy nature of back-propagation, together with the limited precision and activation of the SNN, may drive these neurons to excessively high firing rates, which harms memory retention. Therefore, inspired by (Bricken et al. 2023), we adopt L2 normalization for the input layer and its associated weight matrix to control sparsity. Besides, we also apply some techniques proven effective in continual learning, such as Dale's rule (Dale 1935), and use the SGD optimizer to avoid the "stale momentum" problem (Zhu et al. 2023). Within each learning stage, training follows the standard image-classification pipeline for SNNs (Zhu et al. 2023; Xu et al. 2021; Shen et al. 2021): we take the cross entropy between the outputs and the labels as the loss and use STBP (Wu et al. 2018; Xu et al. 2023; Guo et al. 2023) to train the learnable parameters of the network. Results To verify the effectiveness of the proposed framework on continual learning problems, the original CIFAR10 dataset is randomly divided into 5 tasks with 2 classes per task, referred to as the "splitCIFAR10" dataset in the following. Each model is trained on these tasks sequentially as described in the Task Setup section. The pre-trained feature extractor from (Bricken et al. 2023) is applied to transform each sample in the CIFAR dataset into a 256-dimensional latent embedding. We also evaluate our model on splitMNIST, splitNMNIST (without the pre-trained feature extractor), and splitCIFAR100 (the experimental results on CIFAR100 are given in the supplementary materials). The experimental details other than splitCIFAR10 are also recorded in the supplementary materials. The models are trained for 20,000 batches on each sub-dataset (256 samples per batch, approximately 500 epochs over the whole CIFAR10 dataset) without any other pre-processing. We test the model on every class it has learned, and the final accuracy is obtained by averaging three groups of experiments with different random seeds. Basic Model Settings All methods involved in the comparative studies are multi-layer neural networks with a single hidden layer. The hidden size is set to 1000, the hyper-parameter K is set to 10, and the time step is set to 16 on splitCIFAR10. The baselines fall into three types. The first covers the most basic settings, namely "without any measures" (None) and "training with all learned data" (Joint). The second comprises several regularization-based continual learning methods, including EWC (Kirkpatrick et al. 2017) and MAS (Aljundi et al. 2018); in particular, the fixed version of EWC in (Bricken et al. 2023) is employed to improve performance. Since these methods are independent of the computational characteristics of ANNs, they can be directly transferred to SNNs. The third consists of recently proposed biologically plausible ANN-based methods, including SDMLP (Bricken et al. 2023) and FlyModel (Shen, Dasgupta, and Navlakha 2021); for convenience and fairness, SDMLP skips the pre-training of its first layer, and FlyModel runs for only one epoch due to its particularity. In addition, the IF neuron model is employed in our model for its good compatibility with variable thresholds.
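For readers who wish to reproduce the evaluation protocol, the following is a minimal sketch of the class-IL pipeline described above (split the classes into tasks, train sequentially, and test on all classes seen so far). The helper callables make_model, train_on, and evaluate are hypothetical placeholders, not functions from the authors' code.

```python
import numpy as np

def class_il_protocol(features, labels, make_model, train_on, evaluate,
                      classes_per_task=2, n_tasks=5):
    # Class-incremental protocol: train on one task at a time, then evaluate
    # on all classes seen so far (cf. the task setup above).
    class_order = np.arange(n_tasks * classes_per_task)   # e.g. CIFAR10 classes 0..9
    model = make_model()
    accuracy_so_far = []
    for t in range(n_tasks):
        task_classes = class_order[t * classes_per_task:(t + 1) * classes_per_task]
        in_task = np.isin(labels, task_classes)            # only current-task samples
        train_on(model, features[in_task], labels[in_task])
        seen = class_order[:(t + 1) * classes_per_task]
        accuracy_so_far.append(evaluate(model, seen))      # test restricted to 'seen'
    return accuracy_so_far
```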
Performance Comparison Comparative studies are conducted to evaluate the performance of the proposed model. As illustrated in Table 1 and Figure 2 (left), the proposed SA-SNN outperforms the other methods on the splitMNIST and splitCIFAR10 datasets, achieving accuracies of 60.06% and 77.73%, respectively, and it consistently maintains its accuracy advantage over the other methods throughout the entire learning process. Additionally, SA-SNN also outperforms the state-of-the-art ANN models SDMLP (73.27%) and FlyModel (70.09%) on splitCIFAR10, which rely on a similar principle for continual learning.
[Figure 3: a bar chart of test accuracy on splitCIFAR10 for SA-SNN+EWC, SA-SNN, SA-SNN(rate), and variants with the variable threshold, Dale's rule, normalization, or K-WTA removed.] Figure 3: The validation accuracy of all ablated versions of SA-SNN, where '-' denotes removal of the corresponding component.
Besides, our model demonstrates a relatively high level of performance even when utilizing the rate mask directly (denoted SA-SNN(rate)), achieving an accuracy of 76.88% on splitCIFAR10, higher than the SDMLP algorithm, despite both methods employing the K-WTA mechanism to mitigate forgetting. This observation suggests that the "integrate and fire" pattern of spiking neurons may offer advantages for continual learning. Moreover, we evaluate the accuracy on the earliest classes after training on each task from T1 to T5 to directly compare the models on the CIFAR10 dataset. SA-SNN achieves higher average accuracy with less variance than SDMLP and the original SNN, especially after training on the last two tasks. As shown in Figure 2 (right), it is worth noting that despite the absence of any constraint on the direction of weight updates or any pre-training in our model, its accuracy curve remains relatively balanced across the various tasks. This balance among tasks benefits from the effective selective activation of specific subpopulations of neurons and their associated connections for distinct tasks. On this basis, since SA-SNN does not rely on the inferred importance of weights to avoid forgetting, it is compatible with regularization-based methods such as EWC, MAS, and SI (Zenke, Poole, and Ganguli 2017) that add a penalty term to the loss function. As shown in Table 1, when combined with EWC, our model gains approximately 2.66% and 22.12% on the splitCIFAR10 and splitMNIST datasets, respectively, and surpasses the combination of SDMLP plus EWC. This large gap between the two datasets appears because EWC applies a regularization function to find a suboptimal local minimum by minimizing the loss of the new task within a local region of the old task's parameter space; since the sequential classes in MNIST share more similar low-level features than those in CIFAR10, it is easier for SA-SNN with EWC to find a good joint solution around that region. Meanwhile, it is observed that the performance of the EWC and MAS techniques on the SNN in our experiments is comparatively low, at approximately 30% on splitCIFAR10.
In contrast, EWC exhibited a more noticeable accuracy improvement of nearly 5% when applied to ANNs on splitCIFAR10. This performance difference may be attributed to the distinct neuron activation schemes of SNNs and ANNs. The above results are sufficient to demonstrate the effectiveness of our method. Our SA-SNN model mainly introduces five hyper-parameters: p, τ, K, Thmin, and Thmax. A small τ can lead to an overemphasis on temporal proximity by the Top-K function, thereby impeding the network's ability to segregate tasks into distinct subnetworks; hence, τ is set to 10 and K is set to 10 based on empirical observation. The influence of the other two hyper-parameters is illustrated in Figure 4 (a) and (b). Regarding the maximum threshold Thmax, its influence on the overall performance is relatively small when it takes a value greater than 2.0. Since this parameter is clearly correlated with p, and the continuous change of the threshold itself can lead to forgetting, we fix it to 2.0. p controls the change speed of the firing threshold: when its value is too large, the continuously changing threshold actually hurts the ability to maintain knowledge, while when p is too small the neurons are not fully mobilized. Referring to the results in Figure 4 (b), we set p = 2000000 under our experimental setting. For the hyper-parameters of the other methods used for comparison, we use grid search for tuning and select relatively reasonable values. Ablation Studies To assess the efficacy of the incorporated components that facilitate continual learning, we conduct a series of ablation experiments and present the results in Figure 3. Combining all components yields better performance than any version with a component removed, indicating the necessity of each component. It is worth noting that, apart from the full combination (78%), removing a single component alone can lead to a sharp decrease in the final results. This suggests that it is the synergistic effect between sparse input and isolated learning that gives SA-SNN its continual learning ability. As mentioned earlier, the Multi-step mask outperforms the rate mask in terms of performance, and this phenomenon is more notable in the splitMNIST experiment. It may be attributed to the fact that this method appropriately accepts some ambiguous outputs, thereby indirectly expanding the network's ability to generalize across tasks. Regardless of the additional gains from the extended regularization-based method, the overall performance of our method under the SNN is better than that of the similar SDMLP method implemented under an ANN, but where does this improvement come from? To explore the possible reasons, we refer to the definition of "task-selectivity" in (Flesch et al. 2023) and compare the relevant behavior of the trained models.
We briefly define that when a neuron primarily responds to input from only one category, it has selectivity for that category.
[Figure 4: four panels with test accuracy and neuron-number axes: (a) "Max Threshold Tune" (no variable threshold, Th=1.5, 2.0, 3.0, 4.0); (b) "Hyperparameter p Tune" over the classes learned so far (p from 40000 to 4000000, and no variable threshold); (c) confusion matrices of SDMLP, SA-SNN(rate), and SA-SNN; (d) "task selectivity plotting" across T1–T5.] Figure 4: (a) Comparison of model performance with different values of the hyper-parameter Thmax. (b) The accuracy curves of models with different values of the hyper-parameter p. (c, d) Statistical features of the trained networks using SDMLP and our method in one experiment: (c) the confusion matrices of SDMLP, SA-SNN(rate), and SA-SNN after training on splitCIFAR10; (d) the distribution of neurons with different task-selectivity during continual learning, starting from the end of Task 1.
To roughly classify neurons with different selectivity, we regress the hidden-layer activity against 10 expected distinct selectivity patterns (i.e., neurons responding only to a specific category). The 'activity' refers to the neurons' output (after ReLU) in the ANN, while in the SNN it refers to the spike count. One of the results is shown in Figure 4 (c, d). In Figure 4 (d), we notice that the distribution of neurons with different selectivity is relatively biased under the SDMLP method, while under our method it remains fairly even during the learning process. This phenomenon is also directly reflected in the final confusion matrix of the model in Figure 4 (c): during the learning process of the SDMLP models, the proportion of neurons selective for categories 2, 4, and 5 is relatively small, so the ability to identify these categories is easily disturbed in the subsequent learning process, which finally results in relatively low accuracy on several categories. This kind of conflict is far less prevalent when using our method. Besides, more neurons show a specific selectivity when using the rate mask than when using the Multi-step mask. This may be attributed to the Multi-step mask's tolerance toward neurons with similar functions, which encourages mixed selectivity; even so, it still maintains a relatively uniform selectivity distribution. Discussion Compared to ANNs, SNNs share more biological features, and their spike-firing mechanism is thought to make SNN networks more robust. In this paper, to explore the potential of the intrinsic properties of SNNs, we introduce a set of neural features and combine the computation mechanism of SNNs with the K-WTA mechanism in a biologically plausible way. The components we propose ultimately form an SNN that automatically forms relatively balanced subnetworks and achieves good performance in the class-incremental setting of continual learning, with no need for task information or extra storage. The trace variable is introduced to express the degree of neuronal activity over a period of time, thereby determining the top K neurons in the K-WTA process.
It is different from membrane potential in SNNs, the membrane potential is only used to determine the spike firing in our neuron model. Conclusions Overall, inspired by the selective sparse activation principle of context gating in biological systems, we propose the SA-SNN with effective components of trace-base K-WTA, normalization and variable threshold, and reach a competitive performance. In addition to a new method for continual learning, we also investigate the potential advantages of the sparse selective activation mechanism in SNN during continual learning. The experimental results suggested the effectiveness of the proposed components of SNNs on continual learning tasks. What’s more, our method also provides a possible way for augmenting continual learning capabilities in machine intelligence with limited computing resources through the integration of neuromorphic hardware. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 617 Acknowledgements This work was supported by National Key Research and Development Program of China (No. 2022YFB4500100), National Natural Science Foundation of China under Grant (No. 62306274, No.62236007, No. 62206037, No. 61925603), and Fellowship from the China Postdoctoral Science Foundation under Grant (No. 2023M733067 and No. 2023T160567). References Ahmad, S.; and Scheinkman, L. 2019. How can we be so dense? The benefits of using highly sparse representations. arXiv preprint arXiv:1903.11257. Aljundi, R.; Babiloni, F.; Elhoseiny, M.; Rohrbach, M.; and Tuytelaars, T. 2018. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European conference on computer vision (ECCV), 139–154. Antonov, D.; Sviatov, K.; and Sukhov, S. 2022. Continuous learning of spiking networks trained with local rules. Neural Networks, 155: 512–522. Arani, E.; Sarfraz, F.; and Zonooz, B. 2022. Learning fast, learning slow: A general continual learning method based on complementary learning system. arXiv preprint arXiv:2201.12604. Bricken, T.; Davies, X.; Singh, D.; Krotov, D.; and Kreiman, G. 2023. Sparse Distributed Memory is a Continual Learner. arXiv preprint arXiv:2303.11934. Bu, T.; Ding, J.; Hao, Z.; and Yu, Z. 2023. Rate Gradient Approximation Attack Threats Deep Spiking Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7896–7906. Dale, H. 1935. Pharmacology and nerve-endings. Dan, Y.; and Poo, M.-m. 2004. Spike timing-dependent plasticity of neural circuits. Neuron, 44(1): 23–30. Deng, L.; Wu, Y.; Hu, Y.; Liang, L.; Li, G.; Hu, X.; Ding, Y.; Li, P.; and Xie, Y. 2021. Comprehensive snn compression using admm optimization and activity regularization. IEEE transactions on neural networks and learning systems. Fedus, W.; Zoph, B.; and Shazeer, N. 2022. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. The Journal of Machine Learning Research, 23(1): 5232–5270. Flesch, T.; Nagy, D. G.; Saxe, A.; and Summerfield, C. 2023. Modelling continual learning in humans with Hebbian context gating and exponentially decaying task signals. PLOS Computational Biology, 19(1): e1010808. French, R. M. 1999. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences, 3(4): 128–135. Guo, Y.; Peng, W.; Chen, Y.; Zhang, L.; Liu, X.; Huang, X.; and Ma, Z. 2023. Joint A-SNN: Joint training of artificial and spiking neural networks via self-Distillation and weight factorization. Pattern Recognition, 142: 109639. 
Hammouamri, I.; Masquelier, T.; and Wilson, D. 2022. Mitigating Catastrophic Forgetting in Spiking Neural Networks through Threshold Modulation. Transactions on Machine Learning Research. Imam, N.; and Cleland, T. A. 2020. Rapid online learning and robust recall in a neuromorphic olfactory circuit. Nature Machine Intelligence, 2(3): 181–191. Izhikevich, E. M. 2004. Which model to use for cortical spiking neurons? IEEE transactions on neural networks, 15(5): 1063–1070. Kandel, E. R.; and Hawkins, R. D. 1992. The biological basis of learning and individuality. Scientific American, 267(3): 78–87. Kang, H.; Mina, R. J. L.; Madjid, S. R. H.; Yoon, J.; Hasegawa-Johnson, M.; Hwang, S. J.; and Yoo, C. D. 2022. Forget-free continual learning with winning subnetworks. In International Conference on Machine Learning, 10734– 10750. PMLR. Kay, L. M.; and Laurent, G. 1999. Odor-and contextdependent modulation of mitral cell activity in behaving rats. Nature neuroscience, 2(11): 1003–1009. Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A. A.; Milan, K.; Quan, J.; Ramalho, T.; Grabska-Barwinska, A.; et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13): 3521–3526. Kudithipudi, D.; Aguilar-Simon, M.; Babb, J.; Bazhenov, M.; Blackiston, D.; Bongard, J.; Brna, A. P.; Chakravarthi Raja, S.; Cheney, N.; Clune, J.; et al. 2022. Biological underpinnings for lifelong learning machines. Nature Machine Intelligence, 4(3): 196–210. LeCun, Y.; Bengio, Y.; and Hinton, G. 2015. Deep learning. nature, 521(7553): 436–444. Levinson, M.; Kolenda, J. P.; Alexandrou, G. J.; Escanilla, O.; Cleland, T. A.; Smith, D. M.; and Linster, C. 2020. Context-dependent odor learning requires the anterior olfactory nucleus. Behavioral Neuroscience, 134(4): 332. Lin, A. C.; Bygrave, A. M.; De Calignon, A.; Lee, T.; and Miesenb¨ock, G. 2014. Sparse, decorrelated odor coding in the mushroom body enhances learned odor discrimination. Nature neuroscience, 17(4): 559–568. Maass, W. 1997. Networks of spiking neurons: the third generation of neural network models. Neural networks, 10(9): 1659–1671. Markram, H.; L¨ubke, J.; Frotscher, M.; and Sakmann, B. 1997. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275(5297): 213–215. McCloskey, M.; and Cohen, N. J. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation, volume 24, 109–165. Elsevier. Morrison, A.; Diesmann, M.; and Gerstner, W. 2008. Phenomenological models of synaptic plasticity based on spike timing. Biological cybernetics, 98: 459–478. Pei, J.; Deng, L.; Song, S.; Zhao, M.; Zhang, Y.; Wu, S.; Wang, G.; Zou, Z.; Wu, Z.; He, W.; et al. 2019. Towards artificial general intelligence with hybrid Tianjic chip architecture. Nature, 572(7767): 106–111. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 618 Rebuffi, S.-A.; Kolesnikov, A.; Sperl, G.; and Lampert, C. H. 2017. icarl: Incremental classifier and representation learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2001–2010. Shen, J.; Xu, Q.; Liu, J. K.; Wang, Y.; Pan, G.; and Tang, H. 2023. ESL-SNNs: An Evolutionary Structure Learning Strategy for Spiking Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1): 86–93. Shen, J.; Zhao, Y.; Liu, J. K.; and Wang, Y. 2021. 
HybridSNN: Combining bio-machine strengths by boosting adaptive spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems, 34(9): 5841 – 5855. Shen, Y.; Dasgupta, S.; and Navlakha, S. 2021. Algorithmic insights on continual learning from fruit flies. arXiv preprint arXiv:2107.07617. Skatchkovsky, N.; Jang, H.; and Simeone, O. 2022. Bayesian continual learning via spiking neural networks. arXiv preprint arXiv:2208.13723. Stevens, C. F. 2015. What the fly’s nose tells the fly’s brain. Proceedings of the National Academy of Sciences, 112(30): 9460–9465. Subbulakshmi Radhakrishnan, S.; Sebastian, A.; Oberoi, A.; Das, S.; and Das, S. 2021. A biomimetic neural encoder for spiking neural network. Nature communications, 12(1): 2143. Tadros, T.; Krishnan, G. P.; Ramyaa, R.; and Bazhenov, M. 2022. Sleep-like unsupervised replay reduces catastrophic forgetting in artificial neural networks. Nature Communications, 13(1): 7742. Van de Ven, G. M.; Siegelmann, H. T.; and Tolias, A. S. 2020. Brain-inspired replay for continual learning with artificial neural networks. Nature communications, 11(1): 4069. Wu, Y.; Deng, L.; Li, G.; Zhu, J.; and Shi, L. 2018. Spatiotemporal backpropagation for training high-performance spiking neural networks. Frontiers in neuroscience, 12: 331. Xu, Q.; Li, Y.; Shen, J.; Liu, J. K.; Tang, H.; and Pan, G. 2023. Constructing deep spiking neural networks from artificial neural networks with knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7886–7895. Xu, Q.; Shen, J.; Ran, X.; Tang, H.; Pan, G.; and Liu, J. K. 2021. Robust transcoding sensory information with neural spikes. IEEE Transactions on Neural Networks and Learning Systems, 33(5): 1935–1946. Yin, B.; Corradi, F.; and Bohte, S. M. 2023. Accurate online training of dynamical spiking neural networks through forward propagation through time. Nature Machine Intelligence, 1–10. Yoon, J.; Yang, E.; Lee, J.; and Hwang, S. J. 2017. Lifelong learning with dynamically expandable networks. arXiv preprint arXiv:1708.01547. Zenke, F.; Poole, B.; and Ganguli, S. 2017. Continual learning through synaptic intelligence. In International conference on machine learning, 3987–3995. PMLR. Zhu, Y.; Fang, W.; Xie, X.; Huang, T.; and Yu, Z. 2023. Exploring Loss Functions for Time-based Training Strategy in Spiking Neural Networks. In Thirty-seventh Conference on Neural Information Processing Systems. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 619 | 2024 | 69 |
18,506 | Locality Preserving Refinement for Shape Matching with Functional Maps Yifan Xia, Yifan Lu, Yuan Gao∗, Jiayi Ma* Electronic Information School, Wuhan University, Wuhan 430072, China [email protected], [email protected], [email protected], [email protected] Abstract In this paper, we address the nonrigid shape matching with outliers by a novel and effective pointwise map refinement method, termed Locality Preserving Refinement. For accurate pointwise conversion from a given functional map, our method formulates a two-step procedure. Firstly, starting with noisy point-to-point correspondences, we identify inliers by leveraging the neighborhood support, which yields a closedform solution with linear time complexity. After obtained the reliable correspondences of inliers, we refine the pointwise correspondences for outliers using local linear embedding, which operates in an adaptive spectral similarity space to further eliminate the ambiguities that are difficult to handle in the functional space. By refining pointwise correspondences with local consistency thus embedding geometric constraints into functional spaces, our method achieves considerable improvement in accuracy with linearithmic time and space cost. Extensive experiments on public benchmarks demonstrate the superiority of our method over the state-of-the-art methods. Our code is publicly available at https://github.com/XiaYifan1999/LOPR. Introduction Recognizing the similarity and correspondences between two nonrigid shapes is a fundamental problem in computer vision and graphics (Van Kaick et al. 2011; Sahillio˘glu 2020), such as shape analysis (Hartman et al. 2023), style transfer (Sumner and Popovi´c 2004), pose estimation (Jiang et al. 2022), and texture mapping (Ezuz and Ben-Chen 2017). Unlike rigid alignment with easy parametric modeling, the complexity of nonrigid transformation and the existence of unknown outliers make such a problem intractable to be modeled. Due to the approximately isometric nature of real-world deformations, estimating the near-isometric maps for nonrigid shape matching receives increasing research interests in the last decades (Sahillio˘glu 2020). Among the numerous strategies for seeking the near-isometric maps (Deng et al. 2022), functional maps (Ovsjanikov et al. 2012) stands out as an exemplary technique due to its high efficiency. Observing the isometric invariance based on the Laplacian-Beltrami *Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 3 iterations Sinkhorn LOPR 7 iterations Target GT Figure 1: Qualitative examples employing color transfer, comparing our LOPR with representative state-of-the-art Sinkhorn (Pai et al. 2021) by integrating them into the iteration of MWP. Notably, our LOPR exhibits the capacity to recover more precise pointwise maps. operator, functional map framework proposes the determination of a functional map operator that maps two spaces of square-integrable functions on respective shape, thereby efficiently recovering pointwise correspondences. This enables algebraic operations on shape maps such as map sum, and natural constraints become linear. For instance, local volume preserving maps can be associated with the orthogonality of functional map matrices, near isometries correspond to the commutativity of Laplacian operator, and conformal maps would preserve functional inner products. Interested readers are referred to the introductory course (Ovsjanikov et al. 
2016) for details. Following the functional map framework, many works are presented in recent years. BCICP (Ren et al. 2018) is proposed to promote orientation preservation. PFM (Rodol`a et al. 2017) is devised for partial shape matching. Moreover, ZoomOut (Melzi et al. 2019), SmoothShells (Eisenberger, Lahner, and Cremers 2020), DiscreteOp (Ren et al. 2021), MWP (Hu et al. 2021), CFM (Donati et al. 2022), and EDEO (Magnet et al. 2022) are proposed to enhance the effectiveness of functional maps. The pointwise map recovThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6207 eries of all these methods draw inspiration from ICP proposed by original work (Ovsjanikov et al. 2012), i.e., an iterative refinement based on Nearest Neighbor (NN) searches between eigenfunction matrices. However, this strategy is subject to the absence of topological constraints from Euclidean space, thus hindering properties of pointwise correspondences such as continuity and smoothness. Therefore, a recurring problem of functional map framework lies in prohibiting precise alignment at fine scales. Several works (Rodol`a, Moeller, and Cremers 2015; Rodola, M¨oller, and Cremers 2017; Pai et al. 2021) have noted this ill-defined problem of pointwise conversion from a functional map. PMF (Vestner et al. 2017b) and KernalMatching (Vestner et al. 2017a) both utilize the kernel density estimation for correspondence recovery but suffer from high computational burden. Fast Sinkhorn Filters (Pai et al. 2021) combines the functional map representation with the matrix scaling schemes from computational optimal transport. Nevertheless, this method is limited to the spectral embedding alignment, leaving room for improvement in terms of accuracy and complexity. COMB (Roetzer et al. 2022) devises a scalable combinatorial solver but suffers from high computational burden. GCPD (Fan et al. 2022) generalizes the kernel techniques based on classic CPD (Myronenko and Song 2010) to achieve extrinsic alignment, however relies on functional map methods as initialization. In conclusion, pointwise map recovery, as a widely prevalent step in functional map framework, exhibits evident limitations while holding significant significance and broad value. Existing methods either suffer from inadequate precision (e.g., nearest neighbors) or incur significant time and memory cost (e.g., assignment solvers). In order to release above limitations, we propose an efficient framework named LOcality Preserving Refinement (LOPR), which recovers more precise pointwise maps with explicit continuity and smoothness constraints. Given noisy pointwise maps have erroneous point-to-point correspondences as outliers, we firstly establish a mathematical model to differentiate between correct correspondences (inliers) and outliers. Observing local consistency during various deformations, we derive a closed-form solution with linear time and space complexity based on meshed neighborhood support to ensure continuity between points. Subsequently, we perform re-matching for the outliers using reliable mapping relations from the inliers. Specifically, to estimate a weight matrix based on Locally Linear Embedding (LLE), we construct neighborhoods from inliers for outliers in source shape. 
Then, in the target shape, we select appropriate corresponding points from the K-nearest neighbors in the spectral domain with the minimum LLE reconstruction errors, thus avoiding the previous ambiguity of searching only the nearest neighbors in the spectral domain. Additionally, our hyper-parameters can be selected adaptively, ensuring applicability to diverse resolutions. Qualitative comparisons with the recent pointwise map recovery method Sinkhorn (Pai et al. 2021) are shown in Fig. 1. In summary, the primary contributions are threefold: - We propose a novel and effective framework for pointwise map recovery, which embeds topological constraints from Euclidean spaces into the functional space. - Drawing on the local consistency within nonrigid deformations, we formulate a concise mathematical model that offers a closed-form solution with linear complexity and devise an outlier refinement strategy via LLE. - We conduct extensive experiments on public datasets to compare our method against state-of-the-art methods, which demonstrate the superiority of our method. Functional Maps Revisited Given a shape modeled as a smooth compact two-dimensional manifold M with area element dµ embedded into R^3, the space of square-integrable functions on the manifold M is denoted by L2(M) = { f : M → R, ∫_M f(x)^2 dµ < ∞ }. With the symmetric Laplace-Beltrami operator ∆M providing Fourier analysis on the manifold M, there exists an eigen-decomposition ∆M φi = λi φi for i ≥ 1, with eigenfunctions {φi}_{i≥1} forming an orthonormal basis of L2(M) and eigenvalues 0 = λ1 < λ2 ≤ . . .. In this case, any function f ∈ L2(M) can be represented as f(x) = ∑_{i≥1} ⟨f, φi⟩_M φi(x), where ⟨·,·⟩_M denotes the inner product on the manifold M. Functional Correspondence As proposed by the original work (Ovsjanikov et al. 2012), shape correspondences can be obtained by transferring functions between manifolds. Given a pointwise map T: M → N and a functional map TF: L2(M) → L2(N) from a manifold M to another manifold N, the image of any function f ∈ L2(M) is defined as TF(f) = f ◦ T^{−1}. Considering two orthonormal bases {φ_i^M}_{i≥0} and {φ_i^N}_{i≥0} of L2(M) and L2(N), respectively, the functional image is represented as:
TF(f) = ∑_j ∑_i ⟨f, φ_i^M⟩_M ⟨TF(φ_i^M), φ_j^N⟩_N φ_j^N,   (1)
where the coefficients are c_ji = ⟨TF(φ_i^M), φ_j^N⟩_N. We denote by C = [c_ji] ∈ R^{k×k} the matrix acting on the first k eigenfunctions, which efficiently approximates a smooth correspondence. The unknown matrix C encodes the functional map TF and can be derived from linear constraints such as descriptor and segment preservation together with operator commutativity (Ovsjanikov et al. 2012). Pointwise Map Recovery Given a functional map TF, the pointwise map T between two discrete shapes can be reconstructed (Ovsjanikov et al. 2012). Specifically, for any point x on M, its corresponding point T(x) on N is given by T(x) = arg max_y TF(δx)(y), where δx is the delta function at point x on the shape M. As ⟨δx, φi⟩ = φi(x), if we let the matrices ΦM ∈ R^{m×k} and ΦN ∈ R^{n×k} respectively denote the first k Laplace-Beltrami eigenfunctions of M and N, where each column corresponds to an eigenfunction and each row to a point, then each column vector of C(ΦM)^⊤ represents the transferred Fourier coefficients of a point on M and has high similarity to the column vector of ΦN^⊤ corresponding to its matching point on N. By Plancherel's theorem (Penney 1975), the distances between coefficient vectors of functions can be computed by L2 differences.
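To make this spectral nearest-neighbor conversion concrete (it is formalized as Eq. (2) below), a minimal NumPy sketch might look as follows. This is an illustrative reconstruction rather than the authors' code, and it assumes the eigenfunction matrices and the functional map C have already been computed; a brute-force distance matrix is used here, whereas the paper uses a KD tree for efficiency.

```python
import numpy as np

def recover_pointwise_map(C, Phi_M, Phi_N):
    # Convert a functional map C (k x k) into a pointwise map T: M -> N.
    # Phi_M (m, k) and Phi_N (n, k) hold the first k Laplace-Beltrami
    # eigenfunctions of the source and target shapes, one row per vertex.
    transferred = Phi_M @ C.T          # row i equals (C * Phi_M(i)^T)^T
    # nearest row of Phi_N in the L2 sense for every transferred coefficient vector
    d2 = (np.sum(transferred**2, axis=1, keepdims=True)
          - 2.0 * transferred @ Phi_N.T
          + np.sum(Phi_N**2, axis=1)[None, :])
    return np.argmin(d2, axis=1)       # T[i] = index of the matching vertex on N
```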
Thus, the pointwise map T can be recovered by minimizing the following function:
min_T ∑_{i=1}^{m} ‖ C (ΦM(i))^⊤ − (ΦN(T(i)))^⊤ ‖^2,   (2)
where ΦM(i) represents the i-th row of the matrix ΦM and ΦN(T(i)) is the row corresponding to the point on N matched to point i. Eq. (2) can be solved by nearest-neighbor search via a KD tree. In this way, the high-dimensional pointwise map T can be recovered from the functional space, though it may suffer from limited accuracy and continuity. Methodology This section describes our LOPR for pointwise map recovery in functional maps, where an effective framework based on local consistency is proposed to refine noisy point-to-point correspondences toward better performance. Given that pointwise maps contain both correct correspondences (inliers) and incorrect ones (outliers), our framework involves two steps: (i) distinguish inliers and outliers in the noisy pointwise map; (ii) refine the point correspondences of the outliers based on the reliable mapping of the inliers. Distinguish Inliers via Neighborhood Consensus Given two sets X = {xi}_{i=1}^{m} and Y = {yi}_{i=1}^{n} of vertex spatial coordinates on two discrete manifolds M and N, respectively, and denoting by T0 ∈ R^m an initial pointwise map that maps each point of M to a point of N, our first goal is to distinguish the inliers in the noisy point correspondence set {xi, y_{T0(i)}}_{i=1}^{m}. Ideal Isometric Formulation Denoting by I the unknown inlier set and by C a cost function, we have the following objective for finding the inliers in the isometric case:
I* = arg min_I C(I; X, Y, T0, λ).   (3)
Since the distance between two arbitrary points on M is preserved between their corresponding points on N, the cost function C can be defined as
C(I; X, Y, T0, λ) = ∑_{i∈I} ∑_{j∈I} ( d(xi, xj) − d(y_{T0(i)}, y_{T0(j)}) )^2 + λ(|T0| − |I|),   (4)
where d is the geodesic distance and |T0| = m. The first term of Eq. (4) restricts the variation of geodesic distances between any two point pairs, and the second term discourages outliers with a balance parameter λ > 0. Ideally, this cost function should be minimized to zero. General Shape Matching Real applications generally require the acquisition and analysis of nonrigidly deformable objects. Although complex nonrigid deformations produce non-isometric maps, the topology of the local structure remains consistent: in the discrete setting, the point distribution in a local region is preserved even after a severe deformation. Thus, we have a more general form of Eq. (4):
C(I; X, Y, T0, λ) = ∑_{i∈I} 1/(|N_{xi}| + |N_{y_{T0(i)}}|) [ ∑_{j | xj ∈ N_{xi}} ( d(xi, xj) − d(y_{T0(i)}, y_{T0(j)}) )^2 + ∑_{j | y_{T0(j)} ∈ N_{y_{T0(i)}}} ( d(xi, xj) − d(y_{T0(i)}, y_{T0(j)}) )^2 ] + λ(|T0| − |I|),   (5)
where N_x denotes the neighborhood of vertex x, i.e., the points adjacent to x under the triangular meshing. Due to the severe deformations, we only assume that distances are preserved within a local (neighboring/adjacent) region; therefore, the distance d in Eq. (5) is defined as
d(xi, xj) = 0 if xj ∈ N_{xi}, and 1 if xj ∉ N_{xi},   (6)
with a similar definition for d(yi, yj). Let a binary vector p encode the correctness of the correspondences, where pi = 1 indicates that (xi, y_{T0(i)}) is an inlier and pi = 0 an outlier. Substituting Eq. (6) into the cost function Eq. (5), we have
C(p; X, Y, T0, λ) = ∑_{i=1}^{m} pi/(|N_{xi}| + |N_{y_{T0(i)}}|) [ ∑_{j | xj ∈ N_{xi}} d(y_{T0(i)}, y_{T0(j)}) + ∑_{j | y_{T0(j)} ∈ N_{y_{T0(i)}}} d(xi, xj) ] + λ( m − ∑_{i=1}^{m} pi ).   (7)
In this case, the topological constraints based on locality consistency are invariant to translation, rotation, and scale, facilitating robustness to various deformations. Optimization To minimize the cost function in Eq.
(7), we reformulate it by merging the terms related to pi as: C(p; X, Y, T0, λ) = m X i=1 pi(ci −λ) + λm, (8) where ci is defined as the disparity score: ci = 1 − 2ni |Nxi| + |NyT0(i)|, (9) and ni = count(j|xj ∈Nxi, yT0(j) ∈NyT0(i)) is the number of shared elements in two neighborhoods. We note that ci is approximate to 0 if (xi, yT0(i)) is an ideally correct correspondence. Since a surface usually consists of a vary large number of vertices (e.g., ten thousands), it is necessary to consider not only a single neighborhood. Thus to better describe the structure of local region, we extend the disparity score ci into eci = ci + P j|xj∈Nxi cj 1 + |Nxi| , (10) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6209 and the cost function is updated as C(p; X, Y, T0, λ) = m X i=1 pi(eci −λ) + λ|T0|. (11) Eq. (11) shows that any correspondence with eci > λ produces a positive term to increase the cost function, while that with eci < λ leads to a negative term thus decreasing the objective. λ is a balance factor that determines the trade-off between two terms in Eq. (4), i.e., the higher value of λ, the smaller and more reliable the set I is. Apparently the consensus score {eci}m i=1 can be computed in advance. Given triangular meshing and mapping T0 of two manifolds, therefore, the optimal p minimizing Eq. (11) can be obtained by: pi = 1, eci ≥λ, 0, eci < λ, i = 1, . . . , m. (12) Hence, the optimal inlier set I∗is: I∗= {i|pi = 1, i = 1, · · · , m}, (13) which is a reliable estimation not only satisfying local consistency, but also possessing well continuity due to the constraint of adjacent relations. Correspondence Refinement via LLE An reliable inlier set I can be used to refine the correspondences of outliers. To this end, we propose a local geometric constraint based on Locally Linear Embedding (LLE) (Roweis and Saul 2000). Exploiting the local consensus prior, LLE embeds the topology of a local region into a lowdimensional manifold, which learns a better distance metric with strong stability. Specifically, we divide the vertex set X into two parts: XI indicates inliers with correct correspondences, and XO denotes the outlier with mismatches. We also have similar YI and YO for Y. Next, we show how to choose a corresponding vertex from YO for each point in XO based on reliable maps (XI, YI), which introduces geometric constraints in Euclidean space. Firstly, for each point xO i of XO, we search its K1 nearest neighbors from XI with the L2 distance of their coordinates on manifold M, obtaining a neighborhood set N K1 xO i : N K1 xO i = K1-NNsearch(xO i , X I). (14) In this step, nO neighborhood sets will be generated, where nO = |XO| is the number of points in XO. Secondly, to derive a weight matrix W ∈RnO×K1, we minimize the reconstruction errors measured by the following cost function: E(W) = nO X i=1
xO i − K1 X j=1 Wijxi,j
2 , s.t. ∀i, K1 X j=1 Wij = 1, (15) where ∀i, xi,j ∈N K1 xO i , j = 1, . . . , K1, and the topology of vertex distribution is preserved in the weight matrix W via Algorithm 1: Locality Preserving Refinement for Pointwise Map Recovery Input: A pair of discrete manifolds M and N with bases ΦM and ΦN and a functional map C Parameter: λ Output: pointwise map T 1: Obtain T0 by selecting nearest neighbors by Eq. (2); 2: Calculate disparity scores {ci}m i=1 by Eq. (10); 3: Determine optimal inlier set I∗by Eq. (13); 4: Construct neighborhoods {N K1 xO i }nO i=1 by Eq. (14); 5: Compute matrix W by minimizing Eq. (15); 6: Obtain outlier map TO by Eqs. (16) and (17); 7: Obtain T by combining TO and TI from I∗; the neighboring point relationship. We can efficiently solve Eq. (15) for Wij by the least squares. Thirdly, we seek the corresponding vertex yO TO(i) from YO to xO i from XO, so as to obtain the pointwise map TO for outliers. We solve this in an embedded manifold representation using the spectral similarity. Specifically, we obtain K1 corresponding points TI(N K1 xO i ) of neighborhood N K1 xO i by pointwise map TI. Then, we find the K2 nearest neighbors between the column vectors of C(ΦO M)⊤and (ΦO N )⊤ as the spectral neighborhoods, where ΦO M/ΦO N ∈RnO×k is the matrix consisting of the row vectors of ΦM/ΦN corresponding to outliers XO/YO, respectively. Next, the correspondence yO TO(i) of vertex xO i can be chosen as the vertex with minimal reconstruction error from the spectral neighborhood N K2 C(ΦO M(i))⊤: TO(i) = arg min j|ΦN (j)∈N K2 C(ΦO M(i))⊤
yj − K1 X k=1 Wikyi,k
2 , (16) where the spectral neighborhood N K2 C(ΦO M(i))⊤= K2-NNsearch(C(ΦO M(i))⊤, (ΦO N )⊤), (17) and ∀i, xO i ∈XO, yi,k ∈TI(N K1 xO i ), k = 1, . . . , K1. Finally, the pointwise maps TI for inliers and TO derived from Eq. (16) form a full pointwise map T. The overall algorithm flow is described in Algorithm 1. Spectral ambiguity is the main issue that degrades the pointwise map recovery, which occurs when two points are not correct correspondence but with the highest similarity in Fourier coefficients. This happens when those two points are intrinsically symmetric, such as the left and right elbows of a human body; or two correct matching points are not the most similar in spectral domain. These stem from the limited constraints in low-dimensional functional space. To avoid this, we combine topological and functional spaces by constructing correspondences based on geometric LLE under relaxed spectral constraints (i.e., K2 nearest neighbors). As shown in Fig. 2, the correspondence refinement via LLE can effectively reduce spectral ambiguity during an iterative The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6210 LLE Refinement GT 3rd Iteration Noisy Pointwise Map Final 5th Iteration Distinguish Inliers Figure 2: A visual illustration about the process of our LOPR, where LLE refinement for outliers after distinguishing inliers yields noticeable improvements. 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 -values 0 0.005 0.01 0.015 Relative Geodesic Error 4 5 6 7 8 9 Average Time (s) Figure 3: Parameter analysis of λ. Performance of LOPR under different λ values is evaluated using the mean and standard deviation of errors (Red), with the average time displayed on the right (Blue). functional map framework. Additionally, for simple matching pair of shapes, we provide a cutoff condition to avoid redundant iterations, i.e., the inlier set no longer grows, at which point the cardinality of inlier set can be regarded as a metric for evaluating the pointwise maps. Parameter Setting Our method introduces three hyperparameters, i.e., λ, K1, K2. We show how to set them in the following. Parameter λ is the crucial threshold to determine the inlier set I. To find an appropriate value, we select 200 shape pairs from the FAUST dataset (Bogo et al. 2014) as the test set, and use the whole LOPR for pointwise map recovery of MWP (Hu et al. 2021). Fig. 3 shows that λ can be set to 0.2 to balance the geometric errors and the runtime. Parameter K1 determines the number of nearest neighbors in Eq. (14), which is used to construct local manifold representations in Eq. (15). Intuitively, a higher value of K1 is associated with stronger local geometric constraints but more time consumption. Considering a larger set I can support more nearest neighbors for LLE, we empirically set an adaptive value for K1 w.r.t. the cardinality of I, e.g., K1 = ⌈|I|/100⌉, where ⌈·⌉means rounding up. Similarly, as the number of nearest neighbors searched in the spectral domain in Eq. (17), K2 also deserves to be proportional to |I|, e.g., K2 = ⌈|I|/1000⌉. Computational Complexity Our LOPR involves two main steps, namely inlier identification and correspondence refinement. In the first step, the computational cost is determined by the procedure of obtaining the disparity score ci for each point. Given the adjacencies provided by the shape meshing, the computational complexity is O(N), where N is the number of points on manifold M. 
The computational consumption of the second step depends on two K-nearest neighbor searches using K-D tree, and the complexities are O((K1 + NO) log NO) and O((K2 + NO) log NO), where NO is the number of outliers on manifold M. Since K1 ≪NO and K2 ≪NO, the time complexity of the second step can be simply written as O(NO log NO) and space complexity as O(NO). Since NO and N belong to the same order of magnitude, therefore the proposed LOPR has linearithmic complexity for both time and space w.r.t. the number of points N on the manifold, which guarantees high efficiency of our method. Experiments In this section, we apply our LOPR to challenging shapes from several public datasets and compare it with classical and state-of-the-art approaches. Implement Details Datasets Four public datasets are used in our exvluation experiments: • FAUST (Bogo et al. 2014) contains a total of 100 shapes, representing 10 poses of 10 different human subjects, exhibiting significant variations across different subjects. Each shape comprises 6890 element vertices, making it applicable even for computationally demanding methods. For quantitative evaluation, we randomly selected 300 shape pairs, encompassing both isometric and nonisometric deformations. • TOSCA (Bronstein, Bronstein, and Kimmel 2008) provides 76 shapes divided into 8 distinct categories (ranging from human to animal), with almost every shape containing 10k vertices. Our experimental evaluation involves all isometric shape pairs, totaling 414 pairs. • SCAPE (Anguelov et al. 2005) contains 71 registered meshes representing different poses of the same human subject. Each of these meshes consists of 12500 vertices. We randomly selected 200 matching pairs for quantitative evaluations. • TOPKIDS (L¨ahner et al. 2016) consists of 26 nonintersecting manifold shapes, which are generated by merging self-intersecting parts of the shapes from KIDS dataset (Rodola et al. 2014) thus highly challenging. We randomly selected 200 matching pairs from lowresolution shapes (each mesh contains approximately 10k vertices) for quantitative evaluations. Evaluation Princeton benchmark protocol (Kim, Lipman, and Funkhouser 2011) is used to evaluate the accuracy of shape matching. Specifically, given a ground-truth correspondence (x, y∗) where x ∈X and y∗∈Y, the error The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6211 0 0.05 0.1 Relative Geodesic Error 0 0.2 0.4 0.6 0.8 1 Correspondences (%) FAUST SHOT ICP BCICP ZoomOut DiscreteOp SmoothShells MWP SinkHorn GCPD LOPR 0 0.05 0.1 Relative Geodesic Error 0 0.2 0.4 0.6 0.8 1 Correspondences (%) SCAPE SHOT ICP BCICP ZoomOut DiscreteOp SmoothShells MWP SinkHorn GCPD LOPR 0 0.05 0.1 Relative Geodesic Error 0 0.2 0.4 0.6 0.8 1 Correspondences (%) TOPKIDS SHOT ICP ZoomOut SmoothShells MWP SinkHorn GCPD LOPR 0 0.05 0.1 Relative Geodesic Error 0 0.2 0.4 0.6 0.8 1 Correspondences (%) TOSCA SHOT BCICP SmoothShells MWP SinkHorn GCPD LOPR Figure 4: Evaluations of our LOPR with other methods on FAUST, SCAPE, TOPKIDS, and TOSCA datasets with CQCs. Target GCPD LOPR GT Figure 5: Qualitative demonstration using color transfer. In each row of results, the first and last shapes represent the target and source shapes matched with the ground truth respectively, the second shape depicts the result of GCPD, while the third shape represents the result of our LOPR. The shapes in the topmost row originate from SCAPE, those in the second row are sourced from TOPKIDS, and the ones in the bottom row are from TOSCA. 
for obtained correspondence (x, y) is calculated by relative geodesic distance between y and y∗normalized by diameter of Y: ϵ(x) = dgeo (y,y∗) √ area (Y). We employ the Correspondence Quality Characteristics (CQC) curves (Kim, Lipman, and Funkhouser 2011), which depict the percentages of matches that have geodesic errors no greater than r, and the average geodesic error across all vertices on the shape M, to quantify the correspondence quality. Competitors The competitor methods include ICP (Ovsjanikov et al. 2012), BCICP (Ren et al. 2018), ZoomOut (Melzi et al. 2019), DiscreteOp (Ren et al. 2021), SmoothShells (Eisenberger, Lahner, and Cremers 2020), MWP (Hu et al. 2021), Sinkhorn (Pai et al. 2021), and GCPD (Fan et al. 2022). Methods FAUST SCAPE TOPKIDS TOSCA SmoothShells 0.0442 0.1175 0.1625 0.0198 MWP 0.0100 0.0096 0.0502 0.0171 Sinkhorn 0.0089 0.0096 0.0515 0.0175 GCPD 0.0084 0.0087 0.0709 0.0159 LOPR 0.0070 0.0091 0.0466 0.0157 Table 1: Average relative geodesic errors of our LOPR and state-of-the-arts on four public datasets, where the bold indicates the best. Settings The matching results in SHOT (Tombari, Salti, and Di Stefano 2010) descriptor space are used as initialization for ICP, BCICP, DiscreteOp, and MWP. GCPD uses the results of MWP as the input. Sinkhorn and our LOPR are used to recover pointwise maps of MWP with 5 discrete filters and 4 iterations. For all competitors, we use the settings and codes provided online by their authors. In particular, the number of eigenfunctions is uniformly set to 500, which for ZoomOut is the maximal dimension of its upsampling iterations. As suggested by its authors, GCPD is initialized by MWP with 200 eigenfunctions, and the subsequent smoothing model is still based on a 500-dimensional eigenfunction space. All experiments are conducted on a PC with Intel(R) Core i9-9920X CPU at 3.50GHz, using MATLAB R2018a. And K-nearest neighbor search is accelerated by GPU. Results Analysis The quantitative evaluations on four public datasets include CQC curves and average errors, which are shown in Fig. 4 and Table 1, respectively. MWP (Hu et al. 2021), Sinkhorn (Pai et al. 2021), and GCPD (Fan et al. 2022), as the recent State-Of-The-Art (SOTA) methodologies, exhibit evident advantages over their predecessors, notably demonstrated on the FAUST, SCAPE, and TOPKIDS datasets. Notably, GCPD, as a SOTA advancement, employs MWP to initialize an externally-deformation-based probabilistic model and demonstrates decent performances. However, our method consistently achieves lower average errors across the majority of datasets, as illustrated in Table 1. Qualitative comparisons between our LOPR and GCPD are provided in Fig. 5, where the second row comes from the TOPKIDS dataset (see the third sub-figure in Fig. 4 for quantitative analysis). These visual comparisons underscore LOPR’s superior capability to address intricate matching pairs, exemThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6212 MWP+NN MWP+Sinkhorn MWP+LOPR GT Target Figure 6: Qualitative comparisons for NN, Sinkhorn, and our LOPR on MWP using color transfer. The shapes in the topmost row originate from SCAPE, those in the second row are sourced from TOPKIDS, and the ones in the bottom row are from TOSCA. Best viewed zoomed in. 
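As a small illustration of the evaluation protocol above, the following sketch computes the average relative geodesic error and a CQC curve from per-vertex errors. It is not the authors' evaluation code: the error values are toy numbers, and computing the geodesic distances themselves (e.g., with a shortest-path solver on the mesh) is assumed to be done elsewhere.

```python
import numpy as np

def cqc_curve(errors, thresholds):
    # Fraction of correspondences whose relative geodesic error is <= each threshold r.
    errors = np.asarray(errors)
    return np.array([(errors <= r).mean() for r in thresholds])

# example: per-vertex relative geodesic errors produced by some matching method
errors = np.array([0.0, 0.004, 0.02, 0.11, 0.006])
mean_error = errors.mean()                              # average error (cf. Table 1)
curve = cqc_curve(errors, np.linspace(0.0, 0.1, 101))   # x-axis range of Fig. 4
```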
Methods FAUST SCAPE TOPKIDS Wolf Michael #Vertices 6890 12500 10399 4344 10000 SmoothShells 98.957 161.03 113.68 56.979 112.54 MWP 1.8358 4.3591 3.6495 0.8974 3.1439 GCPD 9.3658 20.146 14.184 2.8385 12.615 Sinkhorn 24.072 101.70 63.872 10.010 64.415 LOPR 3.9523 13.357 9.4687 1.4258 7.1651 Table 2: Average runtime (in seconds) demonstration of our LOPR and state-of-the-art on different resolutions. We present the results on FAUST, SCAPE, and TOPKIDS datasets and two models (Wolf and Michael) of TOSCA. plified by challenging datasets such as TOPKIDS. In addressing the pointwise map recovery problem, analogous methods include classical Nearest Neighbors (NN) (Ovsjanikov et al. 2012) and the recent Sinkhorn (Pai et al. 2021) method. To visually contrast our method with these competitors, qualitative comparisons are showcased in Fig. 6. Here, the functional map matrices are uniformly computed utilizing MWP with 5 iterations. Evidently, our LOPR excels in recovering more accurate pointwise correspondences comparing classic NN and representative Sinkhorn, which is due to that both of them lack the geometric constraints of the external space. Furthermore, the average runtimes of our method, alongside several representative techniques, are reported in Table 2. The runtime of our LOPR is deemed acceptable across various shape resolutions, notably outpacing the recent methods Sinkhorn and GCPD due to the linearithmic computational complexity of our LOPR. Partial Matching The geometric constraints embedded in LOPR are rooted in the consistency of small local regions during deformation, displaying broad applicability, even in partial matching. Target Sinkhorn LOPR GT Target Sinkhorn LOPR GT Figure 7: Qualitative examples of our LOPR and Sinkhorn on partial shape matching using color transfer. In each of these two groups, the first and last shapes are target and source shapes with ground-truth matching, the second shape is the result of Sinkhorn, and the third shape represents the results of our LOPR. 0 0.05 0.1 Relative Geodesic Error 0 0.05 0.1 0.15 0.2 0.25 Correspondences (%) Partial Matching ICP ZoomOut MWP SinkHorn LOPR 0 50 100 Proportion (%) 0 20 40 60 Runtime (s) Partial Matching ICP ZoomOut MWP SinkHorn LOPR Figure 8: Quantitative comparisons for our LOPR and stateof-the-art functional map methods on partial shape matching tasks. The left contains the CQC curve of each method, while the right shows the cumulative distribution function of runtime, where the point situated along the curve at coordinates (x, y) signifies that the runtime does not exceed y for a percentage of x% of the instances. Employing the cuts dataset provided by Partial Functional Correspondences (Rodol`a et al. 2017), which encompasses 456 partial shapes spanning different classes of TOSCA, we compare our LOPR with representative functional map methods to achieve partial-to-full matching. Qualitative instances of comparing our LOPR with SOTA Sinkhorn are presented in Fig. 7, where our LOPR can achieve more accurate and continuous pointwise maps. From quantitative CQC curves and runtime results illustrated in Fig. 8, comparing with other functional map methods, our method is capable of achieving competitive accuracy improvements within a moderate time cost. Conclusion In this paper, we propose a concise and efficient framework for pointwise map recovery in functional maps to address nonrigid shape matching, i.e., LOPR. 
The geometric constraints we apply are based on local consistency, wherein small regions on a manifold exhibit invariance across various deformations, including nonrigid transformation. The process of LOPR involves two steps, i.e., outlier identification based on neighborhood support and correspondence refinement via LLE. Neighborhood support enforces continuity among points within local regions, while the utilization of LLE eliminates spectral ambiguities such as intrinsic symmetries. Experiments on public benchmarks validate the superiority of our LOPR over the state-of-the-art in terms of accuracy with efficiency and generality for partial matching. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6213 Acknowledgments This work was supported by the National Natural Science Foundation of China (62306214 and 62276192). References Anguelov, D.; Srinivasan, P.; Koller, D.; Thrun, S.; Rodgers, J.; and Davis, J. 2005. SCAPE: shape completion and animation of people. ACM Transactions on Graphics (TOG), 24(3): 408–416. Bogo, F.; Romero, J.; Loper, M.; and Black, M. J. 2014. FAUST: Dataset and evaluation for 3D mesh registration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3794–3801. Bronstein, A. M.; Bronstein, M. M.; and Kimmel, R. 2008. Numerical geometry of non-rigid shapes. Springer Science & Business Media. Deng, B.; Yao, Y.; Dyke, R. M.; and Zhang, J. 2022. A Survey of Non-Rigid 3D Registration. Computer Graphics Forum, 41(2): 559–589. Donati, N.; Corman, E.; Melzi, S.; and Ovsjanikov, M. 2022. Complex functional maps: A conformal link between tangent bundles. Computer Graphics Forum, 41(1): 317–334. Eisenberger, M.; Lahner, Z.; and Cremers, D. 2020. Smooth shells: Multi-scale shape registration with functional maps. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 12265–12274. Ezuz, D.; and Ben-Chen, M. 2017. Deblurring and denoising of maps between shapes. Computer Graphics Forum, 36(5): 165–174. Fan, A.; Ma, J.; Tian, X.; Mei, X.; and Liu, W. 2022. Coherent Point Drift Revisited for Non-rigid Shape Matching and Registration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1424–1434. Hartman, E.; Sukurdeep, Y.; Klassen, E.; Charon, N.; and Bauer, M. 2023. Elastic shape analysis of surfaces with second-order sobolev metrics: a comprehensive numerical framework. International Journal of Computer Vision, 131(5): 1183–1209. Hu, L.; Li, Q.; Liu, S.; and Liu, X. 2021. Efficient deformable shape correspondence via multiscale spectral manifold wavelets preservation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 14536–14545. Jiang, L.; Lee, C.; Teotia, D.; and Ostadabbas, S. 2022. Animal pose estimation: A closer look at the state-of-the-art, existing gaps and opportunities. Computer Vision and Image Understanding, 103483. Kim, V. G.; Lipman, Y.; and Funkhouser, T. 2011. Blended intrinsic maps. ACM Transactions on Graphics, 30(4): 1–12. L¨ahner, Z.; Rodol`a, E.; Bronstein, M. M.; Cremers, D.; Burghard, O.; Cosmo, L.; Dieckmann, A.; Klein, R.; Sahillioˇglu, Y.; et al. 2016. SHREC’16: Matching of deformable shapes with topological noise. In Proceedings of the Eurographics Workshop on 3D Object Retrieval, 55–60. Magnet, R.; Ren, J.; Sorkine-Hornung, O.; and Ovsjanikov, M. 2022. Smooth non-rigid shape matching via effective Dirichlet energy optimization. In Proceedings of the International Conference on 3D Vision, 495–504. 
Melzi, S.; Ren, J.; Rodola, E.; Sharma, A.; Wonka, P.; and Ovsjanikov, M. 2019. Zoomout: Spectral upsampling for efficient shape correspondence. arXiv preprint arXiv:1904.07865. Myronenko, A.; and Song, X. 2010. Point set registration: Coherent point drift. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(12): 2262–2275. Ovsjanikov, M.; Ben-Chen, M.; Solomon, J.; Butscher, A.; and Guibas, L. 2012. Functional maps: a flexible representation of maps between shapes. ACM Transactions on Graphics, 31(4): 1–11. Ovsjanikov, M.; Corman, E.; Bronstein, M.; Rodol`a, E.; Ben-Chen, M.; Guibas, L.; Chazal, F.; and Bronstein, A. 2016. Computing and processing correspondences with functional maps. In SIGGRAPH ASIA 2016 Courses, 1–60. Pai, G.; Ren, J.; Melzi, S.; Wonka, P.; and Ovsjanikov, M. 2021. Fast sinkhorn filters: Using matrix scaling for nonrigid shape correspondence with functional maps. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 384–393. Penney, R. 1975. Abstract Plancherel theorems and a Frobenius reciprocity theorem. Journal of Functional Analysis, 18(2): 177–190. Ren, J.; Melzi, S.; Wonka, P.; and Ovsjanikov, M. 2021. Discrete optimization for shape matching. Computer Graphics Forum, 40(5): 81–96. Ren, J.; Poulenard, A.; Wonka, P.; and Ovsjanikov, M. 2018. Continuous and orientation-preserving correspondences via functional maps. ACM Transactions on Graphics, 37(6): 1– 16. Rodol`a, E.; Cosmo, L.; Bronstein, M. M.; Torsello, A.; and Cremers, D. 2017. Partial functional correspondence. Computer Graphics Forum, 36(1): 222–236. Rodol`a, E.; Moeller, M.; and Cremers, D. 2015. Point-wise map recovery and refinement from functional correspondence. arXiv preprint arXiv:1506.05603. Rodola, E.; M¨oller, M.; and Cremers, D. 2017. Regularized pointwise map recovery from functional correspondence. Computer Graphics Forum, 36(8): 700–711. Rodola, E.; Rota Bulo, S.; Windheuser, T.; Vestner, M.; and Cremers, D. 2014. Dense non-rigid shape correspondence using random forests. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4177– 4184. Roetzer, P.; Swoboda, P.; Cremers, D.; and Bernard, F. 2022. A scalable combinatorial solver for elastic geometrically consistent 3d shape matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 428–438. Roweis, S. T.; and Saul, L. K. 2000. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500): 2323–2326. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6214 Sahillio˘glu, Y. 2020. Recent advances in shape correspondence. The Visual Computer, 36(8): 1705–1721. Sumner, R. W.; and Popovi´c, J. 2004. Deformation transfer for triangle meshes. ACM Transactions on Graphics, 23(3): 399–405. Tombari, F.; Salti, S.; and Di Stefano, L. 2010. Unique signatures of histograms for local surface description. In Proceedings of the European Conference on Computer Vision, 356–369. Van Kaick, O.; Zhang, H.; Hamarneh, G.; and Cohen-Or, D. 2011. A survey on shape correspondence. Computer Graphics Forum, 30(6): 1681–1707. Vestner, M.; L¨ahner, Z.; Boyarski, A.; Litany, O.; Slossberg, R.; Remez, T.; Rodola, E.; Bronstein, A.; Bronstein, M.; Kimmel, R.; et al. 2017a. Efficient deformable shape correspondence via kernel matching. In Proceedings of the International Conference on 3D Vision, 517–526. Vestner, M.; Litman, R.; Rodola, E.; Bronstein, A.; and Cremers, D. 2017b. 
Product manifold filter: Non-rigid shape correspondence via kernel density estimation in the product space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3327–3336. | 2024 | 690 |
18,507 | SocialCVAE: Predicting Pedestrian Trajectory via Interaction Conditioned Latents Wei Xiang1*, Haoteng Yin2*, He Wang3, Xiaogang Jin1† 1State Key Lab of CAD&CG, Zhejiang University 2Department of Computer Science, Purdue University 3Department of Computer Science, University College London [email protected], [email protected], he [email protected], [email protected] Abstract Pedestrian trajectory prediction is the key technology in many applications for providing insights into human behavior and anticipating human future motions. Most existing empirical models are explicitly formulated by observed human behaviors using explicable mathematical terms with a deterministic nature, while recent work has focused on developing hybrid models combined with learning-based techniques for powerful expressiveness while maintaining explainability. However, the deterministic nature of the learned steering behaviors from the empirical models limits the models’ practical performance. To address this issue, this work proposes the social conditional variational autoencoder (SocialCVAE) for predicting pedestrian trajectories, which employs a CVAE to explore behavioral uncertainty in human motion decisions. SocialCVAE learns socially reasonable motion randomness by utilizing a socially explainable interaction energy map as the CVAE’s condition, which illustrates the future occupancy of each pedestrian’s local neighborhood area. The energy map is generated using an energy-based interaction model, which anticipates the energy cost (i.e., repulsion intensity) of pedestrians’ interactions with neighbors. Experimental results on two public benchmarks including 25 scenes demonstrate that SocialCVAE significantly improves prediction accuracy compared with the state-of-the-art methods, with up to 16.85% improvement in Average Displacement Error (ADE) and 69.18% improvement in Final Displacement Error (FDE). Code is available at: https://github.com/ ViviXiang/SocialCVAE. Introduction Pedestrian trajectory prediction is a vital task in intelligent systems for understanding human behavior and anticipating future motions. Predicting the future movements of pedestrians in complex environments is challenging due to the highly dynamic and subtle nature of human interactions. Empirical methods explicitly model interactions for crowd motion prediction, e.g., rule-based model (Reynolds 1987; Reynolds et al. 1999), force-based model (Helbing and Molnar 1995; Karamouzas, Skinner, and Guy 2014) and energy-based model (Guy et al. 2010; Karamouzas et al. *These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 2017). These models are explainable but with lower predictive accuracy, as they cannot fit observed data precisely. In contrast, various methods based on deep neural nets have been proposed with social interaction modeling by employing social pooling mechanism (Alahi et al. 2016; Gupta et al. 2018), graph-based modeling (Mohamed et al. 2020; Bae and Jeon 2021), and attention mechanism (Mangalam et al. 2020; Shi et al. 2021; Tsao et al. 2022). While they achieve expressive power and generalization ability, their black-box nature makes the learned model less interpretable to human understanding. It remains a challenge to explore the trade-off between model explainability and prediction capability. 
Recent research effort has been focused on exploring the aforementioned trade-off by designing hybrid models that combine deep neural nets with explainable interaction (Kothari, Sifringer, and Alahi 2021; Yue, Manocha, and Wang 2022). However, their prediction accuracy suffers from the deterministic nature of the physics-driven behaviors (Yue, Manocha, and Wang 2023). To overcome the challenges while retaining the advantages of hybrid methods, we propose SocialCVAE, a new hybrid model for pedestrian trajectory prediction that combines an energy-based interaction model for socially explainable interaction anticipations with an interactionconditioned CVAE for multimodal prediction. Fig. 1 illustrates the framework of our method. SocialCVAE takes advantage of the data-driven optimization model (Xiang et al. 2023) to quantify the interaction energy cost (i.e., repulsion intensity) of the temporal coarse predictions and explicitly represent the interaction energies into the local energy map. Using the CVAE model conditioned on the interaction energy map, SocialCVAE learns socially reasonable residuals for the temporal motion decisions. Similar to the previous methods (Zhou et al. 2021; Yue, Manocha, and Wang 2022) that achieve state-of-the-art (SOTA) performance, we employ the recursive prediction scheme to update future trajectories step by step with the input trajectories at each step including the updated trajectories. We conduct extensive experiments on two popular benchmark datasets (ETH-UCY (Pellegrini et al. 2009; Lerner, Chrysanthou, and Lischinski 2007) and SDD (Robicquet et al. 2016)), and demonstrate SocialCVAE’s superiority over existing state-of-the-art methods in terms of prediction accuracy. Furthermore, our results highlight the effectiveThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6216 Past Trajectories Temporal Encoder Candidate Velocity Space Attention Model Coarse Motion Prediction Energy-based Interaction Heterogeneous Neighbors Scene Segmentation Multimodal Prediction Past Trajectories CVAE Data-driven Interaction Interaction Energy Map Predicted Trajectories (a) (b) (c) Figure 1: The framework of SocialCVAE. (a) The coarse motion prediction model learns the temporal motion tendencies and predicts a preferred new velocity for each pedestrian. (b) The energy-based interaction model constructs a local interaction energy map to anticipate the cost of pedestrian interactions with heterogeneous neighbors, including pedestrians, static environmental obstacles found in the scene segmentation (e.g., buildings), and dynamic environmental obstacles (e.g., vehicles). (c) The multimodal prediction model predicts future trajectories using a CVAE model conditioning on the past trajectories and the interaction energy map. ness of using an energy-based interaction model for pedestrian trajectory prediction and provide insights into how to better model pedestrian behavior in complex environments. The main contributions are concluded as follows: • We propose a novel multimodal pedestrian trajectory prediction model (SocialCVAE) that leverages the advantages of both empirical and learning-based approaches for better prediction performance and interpretability of motion decisions. • SocialCVAE explores the behavioral uncertainty of human motion by introducing socially explainable interaction energy maps generated from an energy-based interaction model. 
Both the quantitative and quality results of SocialCVAE demonstrate that the energy-based interaction helps the model better understand the social relationships between pedestrians, leading to improved prediction performance. Related Works Energy-Based Interaction Methods. Considering the nonlinear nature of pedestrian motion dynamics that pedestrians try to anticipate and react to the future trajectories of their neighbors for collision avoidance (Karamouzas, Skinner, and Guy 2014), energy-based methods (Karamouzas et al. 2017; Ren et al. 2019; Xiang et al. 2023) predicts pedestrians’ future trajectories by minimizing the anticipated social interaction cost calculated by energy functions, i.e., the anticipated repulsion intensity from neighbors. These models explicitly predict pedestrians’ future motion by consuming minimum interaction cost, but provide less prediction accuracy due to relying solely on explicit motion features such as velocity. Comparatively, our method is a hybrid model that leverages the social explainability of energy-based interaction models with the prediction capability of deep-learning models, resulting in better prediction performance. Data-Driven Methods. With advances in data acquisition techniques, deep learning methods have been proposed and have achieved impressive results in predicting pedestrian trajectories. RNN structure has widely been used to capture temporal dependencies while considering social interactions using pooling mechanism (Alahi et al. 2016; Bisagno, Zhang, and Conci 2018; Gupta et al. 2018) or attention mechanism (Vemula, Muelling, and Oh 2018; Sadeghian et al. 2019; Salzmann et al. 2020; Xu, Hayet, and Karamouzas 2022). Graph-based models that utilize distance-based physical adjacency matrices (Mohamed et al. 2020; Bae and Jeon 2021; Xu et al. 2022) or attention-based learnable adjacency matrices (Huang et al. 2019; Shi et al. 2021; Duan et al. 2022; Wu et al. 2023) to learn pedestrian social interactions have also been developed. Besides, transformer-based models incorporate attention mechanisms (Yu et al. 2020; Yuan et al. 2021; Tsao et al. 2022) to model social interaction for better performance in pedestrian trajectory prediction tasks. Recently, prediction accuracy improvement has been made by NSP-SFM (Yue, Manocha, and Wang 2022), a multimodal prediction model which is a hybrid of steering behavior learning based on conservative position-dependent forces with unexplainable randomness learning. However, the deterministic force-driven behavior of NSP may result in performance degradation (Yue, Manocha, and Wang 2023). Different from NSP-SFM, our method combines the energybased interaction model for explicit interaction cost anticipation with interaction-conditioned human motion uncertainty learning, resulting in providing socially reasonable randomness of future motion and yielding superior prediction performance. Methodology Problem Formulation Pedestrian trajectory prediction aims to predict the positions of pedestrians’ trajectories in a traffic scenario. Given The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6217 the observed T-time step pedestrian trajectories X1:T = {X1, X2, . . . , XT }, the task is to predict the pedestrians’ future trajectories ˆ XT +1:T +M = { ˆXT +1, ˆXT +2, . . . , ˆXT +M} over the next M time steps, where Xi = {xt 1, . . . , xt n} ∈ Rn×2 denotes the spatial (2D-Cartesian) coordinates of n pedestrian at time step t. 
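To make this notation concrete, the short sketch below shows one way to slice a raw trajectory tensor into the observed window X_{1:T} and the ground-truth future used as the prediction target. The array names and the default frame counts (T = 8 observed, M = 12 predicted, matching the experimental setup reported later) are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def split_observed_future(traj, T=8, M=12):
    """Slice one scene into the observed window and the prediction target.

    traj : (T + M, n, 2) array holding the 2D coordinates of n pedestrians
           at every time step.
    Returns X_{1:T} with shape (T, n, 2) and the ground-truth future
    X_{T+1:T+M} with shape (M, n, 2).
    """
    assert traj.shape[0] >= T + M, "sequence too short for this window"
    observed = traj[:T]
    future_gt = traj[T:T + M]
    return observed, future_gt

# Example with random data: 5 pedestrians tracked for 20 frames.
obs, fut = split_observed_future(np.random.randn(20, 5, 2))
```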
Formally, taking the past T-time step trajectories of pedestrians Xt−T +1:t, the trajectories of other dynamically moving obstacles (e.g., vehicles) Xd, and the scene segmentation S of the surrounding environment (see Fig. 1b, the colored area is not feasible for human motions) as input, the prediction task is formulated as: ˆ Xt+1 = f(Xt−T +1:t, Xd, S), (1) where f is any prediction model. Framework The framework of our method is illustrated in Fig. 1. Overall, SocialCVAE learns the uncertainty of human motion and implements it as predicting residuals of a coarse prediction. Specifically, a coarse prediction model (Fig. 1a) predicts a temporally reasonable preferred velocity and a new position for each pedestrian by aggregating the information from a discrete candidate velocity space, which is built based on the learned temporal tendency from an RNN-structured temporal encoder and includes possible temporal motion decisions (velocities). Then, an energy-based interaction model (Fig. 1b) anticipates the social interaction cost of the preferred velocity with heterogeneous neighbors, including interactions with pedestrians, static obstacles (e.g., buildings) obtained from the scene segmentation, and dynamic obstacles (e.g., vehicles). We map the interaction energy with the neighbors onto a local energy map to represent the future occupancy of the local neighborhood area. Finally, a CVAE model (Fig. 1c), which is conditioned on both past trajectories and the interaction energy map, predicts the socially reasonable residuals of the preferred new position and generates multimodal future trajectories. Coarse Motion Prediction The coarse motion prediction model predicts a temporally reasonable future motion (velocity v, position x) for each pedestrian based on the trajectories in the past T time steps. Temporal Motion Tendency Learning. We employ a recurrent neural network with one LSTM layer (Hochreiter and Schmidhuber 1997) to capture the temporal motion dependency and predict future motion. Given the hidden state ht i of each pedestrian i at time step t, a temporal extrapolation velocity ¯vt+1 i can be obtained: ht i = LSTM(ht−1 i , Relu(ϕ(xt i, vt i))), ¯vt+1 i = ϕ(ht i), (2) where ϕ(·) represents Linear transformation, xt i and vt i are the current location and velocity. As human behavior is diverse and uncertain, multiple reasonable motion decisions exist for pedestrians. In our method, the possible motion decisions are explicitly modeled as the velocity candidates in a discrete candidate velocity space V t i , which is generated based on the temporal extrapolation velocity ¯vt+1 i . An illustration of V t i is shown in Fig. 2. V t i is a velocity set with size kC = (2kr+1)(2kθ+1). Figure 2: An illustration of the discrete candidate velocity space V t i in a polar coordination system. The blue point represents the time extrapolation velocity ¯vt+1 i , with rt i and θt i denoting the magnitude and angle of ¯vt+1 i . The polar space is discretized into a grid, with a predefined cell side length ∆r and ∆θ for the magnitude and angle axes. The velocity candidates in V t i are represented by the intersection points of the solid lines, centered at ¯vt+1 i within kr grid cells on the magnitude axis and kθ on the angle axis. Coarse Trajectory Prediction. After obtaining the candidate velocity space representing multiple motion decisions, we need to optimize for the best one as the coarse motion tendency for the subsequent time step. We adopt the attention mechanism (Vaswani et al. 
2017) to score the relation between the pedestrian’s trajectories Xt i = {xt−T +1 i , . . . , xt i} ∈RT ×4 in the past T time steps and the velocity candidates from V t i ∈RkC×2. The attention score matrix eAt i is calculated as follows: F t i,P = MLP1(Xt i), F t i,C = MLP2(V t i ), eAt i = Softmax (F t i,P WP )(F t i,CWC)⊤ √dF ! , (3) where WP , WC are learnable parameters, √dF is the scaled factor for ensuring numerical stability (Vaswani et al. 2017). Then, the coarse preferred velocity evt+1 i of pedestrian i at the next time step can be calculated by aggregating the information from the velocity candidate set V t i with the weight of attention scores from Eq. (3) as: evt+1 i = MLP3(F t i,P + eAt iF t i,C). (4) Then, the coarse preferred position is calculated as ext+1 i = xt i + evt+1 i ∆t, where ∆t is the horizon of a time step. Energy-Based Interaction Anticipating As humans anticipate and react to the future trajectories of their neighbors for collision avoidance, we employ an energy-based interaction model similar to (Xiang et al. 2023) to calculate the interaction cost (i.e., repulsion intensity) driven by the coarse preferred velocity evt+1 i from Eq. (4). Our interaction model considers heterogeneous neighbors within a local neighborhood, which is a square area centering with the pedestrian (see Fig. 3a), including pedestrians, static obstacles (e.g., buildings), and dynamic obstacles (e.g., vehicles). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6218 (a) (b) Figure 3: An example of a pedestrian’s square-shaped local interaction area. (a) The focal pedestrian at the center of the orange square interacts with heterogeneous neighbors within the square. (b) The interaction energy is recorded at the predicted local location of each neighbor (The darker color represents the higher value of interaction energy). Interaction Energy. Given the preferred velocity evt+1 i calculated by Eq. (4), assuming that the neighbor j holds its current velocity vt j for moving in the next time step, the predicted distance edt+1 ij of pedestrian i to the neighbor j is calculated by considering the pedestrian i may collide with their neighbor j during a time step: edt+1 ij = ∥xt ij + evt+1 ij · eτ t+1 ij ∥2, (5) where xt ij = xt i −xt j, evt+1 ij = evt+1 i −vt j. eτ t+1 ij ∈[0, ∆t] is the predicted traveled time in the next time step, and it is obtained by solving the following quadratic function: τ t+1 ij = arg minτ∥xt ij + evt+1 ij · τ∥2, eτ t+1 ij = max(0, min(τ t+1 ij , ∆t)). (6) The interaction energy is calculated based on the predicted distance edt+1 ij : et ij = e(1−e dt+1 ij /ds), (7) where ds is a scaling factor. Higher interaction energy means that the pedestrian is more likely to collide with the neighbor and vice versa, which can be used to quantify the interaction cost of two objects. Socially Explainable Energy Map. After calculating the interaction energies that anticipate and quantify the repulsion intensities from the neighbors, we project the interaction energies onto an energy map M t i , which has the same size as the local interaction area, in order to explicitly indicate the socially anticipated occupancy of each point within the local interaction area. M t i is initialized as a zero matrix with size L × L, where L is the side length of the local interaction area. A zero value in M t i means no occupancy, i.e., no risk of collision at this position during the next time step. The interaction energy et ij calculated by Eq. 
(7) represents the occupancy of the neighbor j's future location $\tilde{x}^{t+1}_j = x^t_j + v^t_j \tilde{\tau}^{t+1}_{ij}$. Fig. 3b illustrates the interaction energy map. Notably, to avoid the performance degradation caused by a sparse matrix $M^t_i$, our method regards dynamic interaction neighbors as entities with a specific shape, and the points occupied inside each entity are assigned the calculated interaction energy. A pedestrian neighbor is regarded as a square-shaped entity centered at $\tilde{x}^{t+1}_j$ with side length $\Delta L$; for other types of dynamic neighbors (e.g., vehicles and bicycles), since we do not specify their exact type, the occupied points are those inside the bounding box provided by the raw dataset. For the static neighbors (e.g., buildings) from the scene segmentation, the interaction neighbors are the points labeled as impassable to pedestrians. For a point $p = (p_x, p_y)$ in the local interaction area, the corresponding value in the energy map is calculated as:

$M^t_i[p_x, p_y] = \sum_{j \in \Omega(p)} w_{R(j)} \cdot e^t_{ij}, \qquad (8)$

where $\Omega(p)$ is the set of neighbors anticipated to occupy point p, $R(j)$ is the type of neighbor $j \in \Omega(p)$, and $w_{R(j)}$ is the trainable weight of this neighbor type. The energy map helps the model gain a better understanding of the future social relationships with the interaction neighbors.

Figure 4: The architecture of the interaction-conditioned CVAE model. ⊕ represents the concatenation operation. Red dotted lines denote layers that are only used during training. All components of the CVAE model are built with MLPs.

Multi-Modal Trajectory Prediction
To capture the uncertainty of human motion, we employ an interaction-conditioned CVAE model for multimodal trajectory forecasting. The architecture of our CVAE model is illustrated in Fig. 4. Different from previous models that learn unexplainable randomness using CVAEs (Zhou et al. 2021; Yue, Manocha, and Wang 2022; Zhou et al. 2023), our model takes the pedestrian's past trajectories $X^t_i$ and the socially explainable energy map $M^t_i$ as input to reconstruct the position residual $\Delta x^{t+1}_i = x^{t+1}_i - \tilde{x}^{t+1}_i$ between the ground-truth future position $x^{t+1}_i$ and the predicted coarse preferred position $\tilde{x}^{t+1}_i$. As a result, the CVAE can learn socially reasonable randomness from the data. During training, the CVAE model first obtains the encoding of motion $F^t_{i,M}$ and the encoding of the ground-truth position residual $F^{t+1}_{i,R}$:

$F^t_{i,M} = E_{mot}(X^t_i) \oplus E_{map}(M^t_i), \quad F^{t+1}_{i,R} = E_{res}(\Delta x^{t+1}_i), \qquad (9)$
To achieve this, we introduce collision anticipation/avoidance as a critical explainable social factor. This is because collision avoidance is the most notable social factor that contains rich social information, e.g., it can encode the social information that is intrinsically driven by the consideration of safety (Van Toll and Pettr´e 2021), as well as high-level factors such as culture, custom (Kaminka and Fridman 2018). Furthermore, collision is the only information that can be obtained reliably from trajectories. As the ground truth of other social factors (e.g., affective mind states, traffic signals, smartphone distraction, etc.) is unavailable, we do not explicitly model them as there is no way to quantitatively verify them. In addition, as energybased models have been proven to effectively capture social interactions (Guy et al. 2010; Karamouzas et al. 2017), the interaction energy map is employed as the CVAE’s condition, which enables the model to learn the subconscious behavior and explain its associated social interaction. Loss Function Our model is trained end-by-end by minimizing a multi-task loss: L = 1 nM n X i=1 T +M X t=T +1 λ1∥ext i −xt i∥2 + λ2∥∆ext i −∆xt i∥2 + λ3DKL(N(µt i, σt i)||N(0, I)) , (13) where λ1, λ2 and λ3 are the loss weights. The first term is the position loss for training the coarse prediction model, which measures the distance between each preferred new position with the ground truth. The second term is the predicted position residual loss for training the CVAE model, which measures the distance between each predicted position residual with the ground truth. The third term is the Kullback-Leibler (KL) divergence loss for training the CVAE model, which measures the distance between the sampling distribution of the latent variable learned at the training stage with the sampling normal Gaussian distribution at the test stage. Evaluation Experiment Setup Datasets. To evaluate the effectiveness of our method, we conduct extensive experiments on two widely used datasets in pedestrian trajectory prediction tasks: ETH-UCY dataset (Pellegrini et al. 2009; Lerner, Chrysanthou, and Lischinski 2007) and Stanford Drone Dataset (SDD) (Robicquet et al. 2016). ETH-UCY includes pedestrians’ trajectories in 5 scenes (ETH, HOTEL UNIV, ZARA1, and ZARA2). We follow the leave-one-out strategy (Mangalam et al. 2021) for training and evaluation. SDD contains pedestrians’ trajectories in 20 scenes. For SDD, we follow the data segmentation as (Yue, Manocha, and Wang 2022) for training and evaluation. Following the common practice (Mangalam et al. 2021; Yue, Manocha, and Wang 2022), the raw trajectories are segmented into 8-second trajectory segments with time step ∆t = 0.4s, we train the model to predict the future 4.8s (12 frames) based on the observed 3.2s (8 frames). Evaluation Metrics. We adopt the two widely used metrics, Average Displacement Error (ADE) and Final Displacement Error (FDE), to quantify the performance of our model. ADE computes the average ℓ2 distance between the prediction and the ground truth over all predicted time steps. FDE calculates the ℓ2 distance between the predicted final location and the ground-truth final location at the end of the prediction horizon. We follow the previously commonly used measurement to report the performances of the best of 20 predicted trajectories. Similar to (Zhou et al. 2021; Yue, Manocha, and Wang 2022; Zhou et al. 
2023), we sample 20 future points at each prediction time step and select the best one as the predicted result. Environment. Our model was implemented in PyTorch on a desktop computer running Ubuntu 20.04 containing an Intel ® CoreTM i7 CPU and an NVIDIA GTX 3090 GPU. The model is trained end-to-end with an Adam optimizer with a learning rate 0.0001. We trained the ETH-UCY for 100 epochs and SDD for 150 epochs. Quantitative Evaluation Quantitative Comparisons. We compare SocialCVAE with state-of-the-art models in recent years. The experimental results on ADE20/FDE20 are presented in Tab. 1 for ETH-UCY and SDD, showing that SocialCVAE achieves state-of-the-art performance on both datasets. Compared with the SOTA baseline methods, our method achieves performance improvement by 66.67% for FDE on ETH-UCY and 16.85%/69.18% for ADE/FDE on SDD. The main difference between SocialCVAE and the baseline methods is that our interaction-conditioned CVAE model learns socially reasonable motion randomness. The quantitative results demonstrate that SocialCVAE works well for better prediction performance. Ablation Study. We conduct ablative experiments to show the effectiveness of the key components in our model. Ablating the interaction-conditioned CVAE. In this experiment (named Ours/wo), we connect the coarse prediction model in SocialCVAE with the same CVAE model as The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6220 Model ETH Hotel UNIV ZARA1 ZARA2 AVG SDD S-GAN (Gupta et al. 2018) 0.87/1.62 0.67/1.37 0.76/1.52 0.35/0.68 0.42/0.84 0.61/1.21 27.23/41.44 Sophie (Sadeghian et al. 2019) 0.70/1.43 0.76/1.67 0.54/1.24 0.30/0.63 0.38/0.78 0.51/1.15 16.27/29.38 Trajectron++ (Salzmann et al. 2020) 0.39/0.83 0.12/0.21 0.20/0.44 0.15/0.33 0.11/0.25 0.19/0.41 PECNet (Mangalam et al. 2020) 0.54/0.87 0.18/0.24 0.35/0.60 0.22/0.39 0.17/0.30 0.29/0.48 9.96/15.88 YNET (Mangalam et al. 2021) 0.28/0.33 0.10/0.16 0.24/0.41 0.17/0.27 0.13/0.22 0.18/0.27 7.85/11.85 Social-VAE (Xu, Hayet, and Karamouzas 2022) 0.41/0.58 0.13/0.19 0.21/0.36 0.17/0.29 0.13/0.22 8.10/11.72 CAGN (Duan et al. 2022) 0.41/0.65 0.13/0.23 0.32/0.54 0.21/0.38 0.16/0.33 0.25/0.43 SIT (Shi et al. 2022) 0.39/0.61 0.13/0.22 0.29/0.49 0.19/0.31 0.15/0.29 0.23/0.38 8.59/15.27 MUSE-VAE (Lee et al. 2022) 6.36/11.10 MSRL (Wu et al. 2023) 0.28/0.47 0.14/0.22 0.24/0.43 0.17/0.30 0.14/0.23 0.19/0.33 8.22/13.39 S-CSR* (Zhou et al. 2021) 0.19/0.35 0.06/0.07 0.13/0.21 0.06/0.07 0.05/0.08 0.10/0.16 2.77/3.45 NSP-SFM* (Yue, Manocha, and Wang 2022) 0.07/0.09 0.03/0.07 0.03/0.04 0.02/0.04 0.02/0.04 0.03/0.06 1.78/3.44 CSR* (Zhou et al. 2023) 0.28/0.53 0.07/0.08 0.24/0.35 0.07/0.09 0.05/0.09 0.14/0.23 4.87/6.32 Ours* 0.06/0.04 0.025/0.01 0.03/0.03 0.02/0.01 0.02/0.01 0.03/0.02 1.48/1.06 Table 1: Quantitative comparison with state-of-the-art methods on ETH-UCY and SDD for ADE20/FDE20. The bold/underlined font represents the best/second best result. The prediction results on ETH-UCY and SDD are measured in meters and pixels, respectively. Previous SOTA methods labeled by * also employ the recursive prediction scheme. Components ADE/FDE Fgoal Attention model Interaction -conditioned CVAE ✓ % 8.64/13.72 % ✓ 1.76/1.57 ✓ % ✓ 1.56/2.71 ✓ ✓ 1.48/1.06 Table 2: Ablation study of different components of our method on the SDD dataset. Fgoal denotes the goalattraction model proposed by NSP-SFM. (Zhou et al. 2021; Yue, Manocha, and Wang 2022; Zhou et al. 
2023), which is only conditioned on the past trajectories, to learn the random residuals for the predicted preferred position from the coarse prediction model. Tab. 2 shows the quantitative results on SDD. Because the model doesn’t consider pedestrian interactions and learns unexplainable motion randomness, compared with our full model, significant performance degradation occurs on both ADE and FDE, demonstrating the importance of our interaction-conditioned CVAE model for achieving better performance. Ablating the attention model. In this experiment, we use the temporal extrapolation velocity generated by the temporal encoder as the output coarse preferred velocity of the coarse prediction model. The prediction results on SDD in Tab. 2 show performance degradation compared to our full model. However, when compared with the SOTA baselines, it still achieves better prediction accuracy, demonstrating the importance of our proposed interaction-conditioned CVAE model which learns the uncertainty of human motions. Ablating the coarse prediction model. We also conduct another ablation experiment, named GSocialCVAE, by replacing the coarse motion prediction model in Sec. with the goal-attraction model from the SOTA NSP-SFM method (Yue, Manocha, and Wang 2022), to further demonstrate the importance of the interaction-conditioned multimodal learning scheme employed in SocialCVAE. Tab. 2 gives the quantitative results of GSocialCVAE on SDD, showing perforFigure 5: Visualization results of our method. The visualized trajectories are the best predictions sampled from 20 trials. The white, green, and red dots represent the observed, ground-truth, and predicted trajectories respectively. mance degradation compared with our full model. However, when compared with NSP-SFM, GSocialCVAE achieves better performance with 11.80% improvement on ADE and 20.35% improvement on FDE, demonstrating the better prediction capability of our energy interaction-conditioned CVAE model for human motion uncertainty learning. Qualitative Evaluation We first visualize the predicted trajectories in several scenarios to illustrate the effectiveness of our method. The visualization results are shown in Fig. 5. Predicted trajectory comparison. To further validate the better performance of our model, in Fig. 6, we compare our visualization results with the SOTA NSP-SFM model (Yue, Manocha, and Wang 2022). NSP-SFM may predict trajectories that obviously deviate from the ground-truth final positions. This is because NSP-SFM learns force-driven steering behaviors plus with unexplainable motion randomness; the predicted results show strong determinism in reaching a The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6221 (a) NSP-SFM-1 (b) NSP-SFM-2 (c) NSP-SFM-3 (d) GSocialCVAE-1 (e) GSocialCVAE-2 (f) GSocialCVAE-3 (g) Ours-1 (h) Ours-2 (i) Ours-3 Figure 6: Visualization comparisons with NSP-SFM and GSocialCVAE. The visualized trajectories are the best predictions sampled from 20 trials. Our method predicts future trajectories closer to the ground truth than compared methods. The purple red dots in the visualization results of NSPSFM and GSocialCVAE represent the sampled goals. sampled final goal. In contrast, SocialCVAE employs an energy interaction-conditioned CVAE model for learning socially reasonable human motion uncertainty, thus achieving better prediction performance. In Fig. 6, we also compare with the visualization results of the aforementioned ablation model GSocialCVAE. 
Due to the determinism nature of the goal-attraction model (Yue, Manocha, and Wang 2022), compared with our full model’s result (Figs. 6g-6i), the predicted trajectories of GSocialCVAE show slight deviation from the ground-truth trajectories because the predicted goal is far from the ground truth. However, GSocialCVAE shows better visual results than NSP-SFM, demonstrating the proposed method’s prediction capability to achieve better performance. Interaction-conditioned multimodal prediction. As shown in Fig. 7, we compare the multiple predicted trajectories of the NSP-SFM, the ablation experiment of SocialCVAE without the interaction-conditioned CVAE (Ours/wo), and our full model (Ours). Our full model’s results in Figs. 7g-7i demonstrate that by conditioning on the socially explainable interaction energy map, SocialCVAE learns better human motion uncertainty than the model without conditioned on interaction. Figs. 7h and 7i also demon(a) NSP-SFM-1 (b) NSP-SFM-2 (c) NSP-SFM-3 (d) Ours/wo-1 (e) Ours/wo-2 (f) Ours/wo-3 (g) Ours-1 (h) Ours-2 (i) Ours-3 Figure 7: Visualization comparisons of the multiple predicted trajectories with NSP-SFM and SocialCVAE without the interaction-conditioned CVAE (Ours/wo). Our method predicts more socially reasonable future trajectories than the compared methods. The white and green lines are the observed and ground-truth trajectories. The yellow lines for each pedestrian are the 20 predicted trajectories. strate that SocialCVAE can predict socially reasonable trajectories for avoiding potential collisions than the models without conditioned on interaction. Conclusion In this work, we present SocialCVAE, a novel multimodal pedestrian trajectory prediction method with an interactionconditioned CVAE model for learning socially reasonable human motion randomness. SocialCVAE explicitly models the anticipated social relationships of pedestrians and their neighbors by using an interaction energy map generated based on an energy-based interaction model. Taking the interaction energy map as a condition, the CVAE model can learn the uncertainty of human motions while maintaining social awareness. The proposed method outperforms existing state-of-the-art methods in achieving higher prediction accuracy. One limitation is that our method is computationally inefficient as we sequentially predict the energy map for each pedestrian. In the future, we will improve the computation performance by exploring other formulations of energybased interaction. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6222 Acknowledgments Xiaogang Jin was supported by the National Natural Science Foundation of China (Grant No. 62036010). He Wang has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 899739 CrowdDNA. References Alahi, A.; Goel, K.; Ramanathan, V.; Robicquet, A.; FeiFei, L.; and Savarese, S. 2016. Social lstm: Human trajectory prediction in crowded spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 961–971. Bae, I.; and Jeon, H.-G. 2021. Disentangled multi-relational graph convolutional network for pedestrian trajectory prediction. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), volume 35, 911–919. Bisagno, N.; Zhang, B.; and Conci, N. 2018. Group lstm: Group trajectory prediction in crowded scenarios. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 213–225. 
Duan, J.; Wang, L.; Long, C.; Zhou, S.; Zheng, F.; Shi, L.; and Hua, G. 2022. Complementary attention gated network for pedestrian trajectory prediction. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), volume 36, 542–550. Gupta, A.; Johnson, J.; Fei-Fei, L.; Savarese, S.; and Alahi, A. 2018. Social gan: Socially acceptable trajectories with generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2255–2264. Guy, S. J.; Chhugani, J.; Curtis, S.; Dubey, P.; Lin, M. C.; and Manocha, D. 2010. PLEdestrians: A least-effort approach to crowd simulation. In Symposium on Computer Animation, 119–128. Helbing, D.; and Molnar, P. 1995. Social force model for pedestrian dynamics. Physical Review E, 51(5): 4282. Hochreiter, S.; and Schmidhuber, J. 1997. Long short-term memory. Neural Computation, 9(8): 1735–1780. Huang, Y.; Bi, H.; Li, Z.; Mao, T.; and Wang, Z. 2019. Stgat: Modeling spatial-temporal interactions for human trajectory prediction. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 6272–6281. Kaminka, G. A.; and Fridman, N. 2018. Simulating urban pedestrian crowds of different cultures. ACM Transactions on Intelligent Systems and Technology, 9(3): 1–27. Karamouzas, I.; Skinner, B.; and Guy, S. J. 2014. Universal power law governing pedestrian interactions. Physical Review Letters, 113(23): 238701. Karamouzas, I.; Sohre, N.; Narain, R.; and Guy, S. J. 2017. Implicit crowds: Optimization integrator for robust crowd simulation. ACM Transactions on Graphics, 36(4): 1–13. Kothari, P.; Sifringer, B.; and Alahi, A. 2021. Interpretable social anchors for human trajectory forecasting in crowds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 15556–15566. Kuang, H.; Li, X.; Song, T.; and Dai, S. 2008. Analysis of pedestrian dynamics in counter flow via an extended lattice gas model. Physical Review E, 78(6): 066117. Kuang, H.; Tao, S.; Dai, S.; and Li, X. 2009. Subconscious effect on pedestrian counter flow in a modified lattice gas model with the variable transition probability. International Journal of Modern Physics C, 20(12): 1945–1961. Lee, M.; Sohn, S. S.; Moon, S.; Yoon, S.; Kapadia, M.; and Pavlovic, V. 2022. Muse-VAE: multi-scale VAE for environment-aware long term trajectory prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2221–2230. Lerner, A.; Chrysanthou, Y.; and Lischinski, D. 2007. Crowds by example. In Computer Graphics Forum, volume 26, 655–664. Mangalam, K.; An, Y.; Girase, H.; and Malik, J. 2021. From goals, waypoints & paths to long term human trajectory forecasting. In International Conference on Computer Vision (ICCV), 15233–15242. Mangalam, K.; Girase, H.; Agarwal, S.; Lee, K.-H.; Adeli, E.; Malik, J.; and Gaidon, A. 2020. It is not the journey but the destination: endpoint conditioned trajectory prediction. In European Conference on Computer Vision (ECCV), 759– 776. Mohamed, A.; Qian, K.; Elhoseiny, M.; and Claudel, C. 2020. Social-stgcnn: A social spatio-temporal graph convolutional neural network for human trajectory prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 14424–14432. Pellegrini, S.; Ess, A.; Schindler, K.; and Van Gool, L. 2009. You’ll never walk alone: Modeling social behavior for multitarget tracking. In International Conference on Computer Vision (ICCV), 261–268. 
Ren, J.; Xiang, W.; Xiao, Y.; Yang, R.; Manocha, D.; and Jin, X. 2019. Heter-Sim: Heterogeneous multi-agent systems simulation by interactive data-driven optimization. IEEE Transactions on Visualization and Computer Graphics (TVCG), 27(3): 1953–1966. Reynolds, C. W. 1987. Flocks, herds and schools: A distributed behavioral model. In Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, 25–34. Reynolds, C. W.; et al. 1999. Steering behaviors for autonomous characters. In Game Developers Conference, volume 1999, 763–782. Robicquet, A.; Sadeghian, A.; Alahi, A.; and Savarese, S. 2016. Learning social etiquette: Human trajectory understanding in crowded scenes. In European Conference on Computer Vision (ECCV), 549–565. Sadeghian, A.; Kosaraju, V.; Sadeghian, A.; Hirose, N.; Rezatofighi, H.; and Savarese, S. 2019. Sophie: An attentive gan for predicting paths compliant to social and physical constraints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1349– 1358. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6223 Salzmann, T.; Ivanovic, B.; Chakravarty, P.; and Pavone, M. 2020. Trajectron++: dynamically-feasible trajectory forecasting with heterogeneous data. In European Conference on Computer Vision (ECCV), 683–700. Shi, L.; Wang, L.; Long, C.; Zhou, S.; Zheng, F.; Zheng, N.; and Hua, G. 2022. Social interpretable tree for pedestrian trajectory prediction. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), volume 36, 2235– 2243. Shi, L.; Wang, L.; Long, C.; Zhou, S.; Zhou, M.; Niu, Z.; and Hua, G. 2021. SGCN: Sparse graph convolution network for pedestrian trajectory prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 8994–9003. Tsao, L.-W.; Wang, Y.-K.; Lin, H.-S.; Shuai, H.-H.; Wong, L.-K.; and Cheng, W.-H. 2022. Social-SSL: Self-supervised Cross-Sequence Representation Learning Based on Transformers for Multi-agent Trajectory Prediction. In European Conference on Computer Vision (ECCV), 234–250. Van Toll, W.; and Pettr´e, J. 2021. Algorithms for Microscopic Crowd Simulation: Advancements in the 2010s. Computer Graphics Forum, 40(2): 731–754. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems (Neurips), 30. Vemula, A.; Muelling, K.; and Oh, J. 2018. Social attention: Modeling attention in human crowds. In 2018 IEEE International Conference on Robotics and Automation (ICRA), 4601–4607. Wu, Y.; Wang, L.; Zhou, S.; Duan, J.; Hua, G.; and Tang, W. 2023. Multi-stream representation learning for pedestrian trajectory prediction. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), volume 37, 2875– 2882. Xiang, W.; Wang, H.; Zhang, Y.; Yip, M. K.; and Jin, X. 2023. Model-based crowd behaviours in human-solution space. In Computer Graphics Forum, e14919. Xu, C.; Li, M.; Ni, Z.; Zhang, Y.; and Chen, S. 2022. Groupnet: Multiscale hypergraph neural networks for trajectory prediction with relational reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 6498–6507. Xu, P.; Hayet, J.-B.; and Karamouzas, I. 2022. Socialvae: Human trajectory prediction using timewise latents. In European Conference on Computer Vision (ECCV), 511–528. Yu, C.; Ma, X.; Ren, J.; Zhao, H.; and Yi, S. 2020. 
Spatiotemporal graph transformer networks for pedestrian trajectory prediction. In European Conference on Computer Vision (ECCV), 507–523. Yuan, Y.; Weng, X.; Ou, Y.; and Kitani, K. M. 2021. Agentformer: Agent-aware transformers for socio-temporal multiagent forecasting. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 9813–9823. Yue, J.; Manocha, D.; and Wang, H. 2022. Human trajectory prediction via neural social physics. In European Conference on Computer Vision (ECCV), 376–394. Yue, J.; Manocha, D.; and Wang, H. 2023. Human trajectory forecasting with explainable behavioral uncertainty. arXiv:2307.01817. Zhou, H.; Ren, D.; Yang, X.; Fan, M.; and Huang, H. 2021. Sliding sequential CVAE with time variant sociallyaware rethinking for trajectory prediction. arXiv preprint arXiv:2110.15016. Zhou, H.; Ren, D.; Yang, X.; Fan, M.; and Huang, H. 2023. CSR: cascade conditional variational auto encoder with socially-aware regression for pedestrian trajectory prediction. Pattern Recognition, 133: 109030. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6224 | 2024 | 691 |
18,508 | Dynamic Semantic-Based Spatial Graph Convolution Network for Skeleton-Based Human Action Recognition Jianyang Xie1,2, Yanda Meng2,6∗, Yitian Zhao4, Anh Nguyen3, Xiaoyun Yang5, Yalin Zheng2,6 * 1CDT in Distributed Algorithms, School of EEECS, University of Liverpool, UK 2Department of Eye and Vision Sciences, University of Liverpool, Liverpool, UK 3Department of Computer Sciences, University of Liverpool, Liverpool, UK 4Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, CAS, Cixi, China 5Remark AI UK Limited, London, UK 6Liverpool Centre for Cardiovascular Science, Liverpool, UK [email protected], [email protected] Abstract Graph convolutional networks (GCNs) have attracted great attention and achieved remarkable performance in skeletonbased action recognition. However, most of the previous works are designed to refine skeleton topology without considering the types of different joints and edges, making them infeasible to represent the semantic information. In this paper, we proposed a dynamic semantic-based graph convolution network (DS-GCN) for skeleton-based human action recognition, where the joints and edge types were encoded in the skeleton topology in an implicit way. Specifically, two semantic modules, the joints type-aware adaptive topology and the edge type-aware adaptive topology, were proposed. Combining proposed semantics modules with temporal convolution, a powerful framework named DS-GCN was developed for skeleton-based action recognition. Extensive experiments in two datasets, NTU-RGB+D and Kinetics-400 show that the proposed semantic modules were generalized enough to be utilized in various backbones for boosting recognition accuracy. Meanwhile, the proposed DS-GCN notably outperformed state-of-the-art methods. The code is released here https://github.com/davelailai/DS-GCN. Introduction Human action recognition (HAR) is an essential topic in computer vision and has a wide range of applications in video understanding (Gaur et al. 2011; Gui et al. 2018). Especially, skeleton-based action recognition has attracted much attention in the research community. Compared with RGB image squeeze (Carreira and Zisserman 2017; Bilen et al. 2017; Tran et al. 2015) or optical flows (Simonyan and Zisserman 2014; Wang and Schmid 2013), skeleton data (Yan, Xiong, and Lin 2018; Vemulapalli, Arrate, and Chellappa 2014) provided body pose and movement information directly, making it more robust to variations of camera viewpoint and video appearance. Meanwhile, low-cost depth sensors (Liu et al. 2019) (e.g., Kinect) and availability of pose estimation algorithms (Sun et al. 2019) make the skeleton-based HAR can be extensively studied. *Corresponding Author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 𝛷: 1×1 𝝋: 1×1 𝜉: 1×1 × N N 𝝉𝟏: 1×1 𝝉𝟐: 1×1 × N N (a) (b) (c) N N × Matrix multiplication N: joint number Convolution Kernel 𝜑, 𝜉, 𝝉𝟏, 𝝉𝟐, 𝛷 Edge type 𝐴! " ∈ℝ#×# 𝐴! % ∈ℝ#×# 𝐴! & ∈ℝ#×# Figure 1: Illustration of node and edge type-aware adaptive graph generation. (a) represents the general adaptive graph generation, where the joint is considered as the same type, ϕ, and ξ is 1 × 1 convolution kernel. general adaptive graph Ag D ∈RN×N was generated based on the matrix multiplication. 
(b) represent node type-aware adaptive generation, human body is split into five parts with different colours, and node-type specific transform function τ1 and τ2 was designed, where the color is corresponding to the node type. For each part, the joints were projected into corresponding feature space, then the node type-aware adaptive graph An D ∈RN×N can be obtained based on the matrix multiplication. (c) represent the edge type-aware adaptive graph generation, the edge type is represented as the type pair of its end nodes. There are fifteen types of edges, and an edgetype specific transform function ϕ was designed and was utilized to transfer edge representations to their corresponding distribution space, thus the edge type-aware adaptive graph Ae D ∈RN×N can be obtained. Early methods focus on extracting handcrafted features from skeleton sequences (Vemulapalli, Arrate, and Chellappa 2014; Wang et al. 2014). Recently, deep learning has become the mainstream research due to its strong feature learning ability, and various network structures have been investigated. For instance, recurrent neural networks (RNNs) (Du, Wang, and Wang 2015; Zhang, Liu, and Xiao 2017; Liu et al. 2017) have been applied to model the temporal information within the skeleton sequences, convolution neuThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6225 ral networks (CNNs) also have been adapted for HAR by representing the skeleton sequence as pseudo-images (Ke et al. 2017; Caetano et al. 2019; Duan et al. 2022c). Spatialtemporal graph convolution networks (ST-GCNs) have been proposed for working on the skeleton graph (Yan, Xiong, and Lin 2018; Si et al. 2018; Chen et al. 2021; Cheng et al. 2020b; Liu et al. 2020; Zhang et al. 2020; Shi et al. 2019b). Among these approaches, ST-GCNs have been the most popular one since they can capture inherent interaction between body joints through node aggregation scheme. Yan (Yan, Xiong, and Lin 2018) first proposed the STGCN on predefined skeleton graphs. However, the fixed graph limited the representation of GCN and is inefficient in capturing the changeable human movement. In order to boost the flexibility of the model, some dynamic graph generation methods were proposed (Chen et al. 2021; Shi et al. 2019b; Cheng et al. 2020a) to learn an adaptive adjacent matrix. However, these works ignored the semantic information of the skeleton. They simply assumed all joints/edges as the same type, As shown in Figure. 1 (a), making them insufficient to capture the semantic properties of actions. Intuitively, human actions involve movements of different body parts. For example, pointing to something mainly depend on swinging the arms but kicking forward indicates swinging legs. In this case, the types of moving nodes will be useful information for action recognition. Zhang (Zhang et al. 2020) noticed this limitation and proposed a semantics-guided neural network to enrich the input joint feature by explicitly adding one-hot vectors of different node types. Although this method proved that the semantic information of joints type can boost performance, it faces several issues: Firstly, the explicit encoding in the input step is not flexible and cannot incorporate the high-order semantic information when GCNs go deeper. Secondly, the edge types were not considered. Because the connection in different types of joints might be various, even between the same type of joints but in different directions, the connection weight value might be different. 
Taking legs and arms as an example, the information passing from legs to arms should be different from that passing within arm joints. Meanwhile, within the arm, the information passing from elbow to wrist might be different and vice versa. In light of these limitations, a dynamic semantic-based graph neural convolutions network (DS-GCN) was proposed in this paper. The main idea of the proposed work is to encode the dynamical semantic information of joints and edges in GCNs aggregation process implicitly. Specifically, a dynamic adaptive topology with semantic information on joints/edges types was generated. Instead of adding the predefined type encoding into the joint feature, the joint/edge type was encoded with different transform functions, each of which represents a specific distribution. Thus the feature of joint/edge in different types can be represented in their individual feature space. In other words, the types of joint/edge were encoded in an implicit way. Compared with the predefined encoding, there are two advantages of our proposed DS-GCN. On the one hand, since the semantic information of joints/edges was learned from the sample itself, the dynamic nature of each skeleton can be maintained. On the other hand, the joints/edges types were represented by the transform functions, and can be encoded in each ST-GCN layer. Thus, the semantic information can be reserved without over-smoothing even if the model goes deeper. As shown in Figure. 1 (b), the joints and edges were split into different types in advance. For the joint/edge type definition, the human body was decomposed into several parts (five parts in this paper, including left/right arms, left/right legs, and the trunk with the head) according to the natural human body structure, then the edge type can be obtained according to the type of its two end nodes, As shown in Figure. 1 (c). For instance, the link between the left arm and trunk is different from the link within the trunk. Then two semantic-aware modules were proposed to encode the joint/edge types respectively, the node type-aware adaptive graph module and the edge type-aware adaptive graph module. In the node type-aware module, As shown in Figure. 1 (b), a non-local mechanism was applied, but separate transform functions were designed for each body part to project the node representation in their specific type distribution, thus the adaptive graph can be generated with consideration of the node types. In the edge type-aware module, As shown in Figure. 1 (c), similar to the node type encoding, the edge type-specific transform functions were designed, which were then applied to the adaptive skeleton graph to encode semantic information over each edge type. Based on the two semantic modules and combining the temporal modeling modules proposed in (Duan et al. 2022a), the dynamic semantic-based graph neural networks (DS-GCN) was developed for skeleton-based human action recognition. The framework of the proposed method is as shown in Figure. 2. The extensive experiments on NTU-RGB+D (Shahroudy et al. 2016; Liu et al. 2019) and Kinetics-400 (Carreira and Zisserman 2017) show that: (1) the proposed two semantics modules are efficient and generalized to be adaptive to various ST-GCNs structure to boost the performance. (2) the generated DS-GCN outperforms state-of-the-art methods notably on all two datasets. The main contributions are summarized as follows: • We proposed to implicitly encode the joints and edge types for skeleton-based human action recognition. 
Two dynamic semantic-based adaptive graphs, a node type-aware adaptive graph and an edge type-aware adaptive graph, are generated. Extensive experiments show that the proposed semantic graph is general and can be easily adapted to various ST-GCNs. • We develop a dynamic semantic-based graph convolution network for skeleton-based human action recognition, and extensive experiments highlight that the proposed DS-GCN outperforms SOTA methods notably on both NTU-RGB+D and Kinetics-400. Related Work GCNs for Skeleton-Based Action Recognition Graph convolution networks have attracted increasing attention in skeleton-based human action recognition (Yan, Xiong, and Lin 2018; Si et al. 2018; Chen et al. 2021; Cheng et al. 2020b; Liu et al. 2020; Zhang et al. 2020; Shi et al. 2019b; Duan et al. 2022a). Figure 2: The framework of the proposed DS-GCN. The spatial graph convolution structure is decomposed into three branches: the node type-aware branch, the edge type-aware branch, and the general branch; C denotes the number of input channels. In each branch, the corresponding semantic self-adaptive graph and a shared correction matrix PA_i ∈ R^{N×N}, i = 1, 2, 3, are applied to represent the skeleton. The mixed output X_mix is obtained by concatenating the outputs of the three branches along the feature channel dimension, and the final output X_out is calculated by a 1×1 convolution on X_mix. Yan et al. (Yan, Xiong, and Lin 2018) introduced a pre-defined skeleton graph according to the human body's natural links and proposed the ST-GCN to capture spatial and temporal patterns from the graph structure. Upon this baseline, spatial adaptive graph generation methods based on non-local mechanisms were proposed to increase the flexibility of the skeleton graph structure (Shi et al. 2019b; Chen et al. 2021; Cheng et al. 2020b; Zhang et al. 2020; Duan et al. 2022a). Instead of only applying the fixed graph structure, these methods learn an additional adaptive graph to boost the GCNs' representation ability. For instance, 2s-AGCN (Shi et al. 2019b) learns a data-driven graph shared by all feature channels, and CTR-GCN (Chen et al. 2021) learns an adaptive graph for each individual feature channel. Meanwhile, multi-scale and shift GCNs were proposed (Cheng et al. 2020b; Liu et al. 2020) to relieve the over-smoothing problem in long-distance graph message passing. For the temporal pattern, multi-scale temporal convolution was proposed in (Chen et al. 2021; Duan et al. 2022a) to boost information aggregation in the temporal dimension. Semantic Information Exploration Semantic information has been exploited in RNNs for skeleton-based human action recognition (Du, Wang, and Wang 2015; Wang and Wang 2017; Si et al. 2018). In these methods, the skeleton structure is manually partitioned into different functional parts, each processed by an individual RNN. As the network goes deeper, the features of the different components are concatenated and processed in a hierarchical way.
Even though such information is important, it is overlooked in most GCNs for skeleton-based human action recognition. Inspired by this, Zhang et al. (Zhang et al. 2020) proposed SGN, which encodes joint-type information into the initial features by explicitly adding one-hot vectors as node-type representations. However, this pre-defined semantic encoding at the input layer is not flexible and cannot represent such information in a high-dimensional space when networks go deeper. To tackle the above limitations, we propose a more elegant method that encodes the semantics implicitly. Methods In this section, the notation of ST-GCN is introduced first, then ST-GCN and its variants are formulated and discussed, and finally the proposed DS-GCN is described in detail. Preliminaries Notation. A skeleton sequence is denoted as a spatial-temporal graph G = (V, E_s, E_t, X), where V = {v_ti | t = 1, ..., T, i = 1, ..., N} are the N body joints in T frames, and E_s and E_t are the spatial and temporal links, respectively. X ∈ R^{N×T×d} represents the joint coordinates as node features, where d is the feature dimension. For the spatial graph G_s = (V, E_s, X), E_s is formulated as an adjacency matrix A ∈ R^{N×N} representing the intra-body connections. For the temporal graph G_t = (V, E_t, X), E_t is constructed by connecting the same joints along consecutive frames. ST-GCNs can then be divided into two parts: the Spatial-GCN (S-GCN) with a regular GCN and the Temporal-GCN (T-GCN) with 1D temporal convolution. The proposed method is applied to the S-GCN. Topology-Fixed Graph Convolution Network. The main operation of a GCN is to update each node representation by aggregating information from its neighborhood. In ST-GCN (Yan, Xiong, and Lin 2018), A is defined over three partitions and represented as A ∈ R^{N×N×3}. Denoting X = {X_t ∈ R^{N×d} | t = 1, ..., T} as the input features, the output X' = {X'_t ∈ R^{N×C} | t = 1, ..., T} of the S-GCN can be formulated as Eq. 1:

X' = Σ_{i=1}^{3} f(A_i X, θ),   (1)

where f is the updating function (normally a 2D convolution with kernel size 1), θ denotes the learnable parameters of the updating function, and C is the number of output feature channels. Topology-Adaptive Graph Convolution Network. In most ST-GCN variants (Yan, Xiong, and Lin 2018; Si et al. 2018; Chen et al. 2021; Cheng et al. 2020b; Liu et al. 2020; Zhang et al. 2020; Shi et al. 2019b; Duan et al. 2022a), an adaptive matrix A_D is dynamically learned with a self-attention mechanism. As shown in Figure 3 (a), given two transformation functions φ(·) and ξ(·), the correlation between two joints can be modeled as Eq. 2:

A_D = σ(φ(X) − ξ(X)),   (2)

where σ(·) is the activation function in use, such as ReLU. The adaptive S-GCN can then be represented as Eq. 3:

X' = Σ_{i=1}^{3} f((A_i + λ A^i_D) X, θ),   (3)

where λ is a pre-defined or learnable weight that refines the effect of the adaptive graph. The adaptive graph has proved to be an advantageous topology for skeleton-based human action recognition (Shi et al. 2019b; Chen et al. 2021). Semantic-Guided Graph Convolution Network. In the explicit semantic encoding method (Zhang et al. 2020), the input features are refined by appending a one-hot vector of the joint type, which can be formulated as Eq. 4:

X = {[X_t, X_{t,k}] ∈ R^{N×c} | t = 1, ..., T, k = 1, ..., m},   (4)

where m is the number of joint types, c is the modified number of feature channels, and X_{t,k} is the corresponding type encoding.
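To make the baseline formulations above concrete, the following minimal PyTorch sketch implements the fixed aggregation of Eq. 1 together with the adaptive graph of Eqs. 2-3. The random adjacency initialization, the embedding width, and the choice of ReLU for σ are illustrative assumptions rather than the authors' released implementation.

import torch
import torch.nn as nn

class AdaptiveSGCN(nn.Module):
    """Spatial GCN layer with fixed partition graphs A_i plus a learned adaptive graph A_D (Eqs. 1-3)."""
    def __init__(self, in_ch, out_ch, num_joints, num_partitions=3, embed_ch=16):
        super().__init__()
        # Fixed partition adjacencies A in R^{3 x N x N}; randomly initialized here purely for the sketch.
        self.register_buffer("A", torch.rand(num_partitions, num_joints, num_joints))
        # Updating function f(., theta): one 1x1 convolution per partition.
        self.update = nn.ModuleList([nn.Conv2d(in_ch, out_ch, 1) for _ in range(num_partitions)])
        # Transform functions phi and xi used to build A_D = sigma(phi(X) - xi(X)) (Eq. 2).
        self.phi = nn.Conv2d(in_ch, embed_ch, 1)
        self.xi = nn.Conv2d(in_ch, embed_ch, 1)
        self.lam = nn.Parameter(torch.zeros(num_partitions))   # learnable weights lambda (Eq. 3)

    def forward(self, x):
        # x: (batch, in_ch, T, N) joint features over T frames and N joints.
        f1 = self.phi(x).mean(dim=2)                        # (B, embed_ch, N), pooled over time
        f2 = self.xi(x).mean(dim=2)
        diff = f1.unsqueeze(-1) - f2.unsqueeze(-2)          # pairwise differences phi(x_i) - xi(x_j)
        a_d = torch.relu(diff).mean(dim=1)                  # sigma(.), collapsed to a (B, N, N) graph
        out = 0
        for i, f in enumerate(self.update):
            graph = self.A[i] + self.lam[i] * a_d           # A_i + lambda_i * A_D
            agg = torch.einsum("bctn,bnm->bctm", x, graph)  # aggregate neighbour features over joints
            out = out + f(agg)                              # f(A X, theta), summed over partitions (Eq. 1)
        return out

x = torch.randn(2, 64, 16, 25)             # 2 samples, 64 channels, 16 frames, 25 joints
print(AdaptiveSGCN(64, 128, 25)(x).shape)  # torch.Size([2, 128, 16, 25])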
The topology-adaptive graph convolution network then operates on this type-augmented input. Dynamic Semantic-Based GCN The general frame of the proposed DS-GCN originates from the topology-adaptive GCN; however, different from the above methods, the joint and edge types in the skeleton graph are introduced and encoded dynamically when calculating the adaptive graph. Specifically, the DS-GCN contains two modules: the node type-aware topology and the edge type-aware topology. As shown in Figure 3, when modeling the node type-aware adaptive graph, different conversion functions are defined for different types of nodes, and the node type-aware topology is obtained by capturing pairwise joint correlations; in this paper, the non-local mechanism is applied in a channel-wise manner (Chen et al. 2021). In the edge type-aware correction graph, different update functions for edges of different types are applied to the adaptive graph. In this case, the graph in our work is defined as a directed graph G = (V, E, A, R, X), where A and R denote the node-type and edge-type sets reached by the type mapping functions τ(v) = {τ_1(v), τ_2(v)}: V → A and φ(e): E → R, respectively. Given the input features X ∈ R^{N×d}, the semantic-based adaptive graphs are calculated as Eq. 5:

A^n_D = σ(τ_1(X) − τ_2(X)),   A^e_D = φ(A_D),   (5)

where A^n_D is the node type-aware graph and A^e_D is the edge type-aware graph. The details of each are introduced as follows. Node Type-Aware Adaptive Topology. As shown in Figure 3 (b), the node features are first projected into their individual feature spaces with the node type mapping function τ(v); the node type-aware adaptive graph is then calculated with the non-local mechanism. Specifically, denoting s and t as two nodes of different types, with corresponding features x_s ∈ R^{1×d} and x_t ∈ R^{1×d}, the type-aware feature representations are formulated as Eq. 6:

x'_{s1} = τ^s_1(x_s),  x'_{s2} = τ^s_2(x_s),  x'_{t1} = τ^t_1(x_t),  x'_{t2} = τ^t_2(x_t),   (6)

where x'_* ∈ R^{1×C} and C is the number of output feature channels. Taking τ_1(v) as the source feature projection and τ_2(v) as the target feature projection, the directed correlation between nodes s and t along the channel dimension can be calculated as Eq. 7:

A^{s→t}_D = σ(x'_{s1} − x'_{t2}),   A^{t→s}_D = σ(x'_{t1} − x'_{s2}),   (7)

where σ is the activation function and A^*_D ∈ R^{1×C}. For the whole skeleton structure, the node type-aware adaptive graph A^n_D ∈ R^{N×N×C} is the set of all such A^*_D. Edge Type-Aware Adaptive Topology. As shown in Figure 3 (c), the edge type is encoded by applying a separate convolution kernel φ(e) to the adaptive graph. Specifically, given three nodes s, t, and u of different types, the typed links between them can be represented as ⟨s, t⟩, ⟨s, u⟩, and ⟨t, u⟩ with features e_⟨s,t⟩, e_⟨s,u⟩, and e_⟨t,u⟩. The edge type-aware adaptive correlations can then be refined as Eq. 8:

A^{⟨s,t⟩}_D = φ_⟨s,t⟩(e_⟨s,t⟩),   A^{⟨s,u⟩}_D = φ_⟨s,u⟩(e_⟨s,u⟩),   A^{⟨t,u⟩}_D = φ_⟨t,u⟩(e_⟨t,u⟩),   (8)

where the φ_⟨∗,∗⟩(e) are separate transform functions; here 2D convolution kernels with kernel size 1 are applied. The edge type-aware topology can be represented as A^e_D = {A^{⟨s,t⟩}_{D,ij} | i, j = 1, ..., N; s, t = 1, ..., M}, where s and t are the node type indices and M is the number of types.
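The two semantic modules just described (Eqs. 5-8) can be sketched roughly as follows. The five-part joint grouping, the channel sizes, and the loop-based handling of typed projections and edge kernels are assumptions made for clarity; the released code may realize the type-specific kernels differently (e.g., as grouped convolutions).

import torch
import torch.nn as nn

class SemanticAdaptiveGraph(nn.Module):
    """Node and edge type-aware adaptive graphs (Eqs. 5-8): tau_1/tau_2 hold one projection
    per node type, phi holds one 1x1 kernel per ordered (type, type) edge pair."""
    def __init__(self, in_ch, embed_ch, node_types, num_types):
        super().__init__()
        self.register_buffer("node_types", node_types)      # (N,) body-part type of each joint
        self.num_types = num_types
        self.tau1 = nn.ModuleList([nn.Conv1d(in_ch, embed_ch, 1) for _ in range(num_types)])
        self.tau2 = nn.ModuleList([nn.Conv1d(in_ch, embed_ch, 1) for _ in range(num_types)])
        self.phi = nn.ModuleList([nn.Conv2d(embed_ch, embed_ch, 1) for _ in range(num_types ** 2)])

    def _typed_projection(self, x, bank):
        # x: (B, C, N); project each joint with the kernel of its own type.
        out = torch.zeros(x.size(0), bank[0].out_channels, x.size(-1), device=x.device)
        for k in range(self.num_types):
            idx = (self.node_types == k).nonzero(as_tuple=True)[0]
            out[:, :, idx] = bank[k](x[:, :, idx])
        return out

    def forward(self, x):
        # x: (B, C, N) time-pooled joint features.
        src = self._typed_projection(x, self.tau1)                   # tau_1(x): source roles (Eq. 6)
        dst = self._typed_projection(x, self.tau2)                   # tau_2(x): target roles (Eq. 6)
        a_n = torch.relu(src.unsqueeze(-1) - dst.unsqueeze(-2))      # node type-aware A^n_D (Eq. 7), channel-wise
        a_e = torch.zeros_like(a_n)
        for s in range(self.num_types):                              # edge type-aware A^e_D (Eq. 8)
            for t in range(self.num_types):
                rows = (self.node_types == s).nonzero(as_tuple=True)[0]
                cols = (self.node_types == t).nonzero(as_tuple=True)[0]
                block = a_n[:, :, rows][:, :, :, cols]               # entries whose end nodes have types <s, t>
                a_e[:, :, rows.unsqueeze(-1), cols] = self.phi[s * self.num_types + t](block)
        return a_n, a_e

types = torch.arange(25) % 5                                         # 25 joints, 5 assumed body parts
module = SemanticAdaptiveGraph(in_ch=64, embed_ch=16, node_types=types, num_types=5)
a_n, a_e = module(torch.randn(2, 64, 25))
print(a_n.shape, a_e.shape)                                          # both (2, 16, 25, 25)

In practice the per-type loops would be vectorized, but the sketch keeps them explicit to mirror Eqs. 6-8.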
Dynamic Semantic-Based GCN: As shown in Figure 2, different from previous ST-GCNs, which apply the same spatial graph convolution structure to three pre-generated skeleton graphs, in DS-GCN the spatial graph convolution structure is decomposed into three branches: the node type-aware branch, the edge type-aware branch, and the general branch. Figure 3: Illustration of the joint correlation calculation. (a) represents the standard non-local mechanism: for each transform function φ(·) and ξ(·), the node features are updated with shared parameters. (b) represents the node type-aware correlation: in each transform function, the convolution kernels are divided into several parts, each corresponding to a specific node type, and the node features of different types are updated by their individual parameter sets; the colored circles denote different node types and the colored squares denote different convolution kernels. (c) illustrates the edge type-aware correlation: for each type of edge, specific convolution kernels are designed and used to update the edge features; the colored circles denote node types, and mixed-colored squares denote the corresponding edges as node-type pairs. A branch-wise weight is set as learnable and used to combine a shared correction matrix with the corresponding self-adaptive graph. Specifically, the input is first projected into a higher dimension and split into three parts corresponding to the three branches. For each branch, the combination of a shared correction matrix and a self-adaptive graph is used for the spatial graph convolution operation. To balance the influence of the shared skeleton across branches for action recognition, the pre-defined skeleton graph is replaced by a fully learnable correction matrix. Finally, the three branch outputs are concatenated along the feature channel dimension and followed by a 1×1 convolution kernel, which combines the information of the three branches and projects it to the output dimension. The process of the DS-GCN can be formulated as Eq. 9:

X' = f(x, θ),  with  x = [x_n, x_e, x_g] ∈ R^{N×3c},
x_n = (A_1 + λ_1 A^n_D) f^1_pre(X),
x_e = (A_2 + λ_2 A^e_D) f^2_pre(X),
x_g = (A_3 + λ_3 A_D) f^3_pre(X),   (9)

where X ∈ R^{N×C}, f^∗_pre are the projection functions that reduce the feature channels, and c is the number of output channels of f^∗_pre. x_n, x_e, and x_g are the outputs of the node type-aware, edge type-aware, and general branches, respectively. A_∗ is the learnable correction matrix of each branch, and λ_∗ is the learnable weight that refines the effect of each semantic-based topology-adaptive graph and differs between branches. Model Architecture Based on the DS-GCN, a new spatial-temporal graph convolution network is introduced. Similar to ST-GCN (Si et al. 2018), ten basic blocks are connected in series, followed by global average pooling and a softmax classifier for action classification. The number of basic feature channels is set to 64 and is doubled at the 5th and 8th blocks. Each basic block contains one DS-GCN and the multi-scale temporal modeling module proposed in (Chen et al. 2021). Experiments Datasets To demonstrate the advantages of the proposed DS-GCN, two datasets are used in this paper: NTU RGB+D and Kinetics-400. A brief introduction follows; more details of these two datasets are given in Supplementary 1.
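Before turning to the datasets, the spatial block of Eq. 9 can be summarized with a minimal sketch that fuses the node-type, edge-type, and general branches. For brevity the adaptive graphs are passed in as single-channel (N×N) inputs, and all module names and shapes are assumptions rather than the released implementation.

import torch
import torch.nn as nn

class DSGCNBlockSketch(nn.Module):
    """One DS-GCN spatial layer (Eq. 9): node-type, edge-type, and general branches fused by a 1x1 conv."""
    def __init__(self, in_ch, out_ch, num_joints):
        super().__init__()
        mid_ch = out_ch // 3                                            # c: reduced channels per branch
        self.pre = nn.ModuleList([nn.Conv2d(in_ch, mid_ch, 1) for _ in range(3)])   # f_pre^1..3
        # A_1..A_3: fully learnable correction matrices replacing the pre-defined skeleton graph.
        self.shared = nn.Parameter(torch.stack([torch.eye(num_joints) for _ in range(3)]))
        self.lam = nn.Parameter(torch.zeros(3))                         # branch-wise weights lambda_1..3
        self.post = nn.Conv2d(3 * mid_ch, out_ch, 1)                    # f(., theta) on the concatenation

    def forward(self, x, a_node, a_edge, a_general):
        # x: (B, C, T, N); a_*: (B, N, N) adaptive graphs from the two semantic modules and the general branch.
        outs = []
        for i, a_d in enumerate((a_node, a_edge, a_general)):
            graph = self.shared[i] + self.lam[i] * a_d                  # A_i + lambda_i * A_D^i
            feat = self.pre[i](x)                                       # f_pre^i(X)
            outs.append(torch.einsum("bctn,bnm->bctm", feat, graph))    # spatial aggregation per branch
        x_mix = torch.cat(outs, dim=1)                                  # x = [x_n, x_e, x_g]
        return self.post(x_mix)                                         # X' = f(x, theta)

block = DSGCNBlockSketch(64, 96, num_joints=25)
x = torch.randn(2, 64, 16, 25)
graphs = [torch.rand(2, 25, 25) for _ in range(3)]
print(block(x, *graphs).shape)                                          # torch.Size([2, 96, 16, 25])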
NTU RGB+D. NTU RGB+D (Shahroudy et al. 2016; Liu et al. 2019) is a large-scale action recognition dataset. Here, the four benchmarks recommended by the official protocol are used: (1) NTU60 cross-subject (NTU60-XSub), (2) NTU60 cross-view (NTU60-XView), (3) NTU120 cross-subject (NTU120-XSub), and (4) NTU120 cross-setup (NTU120-XSet). Kinetics-400 (Carreira and Zisserman 2017). Kinetics-400 is a large-scale action recognition dataset with 400 actions. The skeletons used in this paper were provided by (Duan et al. 2022b), where the OpenPose algorithm (Cao et al. 2017) was applied for joint estimation. Implementation Details All experiments are conducted on one A100 GPU with the PyTorch deep learning framework. All models are trained with the SGD optimizer with momentum 0.9 and weight decay 5e-4. The initial learning rate is set to 0.1, and each model is trained for 100 epochs with a cosine annealing learning rate scheduler. The batch size is set to 128. To accelerate training, the input temporal length is set to 64 in the ablation study; for a fair comparison, it is set to 100 when comparing with SOTAs. The pre-processing follows the settings detailed in (Duan et al. 2022b). Ablation Study In this section, the proposed two semantic-based adaptive graph modules are analyzed on two benchmarks: NTU60-XSub and NTU120-XSet.

Table 1: Generalization of the proposed semantic module. +NE indicates that the adaptive graph used in the spatial GCN is replaced by the semantic-based adaptive graph.
Method | NTU60-XSub | NTU120-XSet
2s-GCN (Shi et al. 2019b) | 89.5 | 86.0
2s-GCN+NE | 90.1 | 86.1
CTR-GCN (Chen et al. 2021) | 89.6 | 86.0
CTR-GCN+NE | 90.4 | 86.5

Table 2: Effectiveness of DS-GCN. The proposed DS-GCN achieves the best performance.
Method | NTU60-XSub | NTU120-XSet
ST-GCN (Si et al. 2018) | 87.8 | 85.5
2s-GCN (Shi et al. 2019b) | 89.5 | 86.0
CTR-GCN (Chen et al. 2021) | 89.6 | 86.0
DS-GCN | 90.8 | 87.2

Table 3: Comparison of DS-GCN with different learnable weight schemes. DS-GCN_shared uses a shared λ for all branches; DS-GCN_B-wise uses an individual λ per branch.
Method | NTU60-XSub | NTU120-XSet
DS-GCN_shared | 90.1 | 86.8
DS-GCN_B-wise | 90.8 | 87.2

Table 4: Ablation on the edge/node type encoding. N denotes node type-aware encoding, E denotes edge type-aware encoding, and w/o (without) means the corresponding semantic encoding is replaced with the general branch.
Method | NTU60-XSub
DS-GCN w/o N&E | 90.0
DS-GCN w/o N | 90.5
DS-GCN w/o E | 90.4
DS-GCN | 90.8

Table 5: Exploration of the semantic encoding stage. DS-GCN w/o N&E uses no semantic module; DS-GCN_ini, DS-GCN_mid, and DS-GCN_end apply DS-GCN only in layers [1-4], [5-7], and [8-10], respectively; DS-GCN applies it in all layers.
Module | Encode stage | NTU60-XSub
DS-GCN w/o N&E | - | 90.0
DS-GCN_ini | [1-4] | 90.2
DS-GCN_mid | [5-7] | 90.7
DS-GCN_end | [8-10] | 90.5
DS-GCN | [1-10] | 90.8

Figure 4: Visualization of classification. The action indices are: taking on shoes (16), taking off shoes (17), and kicking something (24). (a) the confusion matrix for CTR-GCN; (b) the confusion matrix for CTR-GCN+NE. After encoding the semantic information, CTR-GCN+NE distinguishes kicking something from taking on/off shoes more accurately (e.g., errors reduce from 3% and 9% to 1% and 5%, respectively).
The joint coordinates are used as input, and three learnable correlation matrices are randomly initialized for skeleton topology modeling. Generalization of the Semantic Encoding Modules: To justify the generalization and efficiency of the proposed node/edge type-aware adaptive graph modules, several well-known topology-adaptive ST-GCN structures are used as backbones, and the node/edge type-aware adaptive graph modules are adapted to replace the Spatial-GCN in these backbones; here 2s-GCN (Shi et al. 2019b) and CTR-GCN (Chen et al. 2021) are used. For a fair comparison, the characteristic of the initial backbone is kept, where the three branches share the same structure. The node/edge type-aware adaptive graph modules are combined in series and then used as the Spatial-GCN in these backbones; the structural details are introduced in Supplementary 2. The results are shown in Table 1: after encoding the node/edge types in these backbones, the action recognition accuracy improves consistently. To analyze the classification performance in more detail, the confusion matrices of CTR-GCN and CTR-GCN+NE on NTU60-XSub are shown in Figure 4. Take the action of kicking something (index 24 in the confusion matrix) and the actions of taking on/off shoes (indices 16/17) as examples: these actions can all be described as relative movements between two parts of the body. Taking on/off shoes can be interpreted as the relative movement between arms and legs, whereas kicking something is the relative movement between the two legs. After encoding the semantic information, CTR-GCN+NE distinguishes kicking something from taking on/off shoes more accurately. Effectiveness of DS-GCN: To validate the effectiveness of the proposed dynamic semantic-based graph convolution, we compare the performance of DS-GCN with several ST-GCN variants. Vanilla ST-GCN (Si et al. 2018), 2s-GCN (Shi et al. 2019b), and CTR-GCN (Chen et al. 2021) are used as backbones in this experiment.

Table 6: Classification accuracy comparison against state-of-the-art methods. * denotes the SOTA CNN-based method; missing entries are results not reported for that benchmark.
Module | NTU60-XSub | NTU60-XView | NTU120-XSub | NTU120-XSet | Kinetics-400
ST-GCN (Si et al. 2018) | 81.5 | 88.3 | 70.7 | 73.2 | 30.7
SGN (Zhang et al. 2020) | 86.6 | 93.4 | - | - | -
AS-GCN (Li et al. 2019) | 86.8 | 94.2 | 78.3 | 79.8 | 34.8
RA-GCN (Song et al. 2020) | 87.3 | 93.6 | 78.3 | 79.8 | 34.8
2s-GCN (Shi et al. 2019b) | 88.5 | 95.1 | - | - | -
DGNN (Shi et al. 2019a) | 89.9 | 96.1 | - | - | -
FGCN (Yang et al. 2021) | 90.2 | 96.3 | 85.4 | 87.4 | -
ShiftGCN (Cheng et al. 2020b) | 90.7 | 96.5 | 85.9 | 87.6 | -
DSTA-Net (Shi et al. 2020a) | 91.5 | 96.4 | 86.6 | 89.0 | -
MS-G3D (Liu et al. 2020) | 91.5 | 96.2 | 86.9 | 88.4 | 38.0
CTR-GCN (Chen et al. 2021) | 92.4 | 96.8 | 88.9 | 90.6 | -
ST-GCN++ (Duan et al. 2022b) | 92.6 | 97.4 | 88.6 | 90.8 | 49.1
PoseConv3D (Duan et al. 2022c)* | 94.1 | 97.1 | 86.9 | 90.3 | 47.7
DS-GCN | 93.1 | 97.5 | 89.2 | 91.1 | 50.6

For all models, the joint coordinates are used as input, and the pre-defined graph is set as a fully trainable adjacency matrix. The results in Table 2 show that the topology-adaptive graph convolution networks (2s-GCN, CTR-GCN, and DS-GCN) achieve better performance than the topology-fixed graph convolution network (ST-GCN). Compared with CTR-GCN (Chen et al.
2021), the proposed DS-GCN has a smaller number of parameters but achieved a 1.2% Top1-acc increase in NTU60 Xsub and NTU120 Xset. The comparison of model size can be seen in Supplementary 3. This proves that the proposed DS-GCN is more effective in modeling the skeleton topology. Configuration Exploration. In this section, the learnable weight λ is analyzed. Different to utilize one shared λ in other topology-adaptive graph learning, the individual refinement weight is learned for each branch in DS-GCN. To justify the branch-wise λ, we trained the DS-GCN in two ways: with the shared λ and with the individual λ in a branch-wise manner. The result is shown in Table 3, where it can be seen that the DS-GCN learned in a branch-wise weight manner has a stable improvement. Ablation On The Edge/Node Type Encoding: In this section, the effectiveness of different configurations of DSGCN was explored. In practice, to test the effects of nodetype encoding, we replaced the node type-aware adaptive branch with the general branch, in this case, two branches were utilized to model the general adaptive graph, and one branch was utilized to model the edge type-aware adaptive graph. Similarly, the edge-type adaptive branch was replaced by the general branch to validate the effect of edge-type encoding. In Table 4, we can observe that the node/edge typeaware adaptive graph has a positive effect on the recognition performance, and combining both semantic branches can achieve the best performance. Top1-acc of the DS-GC outperforms the backbone with no semantic encoding by 0.8%. Exploration Of The Semantic Encoding Stage. In practice, there are ten basic blocks in ST-GCN, as we described above that the proposed semantic encoding module is flexible that can be applied in different depths of the ST-GCN. Thus in order to explore the importance of semantic information encoding in various depths, the DS-GCN was utilized in different stages alone for comparison. Specifically, we split the whole DS-GCN into three stages: the initial stage represented as DS-GCNini, which contains the layer from 1st to 4th, the middle stage DS-GCNmid with layer 5th-7th, and the end stage DS-GCNend with 8th-10th, then the DS-GCN was applied in each stage respectively. For instance, to justify the effect of semantic information on the initial stage, the DS-GCN is only utilized in layer 1st to 4th, in the rest block, all semantic-based modules are replaced by the general adaptive branch. The results in Table. 5 show that semantic encoding has a positive effect on human action recognition irrespective of the stage where the DS-GCN was used. When utilizing the DS-GCN in all layers, the model shows the best performance. Comparing within three stages, the middle stage outperforms the others, which can be explained as the over-smoothing problems. When the layer goes deeper, the semantic information encoded in the initial stage might be over-smoothed during the aggregation process. If only encoding the semantic information in the end stage, the feature of the node was already over-smoothed after the former stages’ aggregation. Thus the correlation matrix plays weakly effect on feature updating, which limits the ability of the semantic encoding module. Comparisons With the State-of-the-Art Multi-stream fusion proposed in (Shi et al. 2020b) has been proven to be advanced for skeleton-based action recognition and has been adapted in many state-of-the-art methods (Chen et al. 2021; Duan et al. 2022a; Shi et al. 2019b). 
Thus, for a fair comparison, the DS-GCN was trained on four modalities respectively, the result for each modality was reported in Supplementary 4, and the final result was obtained by summering the probability from each stream. The performance of the DS-GCN was compared with SOTA methods on NTURGB+D 60 (120) and Kinetics 400 in Table 6. It can be observed that the proposed DS-GCN outperforms all existing methods. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6231 Acknowledgments This work was supported by the EPSRC (Engineering and Physical Sciences Research Council) Centre for Doctoral Training in Distributed Algorithms [Grant Ref: EP/S023445/1]. This work was supported in part by the National Natural Science Foundation of China (62272444); Zhejiang Provincial Natural Science Foundation (LR22F020008); and Youth Innovation Promotion Association CAS (2021298). References Bilen, H.; Fernando, B.; Gavves, E.; and Vedaldi, A. 2017. Action recognition with dynamic image networks. IEEE transactions on pattern analysis and machine intelligence, 40(12): 2799–2813. Caetano, C.; Sena, J.; Br´emond, F.; Dos Santos, J. A.; and Schwartz, W. R. 2019. Skelemotion: A new representation of skeleton joint sequences based on motion information for 3d action recognition. In 2019 16th IEEE international conference on advanced video and signal based surveillance (AVSS), 1–8. IEEE. Cao, Z.; Simon, T.; Wei, S.-E.; and Sheikh, Y. 2017. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7291–7299. Carreira, J.; and Zisserman, A. 2017. Quo vadis, action recognition? a new model and the kinetics dataset. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 6299–6308. Chen, Y.; Zhang, Z.; Yuan, C.; Li, B.; Deng, Y.; and Hu, W. 2021. Channel-wise topology refinement graph convolution for skeleton-based action recognition. In Proceedings of the IEEE/CVF ICCV, 13359–13368. Cheng, K.; Zhang, Y.; Cao, C.; Shi, L.; Cheng, J.; and Lu, H. 2020a. Decoupling gcn with drop graph module for skeleton-based action recognition. In European Conference on Computer Vision, 536–553. Springer. Cheng, K.; Zhang, Y.; He, X.; Chen, W.; Cheng, J.; and Lu, H. 2020b. Skeleton-based action recognition with shift graph convolutional network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 183–192. Du, Y.; Wang, W.; and Wang, L. 2015. Hierarchical recurrent neural network for skeleton based action recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1110–1118. Duan, H.; Wang, J.; Chen, K.; and Lin, D. 2022a. DG-STGCN: Dynamic Spatial-Temporal Modeling for Skeleton-based Action Recognition. Duan, H.; Wang, J.; Chen, K.; and Lin, D. 2022b. PYSKL: Towards Good Practices for Skeleton Action Recognition. Duan, H.; Zhao, Y.; Chen, K.; Lin, D.; and Dai, B. 2022c. Revisiting skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2969–2978. Gaur, U.; Zhu, Y.; Song, B.; and Roy-Chowdhury, A. 2011. A “string of feature graphs” model for recognition of complex activities in natural videos. In 2011 International Conference on Computer Vision, 2595–2602. Gui, L.-Y.; Zhang, K.; Wang, Y.-X.; Liang, X.; Moura, J. M. F.; and Veloso, M. 2018. Teaching Robots to Predict Human Motion. 
In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 562–567. Ke, Q.; Bennamoun, M.; An, S.; Sohel, F.; and Boussaid, F. 2017. A new representation of skeleton sequences for 3d action recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3288–3297. Li, M.; Chen, S.; Chen, X.; Zhang, Y.; Wang, Y.; and Tian, Q. 2019. Actional-structural graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 3595–3603. Liu, J.; Shahroudy, A.; Perez, M.; Wang, G.; Duan, L.-Y.; and Kot, A. C. 2019. Ntu rgb+ d 120: A large-scale benchmark for 3d human activity understanding. IEEE transactions on pattern analysis and machine intelligence, 42(10): 2684–2701. Liu, J.; Wang, G.; Duan, L.-Y.; Abdiyeva, K.; and Kot, A. C. 2017. Skeleton-based human action recognition with global context-aware attention LSTM networks. IEEE Transactions on Image Processing, 27(4): 1586–1599. Liu, Z.; Zhang, H.; Chen, Z.; Wang, Z.; and Ouyang, W. 2020. Disentangling and unifying graph convolutions for skeleton-based action recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 143–152. Shahroudy, A.; Liu, J.; Ng, T.-T.; and Wang, G. 2016. Ntu rgb+ d: A large scale dataset for 3d human activity analysis. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1010–1019. Shi, L.; Zhang, Y.; Cheng, J.; and Lu, H. 2019a. Skeletonbased action recognition with directed graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7912–7921. Shi, L.; Zhang, Y.; Cheng, J.; and Lu, H. 2019b. Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 12026– 12035. Shi, L.; Zhang, Y.; Cheng, J.; and Lu, H. 2020a. Decoupled spatial-temporal attention network for skeleton-based action-gesture recognition. In Proceedings of the Asian Conference on Computer Vision. Shi, L.; Zhang, Y.; Cheng, J.; and Lu, H. 2020b. Skeletonbased action recognition with multi-stream adaptive graph convolutional networks. IEEE Transactions on Image Processing, 29: 9532–9545. Si, C.; Jing, Y.; Wang, W.; Wang, L.; and Tan, T. 2018. Skeleton-based action recognition with spatial reasoning and temporal stack learning. In Proceedings of the European conference on computer vision (ECCV), 103–118. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6232 Simonyan, K.; and Zisserman, A. 2014. Two-stream convolutional networks for action recognition in videos. Advances in neural information processing systems, 27. Song, Y.-F.; Zhang, Z.; Shan, C.; and Wang, L. 2020. Richly activated graph convolutional network for robust skeletonbased action recognition. IEEE Transactions on Circuits and Systems for Video Technology, 31(5): 1915–1925. Sun, K.; Xiao, B.; Liu, D.; and Wang, J. 2019. Deep highresolution representation learning for human pose estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5693–5703. Tran, D.; Bourdev, L.; Fergus, R.; Torresani, L.; and Paluri, M. 2015. Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the IEEE international conference on computer vision, 4489–4497. Vemulapalli, R.; Arrate, F.; and Chellappa, R. 2014. 
Human action recognition by representing 3d skeletons as points in a lie group. In Proceedings of the IEEE conference on computer vision and pattern recognition, 588–595. Wang, H.; and Schmid, C. 2013. Action recognition with improved trajectories. In Proceedings of the IEEE international conference on computer vision, 3551–3558. Wang, H.; and Wang, L. 2017. Modeling temporal dynamics and spatial configurations of actions using two-stream recurrent neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 499–508. Wang, J.; Liu, Z.; Wu, Y.; and Yuan, J. 2014. Learning Actionlet Ensemble for 3D Human Action Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5): 914–927. Yan, S.; Xiong, Y.; and Lin, D. 2018. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Thirty-second AAAI conference on artificial intelligence. Yang, H.; Yan, D.; Zhang, L.; Sun, Y.; Li, D.; and Maybank, S. J. 2021. Feedback graph convolutional network for skeleton-based action recognition. IEEE Transactions on Image Processing, 31: 164–175. Zhang, P.; Lan, C.; Zeng, W.; Xing, J.; Xue, J.; and Zheng, N. 2020. Semantics-guided neural networks for efficient skeleton-based human action recognition. In proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 1112–1121. Zhang, S.; Liu, X.; and Xiao, J. 2017. On geometric features for skeleton-based action recognition using multilayer lstm networks. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), 148–157. IEEE. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6233 | 2024 | 692 |
18,509 | G2P-DDM: Generating Sign Pose Sequence from Gloss Sequence with Discrete Diffusion Model Pan Xie1, Qipeng Zhang1, Peng Taiying1, Hao Tang2*, Yao Du1, Zexian Li1 1Beihang University 2Carnegie Mellon University {panxie, zhangqipeng, taiyi, duyaoo}@buaa.edu.cn, [email protected], [email protected] Abstract The Sign Language Production (SLP) project aims to automatically translate spoken languages into sign sequences. Our approach focuses on the transformation of sign gloss sequences into their corresponding sign pose sequences (G2P). In this paper, we present a novel solution for this task by converting the continuous pose space generation problem into a discrete sequence generation problem. We introduce the Pose-VQVAE framework, which combines Variational Autoencoders (VAEs) with vector quantization to produce a discrete latent representation for continuous pose sequences. Additionally, we propose the G2P-DDM model, a discrete denoising diffusion architecture for length-varied discrete sequence data, to model the latent prior. To further enhance the quality of pose sequence generation in the discrete space, we present the CodeUnet model to leverage spatial-temporal information. Lastly, we develop a heuristic sequential clustering method to predict variable lengths of pose sequences for corresponding gloss sequences. Our results show that our model outperforms stateof-the-art G2P models on the public SLP evaluation benchmark. For more generated results, please visit our project page: https://slpdiffusier.github.io/g2p-ddm. Introduction Sign Language Production (SLP) is a crucial task for the Deaf community, involving the provision of continuous sign videos for spoken language sentences. Due to the distinct linguistic systems between sign languages and spoken languages (Pfau, Salzmann, and Steinbach 2018), sign languages have different sign orders, making direct alignment mapping between them challenging. To address this issue, prior works first translate spoken languages into glosses1, followed by generating sign pose sequences based on the gloss sequences (G2P)(Saunders, Bowden, and Camg¨oz 2020; Saunders, Camg¨oz, and Bowden 2020). Finally, the generated sign pose sequence can optionally be used to produce a photorealistic sign video(Saunders, Camgoz, and Bowden 2020). As such, G2P is the crucial procedure of this task, and this paper focuses on it. *Corresponding Author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 1Sign glosses are minimal lexical items that match the meaning of signs and correspond to spoken language words. t = 0 t = T/4 t = T/2 t = 3T/4 t = T Figure 1: The forward diffusion process applied to a pose sequence. The first line (t=0) represents the original pose sequence. From top to bottom (t from 0 to T), the level of noise increases gradually. Existing G2P methods can be broadly categorized as autoregressive (Saunders, Bowden, and Camg¨oz 2020; Saunders, Camg¨oz, and Bowden 2020) or nonautoregressive (Huang et al. 2021), depending on their decoding strategies. Autoregressive models generate the next pose frame based on previous frames, utilizing the teacher forcing strategy (Williams and Zipser 1989). However, during inference, recurrent decoding can lead to error propagation over time due to exposure bias (Schmidt 2019). To overcome this bottleneck, non-autoregressive methods have been proposed to enable the decoder to generate all target predictions simultaneously (Gu et al. 
2018; Ghazvininejad et al. 2019). Huang et al. (Huang et al. 2021) introduced a nonautoregressive G2P model that generates sign pose sequences in a one-shot decoding scheme, using an External Aligner (EA) for sequence alignment learning. Inspired by the remarkable results achieved by the recently developed Discrete Denoising Diffusion Probabilistic Models (D3PMs) (Hoogeboom et al. 2021; Austin et al. 2021; Gu et al. 2021) for language and vector quantized image generation, we propose a two-stage approach in this paper. Our method involves transforming the continuous pose sequence into discrete tokens and modeling the discrete prior space using the denoising diffusion architecture. The proposed method is an iterative non-autoregressive approach that performs parallel refinement on the generated results, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6234 demonstrating expressive generative capacity. We elaborate our approach in three steps. Firstly, we represent the pose sequence as sequential latent codes using a vector quantized variational autoencoder (VQ-VAE). Unlike image VQ-VAE (Esser, Rombach, and Ommer 2021; van den Oord, Vinyals, and Kavukcuoglu 2017), we propose a specific architecture, Pose-VQVAE, that divides the sign skeleton into three local point patches representing pose, right hand, and left hand separately. Additionally, we use a multi-codebook to maintain separated latent embedding space for each local patch, resulting in stronger feature semantics. This approach eases the difficulty in constructing mappings between the sign pose feature and the codebook feature, thus improving reconstruction quality. Next, we present G2P-DDM, which extends the standard discrete diffusion models (Austin et al. 2021; Gu et al. 2021) to model the sequential alignments between sign glosses and quantized codes of pose sequences. This approach employs a discrete diffusion model that samples the data distribution by reversing a forward diffusion process that gradually corrupts the input via a fixed Markov chain. The corruption process, depicted in Figure 1, achieved by adding noise data (e.g., [MASK] token), draws our attention to the mask-based generative model, Mask-Predict (Ghazvininejad et al. 2019), which has been shown to be a variant of the diffusion model (Austin et al. 2021). We explore two variants of the diffusion model for variable-length sequence generation. To better leverage the spatial and temporal information of the quantized pose sequences, we introduce a new architecture, CodeUnet, which is a ”fully transformer network” designed for discrete tokens. Through iterative refinements and improved spatial-temporal modeling, our model achieves a higher quality of conditional pose sequence generation. Finally, we address the challenging task of length prediction in non-autoregressive G2P models, as the corresponding lengths of different sign glosses are variable. To tackle this issue, we propose a novel clustering method for this specific sequential data that local adjacent frames should belong to a cluster. Taking advantage of the meaningful learned codes in the first stage, we apply the k-nearest-neighbor-based density peaks clustering algorithm (Du, Ding, and Jia 2016; Zeng et al. 2022) to locate peaks with higher local density. We then design a heuristic algorithm to find the boundary between two peaks based on their semantic distance with the two peak codes. 
Finally, we leverage the length of each gloss as additional supervised information to predict the length of the gloss sequence during inference. Our proposed model demonstrates significant improvement in the generation quality on the challenging RWTHPHOENIX-WEATHER-2014T (Camg¨oz et al. 2018) dataset. The evaluation of conditional sequential generation is performed using a back-translated model. Extensive experiments show that our model increases the WER score from 82.01%(Huang et al. 2021) to 77.26% for the generated pose sequence to gloss sequence, and the BLEU score from 6.66(Huang et al. 2021) to 7.50 for the generated pose sequence to spoken language. Related Works Sign Language Production. Most sign language works focus on sign language recognition (SLR) and translation (SLT) (Camg¨oz et al. 2018, 2020; Camg¨oz et al. 2020; Zhou et al. 2022; Xie, Zhao, and Hu 2021; Hu et al. 2021), aiming to translate the video-based sign language into text-based sequences. And few attempts have been made for the more challenging task of sign language production (SLP) (Stoll et al. 2018; Xiao, Qin, and Yin 2020). Stoll et al. proposed the first deep SLP model, which adopts the three-step pipeline. In the core process for G2P, they learn the mapping between the sign glosses and the skeleton poses via a look-up table. After that, B. Saunders et al. (Saunders, Camg¨oz, and Bowden 2020) proposed the progressive transformer to learn the mapping with an encoder-decoder architecture and generate the sign pose in an autoregressive manner in the inference. Further, B. Saunders et al. (Saunders, Camgoz, and Bowden 2020) proposed a Mixture Density Network (MDN) to generate the pose sequences condition on the sign glosses and utilize a GAN-based method (Chan et al. 2019) to produce the photo-realistic sign language video. B. Saunders et al. (Saunders, Camg¨oz, and Bowden 2021) translated the spoken language to sign language representation with an autoregressive transformer network and used the gloss information to provide additional supervision. Then they proposed a Mixture of Motion Primitives(MoMP) architecture to combine distinct motion primitives to produce a continuous sign language sequence. B. Saunders et al. (Saunders, Camgoz, and Bowden 2022) propose a novel Frame Selection Network (FS-NET) to improve the temporal alignment of interpolated dictionary signs and SIGNGAN, a pose-conditioned human synthesis model that produces photo-realistic sign language videos direct from skeleton pose. Although they achieved state-of-the-art results, they used an additional sign language dictionary (Hanke et al. 2010), meaning that each sign vocabulary has a corresponding pose sequence. Therefore, this paper did not compare their results. Different from these methods, Huang et al. (Huang et al. 2021) proposed a non-autoregressive model to parallelly generate the sign pose sequence avoiding the error accumulation problem. They applied the monotonic alignment search (Kim et al. 2020) to generate the alignment lengths of each gloss. Our model also explores a non-autoregressive method with a diffusion strategy, and the adopted diffusion model architecture allows us to refine the results with multiple iterations. Discrete Diffusion Models. Most previous works focus on Gaussian diffusion processes that operate in continuous state spaces (Dhariwal and Nichol 2021; Ho, Jain, and Abbeel 2020; Ho et al. 2022; Nichol and Dhariwal 2021; Rombach et al. 2021). The discrete diffusion model is first introduced in (Sohl-Dickstein et al. 
2015), and it is applied to text generation in Argmax Flow (Hoogeboom et al. 2021). To improve and extend the discrete diffusion model, D3PM (Austin et al. 2021) used a structured categorical corruption process to shape data generation and embed structure in the forward process. VQ-Diffusion (Gu et al. 2021) applied the discrete diffusion model to conditional vector quantized image synthesis with a mask-and-replace diffusion strategy. Upon this work, we extend this diffusion strategy with more special The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6235 ··· SPL 12 27 103 72 99 180 16 47 98 26 23 48 18 90 134 Transformer Points Embs. Encoder Decoder Linear Transformer Multi-codebook Figure 2: The architecture of the first stage model Pose-VQVAE for learning the discrete latent codes. states to length-varied discrete sequence data and introduce an Unet-like “fully transformer” network to model spatialtemporal space. The Proposed Method Our paper aims to improve the generation of conditional sign pose sequences through an enhanced discrete diffusion model. Our approach consists of three key components: the Pose-VQVAE for latent code learning, the G2P-DDM with CodeUnet for prior learning to generate discrete codes, and a sequential-KNN algorithm for length prediction in a nonautoregressive approach. Pose VQ-VAE In this section, we introduce how to tokenize the points of a sign pose skeleton into a set of discrete tokens. A naive approach is to treat per point as one token. However, such a points-wise reconstruction model tends to have tremendous computational cost due to the quadratic complexity of selfattention in Transformers. On the other hand, since the details of hand points are essential for sign pose understanding, treating all the points into one token leads to remarkably inferior reconstruction performance. To achieve a better trade-off between quality and speed, we propose a simple yet efficient implementation that groups the points of a sign skeleton into three local patches, representing pose, right hand, and left hand separately. Figure 2 illustrates the framework of our proposed Pose-VQVAE model with the following submodules. Encoder. Given a sign pose sequence of N frames s = (s1, s2, ..., sn, ..., sN) ∈RN×J×K, where {xj n}J j=1 presents a single sign skeleton containing J joints and K denotes the feature dimension for human joint data. We separate these points into three local paths, sp ∈RN×(Jp×K), sr ∈ RN×(Jr×K), and sl ∈RN×(Jl×K) for the pose, right hand, and left hand, respectively, where J = Jp + Jr + Jl. In the encoder module E(e|s), we first transform these three point sequences into feature sequences by simple three linear layers and concatenate them together. Then we apply a spatial-temporal Transformer network to learn the long-range interactions within the sequential point features. Finally, we arrive at the encoded features {en ∈R3×h} N n=1. Multi-Codebook. Similar to image VQ-VAE (van den Oord, Vinyals, and Kavukcuoglu 2017), we take the encoded features as inputs and convert them into discrete tokens. Specifically, we perform the nearest neighbors method Q(z|e) to quantize the point feature to the quantized features {zn ∈R3×h} N n=1. The quantized features are maintained by three separate codebooks, where each codebook is of size V . Decoder. The decoder D(˜s|z) receives the quantized features as inputs and also applies spatial-temporal Transformer to get the output features {on ∈R3×h} N n=1. 
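Before completing the decoder description, the tokenizer Q(z|e) outlined above can be sketched as follows. The 2,048-entry codebooks and 256-dimensional features match the model settings reported later, but the nearest-neighbour lookup, the straight-through estimator, and the 0.25 commitment weight are assumptions about how the quantizer might be realized, not the released implementation.

import torch
import torch.nn as nn

class MultiCodebookQuantizer(nn.Module):
    """Sketch of the tokenizer Q(z|e): one codebook per local patch (pose, right hand, left hand)."""
    def __init__(self, num_parts=3, codebook_size=2048, dim=256):
        super().__init__()
        self.codebooks = nn.Parameter(torch.randn(num_parts, codebook_size, dim) * 0.02)

    def forward(self, e):
        # e: (B, N, P, D) encoder features for N frames and P = 3 local patches.
        b, n, p, d = e.shape
        codes, quantized = [], []
        for k in range(p):
            flat = e[:, :, k].reshape(-1, d)
            dist = torch.cdist(flat, self.codebooks[k])           # distances to every codebook entry
            idx = dist.argmin(dim=-1)                             # nearest-neighbour code index
            quantized.append(self.codebooks[k][idx].view(b, n, d))
            codes.append(idx.view(b, n))
        z_q = torch.stack(quantized, dim=2)                       # (B, N, P, D) quantized features
        z_st = e + (z_q - e).detach()                             # straight-through estimator for the encoder
        # Codebook and commitment terms with stop-gradients, mirroring ||sg[e]-z|| + beta*||sg[z]-e||.
        vq_loss = (e.detach() - z_q).pow(2).mean() + 0.25 * (z_q.detach() - e).pow(2).mean()
        return z_st, torch.stack(codes, dim=2), vq_loss

quantizer = MultiCodebookQuantizer()
e = torch.randn(2, 10, 3, 256)                                    # 2 sequences, 10 frames, 3 patches
z_st, codes, vq_loss = quantizer(e)
print(z_st.shape, codes.shape, float(vq_loss))                    # (2, 10, 3, 256) (2, 10, 3) ...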
Returning to the decoder: we separate the output features into the three sub-skeleton streams and use a structured prediction layer (SPL) (Aksan, Kaufmann, and Hilliges 2019) P(˜s|o) to reconstruct the corresponding sub-skeletons ˜s_p ∈ R^{N×(J_p×K)}, ˜s_r ∈ R^{N×(J_r×K)}, and ˜s_l ∈ R^{N×(J_l×K)}. We adopt the SPL to rebuild the skeleton from features because it explicitly models the spatial structure of the human skeleton and the spatial dependencies between joints. The hierarchy chains of the pose, right-hand, and left-hand skeletons are given in the Appendix. Training. The encoder E(e|s), tokenizer Q(z|e), and decoder D(˜s|z) can be trained end-to-end via the following loss function:

L_Pose-VQVAE = ||s_p − ˜s_p|| + ||s_r − ˜s_r|| + ||s_l − ˜s_l|| + ||sg[e] − z|| + β||sg[z] − e||,   (1)

where sg[·] stands for the stop-gradient operation. G2P-DDM with CodeUnet To allow conditional sampling, a discrete diffusion model is trained on the latent codes obtained from the Pose-VQVAE model. Figure 3 shows the architecture of our proposed G2P-DDM, which models the latent space in an iterative non-autoregressive manner. Figure 3: Our approach uses a discrete diffusion model for conditional sign pose sequence generation: each quantized code is randomly masked or replaced, and a CodeUnet model is trained to restore the original data (the figure depicts the gloss-sequence Transformer encoder, the length predictor, and the N×3 → N/2×3 → N/4×3 contracting/expansive path of CodeUnet). Given a sequence of latent codes x_0 ∈ R^{N×3} obtained from the vector quantized model, x^{(i,j)}_0 ∈ {1, 2, ..., V} at location (i, j) represents the index within the codebook. The diffusion process corrupts the original data x_0 via a fixed Markov chain q(x_t|x_{t−1}) by continuously adding a small amount of noise. After a fixed number of timesteps T, it produces a sequence of increasingly noisy data x_1, ..., x_T with the same dimensions as x_0, and x_T becomes a pure noise sample. For scalar discrete variables with V categories, x^{(i,j)}_t ∈ [1, V], the forward transition probabilities from x_{t−1} to x_t can be represented by matrices [Q_t]_{mn} = q(x_t = m | x_{t−1} = n) ∈ R^{V×V}. Note that we omit the superscripts (i, j) to avoid confusion. The forward diffusion process can then be written as:

q(x_t | x_{t−1}) = x_t^T Q_t x_{t−1},   (2)

where x_t ∈ R^{V×1} is the one-hot version of x_t and Q_t x_{t−1} is the categorical distribution for x_t. A nice property of the above Markov diffusion process is that we can sample x_t at any timestep directly from x_0 as:

q(x_t | x_0) = x_t^T Q̄_t x_0,  with  Q̄_t = Q_t · · · Q_1.   (3)

D3PM (Austin et al. 2021) formulates the transition matrix Q_t ∈ R^{V×V} by introducing a small amount of uniform noise into the categorical distribution. Based on D3PM, VQ-Diffusion (Gu et al. 2021) proposes a mask-and-replace diffusion strategy that not only replaces previous values but also inserts a [MASK] token to explicitly mark the tokens that have been replaced. We extend this mask-and-replace strategy to our variable-length sequence modeling. Since the lengths of pose sequences may differ within a minibatch, we add two special tokens, [MASK] and [PAD], so each token has V + 2 states.
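The matrix form of the forward process (Eqs. 2-3) can be exercised with a small numerical sketch; the uniform-noise schedule and the tiny vocabulary below are purely illustrative, and the mask-and-replace transition actually used in this work is specified next.

import torch

def cumulative_transition(q_list):
    """Q_bar_t = Q_t ... Q_1 (Eq. 3) from a list of per-step transition matrices."""
    q_bar = [q_list[0]]
    for q in q_list[1:]:
        q_bar.append(q @ q_bar[-1])
    return q_bar

def sample_xt(x0, q_bar_t):
    """Sample x_t ~ q(x_t | x_0): column x_0 of Q_bar_t is the categorical distribution over x_t."""
    probs = q_bar_t[:, x0].T
    return torch.multinomial(probs, 1).squeeze(-1)

V, T = 4, 10                                           # toy vocabulary size and number of steps
betas = torch.linspace(0.01, 0.1, T)
q_list = []
for beta in betas:                                     # D3PM-style uniform-noise transition matrices
    q = torch.full((V, V), beta.item() / V) + torch.eye(V) * (1.0 - beta.item())
    q_list.append(q)                                   # each column sums to 1: (1 - beta) + V * beta / V
q_bar = cumulative_transition(q_list)
x0 = torch.randint(0, V, (16,))                        # a length-16 sequence of latent codes
xt = sample_xt(x0, q_bar[-1])
print((xt != x0).float().mean())                       # fraction of tokens corrupted after T steps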
The mask-and-replace diffusion process can be defined as follows: at each step, an ordinary token keeps its value with probability α_t, is uniformly resampled with probability Vβ_t, and is replaced with the [MASK] token with probability γ_t = 1 − α_t − Vβ_t. Note that the [MASK] and [PAD] tokens always keep their own states. The transition matrix Q_t ∈ R^{(V+2)×(V+2)} is formulated as:

Q_t =
[ α_t + β_t   β_t         ···   β_t         0   0 ]
[ β_t         α_t + β_t   ···   β_t         0   0 ]
[ ⋮           ⋮           ⋱     ⋮           ⋮   ⋮ ]
[ β_t         β_t         ···   α_t + β_t   0   0 ]
[ γ_t         γ_t         ···   γ_t         1   0 ]
[ 0            0           ···   0           0   1 ],   (4)

where the last two rows/columns correspond to the [MASK] and [PAD] states. Using the reparameterization trick, the categorical distribution of x_t can then be derived in closed form:

when x_0 ≠ V + 2:   (Q̄_t x_0)_{x_t} =
  ᾱ_t + β̄_t,  if x_t = x_0
  β̄_t,        if x_t ≠ x_0 and x_t ≤ V
  γ̄_t,        if x_t = V + 1
  0,           if x_t = V + 2
when x_0 = V + 2:   (Q̄_t x_0)_{x_t} =
  0,  if x_t ≠ V + 2
  1,  if x_t = V + 2,   (5)

where ᾱ_t = ∏_{i=1}^{t} α_i, γ̄_t = 1 − ∏_{i=1}^{t} (1 − γ_i), and β̄_t = (1 − ᾱ_t − γ̄_t)/V. Therefore, we can sample x_t directly with computation cost O(V). A visualized example of the diffusion process is shown in Figure 1: we first obtain the noised latent codes via q(x_t|x_0) and decode them to sign skeletons with the Pose-VQVAE decoder module. The reverse denoising process is similar to D3PM (Austin et al. 2021) and VQ-Diffusion (Gu et al. 2021); the relevant derivations are given in the appendix. CodeUnet for Model Learning. Most image diffusion models (Dhariwal and Nichol 2021; Ho, Jain, and Abbeel 2020; Song et al. 2021) adopt the Unet (Ronneberger, Fischer, and Brox 2015) as their architecture, since it is effective for data with spatial structure. However, directly applying the Unet to discrete sequence generation, e.g., text generation (Austin et al. 2021) and quantized image synthesis (Gu et al. 2021), causes information leakage, since convolution over adjacent tokens provides shortcuts for mask-based prediction (Nawrot et al. 2021). Therefore, Austin et al. (Austin et al. 2021) and Gu et al. (Gu et al. 2021) use token-wise Transformer frameworks to learn the distribution p_θ(˜x_0|x_t, c). In this work, to combine the advantages of Unet and Transformer networks, we propose a novel architecture, CodeUnet, to learn the spatial-temporal interactions for our quantized pose sequence generation. As shown in Figure 3, CodeUnet consists of a contracting path (left side), an expansive path (right side), and a middle module. The middle module is an encoder-decoder Transformer framework. The encoder consists of 6 Transformer blocks; it takes the gloss sentence as input and produces a conditional feature sequence. The decoder has two blocks, each with self-attention, cross-attention, a feed-forward network, and Adaptive Layer Normalization (AdaLN) (Ba, Kiros, and Hinton 2016; Gu et al. 2021). The AdaLN operator incorporates timestep information as AdaLN(h, t) = α_t LayerNorm(h) + β_t, where h denotes the intermediate activations and α_t and β_t are obtained from a linear projection of the timestep embedding. The contracting and expansive paths are hierarchical structures, and each level has two Transformer encoder blocks. For downsampling in the contracting path, given the features of the quantized pose sequence, e.g., h ∈ R^{N×3×d_model}, where d_model is the feature dimension, we first sample uniformly with stride 2 along the temporal dimension while keeping the spatial dimension unchanged. We then set the downsampled features as the query Q ∈ R^{N/2×3×d_model} and keep the key K and value V unchanged for the following attention network.
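A rough sketch of this strided-query attention downsampling is given below (the expansive path's upsampling, described next, follows the mirrored pattern). The head count, the residual connection, and the LayerNorm placement are assumptions, not the exact CodeUnet block.

import torch
import torch.nn as nn

class StridedQueryDownsample(nn.Module):
    """CodeUnet-style downsampling: stride-2 queries attend over the full-resolution token sequence."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, h):
        # h: (B, N, S, D) with N frames and S = 3 spatial tokens per frame.
        b, n, s, d = h.shape
        q = h[:, ::2]                                    # uniform stride-2 sampling along time: (B, N/2, S, D)
        q_flat = q.reshape(b, -1, d)                     # flatten (time, space) into one token axis
        kv_flat = h.reshape(b, -1, d)                    # keys/values keep the full resolution
        out, _ = self.attn(q_flat, kv_flat, kv_flat)     # cross-attention with downsampled queries
        out = self.norm(out + q_flat)                    # residual on the query stream
        return out.view(b, q.size(1), s, d)

down = StridedQueryDownsample()
h = torch.randn(2, 32, 3, 512)
print(down(h).shape)                                     # torch.Size([2, 16, 3, 512])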
In the upsampling of the expansive path, we directly repeat the feature 2 times as the query, while the key and value remain unchanged for the following attention network:
$$\forall n = 1, \dots, N,\quad Q^{up}_n = h_{n//2},\quad K^{up} = V^{up} = h, \quad (6)$$
where $\cdot//\cdot$ denotes floor division. Finally, a linear layer and a softmax layer are applied to make the prediction.
Length Prediction with Sequential-KNN
In this section, inspired by (Zeng et al. 2022), which merges tokens with similar semantic meanings from different locations, we propose a novel clustering algorithm to obtain the lengths of the corresponding glosses; a sketch of the procedure is given at the end of this section. Specifically, given a token sequence obtained from the Pose-VQVAE model, we compute the local density $\rho$ of each token according to its k-nearest neighbors:
$$\rho_i = \exp\Big(-\frac{1}{k}\sum_{z_j\in \mathrm{KNN}(z_i)}\|z_i - z_j\|_2^2\Big),\ \text{where } |i-j| \le l, \quad (7)$$
where $i$ and $j$ are positions in the sequence, and $l$ is a predefined hyperparameter indicating that we only consider a local region, since adjacent tokens are more likely to belong to the same gloss. $z_i$ and $z_j$ are the latent features of the $i$-th and $j$-th tokens, and $\mathrm{KNN}(z_i)$ denotes the k-nearest neighbors of the $i$-th token. We take the $M$ positions $\{p_1, \dots, p_M\}$ with the highest local densities as peaks, where $M$ is the length of the gloss sequence. Then, between two adjacent peaks, e.g., $p_1$ and $p_2$, we iterate sequentially from $p_1$ to $p_2$ and find the first position that is farther from $z_{p_1}$ and closer to $z_{p_2}$, which we take as the boundary. After finding these boundaries, we obtain the lengths of the contiguous pose sub-sequences for the corresponding glosses. As shown in Figure 3, we denote the obtained lengths as $\{L_1, \dots, L_M\}$, and the Transformer encoder for the gloss sequence is trained under the supervision of these lengths. For each gloss word, we predict a number in $[1, P]$, where $P$ is the maximum length of the target pose sequence. Mathematically, we formulate the classification loss of length prediction as:
$$\mathcal{L}_{\mathrm{len}} = \frac{\delta}{M}\sum_{i}^{M}\sum_{j}^{P} -\mathbb{1}(L_i = j)\log p(L_i \mid c). \quad (8)$$
In the training of the discrete diffusion model, $\mathcal{L}_{\mathrm{len}}$ is trained together with the diffusion objective, weighted by the coefficient $\delta$. At inference, we predict the length of each gloss, and their summation gives the length of the target pose sequence. In summary, we arrive at our proposed two-stage approach, G2P-DDM, with a first-stage Pose-VQVAE model and a second-stage discrete diffusion model with a length predictor.
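To make the Sequential-KNN procedure concrete, below is a hedged NumPy sketch of the local-density computation in Eq. (7) and the peak/boundary search described above. Taking the M globally densest positions as peaks and the exact handling of incomplete neighbor windows are assumptions; the paper's precise peak-selection rule may differ.

```python
# Sketch of Sequential-KNN length estimation: densities from Eq. (7), then
# boundaries between adjacent density peaks give the per-gloss lengths.
import numpy as np

def gloss_lengths(z, M, k=16, l=16):
    N = len(z)                                            # z: (N, d) latent token features
    dist = np.linalg.norm(z[:, None] - z[None, :], axis=-1) ** 2
    rho = np.empty(N)
    for i in range(N):
        lo, hi = max(0, i - l), min(N, i + l + 1)         # local window |i - j| <= l
        nn = np.sort(dist[i, lo:hi])[1:k + 1]             # k nearest neighbours (skip self)
        rho[i] = np.exp(-nn.mean())
    peaks = np.sort(np.argsort(-rho)[:M])                 # M densest positions, in temporal order
    bounds = []
    for p1, p2 in zip(peaks[:-1], peaks[1:]):
        for i in range(p1, p2 + 1):                       # first token closer to the next peak
            if dist[i, p2] < dist[i, p1]:
                bounds.append(i)
                break
    edges = [0] + bounds + [N]
    return [b - a for a, b in zip(edges[:-1], edges[1:])]  # lengths L_1 .. L_M
```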
Experiments
Datasets. We evaluate our G2P model on the RWTH-PHOENIX-WEATHER-2014T dataset (Camgöz et al. 2018). It is the only publicly available SLP dataset with parallel sign language videos, gloss annotations, and spoken language translations. This corpus contains 7,096 training samples (with 1,066 different sign glosses in the gloss annotations and 2,887 words in the German spoken language translations), 519 validation samples, and 642 test samples.
Evaluation Metrics. Following the widely-used setting in SLP (Saunders, Camgöz, and Bowden 2020), we adopt the back-translation method for evaluation. Specifically, we utilize the state-of-the-art SLT (Camgöz et al. 2020) model to translate the generated sign pose sequence back into a gloss sequence and spoken language, where its input is modified to be a pose sequence. We then compute BLEU (Papineni et al. 2002) between the back-translated spoken language and the ground-truth spoken language, and Word Error Rate (WER) between the gloss recognition results and the ground-truth gloss sequence. Although this evaluation method may introduce noise, it is currently the prevailing approach in SLP models, and we adopt it to ensure a fair comparison with existing methods.
Data Processing. Since the RWTH-PHOENIX-WEATHER-2014T dataset does not contain pose information, we generate the pose sequences as the ground truth. Following B. Saunders et al. (Saunders, Camgöz, and Bowden 2020), we extract 2D joint points from the sign videos using OpenPose (Cao et al. 2021) and lift the 2D joints to 3D with a skeletal model estimation improvement method (Zelinka and Kanis 2020). Finally, similar to (Stoll et al. 2018), we apply skeleton normalization to remove the skeleton size differences between different signers.
Model Settings. The Pose-VQVAE consists of an Encoder, a Tokenizer, and a Decoder. The Encoder contains a linear layer that transforms the pose points into a hidden feature of dimension 256, followed by a 3-layer Transformer module with divided space-time attention (Bertasius, Wang, and Torresani 2021). The Tokenizer maintains a codebook of size 2,048. The Decoder contains the same 3-layer Transformer module as the Encoder and an SPL layer to predict the structural sign skeleton. For the discrete diffusion model, we set the number of timesteps T to 100. All Transformer blocks of CodeUnet have dmodel = 512 and Ndepth = 2. The size of the local region l in Eq. (7) is set to 16, which is the average length of a gloss, and the number of nearest neighbors k is set to 16. We train the model on 8 NVIDIA Tesla V100 GPUs. We include all hyperparameter settings and implementation details in the Appendix.
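For concreteness, the sketch below shows one possible middle-decoder block of CodeUnet as described in the CodeUnet section: self-attention, cross-attention to the gloss features, a feed-forward network, and AdaLN(h, t) = α_t·LayerNorm(h) + β_t with α_t and β_t projected from the timestep embedding. The dmodel of 512 follows the settings above, while the head count, FFN width, and exact residual/normalization placement are assumptions for illustration only.

```python
# Hedged sketch of an AdaLN-conditioned CodeUnet decoder block.
import torch
import torch.nn as nn

class AdaLN(nn.Module):
    def __init__(self, d_model=512):
        super().__init__()
        self.norm = nn.LayerNorm(d_model, elementwise_affine=False)
        self.proj = nn.Linear(d_model, 2 * d_model)      # timestep embedding -> (alpha_t, beta_t)

    def forward(self, h, t_emb):                          # h: (B, L, d), t_emb: (B, d)
        alpha, beta = self.proj(t_emb).unsqueeze(1).chunk(2, dim=-1)
        return alpha * self.norm(h) + beta

class CodeUnetDecoderBlock(nn.Module):
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        self.ln1, self.ln2, self.ln3 = AdaLN(d_model), AdaLN(d_model), AdaLN(d_model)

    def forward(self, h, gloss_feat, t_emb):
        x = self.ln1(h, t_emb)
        h = h + self.self_attn(x, x, x)[0]                # self-attention over pose codes
        x = self.ln2(h, t_emb)
        h = h + self.cross_attn(x, gloss_feat, gloss_feat)[0]  # condition on gloss features
        return h + self.ffn(self.ln3(h, t_emb))
```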
Method | WER | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | DTW-MJE
PTR† (Saunders, Camgöz, and Bowden 2020) | 94.65 | 11.45 | 7.08 | 5.08 | 4.04 | 0.191
NAT-AT (Huang et al. 2021) | 88.15 | 14.26 | 9.93 | 7.11 | 5.53 | 0.177
NAT-EA (Huang et al. 2021) | 82.01 | 15.12 | 10.45 | 7.99 | 6.66 | 0.146
G2P-AR (Ours) | 85.27 | 14.26 | 10.02 | 7.57 | 5.94 | 0.172
G2P-MP (Ours) | 79.38 | 15.43 | 10.69 | 8.26 | 6.98 | 0.146
G2P-DDM (Ours) | 77.26 | 16.11 | 11.37 | 9.22 | 7.50 | 0.116
GT† | 55.93 | 24.12 | 16.77 | 12.80 | 10.58 | 0.0
Table 1: Quantitative results for the G2P task on the RWTH-PHOENIX-WEATHER-2014T test dataset. † indicates results provided by Huang et al. (Huang et al. 2021). Note that GT refers to the validation metrics obtained by using the original pose sequence extracted from the video and then applying the back-translation method.
Comparisons with State-of-the-Art Methods
Competing Methods. We compare our G2P-DDM with previous state-of-the-art G2P models. Progressive Transformer (PTR) (Saunders, Camgöz, and Bowden 2020) is the first SLP model to tackle the G2P problem in an autoregressive manner. Since it uses the ground-truth first sign pose frame and timing information, its reported results are not comparable to ours; we therefore adopt the results reported by Huang et al. (Huang et al. 2021). NAT-EA (Huang et al. 2021) proposes a non-autoregressive method that directly predicts the target pose sequence with an External Aligner (EA) to learn alignments between glosses and pose sequences. NAT-AT is the NAT model without EA that uses decoder-to-encoder attention to learn the alignments.
Quantitative Comparison. The comparison between our G2P-DDM and the competing methods is shown in Table 1. Note that the evaluation results of GT† are lower than the results reported for the state-of-the-art SLT (Camgöz et al. 2020) model. This is because the evaluation results obtained using the pose sequence are inferior to those obtained using photo-realistic content (Saunders, Camgoz, and Bowden 2022). The row G2P-AR refers to the vector quantized model with an autoregressive decoder. The row G2P-MP refers to the vector quantized model with the Mask-Predict (Ghazvininejad et al. 2019) strategy, which is also a variant of the discrete diffusion model (Austin et al. 2021). G2P-DDM refers to the vector quantized model with the mask-and-replace diffusion strategy. As indicated in Table 1, both diffusion-based models outperform the state-of-the-art G2P models, with relative improvements of 5.7% on WER (82.01 → 77.26) and 12.6% on BLEU-4 (6.66 → 7.50). This shows the effectiveness of the iterative mask-based non-autoregressive method on the vector quantized pose sequence. In addition, the Mask-Predict strategy is a mask-only strategy, similar to G2P-DDM with $\bar{\gamma}_T = 1$. Therefore, G2P-DDM achieves better performance than G2P-MP, which reflects that the mask-and-replace strategy is superior to the mask-only strategy.
Model Analysis and Discussions
We also investigate the effects of different components and design choices of our proposed model.
Figure 4: Visualization of latent vectors in the shared codebook and separated codebooks. In the separated codebook, the pink part is for the pose, and the green and orange parts represent the left and right hands, respectively.
Figure 5: Ablation on the design of the prediction model (test WER over training steps, Transformer vs. CodeUnet).
Figure 6: G2P qualitative results. We show several examples of generated sign pose sequences (e.g., for the glosses "BESONDERS OST DEUTSCH LAND MEHR REGEN ODER SCHNEE" and "DONNERSTAG SUEDOST WEITER WECHSELHAFT NORDWEST MEHR FREUNDLICH SONNE") compared with Pose-VQVAE and the previous G2P model (Saunders, Camgöz, and Bowden 2020). For readability, we sampled every 5 frames for a total of 16 frames. See the Appendix for more results.
local patches | codebook (size) | MSE (↓) | WER (↓)
joint | shared (2048) | 0.0242 | –
separated | shared (2048) | 0.0139 | 78.21
separated | shared (3072) | 0.0131 | 78.15
separated | separated (1024*3) | 0.0113 | 77.26
Table 2: Ablation on the design of the Pose-VQVAE reconstruction model.
Infer. Steps \ Training Steps | 20 | 50 | 100 | 200
20 | 79.53 | 79.40 | 78.25 | 78.62
50 | – | 79.31 | 77.69 | 78.23
100 | – | – | 77.26 | 78.18
200 | – | – | – | 78.15
Table 3: Ablation on training steps and inference steps.
Analysis of the Design of Pose-VQVAE. As shown in Table 2, we study the design of our Pose-VQVAE model. Pose-VQVAE-joint-shared means we compress all points into one token with one shared codebook. Pose-VQVAE-separated-shared means the points are separated into three local patches according to the structure of a sign skeleton, and the latent embedding space is maintained with one shared codebook. Pose-VQVAE-separated-separated means the points are separated into three local patches, and the latent vectors are maintained with three separate codebooks. Experimental results in Table 2 show that Pose-VQVAE-separated-separated achieves much better reconstruction (MSE) performance. This indicates that compressing all skeleton points into one token embedding is not advisable, leading to information loss.
Using separated latent feature spaces for different local regions, i.e., three codebooks, achieves better reconstruction quality and generation performance. To further explain this phenomenon, we visualize the latent space vectors of the shared codebook and the separated codebooks with t-SNE (Van der Maaten and Hinton 2008). As shown in Figure 4, the latent space vectors corresponding to the left-hand and right-hand local regions are easily confused because of their small distances. Therefore, separated codebooks reduce the difficulty of constructing mappings between the sign pose features and the codebook features, thus learning a better latent space and reconstruction quality. The second row of Figure 6 shows samples of sign pose sequences reconstructed by Pose-VQVAE-separated-separated.
CodeUnet vs. Transformer. For a fair comparison, we replace our CodeUnet with a Transformer network, keeping the other settings the same. As shown in Figure 5, the diffusion-based model with our CodeUnet achieves better performance on the back-translation evaluation. This suggests that the hierarchical structure of CodeUnet makes it particularly effective for data with spatial structure. Moreover, the curves in the figure show that CodeUnet converges faster than the Transformer. In addition, since sign pose sequences are temporally redundant, the compression of CodeUnet in the time dimension makes training more efficient.
Number of Timesteps. We compare the performance of the model with different numbers of training steps. As shown in the left two columns of Table 3, the results improve as the number of training steps increases from 20 to 100; increasing it further leads to saturation. Therefore, we set the number of training steps to 100 to trade off performance and speed. Besides, for the same number of training steps, increasing the number of inference steps yields better results.
Deaf User Evaluation
In our final user evaluation, we provided 50 pose sequences generated by our proposed method and by a baseline method (Saunders, Camgöz, and Bowden 2020), and asked 7 participants to compare which one was closer to the ground-truth pose sequence. The results showed that 319/350 comparisons preferred our method, while only 31/350 chose the baseline. This clearly demonstrates the superiority of our proposed approach.
Conclusion
We present a novel paradigm for text-based sign pose sequence generation. Specifically, we first devise a specific architecture, Pose-VQVAE, with a multi-codebook to learn semantic discrete codes by reconstruction. Then we extend the discrete diffusion method with special states to model the alignments between sign glosses and length-varied quantized code sequences. Further, a “fully Transformer” network, CodeUnet, is proposed to model the spatial-temporal information in the discrete space. Finally, we propose a sequential-KNN algorithm to learn the lengths of the corresponding glosses and then predict the length as a classification task. Our extensive experiments show that our proposed G2P-DDM framework outperforms previous state-of-the-art methods.
References
Aksan, E.; Kaufmann, M.; and Hilliges, O. 2019. Structured Prediction Helps 3D Human Motion Modelling. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, 7143–7152. IEEE. Austin, J.; Johnson, D. D.; Ho, J.; Tarlow, D.; and van den Berg, R. 2021.
Structured Denoising Diffusion Models in Discrete State-Spaces. In NeurIPS. Ba, J.; Kiros, J. R.; and Hinton, G. E. 2016. Layer Normalization. ArXiv preprint, abs/1607.06450. Bertasius, G.; Wang, H.; and Torresani, L. 2021. Is SpaceTime Attention All You Need for Video Understanding? In Meila, M.; and Zhang, T., eds., Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, 813–824. PMLR. Camg¨oz, N. C.; Hadfield, S.; Koller, O.; Ney, H.; and Bowden, R. 2018. Neural Sign Language Translation. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, 7784–7793. IEEE Computer Society. Camg¨oz, N. C.; Koller, O.; Hadfield, S.; and Bowden, R. 2020. Multi-channel Transformers for Multi-articulatory Sign Language Translation. ArXiv preprint, abs/2009.00299. Camg¨oz, N. C.; Koller, O.; Hadfield, S.; and Bowden, R. 2020. Sign Language Transformers: Joint End-to-End Sign Language Recognition and Translation. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, 10020– 10030. IEEE. Cao, Z.; Hidalgo, G.; Simon, T.; Wei, S.-E.; and Sheikh, Y. 2021. OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43: 172–186. Chan, C.; Ginosar, S.; Zhou, T.; and Efros, A. A. 2019. Everybody Dance Now. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, 5932–5941. IEEE. Dhariwal, P.; and Nichol, A. 2021. Diffusion Models Beat GANs on Image Synthesis. In NeurIPS. Du, M.; Ding, S.; and Jia, H. 2016. Study on density peaks clustering based on k-nearest neighbors and principal component analysis. Knowl. Based Syst., 99: 135–145. Esser, P.; Rombach, R.; and Ommer, B. 2021. Taming Transformers for High-Resolution Image Synthesis. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 12868–12878. Ghazvininejad, M.; Levy, O.; Liu, Y.; and Zettlemoyer, L. 2019. Mask-Predict: Parallel Decoding of Conditional Masked Language Models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 6112–6121. Hong Kong, China: Association for Computational Linguistics. Gu, J.; Bradbury, J.; Xiong, C.; Li, V. O. K.; and Socher, R. 2018. Non-Autoregressive Neural Machine Translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Gu, S.; Chen, D.; Bao, J.; Wen, F.; Zhang, B.; Chen, D.; Yuan, L.; and Guo, B. 2021. Vector Quantized Diffusion Model for Text-to-Image Synthesis. ArXiv preprint, abs/2111.14822. Hanke, T.; K¨onig, L.; Wagner, S.; and Matthes, S. 2010. DGS Corpus & Dicta-Sign: The Hamburg Studio Setup. In signlang@ LREC 2010, 106–109. European Language Resources Association (ELRA). Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising Diffusion Probabilistic Models. In Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.; and Lin, H., eds., Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Ho, J.; Saharia, C.; Chan, W.; Fleet, D.; Norouzi, M.; and Salimans, T. 2022. 
Cascaded Diffusion Models for High Fidelity Image Generation. J. Mach. Learn. Res., 23: 47:1– 47:33. Hoogeboom, E.; Nielsen, D.; Jaini, P.; Forr’e, P.; and Welling, M. 2021. Argmax Flows and Multinomial Diffusion: Towards Non-Autoregressive Language Models. ArXiv preprint, abs/2102.05379. Hu, H.; Zhao, W.; gang Zhou, W.; Wang, Y.; and Li, H. 2021. SignBERT: Pre-Training of Hand-Model-Aware Representation for Sign Language Recognition. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 11067– 11076. Huang, W.; Pan, W.; Zhao, Z.; and Tian, Q. 2021. Towards Fast and High-Quality Sign Language Production. Proceedings of the 29th ACM International Conference on Multimedia. Kim, J.; Kim, S.; Kong, J.; and Yoon, S. 2020. Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search. In Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.; and Lin, H., eds., Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Nawrot, P.; Tworkowski, S.; Tyrolski, M.; Kaiser, L.; Wu, Y.; Szegedy, C.; and Michalewski, H. 2021. Hierarchical Transformers Are More Efficient Language Models. ArXiv preprint, abs/2110.13711. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6241 Nichol, A. Q.; and Dhariwal, P. 2021. Improved Denoising Diffusion Probabilistic Models. In Meila, M.; and Zhang, T., eds., Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, 8162–8171. PMLR. Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W.-J. 2002. Bleu: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 311–318. Philadelphia, Pennsylvania, USA: Association for Computational Linguistics. Pfau, R.; Salzmann, M.; and Steinbach, M. 2018. The syntax of sign language agreement: Common ingredients, but unusual recipe. Glossa: a journal of general linguistics. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2021. High-Resolution Image Synthesis with Latent Diffusion Models. ArXiv preprint, abs/2112.10752. Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation. In MICCAI. Saunders, B.; Bowden, R.; and Camg¨oz, N. C. 2020. Adversarial Training for Multi-Channel Sign Language Production. In 31st British Machine Vision Conference 2020, BMVC 2020, Virtual Event, UK, September 7-10, 2020. BMVA Press. Saunders, B.; Camgoz, N. C.; and Bowden, R. 2020. Everybody Sign Now: Translating Spoken Language to Photo Realistic Sign Language Video. ArXiv preprint, abs/2011.09846. Saunders, B.; Camg¨oz, N. C.; and Bowden, R. 2020. Progressive Transformers for End-to-End Sign Language Production. ArXiv preprint, abs/2004.14874. Saunders, B.; Camg¨oz, N. C.; and Bowden, R. 2021. Mixed SIGNals: Sign Language Production via a Mixture of Motion Primitives. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, 1899–1909. IEEE. Saunders, B.; Camgoz, N. C.; and Bowden, R. 2022. Signing at Scale: Learning to Co-Articulate Signs for Large-Scale Photo-Realistic Sign Language Production. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5141–5151. Schmidt, F. 2019. Generalization in Generation: A closer look at Exposure Bias. 
In Proceedings of the 3rd Workshop on Neural Generation and Translation, 157–167. Hong Kong: Association for Computational Linguistics. Sohl-Dickstein, J.; Weiss, E. A.; Maheswaranathan, N.; and Ganguli, S. 2015. Deep Unsupervised Learning using Nonequilibrium Thermodynamics. In Bach, F. R.; and Blei, D. M., eds., Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, 2256–2265. JMLR.org. Song, Y.; Sohl-Dickstein, J.; Kingma, D. P.; Kumar, A.; Ermon, S.; and Poole, B. 2021. Score-Based Generative Modeling through Stochastic Differential Equations. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Stoll, S.; Camg¨oz, N. C.; Hadfield, S.; and Bowden, R. 2018. Sign Language Production using Neural Machine Translation and Generative Adversarial Networks. In British Machine Vision Conference 2018, BMVC 2018, Newcastle, UK, September 3-6, 2018, 304. BMVA Press. van den Oord, A.; Vinyals, O.; and Kavukcuoglu, K. 2017. Neural Discrete Representation Learning. In Guyon, I.; von Luxburg, U.; Bengio, S.; Wallach, H. M.; Fergus, R.; Vishwanathan, S. V. N.; and Garnett, R., eds., Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, 6306–6315. Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of machine learning research, 9(11). Williams, R. J.; and Zipser, D. 1989. A Learning Algorithm for Continually Running Fully Recurrent Neural Networks. Neural Computation, 1: 270–280. Xiao, Q.; Qin, M.; and Yin, Y. 2020. Skeleton-based Chinese sign language recognition and generation for bidirectional communication between deaf and hearing people. Neural networks : the official journal of the International Neural Network Society, 125: 41–55. Xie, P.; Zhao, M.; and Hu, X. 2021. PiSLTRc: Positioninformed Sign Language Transformer with Content-aware Convolution. ArXiv preprint, abs/2107.12600. Zelinka, J.; and Kanis, J. 2020. Neural Sign Language Synthesis: Words Are Our Glosses. 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 3384–3392. Zeng, W.; Jin, S.; Liu, W.; Qian, C.; Luo, P.; Wanli, O.; and Wang, X. 2022. Not All Tokens Are Equal: Human-centric Visual Analysis via Token Clustering Transformer. ArXiv preprint, abs/2204.08680. Zhou, H.; gang Zhou, W.; Zhou, Y.; and Li, H. 2022. SpatialTemporal Multi-Cue Network for Sign Language Recognition and Translation. IEEE Transactions on Multimedia, 24: 768–779. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6242 | 2024 | 693 |
18,510 | Towards Understanding Future: Consistency Guided Probabilistic Modeling for Action Anticipation Zhao Xie1, Yadong Shi1, Kewei Wu1*, Yaru Cheng1, Dan Guo1,2,3∗ 1 School of Computer and Information, the Hefei University of Technology, Hefei 230009, China 2 Institute of Artificial Intelligence, Hefei Comprehensive National Science Center 3 Anhui Zhonghuitong Technology Co., Ltd [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Action anticipation aims to infer the action in the unobserved segment (future segment) with the observed segment (past segment). Existing methods focus on learning key past semantics to predict the future, but they do not model the temporal continuity between the past and the future. However, past actions are always highly uncertain in anticipating the unobserved future. The absence of temporal continuity smoothing in the video’s past-and-future segments may result in an inconsistent anticipation of future action. In this work, we aim to smooth the global semantics changes in the past and future segments. We propose a Consistency-guided Probabilistic Model (CPM), which focuses on learning the globally temporal probabilistic consistency to inhibit the unexpected temporal consistency. The CPM is deployed on the Transformer architecture, which includes three modules of future semantics estimation, global semantics estimation, and global distribution estimation involving the learning of past-to-future semantics, past-and-future semantics, and semantically probabilistic distributions. To achieve the smoothness of temporal continuity, we follow the principle of variational analysis and describe two probabilistic distributions, i.e., a past-aware distribution and a global-aware distribution, which help to estimate the evidence lower bound of future anticipation. In this study, we maximize the evidence lower bound of future semantics by reducing the distribution distance between the above two distributions for model optimization. Extensive experiments demonstrate that the CPM achieves state-of-the-art performance on Epic-Kitchen100, Epic-Kitchen55, and EGTEA-GAZE. Introduction Action anticipation aims to infer the action in the unobserved segment (future segment), which happens after the observed segment (past segment). Action anticipation is an important task in computer vision applications, such as human-robot collaboration (Dessalene et al. 2021), assistive robotics (Liu et al. 2020), smart houses (Damen et al. 2022, 2018), and autonomous vehicle (Zhang et al. 2022). In human activities, action semantics in multiple consecutive segments usually satisfy temporal consistency, i.e., having smooth action variations along the timeline. In order to *Corresponding author: Kewei Wu, Dan Guo Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Illustration of action anticipation task and pastand-future temporal consistency. (a) General Transformerbased model for action anticipation. (b) Global semantics estimation. (c) Future semantics estimation and global distribution estimation. avoid anticipating undesired actions, action prediction models need to capture the temporal consistency of the global segment (including all the past-future segments). Existing methods learn the temporal semantics with RNN (Furnari and Farinella 2019; Guo et al. 2018; Wang et al. 2018), which uses the memory mechanism to select key semantics in the past segment. 
Some methods learn the temporal semantics with context model (Guo et al. 2019; Li, Guo, and Wang 2021; Guo, Wang, and Wang 2022; Song et al. 2023a), and Transformer (Girdhar and Grauman 2021; Xu, Li, and Lu 2022; Song et al. 2023b), which enhances the semantics by learning the temporal relation between frames in the global segment. The above methods learn relations between segments, and neglect temporal consistency of the semantics changes from the past to the future. Action anticipation may predict unexpected actions under the temporal inconsistent semantics. Therefore, it is a challenging task to smooth the temporal consistency in the global segment for action anticipation. Figure 1 shows that action anticipation can be influenced by different semantics in the global (past-and-future) segThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6243 ments. Unobserved future frames make it easy to take unexpected semantics changes at the boundary between past and future segments, which hinders correct action anticipation. (a) We can use a Transformer-based model to predict future action, but its backbone network pays more attention to high-frequency actions. Since “taking tomatoes” occurs in multiple frames of the past segment and is closely tied to the future segment, the Transformer model pays more temporal attention to “taking tomatoes” and “turning on the tap”, which leads to the unexpected future anticipation–“washing tomatoes”. (b) To keep the temporal continuity in the video, we consider the global semantics estimation (See Sec.3.2), which is used to constrain the semantics consistency between the past segment and the global (past-and-future) segment. (c) Considering the temporal continuity at the past-tofuture boundary, we consider both future semantics estimation module (See Sec.3.1) and global distribution estimation (See Sec.3.3). More importantly, we introduce the probabilistic model into the global distribution estimation. Concretely, in the future semantics estimation, we introduce the probabilistic consistent semantics from past features and use it to learn unknown future features. The future features are anticipated frame-by-frame with multiple consistency-aware decoders. After that, in the global distribution estimation, we introduce a probabilistic global distribution consistency by describing the global semantics with the latent probabilistic variable. Following the variational analysis (Kingma and Welling 2014), we learn the probabilistic distribution consistency with two probabilistic distributions, including a past-aware distribution and a global-aware distribution, which help to estimate the evidence lower bound of the pastaware future anticipation. The past-aware distribution is estimated from past features. The global-aware distribution is estimated with a global feature decoder. We reduce the distribution distance between the above two distributions to maximize the evidence lower bound, which can alleviate the unexpected semantics changes. In this work, we propose a consistency-guided probabilistic model that focuses on learning probabilistic temporal consistency in the videos. First, we design a future semantics estimation module (See Sec.3.1), which considers probabilistic modeling of the past segment to learn past-tofuture distribution and finally outputs the anticipated future semantics. The probabilistic distribution can describe the latent feature variation to learn uncertainty-aware future semantics. 
Secondly, we design a global semantics estimation module (See Sec.3.2) that measures the semantics consistency between the past segment and the past-and-future segment. A new loss, global semantics loss Lsem, is introduced to describe the constraint of the global semantics changes. Thirdly, we design a global distribution estimation module (See Sec.3.3), which introduces a probabilistic global semantics z to approximately maximize past-aware future anticipation as shown in Figure 2. Without the probabilistic global semantics z, the direct anticipation needs to directly sample global distribution from past semantics, which is hard to capture the unobserved future clues. Following variational analysis (Kingma and Welling 2014), we approximate the sampling distribution by introducing Figure 2: Probabilistic modeling for action anticipation. (a) Direct anticipation. (b) Latent probabilistic variable z. (c) Latent probabilistic anticipation. (d) Anticipation with the evidence lower bound. global-aware distribution, which can be evidently estimated with probabilistic global semantics. The direct anticipation lnp(xf|xp) in Figure 2(a) can be decomposed into a latent estimation ln(p(z|xf, xp)/q(z|xf, xp)) and a sampling distance measurement KL[q(z|xf, xp)||p(z|xf, xp)] in Figure 2(b). When the sampling of anticipated future features (past-aware) meets the global-aware distribution, the sampling distance gets its smallest value at 0, and the direct anticipation gets its evidence lower bound (ELBO) at the latent estimation. Based on Bayesian theory, the evidence lower bound is a difference term by subtracting global-past distance term KL[q(z|xf, xp)||p(z|xp)] from reconstruction term lnp(xf|z, xp) in Figure 2(c). When the global-past distance gets 0, the ELBO gets its maximum value at reconstruction estimation. Turning backward to our framework, we learn the past-aware distribution from past features. and learn the global-aware distribution through a global feature decoder. We use the probabilistic semantic concept to reduce the global-past distance, which can maximize the ELBO. The maximized ELBO can alleviate the neglected effect of unobserved future sampling distribution and increase the probability of correct future anticipation. We design the loss objectives (i.e., a future reconstruction loss Lrec, and a distribution distance loss Ldist) to optimize the model for smoothing the temporal distribution consistency. Our contributions are summarized as follows: (1) We propose a consistency-guided probabilistic model, which smooths the global (past-and-future) temporal consistency for action anticipation. We design a future semantics estimation module to learn probabilistic past-to-future consistency, which can alleviate the uncertain past-to-future semantics changes, and ease the unexpected future semantics. (2) We design a global semantics estimation module to constrain the global semantics consistency, which can smooth the temporal inconsistency during past-and-future semantics learning. (3) We design the global distribution estimation module to constrain the probabilistic distribution consistency by reducing the distance between past-aware distribution and globalaware distribution. The global distribution consistency can enhance the future anticipation certainty among the global semantics of past and future segments. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6244 Figure 3: The overview of Consistency-guided Probabilistic Model (CPM). 
The CPM is deployed on Transformer by introducing three modules, including future semantics estimation, global semantics estimation, and global distribution estimation. In the training stage, the CPM is optimized by integrating three modules for smoothing temporal consistency. In the testing stage, the optimized CPM only uses the future semantics estimation module for action anticipation. The future semantics estimation module anticipates future features by considering past-to-future consistency. The global semantics/distribution estimation modules measure the consistency of past and global (past-and-future) semantics/distribution. Related Work Action Anticipation. Action anticipation focuses on learning features in the past segment to predict future action. Some methods learn the temporal feature with temporal convolution network (Carreira and Zisserman 2017). The temporal features can be learned with recurrent model (Liu and Lam 2022; Gao, Yang, and Nevatia 2017; Wu et al. 2021), which can describe the temporal changes in the past segment. The temporal feature can be enhanced with hand movement (Liu et al. 2020), segment-aware feature (Dessalene et al. 2021). The temporal feature can be enhanced by considering UnRolling model (Furnari and Farinella 2019), and high-order frames (Tai et al. 2021). The temporal feature can be learned with latent goal learning (Roy and Fernando 2022a), and semantics-aware contrastive learning (Qi et al. 2023; Zhou, Guo, and Wang 2023), and transferring knowledge (Sener, Saraf, and Yao 2023). Some methods learn features with temporal Transformer model (Girdhar and Grauman 2021), which can describe the relation between frames as the temporal self-attention. The Transformer can be enhanced with dynamic temporal masks (Xu, Li, and Lu 2022), hierarchical Transformer (Wu et al. 2022), inductive self-attention (Tai et al. 2022), temporal cross-attention (Gong et al. 2022), interactive cross-attention (Roy, Rajendiran, and Fernando 2022), and multiple-scale temporal banks (Sener, Singhania, and Yao 2020). The Transformer can be enhanced with cross-modality selfattention (Gu et al. 2021; Zhong et al. 2023). The above Transformers neglect to smooth the unexpected temporal consistency, which hinders explaining the unexpected action anticipation. Probabilistic Semantics Modeling. The probabilistic model can describe the feature variation as the distribution of semantics. The variant features can predict the multiple possible labels (Yang et al. 2023). The probabilistic prediction has been described in the Bayesian language model (Xue et al. 2022; Zhang et al. 2021) and the complex action recognition model (Guo, Wang, and Ji 2022). Some methods learn probabilistic prediction with the latent variables (Zheng et al. 2022; Itkina et al. 2020; Pambala, Dutta, and Biswas 2020; Zhang et al. 2020). Abstract Goal (Roy and Fernando 2022b) learns temporal features by introducing a probabilistic recurrent network. Abstract Goal learns the Gaussian distribution of latent semantics, and considers the distribution loss between the future semantics and the past semantics. Unlike the Abstract Goal, our work introduces a probabilistic recurrent Transformer, which focuses on smoothing the unexpected past-to-future consistency, and smoothing temporal consistency in semantics and distributions. 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6245 Method As shown in Figure 3, we introduce the procedure of the consistency-guided probabilistic model (CPM), which contains a training stage, and a testing stage. In the training stage, we adopt the Transformer (Vaswani et al. 2017) to construct a future semantics estimation module, a global semantics estimation module, and a global distribution estimation module. In the testing stage, the model only uses the future semantics estimation module. For feature extraction, we take the temporal shift model (Lin, Gan, and Han 2019) as the backbone to extract the past segment feature fp ∈RTp×D, and the future segment feature ff ∈RTf ×D, where Tp, Tf denote the frame number of each segment. D denotes the feature dimension. Future Semantics Estimation The future semantics estimation module is designed to learn the probabilistic past-to-future semantics. We use the probabilistic past-to-future semantics to anticipate the future features frame-by-frame with a consistency-aware future decoder. The decoder considers the probabilistic past-to-future semantics to smooth the unexpected semantics changes, which can also smoothly anticipate the semantics in the future segment. At first, we use a consistency-aware past encoder to learn the temporal relation between past features. Following Vision Transformer (Dosovitskiy et al. 2021), we append a learnable class token ftoken before past feature. Given the past feature fp, an Fully Connected (FC) layer is used to learn the channel relations in one frame, and projects the past feature into f ′ p ∈RTp×D. The Query takes the original past feature fcl,p = [ftoken, fp]. The Key and Value take the projected feature f ′ cl,p = [ftoken, f ′ p]. In the Transformer encoder layer, the temporal feature is estimated as: Qen = fcl,pWQ, Ken = f ′ cl,pWK, Ven = f ′ cl,pWV , Q′ en = softmax( Qen·KT en √ D )Ven + Qen Q′′ en = FFN(Q′ en) (1) where WQ, WK, WV are learnable parameters. FFN is the feed-forward network. The next encoder layer uses the same Key and Value in the first layer, and uses the Query with the output of the previous layer. The regressive Transformer encoder finally outputs the encoded past feature xp ∈RTp×D. The future semantics estimation module is designed to describe the past-to-future consistency with Gaussian distribution. At the last frame Tp in the past segment, we estimate the Gaussian distribution by considering the frame feature and the class token. The class token is the first feature in the encoded past feature xtoken = xp,0. The mean values are estimated with multiple layer perceptron (MLP) as µTp = MLP(xp,Tp + xtoken) ∈R1×D. The standard values are estimated as σTp = MLP(xp,Tp +xtoken) ∈R1×D. We introduce a Gaussian variable to describe the unexpected consistency ϵTp ∼Gaussian(0, I). The past-aware feature at timestamp Tp is described as: x′ p,Tp = σTp · ϵTp + µTp (2) After obtaining x′ p,Tp, we further use a consistency-aware future decoder to estimate the next future feature. We stack multiple decoders recurrently to learn future features frameby-frame. Following Transformer (Vaswani et al. 2017), the decoder layer uses the feature directly without the class token. The Query takes the past-aware feature x′ p,Tp. The Key and Value take the past feature xp. In the first decoder layer, the temporal feature is estimated as: Qre = x′ p,TpWQ, Kre = xpWK, Vre = xpWV , Q′ re = softmax( Qre·KT re √ D )Vre + Qre Q′′ re = FFN(Q′ re) (3) where WQ, WK, WV are learnable parameters. 
FFN is the feed-forward network. The next future decoder layer uses the same Key and Value in the first layer, and uses the Qurey with the output of the previous layer. The first unit outputs the feature of the Tp+1 frame. The Multiple Transformer decoders estimate the future features from x′ p,Tp+1 to x′ p,Tp+Tf frame-by-frame. Global Semantics Estimation The global semantics estimation module is introduced to learn the global (past-and-future) semantics consistency, which is estimated with the distance between past semantics and global semantics. We consider the distance as the global semantics loss. We reduce global semantics loss to keep the global semantics consistency. The global semantics consistency can smooth the past feature, which helps to learn the evidence lower bound of the future feature anticipation. To describe the global feature of the input video, we concatenate the past feature fp with the future feature ff as the global feature: fg = concat(fp, ff). We trim the global feature from the last frames to align the temporal length of the past feature. We average the past feature and the aligned feature, and project the feature with an FC layer to describe the global feature f ′ g = FC((fp + fg,Tf +1:Tf +Tp)/2) ∈RTp×D. We reduce the distance between the projected global feature and the projected past feature to learn the global semantics consistency. The global semantics consistency can use the global feature to align the past feature. We estimate their distance by using the negative value of the Jaccard vector similarity, and use the distance as the global semantics loss: Lsem = exp(−PTp t=1 2·f ′ p,t·(f ′ g,t)T f ′ p,t·(f ′ p,t)T +f ′ g,t·(f ′ g,t)T ) (4) Global Distribution Estimation Unlike the above global semantics estimation, this section estimates global distributions to measure the temporal consistency. The global distribution estimation module learns the probabilistic global (past-and-future) distribution consistency with probabilistic global semantics. Without the probabilistic global semantics, the direct past-aware future anticipation needs the sampling distribution of global semantics, which is hard to estimate due to unobserved future frames. Following the variational analysis (Kingma and Welling The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6246 2014) as discussed in Introduction, we introduce the probabilistic global semantics. The probabilistic global semantics provides the evidence to estimate the global-aware distribution, which can be used as the approximate estimation of the unobserved sampling distribution. The probabilistic global semantics helps to decompose the direct anticipation into the latent anticipation estimation and the sampling distance estimation. When the sampling distribution is the same as the global-aware distribution, the sampling distance gets its smallest value at 0, and the direct anticipation gets its evidence lower bound at the latent anticipation estimation. We maximize the evidence lower bound, which can enhance the past-aware future anticipation. Given the future feature xf and the past features xp, the direct anticipation is lnp(xf|xp). We address the evidence lower bound (ELBO) of direct anticipation in the supplementary materials. We introduce the probabilistic global semantics as z. Given the sampling distribution q(z|xf, xp), and the global-aware distribution p(z|xf, xp), the evidence lower bound is ln(p(z|xf, xp)/q(z|xf, xp)). 
Based on the Bayesian theory (Kingma and Welling 2014), the evidence lower bound is the difference term by subtracting the distribution distance term from the reconstruction term: lnp(xf|xp) = Eq(z|xf ,xp)[lnp(xf|xp)] = Eq(z|xf ,xp)[ln p(xf ,z|xp) q(z|xf ,xp)] + DKL[q(z|xf, xp)||p(z|xf, xp)] | {z } ≥0 ≥Eq(z|xf ,xp)[ln p(xf ,z|xp) q(z|xf ,xp)] := ELBO = Eq(z|xf ,xp)[lnp(xf|z, xp)] | {z } Reconstruction −DKL[q(z|xf, xp)||p(z|xp)] | {z } Distribution distance , (5) where the DKL[q(·)||p(·)] is the Kullback-Leibler divergence to describe the distance between two probabilities. Future Reconstruction Loss. The future reconstruction loss is used to estimate the reconstruction term in ELBO. We maximize the reconstruction term by reducing the distance between the anticipated future feature x′ p,Tp+1:Tp+Tf and the feature of the future segment ff,1:Tf . We use Jaccard vector similarity to measure the reconstruction loss: Lrec = exp(−PTf t=1 2·x′ p,Tp+t·(ff,t)T x′ p,Tp+t·(x′ p,Tp+t)T +ff,t·(ff,t)T ) (6) Distribution Distance Loss. The distribution distance loss is used to estimate the distribution distance term in ELBO, which is between the past-aware distribution and the global-aware distribution. We estimate the Gaussian-based past-aware distribution with future features anticipated from the past features. In frame t ∈[Tp + 1, Tp + Tf], the mean values are estimated with multiple layer perceptron as µp,t = MLP(x′ p,t) ∈R1×D. The standard value are estimated as σp,t = MLP(x′ p,t) ∈R1×D. The probabilistic global semantics is learned as the probability following the past-aware distribution: P(z|x′ p,t) ∼Gaussian(µp,t, σ2 p,t). To estimate the global-aware distribution, we introduce a global feature decoder. Following Transformer (Vaswani et al. 2017), the decoder uses the feature directly without considering the class token. The Query takes the future feature ff. The Key and Value take the past feature xp. The Transformer decoder contains multiple regressive decoder layers too. In the first layer of the Transformer decoder, the temporal feature is estimated as: Qg = ffWQ, Kg = xpWK, Vg = xpWV , Q′ g = softmax( Qg·KT g √ D )Vg + Qg Q′′ g = FFN(Q′ g) (7) where WQ, WK, WV are learnable parameters. FFN is the feed-forward network. The next decoder layer uses the same Key and Value in the first layer, and uses the Query with the output of the previous layer. The Transformer decoder uses multiple layers to output the final global feature x′ g ∈ RTf ×D. We estimate the global-aware distribution with Gaussian distribution. In frame t ∈[Tp +1, Tp +Tf], the mean values are estimated as µg,t = MLP(x′ g,t). The standard value are estimated as σg,t = MLP(x′ g,t). The probabilistic global semantics depending on global features follows the globalaware distribution as: P(z|x′ g,t) ∼Gaussian(µg,t, σ2 g,t). We address the Kullback-Leibler divergence between two Gaussian-based distributions in the supplementary materials. Given the estimated past-aware distribution and globalaware distribution, the global distribution distance loss is the summation of Kullback-Leibler divergence at each timestamp t and each dimension d as: Ldist = Tp+Tf X t=Tp+1 D X d=1 ln σp,t,d σg,t,d −1 2 + σ2 g,t,d + (µg,t,d −µp,t,d)2 2σ2 p,t,d (8) Model Optimization Our model is optimized by considering both the temporal consistency loss and the action prediction loss. The temporal consistency loss contains a global semantics loss Lsem (See Sec. 3.2), a future reconstruction loss Lrec (See Sec. 3.3), and a distribution distance loss Ldist (See Sec. 
3.3). The action prediction loss is the base term for this task. To be specific, the action classifier uses two fully connected layers to predict the future action. At timestamp t in the anticipation segment, the predicted action labels include the noun label ynoun,t, the verb label yverb,t, and the action label yact,t. Given the ground truth of the noun label ygt noun,t, the verb label ygt verb,t, and the action label ygt act,t, we use the cross-entropy loss to estimate the difference between the predicted and the ground truth labels. For each video, the noun label loss is Lnoun = P t −ygt noun,t · log(ynoun,t); the verb label loss is Lverb = P t −ygt verb,t · log(yverb,t); the action label loss is Lact = P t −ygt act,t · log(yact,t). The above losses compose the action prediction loss as Lbase = Lnoun + Lverb + Lact. For model optimization, the total optimization objective is formulated as follows: Ltotal = Lbase + Lsem + Lrec + Ldist | {z } ELBO . (9) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6247 Dataset Segments Classes τa Metric EK100 90.0K 3,807 1.0s Recall@5 EK55 39.6K 2,513 1.0s Top-1/5 EG+ 10.3K 106 0.5s Top-1, CM Top-1 Table 1: Datasets used for action anticipation. Method EK 100 EK 55 Recall@5 Top 1 Top 5 Modality-RGB ED – 25.8 – HORST 13.2 – – HORST-url 13.2 – – DCR-1s (TSN) 14.6 13.6 30.8 AVT (AVT-b) 14.9 – – MeMViT-16 15.1 – – DCR-1s (TSM) 16.1 16.1 33.1 Ours 17.2 17.2 35.2 Modality-RGB+OBJ+Flow RULSTM 14.0 15.3 35.3 ActionBanks 14.7 15.1 35.6 HRO – – 37.4 AVT+ 14.8 16.6 37.6 AVT++ 15.9 – – TransAction 16.6 – – Ego-OMG – 19.2 – DCR-1s (TSN) 18.3 19.2 41.2 Ours 19.4 20.1 43.6 Table 2: Comparison with state-of-the-art methods on the validation set of EK100 (Class-mean recall@5 of action prediction) and the validation set of EK55 (Top-1 accuracy and Top-5 accuracy of action prediction.) Experiments Datasets and Implementation Details Datasets. Table 1 shows three datasets and their evaluation metrics. τa represents the interval between past segment and future action. EpicKitchens-100 (EK100) (Damen et al. 2022) is the egocentric (first-person) video dataset of cooking activities. EpicKitchens-55 (EK55) (Damen et al. 2018) is an earlier version of EK100. For EK100/EK55, we report performance with the standard train, validation, and test splits from (Damen et al. 2022) and (Furnari and Farinella 2019) respectively. EGTEA Gaze+ (EG+) (Li, Liu, and Rehg 2018) is another first-person anticipation dataset. Following (Liu et al. 2020), we use the split 1 (Li, Liu, and Rehg 2018) of the dataset. Metrics. The anticipation metrics are estimated with action prediction, including Top-1/5, class mean (CM) Top1, and Recall@5. The action prediction is decomposed into verb prediction and noun prediction (Girdhar and Grauman 2021). The smoothness metrics are estimated with the temporal consistency curve, which is learned by averaging the features across channels, Specifically, the mean absolute temporal difference (MATD) is the mean value of the absolute temporal difference between neighbor frames. The mean curvature (MC) is the mean value of the curvature of the curve. Method Top-1 CM Top-1 Modality-RGB+Flow I3D-Res50 34.8 23.2 FHOI 36.6 25.3 RULSTM 38.6 – AVT-h 39.8 28.3 AVT 43.0 35.2 Abstract Goal 49.8 37.4 Ours 50.3 39.1 Table 3: Comparison with state-of-the-art methods on EG+. Top-1 accuracy and Class Mean (CM) Top-1 accuracy of action prediction. Encoder Future Decoder Verb Noun Act. 
3 layers % 3 layers 27.2 25.5 13.1 3 layers ✓ 3 layers 34.5 34.0 16.4 3 layers ✓ 4 layers 36.9 36.2 17.2 4 layers ✓ 3 layers 36.1 35.5 16.8 4 layers ✓ 4 layers 33.1 33.0 15.6 Table 4: The effect of the future semantics estimation. Implementation Details. Following (Xu, Li, and Lu 2022), we use TSM as the backbone (Lin, Gan, and Han 2019). For EK100/EK55, the past and future segments have 30 frames and 8 frames, respectively. For EG+, the past and future segments have 20 frames and 8 frames, respectively. We use the Transformer encoder with a 3-layer, 16head, 1024-dimensional model, and the Transformer decoder with a 4-layer, 16-head, 1024-dimensional model optimized by AdamW (Loshchilov and Hutter 2019). We initialize the Transformer from scratch (Xu, Li, and Lu 2022). For EK100/EK55, we set the learning rate to 1e-4, batch size to 128, and training epoch to 100. For EG+, we set the learning rate to 5e-5, batch size to 512, and training epoch to 50. Comparison with State-of-the-Art Table 2 shows the performance on EK100 and EK55. The first part methods are Transformer-based models with RGB features. DCR-1s (Xu, Li, and Lu 2022) learns features with multiple temporal masks. Our method smooths global temporal consistency from past semantics learning to future semantics learning, and outperforms the above Transformerbased methods. The second part methods use the RGB, object, and Flow features. Our model optimizes the probabilistic global semantics to maximize the future anticipation, and outperforms the model with distribution consistency. Table 3 shows the comparision on EG+. Our method learns smoothed global temporal consistency, and outperforms AVT and Abstract Goal. Ablation Study To verify the probabilistic temporal consistency learning, we perform ablation studies using RGB features learned with TSM backbone (Lin, Gan, and Han 2019) on EK100. The Effect of Future Semantics Estimation. Table 4 shows the Top-1 accuracy of the verb label (Verb), the noun label (Noun), and the action label (Act.) on overall classes. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6248 Lsem Lrec Verb Noun Act. Negative dot Negative dot 33.1 33.4 15.3 L2 distance L2 distance 34.0 34.1 16.0 Negative cosine Negative cosine 34.3 34.3 16.2 Negative Jaccard Negative Jaccard 36.9 36.2 17.2 Table 5: The effect of the distance estimation in losses. Lbase Lsem Ldist Lrec Verb Noun Act. ✓ – – – 32.1 32.4 14.6 ✓ ✓ – – 33.5 33.5 15.4 ✓ ✓ ✓ – 36.2 35.7 16.8 ✓ ✓ ✓ ✓ 36.9 36.2 17.2 Table 6: The effect of the model losses. When we add the probabilistic past-to-future semantics to anticipate the future features, the model increases the performance compared with the model without probabilistic pastto-future semantics. The model with a 3-layer decoder and a 4-layer encoder gets the best performance. The Effect of Distance Estimation in Feature Losses. Table 5 shows the performance with different distance estimations in global semantics loss Lsem and reconstruction loss Lrec. To estimate the distance between two feature matrixes, we temporally split them into vectors at each timestamp. In the Jaccard function, the denominators can describe the number of semantics in two feature vectors. The model with the negative Jaccard function in two losses gets the best performance. The Effect of Model Losses. Table 6 shows the performances with different losses. The base losses can reduce the error of the verb/noun/action label predictions. The semantics loss can embed global semantics for past semantics learning. 
The reconstruction loss can embed the future feature into the feature anticipated from the past feature. The distance loss can embed global-aware distribution into pastaware distribution for probabilistic distribution learning. We use the above four losses to get the best performance. Visualization Visualization Analysis of Temporal Consistency Learning. Figure 4 shows two instances of action anticipation. We compare our CPM with the base model, which removes future semantics estimation, global semantics estimation, and global distribution estimation from our CPM. As the pastfuture features in Figure 4 (left), our global semantics estimation helps to learn smoothed past features, which have small distances between each other. As the feature curve in Figure 4 (middle), we visualize the curve with the semantic probability by averaging the features across channels at each timestamp, which can verify the temporal smoothness of past and anticipated future actions. In both two videos, the base model takes a large change at the past-to-future boundary and has the sharp peak on the future frame. Table 7 shows the mean absolute temporal difference (MATD) and the mean curvature (MC) on EK100. Our CPM can enhance the smoothness of the temporal consistency. As the distribution in Figure 4 (right), we notice the global-aware disBase / CPM Fig. 4(a) Fig. 4(b) EK100 MATD ↓ 0.146 / 0.072 0.128 / 0.075 0.125 / 0.067 MC ↓ 0.222 / 0.116 0.218 / 0.108 0.192 / 0.097 Table 7: The smoothness metrics of temporal consistency. Figure 4: Temporal consistency in semantics and distributions of Consistency-guided Probabilistic Model (CPM). tribution and the past-aware distribution in the base model take far apart. Our CPM enforces the distribution consistency with the probabilistic model, and the above two distributions are more consistent by overlapping each other. As the attention in Figure 4 (upper), we further show the temporal attention along the timeline of the video. In (a), the base model pays attention to ”taking tomatoes” and ”turning on tap”, and mispredicts the future action as ”washing hands”. Our model suggests more attention on “throwing carrot” in the trash can, which may dirty hands. To clean the hands, our model predicts the correct future action as “washing hands”. In (b), Our model smooths the temporal consistency and keeps attention on the “opening container”. The temporal consistency of “opening container” suggests that the future semantics should be highly relevant to “closing container”. Conclusion This work proposes a consistency-guided probabilistic model, which learns probabilistic global temporal consistency to smooth the unexpected temporal continuity for action anticipation. We learn probabilistic past-to-future consistency to anticipate smoothed future features. We learn global semantics consistency to embed the global semantics for past semantics learning. We learn probabilistic global distribution consistency to embed the global-aware distribution for past-aware future anticipation. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6249 Acknowledgments This work was supported by the National Key R&D Program of China (2022YFB4500600), and the National Natural Science Foundation of China (62272144 and U20A20183). References Carreira, J.; and Zisserman, A. 2017. Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, 4724–4733. 
Damen, D.; Doughty, H.; Farinella, G. M.; Fidler, S.; Furnari, A.; Kazakos, E.; Moltisanti, D.; Munro, J.; Perrett, T.; Price, W.; and Wray, M. 2018. Scaling Egocentric Vision: The EPIC-KITCHENS Dataset. CoRR, abs/1804.02748. Damen, D.; Doughty, H.; Farinella, G. M.; Furnari, A.; Kazakos, E.; Ma, J.; Moltisanti, D.; Munro, J.; Perrett, T.; Price, W.; and Wray, M. 2022. Rescaling Egocentric Vision: Collection, Pipeline and Challenges for EPIC-KITCHENS100. Int. J. Comput. Vis., 130(1): 33–55. Dessalene, E.; Devaraj, C.; Maynord, M.; Ferm¨uller, C.; and Aloimonos, Y. 2021. Forecasting Action through Contact Representations from First Person Video. CoRR, abs/2102.00649. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. Furnari, A.; and Farinella, G. M. 2019. What Would You Expect? Anticipating Egocentric Actions With RollingUnrolling LSTMs and Modality Attention. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, 6251–6260. Gao, J.; Yang, Z.; and Nevatia, R. 2017. RED: Reinforced Encoder-Decoder Networks for Action Anticipation. In British Machine Vision Conference 2017, BMVC 2017, London, UK, September 4-7, 2017. Girdhar, R.; and Grauman, K. 2021. Anticipative Video Transformer. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, 13485–13495. Gong, D.; Lee, J.; Kim, M.; Ha, S. J.; and Cho, M. 2022. Future Transformer for Long-term Action Anticipation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 1824, 2022, 3042–3051. Gu, X.; Qiu, J.; Guo, Y.; Lo, B.; and Yang, G. 2021. TransAction: ICL-SJTU Submission to EPIC-Kitchens Action Anticipation Challenge 2021. CoRR, abs/2107.13259. Guo, D.; Li, K.; Zha, Z.; and Wang, M. 2019. DADNet: Dilated-Attention-Deformable ConvNet for Crowd Counting. In Proceedings of the 27th ACM International Conference on Multimedia, MM 2019, Nice, France, October 2125, 2019, 1823–1832. Guo, D.; Wang, H.; and Wang, M. 2022. Context-Aware Graph Inference With Knowledge Distillation for Visual Dialog. IEEE Trans. Pattern Anal. Mach. Intell., 44(10): 6056– 6073. Guo, D.; Zhou, W.; Li, H.; and Wang, M. 2018. Hierarchical LSTM for Sign Language Translation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, 6845–6852. Guo, H.; Wang, H.; and Ji, Q. 2022. Uncertainty-Guided Probabilistic Transformer for Complex Action Recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 1824, 2022, 20020–20029. Itkina, M.; Ivanovic, B.; Senanayake, R.; Kochenderfer, M. J.; and Pavone, M. 2020. Evidential Sparsification of Multimodal Latent Spaces in Conditional Variational Autoencoders. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Kingma, D. P.; and Welling, M. 2014. Auto-Encoding Variational Bayes. 
In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 1416, 2014, Conference Track Proceedings. Li, K.; Guo, D.; and Wang, M. 2021. Proposal-Free Video Grounding with Contextual Pyramid Network. In ThirtyFifth AAAI Conference on Artificial Intelligence, AAAI 2021, Virtual Event, February 2-9, 2021, 1902–1910. Li, Y.; Liu, M.; and Rehg, J. M. 2018. In the Eye of Beholder: Joint Learning of Gaze and Actions in First Person Video. In Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part V, volume 11209 of Lecture Notes in Computer Science, 639–655. Lin, J.; Gan, C.; and Han, S. 2019. TSM: Temporal Shift Module for Efficient Video Understanding. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, 7082–7092. Liu, M.; Tang, S.; Li, Y.; and Rehg, J. M. 2020. Forecasting Human-Object Interaction: Joint Prediction of Motor Attention and Actions in First Person Video. In Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part I, volume 12346 of Lecture Notes in Computer Science, 704–721. Liu, T.; and Lam, K. 2022. A Hybrid Egocentric Activity Anticipation Framework via Memory-Augmented Recurrent and One-shot Representation Forecasting. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, 13894–13903. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6250 Loshchilov, I.; and Hutter, F. 2019. Decoupled Weight Decay Regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. Pambala, A. K.; Dutta, T.; and Biswas, S. 2020. Generative Model with Semantic Embedding and Integrated Classifier for Generalized Zero-Shot Learning. In IEEE Winter Conference on Applications of Computer Vision, WACV 2020, Snowmass Village, CO, USA, March 1-5, 2020, 1226–1235. Qi, Z.; Wang, S.; Su, C.; Su, L.; Huang, Q.; and Tian, Q. 2023. Self-Regulated Learning for Egocentric Video Activity Anticipation. IEEE Trans. Pattern Anal. Mach. Intell., 45(6): 6715–6730. Roy, D.; and Fernando, B. 2022a. Action anticipation using latent goal learning. In IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, Waikoloa, HI, USA, January 3-8, 2022, 808–816. Roy, D.; and Fernando, B. 2022b. Predicting the Next Action by Modeling the Abstract Goal. CoRR, abs/2209.05044. Roy, D.; Rajendiran, R.; and Fernando, B. 2022. Interaction Visual Transformer for Egocentric Action Anticipation. CoRR, abs/2211.14154. Sener, F.; Saraf, R.; and Yao, A. 2023. Transferring Knowledge From Text to Video: Zero-Shot Anticipation for Procedural Actions. IEEE Trans. Pattern Anal. Mach. Intell., 45(6): 7836–7852. Sener, F.; Singhania, D.; and Yao, A. 2020. Temporal Aggregate Representations for Long-Range Video Understanding. In Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XVI, volume 12361 of Lecture Notes in Computer Science, 154–171. Song, P.; Guo, D.; Cheng, J.; and Wang, M. 2023a. Contextual Attention Network for Emotional Video Captioning. IEEE Trans. Multim., 25: 1858–1867. Song, P.; Guo, D.; Yang, X.; Tang, S.; Yang, E.; and Wang, M. 2023b. Emotion-Prior Awareness Network for Emotional Video Captioning. 
In Proceedings of the 31st ACM International Conference on Multimedia, MM 2023, Ottawa, ON, Canada, 29 October 2023- 3 November 2023, 589–600. Tai, T.; Fiameni, G.; Lee, C.; and Lanz, O. 2021. Higher Order Recurrent Space-Time Transformer. CoRR, abs/2104.08665. Tai, T.; Fiameni, G.; Lee, C.; See, S.; and Lanz, O. 2022. Inductive Attention for Video Action Anticipation. CoRR, abs/2212.08830. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, 5998–6008. Wang, S.; Guo, D.; Zhou, W.; Zha, Z.; and Wang, M. 2018. Connectionist Temporal Fusion for Sign Language Translation. In 2018 ACM Multimedia Conference on Multimedia Conference, MM 2018, Seoul, Republic of Korea, October 22-26, 2018, 1483–1491. Wu, C.; Li, Y.; Mangalam, K.; Fan, H.; Xiong, B.; Malik, J.; and Feichtenhofer, C. 2022. MeMViT: MemoryAugmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, 13577–13587. Wu, Y.; Zhu, L.; Wang, X.; Yang, Y.; and Wu, F. 2021. Learning to Anticipate Egocentric Actions by Imagination. IEEE Trans. Image Process., 30: 1143–1152. Xu, X.; Li, Y.; and Lu, C. 2022. Learning to Anticipate Future with Dynamic Context Removal. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, 12724– 12734. Xue, B.; Hu, S.; Xu, J.; Geng, M.; Liu, X.; and Meng, H. 2022. Bayesian Neural Network Language Modeling for Speech Recognition. IEEE ACM Trans. Audio Speech Lang. Process., 30: 2900–2917. Yang, W.; Zhang, T.; Zhang, Y.; and Wu, F. 2023. Uncertainty Guided Collaborative Training for Weakly Supervised and Unsupervised Temporal Action Localization. IEEE Trans. Pattern Anal. Mach. Intell., 45(4): 5252–5267. Zhang, J.; Fan, D.; Dai, Y.; Anwar, S.; Saleh, F. S.; Zhang, T.; and Barnes, N. 2020. UC-Net: Uncertainty Inspired RGB-D Saliency Detection via Conditional Variational Autoencoders. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, 8579–8588. Zhang, S.; Abdel-Aty, M. A.; Wu, Y.; and Zheng, O. 2022. Pedestrian Crossing Intention Prediction at Red-Light Using Pose Estimation. IEEE Trans. Intell. Transp. Syst., 23(3): 2331–2339. Zhang, S.; Fan, X.; Chen, B.; and Zhou, M. 2021. Bayesian Attention Belief Networks. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, 12413–12426. Zheng, Y.; He, T.; Qiu, Y.; and Wipf, D. P. 2022. Learning Manifold Dimensions with Conditional Variational Autoencoders. In NeurIPS. Zhong, Z.; Schneider, D.; Voit, M.; Stiefelhagen, R.; and Beyerer, J. 2023. Anticipative Feature Fusion Transformer for Multi-Modal Action Anticipation. In IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2023, Waikoloa, HI, USA, January 2-7, 2023, 6057–6066. Zhou, J.; Guo, D.; and Wang, M. 2023. Contrastive Positive Sample Propagation Along the Audio-Visual Event Line. IEEE Trans. Pattern Anal. Mach. Intell., 45(6): 7239–7257. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6251 | 2024 | 694 |
18,511 | Towards Detailed Text-to-Motion Synthesis via Basic-to-Advanced Hierarchical Diffusion Model Zhenyu Xie1, Yang Wu2, Xuehao Gao3, Zhongqian Sun2, Wei Yang2, Xiaodan Liang1,4* 1Shenzhen Campus of Sun Yat-sen University, Shenzhen, China 2Tencent AI Lab, Shenzhen, China 3 Xi’an Jiao Tong University, Xi’an, China 4 DarkMatter AI Research, Beijing, China [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Text-guided motion synthesis aims to generate 3D human motion that not only precisely reflects the textual description but reveals the motion details as much as possible. Pioneering methods explore the diffusion model for text-to-motion synthesis and obtain significant superiority. However, these methods conduct diffusion processes either on the raw data distribution or the low-dimensional latent space, which typically suffer from the problem of modality inconsistency or detail-scarce. To tackle this problem, we propose a novel Basic-to-Advanced Hierarchical Diffusion Model, named B2A-HDM, to collaboratively exploit low-dimensional and high-dimensional diffusion models for high quality detailed motion synthesis. Specifically, the basic diffusion model in low-dimensional latent space provides the intermediate denoising result that to be consistent with the textual description, while the advanced diffusion model in high-dimensional latent space focuses on the following detail-enhancing denoising process. Besides, we introduce a multi-denoiser framework for the advanced diffusion model to ease the learning of high-dimensional model and fully explore the generative potential of the diffusion model. Quantitative and qualitative experiment results on two text-tomotion benchmarks (HumanML3D and KIT-ML) demonstrate that B2A-HDM can outperform existing state-of-the-art methods in terms of fidelity, modality consistency, and diversity. Introduction Text-to-motion synthesis, which aims to generate human motion that conforms to the textual descriptions (with result examples of our proposed model shown in Figure 1), has made significant progress in recent years. It has the potential to revolutionize the traditional process of acquiring human motion, which typically requires expert knowledge from artists or expensive motion capture equipment. However, inferring human motion from textual description is a no-trivial task due to the essential discrepancy between the two data modalities. To address this challenge, some existing works (Ahuja and Morency 2019; Ghosh et al. 2021; Tevet et al. 2022; Petrovich, Black, and Varol 2022) resort to the auto-encoder/VAE for motion synthesis and strive to align *Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. the cross-modal information in a shared embedding space. On the other hand, motivated by fruitful attempts of diffusion model (Sohl-Dickstein et al. 2015; Ho, Jain, and Abbeel 2020; Nichol and Dhariwal 2021) in the cross-modal image synthesis (Rombach et al. 2022; Saharia et al. 2022; Ramesh et al. 2022), some pioneering works (Tevet et al. 2023; Zhang et al. 2022; Dabral et al. 2023; Ma, Bai, and Zhou 2022; Chen et al. 2023; Jin et al. 2023) exploit the diffusion model for textto-motion synthesis, demonstrating significant improvements in fidelity and cross-modal consistency. 
In spite of the powerful generative ability, training a diffusion model for text-to-motion synthesis remains challenging, which is mainly attributed to the complexity of the data distribution and the insufficiency of the text-annotated training data. Without adequate data, it is difficult for the neural network to learn the denoising process that converts the Gaussian distribution into the complex motion distribution. To address this problem, MLD (Chen et al. 2023) applies VAE to project the raw motion from the initial 3D pose space into the latent code in the low-dimensional latent space, and then conducts diffusion process on the latent space. Although simplifying the target distribution can ease the learning of the denoising process, the low-dimensional latent code is less expressive, which hinders the diffusion model from generating detailed motion. Specifically, as shown in Fig. 2(a), reducing the dimension of the latent space in VAE results in reconstructed motions with fewer captured details. Additionally, Fig. 2(b) reveals that decreasing the dimension of the latent space leads to an increase in the FID scores of the reconstructed motions, indicating a degradation in their quality. However, simply increasing the dimension of the latent space makes the target distribution complex again, leading to more difficulties in network learning, which will further result in a significant drop in performance in terms of cross-modal consistency, as demonstrated in Fig. 2(c). Nevertheless, the comparisons in Fig. 2(c) provide an intuitive insight that while the lowdimensional diffusion model is ineffective for detail generation, it significantly benefits the modality transformation. This insight further inspires us to integrate the complementary advantages of low-dimensional and high-dimensional diffusion models to enhance cross-modal consistency and facilitate detail-rich motion generation. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6252 “a person dances with someone.” “the person is waving with their right arm.” Ours 1 GT Ours 2 Ours 1 GT Ours 2 “a person backs up and sits in a chair with their arms at their sides and then gets up from the chair.” Ours 1 GT Ours 2 Figure 1: The visual results of our B2A-HDM on HumanML3D (Guo et al. 2022). Our method can generate diverse and high-quality motion sequences that conform to the provided textual descriptions. GT VAE-1 (1*256) VAE-2 (2*256) (a) VAE Reconstruction VAE-4 (4*256) VAE-8 (8*256) VAE-12 (12*256) “a person shakes his arms like a bird, leaning forward and back” (b) FID (↓) (c) Top-1 R-Precision (↑) Figure 2: (a) Visual comparisons among the reconstruction results of different VAEs. (b) Comparison of FID scores (lower is better). (c) Comparison of Top-1 R-Precision scores (higher is better). To this end, we proposed a novel Basic-to-Advanced Hierarchical Diffusion Model, named B2A-HDM, for textguided motion synthesis, in which the basic diffusion model focuses on consistent but detail-scarce text-to-motion transformation, while the advanced diffusion model aims to conduct a detail-enhancing denoising process based on the intermediate results from the basic model. Specifically, the Basic Diffusion Model (BDM) is trained in the low-dimensional latent space, in which the data distribution is much simpler than the raw motion distribution, making the learning of text-to-motion transformation easier. However, since the low-dimensional latent space is less expressive, BDM is ineffective to synthesize detail-rich results. 
On the other hand, the Advanced Diffusion Model (ADM) is trained on the high-dimensional latent space, thus has a larger representation capacity for characterizing more motion details and improving high-fidelity synthesis. However, directly using ADM to conduct the whole text-to-motion denoising process will lead to poor modality consistency. To tackle this problem, B2A-HDM explicitly divides the denoising process into several sub-processes, in which BDM and ADM focus on different denoising stages. To be specific, B2A-HDM first conducts the forward diffusion on the synthesized result from BDM, resulting in the noised motion that will be regarded as the result of the early denoising sub-process. Then, ADM conducts the following denoising process based on the noised motion. Since the noised motion derived from BDM provides a proper initial state (i.e., consistent with the textual description), ADM can focus on the detail-enhancing denoising process. Moreover, to further ease the learning of high-dimensional diffusion model, B2A-HDM exploits the multi-denoisers framework for ADM, in which each denoiser dominates a specific denoising sub-process. Overall, our contributions can be summarized as follows: (1) We propose a novel Basic-to-Advanced Hierarchical Diffusion Model (B2A-HDM) for text-to-motion synthesis, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6253 which jointly incorporates the complementary benefits of low-/high-dimensional diffusion models into detailed motion synthesis. (2) We explicitly divides the denoising process into several sub-processes, which are separately dominated by one basic and two advanced diffusion models. (3) Extensive experiments on two text-to-motion benchmarks (Guo et al. 2022; Plappert, Mandery, and Asfour 2016) show the superiority of B2A-HDM over the existing SOTAs. Related Work Conditional Diffusion Models. Diffusion models (SohlDickstein et al. 2015; Ho, Jain, and Abbeel 2020; Nichol and Dhariwal 2021) are a novel class of generative models that have made significant strides in cross-modal synthesis, spanning a diverse range of applications such as text-toimage (Rombach et al. 2022; Saharia et al. 2022; Ramesh et al. 2022), text-to-video (Ho et al. 2022; Esser et al. 2023; Yu et al. 2023), text-to-3d (Poole et al. 2022; Lin et al. 2023), text-to-audio (Popov et al. 2021), among others. Typically, diffusion models consist of the forward and reverse diffusion process, in which the forward process gradually add Gaussian noise into real data to construct training data, while the reverse process involves a neural network to conduct denoising. To adapt diffusion models for conditional generation, Dharival et al. (Dhariwal and Nichol 2019) propose a classifier-guided diffusion model that incorporates conditional information into the reverse diffusion process using additional classifiers. Besides, Ho et al. (Ho and Salimans 2021) propose a classifier-free guidance strategy for conditional diffusion models. This approach strikes a balance between synthesis quality and diversity, which can obtain better results and is widely used by the following works. Text-Guided Human Motion Synthesis. Following the development of generative models, text-guided motion synthesis has witnessed significant progress in recent years. JL2P (Ahuja and Morency 2019) employs auto-encoder to model a share space for the text and motion embedding, from which the text embedding will be used to reconstruct the corresponding motion during inference. 
MotionCLIP (Tevet et al. 2022) attempts to improve the auto-encoder’s generalization by aligning the shared space with the expressive CLIP (Radford et al. 2021) embedding space, enabling it to handle out-of-distribution motion synthesis. TMEOS (Petrovich, Black, and Varol 2022) and T2M (Guo et al. 2022) use VAE framework to enhance the diversity of the generated results by constraining the share space into a normal distribution. Different from the above encoder-decoder paradigm, T2M-GPT (Zhang et al. 2023) generates motion sequence in an auto-regressive manner by jointly using VQ-VAE (van den Oord, Vinyals, and Kavukcuoglu 2017) and GPT (Radford et al. 2018), which gains improvement in term of fidelity and modality consistency. Building on the great success of diffusion model on image synthesis, some recent works (Tevet et al. 2023; Zhang et al. 2022; Ma, Bai, and Zhou 2022; Chen et al. 2023) explore the generative potential of diffusion for text-to-motion synthesis. However, these methods model the diffusion process either on the raw motion distribution or on a low-dimensional latent space, leading to modality-inconsistent or detail-scarce synthesis. In this paper, we exploit the basic and advanced diffusion models in different latent spaces to conduct the reverse diffusion process, in which basic and advanced models are separately in charge of modality transformation and detail-enhancing denoising process. Note that, while eDiff-I (Balaji et al. 2022) also employs multiple denoisers for reverse diffusion, our method differs in that we we train denoisers on different latent spaces, whereas in eDiff-I various denoisers are all modeled on the same raw data space. Methodology Given a textual description w = {wi}L i=1 with L words, text-to-motion synthesis aims to generate the 3D motion s = {si}N i=1 with N frames that conform to the text input, where si ∈RJ denotes a J-dimensional body pose representation at i-th frame. To achieve this, we propose a novel Basicto-Advanced Hierarchical Diffusion Model (B2A-HDM) to collaboratively exploit the low- and high-dimensional latent diffusion models for modality consistency and detail-rich motion synthesis. A method overview is shown in Fig. 3. Latent Diffusion for Text-to-Motion Synthesis Recently, diffusion model (Sohl-Dickstein et al. 2015; Ho, Jain, and Abbeel 2020; Nichol and Dhariwal 2021) has shown its outstanding generative ability for cross-modal synthesis tasks (Rombach et al. 2022; Saharia et al. 2022; Ramesh et al. 2022; Esser et al. 2023), inspiring researchers to explore diffusion model for high-quality text-to-motion synthesis. However, directly modeling the diffusion process on the raw motion distribution typically suffers from inferior synthesis due to the high complexity of raw distribution and the insufficiency of text-annotated training data. To tackle this problem, existing method (Chen et al. 2023) exploits a latent diffusion model to degrade the complexity of target distribution and conduct the diffusion process in low-dimensional latent space, leading to higher quality text-to-motion synthesis. Specifically, the latent diffusion model is composed of a motion VAE and a diffusion model in VAE latent space. The motion VAE consists of transformer-based encoder E and decoder D, in which E is used to encode the raw motion s ∈RN×J into latent code z ∈RK×D (K ≪N) in the latent space, while D is used to decode sample in latent space back to the real motion. 
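To make the shapes in the formulation above concrete, the following is a minimal sketch of a transformer-based motion VAE in the spirit of the description: the encoder maps a motion s ∈ R^{N×J} to K latent tokens of width D, and the decoder maps the latent tokens back to an N-frame motion. The learnable-query design, the layer counts, and the 196-frame / 263-dimensional pose defaults are illustrative assumptions for HumanML3D-style data, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MotionVAE(nn.Module):
    """Hedged sketch: encode an (N, J) motion into (K, D) latent tokens and decode it back."""

    def __init__(self, n_joints_dim=263, n_frames=196, k_tokens=4, d_latent=256,
                 n_heads=4, n_layers=2):
        super().__init__()
        self.proj_in = nn.Linear(n_joints_dim, d_latent)
        enc_layer = nn.TransformerEncoderLayer(d_latent, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        # Learnable query tokens that summarise the whole sequence into K latents.
        self.latent_queries = nn.Parameter(torch.randn(k_tokens, d_latent))
        self.to_mu = nn.Linear(d_latent, d_latent)
        self.to_logvar = nn.Linear(d_latent, d_latent)
        # Decoder: N learnable frame queries cross-attend to the K latent tokens.
        self.frame_queries = nn.Parameter(torch.randn(n_frames, d_latent))
        dec_layer = nn.TransformerDecoderLayer(d_latent, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)
        self.proj_out = nn.Linear(d_latent, n_joints_dim)

    def encode(self, motion):                        # motion: (B, N, J)
        b = motion.shape[0]
        tokens = self.proj_in(motion)                # (B, N, D)
        queries = self.latent_queries.expand(b, -1, -1)
        h = self.encoder(torch.cat([queries, tokens], dim=1))[:, :queries.shape[1]]
        return self.to_mu(h), self.to_logvar(h)      # (B, K, D) each

    def decode(self, z):                             # z: (B, K, D)
        b = z.shape[0]
        queries = self.frame_queries.expand(b, -1, -1)
        return self.proj_out(self.decoder(queries, z))   # (B, N, J)

    def forward(self, motion):
        mu, logvar = self.encode(motion)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decode(z)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        mse = torch.mean((recon - motion) ** 2)
        # KL weight 1e-4 and MSE weight 1.0 follow the loss weights reported later in the paper.
        return recon, mse + 1e-4 * kl

if __name__ == "__main__":
    vae = MotionVAE()
    motion = torch.randn(2, 196, 263)
    recon, loss = vae(motion)
    print(recon.shape, float(loss))
```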
By using the Kullback-Leibler (KL) loss and the Mean Squared Error (MSE) loss for the training procedure, the motion VAE can provide a low-dimensional but representative latent space. On the other hand, the diffusion model aims to generate a motion latent code in the VAE latent space according to the textual description, which is achieved by a reverse diffusion process that gradually transfers a random noise n ∈ R^{K×D} from the Gaussian distribution to the motion latent code z_0 ∈ R^{K×D}. To train a denoiser ϵ_θ for this reverse process, a forward diffusion process is required to successively add Gaussian noise onto z_0 in a Markov chain manner, which can be formulated as:
$$q(z_{1:T} \mid z_0) := \prod_{t=1}^{T} q(z_t \mid z_{t-1}), \quad (1)$$
$$q(z_t \mid z_{t-1}) := \mathcal{N}\!\left(z_t;\ \sqrt{1-\beta_t}\, z_{t-1},\ \beta_t \mathbf{I}\right), \quad (2)$$
where T is the total number of steps of the forward process and β_t is a hyperparameter of the noising weight. By using the reparameterization trick (Kingma and Welling 2014), we can sample z_t from q(z_t | z_0) at an arbitrary timestep t:
$$z_t := \sqrt{\bar{\alpha}_t}\, z_0 + \epsilon \sqrt{1 - \bar{\alpha}_t}, \quad \epsilon \sim \mathcal{N}(0, \mathbf{I}), \quad (3)$$
where $\bar{\alpha}_t := \prod_{s=1}^{t} \alpha_s$ and $\alpha_s := 1 - \beta_s$. During training, given the noised data z_t and the textual description w as inputs, the denoiser ϵ_θ is expected to predict the noise ϵ added at the t-th Markov step. The objective function for the learning of ϵ_θ only contains the MSE loss:
$$\mathcal{L} := \mathbb{E}_{\epsilon \sim \mathcal{N}(0,\mathbf{I}),\, t \in [1,T]} \left[ \left\| \epsilon - \epsilon_\theta\!\left(z_t, \tau_\theta(w), t\right) \right\|_2^2 \right], \quad (4)$$
where τ_θ represents a pre-trained CLIP (Radford et al. 2021) text encoder which is used to extract the text embedding and is frozen during training. Furthermore, the denoiser ϵ_θ is trained by using classifier-free guidance (Ho and Salimans 2021). Therefore, during inference, the predicted noise ϵ′ is formulated as the linear combination of the conditional and unconditional predictions:
$$\epsilon' := \epsilon_\theta(z_t, \varnothing, t) + g \cdot \left( \epsilon_\theta\!\left(z_t, \tau_\theta(w), t\right) - \epsilon_\theta(z_t, \varnothing, t) \right), \quad (5)$$
where $\varnothing$ represents a null-text input and g is the hyperparameter of the guidance scale.
Figure 3: Method Overview. B2A-HDM consists of a Basic Diffusion Model (BDM) and an Advanced Diffusion Model (ADM). BDM comprises a VAE {E_l, D_l} and a denoiser ϵ_θ^l, in which ϵ_θ^l is in charge of the complete T-step denoising process in the low-dimensional latent space (LD-LS). ADM comprises a VAE {E_h, D_h} and two denoisers ϵ_θ^{h1} and ϵ_θ^{h2}, in which ϵ_θ^{h1} and ϵ_θ^{h2} are responsible for the T_1- and T_2-step denoising sub-processes in the high-dimensional latent space (HD-LS), respectively.
B2A-HDM
Although using a latent diffusion model can ease the learning of the diffusion network, the low-dimensional latent space may be under-representative (i.e., as shown in Fig. 2(a)) and thus constrains the generative upper bound of the diffusion model, leading to detail-scarce motion synthesis. Directly increasing the dimension of the VAE latent space will make the target distribution complex again and cause a significant performance drop in modality consistency (i.e., as illustrated in Fig. 2(c)).
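For reference, Eqs. (1)–(5) above translate almost directly into code. In the sketch below, the linear noise schedule, the guidance scale value, and the stand-in denoiser are assumptions; only the sampling, loss, and guidance formulas follow the text.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)         # \bar{alpha}_t = prod_{s<=t} alpha_s

def q_sample(z0, t, noise):
    """Eq. (3): z_t = sqrt(abar_t) z_0 + sqrt(1 - abar_t) eps."""
    ab = alpha_bar[t].view(-1, 1, 1)             # broadcast over (B, K, D)
    return ab.sqrt() * z0 + (1.0 - ab).sqrt() * noise

def diffusion_loss(denoiser, z0, text_emb, t):
    """Eq. (4): MSE between the added noise and the predicted noise."""
    noise = torch.randn_like(z0)
    zt = q_sample(z0, t, noise)
    return torch.mean((noise - denoiser(zt, text_emb, t)) ** 2)

def cfg_noise(denoiser, zt, text_emb, null_emb, t, g=7.5):
    """Eq. (5): classifier-free guidance; g is the guidance scale (value assumed here)."""
    eps_uncond = denoiser(zt, null_emb, t)
    eps_cond = denoiser(zt, text_emb, t)
    return eps_uncond + g * (eps_cond - eps_uncond)

if __name__ == "__main__":
    denoiser = lambda zt, c, t: torch.zeros_like(zt)   # stand-in, just to run the sketch
    z0 = torch.randn(8, 4, 256)                        # (batch, K, D) latent codes
    t = torch.randint(0, T, (8,))
    print(float(diffusion_loss(denoiser, z0, None, t)))
    zt = q_sample(z0, t, torch.randn_like(z0))
    print(cfg_noise(denoiser, zt, None, None, t).shape)
```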
To boost the representation capacity of latent motion embedding without degrading the cross-modal mapping consistency, our B2A-HDM employs Basic Diffusion Model (BDM) to provide modality consistent generated results, which will be further processed by Advanced Diffusion Model (ADM) for detail-enhancing synthesis. To be specific, our BDM and ADM are defined in the lowdimensional and high-dimensional latent space, respectively. To obtain BDM, a motion VAE Vl = {El, Dl} is trained on the raw motion distribution to obtain a low-dimensional latent space Wl (with latent code zl ∈RK1×D). Then, a denoiser ϵl θ on Wl is trained to handle arbitrary t-th denoising step (t ∈[1, T]), which will be used to conduct the reverse diffusion process from random noise nl ∈RK1×D during inference. For ADM, motion VAE Vh = {Eh, Dh} is also required to obtain a high-dimensional latent space Wh (with latent code zh ∈RK2×D, K2 > K1). However, directly training a denoiser ϵh θ on Wh for arbitrary timestep t is notrivial and typically results in model degradation. To tackle this problem, B2A-HDM improves the common reverse diffusion process in two-folds. First, instead of using a single denoiser ϵh θ to conduct the whole reverse diffusion process, B2A-HDM applies BDM to provide the early T l steps denoising result for ADM, and ADM is only required to conduct the following T −T l denoising steps. Since BDM is trained on Wl with lower distribution complexity and performs better in modality consistent synthesis, it can provide an intermediate denoising result with proper modality information. Therefore ADM is designed for the following detail-enhancing denoising process. Furthermore, using a single denoiser ϵh θ for the remaining T −T l steps denoising process still remains challenging due to the high distribution complexity of Wh and the significant discrepancy of zh t in various timestep. To further ease the learning of denoiser on Wh, our B2A-HDM assigns k denoisers for ADM, in which The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6255 Algorithm 1: Reverse Diffusion Process of B2A-HDM Require: A textual description w, a random seed r. Ensure: A motion sequence s. 1: zl T ∼N(0, I), ϵ ∼N(0, I) unit Gaussian random variables with random seed r; 2: T h ←T −T l denoising steps for ADM; 3: for t = T, T −1, ..., 1 do 4: zl t−1 ←ϵl θ(zl t, τθ(w), t); 5: end for 6: sl ←Dl(zl 0); ˆzh 0 ←Eh(sl); zh T h ←√¯αT hˆzh 0 + ϵ√1 −¯αT h; 7: for t = T h, T h −1, ..., 1 do 8: if t > T h 2 then 9: zh t−1 ←ϵh1 θ (zh t , τθ(w), t); 10: else 11: zh t−1 ←ϵh2 θ (zh t , τθ(w), t); 12: end if 13: end for 14: s ←Dh(zh 0); 15: return s each denoiser ϵhk θ is only in charge of the specific interval of the denoising process in the high-dimensional latent space. During inference, the T l steps denoising result zl T l from BDM can not used by ADM due to dimension inconsistency between zl T l and zh in Wh. To address this problem, B2AHDM first employs BDM to generate a motion sequence sl, which will be transferred into a latent code ˆzh 0 in Wh. Then, B2A-HDM conducts T −T l steps forward diffusion to obtained intermediate denoising result for ADM. We take k = 2 as an example and show the complete reverse diffusion process in Algorithm 1. Training Details and Objective Functions The training procedure for BDM and ADM are similar to that for the common latent diffusion model(Chen et al. 2023; Rombach et al. 2022), except for the training of the diffusion networks for ADM. 
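Before turning to the training details, the inference procedure of Algorithm 1 above (with k = 2 advanced denoisers) can be paraphrased as the following sketch. Each `eps_*` callable is assumed to wrap one full denoising update (it returns z_{t−1} given z_t, the text embedding, and t), and the T / T^h split and all module objects are placeholders rather than the released implementation.

```python
import torch

def b2a_hdm_sample(text_emb, eps_basic, eps_adv1, eps_adv2,
                   dec_l, enc_h, dec_h, alpha_bar, T=1000, T_h=250, seed=0):
    """Hedged sketch of Algorithm 1 (k = 2): basic denoising, re-noising, advanced denoising."""
    g = torch.Generator().manual_seed(seed)
    z_l = torch.randn(1, 4, 256, generator=g)          # LD-LS latent sampled from N(0, I)
    eps = torch.randn(1, 8, 256, generator=g)          # noise reused in step 6

    # Steps 3-5: the basic denoiser runs the full T-step reverse process.
    for t in range(T, 0, -1):
        z_l = eps_basic(z_l, text_emb, t)

    # Step 6: decode, re-encode into the HD latent space, then forward-noise to step T_h.
    motion_l = dec_l(z_l)
    z_h0_hat = enc_h(motion_l)                          # (1, 8, 256)
    ab = alpha_bar[T_h - 1]                             # \bar{alpha}_{T^h} (0-based indexing)
    z_h = ab.sqrt() * z_h0_hat + (1.0 - ab).sqrt() * eps

    # Steps 7-13: two advanced denoisers split the remaining T_h steps at T_h / 2.
    for t in range(T_h, 0, -1):
        denoiser = eps_adv1 if t > T_h // 2 else eps_adv2
        z_h = denoiser(z_h, text_emb, t)

    return dec_h(z_h)                                   # step 14: decode the final motion

if __name__ == "__main__":
    T = 1000
    alpha_bar = torch.cumprod(1.0 - torch.linspace(1e-4, 0.02, T), dim=0)
    ident = lambda z, c, t: z                           # stand-in "denoisers"
    dec_l = lambda z: z.mean(dim=-1)                    # stand-in decoders / encoder
    enc_h = lambda m: torch.zeros(1, 8, 256)
    dec_h = lambda z: torch.zeros(1, 196, 263)
    out = b2a_hdm_sample(None, ident, ident, ident, dec_l, enc_h, dec_h, alpha_bar, T=T)
    print(out.shape)
```

How much work is left to the advanced denoisers is controlled by the number of forward-noising steps T^h = T − T^l, which the sketch exposes as the `T_h` argument.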
Specifically, since denoisers in ADM are in charge of different denoising sub-processes, each denoiser is assigned a specific timestep interval during training. Note that although each denoiser is independent of the others, we train all of them in a single training procedure, which enables us to conduct the complete reverse process during training and observe the performance change on the evaluation set. During training VAE for BDM and ADM, the objective functions can be formulated as: Lvae = λklLkl + λvae mseLvae mse, (6) where λkl and λvae mse are the trade-off hyperparameters and are set to 1e-4 and 1.0, respectively. When training the denoiser for BDM and ADM, only MSE loss Lϵθ mse in Equation 4 is used. In addition, we introduce a timestep-aware loss weight λ(t) for the MSE loss in BDM to increase the penalty for early denoising process learning. The timestep-aware MSE loss can be formulated as: Lt mse = λ(t)Lϵ mse, λ(t) = (1 −¯αt) ∗w1 + w2, (7) where ¯αt is the diffusion parameter defined in Equation 3, w1 and w2 are used to rescale the loss weight to a specific interval (i.e., λ(t) ∈[0.5, 5]) and are set to 4.5 and 0.5, respectively. In Sec. , we will analyse the impact of using timestep-aware MSE loss for BDM, which will demonstrate improved performance in high-dimensional scenarios. Experiments Datasets. Our experiments are conducted on two publicly available benchmarks for text-to-motion synthesis, namely KIT-ML (Plappert, Mandery, and Asfour 2016) and HumanML3D (Guo et al. 2022). Specifically, KIT-ML consists of 3,911 motion sequences with 12.5 FPS and 6,278 language annotations. HumanML3D contains 14,616 motion sequences with 20FPS and 44,970 textual descriptions. Regarding the data format, we follow (Guo et al. 2022) to use the redundant representation for each pose frame, which is composed of the local/global joint velocities, joint positions, joint rotations, and the foot contact binary labels. Baselines. We perform quantitative and qualitative comparisons with four most advanced text-to-motion synthesis methods, including MDM (Tevet et al. 2023), MotionDiffuse (Zhang et al. 2022), MLD (Chen et al. 2023), and T2MGPT (Zhang et al. 2023). For these methods, we employ the official pre-trained models and strictly adhere to the official instructions to conduct text-to-motion synthesis. Evaluation Metrics. We employ five evaluation metrics originated from (Guo et al. 2022) to assess the performance of different methods. Specifically, FID (Heusel et al. 2017) measures the distribution difference between the generated and real motion, which is widely used for realism evaluation. RPrecision and MM-Dist are designed to evaluate modality consistency, in which R-Precision calculates the Top-1/2/3 accuracy for the motion-to-text retrieval while MM-Dist calculates the Euclidean distances between the generated motion and its corresponding text. Diversity and MModality are designed for diversity evaluation, which are used to measure the variance of all generated motions and the variance of the particular generations for each text input, respectively. Implementation Details. The dimension of the latent space for BDM and ADM are 4 × 256 and 8 × 256, respectively. ADM is equipped with 2 denoisers. In spite of using different latent space, BDM and ADM share the same network architecture for the motion VAE encoder, decoder, and the diffusion denoiser. In line with MLD (Chen et al. 2023), all of the three modules are composed of 9 transformer layers with skip connection. 
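The timestep-aware weight of Eq. (7) is straightforward to reproduce; the sketch below uses the stated w1 = 4.5 and w2 = 0.5, so λ(t) grows from about 0.5 at small t towards 5 at large t, putting heavier penalties on the noisier (early reverse-process) steps. The noise schedule itself is an assumed placeholder.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # assumed schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)       # \bar{alpha}_t

def timestep_weight(t, w1=4.5, w2=0.5):
    """Eq. (7): lambda(t) = (1 - abar_t) * w1 + w2, rescaled into roughly [0.5, 5]."""
    return (1.0 - alpha_bar[t]) * w1 + w2

def timestep_aware_mse(pred_noise, true_noise, t):
    """Per-sample MSE weighted by lambda(t), as used for the basic diffusion model."""
    per_sample = ((pred_noise - true_noise) ** 2).flatten(1).mean(dim=1)
    return (timestep_weight(t) * per_sample).mean()

if __name__ == "__main__":
    t = torch.tensor([0, 499, 999])
    print(timestep_weight(t))                        # ~0.5 at t=0, approaching 5 at t=T
    pred, true = torch.randn(3, 4, 256), torch.randn(3, 4, 256)
    print(float(timestep_aware_mse(pred, true, t)))
```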
Our B2A-HDM is implemented using PyTorch (Paszke et al. 2019) and both of motion VAE and diffusion denoiser are trained on 4 Tesla V100 GPUs. During training, for both HumanML3D (Guo et al. 2022) and KIT-ML (Plappert, Mandery, and Asfour 2016) dataset, the batch size on each GPU is set to 96 and the all modules are trained by using AdamW (Loshchilov and Hutter 2019) optimizer with a fixed learning rate 1e-4. For HumanML3D, both VAE and denoiser are trained for 6,000 epochs, while for KIT-ML, the VAE and denoiser are trained for 25,000 epochs and 2,500 epochs, respectively. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6256 Method R-Precision ↑ FID ↓ MM-Dist ↓Diversity →MModality ↑ Top-1 Top-2 Top-3 (a) Real motion 0.511±.003 0.703±.003 0.797±.002 0.002±.000 2.974±.008 9.503±.065 MDM (Tevet et al. 2023) 0.320±.005 0.498±.004 0.611±.007 0.544±.044 5.566±.027 9.559±.086 2.799±.072 MotionDiffuse (Zhang et al. 2022) 0.491±.001 0.681±.001 0.782±.001 0.630±.001 3.113±.001 9.410±.049 1.553±.042 MLD (Chen et al. 2023) 0.481±.003 0.673±.003 0.772±.002 0.473±.013 3.196±.010 9.724±.082 2.413±.079 T2M-GPT (Zhang et al. 2023) 0.491±.003 0.680±.003 0.775±.002 0.116±.004 3.118±.011 9.761±.081 1.856±.011 B2A-HDM (Ours) 0.511±.002 0.699±.002 0.791±.002 0.084±.004 3.020±.010 9.526±.080 1.914±.078 (b) Real motion 0.424±.005 0.649±.006 0.779±.006 0.031±.004 2.788±.012 11.08±.097 MDM (Tevet et al. 2023) 0.164±.004 0.291±.004 0.396±.004 0.497±.021 9.191±.022 10.847±.109 1.907±.214 MotionDiffuse (Zhang et al. 2022) 0.417±.004 0.621±.004 0.739±.004 1.954±.062 2.958±.005 11.10±.143 0.730±.013 MLD (Chen et al. 2023) 0.390±.008 0.609±.008 0.734±.007 0.404±.027 3.204±.027 10.80±.117 2.192±.071 T2M-GPT (Zhang et al. 2023) 0.416±.006 0.627±.006 0.745±.006 0.514±.029 3.007±.023 10.921±.108 1.570±.039 B2A-HDM (Ours) 0.436±.006 0.653±.006 0.773±.005 0.367±.020 2.946±.024 10.86±.124 1.291±.047 Table 1: Quantitative results on (a) HumanML3D (Guo et al. 2022) and (b) KIT-ML (Plappert, Mandery, and Asfour 2016). Quantitative Results The quantitative comparison of our B2A-HDM against the existing state-of-the-art methods on HumanML3D (Guo et al. 2022) and KIT-ML (Plappert, Mandery, and Asfour 2016) datasets are reported in Tab. 1 (a) and (b), respectively. Both tables demonstrate that B2A-HDM outperforms other methods in terms of modality consistency and fidelity. Specifically, B2A-HDM achieves the highest R-Precision (Top-1/2/3) and the lowest MM-Dist scores on both datasets, indicating that the generated results of B2A-HDM are more consistent with the input text than those of other methods. Moreover, B2AHDM obtains the lowest FID score on both datasets, highlighting its superiority over other methods in realistic synthesis. It’s worth noting that B2A-HDM is the only approach that consistently improves on the above three metrics, which further validates the effectiveness of the combination of basic and advanced diffusion models. Additionally, B2A-HDM achieves comparable Diversity and Modality scores on both datasets, demonstrating its ability for diverse generation. Qualitative Results Fig. 4 shows a qualitative comparison of B2A-HDM against the existing SOTA methods on HumanML3D (Guo et al. 2022) dataset. The visual comparison illustrates B2A-HDM outperforms other methods in generating motion sequences with better modality consistency and detail preservation. For instance, in the first row of Fig. 4, MotionDiffuse (Zhang et al. 2022) and MDM (Tevet et al. 
2023) fail to generate motion that coheres with the input text, while T2M-GPT (Zhang et al. 2023) and MLD (Chen et al. 2023) tend to overlook details of the hands. In contrast, B2A-HDM generates motion sequences that conform to the input text and capture the finegrained motion details in the hand region. Method BD AD LD-LS HD-LS R-P Top-1 ↑FID ↓ No. No. Dim Dim BDM-4 1 0 4×256 0.505 0.284 ADM-8 0 1 8×256 0.481 0.171 B2A-HDM⋆ 3 0 4×256 0.508 0.220 B2A-HDM∗ 0 3 8×256 0.490 0.225 B2A-HDM 1 2 4×256 8×256 0.511 0.084 Table 2: Quantitative results of the ablation study with different configurations, in which BD/AD No., LD/HD-LS Dim refer to basic/advanced denoiser number and low/highdimension latent space dimension, respectively Ablation Study Impact of the timestep-aware MSE loss. As shown in Fig. 5, when the dimension of the latent space is higher than 2×256, using the timestep-aware MSE loss consistently enhances the performance of diffusion model in terms of lower FID and higher Top-1 R-Precision, which highlights the ability of timestep-aware MSE loss to facilitate the learning of denoisers in high-dimensional latent spaces. Effectiveness of B2A-HDM. We compare B2A-HDM with BDM in a 4 × 256 latent space (BDM-4) and ADM in an 8 × 256 latent space (ADM-8). Additionally, we compare B2A-HDM with two variants (i.e., B2A-HDM⋆and B2A-HDM∗) that separately include three denoisers in lowdimemsion and high-dimension latent space to demonstrate the effectiveness of combining basic and advanced diffusion models. As reported in Tab. 2, directly using BDM-4 or ADM-8 leads to higher FID or lower R-Precision. Although increasing the denoiser number enables B2A-HDM⋆and The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6257 B2A-HDM (Ours) Real T2M-GPT MLD MotionDiffuse MDM “a person walks forward then turn around then starts to jump back.” “a man walks down some stairs, ending in a standing position.” “a person bends over slightly and picks something up on their left side, and puts it down on their right side.” Figure 4: Qualitative comparisons on HumanML3D dataset (Guo et al. 2022). The flow of time is represented by colors, with lighter shades indicating the past. Please zoom in for more details. (a) FID (↓) (b) Top 1 RP (↑) Figure 5: Impact of the timestep-aware MSE loss for BDMs in different latent space (LS). B2A-HDM∗to gain performance improvement against their single-denoiser counterparts (i.e., BDM-4 and ADM-8), they still fall short of B2A-HDM. By collaborativly using basic and advanced diffusion models, B2A-HDM achieves best FID and R-Precision scores, which demonstrate the necessity and effectiveness of combining basic and advanced denoisers. Moreover, Fig. 6 shows that ADM-8 is prone to generate modality inconsistent motions while BDM-4 tends to ignore the motion details. In contrast, our B2A-HDM performs better in modality transformaion and detail preservation, which further validates the effectiveness of our method. “a person walks up four steps while holding onto the railing with their right hand.” “a person walks forward and raises their arms in victory.” B2A-HDM (Ours) Real BDM-4 ADM-8 Figure 6: Ablation Study on the effectiveness of B2A-HDM. Conclusion We propose a novel Basic-to-Advanced Hierarchical Diffusion Model (B2A-HDM) for text-to-motion synthesis. 
B2AHDM comprises a basic diffusion model (BDM) in lowdimensional latent space and a advanced diffusion model (ADM) with two denoisers in high-dimensional latent space, in which BDM are in charge of the modality-consistent denoising, whereas ADM is responsible for the following detailenhancing denoising. In this way, B2A-HDM can fully leverage the generative potential of diffusion models to produce high-quality motion sequences that conform to the provided textual descriptions. Extensive experiments on two public text-to-motion benchmarks demonstrate the superiority of B2A-HDM over existing state-of-the-art methods, while ablation studies further validate the effectiveness of our approach. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6258 Acknowledgments This work was supported in part by National Key R&D Program of China under Grant No.2020AAA0109700, Guangdong Outstanding Youth Fund (Grant No.2021B1515020061), National Natural Science Foundation of China (NSFC) under Grant No.61976233 and No.92270122, Mobility Grant Award under Grant No.M0461, Shenzhen Science and Technology Program (Grant No.RCYX20200714114642083), Shenzhen Science and Technology Program (Grant No.GJHZ20220913142600001), Nansha Key RD Program under Grant No.2022ZD014 and Sun Yat-sen University under Grant No.22lgqb38 and 76160-12220011. References Ahuja, C.; and Morency, L.-P. 2019. Language2pose: Natural Language Grounded Pose Forecasting. In International Conference on 3D Vision (3DV), 719–728. Balaji, Y.; Nah, S.; Huang, X.; Vahdat, A.; Song, J.; Zhang, Q.; Kreis, K.; Aittala, M.; Aila, T.; Laine, S.; Catanzaro, B.; Karras, T.; and Liu, M.-Y. 2022. eDiff-I: Text-to-Image Diffusion Models with Ensemble of Expert Denoisers. arXiv preprint arXiv:2211.01324. Chen, X.; Biao, J.; Wen, L.; Zilong, H.; Bin, F.; Tao, C.; Jingyi, Y.; and Gang, Y. 2023. Executing your Commands via Motion Diffusion in Latent Space. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Dabral, R.; Mughal, M. H.; Golyanik, V.; and Theobalt, C. 2023. MoFusion: A Framework for Denoising-Diffusionbased Motion Synthesis. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Dhariwal, P.; and Nichol, A. 2019. Diffusion models beat gans on image synthesis. In Advances in Neural Information Processing Systems (NeurIPS), 8780–8794. Esser, P.; Chiu, J.; Atighehchian, P.; Granskog, J.; and Germanidis, A. 2023. Structure and Content-Guided Video Synthesis with Diffusion Models. arXiv preprint arXiv:2302.03011. Ghosh, A.; Cheema, N.; Oguz, C.; Theobalt, C.; and Slusallek, P. 2021. Synthesis of Compositional Animations from Textual Descriptions. In IEEE/CVF International Conference on Computer Vision (ICCV), 1396–1406. Guo, C.; Zou, S.; Zuo, X.; Wang, S.; Ji, W.; Li, X.; and Cheng, L. 2022. Generating Diverse and Natural 3d Human Motions from Text. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5152–5161. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; and Hochreiter, S. 2017. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In Advances in Neural Information Processing Systems (NeurIPS). Ho, J.; Chan, W.; Saharia, C.; Whang, J.; Gao, R.; Gritsenko, A.; Kingma, D. P.; Poole, B.; Norouzi, M.; Fleet, D. J.; and Salimans, T. 2022. Imagen Video: High Definition Video Generation with Diffusion Models. arXiv preprint arXiv:2210.02303. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising Diffusion Probabilistic Models. 
arXiv preprint arxiv:2006.11239. Ho, J.; and Salimans, T. 2021. Classifier-Free Diffusion Guidance. In Deep Generative Models and Downstream Applications at NeurIPS (NeurIPSW). Jin, P.; Wu, Y.; Fan, Y.; Sun, Z.; Wei, Y.; and Yuan, L. 2023. Act As You Wish: Fine-Grained Control of Motion Diffusion Model with Hierarchical Semantic Graphs. In NeurIPS. Kingma, D. P.; and Welling, M. 2014. Auto-Encoding Variational Bayes. In International Conference on Learning Representations (ICLR). Lin, C.-H.; Gao, J.; Tang, L.; Takikawa, T.; Zeng, X.; Huang, X.; Kreis, K.; Fidler, S.; Liu, M.-Y.; and Lin, T.-Y. 2023. Magic3D: High-Resolution Text-to-3D Content Creation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Loshchilov, I.; and Hutter, F. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations (ICLR). Ma, J.; Bai, S.; and Zhou, C. 2022. Pretrained Diffusion Models for Unified Human Motion Synthesis. arXiv preprint arXiv:2212.02837. Nichol, A.; and Dhariwal, P. 2021. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning (ICML), 8162–8171. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; Desmaison, A.; K¨opf, A.; Yang, E.; DeVito, Z.; Raison, M.; Tejani, A.; Chilamkurthy, S.; Steiner, B.; Fang, L.; Bai, J.; and Chintala, S. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In arXiv preprint arXiv:1912.01703v1. Petrovich, M.; Black, M. J.; and Varol, G. 2022. TEMOS: Generating diverse human motions from textual descriptions. In European Conference on Computer Vision (ECCV). Plappert, M.; Mandery, C.; and Asfour, T. 2016. The KIT Motion-Language Dataset. Big Data, 4(4): 236–252. Poole, B.; Jain, A.; Barron, J. T.; and Mildenhall, B. 2022. DreamFusion: Text-to-3D using 2D Diffusion. arXiv preprint arXiv:2209.14988. Popov, V.; Vovk, I.; Gogoryan, V.; Sadekova, T.; and Kudinov, M. A. 2021. Grad-tts: A diffusion probabilistic model for textto-speech. In International Conference on Machine Learning (ICML). Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; Krueger, G.; and Sutskever, I. 2021. Learning Transferable Visual Models from Natural Language Supervision. In International Conference on Machine Learning (ICML), 8748–8763. Radford, A.; Narasimhan, K.; Salimans, T.; and Sutskever, I. 2018. Improving language understanding by generative pre-training. In Advances in Neural Information Processing Systems (NeurIPS). Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-Resolution Image Synthesis with Latent The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6259 Diffusion Models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10684–10695. Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E.; Ghasemipour, S. K. S.; Ayan, B. K.; Mahdavi, S. S.; Lopes, R. G.; Salimans, T.; Ho, J.; Fleet, D. J.; and Norouzi, M. 2022. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. arXiv preprint arXiv:2205.11487. Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; and Ganguli, S. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. 
In International Conference on Machine Learning (ICML), 2256–2265. Tevet, G.; Gordon, B.; Hertz, A.; Bermano, A. H.; and CohenOr, D. 2022. Motionclip: Exposing human motion generation to clip space. In European Conference on Computer Vision (ECCV), 358–374. Tevet, G.; Raab, S.; Gordon, B.; Shafir, Y.; Bermano, A. H.; and Cohen-Or, D. 2023. Human Motion Diffusion Model. In International Conference on Learning Representations (ICLR). van den Oord, A.; Vinyals, O.; and Kavukcuoglu, K. 2017. Neural discrete representation learning. In Advances in Neural Information Processing Systems (NeurIPS). Yu, S.; Sohn, K.; Kim, S.; and Shin, J. 2023. Video Probabilistic Diffusion Models in Projected Latent Space. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Zhang, J.; Zhang, Y.; Cun, X.; Huang, S.; Zhang, Y.; Zhao, H.; Lu, H.; and Shen, X. 2023. T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Zhang, M.; Cai, Z.; Pan, L.; Hong, F.; Guo, X.; Yang, L.; and Liu, Z. 2022. MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model. arXiv preprint arXiv:2208.15001. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6260 | 2024 | 695 |
18,512 | Learning by Erasing: Conditional Entropy Based Transferable Out-of-Distribution Detection Meng Xing1,3, Zhiyong Feng1, Yong Su2, Changjae Oh3 1College of Intelligence and Computing, Tianjin University 2Tianjin Normal University 3Centre for Intelligent Sensing, Queen Mary University of London {xingmeng, zyfeng, suyong}@tju.edu.cn, [email protected] Abstract Detecting Out-of-distribution (OOD) inputs is crucial to deploying machine learning models to the real world safely. However, existing OOD detection methods require an indistribution (ID) dataset to retrain the models. In this paper, we propose a Deep Generative Models (DGMs) based transferable OOD detection that does not require retraining on the new ID dataset. We first establish and substantiate two hypotheses on DGMs: DGMs are prone to learn low-level features rather than high-level semantic information; the lower bound of DGM’s log-likelihoods is tied to the conditional entropy between the model input and target output. Drawing on the aforementioned hypotheses, we present an innovative image-erasing strategy, which is designed to create distinct conditional entropy distributions for each ID dataset. By training a DGM on a complex dataset with the proposed image-erasing strategy, the DGM could capture the discrepancy of conditional entropy distribution for varying ID datasets, without re-training. We validate the proposed method on the five datasets and show that, without retraining, our method achieves comparable performance to the stateof-the-art group-based OOD detection methods. The project codes will be open-sourced on our project website. Introduction Deep neural networks (DNNs) have demonstrated their potential in solving various safety-related computer vision tasks (Wang, Shi, and Yeung 2016), such as autonomous driving (Casas, Sadat, and Urtasun 2021) and healthcare (Kim et al. 2021). However, DNNs tend to yield confident but incorrect predictions for the distribution-mismatched examples (Nguyen, Yosinski, and Clune 2015; Sensoy, Kaplan, and Kandemir 2018; Shekhovtsov and Flach 2019), and results in serious consequences, e.g., accidents by autonomous vehicles (Times 2018) and incorrect diagnosis in healthcare (BBC 2020). Therefore, determining whether inputs are outof-distribution (OOD) is an important task to safely deploy machine learning models to the real world. OOD detection can be performed using labeled data by utilizing output characteristics (Hsu et al. 2020), training dynamics (Huang et al. 2021), adversarial training (Lakshminarayanan, Pritzel, and Blundell 2017; Bevandic et al. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. CI→SV (0.85 / 24dB) SV→CI (0.23 / 10dB) Fa→MN (0.63 / 16dB) MN→Fa (0.26 / 9dB) Figure 1: Original images (top row) and their reconstruction results (bottom row) by a pre-trained DGM. The model pre-trained on CI (CIFAR10) / Fa (FashionMNIST) can reconstruct SV (SVHN) / MN (MNIST) samples well, but not vice versa. Reconstruction performances on the test set of the target datasets are evaluated with (SSIM↑/ PSNR↑). 2018; Huang and Li 2021), and metric learning (Lee et al. 2018b; Zaeemzadeh et al. 2021; Ming et al. 2023). Since it is time-consuming and laborious to obtain labeled data in real scenarios, as an alternative, Deep Generative Models (DGMs) have been used to capture the sample distribution of In-Distribution (ID) datasets (Serr`a et al. 2020). 
However, most DGMs-based methods focus on elaborating architectures (Ren et al. 2019; Serr`a et al. 2020), designing loss functions (Xiao, Yan, and Amit 2020) or statistical models (Zhang et al. 2020; Jiang, Sun, and Yu 2022), targeting the specific feature representation or data distribution of ID samples (Sun et al. 2023), i.e., need retraining to adapt to the normal pattern of the new ID datasets. This motivates the following unexplored question: How can we make OOD detection transferable across new ID datasets? In this paper, we aim to achieve transferable OOD detection based on the following two key hypotheses: 1) The DGMs are prone to learn low-level features, rather than semantic information (Kirichenko, Izmailov, and Wilson 2020). Following the experimental setup in (Xiao, Yan, and Amit 2020), we use the Variational Auto-Encoder to reconstruct the input image and show some results comparisons in Figure 1. Figure 1 demonstrates that a DGM pretrained on a complex dataset, which includes diverse semantic categories and a complex image texture, can capThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6261 Figure 2: The (a) and (b) are the DGM’s negative log-likelihood distribution on different datasets. The model of (a) is trained by reconstructing the input, while the model of (b) is trained by generating the erased patch based on its surrounding information. The real negative conditional entropy distribution between the erased patch and its surrounding is given in (c). The DGM is an auto-encoder proposed in this paper and is trained with ImageNet. The image size is 32 × 32, and the erased image patch in (b) and (c) is at the center of the image with the size of 16 × 16. Kernel Density Estimation (KDE) is used to estimate the probability distribution. ture the distribution of simple datasets, but not vice versa. This means that the DGMs pre-trained on a complex dataset can approach the lower bounds of negative-log-likelihoods of simple datasets without retraining. 2) In the DGMs, the lower bound of negative-loglikelihoods is determined by the conditional entropy between the model input and target output. We give supporting experimental results in Figure 2 and theoretically demonstrate this hypothesis in Motivation. The log-likelihood distributions of all datasets in Figure 2(a) are approaching 0 since the conditional entropy between the model input and output is 0 in this experiment setting. This result explains why traditional DGMs cannot be used directly for OOD detection(Serr`a et al. 2020). In contrast, the log-likelihood distributions of five datasets in Figure 2(b) are significantly different from each other and the distribution discrepancy is consistent with real conditional entropy distribution in Figure 2(c). Therefore, we can assign an exclusive conditional entropy distribution for each dataset by designing an appropriate image-erasing strategy, which is an indispensable prerequisite for achieving transferable OOD detection. Motivated by the proven hypotheses, we propose a novel Conditional Entropy based Transferable OOD detection (CETOOD). Specifically, we first propose an image-erasing strategy that creates exclusive conditional entropy distribution for different datasets by considering the erased patch and its surrounding information as the content and condition. 
Subsequently, we design the Uncertainty Estimation Network (UEN), which estimates the Maximum A Posteriori of generating the erased patch by reconstructing the surrounding information and generating the erased patch. Finally, we train the UEN on ImageNet (Deng et al. 2009) dataset, affording our model approaching the lower bounds of negative-log-likelihoods on different ID datasets, which reflects their distribution discrepancy of conditional entropy. In the experiment, we demonstrate that our method achieves comparable performance with state-of-the-art methods in group-based OOD detection. More importantly, our pipeline drastically curtails the time and memory cost of model deployment due to its transferability and concise network architecture. In summary, our contributions are as follows: • We introduce the concept of conditional entropy into OOD detection for model transferability, and theoretically demonstrate the lower bound of negative-loglikelihoods in DGMs is determined by the conditional entropy between the model input and target output. • We propose a transferable OOD detection method (CETOOD), which captures the distribution discrepancy of conditional entropy of different ID datasets to achieve transferable OOD detection. • We demonstrate the effectiveness and lightweight of the proposed method through extensive comparisons with state-of-the-art techniques, across different datasets. Related Work Some Classifier-based approaches detect OOD samples by utilizing the statistical characteristic of class probabilities. Hendrycks et al. (Hendrycks and Gimpel 2017) propose maximum softmax probability as a baseline for OOD detection in deep neural network (DNN) algorithms, and ODIN (Liang, Li, and Srikant 2018) further enhance the performance by using temperature scaling and adding small perturbations on ID inputs. Since the distribution of OOD data is not available, some methods have explored using synthesized data from generative adversarial networks (GANs) (Lee et al. 2018a) or using unlabeled data (Hendrycks, Mazeika, and Dietterich 2019; Mohseni et al. 2020) as auxiliary OOD training data, which allows the model to be explicitly regularized by fine-tuning, producing lower confidence on anomalous examples. In addition to these softmax-classification-based frameworks, recently, researchers focus on the feature embedding of the model. With the observation that the unit activation patterns of a particular layer show a significant difference between ID and OOD data, Djurisic et al. (Djurisic et al. 2023) utilize feature transformation to generate the OOD score. Similarly, some methods exploit hyperspherical embeddings (Ming et al. 2023) or The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6262 cosine similarity (Nguyen et al. 2023) between features to promote strong ID-OOD separability. Despite the promising results, classification-based approaches show limitations on the non-labeled tasks. As an alternative, most DGM-based OOD detection methods separate the ID and OOD samples by exploiting the inductive bias of DGMs, including background statistics (Ren et al. 2019; Cai and Li 2023), inputs complexity (Serr`a et al. 2020) and low-level features (Sun et al. 2023). Xiao et al. (Xiao, Yan, and Amit 2020) propose the Likelihood Regret, which is a log-ratio between the likelihood of input obtained by posteriori distribution and approximated by VAE, to detect OOD samples. Serr`a et al. (Serr`a et al. 
2020) design a complexity estimate score and utilize the subtraction between negative log-likelihoods and the complexity estimate score to detect OOD inputs. Kirichenko et al. (Kirichenko, Izmailov, and Wilson 2020) prove through experiments that what DGMs learn from images is local pixel correlation and local geometric structure rather than semantic information. Therefore, Sun et al. (Sun et al. 2023) utilize sample repairing to encourage the generative model to focus on semantics instead of low-level features. A recent work (Zhang, Goldstein, and Ranganath 2021) has shown that for point-based OOD detection methods, a perfect model can perform worse than a falsely estimated one when the ID and OOD data overlap. Therefore, group-based OOD detection methods utilize the distribution characteristics of grouped inputs for OOD detection. Most group-based methods consider either the raw input or a certain representation of samples for OOD detection. Nalisnick et al. (Nalisnick et al. 2019) propose an explicit test for typicality employing a Monte Carlo estimate of the empirical entropy. As an alternative, data representations in the latent space can also be exploited for OOD detection. Zhang et al. (Zhang et al. 2020) find that the representations of inputs in DGMs can be approximated by a fitted Gaussian, and that the distance between the distribution of input representations and the prior of the ID dataset can be utilized to detect OOD samples. Jiang et al. (Jiang, Sun, and Yu 2022) propose to compare the training and test samples in the latent space of a flow model. However, these methods require retraining when encountering new ID datasets, which is computationally expensive and time-consuming. Motivation In this section, we demonstrate the relationship between the lower bound of the DGM's negative-log-likelihoods and the conditional entropy between the model input and the target output. We take the grayscale image as an example, which can be extended to RGB easily. Given an image pair (A, B), we can calculate the uncertainty of random variable B given random variable A, i.e., the conditional entropy H(B|A), as follows:
$$H(B|A) = -\sum_{i=0}^{N_B}\sum_{j=0}^{N_A} P(A_j, B_i)\,\log P(B_i|A_j) = -\sum_{i=0}^{N_B} P(B_i)\,\log P(B_i|A),$$
where A_j and B_i are the pixel values at locations j and i of images A and B, and N_A and N_B are the numbers of pixels in images A and B. Figure 3: The pipeline of the proposed CETOOD. For image generation, given the model input A, the target output B, and a pre-trained DGM (parameters: Z), the Maximum A Posteriori (MAP) estimate of generating the output can be obtained as follows:
$$\mathrm{MAP} = \arg\min_{Z}\; \mathrm{KL}\big(P(B|Z)P(Z)\,\|\,P(B|A)P(A)\big).$$
According to the information bottleneck theory (Tishby, Pereira, and Bialek 2000), the lower bound of the negative-log-likelihoods of the DGM can be formulated as follows:
$$L_{\text{lower bound}} = -\big(\log P(B|A) + \log P(A)\big) = -\Big(\sum_{i=0}^{N_B}\log P(B_i|A)\Big)\Big/N_B - \log P(A) = \underbrace{-\sum_{i=0}^{N_B} P(B_i)\,\log P(B_i|A)}_{H(B|A)} - \log P(A),$$
where P(B_i|A) is modeled by the pre-trained DGM. Therefore, given A as input, the conditional entropy between A and B determines the lower bound of the DGM's negative-log-likelihoods. Method The proposed framework consists of the image-erasing strategy, the UEN, and the OOD detection algorithm, as shown in Figure 3. Image-Erasing Strategy To create exclusive conditional entropy distributions for different datasets, we design an image-erasing strategy that divides the image into the erased patch and its surrounding information.
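For intuition about H(B|A), a toy empirical estimate over discretized, paired pixel values can be computed in a few lines. The estimator below is only an illustration of the definition above: it treats two equally long value sequences as paired draws from P(A, B), which is not how the real conditional entropy of Figure 2(c) is obtained, and the bin count and example data are arbitrary choices of mine.

```python
import numpy as np

def empirical_conditional_entropy(a: np.ndarray, b: np.ndarray, bins: int = 8) -> float:
    """Estimate H(B|A) in bits from paired samples (a_t, b_t) with values in [0, 1)."""
    a_idx = np.minimum((a.ravel() * bins).astype(int), bins - 1)
    b_idx = np.minimum((b.ravel() * bins).astype(int), bins - 1)
    joint = np.zeros((bins, bins))
    np.add.at(joint, (a_idx, b_idx), 1.0)                  # empirical counts of (A, B) pairs
    joint /= joint.sum()                                   # P(A, B)
    p_a = joint.sum(axis=1, keepdims=True)                 # P(A)
    cond = np.divide(joint, p_a, out=np.zeros_like(joint), where=p_a > 0)   # P(B | A)
    logs = np.where(cond > 0, np.log2(cond), 0.0)
    return float(-(joint * logs).sum())                    # -sum_{a,b} P(a, b) log2 P(b | a)

rng = np.random.default_rng(0)
smooth = np.repeat(rng.random(128), 8)     # piecewise-constant signal: the next value is usually identical
noise = rng.random(1024)                   # independent values: knowing A says little about B
print(empirical_conditional_entropy(smooth[:-1], smooth[1:]))   # low: B is largely predictable from A
print(empirical_conditional_entropy(noise[:-1], noise[1:]))     # near log2(8) = 3 bits: B is almost independent of A
```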
Conditional entropy is a measure of the information difference between the image’s erased patch and its surrounding. Due to semantic differences existing between different datasets, exclusive conditional entropy distributions can be generated for different datasets by erasing the most semantically meaningful regions. We empirically choose to erase the center of the input, but also propose other erasing strategies for comparison. The details of the image-erasing strategies are shown in Figure 4. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6263 Figure 4: The center (a), corner (b) and side (c) of the image is erased, which is indicated with white color. With the original image x, the model input (surrounding information) xr and output (erased patch) xf are generated as follows: xr = Mask(x), with xf = x −xr, (1) where Mask(x) indicates putting a mask on x. Uncertainty Estimation Network To capture the distribution discrepancy of conditional entropy for different datasets, we propose UEN, a concise auto-encoder, as shown in Figure 3. To estimate the MAP of generating the erased patch, UEN needs to calculate the probability of generating the target output directly based on the model input. We assume that the pixel values at each position of the image conform to a continuous distribution, and all parameters of the distribution depend on the model input. Inspired by PixelCNN++ (Salimans et al. 2017), we use the mixed logistic distribution as the above continuous distribution and name the feature space on which the parameters of the distribution depend as uncertainty estimation space, Z: Z ∼ K X k=1 πk e−(x−µk)/γk γk(1 + e−(x−µk)/γk)2 , (2) where K is the number of components in the mixed logistic distribution, πk is the weight of each component, µk and γk are the shape and position parameters of the logistic distribution, respectively (πk, µk and γk are learnable parameters, where k = {1, 2, . . . , K}). Given the erased patch, xf, the likelihood of each discretized pixel value can be directly calculated as follows: P(xf|Z) = K X k=1 πk[σ(xf + 1 255 −µk γk )−σ(xf − 1 255 −µk γk )], (3) where σ(·) is the sigmoid function, and we set K to 10 in this paper. For RGB images, we only allow linear dependence between three-channel pixel values. The encoder consists of three parallel multi-layer convolutional branches with different kernel sizes. Upsampling layers are designed to ensure the size of the feature map in the uncertainty estimation space is consistent with the input. The deep feature in uncertainty estimation space is further mapped into the image domain by a decoder. To ensure that no surrounding information is lost in the process of constructing the uncertainty estimation space, i.e, P(xf|Z) ≈P(xf|xr), we design the reconstruction loss, Lr, as follows: Lr = ∥xr −or∥2, (4) Algorithm 1: OOD Detection Algorithm Require: Z: pre-constructed uncertainty estimation space; X∗ = {x∗ 1, x∗ 2, . . . x∗ N}: all of ID samples; X = {x1, x2, . . . xm}: a batch of of test samples; Mask(): the function of erasing image patch; t: threshold. 1: i ←1 2: while i ≤N do 3: x∗ if = x∗ i −Mask(x∗ i ) 4: L∗[i] = Le(x∗ if|Z); i ←i + 1 5: end while 6: P(L∗) = KDE(L∗) 7: while testset ̸= ∅do 8: j ←1 9: while j ≤m do 10: xjf = xj −Mask(xj) 11: L[j] = Le(xjf|Z); j ←j + 1 12: end while 13: P(L) = KDE(L) 14: k = KL(P(L)∥P(L∗)) 15: if k > t then 16: return X is out-of-distribution data. 17: else 18: return X is in-distribution data. 
19: end if 20: reload X 21: end while where or = Mask(o) is the masked output and o is the model output. To highlight the distribution discrepancy, the generation loss, Le, which measures the posterior probability of generating the erased patch, is presented as: Le = −log2[P(xf|Z)] = − N X i=1 log2[P(xfi|Z)]/Nf, (5) where i is the pixel location, xfi is the pixel value at location i and Nf is the number of pixels in the erased patch, xf. Le encourages UEN to narrow the log-likelihood distribution gap between the samples that contain similar semantic discrepancies. The final loss function is as follows: Ltotal = λLr + (1 −λ)Le, (6) where λ is used to balance the effect between Lr and Le. OOD Detection Algorithm Algorithm 1 shows the proposed OOD detection using the pre-trained uncertainty estimation network. Given all ID samples X∗= {x∗ 1, x∗ 2, . . . x∗ N} and image-erasing strategy Mask(), we first utilize Kernel Density Estimation (KDE) to obtain the distribution of log-likelihood for ID dataset. Then, in the same way, given a set of test samples X = {x1, x2, . . . , xn}, we estimate the distribution of log-likelihood on the test group. Finally, we measure the estimated total correlation between the test group and the ID samples by using KL-divergence, and determine the test group as the OOD data if there exists a significant distribution discrepancy. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6264 Methods DOCR-TC-M Ty-test RF-GM Ours (Retraining) (Required) (Required) (Required) (Not required) GS ID OOD AUROC AUPR AUROC AUPR AUROC AUPR AUROC AUPR 5 MNIST F-MNIST 99.2 97.2 F-MNIST MNIST 100.0 100.0 95.5 92.1 99.0 99.0 93.8 89.5 SVHN CIFAR10 99.7 99.7 100.0 100.0 89.0 93.0 99.2 97.5 CelebA 100.0 100.0 100.0 100.0 92.0 94.0 98.9 96.0 CelebA CIFAR10 91.6 91.9 5.7 31.2 92.0 93.0 91.2 86.8 SVHN 100.0 100.0 83.1 80.1 97.0 96.0 91.2 87.2 CIFAR10 SVHN 99.0 99.6 98.6 99.3 88.0 83.0 99.7 85.6 CelebA 100.0 100.0 100.0 100.0 76.0 77.0 98.6 92.6 10 MNIST F-MNIST 100.0 100.0 F-MNIST MNIST 100.0 100.0 99.4 99.3 99.0 99.0 99.1 96.3 SVHN CIFAR10 100.0 100.0 100.0 100.0 95.0 98.9 100.0 100.0 CelebA 100.0 100.0 100.0 100.0 98.0 99.0 100.0 100.0 CelebA CIFAR10 99.2 99.3 0.9 30.7 98.0 99.0 99.4 93.9 SVHN 100.0 100.0 91.6 90.5 100.0 100.0 99.0 94.6 CIFAR10 SVHN 100.0 100.0 99.9 100.0 99.0 98.0 99.3 99.7 CelebA 100.0 100.0 100.0 100.0 89.0 90.0 99.9 99.9 Table 1: The OOD detection results with different group size (GS) on five different datasets. Unlike other methods, our transferable method does not require retraining on the ID dataset. Experiments Implementation Details All three parallel encoder branches consist of multiple convolution and upsampling layers with different kernel sizes (3×3, 5×5 and 7×7). A shared convolutional layer with a kernel size of 1×1 is utilized to transform the features from 3 parallel encoder branches into uncertainty estimation space. The decoder consists of two convolutional layers with kernel size of 3×3. We set the batch size and learning rate to 64 and 10−5, respectively. λ is empirically set to 0.8. We trained the network for 250 epochs, taking about 48.29 hours. We conduct all experiments on a single NVIDIA GPU 3080 that follows the experimental setup of the baseline methods. Experimental Setting Datasets We train our model on ImageNet32 (Deng et al. 2009) and validate our model on different ID datasets, including MNIST (LeCun et al. 1998), FashionMNIST (Xiao, Rasul, and Vollgraf 2017), SVHN (Netzer et al. 2011), CelebA (Liu et al. 
2015) and CIFAR10 (Krizhevsky and Hinton 2009). All the inputs are resized to 32×32 to fit the input size of UEN. We transform the grayscale image into an RGB image by replicating the channel. Metrics We use threshold-independent metrics: the area under the receiver operating characteristic curve (AUROC) (Davis and Goadrich 2006) and the area under the precision-recall curve (AUPR) to evaluate our method. We consider OOD data and ID data as positive and negative ones for detection, respectively. Unless noted otherwise, we calculate the False Positive Rate (FPR) of the detector when the threshold is set at 95% TPR. We randomly select 10k samples from the test set of the target dataset. We generate test sample groups according to group size gs. For the fair comparison, we generate the test set 2 times and test groups 5 times then report the averaged result. OOD Detection To evaluate the robustness of our method, we utilize five different datasets as ID datasets and test each of them on one (MNIST or FashionMNIST) or three (SVHN, CelebA and CIFAR10) different disjoint OOD datasets. The obtained performance for OOD detection and comparison with three baselines including the Ty-test (Nalisnick et al. 2019), DOCR-TC-M (Zhang et al. 2020) and RF-GM (Jiang, Sun, and Yu 2022) are shown in Table 1. We utilize the three methods as our baselines as they outperform other existing group-based OOD detection methods. As shown in Table 1, our method can achieve competitive performance compared with the SOTA methods. Our method achieves higher AUROC compared to RF-GM across various detection scenarios, especially, our method outperforms RF-GM 22.6% AUROC when detecting CelebA from CIFAR10 with 5 as group size. Likewise, our method shows 0.7% higher AUROC compared to DOCR-TCM when detecting SVHN from CIFAR10 with 5 as group size. Notably, compared to the baseline methods, our framework does not require retraining when deployed on new ID datasets. Deployment Cost Analysis In order to comprehensively analyze the performance of our model, we compare the time (training time) and space complexity (memory cost of network parameters) of our approach with that of the baseline methods. Due to both DOCR-TC-M and RF-GM are based on the flow model, we choose the DOCR-TC-M with better performance as a baseline. The experiment settings for DOCR-TC-M (Zhang et al. 2020) and Ty-test (Nalisnick et al. 2019) are consistent with the original papers. The training time and memory cost comparison are shown in Figure 5. Our method does not require retraining and only needs to calculate the DGM’s likelihood distribution of the new ID dataset in the testing stage. 
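Since deployment reduces to estimating the likelihood distribution of the new ID data at test time, the group-level decision of Algorithm 1 can be sketched compactly. In the sketch below, the per-sample negative log-likelihoods are stand-in random numbers rather than outputs of a trained UEN, scipy's gaussian_kde plays the role of the KDE step, and the KL divergence is approximated on a fixed grid; this is one plausible reading of the algorithm, not the authors' implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def group_is_ood(id_nll: np.ndarray, test_nll: np.ndarray, threshold: float = 1.0) -> bool:
    """Group-level decision of Algorithm 1: compare KDE-smoothed NLL distributions via KL divergence."""
    grid = np.linspace(min(id_nll.min(), test_nll.min()) - 1.0,
                       max(id_nll.max(), test_nll.max()) + 1.0, 512)
    p_id = gaussian_kde(id_nll)(grid) + 1e-12      # reference distribution over all ID samples
    p_test = gaussian_kde(test_nll)(grid) + 1e-12  # distribution of the current test group
    p_id, p_test = p_id / p_id.sum(), p_test / p_test.sum()
    kl = float(np.sum(p_test * np.log(p_test / p_id)))
    return kl > threshold

# Stand-in NLLs: an in-distribution group overlaps the ID reference, an OOD group is shifted.
rng = np.random.default_rng(0)
id_ref = rng.normal(3.0, 0.3, size=10_000)
print(group_is_ood(id_ref, rng.normal(3.0, 0.3, size=10)))   # False: the group's NLLs match the ID reference
print(group_is_ood(id_ref, rng.normal(5.0, 0.3, size=10)))   # True: the shifted group is flagged as OOD
```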
Therefore, the time cost of model deployment can be greatly reThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6265 ID OOD (a) Group size (b) Erasing strategy (c) Erasing strategy (H) 20 50 100 corner side center corner(H) side(H) center(H) MNIST F-MNIST 100.0 100.0 100.0 95.2 99.7 100.0 99.7 99.9 99.9 F-MNIST MNIST 99.9 100.0 100.0 91.4 97.2 99.1 94.2 97.7 99.6 SVHN CelebA 100.0 100.0 100.0 100.0 100.0 100.0 99.7 99.7 99.9 CIFAR10 100.0 100.0 100.0 99.9 100.0 100.0 99.5 99.1 99.9 CelebA SVHN 99.9 100.0 100.0 89.4 90.1 99.0 100.0 100.0 100.0 CIFAR10 99.8 100.0 100.0 64.8 63.7 99.4 56.3 54.3 86.3 CIFAR10 SVHN 99.6 99.9 100.0 90.3 95.2 99.3 100.0 100.0 100.0 CelebA 100.0 100.0 100.0 59.3 58.7 99.9 51.2 51.0 89.6 ID OOD (d) Loss function (e) Training set Le Lr Ltotal MNIST F-MNIST SVHN CelebA CIFAR10 ImageNet MNIST F-MNIST 99.7 99.9 100.0 100.0 99.2 99.9 100.0 99.8 100.0 F-MNIST MNIST 93.3 98.4 99.1 99.7 97.9 97.7 98.5 98.1 99.1 SVHN CelebA 61.4 77.6 100.0 61.6 74.6 100.0 49.1 100.0 100.0 CIFAR10 62.0 76.1 100.0 49.2 90.2 99.9 99.9 78.0 100.0 CelebA SVHN 64.1 82.0 99.0 88.6 71.8 97.3 58.5 98.4 99.0 CIFAR10 70.9 79.5 99.4 74.7 65.2 77.5 90.5 97.8 99.4 CIFAR10 SVHN 62.8 78.1 99.3 75.0 88.9 96.9 99.0 68.4 99.3 CelebA 68.1 77.5 99.9 46.4 65.1 63.2 68.4 99.6 99.9 Table 2: Model performance with different hyperparameters and training variations (group size for b-e is 10). Figure 5: The time (t, hours) and space (m, FLOPs) complexity comparison between our method and the baseline approaches. duced. Our model needs 48.3 hours to be pre-trained on ImageNet, which is still less than the time cost of training the baseline methods on all ID datasets. In addition, the space complexity comparison in Figure 5 shows that the memory cost of our model is significantly lower than the baseline methods. Ablation Study Effect of group size Table 2(a) reports the model performance with different group sizes. Experiment results demonstrate that the group size only has a slight impact on model performance and it is sufficient to ensure the performance of the model with the group size higher than 5. Effect of image-erasing strategy To analyze the effect of the image-erasing strategy, we use three image-erasing strategies to train the model. Note that the same imageerasing strategy is applied to both training the model and OOD detection. For the image-erasing strategy with differFigure 6: Top: Different image-erasing strategies. Bottom: The averaged likelihood heatmaps of all CIFAR10 test samples generated by our model with image-erasing strategies. ent variations, we calculate the average of all the variations. We tabulate the model performance in Table 2(b), denoted as corner, side and center. The experimental results indicate that the center strategy can significantly improve the detection performance in some scenarios (CelebA versus CIFAR10 and CIFAR10 versus CelebA), but only slight performance improvement is observed in other scenarios. To explore the reasons for poor robustness in performance improvement, we feed the real conditional entropy under different image-erasing strategies into Algorithm 1 and calculate the OOD detection results, as shown in Table 2(c). The experimental results show that in the scenarios with slight performance improvement, the corner and side strategies can create exclusive conditional entropy distributions for different datasets. 
The above experimental results support hypothesis 2), i.e., the detection performance of the model depends on the conditional entropy distribution discrepancy between different datasets. In addition, as the Le encourages UEN to narrow the loglikelihood distribution gap between the samples with similar semantic discrepancies, the detection performance of the model should be superior to the detection results based on real conditional entropy. However, in some experimental settings, the experimental results do not match expectations. For example, when detecting SVHN from CIFAR10, the performance of the model decreases compared to the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6266 detection results based on real conditional entropy, and the performance degradation caused by different image-erasing strategies varies significantly. To investigate the impact of the image-erasing strategy on the conditional entropy capturing, we presented the averaged likelihood heatmaps of all samples in the CIFAR10 dataset under different imageerasing strategies, as shown in Figure 6. The expected experimental results should be like Figure 6(a), the blue region in the bottom heatmap is aligned with the white region in the top sketch map, which indicates the information discrepancy between the erased patch and its surrounding can be captured by the proposed model effectively. However, in the corner (Figure 6(b)) and side (Figure 6(c)) strategies, the blue regions in the heatmaps are much smaller than their corresponding white regions. The results demonstrate that the model with these two erasing strategies could generate partial information about the target output. In other words, the model’s ability to capture conditional entropy is affected by the value of conditional entropy, the larger the better. Effect of loss functions We show the quantitative comparison results of different model objectives in Table 2(d). We also show the feature visualization of samples when the model is trained with different model objectives in Figure 7. Experimental results in Table 2(d) show that the performance of Lr consistently outperforms Le across different datasets, which demonstrates ensure P(xf|Z) ≈P(xf|xr) plays a major role in capturing the inter-dataset distribution discrepancy (the conclusion is consistent with the results in Figure 7 (b) and (c)). In addition, the comparison between Figure 7 (c) and (d) shows that Le reduces the distribution variance of each dataset, thus further increasing the distribution discrepancy. The results demonstrate that Lr ensures the model captures the condition (surrounding information) of the conditional entropy, while Le encourages the model to capture the content (erased patches), and the complementarity between them helps to accurately capture the conditional entropy. Effect of training set To analyze the effect of the training set, we train our model using training sets of 6 datasets, including MNIST, F-MNIST, SVHN, CelebA, CIFAR10 and ImageNet. As shown in Table 2(e), the results show that the model performance improves with the increases in training data’s complexity and achieves optimal performance on ImageNet. The experimental results support hypothesis 1), i.e., the DGMs learn low-level features rather than semantic information. As the same image-erasing strategy is used among 6 experiment settings, the experimental results also demonstrate that conditional entropy capturing is affected by the complexity of training data. 
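For reference alongside the loss-function ablation in Table 2(d), the two objectives of Eqs. (4)-(6) can be written down compactly. The PyTorch sketch below is a simplified reading: the explicit mask argument, the mean-squared form of the reconstruction term, the random stand-in parameters, and the omission of PixelCNN++-style edge handling for the boundary bins are simplifications of mine, not the released implementation.

```python
import torch
import torch.nn.functional as F

def discretized_logistic_nll(x_f, pi_logits, mu, log_gamma, bin_size=1.0 / 255):
    """Generation loss L_e: average -log2 P(x_f | Z) under a K-component discretized
    logistic mixture (cf. Eqs. (3) and (5)); x_f has shape (..., 1), the parameters (..., K)."""
    gamma = log_gamma.exp()
    upper = torch.sigmoid((x_f + bin_size - mu) / gamma)
    lower = torch.sigmoid((x_f - bin_size - mu) / gamma)
    probs = ((upper - lower).clamp(min=1e-12) * F.softmax(pi_logits, dim=-1)).sum(dim=-1)
    return -torch.log2(probs).mean()                     # mean over the N_f erased pixels

def total_loss(x_r, o, mask, x_f, pi_logits, mu, log_gamma, lam=0.8):
    """L_total = lam * L_r + (1 - lam) * L_e, with lam = 0.8 as in the implementation details."""
    l_r = ((x_r - o * mask) ** 2).mean()                 # reconstruction of the surrounding, o_r = Mask(o)
    l_e = discretized_logistic_nll(x_f, pi_logits, mu, log_gamma)
    return lam * l_r + (1.0 - lam) * l_e

# Toy shapes: two 32x32 grayscale images, a 16x16 center patch, K = 10 mixture components.
x = torch.rand(2, 1, 32, 32)
mask = torch.ones_like(x)
mask[..., 8:24, 8:24] = 0
o = torch.rand_like(x)                                   # stand-in for the UEN reconstruction output
x_f = x[..., 8:24, 8:24].unsqueeze(-1)                   # erased patch, shape (2, 1, 16, 16, 1)
pi_logits, mu, log_gamma = (torch.randn(2, 1, 16, 16, 10) for _ in range(3))
print(total_loss(x * mask, o, mask, x_f, pi_logits, mu, log_gamma))
```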
Highly complex training data helps the model better capture the conditional entropy of generating the erased patch from its surrounding. Limitation To further explore the potential of CETOOD, we utilize the model that is pre-trained on ImageNet to distinguish CIFAR100 (Krizhevsky and Hinton 2009) and CIFAR10. We also feed the real conditional entropy into Algorithm 1 for OOD detection. The center image-erasing strategy is used in both experiments. The reason for poor model Figure 7: Feature visualization of 1000 samples from CIFAR10, extracted from the uncertainty estimation space (Z). The model is trained with Le (b), Lr (c) and Ltotal (d), with center image-erasing. The control experiment is training the model with Ltotal, without image-erasing. ID OOD Ours Ours (H) AUROC AUPR AUROC AUPR C10 C100 64.1 52.3 50.1 49.4 C100 C10 62.2 53.4 48.4 49.8 Table 3: The OOD detection results on CIFAR10 (C10) and CIFAR100 (C100), the group size is 10. performance in Table 3 is that the current image-erasing strategy cannot create exclusive conditional entropy distribution for CIFAR10 and CIFAR100. The performance improvement compared with the detection results of real conditional entropy proves that our model has the ability to capture conditional entropy. An adaptive image-erasing strategy can be further investigated to address the limitation. Conclusion We proposed a method to perform transferable OOD detection by leveraging the concept of conditional entropy to OOD detection. We first validated two hypotheses: The DGMs are prone to learn low-level features rather than semantic information. In the DGMs, the lower bound of negative-log-likelihoods is determined by the conditional entropy between the model input and target output. Based on these hypotheses, we presented an image-erasing strategy and UEN to assign and capture the conditional entropy distribution discrepancy between different ID datasets. Our model, trained on a complex dataset, becomes transferable to other ID datasets. Experimental results on the five datasets show that our method, without retraining, achieves comparable performance with the SOTA group-based OOD detection methods that require retraining on the ID datasets. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6267 References BBC. 2020. AI “Outperforms” Doctors Diagnosing Breast Cancer. https://www.bbc.com/news/health-50857759. Accessed: 2020-01-02. Bevandic, P.; Kreso, I.; Orsic, M.; and Segvic, S. 2018. Discriminative out-of-distribution detection for semantic segmentation. CoRR. Cai, M.; and Li, Y. 2023. Out-of-distribution Detection via Frequency-regularized Generative Models. In IEEE Winter Conference on Applications of Computer Vision (WACV). Casas, S.; Sadat, A.; and Urtasun, R. 2021. MP3: A Unified Model To Map, Perceive, Predict and Plan. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Davis, J.; and Goadrich, M. 2006. The relationship between Precision-Recall and ROC curves. In International Conference on Machine Learning (ICML). Deng, J.; Dong, W.; Socher, R.; Li, L.; Li, K.; and Fei-Fei, L. 2009. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Djurisic, A.; Bozanic, N.; Ashok, A.; and Liu, R. 2023. Extremely Simple Activation Shaping for Out-of-Distribution Detection. In International Conference on Learning Representations (ICLR). Hendrycks, D.; and Gimpel, K. 2017. 
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks. In International Conference on Learning Representations (ICLR). Hendrycks, D.; Mazeika, M.; and Dietterich, T. G. 2019. Deep Anomaly Detection with Outlier Exposure. In International Conference on Learning Representations (ICLR). Hsu, Y.; Shen, Y.; Jin, H.; and Kira, Z. 2020. Generalized ODIN: Detecting Out-of-Distribution Image Without Learning From Out-of-Distribution Data. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Huang, H.; Li, Z.; Wang, L.; Chen, S.; Zhou, X.; and Dong, B. 2021. Feature Space Singularity for Out-of-Distribution Detection. In Proceedings of the Workshop on Artificial Intelligence. Huang, R.; and Li, Y. 2021. MOS: Towards Scaling Out-ofDistribution Detection for Large Semantic Space. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Jiang, D.; Sun, S.; and Yu, Y. 2022. Revisiting flow generative models for Out-of-distribution detection. In International Conference on Learning Representations (ICLR). Kim, E.; Kim, S.; Seo, M.; and Yoon, S. 2021. XProtoNet: Diagnosis in Chest Radiography With Global and Local Explanations. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Kirichenko, P.; Izmailov, P.; and Wilson, A. G. 2020. Why Normalizing Flows Fail to Detect Out-of-Distribution Data. In Advances in Neural Information Processing Systems (NeurIPS). Krizhevsky, A.; and Hinton, G. 2009. Learning multiple layers of features from tiny images. Handbook of Systemic Autoimmune Diseases. Lakshminarayanan, B.; Pritzel, A.; and Blundell, C. 2017. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. In Advances in Neural Information Processing Systems (NeurIPS). LeCun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE. Lee, K.; Lee, H.; Lee, K.; and Shin, J. 2018a. Training Confidence-calibrated Classifiers for Detecting Outof-Distribution Samples. In International Conference on Learning Representations (ICLR). Lee, K.; Lee, K.; Lee, H.; and Shin, J. 2018b. A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks. In Advances in Neural Information Processing Systems (NeurIPS). Liang, S.; Li, Y.; and Srikant, R. 2018. Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks. In International Conference on Learning Representations (ICLR). Liu, Z.; Luo, P.; Wang, X.; and Tang, X. 2015. Deep Learning Face Attributes in the Wild. In IEEE International Conference on Computer Vision (ICCV). Ming, Y.; Sun, Y.; Dia, O.; and Li, Y. 2023. How to Exploit Hyperspherical Embeddings for Out-of-Distribution Detection? In International Conference on Learning Representations (ICLR). Mohseni, S.; Pitale, M.; Yadawa, J. B. S.; and Wang, Z. 2020. Self-Supervised Learning for Generalizable Out-ofDistribution Detection. In AAAI Conference on Artificial Intelligence (AAAI). Nalisnick, E. T.; Matsukawa, A.; Teh, Y. W.; and Lakshminarayanan, B. 2019. Detecting Out-of-Distribution Inputs to Deep Generative Models Using a Test for Typicality. CoRR. Netzer, Y.; Wang, T.; Coates, A.; Bissacco, A.; Wu, B.; and Ng, A. Y. 2011. Reading Digits in Natural Images with Unsupervised Feature Learning. In Workshop on Deep Learning and Unsupervised Feature Learning (NeurIPS workshop). Nguyen, A. M.; Yosinski, J.; and Clune, J. 2015. 
Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Nguyen, H. N.; Hung-Quang, N.; Ta, T.; Nguyen-Tang, T.; Doan, K. D.; and Thanh-Tung, H. 2023. A Cosine Similarity-based Method for Out-of-Distribution Detection. CoRR. Ren, J.; Liu, P. J.; Fertig, E.; Snoek, J.; Poplin, R.; DePristo, M. A.; Dillon, J. V.; and Lakshminarayanan, B. 2019. Likelihood Ratios for Out-of-Distribution Detection. In Advances in Neural Information Processing Systems (NeurIPS). Salimans, T.; Karpathy, A.; Chen, X.; and Kingma, D. P. 2017. PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other ModificaThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6268 tions. In International Conference on Learning Representations (ICLR). Sensoy, M.; Kaplan, L. M.; and Kandemir, M. 2018. Evidential Deep Learning to Quantify Classification Uncertainty. In Advances in Neural Information Processing Systems (NeurIPS). Serr`a, J.; ´Alvarez, D.; G´omez, V.; Slizovskaia, O.; N´u˜nez, J. F.; and Luque, J. 2020. Input Complexity and Outof-distribution Detection with Likelihood-based Generative Models. In International Conference on Learning Representations (ICLR). Shekhovtsov, A.; and Flach, B. 2019. Feed-forward Propagation in Probabilistic Neural Networks with Categorical and Max Layers. In International Conference on Learning Representations (ICLR). Sun, R.; Zhang, A.; Zhang, H.; Zhu, Y.; Zhang, R.; and Li, Z. 2023. SR-OOD: Out-of-Distribution Detection via Sample Repairing. CoRR. Times, T. N. Y. 2018. After Fatal Uber Crash, a Self-Driving Start-Up Moves Forward. https://www.nytimes.com/2018/ 05/07/technology/uber-crash-autonomous-driveai.html. Accessed: 2018-05-07. Tishby, N.; Pereira, F. C. N.; and Bialek, W. 2000. The information bottleneck method. CoRR. Wang, H.; Shi, X.; and Yeung, D. 2016. Natural-Parameter Networks: A Class of Probabilistic Neural Networks. In Lee, D. D.; Sugiyama, M.; von Luxburg, U.; Guyon, I.; and Garnett, R., eds., Advances in Neural Information Processing Systems (NeurIPS). Xiao, H.; Rasul, K.; and Vollgraf, R. 2017. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. CoRR. Xiao, Z.; Yan, Q.; and Amit, Y. 2020. Likelihood Regret: An Out-of-Distribution Detection Score For Variational Autoencoder. In Advances in Neural Information Processing Systems (NeurIPS). Zaeemzadeh, A.; Bisagno, N.; Sambugaro, Z.; Conci, N.; Rahnavard, N.; and Shah, M. 2021. Out-of-Distribution Detection Using Union of 1-Dimensional Subspaces. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Zhang, L. H.; Goldstein, M.; and Ranganath, R. 2021. Understanding Failures in Out-of-Distribution Detection with Deep Generative Models. In International Conference on Machine Learning (ICML). Zhang, Y.; Liu, W.; Chen, Z.; Wang, J.; Liu, Z.; Li, K.; Wei, H.; and Chen, Z. 2020. Out-of-Distribution Detection with Distance Guarantee in Deep Generative Models. CoRR. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6269 | 2024 | 696 |
18,513 | Unsupervised Action Segmentation via Fast Learning of Semantically Consistent Actoms Zheng Xing1, Weibing Zhao2* 1 Future Network of Intelligence Institute, School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China 2 Guangdong Laboratory of Machine Perception and Intelligent Computing, Shenzhen MSU-BIT University, China {zhengxing, weibingzhao}@link.cuhk.edu.cn Abstract Action segmentation serves as a pivotal component in comprehending videos, encompassing the learning of a sequence of semantically consistent action units known as actoms. Conventional methodologies tend to require a significant consumption of time for both training and learning phases. This paper introduces an innovative unsupervised framework for action segmentation in video, characterized by its fast learning capability and absence of mandatory training. The core idea involves splitting the video into distinct actoms, which are then merging together based on shared actions. The key challenge here is to prevent the inadvertent creation of singular actoms that attempt to represent multiple actions during the splitting phase. Additionally, it is crucial to avoid situations where actoms associated with the same action are incorrectly grouped into multiple clusters during the merging phase. In this paper, we present a method for calculating the similarity between adjacent frames under a subspace assumption. Then, we employ a local minimum searching procedure, which effectively splits the video into coherent actoms aligned with their semantic meaning and provides us an action segmentation proposal. Subsequently, we calculate a spatio-temporal similarity between actoms, followed by developing a merging process to merge actoms representing identical actions within the action segmentation proposals. Our approach is evaluated on four benchmark datasets, and the results demonstrate that our method achieves state-of-theart performance. Besides, our method also achieves the optimal balance between accuracy and learning time when compared to existing unsupervised techniques. Code is available at https://github.com/y66y/SaM. Introduction Large volumes of videos are uploaded to both cloud and edge storage every day, leading to a significant demand for rapid video analysis. Efficient video comprehension plays a pivotal role in real-world applications, such as video retrieval, surveillance analysis (Vishwakarma and Agrawal 2013), robot perception (Qi et al. 2019; ?, 2021; Sun et al. 2023, 2022b,c,a), indoor localization (Wang et al. 2021, 2020a; Liu, Wang, and Luo 2020; Luo, Zhang, and Wang 2020). In recent years, a considerable focus within the *Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. GT Ours Take cup Spoon powder Pour milk Stir milk Figure 1: Action segmentation output example from Breakfast Dataset (Kuehne, Arslan, and Serre 2014): P46 webcam02 P46 milk. Colors indicate different actions in chronological order: take cup, spoon powder, pour milk, stir milk. The background is shown in white color. field of video comprehension has been devoted to action segmentation in videos (Wang et al. 2023; Sheng and Li 2023; van Amsterdam et al. 2023). The objective of action segmentation involves categorizing concise, pre-edited segments that characterize individual actions. 
Despite significant progress in supervised action segmentation techniques, driven by the emergence of deep neural networks and extensive datasets, models based on fully supervised learning still require laborious manual data annotation. This process is time-consuming, costly, and susceptible to errors. Consequently, unsupervised action segmentation has emerged as an alternative strategy to tackle this challenge. Action segmentation involves assigning action labels to individual frames within a video sequence, typically depicting a person engaging in a series of actions as part of a higher-level activity. An illustrative example of breakfast preparation is depicted in Fig. 1. Compared to recognizing activities in videos, action segmentation introduces more formidable challenges due to the presence of extraneous background frames. A significant obstacle arises from the necessity for a substantially larger number of annotations to effectively guide learning-based methodologies, which has resulted in the popularity of weakly supervised and unsupervised approaches for action segmentation (Wang et al. 2023; de AP Marques et al. 2022; Sheng and Li 2023; Li, He, and Xu 2022). Some techniques utilize textual information extracted from accompanying audio to assign action labels at the frame level for training action segmentation models, as introduced in (Alayrac et al. 2016). However, this approach relies on the assumption of well-synchronized The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6270 audio and video frames. Alternatively, other methodologies presuppose some prior knowledge of actions, such as the high-level activity labels or lists of depicted actions in each video (Souri et al. 2021). However, even this level of annotation demands substantial annotation effort for each training video, due to the variability of constituent actions across diverse activities. Regardless of the level of prior knowledge, the majority of weakly- and unsupervised methods, concentrate on obtaining pseudo-labels, which subsequently supervise the training of task-specific feature embeddings. However, the acquired pseudo-labels are inherent noisy, which may potentially impede the effectiveness of the learned embeddings This paper introduces an innovative unsupervised action segmentation framework comprising two distinct phases: splitting and merging. Our approach is grounded in two fundamental insights. Firstly, we assume that high-dimensional frames within action videos reside in distinct subspaces, each corresponding to a specific action. Secondly, humans typically perceive frame segments as manifestations of individual actions. Building upon this comprehension, the identification of actoms arises as an effective and efficient approach for segmenting actions within lengthy, untrimmed videos. Leveraging these insights, our algorithm begins by dividing the video into multiple actoms, where each actom encapsulates frames representing a particular action. Nonetheless, considering the potential recurrence of the same action multiple times within a single video (for instance, a dance video featuring actions A and B followed by another instance of action A), there is a necessity to merge actoms associated with identical actions. The primary challenge lies in effectively preventing a single actom from erroneously encapsulating multiple actions during the splitting phase. 
It is also equally important to avoid the situation where actoms linked with the same action are incorrectly grouped into separate actions during the subsequent merging process. Specifically, our objective is to ensure that each distinct actom obtained through the video-splitting process accurately encapsulates only one specific action. Given the intricate dynamics of motion backgrounds and the inherent variations in action execution, the task of splitting videos containing non-clustered frames into discrete actoms poses a significant challenge. Furthermore, during the actom merging phase, it is challenging to prevent the fusion of two actoms representing different actions. Our strategy relies on capturing the intricate relationship between these actoms with precision. However, accurately evaluating the degree of similarity between these two actoms is a complex task. In this paper, we draw inspiration from the Canny detector (Canny 1986), commonly used in image processing, and the segmentation method (Keogh et al. 2004), applied in time series analysis. Our initial effort involves identifying distinctive features that exhibit coherence within a specific action context while manifesting variability when compared against different actions. However, it’s essential to acknowledge that challenges like occlusion, shifts in viewpoints, or fluctuations in lighting can result in temporal features derived from actions lacking strict uniformity. Diverging from the methodology of the Canny detector, which identifies intensity gradients within images, we obtain an understanding of actions within videos by learning the actom in the video. Specifically, we identify potential boundaries of actoms through a comprehensive evaluation of the subspacebased similarity between consecutive frames. One of the previously mentioned challenges is to prevent a single actom from erroneously encapsulating multiple actions during the splitting phase. To tackle this challenge, we propose to utilize the minimum value selected from localized temporal windows on the similarity curve to establish the boundaries of actoms. These identified boundaries will then serve as guides for segmenting the video into distinct, semantically consistent actoms. During the actom merging phase, a challenge arises in ensuring that actoms associated with the same action are not incorrectly separated into distinct actions. Accurately quantifying the degree of similarity between two such actoms is crucial. Therefore, we introduce a novel spatio-temporal similarity measure between actoms, considering both their temporal separations and appearance feature distances and facilitating the fast amalgamation of actoms into cohesive actions. Our work contains the following main contributions: • We introduce a novel unsupervised learning framework for action segmentation, consisting of two essential components: a splitting procedure and a merging procedure. The splitting procedure ensures the precise division of the video into distinct actoms, while the merging procedure guarantees the aggregation of actoms that represent the same action into coherent clusters. • First, compared to traditional unsupervised methodologies that heavily rely on pseudo-labels for supervised training, our approach distinguishes itself by completely bypassing the necessity for any form of training, thus possessing the advantage of speed. Second, our method adeptly leverages the semantically consistent attributes of the temporal frames within action videos. 
Specifically, we employ a splitting procedure to partition the video into concise actoms. This process ensures that each actom comprises frames with similar semantic characteristics. Under the premise that actoms in close temporal proximity are more likely to exhibit similar semantic traits, we propose merging these actoms based on an effective spatio-temporal similarity measure between them. • Our proposed method skillfully achieves a balanced trade-off between model accuracy and learning speed, outperforming weakly supervised and unsupervised action segmentation techniques across four benchmark datasets. Remarkably, our method demonstrates comparable performance even when compared to supervised methods. Related Work Action Segmentation in videos has attracted significant research interest, as evidenced by the considerable volume of related studies (Bueno-Benito, Vecino, and Dimiccoli The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6271 2023; van Amsterdam et al. 2023). In this section, our attention is directed towards a comprehensive review of existing methodologies relevant to the challenge of action segmentation. We particularly emphasize approaches about weaklysupervised and unsupervised paradigms. Existing methods for action segmentation can be broadly categorized into three groups: fully supervised (Liu et al. 2022; van Amsterdam et al. 2023; Lim et al. 2023), weakly supervised (Souri et al. 2021; Sheng and Li 2023; Luo et al. 2022; Fayyaz and Gall 2020), and unsupervised (Kukleva et al. 2019; Bueno-Benito, Vecino, and Dimiccoli 2023; Wang et al. 2023). They differ in whether the annotations are collected by human annotators or extracted in a semior unsupervised manner. These models typically follow a paradigm where an embedding is trained on top of preextracted frame-level video features, as seen in the works of Sheng et al. (Sheng and Li 2023), Sener et al. (Sener and Yao 2020), and Richard et al. (Richard et al. 2018), or hand-crafted video features, as demonstrated by Ding et al. (Ding and Xu 2018) and Kukleva et al. (Kukleva et al. 2019). The training process of the embedding layer involves the use of a discriminative objective function in conjunction with available annotations (Li, Lei, and Todorovic 2019; Sheng and Li 2023; Richard et al. 2018; Souri et al. 2021). In the subsequent sections, we explore the specifics of weakly supervised and unsupervised techniques, which differ significantly in how they extract and exploit pseudo-labels. Weakly-supervised approaches often assume the availability of both the activity label at the video level and action ordering, referred to as transcripts, during the training phase. Many methods follow a two-step procedure: initially generating pseudo-labels using transcripts and subsequently training a frame classification network with these inferred labels (Sheng and Li 2023; Luo et al. 2022). In contrast, NN-Vit (Richard et al. 2018) directly utilizes transcripts to train a frame classification model. To enforce consistency between frame-level label predictions, they introduce a loss based on Viterbi decoding. In a similar vein, MuCoN (Souri et al. 2021) aims to leverage transcripts in learning a frame classification model. They employ two network branches, with only one having access to transcripts, ensuring mutual consistency between both branches. Another recent method, CDFL (Li, Lei, and Todorovic 2019), also seeks to utilize transcripts in training its frame labeling model. 
Initially, they construct a fully-connected, directed segmentation graph, where paths represent actions. Training the model involves maximizing the energy difference between valid paths (i.e., paths consistent with the ground-truth transcript) and invalid ones. In SCT (Fayyaz and Gall 2020), the authors assume knowledge of the set of action labels for a given video, but without their order. They determine the ordering and temporal boundaries of actions by alternately optimizing set and frame classification objectives. This ensures that frame-level action predictions align with set-level predictions. Unsupervised approaches typically rely solely on knowledge of the video-level activity label (Bueno-Benito, Vecino, and Dimiccoli 2023; VidalMata et al. 2021; Aakur and Sarkar 2019). The Mallow method (Sener and Yao 2020) utilizes video-level annotations in an iterative approach to action segmentation. This involves alternating optimization between a discriminative appearance model and a generative temporal model of action sequences. Conversely, the Frank-Wolfe (Alayrac et al. 2016) method extracts video narrations using Automatic Speech Recognition (ASR). These narrations are then employed to extract an action sequence for a set of videos related to a specific activity. This is achieved by independently clustering the videos and the ASR-recovered speech to identify action verbs in each video. Temporal localization is subsequently obtained by training a linear classifier. CTE proposes learning frame embeddings that incorporate relative temporal information. They train a video activity model using pseudolabels generated from K-means clustering of the videos’ IDT features. The trained embeddings are then re-clustered to match the groundtruth number of actions, and their order is determined using statistics of the relative timestamps with GMM+Viterbi decoding. VTE-UNET (VidalMata et al. 2021) leverages similarly learned embeddings, combining them with temporal embeddings to enhance the performance of CTE. LSTM+AL (Aakur and Sarkar 2019) fine-tunes a pre-trained VGG16 model with an LSTM, using future frame prediction as a self-supervision objective to learn frame embeddings. These embeddings are subsequently employed to train an action boundary detection model. However, all these methods necessitate training on the target video dataset, which, from a practical standpoint, imposes significant restrictions. In contrast, our method eliminates the need for training and relies solely on video splitting and merging. Methodology Observed that with a well-established similarity between frames in the video, the boundaries of actoms within a video can be identified without resorting to additional training on objectives reliant on objectives that use noisy pseudo-labels, something that almost all current methods pursue. Previous endeavors in actom boundary detection have involved complex neural networks or the generation of pseudo-labels that may not be directly relevant (Ishikawa et al. 2021; Wang et al. 2020b). Contrary to this prevailing trend, our approach takes a different path. The bottom-up framework, exemplified by (Menon, Muthukrishnan, and Kalyani 2020), emerges as a promising choice for our task. Such methods furnish a hierarchy of data partitions instead of a singular partition. In this paper, we embrace a bottom-up framework for action segmentation, bypassing the need for video-level activity labels. 
The capacity of our approach to generating plausible action segmentation without training holds considerable practical value. Given a video X = {x1,x2,..., xN}, our primary goal is to categorize these frames into K actions, where K represents the number of distinct actions present in the video. Our approach begins with an in-depth explanation of frame similarity. Subsequently, we outline the methodology for detecting actom boundaries based on the similarity between adjacent frames, a process that divides videos into discrete and semantically consistent actoms. Moreover, we assume that The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6272 actoms located closely in the temporal domain share similar semantic characteristics. To facilitate this, we introduce a spatio-temporal similarity measure that forges connections between actoms, taking into account their proximity not only in feature space but also in the temporal dimension. This fusion of feature-based proximity and temporal alignment aims to incorporate spatial and temporal coherence in a unified manner. Splitting Video into Semantically Consistent Actoms Our initial objective revolves around partitioning a video into actoms by accurately identifying their boundaries. However, due to factors such as occlusion, changes in viewpoint, or variations in lighting, the alterations along the entire temporal dimension of a video can be abrupt. Consequently, precisely detecting actom boundaries based solely on the general distance assessment (e.g., Euclidean) between consecutive frames poses considerable challenges. The subspace assumption has found extensive application in various domains, including image representation and compression, as well as in addressing computer vision challenges like action segmentation, face clustering, image segmentation, and video segmentation. Our work aligns with these principles and similarly operates under the premise that distinct actions present in video frames can be discerned by their respective placements within distinct subspaces. We introduce a cosine-based measurement to quantify the similarity between frames, building upon the subspace assumption. Specifically, we measure the similarity between frames within the same or different subspaces by evaluating the cosine angle between them. To achieve this, we first normalize each frame xi as ˜xi = xi/∣∣xi∣∣2, where ∣∣⋅∣∣denotes the l2 norm. These normalized frames ˜xi ∈RD, with i ∈ 1,2,...,N, are situated within the high-dimensional hypersphere SD−1 (Menon, Muthukrishnan, and Kalyani 2020). We define the angle θi,j between two frames xi and xj as θi,j = cos−1(˜xT i ˜xj), where θi,j ∈[0,π]. This radian-based θi,j is then converted to degrees, denoted as ˜θi,j, using the formula ˜θi,j = θi,j ⋅180/π, with ˜θi,j ∈[0,180]. The angle ˜θi,j = 0 when frames xi and xj reside within the same subspace, while ˜θi,j > 0 indicates that xi and xj are located in different subspaces. The similarity between consecutive frames xi and xi+1 is defined as si = exp(−˜θi,i+1/σ2 θ) for i ∈[1,N −1] and sN is set to be sN = sN−1 for convenience. In this paper, the variance σ2 θ is designated as σ2 θ = Var[˜θ]. If we graph the similarity si against their corresponding time stamps i, we would observe a sequence of undulating patterns resembling the shape of ⊓. 
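The adjacent-frame similarity just described is straightforward to compute once per-frame feature vectors are available. The NumPy sketch below assumes an (N, D) feature array (for instance, pre-extracted per-frame descriptors); the eps guard against zero-norm frames and the two-action toy example are additions of mine rather than the released code.

```python
import numpy as np

def adjacent_frame_similarity(frames: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Return s_i = exp(-theta_i / sigma_theta^2) for consecutive frames of an (N, D) feature array."""
    x = frames / (np.linalg.norm(frames, axis=1, keepdims=True) + eps)   # project frames onto the unit hypersphere
    cos = np.clip((x[:-1] * x[1:]).sum(axis=1), -1.0, 1.0)
    theta = np.degrees(np.arccos(cos))                                   # angle between consecutive frames, in degrees
    sigma2 = max(theta.var(), eps)                                       # sigma_theta^2 = Var[theta]
    s = np.exp(-theta / sigma2)
    return np.append(s, s[-1])                                           # s_N := s_{N-1}

# Toy video: two "actions" occupying different subspaces (directions) of the feature space.
rng = np.random.default_rng(0)
video = np.vstack([np.tile([1.0, 0.0], (50, 1)), np.tile([0.0, 1.0], (50, 1))])
video += 0.01 * rng.standard_normal(video.shape)
s = adjacent_frame_similarity(video)
print(int(np.argmin(s)))   # 49: the deepest dip sits exactly at the action boundary
```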
The fluctuations in similarity over time can be attributed to the fact that actions belonging to the same category exhibit high similarity values (approaching 1), while significant decreases in similarity values indicate substantial changes in actions. Consequently, the boundaries represented by the falling edge of ⊓are the critical delineations between different actoms. We can simply locate the low value in the curve of si to identify these boundaries of actoms. However, in practice, pinpointAlgorithm 1: The split-and-merge (SaM) algorithm. Input: the video X, and the number of actions K. Output: the K actoms. 1: Calculate the similarity between adjacent frames: {si}. 2: Local minimum searching resulting in the segmentation indices B = {b1,b2,...,bM−1}. 3: Initialize the actom {X1,X2,..., XM} with the corresponding index set {C1,C2,..., CM} according to B. 4: repeat 5: Compute the actom feature {¯xm}m∈[1,2,..,M], and averaged time-stamps ¯tm = 1 ∣Cm∣∑t∈Cm t. 6: Compute the spatio-temporal similarity G(M)(i,j) for any i ≠j. 7: Merge the most similar ith, and jth actoms, resulting in new actom. 8: until the number of actoms is K. ing these troughs on the curve of si is not straightforward due to various factors. In particular, the regions around the boundaries of actions tend to be plagued by a multitude of erroneous responses. To address this issue, we develop a local minimum search algorithm to mitigate the influence of these errors. Specifically, we embark on a comprehensive boundary search by constructing a set of boundaries as { argmin t∈{i+1,i+2,...,i+L} st∣i ∈{0,L, 2L,3L,...N −L}} where the window size L = ⌊δN/K⌋, the length of the data N, and the number of actions K. In this construction, we search for minima within locally prominent windows of size L along the temporal dimension. These identified minima hold the potential to serve as effective boundaries between adjacent actoms. In the upcoming experimental section, we will conduct a detailed analysis of how the window size δ affects the performance of our algorithm. Furthermore, we will illustrate the robustness of our algorithm in response to changes in δ. Merging Actoms Representing the Same Action Denote the boundary set obtained from the previous section as B = {b1,b2,...,bM−1} ⊂{1,2,...,N}. We assume that the boundaries are both ordered and non-repeating, such that bk−1 < bk for any k ∈[1,M]. The dummy boundaries are implicitly available: b0 ∶= 0 and bM ∶= N. Since the same action usually occurs multiple times in a video, M is always greater than the number of actions K. Therefore, it is necessary to develop a merging procedure to further group actoms into K actions. Denote the mth actom as Xm = {xi}i∈Cm, where the index set Cm = {bm−1 + 1,bm−1 + 2,...,bm}. The key to merging actoms lies in measuring the similarity between them. Drawing upon the observation of identifying linking chains within data through the presence of nearest or shared neighbors, we introduce a spatio-temporal measurement that considers both proximities in the feature space and the temporal arrangement of actoms. Specifically, we aim to design a similarity metric that captures the essence of both feature-space and temporal closeness among actoms. This is achieved by The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6273 incorporating the progression of time as a modulating factor during the similarity computation. 
We first compute the feature vector of actoms, denoted as {¯xm}m∈[1,2,..,M], where ¯xm = 1 ∣Cm∣∑i∈Cm xi represents the feature of the mth actom that characterizes a specific action. The similarity between the ith and jth actoms is then defined as G(M)(i, j) = exp(−λ∣¯ti −¯tj∣ N ) ⋅exp[−180 πσ2 θ cos−1( ¯xT i ¯xj ∣∣¯xi∣∣2∣∣¯xj∣∣2 )] for any i,j ∈[1,2,...,M], i ≠j, where ¯tm = 1 ∣Cm∣∑t∈Cm t, and λ serves as a trade-off parameter, offering control over the influence of the temporal consistency requirement. The introduced term exp(−λ∣¯ti−¯tj∣ N ) is employed to accentuate the significance of the temporal consistency on the similarity measure. Here, ∣¯ti−¯tj∣represents the temporal disparity between the respective nodes, with its impact adjusted by λ. This factor is especially pertinent as we intend to employ temporal similarity as a modulating component for the feature-space similarity. Consequently, G(M)(i,j) signifies the spatio-temporal similarity between the actoms i and j, taking into account the temporal relationships while considering the weight derived from the duration of the video sequence. The entry G(M)(i,j), for any i,j ∈{1,2,...,M}, i ≠j with the maximum value is selected and the corresponding ¯xi and ¯xj are considered as the most similar actoms, which are merged to generate a new actom Xm′ = {xi}i∈Cm′ , where the index set Cm′ = Ci ∪Cj. We repeat the merging process until the number of actoms reduces to K. The main steps of the proposed algorithm are shown in Alg. 1. Experiments Experimental Setup Datasets. We assessed the efficacy of our approach on four benchmark datasets: Breakfast (BF)(Kuehne, Arslan, and Serre 2014), YouTube Instructional Videos (YTI) (Alayrac et al. 2016), Hollywood Extended (HE) (Bojanowski et al. 2014), and 50Salads (FS) (Stein and McKenna 2013). These four datasets encompass a broad spectrum of activities, ranging from diverse cooking routines to tasks like car maintenance. The dataset characteristics span varying video lengths, with averages ranging from approximately 520 frames to as high as 11788 frames. Features. To ensure an equitable comparison with relevant prior studies, we adopt the same input features as recent methodologies (Sheng and Li 2023; Wang et al. 2023; Fayyaz and Gall 2020). Specifically, for the BF, FS, and HE datasets, we utilize the Improved Dense Trajectory (IDT) features (Wang and Schmid 2013) as computed and provided by the authors of CTE (Kukleva et al. 2019) (for BF and FS) and SCT (Fayyaz and Gall 2020) (for HE). For YTI (Alayrac et al. 2016), we leverage the features made available by the authors themselves. These features consist of 3000-dimensional vectors, achieved by concatenating Histogram of Optical Flow (HOF) (Laptev et al. 2008) descriptors with feature embeddings extracted from VGG16-conv5 (Simonyan and Zisserman 2014). Throughout all datasets, our reporting of performance encompasses the entire dataset, ensuring alignment with established practices within the literature. Evaluation Metrics. Since our method outputs clusters without particular correspondences to the ground-truth labels, we require a one-to-one mapping between the outputs and the ground-truth labels. Following (Aakur and Sarkar 2019; Kukleva et al. 2019; Sener and Yao 2020), we utilize the Hungarian algorithm to generate this mapping based on the overlap between matched clusters. Since our method does not concern cluster labels, we conduct this mapping on the video level as in (Aakur and Sarkar 2019). 
We also report the F1 score and mean over frames (MoF) for all datasets as used in previous works (Kukleva et al. 2019). We report the Jaccard index as an intersection over union (IoU) as an additional measurement.

Comparison to State-of-the-art
We proceed to present a comprehensive comparison of our method against the current state-of-the-art techniques, including WPI (Ghoddoosian et al. 2022), SSTDA (Chen et al. 2020), ASAL (Li and Todorovic 2021), TOT+TCL (Kumar et al. 2022), Mallow (Sener and Yao 2020), SCV (Li and Todorovic 2020), US-FGW (Luo et al. 2022), DMR (Asghari-Esfeden, Sznaier, and Camps 2020), D3TW (Chang et al. 2023), SRL (Feichtenhofer et al. 2021), SRA (Lai et al. 2019), STPE (de AP Marques et al. 2022), GMM+CNN (Kuehne, Richard, and Gall 2019), FFA (Ng and Fernando 2020), C2F (Sheng, Li, and Tian 2021), TAD (Li, He, and Xu 2022), etc. We discuss the results individually for each of the four datasets, as summarized in the following tables: Tab. 1 (BF), Tab. 2 (YTI), Tab. 4 (FS), and Tab. 3 (HE). However, it is important to acknowledge that, as highlighted in (Kukleva et al. 2019), while our evaluation metrics are comparable to those utilized by both weakly and fully supervised approaches, a certain nuance must be taken into account. Specifically, the results of unsupervised learning are presented with respect to an optimal cluster assignment to ground-truth classes, thereby representing the best conceivable scenario for this task. For each dataset, we report the metrics that are conventionally utilized for that specific dataset. In the presented tables, the Train column indicates whether the method necessitates training on the target activity videos before executing the segmentation process. A hyphen (–) denotes instances where no reported results are available.

BF                  IoU    F1     MoF    Train
Weakly Supervised
CDFL                –      –      50.2   ✓
SCT                 –      –      30.4   ✓
MuCon               –      –      48.5   ✓
WPI                 25.0   –      –      ✓
C2FL                –      –      50.4   ✓
Unsupervised
CTE                 –      26.4   41.8   ✓
SSTDA               –      –      55.2   ✓
ASAL                –      37.9   52.5   ✓
TOT+TCL             –      30.3   39.0   ✓
CoSeg               42.6   44.7   53.1   ✓
SaM                 44.4   55.9   64.0
Table 1: Comparison to the state-of-the-art on BF.

YTI (Unsupervised)  F1     MoF    Train
CTE                 28.3   39.0   ✓
Mallow              27.0   27.8   ✓
ASAL                32.1   44.9   ✓
TOT+TCL             32.9   45.3   ✓
LTL                 34.7   52.4
SaM                 49.6   68.1
Table 2: Comparison to the state-of-the-art on YTI.

Performance on BF. In Tab. 1, we present a performance comparison with state-of-the-art methods on BF. In addition to unsupervised methods, we also include a comparison with several supervised and weakly supervised methods, which serve as upper bounds for evaluating our method's performance. Our SaM method demonstrates superior performance over all unsupervised methods, showcasing absolute improvements of 9.5%, 10.2%, and 12% compared to the best-reported unsupervised method CoSeg (Wang et al. 2023). Similarly, our approach outperforms the leading weakly supervised method C2FL (Sheng and Li 2023) with a remarkable 14.7% enhancement on the MoF metric. Furthermore, in comparison with fully supervised methods, our approach achieves results 5.9% lower than the F1 metric of (van Amsterdam et al. 2023) and 5.1% lower than the MoF metric of (Chen et al. 2020). Although our method falls short of the performance exhibited by fully supervised methods, it comes remarkably close to their results.

Performance on YTI. We summarize the performance of our method on YTI in Tab. 2. To make a fair comparison, we remove background frames from videos as in previous approaches (Kukleva et al. 2019; Sener and Yao 2020). Our SaM method significantly outperforms all unsupervised methods, with absolute gains of 14.9%/15.7% on F1/MoF over the best published unsupervised method LTL (Bueno-Benito, Vecino, and Dimiccoli 2023).

Performance on HE. As shown in Tab. 3, our method achieves a significant leap compared with existing approaches. In particular, our method obtains improvements of 5.7% on MoF compared with the best unsupervised method DHC (Sharma et al. 2023). Similarly, our method outperforms the best weakly supervised method MuCon (Souri et al. 2021) with 19.3% gains on the MoF metric. Remarkably, our method even outperforms all fully supervised methods on the IoU and MoF metrics.

HE                  IoU    F1     MoF    Train
Fully Supervised
GMM+CNN             8.4    –      39.5   ✓
SCV                 35.5   –      –      ✓
Weakly Supervised
CDFL                19.5   –      40.6   ✓
SCT                 17.7   –      –      ✓
MuCon               –      –      41.6   ✓
US-FGW              23.1   –      38.4   ✓
D3TW                –      –      33.6   ✓
Unsupervised
SRA                 –      41.1   –      ✓
DMR                 –      47.4   –      ✓
SRL                 30.0   45.7   53.0   ✓
STPE                –      –      47.3   ✓
DHC                 –      –      55.2   ✓
SaM                 39.4   57.3   60.9
Table 3: Comparison to the state-of-the-art on HE.

FS                  F1     MoF(eval)  MoF(mid)  Train
Weakly Supervised
CDFL                –      –          54.7      ✓
FFA                 –      –          49.4      ✓
C2F                 –      –          24.7      ✓
TAD                 –      –          45.5      ✓
C2F2                –      –          56.2      ✓
Unsupervised
CTE                 –      35.5       30.2      ✓
SSTDA               73.8   –          –         ✓
ASAL                –      39.2       34.4      ✓
TOT+TCL             –      44.5       34.3      ✓
CoSeg               71.8   –          –         ✓
SaM                 78.2   71.6       71.9
Table 4: Comparison to the state-of-the-art on FS.

Performance on FS. We provide a summary of the performance of our method on the FS dataset in Tab. 4. The FS dataset encompasses an average of 19 actions per video, with 14.1% of all frames classified as background frames. We evaluate our method at two levels of action granularity, as outlined in (Stein and McKenna 2013). The mid granularity level assesses performance across the complete set of 19 actions, while the eval granularity level combines certain action classes to yield 10 distinct action classes. In the mid granularity evaluation, our method achieves an MoF of 71.9%, which is notably higher by 15.7% (in absolute terms) compared to the leading weakly supervised method C2F2 (Sheng and Li 2023). This trend of performance improvement is also evident at the eval granularity level. In comparison to the most effective fully supervised method, our approach achieves results that are 10.29% lower in terms of the F1 metric, 12.2% lower in terms of the eval metric, and 4.4% lower in terms of the mid metric. This indicates that there is still room for enhancement in our method's performance on the FS dataset.

                    Train (h)   Learn (s)   Train
Weakly Supervised
CDFL                66.73       62.37       ✓
MuCon-full          4.57        3.03        ✓
Unsupervised
CTE                 –           217.94      ✓
Ours                0.00        0.20
Table 5: Comparison of training and learning time. The training time is measured for training on split 1 of BF, and the learning time is measured as the average learning time for a single video.

Figure 2: The performance of our method with varying (a) δ and (b) λ on the BF dataset (curves of MoF, IoU, and F1).

Run-Time Comparison
The comparison of runtime efficiency between our method and alternative approaches is presented in Tab. 5, including MuCon-full (Souri et al. 2021), etc. All experiments were conducted using split 1 of the BF dataset, with learning times reported for individual videos.
Each video within this dataset comprises approximately 2,000 frames. The time used for feature extraction is not included. A discernible pattern emerges from the table: our method eliminates the necessity for hours of GPU-intensive model training. In comparison to the faster unsupervised approach CTE (Kukleva et al. 2019), which entails training overhead, our method requires no training and achieves an impressive 56× faster learning speed. The combination of swift learning and the absence of training prerequisites underscores the practical feasibility of our approach. When paired with an off-the-shelf feature extractor, our method becomes readily applicable to real-world applications.

Parameter Sensitivity Analysis
Impact of δ. We investigate the impact of the window size parameter δ, which governs the search for boundaries of actoms. We fix the parameter λ = 0.001. The relationship between action segmentation performance and the parameter δ is illustrated in Fig. 2 (a). When δ is increased, our method tends to yield stable results, particularly within the range δ < 0.4. However, when δ exceeds 0.4, the effectiveness of our algorithm begins to diminish. This decline can be attributed to the fact that with larger δ values, the algorithm detects fewer boundaries of actoms. Nonetheless, our algorithm maintains relatively stable and consistent performance across a wide range of δ values, 0.1 < δ < 0.4. We set δ = 0.3 in our experiments.

Influence of λ. We further investigate the impact of the trade-off parameter λ on the actom merging process. We maintain a fixed value for the parameter δ, setting it to δ = 0.3. The correlation between action segmentation performance and the parameter λ is depicted in Fig. 2 (b). Similar to δ, the parameter λ also exerts a significant influence on the performance of our algorithm. We notice that the performance of our algorithm improves as λ increases, and this trend starts to diminish when λ surpasses 0.001. However, there is a potential issue to consider: increasing the value of λ to enhance temporal consistency can inadvertently lead to the merging of adjacent actoms that actually represent distinct actions. For example, suppose a video showcases actions A, B, and C, occurring in the sequence A, B, A, C, B. In this case, an excessively large value of λ would merge the first three actoms A, B, and A into a single action. To address this concern, it is advisable to adopt a balanced setting and refrain from assigning excessively large values to λ. A commonly suggested value for λ is 0.001.

Ablation Study
Impact of Splitting. We demonstrate the role of the splitting step by removing it from our algorithm. We assume that each frame represents an actom, and subsequently, our merging method is applied to these actoms. This adaptation results in a performance decline on the BF dataset, with the MoF score dropping from 64.0 to 54.8, IoU decreasing from 44.4 to 35.1, and F1 diminishing from 55.9 to 39.6. This illustrates that our method cannot function effectively without the splitting step.

Impact of Merging. To illustrate the impact of the merging step, we exclude it from our algorithm. We execute the boundary search process, retaining only the K−1 boundaries with the lowest similarity $s_i$ to ensure the generation of K segments, with each segment corresponding to an action.
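A minimal sketch of this merge-free variant, assuming cosine similarity between adjacent frames (names are illustrative):

import numpy as np

def split_only_segments(feats, K):
    """Keep only the K-1 lowest-similarity boundaries, yielding K segments."""
    a, b = feats[:-1], feats[1:]
    s = (a * b).sum(1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-8)
    keep = np.sort(np.argsort(s)[: K - 1] + 1)   # K-1 boundaries with lowest s_i
    edges = [0] + keep.tolist() + [len(feats)]
    return [list(range(edges[m], edges[m + 1])) for m in range(K)]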
This adjustment leads to a performance decline in our method on the BF dataset, with the MoF score dropping from 64.0 to 50.6, IoU decreasing from 44.4 to 22.0, and F1 diminishing from 55.9 to 30.4. This emphasizes that our method cannot operate effectively without the merging step. Conclusion This paper introduces an innovative unsupervised learning framework for action segmentation. By considering both the consistency within actions and the variation across actions, we develop a SaM algorithm to learn the semantically consistent actoms. Rigorous evaluations conducted on four demanding datasets substantiate the efficacy of our approach. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6276 References Aakur, S. N.; and Sarkar, S. 2019. A perceptual prediction framework for self supervised event segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1197–1206. Alayrac, J.-B.; Bojanowski, P.; Agrawal, N.; Sivic, J.; Laptev, I.; and Lacoste-Julien, S. 2016. Unsupervised learning from narrated instruction videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4575–4583. Asghari-Esfeden, S.; Sznaier, M.; and Camps, O. 2020. Dynamic motion representation for human action recognition. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, 557–566. Bojanowski, P.; Lajugie, R.; Bach, F.; Laptev, I.; Ponce, J.; Schmid, C.; and Sivic, J. 2014. Weakly supervised action labeling in videos under ordering constraints. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, 628–643. Bueno-Benito, E. B.; Vecino, B. T.; and Dimiccoli, M. 2023. Leveraging triplet loss for unsupervised action segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4921–4929. Canny, J. 1986. A computational approach to edge detection. IEEE Transactions on pattern analysis and machine intelligence, (6): 679–698. Chang, C.-Y.; Huang, D.-A.; Sui, Y.; Fei-Fei, L.; and Niebles, J. C. 2023. D3tw: Discriminative differentiable dynamic time warping for weakly supervised action alignment and segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3546– 3555. Chen, M.-H.; Li, B.; Bao, Y.; AlRegib, G.; and Kira, Z. 2020. Action segmentation with joint self-supervised temporal domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9454–9463. de AP Marques, G.; Busson, A. J. G.; Guedes, A. L. V.; Duarte, J. C.; and Colcher, S. 2022. Unsupervised method for video action segmentation through spatio-temporal and positional-encoded embeddings. In Proceedings of the 13th ACM Multimedia Systems Conference, 136–149. Ding, L.; and Xu, C. 2018. Weakly-supervised action segmentation with iterative soft boundary assignment. In Proceedings of the IEEE conference on computer vision and pattern recognition, 6508–6516. Fayyaz, M.; and Gall, J. 2020. Sct: Set constrained temporal transformer for set supervised action segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 501–510. Feichtenhofer, C.; Fan, H.; Xiong, B.; Girshick, R.; and He, K. 2021. A large-scale study on unsupervised spatiotemporal representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3299–3309. 
Ghoddoosian, R.; Dwivedi, I.; Agarwal, N.; Choi, C.; and Dariush, B. 2022. Weakly-supervised online action segmentation in multi-view instructional videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13780–13790. Ishikawa, Y.; Kasai, S.; Aoki, Y.; and Kataoka, H. 2021. Alleviating over-segmentation errors by detecting action boundaries. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, 2322–2331. Keogh, E.; Chu, S.; Hart, D.; and Pazzani, M. 2004. Segmenting time series: A survey and novel approach. In Data mining in time series databases, 1–21. Kuehne, H.; Arslan, A.; and Serre, T. 2014. The language of actions: Recovering the syntax and semantics of goaldirected human activities. In Proceedings of the IEEE conference on computer vision and pattern recognition, 780– 787. Kuehne, H.; Richard, A.; and Gall, J. 2019. Weakly supervised learning of actions from transcripts. Computer Vision and Image Understanding, 163: 78–89. Kukleva, A.; Kuehne, H.; Sener, F.; and Gall, J. 2019. Unsupervised learning of action classes with continuous temporal embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12066–12074. Kumar, S.; Haresh, S.; Ahmed, A.; Konin, A.; Zia, M. Z.; and Tran, Q.-H. 2022. Unsupervised action segmentation by joint representation learning and online clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 20174–20185. Lai, Q.; Wang, W.; Sun, H.; and Shen, J. 2019. Video saliency prediction using spatiotemporal residual attentive networks. IEEE Transactions on Image Processing, 29: 1113–1126. Laptev, I.; Marszalek, M.; Schmid, C.; and Rozenfeld, B. 2008. Learning realistic human actions from movies. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, 1–8. Li, J.; Lei, P.; and Todorovic, S. 2019. Weakly supervised energy-based learning for action segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 6243–6251. Li, J.; and Todorovic, S. 2020. Set-constrained viterbi for set-supervised action segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10820–10829. Li, J.; and Todorovic, S. 2021. Action shuffle alternating learning for unsupervised action segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12628–12636. Li, Z.; He, L.; and Xu, H. 2022. Weakly-Supervised Temporal Action Detection for Fine-Grained Videos with Hierarchical Atomic Actions. In European Conference on Computer Vision, 567–584. Lim, K. M.; Lee, C. P.; Tan, K. S.; Alqahtani, A.; and Ali, M. 2023. Fine-Tuned Temporal Dense Sampling with 1D Convolutional Neural Network for Human Action Recognition. Sensors, 23(11): 5276. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6277 Liu, C.; Wang, C.; and Luo, J. 2020. Large-scale deep learning framework on FPGA for fingerprint-based indoor localization. IEEE Access, 8: 65609–65617. Liu, K.; Li, Y.; Liu, S.; Tan, C.; and Shao, Z. 2022. Reducing the Label Bias for Timestamp Supervised Temporal Action Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6503–6513. Luo, D.; Wang, Y.; Yue, A.; and Xu, H. 2022. WeaklySupervised Temporal Action Alignment Driven by Unbalanced Spectral Fused Gromov-Wasserstein Distance. In Proceedings of the 30th ACM International Conference on Multimedia, 728–739. 
Luo, J.; Zhang, C.; and Wang, C. 2020. Indoor multi-floor 3D target tracking based on the multi-sensor fusion. IEEE Access, 8: 36836–36846. Menon, V.; Muthukrishnan, G.; and Kalyani, S. 2020. Subspace clustering without knowing the number of clusters: A parameter free approach. IEEE Transactions on Signal Processing, 68: 5047–5062. Ng, Y. B.; and Fernando, B. 2020. Forecasting future action sequences with attention: a new approach to weakly supervised action forecasting. IEEE Transactions on Image Processing, 29: 8880–8891. Qi, S.; Lin, W.; Hong, Z.; Chen, H.; and Zhang, W. 2021. Perceptive autonomous stair climbing for quadrupedal robots. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2313–2320. Qi, S.; Wu, X.; Wang, J.; and Zhang, J. 2019. Recognition of composite motions based on sEMG via deep learning. In 2019 14th IEEE Conference on Industrial Electronics and Applications (ICIEA), 31–36. Richard, A.; Kuehne, H.; Iqbal, A.; and Gall, J. 2018. Neuralnetwork-viterbi: A framework for weakly supervised video learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 7386–7395. Sener, F.; and Yao, A. 2020. Unsupervised learning and segmentation of complex activities from video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 8368–8376. Sharma, V.; Gupta, M.; Pandey, A. K.; Mishra, D.; and Kumar, A. 2023. A review of deep learning-based human activity recognition on benchmark video datasets. Applied Artificial Intelligence, 36(1): 2093705. Sheng, L.; and Li, C. 2023. Weakly supervised coarse-tofine learning for human action segmentation in HCI videos. Multimedia Tools and Applications, 82(9): 12977–12993. Sheng, L.; Li, C.; and Tian, Y. 2021. Coarse-to-Fine Loss Based On Viterbi Algorithm for Weakly Supervised Action Segmentation. In 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), 1–6. Simonyan, K.; and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Souri, Y.; Fayyaz, M.; Minciullo, L.; Francesca, G.; and Gall, J. 2021. Fast weakly supervised action segmentation using mutual consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10): 6196–6208. Stein, S.; and McKenna, S. J. 2013. Combining embedded accelerometers with computer vision for recognizing food preparation activities. In Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, 729–738. Sun, X.; Chen, W.; Xiong, X.; Chen, W.; and Jin, Y. 2022a. A Variable Configuration Force Sensor with Adjustable Resolution for Robotic Applications. IEEE Trans. Ind. Electron., 70(2): 2066–2075. Sun, X.; Wang, C.; Chen, W.; Chen, W.; Yang, G.; and Jin, Y. 2023. A single-actuator four-finger adaptive gripper for robotic assembly. IEEE Trans. Ind. Electron. Sun, X.; Wang, C.; Chen, W.; Yang, S.; He, C.; and Zhi, Y. 2022b. Design and Analysis of a Novel Underactuated Adaptive Gripper for Robotic Assembly. In IEEE 17th Conference on Industrial Electronics and Applications, 207– 212. Sun, X.; Yang, Y.; Chen, W.; Chen, W.; and Zhi, Y. 2022c. Grasping Operation of Irregular-Shaped Objects Based on a Monocular Camera. In International Joint Conference on Energy, Electrical and Power Engineering, 423–429. van Amsterdam, B.; Kadkhodamohammadi, A.; Luengo, I.; and Stoyanov, D. 2023. ASPnet: Action Segmentation With Shared-Private Representation of Multiple Data Sources. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2384–2393. VidalMata, R. G.; Scheirer, W. J.; Kukleva, A.; Cox, D.; and Kuehne, H. 2021. Joint visual-temporal embedding for unsupervised learning of actions in untrimmed sequences. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1238–1247. Vishwakarma, S.; and Agrawal, A. 2013. A survey on activity recognition and behavior understanding in video surveillance. The Visual Computer, 29: 983–1009. Wang, C.; Luo, J.; Liu, X.; and He, X. 2021. Secure and reliable indoor localization based on multitask collaborative learning for large-scale buildings. IEEE Internet of Things Journal, 9(22): 22291–22303. Wang, C.; Luo, J.; Zhang, C.; and Liu, X. 2020a. A Dynamic Escape Route Planning Method for Indoor Multi-floor Buildings Based on Real-time Fire Situation Awareness. In 2020 IEEE 26th International Conference on Parallel and Distributed Systems (ICPADS), 222–229. Wang, H.; and Schmid, C. 2013. Action recognition with improved trajectories. In Proceedings of the IEEE international conference on computer vision, 3551–3558. Wang, X.; Liu, J.; Mei, T.; and Luo, J. 2023. CoSeg: Cognitively Inspired Unsupervised Generic Event Segmentation. IEEE Transactions on Neural Networks and Learning Systems. Wang, Z.; Gao, Z.; Wang, L.; Li, Z.; and Wu, G. 2020b. Boundary-aware cascade networks for temporal action segmentation. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXV 16, 34–51. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6278 | 2024 | 697 |
18,514 | SPEAL: Skeletal Prior Embedded Attention Learning for Cross-Source Point Cloud Registration Kezheng Xiong1, Maoji Zheng1, Qingshan Xu2, Chenglu Wen1*, Siqi Shen1*, Cheng Wang1 1 Xiamen University 2 Nanyang Technological University {xiongkezheng, zhengmaoji}@stu.xmu.edu.cn, [email protected], {clwen, siqishen, cwang}@xmu.edu.cn, Abstract Point cloud registration, a fundamental task in 3D computer vision, has remained largely unexplored in cross-source point clouds and unstructured scenes. The primary challenges arise from noise, outliers, and variations in scale and density. However, neglected geometric natures of point clouds restrict the performance of current methods. In this paper, we propose a novel method, termed SPEAL, to leverage skeletal representations for effective learning of intrinsic topologies of point clouds, facilitating robust capture of geometric intricacy. Specifically, we design the Skeleton Extraction Module to extract skeleton points and skeletal features in an unsupervised manner, which is inherently robust to noise and density variances. Then, we propose the Skeleton-Aware GeoTransformer to encode high-level skeleton-aware features. It explicitly captures the topological natures and inter-pointcloud skeletal correlations with the noise-robust and densityinvariant skeletal representations. Next, we introduce the Correspondence Dual-Sampler to facilitate correspondences by augmenting the correspondence set with skeletal correspondences. Furthermore, we construct a challenging novel crosssource point cloud dataset named KITTI CrossSource for benchmarking cross-source point cloud registration methods. Extensive quantitative and qualitative experiments are conducted to demonstrate our approach’s superiority and robustness on both cross-source and same-source datasets. To the best of our knowledge, our approach is the first to facilitate point cloud registration with skeletal geometric priors. Introduction Point cloud registration is an essential task in graphics, vision, and robotics. It aims at estimating a rigid transformation to align two partially overlapping frames of point clouds. Recently, there has been a surge of interest in learning-based point cloud registration methods. These methods have made significant progress in addressing the sparsity, partial overlap, and complex distribution of point clouds in large outdoor scenes (Lu et al. 2021; Huang et al. 2021a; Yew and Lee 2022; Qin et al. 2022). However, the practical application and advances in point cloud acquisition present more challenges for point cloud registration, including unstructured scenes and cross-source data. *Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. GeoTrans. PCAM Predator CoFiNet HRegNet ⭐SPEAL (Ours)
Figure 1: Impossible Triangle of Current Methods. Registration recalls under different settings are shown: KITTI Odometry (Same-Source), KITTI CrossSource and the lowoverlap test split of KITTI CrossSource. Existing methods fail to perform as well as SPEAL on all three challenging circumstances. In unstructured scenes, the complex natural scenes and objects often make it difficult to learn discriminative features for registration. This results in degraded performance of registration algorithms. In the case of cross-source data, challenges mainly arise from partial overlap, as well as considerable differences in scale and density, leading to difficulties in effective feature matching. The combination of noise and outliers from different sources further downgrades the quality of correspondences. Existing methods either focus solely on same-source point clouds, or overlook the intrinsic topological natures of the point clouds. This leads to suboptimal results for challenging scenarios such as cross-source point cloud registration and unstructured scenes. We have observed that skeletons serve as an efficient and robust geometric representation for point clouds, exhibiting significant potential in various point cloud understanding tasks (Shi et al. 2021; Lin et al. 2021). They can effectively encode the geometric intricacy of point clouds. Inspired by this, we propose a novel transformer-based approach, termed Skeletal Prior Embedded Attention Learning (SPEAL), to address the aforementioned challenges. Our method utilizes skeletal geometric priors to learn discriminative features for accurate and robust correspondences. To the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6279 best of our knowledge, our approach is the first to facilitate point cloud registration with skeletal geometric priors. Such skeletal geometric priors encourage robust feature learning by explicitly encoding the intrinsic topological characteristics, thereby facilitating the correspondences and registration results, as shown in Fig. 1 Specifically, to incorporate skeletal representations as a geometric prior, SPEAL comprises three key components: Skeleton Extraction Module (SEM), Skeleton-Aware GeoTRansformer (SAGTR), and Correspondence DualSampler (CDS). First, with the insights from the medial axis transform (MAT) (Blum 1967), SEM extracts a set of skeleton points with skeletal features from input point clouds in an unsupervised manner. It is robust to noise and density variances. Next, SAGTR is designed to learn skeleton-aware discriminative features, facilitating accurate and robust correspondences. It explicitly captures the topological natures and effectively learns inter-point-cloud geometric correlations with our skeletal representations. Finally, CDS samples reliable correspondences from both superpoints and skeleton points, which produces reliable coarse correspondences with awareness of the skeletal structure. Extensive experiments are carried out on two datasets. One is KITTI Odometry, a large-scale outdoor registration benchmark (Geiger, Lenz, and Urtasun 2012), as well as its cross-source variant named KITTI CrossSource proposed by us. The other is a large-scale cross-source dataset mostly consisting of unstructured forest scenes (Weiser et al. 2022). The results demonstrate that SPEAL is effective and robust for both same-source and cross-source point cloud registration, as well as for point clouds of unstructured scenes. 
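At a glance, the three components form a single feed-forward pipeline. The following schematic is only an illustration of the data flow described above; all module interfaces and argument names here are hypothetical placeholders, not a definitive implementation:

def register(points_p, points_q, backbone, sem, sagtr, cds, lgr):
    # 1. Hierarchical backbone: superpoints and their features for each cloud.
    sp_p, feat_p = backbone(points_p)
    sp_q, feat_q = backbone(points_q)
    # 2. SEM: unsupervised skeleton points, skeletal features and radii.
    skel_p, skel_feat_p, _ = sem(sp_p, feat_p)
    skel_q, skel_feat_q, _ = sem(sp_q, feat_q)
    # 3. SAGTR: skeleton-aware self-/cross-attention over both clouds.
    hyb_p, hyb_ps, hyb_q, hyb_qs = sagtr((sp_p, feat_p, skel_p, skel_feat_p),
                                         (sp_q, feat_q, skel_q, skel_feat_q))
    # 4. CDS: hybrid coarse correspondences from superpoints and skeleton points.
    coarse_corr = cds(hyb_p, hyb_ps, hyb_q, hyb_qs)
    # 5. Dense matching and local-to-global registration yield the rigid R, t.
    return lgr(coarse_corr, points_p, points_q)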
Overall, our contributions are threefold: • We propose a novel learning-based point cloud registration approach, SPEAL, which is the first to utilize skeletal representations as a geometric prior to achieve improved performance. • The proposed SEM is an effective and portable skeleton extractor. Our SAGTR combined with CDS effectively produces accurate and robust correspondences for both same-source and cross-source point cloud registration. • KITTI CrossSource, a novel cross-source point cloud dataset, meets the dire need (Huang et al. 2021b) of cross-source point cloud registration benchmarks. This opens up the possibility to bridge the gap between sensor technology and cross-source applications. Related Work Learning-based Registration Methods. Learningbased registration methods fall into two categories: correspondence-based methods and direct registration methods. Correspondence-based methods (Choy, Park, and Koltun 2019; Deng, Birdal, and Ilic 2018a,b; Gojcic et al. 2019; Yao et al. 2020) first extract correspondences between two point clouds, and then estimate the transformation with robust pose estimators. However, traditional robust estimators suffer from slow convergence and are sensitive to outliers. To address this, deep robust estimators (Choy, Dong, and Koltun 2020; Bai et al. 2021; Pais et al. 2020; Lee et al. 2021) utilize deep neural networks to reject outliers and compute the transformation. While these methods require a training procedure, they improve accuracy and speed. Direct registration methods directly estimate the transformation between two point clouds in an end-to-end way. Inspired by Iterative Closest Point (Besl and McKay 1992), some of them (Fu et al. 2021; Wang and Solomon 2019b,a; Yew and Lee 2020) iteratively build soft correspondences and then estimate the transformation with SVD. Others (Xu et al. 2021; Aoki et al. 2019; Huang, Mei, and Zhang 2020) extract a global feature vector and regress the transformation directly with a neural network. However, such methods could potentially fail in large-scale scenes. Transformers in Point Cloud Registration. Originally designed for NLP tasks, Transformers (Vaswani et al. 2017) have shown remarkable efficacy in computer vision (Misra, Girdhar, and Joulin 2021; Carion et al. 2020; Dosovitskiy et al. 2020; Yu et al. 2021b). Recently, transformer-based methods for point cloud registration have also emerged. Geometric Transformer (Qin et al. 2022) leverages transformer layers for superpoint matching, while REGTR (Yew and Lee 2022) uses a transformer cross-encoder and a transformer decoder to directly predict overlap scores. PEAL(Yu et al. 2023) leverages additional overlap priors from 2D images. Point Cloud Skeletal Representations. The curve skeleton is a widely-used skeletal representation due to its simplicity (Huang et al. 2013; Ma, Wu, and Ouhyoung 2003; Au et al. 2008; Cao et al. 2010). It has shown its potential in some learning-based methods (Xu et al. 2019; Shi et al. 2021) like keypoint extraction. However, it is only well-defined for tubular geometries, thus limiting its expressiveness for point clouds with complex shapes or in large-scale scenes. The Medial Axis Transform (MAT) (Blum 1967) is another skeletal representation capable of encoding arbitrary shapes. Some methods (Sun et al. 2015; Yan, Letscher, and Ju 2018; Li et al. 2015) employ simplification techniques to alleviate the distortion caused by surface noise, but they are computationally ineffective and require watertight input surfaces. 
Recent learning-based efforts (Lin et al. 2021; Wen, Yu, and Tao 2023) use deep neural networks to predict MAT-based skeletons, thus greatly enhancing the robustness and computational efficiency. These methods have shown promising results in various 3D vision tasks, including shape reconstruction and point cloud sampling (Wen, Yu, and Tao 2023).

Method
Problem Statement. Given two point clouds $P = \{p_i \in \mathbb{R}^3 \mid i = 1, \dots, N\}$ and $Q = \{q_i \in \mathbb{R}^3 \mid i = 1, \dots, M\}$, our goal is to align the two point clouds by estimating a rigid transformation $T = \{R, t\}$, where $R \in SO(3)$ is a 3D rotation matrix and $t \in \mathbb{R}^3$ is a 3D translation vector. The transformation can be solved by:
$$\min_{R, t} \sum_{(p_{x_i}, q_{y_i}) \in C^\star} \| R\,p_{x_i} + t - q_{y_i} \|_2, \quad (1)$$
where $C^\star$ denotes the set of correspondences between the two point clouds P and Q. In reality, $C^\star$ is usually unknown. Hence, we need to establish accurate correspondences C between the two point clouds for a good transformation.

Figure 2: The backbone extracts superpoints and multi-level features from P and Q. Then, SEM and SAGTR extract skeletal representations and learn discriminative skeleton-aware features, respectively. Finally, CDS extracts hybrid coarse correspondences with skeletal priors. The result transformation is computed with LGR.

Overview and Notations. Our work leverages skeletal priors in an end-to-end neural network to facilitate correspondences. The pipeline is shown in Fig. 2, following the hierarchical correspondence paradigm. To extract multi-level features for point clouds, we leverage the KPConv-FPN backbone (Lin et al. 2017; Thomas et al. 2019). The points at the coarsest level of the backbone are superpoints, denoted as $\hat{P}$ and $\hat{Q}$. Their associated features are $\hat{F}^P \in \mathbb{R}^{|\hat{P}| \times d_t}$ and $\hat{F}^Q \in \mathbb{R}^{|\hat{Q}| \times d_t}$. Then, our proposed SEM, SAGTR and CDS are used to extract reliable and accurate coarse correspondences with skeletal priors. Finally, we employ the Point Matching Module and Local-to-Global Registration (Qin et al. 2022) to obtain dense correspondences and estimate the final rigid transformation.

Skeleton Extraction Module
The Skeleton Extraction Module aims to approximate the Medial Axis Transform (MAT) by leveraging a convex combination of input points, which provides a well-defined skeletal representation for arbitrary shapes in an unsupervised manner. Inspired by existing methods (Lin et al. 2021), it overcomes the computational expense and sensitivity to surface noise of traditional MAT computation. Specifically, for all points in $\hat{P} \in \mathbb{R}^{|\hat{P}| \times 3}$ and their features $\hat{F}^P \in \mathbb{R}^{|\hat{P}| \times d_t}$, SEM aims to extract $N_s$ skeleton points $S^P \in \mathbb{R}^{N_s \times 3}$, their skeletal features $\hat{F}^P_s \in \mathbb{R}^{N_s \times d_t}$, and their radii $R^P \in \mathbb{R}^{N_s \times 1}$. We extract skeletons for Q in the same way. To this end, we employ a multi-layer perceptron (MLP) to predict the weights $W \in \mathbb{R}^{|\hat{P}| \times N_s}$. The MLP is shared across $\hat{P}$ and $\hat{Q}$. Then, the skeleton points $S^P$ are obtained as the convex combination (Lin et al. 2021) of the input points $\hat{P}$:
$$S^P = W^T \hat{P} \quad \text{s.t.} \quad \sum_{i=1}^{|\hat{P}|} W(i, j) = 1, \; j = 1, \dots, N_s. \quad (2)$$
The weighting scheme enhances the robustness of skeleton extraction by effectively filtering out noise and outliers. Similarly, we extract their skeletal features by $\hat{F}^P_s = W^T \hat{F}^P$. To predict the radius of each skeleton point, we first compute the closest distance for an input point $\hat{p}$ to all skeleton points as follows:
$$d(\hat{p}, S^P) = \min_{s \in S^P} \| \hat{p} - s \|_2. \quad (3)$$
The distances for all input points are then summarized in a vector $D^P \in \mathbb{R}^{|\hat{P}| \times 1}$.
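For concreteness, a minimal PyTorch sketch of this extraction (including the radius step $R^P = W^T D^P$ completed in the next paragraph) is given below; the MLP width and the column-wise softmax used to satisfy the convex-combination constraint are assumptions, not the exact architecture:

import torch
import torch.nn as nn

class SkeletonExtractor(nn.Module):
    def __init__(self, feat_dim, num_skeleton):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                 nn.Linear(256, num_skeleton))

    def forward(self, points, feats):
        # points: (|P|, 3), feats: (|P|, d_t)
        # Column-wise convex-combination weights W in R^{|P| x N_s}.
        W = torch.softmax(self.mlp(feats), dim=0)
        skel_pts = W.t() @ points            # S = W^T P         (Eq. 2)
        skel_feats = W.t() @ feats           # F_s = W^T F
        # Closest distance of each input point to the skeleton    (Eq. 3)
        D = torch.cdist(points, skel_pts).min(dim=1).values.unsqueeze(1)
        radii = W.t() @ D                    # R = W^T D
        return skel_pts, skel_feats, radii

# Usage: extractor = SkeletonExtractor(feat_dim=256, num_skeleton=64)

A column-wise softmax is one simple way to make each column of W sum to one; any normalization that enforces the constraint in Eq. 2 would serve the same purpose.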
Next, the radii of all the skeleton points are computed through a linear combination of their closest distances from all the input points, i.e., $R^P = W^T D^P$. This approximation is based on the observation that the predicted weights for a skeleton point s are significant only for the input points that lie in a local neighborhood of s, and diminish to 0 for the input points far away from s. Skeleton extraction is a fundamentally different task from point cloud registration. Therefore, the module is separately supervised by the skeleton loss (Lin et al. 2021), and we block the gradient flow from this module to the backbone for a more stable training process and better performance (see supplementary materials).

Skeleton-Aware GeoTransformer
The registration of cross-source point clouds poses significant challenges, including noise, density differences, and scale variances. Skeleton points exhibit consistency and robustness against these challenges. Therefore, we propose the SAGTR module to encode the structure of point clouds. It comprises two key components: Skeleton-Aware Geometric Self-Attention and Skeleton-Aware Cross-Attention. They are interleaved $N_t$ times to further extract non-skeletal and skeletal hybrid features $(H^P, H^P_s)$ and $(H^Q, H^Q_s)$. These features encode inter-point-cloud and intra-point-cloud correlations and skeletal geometric priors. They contribute to accurate and robust coarse correspondences.

Figure 3: The structure (left) and computational graph (right) of skeleton-aware geometric self-attention.

Skeleton-Aware Geometric Self-Attention. In the following, we describe the computation for $\hat{P}$; the computation for $\hat{Q}$ is exactly the same. Given an input feature matrix $X \in \mathbb{R}^{L \times d_t}$ ($L = |\hat{P}| + N_s$ is the length of the input sequence), the output feature matrix $Z \in \mathbb{R}^{L \times d_t}$ is the weighted sum of all projected input features:
$$z_i = \sum_{j=1}^{L} a_{i,j}\,(x_j W^V), \quad (4)$$
where $a_{i,j}$ is the weight coefficient computed by a row-wise softmax on the attention score $e_{i,j}$, and $e_{i,j}$ is computed as:
$$e_{i,j} = \frac{(x_i W^Q)\,(x_j W^K + r^p_{i,j} W^P + r^s_{i,j} W^S)^T}{\sqrt{d_t}}. \quad (5)$$
Fig. 3 shows the computation of skeleton-aware geometric self-attention. Here, $r^s_{i,j}$ and $r^p_{i,j}$ are the Skeleton-Aware Structure Embedding and the Point-Wise Structure Embedding, respectively. We follow (Qin et al. 2022) to compute $r^p_{i,j}$, which encodes non-skeletal geometric structures between superpoints. $r^s_{i,j}$ encodes skeletal latent geometric information of the point clouds, which will be described next. $W^Q, W^K, W^V, W^P, W^S \in \mathbb{R}^{d_t \times d_t}$ are the respective projections for queries, keys, values, the point-wise structure embedding and the skeleton-aware structure embedding.

We design a novel approach, termed Skeleton-Aware Structure Embedding, to encode skeletal latent structural information in the geometric space. The insight is to leverage the transformation invariance and robustness of the local geometric structure formed by the skeleton points. This embedding includes a skeleton-wise distance embedding and a skeleton-wise angular embedding. They respectively capture distance and angle information of the local geometric structure formed by skeleton points around superpoints. Specifically, given two superpoints $\hat{p}_i, \hat{p}_j \in \hat{P}$, their k-NN skeleton points are $s^i_1, \dots, s^i_k \in K^s_i$ and $s^j_1, \dots, s^j_k \in K^s_j$, respectively. Based on them, as shown in Fig. 4, the computation of the skeleton-wise structure embedding $r^s_{i,j}$ is twofold: 1) Skeleton-Aware Distance Embedding.
For each superpoint $\hat{p}_j$, we first compute $\rho^s_j = \sum_{s^j_x \in K^s_j} d(s^j_x, \hat{p}_j)$, where $d(s^j_x, \hat{p}_j) = \| s^j_x - \hat{p}_j \|_2$ denotes the distance between $s^j_x$ and $\hat{p}_j$ in the Euclidean space. Then, the skeleton-aware distance embedding $d^s_{i,j}$ is computed by applying a sinusoidal function on $(\rho^s_i - \rho^s_j)/\sigma^s_d$. 2) Skeleton-Aware Angular Embedding. For each skeleton point $s^j_x \in K^s_j$, we first compute the angle $\theta^x_{i,j} = \angle(s^j_x - \hat{p}_j,\, \hat{p}_i - \hat{p}_j)$. Based on the angles, the skeleton-wise angular embedding $a^s_{i,j,x}$ is computed by applying a sinusoidal function on $\theta^x_{i,j}/\sigma^s_a$. Herein, $\sigma^s_d$ and $\sigma^s_a$ control the sensitivity to skeleton-wise distances and angles, respectively. The final skeleton-aware structure embedding $r^s_{i,j}$ is the aggregation of the skeleton-aware angular embedding $a^s$ and the skeleton-aware distance embedding $d^s$:
$$r^s_{i,j} = d^s_{i,j} W^D + \mathrm{mean}_x \{ a^s_{i,j,x} W^A \}, \quad (6)$$
where $W^D$ and $W^A$ are trainable weights. To help the successive cross-attention layers capture the geometric structure with skeletal priors, our skeleton-aware self-attention layers also produce the skeleton-aware positional encoding $E'$ by applying the attention scores to the skeleton-aware structure embedding $r^s_{i,j}$:
$$E'_{i,k} = \sum_{j=1}^{L} a_{i,j} \cdot r^s_{i,j,k}. \quad (7)$$

Figure 4: The computation of skeleton-aware structure embedding.
Figure 5: The structure (left) and computational graph (right) of skeleton-aware cross-attention.

Skeleton-Aware Cross-Attention. Several existing works (Qin et al. 2022; Yew and Lee 2022) have utilized the cross-attention mechanism for inter-point-cloud feature exchange. However, they either lack positional encoding or fail to explicitly consider the geometric structure of point clouds, leading to suboptimal performance. To address this, we propose the skeleton-aware cross-attention to explicitly learn the correlation of point clouds with skeletal priors, as shown in Fig. 5. Given feature maps with their skeleton-aware positional encoding, $(X^P, E'^P)$ and $(X^Q, E'^Q)$ for $\hat{P}$ and $\hat{Q}$ respectively, a skeleton-aware cross-attention layer first adds the positional encoding to the features to produce skeleton-aware features $X'^P$ and $X'^Q$. Then, the output for $\hat{P}$ is computed with the features of $\hat{Q}$:
$$z^P_i = \sum_{j=1}^{|\hat{Q}|} a_{i,j}\,(x'^Q_j W^V). \quad (8)$$
Similarly, the weights $a_{i,j}$ are computed by a row-wise softmax on the attention score $e_{i,j}$:
$$e_{i,j} = \frac{(x'^P_i W^Q)\,(x'^Q_j W^K)^T}{\sqrt{d_t}}. \quad (9)$$
The same cross-attention implementation goes for $\hat{Q}$. In contrast to the skeleton-aware geometric self-attention, which captures the intra-point-cloud transformation-invariant geometric structure, the cross-attention here captures inter-point-cloud geometric correlations and consistency. The hybrid features obtained from SAGTR are therefore discriminative enough for matching.

Correspondence Dual-Sampler
With discriminative features, it is vital to extract accurate coarse correspondences. Geometric Transformer (Qin et al. 2022) only matches the superpoints. Despite its efficiency, superpoints are sparse and may be unrepeatable, leading to outlier correspondences. Existing efforts strive to tackle this issue with sophisticated sampling strategies (Li et al. 2023) or overlap priors from 2D images (Yu et al. 2023). Unfortunately, the complicated sampling and extra 2D images they require result in suboptimal computational efficiency, which hinders the application of such methods. To this end, we propose CDS to effectively augment the correspondence set with our skeletal representation, leading to a more accurate and robust hybrid coarse correspondence set. We separately construct the non-skeletal correspondence set C and the skeletal correspondence set $C_s$ by feature matching: we first compute the Gaussian correlation matrix $S \in \mathbb{R}^{|\hat{P}| \times |\hat{Q}|}$ for the normalized features $F^P$ and $F^Q$, and then use a dual-normalization operation (Sun et al. 2021; Rocco et al. 2018) to suppress ambiguous matches. Finally, we select at most $N_c$ largest entries for each correspondence set.

Since skeleton points lying in non-overlap regions may introduce outlier correspondences, we introduce a Spectral Denoising procedure to filter $C_s$ with a spectral matching algorithm (Leordeanu and Hebert 2005): we first compute a compatibility matrix based on the 3D spatial consistency of $C_s$. Then, we iteratively remove components conflicting with the item of the maximum principal eigenvector until either the principal eigenvector becomes zero or $|C_s|$ equals the minimum number of the main cluster. The main cluster, denoted as $C'_s$, is the final skeletal correspondence set. Finally, we resample the least confident $N_s$ entries of C with the top $N_k$ correspondences in $C'_s$ to obtain the hybrid correspondence set $C'$, thereby improving the accuracy and robustness of the hybrid coarse correspondence set by replacing potential outliers with more reliable correspondences.

Losses
We use a registration loss and a skeleton loss to supervise SPEAL. The registration loss consists of the Overlap-aware Circle Loss ($L_{oc}$) and the Point Matching Loss ($L_p$) from Geometric Transformer (Qin et al. 2022):
$$L = L_{oc} + L_p. \quad (10)$$
The skeleton loss in Lin et al. (2021) is used to supervise the SEM. It is the weighted sum of the Sampling Loss $L_s$, the Point-to-sphere Loss $L_{p2s}$ and the Radius Regularizing Loss $L_r$:
$$L_{skeleton} = L_s + \lambda_1 L_{p2s} + \lambda_2 L_r, \quad (11)$$
where $\lambda_1$ and $\lambda_2$ are hyperparameters to balance the losses. Please refer to the supplementary material for more details.

Experiments
Datasets and Experimental Setup
Same-Source Dataset. The KITTI Odometry dataset (Geiger, Lenz, and Urtasun 2012) serves as a widely used dataset for odometry and SLAM evaluation. It can also be employed to test same-source point cloud registration. This dataset comprises 11 sequences of LiDAR point clouds. We follow existing practices (Qin et al. 2022; Huang et al. 2021a) to use sequences 00-06 for training, sequences 07-08 for evaluation and sequences 09-10 for testing.
Cross-Source Datasets. Currently, there are few cross-source datasets of large-scale outdoor scenes available for registration tasks. This hinders the development of cross-source registration methods. Therefore, we have developed a novel dataset¹ termed KITTI CrossSource derived from KITTI Odometry. Our proposed dataset includes 11 sequences of LiDAR point clouds and reconstructed point clouds generated from stereo images using MonoRec (Wimbauer et al. 2021). We improve the reconstruction quality with a filter-and-combine strategy. Please refer to the supplementary material for details.
The GermanyForest3D dataset is derived from an existing large-scale forest scene dataset (Weiser et al. 2022). It contains cross-source point cloud data acquired in 12 forest plots in south-west Germany under leaf-on and leaf-off conditions. Each plot provides Airborne Laser Scanning (ALS), Terrestrial Laser Scanning (TLS) and UAV-borne Laser Scanning (ULS) point clouds.
In this paper, we use ALS and ULS scans to evaluate the cross-source registration performance. In experiments, we use 10 plots for training, 1 for validation and 1 for testing. Data Preprocessing. For the GermanyForest3D dataset, the point clouds of each plot are subdivided into 30m × 30m × 30m blocks to make them suitable for the registration task. For all datasets, the Iterative Closest Point (ICP) algorithm from the Open3D library (Zhou, Park, and Koltun 2018) is used to refine the noisy ground truth transformation, following previous works (Qin et al. 2022; Lu et al. 2021). The point clouds are downsampled with a voxel size of 0.3m. Metrics. Following previous practices (Qin et al. 2022; Lu et al. 2021; Huang et al. 2021a), we evaluate the registration performance using following metrics: Relative Rotation Error (RRE), Relative Translation Error (RRE), and Registration Recall (RR). We use a RRE threshold and a RTE 1The dataset will be made publicly available. Please refer to github.com/kezheng1204/KITTI-CrossSource for updates. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6283 Method KITTI CrossSource KITTI Odometry RRE(◦) RTE(m) RR(%) RRE(◦) RTE(m) RR(%) RANSAC 6.14 9.46 0.8 0.54 0.13 91.9 FGR – – – 0.96 0.93 39.4 FCGF – – – 0.30 0.095 96.6 DGR – – – 0.37 0.320 98.7 HRegNet 2.19 0.84 69.3 0.29 0.120 99.7 CoFiNet 1.99 0.81 67.6 0.41 0.085 99.8 Predator 5.06 2.59 34.2 0.27 0.068 98.8 PCAM 4.07 2.40 45.9 0.79 0.12 98.0 GeoTrans. 1.87 0.63 96.8 0.24 0.068 99.8 SPEAL 1.41 0.58 97.3 0.23 0.069 99.8 Table 1: Cross-source (the proposed KITTI CrossSource) and same-source (KITTI Odometry) registration results on the KITTI datasets. ”–” indicates the method is not applicable to the dataset. threshold to compute RR for all datasets (RRE < 0.5◦and RTE < 0.3m for GermanyForest3D and RRE < 5◦and RTE < 2m for KITTI datasets). Additionally, we measure the quality of correspondences with Inlier Ratio (IR), which is the fraction of extracted correspondences whose residuals are below a certain threshold under the ground-truth transformation. Implementation Details. To train SPEAL, we use Adam (Kingma and Ba 2014) optimizer with an initial learning rate of 1e −4 and a weight decay of 1e −6. We train SPEAL for 200 epochs with a batch size of 1 on a NVIDIA RTX 3090 GPU. Baselines. We compare our method with state-of-the-art methods of three classes: (a) Traditional methods, including RANSAC (Fischler and Bolles 1981) and FGR (Zhou, Park, and Koltun 2016). (b) Transformer-based methods, including CoFiNet (Yu et al. 2021a), Predator(Huang et al. 2021a), PCAM (Cao et al. 2021), REGTR (Yew and Lee 2022) and Geometric Transformer (abbreviated as GeoTrans.) (Qin et al. 2022). (c) Other learning-based methods, including FCGF (Choy, Park, and Koltun 2019), DGR (Choy, Dong, and Koltun 2020) and HRegNet (Lu et al. 2021). Cross-Source Results KITTI CrossSource. The quantitative results are reported in Table 1. Our method achieves state-of-the-art performance on this dataset. For traditional methods, FGR is not applicable to this dataset and our approach outperforms RANSAC by a large margin. HRegNet is a recent SOTA for outdoor large-scale scenes. However, it presents suboptimal performance on this dataset, showing considerable performance decay for cross-source data. In contrast, our SPEAL is more accurate in terms of all metrics, and has a 28% higher RR than HRegNet. Among transformer-based methods, our method surpasses GeoTrans. 
by a large margin, showing the effectiveness of the integrated skeletal priors. GermanyForest3D. This cross-source dataset is with large scale and unstructured scenes, which are challenging for registration. The evaluation results under different overlap ratios are shown in Table 2. Traditional methods show suboptimal performance, and RANSAC even fails to register low Overlap RRE (◦) RTE (m) RR(%) ≤30% > 30% ≤30% > 30% ≤30% > 30% RANSAC 112.0 91.1 24.66 17.4 – – FGR 41.86 28.3 14.32 8.49 – – FCGF 1.54 0.53 0.49 0.19 8.7 56.6 DGR 1.06 0.38 0.36 0.10 32.8 76.2 HRegNet 1.16 1.40 0.141 0.238 24.7 41.7 REGTR 3.11 2.48 0.89 0.73 11.3 23.9 GeoTrans. 0.328 0.176 0.097 0.053 88.1 96.5 SPEAL(ours) 0.296 0.165 0.088 0.048 91.7 99.3 Table 2: Cross-source registration results on the GermanyForest3D dataset. ”–” indicates that the method is not applicable to the dataset. overlap point clouds. Learning-based methods overall perform better. However, current methods still suffer from considerable performance decay especially under low overlap condition. SPEAL outperforms all the others by a large margin, showing outstanding robustness introduced by skeletal priors in low overlap and unstructured condition. Same-Source Results Table 1 also lists the quantitative results on the same-source dataset KITTI Odometry. Compared with recent state-ofthe-arts, our method achieves comparable performance in terms of RTE and RR, and outperforms all the other methods in terms of RRE. This result indicates that our method is also effective for same-source registration, while achieving state-of-the-art performance for cross-source registration. Analysis Effectiveness of the Skeletal Priors. To qualitatively verify the effectiveness of the skeletal representation, we visualize the hybrid correspondences from the CDS module, including superpoints and skeleton points. The qualitative results on KITTI CrossSource are shown in Fig. 6. In addition to the challenges of partial overlap and density differences, this scan also presents the challenge of unstructured objects. However, SPEAL is still able to extract right correspondences with the help of skeletons, while the current stateof-the-art, GeoTrans., completely fails. SPEAL achieves an IR of 36.8%, which is nearly 10× higher than GeoTrans. It is worth noting that with our spectral denoising step, the skeletal correspondences achieve an IR of 75%, demonstrating the effectiveness of the spectral denoising step. Robustness. Fig. 7(a) displays registration recalls with different RRE and RTE thresholds in KITTI CrossSource. SPEAL consistently outperforms the other methods. In particular, it achieves a registration recall of 86.04% in the challenging low overlap of 30% ∼50%, which is 9.3% higher than the second-best method. In addition, Fig. 7(b) compares registration recalls and inlier ratios under different overlap with GeoTrans. Our method consistently achieves higher inlier ratios under all overlap ratios. This demonstrates the superior quality of correspondences generated by our method with skeletal priors. The results prove that our SPEAL is robust to various threshold settings, and demonstrates outstanding performance in low overlap condition. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6284 (a) Gorund Truth (b) Pose (SPEAL) (c) Pose (GeoTrans.) (d) Superpoint Correspondences (GeoTrans.) (e) Superpoint Correspondences (SPEAL) (f) Skeletal Correspondences (SPEAL) Figure 6: Qualitative results. 
Red and green denote outlier and inlier correspondences, respectively. Ablation Studies Overall Effectiveness. We conduct ablation studies to assess the effectiveness of SPEAL on GermanyForest3D. We compare different configurations of SPEAL, including: (a) vanilla geometric self-attention and vanilla cross attention, (b) skeleton-aware self-attention and vanilla cross attention, (c) skeleton-aware self-attention and skeleton-aware cross attention. In addition, we also compare with (d) the method without the CDS module, which only samples the coarse correspondences from superpoints. The results in Table 3 demonstrate the effectiveness of our design. Model RRE(◦) RTE(m) RR(%) (a) vanilla 0.19 0.054 96.2 (b) cross attn. w/o SPE 0.18 0.051 97.4 (c) w/ SSE & SPE 0.18 0.049 97.1 (d) corr. w/o CDS 0.19 0.051 96.3 (e) SPEAL (Ours) 0.16 0.048 99.3 Table 3: Ablation studies on SPEAL. Spectral Denoising in CDS. To validate the effectiveness of spectral denoising, we also ablate the CDS module. We compare different schemes for applying the spectral denoising. The results in Table 4 demonstrate that the spectral denois(a) Registration recalls with different RRE and RTE thresholds (b) Correspondence and registration results under different overlap ratios Figure 7: Quantitative results on robustness of SPEAL on the KITTI CrossSource dataset. Spectral Denoising RRE(◦) RTE(m) RR(%) Sup. Corr. Skel. Corr. 0.17 0.051 98.1 ✓ 0.20 0.063 97.8 ✓ ✓ 0.19 0.062 98.1 ✓ 0.16 0.048 99.3 Table 4: Ablation studies on the CDS module. Sup. Corr. and Skel. Corr. denote the superpoint correspondences and skeletal correspondences, respectively. ing step is only necessary for the skeletal correspondences and leads to inferior performance in other configurations. Conclusion In this paper, we have proposed SPEAL, a novel point cloud registration method that leverages a MAT-based skeletal representation to capture the geometric intricacies, thereby facilitating registration. We introduce SEM to extract the skeleton points and their skeletal features. Furthermore, we design SAGTR and CDS which explicitly integrate skeletal priors to ensure robust and accurate correspondences. Extensive experiments demonstrate that SPEAL is effective for both same-source and cross-source point cloud registration. Acknowledgments This work was supported in part by the National Key R&D Program of China under Grant 2021YFF0704600, the Fundamental Research Funds for the Central Universities (No. 20720220064). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6285 References Aoki, Y.; Goforth, H.; Srivatsan, R. A.; and Lucey, S. 2019. Pointnetlk: Robust & efficient point cloud registration using pointnet. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 7163–7172. Au, O. K.-C.; Tai, C.-L.; Chu, H.-K.; Cohen-Or, D.; and Lee, T.-Y. 2008. Skeleton extraction by mesh contraction. ACM transactions on graphics (TOG), 27(3): 1–10. Bai, X.; Luo, Z.; Zhou, L.; Chen, H.; Li, L.; Hu, Z.; Fu, H.; and Tai, C.-L. 2021. Pointdsc: Robust point cloud registration using deep spatial consistency. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15859–15869. Besl, P. J.; and McKay, N. D. 1992. Method for registration of 3-D shapes. In Sensor fusion IV: control paradigms and data structures, volume 1611, 586–606. Spie. Blum, H. 1967. A transformation for extracting new descriptions of shape. Models for the perception of speech and visual form, 362–380. 
Cao, A.-Q.; Puy, G.; Boulch, A.; and Marlet, R. 2021. PCAM: Product of cross-attention matrices for rigid registration of point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 13229–13238. Cao, J.; Tagliasacchi, A.; Olson, M.; Zhang, H.; and Su, Z. 2010. Point cloud skeletons via laplacian based contraction. In 2010 Shape Modeling International Conference, 187–197. IEEE. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, 213–229. Springer. Choy, C.; Dong, W.; and Koltun, V. 2020. Deep global registration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2514–2523. Choy, C.; Park, J.; and Koltun, V. 2019. Fully convolutional geometric features. In Proceedings of the IEEE/CVF international conference on computer vision, 8958–8966. Deng, H.; Birdal, T.; and Ilic, S. 2018a. Ppf-foldnet: Unsupervised learning of rotation invariant 3d local descriptors. In Proceedings of the European conference on computer vision (ECCV), 602–618. Deng, H.; Birdal, T.; and Ilic, S. 2018b. Ppfnet: Global context aware local features for robust 3d point matching. In Proceedings of the IEEE conference on computer vision and pattern recognition, 195–205. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Fischler, M. A.; and Bolles, R. C. 1981. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6): 381–395. Fu, K.; Liu, S.; Luo, X.; and Wang, M. 2021. Robust point cloud registration framework based on deep graph matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8893–8902. Geiger, A.; Lenz, P.; and Urtasun, R. 2012. Are we ready for autonomous driving? the kitti vision benchmark suite. In 2012 IEEE conference on computer vision and pattern recognition, 3354–3361. IEEE. Gojcic, Z.; Zhou, C.; Wegner, J. D.; and Wieser, A. 2019. The perfect match: 3d point cloud matching with smoothed densities. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5545–5554. Huang, H.; Wu, S.; Cohen-Or, D.; Gong, M.; Zhang, H.; Li, G.; and Chen, B. 2013. L1-medial skeleton of point cloud. ACM Trans. Graph., 32(4): 65–1. Huang, S.; Gojcic, Z.; Usvyatsov, M.; Wieser, A.; and Schindler, K. 2021a. Predator: Registration of 3d point clouds with low overlap. In Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition, 4267–4276. Huang, X.; Mei, G.; and Zhang, J. 2020. Feature-metric registration: A fast semi-supervised approach for robust point cloud registration without correspondences. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 11366–11374. Huang, X.; Mei, G.; Zhang, J.; and Abbas, R. 2021b. A comprehensive survey on point cloud registration. arXiv preprint arXiv:2103.02690. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Lee, J.; Kim, S.; Cho, M.; and Park, J. 2021. Deep hough voting for robust global registration. 
In Proceedings of the IEEE/CVF International Conference on Computer Vision, 15994–16003. Leordeanu, M.; and Hebert, M. 2005. A spectral technique for correspondence problems using pairwise constraints. In Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1, volume 2, 1482–1489. IEEE. Li, P.; Wang, B.; Sun, F.; Guo, X.; Zhang, C.; and Wang, W. 2015. Q-mat: Computing medial axis transform by quadratic error minimization. ACM Transactions on Graphics (TOG), 35(1): 1–16. Li, Y.; Tang, C.; Yao, R.; Ye, A.; Wen, F.; and Du, S. 2023. HybridPoint: Point Cloud Registration Based on Hybrid Point Sampling and Matching. arXiv preprint arXiv:2303.16526. Lin, C.; Li, C.; Liu, Y.; Chen, N.; Choi, Y.-K.; and Wang, W. 2021. Point2skeleton: Learning skeletal representations from point clouds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4277– 4286. Lin, T.-Y.; Doll´ar, P.; Girshick, R.; He, K.; Hariharan, B.; and Belongie, S. 2017. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2117–2125. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6286 Lu, F.; Chen, G.; Liu, Y.; Zhang, L.; Qu, S.; Liu, S.; and Gu, R. 2021. Hregnet: A hierarchical network for largescale outdoor lidar point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 16014–16023. Ma, W.-C.; Wu, F.-C.; and Ouhyoung, M. 2003. Skeleton extraction of 3D objects with radial basis functions. In 2003 Shape Modeling International., 207–215. IEEE. Misra, I.; Girdhar, R.; and Joulin, A. 2021. An end-to-end transformer model for 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2906–2917. Pais, G. D.; Ramalingam, S.; Govindu, V. M.; Nascimento, J. C.; Chellappa, R.; and Miraldo, P. 2020. 3dregnet: A deep neural network for 3d point registration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 7193–7203. Qin, Z.; Yu, H.; Wang, C.; Guo, Y.; Peng, Y.; and Xu, K. 2022. Geometric transformer for fast and robust point cloud registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11143–11152. Rocco, I.; Cimpoi, M.; Arandjelovi´c, R.; Torii, A.; Pajdla, T.; and Sivic, J. 2018. Neighbourhood consensus networks. Advances in neural information processing systems, 31. Shi, R.; Xue, Z.; You, Y.; and Lu, C. 2021. Skeleton merger: an unsupervised aligned keypoint detector. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 43–52. Sun, F.; Choi, Y.-K.; Yu, Y.; and Wang, W. 2015. Medial meshes–a compact and accurate representation of medial axis transform. IEEE transactions on visualization and computer graphics, 22(3): 1278–1290. Sun, J.; Shen, Z.; Wang, Y.; Bao, H.; and Zhou, X. 2021. LoFTR: Detector-free local feature matching with transformers. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8922–8931. Thomas, H.; Qi, C. R.; Deschaud, J.-E.; Marcotegui, B.; Goulette, F.; and Guibas, L. J. 2019. Kpconv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF international conference on computer vision, 6411–6420. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30. Wang, Y.; and Solomon, J. M. 
2019a. Deep closest point: Learning representations for point cloud registration. In Proceedings of the IEEE/CVF international conference on computer vision, 3523–3532. Wang, Y.; and Solomon, J. M. 2019b. Prnet: Self-supervised learning for partial-to-partial registration. Advances in neural information processing systems, 32. Weiser, H.; Sch¨afer, J.; Winiwarter, L.; Kraˇsovec, N.; Fassnacht, F. E.; and H¨ofle, B. 2022. Individual tree point clouds and tree measurements from multi-platform laser scanning in German forests. Earth System Science Data, 14(7): 2989– 3012. Wen, C.; Yu, B.; and Tao, D. 2023. Learnable SkeletonAware 3D Point Cloud Sampling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17671–17681. Wimbauer, F.; Yang, N.; Von Stumberg, L.; Zeller, N.; and Cremers, D. 2021. MonoRec: Semi-supervised dense reconstruction in dynamic environments from a single moving camera. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6112–6122. Xu, H.; Liu, S.; Wang, G.; Liu, G.; and Zeng, B. 2021. Omnet: Learning overlapping mask for partial-to-partial point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3132–3141. Xu, Z.; Zhou, Y.; Kalogerakis, E.; and Singh, K. 2019. Predicting animation skeletons for 3d articulated models via volumetric nets. In 2019 international conference on 3D vision (3DV), 298–307. IEEE. Yan, Y.; Letscher, D.; and Ju, T. 2018. Voxel cores: Efficient, robust, and provably good approximation of 3d medial axes. ACM Transactions on Graphics (TOG), 37(4): 1–13. Yao, Y.; Deng, B.; Xu, W.; and Zhang, J. 2020. Quasinewton solver for robust non-rigid registration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 7600–7609. Yew, Z. J.; and Lee, G. H. 2020. Rpm-net: Robust point matching using learned features. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 11824–11833. Yew, Z. J.; and Lee, G. H. 2022. Regtr: End-to-end point cloud correspondences with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6677–6686. Yu, H.; Li, F.; Saleh, M.; Busam, B.; and Ilic, S. 2021a. Cofinet: Reliable coarse-to-fine correspondences for robust pointcloud registration. Advances in Neural Information Processing Systems, 34: 23872–23884. Yu, J.; Ren, L.; Zhang, Y.; Zhou, W.; Lin, L.; and Dai, G. 2023. PEAL: Prior-Embedded Explicit Attention Learning for Low-Overlap Point Cloud Registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17702–17711. Yu, X.; Rao, Y.; Wang, Z.; Liu, Z.; Lu, J.; and Zhou, J. 2021b. Pointr: Diverse point cloud completion with geometry-aware transformers. In Proceedings of the IEEE/CVF international conference on computer vision, 12498–12507. Zhou, Q.-Y.; Park, J.; and Koltun, V. 2016. Fast global registration. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14, 766–782. Springer. Zhou, Q.-Y.; Park, J.; and Koltun, V. 2018. Open3D: A modern library for 3D data processing. arXiv preprint arXiv:1801.09847. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6287 | 2024 | 698 |
18,515 | Patched Line Segment Learning for Vector Road Mapping Jiakun Xu1, Bowen Xu1, Gui-Song Xia1, Liang Dong2, Nan Xue *1,3 1School of Computer Science, Wuhan University 2Google Inc. 3Ant Group {jiakun.xu, xbw, guisong.xia}@whu.edu.cn, [email protected], [email protected] Abstract This paper presents a novel approach to computing vector road maps from satellite remotely sensed images, building upon a well-defined Patched Line Segment (PaLiS) representation for road graphs that holds geometric significance. Unlike prevailing methods that derive road vector representations from satellite images using binary masks or keypoints, our method employs line segments. These segments not only convey road locations but also capture their orientations, making them a robust choice for representation. More precisely, given an input image, we divide it into non-overlapping patches and predict a suitable line segment within each patch. This strategy enables us to capture spatial and structural cues from these patch-based line segments, simplifying the process of constructing the road network graph without the necessity of additional neural networks for connectivity. In our experiments, we demonstrate how an effective representation of a road graph significantly enhances the performance of vector road mapping on established benchmarks, without requiring extensive modifications to the neural network architecture. Furthermore, our method achieves state-of-the-art performance with just 6 GPU hours of training, leading to a substantial 32-fold reduction in training costs in terms of GPU hours. 1 Introduction By “vector road mapping”, it refers to a process of converting the road features presented in satellite-borne remote sensing images into vector-based and symbolic graph representations, which is also known as road graph extraction or road network extraction within the community of remote sensing and plays a fundamental role in numerous downstream tasks including navigation (Zhang et al. 2021; Cai et al. 2023), urban planning (Shi et al. 2019; Xu et al. 2023a), and autonomous driving (Xu, Sun, and Liu 2021; He and Balakrishnan 2022; B¨uchner et al. 2023). The state-of-the-art methods for vector road mapping primarily rely on the strong representation capabilities of deep neural networks. These approaches formulate the problem as a supervised learning task, utilizing paired satellite images and annotated road graphs that use vertices and edges to depict the line and curve structures of roads. As the input *Corresponding Author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. (a) Input image (b) Ground truth (c) Dense GT (d) Road graph learned from Keypoints (Xu et al. 2023b) (e) Road graph learned via PaLiS (Ours) Figure 1: Illustration of graphs constructed by different representations. The predicted representations (keypoints and line segments) are denoted in yellow marks and the connectivities are denoted in orange marks. images are in pixel form, it becomes crucial to establish an appropriate representation for facilitating the learning from the pixels of satellite images to the vector representation of roads. In the state-of-the-art methods (Batra et al. 2019; He et al. 2020; Xu et al. 
2023b), the “appropriate representation” of vector road annotations were initially come down to mask-based representation (i.e., road masks) and were then upgraded to the keypoint-based graph representation as the main representation in the pursuit of end-to-end learning. While keypoint-based graph representations have demonstrated remarkable performance, many of these methods encounter a significant drawback: the substantial training cost The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6288 involved. For instance, RNGDet++ model (Xu et al. 2023b) requires approximately 192 GPU hours to train on a dataset of moderate size with thousands of images. This high training cost can be attributed to the prevalent oversampling strategy used to define the “keypoints” in the original annotations (depicted in Fig. 1(b)). This strategy involves densely sampling numerous points along each road, as shown in Fig. 1(c), lacking invariance to commonly employed image transformations used for data augmentation, such as random cropping and image translation, and eventually results in ambiguity during the learning process. As a consequence, methods employing keypoint-based graph representations must grapple with inherent representation ambiguity, requiring a greater number of training iterations. Such a prolonged training process often entails cluttered patterns in the keypoint detection outcomes, as illustrated by the enclosed regions in Fig. 1(d). Furthermore, the keypoint-based representations have to leverage additional modules to learn the connectives for the learned keypoints on the fly to accomplish the task of vector road mapping. In this paper, we devote ourselves to finding a better representation of vector road annotations, to eliminate the ambiguity in the existing keypoint-based graph representations, for the sake of efficient learning during training and topperforming mapping results in the testing phase. Our study is motivated by the recently-proposed PaRK-Detect (Xie et al. 2023) that defines patched keypoints, in which each small patch (e.g., 16×16) will have at most one keypoint for learning. Because the local patches are uniformly distributed over the image grids, such a definition largely eliminates the ambiguity for learning. However, since the keypoints are unary primitives that do not explicitly define the spatial relationships, PaRK-Detect (Xie et al. 2023) only obtained comparable performance in testing. Motivated by this, we are interested in presenting a patched representation to take its ambiguity-free merits while retaining the spatial context for facilitating the final vector road mapping. Our work is inspired by an observation that the spatial and geometric information of roads in local patches can be well represented by line segments instead of keypoints. Based on this, we present a novel PaLiS (Patched Line Segment) representation to depict the annotated road graphs in a geometrically-meaningful way while enjoying the ambiguity-free merits of patch-based representation. By dividing the grid of input images into a set of local (e.g., 8 × 8) patches, most of the local patches that contain a fragment of road path can uniquely define the only local line segment. To preserve the rich structural information of the local line segments, we use the closed-form xy −xy representation for the two endpoints of a line segment, which facilitates the computation of patch adjacency in a geometricallymeaningful way. As shown in Fig. 
1(e), our proposed PaLiS representation can handle a variety of road graph patterns in a unified representation. With the proposed PaLiS representation, we find that the patched line segments can be reliably learned via rasterized road masks as supervision through differentiable rasterization, largely alleviating the need for vectorized road graph annotations. In the experiments, we demonstrate that our proposed PaLiS representation clearly sets new state-of-the-art performance on two public benchmarks, i.e., City-Scale (He et al. 2020) and SpaceNet (Van Etten, Lindenbaum, and Bacastow 2018), without requiring any extra effort on the network design. Beyond the competitive performance on these two benchmarks, our method only requires 6 GPU hours for training, reducing the training cost by 32 times compared with the prior art, RNGDet++ (Xu et al. 2023b). As shown in Fig. 2, for the performance evaluation by training iterations on the City-Scale dataset, our proposed method wins after the first 1K iterations by significant margins and converges to the state-of-the-art performance after 20K iterations of training.
Figure 2: Convergence curves (APLS versus training iterations) on the City-Scale dataset.
In summary, our paper makes the following contributions:
• We propose a novel representation of road graphs, the patched line segment representation, which facilitates the learning of road graphs with the best efficacy in both the training and testing phases.
• Based on our patched line segment representation, we present a graph construction strategy for the task of vector road mapping, which takes advantage of the geometric nature of our representation to produce vector graphs without using any additional neural networks for learning the connectivity between keypoints.
• Our proposed patched line segment representation is learnable and compatible with the mask-based representation by leveraging a differentiable soft rasterizer, which helps to learn the patched line segments efficiently without introducing additional vector labels.
2 Related Works
Road Graph Representations. There have been plenty of studies on vector road mapping, mainly relying on either the rasterized road map or keypoint/vertex-based graph representations, which fall into two categories: the segmentation-based (Máttyus, Luo, and Urtasun 2017; Zhou, Zhang, and Wu 2018; Mei et al. 2021; Wang et al. 2023; Batra et al. 2019; Cheng et al. 2021) and the keypoint-based approaches (He et al. 2020; He, Garg, and Chowdhury 2022; Shit et al. 2022; Yang et al. 2023; Xie et al. 2023). Given the popularity of end-to-end learning for better performance, the state-of-the-art approaches (He, Garg, and Chowdhury 2022; Xu et al. 2023b) mainly learn keypoints (i.e., graph vertices) and the connectivity between vertices, while using the rasterized road masks/maps as additional supervision signals to enhance the feature representation ability of ConvNets.
Figure 3: An illustrative figure for the proposed Patched Line Segment (PaLiS) representation: (a) the road ground truth is patchified into (b) the PaLiS representation with singular patches, intersected patches, and patched line segments. A larger patch size is applied for better illustration.
Apart from the representation ambiguity issue discussed in Sec.
1 for prolonged learning schedule, these representations mainly focus on point primitives instead of the line structure of road graphs, thus usually requiring additional design to learn or infer the connectivity between points/pixels. Regarding the above issues, we present a novel line-segment-based representation that defines the road graphs in the local image patches while characterizing the structural information of roads using line segments. We show that our well-defined and geometrically-meaningful representation largely facilitates the learning process of vector road mapping with the best efficacy. Line Segment Learning and Differentiable Rasterization. There has been a vast body of literature studying the line segments from both computer vision (CV) and graphics (CG) communities. On one hand, many works study the problem of line segment detection (Xue et al. 2019, 2020, 2021, 2022), which is similar to vector road mapping but mainly focuses on the line segment itself instead of the road graphs. On another hand, some CG researchers study the differentiable vector graphics rasterization/rendering (Li et al. 2020; Xie et al. 2014), in which they aim at using graphic primitives such as points, lines, and curves to represent rasterized digital images. The differentiable rasterization techniques were also applicable to the polygonal shape representation with end-to-end learning in instance segmentation (Lazarow, Xu, and Tu 2022) and polygonal building extraction (Zorzi et al. 2022). Our study is inspired by all these studies, but we pay more attention on the well-posedness of the primitive definition for the complicated road graphs/networks. By thinking of local patches, we eventually derive our novel PaLiS representation and set new state-of-the-art performance for the task of vector road mapping. 3 PaLiS Representation of Road Graphs In this section, we elaborate on the proposed PaLiS representation of road graphs. Denoted by the input satellite image I ∈R3×H×W and the corresponding road graph annotation R = {Γi(t) ∈R2|t ∈[0, 1]}, where Γi(t) is a parameterized 2D curve/line, Γi(0) and Γi(1) respectively represent the two endpoints of the parameterized curve. We use the local p × p patches to patch-wisely define the “key” line segments and eventually form the new PaLiS representation of road graphs. We assume that the patch size p is divisible by H and W without loss of generality. The Main Representation By generating a set of N non-overlapping p×p patches {Pi} where N = H p × W p , we define the patched line segment for each local patch Pi. As shown in Fig. 3(b), there are three cases for each patch Pi depending on the number of roads passing through the patch, denoted by N(Pi) ∈N. If N(Pi) = 0, we term it as the background patch (i.e., the gray patches in Fig. 3). If N(Pi) = 1, we uniquely define its patched line segment, denoted by PaLiS(Pi) = (xu i , yu i , xv i , yv i ) ∈R4if N(Pi) = 1. (1) For those patches that satisfy N(Pi) > 1, we cannot uniquely define their line segments, but we found such patches are playing a key role to construct the expected road graphs. As shown in Fig. 4, we further study the properties of the patches that have N(Pi) ≥1. In Fig. 4(a), the foreground patches clearly define a (local) straight road without ambiguity. But for the patches that have N(Pi) > 1, there are two types as shown in Fig. 4(b) and 4(c), depending on if there is an annotated “keypoint” to connect the multiple road paths in one keypoint. 
If there is such keypoint annotation, we call such a type as the X-type. Otherwise, the multiple road paths passing through the patch Pi will have different elevations like the overpasses, and we called them as the Ttype patches. In summary, the proposed PaLiS representation firstly samples N non-overlapping local patches and identifies the foreground patches by three different types, the I-type, Xtype and T-type, and defines the local line segments for the I-type patches in the form of (xu i , yu i , xv i , yv i ) to retain the geometric information of road paths. In the next section, we will show how to learn our proposed PaLiS representation for the task of vector road mapping. Road Graph Reconstruction from PaLiS Thanks to our geometric PaLiS representation, the road graphs can be reasonably reconstructed without leveraging 𝑙𝑖𝑒𝑖𝑒𝑗𝑙𝑗 (a) I-type 𝑙𝑖 𝑙𝑗 (b) X-type 𝑙𝑖 𝑙𝑗 (c) T-type Figure 4: Illustration of different types of foreground patches. Patched line segments are denoted in cyan markers and connectivities are denoted in dashed markers. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6290 MLP MLP 𝒙𝒊 𝒖, 𝒚𝒊 𝒖, 𝒙𝒊 𝒗, 𝒚𝒊 𝒗 𝐶 Input image Training Inference Final road graph 𝐹! 𝐹" Mask 𝑆 PaLiS (𝐿and 𝑀) Seg Head 𝐹! Backbone Pixel-level Patch-level Conv Conv Graph construction Figure 5: Proposed Method Pipeline: The process begins with an input image. (1) An Encoder-Decoder network extracts pixel-level (FI) and patch-level (FP) feature maps. (2) The Patch-level branch uses FP to predict line segments (coordinates (xu i , yu i , xv i , yv i )) and patch types (C). (3) The Pixel-level branch generates a binary mask of road centerlines from FI. (4) Finally, a road graph is reconstructed using the PaLiS representation. Note: Enlarged patches are shown for clarity. another subnetwork for the learning of graph connectivity. Here, we hypothesize that the PaLiS representation can be reliably learned and defer the learning details in Sec. 4. We developed a geometrically-meaningful scheme to reconstruct the road graphs from our PaLiS representation (see our supp. material for the pseudo code) by considering the properties of I-type, X-type and T-type foreground patches in the following three cases: • As shown in Fig. 4(a), we first consider the most common case for the I-type patches. For two adjacent I-type patches Pi and Pj, their line segments li = PaLiS(Pi) and lj = PaLiS(Pj) are connected with the observation that line segments of adjacent I-type patches share a common endpoint. We formulate the rule based on the shape distance ds(A, B), which represents the shortest perpendicular distance between shapes A and B. Two line segments are connected if the average of ds(lj, ei) and ds(li, ej) is less than a given distance threshold τd, where ei is the endpoint of li close to the line segment lj and ej is the endpoint of lj close to the line segment li. • While encountering the X-type patch PX (e.g., cross roads), line segments surrounding the patch PX are extended to an intersection as shown in Fig. 4(b). To achieve this, candidate intersections are calculated by pairing up lines segments around the patch PX. The intersection Ii,j ∈R2 of the line segment pair (li, lj) is valid if the two line segments intersect within the patch PX. And the final intersection Ifinal is obtained by averaging the position of all candidate intersections and is connected to the surrounding line segments. 
• Regarding the T-type patch PT (e.g., overpasses), road layouts at different heights are formed by the directional and spatial extension of roads, as shown in Fig. 4(c). We pair up line segments around the patch PT, and the connection of a line segment pair (li, lj) is valid if the shape distance ds(li, lj) and the angle difference dangle(li, lj) are less than the distance threshold τd and the angle threshold τa, respectively.
4 Learning PaLiS Representations
In this section, we show how to reliably learn the proposed PaLiS representation for vector road mapping with an off-the-shelf ConvNet. We use an encoder-decoder network, DLinkNet (Zhou, Zhang, and Wu 2018), with the lightweight ResNet-34 (He et al. 2016) as the backbone encoder to extract feature maps for the learning of PaLiS. Fig. 5 shows the overall pipeline of our approach. For the learning of the PaLiS representation, two head networks are leveraged: one to classify the patches according to their PaLiS classes, and one to regress the two endpoints for each I-type patch. Apart from the main branches, an auxiliary segmentation head is leveraged to learn the rasterized masks from the final feature maps of the decoder network.
Identifying Patch Classes/Types
Our PaLiS representation categorizes the foreground patches into three different types (I-type, X-type, and T-type) for a better understanding of intricate road graph structures. To achieve this, we use a patch classification head, which consists of four convolution layers, all with 3 × 3 kernels, and an MLP layer, to predict the class of each patch. The patch classification head takes the patch-level feature maps F_P as input and produces the patch map M ∈ R^{C_P × H/p × W/p}, where C_P is the number of patch classes (i.e., C_P = 4 by considering the background patches). During training, we compute the classification loss by comparing the predicted patch map M with the corresponding ground truth M^*, which can be easily obtained from the original annotations of the dataset. The cross-entropy loss is employed for M:
L_M = \mathrm{CE}(M, M^*). (2)

Model | Type | City-Scale: P↑ R↑ F1↑ APLS↑ | SpaceNet: P↑ R↑ F1↑ APLS↑
DLinkNet (Zhou, Zhang, and Wu 2018) | Mask | 78.63 48.07 57.42 54.08 | 88.42 60.06 68.80 56.93
Orientation (Batra et al. 2019) | Mask | 75.83 68.90 72.20 55.34 | 81.56 71.38 76.13 58.82
Seg-DLA (He et al. 2020) | Mask | 75.59 72.26 73.89 57.22 | 78.99 69.80 74.11 56.36
RoadTracer (Bastani et al. 2018) | Point | 78.00 57.44 66.16 57.29 | 78.61 62.45 69.90 56.03
Sat2Graph (He et al. 2020) | Point | 80.70 72.28 76.26 63.14 | 85.93 76.55 80.97 64.43
TD-Road (He, Garg, and Chowdhury 2022) | Point | 81.94 71.63 76.27 65.74 | 84.81 77.80 81.15 65.15
PaRK-Detect (Xie et al. 2023) | Point | 82.17 68.23 74.29 67.66 | 91.34 68.07 78.01 62.97
RNGDet (Xu et al. 2022) | Point | 85.97 69.78 76.87 65.75 | 90.91 73.25 81.13 65.61
RNGDet++ (Xu et al. 2023b) | Point | 85.65 72.58 78.44 67.76 | 91.34 75.24 82.51 67.73
Ours | PaLiS | 86.36 73.16 79.08 68.12 | 90.05 78.19 83.70 69.68
Table 1: Quantitative results on the City-Scale dataset and the SpaceNet dataset (P, R, F1 are TOPO metrics). Best results are highlighted in bold.
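Returning to the patch classification head of Eq. (2), the following is a minimal PyTorch sketch of how such a head could be wired; the feature shape (B, C, H/p, W/p), the channel widths, the ReLU activations, and the tensor layout are illustrative assumptions rather than the exact configuration used in the paper.

# Hypothetical sketch of a PaLiS patch classification head (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchClassHead(nn.Module):
    """Four 3x3 conv layers + an MLP, mapping patch features to C_P class logits per patch."""
    def __init__(self, in_ch=256, hidden=256, num_classes=4):  # C_P = 4: background, I, X, T
        super().__init__()
        self.convs = nn.Sequential(
            *[nn.Sequential(nn.Conv2d(in_ch if i == 0 else hidden, hidden, 3, padding=1),
                            nn.ReLU(inplace=True)) for i in range(4)]
        )
        self.mlp = nn.Linear(hidden, num_classes)  # applied independently at each patch location

    def forward(self, fp):                 # fp: (B, C, H/p, W/p) patch-level features
        x = self.convs(fp)                 # (B, hidden, H/p, W/p)
        x = x.permute(0, 2, 3, 1)          # channels last so the MLP acts per patch
        return self.mlp(x)                 # (B, H/p, W/p, C_P) patch map M (logits)

# Training loss of Eq. (2): cross-entropy against the ground-truth patch map M*.
head = PatchClassHead()
fp = torch.randn(2, 256, 32, 32)           # e.g. a 256x256 image with patch size p = 8
m_star = torch.randint(0, 4, (2, 32, 32))  # ground-truth patch classes
logits = head(fp)
loss_m = F.cross_entropy(logits.reshape(-1, 4), m_star.reshape(-1))

Keeping the MLP strictly per patch location matches the patch-wise spirit of the representation: each p × p cell receives its own class decision, which the later graph construction can then reason about independently.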
Figure 6: Illustration of the rasterization of a patched line segment l into a rasterized soft mask, where d(l, a) denotes the projection distance from pixel a to the line segment. Darker pixels contribute more to the line segment.
Line Segments Learning for I-type Patches
With the patch classification head, we focus on the I-type patches to learn the patched line segments. It should be noted that although the line segment l_i for the patch P_i is in closed form with its two endpoints, directly regressing the endpoint coordinates is suboptimal, since the data augmentation techniques (such as cropping) used in the training phase will incur inefficient computation in terms of cropping the vector road annotations. To avoid this issue, we propose to use differentiable rasterization techniques to learn the line segment l_i of the patch P_i from the mask supervision, similar to (Lazarow, Xu, and Tu 2022; Zorzi et al. 2022). It is interesting to see that, although we use the rasterized road mask supervision instead of the vector annotations, such a design prevails over direct supervision with vector annotations; please refer to our ablation studies in Sec. 5 for a detailed comparison.
Taking the patch-level feature map F_P, we set a regression head with four 3 × 3 convolution layers and an MLP layer to predict line segments L ∈ R^{4 × H/p × W/p}, where 4 is the number of coordinates of a line segment. These patched line segments L are then converted into a soft mask S_{soft} ∈ R^{H×W} with the proposed rasterizer. As shown in Fig. 6, the proposed rasterizer produces a p × p patch C_i ∈ R^{p×p}, where the scalar value at the pixel a = (x, y) in the local coordinates of the patch is computed by
C_i(a) = e^{-\frac{d^2(l_i, a) \times t}{\tau_{inv}}}, (3)
where d(l_i, a) is the projection distance from the pixel a to the line segment l_i, and t and τ_{inv} are the projection factor and the sharpness factor, respectively. We empirically set t = 10 if the pixel a is projected outside of the line segment; otherwise, t is set to 1. The values of the projection factor t and the sharpness factor τ_{inv} are chosen to accurately reflect the position of the line segment in the patch. The rasterized soft mask S_{soft} ∈ R^{H×W} is obtained from the contributions of all pixels. During training, we efficiently compute the loss by comparing the soft mask S_{soft} with the existing ground truth mask S^* of road centerlines. Similar to BoundaryFormer (Lazarow, Xu, and Tu 2022), we employ the DICE (Milletari, Navab, and Ahmadi 2016) loss to measure the difference:
L_L = \mathrm{DICE}(S_{soft}, S^*). (4)
The rasterizer and its backward pass are fully implemented in CUDA, ensuring efficiency in the training process.
Auxiliary Pixel-level Learning
In addition to the PaLiS representation, we incorporate the learning of an auxiliary binary mask of road centerlines to extract road information. We use a segmentation head, which consists of one 3 × 3 convolution layer and one 1 × 1 convolution layer followed by a sigmoid function, to predict the binary mask S ∈ R^{H×W} of road centerlines from the pixel-level feature maps F_I. We compute the loss between the predicted binary mask S and the ground truth mask S^* of road centerlines with the cross-entropy loss:
L_S = \mathrm{CE}(S, S^*). (5)
The total loss of the PaLiS learning can be summarized as
L_{total} = L_S + L_M + L_L. (6)
5 Experiments
In this section, we run experiments for our proposed PaLiS-based approach on public benchmarks and provide a comprehensive analysis of our design choices. The implementation details are in our supplementary material.
Datasets and Evaluation Metrics
Datasets. We conduct experiments on two widely used datasets: the City-Scale dataset (He et al. 2020) and the SpaceNet dataset (Van Etten, Lindenbaum, and Bacastow 2018). The City-Scale dataset (He et al. 2020) covers a 720 km² area of 20 cities in the United States. It consists of 180 tiles, which we divide into 144, 9, and 27 tiles for training, validation, and testing respectively, following previous methods (He et al.
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6292 (a) Sat2Graph (b) RNGDet++ (c) PaRK-Detect (d) Ours (e) Ground Truth Figure 7: Example of qualitative road network extraction results on City-scale dataset. Predicted road segments are marked in orange lines and intersections are marked in red dots. Our approach leads to reasonable and accurate connected road graphs. 2020; He, Garg, and Chowdhury 2022; Xu et al. 2023b). Each tile of the dataset has the resolution of 2048 × 2048 pixels, representing 1 meter in the real world. SpaceNet dataset (Van Etten, Lindenbaum, and Bacastow 2018) comprises 2549 satellite images, each with the resolution of 400 × 400 pixels. We use 2040, 127, and 382 images for training, validation, and testing respectively, following the partition used in Sat2Graph (He et al. 2020). Evaluation metrics. Two quantitative metrics are utilized in the experiments: APLS (Van Etten, Lindenbaum, and Bacastow 2018) and TOPO (Biagioni and Eriksson 2012). APLS assesses the overall graph quality by comparing the similarity of shortest paths between two locations on the predicted and ground truth graphs. On the other hand, the TOPO metric (precision, recall, and F1-score) provides a stricter evaluation of detailed topology correctness by measuring the similarity of sub-graphs sampled from a seed location on the ground truth and predicted graphs. Higher scores indicate better performance for both APLS and TOPO metrics. Main Comparisons Quantitative and Qualitative Evaluation. We compare our approach to state-of-the-art segmentation- and keypointbased methods on the City-Scale and SpaceNet datasets. Table 1 presents the quantitative results. Segmentation-based methods exhibit substantially inferior performance on both TOPO and APLS metrics, because of their heuristic postprocessing schema. In contrast, graph-based methods output and refine the graph of road networks directly, gaining better performance on the two metrics. Our method achieves the highest TOPO and APLS scores on the City-Scale dataset, demonstrating superior performance in capturing road netGround truth Keypoint heatmap Patched line segments Figure 8: Comparison on early stage (10 epochs) of training. work structures with our unified PaLiS representations. Additionally, our approach outperforms all other methods in terms of recall, F1-score, and APLS on SpaceNet dataset, further validating its effectiveness. These consistently superior evaluation results across metrics indicate that our approach generates more precise and complete road graphs both locally and globally. The same conclusions can be drawn from the qualitative comparisons in Fig. 7. Keypoint-based Graph Representation v.s. PaLiS. Comparisons between keypoints and PaLiS representation during training and testing involved further analysis. Fig. 8 first visualized the predicted keypoints heatmap and line segments on the early training epoch. Apparently, the learned keypoints heatmap was ambiguous in the early stage of training, whereas the line segments were accurately predicted. Subsequently, we studied the model’s sensitivity to thresholds of keypoints (or line segments) prediction by varying the thresholds with the 0.1 step as shown in Fig. 9. Notably, our model demonstrated greater stability compared to keypoint-based methods, indicating the robustness of our PaLiS representation during testing. 
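Before the ablation on graph construction below, it may help to see the geometric I-type linking rule of Sec. 3 written out. The sketch below is a simplified reading of that rule (the shape distance is taken as the point-to-segment distance, the threshold τd is an arbitrary illustrative value, and the X-/T-type handling is omitted); it is not the official implementation.

# Simplified sketch of the I-type linking rule in PaLiS graph construction (assumptions, not the official algorithm).
import numpy as np

def point_to_segment_dist(pt, seg):
    """Shortest distance from a 2D point to a line segment seg = (u, v)."""
    u, v = np.asarray(seg[0], float), np.asarray(seg[1], float)
    d = v - u
    t = 0.0 if np.allclose(d, 0) else np.clip(np.dot(pt - u, d) / np.dot(d, d), 0.0, 1.0)
    return np.linalg.norm(pt - (u + t * d))

def connect_itype(seg_i, seg_j, tau_d=3.0):
    """Link two segments of adjacent I-type patches if their facing endpoints nearly coincide."""
    # e_i: endpoint of seg_i closest to seg_j, and vice versa.
    e_i = min(seg_i, key=lambda p: point_to_segment_dist(np.array(p), seg_j))
    e_j = min(seg_j, key=lambda p: point_to_segment_dist(np.array(p), seg_i))
    avg = 0.5 * (point_to_segment_dist(np.array(e_i), seg_j) +
                 point_to_segment_dist(np.array(e_j), seg_i))
    return avg < tau_d

# Two neighbouring patches whose segments meet at roughly (8, 4):
print(connect_itype([(2, 4), (8, 4)], [(8.5, 4.2), (15, 5)]))  # True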
Figure 9: Parameter sensitivity on the City-Scale dataset (APLS under varying keypoint/line segment thresholds).

Supervision | TOPO P↑ R↑ F1↑ | APLS↑
unsorted vector | 91.79 60.34 72.82 | 57.34
sorted vector | 91.75 66.81 77.31 | 65.28
raster mask | 90.05 78.19 83.70 | 69.68
Table 2: Comparison results on the SpaceNet dataset in association with different supervisions for patched line segments.

Training efficiency. The training efficiency is also compared, as shown in Fig. 2. The approach relying on our unified PaLiS representation achieves superior performance with considerably fewer training iterations, while methods relying on keypoints (He et al. 2020; Xu et al. 2023b; Xie et al. 2023) require many more iterations to converge.
Ablation Studies
Mask-supervised line segment learning. To evaluate the efficacy of the proposed soft rasterizer, we conducted additional experiments using three different types of supervision for line segment learning: unsorted vector labels, sorted vector labels, and mask labels. The unsorted and sorted vector labels are denoted by (x̂^u_i, ŷ^u_i, x̂^v_i, ŷ^v_i) ∈ R^4, where the only difference is the direction. Directions of unsorted vector labels are randomly inherited from the original annotations, while sorted vector labels have consistent directions ((x̂^u_i, ŷ^u_i) is always the endpoint on the left). We use the L1 loss to compute the difference between the predictions and the ground truth vector labels. The results shown in Table 2 indicate that line segments are learned more precisely with the proposed rasterizer, leading to enhanced connectivity in the graph construction. Furthermore, our approach leverages the existing mask labels to guide the training process of patched line segments, without requiring the generation of vector labels.
Graph construction strategy. Road graphs can be reconstructed from the PaLiS representation (geometric connectivity) without the learned relationships between patches (relationship connectivity) used in PaRK-Detect (Xie et al. 2023). To compare the two construction strategies, we learned additional relationships between patches following PaRK-Detect (Xie et al. 2023). The results presented in Table 3 show that our approach outperforms the relationship connectivity on the two metrics, and provides more accurate and reasonable connectivity, as shown in Fig. 10.

Graph construction | TOPO P↑ R↑ F1↑ | APLS↑
Learned relationship | 88.01 79.28 83.42 | 68.47
Ours | 90.05 78.19 83.70 | 69.68
Table 3: Results on SpaceNet of the varied connectivity strategy.

Patch Size | TOPO P↑ R↑ F1↑ | APLS↑
4 × 4 | 91.88 74.23 82.12 | 67.25
8 × 8 | 90.05 78.19 83.70 | 69.68
16 × 16 | 82.61 77.64 80.05 | 67.58
Table 4: Results on SpaceNet of varied patch size.

Figure 10: Comparison of graph construction strategies (Ours vs. learned relationship).
Patch size. The patch size serves as a crucial hyperparameter in our approach. We conducted experiments to assess the impact of the patch size, and the results are shown in Table 4. We observe that both smaller and larger patch sizes cause inferior performance. This is because the PaLiS representation with a small patch size yields results that are close to the mask representation and suffers from the disconnection issue, whereas the PaLiS representation with a large patch size struggles to provide the precise shape of road graphs. Considering accuracy and efficiency, we set the patch size to 8.
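To complement the mask-supervised ablation above, the following NumPy sketch renders a single patched line segment into a p × p soft patch following Eq. (3). Interpreting d(l, a) as the perpendicular distance to the segment's supporting line and sampling at pixel centres are our assumptions; the CUDA rasterizer and its backward pass are not reproduced here.

# Hedged sketch of the soft rasterizer of Eq. (3) for one patch (reference only, no CUDA/autograd).
import numpy as np

def rasterize_patch(seg, p=8, tau_inv=1.0):
    """Render a line segment seg = ((xu, yu), (xv, yv)) into a p x p soft patch C_i."""
    u, v = np.asarray(seg[0], float), np.asarray(seg[1], float)
    ys, xs = np.mgrid[0:p, 0:p]
    pts = np.stack([xs + 0.5, ys + 0.5], axis=-1)         # pixel centres in patch coordinates
    d_uv = v - u
    denom = max(np.dot(d_uv, d_uv), 1e-12)
    s = ((pts - u) @ d_uv) / denom                        # projection parameter along the segment
    # Perpendicular distance of each pixel centre to the supporting line of the segment.
    dist = np.abs((pts[..., 0] - u[0]) * d_uv[1] - (pts[..., 1] - u[1]) * d_uv[0]) / np.sqrt(denom)
    t = np.where((s < 0.0) | (s > 1.0), 10.0, 1.0)        # heavier decay for pixels projected outside the segment
    return np.exp(-(dist ** 2) * t / tau_inv)             # C_i(a) = exp(-d^2(l, a) * t / tau_inv)

patch = rasterize_patch(((1.0, 6.0), (7.0, 2.0)))
print(patch.round(2))   # larger values hug the segment; Eq. (4) would DICE the assembled mask against S*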
6 Conclusions This paper introduces a learning-based approach for vector road mapping using the innovative PaLiS (Patched Line Segment) representation. By leveraging local patches, our approach effectively represents road graphs. Through convolutional neural networks, we achieve state-of-the-art performance on public datasets, with efficient training in just 6 GPU hours. Additionally, the ability of PaLiS representation to learn line segment endpoint coordinates from rasterized road maps suggests a promising direction for largescale vector road mapping without costly manual annotations in the near future. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6294 Acknowledgments This work was supported by the NSFC under Grants 62101390, 62325111, and U22B2011. We thank Jian Ding, Ming Qian and Bin Tan for helpful discussions. References Bastani, F.; He, S.; Abbar, S.; Alizadeh, M.; Balakrishnan, H.; Chawla, S.; Madden, S.; and DeWitt, D. 2018. Roadtracer: Automatic extraction of road networks from aerial images. In IEEE Conf. Comput. Vis. Pattern Recog., 4720– 4728. Batra, A.; Singh, S.; Pang, G.; Basu, S.; Jawahar, C.; and Paluri, M. 2019. Improved road connectivity by joint learning of orientation and segmentation. In IEEE Conf. Comput. Vis. Pattern Recog., 10385–10393. Biagioni, J.; and Eriksson, J. 2012. Inferring road maps from global positioning system traces: Survey and comparative evaluation. Transportation Research Record, 2291(1): 61– 71. B¨uchner, M.; Z¨urn, J.; Todoran, I.-G.; Valada, A.; and Burgard, W. 2023. Learning and Aggregating Lane Graphs for Urban Automated Driving. arXiv:2302.06175. Cai, Z.; Wang, T.; Mi, Q.; Su, X.; Guo, L.; and Ding, Z. 2023. Dynamic Weighted Road Network Based MultiVehicles Navigation and Evacuation. ISPRS International Journal of Geo-Information, 12(3): 127. Cheng, M.; Zhao, K.; Guo, X.; Xu, Y.; and Guo, J. 2021. Joint topology-preserving and feature-refinement network for curvilinear structure segmentation. In Int. Conf. Comput. Vis., 7147–7156. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In IEEE Conf. Comput. Vis. Pattern Recog., 770–778. He, S.; and Balakrishnan, H. 2022. Lane-Level Street Map Extraction from Aerial Imagery. In IEEE/CVF Wint. Conf. on Appl. of Comp. Vis., 1496–1505. He, S.; Bastani, F.; Jagwani, S.; Alizadeh, M.; Balakrishnan, H.; Chawla, S.; Elshrif, M. M.; Madden, S.; and Sadeghi, M. A. 2020. Sat2graph: Road graph extraction through graph-tensor encoding. In Eur. Conf. Comput. Vis., 51–67. Springer. He, Y.; Garg, R.; and Chowdhury, A. R. 2022. TD-Road: Top-down road network extraction with holistic graph construction. In Eur. Conf. Comput. Vis., 562–577. Springer. Lazarow, J.; Xu, W.; and Tu, Z. 2022. Instance segmentation with mask-supervised polygonal boundary transformers. In IEEE Conf. Comput. Vis. Pattern Recog., 4382–4391. Li, T.-M.; Luk´aˇc, M.; Gharbi, M.; and Ragan-Kelley, J. 2020. Differentiable Vector Graphics Rasterization for Editing and Learning. ACM Trans. Graph., 39(6). M´attyus, G.; Luo, W.; and Urtasun, R. 2017. Deeproadmapper: Extracting road topology from aerial images. In Int. Conf. Comput. Vis., 3438–3446. Mei, J.; Li, R.-J.; Gao, W.; and Cheng, M.-M. 2021. CoANet: Connectivity attention network for road extraction from satellite imagery. IEEE Trans. Image Process., 30: 8540–8552. Milletari, F.; Navab, N.; and Ahmadi, S.-A. 2016. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In IEEE Int. Conf. 
3D Vis., 565–571. Ieee. Shi, G.; Shan, J.; Ding, L.; Ye, P.; Li, Y.; and Jiang, N. 2019. Urban road network expansion and its driving variables: a case study of Nanjing City. International Journal of Environmental Research and Public Health, 16(13): 2318. Shit, S.; Koner, R.; Wittmann, B.; Paetzold, J.; Ezhov, I.; Li, H.; Pan, J.; Sharifzadeh, S.; Kaissis, G.; Tresp, V.; et al. 2022. Relationformer: A unified framework for image-tograph generation. In Eur. Conf. Comput. Vis., 422–439. Springer. Van Etten, A.; Lindenbaum, D.; and Bacastow, T. M. 2018. Spacenet: A remote sensing dataset and challenge series. arXiv preprint arXiv:1807.01232. Wang, B.; Liu, Q.; Hu, Z.; Wang, W.; and Wang, Y. 2023. TERNformer: Topology-enhanced Road Network Extraction by Exploring Local Connectivity. IEEE Trans. Geosci. Remote. Sens. Xie, G.; Sun, X.; Tong, X.; and Nowrouzezahrai, D. 2014. Hierarchical diffusion curves for accurate automatic image vectorization. ACM Trans. Graph., 33(6): 230:1–230:11. Xie, S.; Zheng, W.; Xian, Z.; Yang, J.; Zhang, C.; and Wu, M. 2023. PaRK-Detect: Towards Efficient Multi-Task Satellite Imagery Road Extraction via Patch-Wise Keypoints Detection. In Brit. Mach. Vis. Conf. Xu, B.; Xu, J.; Xue, N.; and Xia, G.-S. 2023a. HiSup: Accurate polygonal mapping of buildings in satellite imagery with hierarchical supervision. ISPRS Journal of Photogrammetry and Remote Sensing, 198: 284–296. Xu, Z.; Liu, Y.; Gan, L.; Sun, Y.; Wu, X.; Liu, M.; and Wang, L. 2022. Rngdet: Road network graph detection by transformer in aerial images. IEEE Trans. Geosci. Remote. Sens., 60: 1–12. Xu, Z.; Liu, Y.; Sun, Y.; Liu, M.; and Wang, L. 2023b. RNGDet++: Road Network Graph Detection by Transformer With Instance Segmentation and Multi-Scale Features Enhancement. IEEE Robotics Autom. Lett. Xu, Z.; Sun, Y.; and Liu, M. 2021. icurb: Imitation learningbased detection of road curbs using aerial images for autonomous driving. IEEE Robotics and Automation Letters, 6(2): 1097–1104. Xue, N.; Bai, S.; Wang, F.; Xia, G.; Wu, T.; Zhang, L.; and Torr, P. H. S. 2021. Learning Regional Attraction for Line Segment Detection. IEEE Trans. Pattern Anal. Mach. Intell., 43(6): 1998–2013. Xue, N.; Bai, S.; Wang, F.; Xia, G.-S.; Wu, T.; and Zhang, L. 2019. Learning attraction field representation for robust line segment detection. In IEEE Conf. Comput. Vis. Pattern Recog., 1595–1603. Xue, N.; Wu, T.; Bai, S.; Wang, F.; Xia, G.; Zhang, L.; and Torr, P. H. S. 2022. Holistically-Attracted Wireframe Parsing: From Supervised to Self-Supervised Learning. arXiv, abs/2210.12971. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6295 Xue, N.; Wu, T.; Bai, S.; Wang, F.; Xia, G.-S.; Zhang, L.; and Torr, P. H. 2020. Holistically-attracted wireframe parsing. In IEEE Conf. Comput. Vis. Pattern Recog., 2788–2797. Yang, B.; Zhang, M.; Zhang, Z.; Zhang, Z.; and Hu, X. 2023. TopDiG: Class-Agnostic Topological Directional Graph Extraction From Remote Sensing Images. In IEEE Conf. Comput. Vis. Pattern Recog., 1265–1274. Zhang, K.; Zhao, D.; Feng, L.; and Cao, L. 2021. Cycling trajectory-based navigation independent of road network data support. ISPRS International Journal of GeoInformation, 10(6): 398. Zhou, L.; Zhang, C.; and Wu, M. 2018. D-LinkNet: LinkNet with pretrained encoder and dilated convolution for high resolution satellite imagery road extraction. In IEEE Conf. Comput. Vis. Pattern Recog. Worksh., 182–186. Zorzi, S.; Bazrafkan, S.; Habenschuss, S.; and Fraundorfer, F. 2022. 
Polyworld: Polygonal building extraction with graph neural networks in satellite images. In IEEE Conf. Comput. Vis. Pattern Recog., 1848–1857. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6296 | 2024 | 699 |
18,516 | GIN-SD: Source Detection in Graphs with Incomplete Nodes via Positional Encoding and Attentive Fusion Le Cheng1,2, Peican Zhu2*, Keke Tang3, Chao Gao2, Zhen Wang1,2† 1School of Computer Science, Northwestern Polytechnical University (NWPU) 2School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University (NWPU) 3Cyberspace Institute of Advanced Technology, Guangzhou University [email protected], [email protected] Abstract Source detection in graphs has demonstrated robust efficacy in the domain of rumor source identification. Although recent solutions have enhanced performance by leveraging deep neural networks, they often require complete user data. In this paper, we address a more challenging task, rumor source detection with incomplete user data, and propose a novel framework, i.e., Source Detection in Graphs with Incomplete Nodes via Positional Encoding and Attentive Fusion (GINSD), to tackle this challenge. Specifically, our approach utilizes a positional embedding module to distinguish nodes that are incomplete and employs a self-attention mechanism to focus on nodes with greater information transmission capacity. To mitigate the prediction bias caused by the significant disparity between the numbers of source and non-source nodes, we also introduce a class-balancing mechanism. Extensive experiments validate the effectiveness of GIN-SD and its superiority to state-of-the-art methods. Introduction Source detection in graphs represents a fundamental challenge in mathematics and plays a vital role in rumor source detection (Shah and Zaman 2011; Ling et al. 2022; Zhu et al. 2022; Cheng et al. 2022). Early solutions, such as LPSI (Wang et al. 2017), EPA (Ali et al. 2019), and MLE (Pinto, Thiran, and Vetterli 2012), primarily rely on source centrality theory (Prakash, Vreeken, and Faloutsos 2012; Shah and Zaman 2011) and maximum likelihood estimation in detecting sources. In recent years, with the advancement of deep learning techniques (Gao et al. 2022), researchers have utilized deep neural networks to encode user attributes and propagation information (Bian et al. 2020; Wang, Jiang, and Zhao 2022; Ling et al. 2022), significantly refreshing the state-of-the-art records. However, current solutions for source detection are premised on the strict assumption of having access to complete user data, encompassing details such as the forwarding frequency of all users and the time of information reception. Indeed, acquiring such exhaustive user data is exceedingly *Corresponding author. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Impact of incomplete nodes on source detection: (a-c) graphs with incomplete node ratios of 0%, 10%, and 20%; (d) influence of varying incomplete node ratios on the source detection accuracy for different methods. As the proportion of incomplete nodes increases, the performance of other methods declines more significantly, while our approach remains less affected. challenging and sometimes impossible due to time constraints, resource limitations, and privacy protection measures (Du et al. 2017; Zhou, Jagmohan, and Varshney 2019). No existing work, to our knowledge, considers the problem of source detection in graph with incomplete nodes. In practice, when user data is incomplete, the majority of cuttingedge solutions falter, as evidenced by the notable performance decline shown in Fig. 1. 
Source detection in graphs with incomplete nodes poses three main challenges. First, in the process of node information aggregation and transmission, the absent information from incomplete nodes may be erroneously treated as valid data from normal nodes, thus leading to significant feature errors. Second, since the efficiency of information transmission varies among nodes, e.g., nodes with higher degrees tend to relay information more rapidly, treating all nodes uniformly hinders the training efficiency. Third, a marked The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 55 imbalance between the quantities of source and non-source nodes leads the model to favor the non-source set, overlooking the source set, and thus creating a prediction bias. Intuitively, to handle the above three issues, we should 1) distinguish between incomplete and complete nodes; 2) focus on nodes with superior information transmission capacity; 3) treat source/non-source nodes differently. In this paper, we propose a novel source detection framework in graphs with incomplete nodes (GIN-SD) through positional encoding and attentive fusion. First, to distinguish incomplete nodes, a positional embedding module is developed to exploit Laplacian Positional Encodings of the infected subgraph, incorporating user states and propagation information into the feature vectors of users. Second, to focus on nodes with greater information transmission capacity, an attentive fusion module is introduced to employ the selfattention mechanism to automatically allocate varying attention weights to different users. Finally, to treat source/nonsource nodes differently, we introduce a class balancing mechanism that increases the weight of the source set while decreasing the weight of the non-source set, enabling the model to attend to both sets simultaneously. We validate the effectiveness of our approach on eight publicly available datasets. Extensive experimental results demonstrate that our approach is robust to missing nodes in a graph, outperforming state-of-the-art methods. Overall, our contribution is summarized as follows: • We are the first to formulate the rumor sources detection under incomplete user data and propose a novel approach to address this issue. • We devise a source detection method of rumors under incomplete user data via positional encoding and attentive fusion mechanism. • We show by experiments that the superiority of the proposed approach in the context of incomplete user data, comparing to baseline methods. Related Work Infection Status-based Multi-source Detection To efficiently address the Multiple Rumor Sources Detection (MRSD) problem, several approaches have been developed. Based on the source centrality theory (Prakash, Vreeken, and Faloutsos 2012; Shah and Zaman 2011; Zhu, Chen, and Ying 2017), LPSI selects locally prominent nodes through label propagation without requiring prior information (Wang et al. 2017). EPA iteratively calculates the infection time of each node (Ali et al. 2019). However, these methods do not adequately consider the heterogeneity of users and the stochastic nature of information propagation. Utilizing machine learning techniques, GCNSI (Dong et al. 2019) and SIGN (Li et al. 2021) take the states of all users as algorithm inputs, whereas GCSSI focuses on the users infected during the latest wave, known as the wavefront (Dong et al. 2022); from the perspective of model architecture, ResGCN (Shah et al. 
2020) incorporates a residual structure that connects GCN layers for message passing. However, these methods fail to consider the randomness of information propagation in heterogeneous networks, and the problem of class imbalance significantly affects the precision of the algorithms. Incorporating the propagation process, IVGD (Wang, Jiang, and Zhao 2022) and SL-VAE (Ling et al. 2022) introduce diffusion learning mechanisms that thoroughly consider the heterogeneity of users and the stochasticity of information propagation. It’s undoubt that obtaining detailed information poses significant challenges due to cost constraints and privacy concerns. Moreover, all the aforementioned methods heavily rely on network snapshot information, assuming the availability of information for all users. However, obtaining a complete network snapshot is immensely challenging due to time constraints, cost limitations, and privacy considerations (Du et al. 2017). Positional Encodings and Attentive Mechanisms The introduction of Graph Neural Networks (GNNs) has enabled the direct application of neural networks, previously designed for Euclidean space, to be applied to graphs (nonEuclidean space) (Scarselli et al. 2008). The advent of Graph Convolutional Networks (GCNs) has further expedited the advancement of machine learning methods on graphs (Kipf and Welling 2017). GNNs and GCNs effectively learn node representations by leveraging information from the nodes themselves and their neighboring nodes. Moreover, Graph Attention Networks (GAT) empower nodes to allocate distinct attention weights to different neighbors through a multi-head attention mechanism (Veliˇckovi´c et al. 2017). In fact, the models above learn structural node information with invariant node positions (Srinivasan and Ribeiro 2019). In recent years, the Transformer, originally proposed for Natural Language Processing (NLP), has introduced Positional Encodings (PEs) for individual words (Han et al. 2021). Which ensures the uniqueness of each word while preserving distance information. Recognizing the merits of global learning based on PEs, PEs learning based on GNNs has also emerged (You, Ying, and Leskovec 2019; Srinivasan and Ribeiro 2019; Dwivedi et al. 2020). For instance, Dwivedi et al. (Dwivedi et al. 2020) employed Laplacian eigenvectors (Belkin and Niyogi 2003) as PEs for nodes, enhancing the generative of PEs. Building upon these, GIN-SD focuses on nodes with greater information transmission capacity through a selfattention mechanisms. Additionally, the Laplacian Positional Encodings of the infected subgraph, along with user states and propagation information are embedded into the user feature vectors to distinguish incomplete nodes. Problem Formulation Preliminary on Social Networks The social networks in the physical world can be abstracted as G = (V, E), where the nodes set V = {v1, v2, · · · , vn} represents the users; and the edges set E = {(vi, vj) | vi, vj ∈V, i ̸= j} indicates the relationships between them. Based on V and E, the adjacency matrix A (Aij ∈{0, 1}n×n) of G is defined as: Aij = 1, (vi, vj) ∈E 0, otherwise. (1) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 56 Figure 2: Illustration of GIN-SD. (a) The network snapshot G′ serves as the input of GIN-SD. (b) The Positional Embedding Module (PEM), where node positional information, along with state and propagation information, is embedded into feature vectors. 
It is noteworthy that during the position embedding process, the infected subgraph is initially extracted from the acquired snapshot, and the adjacency matrix of the infected subgraph is obtained. Subsequently, the symmetric normalized Laplacian matrix is calculated, and the positional encoding of each node is derived through factorization. (c) The Attentive Fusion Module (AFM) learns node representations through self-attention mechanisms. (d) The training loss is computed using the class-balancing mechanism, and the detected source set ˆs is output during the testing phase. Propagation Process of Social Networks Given the definition of G(V, E), the propagation process on G can be represented as a time series {X(t), t ≥ 0}, where X(t) denotes the node states in G at time t. Specifically, the state X(t) consists of two categories: G+ (positive) and G− (negative). When t = 0, only the source set is positive, i.e., {s ∈ G+, t = 0}. After the propagation is triggered by the sources, a positive user vi decides whether to propagate the information to its neighbors based on an individual forwarding probability pi. Different classical models, such as the influence-based Independent Cascade (IC) model (Wen et al. 2017), the infection-based Susceptible-Infected (SI) (Barthélemy et al. 2004) and Susceptible-Infected-Recovered (SIR) (Parshani, Carmi, and Havlin 2010) models, have been proposed to simulate the aforementioned propagation process. Source Detection in Graphs with Complete Nodes As the propagation unfolds and the rumor reaches a certain significance threshold, specifically when θ% of the nodes in the network are infected, a network snapshot G′(T, U, P) is obtained, which includes: 1) network topology T; 2) user information U: user states and information forwarding frequency; 3) propagation information P: reception time of information and the information propagator. Based on the aforementioned definitions, the source detection problem with complete nodes can be formalized as: ˆs = f(G′(T, U, P)), (2) where f(·) is the corresponding source detection methodology, and ˆs represents the detected source set. Source Detection in Graphs with Incomplete Nodes In practice, source detection with complete nodes is exceedingly challenging and sometimes impossible due to time constraints, resource limitations, and privacy protection solutions (Du et al. 2017; Zhou, Jagmohan, and Varshney 2019). This leads to incomplete user data in G′: U′ = (1 − δ)U, P′ = (1 − δ)P, (3) where δ represents the incomplete ratio of user data. Hence, the source detection problem in graphs with incomplete nodes is formalized as: ˆs = f(G′(T, U′, P′)). (4) Discussion Compared to source detection in the scenario with complete nodes, the missing information from incomplete nodes may mistakenly be considered as valid data from normal nodes, leading to significant feature inaccuracies. Hence, distinguishing incomplete nodes is necessary. Additionally, the user heterogeneity and class imbalance problems in source detection hinder the effective fitting of models. Therefore, focusing on the nodes with greater information transmission capacity and differentiating between source and non-source sets becomes imperative. Method In this section, we describe our proposed framework, i.e., Source Detection in Graphs with Incomplete Nodes via Positional Encoding and Attentive Fusion (GIN-SD). The framework consists of two primary components: the Positional Embedding Module (PEM) and the Attentive Fusion Module (AFM), as illustrated in Fig. 2.
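To make the setup in Eqs. (2)–(4) concrete, the following is a minimal sketch, not the authors' released code, of how a rumor cascade and an incomplete snapshot G′ could be simulated; the use of networkx, the function name, and the default arguments are illustrative assumptions (the 30% infection threshold and the U(0.1, 0.5) forwarding probabilities follow the experimental setting reported later).

```python
import random
import networkx as nx

def simulate_ic_snapshot(graph, sources, infect_ratio=0.3, delta=0.1):
    """Simulate heterogeneous IC spreading from `sources`, then hide a
    fraction `delta` of user data to mimic an incomplete snapshot G'."""
    # Individual forwarding probabilities p_i ~ U(0.1, 0.5).
    p = {v: random.uniform(0.1, 0.5) for v in graph.nodes()}
    infected = {s: 0 for s in sources}          # node -> infection timestamp
    frontier, t = list(sources), 0
    while frontier and len(infected) < infect_ratio * graph.number_of_nodes():
        t += 1
        new_frontier = []
        for u in frontier:
            for v in graph.neighbors(u):
                if v not in infected and random.random() < p[u]:
                    infected[v] = t             # reception time (propagation information P)
                    new_frontier.append(v)
        frontier = new_frontier
    # Hide user/propagation information for a random delta-fraction of nodes (Eq. (3)).
    hidden = set(random.sample(list(graph.nodes()), int(delta * graph.number_of_nodes())))
    states = {v: (None if v in hidden else (1 if v in infected else -1)) for v in graph.nodes()}
    times = {v: (None if v in hidden else infected.get(v, -1)) for v in graph.nodes()}
    return states, times, hidden
```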
Given the network snapshot G′ as input, PEM embeds position-based user encodings, and then AFM learns node representations through a self-attention mechanism. Finally, the loss is computed using a class-balancing mechanism. Positional Embedding Module (PEM) Several different perspectives of features, including user states, propagation information, and positional information, are embedded into the node feature vectors. User State Information (X1 i ) When θ% of users in the network receive the rumor information and are influenced by it, i.e., |G+| ≥ θ% ∗ n, we obtain the network snapshot G′, in which the user states can be categorized into three sets: G+, users influenced by the rumor; G−, users not influenced or not receiving the rumor; and Ψ, users with lost information. Therefore, for user vi, the state feature X1 i can be determined by the following rules: $X^{1}_{i} = \begin{cases} +1, & v_i \in G^{+} \\ -1, & v_i \in G^{-} \\ 0, & v_i \in \Psi \end{cases}$ (5) The Diffusion Information (X2 i ) Social platforms like Facebook or Twitter include timestamps when users receive messages, which is a crucial factor in source detection. Therefore, for user vi, in conjunction with the timestamp ti, we define the diffusion information X2 i as follows: $X^{2}_{i} = \begin{cases} t_i, & v_i \in G^{+} \\ -1, & \text{otherwise} \end{cases}$ (6) The Positional Information (X3 i ) Under the premise of node information loss, the inter-nodal positional relationships play a pivotal role in facilitating message propagation at the global level. To address this, leveraging the generalization of Laplacian positional encodings, we utilize them as the positional information embedded in the user feature X3 i to distinguish incomplete nodes. In contrast to computing the Laplacian PEs for the entire network, we focus on calculating the Laplacian PEs for the infected subgraph G′+. Given a network snapshot G′, if user vi did not receive the rumor or is not persuaded, the extraction process for the infected subgraph G′+ is represented as: $A_{+} = J_{i,n} \cdot A \cdot J_{i,n}^{T}$, (7) where $A \in \{0,1\}^{n \times n}$ is the adjacency matrix of the network snapshot G′, and $A_{+} \in \{0,1\}^{(n-1) \times (n-1)}$ is the adjacency matrix of the infected subgraph after removing user vi, serving as the basis for subsequent removals. Ji,n denotes the n-dimensional identity matrix with its i-th row removed. For example, if the user with id-2 did not receive the rumor or was not persuaded, $J_{2,n} = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix} \in \{0,1\}^{(n-1) \times n}$. (8) It is essential to note that the users with unclear states in the Ψ set are retained, meaning G+ ⊂ G′+, Ψ ⊂ G′+, and G− ∩ G′+ = ∅. After abstracting the infected subgraph, the symmetrically normalized Laplacian matrix is defined as: $L^{sym}_{+} = I - D_{+}^{-1/2} A_{+} D_{+}^{-1/2}$, (9) where D+ is the degree matrix of the infected subgraph. Subsequently, factorization is performed on matrix Lsym+: $L^{sym}_{+} = \Gamma^{T} \lambda \Gamma$, (10) where Γ and λ represent the eigenvector and eigenvalue matrices of Lsym+, respectively. We select the k smallest non-trivial eigenvectors as Γi for user vi's positional information (k ≪ n). In summary, the positional encoding X3 i for user vi can be represented as: $X^{3}_{i} = \begin{cases} \Gamma_i, & v_i \in G'_{+}(V, E) \\ -1, & \text{otherwise} \end{cases}$ (11) As the proposed framework follows a heuristic approach, the aforementioned user features can be further enriched. For instance, given an infected user vi, in order to augment the discriminative capabilities, X1 i may be defined as X1 i = (+1, −1).
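Before the optional feature extensions discussed next, the positional-encoding computation of Eqs. (7)–(11) can be sketched as follows; this is a hedged illustration assuming NumPy and a dense adjacency matrix of the infected subgraph G′+ (nodes of G− already removed), not the paper's implementation.

```python
import numpy as np

def laplacian_positional_encoding(adj_plus, k):
    """Laplacian PEs of the infected subgraph (Eqs. (9)-(11)).
    adj_plus: dense adjacency matrix A_+ of G'_+ (G+ and Psi nodes retained).
    Returns one k-dimensional positional vector per retained node."""
    n = adj_plus.shape[0]
    deg = adj_plus.sum(axis=1)
    d_inv_sqrt = np.zeros(n)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    # Symmetrically normalized Laplacian: L_sym = I - D^{-1/2} A_+ D^{-1/2}   (Eq. (9))
    l_sym = np.eye(n) - d_inv_sqrt[:, None] * adj_plus * d_inv_sqrt[None, :]
    eigval, eigvec = np.linalg.eigh(l_sym)          # factorization of L_sym   (Eq. (10))
    # Keep the k smallest non-trivial eigenvectors (skip near-zero eigenvalues).
    order = np.argsort(eigval)
    nontrivial = [i for i in order if eigval[i] > 1e-8][:k]
    return eigvec[:, nontrivial]                    # row i is Gamma_i, i.e. X^3_i for v_i in G'_+
```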
Furthermore, X2 i can be extended to encompass both the timestamp of vi and the unique identifier (id) of the information propagator, denoted as X2 i = (ti, idi), where idi represents the id of the individual responsible for disseminating the information to user vi. It is imperative to emphasize that such an extensible feature engineering process fosters the exploration of richer information representation, thereby potentially enhancing the overall efficacy and robustness of the model in source detection tasks. Finally, a concatenation procedure is employed to amalgamate the diverse user feature components, culminating in the derivation of the ultimate user embedding vector: Xi = ∥3 x=1Xx i . (12) Attentive Fusion Module (AFM) Considering the efficiency of information transmission varies among nodes, we focus on the nodes with greater information transmission capacity. Specifically, considering user vi and its neighbor vj, the attention coefficient eij at the l-th layer of model is formulated as: eij = ⃗a LReLU W(l) X(l) i , X(l) j , (13) where X(l) ∈Rlw×n is the feature representation of users and X(0) = X; lw signifies the number of elements in the node feature vector. W(l) ∈Rl′ w×lw represents a trainable parameter matrix, LReLU(·) is the activation function and ⃗a ∈R2l′ w is a weight vector. Following the definition of eij, the weight of user vj concerning all neighbors of vi is computed as: αij = exp ⃗aT LReLU (W [Xi∥Xj]) P vk∈N(vi) exp (⃗aT LReLU (W [Xi∥Xk])), (14) where N(vi) denotes the neighbors of node vi; ∥represents the vector concatenation operation; and (·)T symbolizes the transposition. The final representation output for user vi is: X′ i = X vj∈N(vi) αijWXj. (15) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 58 In pursuit of augmenting the expressive power of the diffusion model and promoting the stability of the self-attention learning process, we deploy K distinct and independent attention mechanisms, dedicated to capturing diverse aspects of information propagation. Subsequently, these K mechanisms are concatenated to form a comprehensive and enriched representation: X′′ i = ∥K k=1σ X′k i . (16) To achieve dimension alignment, we perform a mean pooling operation on the individual attention channels at the ultimate layer of the model: X′′′ i = σ 1 K K X k=1 X′k i ! . (17) This operation consolidates the diverse learned information from the attention mechanisms, harmonizing their representations and yielding a cohesive and coherent output for each node. Finally, the model yields an (n × 2)-dimensional matrix, wherein each row’s two elements undergo a softmax(·) transformation: S(⃗z)i = ezi P j ezj , ⃗z = X′′′ i . (18) It is important to emphasize that our focus is not on devising a novel attention mechanism, but rather on introducing an innovative attentive fusion module. This module aims to allocate attention coefficients to nodes dynamically, encompassing those that are incomplete, contingent on their information transmission capacity. This constitutes our primary contributions in this context. Loss Function and Training To rectify the class imbalance problem and ensure equitable attention across all sets, we propose a class-balancing mechanism. For the source set s and the non-source set V −s, we introduce a fixed constant ξ: ξ = |s| n −|s|, (19) where n and |s| represent the number of elements in sets V and s respectively. 
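As a brief aside before the class-balancing constant ξ is discussed further, the attentive fusion step of Eqs. (13)–(16) can be sketched with a standard GAT-style layer in PyTorch (the paper reports a PyTorch implementation, but the class below is an illustrative sketch, not the released code; self-loops are added only to avoid empty neighborhoods).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveFusionLayer(nn.Module):
    """GAT-style attentive fusion: attention coefficients over neighbors (Eqs. (13)-(14))
    followed by weighted aggregation and concatenation of K heads (Eqs. (15)-(16))."""
    def __init__(self, in_dim, out_dim, num_heads=2):
        super().__init__()
        self.W = nn.ModuleList([nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_heads)])
        self.a = nn.ParameterList([nn.Parameter(0.1 * torch.randn(2 * out_dim)) for _ in range(num_heads)])

    def forward(self, x, adj):
        # x: (n, in_dim) node features; adj: (n, n) adjacency of the snapshot.
        adj = adj + torch.eye(adj.size(0), device=adj.device)   # self-loops avoid empty rows
        heads = []
        for W, a in zip(self.W, self.a):
            h = W(x)                                            # (n, out_dim)
            d = h.size(1)
            # e_ij = LeakyReLU(a^T [h_i || h_j]), decomposed into a source and a neighbor term.
            e = F.leaky_relu(h @ a[:d].unsqueeze(1) + (h @ a[d:].unsqueeze(1)).T)
            e = e.masked_fill(adj == 0, float("-inf"))
            alpha = torch.softmax(e, dim=1)                     # Eq. (14): weights over N(v_i)
            heads.append(torch.sigmoid(alpha @ h))              # Eqs. (15)-(16): aggregation
        return torch.cat(heads, dim=-1)                         # concatenate the K heads
```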
The constant ξ equalizes the weights of all samples and align their mathematical expectations to 1, thereby promoting unbiased and comprehensive learning. Through integrating this class-balancing mechanism into our model, we construct a novel loss function that is formulated as follows: Loss = X vi∈s Li + ξ X vj∈(V −s) Lj + λ∥w∥2, (20) where L represents the cross-entropy loss; for sample x and its label y, L(x, y) = −log(x) × y. The last term in Eq. (20) denotes the L2 regularization. The integrated GIN-SD, incorporating PEM and AFM, focuses on distinguishing incomplete nodes while prioritizing nodes with higher information transmission capacity. Additionally, the class balancing mechanism further ensures differential treatment of source/non-source sets. This synergy enables effective information extraction and efficient source detection. Experiments Experimental Setting Implementation Given the independent nature of each user’s social behavior and the short-term property of rumors, we randomly select 5% of the users as sources to construct incomplete graph about rumor propagation. Then, we employ the heterogeneous Independent Cascade (IC) model to simulate rumor dissemination, where each user’s forwarding probability p is drawn from a uniform distribution U(0.1, 0.5). The propagation is halted when 30% of the users are influenced by the rumor, and the network snapshot is obtained with proportion of δ incomplete nodes. The training and testing set have a sample ratio of 8 : 2 and the learning rate is set to 10−3. The number of attention layer equals to 3. For small-scale networks (G1-G2), the number of attention heads is set to 4, and the number of neurons in the hidden layer is 800. For medium-scale networks (G3-G7), the corresponding numbers are set to 2 and 500 respectively for the consideration computational constraints. As to the large-scale network (G8), the number of attention heads is assigned to 1, and the number of neurons in the hidden layer equals to 500. All experiments are conducted on a workstation with a single NVIDIA RTX 3090Ti GPU. Datasets Eight real-world datasets of different scales are utilized to evaluate the performance of each method, including Football (Girvan and Newman 2002), Jazz (Gleiser and Danon 2003), Facebook (Leskovec and Mcauley 2012), Twitch-ES (Rozemberczki, Allen, and Sarkar 2021), LastFM (Rozemberczki and Sarkar 2020), Enron (Klimt and Yang 2004), Github (Rozemberczki, Allen, and Sarkar 2021) and DBLP (Yang and Leskovec 2012). The specific characteristics are presented in Table 1. Evaluation Metrics The widely used Accuracy (Acc) and F-Score (Wang et al. 2017) are selected as the fundamental evaluation metrics to assess the efficacy of the methods. Acc quantifies the proportion of correctly classified samples among the entire sample population, while F-Score comprises two components: Precision and Recall. Precision quantifies the proportion of true sources within ˆs, denoted as |ˆs ∩s| / |ˆs|, while Recall gauges the proportion of detected sources in s, represented as |ˆs ∩s| / |s|. These metrics provide a comprehensive and rigorous assessment of the methods’ performance in capturing the veracity and completeness of source detection results. Network |V | |E| ⟨k⟩ G1 Football 115 613 10.66 G2 Jazz 198 2742 27.70 G3 Facebook 4039 88234 43.69 G4 Twitch-ES 4648 59382 25.55 G5 LastFM 7624 27806 7.29 G6 Enron 36692 183831 10.02 G7 Github 37700 289003 15.33 G8 DBLP 317080 1049866 6.62 Table 1: Characteristics of the considered datasets. 
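As a complement to the objective in Eqs. (19)–(20) introduced above, a minimal sketch of the class-balanced loss is given below; it assumes per-node two-class logits and is an illustration rather than the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def class_balanced_loss(logits, labels, params=None, weight_decay=1e-4):
    """Cross-entropy over source (label 1) and non-source (label 0) nodes,
    with non-source terms scaled by xi = |s| / (n - |s|) as in Eqs. (19)-(20)."""
    src = labels == 1
    num_src = src.sum().float()
    xi = num_src / (labels.numel() - num_src).clamp(min=1.0)
    per_node = F.cross_entropy(logits, labels, reduction="none")
    loss = per_node[src].sum() + xi * per_node[~src].sum()
    if params is not None:                      # optional L2 regularization term lambda * ||w||^2
        loss = loss + weight_decay * sum((p ** 2).sum() for p in params)
    return loss
```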
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 59 Football Jazz Facebook Twitch-ES Github DBLP Methods Acc F-Score Acc F-Score Acc F-Score Acc F-Score Acc F-Score Acc F-Score LPSI 0.812 0.323 0.794 0.302 0.811 0.014 0.795 0.008 0.783 0.002 0.755 0 EPA 0.783 0.303 0.806 0.295 0.798 0.010 0.783 0.002 0.792 0.001 0.763 0 GCNSI 0.831 0.284 0.829 0.271 0.835 0.004 0.820 0.003 0.811 0.001 0.807 0 SIGN 0.809 0.513 0.794 0.495 0.819 0.452 0.790 0.443 0.775 0.373 0.768 0.248 GCSSI 0.779 0.495 0.786 0.447 0.807 0.423 0.797 0.427 0.783 0.386 0.771 0.265 ResGCN 0.824 0.502 0.795 0.475 0.816 0.440 0.823 0.429 0.790 0.379 0.785 0.251 IVGD 0.897 0.729 0.904 0.684 0.882 0.661 0.837 0.625 0.819 0.580 0.804 0.533 SL-VAE 0.887 0.716 0.846 0.672 0.865 0.651 0.827 0.618 0.803 0.542 0.810 0.516 GIN-SD 0.956 0.839 0.934 0.715 0.968 0.761 0.970 0.764 0.912 0.694 0.895 0.690 Table 2: Source detection performance in graphs with 10% incomplete nodes. The best results are highlighted in bold. Football Jazz LastFM Enron Github DBLP Methods Acc F-Score Acc F-Score Acc F-Score Acc F-Score Acc F-Score Acc F-Score LPSI 0.798 0.206 0.775 0.220 0.792 0.006 0.773 0.001 0.751 0 0.733 0 EPA 0.771 0.195 0.759 0.216 0.768 0.007 0.752 0 0.738 0 0.718 0 GCNSI 0.782 0.157 0.779 0.176 0.780 0.001 0.761 0 0.750 0 0.746 0 SIGN 0.795 0.282 0.784 0.253 0.802 0.231 0.773 0.217 0.762 0.205 0.763 0.184 GCSSI 0.772 0.259 0.785 0.236 0.795 0.210 0.781 0.201 0.778 0.194 0.764 0.150 ResGCN 0.816 0.264 0.782 0.241 0.804 0.227 0.814 0.212 0.783 0.215 0.781 0.164 IVGD 0.872 0.506 0.859 0.496 0.867 0.509 0.820 0.426 0.809 0.424 0.780 0.413 SL-VAE 0.874 0.492 0.839 0.501 0.846 0.516 0.813 0.447 0.791 0.415 0.785 0.398 GIN-SD 0.897 0.721 0.904 0.635 0.914 0.657 0.921 0.694 0.854 0.605 0.846 0.613 Table 3: Source detection performance in graphs with 20% incomplete nodes. The best results are highlighted in bold. Baselines Eight recently proposed representative source detection methods are considered as baselines, including LPSI (Wang et al. 2017) and EPA (Ali et al. 2019) based on source centrality theory; GCNSI (Dong et al. 2019), SIGN (Li et al. 2021), GCSSI (Dong et al. 2022) and ResGCN (Shah et al. 2020) that consider user states; IVGD (Wang, Jiang, and Zhao 2022) and SL-VAE (Ling et al. 2022) which incorporate user and propagation information. Comparison with State-of-the-art Methods To validate the effectiveness of GIN-SD, we conduct comprehensive comparisons with benchmark methods on eight datasets (G1-G8) for two scenarios with δ being equivalent to 0.1 and 0.2. The results are summarized as in Table 2 and Table 3 respectively. Through the experimental analysis, we have derived several key observations: Firstly, all methods exhibit commendable Acc performance, whereas the F-Score appears relatively lower. This disparity stems from the class imbalance issue, where nonsource samples significantly outnumber the source samples. In other words, the larger the difference between Acc and FScore, the more the model is affected by the class imbalance problem. Notably, three benchmark methods (LPSI, EPA, and GCNSI) exhibit relatively typical performance characteristics. Furthermore, models that incorporate the learning mechanism of information diffusion processes outperform their counterparts, as evidenced by the significantly superior performance of the latter three methods compared to the initial six. 
Additionally, the influence of user information loss is evident, as all benchmark methods manifest a substantial decline in performance compared to their optimal results. This decline stems from the challenge posed by incomplete nodes, hindering the simulations’ convergence and yielding errors in the model’s predictions. In conclusion, among all methods, GIN-SD emerges as the optimal performer. Notably, in contrast to models that overlook the propagation process, GIN-SD exhibits an average improvement of 32%, and the enhancement ranges from 5% to 18% based on the models consider the propagation process. This substantial improvement is attributed to the salient enhancements introduced by GIN-SD, including: 1) leveraging positional information to distinguish incomplete nodes, 2) employing attention mechanism to enable the model’s targeted focus on distinct nodes, and 3) introducing a class-balancing mechanism to tackle the class imbalance problem. Performance on Early Rumor Sources Detection Due to the amplified and persistent harm incurred by the rumors propagation in society, it is of vital significance to identify the sources at the early stages of rumor dissemination to curtail further spreading. To evaluate the efficacy of distinct methodologies in the context of early rumor sources detection, we initial the source detection procedure when the rumor’s influence extends to 10% to 30% of users, with an increment of 5%. The incomplete node ratio δ is set to 0.1. The results are presented in Fig. 3. The findings reveal a discernible trend: as the scale of rumors expands, all methods exhibit a decrease in source detection precision. This underscores the imperative of timely source detection during the early phases of rumor propaThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 60 Figure 3: The performance of different methods in early rumor sources detection. gation, considering both the potential societal ramifications and the inherent challenges in identifying sources. Moreover, across diverse scenarios, GIN-SD consistently attains the highest level of source detection precision, serving as empirical evidence supporting the efficacy and rationale of GIN-SD’s class-balancing mechanism, as well as its incorporation of PEM and AFM modules. Impact of Incomplete Ratio To further explore the effects of user information loss on source detection, we systematically vary the incomplete node ratio, i.e., δ in the range of 0 to 0.25 with a step size 0.05. The experimental results are depicted as in Fig. 4. The results reveal a pronounced degradation in source detection accuracy for all methods as incomplete nodes intensifies. This deterioration is primarily attributed to the perturbation caused by user information loss, impeding the effective convergence of the models. Furthermore, the accuracy of the GCSSI method, which does not consider the propagation process, is notably lower than other considered methods. In contrast, GIN-SD exhibits a remarkable superiority which amplifies with the increase of δ, thus substantiating its high applicability and resilience. Ablation Study and Analysis To validate the necessity of each module in GIN-SD, we conduct ablation studies targeting its components and summarize the results in Table 4. The variants are designed as: • GIN-SD w/o P utilizes a zero vector to replace the user’s positional encodings, i.e., X3 i = [0]n×k in Eq. (11). 
• GIN-SD w/ P’ calculates the proportion of correctly identified nodes among those that are sources and have missing information, i.e., (ˆs ∩Ψ) / (s ∩Ψ). • GIN-SD w/o A removes the attention mechanism, i.e., X′ i = P vj∈N(vi) WXj in Eq. (15). Facebook LastFM Github Methods Acc F-Score Acc F-Score Acc F-Score w/o P 0.823 0.413 0.815 0.404 0.798 0.339 w/ P’ 0.426 0.415 0.397 w/o A 0.942 0.743 0.923 0.705 0.908 0.651 w/ AS 0.801 0.223 0.782 0.210 0.769 0.179 w/ AL 0.907 0.716 0.891 0.683 0.854 0.617 w/o B 0.825 0.223 0.817 0.205 0.809 0.198 GIN-SD 0.968 0.761 0.950 0.726 0.912 0.694 Table 4: The performance of different variants for GIN-SD. Figure 4: The impact of varying degrees of user information loss on source detection accuracy. • GIN-SD w/ AS assigns higher attention weights to nodes with smaller degrees, i.e., αij ∝1/|N(vj)| in Eq. (14). • GIN-SD w/ AL assigns higher attention weights to nodes with larger degrees, i.e., αij ∝|N(vj)| in Eq. (14). • GIN-SD w/o B removes the class-balancing mechanism, i.e., ξ = 1 in Eq. (20). Positional Embedding We validate the importance of positional embedding through evaluating the variants GIN-SD w/o P and GIN-SD w/ P’; according to the experimental outcomes, the impact of incomplete nodes on GIN-SD w/o P is prominently pronounced, resulting in discernible deviations in the model’s accuracy. Moreover, the performance of GINSD w/ P’ in accurately discerning incomplete source nodes to a certain degree underscores the efficacy of incorporating positional information. Attentive Fusion We investigate the significance of attentive fusion by comparing GIN-SD with GIN-SD w/o A, GIN-SD w/ AS and GIN-SD w/ AL; based on their performance, GIN-SD w/ AS performs the worst due to an excessive focus on nodes with small degrees and limited information transmission capabilities. While GIN-SD w/ AL exhibits a higher proficiency, however, the presence of bridge nodes with strong information transmission capacity but not necessarily high degrees (Beers et al. 2023) limits its performance. Despite the exceptional performance of GIN-SD w/o A within the variant range, its uniform attention coefficients prevent it from reaching the superior capabilities demonstrated by the baseline GIN-SD framework. Class-balancing Mechanism The importance of classbalancing mechanism is validated through the relatively inferior performance of GIN-SD w/o B amongst the entire array of variants, which underscores the critical role played by the class balance mechanism in the source detection task. Conclusion This paper poses a new challenge for rumor source detection in graphs with incomplete nodes and has proposed a novel framework, GIN-SD, to tackle this problem. The key idea involves distinguishing incomplete nodes by leveraging position-based encoding of user features, followed by adaptive allocation of attention coefficients using a self-attention mechanism based on information transmission capacity. Additionally, a class balancing mechanism is devised to address prediction bias in the model. Extensive experimental results validate the effectiveness and superiority of our solution. We hope that this work, which introduces a new dimension to the field, will inspire further researches into robust deep learning models for source detection. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 61 Acknowledgments This work was supported by the National Key R&D Program (no. 2022YFE0112300); the National Natural Science Foundation for Distinguished Young Scholars (no. 
62025602); the National Natural Science Foundation of China (nos. U22B2036, 62261136549, 11931015 and 62073263); the Fok Ying-Tong Education Foundation, China (no. 171105); the Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University (no. CX2023068); and the Tencent Foundation and XPLORER PRIZE. References Ali, S. S.; Anwar, T.; Rastogi, A.; and Rizvi, S. A. M. 2019. EPA: Exoneration and prominence based age for infection source identification. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 891–900. Barth´elemy, M.; Barrat, A.; Pastor-Satorras, R.; and Vespignani, A. 2004. Velocity and hierarchical spread of epidemic outbreaks in scale-free networks. Physical Review Letters, 92(17): 178701. Beers, A.; Schafer, J. S.; Kennedy, I.; Wack, M.; Spiro, E. S.; and Starbird, K. 2023. Followback Clusters, Satellite Audiences, and Bridge Nodes: Coengagement Networks for the 2020 US Election. In Proceedings of the International AAAI Conference on Web and Social Media, volume 17, 59–71. Belkin, M.; and Niyogi, P. 2003. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6): 1373–1396. Bian, T.; Xiao, X.; Xu, T.; Zhao, P.; Huang, W.; Rong, Y.; and Huang, J. 2020. Rumor detection on social media with bi-directional graph convolutional networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 549–556. Cheng, L.; Li, X.; Han, Z.; Luo, T.; Ma, L.; and Zhu, P. 2022. Path-based multi-sources localization in multiplex networks. Chaos, Solitons & Fractals, 159: 112139. Dong, M.; Zheng, B.; Li, G.; Li, C.; Zheng, K.; and Zhou, X. 2022. Wavefront-Based Multiple Rumor Sources Identification by Multi-Task Learning. IEEE Transactions on Emerging Topics in Computational Intelligence, 6(5): 1068–1078. Dong, M.; Zheng, B.; Quoc Viet Hung, N.; Su, H.; and Li, G. 2019. Multiple rumor source detection with graph convolutional networks. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 569–578. Du, J.; Jiang, C.; Chen, K.-C.; Ren, Y.; and Poor, H. V. 2017. Community-structured evolutionary game for privacy protection in social networks. IEEE Transactions on Information Forensics and Security, 13(3): 574–589. Dwivedi, V. P.; Joshi, C. K.; Luu, A. T.; Laurent, T.; Bengio, Y.; and Bresson, X. 2020. Benchmarking graph neural networks. arXiv preprint arXiv:2003.00982. Gao, C.; Zhu, J.; Zhang, F.; Wang, Z.; and Li, X. 2022. A novel representation learning for dynamic graphs based on graph convolutional networks. IEEE Transactions on Cybernetics, 53(6): 3599–3612. Girvan, M.; and Newman, M. E. 2002. Community structure in social and biological networks. Proceedings of the National Academy of Sciences, 99(12): 7821–7826. Gleiser, P. M.; and Danon, L. 2003. Community structure in jazz. Advances in Complex Systems, 6(04): 565–573. Han, K.; Xiao, A.; Wu, E.; Guo, J.; Xu, C.; and Wang, Y. 2021. Transformer in transformer. Advances in Neural Information Processing Systems, 34: 15908–15919. Kipf, T. N.; and Welling, M. 2017. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR. Klimt, B.; and Yang, Y. 2004. The enron corpus: A new dataset for email classification research. In Machine Learning: ECML 2004: 15th European Conference on Machine Learning, Pisa, Italy, September 20-24, 2004. Proceedings 15, 217–226. Springer. Leskovec, J.; and Mcauley, J. 2012. 
Learning to discover social circles in ego networks. Advances in Neural Information Processing Systems, 25. Li, L.; Zhou, J.; Jiang, Y.; and Huang, B. 2021. Propagation source identification of infectious diseases with graph convolutional networks. Journal of Biomedical Informatics, 116: 103720. Ling, C.; Jiang, J.; Wang, J.; and Liang, Z. 2022. Source localization of graph diffusion via variational autoencoders for graph inverse problems. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1010–1020. Parshani, R.; Carmi, S.; and Havlin, S. 2010. Epidemic threshold for the susceptible-infectious-susceptible model on random networks. Physical Review Letters, 104(25): 258701. Pinto, P. C.; Thiran, P.; and Vetterli, M. 2012. Locating the source of diffusion in large-scale networks. Physical Review Letters, 109(6): 068702. Prakash, B. A.; Vreeken, J.; and Faloutsos, C. 2012. Spotting culprits in epidemics: How many and which ones? In 2012 IEEE 12th International Conference on Data Mining, 11–20. IEEE. Rozemberczki, B.; Allen, C.; and Sarkar, R. 2021. Multiscale attributed node embedding. Journal of Complex Networks, 9(2): cnab014. Rozemberczki, B.; and Sarkar, R. 2020. Characteristic functions on graphs: Birds of a feather, from statistical descriptors to parametric models. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management, 1325–1334. Scarselli, F.; Gori, M.; Tsoi, A. C.; Hagenbuchner, M.; and Monfardini, G. 2008. The graph neural network model. IEEE Transactions on Neural Networks, 20(1): 61–80. Shah, C.; Dehmamy, N.; Perra, N.; Chinazzi, M.; Barab´asi, A.-L.; Vespignani, A.; and Yu, R. 2020. Finding patient zero: Learning contagion source with graph neural networks. arXiv preprint arXiv:2006.11913. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 62 Shah, D.; and Zaman, T. 2011. Rumors in a network: Who’s the culprit? IEEE Transactions on Information Theory, 57(8): 5163–5181. Srinivasan, B.; and Ribeiro, B. 2019. On the equivalence between positional node embeddings and structural graph representations. In International Conference on Learning Representations. Veliˇckovi´c, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2017. Graph attention networks. In International Conference on Learning Representations. Wang, J.; Jiang, J.; and Zhao, L. 2022. An Invertible Graph Diffusion Neural Network for Source Localization. In Proceedings of the ACM Web Conference 2022, 1058–1069. Wang, Z.; Wang, C.; Pei, J.; and Ye, X. 2017. Multiple source detection without knowing the underlying propagation model. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31. Wen, Z.; Kveton, B.; Valko, M.; and Vaswani, S. 2017. Online influence maximization under independent cascade model with semi-bandit feedback. Advances in Neural Information Processing Systems, 30. Yang, J.; and Leskovec, J. 2012. Defining and evaluating network communities based on ground-truth. In Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics, 1–8. You, J.; Ying, R.; and Leskovec, J. 2019. Position-aware graph neural networks. In International Conference on Machine Learning, 7134–7143. PMLR. Zhou, H.; Jagmohan, A.; and Varshney, L. R. 2019. Generalized Jordan center: A source localization heuristic for noisy and incomplete observations. In 2019 IEEE Data Science Workshop (DSW), 243–247. IEEE. Zhu, K.; Chen, Z.; and Ying, L. 2017. 
Catch’em all: Locating multiple diffusion sources in networks with partial observations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31. Zhu, P.; Cheng, L.; Gao, C.; Wang, Z.; and Li, X. 2022. Locating multi-sources in social networks with a low infection rate. IEEE Transactions on Network Science and Engineering, 9(3): 1853–1865. | 2024 | 7 |
18,517 | Boosting Neural Cognitive Diagnosis with Student’s Affective State Modeling Shanshan Wang1,2, Zhen Zeng2, Xun Yang3*, Ke Xu1, Xingyi Zhang1* 1Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, HeFei, China 2Institutes of Physical Science and Information Technology, Anhui University, HeFei, China 3University of Science and Technology of China, HeFei, China {wang.shanshan, xuke}@ahu.edu.cn, [email protected], [email protected], [email protected] Abstract Cognitive Diagnosis Modeling aims to infer students’ proficiency level on knowledge concepts from their response logs. Existing methods typically model students’ response processes as the interaction between students and exercises or concepts based on hand-crafted or deeply-learned interaction functions. Despite their promising achievements, they fail to consider the relationship between students’ cognitive states and affective states in learning, e.g., the feelings of frustration, boredom, or confusion with the learning content, which is insufficient for comprehensive cognitive diagnosis in intelligent education. To fill the research gap, we propose a novel Affect-aware Cognitive Diagnosis (ACD) model which can effectively diagnose the knowledge proficiency levels of students by taking into consideration the affective factors. Specifically, we first design a student affect perception module under the assumption that the affective state is jointly influenced by the student’s affect trait and the difficulty of the exercise. Then, our inferred affective distribution is further used to estimate the student’s subjective factors, i.e., guessing and slipping, respectively. Finally, we integrate the estimated guessing and slipping parameters with the basic neural cognitive diagnosis framework based on the DINA model, which facilitates the modeling of complex exercising interactions in a more accurate and interpretable fashion. Besides, we also extend our affect perception module in an unsupervised learning setting based on contrastive learning, thus significantly improving the compatibility of our ACD. To the best of our knowledge, we are the first to unify the cognition modeling and affect modeling into the same framework for student cognitive diagnosis. Extensive experiments on real-world datasets clearly demonstrate the effectiveness of our ACD. Our code is available at https://github.com/zengzhen/ACD. Introduction Cognitive Diagnosis Modeling (CDM) serves as a fundamental task in educational data mining (Anderson et al. 2014; Nguyen 2015), aiming at revealing students’ proficiency levels on specific knowledge concepts based on their response logs (Lord 1952). The diagnosis results can effectively support the downstream intelligent education tasks, such as knowledge tracing (Piech et al. 2015; Nakagawa, *Corresponding Authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 
Iwasawa, and Matsuo 2019; Shen et al. 2021), computerized adaptive testing (Zhuang et al. 2022b; Wu et al. 2020; Zhuang et al. 2022a), and recommendation systems. The quality of CDM largely depends on the design of the interaction function that models the complex interactions between students and exercises or concepts. Early methods primarily relied on manually designed interaction models (Lord 1952; Reckase 2009). Recent methods have achieved significant performance by incorporating deep neural networks (Wang et al. 2020) and graph neural networks (Gao et al. 2021; Wang et al. 2023b) into the interaction functions. Currently, the mainstream paradigm for deep interaction functions mostly adopts the classic Item Response Theory (IRT (Lord 1952)) to model the probability of a student correctly answering an exercise based on the student learning ability and exercise difficulty. Despite their remarkable achievements, existing efforts in CDM fail to consider the affective states of students in learning. In this paper, we argue that the affective states of students are an indispensable subjective factor for cognitive behavior analysis. Some early studies (Pedro et al. 2013; San Pedro et al. 2014) in the field of educational data mining have investigated the relationships between affect and learning behavior in tutoring systems. They found that students who were bored or confused while answering the questions tended to do poorly on the test, and students with high levels of concentration obviously tended to make fewer mistakes. Besides, as shown in Figure 1, we illustrate the probability of students practicing exercises correctly in different affect dimensions based on the ASSIST17 dataset. Figure 1: The statistical correlation between student-exercise cognitive response and affective state is illustrated in (a) and (b) based on the ASSIST17 dataset. A toy example of the affect-aware cognitive diagnosis is shown in (c). Our observations are consistent with the aforementioned research findings, which clearly supports our argument in this paper that the affective factor should be carefully modeled into the interaction function for CDM. How to effectively perceive the students' affects and further leverage the affective cue to boost the students' cognitive diagnosis is a crucial research question, but has not been carefully studied so far in CDM. To fill the research gap, we develop a novel and flexible Affect-aware Cognitive Diagnosis (ACD) approach which can effectively diagnose students' knowledge proficiency levels on specific knowledge concepts in a more accurate and interpretable fashion. To be specific, our proposed ACD framework mainly consists of three parts. (1) We first design a student affect perception module under the assumption that students' affective states should be influenced by not only the students' personalized affect traits but also the difficulty of exercises.
It is inspired by the Flow theory in (Csikszentmihalyi 1992) stating that specific affective states emerge depending on the degree of challenge and skill that is present for an activity. (2) Then, we utilize the predicted affective distribution to infer the two important subjective factors of students, i.e., guessing and slipping, in the Deterministic Inputs, Noisy And gate (DINA) (De La Torre 2009) model, which is significantly different from original design in DINA where guessing and slipping are simply treated as two exercise-specific parameters. (3) Finally, we can easily integrate our learned guessing and slipping parameters with the estimated response score from basic neural cognitive diagnosis frameworks (Gao et al. 2021; Wang et al. 2023b) based on the DINA model. Moreover, our affect perception module can not only be optimized in supervised learning manner on datasets with auxiliary affect annotations, but also be extended in an unsupervised learning setting based on contrastive learning, thereby being easily integrated with existing CDM frameworks. To the best of our knowledge, we are the first to unify the cognition modeling and affect modeling into the same framework for student cognitive diagnosis. Overall, our proposed ACD firstly demonstrates the great potential of studying the relationships between affect and cognition in CDM. Despite the simplicity of our implemented ACD approach, we can significantly improve the strong CDM baselines, e.g., NCD (Wang et al. 2020) and RCD (Gao et al. 2021), in different settings. The main merit of this work is that we present a simple yet effective solution to exploit the complementation of DINA model and existing IRT based CDM methods, benefiting from the students’ affective modeling. Our key contributions are summarized as follows: • We propose a novel and effective affect-aware CDM approach, which clearly validates that the affective state is an indispensable subjective factor for CDM. • We develop a plug-and-play affect perception module which can be optimized either in fully-supervised or unsupervised learning setting, showing high compatibility. • Extensive experiments and analysis on several benchmark datasets with different CDM baselines clearly demonstrate the rationale and effectiveness of our ACD. Preliminaries We first briefly formulate the affect-aware CDM task. Considering that our ACD method depends on the DINA paradigm, we further describe the DINA model in detail. Problem Definition: Let M, N, K, and Z denote the number of students, exercises, knowledge concepts, and affect labels, respectively. S = {s1, · · · , sM}, E = {e1, · · · , eN}, and C = {c1, · · · , cK}, denote the sets of students, exercises, and knowledge concepts. Let Q = {Qij}N×K ∈ {0, 1}N×K be Q-matrix that records the relationship between exercises and knowledge concepts, where Qij = 1 if exercise ei relates to the concept cj and Qij = 0 otherwise. The student practice records R are denoted as the set of (s, e, r), where r ∈{0, 1} represents the binary score of student s on exercise e. The corresponding affect vector of student s is denoted as a = {a1, · · · , az, · · · , aZ}, where az ∈(0, 1) denotes the value of z-th affect label, e.g., concentrating or confused while student s doing exercise e. 
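For concreteness, the notation above can be instantiated with toy values as follows; this is a purely illustrative sketch, the numbers are made up, and the four affect dimensions follow the ASSIST datasets described later.

```python
import numpy as np

# Toy instantiation of the notation above (values are illustrative only):
M, N, K, Z = 3, 4, 2, 4                      # students, exercises, concepts, affect labels
Q = np.array([[1, 0],                        # Q-matrix: exercise-to-concept links
              [1, 1],
              [0, 1],
              [0, 1]])
# Response logs R: (student, exercise, score) triples with binary scores.
R = [(0, 1, 1), (0, 3, 0), (2, 0, 1)]
# One affect vector per logged interaction, e.g. (bored, concentrating, confused, frustrated).
A = {(0, 1): np.array([0.1, 0.7, 0.0, 0.2]),
     (0, 3): np.array([0.6, 0.2, 0.5, 0.3]),
     (2, 0): np.array([0.0, 0.9, 0.1, 0.0])}
```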
Given the students’ practice logs R, the Q-matrix, and the annotation of affective state vector a, the goal of our affectaware cognitive diagnosis is to learn an effective affectaware cognitive diagnosis interaction function Fθ(·) that can jointly estimate students’ mastery levels on each knowledge concept through student performance prediction and also infer students’ affective states in a multi-task learning manner. DINA Model: The DINA (De La Torre 2009) model is one of the most typical CDM theories. It introduces two exercise factors: slipping ˆs and guessing ˆg. ˆs denotes the probability that a student has sufficient ability but makes an incorrect response due to a slipping error for the exercise, and ˆg denotes the probability that the student does not know how to answer the exercise but guesses correctly. In DINA, ˆs and ˆg are only defined as exercise-specific factors, ignoring the individual differences between students. Let ηij be the mastery of student si on exercise ej, which is calculated by ηij = Q k θβjk ik , where both θik and βjk are binary variables. θik denotes whether the student sj has mastered the knowledge concept ck. βjk means whether the exercise ej contains ck. Based on the two factors, the probability of student si practising exercise ej correctly is defined as: ˆyi,j = ˆg1−ηi,j j (1 −ˆsj)ηi,j. (1) Methodology We provide a comprehensive overview of our Affect-aware Cognitive Diagnosis model (ACD) based on the NCD framework (Wang et al. 2020). Next, we briefly introduce how to extend our ACD in the unsupervised setting without affect The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 621 Student Affect Perception Affect-aware Cognitive Diagnosis Basic Cognitive Diagnosis Guessing Slipping Student Exercise Figure 2: The pipeline of Affect-aware Cognitive Diagnosis. labels. Note that our method can also be easily applied to other cognitive diagnosis frameworks. Affect-aware Cognitive Diagnosis (ACD) As shown in Figure 2, ACD can be mainly divided into three modules: student affect perception, basic cognitive diagnosis, and affect-aware cognitive diagnosis. The student affect perception module aims to predict the affects of students on specific exercises. Furthermore, through the affects, we can predict the probabilities of their guessing and slipping states. The basic cognitive diagnosis module can be an existing cognitive diagnosis model (here we present NCD for example) which utilizes the student’s ability and exercise difficulty to predict the accuracy of response. Finally, the affect-aware cognitive diagnosis module combines the affect-aware guessing and slipping parameters along with the basic cognitive diagnosis results to make the final prediction of student responses based on the diagnostic formula of the DINA model. Student Factors: In ACD, we design two factors hs and ha on each student. hs denotes the mastery level of student on knowledge concepts. ha is the latent affect trait of student. Let xs be the one-hot vector of student, the factors hs and ha can be obtained by multiplying xs and trainable matrices A and S respectively: hs = sigmoid(xs × S), (2) ha = sigmoid(xs × A), (3) where hs ∈(0, 1)1×K, ha ∈(0, 1)1×d, xs ∈{0, 1}1×M, S ∈RM×K and A ∈RM×d, and d denotes the dimension of the latent affect trait. Exercise Factors: The exercise factors include the knowledge concepts correlation vector Qe, which represents the knowledge concepts involved in each exercise, the exercise difficulty hdiff, and the exercise discrimination hdisc. 
Let xe be the one-hot vector of exercise, and Qe can be computed by Qe j = xe ×Q, where Qe ∈{0, 1}1×K, xe ∈{0, 1}1×N. hdiff represents the difficulty of the exercise on each knowledge concept. And hdisc reflects the exercise’s capacity to discriminate between students exhibiting high and low mastery levels of knowledge concepts. We calculate them by: hdiff = sigmoid(xe × E), (4) hdisc = sigmoid(xe × D), (5) where hdiff ∈(0, 1)1×K, hdisc ∈(0, 1). E ∈RN×K and D ∈RN×1 are trainable matrices. Student Affect Perception: This module is new-designed in our method. Considering that when the student is engaged in exercise-solving, the affect is influenced by both the affect latent trait and the difficulty of the exercise. Different students may exhibit different affects when facing the same exercise. Even the same student would exhibit different affects when facing different difficult exercises. We utilize ha and hdiff to predict the affect of student facing exercise by a fully connected layer: ˆa = sigmoid(Wa × [ha, hdiff]T + ba), (6) where ˆa ∈(0, 1)Z contains the predicted value of each affect. Wa ∈RZ×(d+K) represents the weight matrix and ba is bias. [·] denotes the concatenation operation. Since the affects of students is labeled as continuous values ranging from 0 to 1, we choose the mean square error (MSE) loss function for the affect perception module: La = X i ||ˆai −ai gt||2, (7) where ai gt denotes of the annotated affect vector of i-th student-exercise interaction. The affect loss La is finally averaged over the mini-batch. Note that when the affect annotation is missing in the training data, the student affect perception module in ACD can be optimized by unsupervised learning, which will be detailed later. Basic Cognitive Diagnosis: We use the NCD (Wang et al. 2020) framework as our basic cognitive diagnostic module to implement ACD for its simplicity and effectiveness. The student response score y∗in NCD is predicted by an interaction function composed of multiple fully connected layers. Its input is formulated as follows: f 0 = Qe ◦(hs −hdiff) × hdisc, (8) where ◦is the element-wise product. The output layer in NCD is formulated as follows: y∗= sigmoid(Wn × f n−1 + bn). (9) The weight matrices of all layers are restricted to positive values to satisfy the monotonicity assumption. Affect-aware Cognitive Diagnosis: Intuitively, the performance of the student is often correlated with his affect during the exercise-solving process. For example, even if a student possesses the capability to answer an exercise correctly, a low level of concentration during interaction could still lead to an incorrect response. To fit the influence of affect on the interaction process, we model the probabilities of The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 622 Unsupervised Affect Perception Module Affect with Response Label 0 Affect with Response Label 1 Positive Pair Negative Pair Figure 3: Unsupervised affect perception module. student’s guessing and slipping based on the estimated affect distribution, which are as follows: ˆg = sigmoid(Wg × ˆa + bg), (10) ˆs = sigmoid(Ws × ˆa + bs), (11) where Wg, Ws ∈R1×Z denote the weight matrices of guessing and slipping, respectively, and bg, bs are bias terms. ˆg ∈(0, 1) represents the probability that the student answers the exercise correctly due to guessing, and ˆs ∈(0, 1) represents the probability that the student makes a mistake on exercise due to slipping. 
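A minimal PyTorch sketch of the affect perception head (Eq. (6)) and the affect-conditioned guessing/slipping estimates (Eqs. (10)–(11)) is given below; the class and argument names are illustrative rather than the authors' released code, and the outputs feed the DINA-style combination described next.

```python
import torch
import torch.nn as nn

class AffectPerception(nn.Module):
    """Sketch of the student affect perception head (Eq. (6)) and the
    affect-conditioned guessing/slipping estimates (Eqs. (10)-(11))."""
    def __init__(self, num_students, num_concepts, affect_dim=128, num_affects=4):
        super().__init__()
        self.affect_trait = nn.Embedding(num_students, affect_dim)   # latent affect trait h_a
        self.to_affect = nn.Linear(affect_dim + num_concepts, num_affects)
        self.to_guess = nn.Linear(num_affects, 1)
        self.to_slip = nn.Linear(num_affects, 1)

    def forward(self, student_ids, exercise_difficulty):
        # student_ids: (B,) long tensor; exercise_difficulty: (B, K) values of h_diff.
        h_a = torch.sigmoid(self.affect_trait(student_ids))          # (B, d)
        a_hat = torch.sigmoid(self.to_affect(torch.cat([h_a, exercise_difficulty], dim=-1)))
        guess = torch.sigmoid(self.to_guess(a_hat)).squeeze(-1)      # \hat{g} in (0, 1)
        slip = torch.sigmoid(self.to_slip(a_hat)).squeeze(-1)        # \hat{s} in (0, 1)
        return a_hat, guess, slip
```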
By combining ˆg, ˆs, and y∗, we adopt the well-known DINA diagnostic formula to infer the affectaware student response score ˆy as follows: ˆy = ˆg(1 −y∗) + (1 −ˆs)y∗, (12) where the DINA formula in Eq. (12) is a reformulated version in FuzzyCDM (Liu et al. 2018) which is more suitable for predicting student’s performance using neural networks. Inspired by DINA, the first part in Eq. (12) represents the probability that student s doesn’t know how to solve exercise e but guessed correctly, and the second part represents the probability that, based on the student’s capability, he should have answered the exercise correctly but made a mistake. Given that students’ response labels r are binary (0 for incorrect and 1 for correct), the binary cross-entropy loss is leveraged as the loss function for cognitive diagnosis: LCDM = − X i (ri log ˆyi + (1 −ri) log(1 −ˆyi)). (13) Training: In our approach, we jointly train the two losses in Eq. (7) and Eq. (13) in our ACD. The student affect perception module aims to predict the student’s affective distribution based on the student’s latent affect trait ha and exercise difficulty hdiff. The predicted affect provides assistance to the basic cognitive diagnosis. Specifically, the affectaware cognitive diagnosis module utilizes affect-related parameters to predict students’ responses and diagnose their levels of knowledge concept mastery. Synchronously, the accuracy of cognitive diagnosis results could also influence the student affect latent traits. We optimize the two losses simultaneously by employing a joint loss function: L = LCDM + λLa, (14) where λ denotes the trade-off parameter for the affect loss. Dataset ASSIST17 ASSIST12 Junyi Students 1709 27633 10000 Exercises 3162 53086 835 Knowledge concepts 102 265 835 Response records 311569 2013626 220799 AVG#score 0.4365 0.6971 0.6516 Table 1: Statistics of experimental datasets. Unsupervised Contrastive ACD Model (CACD) In fact, not all the datasets have the affect labels. When there are no available affect labels, it is not-trivial to optimize the student affect perception module. Considering the statistical correlation between the students’ affects and their cognitive response results illustrated in Figure 1 (a) and (b), we utilize the contrastive learning strategy to design an unsupervised affect perception module, as illustrated in Figure 3, to replace the student affect perception module shown in Figure 2. We refer to this new designed model suitable for datasets without affect labels as Contrastive ACD (CACD). Here we assume that positive student-exercise interactions with the response results more likely correspond to the same affective state of students, while negative student-exercise interactions more likely correspond to different affective states. Under this assumption, we can collect sufficient positive pairs and negative pairs for contrastive learning to optimize the affect vector ˆa in Eq. (6). To replace La, the contrastive affect perception loss Lca is formulated as follows: Lca = − X i 1 |Pi| X j∈Pi log exp(sim(ˆai, ˆaj)/τ) P k∈Ni exp(sim(ˆai, ˆak)/τ), (15) where Pi denotes the set of positive pairs constructed for i-th student-exercise interaction, and Ni denotes the set of negative pairs constructed for i-th interaction. sim(·, ·) denotes the cosine similarity between the two estimated affect vectors, and τ is the temperature of contrastive loss. 
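To illustrate, a hedged sketch of the contrastive affect-perception loss Lca in Eq. (15) could look as follows, assuming a mini-batch of predicted affect vectors and their binary response labels; the temperature value is an arbitrary placeholder, not the paper's setting.

```python
import torch
import torch.nn.functional as F

def contrastive_affect_loss(affects, responses, tau=0.5):
    """Sketch of L_ca (Eq. (15)): interactions with the same response label form
    positive pairs, those with different labels form negative pairs."""
    a = F.normalize(affects, dim=-1)              # cosine similarity via normalized dot products
    sim = a @ a.T / tau                           # (B, B) pairwise similarities
    same = responses.unsqueeze(0) == responses.unsqueeze(1)
    eye = torch.eye(len(responses), dtype=torch.bool, device=affects.device)
    loss = affects.new_zeros(())
    for i in range(len(responses)):
        pos = same[i] & ~eye[i]                   # P_i: same label, excluding the anchor itself
        neg = ~same[i]                            # N_i: different label
        if pos.any() and neg.any():
            denom = torch.exp(sim[i][neg]).sum()
            loss = loss - torch.log(torch.exp(sim[i][pos]) / denom).mean()
    return loss
```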
Remarks Our approach can be regarded as a plug-andplay module which is added to existing cognitive diagnostic models to improve the performance. Experiments To demonstrate the generalization and effectiveness of affect perception in ACD, we first compare the model incorporating affect perception with its baselines. Then we will analyze and interpret the models. Datasets: In order to verify the generalization, we evaluate our model on three real datasets, in which two have affect labels, while one does not. A brief overview of the datasets is described as follows: • ASSIST171 is collected for ASSISTments Longitudinal Data Mining Competition in 2017. • ASSIST122 is the data for the school year 2012-2013 with affect. The affect data was extracted by researchers 1https://sites.google.com/view/assistmentsdatamining/dataset 2https://sites.google.com/site/assistmentsdata/2012-13-schooldata-with-affect The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 623 Datasets ASSIST17 ASSIST12 Models ACC RMSE AUC ACC RMSE AUC DINA baseline 64.84±0.09 46.66±0.02 69.64±0.06 71.45±0.07 43.80±0.04 69.49±0.11 ACD 71.15±0.26 43.62±0.18 77.72±0.29 73.76±0.08 42.12±0.05 74.21±0.06 IRT baseline 65.96±0.10 46.53±0.03 72.37±0.07 73.11±0.04 42.67±0.01 72.67±0.03 ACD 71.64±0.23 43.25±0.10 78.55±0.22 74.26±0.07 41.71±0.02 75.44±0.07 MIRT baseline 68.17±0.02 46.48±0.03 74.13±0.00 73.81±0.00 44.44±0.00 72.58±0.00 ACD 72.43±0.13 42.78±0.07 79.46±0.13 74.38±0.07 41.63±0.06 75.64±0.11 NCD baseline 69.21±0.87 45.13±0.60 75.34±1.01 74.23±0.05 41.95±0.01 74.78±0.04 ACD 72.42±0.10 42.84±0.07 79.49±0.10 74.98±0.05 41.37±0.06 76.02±0.16 RCD baseline 71.55±0.15 43.39±0.10 78.10±0.03 74.49±0.04 41.75±0.03 75.22±0.05 ACD 72.31±0.10 42.84±0.05 79.27±0.08 74.61±0.02 41.43±0.00 76.19±0.01 SCD baseline 71.59±0.10 43.35±0.05 78.19±0.01 74.64±0.03 41.48±0.02 75.94±0.04 ACD 72.69±0.05 42.79±0.02 79.40±0.07 74.70±0.01 41.37±0.00 76.10±0.02 Table 2: Experimental results on student performance prediction in percentage. from student logs (Wang, Heffernan, and Heffernan 2015; Botelho, Baker, and Heffernan 2017). This dataset has been widely used in research related to affect. • junyi3 is collected from the Chinese e-learning website Junyi Academy (Chang, Hsu, and Chen 2015). This dataset is widely used in CDM. Both ASSIST17 and ASSIST12 include four types of affect: bored, concentrating, confused, and frustrated. Following the setting in (Gao et al. 2021), we only retain the initial response log of students, and exclude students with less than 15 response records. The processed data statistics are presented in Table 1. 80% data of each student is leveraged for training and the remaining 20% for testing. Baselines: We use the following baselines for experiments: • DINA (De La Torre 2009) modeled the student and exercise factors as binary vectors, incorporating guessing and slipping parameters to estimate the performance. • IRT (Lord 1952) modeled students and exercises as unidimensional traits and utilized the logistic model to represent their interactions. • MIRT (Reckase 2009) extended the traits of students and exercises in IRT to multi-dimension. • NCD (Wang et al. 2020) leveraged the neural network to model the interactions between students and exercises. • RCD (Gao et al. 2021) modeled the relationships between knowledge concepts and introduced GCN. • SCD (Wang et al. 2023b) utilized self-supervised graph learning to address the long-tailed problem in CDM. 
Experimental Settings: The Prediction Accuracy (ACC), Root Mean Square Error (RMSE), and Area Under an ROC Curve (AUC) are selected as evaluation metrics to evaluate the results in our method. For fairness, we use the same hyperparameter settings for all models. We set the coefficient λ for the affect prediction loss in Eq. 14 to 1 to avoid over adjusting. and the dimension of the affect latent traits for students, denoted as d, to 128 for good performance. The 3https://pslcdatashop.web.cmu.edu/DatasetInfo?datasetId= 1198 Dataset junyi Models ACC RMSE AUC DINA baseline 74.22 41.76 78.72 CACD 76.96 40.05 82.18 IRT baseline 67.60 42.68 77.50 CACD 77.17 39.76 82.64 MIRT baseline 75.13 41.17 79.89 CACD 77.27 39.73 82.64 NCD baseline 74.43 41.72 79.09 CACD 77.35 39.70 82.73 RCD baseline 77.16 39.63 82.62 CACD 77.42 39.56 82.94 SCD baseline 77.30 39.61 82.77 CACD 77.45 39.59 82.90 Table 3: The results on the dataset without affect labels. network parameters are initialized using Xavier initialization following the (Wang et al. 2020). All the weights are sampled from N 0, 2 nin+nout , where nin refers to the input dimensionality of a layer, and nout refers to the output dimensionality. We implement all the baselines and the ACD version by PyTorch. The experiments are conducted on the Intel Core i9-10900X CPU and a GeForce RTX 3090 GPU. Experimental Results and Analysis Performance Comparison: Table 2 shows the performance comparison on datasets with affect labels. The error bars after ’±’ represent the standard deviations of 5 evaluation runs for each model. From the table, it can be observed that all the methods incorporating our affect perception outperform their respective baselines. The improvements are particularly remarkable for the earlier methods. The superior performance of NCD (ACD) and MIRT (ACD) over the latest baselines suggests that introducing affect perception brings greater benefits than improving the diagnostic function. Compared with RCD and SCD, RCD (ACD) and SCD (ACD) show relatively smaller improvements. This is because the graph neural networks employed in SCD and RCD, although enhancing performance, could weaken the discriminative power of features, which conflicts with our intention of extracting personalized parameters from The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 624 Student Performance Affect Models ACC RMSE AUC RMSE MAE NCD 69.21 45.13 75.34 45.34 36.39 ACD-w/o-L 71.54 43.37 78.40 42.48 33.12 CACD 71.76 43.25 78.54 32.41 23.28 ACD 72.42 42.84 79.49 22.10 14.64 Oracle 79.84 40.27 80.35 0 0 Table 4: The results of affect prediction on ASSIST17. student affect conception ACD RMSE AUC student exercise ✔ 63.54 47.19 67.83 ✔ 65.51 46.35 70.59 ✔ ✔ 72.42 42.84 79.27 Table 5: The results of ablation study on ASSIST17. affect. Nevertheless, the introduction of affect perception still leads to improvement. Table 3 shows the results on the dataset without affect labels. The CACD indicates the use of contrastive loss in affect perception. Even in the absence of affect labels, our approach still achieved large improvements. This demonstrates the effectiveness of affect attributes. Without loss of generality, we opt to consider NCD as the basic cognitive diagnosis module of ACD in the context of our subsequent analysis. Reliability of Affect Perception: To explain the improvements brought by the affect perception, we evaluate the accuracy of the predicted affect on ASSIST17. 
As the affect prediction can be regarded as a regression task, besides the conventional metrics of ACC, RMSE and AUC on student performance prediction, we use RMSE and Mean Absolute Error (MAE) as evaluation metrics on affect prediction. For comparison, in addition to the baseline model NCD and our models ACD and CACD, we design two variants, ACD-w/o-L and "Oracle". ACD-w/o-L represents the variant where no affect prediction loss is imposed. "Oracle" means directly extracting personalized parameters from the affect labels. Notably, because Oracle uses the test-set affect labels as input, it is actually an upper-bound model and is designed only for comparison. In NCD, as there is no affect prediction involved, random affect is used instead. The results are shown in Table 4. It is obvious that higher accuracy in affect prediction corresponds to more accurate cognitive diagnostic results. This suggests that the integration of student affect prediction and cognitive diagnosis is meaningful and effective, and that our student affect prediction module can accurately predict affects. ACD-w/o-L exhibits a significant improvement compared with NCD, indicating that the personalized student parameters incorporated into our designed interaction function are effective. While CACD shows only marginal improvement in student performance prediction compared with ACD-w/o-L, it demonstrates a considerable enhancement in affect prediction results. This suggests that CACD has stronger interpretability, which aligns with the requirements of CDM.

Figure 4: Performance under different trade-off parameters λ (ACC and RMSE curves against λ from 0.001 to 10).

Figure 5: The impact of predicted affect on response results (proportion of correct responses versus the value of concentrating and of confused).

Ablation Study on Student Affect Perception: In ACD, we predict students' affects during exercise-solving based on two critical factors: the latent affect traits of students and the difficulty of exercises. To verify the necessity of simultaneously considering both factors, we conduct a comparative study by incorporating only one of these factors within the student affect perception module. The results are shown in Table 5. When only considering exercise difficulty, the model is similar to the DINA model, and its performance is superior to that achieved by solely considering student affect traits. This indicates that different students might experience similar affect while attempting exercises of the same difficulty. When simultaneously incorporating both factors, i.e., ACD, a significant enhancement in performance is achieved. This substantiates our hypothesis that students' affects during exercise-answering are concurrently correlated with both student affect traits and exercise difficulty.

Hyper-parameter Analysis: λ is the trade-off parameter in Eq. (14). We vary it from 0.001 to 10. As shown in Figure 4, as the value of λ gradually increases, the performance of ACD improves due to more accurate affect predictions, and the optimal performance is achieved when λ is set to 1. When λ becomes excessively large, performance declines due to the neglect of the crucial CDM objective.
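The exact form of Eq. (14) is given earlier in the paper; purely to illustrate what the trade-off parameter λ balances, a hedged sketch of a weighted combination of a diagnosis loss and an affect-prediction loss might look as follows (loss choices here are assumptions, not the paper's exact terms):

```python
import torch.nn.functional as F

def total_loss(pred_response, true_response, pred_affect, true_affect, lam: float = 1.0):
    """Weighted combination of the response-prediction loss and the affect loss;
    `lam` plays the role of the trade-off parameter lambda in Eq. (14)."""
    l_cdm = F.binary_cross_entropy(pred_response, true_response)   # student performance prediction
    l_affect = F.mse_loss(pred_affect, true_affect)                # affect regression
    return l_cdm + lam * l_affect
```

With lam too small the affect module is barely trained; with lam too large the diagnosis objective is drowned out, which matches the trend observed in Figure 4.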
Correlation Between Affect and Response Results: As shown in Figure 1, there is a certain correlation between students’ affects and their response. In order to verify the interpretability of our approach, i.e., whether the predicted affects reflect the aforementioned correlation, we computed the distribution between the predicted affects and response logs. We present the correct response probability on different affects concentrating and confused predicted by ACD on ASSIST17 in Figure 5. The Figure 5 demonstrates that higher levels of concentrating have a positive impact on response quality, whereas increased confused often lead to unsuccessful responses. The impact of predicted affects on response results remains consistent with the ground truth. Comparing Figure 1 and Figure 5, it is obvious that there are The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 625 Exercise #1 #2 #3 #4 #5 #6 Concept A B B C D E Student#1 ✔ ✔ ✘ ✘ ✘ ✔ Student#2 ✘ ✘ ✔ ✘ ✔ ✔ Table 6: Response logs of two students in ASSIST12. Student#1 Student#2 Ex-#2 Ex-#3 Ex-#2 Ex-#3 Frustrated 0.3 0.3 0.3 0.5 Confused 0 0 0 0.6 Concentrating 0.7 0.1 0.7 0.7 Bored 0.4 0.7 0.1 0.3 guessing ✔ slipping ✔ Table 7: The affect of students and ACD’s prediction of guessing and slipping probability. Instances with predictive outcomes exceeding 0.5 are considered to have occurred and are denoted by ✔. Note that ”Ex-” refers to ”Exercise”. some disparities between the predicted distribution of affect and the ground truth, the reason may be that the responses are influenced by a combination of various affects and student abilities rather than single affect. Case Study: In order to conduct a more comprehensive analysis of the role of affect, here we present a diagnosis example of NCD and ACD on ASSIST12. Table 6 displays a part of the response logs. Figure 6 illustrates the diagnostic results of NCD and ACD. Both models show similar diagnostic results for the two students across all concepts except for concept B. However, in the context of concept B, NCD did not provide interpretable diagnostic results. Student#1 answered exercise#2 correctly but failed on exercise#3, while student#2 exhibited the opposite pattern. The diagnostic results from NCD indicate a similar proficiency for concept B of both students, which fails to provide a satisfactory explanation for this phenomenon. In fact, all CDMs based on the IRT paradigm are confronted with this issue. Different form them, our ACD takes students’ affects into account. Table 7 presents not only the labels of affect, but also the prediction of ACD on guessing and slipping probability based on students’ affects. For student#1, when answering exercise#3, ACD concludes that although student#1 answered exercise#3 incorrectly, he actually possesses a solid understanding of concept B but made a slipping. In contrast, student#2, even though he answered exercise#3 correctly, ACD suggests that student#2 might have guessed the answer without a true grasp of concept B. As a result, student#1 possesses a higher proficiency on concept B compared to student#2. Related Work Cognitive Diagnosis Modeling: Cognitive diagnosis modeling (CDM) is a fundamental task in intelligent education. Existing work primarily derives from IRT (Lord 1952) and DINA (De La Torre 2009) paradigms. IRT (Lord 1952) aimed to project the latent features of students and items and predict the performance with a manually designed function. 
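As a concrete illustration of such a manually designed interaction, a minimal 2PL-style logistic sketch is given below; the exact parameterization differs across IRT variants and this is not ACD's interaction function.

```python
import torch

def irt_predict(theta: torch.Tensor, beta: torch.Tensor, a: float = 1.0) -> torch.Tensor:
    """Probability of a correct response given student ability `theta`, exercise
    difficulty `beta`, and discrimination `a` (illustrative 2PL form)."""
    return torch.sigmoid(a * (theta - beta))

p = irt_predict(torch.tensor(0.5), torch.tensor(0.2))  # about 0.57 for a slightly able student
```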
A B C D E A Student#1 Student#2 (a) NCD A B C D E A (b) ACD Figure 6: The diagnosis results of NCD and ACD. It is the basic model in CDM. Then, MIRT (Reckase 2009) expanded the features of the IRT (Lord 1952) model into multidimensional vectors to enhance its expressive power. NeuralCD (Wang et al. 2020) replaced the manually designed function with neural networks to fit the interactions between students and items. RCD (Gao et al. 2021) pioneered the application of graph networks in cognitive diagnosis and modeled the relationships between knowledge concepts. Lately, SCD (Wang et al. 2023b) enhanced the model’s performance on long-tailed problem. As another foundational paradigm, DINA (De La Torre 2009) modeled the student and exercise factors as binary vectors, incorporating guessing and slipping parameters to estimate the performance. However, existing works based on the DINA (De La Torre 2009) model (e.g., FuzzyCDM (Liu et al. 2018)) considered guessing and slipping parameters as factors associated with the exercises, ignoring the personalized response patterns of individual students. Student Affect Related Research: Affect as the expression of personalized states is involved in our method. Existing works (San Pedro et al. 2013; Ocumpaugh et al. 2014; Wang, Heffernan, and Heffernan 2015) primarily focused on detecting affect states during students’ exercise-solving processes based on their interaction logs. (Botelho, Baker, and Heffernan 2017) refitted the detectors with deep learning. However, existing works mainly focused on utilizing the interaction logs between students and exercises to label the affect. There has been no work specifically addressing affect perceptive cognitive diagnosis. Conclusion This paper presents an affect-aware cognitive diagnosis (ACD) model. Specifically, we design a student affect perception module to predict the affects exhibited by students during exercise solving and extract personalized interaction parameters from the predicted affects to enhance the interaction process in CDM. We then introduce a contrastive loss based on response results to extend the model to datasets without affect labels. Extensive experiments validate the effectiveness and generality of our model. In the future, we will integrate more techniques, such as domain generalization (Wang et al. 2023a; Chang et al. 2023) or federated learning (Dai et al. 2023) to enhance our ACD model. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 626 Acknowledgments This work is supported by National Natural Science Foundation of China (Grant No. 62106003, 62206003, 62272435, U22A2094, U21A20512), National Key Research and Development Project (NO. 2018AAA0100105), and the University Synergy Innovation Program of Anhui Province (Grant GXXT-2022-047). References Anderson, A.; Huttenlocher, D.; Kleinberg, J.; and Leskovec, J. 2014. Engaging with massive online courses. In Proc. Int. Conf. World Wide Web, WWW, 687–698. Botelho, A. F.; Baker, R. S.; and Heffernan, N. T. 2017. Improving sensor-free affect detection using deep learning. In Artificial Intelligence in Education, 40–51. Springer. Chang, H.-S.; Hsu, H.-J.; and Chen, K.-T. 2015. Modeling Exercise Relationships in E-Learning: A Unified Approach. In EDM, 532–535. Chang, T.; Yang, X.; Zhang, T.; and Wang, M. 2023. Domain Generalized Stereo Matching via Hierarchical Visual Transformation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9559–9568. 
Csikszentmihalyi, M.; and Csikszentmihalyi, I. S. 1992. Optimal experience: Psychological studies of flow in consciousness. Cambridge university press. Dai, R.; Yang, X.; Sun, Y.; Shen, L.; Tian, X.; Wang, M.; and Zhang, Y. 2023. FedGAMMA: Federated Learning With Global Sharpness-Aware Minimization. IEEE Transactions on Neural Networks and Learning Systems. De La Torre, J. 2009. DINA model and parameter estimation: A didactic. Journal of educational and behavioral statistics, 34(1): 115–130. Gao, W.; Liu, Q.; Huang, Z.; Yin, Y.; Bi, H.; Wang, M.-C.; Ma, J.; Wang, S.; and Su, Y. 2021. Rcd: Relation map driven cognitive diagnosis for intelligent education systems. In SIGIR - Proc. Int. ACM SIGIR Conf. Res. Dev. Inf. Retr., 501– 510. Liu, Q.; Wu, R.; Chen, E.; Xu, G.; Su, Y.; Chen, Z.; and Hu, G. 2018. Fuzzy cognitive diagnosis for modelling examinee performance. ACM Trans. Intell. Syst. Technol., 9(4): 1–26. Lord, F. 1952. A theory of test scores. Psychometric monographs. Nakagawa, H.; Iwasawa, Y.; and Matsuo, Y. 2019. Graphbased knowledge tracing: modeling student proficiency using graph neural network. In IEEE/WIC/ACM International Conference on Web Intelligence, 156–163. Nguyen, T. 2015. The effectiveness of online learning: Beyond no significant difference and future horizons. MERLOT Journal of online learning and teaching, 11(2): 309– 319. Ocumpaugh, J.; Baker, R.; Gowda, S.; Heffernan, N.; and Heffernan, C. 2014. Population validity for educational data mining models: A case study in affect detection. British Journal of Educational Technology, 45(3): 487–501. Pedro, M. O.; Baker, R.; Bowers, A.; and Heffernan, N. 2013. Predicting college enrollment from student interaction with an intelligent tutoring system in middle school. In EDM. Piech, C.; Bassen, J.; Huang, J.; Ganguli, S.; Sahami, M.; Guibas, L. J.; and Sohl-Dickstein, J. 2015. Deep knowledge tracing. Advances in neural information processing systems, 28. Reckase, M. D. 2009. Multidimensional item response theory models. In Multidimensional item response theory, 79– 112. Springer. San Pedro, M. O.; Ocumpaugh, J.; Baker, R. S.; and Heffernan, N. T. 2014. Predicting STEM and Non-STEM College Major Enrollment from Middle School Interaction with Mathematics Educational Software. In EDM, 276–279. San Pedro, M. O. Z.; Baker, R. S. d.; Gowda, S. M.; and Heffernan, N. T. 2013. Towards an understanding of affect and knowledge from student interaction with an intelligent tutoring system. In Artificial Intelligence in Education, 41– 50. Springer. Shen, S.; Liu, Q.; Chen, E.; Huang, Z.; Huang, W.; Yin, Y.; Su, Y.; and Wang, S. 2021. Learning process-consistent knowledge tracing. In Proc. ACM SIGKDD Int. Conf. Knowl. Discov. Data Min., 1452–1460. Wang, F.; Liu, Q.; Chen, E.; Huang, Z.; Chen, Y.; Yin, Y.; Huang, Z.; and Wang, S. 2020. Neural cognitive diagnosis for intelligent education systems. In Proc. AAAI Conf. Artif. Intell., AAAI, volume 34, 6153–6161. Wang, S.; Chen, Y.; He, Z.; Yang, X.; Wang, M.; You, Q.; and Zhang, X. 2023a. Disentangled Representation Learning with Causality for Unsupervised Domain Adaptation. In Proceedings of the 31st ACM International Conference on Multimedia, 2918–2926. Wang, S.; Zeng, Z.; Yang, X.; and Zhang, X. 2023b. Selfsupervised Graph Learning for Long-tailed Cognitive Diagnosis. In Proc. AAAI Conf. Artif. Intell., AAAI, volume 37(1), 110–118. Wang, Y.; Heffernan, N. T.; and Heffernan, C. 2015. Towards better affect detectors: effect of missing skills, class features and common wrong answers. 
In Proceedings of the fifth international conference on learning analytics and knowledge, 31–35. Wu, Z.; Li, M.; Tang, Y.; and Liang, Q. 2020. Exercise recommendation based on knowledge concept prediction. Knowledge-Based Systems, 210: 106481. Zhuang, Y.; Liu, Q.; Huang, Z.; Li, Z.; Jin, B.; Bi, H.; Chen, E.; and Wang, S. 2022a. A Robust Computerized Adaptive Testing Approach in Educational Question Retrieval. In SIGIR - Proc. Int. ACM SIGIR Conf. Res. Dev. Inf. Retr., 416– 426. Zhuang, Y.; Liu, Q.; Huang, Z.; Li, Z.; Shen, S.; and Ma, H. 2022b. Fully adaptive framework: Neural computerized adaptive testing for online education. In Proc. AAAI Conf. Artif. Intell., AAAI, volume 36(4), 4734–4742. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 627 | 2024 | 70 |
18,518 | MuLTI: Efficient Video-and-Language Understanding with Text-Guided MultiWay-Sampler and Multiple Choice Modeling Jiaqi Xu, Bo Liu, Yunkuo Chen, Mengli Cheng, Xing Shi Alibaba Group, China {zhoumo.xjq, xuanyuan.lb, chenyunkuo.cyk, mengli.cml, shubao.sx}@alibaba-inc.com Abstract Video-and-language understanding has a variety of applications in the industry, such as video question answering, textvideo retrieval, and multi-label classification. Existing videoand-language understanding methods generally adopt heavy multi-modal encoders and feature fusion modules, which consume high computational costs. Specially, they have difficulty dealing with dense video frames or long text prevalent in industrial applications. This paper proposes MuLTI, a highly accurate and efficient video-and-language understanding model that achieves efficient and effective feature fusion and rapid adaptation to downstream tasks. Specifically, we design a Text-Guided MultiWay-Sampler based on adapt-pooling residual mapping and self-attention modules to sample long sequences and fuse multi-modal features, which reduces the computational costs and addresses performance degradation caused by previous samplers. Therefore, MuLTI can handle longer sequences with limited computational costs. Then, to further enhance the model’s performance and fill in the lack of pretraining tasks in the video question answering, we propose a new pretraining task named Multiple Choice Modeling. This task bridges the gap between pretraining and downstream tasks and improves the model’s ability to align video and text features. Benefiting from the efficient feature fusion module and the new pretraining task, MuLTI achieves state-of-the-art performance on multiple datasets. Implementation and pretrained models will be released. Introduction Video-and-language understanding has a wide range of applications such as video question answering (videoQA), textvideo retrieval and multi-label classification (Diba et al. 2019). Existing methods have made significant progress in video-and-language understanding. However, they still suffer from two challenges: Balancing computational efficiency and performance when dealing with long sequences and the domain gap between pretraining and downstream tasks. The video-text model generally consists of three modules: text encoder, video encoder, and feature fusion module. The latter two usually cause high computational costs. Feature fusion modules face efficiency and effectiveness challenges. Previous studies (Fu et al. 2021; Huang et al. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 2022) concatenate video-text encoder outputs for transformer encoders processing, with complexity growing with sequence length squared. Other studies (Lei et al. 2021b; Li et al. 2021; Yang et al. 2022; Lei et al. 2021a) reduce computation by condensing video features via mean pooling or class tokens before feature fusion, risking loss of critical details. Flamingo (Alayrac et al. 2022) employs samplers and random queries for efficient video feature condensation, though this approach is suboptimal and may compromise feature integrity. In summary, balancing computational costs and the model’s accuracy in the feature fusion module is still challenging. Following (Miech et al. 2019; Sun et al. 2019; Li et al. 2020; Zhu and Yang 2020; Miech et al. 2020), we explore strategies for selectively freezing encoder components to lower visual encoder training costs. 
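For intuition, a minimal PyTorch sketch of such partial freezing is shown below; the attribute names (`video_encoder.blocks`, `text_encoder.layers`) are hypothetical and only stand in for whatever exposes the transformer blocks in a concrete implementation.

```python
import torch.nn as nn

def freeze_bottom_layers(blocks: nn.ModuleList, num_frozen: int) -> None:
    """Freeze the bottom `num_frozen` transformer blocks so only the top layers
    receive gradients during finetuning."""
    for block in blocks[:num_frozen]:
        for param in block.parameters():
            param.requires_grad = False

# Hypothetical usage:
# freeze_bottom_layers(video_encoder.blocks, num_frozen=9)  # train only the top 3 of 12 ViT layers
# freeze_bottom_layers(text_encoder.layers, num_frozen=6)   # train only the top 6 of 12 BERT layers
```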
Aligning pretraining with downstream tasks is challenging. Previous pretraining frameworks generally apply four typical pretraining tasks: Masked Frame Modeling (MVM) tasks (Lei et al. 2021a; Ma, Lou, and Ouyang 2021; Fu et al. 2021; Huang et al. 2022) for video encoder optimization, Masked Language Modeling (MLM) tasks (Devlin et al. 2018; Sun et al. 2019; Zhu and Yang 2020; Luo et al. 2020; Li et al. 2020; Lei et al. 2021b) for text encoder optimization, Video Text Matching (VTM) and Video Text Comparison (VTC) tasks (Li et al. 2020; Luo et al. 2020; Fu et al. 2021; Li et al. 2021) for joint optimization of video and text encoders. Although the above methods have proven effective in learning video and text representations, there are still significant domain gaps between pretraining and downstream tasks, especially in videoQA. Only the VTC task is consistent with text-video retrieval among the above pretraining tasks. In summary, narrowing the domain gap between the pretraining and downstream tasks is still challenging. Addressing these challenges, we introduce MuLTI, featuring a Text-Guided MultiWay-Sampler for sequence condensation and multi-modal fusion. Existing methods typically use a learnable query vector to sample the video feature through self-attention modules (Alayrac et al. 2022). A randomly initialized query vector can discard vital original feature information, causing performance drops. We design an lightweight Adapt-Pooling method in Text-Guided MultiWay-Sampler to obtain the condensed features by calculating the importance of each sequence block. Then, we add the condensed features to the sampled features and use The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6297 Long Seq Text-Guided Sampler Video Encoder Text Embedding Video Encoder Bottom Layers Bert Encoder Multi-Modal Fusion (Bert Encoder) Multi-Modal Fusion (Top Layers Bert Encoder) Video Text Video Text Video Text (a) (b) (d) Long Seq Short Seq Average Pooling Video Encoder Frozen Trained Short Seq Video Encoder Text Embedding Multi-Modal Fusion (Bert Decoder) Video Text (c) Condensed Module Long Seq MultiWay Sampler Long Seq VIOLET ALPRO Flamingo The top layers of encoders will be trained. Bert Encoder Ours Figure 1: Comparison of different models. Previous works such as (a) and (b) cannot easily handle long sequences. Previous works such as (c) use randomly initialized query vectors for sampler and condense video features, which is sub-optimal solution. short text features to sample and fuse long video features. We share the self-attention and reserve different feed forward networks for different modalities in the sampler. Figure 1 shows that previous models (a)(Fu et al. 2021; Huang et al. 2022) and (b)(Li et al. 2021) consume substantial video memory with their lengthy concatenated feature fusion. Both (b) and (c)(Alayrac et al. 2022) compress video features, a common choice due to their greater length compared to text. However, excessive compression can impair performance because of the rich information in video features. In contrast, we design MuLTI like (d) and introduces the Text-Guided MultiWay-Sampler to efficiently condense text features for fusion. Since text is more concise, we use the streamlined text to direct video feature sampling, resulting in enhanced performance. Pretraining on large-scale video-text datasets could improve the performance of video-text models significantly. 
However, there are still domain gaps between the existing pretraining tasks and downstream tasks, specifically in videoQA. The difficulty of introducing videoQA to pretraining tasks is constructing suitable question-answer pairs. To reduce the domain gap between the pretraining task and the downstream task in videoQA, we introduce a new pretraining task named Multiple Choice Modeling (MCM). The MCM can bridge the task gap between pretraining and downstream tasks by constructing multiple-choice question answering tasks on large-scale video-text datasets. It asks the model to find text descriptions that match the video most from a randomly constructed collection, which enhances the representation ability of the video and text encoders and the alignment between video and text features. The contributions can be summarized as follows: (1) We propose MuLTI, a highly accurate and memoryefficient video-and-language framework, which achieves efficient and effective feature fusion through the feature sampling and attention modules. (2) We propose a Text-Guided MultiWay-Sampler to sample long sequence features and facilitate the interactions between video and text features, reducing memory cost and improving performance. (3) We design a new pretraining task called Multiple Choice Modeling (MCM) to bridge the task gap between pretraining and downstream tasks. Experimental results on seven English tasks and one Chinese multi-label classification task demonstrate the effectiveness of MuLTI. Although we designed MuLTI for industrial scenarios with long sequences, MuLTI still handles short sequences well and achieves state-of-the-art performance. Related Work Video-and-Language Structure. Clover (Huang et al. 2022) and VIOLET (Fu et al. 2021) directly concatenate video and text features, using an encoder to manage their complex interactions, with complexity tied to the concatenated sequence length squared. ALPRO (Li et al. 2021) similarly uses an encoder for fusing features but applies mean pooling on video features before concatenation, risking loss of crucial details. AllInOne (Wang et al. 2022a) reduces memory demands by merging text with image features frame-by-frame but still faces high computational loads with extensive OCR transcripts. Flamingo (Alayrac et al. 2022) attempts cost-cutting by condensing video features using samplers and random queries, which isn’t ideal. To tackle the above problems, we design a Text-Guided MultiWaySampler based on adapt-pooling residual mapping and selfattention modules to sample long sequence features and fuse multi-modal features. Video-and-Language Pretraining. Four typical pretraining tasks are applied in previous pretraining framework: Masked Frame Modeling (MVM) tasks (Lei et al. 2021a; Ma, Lou, and Ouyang 2021; Fu et al. 2021; Huang et al. 2022), Masked Language Modeling (MLM) tasks (Devlin et al. 2018; Sun et al. 2019; Zhu and Yang 2020; Luo et al. 2020; Li et al. 2020; Lei et al. 2021b; Fu et al. 2021), Video Text Matching (VTM) and Video Text Comparison (VTC) tasks (Li et al. 2020; Luo et al. 2020; Fu et al. 2021; Li et al. 2021). MVM is used for video encoder optimization, MLM is used for text encoder optimization, VTM and VTC are used for joint optimization of video and text encoders. In (Ge et al. 2022), Multiple Choice Questions (MCQ) is proposed to learn fine-grained video and text features. However, MCQ is trained by contrastive loss and does not correlate well with videoQA. 
In summary, downstream task gaps persist between pretraining and downstream tasks, particularly in videoQA. To address this, we enhance MuLTI with MCM, bridging the pretraining and downstream tasks.

Figure 2: (a) shows the framework of MuLTI. MuLTI contains a video encoder, a text encoder, and a Text-Guided MultiWay-Sampler, which is used to condense the extracted features and fuse them. (b) shows the framework of the Text-Guided MultiWay-Sampler; the adapt-pooling feature provides information from the original sequence. We share the self-attention module and reserve different feed forward networks for different modalities in the sampler to accommodate the modalities. (The figure also illustrates example inputs, e.g., an MLM-style input "[CLS] spongebob is [MASK] to squidward." and an MCM input listing candidate options such as "spongebob is talking to Squidward." and "blue sky", together with the VTC, MLM, VTM and MCM heads.)

Methodology

MuLTI's Architecture
Figure 2 (a) gives an overview of MuLTI's architecture. Details for each component are as follows.

Video and Text Encoders: Unless specified otherwise, a 12-layer ViT-B/16 with 224×224 input (Radford et al. 2021) is used as the video encoder. We sparsely sample $N_v$ frames from the input video, and the ViT-B/16 model divides each frame into $K$ non-overlapping patches. The per-video features are $\tilde{v} \in \mathbb{R}^{N_v \times K \times d}$, where $d$ is the feature dimension. The output of the video encoder is a sequence of video features $\{v_1, \dots, v_{N_v}\}$, with $v_i \in \mathbb{R}^{K \times d}$. Experiments revealed that the class token is unnecessary, so it was removed to save computation. Unless specified otherwise, a 12-layer BERT (Devlin et al. 2018) is used as the text encoder. Assuming the length of the input text is $N_t$, the output of the text encoder is a sequence of text features $\tilde{t} \in \mathbb{R}^{N_t \times d}$: $\{t_{cls}, t_1, \dots, t_{N_t}\}$, with $t_i \in \mathbb{R}^d$, where $t_{cls}$ is the output of the text [CLS] token. Following (Miech et al. 2019; Sun et al. 2019; Li et al. 2020), we explore training strategies for partially freezing encoder layers.

Text-Guided MultiWay-Sampler: The core of the multi-modal fusion is the Text-Guided MultiWay-Sampler, adapted from the transformer decoder and shown in Figure 2 (b). The Text-Guided MultiWay-Sampler is designed to condense text features and fuse features of different modalities efficiently. Following (Alayrac et al. 2022), we initialize a random learnable query to condense features via sampling. The expression Sampler(z, q) denotes sampling the feature z with the query vector q through the sampler.

(i) Why do we need Adapt-Pooling? Learnable queries can compress features well, but starting from random vectors may reduce their effectiveness. Random initialization may lose key details of the original features, weakening the model's ability to capture and retain the essence of the data. Therefore, we design an attention-based lightweight Adapt-Pooling method to condense long sequence features. The Adapt-Pooling structure is shown on the left side of Figure 2 (b).
The formula is given below, where AdaPool(z) is the output of Adapt-Pooling, $W_{reduce} \in \mathbb{R}^{d \times N_s}$, $d$ is the hidden dimension of the transformer, $N_s$ is the length of the condensed features, and $(\cdot)^{T}$ denotes matrix transposition:

$\mathrm{AdaPool}(z) = \mathrm{Softmax}\big((W_{reduce}\, z)^{T}\big) \times z \quad (1)$

The term $\mathrm{Softmax}((W_{reduce}\, z)^{T})$ yields an importance weight matrix of shape $[N_s, N_i]$, with each element signifying the relative importance of the corresponding block within the sequence, where $N_i$ is the length of the input features. Adapt-Pooling selectively highlights key input segments, condensing the features while preserving their critical attributes. This integration enriches the feature set with distilled information and ensures full data utilization, boosting the model's capacity and robustness.

(ii) Why do we condense text features? The video features are often redundant, whereas text features are denser and more meaningful (He et al. 2022). Language guidance is key to distilling valuable information from videos. Both (Li et al. 2021) and (Alayrac et al. 2022) condense the video features. Excessive compression harms model performance; using condensed text to sample video features improves results. Before fusion, learnable time embeddings enhance the image features for temporal modeling. The short text features are then used to sample the video features and fuse the multi-modal features. In our Text-Guided MultiWay-Sampler, we use shared self-attention modules but distinct FFNs for each modality to handle multi-modal features efficiently. The fused feature is computed as follows, with $q$ the query embedding of the text features and $z_{out}$ the fused feature:

$z_{out} = \mathrm{Sampler}\big(\tilde{v}, \mathrm{Sampler}(\tilde{t}, q) + \mathrm{AdaPool}(\tilde{t})\big) \quad (2)$

A work similar to ours is Token Learner (Ryoo et al. 2021), which uses spatial attention in the model to extract 8 or 16 representative vectors from an image. The difference is that we use Adapt-Pooling and self-attention to condense features for multi-modal fusion. The sampler extracts complex information via self-attention, while Adapt-Pooling provides fast, simple features through residual mapping.

(iii) Why is the Text-Guided MultiWay-Sampler efficient? Our feature fusion module outperforms flatten-based methods and transformer encoders in efficiency, as a simple analysis shows. Assume ViT-B/16 with 224×224 input is used as the video encoder, so each frame is flattened into a sequence of length 196. Let the number of text queries be $N_t^q$, the length of the video features be $N_v \times 196$, and the length of the text features be $N_t$. The complexity of the flatten method is then $O\big((N_t + N_v \times 196)^2\big)$. After applying the Text-Guided MultiWay-Sampler, the complexity becomes $O(N_t^q \times N_v \times 196 + N_t^q \times N_t)$. As $N_t^q$ is generally much smaller than $N_v$ and $N_t$, our method is much more efficient than the other methods.

Method | FLOPs | Params | FPS
MuLTI-S | 99G | 203M | 20.74
MuLTI-B | 346G | 247M | 10.13
VIOLET (Fu et al. 2021) | 249G | 198M | 9.05
ALPRO (Li et al. 2021) | 432G | 235M | 9.97
MuLTI-L | 1509G | 746M | 3.12
FrozenBiLM (Yang et al. 2022) | 1733G | 1224M | 2.54
Table 1: Comparison among models with 16 frames. Text length is 512. FPS is measured on 1 NVIDIA V100 16GB GPU.

MuLTI for different scenes. In this section, we build scalable models for scenes with varying resource budgets. We replace the video encoder from ViT-B/16 to ViT-L/14 and the text encoder from BERT-base to BERT-large, obtaining MuLTI-L. In addition, we replace the video encoder from ViT-B/16 to ViT-B/32 and reduce the text encoder from 12 layers to 6 layers, obtaining MuLTI-S.
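As a concrete illustration of Eq. (1) and Eq. (2) above, a minimal PyTorch sketch of Adapt-Pooling and text-guided sampling is given below. It is a simplified sketch only: the real model shares self-attention with modality-specific FFNs and uses learnable queries, and all sizes here are toy assumptions.

```python
import torch
import torch.nn as nn

class AdaptPooling(nn.Module):
    """Eq. (1): condense a length-N_i sequence into N_s tokens via learned importance weights."""
    def __init__(self, dim: int, n_condensed: int):
        super().__init__()
        self.reduce = nn.Linear(dim, n_condensed, bias=False)        # plays the role of W_reduce

    def forward(self, z: torch.Tensor) -> torch.Tensor:              # z: (B, N_i, d)
        weights = self.reduce(z).transpose(1, 2).softmax(dim=-1)     # (B, N_s, N_i)
        return weights @ z                                           # (B, N_s, d)

class CrossAttnSampler(nn.Module):
    """Sampler(z, q): query tokens attend to a (possibly very long) feature sequence z."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, z: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(q, z, z)                                   # queries sample information from z
        return out + self.ffn(out)

# Eq. (2) with toy sizes: 16 frames x 196 patches of video tokens, 512 text tokens, 16 queries.
dim, n_query = 256, 16
video, text = torch.randn(2, 16 * 196, dim), torch.randn(2, 512, dim)
query = torch.randn(2, n_query, dim)                                  # learnable in the real model
adapool, sampler = AdaptPooling(dim, n_query), CrossAttnSampler(dim)
text_query = sampler(text, query) + adapool(text)                     # condensed, text-side guidance
z_out = sampler(video, text_query)                                    # fused feature, shape (2, 16, 256)
```

For the sizes above, a flatten-style encoder would attend over roughly (512 + 16×196)² ≈ 1.3×10⁷ token pairs, whereas the text-guided sampler needs only about 16×(16×196) + 16×512 ≈ 5.8×10⁴ cross-attention scores, which is the source of the efficiency gap discussed in (iii).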
The floating point of operations (FLOPs), parameters (Params) and frames per second (FPS) of different models are shown in Table 1. Pretraining for MuLTI Multiple Choice Modeling: Despite MLM and VTM’s success in learning video-text representations, a significant gap remains between pretraining and downstream tasks like videoQA. The difficulty of introducing videoQA into the pretraining task is constructing suitable question-answer pairs. Inspired by multiple choice videoQA, we find the text descriptions paired with videos are the correct natural answers. Therefore, we introduce Multiple Choice Modeling, a new pretraining task that bridges the task gap between pretraining and downstream tasks. Specifically, it is constructed as follows, which is a four-choice question. "[CLS]<Question> ? [SEP] Option 1: <Answer 1>. [SEP] Option 2: <Answer 2>. [SEP] Option 3: <Answer 3>. [SEP] Option 4: <Answer 4>." We randomly place the correct descriptions in <Answer 1>, <Answer 2>, <Answer 3>, <Answer 4>, and obtain answers other than the correct descriptions through the text corpus. The <Question> also has various choices, such as “What does this picture describe?”, “What does this video describe?” and so on. As shown in Figure 2 (a), typical MLM, VTM, and VTC tasks correspond to the red arrows and red squares in the image. The MCM corresponds to the image’s blue arrows and blue squares, and the MCM does not conflict with the other pretraining tasks. The MCM is seamlessly integrated with other pretraining tasks and does not require additional manual annotations or data preprocessing. It utilizes video encoders to extract visual features and text encoders for generating textual representations, followed by a Text-Guided MultiWay-Sampler for feature fusion. The MCM head evaluates the given options’ relevance to the video, optimizing alignment using cross-entropy loss. The MCM task, choosing the best description from options, mirrors essential videoQA cognition, enhancing the model’s cross-modal reasoning and alignment. MCM directly improves the model’s ability to match text with corresponding videos, enhancing performance in text-video retrieval tasks. Pretraining Objectives: We also employ the MLM, VTM and VTC, considering their effectiveness. The MLM randomly masks input tokens with 15% probability and replaces them with [MASK], which are predicted based on video and text. The VTC treats matching video text pairs as positive pairs and other video text pairs in the batch as negative pairs. The VTM is slightly different from VTC, where the multi-modal features are fused before used for classification. The overall pretraining objective of MuLTI is: L = Lmlm + Lvtc + Lvtm + Lmcm (3) Experiments Implementation Details Pretraining datasets: We pretrained the model using two large datasets. One is WebVid-2M (Bain et al. 2021), which contains 2.5M video-text pairs. Because pretraining the video-text model using image-text pairs also improves the model’s performance (Lei et al. 2021b), the CC-3M(Sharma et al. 2018) is also used as a pretrained dataset containing 3M image-text pairs. We implement MuLTI in PyTorch (Paszke et al. 2019). In detail, the video encoder is initialized with pretrained weights from CLIP (Radford et al. 2021). Text encoder is initialized with a 12-layer of the BERTbase model (Devlin et al. 2018). Then, a 4-layer Text-Guided MultiWaySampler is used to condense text features and fuse multimodal features. The length of query embedding is set to 16. 
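Returning to the Multiple Choice Modeling input described above, the option construction can be sketched as follows; the question template and the distractor strings are illustrative, and the real pipeline samples distractors from the pretraining text corpus.

```python
import random

def build_mcm_input(question: str, caption: str, distractors: list) -> tuple:
    """Assemble a four-choice MCM input; `caption` is the description paired with
    the video, `distractors` are captions sampled from the corpus."""
    options = list(distractors[:3]) + [caption]
    random.shuffle(options)
    label = options.index(caption)                    # index of the correct option
    prompt = f"[CLS] {question} ?"
    for i, opt in enumerate(options):
        prompt += f" [SEP] Option {i + 1}: {opt}."
    return prompt, label

prompt, label = build_mcm_input(
    "What does this video describe",
    "spongebob is talking to squidward",
    ["blue sky", "a dog runs on the beach", "people dancing in a hall"],
)
```

The returned label is what the MCM head is trained to predict with cross-entropy, which is why the task fits naturally alongside MLM, VTM and VTC in Eq. (3).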
MuLTI pretraining spanned 10 epochs on eight NVIDIA A100 GPUs, a 256 batch size totaling 200k iterations. Optimization used AdamW with a 1e−4 learning rate, 0.05 weight decay, and a warm-up scheduler. We uniformly sample 16 frames for each video and scale them to 224 × 224. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6300 MSRQ MSVQ TGIF. TGIF. TGIF MSRVTT DiDeMo Act. Tran. Fra. Ret Ret Method #PT Acc.↑ Acc.↑ Acc.↑ Acc.↑ Acc.↑ R1 / R5 / R10 ↑ G-M ↑ R1 / R5 / R10 ↑ G-M ↑ CLIP4CLIP 400M 43.1 / 70.4 / 80.8 62.6 43.4 / 70.2 / 80.6 62.6 QB-Norm 400M 47.2 / 73.0 / 83.0* 65.9* 43.3 / 71.4 / 80.8* 63.0* CAMoE 400M 47.3 / 74.2 / 84.5* 66.7* 43.8 / 71.4 / 79.9* 63.0* TS2-Net 400M 54.0 / 79.3 / 87.4* 72.1* 47.4 / 74.1 / 82.4* 66.1* ALPRO 5.5M 42.1 45.9 33.9 / 60.7 / 73.2 53.2 35.9 / 67.5 / 78.8 57.6 VIOLET 185.5M 43.9 47.9 92.5 95.7 68.9 34.5 / 63.0 / 73.4 54.2 32.6 / 62.8 / 74.7 53.5 AllInOne 102.5M 44.3 47.9 92.7 94.3 64.2 37.9 / 68.1 / 77.1 58.4 32.7 / 61.4 / 73.5 52.8 Clover 5.5M 44.1 52.4 95.0 98.2 71.6 40.5 / 69.8 / 79.4 60.7 50.1 / 76.7 / 85.6 69.0 Flamingo 2139M 47.4 52.3 FrozenBiLM 10M 47.0 54.4 68.6 MuLTI-S 5.5M 45.6 50.0 97.3 98.9 71.2 41.3 / 70.6 / 79.7 61.5 42.6 / 71.4 / 80.0 62.5 45.8 / 73.5 / 82.0* 65.1* 47.9 / 73.0 / 82.6* 66.1* MuLTI-B 5.5M 46.6 53.0 97.3 99.1 73.5 45.1 / 72.4 / 81.8 64.4 45.2 / 74.6 / 82.2 65.2 49.4 / 75.9 / 84.0* 68.0* 48.3 / 75.4 / 83.5* 67.2* MuLTI-L 5.5M 47.8 54.7 97.9 99.0 75.6 48.7 / 75.9 / 83.9 67.7 50.5 / 78.5 / 86.2 69.9 54.7 / 77.7 / 86.0* 71.5* 56.5 / 80.2 / 87.0* 73.3* Table 2: Comparisons with existing methods. #PT means number of pretrain datasets. Acc. (%) denotes the performance of videoQA. R@k denotes recall (%) with k retrieval efforts. G-M denotes the geometric mean of R@1, R@5, R@10. The datasets commonly used are WebVid2M (Bain et al. 2021), WebVid10M (Bain et al. 2021), WIT (Radford et al. 2021), HowTo100M (Miech et al. 2019), YT-Temporal-180M (Zellers et al. 2021), Conceptual Captions (Sharma et al. 2018). * indicates that DSL (Cheng et al. 2021) or QB-Norm (Bogolin et al. 2021) is used for post-processing. Method #PT OCR Multi-Label VIOLET ‡ ✘ 55.22 ALPRO ‡ ✘ 58.53 MuLTI-S ✘ 63.97 MuLTI-S ✔ 66.13 MuLTI-B ✘ 64.60 MuLTI-B ✔ 67.86 Table 3: Comparisons on multi-label classification in mAP (%). ‡ means the methods are reproduced in our framework. Downstream Tasks and Datasets Video Question Answering. We evaluate MuLTI on five widely used videoQA tasks. (1) MSRQ (MSRVTTQA) (Xu et al. 2017, 2016) is a open-ended videoQA task includes 10k videos and 243k question-answer pairs. (2) MSVQ (MSVD-QA) (Xu et al. 2017; Chen and Dolan 2011) is a open-ended videoQA task includes 1970 videos and 50k question-answer pairs. (3) TGIF-QA (Jang et al. 2017) contains three datasets: TGIF-Action and TGIFTransition for multiple-choice videoQA tasks, and TGIFFrame for open-ended videoQA tasks. Text-Video Retrieval. (1) MSRR (MSRVTT-Ret) contains 10K videos with 200K annotations. Following (Fu et al. 2021), we use 9k videos for training and 1k videos for testing. (2) DiDeMo (DiDeMo-Ret) consists of 10K videos with 40K annotations. Following (Lei et al. 2021b), we concatenate all annotations from the video into a title. Multi-Label Classification. 
Video labels are crucial for the ranking models used in online advertising1. We create a short video dataset from our app, which includes 486k videos with captions and 21696 labels. Multiple professional editors cross-check the labels. We use a high-performing text detector from ICDAR2 for OCR transcripts, and the OCR transcripts are truncated to 512. Examples for multi-label classification can be found in the appendix.

1https://algo.qq.com/index.html
2https://rrc.cvc.uab.es/?ch=4&com=evaluation&task=4

Figure 3: Comparisons with existing methods on memory usage with different numbers of frames (text length is 512): (a) total memory usage versus the number of sparse frames for MuLTI, ALPRO and VIOLET; (b) the memory split between the video encoder and the text encoder & feature fusion for each model.

Method | Base | TGMS | PB | MCM | MSRQ Acc.↑ | MSVQ Acc.↑ | MSRR R1↑ / R5↑ / R10↑ / G-Mean↑
MuLTI-B | ✔ | ✘ | ✘ | ✘ | 44.84 | 48.35 | 38.90 / 69.50 / 78.50 / 59.64
MuLTI-B | ✔ | ✔ | ✘ | ✘ | 45.54 | 49.86 | 38.80 / 70.30 / 80.10 / 60.22
MuLTI-B | ✔ | ✔ | ✔ | ✘ | 46.28 | 51.93 | 44.30 / 72.40 / 81.90 / 64.04
MuLTI-B | ✔ | ✔ | ✔ | ✔ | 46.61 | 53.03 | 45.10 / 72.40 / 81.80 / 64.40
Table 4: Evaluations of the proposed methods. TGMS: Text-Guided MultiWay-Sampler. PB (Pretraining Baseline): pretraining with MLM, VTM and VTC. MCM: Multiple Choice Modeling. Acc. (%) measures videoQA performance. R@k denotes recall (%) with k retrieval efforts. G-Mean denotes the geometric mean of R@1, R@5 and R@10.

Figure 4: Comparisons of different text lengths and numbers of frames on memory usage. F means Flatten, D means Decoder, E means Encoder, S means Sampler; the number in parentheses is the number of frames.

Figure 5: A visualization of the cross-attention map from the Text-Guided MultiWay-Sampler for questions such as "what is the man lifting?", "what is the man drinking?", "what is biting the cat and it turns and attacks another cat?", and "what is the color of the clothes?".

Performance of Proposed Methods
Table 2 compares MuLTI with CLIP4CLIP (Luo et al. 2021), QB-Norm (Bogolin et al. 2021), CAMoE (Cheng et al. 2021), TS2-Net (Liu et al. 2022), ALPRO (Li et al. 2021), VIOLET (Fu et al. 2021), AllInOne (Wang et al. 2022a), Clover (Huang et al. 2022), Flamingo (Alayrac et al. 2022) and FrozenBiLM (Yang et al. 2022). In videoQA tasks, MuLTI surpasses all baseline models on MSRQ, MSVQ, TGIF-Action, TGIF-Transition and TGIF-Frames. Since MuLTI does not use speech data as input, it is compared with FrozenBiLM (Yang et al. 2022) without speech data. In general, MuLTI achieves state-of-the-art performance on various QA tasks. In text-video retrieval tasks, we finetune MuLTI on the MSRVTT and DiDeMo datasets. Our results demonstrate that MuLTI is highly competitive on both benchmarks, particularly on the DiDeMo dataset.
These findings highlight the Methods MSRQ MSVQ Memory Usage Class Token 44.54 47.90 7081 Mean Pooling 44.40 47.07 6941 Max Pooling 44.41 46.93 6963 Flatten + Encoder 44.84 48.35 15791 TGMS 45.54 49.86 10551 Table 5: Ablation studies on feature retention methods. The number of sparse frames is set to 6 for Flatten method. TGMS means Text-Guided MultiWay-Sample. Method CV CT SS AP MSRQ MSVQ ✘ ✘ ✘ ✘ 45.13 49.19 ✔ ✘ ✘ ✘ 44.76 48.10 ✔ ✘ ✔ ✔ 45.14 48.92 Flatten ✔ ✔ ✘ ✘ 44.57 48.50 Decoder ✘ ✔ ✘ ✘ 45.08 49.38 ✘ ✔ ✔ ✘ 45.16 49.80 ✘ ✔ ✘ ✔ 45.48 49.54 ✘ ✔ ✔ ✔ 45.54 49.86 Table 6: An ablation study on feature compression methods. CV means Condensed Video, CT means Condensed Text, SS means Shared-Sampler, AP means Adapt-Pooling. effectiveness of MuLTI for text-video retrieval. For multi-label classification, we compare MuLTI with VIOLET and ALPRO but exclude FrozenBiLM due to its impractical size for industry deployment. VIOLET and ALPRO do not use OCR transcripts as they would lead to out-of-memory on V100 GPUs. We also report MuLTI’s OCR-less performance in Table 3 for a fair comparison; MuLTI significantly surpasses both VIOLET and ALPRO. As shown in Figure 3, MuLTI maintains a video memory cost less than half of ALPRO’s and VIOLET’s when frame count rises during training, because its efficient fusion modules minimizes memory cost increases. Finally, we evaluate our main technical contributions in Table 4. Compared with baseline models, our main technical contributions improve performance on all datasets. The Text-Guided MultiWay-Sampler boosts MuLTI’s multimodal fusion ability, pinpointing key details in surplus video features. MCM advances the model’s alignment ability and narrows the gap between pretraining and downstream tasks. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6302 PB MVM MCM MSRQ MSVQ MSRR ✔ ✘ ✘ 46.28 51.93 64.04 ✔ ✔ ✘ 45.87 50.16 63.41 ✔ ✔ ✔ 46.11 51.65 63.71 ✔ ✘ ✔ 46.61 53.03 64.40 Table 7: Ablation studies on the Multiple Choice Modeling. PB means Pretraining Baseline. Frozen / Total MSRQ MSVQ Memory Usage VE TE 12/12 12/12 44.06 46.83 6109 12/12 0/12 44.07 47.12 7439 6/12 0/12 45.10 47.57 18219 9/12 0/12 45.59 47.52 11541 9/12 3/12 45.50 49.63 11131 9/12 6/12 45.54 49.86 10551 9/12 9/12 45.04 49.14 10283 Table 8: Ablation studies on frozen layers. VE refers to video encoder and TE refers to text encoder. Frozen/Total refers to the number of frozen and total layers. Adapter ATT MSRQ MSVQ MSRR DiDeMo ✘ ✘ 45.54 49.86 60.22 51.68 ✔ ✘ 45.61 50.48 60.54 52.08 ✔ ✔ 45.71 50.63 61.16 52.42 Table 9: Ablation studies on the Attention-Adapter. ATT means Attention. The Importance of Text-Guided MultiWay-Sampler Why we condense text features? We compare performance of different aggregation methods (i.e. Class Token, Mean Pooling, Max Pooling and Flatten) in Table 5. Results show that Flatten outperforms other aggregation methods but requires substantial video memory. Above section reveals the decoder uses less memory than the encoder for long sequences, prompting its use in feature fusion. The decoder handles datasets like MSRQ well. However, the cost is still high when processing long text and video like our multi-label datasets. The specific memory cost is shown in Figure 4. Following (Alayrac et al. 2022), we use a decoderbased sampler for feature condensation Table 6 compares different condensation methods, showing text compression’s superiority. As shown in Figure 5, the visual part most relevant to the problem is given more weight. 
The importance of Shared-Sampler. The sampler and feature fusion module, using the same decoder structure, can share weights without compromising performance, simplifying model optimization (Wang et al. 2022b). We share the sampler and decoder’s self-attention but keep separate FFNs for each modality, cutting parameters while maintaining performance. Compared with the Flatten Method, the Shared-Sampler improves accuracies on MSRQ and MSVQ by 0.32% and 1.45%, respectively. The importance of Adapt-Pooling. As shown in Table 6, the sampler leads to worse performance when condensing text and video features. The sampler’s random query vector carries the risk of losing original key features; we design a lightweight aggregation module, Adapt Pooling, to preserve the original features. As shown in Table 6, the Adapt-Pooling improves accuracy on MSRQ and MSVQ. Additionally, we explored various combination methods (i.e. add, concatenate, and multiply), and noted slight performance differences. We achieved an accuracy of 45.51% using concatenate and 45.45% using multiply on MSRQ. To verify these techniques’ robustness, we applied them to condense video features, which also improved performance. The Importance of Multiple Choice Modeling MCM aims to bridge the gap between pretraining and downstream tasks by integrating videoQA into pretraining, enhancing the model’s focus on video and sentence subjects for better multimodal feature extraction. We use the classical MLM, VTM, and VTC tasks to pretrain the model as a baseline. Due to video content corruption caused by MVM, the MVM task conflicts with other tasks (Lei et al. 2021a). In our initial attempts to include MVM for pretraining, we observed a degradation in performance as shown in Table 7. Thus, we have decided not to use MVM for pretraining. To confirm MCM’s robustness, we also added MCM for pretraining based on the usage of MVM. The results show MCM still substantially enhances model’s performance. Compared to the model pretrained with baseline, MCM explicitly improves the model’s performance on the videoQA task by narrowing the task gap between pretraining and downstream tasks. MCM’s promotion of multi-modal feature alignment enhances the model’s retrieval task performance. As shown in Table 7, the models pretrained with MCM outperformed the baseline in both videoQA and retrieval tasks, demonstrating its effectiveness. Ablation Experiment on Training Strategies Analysis of Frozen Layers. In this section, we systematically evaluate the effect of the number of frozen layers. The results on videoQA are demonstrated in Table 8. It indicates that unfreezing the top layers of video and text encoders can improve performance on both datasets. Analysis of Attention-Adapter. Analyzing frozen layers reveals that unfreezing excess layers reduces accuracy due to overfitting from excessive parameter adjustments. Following (Yang et al. 2022), we add adapters to the encoders in the shallow layers. Table 9 shows that while adapters perform effectively, their capability is constrained by the basic FFN module. By integrating a lightweight attention module (Hu et al. 2017), the model focuses better on informative tokens. Conclusion We present MuLTI, a high-performing video-language framework with a novel Text-Guided MultiWay-Sampler for improved sampling efficiency and a pretraining task to better align with downstream tasks. MuLTI achieves state-of-theart performance on seven video-language benchmarks. 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6303 References Alayrac, J.-B.; Donahue, J.; Luc, P.; Miech, A.; Barr, I.; Hasson, Y.; Lenc, K.; Mensch, A.; Millican, K.; Reynolds, M.; Ring, R.; Rutherford, E.; Cabi, S.; Han, T.; Gong, Z.; Samangooei, S.; Monteiro, M.; Menick, J.; Borgeaud, S.; Brock, A.; Nematzadeh, A.; Sharifzadeh, S.; Binkowski, M.; Barreira, R.; Vinyals, O.; Zisserman, A.; and Simonyan, K. 2022. Flamingo: a Visual Language Model for Few-Shot Learning. ArXiv, abs/2204.14198. Bain, M.; Nagrani, A.; Varol, G.; and Zisserman, A. 2021. Frozen in Time: A Joint Video and Image Encoder for Endto-End Retrieval. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 1708–1718. Bogolin, S.-V.; Croitoru, I.; Jin, H.; Liu, Y.; and Albanie, S. 2021. Cross Modal Retrieval with Querybank Normalisation. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5184–5195. Chen, D.; and Dolan, W. B. 2011. Collecting highly parallel data for paraphrase evaluation. In Proceedings of the 49th annual meeting of the association for computational linguistics: human language technologies, 190–200. Cheng, X.; Lin, H.; Wu, X.; Yang, F.; and Shen, D. 2021. Improving Video-Text Retrieval by Multi-Stream Corpus Alignment and Dual Softmax Loss. ArXiv, abs/2109.04290. Devlin, J.; Chang, M.; Lee, K.; and Toutanova, K. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR, abs/1810.04805. Diba, A.; Fayyaz, M.; Sharma, V.; Paluri, M.; Gall, J.; Stiefelhagen, R.; and Gool, L. V. 2019. Large Scale Holistic Video Understanding. In European Conference on Computer Vision. Fu, T.-J.; Li, L.; Gan, Z.; Lin, K.; Wang, W. Y.; Wang, L.; and Liu, Z. 2021. VIOLET : End-to-End Video-Language Transformers with Masked Visual-token Modeling. ArXiv, abs/2111.12681. Ge, Y.; Ge, Y.; Liu, X.; Li, D.; Shan, Y.; Qie, X.; and Luo, P. 2022. BridgeFormer: Bridging Video-text Retrieval with Multiple Choice Questions. ArXiv, abs/2201.04850. He, K.; Chen, X.; Xie, S.; Li, Y.; Doll’ar, P.; and Girshick, R. B. 2022. Masked Autoencoders Are Scalable Vision Learners. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 15979–15988. Hu, J.; Shen, L.; Albanie, S.; Sun, G.; and Wu, E. 2017. Squeeze-and-Excitation Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42: 2011–2023. Huang, J.; Li, Y.; Feng, J.; Sun, X.; and Ji, R. 2022. Clover: Towards A Unified Video-Language Alignment and Fusion Model. ArXiv, abs/2207.07885. Jang, Y.; Song, Y.; Yu, Y.; Kim, Y.; and Kim, G. 2017. TGIFQA: Toward Spatio-Temporal Reasoning in Visual Question Answering. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1359–1367. Lei, C.; Luo, S.; Liu, Y.; He, W.; Wang, J.; Wang, G.; Tang, H.; Miao, C.; and Li, H. 2021a. Understanding Chinese Video and Language via Contrastive Multimodal PreTraining. Proceedings of the 29th ACM International Conference on Multimedia. Lei, J.; Li, L.; Zhou, L.; Gan, Z.; Berg, T. L.; Bansal, M.; and Liu, J. 2021b. Less is more: Clipbert for video-andlanguage learning via sparse sampling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7331–7341. Li, D.; Li, J.; Li, H.; Niebles, J. C.; and Hoi, S. C. H. 2021. Align and Prompt: Video-and-Language Pre-training with Entity Prompts. ArXiv, abs/2112.09583. Li, L.; Chen, Y.-C.; Cheng, Y.; Gan, Z.; Yu, L.; and Liu, J. 2020. 
Hero: Hierarchical Encoder for Video+Language Omni-representation Pre-training. ArXiv, abs/2005.00200. Liu, Y.; Xiong, P.; Xu, L.; Cao, S.; and Jin, Q. 2022. TS2Net: Token Shift and Selection Transformer for Text-Video Retrieval. In Proceedings of the European Conference on Computer Vision (ECCV). Luo, H.; Ji, L.; Shi, B.; Huang, H.; Duan, N.; Li, T.; Li, J.; Bharti, T.; and Zhou, M. 2020. Univl: A unified video and language pre-training model for multimodal understanding and generation. arXiv preprint arXiv:2002.06353. Luo, H.; Ji, L.; Zhong, M.; Chen, Y.; Lei, W.; Duan, N.; and Li, T. 2021. CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval. Neurocomputing, 508: 293–304. Ma, Z.; Lou, M.; and Ouyang, X. 2021. Top1 Solution of QQ Browser 2021 Ai Algorithm Competition Track 1 : Multimodal Video Similarity. ArXiv, abs/2111.01677. Miech, A.; Alayrac, J.-B.; Smaira, L.; Laptev, I.; Sivic, J.; and Zisserman, A. 2020. End-to-End Learning of Visual Representations From Uncurated Instructional Videos. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 9876–9886. Miech, A.; Zhukov, D.; Alayrac, J.-B.; Tapaswi, M.; Laptev, I.; and Sivic, J. 2019. HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2630–2640. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32: 8026–8037. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; Krueger, G.; and Sutskever, I. 2021. Learning Transferable Visual Models From Natural Language Supervision. In ICML. Ryoo, M. S.; Piergiovanni, A. J.; Arnab, A.; Dehghani, M.; and Angelova, A. 2021. TokenLearner: What Can 8 Learned Tokens Do for Images and Videos? ArXiv, abs/2106.11297. Sharma, P.; Ding, N.; Goodman, S.; and Soricut, R. 2018. Conceptual Captions: A Cleaned, Hypernymed, Image Alttext Dataset For Automatic Image Captioning. In ACL. Sun, C.; Myers, A.; Vondrick, C.; Murphy, K. P.; and Schmid, C. 2019. VideoBERT: A Joint Model for Video and Language Representation Learning. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 7463– 7472. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6304 Wang, A.; Ge, Y.; Yan, R.; Ge, Y.; Lin, X.; Cai, G.; Wu, J.; Shan, Y.; Qie, X.; and Shou, M. Z. 2022a. All in One: Exploring Unified Video-Language Pre-training. ArXiv, abs/2203.07303. Wang, W.; Bao, H.; Dong, L.; Bjorck, J.; Peng, Z.; qiang liu; Aggarwal, K.; Mohammed, O. K.; Singhal, S.; Som, S.; and Wei, F. 2022b. Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks. ArXiv, abs/2208.10442. Xu, D.; Zhao, Z.; Xiao, J.; Wu, F.; Zhang, H.; He, X.; and Zhuang, Y. 2017. Video question answering via gradually refined attention over appearance and motion. In Proceedings of the ACM international conference on Multimedia, 1645–1653. Xu, J.; Mei, T.; Yao, T.; and Rui, Y. 2016. Msr-vtt: A large video description dataset for bridging video and language. In Proceedings of the IEEE conference on computer vision and pattern recognition, 5288–5296. Yang, A.; Miech, A.; Sivic, J.; Laptev, I.; and Schmid, C. 2022. Zero-Shot Video Question Answering via Frozen Bidirectional Language Models. 
ArXiv, abs/2206.08155. Zellers, R.; Lu, X.; Hessel, J.; Yu, Y.; Park, J. S.; Cao, J.; Farhadi, A.; and Choi, Y. 2021. MERLOT: Multimodal Neural Script Knowledge Models. In Neural Information Processing Systems. Zhu, L.; and Yang, Y. 2020. ActBERT: Learning GlobalLocal Video-Text Representations. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8743–8752. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6305 | 2024 | 700 |
18,519 | Regulating Intermediate 3D Features for Vision-Centric Autonomous Driving Junkai Xu1,2*, Liang Peng1,2,*, Haoran Cheng1,2,*, Linxuan Xia1,2,*, Qi Zhou1,2,*, Dan Deng2, Wei Qian2, Wenxiao Wang3†, Deng Cai1,2 1State Key Lab of CAD & CG, Zhejiang University 2FABU Inc. 3School of Software Technology, Zhejiang University {xujunkai, pengliang, haorancheng}@zju.edu.cn Abstract Multi-camera perception tasks have gained significant attention in the field of autonomous driving. However, existing frameworks based on Lift-Splat-Shoot (LSS) in the multicamera setting cannot produce suitable dense 3D features due to the projection nature and uncontrollable densification process. To resolve this problem, we propose to regulate intermediate dense 3D features with the help of volume rendering. Specifically, we employ volume rendering to process the dense 3D features to obtain corresponding 2D features (e.g., depth maps, semantic maps), which are supervised by associated labels in the training. This manner regulates the generation of dense 3D features on the feature level, providing appropriate dense and unified features for multiple perception tasks. Therefore, our approach is termed Vampire, stands for “Volume rendering As Multi-camera Perception Intermediate feature REgulator”. Experimental results on the Occ3D and nuScenes datasets demonstrate that Vampire facilitates fine-grained and appropriate extraction of dense 3D features, and is competitive with existing SOTA methods across diverse downstream perception tasks like 3D occupancy prediction, LiDAR segmentation and 3D objection detection, while utilizing moderate GPU resources. We provide a video demonstration in the supplementary materials and Codes are available at github.com/cskkxjk/Vampire. Introduction Vision-centric 3D surrounding perception plays an important role in modern autonomous driving and robotics due to its convenience and board applicability for downstream tasks. Vision-based perception frameworks can be broadly categorized into two paradigms (Li et al. 2023): backward projection (or Transformer-based (Li et al. 2022a)) and forward projection (or LSS-based, as it originates from the concept of “Lift, Splat, Shoot” (Philion and Fidler 2020)). Backward projection / Transformer-based approaches set 3D points in 3D space or BEV plane and then projects these points back onto the 2D image. This procedure allows each predefined 3D or BEV position to obtain corresponding image features. Transformer (Vaswani et al. 2017) architectures are widely used in this paradigm to aggregate informa*Work performed during an internship at FABU Inc. †Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Method overview. The key idea is to regulate dense intermediate 3D features in training, to produce appropriate features for different downstream perception tasks. tion from image features, generating task-specific features tailored to meet the objectives (Li et al. 2022c; Huang et al. 2023; Wei et al. 2023; Zhang, Zhu, and Du 2023; Zhou and Kr¨ahenb¨uhl 2022; Liu et al. 2022a,b). When demonstrating promising performance on various perception tasks such as 3D object detection (Li et al. 2022c; Liu et al. 2022a), BEV map segmentation (Li et al. 2022c; Liu et al. 2022b) and 3D occupancy prediction (Huang et al. 2023; Wei et al. 
2023; Zhang, Zhu, and Du 2023), they require substantial GPU memory to support the interaction between task queries and image features (Zhang et al. 2022). In contrast, forward projection / LSS-based methods project 2D image features onto the 3D space, incorporating per-pixel depth estimation. They rely on implicit (Philion and Fidler 2020; Hu et al. 2021) or explicit (Li et al. 2022b,a; Huang et al. 2021) depth estimation to elevate image features to the 3D space, acquiring intermediate feature representations such as BEV or 3D voxel representation for task-specific heads. This paradigm is effective for objectlevel perception tasks, e.g., 3D object detection, but struggles with dense point / grid-level perception tasks, e.g., 3D occupancy prediction. Using estimated per-pixel depth and camera calibrations, these methods position 2D features at the foremost visible surface of objects in the 3D space, leading to sparse 3D features. An intuitive resolution would be to densify the sparse 3D features using a feature inpainting module. This enables the model to guess and inpaint the empty regions based on The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6306 known sparse features and produce dense 3D features. However, this naive manner is uncontrollable due to the lack of regulations, and could cause some kind of “overgeneration”, which means that features may be generated at the wrong places and violate the geometry constraints. Here the question arises: how to find decent regulations in favor of appropriate dense 3D feature extraction? We resort to employing occupancy to model the 3D feature space in a unified manner. Occupancy is an ideal dense 3D representation due to its fine-grained information and universality in different tasks (Huang et al. 2023; Tian et al. 2023; Sima et al. 2023). We observe that there is an analogy between the occupancy and the volume density in the implicit scene representation NeRF (Mildenhall et al. 2021; Barron et al. 2021), as they both describe whether a space is occupied. This observation motivates us to employ additional information from the 2D space to implicitly regulate our intermediate 3D features, as NeRF does. To this end, we incorporate volume rendering (Max 1995) as a regulator for the intermediate 3D feature space (see Figure 1). Specifically, we map image features to 3D voxel space following the LSS scheme (Philion and Fidler 2020), and employ a 3D hourglass-like design (Chang and Chen 2018) as the sparse feature inpaintor. The resulting dense intermediate 3D features are then used to generate feature volumes (density, semantic) for volume rendering. We supervise the rendered depth maps and semantic maps with LiDAR projected ground-truth labels under both camera views and bird’s-eye-view. In this way, we employ simple 2D supervisions to regulate dense intermediate 3D features, which ensures our sparse feature inpaintor not to generate unreasonable 3D features that could violate their 2D correspondences. We term the entire framework as Vampire, stands for taking volume rendering as a regulator for intermediate features in multi-camera perception. We provide the overview design in Figure 1. We perform experiments on various multi-camera perception tasks, including 3D occupancy prediction (Tian et al. 2023), image-based LiDAR segmentation on the competitive nuScenes dataset (Caesar et al. 2020), and we also assess whether the regulated 3D features continue to exhibit effectiveness for the 3D object detection. 
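As a concrete illustration of the LSS-style lifting referred to above, the following is a minimal, hedged PyTorch sketch of the "lift" step (a predicted categorical depth distribution re-weighting reduced image features). The class name, channel sizes, and number of depth bins are placeholders of ours, not the authors' code, and the subsequent splat/voxel-pooling step that scatters these frustum features into the 3D volume is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LiftStyleDepthLift(nn.Module):
    """Minimal 'lift' step in the spirit of LSS: weight reduced image
    features by a predicted categorical depth distribution.

    The splat/voxel-pooling step that places these frustum features into the
    3D volume using camera calibration is intentionally omitted here.
    """
    def __init__(self, in_channels=256, out_channels=16, depth_bins=64):
        super().__init__()
        # Two simple 1-layer 2D CNNs, as described later in the paper:
        # one predicts the categorical depth distribution, one reduces channels.
        self.depth_head = nn.Conv2d(in_channels, depth_bins, kernel_size=1)
        self.feat_head = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, img_feats):                                   # (B*N_cam, C_in, H, W)
        depth_prob = F.softmax(self.depth_head(img_feats), dim=1)   # (B*N_cam, D, H, W)
        feats = self.feat_head(img_feats)                           # (B*N_cam, C_out, H, W)
        # Outer product along the depth axis -> per-pixel frustum features.
        frustum = depth_prob.unsqueeze(2) * feats.unsqueeze(1)      # (B*N_cam, D, C_out, H, W)
        return frustum, depth_prob

# Usage on dummy features from 6 surrounding cameras of one sample.
lift = LiftStyleDepthLift()
frustum, depth_prob = lift(torch.randn(6, 256, 16, 44))
print(frustum.shape)   # torch.Size([6, 64, 16, 16, 44])
```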
The contributions of this work are summarized as follows: • We provide a new outlook on intermediate features for vision-centric perception tasks, drawing connections between the occupancy in autonomous driving and volume density in NeRF. • We introduce Vampire, a multi-camera perception framework. The key component lies in using volume rendering as a regulator for dense intermediate 3D features. As such, different perception tasks benefit from the regulated intermediate features. • We demonstrate that our method can handle several perception tasks in a single forward pass with moderate computational resources. The single Vampire model that consumes limited GPU memory (12GB per device) for training is comparable with other existing SOTAs across multiple perception tasks (3D occupancy prediction, image-based LiDAR segmentation and 3D objection detection). Related Work Multi-camera 3D Perception 3D object detection is a classic and longstanding 3D perception task. In multi-camera setting, various attempts (Philion and Fidler 2020; Li et al. 2022c; Huang et al. 2021; Li et al. 2022b) have been proposed for detecting objects in the bird’s-eye-view (BEV) representations which collapse the height dimension of 3D space to achieve a balance between accuracy and efficiency. LSS (Philion and Fidler 2020) and its follow-ups (Huang et al. 2021; Li et al. 2022b,a) first estimate implicit or explicit per-pixel depth distributions to back-project the 2D image features into 3D space, then use the pooling operation or height compression to generate BEV features. Others take advantage of Transformer (Vaswani et al. 2017) and use learnable object-level queries to directly predict 3D bounding boxes (Wang et al. 2022; Liu et al. 2022a,b) or positionaware queries to produce BEV features (Li et al. 2022c; Zhou and Kr¨ahenb¨uhl 2022). However, there are innumerable rigid and nonrigid objects with various structures and shapes in the real-world autonomous driving, which cannot be handled by classic 3D object detection. An alternative is to assign occupancy states to every spatial region within the perceptive range(Tesla 2022), namely, 3D occupancy prediction. Unlike LiDAR segmentation (Fong et al. 2022) which is designed for sparse scanned LiDAR points, the occupancy prediction task aims to achieve dense 3D surrounding perception. This area haven’t been thoroughly explored yet, only a few works use transformer-based designs to deal with it. TPVFormer (Huang et al. 2023) proposes to use tri-perspective view (TPV) grid queries to interact with image features and get reasonable occupancy prediction results to describe the 3D scene. SuroundOcc (Wei et al. 2023) builds 3D volume queries to reserve 3D space information. CONet (Wang et al. 2023) and SuroundOcc (Wei et al. 2023) both generate dense occupancy labels for better prediction performance. OccFormer (Zhang, Zhu, and Du 2023) use a dual-path transformer network to get fine-grained 3D volume features. Occ3D (Tian et al. 2023) and OccNet (Sima et al. 2023) label the original nuScenes dataset to get occupancy data at different scopes. In this paper, we advocate to regulate the dense 3D features to achieve better perception. Scene Representation Learning Effective 3D scene representation is the core of autonomous driving perception. Voxel-based scene representations turn the 3D space into discretized voxels which is usually adopted by LiDAR segmentation (Ye et al. 2022, 2021; Zhu et al. 2021), 3D scene completion (Cao and de Charette 2022; Chen et al. 
2020; Roldao, de Charette, and VerroustBlondet 2020) and 3D occupancy prediction (Wang et al. 2023). BEV-based scene representations collapse 3D features onto the Bird’s Eye View (BEV) plane and achieve a good balance between accuracy and efficiency. They show its effectiveness in 3D object detection (Li et al. 2022c; Huang et al. 2021; Li et al. 2022b,a; Zhang et al. 2022) and BEV segmentation (Philion and Fidler 2020; Li et al. 2022c; Hu et al. 2021; Xie et al. 2022) but are not applicable The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6307 Figure 2: Framework of Vampire. We extract the 2D image features from multi-view images, and transform the 2D features to 3D volume space to generate sparse intermediate 3D features. Inpainting techniques are then applied to obtain dense intermediate features. Density volume and semantic volume are generated by forwarding the dense features with specific heads. Specifically, in the training stage, we regulate intermediate features by constructing loss between ground truth and volume rendered images. The dense features and volumes can be used for various downstream tasks such as 3D occupancy prediction, image-based LiDAR segmentation, and 3D object detection. Please note that for the image-based LiDAR segmentation task, we do not use point clouds as input but employ its evaluation protocol (Huang et al. 2023; Wei et al. 2023; Sima et al. 2023). The point clouds serve as point queries to extract features for training supervision and evaluation. for dense perception tasks when losing the height dimension. Notably, recent implicit scene representation methods demonstrate their potential to represent meaningful 3D scenes. They learn continuous functions to consume 3D coordinates and output representation of a certain point. This kind of representation can model scenes at arbitraryresolution and is commonly used for 3D reconstruction (Chabra et al. 2020; Park et al. 2019) and novel view synthesis (Mildenhall et al. 2021; Barron et al. 2021; Yariv et al. 2021; Wang et al. 2021). As far as we know, very limited researches have explored on combining implicit scene representations with existing representations in autonomous driving. Tesla (Tesla 2022) firstly discusses the similarity between scene occupancy and NeRF (Mildenhall et al. 2021) but it concentrates on using volume rendering to train the occupancy representation for scene reconstruction. A recent related work is (Gan et al. 2023), which adopts similar idea to consider the occupancy the same as volume density and use volume rendering for better depth estimation. Their models use parameter-free back projection to map 2D image features to 3D volume and aggregates the 3D features with corresponding position embeddings to predict volume density and render the depth maps. Most related to ours is the model of (Pan et al. 2023), which shares the same spirit and adopts rendering-based supervision as us. This model is very similar to Vampire, but our work goes further to demonstrate that such volumerendering-assisted perception framework benefits multiple perception tasks and achieves competitive results with stateof-the-art approaches. Methodology Overview The overall framework is illustrated in Figure 2. Vampire consists of three stages: 2D-to-3D Transformation, Sparse Feature Inpainting and Intermediate Feature Regulating. The surrounding multi-camera images are first passed to 2D image backbone to extract 2D image features. 
In the 2D-to-3D transformation stage, the 2D image features are transformed from the 2D image space to the 3D volume space. We follow the LSS scheme (Philion and Fidler 2020; Li et al. 2022b,a; Huang et al. 2021) to perform the feature mapping along the depth dimension. To overcome the 3D feature sparsity of the LSS transformation, we take a 3D hourglass net (Chang and Chen 2018) as the feature inpaintor to conduct the sparse feature inpainting and generate dense intermediate 3D features. The final volume features (semantic volume and density volume) are obtained by forwarding the dense intermediate 3D features through specific heads. In the intermediate feature regulating stage, we sample points along rays from the camera views or the BEV view and gather the corresponding features for rendering; the rendered images and feature maps are used to construct losses that regulate the intermediate features.

2D-to-3D Transformation
We adopt the LSS paradigm (Philion and Fidler 2020) to transform 2D image features into 3D features. LSS-based transformations do not generate redundant features like parameter-free transformations (Cheng, Wang, and Fragkiadaki 2018; Sitzmann et al. 2019; Harley et al. 2019, 2022) and are more effective than transformer-based transformations (Li et al. 2022c; Huang et al. 2023; Wei et al. 2023). We use two simple 1-layer 2D convolutional neural networks (CNNs) to conduct this process. The first one predicts a categorical depth distribution with a softmax activation, and the second one lowers the dimension of the image features to meet our device constraints. These two CNNs work together to map image features along the depth axis, and we do not explicitly supervise this mapping process with depth labels. In this way, 2D image features are placed at the foremost visible surface for each pixel.

Sparse Feature Inpainting
The aforementioned 2D-to-3D transformation produces sparse intermediate 3D features, and such sparsity is not appropriate for dense prediction tasks like occupancy prediction. To overcome this limitation, we draw inspiration from classic image inpainting (Liu et al. 2018; He et al. 2022) and use a 3D hourglass-like design (Chang and Chen 2018) to inpaint the sparse intermediate 3D features V_sparse and generate dense intermediate 3D features V_dense. Please refer to the supplementary materials for network architecture details.

Intermediate Feature Regulating
In this stage, we use the dense intermediate 3D features V_dense to produce two volumetric 3D features: a density volume V_density and a semantic volume V_semantic. Different from (Mildenhall et al. 2021), we adopt an SDF (Signed Distance Function) to model the volume density σ, which facilitates trilinear interpolation during grid sampling. Specifically, we predict a signed distance volume V_sdf, where each value represents the distance from that position to its nearest surface. We then transform the signed distance volume V_sdf into the density volume V_density with the same transformation function as (Yariv et al. 2021):

$$V_{density} = \alpha \Psi_\beta(V_{sdf}), \qquad
\Psi_\beta(s) = \begin{cases}
\frac{1}{2}\exp\!\big(\tfrac{s}{\beta}\big) & \text{if } s \le 0,\\[2pt]
1 - \frac{1}{2}\exp\!\big(-\tfrac{s}{\beta}\big) & \text{if } s > 0,
\end{cases} \tag{1}$$

where α, β > 0 are learnable parameters, Ψ_β is the cumulative distribution function of the Laplace distribution with zero mean and scale β, and s is the predicted signed distance at coordinate x.
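A small sketch of the transformation in Eq. (1) is given below, assuming the signed-distance volume is a plain tensor and that α and β are free parameters kept positive via a softplus; this parameterization is our own placeholder and the authors' exact implementation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SDFToDensity(nn.Module):
    """Map a signed-distance volume to a density volume via Eq. (1):
    V_density = alpha * Psi_beta(V_sdf), where Psi_beta is the CDF of a
    zero-mean Laplace distribution with scale beta.
    """
    def __init__(self, init_alpha=1.0, init_beta=0.1):
        super().__init__()
        # Learnable alpha, beta; positivity enforced with softplus (our choice).
        self.raw_alpha = nn.Parameter(torch.tensor(float(init_alpha)))
        self.raw_beta = nn.Parameter(torch.tensor(float(init_beta)))

    def forward(self, v_sdf):                                  # any shape, e.g. (B, 1, Z, Y, X)
        alpha = F.softplus(self.raw_alpha)
        beta = F.softplus(self.raw_beta) + 1e-6
        s = v_sdf
        # Clamp inside each branch only for numerical safety; the selected
        # branch is unchanged, so the result still equals Eq. (1) exactly.
        psi = torch.where(
            s <= 0,
            0.5 * torch.exp(s.clamp(max=0.0) / beta),          # s <= 0 branch
            1.0 - 0.5 * torch.exp(-s.clamp(min=0.0) / beta),   # s > 0 branch
        )
        return alpha * psi                                      # density volume V_density

# Usage on a dummy 20 x 256 x 256 signed-distance volume.
density = SDFToDensity()(torch.randn(1, 1, 20, 256, 256))
```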
For a coordinate x in the range of interest, we obtain its feature embeddings, including the volume density σ(x) and the semantic logits s(x), by grid sampling G(·, x) in these 3D volume features:

$$\sigma(x) = G(V_{density}, x), \qquad s(x) = G(V_{semantic}, x). \tag{2}$$

To compute the depth and semantics of a single pixel, we adopt techniques similar to (Zhi et al. 2021; Kerr et al. 2023) and accumulate feature embeddings along a ray $\vec{r}(t) = \vec{o} + t\vec{d}$. The rendering weights are calculated by

$$w(t) = \int_t T(t)\,\sigma(t)\,dt, \quad \text{where} \quad T(t) = \exp\!\Big(\int_t -\sigma(c)\,dc\Big), \tag{3}$$

so the rendered feature embeddings are

$$D(r) = \int_t w(t)\,r(t)\,dt, \qquad S(r) = \int_t w(t)\,s(r(t))\,dt. \tag{4}$$

In Vampire, we conduct volume rendering in both the camera view and the bird's-eye view.

Camera View. For the camera view, we render depth and semantic maps to obtain supervision from the 2D space. To render a pixel, we cast a ray from the camera center through the pixel. We sample n depth values {z_i | i = 1, ..., n} for a pixel [u, v]^T and use the known camera calibration to back-project the pixel to several 3D points x ∈ {[x_i, y_i, z_i]^T | i = 1, ..., n}. The corresponding volume densities and semantic logits are obtained by Equation 2, and the depth and semantic maps are calculated by Equation 4.

Bird's Eye View. Different from rendering in the camera view, we do not need camera calibration under the bird's-eye view. Instead, we render directly along the top-down height axis to obtain the BEV height maps and BEV semantic maps. See Figure 3 for reference.

Figure 3: Rendering operations in the intermediate feature regulating stage. For the camera view, the semantic and depth maps are rendered by casting rays from the camera center through each pixel. Several 3D points are sampled along the ray to calculate density and semantic values. For the bird's-eye view, the semantic and height maps are rendered directly along the top-down height axis.

Optimization
Depth Consistency Loss. We enforce consistency between the rendered depth (or height) D and the ground-truth camera depth (or BEV height) ¯D:

$$\mathcal{L}_{dep} = \frac{1}{N^c_{valid}} \sum_{i=1}^{N^c_{valid}} \mathrm{SmoothL1}\big(D^c_i - \bar{D}^c_i\big)
+ \frac{1}{N^b_{valid}} \sum_{i=1}^{N^b_{valid}} \mathrm{SmoothL1}\big(D^b_i - \bar{D}^b_i\big). \tag{5}$$

Semantic Consistency Loss. Similarly, we impose consistency between the volume-rendered semantic logits S^c and the ground-truth semantic labels ¯S^c, employing both the cross-entropy (CE) loss and the Lovász-softmax (LS) loss:

$$\mathcal{L}_{sem} = \frac{1}{N^c_{valid}} \sum_{i=1}^{N^c_{valid}} \big(\mathrm{CE}(S^c_i, \bar{S}^c_i) + \mathrm{LS}(S^c_i, \bar{S}^c_i)\big)
+ \frac{1}{N^b_{valid}} \sum_{i=1}^{N^b_{valid}} \big(\mathrm{CE}(S^b_i, \bar{S}^b_i) + \mathrm{LS}(S^b_i, \bar{S}^b_i)\big), \tag{6}$$

where N^c_valid is the number of pixels with ground-truth depth over all cameras (obtained by projecting the sparse LiDAR points onto the current camera image plane), and N^b_valid is the number of pixels with ground-truth height within the BEV range. ¯D^c and ¯S^c are the ground-truth depths and semantic labels obtained by projecting the sparse LiDAR points onto the current camera image plane. For the BEV, we obtain the ground-truth labels ¯D^b and ¯S^b by projecting the LiDAR points onto the ground plane and taking the height and semantic label of the highest point for each pixel in the BEV map. The overall loss used to regulate our intermediate 3D features is

$$\mathcal{L}_{reg} = \lambda_{dep}\mathcal{L}_{dep} + \lambda_{sem}\mathcal{L}_{sem}, \tag{7}$$

where λ_dep and λ_sem are fixed loss weights. We empirically set all weights to 1 by default.

Applications of Vampire Features
We follow the existing scheme (Huang et al. 2023; Li et al. 2022b) to use our regulated intermediate features.
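The per-task uses described below mostly reduce to querying the regulated volumes at 3D coordinates (voxel centers for occupancy prediction, LiDAR point locations for segmentation). A minimal sketch of that shared querying step is given here; the coordinate-normalization convention and tensor layout are our own simplification, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def query_volume(volume, points, pc_range):
    """Trilinearly sample a 3D feature volume at continuous 3D points.

    volume:   (B, C, Z, Y, X) tensor, e.g. the semantic or density volume.
    points:   (B, N, 3) xyz coordinates in meters (ego frame).
    pc_range: (x_min, y_min, z_min, x_max, y_max, z_max) covered by the volume.
    Returns:  (B, N, C) sampled features (e.g. semantic logits per point/voxel).
    """
    x_min, y_min, z_min, x_max, y_max, z_max = pc_range
    lo = points.new_tensor([x_min, y_min, z_min])
    hi = points.new_tensor([x_max, y_max, z_max])
    # Normalize xyz to [-1, 1]; grid_sample's last grid dimension is (x, y, z),
    # indexing the (X, Y, Z) axes of a (B, C, Z, Y, X) volume.
    norm = (points - lo) / (hi - lo) * 2.0 - 1.0            # (B, N, 3)
    grid = norm.view(norm.shape[0], 1, 1, -1, 3)            # (B, 1, 1, N, 3)
    sampled = F.grid_sample(volume, grid, mode="bilinear",
                            align_corners=False)            # (B, C, 1, 1, N)
    return sampled.squeeze(3).squeeze(2).permute(0, 2, 1)   # (B, N, C)

# Occupancy prediction: query the semantic volume at all voxel centers.
# LiDAR segmentation: query the same volume at the LiDAR point locations.
```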
3D occupancy prediction. The 3D occupancy prediction task usually covers a certain range of the scene, so we voxelize the scene range of interest and conduct grid sampling from our predicted semantic volume V_semantic with the voxel center coordinates. We use the output semantic logits to represent the semantic occupancy of each voxel.

LiDAR segmentation. Different from the traditional LiDAR segmentation task, our model consumes purely RGB images to perceive the 3D surroundings rather than a LiDAR point cloud. To conduct LiDAR segmentation, we use the LiDAR point clouds as point queries to obtain the corresponding semantic logits from the semantic volume V_semantic.

3D object detection. We adopt a tanh function to scale the density volume V_density to the range of [0, 1] and then use it to enhance the dense 3D intermediate features V_dense. We collapse the height dimension and use a linear layer to squeeze the feature dimension:

$$F_{BEV} = \mathrm{HC}\big(V_{dense} \cdot \tanh(V_{density})\big), \tag{8}$$

where HC stands for "height compression", which collapses the height dimension of the 3D features and then squeezes the feature dimension with a linear layer to obtain the final BEV-shaped features F_BEV for detection. F_BEV is then fed into the detection head to obtain the final detection results. For simplicity, we adopt the detection head of BEVDepth (Li et al. 2022b) to produce 3D object detection results.

Experiments
To evaluate the proposed method, we benchmark Vampire on challenging public autonomous driving datasets: nuScenes (Caesar et al. 2020) and its variants (Tian et al. 2023; Fong et al. 2022).

Datasets. The nuScenes dataset contains 1000 scenes of 20 seconds duration each, and the key samples are annotated at 2 Hz. Each sample consists of RGB images from 6 surrounding cameras with 360° horizontal FOV and point cloud data from a 32-beam LiDAR. The 1000 scenes are officially divided into training, validation and test splits with 700, 150 and 150 scenes, respectively. Occ3D-nuScenes (Tian et al. 2023) contains 700 training scenes and 150 validation scenes. The occupancy scope is defined as [−1.0, 5.4] × [−40.0, 40.0] × [−40.0, 40.0] (meters) with a voxel size of 0.4 meters.

Implementation details. Our implementation is based on the official repository of BEVDepth (Li et al. 2022b). We use ResNet-50 (He et al. 2016) as the image backbone and an image resolution of 256 × 704 to meet our computational resources. For the inpainting network, we adopt an hourglass-like architecture (further details are provided in our supplementary materials). The intermediate 3D feature resolution is 20 × 256 × 256, corresponding to the range of [−3.0, 5.0] × [−51.2, 51.2] × [−51.2, 51.2] (meters), and the 3D feature dimension is set to 16 by default. We use AdamW as the optimizer with a learning rate of 2e-4 and a weight decay of 1e-7. All models are trained for 24 epochs with a total batch size of 8 on 8 3080Ti GPUs (12GB).

Method | Backbone | Image Size | mIoU ↑
MonoScene (2023) | Effi.NetB7 | 900×1600 | 6.1
BEVDet (2023) | R101 | 900×1600 | 19.4
OccFormer (2023) | | | 21.9
BEVFormer (2023) | | | 26.9
TPVFormer (2023) | | | 27.8
CTF-Occ (2023) | | | 28.5
UniOcc (2023) | R50 | 256×704 | 22.0
Vampire (ours) | | | 28.3

Table 1: 3D occupancy prediction results on Occ3D-nuScenes. "Effi.NetB7" stands for EfficientNetB7. We obtain the values of other methods from the benchmark paper (Tian et al. 2023). We use bold to indicate the highest result and underline for the second-best result. Despite image backbone and input size differences, Vampire achieves comparable performance with state-of-the-art methods.
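For readers who want to reproduce the regulation described in the Optimization subsection above, the following is a small, self-contained sketch of one standard discretization of Eqs. (3)–(5) and the cross-entropy part of Eq. (6). The sampling scheme, tensor shapes, and masking convention are our assumptions, and the Lovász-softmax term is omitted.

```python
import torch
import torch.nn.functional as F

def render_rays(sigma, sem_logits, z_vals):
    """Discretize Eqs. (3)-(4): accumulate density/semantics along each ray.

    sigma:      (R, S)     volume density at S samples on R rays.
    sem_logits: (R, S, K)  semantic logits at the same samples.
    z_vals:     (R, S)     sample depths along each ray (increasing).
    Returns rendered depth (R,) and semantic logits (R, K).
    """
    deltas = torch.diff(z_vals, dim=-1)
    deltas = torch.cat([deltas, deltas[:, -1:]], dim=-1)           # pad the last interval
    alpha = 1.0 - torch.exp(-sigma * deltas)                       # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    weights = trans * alpha                                        # w(t) in Eq. (3)
    depth = (weights * z_vals).sum(dim=-1)                         # D(r) in Eq. (4)
    sem = (weights.unsqueeze(-1) * sem_logits).sum(dim=-2)         # S(r) in Eq. (4)
    return depth, sem

def consistency_losses(depth, sem, gt_depth, gt_label, valid):
    """Masked versions of Eq. (5) and the CE part of Eq. (6)."""
    dep = F.smooth_l1_loss(depth[valid], gt_depth[valid])
    ce = F.cross_entropy(sem[valid], gt_label[valid])
    return dep, ce

# Dummy usage: 1024 rays, 48 samples per ray, 17 semantic classes.
R, S, K = 1024, 48, 17
depth, sem = render_rays(torch.rand(R, S), torch.randn(R, S, K),
                         torch.linspace(1.0, 52.0, S).expand(R, S))
valid = torch.rand(R) > 0.5   # pixels that received a projected LiDAR label
dep_loss, sem_loss = consistency_losses(depth, sem, torch.rand(R) * 52.0,
                                        torch.randint(0, K, (R,)), valid)
```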
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6310 Method Backbone Image Size mIoU ↑ BEVFormer (2023) R101 900×1600 56.2 TPVFormer (2023) 68.9 TPVFormer† (2023) 58.5 OccNet† (2023) 60.5 TPVFormer (2023) R50 450×800 59.3 OccNet† (2023) 900×1600 53.0 Vampire (ours) 256×704 66.4 Vampire (ours) † 256×704 62.2 Table 2: LiDAR segmentation results on Panoptic nuScenes (Fong et al. 2022) validation set. We obtain the values of baselines from their respective papers. Mark † indicates methods trained without direct LiDAR supervision but only occupancy semantic labels. 3D Occupancy Prediction We compare Vampire with previous state-of-the-art methods on the 3D occupancy prediction task in Table 1. These baseline methods including two main-stream BEV models −BEVDet (Huang et al. 2021), BEVFormer (Li et al. 2022c) and five existing 3D occupancy prediction methods −MonoScene (Cao and de Charette 2022), TPVFormer (Huang et al. 2023), OccFormer (Zhang, Zhu, and Du 2023), UniOcc (Pan et al. 2023), and CTF-Occ (Tian et al. 2023). It can be observed that our method achieves comparable performance with these methods under the mIoU metric. Our Vampire surpasses OccFormer / BEVFormer / TPVFormer by 6.4 / 1.4 / 0.5 mIoU. Although Vampire has a lower mIoU than CTF-Occ (28.3 v.s. 28.5), it is still promising since our method adopts a relatively weak image backbone ResNet-50 and lower input image resolution (256 × 704). LiDAR Segmentation We compare Vampire with existing image-based LiDAR segmentation methods in Table 2. These baseline methods including BEVFormer (Li et al. 2022c), TPVFormer (Huang et al. 2023) and OccNet (Sima et al. 2023). In the inference stage, we predict the semantic labels for given points in the LiDAR segmentation task. Vampire surpasses the SOTA model TPVFormer (Huang et al. 2023) with the same backbone in terms of mIoU (66.4 v.s. 59.3), but a little lower (2.5) compared to TPVFormer-R101. Even without direct 3D LiDAR supervision, Vampire can outperform OccNet (Sima et al. 2023) models with different backbones respectively by 9.2 and 1.7 points in mIoU. 3D Object Detection We conduct 3D object detection experiments on nuScenes validation set. The intention is to verify whether the regulated 3D features can still qualified for 3D detection task. We choose several main-stream 3D object detection baselines including BEVFormer (Li et al. 2022c), BEVDet (Huang et al. 2021) and BEVDepth (Li et al. 2022b). For fair comparisons, we report the baseline values under the setting of ResNet-50 backbone and without temporal fusion techniques. We also choose three baselines provided by (Sima et al. 2023) which conducts joint training of occupancy prediction and 3D detection task like us. As shown in Table 3, comparing to normal 3D object detection methods, Vampire surpasses BEVFormer and BEVDet in mAP (0.301 v.s. 0.286), but lower in NDS (0.354 v.s. 0.372). This could be attributed to negative transfer (Pan and Yang 2009) in joint training of multi-task. BEVDepth reports the value with EMA technique and a large batch size of 64, thus we attribute the performance gap to that. For joint training baselines, Vampire achieves a significantly higher mAP (0.301 v.s. 0.277), but has a gap on the metric of NDS (0.354 v.s. 0.390) and the metric of mean Average Velocity Error (0.541 v.s. 1.043). To summarize, Vampire can perceive the geometry details of 3D surroundings but less sensitive with object velocity. 
It is because the baseline methods are trained by occupancy data with additional flow annotation (occupancy velocity), which can significantly improve their performance to perceive object speed. Ablation Studies Architectural components. We conduct an ablation study on network structures and the proposed losses under the multi-task setting in Table 4. For 3D occupancy prediction task and LiDAR segmentation task, we report the mIoU. For 3D object detection task, we report the NDS. As a parameter-free method, Bilinear (Harley et al. 2022) can produce dense 3D features in the simplest way but also cause massive features generated at the wrong 3D spaces, resulting poor performances in all three tasks. The LSS baseline produces sparse 3D intermediate features, which can handle object detection, but fails to handle dense prediction tasks (e.g., occupancy prediction). When employing the feature inpaintor, dense point / grid level tasks (i.e., occupancy and segmentation) obtain significant improvements. The regulation of depth Ldep improves the occupancy prediction, but Method Joint. mAP ↑ NDS ↑ mAVE ↓ BEVFormer (2022c) 0.257 0.359 0.660 BEVDet (2022b) 0.286 0.372 BEVDetph (2022b) 0.322 0.367 BEVNet (2023) ✓ 0.271 0.390 0.541 VoxNet (2023) 0.277 0.387 0.614 OccNet (2023) 0.276 0.390 0.570 Vampire (ours) 0.301 0.354 1.043 Table 3: 3D object detection results on nuScenes validation set. “mAVE” stands for mean Average Velocity Error. Vampire achieves comparable mAP with baseline methods but fails to sense accurate velocity. The joint-training baselines are trained with additional occupancy flow annotation (occupancy velocity) (Sima et al. 2023), which can significantly improve their performance to perceive object speed. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6311 Trans. Inp. Ldep Lsem Occ.↑ Seg.↑ Det.↑ Bilinear 21.3 56.7 0.301 LSS 21.9 56.5 0.318 LSS ✓ 23.8 60.1 0.316 LSS ✓ ✓ 24.9 59.6 0.309 LSS ✓ ✓ ✓ 25.8 62.6 0.318 Table 4: Ablation study for network structures and losses. “Trans.” stands for 2D-to-3D transformation. “Inp.” stands for sparse feature inpainting. “Occ.” represents 3D occupancy prediction, “Seg.” refers to LiDAR segmentation, “Det.” denotes 3D object detection. Camera. BEV. Occ.↑ Seg.↑ Det.↑ ✓ 24.6 61.9 0.303 ✓ 24.9 60.0 0.315 ✓ ✓ 25.8 62.6 0.318 Table 5: Ablation study for camera and BEV views. “Camera.” stands for volume rendering loss in camera view. “BEV.” stands for volume rendering loss in BEV view. “Occ.” represents 3D occupancy prediction, “Seg.” refers to LiDAR segmentation, “Det.” denotes 3D object detection. has a negative effect for LiDAR segmentation and detection. Such negative effect is because Ldep imposes constraints to the density volume Vdensity and enhances both foreground (e.g., cars) and background objects (e.g., trees). Lsem provides extra semantic information, which alleviates the performance drops and achieve the best results. Supervision of different views. We provide the ablation experiments for both views. As shown in Table 5, LiDAR segmentation is more relevant with the supervision from camera view and 3D object detection is more sensitive to the supervision of BEV view. The camera view supervision can provide fine-grained geometry information which facilitates the LiDAR segmentation. However, the upper parts of camera view has very few LiDAR points for supervision (no LiDAR in the sky), thus the upper parts of density and semantic volumes are out of control. 
This could explain the degradation of detection performance when only supervising the camera Method Device Params. ↓Memory ↓FPS↑ BEVNet (2023) V100 39M 8G 4.5 VoxNet (2023) 72M 23G 1.9 OccNet (2023) 40M 18G 2.6 BEVFormer (2023) RTX3090 4.5G 3.2 TPVFormer (2023) 5.1G 3.1 OccFormer (2023) 147M 5.9G 2.9 Vampire (ours) RTX3080Ti 52M 5.0G 3.8 Table 6: Efficiency analysis. The experiments are all conducted with the corresponding device. view. BEV view can provide extra information and squeezing the upper parts of Vdensity/Vsemantic to meet their highest surface, such occlusion information is invisible in camera views and can restrain the degradation of detection. Efficiency Analysis In Table 6, we compare the inference latency and memory of several methods. We find that the computational resources used in our methods are moderate. This makes the method practical and easy to use for the community. Qualitative Results (a) Input multi-view images. (b) Rendered depth / density maps without/with Ldep. (c) Rendered camera / BEV semantic maps without/with Lsem. Figure 4: Visualizations of rendered results. The effectiveness of Ldep can be verified by Figure 4b, Ldep imposes constraints for learning reasonable 3D geometry information. The effectiveness of Lsem can be verified by Figure 4c, semantic regulation provides significant improvements in generating dense and meaningful features. Conclusion In this paper, we explore the connections between space occupancy in autonomous driving and volume density in NeRF, and propose a novel vision-centric perception framework, i.e., Vampire, which takes volume rendering as the intermediate 3D feature regulator in the multi-camera setting. Vampire predicts per-position occupancy as the volume density and accumulate the intermediate 3D features to 2D planes to obtain additional 2D supervisions. Extensive experiments show that our method is competitive with existing state-of-the-arts across multiple downstream tasks. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6312 Acknowledgements This work was supported in part by The National Nature Science Foundation of China (Grant Nos: 62303406, 62273302, 62036009, 61936006), in part by Ningbo Key R&D Program (No.2023Z231, 2023Z229), in part by Yongjiang Talent Introduction Programme (Grant No: 2023A-194-G), in part by the Key R&D Program of Zhejiang Province, China (2023C01135). References Barron, J. T.; Mildenhall, B.; Tancik, M.; Hedman, P.; Martin-Brualla, R.; and Srinivasan, P. P. 2021. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5855–5864. Caesar, H.; Bankiti, V.; Lang, A. H.; Vora, S.; Liong, V. E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; and Beijbom, O. 2020. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 11621–11631. Cao, A.-Q.; and de Charette, R. 2022. Monoscene: Monocular 3d semantic scene completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3991–4001. Chabra, R.; Lenssen, J. E.; Ilg, E.; Schmidt, T.; Straub, J.; Lovegrove, S.; and Newcombe, R. 2020. Deep local shapes: Learning local sdf priors for detailed 3d reconstruction. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIX 16, 608–625. Springer. Chang, J.-R.; and Chen, Y.-S. 2018. Pyramid stereo matching network. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, 5410–5418. Chen, X.; Lin, K.-Y.; Qian, C.; Zeng, G.; and Li, H. 2020. 3d sketch-aware semantic scene completion via semisupervised structure prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4193–4202. Cheng, R.; Wang, Z.; and Fragkiadaki, K. 2018. Geometryaware recurrent neural networks for active visual recognition. Advances in Neural Information Processing Systems, 31. Fong, W. K.; Mohan, R.; Hurtado, J. V.; Zhou, L.; Caesar, H.; Beijbom, O.; and Valada, A. 2022. Panoptic nuscenes: A large-scale benchmark for lidar panoptic segmentation and tracking. IEEE Robotics and Automation Letters, 7(2): 3795–3802. Gan, W.; Mo, N.; Xu, H.; and Yokoya, N. 2023. A Simple Attempt for 3D Occupancy Estimation in Autonomous Driving. arXiv preprint arXiv:2303.10076. Harley, A. W.; Fang, Z.; Li, J.; Ambrus, R.; and Fragkiadaki, K. 2022. Simple-BEV: What Really Matters for MultiSensor BEV Perception? arXiv preprint arXiv:2206.07959. Harley, A. W.; Lakshmikanth, S. K.; Li, F.; Zhou, X.; Tung, H.-Y. F.; and Fragkiadaki, K. 2019. Learning from unlabelled videos using contrastive predictive neural 3d mapping. arXiv preprint arXiv:1906.03764. He, K.; Chen, X.; Xie, S.; Li, Y.; Doll´ar, P.; and Girshick, R. 2022. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16000–16009. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. Hu, A.; Murez, Z.; Mohan, N.; Dudas, S.; Hawke, J.; Badrinarayanan, V.; Cipolla, R.; and Kendall, A. 2021. FIERY: future instance prediction in bird’s-eye view from surround monocular cameras. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 15273–15282. Huang, J.; Huang, G.; Zhu, Z.; Ye, Y.; and Du, D. 2021. Bevdet: High-performance multi-camera 3d object detection in bird-eye-view. arXiv preprint arXiv:2112.11790. Huang, Y.; Zheng, W.; Zhang, Y.; Zhou, J.; and Lu, J. 2023. Tri-Perspective View for Vision-Based 3D Semantic Occupancy Prediction. arXiv preprint arXiv:2302.07817. Kerr, J.; Kim, C. M.; Goldberg, K.; Kanazawa, A.; and Tancik, M. 2023. LERF: Language Embedded Radiance Fields. arXiv preprint arXiv:2303.09553. Li, Y.; Bao, H.; Ge, Z.; Yang, J.; Sun, J.; and Li, Z. 2022a. Bevstereo: Enhancing depth estimation in multi-view 3d object detection with dynamic temporal stereo. arXiv preprint arXiv:2209.10248. Li, Y.; Ge, Z.; Yu, G.; Yang, J.; Wang, Z.; Shi, Y.; Sun, J.; and Li, Z. 2022b. Bevdepth: Acquisition of reliable depth for multi-view 3d object detection. arXiv preprint arXiv:2206.10092. Li, Z.; Wang, W.; Li, H.; Xie, E.; Sima, C.; Lu, T.; Qiao, Y.; and Dai, J. 2022c. Bevformer: Learning bird’s-eye-view representation from multi-camera images via spatiotemporal transformers. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part IX, 1–18. Springer. Li, Z.; Yu, Z.; Wang, W.; Anandkumar, A.; Lu, T.; and Alvarez, J. M. 2023. FB-BEV: BEV Representation from Forward-Backward View Transformations. arXiv preprint arXiv:2308.02236. Liu, G.; Reda, F. A.; Shih, K. J.; Wang, T.-C.; Tao, A.; and Catanzaro, B. 2018. Image inpainting for irregular holes using partial convolutions. In Proceedings of the European conference on computer vision (ECCV), 85–100. 
Liu, Y.; Wang, T.; Zhang, X.; and Sun, J. 2022a. Petr: Position embedding transformation for multi-view 3d object detection. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVII, 531–548. Springer. Liu, Y.; Yan, J.; Jia, F.; Li, S.; Gao, Q.; Wang, T.; Zhang, X.; and Sun, J. 2022b. Petrv2: A unified framework for 3d perception from multi-camera images. arXiv preprint arXiv:2206.01256. Max, N. 1995. Optical models for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics, 1(2): 99–108. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6313 Mildenhall, B.; Srinivasan, P. P.; Tancik, M.; Barron, J. T.; Ramamoorthi, R.; and Ng, R. 2021. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1): 99–106. Pan, M.; Liu, L.; Liu, J.; Huang, P.; Wang, L.; Zhang, S.; Xu, S.; Lai, Z.; and Yang, K. 2023. UniOcc: Unifying VisionCentric 3D Occupancy Prediction with Geometric and Semantic Rendering. arXiv preprint arXiv:2306.09117. Pan, S. J.; and Yang, Q. 2009. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10): 1345–1359. Park, J. J.; Florence, P.; Straub, J.; Newcombe, R.; and Lovegrove, S. 2019. Deepsdf: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 165–174. Philion, J.; and Fidler, S. 2020. Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3d. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIV 16, 194–210. Springer. Roldao, L.; de Charette, R.; and Verroust-Blondet, A. 2020. Lmscnet: Lightweight multiscale 3d semantic completion. In 2020 International Conference on 3D Vision (3DV), 111– 119. IEEE. Sima, C.; Tong, W.; Wang, T.; Chen, L.; Wu, S.; Deng, H.; Gu, Y.; Lu, L.; Luo, P.; Lin, D.; and Li, H. 2023. Scene as Occupancy. Sitzmann, V.; Thies, J.; Heide, F.; Nießner, M.; Wetzstein, G.; and Zollhofer, M. 2019. Deepvoxels: Learning persistent 3d feature embeddings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2437–2446. Tesla. 2022. Tesla AI Day. https://www.youtube.com/ watch?v=ODSJsviD SU. Tian, X.; Jiang, T.; Yun, L.; Wang, Y.; Wang, Y.; and Zhao, H. 2023. Occ3D: A Large-Scale 3D Occupancy Prediction Benchmark for Autonomous Driving. arXiv preprint arXiv:2304.14365. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30. Wang, P.; Liu, L.; Liu, Y.; Theobalt, C.; Komura, T.; and Wang, W. 2021. NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction. Advances in Neural Information Processing Systems, 34: 27171–27183. Wang, X.; Zhu, Z.; Xu, W.; Zhang, Y.; Wei, Y.; Chi, X.; Ye, Y.; Du, D.; Lu, J.; and Wang, X. 2023. OpenOccupancy: A Large Scale Benchmark for Surrounding Semantic Occupancy Perception. arXiv preprint arXiv:2303.03991. Wang, Y.; Guizilini, V. C.; Zhang, T.; Wang, Y.; Zhao, H.; and Solomon, J. 2022. Detr3d: 3d object detection from multi-view images via 3d-to-2d queries. In Conference on Robot Learning, 180–191. PMLR. Wei, Y.; Zhao, L.; Zheng, W.; Zhu, Z.; Zhou, J.; and Lu, J. 2023. SurroundOcc: Multi-Camera 3D Occupancy Prediction for Autonomous Driving. 
arXiv preprint arXiv:2303.09551. Xie, E.; Yu, Z.; Zhou, D.; Philion, J.; Anandkumar, A.; Fidler, S.; Luo, P.; and Alvarez, J. M. 2022. Mˆ 2bev: Multi-camera joint 3d detection and segmentation with unified birds-eye view representation. arXiv preprint arXiv:2204.05088. Yariv, L.; Gu, J.; Kasten, Y.; and Lipman, Y. 2021. Volume rendering of neural implicit surfaces. Advances in Neural Information Processing Systems, 34: 4805–4815. Ye, D.; Zhou, Z.; Chen, W.; Xie, Y.; Wang, Y.; Wang, P.; and Foroosh, H. 2022. Lidarmultinet: Towards a unified multi-task network for lidar perception. arXiv preprint arXiv:2209.09385. Ye, M.; Wan, R.; Xu, S.; Cao, T.; and Chen, Q. 2021. Drinet++: Efficient voxel-as-point point cloud segmentation. arXiv preprint arXiv:2111.08318. Zhang, Y.; Zhu, Z.; and Du, D. 2023. OccFormer: Dualpath Transformer for Vision-based 3D Semantic Occupancy Prediction. arXiv preprint arXiv:2304.05316. Zhang, Y.; Zhu, Z.; Zheng, W.; Huang, J.; Huang, G.; Zhou, J.; and Lu, J. 2022. Beverse: Unified perception and prediction in birds-eye-view for vision-centric autonomous driving. arXiv preprint arXiv:2205.09743. Zhi, S.; Laidlow, T.; Leutenegger, S.; and Davison, A. J. 2021. In-place scene labelling and understanding with implicit scene representation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 15838–15847. Zhou, B.; and Kr¨ahenb¨uhl, P. 2022. Cross-view transformers for real-time map-view semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13760–13769. Zhu, X.; Zhou, H.; Wang, T.; Hong, F.; Ma, Y.; Li, W.; Li, H.; and Lin, D. 2021. Cylindrical and asymmetrical 3d convolution networks for lidar segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 9939–9948. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6314 | 2024 | 701 |
18,520 | ZOOM: Learning Video Mirror Detection with Extremely-Weak Supervision Ke Xu*, Tsun Wai Siu*, Rynson W.H. Lau† Department of Computer Science, City University of Hong Kong Abstract Mirror detection is an active research topic in computer vision. However, all existing mirror detectors learn mirror representations from large-scale pixel-wise datasets, which are tedious and expensive to obtain. Although weakly-supervised learning has been widely explored in related topics, we note that popular weak supervision signals (e.g., bounding boxes, scribbles, points) still require some efforts from the user to locate the target objects, with a strong assumption that the images to annotate always contain the target objects. Such an assumption may result in the over-segmentation of mirrors. Our key idea of this work is that the existence of mirrors over a time period may serve as a weak supervision to train a mirror detector, for two reasons. First, if a network can predict the existence of mirrors, it can essentially locate the mirrors. Second, we observe that the reflected contents of a mirror tend to be similar to those in adjacent frames, but exhibit considerable contrast to regions in far-away frames (e.g., non-mirror frames). In this paper, we propose ZOOM, the first method to learn robust mirror representations from extremely weak annotations of per-frame ZerO-One Mirror indicators in videos. The key insight of ZOOM is to model the similarity and contrast (between the mirror and non-mirror regions) in temporal variations to locate and segment the mirrors. To this end, we propose a novel fusion strategy to leverage temporal consistency information for mirror localization and a novel temporal similarity-contrast modeling module for mirror segmentation. We construct a new video mirror dataset for training and evaluation. Experimental results under new and standard metrics show that ZOOM performs favorably against existing fully-supervised mirror detection methods. Introduction Mirrors are made to reflect objects in the surroundings for different purposes (e.g., monitoring traffic situations, checking dressings, and decorating rooms). However, such reflected contents of mirrors may fail existing computer vision models in various tasks, e.g., depth estimation (Tan et al. 2021), lane detection (Feng et al. 2022), and scene parsing (Zhou et al. 2017; Xie et al. 2023). Hence, it is essential to design effective and robust mirror detectors. *These authors contributed equally. †Rynson Lau is the corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Input SATNet HetNet VMDNet Ours GT Figure 1: We propose ZOOM, which learns to detect mirrors with extremely weak supervision, i.e., the zero-one mirror indicators. ZOOM learns the similarity (blue arrow) and contrast (red arrows) in temporal variations and achieves promising results against fully supervised mirror detectors. In recent years, a few deep methods are proposed to train mirror detectors in a fully-supervised manner with a large amount of mirror images and annotations. Yang et al. (2019) propose the first deep network to learn contextual contrasted features for detecting mirrors in single RGB images. Two other methods (Mei et al. 2021; Tan et al. 2021) extend the modeling of contrasted features by incorporating depth information. 
Other methods exploit appearance correspondences (Lin, Wang, and Lau 2020; Lin, Tan, and Lau 2023), semantic relationships (Guan, Lin, and Lau 2022), visual chiral discrepancy (Tan et al. 2023), coarse symmetry relations (Huang et al. 2023), and intensity-based low-level and semantics-based high-level features (He, Lin, and Lau 2023), for distinguishing the real and reflected contents. Despite the success, these methods typically require tedious pixel-level labeling of large amounts of training data. They may also suffer from the over-detection limitation, as their fully-supervised learning schemes implicitly assume that test images always contain mirrors (e.g., Figure 1 first row). Weakly supervised learning is a straightforward and possible solution to reduce labeling efforts. We note that there The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6315 are four main types of weak supervision signals widely studied in related computer vision tasks, i.e., bounding boxes (Dai, He, and Sun 2015), scribbles (Zhang et al. 2020; Yu et al. 2021; He et al. 2023), points (Gao et al. 2022b; Kim et al. 2023), and class labels (Araslanov and Roth 2020; Qin et al. 2022; Li, Xie, and Lin 2018; Piao et al. 2021; Wang et al. 2017). However, while the class labels may not work well as mirrors may reflect objects of different classes, bounding boxes, scribbles, and points still require annotators to pay considerable effort to recognize/locate the targets. In this paper, we aim to answer the question of whether it is possible to learn robust mirror representations with minimum supervision. Our key observation is that the temporal existence of mirrors can be used as weak supervision to train a mirror detector. The reasons are two-fold. First, when a network learns to predict the existence of mirrors, it essentially learns to locate the mirrors. Second, due to the relative motions between mirrors and cameras, we observe that the reflected contents of a mirror tend to be similar to those in adjacent frames, but exhibit considerable contrast to those in far-away frames, e.g., non-mirror frames (Figure 1 1st column). Such temporal information can be modeled for mirror segmentation. The first observation inspires us to explore the knowledge of mirror presence/absence to train a mirror detector, which is much cheaper than existing weak labels as localization is no longer required by the annotators. The second observation inspires us to model the temporal variations in similarity and contrast to segment the mirror regions. Inspired by the above observations, in this paper, we propose ZOOM, the first method that learns robust mirror representations from extremely weak annotations of per-frame ZerO-One Mirror indicators in videos. ZOOM has two main novelties. First, we propose a novel Class Activation Maps (CAM) (Zhou et al. 2016) based fusion strategy to leverage temporal consistency information for robust mirror localization. Second, we propose a novel temporal similaritycontrast modeling module to model the similarity of mirror regions of adjacent frames and the contrast between mirror and non-mirror regions of distant frames for mirror segmentation. To facilitate the learning process, we construct a new video mirror dataset for the training and evaluation of ZOOM. This dataset does not assume that mirrors always exist. 
To summarize, this work has four main contributions: • We propose ZOOM, the first method that learns from extremely weak annotations of frame-level ZerO-One Mirror indicators for video mirror detection. • We propose a novel fusion strategy to localize mirrors by introducing temporal consistency information, and a novel temporal similarity-contrast modeling module to segment mirrors by modeling the feature similarity of mirror regions in adjacent frames and the feature contrast of mirror/non-mirror regions in distant frames. • We construct a video mirror dataset, which covers diverse daily life scenes. The key feature of our dataset is that it does not assume the presence of mirrors. • Experiments with new and standard metrics on our and existing datasets show that ZOOM achieves promising results against existing fully-supervised mirror detectors. Related Work Deep Mirror Detection. Yang et al. (2019) propose the first deep network to segment mirrors in single images via contextual contrasted feature modeling. Later, a few methods propose to model the appearance correspondences (Lin, Wang, and Lau 2020), semantic associations (Guan, Lin, and Lau 2022), visual chiralty (Tan et al. 2023), and coarse symmetry relations (Huang et al. 2023) between real and reflected contents. He et al. (2023) propose an efficient mirror detector by learning intensity-based contrast and semantic features. Two other methods (Mei et al. 2021; Tan et al. 2021) extend the contrast modeling of RGB images by incorporating depth information. Most recently, a concurrent work by Lin et al. (2023) models inter- and intra-frame appearance correspondences for video mirror detection. While all existing mirror detection methods are fully supervised, this paper presents a novel method to train a video mirror detector with 0/1 mirror indicators. Video Salient Object Detection (VSOD). Aiming at the segmentation of visually distinctive (i.e., salient) objects from an input video, the majority of VSOD methods (Fan et al. 2019; Gu et al. 2020; Zhang et al. 2021; Li et al. 2019; Chen et al. 2021; Song et al. 2018) is fully-supervised, which focus on the modeling of dynamic visual contrasts (in contrast to static ones from single images, e.g., (Tu et al. 2016; Hu et al. 2018)). To alleviate the annotation costs, scribbles (Zhao et al. 2021) and points (Gao et al. 2022a) are exploited as weak supervision signals for VSOD. However, VSOD methods do not detect mirrors well, as the reflected contents of mirrors are not always salient. Weak Supervision Signals. Bounding boxes (Dai, He, and Sun 2015; Liang et al. 2022), scribbles (Zhang et al. 2020; Yu et al. 2021; He et al. 2023), points (Yang et al. 2018; Gao et al. 2022b; Kim et al. 2023), and class labels (Araslanov and Roth 2020; Qin et al. 2022; Li, Xie, and Lin 2018; Liu et al. 2023; Piao et al. 2021; Wang et al. 2017; Kweon et al. 2021; Tian et al. 2020, 2022) are popular weak supervision signals. However, bounding boxes, scribbles, and points still require annotators to provide location information. In this paper, we explore a weaker and more challenging supervision signal, the 0/1 mirror indicator. Our supervision is similar to class labels in that they both do not have explicit location information. However, methods using class labels typically leverage strong semantics (e.g., certain shapes and appearances of a specific class), which may not work well on mirrors due to the changes of reflected contents in mirrors. 
Dataset To facilitate weakly-supervised training and evaluation, we first construct a video mirror detection dataset, which contains 200 videos (12, 490 frames). Figure 2 shows some examples in our dataset.1 We discuss the details below. Video Collection. We collect 140 videos from two public datasets: 70 videos from the Charades (Sigurdsson et al. 2016) and 70 videos from the Charades-Ego (Sigurdsson 1https://drive.google.com/drive/folders/ 199OpHuHkmbY4ib5TJKV m7rxN1JgmHWI?usp=sharing. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6316 Figure 2: Our dataset examples. The upper two rows show four training video clips (with corresponding mirror indicators marked in red). The bottom two rows show two test video clips (with corresponding ground truth mirror maps). et al. 2018), which record daily indoor activities. We capture 60 videos by ourselves using smartphones. We trim each video to have a duration of 5 ∼8 seconds at 10 FPS. The total duration of our videos is 1, 252 seconds. Dataset Annotation. We randomly split our dataset into a training set of 150 videos (9,398 images) and a test set of 50 videos (3,092 images). We assign frame-level binary mirror indicators to the training set and annotate pixel-level mirror masks for the test set. We uniformly sample ∼20% frames from the training set to annotate mirror masks, for collecting dataset statistics and finetuning existing methods. Contrast Distribution. Figure 3 shows the color contrasts (χ2 distance of RGB histograms) between the mirror regions and non-mirror regions. We include the distributions of MSD (Yang et al. 2019), PMD (Lin, Wang, and Lau 2020) and VMD (Lin, Tan, and Lau 2023) for reference, to which our dataset has similar color contrast distributions. Figure 3: Color contrast distributions. Area Distribution. Figure 4 shows the mirror area distributions of our training and test sets, respectively. It shows that after the dataset split, the distributions of training and test sets are still aligned well. It also shows that our dataset contains mirrors of different area ratios, while small mirrors being the most make our dataset challenging. Temporal & Spatial Distribution. We analyze both temporal and spatial existences of mirrors in our dataset in Figure 5. To analyze the temporal existences of mirrors, we use the Figure 4: Mirror area distributions of our training (left) and test (right) sets. Figure 5: Temporal (left) and spatial (right) distributions. relative time (frame index) of the mirror disappearing in a video. Figure 5 (left) shows that in our dataset, mirrors may move out of the camera across the whole video duration, although there is a relatively higher chance that mirrors disappear at the end of a video. To analyze the spatial existences of mirrors, Figure 5 (right) shows the probability map, which indicates how likely each pixel belongs to a mirror. Mirrors occupy the majority of the image except for the bottom parts, as mirrors tend to be placed around human eyesight. Proposed Method Besides requiring expensive pixel-wise labels for training, we note that existing mirror detection methods typically assume the existence of mirrors, which often results in mirror over-segmentation. Our key idea is to exploit the temporal presence of mirrors as weak supervision to train a mirror detector, as learning to predict the mirror presence essentially locates the mirrors. 
Besides, we observe that the reflected contents of a mirror tend to be similar to those in adjacent frames, but exhibit considerable contrast to regions in faraway non-mirror frames. To this end, we propose ZOOM, to learn robust video mirror representations from the extremely-weak supervision of per-frame zero-one mirror indicators. Formally, given a collection of videos of N frames Y = {y1, ..., yN} and their corresponding mirror indicators S = {s1, ..., sN} ∈{0, 1} as supervision, ZOOM is a deep function Zθ to be trained to produce the mirror maps ˆ M = { ˆm1, ..., ˆmN} as: ˆ M = Zθ(Y), (1) where learnable parameter θ contains two groups of parameThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6317 Temporal Fusion Encoder Classification Head Temporal Similarity Contrast Module Backbone Features Segmentation Head Lseg Confidence Scores Lcls Feedback Constrain Figure 6: Overview of ZOOM. Given input frames yi, yi+1, yn, we use an encoder Ψ to extract backbone features fi, fi+1, fn, respectively. We attach a classification network to predict the mirror presence, which produces mirror localization maps ci, ci+1, cn via Φcam. We perform a temporal fusion process to exploit temporal consistency to obtain refined maps cr i , cr i+1, cr n as pseudo ground truth. We propose a temporal similarity-contrast modeling module to model the temporal varying similarity/contrast between mirror/non-mirror regions for mirror segmentation. ters {θc, θs} for the classification and segmentation, respectively. Figure 6 illustrates the overview of training ZOOM. Localizing the Mirrors Unlike existing weak supervision signals, our zero-one mirror indicators are extremely weak (i.e., without mirror location information). Hence, while training ZOOM to locate mirrors via classification of mirror existences, we consider the CAM techniques (Zhou et al. 2016; Wang et al. 2020; Selvaraju et al. 2017; Jiang et al. 2021) to generate the localization maps for input frames. Locating Mirrors In Single Frames. We note that CAM methods are typically leveraged in post-processing steps to generate pseudo-ground truth maps. Our goal is to generate mirror localization maps with their corresponding features, both of which are used to facilitate mirror localization and segmentation. Hence, to determine where a neural network focuses when recognizing the mirror in one frame, we modify the last two layers (i.e., the global average pooling and the fully connected layer) of CAM (Zhou et al. 2016). Specifically, given input frames Y = {yi, yi+1, yj} (of which yi and yi+1 are two adjacent mirror frames and yj is a non-mirror frame of the same video), we first extract their corresponding multi-scale deep features F = {fi, fi+1, fj} using an encoder Ψ (a pre-trained ResNext backbone (Xie et al. 2017)). We then generate localization maps C = {ci, ci+1, cj} and corresponding confidence scores P = {pi, pi+1, pj} as: C = Φcam(F), P = Cls(C), (2) in which Φcam is a mapping function consisting of three convolution layers for reducing and aligning the channel dimension of features F, and predicting the localization maps C, respectively. C is then fed to a 1 × 1 convolutional classification head (Cls) to produce the confidence scores P for calculating the binary cross-entropy loss. Temporal Fusion. The localization maps obtained from the classification are coarse (focus on the most discriminative pixels) and noisy (identify both mirror and non-mirror pixels). 
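Before turning to the fusion step, a minimal PyTorch sketch of the localization stage just described (the three-convolution Φcam followed by the 1×1 classification head of Eq. 2) may help make the tensors concrete. It is a sketch under stated assumptions: the channel widths, the single mirror class, and the global pooling used to turn the maps into frame-level confidence scores are illustrative choices, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MirrorLocalizer(nn.Module):
    """Phi_cam (three convs -> localization maps C) plus a 1x1 classification head (Cls)."""
    def __init__(self, in_channels=2048, mid_channels=256, num_maps=1):
        super().__init__()
        self.phi_cam = nn.Sequential(                  # reduce/align channels, then predict maps C
            nn.Conv2d(in_channels, mid_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, mid_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, num_maps, 3, padding=1),
        )
        self.cls_head = nn.Conv2d(num_maps, 1, 1)       # Cls: 1x1 conv over the maps

    def forward(self, feats):                           # feats: backbone features, B x C x H x W
        cam = self.phi_cam(feats)                       # localization maps C
        score = self.cls_head(cam).mean(dim=(2, 3))     # frame-level confidence p (assumed pooling)
        return cam, score

# Binary cross-entropy against the 0/1 mirror indicators s_i supervises the scores.
localizer = MirrorLocalizer()
feats = torch.randn(2, 2048, 11, 11)                    # e.g., ResNeXt stage-5 features for two frames
cam, score = localizer(feats)
loss_cls = F.binary_cross_entropy_with_logits(score, torch.tensor([[1.0], [0.0]]))
```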
We consider a temporal similarity fusion strategy, as:
$\hat{c} = c_i + c_{i+1} - c_j, \quad (3)$
which tends to aggregate more confident pixels and suppress the noisy ones. We then leverage a feedback strategy to enhance the backbone features $F$ using $\hat{c}$. Specifically, we first conduct min-max normalization to constrain activation values to $[0, 1]$. We then feed the fused localization map back to the backbone features as:
$F_c = F \cdot \hat{c}, \quad (4)$
where $\cdot$ is element-wise multiplication. We omit the simple convolutions that align feature dimensions for simplicity.
Pseudo-Label Generation. After obtaining the enhanced mirror-aware backbone features $F_c$, we perform another classification process to obtain localization maps as:
$C^r = \Phi_{cam}(F_c). \quad (5)$
We do not directly use $\hat{c}$ as pseudo-labels, as it may not be accurate due to an initially incorrect classification.
Segmenting the Mirrors
Based on the observation that the reflected contents of a mirror tend to be consistent with those in adjacent frames, but exhibit certain contrast to regions in distant frames, we model the feature similarity between mirror regions in adjacent frames and the feature contrast between mirror and non-mirror regions in distant frames.
Figure 7: Proposed temporal similarity-contrast modeling module. It aims to enhance feature similarity between mirror regions in adjacent frames and maximize feature contrast between mirror/non-mirror regions in distant frames.
Temporal Similarity Modeling. As shown in Figure 7 (upper part), given the input backbone features $F = \{f_i, f_{i+1}, f_j\}$, we first apply three groups of convolutions to reduce and align their feature dimensions, respectively, producing $\{f'_i, f'_{i+1}, f'_j\} \in \mathbb{R}^{H \times C \times W}$, where $H$, $C$, $W$ represent the feature height, number of channels, and width, respectively. Note that $f'_i$ and $f'_{i+1}$ are features extracted from the adjacent mirror frames, while $f'_j$ are features from the distant non-mirror frame. We first model the feature similarity between $f'_i$ and $f'_{i+1}$. Specifically, we concatenate $f'_i$ and $f'_{i+1}$ and then apply the SE block (Hu, Shen, and Sun 2018) to adjust the channel-wise feature consistency, producing $K \in \mathbb{R}^{H \times C \times W}$. We then apply the ASPP module (Chen et al. 2017) to encode $f'_i$ and $f'_{i+1}$ into two compact embeddings $Q_i$ and $Q_{i+1} \in \mathbb{R}^{H' \times C' \times W'}$, respectively. We then compute the cosine similarity between $Q_i$ and $K$ to obtain the confidence map $U_i$ (Wang et al. 2022), and we obtain $U_{i+1}$ similarly (denoted as Ⓢ in Figure 7). Finally, we use $U_i$ and $U_{i+1}$ to re-weight $K$ and feed them into the segmentation head to produce the mirror maps $\hat{M}_i$ and $\hat{M}_{i+1}$, respectively. To encourage the network to capture the content similarity in mirror regions, we minimize the feature similarity loss:
$\mathcal{L}_{fs} = \| \hat{M}_i \cdot f'_i - \hat{M}_{i+1} \cdot f'_{i+1} \|_2. \quad (6)$
Temporal Contrast Modeling. As shown in Figure 7 (lower part), we construct an additional branch to model the temporal contrast information. To maximize the feature discrepancy between the mirror regions of the $i$-th (or $(i+1)$-th) frame and the non-mirror region of the $n$-th frame, we minimize the feature contrast loss:
$\mathcal{L}_{fc} = -\| (\hat{M}_i \cup \hat{M}_{i+1}) \cdot f'_n - \hat{M}_i \cdot f'_i - \hat{M}_{i+1} \cdot f'_{i+1} \|_2, \quad (7)$
where $\hat{M}_i$ and $\hat{M}_{i+1}$ are the mirror regions of frames $i$ and $i+1$, respectively, while $(\hat{M}_i \cup \hat{M}_{i+1}) \cdot f'_n$ extracts the corresponding non-mirror features in frame $n$.
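For concreteness, the two objectives in Eqs. 6 and 7 reduce to a few masked tensor operations. The sketch below is not the authors' code: it assumes the predicted mirror maps are soft masks in [0, 1] broadcast over the channel dimension, and it approximates the union in Eq. 7 with an element-wise maximum.

```python
import torch

def feature_similarity_loss(m_i, m_i1, f_i, f_i1):
    """L_fs (Eq. 6): pull masked mirror features of adjacent frames i and i+1 together."""
    return torch.norm(m_i * f_i - m_i1 * f_i1, p=2)

def feature_contrast_loss(m_i, m_i1, f_i, f_i1, f_n):
    """L_fc (Eq. 7): push the union-masked features of the distant non-mirror frame n away."""
    union = torch.maximum(m_i, m_i1)          # soft stand-in for the union of M_i and M_{i+1}
    return -torch.norm(union * f_n - m_i * f_i - m_i1 * f_i1, p=2)

# masks: B x 1 x H x W in [0, 1]; features: B x C x H x W (mask broadcasts over channels)
B, C, H, W = 2, 64, 88, 88
m_i, m_i1 = torch.rand(B, 1, H, W), torch.rand(B, 1, H, W)
f_i, f_i1, f_n = (torch.randn(B, C, H, W) for _ in range(3))
loss = feature_similarity_loss(m_i, m_i1, f_i, f_i1) + \
       feature_contrast_loss(m_i, m_i1, f_i, f_i1, f_n)
```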
As all frames are from the same video clip, $f''_n$ (the features taken from the non-mirror frame $n$) also represents the non-mirror regions in frames $i$ and $i+1$ to some extent. By modeling such contrast, Eq. 7 helps the model segment the mirrors more accurately, and also suppresses noisy background predictions through the all-zero supervision and the parameter sharing of the segmentation head.
Training and Inference
Loss Function. We use the binary cross-entropy loss ($\mathcal{L}_{bce}$) to supervise the mirror localization process. In addition to $\mathcal{L}_{fs}$ and $\mathcal{L}_{fc}$, we use the Lovász-Softmax loss (Berman, Triki, and Blaschko 2018) ($\mathcal{L}_{lov}$) for the mirror segmentation process. The whole loss function can be written as:
$\mathcal{L} = \sum \mathcal{L}_{bce}(Y, S) + \mathcal{L}_{lov}(\hat{M}, C^r) + \mathcal{L}_{fs} + \mathcal{L}_{fc}. \quad (8)$
In practice, we first train the classification branch, in which we forward the classification process twice and backpropagate once to produce pseudo-labels. We then train the segmentation branch and freeze the classification-related parameters. For training, we assume that $f_n$ is extracted from the non-mirror frames. At inference, if a frame is classified as non-mirror, the corresponding segmentation process is not performed. We use two frames for inference.
Implementation Details. We have implemented the proposed model in PyTorch (Paszke et al. 2017) and tested it on a PC with an i7 4GHz CPU and a GTX4090 GPU. We use ResNeXt-101 (Xie et al. 2017) pre-trained on ImageNet (Deng et al. 2009) to initialize our encoder network, while the other network parameters are initialized with the truncated normal initializer (with the random seed set to 2333). For loss minimization, we adopt the AdamW optimizer (Loshchilov and Hutter 2019). The base learning rate, batch size, and number of training epochs are 2e-4, 8, and 120, respectively, and the learning rate is reduced by a factor of 10 at the 90th epoch. Input frames are resized to 352 × 352. No post-processing techniques are used to refine our results.
Results
Experimental Setups
Evaluation Methods. We compare our method to seven fully-supervised mirror detection methods with publicly available code, including six image-based methods (i.e., MirrorNet (Yang et al. 2019), PMDNet (Lin, Wang, and Lau 2020), SANet (Guan, Lin, and Lau 2022), VCNet (Tan et al. 2023), SAT (Huang et al. 2023), and HetNet (He, Lin, and Lau 2023)) and the most recent video-based method (VMDNet (Lin, Tan, and Lau 2023)).
Evaluation Datasets. We report the mirror detection performance on the proposed video mirror detection dataset and the existing video dataset (Lin, Tan, and Lau 2023). We finetune all competing methods using our training data when they are evaluated on our test set. We follow the experimental setups of VMDNet (Lin, Tan, and Lau 2023) for experiments on their dataset.
Evaluation Metrics. We follow previous methods and use the intersection over union (IoU), mean absolute error (MAE), and F-measure ($F_\beta$, with $\beta$ set to 0.3 as suggested in (Achanta et al. 2009)) to evaluate the mirror detection performance.
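A reference implementation of these three metrics might look as follows; the 0.5 binarization used for IoU and the adaptive threshold used for Fβ are common conventions from the saliency literature and are assumptions here, not details taken from the paper.

```python
import numpy as np

def mirror_metrics(pred, gt, beta2=0.3, thresh=0.5, eps=1e-8):
    """pred: HxW map in [0, 1]; gt: HxW binary mask. Returns (IoU, F_beta, MAE).

    beta2 plays the role of beta^2 in (1 + b^2) * P * R / (b^2 * P + R)."""
    mae = np.abs(pred - gt).mean()

    pred_bin = pred >= thresh                           # fixed threshold for IoU (assumption)
    inter = np.logical_and(pred_bin, gt > 0).sum()
    union = np.logical_or(pred_bin, gt > 0).sum()
    iou = inter / (union + eps)

    adaptive = pred >= min(2.0 * pred.mean(), 1.0)      # adaptive threshold as in Achanta et al. (2009)
    tp = np.logical_and(adaptive, gt > 0).sum()
    precision = tp / (adaptive.sum() + eps)
    recall = tp / ((gt > 0).sum() + eps)
    f_beta = (1 + beta2) * precision * recall / (beta2 * precision + recall + eps)
    return iou, f_beta, mae
```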
As our dataset may have video frames that do not contain mirrors, to evaluate the performance on such frames, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6319 Methods w/ Mirror w/o Mirror IoU↑ Fβ ↑ MAE↓ TNR↑ ER↓ MAE↓ MirrorNet 0.468 0.564 0.203 0.641 0.941 0.357 MirrorNet† 0.572 0.758 0.093 0.843 1.000 0.164 PMDNet 0.585 0.728 0.074 0.932 0.855 0.068 PMDNet† 0.647 0.807 0.062 0.967 0.966 0.082 SANet 0.541 0.760 0.167 0.753 1.000 0.351 SANet† 0.633 0.819 0.083 0.925 0.999 0.164 VCNet 0.612 0.755 0.069 0.899 0.932 0.104 VCNet† 0.660 0.825 0.256 0.971 1.000 0.264 HetNet 0.576 0.739 0.068 0.927 0.834 0.075 HetNet† 0.635 0.798 0.046 0.983 0.808 0.017 SAT 0.569 0.757 0.081 0.882 0.516 0.118 SAT† 0.724 0.858 0.040 0.936 0.518 0.064 VMDNet 0.320 0.664 0.103 0.972 0.238 0.028 VMDNet† 0.449 0.616 0.095 0.910 0.893 0.090 Ours 0.513 0.774 0.070 0.994 0.091 0.012 Table 1: Quantitative comparison on the proposed dataset. “Model†” represents models finetuned on our dataset (in fully-supervised ways). The best and second best results are marked in bold and underlined, respectively, for reference. we propose to compute the true negative rate (TNR) as: TNR = TN/(TN + FP), (9) where TN represents the number of true negative pixels (i.e., correctly-detected non-mirror pixels) and FP represents the number of false positive pixels (i.e., falselydetected non-mirror pixels). This measures to what extent a detector performs correctly in detecting non-mirror regions, and a higher TNR indicates a better performance. We also measure the error rate (ER) by computing the ratio between the number of non-mirror frames detected to have mirrors and the number of total non-mirror frames. This measures how often a detector may detect mirrors that do not exist, and a lower ER indicates a better performance. Comparing to State-of-the-arts Quantitative Results. Table 1 reports the comparisons on our test set between ZOOM (weakly-supervised) and existing methods (pre-trained and finetuned in fully-supervised ways). Note that frames are tested following the chronological order but we separate frames into two groups, i.e., frames with and without mirrors, for evaluation and discussion. Accordingly, several observations can be made. First, all existing methods tend to achieve better performance when they are finetuned on our dataset. This is understandable as the domain discrepancy exists. Second, compared to imagebased methods, the video-based VMDNet (Lin, Tan, and Lau 2023) shows a relatively lower generalization ability. This is due to that it relies on modeling the temporal appearance correspondence, which is a strong assumption to hold in daily life scenes. Third, our method achieves the best performance on the non-mirror frames under all three metrics, which shows that our method tends to make fewer wrong predictions. Last, although our model is trained with extremely weak supervision, it still produces promising results compared to fully supervised methods on mirror frames. Methods F/W IoU↑ Fβ ↑ MAE↓ MirrorNet F 0.505 0.681 0.145 PMDNet F 0.532 0.749 0.128 VCNet F 0.539 0.749 0.123 HetNet F 0.567 0.751 0.120 SAT F 0.318 0.564 0.334 VMDNet F 0.567 0.787 0.105 Ours W 0.294 0.448 0.387 Table 2: Quantitative results on the VMD dataset (Lin, Tan, and Lau 2023). Best results are marked in bold for reference. 
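The two frame-level quantities defined above can be sketched in a few lines; treating a frame as detected to have mirrors as soon as any pixel exceeds the binarization threshold is an assumed counting rule, not a detail stated in the paper.

```python
import numpy as np

def tnr_and_er(preds, gts, thresh=0.5):
    """preds, gts: lists of HxW arrays for the non-mirror frames only.

    TNR = TN / (TN + FP) over pixels; ER = (# non-mirror frames predicted to contain
    a mirror) / (# non-mirror frames)."""
    tn = fp = wrong_frames = 0
    for pred, gt in zip(preds, gts):
        pred_bin = pred >= thresh
        neg = gt == 0                        # every pixel is a true-negative candidate here
        fp += np.logical_and(pred_bin, neg).sum()
        tn += np.logical_and(~pred_bin, neg).sum()
        wrong_frames += int(pred_bin.any())  # frame-level false alarm (assumed rule)
    return tn / max(tn + fp, 1), wrong_frames / max(len(preds), 1)
```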
Methods IoU↑ Fβ ↑ TNR↑ Φcam Only 0.276 0.420 0.733 Cr →ˆC 0.334 0.571 0.896 Our Cr 0.473 0.668 0.990 fi Only 0.318 0.465 0.801 fi + fi+1 0.422 0.658 0.935 fi + fn 0.392 0.633 0.907 w/o Lfs 0.474 0.647 0.950 w/o Lfc 0.492 0.695 0.928 Llov →Lproj 0.520 0.724 0.961 Ours 0.513 0.774 0.994 Table 3: Ablation study. The upper, middle, and bottom parts compare different ablated versions of the localization, segmentation, and loss functions, respectively. Table 2 further reports the comparisons on the VMD test set (Lin, Tan, and Lau 2023), which shows that ZOOM performs favorably against existing mirror detectors. Qualitative Results. Figure 8 shows the visual comparisons between between ZOOM and state-of-the-art mirror detectors on our test set. Although we note that sometimes ZOOM may not produce pixel-wisely accurate mirror maps, it generally locates the mirrors well in these challenging cases. Internal Analysis Ablation Study. We report ablation results in Table 3. We first analyze the qualities of pseudo labels, by removing the proposed temporal fusion and applying Φcam to individual frames to generate pseudo maps (denoted as “Φcam Only”). We then add the temporal fusion (Eq. 3) but exclude the feedback constraint (Eq. 4) (denoted as “Cr →ˆC”). Last, we report the quality of pseudo labels generated by the proposed approach (denoted as “Our Cr”). The upper part of Table 3 verifies the effectiveness of the temporal fusion for mirror localization and pseudo labels generation. Next, we evaluate the components of the proposed temporal similarity-contrast modeling module. We first adapt our method to process the single frame only (denoted as “fi Only”). We then investigate the effectiveness of modeling temporal coherence and contrast separately (denoted as “fi + fi+1” and “fi + fn”, respectively). The middle three rows of Table 3 show that modeling either the similarThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6320 Input MirrorNet PMD SANet VCNet SATNet HetNet VMDNet Ours GT Figure 8: Visual comparison between ZOOM (weakly-supervised) and existing mirror detectors (fully-supervised). ity or contrast temporally improves the performance, while modeling both of them results in the best results (Ours). Moreover, we find that the performance of removing the SE block degrades from (IoU/Fβ: 0.513/0.774) to (IoU/Fβ: 0.433/0.682) while replacing the ASPP module with simple pooling operations yields to (IoU/Fβ: 0.470/0.687). Last, we analyze the loss terms in Eq. 8. We first remove the feature similarity term Lfs and the feature contrast term Lfc separately (denoted as “w/o Lfs” and “w/o Lfc”, respectively). We also replace the Llov with the box loss (Tian et al. 2021) (“denoted as Llov →Lproj”). The bottom four rows of Table 3 shows the degraded performance when Lfs and Lfc are removed. Besides, while the projection loss may result in higher IoU, it degrades the performance of other metrics, as it tends to produce false positive errors. Model Efficiency. Table 4 compares the model size (the number of parameters) and inference time (in terms of FPS), between ZOOM and existing methods. We can see that ZOOM performs at a reasonable computational cost. (a) Input (b) Ours (c) GT (d) Input (e) Ours (f) GT Figure 9: Failure cases. Our method may fail (a) when the mirror is small across the whole video, or (d) when there exists distracting mirror-like objects. Methods Reso. Num. of Param. 
FPS MirrorNet 384×384 121.77 7.82 PMD 384×384 147.66 7.41 SANet 384×384 104.80 8.53 HetNet 352×352 49.92 49.23 VMDNet 384×384 62.24 17.06 Ours 352×352 57.26 19.02 Table 4: Models’ sizes and inference time for reference. Limitations. As shown in Figure 9(a), our method may fail when the mirror is small (far away) across the whole video and misclassify the presence of mirrors when the target scene contains mirror-like objects (e.g., glossy objects with reflections/highlights in Figure 9(d)). Conclusion In this paper, we have proposed a novel approach, named ZOOM, to learn an effective mirror detector from extremely weak annotations of per-frame ZerO-One Mirror indicators in videos. ZOOM leverages CAMs with a novel fusion strategy to model temporal consistency information for mirror localization. It also includes a novel temporal similaritycontrast modeling module for mirror segmentation. We have constructed a new video mirror dataset and conducted experiments on the proposed dataset as well as the existing mirror dataset under new and standard metrics. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6321 Acknowledgements This project is in part supported by a GRF grant from the Research Grants Council of Hong Kong (No.: 11211223) and an SRG grant from City University of Hong Kong (No.: 7005674). References Achanta, R.; Hemami, S.; Estrada, F.; and Susstrunk, S. 2009. Frequency-tuned salient region detection. In CVPR. Araslanov, N.; and Roth, S. 2020. Single-stage semantic segmentation from image labels. In CVPR. Berman, M.; Triki, A. R.; and Blaschko, M. B. 2018. The Lov´asz-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks. In CVPR. Chen, C.; Wang, G.; Peng, C.; Fang, Y.; Zhang, D.; and Qin, H. 2021. Exploring Rich and Efficient Spatial Temporal Interactions for Real-Time Video Salient Object Detection. IEEE TIP. Chen, L.-C.; Papandreou, G.; Schroff, F.; and Adam, H. 2017. Rethinking atrous convolution for semantic image segmentation. arXiv:1706.05587. Dai, J.; He, K.; and Sun, J. 2015. BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation. In ICCV. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and FeiFei, L. 2009. ImageNet: A large-scale hierarchical image database. In CVPR. Fan, D.-P.; Wang, W.; Cheng, M.-M.; and Shen, J. 2019. Shifting more attention to video salient object detection. In CVPR. Feng, Z.; Guo, S.; Tan, X.; Xu, K.; Wang, M.; and Ma, L. 2022. Rethinking Efficient Lane Detection via Curve Modeling. In CVPR. Gao, S.; Xing, H.; Zhang, W.; Wang, Y.; Guo, Q.; and Zhang, W. 2022a. Weakly Supervised Video Salient Object Detection via Point Supervision. In ACM MM. Gao, S.; Zhang, W.; Wang, Y.; Guo, Q.; Zhang, C.; He, Y.; and Zhang, W. 2022b. Weakly-Supervised Salient Object Detection Using Point Supervison. In AAAI. Gu, Y.; Wang, L.; Wang, Z.; Liu, Y.; Cheng, M.-M.; and Lu, S.-P. 2020. Pyramid constrained self-attention network for fast video salient object detection. In AAAI. Guan, H.; Lin, J.; and Lau, R. W. 2022. Learning Semantic Associations for Mirror Detection. In CVPR. He, R.; Dong, Q.; Lin, J.; and Lau, R. W. 2023. WeaklySupervised Camouflaged Object Detection with Scribble Annotations. In AAAI. He, R.; Lin, J.; and Lau, R. W. 2023. Efficient Mirror Detection via Multi-level Heterogeneous Learning. In AAAI. Hu, J.; Shen, L.; and Sun, G. 2018. Squeeze-and-excitation networks. In CVPR. Hu, X.; Zhu, L.; Qin, J.; Fu, C.-W.; and Heng, P.-A. 2018. 
Recurrently aggregating deep features for salient object detection. In AAAI. Huang, T.; Dong, B.; Lin, J.; Liu, X.; Lau, R. W. H.; and Zuo, W. 2023. Symmetry-Aware Transformer-based Mirror Detection. In AAAI. Jiang, P.-T.; Zhang, C.-B.; Hou, Q.; Cheng, M.-M.; and Wei, Y. 2021. Layercam: Exploring hierarchical class activation maps for localization. IEEE TIP. Kim, B.; Jeong, J.; Han, D.; and Hwang, S. J. 2023. The Devil is in the Points: Weakly Semi-Supervised Instance Segmentation via Point-Guided Mask Representation. In CVPR. Kweon, H.; Yoon, S.-H.; Kim, H.; Park, D.; and Yoon, K.-J. 2021. Unlocking the potential of ordinary classifier: Classspecific adversarial erasing framework for weakly supervised semantic segmentation. In ICCV. Li, G.; Xie, Y.; and Lin, L. 2018. Weakly supervised salient object detection using image labels. In AAAI. Li, H.; Chen, G.; Li, G.; and Yu, Y. 2019. Motion guided attention for video salient object detection. In ICCV. Liang, Z.; Wang, P.; Xu, K.; Zhang, P.; and Lau, R. W. 2022. Weakly-supervised salient object detection on light fields. IEEE TIP. Lin, J.; Tan, X.; and Lau, R. W. 2023. Learning To Detect Mirrors From Videos via Dual Correspondences. In CVPR. Lin, J.; Wang, G.; and Lau, R. W. 2020. Progressive mirror detection. In CVPR. Liu, F.; Liu, Y.; Kong, Y.; Xu, K.; Zhang, L.; Yin, B.; Hancke, G.; and Lau, R. 2023. Referring Image Segmentation Using Text Supervision. In ICCV. Loshchilov, I.; and Hutter, F. 2019. Decoupled Weight Decay Regularization. arXiv:1711.05101. Mei, H.; Dong, B.; Dong, W.; Peers, P.; Yang, X.; Zhang, Q.; and Wei, X. 2021. Depth-Aware Mirror Segmentation. In CVPR. Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; and Lerer, A. 2017. Automatic differentiation in PyTorch. In NeurIPS Workshop. Piao, Y.; Wang, J.; Zhang, M.; and Lu, H. 2021. Mfnet: Multi-filter directive network for weakly supervised salient object detection. In ICCV. Qin, J.; Wu, J.; Xiao, X.; Li, L.; and Wang, X. 2022. Activation modulation and recalibration scheme for weakly supervised semantic segmentation. In AAAI. Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; and Batra, D. 2017. Grad-cam: Visual explanations from deep networks via gradient-based localization. In ICCV. Sigurdsson, G. A.; Gupta, A.; Schmid, C.; Farhadi, A.; and Alahari, K. 2018. Actor and Observer: Joint Modeling of First and Third-Person Videos. In CVPR. Sigurdsson, G. A.; Varol, G.; Wang, X.; Farhadi, A.; Laptev, I.; and Gupta, A. 2016. Hollywood in homes: Crowdsourcing data collection for activity understanding. In ECCV. Song, H.; Wang, W.; Zhao, S.; Shen, J.; and Lam, K.-M. 2018. Pyramid dilated deeper convlstm for video salient object detection. In ECCV. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6322 Tan, J.; Lin, W.; Chang, A.; and Savva, M. 2021. Mirror3D: Depth Refinement for Mirror Surfaces. In CVPR. Tan, X.; Lin, J.; Xu, K.; Chen, P.; Ma, L.; and Lau, R. W. 2023. Mirror Detection With the Visual Chirality Cue. IEEE TPAMI. Tian, X.; Xu, K.; Yang, X.; Yin, B.; and Lau, R. W. 2020. Weakly-supervised salient instance detection. In BMVC. Tian, X.; Xu, K.; Yang, X.; Yin, B.; and Lau, R. W. 2022. Learning to detect instance-level salient objects using complementary image labels. IJCV. Tian, Z.; Shen, C.; Wang, X.; and Chen, H. 2021. Boxinst: High-performance instance segmentation with box annotations. In CVPR. Tu, W.-C.; He, S.; Yang, Q.; and Chien, S.-Y. 2016. 
Realtime salient object detection with a minimum spanning tree. In CVPR. Wang, H.; Wang, Z.; Du, M.; Yang, F.; Zhang, Z.; Ding, S.; Mardziel, P.; and Hu, X. 2020. Score-CAM: Score-weighted visual explanations for convolutional neural networks. In CVPR workshops. Wang, L.; Lu, H.; Wang, Y.; Feng, M.; Wang, D.; Yin, B.; and Ruan, X. 2017. Learning to detect salient objects with image-level supervision. In CVPR. Wang, X.; Yu, Z.; De Mello, S.; Kautz, J.; Anandkumar, A.; Shen, C.; and Alvarez, J. M. 2022. Freesolo: Learning to segment objects without annotations. In CVPR. Xie, S.; Girshick, R.; Doll´ar, P.; Tu, Z.; and He, K. 2017. Aggregated residual transformations for deep neural networks. In CVPR. Xie, Z.; Wang, S.; Xu, K.; Zhang, Z.; Tan, X.; Xie, Y.; and Ma, L. 2023. Boosting Night-time Scene Parsing with Learnable Frequency. IEEE TIP. Yang, X.; Mei, H.; Xu, K.; Wei, X.; Yin, B.; and Lau, R. W. 2019. Where is my mirror? In ICCV. Yang, X.; Xu, K.; Chen, S.; He, S.; Yin, B. Y.; and Lau, R. 2018. Active matting. Yu, S.; Zhang, B.; Xiao, J.; and Lim, E. G. 2021. Structureconsistent weakly supervised salient object detection with local saliency coherence. In AAAI. Zhang, J.; Yu, X.; Li, A.; Song, P.; Liu, B.; and Dai, Y. 2020. Weakly-supervised salient object detection via scribble annotations. In CVPR. Zhang, M.; Liu, J.; Wang, Y.; Piao, Y.; Yao, S.; Ji, W.; Li, J.; Lu, H.; and Luo, Z. 2021. Dynamic context-sensitive filtering network for video salient object detection. In ICCV. Zhao, W.; Zhang, J.; Li, L.; Barnes, N.; Liu, N.; and Han, J. 2021. Weakly supervised video salient object detection. In CVPR. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; and Torralba, A. 2016. Learning deep features for discriminative localization. In CVPR. Zhou, B.; Zhao, H.; Puig, X.; Fidler, S.; Barriuso, A.; and Torralba, A. 2017. Scene parsing through ade20k dataset. In CVPR. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6323 | 2024 | 702 |
18,521 | Weakly Supervised Multimodal Affordance Grounding for Egocentric Images Lingjing Xu1, Yang Gao1*, Wenfeng Song2*, Aimin Hao1 1State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, China 2Computer School, Beijing Information Science and Technology University, China {xulingjing, gaoyangvr, ham}@buaa.edu.cn, [email protected] Abstract To enhance the interaction between intelligent systems and the environment, locating the affordance regions of objects is crucial. These regions correspond to specific areas that provide distinct functionalities. Humans often acquire the ability to identify these regions through action demonstrations and verbal instructions. In this paper, we present a novel multimodal framework that extracts affordance knowledge from exocentric images, which depict human-object interactions, as well as from accompanying textual descriptions that describe the performed actions. The extracted knowledge is then transferred to egocentric images. To achieve this goal, we propose the HOI-Transfer Module, which utilizes local perception to disentangle individual actions within exocentric images. This module effectively captures localized features and correlations between actions, leading to valuable affordance knowledge. Additionally, we introduce the Pixel-Text Fusion Module, which fuses affordance knowledge by identifying regions in egocentric images that bear resemblances to the textual features defining affordances. We employ a Weakly Supervised Multimodal Affordance (WSMA) learning approach, utilizing imagelevel labels for training. Through extensive experiments, we demonstrate the superiority of our proposed method in terms of evaluation metrics and visual results when compared to existing affordance grounding models. Furthermore, ablation experiments confirm the effectiveness of our approach. Code:https://github.com/xulingjing88/WSMA. Introduction The notion of affordance, originally proposed by Gibson (Gibson 2014), posits that objects possess ”action possibilities”. For instance, a knife can be employed for cutting objects, while a cup can be used for drinking. However, merely knowing the purpose of an object is insufficient to enable intelligent agents to actively engage with their environment. Precise understanding of interaction locations is crucial. For example, a knife’s blade is for cutting, while its handle is for gripping. This concept has garnered significant attention in the domains of robotics and computer vision, finding applications in tasks such as robotic grasping and scene comprehension. *Corresponding author: Yang Gao and Wenfeng Song Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Or Image Encoder Image Encoder Text Encoder Fusion Image Encoder Image Encoder Cross-view Affordance Label Image Encoder Affordance Label Local Cross-view Affordance Label Affordance Text Egoce ntric Image Exoce ntric Image Egoce ntric Image Exoce ntric Image Egoce ntric Image Egoce ntric Image (a) Previous Methods Or Image Encoder Image Encoder Text Encoder Fusion Image Encoder Image Encoder Cross-view Affordance Label Image Encoder Affordance Label Local Cross-view Affordance Label Affordance Text Egoce ntric Image Exoce ntric Image Egoce ntric Image Exoce ntric Image Egoce ntric Image Egoce ntric Image (b) Ours (WSMA) Figure 1: Comparison between Previous Methods and Our Proposed Method. 
(a) One focused on individual object learning, and the other utilized cross-view transfer from exocentric images. (b) Our WSMA leverages local cross-view transfer from exocentric images and additionally learns from corresponding textual descriptions. Navigating affordance knowledge is riddled with challenges. Firstly, many dominant techniques, such as those referenced in (Myers et al. 2015; Nguyen et al. 2017; Chuang et al. 2018; Do, Nguyen, and Reid 2018; Fang et al. 2018), are deeply anchored to detailed pixel-level annotations. This is particularly demanding given the complex nature of affordance regions. For example, annotating the ”drink with” action for a cup entails meticulous attention to the rim area—a task that is both intricate and error-prone. Consequently, the rigors of such annotation often undermine data quality. Secondly, in real-world settings, interactions with objects are informed by both actions and language—an aspect often underserved by current methodologies. For example, Figure 1(a) indicates that traditional methods (Grabner, Gall, and Van Gool 2011; Hermans, Rehg, and Bobick 2011; Myers et al. 2015) largely view affordances as unchanging object traits, leading to segmentations predominantly based on visual appearance. Such an approach downplays the fluidity of human-object dynamics. Conversely, as depicted on the right side of Figure 1(a), newer research (Nagarajan, Feichtenhofer, and Grauman 2019; Li et al. 2023; Luo et al. 2022b) leans heavily on exocentric images of human interactions, potentially sidelining other rich data sources. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6324 To tackle the challenges, we introduce a Weakly Supervised Multimodal Affordance Grounding approach, as illustrated in Figure 1(b). Our initial strategy mitigates the high costs associated with manual annotation by adopting a weakly supervised learning approach, where image-level labels guide the process instead of intricate pixel-level annotations. Furthermore, to harness diverse learning sources, we present an innovative methodology that leverages both exocentric images capturing human interactions and textual data for affordance knowledge acquisition. Notably, individuals learning interactions through a combination of visual and textual cues are expected to outperform those solely relying on images. The fundamental concept driving our proposed methodology unfolds as follows: during the training phase, we employ pairs of exocentric images and corresponding affordance text to amass affordance knowledge, which is subsequently transposed to egocentric images. As depicted in Figure 1(b), our network architecture consists of three branches: Exocentric, Egocentric, and Text. The Exocentric branch is trained with exocentric images to glean affordance insights from these visual inputs. This acquired visual knowledge is then transmitted to the Egocentric branch through the HOITransfer Module. This integration incorporates a localization strategy, enabling the effective differentiation of various actions. In parallel, the Text branch is trained with affordance text and transfers its learned knowledge to the Egocentric branch. By amalgamating insights from both exocentric images and text, our approach achieves refined localization of affordance regions. During the testing phase, we retain the Egocentric and Text branches, employ class activation mapping (CAM) (Zhou et al. 
2016) to infer the pertinent affordance regions, and subsequently enhance segmentation results using the CAM Refined Module. In summary, our salient contributions are as follows: • We propose a multimodal weakly-supervised framework (WSMA) designed to localize affordance regions by integrating affordance knowledge from both exocentric images and their corresponding textual descriptions, effectively transferring the acquired knowledge to egocentric images. • We introduce the HOI-Transfer Module to extract local affordance knowledge from exocentric person-object interaction features and supervise the training of egocentric images. Concurrently, the Pixel-Text Fusion Module is incorporated to facilitate the transfer of knowledge from text to images by integrating text and egocentric image features. • In our evaluation, we conduct experiments on two datasets: ADE20K and HICO-IIF. The experimental results clearly underscore the superiority of our proposed approach over existing methods, showcasing its optimal performance in localizing affordance regions. Related Work In our work, we employ a weakly supervised multimodal approach for Affordance Grounding. In this section, we survey works in domains related to our method. Visual Affordance Grounding Visual affordance grounding aims to locate object regions responsible for specific functionalities, thereby enhancing the comprehension of human interactions. While recent research has delved into this task, initial works relied on pixellevel annotations for supervised training (Koppula, Gupta, and Saxena 2013; Myers et al. 2015; Chuang et al. 2018; Do, Nguyen, and Reid 2018), but were limited by the complexity of annotations. The field has also yielded various weakly supervised approaches. For example, Sawatzky et al. (Sawatzky and Gall 2017) proposed a method using a minimal number of points as weak supervision, while Nagarajan et al. (Nagarajan, Feichtenhofer, and Grauman 2019) leveraged videos for learning. Recently, weak supervision through image-level labels (Luo et al. 2022b; Li et al. 2023) has emerged, primarily focusing on exocentric images for affordance learning. In contrast, our work introduces a comprehensive framework that not only attends to exocentric images but also incorporates affordance knowledge from textual sources. Cross-view Knowledge Distillation Knowledge distillation is a training technique in deep learning that involves the transfer of model knowledge from a teacher model to a student model (Mirzadeh et al. 2020; Chen et al. 2020). Conversely, cross-view knowledge distillation focuses on transferring knowledge across different perspectives. Research in this field has expanded in recent years (Fang et al. 2018; Sigurdsson et al. 2018; Nagarajan, Feichtenhofer, and Grauman 2019; Li et al. 2021; Luo et al. 2022b; Li et al. 2023), with some methods leveraging videos for knowledge transfer. For instance, Ego-exo (Li et al. 2021) proposes a method that uses third-person videos to uncover latent signals and predict specific attributes in egocentric views. Other methods utilize images from different perspectives for knowledge transfer. Both Cross-viewAG (Luo et al. 2022b) and LOCATE (Li et al. 2023) learn affordance knowledge from exocentric images and transfer this knowledge to egocentric images. In this paper, we employ cross-view distillation using newly designed local losses between exocentric and egocentric images. 
Vision-language Models Visual-language models aim to achieve mutual comprehension and interaction between images and natural language, establishing a close nexus between visual and textual information. An increasing number of works are dedicated to investigating this domain, with CLIP (Radford et al. 2021) being one of the most prominent examples. CLIP undertakes training on extensive image-text datasets and attains impressive performance benchmarks. Additionally, this area of research has generated numerous other significant contributions (Xu et al. 2019; Wang, Chan, and Loy 2023; Guo et al. 2023). For instance, certain investigations (Gao et al. 2021a; Zhang et al. 2021; Zhou et al. 2022) have advanced CLIP’s training strategies, while others (Rao et al. 2022) focus on segmentation tasks. Inspired by these developments, we integrate CLIP’s text encoder into our framework to extract textual features in our study. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6325 DINO-VIT C Self-Attention √ ··· √ ··· √ ··· ··· ··· ··· 𝑾× 𝑯+ 𝑬 HOI-Transfer Module Classification C Concat 𝐿𝑐𝑙𝑖𝑝 𝐿𝑐𝑙𝑠 𝐿𝑐𝑙𝑠 Exocentric Images Text carry ··· DINO-VIT ··· [Learnable Prompts] Multimodal AIM Module Label Egocentric Image encoder C Pixel-Text Fusion Pixel-Text Attention Figure 2: Overview of the Proposed Framework (WSMA). During the training stage, the proposed framework is divided into three branches: Exocentric, Egocentric, and Text. (1) The Exocentric branch extracts affordance knowledge from exocentric images and transfers it to the Egocentric branch using the HOI-Transfer Module. (2) The Egocentric branch extracts features from egocentric images and incorporates the affordance knowledge provided by the other branches. (3) The Text branch extracts features from the affordance text and fuses them into the Egocentric branch using the Pixel-Text Fusion Module. Image Embeddings Text Embeddings Self-Attention 𝐶× hw 1 × 𝐶 𝐶× hw 𝐶× hw repeat Figure 3: Detailed Description of Pixel-Text Attention. This module is designed to transfer textual knowledge. Method Figure 2 illustrates our Weakly Supervised Multimodal Affordance (WSMA) framework, which is designed to localize affordance regions in egocentric images. The following sections delve deeper into the intricacies of this framework. Multimodal Fusion While people typically learn interaction skills through demonstrations and linguistic cues, current research often overlooks this. In contrast, our approach authentically derives affordance insights from both exocentric images and descriptive text. Notably, compared to other weakly supervised methods relying on image-level labels, our approach utilizes the same labels and treats class names as textual input, without introducing new annotations or increasing the labeling workload in practical scenarios. Accordingly, our model is built upon three foundational pillars: Exocentric, Egocentric, and Text branches. The input parameters to our model include n exocentric images Ii (where i spans from 1 to n), an egocentric image Ig, and an affordance label C. Egocentric Branch and Text Branch For a single egocentric image Ig input, we utilize the DINO-VIT (Caron et al. 2021) model for feature extraction. DINO-VIT, a selfsupervised vision transformer, provides feature information pertinent to image semantic segmentation. As shown in Figure 2, our framework employs a DINO-VIT model M with b (b = 12) blocks, yielding f 1 g , . . . , f b g = M (Ig). 
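For readers who want to reproduce the feature extraction, the publicly released DINO models expose per-block outputs directly; the snippet below is a sketch that assumes the torch.hub entry point for ViT-S/16, and the reshaping of patch tokens into a spatial grid is an illustrative choice rather than the paper's exact pipeline.

```python
import torch

# DINO ViT-S/16 from the public release; the backbone is kept frozen as in the paper.
model = torch.hub.load('facebookresearch/dino:main', 'dino_vits16')
model.eval()

img = torch.randn(1, 3, 224, 224)                        # a single egocentric image I_g
with torch.no_grad():
    # Outputs of the last two blocks, each of shape [B, 1 + N, 384] (CLS token + N patch tokens).
    f_b_minus_1, f_b = model.get_intermediate_layers(img, n=2)

# Drop the CLS token and lay the patch tokens out as a 14 x 14 grid (224 / 16 = 14)
# so that later convolutional heads can operate on them.
patch_grid = f_b[:, 1:, :].transpose(1, 2).reshape(1, 384, 14, 14)
print(f_b_minus_1.shape, f_b.shape, patch_grid.shape)
```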
To enhance results comprehensively, we extract features from both the penultimate and final layers, yielding the deep feature $f_g = \mathrm{MLP}(\mathrm{Concat}(f^{b-1}_g, f^b_g))$, where MLP comprises two linear layers. With regard to the input affordance text $T$ (the textual description of affordance label $C$), we recognize the challenges in manual prompt design. Drawing inspiration from CoOp (Zhou et al. 2022), we introduce $m$ ($m = 16$) trainable prompts preceding $T$. This results in $T' = [V_1] \cdots [V_m]\, T$, where $\{V_1, \ldots, V_m\}$ represent the $m$ trainable prompts. Using the text encoder $M_T$ from CLIP (Radford et al. 2021), we then derive the text embedding $f_t = M_T(T')$.
Pixel-Text Fusion Module
To effectively merge the affordance knowledge from textual information into the Egocentric Image branch, we introduce the Pixel-Text Fusion Module. Firstly, to ensure alignment of the image features $f_g$ and text features $f_t$ within the same feature space, we use the following equation:
$f'_g = \mathrm{AttentionPool}(\mathrm{Concat}(\mathrm{Average}(f_g), f_g)). \quad (1)$
Figure 4: Description of the Testing Stage. We retain only the Egocentric and Text branches, utilizing the attention matrix from the self-attention structure within the network to optimize the results.
AttentionPool is computed using a multi-head attention mechanism. Based on this, we propose the equation:
$Z_{clip} = f'_g(0)\, f_t^{T}, \quad (2)$
where $f'_g(i)$ refers to the portion of $f'_g$ with channel index $i$. Finally, we utilize $Z_{clip}$ to compute the cross-entropy loss $\mathcal{L}_{clip}$, which guides the training. As illustrated in Figure 3, to seamlessly merge the aligned textual features with the image features, we introduce the Pixel-Text Attention Module. This computation employs the egocentric image feature $f'_g \in \mathbb{R}^{(1+hw) \times c}$ and the previously obtained text embedding $f_t \in \mathbb{R}^{1 \times c}$. We start with:
$f_{att} = f_t\, [f'_g(1{:})]^{T}, \quad (3)$
where $f_{att}$ serves as a similarity matrix bridging images and text. It functions as a subsequent attention matrix for egocentric images, directing the model's focus towards regions that resonate with the affordance text. $f_{att}$ is repeated to yield $f'_{att} \in \mathbb{R}^{c \times hw}$. The final equation becomes:
$F_g = f_g \times f'_{att} + f_g, \quad (4)$
where $F_g$ represents the culmination of affordance knowledge transfer from the text to the egocentric images. For classification purposes, $F_g$ is passed through a $3 \times 3$ convolutional layer followed by a fully connected layer, resulting in the classification scores $c_{ego}$. These scores are then used to compute the cross-entropy loss $\mathcal{L}_{cls}$ for optimization.
Exocentric Branch
To begin with, feature extraction is conducted for the $n$ input exocentric images $I_i$ ($i = 1, \ldots, n$), mirroring the approach used in the Egocentric branch: we again utilize DINO-VIT for feature extraction. Subsequently, we merge the outputs of the last two layers of the network ($f^{b-1}_i$, $f^b_i$) to yield the comprehensive features $f^i_x = \mathrm{MLP}(\mathrm{Concat}(f^{b-1}_i, f^b_i))$. Building upon the Affordance Invariance Mining Module (AIM) introduced in a prior study (Luo et al. 2022b), we express the comprehensive features $f^i_x$ as $W_x \times H^i_x + E^i_x$. Here, $W_x$ denotes the sub-feature related to human interactions, while $H^i_x$ and $E^i_x$ represent the coefficient matrix and the individual variation of the $i$-th image, respectively.
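As a sketch of the pixel-text fusion in Eqs. 2 to 4 above, the snippet below treats f'g as a (1 + hw) x c token matrix whose first row is the pooled token; the tensor shapes, the reshape of the similarity map, and the function name are assumptions made for illustration, not the released code.

```python
import torch

def pixel_text_fusion(f_g_tokens, f_t, f_g_map):
    """f_g_tokens: (1+hw) x c aligned image tokens (row 0 = pooled token),
    f_t: 1 x c text embedding, f_g_map: c x h x w egocentric feature map.
    Returns (Z_clip, fused feature F_g)."""
    z_clip = f_g_tokens[0:1] @ f_t.t()                 # Eq. 2: image-text logit used for L_clip
    f_att = f_t @ f_g_tokens[1:].t()                   # Eq. 3: 1 x hw pixel-text similarity
    c, h, w = f_g_map.shape
    f_att = f_att.reshape(1, h, w).expand(c, h, w)     # repeat to c x hw, i.e., f'_att
    fused = f_g_map * f_att + f_g_map                  # Eq. 4: re-weight plus residual
    return z_clip, fused

c, h, w = 384, 28, 28
tokens, text = torch.randn(1 + h * w, c), torch.randn(1, c)
z_clip, fused = pixel_text_fusion(tokens, text, torch.randn(c, h, w))
```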
By minimizing Ei x and iteratively updating Wx and Hi x using non-negative matrix factorization (Lee and Seung 2000), we derive the shared features F i x = f i x +Conv(Wx ×Hi x) from exocentric images. Subsequently, the input features F i x pass Exocentric Branch HOI Label 𝐶𝐴𝑀 𝐶𝐴𝑀 𝑳𝒅 𝑠𝑒𝑙𝑒𝑐𝑡 𝑠𝑒𝑙𝑒𝑐𝑡 𝑓𝑙𝑎𝑡𝑡𝑒𝑛 𝑡𝑟a𝑛𝑠𝑝𝑜𝑠𝑒 𝑓𝑙𝑎𝑡𝑡𝑒𝑛 𝑡𝑟a𝑛𝑠𝑝𝑜𝑠𝑒 𝑳𝒍_𝒓𝒆𝒍𝒂 Egocentric Branch Figure 5: Detailed Description of the HOI-Transfer Module. This module is designed to transfer affordance knowledge from the Exocentric branch to the Egocentric branch. through a 3 × 3 convolution and a fully connected layer to produce the classification scores cexo. These scores, cexo, are also used to compute the cross-entropy loss Lcls. Weakly Supervised Affordance Grounding Given the challenges associated with annotating precise affordance regions, we rely solely on image-level annotations to calculate the loss functions. Our approach incorporates four specific losses: Lcls, Lclip, Ld, and Ll rela. We have already discussed Lcls and Lclip in previous sections. Here we will delve into the details of Ld and Ll rela, which are integrated within the HOI-Transfer Module. HOI-Transfer Module Within the HOI-Transfer Module (Figure 5), we have formulated two losses, denoted as Ld and Ll rela, to transfer the affordance knowledge acquired from the Exocentric branch to the Egocentric branch. Initially, we average the n exocentric features F i x(i = {1, . . . , n}) to obtain Fx. We have already acquired feature Fg from the Egocentric branch and the Text branch. Inspired by Class Activation Mapping (CAM), we have devised a local knowledge transfer mechanism that enables better differentiation between distinct behaviors. We use CAM to calculate the weighted sum of the feature maps F j g , F j x (j represents the j-th channel) from the last convolutional layer, resulting in the affordance region heatmaps Y Ck g , Y Ck x (Ck represents the k-th class) for each affordance class. Y Ck branch={g,x} = P j wCk j F j branch={g,x}. (5) Here, wCk j represents the weights corresponding to the feature map. Subsequently, we utilize the obtained heatmaps to The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6327 LOCATE WSMA (Ours) GT Crossview-AG CrossviewAG+ ADE20K-Unseen ADE20K-Seen HICO-IIF pour throw type on hit drink with cut with catch cut with swing type on cut with Figure 6: Qualitative Comparison with the State-of-the-art Models (Cross-view-AG(Luo et al. 2022b), Cross-view-AG+ (Luo et al. 2022a), LOCATE (Li et al. 2023)). Method Pub. ADE20K-Unseen ADE20K-Seen HICO-IIF KLD ↓ SIM ↑ NSS ↑ KLD ↓ SIM ↑ NSS ↑ KLD ↓ SIM ↑ NSS ↑ Weakly Supervised Object Localization SPA CVPR21 7.425 0.169 0.262 5.528 0.221 0.357 — — — EIL CVPR20 2.167 0.277 0.330 1.931 0.285 0.522 — — — TS-CAM ICCV21 2.104 0.201 0.151 1.842 0.260 0.336 — — — Affordance Grounding Hotspots ICCV19 1.994 0.237 0.577 1.773 0.278 0.615 — — — Cross-view-AG CVPR22 1.787 0.285 0.829 1.538 0.334 0.927 1.779 0.263 0.946 Cross-view-AG+ — 1.765 0.279 0.882 1.489 0.342 0.981 1.836 0.256 0.883 LOCATE CVPR23 1.405 0.372 1.157 1.226 0.401 1.177 1.593 0.327 0.966 WSMA(Ours) This Work 1.335 0.382 1.220 1.176 0.416 1.247 1.465 0.358 1.012 Table 1: Comparisons with Other State-of-the-art Models (SPA (Pan et al. 2021), EIL (Mai, Yang, and Luo 2020), TS-CAM (Gao et al. 2021b), Hotspots (Nagarajan, Feichtenhofer, and Grauman 2019), Cross-view-AG (Luo et al. 2022b), Cross-viewAG+ (Luo et al. 2022a), LOCATE (Li et al. 2023)). 
↑indicates that a higher value is preferable, while ↓indicates that a lower value is preferable. The experimental results that are bold and underlined represent the state-of-the-art performance. calculate two losses. Firstly, Ld, aims to minimize the distance between the features learned in the Egocentric branch and the Exocentric branch. Accordingly, we select the corresponding heatmap based on the affordance class label C and calculate Ld as follows: Ld = Y C g −Y C x . (6) The second loss, Ll rela, addresses the fact that distinct classes in the affordance domain often exhibit overlapping characteristics. For instance, when a person holds a cup to drink water, both the ”hold” and ”drink with” actions are relevant, illustrating what we term action correlation. Consequently, we incorporate a loss term to transfer this action correlation knowledge from the Exocentric branch to the Egocentric branch. Rego = flatten(Yg) × flatten(Y T g ). (7) Rego is the correlation matrix in the Egocentric branch. Rexo = flatten(Yx) × flatten(Y T x ). (8) Rexo is the correlation matrix in the Exocentric branch. Ll rela = Cosine(Rego, Rexo). (9) During the training phase, the overall loss L is obtained as a weighted sum of the four individual losses. L = λclsLcls + λclipLclip + λdLd + λl relaLl rela. (10) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6328 Bego HOI-Transfer Pixel-Text Fusion ADE20K-Unseen ADE20K-Seen HICO-IIF KLD ↓ SIM ↑ NSS ↑ KLD ↓ SIM ↑ NSS ↑ KLD ↓ SIM ↑ NSS ↑ ! % % 1.707 0.287 0.973 1.430 0.345 1.093 1.860 0.255 0.770 ! ! % 1.471 0.330 1.180 1.297 0.375 1.178 1.785 0.286 0.650 ! % ! 1.531 0.338 1.095 1.277 0.393 1.169 1.690 0.320 0.970 ! ! ! 1.335 0.382 1.220 1.176 0.416 1.247 1.465 0.358 1.012 Table 2: Ablation Experiments of Different Modules ( Bego is the Egocentric branch) λcls, λclip, λd, λl rela represent the weights corresponding to the respective losses. Finally, during the inference phase (Figure 4), we retain only the Egocentric branch and the Text branch. We input a single egocentric image and associate it with the affordance label C. For instance, if you aim to determine the affordance region of an object for the action ”catch”, you can input the image of the object along with the corresponding textual label ”catch”. Through the application of CAM, we generate the corresponding heatmap H. To further enhance the heatmap, we utilize the CAM Refined Module. In this module, the network’s self-attention mechanism extracts the attention matrix Q, which is then used to refine H. H ′ = Mask × Q × H + H, (11) where H ′ represents the refined heatmap, and the Mask is Mask = 1 if H > threshold 0 else . (12) We set a threshold to remove less crucial portions from Q × H, thus focusing more on the important parts. Experimental Results Datasets and Evaluation Metrics We use the Affordance Grounding Dataset (AGD20K) (Luo et al. 2022b), which is a comprehensive dataset containing various viewpoints, specifically, 20,061 exocentric and 3,755 egocentric images. These images represent 36 unique affordance categories. We conduct evaluations under two distinct settings: ”Seen” and ”Unseen”. The ”Seen” setting includes object categories from the training set in the test set, whereas the ”Unseen” setting incorporates object categories that are not present in the training set. In addition to AGD20K, we have assembled a new dataset, HICO-IIF, by selecting specific subsets from the HICO-DET (Chao et al. 2018) and IIT-AFF (Nguyen et al. 2017) datasets. 
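Looking back at the inference-time refinement of Eqs. 11 and 12, it amounts to a thresholded propagation of the CAM through the self-attention matrix. The sketch below assumes a flattened hw x hw attention matrix Q and a CAM flattened to length hw, and it composes the mask with Q x H element-wise, which is one plausible reading; the 0.2 threshold is the value quoted in the implementation details.

```python
import torch

def refine_cam(cam, attn, thresh=0.2):
    """cam: length-hw tensor in [0, 1]; attn: hw x hw self-attention matrix Q.
    Implements H' = Mask * (Q @ H) + H with Mask = (H > thresh); the exact way the
    mask combines with Q x H is an assumption."""
    mask = (cam > thresh).float()
    refined = mask * (attn @ cam) + cam
    return refined / (refined.max() + 1e-8)            # renormalize for visualization (assumption)

hw = 14 * 14
cam = torch.rand(hw)
attn = torch.softmax(torch.randn(hw, hw), dim=-1)
refined = refine_cam(cam, attn)
```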
More details about these datasets are available in the Appendix. To measure the alignment between experimental outcomes and the ground truth, we use three metrics: KullbackLeibler Divergence (KLD), Similarity (SIM), and Normalized Scanpath Saliency (NSS). The Appendix provides a comprehensive overview of each metric. Implementation Details For the backbone of both the egocentric and exocentric branches, we use the pre-trained DINO-ViT-S, keeping its weights frozen during the training process. DINO-ViT-S is pre-trained using unsupervised learning on the ImageNet dataset (Deng et al. 2009). In the case of the exocentric branch, we simultaneously process input from three exocentric images. The text branch, on the other hand, employs the pre-trained text encoder from the CLIP model as its backbone network. We set the hyperparameters λcls, λclip, λd, and λl rela to 1, 1, 0.5, and 0.5 respectively, while the threshold is fixed at 0.2. Further details regarding parameter configurations can be found in the Appendix. Quantitative and Qualitative Comparisons We benchmark our method against three weakly supervised object localization models and four cutting-edge affordance grounding models. The results of these models are tabulated in Table 1. Notably, our proposed method, WSMA, outperforms all other compared methods. Specifically, when juxtaposed with the current leading affordance grounding model, LOCATE, WSMA demonstrates superior performance across all three metrics. These experimental results underscore the potency of our approach in transferring learned knowledge from exocentric images and text to egocentric images, consequently improving heatmap accuracy. Furthermore, to provide a more detailed visual analysis of the differences among various model results, we conducted a qualitative comparison. As illustrated in Figure 6, it can be inferred that under the ”Unseen” setting, WSMA achieves improved precision in localization by effectively mitigating environmental influences. For instance, under the ”throw” label, WSMA accurately pinpoints the region of the basketball, whereas other models exhibit varying degrees of sensitivity to environmental factors introduced by individuals in the scene. Similarly, across the other two datasets, WSMA consistently yields results that closely align with the ground truth, outmatching other models in comparison. Ablation Study To validate the efficacy of the proposed modules, we conducted comprehensive ablation experiments, with results summarized in Table 2. Methods relying solely on the Egocentric branch exhibited the lowest performance. Analyzing the three evaluation metrics revealed that integrating the HOI-Transfer Module or the Pixel-Text Fusion Module led to varying degrees of improvement. Ultimately, the combined integration of both modules achieved the highest performance. This accomplishment can be attributed to the efficient extraction of knowledge from exocentric images and The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6329 brush with drink with 𝑩𝒆𝒈𝒐 stick +𝑴𝑯𝑶𝑰 GT +𝑴𝑯𝑶𝑰 + 𝑴𝑷𝑻 carry catch hold write type on stick wash Figure 7: Ablation Experiments’ Visualization. MHOI is the HOI-Transfer Module. MP T is the Pixel-Text Fusion Module. Ld Lg rela Ll rela ADE20K-Unseen ADE20K-Seen HICO-IIF KLD ↓ SIM ↑ NSS ↑ KLD ↓ SIM ↑ NSS ↑ KLD ↓ SIM ↑ NSS ↑ % % % 1.531 0.338 1.095 1.277 0.393 1.169 1.690 0.320 0.970 ! % % 1.512 0.340 1.103 1.248 0.398 1.216 1.594 0.328 1.022 ! ! % 1.499 0.339 1.144 1.255 0.399 1.205 1.624 0.329 0.997 ! % ! 
1.335 0.382 1.220 1.176 0.416 1.247 1.465 0.358 1.012 Table 3: Ablation Experiments of Two Losses in the HOI-Transfer Module. textual data through these two modules, followed by the seamless transfer of this knowledge to egocentric images. As a consequence, exceptional results have been attained. Visual comparisons are presented in Figure 7. When compared to using only the Egocentric branch, the inclusion of the HOI-Transfer Module enhances the accuracy of identifying approximate affordance region locations. For example, by incorporating the HOI-Transfer Module, more precise attention can be directed towards the head of the toothbrush. Moreover, Figure 7 demonstrates the beneficial effect of the Pixel-Text Fusion Module in localizing affordance regions, effectively eliminating interference from other parts and achieving precise localization. Furthermore, we conducted ablation experiments on two loss functions within the HOI-Transfer Module (see Table 3). We examined the impact of including or excluding Ld in the experiments. Additionally, regarding the HOI correlation loss, we compared the efficacy of two distinct loss functions, namely Lg rela and Ll rela, with the latter already introduced in the methodology section. Lg rela was introduced in a prior work (Luo et al. 2022b), where it calculates action relevance using classification scores. Analyzing Table 3, we observe that the experimental results are superior when incorporating Ld. When Lg rela is added in addition to Ld, the three evaluation metrics do not simultaneously achieve superior results. However, if we replace Lg rela with Ll rela, all three evaluation metrics significantly outperform the results without its inclusion. Notably, Ll rela (Figure 5) first identifies the heatmaps for each category before performing correlation calculations. This suggests that the superior performance of Ll rela is attributed to the heatmaps containing more informative and valuable information compared to simple classification scores. Conclusion This work introduces a novel weakly supervised multimodal framework, WSMA, for localizing affordance regions. The main idea is to learn affordance knowledge from both exocentric images and affordance text. The framework utilizes the HOI-Transfer Module to extract affordance knowledge from exocentric images, while the Pixel-Text Fusion Module integrates knowledge from text into egocentric images. During testing, the framework takes only the egocentric image and its corresponding affordance text to determine object affordance regions. WSMA demonstrates superior performance compared to state-of-the-art methods. However, our work still has limitations due to the lack of complex interaction images in existing public datasets. For instance, there may be situations where an image contains multiple objects of different categories but with the same affordance label. To address these challenges, we plan to improve datasets and tackle the challenges arising from such complex interactions. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6330 Acknowledgments This paper is supported by the National Key R&D Program of China (2023YFC3604500), National Natural Science Foundation of China (62002010, 62102036), Beijing Natural Science Foundation (L232102, 4222024), the Beijing Science and Technology Plan Project (No. 
Z221100007722001, Z231100005923039), RD Program of Beijing Municipal Education Commission (KM202211232003), Open Project Program of State Key Laboratory of Virtual Reality Technology and Systems, Beihang University (No.VRLAB2022A02, No.VRLAB2022C06). References Caron, M.; Touvron, H.; Misra, I.; J´egou, H.; Mairal, J.; Bojanowski, P.; and Joulin, A. 2021. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF international conference on computer vision, 9650–9660. Chao, Y.-W.; Liu, Y.; Liu, X.; Zeng, H.; and Deng, J. 2018. Learning to detect human-object interactions. In 2018 ieee winter conference on applications of computer vision (wacv), 381–389. IEEE. Chen, D.; Mei, J.-P.; Wang, C.; Feng, Y.; and Chen, C. 2020. Online knowledge distillation with diverse peers. In Proceedings of the AAAI conference on artificial intelligence, volume 34, 3430–3437. Chuang, C.-Y.; Li, J.; Torralba, A.; and Fidler, S. 2018. Learning to act properly: Predicting and explaining affordances from images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 975–983. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and FeiFei, L. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, 248–255. Ieee. Do, T.-T.; Nguyen, A.; and Reid, I. 2018. Affordancenet: An end-to-end deep learning approach for object affordance detection. In 2018 IEEE international conference on robotics and automation (ICRA), 5882–5889. IEEE. Fang, K.; Wu, T.-L.; Yang, D.; Savarese, S.; and Lim, J. J. 2018. Demo2vec: Reasoning object affordances from online videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2139–2147. Gao, P.; Geng, S.; Zhang, R.; Ma, T.; Fang, R.; Zhang, Y.; Li, H.; and Qiao, Y. 2021a. Clip-adapter: Better visionlanguage models with feature adapters. arXiv preprint arXiv:2110.04544. Gao, W.; Wan, F.; Pan, X.; Peng, Z.; Tian, Q.; Han, Z.; Zhou, B.; and Ye, Q. 2021b. Ts-cam: Token semantic coupled attention map for weakly supervised object localization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2886–2895. Gibson, J. J. 2014. The ecological approach to visual perception: classic edition. Psychology press. Grabner, H.; Gall, J.; and Van Gool, L. 2011. What makes a chair a chair? In CVPR 2011, 1529–1536. IEEE. Guo, Z.; Zhang, R.; Qiu, L.; Ma, X.; Miao, X.; He, X.; and Cui, B. 2023. Calip: Zero-shot enhancement of clip with parameter-free attention. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 746–754. Hermans, T.; Rehg, J. M.; and Bobick, A. 2011. Affordance prediction via learned object attributes. In IEEE international conference on robotics and automation (ICRA): Workshop on semantic perception, mapping, and exploration, 181–184. Citeseer. Koppula, H. S.; Gupta, R.; and Saxena, A. 2013. Learning human activities and object affordances from rgb-d videos. The International journal of robotics research, 32(8): 951– 970. Lee, D.; and Seung, H. S. 2000. Algorithms for non-negative matrix factorization. Advances in neural information processing systems, 13. Li, G.; Jampani, V.; Sun, D.; and Sevilla-Lara, L. 2023. LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10922–10931. Li, Y.; Nagarajan, T.; Xiong, B.; and Grauman, K. 2021. 
Ego-exo: Transferring visual representations from thirdperson to first-person videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6943–6953. Luo, H.; Zhai, W.; Zhang, J.; Cao, Y.; and Tao, D. 2022a. Grounded affordance from exocentric view. arXiv preprint arXiv:2208.13196. Luo, H.; Zhai, W.; Zhang, J.; Cao, Y.; and Tao, D. 2022b. Learning affordance grounding from exocentric images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2252–2261. Mai, J.; Yang, M.; and Luo, W. 2020. Erasing integrated learning: A simple yet effective approach for weakly supervised object localization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8766–8775. Mirzadeh, S. I.; Farajtabar, M.; Li, A.; Levine, N.; Matsukawa, A.; and Ghasemzadeh, H. 2020. Improved knowledge distillation via teacher assistant. In Proceedings of the AAAI conference on artificial intelligence, volume 34, 5191– 5198. Myers, A.; Teo, C. L.; Ferm¨uller, C.; and Aloimonos, Y. 2015. Affordance detection of tool parts from geometric features. In 2015 IEEE International Conference on Robotics and Automation (ICRA), 1374–1381. IEEE. Nagarajan, T.; Feichtenhofer, C.; and Grauman, K. 2019. Grounded human-object interaction hotspots from video. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8688–8697. Nguyen, A.; Kanoulas, D.; Caldwell, D. G.; and Tsagarakis, N. G. 2017. Object-based affordances detection with convolutional neural networks and dense conditional random fields. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 5908–5915. IEEE. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6331 Pan, X.; Gao, Y.; Lin, Z.; Tang, F.; Dong, W.; Yuan, H.; Huang, F.; and Xu, C. 2021. Unveiling the potential of structure preserving for weakly supervised object localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11642–11651. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, 8748–8763. PMLR. Rao, Y.; Zhao, W.; Chen, G.; Tang, Y.; Zhu, Z.; Huang, G.; Zhou, J.; and Lu, J. 2022. Denseclip: Language-guided dense prediction with context-aware prompting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18082–18091. Sawatzky, J.; and Gall, J. 2017. Adaptive binarization for weakly supervised affordance segmentation. In Proceedings of the IEEE international conference on computer vision workshops, 1383–1391. Sigurdsson, G. A.; Gupta, A.; Schmid, C.; Farhadi, A.; and Alahari, K. 2018. Charades-ego: A large-scale dataset of paired third and first person videos. arXiv preprint arXiv:1804.09626. Wang, J.; Chan, K. C.; and Loy, C. C. 2023. Exploring clip for assessing the look and feel of images. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 2555–2563. Xu, H.; He, K.; Plummer, B. A.; Sigal, L.; Sclaroff, S.; and Saenko, K. 2019. Multilevel language and vision integration for text-to-clip retrieval. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 9062–9069. Zhang, R.; Fang, R.; Zhang, W.; Gao, P.; Li, K.; Dai, J.; Qiao, Y.; and Li, H. 2021. Tip-adapter: Training-free clipadapter for better vision-language modeling. 
arXiv preprint arXiv:2111.03930. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; and Torralba, A. 2016. Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2921–2929. Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9): 2337–2348. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6332 | 2024 | 703 |
18,522 | Gaze from Origin: Learning for Generalized Gaze Estimation by Embedding the Gaze Frontalization Process Mingjie Xu1, Feng Lu1, 2* 1State Key Laboratory of VR Technology and Systems, School of CSE, Beihang University, Beijing, China 2Peng Cheng Laboratory, Shenzhen, China {xumingjie, lufeng}@buaa.edu.cn Abstract Gaze estimation aims to accurately estimate the direction or position at which a person is looking. With the development of deep learning techniques, a number of gaze estimation methods have been proposed and achieved state-of-the-art performance. However, these methods are limited to withindataset settings, whose performance drops when tested on unseen datasets. We argue that this is caused by infinite and continuous gaze labels. To alleviate this problem, we propose using gaze frontalization as an auxiliary task to constrain gaze estimation. Based on this, we propose a novel gaze domain generalization framework named Gaze Frontalization-based Auxiliary Learning (GFAL) Framework which embeds the gaze frontalization process, i.e., guiding the feature so that the eyeball can rotate and look at the front (camera), without any target domain information during training. Experimental results show that our proposed framework is able to achieve state-of-the-art performance on gaze domain generalization task, which is competitive with or even superior to the SOTA gaze unsupervised domain adaptation methods. Introduction Gaze information is important for real applications. It indicates the direction or position at which a person is looking, and is widely used in many scenarios, such as augmented reality (Wang, Zhao, and Lu 2022) and autonomous driving (Mole et al. 2021). To obtain this information, a number of gaze estimation methods have been proposed. In the early years, model-based gaze estimation methods were popular in this field. Although they were able to accurately estimate gaze, they required dedicated devices and calibration (Sun, Liu, and Sun 2015). To tackle this problem, appearancebased gaze estimation approaches have been widely proposed in recent years (Lu et al. 2011, 2014), especially deeplearning-based gaze estimation methods (Zhang et al. 2017a; Chen and Shi 2018). Such methods only use RGB images as input and are capable of achieving competitive performance. Although all of these gaze estimation methods are capable of achieving state-of-the-art performance when trained and tested on the same dataset (i.e.the source domain), they often suffer performance degradation on unseen test datasets *Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Input Gaze (𝜃, 𝜓) Rotate to Gaze (0,0) pitch 𝜃 yaw 𝜓 Original Gaze Rotated Gaze Estimation Error Ours ResNet-18 Baseline Generalizability Improved Target Domain (Testing) Looking at the Front Source Domain (Training) Gaze Direction Gaze Frontalization Process as a Constraint (Proposed) Feature Auxiliary Learning 7.48° 5.72° Figure 1: Overall idea of our proposed gaze domain generalization framework GFAL, which is able to improve crossdomain gaze estimation performance. (i.e.the target domain). This problem is defined as crossdomain gaze estimation, which has not been fully addressed and remains difficult (Cheng, Bao, and Lu 2022; Wang et al. 2022; Bao et al. 2022). The reason behind the difficulty is that gaze labels are infinite and continuous. Typical classification or recognition tasks output finite and discrete labels. 
However, gaze estimation is a regression task and regresses continuous Euler angles of gaze directions, which form an infinite set of labels. Consequently, accurately estimating these gaze labels is challenging, particularly in cross-domain scenarios. Moreover, infinite and continuous gaze labels is one of the major causes for overfitting in the finite training dataset. We propose using an auxiliary task for auxiliary learning to constrain gaze estimation, which can help to alleviate these problems. We argue that gaze frontalization is a proper choice for such an auxiliary task because: 1) Gaze frontalization and gaze estimation are homogeneous. 2) Gaze frontalization is also easier than gaze estimation, which can help constrain gaze estimation by performing multi-task learning or serving as a regularization term. Based on these analyses, we propose a novel gaze domain generalization framework named Gaze Frontalization-based The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6333 Auxiliary Learning (GFAL) Framework, which can improve cross-domain gaze estimation performance without any target domain data or label for training. First, we extract the features from a single face image. Second, we embed the gaze frontalization process to guide the features so that the eyeball can rotate and look at the front (camera), as shown in Fig. 1. Our main contributions are as follows: • We systematically analyzed the cross-domain regression problem of gaze estimation and proposed using a proper auxiliary task, i.e., the gaze frontalization task, to tackle this problem. • Based on this, we propose a novel gaze domain generalization framework named GFAL, by embedding the gaze frontalization process, which aims to guide the features so that the eyeball can rotate and look at the front (camera). This framework is flexible, as it can be used with image warping or multiple gaze redirection methods to improve cross-domain gaze estimation performance. • The experimental results show that GFAL framework can achieve significant performance improvements of 23.53%, 11.32%, 6.75% and 9.11% over the baseline on ETH-to-MPII, ETH-to-Diap, Gaze360-to-MPII and Gaze360-to-Diap tasks, respectively, and surpass all SOTA gaze domain generalization methods. Related Works Gaze Estimation. Recent appearance-based gaze estimation methods usually use deep learning techniques, such as Convolution Neural Network (CNN) (Zhang et al. 2017b,a; Cheng, Lu, and Zhang 2018; Cheng et al. 2020b; Fischer, Chang, and Demiris 2018; Park, Spurr, and Hilliges 2018; Chen and Shi 2018; Yu, Liu, and Odobez 2019a; Cheng et al. 2020a) or Vision Transformer (Dosovitskiy et al. 2021; Cheng and Lu 2022; O Oh, Chang, and Choi 2022). Thanks to the effective architecture and various datasets (Zhang et al. 2020; Kellnhofer et al. 2019; Zhang et al. 2017b; Funes Mora, Monay, and Odobez 2014), this kind of gaze estimation methods can achieve excellent performance. For example, Park et al.proposed a method with the help of facial landmark extraction (Park et al. 2018). Bao et al.proposed a lightweight gaze estimation network designed for mobile devices, combining information on the face, two eyes, and eye positions (Bao et al. 2021). Although these gaze estimation approaches can all achieve SOTA performance, they often suffer from poor gaze estimation accuracy when tested on unseen target domain. Addressing such cross-domain gaze estimation problem is necessary for further application. 
To tackle the abovementioned cross-domain gaze estimation problem, various approaches have been proposed, including Supervised Domain Adaptation (SDA) methods, Unsupervised Domain Adaptation (UDA) methods, and Domain Generalization (DG) methods. Gaze SDA (Krafka et al. 2016; He et al. 2019; Yu, Liu, and Odobez 2019b) and UDA (Kellnhofer et al. 2019; Lee et al. 2022) methods usually fine-tune the source domain pretrained model on a few labeled or unlabeled target domain samples to improve cross-domain gaze estimation performance, respectively. For example, gaze SDA methods are often based on meta-learning (Park et al. 2019), gaze difference (Liu et al. 2018, 2019) and gaze decomposition (Chen and Shi 2020, 2022), while gaze UDA methods are often based on adversarial learning (Wang et al. 2019; Lahiri, Agarwalla, and Biswas 2018), teacher-student networks (He et al. 2019; Liu et al. 2021), representation learning (Guo et al. 2021; Wang et al. 2022), rotation consistency (Bao et al. 2022) and jitter (Liu et al. 2022). Although these approaches can achieve excellent results in cross-domain gaze estimation, samples or labels on target domain are difficult to collect in real world, which limits their applications. To tackle the data collection problem in real world, DG methods are proposed for cross-domain gaze estimation, which do not require any target domain information when training. These methods typically use adversarial learning (Cheng, Bao, and Lu 2022) or adversarial attack (Xu, Wang, and Lu 2023) to eliminate or disturb gaze-irrelevant factors or features. Moreover, some gaze SDA, UDA or redirection methods also contain gaze DG modules, such as (Park et al. 2019; Bao et al. 2022; Wang et al. 2022; Lee et al. 2022). However, their cross-domain performances still have room for improvement. Many of these gaze SDA, UDA or DG methods use auxiliary tasks. Different from our proposed GFAL, the auxiliary tasks used by these methods cannot solve the infinite and continuous gaze problem. Gaze Redirection. Gaze redirection methods aim to generate face or eye images looking at the given target direction. Recently, many gaze redirection methods have been proposed, such as DeepWarp (Ganin et al. 2016), ST-ED (Zheng et al. 2020), CUDA-GHR (Jindal and Wang 2023) and (Yu, Liu, and Odobez 2019b). These methods are usually used to extend the training dataset for gaze estimation (Zheng et al. 2020; Jindal and Wang 2023). However, gaze estimation models still suffer from the problems stated above, which cannot improve cross-domain gaze estimation performance. Moreover, some methods try to use gaze redirection to perform gaze representation learning for better calibration, such as (Park et al. 2019; Yu, Liu, and Odobez 2019b; Yu and Odobez 2020), but they all rely on labeled calibration samples, which are hard to obtain in the real world. Motivation and Key Idea Formulation Gaze Estimation. Formally, we first define the gaze estimation task, as discussed in (Zhang et al. 2020). Given the input face image x, the gaze direction g can be estimated via: z = F(x), g = G(z), (1) where F is a feature extractor, G is a fully connected (FC) layer and z is the extracted feature. Note that the gaze direction g is expressed by the Euler angle (θ, ψ) defined by pitch θ and yaw ψ. Cross-Domain Gaze Estimation. Typical gaze estimation models are often trained and tested on the source domain Ds and achieve excellent performance. 
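As a concrete illustration of Eq. (1), such an estimator reduces to a backbone F followed by a fully connected head G that regresses the two Euler angles. The sketch below uses a ResNet-18 backbone, matching the baseline backbone reported later; the head layout and the 512-dimensional feature are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class GazeEstimator(nn.Module):
    """Eq. (1): z = F(x), g = G(z), with g = (pitch, yaw)."""
    def __init__(self, feat_dim=512):
        super().__init__()
        backbone = resnet18(weights=None)
        # F: feature extractor (classification head removed).
        self.F = nn.Sequential(*list(backbone.children())[:-1])
        # G: fully connected layer regressing the Euler angles.
        self.G = nn.Linear(feat_dim, 2)

    def forward(self, x):
        z = self.F(x).flatten(1)   # (B, 512) image feature
        g = self.G(z)              # (B, 2) -> (pitch, yaw)
        return g, z

model = GazeEstimator()
g_hat, z = model(torch.randn(4, 3, 224, 224))
```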
However, if these models are tested on an unseen target domain Dt, their performance usually degrades. This problem is called crossdomain gaze estimation. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6334 Unique Target (0,0) Arbitrary Gaze Label 𝐠 Link from 𝐚to 𝐠 (b) Gaze frontalization is easier than gaze estimation (a) Gaze frontalization and gaze estimation are homogeneous Gaze Estimation Gaze Frontalization (Additional Constraint) Auxiliary Learning Intrinsically Related Eyeball Rotation Figure 2: Illustration of our motivation and solution. Problem Analysis The cross-domain gaze estimation problem is difficult to tackle, because gaze labels are infinite and continuous, which is one of the major causes for overfitting in the finite training dataset. Gaze estimation is a kind of regression task. As described above, the gaze estimation model outputs pitch θ and yaw ψ. Arbitrary continuous values can be used to express θ and ψ. Therefore, there are infinite (θ, ψ)-s, forming infinite and continuous gaze labels. This characteristic poses challenges in accurately learning the mapping from images to gaze, especially in cross-domain scenarios. This is more difficult than classification or recognition tasks because the labels of the latter tasks are finite and discrete. Why We Use Gaze Frontalization as an Auxiliary Task? Based on the above analyses, we propose the use of an auxiliary task to constrain gaze estimation to tackle the crossdomain gaze estimation problem. This auxiliary task should help to solve the abovementioned problem. We use gaze frontalization as an auxiliary task, which involves rotating the eyeballs to make them look at the front (camera), i.e., the (0, 0) direction. We argue that the gaze frontalization task is a proper choice for such an auxiliary task for the following reasons: 1) Gaze frontalization and gaze estimation are homogeneous, involving the utilization of face images as input and the association with eyeball rotation. 2) Gaze frontalization is also easier than gaze estimation, as the target of gaze frontalization corresponds to a unique direction (0,0), while gaze estimation takes arbitrary gazes as target. Thus, Gaze frontalization (easier) can help constrain gaze estimation (harder) by performing multi-task learning or serving as a regularization term. The frontal gaze (0,0) can serve as anchors to constrain gaze directions and eyeball rotation. Following the above, our motivation is to find out how to use gaze frontalization to constrain gaze estimation. To achieve this, we embed the gaze frontalization process to guide the extracted gaze features so that the eyeball can rotate to (0,0), further improving cross-domain gaze estimation performance. Note that it is not helpful to using gazes other than (0, 0) to constrain gaze estimation because arbitrary gazes pose greater difficulty. GFAL Framework Based on the above analyses, we propose a novel gaze domain generalization framework named Gaze Frontalizationbased Auxiliary Learning (GFAL) Framework, based on an embedded gaze frontalization process, i.e., forcing the gaze feature z to rotate the eyeballs so that the eyeballs can look at (0, 0), for auxiliary learning. GFAL framework includes 3 parts: 1) Gaze Estimation Network, 2) Gaze Frontalization Module and 3) Consistency Loss, as shown in Fig. 3(a). Gaze Estimation Network First, we propose a Gaze Estimation Network for gaze and head orientation estimation, as shown in Fig. 3(b). Gaze Estimation. 
As described above, gaze estimation uses a feature extractor F and a FC layer G, takes a face image x as input and outputs the gaze direction ˆg = G(F(x)), which is defined by pitch θ and yaw ψ. We use the commonly used L1 distance between the estimated gaze direction ˆg and the ground truth gaze direction g for training: LG = ∥ˆg −g∥1. (2) Head Orientation Estimation. If the training dataset provides head orientation labels, we also estimate head orientations ˆh = H(F(x)) using the feature extractor F and a FC layer H whose architecture is the same as that of G, since this information is widely used in and helpful for gaze estimation methods (Park et al. 2019; Wang et al. 2019): LH = ∥ˆh −h∥1, (3) where ˆh is the estimated head orientation and h is the ground truth head orientation label. Gaze Frontalization Module Goal and Procedure. The goal of this module is to embed the gaze frontalization process, i.e., forcing the feature z to rotate the eyeballs in x so that they look at (0, 0) while keeping the head orientation unchanged, for auxiliary learning. To achieve this goal, we feed z and x into this module to obtain the output face image xfro, in which z is able to make the eyeball in x rotate to (0, 0), i.e., looking at the front (camera), from the direction (θ, ψ). Then, we optimize the loss function Lfro of this module. This procedure can be implemented using various strategies, such as image warping, gaze redirection, and 3D reconstruction of face images. Implementation. Here, we design a strategy based on image warping, as shown in Fig. 3(c). We first feed the extracted feature z into a warping field generator W to obtain the warping field M: M = W(z). (4) For every position (xrot, yrot) in the output image xfro, there is a corresponding position (x, y) in the input image x, which can be obtained from the warping field M. W is implemented using a ResNet-like decoder (Cheng, Bao, and Lu 2022). Note that the size of M is the same as that of x. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6335 Gaze Estimation Network Gaze Direction 𝐠" Head Orientation 𝐡$ Input 𝐱 Gaze Frontalization Module (Proposed) Feature 𝐳 … Plug and Play 1 2 Input Output Output Frontalized 𝐱!"# 1 Consistency Loss (Proposed) 3 Frontalized 𝐱!"# 𝐠"!"# 𝐡$ !"# Gaze Label (0,0) Head Label 𝐡of 𝐱 Output Output Output Gaze FC Head FC 𝐠" 𝐡$ Feature Extractor Extracted Feature 𝐳 Input 𝐱 F H G Gaze Estimation Network Gaze Frontalization Module Image Warping ℒ$#% 𝒢 (Eq.7-8) ℒ$#% ℋ (Eq. 9-10) Consistency Loss 3 Frontalized 𝐱!"# Output Output 1 Gaze 𝐠"!"# Head 𝐡$ !"# Gaze Label (0,0) Head Label 𝐡 of 𝐱 Keep Consistent Keep Consistent Input Output Output (a) Overview (b) (c) (d) 1 2 3 End-to-end Training 3D Reconstruction (Optional) Gaze Redirection (Optional) Image Warping (Proposed) 1 2 . . → ↙ . ↗ ← ↓ . Warping Field 𝐌 Warping Field Generator Bilinear Sampler S W Input 𝐱 Extracted Feature 𝐳 Frontalized 𝐱!"# Output Figure 3: Our proposed gaze domain generalization framework. Then we apply the output warping field M to the input face image x to obtain the output warped face image xfro: xfro = S(x, M), (5) where S is a bilinear sampler (Jaderberg et al. 2015) used to process bilinear interpolation as the output warping field M needs to be integer-valued but is real-valued. 
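The warping in Eqs. (4)-(5) can be realized with a differentiable bilinear sampler. The sketch below assumes W outputs a dense sampling grid normalized to [-1, 1], the convention expected by torch.nn.functional.grid_sample; the tiny two-layer generator only stands in for the ResNet-like decoder and is not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WarpingFieldGenerator(nn.Module):
    """W in Eq. (4): decodes the feature z into a dense sampling grid M."""
    def __init__(self, feat_dim=512, out_size=224):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 2 * 14 * 14)
        self.up = nn.Upsample(size=(out_size, out_size), mode='bilinear',
                              align_corners=True)

    def forward(self, z):
        m = self.fc(z).view(-1, 2, 14, 14)
        m = torch.tanh(self.up(m))        # normalized coordinates in [-1, 1]
        return m.permute(0, 2, 3, 1)      # (B, H, W, 2) grid for grid_sample

def frontalize(x, z, warp_gen):
    """Eq. (5): x_fro = S(x, M), where S is the bilinear sampler."""
    M = warp_gen(z)
    # grid_sample performs the bilinear interpolation needed because the
    # sampling positions are real-valued rather than integer-valued.
    return F.grid_sample(x, M, mode='bilinear', align_corners=True)

warp_gen = WarpingFieldGenerator()
x_fro = frontalize(torch.randn(4, 3, 224, 224), torch.randn(4, 512), warp_gen)
```

Because grid_sample is differentiable with respect to both the image and the predicted grid, the generator can be trained end-to-end with the losses that follow.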
To ensure that the eyes in xfro indeed look at (0, 0) with the head orientations unchanged, we introduce a reference face image xref, which has eyeballs looking at (0, 0) and the same head orientation and identity as x, to constrain the generation of xfro. We maximize Multi-Scale Structural Similarity (MS-SSIM) (Wang, Simoncelli, and Bovik 2003) between xref and xfro: Lfro = 1 −MS-SSIM(xref, xfro). (6) Note that there are various choices for Lfro, such as the L1, L2 distance and Structural Similarity (SSIM) (Wang et al. 2004). The comparison of different choices of Lfro is shown in Tab. 4, in which MS-SSIM achieves the best performance. The selection of xref is described in Algorithm 1. Given an input face image x, we first select the face images Dide that share the same identity as x to form the candidate set of reference images xref of x. Then, we iterate over all the images xk in Dide to find the xK with the gaze direction closest to (0, 0) and the head orientation most similar to the head orientation of x. Finally, xK is the expected xref. Note that the Gaze360 dataset (Kellnhofer et al. 2019) does not provide head orientation labels and reliable identity labels. To address this problem, we use SSIM (Wang et al. 2004) to measure the head orientation similarity instead and use CosFace (Wang et al. 2018) and k-means (MacQueen 1967) to generate identity labels. Specifically, we use 1 −SSIM(xk, x) to replace Angular(hk, h) in Line 8 of Algo. 1 and use Angular(gk, (0, 0))/180 to replace Angular(gk, (0, 0)) in Line 7 of Algo. 1 to make α and β have the same scale. Consistency Loss We propose the Consistency Loss Lcon to ensure that the eyeballs in xfro are indeed looking at (0, 0) and that the head orientation is indeed the same as that of x, further strengthening the effectiveness of the auxiliary learning of gaze frontalization. Lcon includes Gaze Term LG con and Head Orientation Term LH con, as shown in Fig. 3(d). Gaze Term. We feed the output face image xfro back into the Gaze Estimation Module to ensure that the output gaze direction ˆgfro is (0, 0): LG con = ∥ˆgfro −(0, 0)∥1. (7) In our implementation, we replace (0, 0) in Eq. 7 with the gaze label gref of xref to make the training more stable, and the loss function is changed to: LG∗ con = ∥ˆgfro −gref∥1. (8) We further conduct experiments to compare LG∗ con (Eq. 8) with LG con (Eq. 7) , which show that LG∗ con is better. Head Orientation Term. We feed xfro back into the Gaze Estimation Module to ensure that the output head orientation ˆhfro is the same as the head orientation of x: LH con = ∥ˆhfro −h∥1. (9) Similarly to Gaze Term, in our implementation, we replace h in Eq. 9 with the head orientation label href of xref to make training more stable, and the loss function is changed to: LH∗ con = ∥ˆhfro −href∥1. (10) We conduct experiments to compare LH∗ con (Eq. 10) with LH con (Eq. 9), which show that LH∗ con is better. Total Loss Function The total loss function of this framework is: L = LG + LH + λfroLfro + λG∗ conLG∗ con + λH∗ conLH∗ con, (11) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6336 Algorithm 1: Reference Image Selection Require: Input face image x, face images Dide that shares the same identity as x. Ensure: The reference image xref of x. 1: smin ←+∞ ▷The minimum score. 2: K ←−1 ▷The id of the reference image in Dide. 3: for every candidate face image xk ∈Dide do 4: if xk equals x then 5: continue ▷xk should be different from x. 6: end if 7: α ←Angular(gk, (0, 0)) ▷gk is the gaze label of xk. 
8: β ←Angular(hk, h) ▷hk and h are the head orientation labels of xk and x, respectively. 9: s ←α + β ▷The score of xk. 10: if s < smin then ▷Found smaller smin. 11: smin ←s 12: K ←k 13: end if 14: end for 15: xref ←xK where λfro, λG∗ con and λH∗ con are the weights used to balance different loss terms. Note that LH and LH∗ con are optimized only when head orientation labels are provided in the training dataset. In our implementation based on image warping, we empirically set λfro = 0.1 and λG∗ con = λH∗ con = 0.01. Experiments Setup Datasets. Following other gaze domain generalization (DG) methods (Cheng, Bao, and Lu 2022; Bao et al. 2022), we use ETH-XGaze (DE) (Zhang et al. 2020) and Gaze360 (DG) (Kellnhofer et al. 2019) for training, and MPIIGaze (DM) (Zhang et al. 2017b) and EyeDiap (DD) (Funes Mora, Monay, and Odobez 2014) for evaluation, as the former two datasets have wider gaze distributions than the latter two (Liu et al. 2021). Therefore, 4 DG tasks are formed, including DE →DM, DE →DD, DG →DM and DG →DD. Specifically, DE contains images of 80 subjects, and we use 75 of them for training and the remaining 5 for validation; hence, there is a total of 713,646 images for training, as in (Cheng, Bao, and Lu 2022). For DG, the training set contains 84,902 samples, where the subjects are looking at the front. For these two datasets, we use the provided pre-processed data. DM contains 45,000 images for evaluation, which are normalized using (Sugano, Matsushita, and Sato 2014). For DD, we use the VGA videos in the screen target session and sample images every 15 frames, to obtain 16,674 images for evaluation, as in (Cheng, Bao, and Lu 2022). Comparison Methods. We compare our proposed method with (a) Typical Gaze Estimation (TGE), (b) Gaze Domain Generalization (DG) and (c) Gaze Unsupervised Domain Adaptaion (UDA) methods. For TGE, we compare our proposed method with Full-Face (Zhang et al. 2017a), RT-Gene (Fischer, Chang, and Demiris 2018), DilatedNet (Chen and Shi 2018), CA-Net (Cheng et al. 2020a), GazeTR (Cheng and Lu 2022) and (O Oh, Chang, and Choi 2022). They do not require target domain information when training, so it is necessary to compare them. For DG, PureGaze (Cheng, Bao, and Lu 2022) and GazeCon (Xu, Wang, and Lu 2023) have been proposed. Moreover, some gaze SDA, UDA, redirection, and unconstrained gaze estimation methods contain gaze DG modules, including FAZE (Park et al. 2019), RAT (Bao et al. 2022), CDG (Wang et al. 2022) and LatentGaze (Lee et al. 2022). We also compare our proposed gaze DG method with them as they all claimed gaze domain generalization ability. And we compare our proposed method with Baseline (Zhang et al. 2020) (based on ResNet (He et al. 2016)). Furthermore, although our proposed gaze DG method is not designed for gaze UDA, we also compare our proposed method with other gaze UDA methods for reference, including ADDA (Tzeng et al. 2017), GazeAdv (Wang et al. 2019), Gaze360 (Kellnhofer et al. 2019), DAGEN (Guo et al. 2021), PnP-GA (Liu et al. 2021), RUDA (Bao et al. 2022), CRGA (Wang et al. 2022) and LatentGaze (Lee et al. 2022). Note that gaze UDA methods need some unlabeled target domain samples for adaptation, while gaze DG methods do not need these target domain samples for training. Gaze Redirection Methods. Although GFAL framework is designed for the gaze DG task and not the gaze redirection task, it has some similarities to gaze redirection. 
Therefore, we conduct experiments based on representative gaze redirection methods, including DeepWarp (Ganin et al. 2016), FAZE (Park et al. 2019), ST-ED (Zheng et al. 2020) and CUDA-GHR (Jindal and Wang 2023). Implementation Details. We use NVIDIA GPUs for the experiments. We use ResNet (He et al. 2016) as the backbone. Models are trained for 25 epochs for DE and 100 epochs for DG, using a batch size of 50. The Adam optimizer is used with lr = 10−4, β1 = 0.9 and β2 = 0.95. All input images are of the size 224 × 224 and are normalized to [0, 1]. Furthermore, since CDG (Wang et al. 2022) is designed based on data augmentation, including a random color field and grayscale, we apply these data augmentation strategies to GFAL framework and compare GFAL framework with CDG. The results in the 5th row show that our framework can also surpass CDG (Wang et al. 2022). Comparison with SOTA Methods To show the superior cross-domain gaze estimation performance of GFAL framework, we compare our proposed method with SOTA methods in the DG and UDA tasks. Domain Generalization. The results are shown in the 3nd6th rows in Table 1. For a fair comparison, we categorize the gaze DG methods according to different backbones and whether data augmentation is performed. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6337 Type Methods |Dt| DE →DM DE →DD DG →DM DG →DD TGE Full-Face 0 12.35 30.15 11.13 14.42 RT-Gene 0 21.81 38.60 Dilated-Net 0 18.45 23.88 CA-Net 0 27.13 31.41 Oh et al. 0 8.31 7.10 GazeTR 0 8.70 10.98 7.96 8.88 Ours (Res-18)† 0 5.72 6.97 7.18 7.38 Ours (Res-50)‡ 0 6.09 6.48 7.23 6.76 DG† Baseline† 0 7.48 7.86 7.70 8.12 FAZE† 0 9.49 21.03 PureGaze† 0 9.14 8.37 9.28 9.32 RAT† 0 7.92 8.65 7.60 8.16 LatentGaze† 0 7.98 9.81 GazeCon† 0 6.89 7.78 7.82 8.52 Ours (Res-18)† 0 5.72 6.97 7.18 7.38 DG‡ Baseline‡ 0 7.11 7.38 7.38 7.17 FAZE‡ 0 7.80 15.85 PureGaze‡ 0 7.08 7.48 7.62 7.70 RAT‡ 0 7.40 8.38 7.69 8.32 GazeCon‡ 0 8.35 8.80 8.24 8.83 Ours (Res-50)‡ 0 6.09 6.48 7.23 6.76 DG†∗ GazeCon†∗ 0 6.50 7.44 7.55 9.03 Ours (Res-18)†∗ 0 6.47 7.04 6.98 9.02 DG‡∗ CDG‡∗ 0 6.73 7.95 7.03 7.27 GazeCon‡∗ 0 7.20 8.76 8.22 11.15 Ours (Res-50)‡∗ 0 5.68 6.23 6.29 6.69 UDA† (for reference) ADDA† 500 6.65 8.24 6.27 9.53 GazeAdv† 100 6.36 7.62 7.54 8.43 Gaze360† 100 6.24 7.47 7.17 7.66 DAGEN† 500 5.73 6.77 7.38 8.00 PnP-GA† 10 5.53 5.87 6.18 7.92 RUDA† 100 5.70 7.52 6.20 7.02 LatentGaze† <100 5.21 7.81 Ours (Res-18)† (w/o Dt) 0 5.72 (4/8) 6.97 (3/8) 7.18 (5/7) 7.38 (2/7) Table 1: Results of SOTA gaze estimation methods. Angular error in degrees are shown. |Dt| indicates target domain sample numbers for training. † and ‡ indicate that the backbone is ResNet-18 and 50, respectively. * indicates that data augmentation is used. Typical gaze estimation methods are usually trained and tested on the same domain and do not consider the crossdataset setting. As a result, their performances on Dt are poor. In contrast, GFAL framework surpasses all the typical gaze estimation methods (2nd row in Tab. 1). In addition, we compare GFAL framework with other SOTA gaze DG methods. As shown in the 3rd and 4th rows in Tab. 1, GFAL framework can outperform the other methods by a large margin. Note that ResNet-50 may not always yield better results than ResNet-18. This is because deeper networks like ResNet-50 may be more prone to overfit the training data (Schmidt 2023). 
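For reference, the objective in Eq. (11) and the optimizer settings listed in the implementation details fit together roughly as in the sketch below. The function name and argument layout are illustrative, and the MS-SSIM value is assumed to be computed elsewhere rather than by a specific library call.

```python
import torch
import torch.nn.functional as F

# Loss weights reported for the image-warping implementation (Eq. 11).
LAM_FRO, LAM_G, LAM_H = 0.1, 0.01, 0.01

def gfal_total_loss(g_hat, g, h_hat, h, ms_ssim_val, g_fro, g_ref, h_fro, h_ref):
    """Composes Eq. (11) from Eqs. (2), (3), (6), (8) and (10)."""
    L_G   = F.l1_loss(g_hat, g)        # Eq. (2): gaze regression
    L_H   = F.l1_loss(h_hat, h)        # Eq. (3): head orientation
    L_fro = 1.0 - ms_ssim_val          # Eq. (6): 1 - MS-SSIM(x_ref, x_fro)
    L_gc  = F.l1_loss(g_fro, g_ref)    # Eq. (8): gaze consistency
    L_hc  = F.l1_loss(h_fro, h_ref)    # Eq. (10): head-orientation consistency
    return L_G + L_H + LAM_FRO * L_fro + LAM_G * L_gc + LAM_H * L_hc

# Optimizer configuration as stated above; `model` is the network being trained.
# opt = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.95))
```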
Furthermore, since CDG and GazeCon is designed based on data augmentation, including a random color field and grayscale, we apply these data augmentation strategies to Task FAZE ST-ED CUDA-GHRBaseline†Ours† DE →DM 8.17 7.30 7.58 7.48 5.72 DE →DD 11.61 8.14 8.99 7.86 6.97 Table 2: Cross-dataset validation results of different gaze redirection methods on the gaze estimation task. The results show the angular error in degrees. GFAL framework and compare GFAL framework with them. The results in the 5th and 6th row show that our framework can also surpass them. Moreover, our proposed GFAL shows good scalability across datasets. GFAL can achieve good domain generalization performance while using different training datasets, where the largest contains 713,646 images, while the smallest contains 84,902 images. Unsupervised Domain Adaptation. We compare GFAL framework with SOTA gaze UDA methods for reference. The results are shown in the last two rows in Tab. 1. Our proposed method (ResNet-18) can surpass 4 of 7, 5 of 7, 2 of 6, and 5 of 6 methods in the DE →DM, DE →DD, DG →DM and DG →DD tasks, respectively. Relation to Gaze Redirection Methods Comparison with Gaze Redirection Methods. This experiment aims to prove that GFAL can surpass gaze redirection methods in the cross-domain gaze estimation task. We first pretrain gaze redirection models using DE (Zhang et al. 2020). Then we use these models to generate |Ds| face images with random gaze directions, as in (Zheng et al. 2020), forming the synthetic training set Dsyn, where |Ds| is the size of the source domain training set. Then, we train the gaze estimation model using the ResNet-18 (He et al. 2016) backbone and the training set Ds ∪Dsyn. Note that we do not use DG (Kellnhofer et al. 2019), as DG does not provide head orientation labels, and these gaze redirection methods need head orientation labels for training. The results are shown in Tab. 2. It can be concluded that GFAL framework is superior to SOTA gaze redirection methods. The reason behind this is that these models still suffer from the infinite and continuous labels problem, although the training dataset is extended and enriched so that it contains more gaze information. Conversely, GFAL framework can achieve better performance than gaze redirection methods. Plug Gaze Redirection Methods into GFAL Framework. This experiment aims to prove that gaze redirection methods can be plugged into GFAL framework to improve crossdomain gaze estimation performance. Here, we plug the SOTA gaze redirection methods into GFAL framework to replace the image warping procedure and use the gaze feature z only to generate redirected face images that look at (0, 0). All the experiments are conducted with a ResNet-18 (He et al. 2016) backbone for extracting z. The results are shown in Tab. 3. It can be seen that plugging these gaze redirection methods into GFAL framework can improve gaze domain generalization performance as The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6338 Strategy DE →DM DE →DD DG →DM DG →DD Avg Baseline† 7.48 7.86 7.70 8.12 7.79 Ours (DeepWarp)† 7.35 8.93 6.73 6.89 7.48 Ours (FAZE)† 8.10 9.37 6.87 7.66 8.00 Ours (ST-ED)† 7.22 8.20 7.40 8.15 7.74 Ours (Warping)† 5.72 6.97 7.18 7.38 6.81 Table 3: Cross-dataset validation results of plugging different SOTA gaze redirection methods into our proposed framework. The results show the angular error in degrees. 
Input 𝐱 Output Image 𝐱!"# (a) Source Domain Input 𝐱 Output Image 𝐱!"# (b) Target Domain 1.99° 2.23° 1.83° 0.93° 3.77° 2.21° 1.68° 2.49° Figure 4: Visualization of xfro. Angular errors between ˆgfro and (0, 0) are shown in the lower right corner of xfro. well as the image warping for most of time. However, the overall (average) performance of the implementation based on gaze redirection methods cannot surpass that the implementation based on image warping. This is because SOTA gaze redirection networks are complex and difficult to train, leading to difficulty in training gaze estimation models based on these kinds of implementation. Gaze Frontalization Visualization This experiment aims to show that the extracted feature z can correctly perform the gaze frontalization process, i.e., rotating the eyeball in x to (0, 0). We visualize xfro according to x, results are shown in Fig. 4. The eyeball directions in x are successfully rotated from (θ, ψ) to (0, 0) in the output image xfro. The angular error between ˆgfro = G(F(xfro)) and (0, 0) shown in the lower right corner of every xfro in Fig. 4 is small enough to support this observation. This observation indicates that our proposed framework can successfully force z to process gaze frontalization. Ablation Study Comparison of Different Choices of Lfro. We conduct an experiment to compare different choices of Lfro, including Strategy DE →DM DE →DD DG →DM DG →DD Avg L1 7.28 7.40 6.76 7.47 7.23 L2 7.12 8.25 7.18 8.19 7.69 SSIM 6.78 8.59 6.33 7.56 7.32 MS-SSIM 5.72 6.97 7.18 7.38 6.81 Lcon 6.02 8.16 7.91 7.47 7.39 L∗ con 5.72 6.97 7.18 7.38 6.81 (0, 0) →(θ, ψ) 7.33 7.15 7.56 7.87 7.48 (θ, ψ) →(0, 0) 5.72 6.97 7.18 7.38 6.81 Baseline 7.48 7.86 7.70 8.12 7.79 Lfro 7.92 8.92 7.41 8.59 8.21 Lfro + Lcon 5.72 6.97 7.18 7.38 6.81 Table 4: Ablation study results of different strategies. The results show the angular error in degrees. distance metrics L1, L2, SSIM (Wang et al. 2004) and MSSSIM (Wang, Simoncelli, and Bovik 2003). The results are shown in the second row of Tab. 4, which indicate that using MS-SSIM can achieve the best performance and is the best choice for the distance metric between xref and xfro. Comparison of Lcon and L∗ con. We use L∗ con (Eq. 8 and 10) instead of Lcon (Eq. 7 and 9) for more stable training. Here, we compare these two types of loss terms, and the results are shown in the third row of Tab. 4. It is indicated that using L∗ con is better for Consistency Loss than using Lcon. Comparison of (0, 0) →(θ, ψ) and (θ, ψ) →(0, 0). We use the (θ, ψ) →(0, 0) strategy (rotating (θ, ψ) to (0, 0)) instead of (0, 0) →(θ, ψ) to make every g have a clear and unique target (0, 0), further constraining the gaze estimation learning. The comparison results of these 2 strategies are shown in the fourth row of Tab. 4, and the results show that (θ, ψ) →(0, 0) is better. Contributions of the Loss Terms. Results are shown in the last row of Tab. 4. Using only Lfro cannot improve the cross-domain gaze estimation performance. Furthermore, using both Lfro and Lcon can significantly improve performance. This further shows the effectiveness of both Gaze Frontalization Module and Consistency Loss for gaze frontalization auxiliary learning. Conclusion In this paper, we propose a novel gaze domain generalization (DG) framework GFAL, which aims to utilize the embedding of gaze frontalization process to improve crossdomain gaze estimation performance without any target domain information during training. 
Experimental results show that GFAL framework can achieve SOTA performance on gaze DG task, which is competitive with or even superior to the SOTA gaze UDA methods, and surpass most of representative gaze redirection methods. Moreover, various types of implementations can be plugged into GFAL framework, which can improve cross-domain gaze estimation performance for most of time. This work provides new insights for cross-domain gaze estimation. In the future, we will extend the gaze frontalization methods, such as 3D reconstruction. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6339 Acknowledgments This work was partially supported by National Natural Science Foundation of China (NSFC) under Grant 62372019, and partially supported by Peng Cheng Laboratory (PCL2023A10-2). References Bao, Y.; Cheng, Y.; Liu, Y.; and Lu, F. 2021. Adaptive feature fusion network for gaze tracking in mobile tablets. In 2020 25th international conference on pattern recognition, 9936–9943. IEEE. Bao, Y.; Liu, Y.; Wang, H.; and Lu, F. 2022. Generalizing gaze estimation with rotation consistency. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4207–4216. Chen, Z.; and Shi, B. 2020. Offset calibration for appearance-based gaze estimation via gaze decomposition. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, 270–279. Chen, Z.; and Shi, B. E. 2018. Appearance-based gaze estimation using dilated-convolutions. In Proceedings of the asian conference on computer vision, 309–324. Springer. Chen, Z.; and Shi, B. E. 2022. Towards high performance low complexity calibration in appearance based gaze estimation. IEEE transactions on pattern analysis and machine intelligence, 45(1): 1174–1188. Cheng, Y.; Bao, Y.; and Lu, F. 2022. Puregaze: Purifying gaze feature for generalizable gaze estimation. In Proceedings of the AAAI conference on artificial intelligence, volume 36, 436–443. Cheng, Y.; Huang, S.; Wang, F.; Qian, C.; and Lu, F. 2020a. A coarse-to-fine adaptive network for appearancebased gaze estimation. In Proceedings of the AAAI conference on artificial intelligence, volume 34, 10623–10630. Cheng, Y.; and Lu, F. 2022. Gaze estimation using transformer. In 2022 26th international conference on pattern recognition, 3341–3347. IEEE. Cheng, Y.; Lu, F.; and Zhang, X. 2018. Appearance-based gaze estimation via evaluation-guided asymmetric regression. In Proceedings of the european conference on computer vision, 100–115. Cheng, Y.; Zhang, X.; Lu, F.; and Sato, Y. 2020b. Gaze estimation by exploring two-eye asymmetry. IEEE transactions on image processing, 29: 5259–5272. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International conference on learning representations. Fischer, T.; Chang, H. J.; and Demiris, Y. 2018. Rt-gene: Real-time eye gaze estimation in natural environments. In Proceedings of the european conference on computer vision, 334–352. Funes Mora, K. A.; Monay, F.; and Odobez, J.-M. 2014. Eyediap: A database for the development and evaluation of gaze estimation algorithms from rgb and rgb-d cameras. In Proceedings of the symposium on eye tracking research and applications, 255–258. Ganin, Y.; Kononenko, D.; Sungatullina, D.; and Lempitsky, V. 2016. 
Deepwarp: Photorealistic image resynthesis for gaze manipulation. In Proceedings of the european conference on computer vision, 311–326. Springer. Guo, Z.; Yuan, Z.; Zhang, C.; Chi, W.; Ling, Y.; and Zhang, S. 2021. Domain Adaptation Gaze Estimation by Embedding with Prediction Consistency. In Proceedings of the asian conference on computer vision, 292–307. Springer. He, J.; Pham, K.; Valliappan, N.; Xu, P.; Roberts, C.; Lagun, D.; and Navalpakkam, V. 2019. On-device few-shot personalization for real-time gaze estimation. In Proceedings of the IEEE/CVF international conference on computer vision workshops. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. Jaderberg, M.; Simonyan, K.; Zisserman, A.; and Kavukcuoglu, K. 2015. Spatial transformer networks. In Proceedings of the 28th international conference on neural information processing systems, 2017–2025. Jindal, S.; and Wang, X. E. 2023. Cuda-ghr: Controllable unsupervised domain adaptation for gaze and head redirection. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, 467–477. Kellnhofer, P.; Recasens, A.; Stent, S.; Matusik, W.; and Torralba, A. 2019. Gaze360: Physically unconstrained gaze estimation in the wild. In Proceedings of the IEEE/CVF international conference on computer vision, 6912–6921. Krafka, K.; Khosla, A.; Kellnhofer, P.; Kannan, H.; Bhandarkar, S.; Matusik, W.; and Torralba, A. 2016. Eye tracking for everyone. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2176–2184. Lahiri, A.; Agarwalla, A.; and Biswas, P. K. 2018. Unsupervised domain adaptation for learning eye gaze from a million synthetic images: An adversarial approach. In Proceedings of the 11th Indian Conference on Computer Vision, Graphics and Image Processing, 1–9. Lee, I.; Yun, J.-S.; Kim, H. H.; Na, Y.; and Yoo, S. B. 2022. LatentGaze: Cross-Domain Gaze Estimation through GazeAware Analytic Latent Code Manipulation. In Proceedings of the asian conference on computer vision, 3379–3395. Liu, G.; Yu, Y.; Mora, K. A. F.; and Odobez, J.-M. 2018. A differential approach for gaze estimation with calibration. In British machine vision conference, volume 2, 6. Liu, G.; Yu, Y.; Mora, K. A. F.; and Odobez, J.-M. 2019. A differential approach for gaze estimation. IEEE transactions on pattern analysis and machine intelligence, 43(3): 1092– 1099. Liu, R.; Bao, Y.; Xu, M.; Wang, H.; Liu, Y.; and Lu, F. 2022. Jitter Does Matter: Adapting Gaze Estimation to New Domains. arXiv:2210.02082. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6340 Liu, Y.; Liu, R.; Wang, H.; and Lu, F. 2021. Generalizing gaze estimation with outlier-guided collaborative adaptation. In Proceedings of the IEEE/CVF international conference on computer vision, 3835–3844. Lu, F.; Sugano, Y.; Okabe, T.; and Sato, Y. 2011. Inferring human gaze from appearance via adaptive linear regression. In Proceedings of the IEEE international conference on computer vision, 153–160. Lu, F.; Sugano, Y.; Okabe, T.; and Sato, Y. 2014. Adaptive Linear Regression for Appearance-Based Gaze Estimation. IEEE transactions on pattern analysis and machine intelligence, 36(10): 2033–2046. MacQueen, J. 1967. Classification and analysis of multivariate observations. In Proceedings of the fifth berkeley symposium on mathematical statistics and probability, 281–297. Mole, C.; Pekkanen, J.; Sheppard, W. 
E.; Markkula, G.; and Wilkie, R. M. 2021. Drivers use active gaze to monitor waypoints during automated driving. Scientific Reports, 11(1): 1–18. O Oh, J.; Chang, H. J.; and Choi, S.-I. 2022. Self-attention with convolution and deconvolution for efficient eye gaze estimation from a full face image. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4992–5000. Park, S.; Mello, S. D.; Molchanov, P.; Iqbal, U.; Hilliges, O.; and Kautz, J. 2019. Few-shot adaptive gaze estimation. In Proceedings of the IEEE/CVF international conference on computer vision, 9368–9377. Park, S.; Spurr, A.; and Hilliges, O. 2018. Deep pictorial gaze estimation. In Proceedings of the european conference on computer vision, 721–738. Park, S.; Zhang, X.; Bulling, A.; and Hilliges, O. 2018. Learning to find eye region landmarks for remote gaze estimation in unconstrained settings. In Proceedings of the 2018 ACM symposium on eye tracking research and applications, 1–10. Schmidt, J. 2023. Testing for Overfitting. arXiv:2305.05792. Sugano, Y.; Matsushita, Y.; and Sato, Y. 2014. Learningby-synthesis for appearance-based 3d gaze estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1821–1828. Sun, L.; Liu, Z.; and Sun, M.-T. 2015. Real time gaze estimation with a consumer depth camera. Information Sciences, 320: 346–360. Tzeng, E.; Hoffman, J.; Saenko, K.; and Darrell, T. 2017. Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7167–7176. Wang, H.; Wang, Y.; Zhou, Z.; Ji, X.; Gong, D.; Zhou, J.; Li, Z.; and Liu, W. 2018. Cosface: Large margin cosine loss for deep face recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 5265– 5274. Wang, K.; Zhao, R.; Su, H.; and Ji, Q. 2019. Generalizing eye tracking with bayesian adversarial learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 11907–11916. Wang, Y.; Jiang, Y.; Li, J.; Ni, B.; Dai, W.; Li, C.; Xiong, H.; and Li, T. 2022. Contrastive regression for domain adaptation on gaze estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 19376–19385. Wang, Z.; Bovik, A. C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4): 600–612. Wang, Z.; Simoncelli, E. P.; and Bovik, A. C. 2003. Multiscale structural similarity for image quality assessment. In Thirty-seventh asilomar conference on signals, systems and computers 2003, volume 2, 1398–1402. IEEE. Wang, Z.; Zhao, Y.; and Lu, F. 2022. Gaze-VergenceControlled See-Through Vision in Augmented Reality. IEEE Transactions on Visualization and Computer Graphics, 28(11): 3843–3853. Xu, M.; Wang, H.; and Lu, F. 2023. Learning a generalized gaze estimator from gaze-consistent feature. In Proceedings of the AAAI conference on artificial intelligence, volume 37, 3027–3035. Yu, Y.; Liu, G.; and Odobez, J.-M. 2019a. Deep Multitask Gaze Estimation with a Constrained Landmark-Gaze Model. In Proceedings of the european conference on computer vision workshops, 456–474. Springer. Yu, Y.; Liu, G.; and Odobez, J.-M. 2019b. Improving fewshot user-specific gaze adaptation via gaze redirection synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 11937–11946. Yu, Y.; and Odobez, J.-M. 2020. 
Unsupervised representation learning for gaze estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 7314–7324. Zhang, X.; Park, S.; Beeler, T.; Bradley, D.; Tang, S.; and Hilliges, O. 2020. Eth-xgaze: A large scale dataset for gaze estimation under extreme head pose and gaze variation. In Proceedings of the european conference on computer vision, 365–381. Springer. Zhang, X.; Sugano, Y.; Fritz, M.; and Bulling, A. 2017a. It’s written all over your face: Full-face appearance-based gaze estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 51–60. Zhang, X.; Sugano, Y.; Fritz, M.; and Bulling, A. 2017b. Mpiigaze: Real-world dataset and deep appearance-based gaze estimation. IEEE transactions on pattern analysis and machine intelligence, 41(1): 162–175. Zheng, Y.; Park, S.; Zhang, X.; De Mello, S.; and Hilliges, O. 2020. Self-learning transformations for improving gaze and head redirection. Advances in neural information processing systems, 33: 13127–13138. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6341 | 2024 | 704 |
18,523 | HACDR-Net: Heterogeneous-Aware Convolutional Network for Diabetic Retinopathy Multi-Lesion Segmentation QiHao Xu1, 2, Xiaoling Luo1, 2*, Chao Huang3, Chengliang Liu2, Jie Wen 2, Jialei Wang2, Yong Xu2 1College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China 2Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology, Shenzhen, China 3School of Cyber Science and Technology, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Diabetic Retinopathy (DR), the leading cause of blindness in diabetic patients, is diagnosed by the condition of retinal multiple lesions. As a difficult task in medical image segmentation, DR multi-lesion segmentation faces the main concerns as follows. On the one hand, retinal lesions vary in location, shape, and size. On the other hand, because some lesions occupy only a very small part of the entire fundus image, the high proportion of background leads to difficulties in lesion segmentation. To solve the above problems, we propose a heterogeneous-aware convolutional network (HACDR-Net) that composes heterogeneous cross-convolution, heterogeneous modulated deformable convolution, and optional nearfar-aware convolution. Our network introduces an adaptive aggregation module to summarize the heterogeneous feature maps and get diverse lesion areas in the heterogeneous receptive field along the channels and space. In addition, to solve the problem of the highly imbalanced proportion of focal areas, we design a new medical image segmentation loss function, Noise Adjusted Loss (NALoss). NALoss balances the predictive feature distribution of background and lesion by jointing Gaussian noise and hard example mining, thus enhancing awareness of lesions. We conduct the experiments on the public datasets IDRiD and DDR, and the experimental results show that the proposed method achieves better performance than other state-of-the-art methods. The code is opensourced on github.com/xqh180110910537/HACDR-Net. Introduction Diabetic retinopathy (DR) is one of the most common microvascular complications of diabetes, which can cause a series of fundus lesions. Therefore, DR multi-lesion segmentation is crucial to diabetes diagnosis. Over the past few years, Convolutional Neural Networks (CNNs) and Transformer Networks (Liu et al. 2023a,b) have greatly promoted the development of DR multi-lesion segmentation (Wang et al. 2022; Cui et al. 2023; Xu et al. 2022; Ling et al. 2023). However, existing segmentation methods still face limitations that hinder their performance in DR multi-lesion segmentation. First, each type of lesion has a variable shape and size in the fundus image. Secondly, the area occupied by *Corresponding Author: Xiaoling Luo Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Comparison of our model and other attention patterns. (a) is a convolutional attention mechanism with very large kernels. (b) is ViT’s (Dosovitskiy et al. 2020) global attention mechanism. (c) is our heterogeneous convolutional attention mechanism. the lesions in the fundus picture is small. As a result, model training is more likely to be biased toward the background rather than the lesions. 
Existing methods improve segmentation accuracy in DR multi-lesion segmentation by obtaining global position relationships through large receptive fields, but small lesion features are negatively affected by large irrelevant areas. In this paper, we propose a heterogeneous-aware convolutional network for DR multi-lesion segmentation (HACDRNet). Heterogeneous convolution aims to aggregate the convolution features of different structures to obtain heterogeneous receptive fields. Compared with previous DR multilesion segmentation methods, the heterogeneous receptive field extracts the heterogeneous features of lesions, thereby having a good segmentation effect on lesions of various sizes and shapes. Furthermore, the heterogeneous convolutional structure has space-adaptive capabilities to reduce perturbance in large irrelevant regions. Aggregating heterogeneous convolution information is difficult, because features may conflict under heterogeneous receptive fields. To this end, inspired by Visual Attention Network (Guo et al. 2023) (VAN), we design a heterogeneous-aware attention aggregation (HAAA) module to summarize the heterogeneous feature maps. Different from VAN, we aggregate the features of heterogeneous convolution instead of a single large kernel convolution. In The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6342 Fig. 1, the receptive field of (a) and (b) is too large, causing the lesion to be easily perturbed by the background area; the heterogeneous receptive field of (c) is easier to get diverse lesion areas. In addition, another difficulty is that the proportion of the focal area is imbalanced, which will cause the features of some lesions to be ignored during training. This is not allowed. We found that adjusting the predicted pixel values can change the distribution of predictive features. To this end, we propose a novel loss function, Noise Adjusted Loss (NALoss). NALoss balances the predictive feature distribution by adjusting the pixel prediction. Gaussian noise is added in the predicted pixels to perturb the background‘s feature and enhance the lesion‘s feature. It is worth noting that noise addition is not involved in the testing phase. Experiments prove that NALoss can strengthen the feature learning, and improves the segmentation performance. To summarize, our contributions are as follows: • We propose a novel heterogeneous-aware convolutional network (HACDR-Net) for DR multi-lesion segmentation. The network has heterogeneous receptive fields and spatial adaptability, which solves the segmentation problem caused by the different shapes and sizes of lesions. • We propose a new loss function Noise Adjusted Loss (NALoss) specially designed for fitting highly imbalanced segmentation. It balances the distributions of predictive features by jointing Gaussian noise and hard example mining. • HACDR-Net undergoes thorough testing on two datasets and consistently achieves state-of-the-art results. Various metrics have been significantly improved on the DDR and IDRiD datasets. Related Work Approaches of DR Multi-Lesion Segmentation In medical image segmentation, U-net and its family, such as ResUnet (Diakogiannis et al. 2020), DenseUnet (Li et al. 2018), and Unet++ (Zhou et al. 2019b), were first widely used. L-seg (Guo et al. 2019) first proposes an end-to-end unified framework for multi-lesion segmentation of fundus images. However, these performed poorly in DR multilesion segmentation. 
Because the receptive field of traditional convolution is too small, it is not enough to grasp the global relationship. In recent years, segmentation models based on Transformer and CNN-Transformer, such as Transunet (Chen et al. 2021) and Swinunet (Cao et al. 2022), have begun to be applied to DR multi-lesion segmentation. Among them, RTnet (Huang et al. 2022) proposed a relation transformer network for diabetic retinopathy multi-lesion segmentation. The segmentation network with Swin Transformer (Liu et al. 2022) and Twins-SVT (Chu et al. 2021) as the backbone also achieved good results. PMCNet (He et al. 2022) improves the accuracy through the combination of CNN and Transformer. M2MRF (Liu et al. 2023c) is also a state-ofthe-art network in this task. These networks mainly improve the segmentation effect by expanding the receptive field. But they neglected the characteristics of lesions and appeared helpless when facing sundry lesions. Loss Function for Imbalanced Segmentation Various segmentation loss functions for solving imbalanced medical image data problems have been widely used. There are two types of loss functions. The first type aims to balance the importance of samples. Examples include Weighted Cross-Entropy (Ronneberger, Fischer, and Brox 2015), Diceloss (Li et al. 2020), Focalloss (Lin et al. 2017). The second type aims to balance the number of samples, and one method for achieving this is online hard example mining (OHEM) (Wang et al. 2023). Method Overview of Our Work The overall architecture of our proposed HeterogeneousAware Convolutional Network (HACDR-Net) is illustrated in Fig. 2, including HACDR-Net and Noise Adjusted Loss (NALoss). The encoder comprises four stages, each with downsampling rates Ri = [4, 8, 16, 32]. Each stage extracts heterogeneous features through repeatedly Heterogeneous Convolutional Attention (HCA) Blocks and downsamples with modulated deformable convolution (MDConv). The number of HCA Block iteration in the four stages respectively are 3, 3, 5, and 2. The core of HCA Block is the heterogeneous-aware attention aggregation (HAAA) module. For the deformable feed-forward network (DFFN) module in HCA block, we try to replace depth-wise convolution (DWConv) with MDConv. The decoder adopts a U-shaped structure like U-net (Ronneberger, Fischer, and Brox 2015). All of the structures in our HACDR-Net are residuals. In addition, we propose a new medical segmentation loss function NALoss for lesion-sample training, as shown in Fig. 4. It adjusts the feature distribution of training predictions by jointing Gaussian noise and hard example mining. Encoder with HCA Block As shown in Fig. 2, our encoder adopts a Transformerlike architecture, including deformable convolutional downsampling and HCA Block. However, different from selfattention and multi-head attention, we propose a novel heterogeneous convolutional attention to meet the requirements of lesion segmentation. In HCA Block, HAAA obtains heterogeneous feature maps through multi-branch heterogeneous convolution and then aggregates these features through an attention method. This mechanism of heterogeneous convolution can obtain diverse lesion areas in the heterogeneous receptive field along the channels and space. HCA Block widely uses MDConv. Moreover, a 3×3 MDConv is also applied in the downsampling. MDConv can grasp the details of various lesions and dynamically adapt to heterogeneous features, compared with traditional convolution. 
The MDConv is defined as follows:

$\tilde{F}(x, y) = \sum_{k=1}^{K} w_k\, m_k \cdot F(x + \Delta x_k,\, y + \Delta y_k), \quad (1)$

where $F(x, y)$ represents the input features and $\tilde{F}(x, y)$ the deformed and enhanced features, $K$ is the total number of sampling points, and $k$ enumerates them. $w_k$ is the learnable weight of the $k$-th sampling point, and $m_k$ is its scalar modulation, normalized by the sigmoid function. $(x + \Delta x_k, y + \Delta y_k)$ are the offset coordinates of the sampling point. In this way, our HACDR-Net enhances awareness of lesions. Figure 2: An overview of HACDR-Net. Figure 3: Visualization of the encoder's features by Grad-CAM (Selvaraju et al. 2017). (a) and (f) show the original image and the ground truth, respectively. Feature maps (b) to (e) correspond to different branch structures and their combinations: (b) DCC, (c) MLKCC+ONFA, (d) DCC+MLKCC+ONFA, and (e) DCC+MLKCC+ONFA passed through the attention aggregation module.

HAAA Module. To handle lesions of various shapes and sizes, we use multi-branch heterogeneous convolution to obtain heterogeneous features, as shown in Fig. 2. Before extracting heterogeneous features, the HAAA module applies a 5×5 MDConv for dynamic feature adaptation. We design three heterogeneous convolution branches: a multi-scale large-kernel cross-convolution (MLKCC) branch, a deformable cross-convolution (DCC) branch, and an optional near-far-aware (ONFA) branch. As illustrated in Fig. 2, a k×k cross-convolution passes the features sequentially through 1×k and k×1 axis convolutions. We use cross-convolution extensively to reduce interference from large irrelevant areas while obtaining and enlarging heterogeneous receptive fields. The MLKCC branch obtains multi-scale cross receptive fields to capture the long-range relationships of lesions, since a single-shaped cross receptive field cannot adapt to lesions of various shapes and sizes. The DCC branch uses deformable cross-convolution to obtain dynamic local receptive fields; together with the 5×5 MDConv it forms a deformable convolution residual structure that adapts to lesion areas of different shapes. The ONFA branch is composed of k×k deformable cross-convolution, which further enhances the network's adaptability. We residually sum these features to form a heterogeneous feature map. Finally, the heterogeneous feature channels are aggregated by a 1×1 convolution, followed by an attention operation implemented as element-wise multiplication of the aggregated output and the input. HAAA can be denoted as:

$\mathrm{HAAA}(X) = \mathrm{Conv}_{1\times1}\Big(\sum_{i=0}^{2} \mathrm{LKCC}_{k_i\times k_i}(X') + \mathrm{DCC}_{3\times3}(X') + \mathrm{ONFA}_{n\times n}(X') + X'\Big) \otimes X + X, \quad (2)$
$X' = \mathrm{MDConv}_{5\times5}(X), \quad (3)$
$\mathrm{LKCC}_{k_i\times k_i}(X) = \mathrm{DWConv}_{1\times k_i}(\mathrm{DWConv}_{k_i\times1}(X)), \quad (4)$
$\mathrm{DCC}_{3\times3}(X) = \mathrm{MDConv}_{1\times3}(\mathrm{MDConv}_{3\times1}(X)), \quad (5)$
$\mathrm{ONFA}_{n\times n}(X) = \mathrm{MDConv}_{1\times n}(\mathrm{MDConv}_{n\times1}(X)). \quad (6)$

Here, DWConv denotes depth-wise separable convolution, MDConv modulated deformable convolution, LKCC large-kernel cross-convolution, DCC deformable cross-convolution, and ONFA optional near-far-aware convolution; their subscripts indicate the convolution size. The operation '⊗' denotes element-wise matrix multiplication. We set the scales of the axis convolutions $k_i$ to 7, 11, and 21.
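To make the branch structure concrete, the following is a minimal PyTorch sketch of the HAAA aggregation in Eqs. (2)-(6). It is not the authors' implementation: plain depth-wise axis convolutions stand in for the modulated deformable convolutions (which would additionally require learned offsets and masks, e.g. via torchvision.ops.deform_conv2d), and the module and argument names (CrossConv, HAAA, mlkcc_scales, onfa_k) are ours.

```python
import torch
import torch.nn as nn

class CrossConv(nn.Module):
    """k x k cross-convolution: a k x 1 followed by a 1 x k depth-wise axis convolution."""
    def __init__(self, channels, k):
        super().__init__()
        self.conv_v = nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels)
        self.conv_h = nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels)

    def forward(self, x):
        return self.conv_h(self.conv_v(x))

class HAAA(nn.Module):
    """Simplified heterogeneous-aware attention aggregation: multi-branch cross
    convolutions are residually summed, aggregated by a 1x1 convolution, and used
    as an element-wise attention map over the input (Eq. 2)."""
    def __init__(self, channels, mlkcc_scales=(7, 11, 21), onfa_k=9):
        super().__init__()
        self.pre = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)  # stand-in for 5x5 MDConv
        self.mlkcc = nn.ModuleList([CrossConv(channels, k) for k in mlkcc_scales])
        self.dcc = CrossConv(channels, 3)        # stand-in for the deformable cross-conv branch
        self.onfa = CrossConv(channels, onfa_k)  # stand-in for the optional near-far-aware branch
        self.aggregate = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        x_p = self.pre(x)                                                   # X' in Eq. (3)
        feats = sum(b(x_p) for b in self.mlkcc) + self.dcc(x_p) + self.onfa(x_p) + x_p
        attn = self.aggregate(feats)                                        # 1x1 channel aggregation
        return attn * x + x                                                 # ... (x) X + X in Eq. (2)

if __name__ == "__main__":
    y = HAAA(64)(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```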
Through multi-branch heterogeneous convolution and attention aggregation, HAAA possesses heterogeneous receptive fields and a dynamic, adaptive perception capability. It reduces feature collisions among the branches and enhances awareness of various lesions. The three branches (MLKCC, DCC, ONFA) and the attention aggregation module are all indispensable. As depicted in Fig. 3, the aggregation of heterogeneous convolutions is significantly superior to any single branch.

DFFN Module. This module mainly enhances the local features produced by HAAA. As shown in Fig. 2, it adopts a residual convolution structure and can use either MDConv or DWConv; the two variants behave differently on different datasets.

Decoder. Following the requirements of DR multi-lesion segmentation, we adopt a U-net-style decoder. The multi-scale features of the encoder are combined by channel fusion and upsampling, and finally restored to a mask map. The U-net-based decoder (Ronneberger, Fischer, and Brox 2015) gives the best results for DR multi-lesion segmentation.

Loss Function. To address the highly imbalanced proportion of focal areas, we propose the NALoss. As shown in Fig. 4, by jointly applying Gaussian noise and hard example mining, NALoss balances the predictive feature distribution of background and lesions and thereby improves the feature representations. Figure 4: An overview of NALoss. The operation '⊕' means add and '⊗' means multiply. Pixel $P_i$ is used to illustrate the two stages of NALoss: adding Gaussian noise to balance the predictive distribution, and hard example mining.

First, during training, we add weighted Gaussian noise to the predicted pixels. By adjusting the distribution of predicted values for each pixel, NALoss can balance the distribution of predictive features. Each predicted pixel is a vector over $c$ categories and each mask pixel is a one-hot vector over $c$ categories. We denote $p_i$ as the predicted pixel, $g_i$ as the mask pixel, and $p_i^k$, $g_i^k$ as the $k$-th category value of each. $z_i^k$ denotes a score of the same form as $p_i^k$, and $w_k$ denotes the loss weight of category $k$. The Adjusted Parameter is denoted $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_C]$, where $\alpha_k$ is the noise weight of the $k$-th category, and $\mathcal{N}(0, \sigma^2)$ is Gaussian noise with mean 0 and variance $\sigma^2$. The first-step loss $L_F$ is

$\mathrm{Softmax}(z_i^k) = \frac{e^{z_i^k}}{\sum_{u=1}^{c} e^{z_i^u}}, \quad (7)$
$\xi = \log\big(\mathrm{Softmax}(p_i^k + \alpha_k \mathcal{N}(0, \sigma^2))\big), \quad (8)$
$L_F = -\frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{C} w_k\, g_i^k\, \xi. \quad (9)$

Determining a suitable Adjusted Parameter $\alpha$ is the key to NALoss. To make the prediction robust for both background and lesion pixels, we count the total number of pixels of each category over the entire training set and define

$\alpha_k = \frac{\log\frac{\sum_{j=1}^{C} s_j}{s_k}}{\sum_{i=1}^{C} \log\frac{\sum_{j=1}^{C} s_j}{s_i}}, \quad (10)$

where $s_k$ is the total number of $k$-th category pixels over all training images and $\sum_{j=1}^{C} s_j$ is the total number of pixels in the training set. The Adjusted Parameter $\alpha_k$ is therefore inversely related to the pixel count of its category, as shown in Fig. 4. In other words, we increase the prediction probability of lesions, which perturbs the background and raises the share of lesions in the predictive feature distribution. The Gaussian noise also introduces some random variation into the predicted values.
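As a rough illustration of this noise-adjusting first stage of NALoss (Eqs. (7)-(10)), the PyTorch sketch below adds class-weighted Gaussian noise to the logits before a weighted cross-entropy. The tensor shapes, the choice to draw noise per class channel, and the pixel counts in the usage lines are our assumptions rather than values from the paper; the second stage (hard example mining over the resulting per-pixel losses) is not shown.

```python
import torch
import torch.nn.functional as F

def adjusted_parameter(class_pixel_counts):
    """Eq. (10): per-class noise weights from training-set pixel counts."""
    s = torch.as_tensor(class_pixel_counts, dtype=torch.float)
    ratios = torch.log(s.sum() / s)
    return ratios / ratios.sum()

def naloss_first_stage(logits, target, alpha, class_weights, sigma=1.0):
    """Noise-adjusted weighted cross-entropy, Eqs. (7)-(9).
    logits: (N, C, H, W) raw per-pixel scores; target: (N, H, W) class indices.
    Returns a per-pixel loss map that a later hard-example-mining step can filter."""
    noise = torch.randn_like(logits) * sigma             # N(0, sigma^2), drawn per class channel (assumption)
    noisy = logits + alpha.view(1, -1, 1, 1) * noise     # perturb class k by alpha_k
    return F.cross_entropy(noisy, target, weight=class_weights, reduction="none")

# usage sketch with illustrative (not real) pixel counts: background dominates the four lesion classes
alpha = adjusted_parameter([9.5e8, 4.2e6, 1.1e6, 2.3e6, 8.0e5])
loss_map = naloss_first_stage(torch.randn(2, 5, 64, 64),
                              torch.randint(0, 5, (2, 64, 64)),
                              alpha, class_weights=torch.ones(5))
print(loss_map.shape, loss_map.mean())
```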
With appropriate Adjusted Parameter α, fluctuating features can increase the difficulty of background The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6345 Methods IoU Dice AUPR mIoU EX SE HE MA mDice EX SE HE MA mAUPR EX SE HE MA HED (Xie and Tu 2015) 46.66 64.74 47.43 50.38 24.07 62.18 78.60 64.33 67.00 38.81 63.94 80.81 66.41 68.09 40.45 PSPNet (Zhao et al. 2017) 41.70 57.78 43.71 45.81 19.50 57.38 73.24 60.81 62.83 32.63 58.73 75.21 63.36 63.65 32.71 DenseUNet (Li et al. 2018) 46.71 66.51 45.57 45.86 30.55 62.56 79.89 62.61 62.89 44.83 65.06 81.01 66.10 67.10 46.01 Deeplabv3+ (Chen et al. 2018) 45.21 66.10 44.90 44.39 25.45 60.90 79.60 61.96 61.48 40.57 63.19 81.93 64.66 63.04 43.14 L-seg (Guo et al. 2019) 65.15 79.45 63.74 71.13 46.27 DNL (Yin et al. 2020) 42.28 57.67 44.80 47.03 19.61 57.94 73.15 61.87 63.96 32.78 59.09 75.12 64.04 64.73 32.48 HRNetV2 (Wang et al. 2020) 47.52 66.57 45.56 50.99 26.98 63.14 79.93 62.58 67.53 42.49 64.93 82.09 65.50 68.38 43.76 Twins-SVT-B (Chu et al. 2021) 47.07 64.68 44.91 51.76 26.92 62.79 78.56 61.98 68.19 42.42 63.84 80.09 63.12 68.86 43.27 TransUnet (Chen et al. 2021) 46.49 67.76 47.33 46.46 24.42 62.74 79.89 62.47 64.02 44.57 63.23 80.01 66.91 62.85 43.10 Swin-Unet (Cao et al. 2022) 47.76 66.26 48.36 47.54 28.86 63.53 79.71 65.19 64.43 44.79 64.48 81.34 66.57 64.91 45.10 Swin-Tv2 (Liu et al. 2022) 48.09 67.22 49.33 45.26 30.55 62.99 80.12 65.71 62.25 43.90 64.86 83.11 68.20 65.12 43.02 PMCNet (He et al. 2022) 43.12 56.02 68.08 87.24 71.11 67.05 46.94 M2MRF (Liu et al. 2023c) 48.56 66.07 48.58 48.16 31.42 64.45 79.57 65.39 65.01 47.81 66.00 81.98 67.41 66.68 47.91 HACDR-Net (Ours) 49.12 64.61 56.21 47.35 28.31 64.71 78.50 71.96 64.27 44.12 68.79 86.90 76.73 68.50 43.02 Table 1: Comparison of our proposed HACDR-Net with the state-of-the-art methods on the IDRiD dataset. The best results are highlighted in bold and the second best results are underlined. (Unit: %) prediction, which also can strengthen the robustness of training. Next, we propose a hard example mining of noise-adding pixels. The training focuses on the pixels where the noiseadding predictions are seriously wrong. The specific loss function NALoss is as follows: LNA = LF(LF < θ), (11) where θ represents the threshold. By hard example mining, NALoss significantly improves the segmentation performance of lesions. As for the reason that we use hard example mining on noise-adding pixels, on the one hand, we found that the minor prediction loss caused by noise can be ignored. In this way, HACDR-Net can both focus on those lesion errors with low presence and reduce the negative impact of perturbations on the background. On the other hand, if we simply use hard example mining, the training will still be dominated by background pixels, which cannot solve the problems of training process. Attentively, we divide the training into two stages. In the initial stage, we use the Cross-Entropy loss function, and then employ NALoss for training. Experiments Datasets and Evaluation Metrics Dataset. Two publicly available DR multi-lesion segmentation datasets are adopted, i.e., the Indian Diabetic Retinopathy Image Dataset (IDRiD), A General-purpose High-quality Dataset for Diabetic Retinopathy Classification, Lesion Segmentation and Lesion Detection (DDR). These datasets consist of images with a background category and four kinds of lesion categories. The four types of lesions include hard exudates (EX), soft exudates (SE), microangiomas (MA), and hemorrhages (HE). 
DDR: The DDR (Li et al. 2019) dataset contains 757 images of fundus lesions with pixel-level annotations, including 383 images for training, 149 images for validation, and 225 images for testing. The resolution of the images in this dataset ranges from 1088×1920 to 3456×5184 pixels. IDRiD: The IDRiD (Porwal et al. 2018) dataset only contains 81 images of fundus lesions with pixel-level annotations, including 54 images for training and 27 images for testing. The resolution of the images in this dataset is 2848×4288 pixels. Evaluation Metrics. We follow the protocol suggested by DDR (Li et al. 2019) and IDRiD (Porwal et al. 2018) and report standard metrics including Intersection over Union (IoU) (Rezatofighi et al. 2019), mean Intersection over Union (mIoU) (Rezatofighi et al. 2019), Dice coefficient (Milletari, Navab, and Ahmadi 2016), mean Dice coefficient (mDice) (Milletari, Navab, and Ahmadi 2016), the area under precision-recall curve (AUPR) (Boyd, Eng, and Page 2013) and mean area under precision-recall curve (mAUPR) (Boyd, Eng, and Page 2013). As multi-class segmentation tasks, mDice, mAUPR, and mIoU are core metrics for evaluating performance. Implementation Details Our implementation is based on mmsegmentation (Contributors 2020) libraries. All models are trained on a node with 2 RTX 3090 GPUs. Following M2MRF (Liu et al. 2023c), images in IDRiD are resized to 1440×960 pixels, and we resize the images of DDR to 1280×1280. To enhance the robustness of the model, we use three data augmentation techniques: multiple scaling (0.5∼2.0), rotation (90°, 180°, and 270°), and flipping (horizontal and vertical). Before training, we preprocess the images by contrast, brightness adjustment, and image fusion as used in (Zhou et al. 2019a). It can mitigate variation due to lighting conditions and resolution. The batch size is set to 1∼4 according to different resolutions for these two datasets. AdamW (Loshchilov and Hutter 2017) is applied to train our models. We set the initial learning rate as 0.00006 and employ the poly-learning rate decay policy. Comparison with the State-of-the-Arts Quantitative Comparison. We compare HACDR-Net with other state-of-the-art methods on the DDR and IDRiD The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6346 Methods IoU Dice AUPR mIoU EX SE HE MA mDice EX SE HE MA mAUPR EX SE HE MA HED (Xie and Tu 2015) 27.17 39.50 27.09 29.46 12.63 41.79 56.63 42.61 45.50 22.43 42.97 61.40 43.19 46.68 20.61 PSPNet (Zhao et al. 2017) 24.31 37.31 24.51 26.64 8.75 37.97 54.35 39.37 42.08 16.09 39.23 57.04 42.71 42.32 14.85 DenseUNet (Li et al. 2018) 31.58 41.25 37.58 32.73 14.76 47.02 58.41 54.63 49.32 25.73 48.29 62.00 55.01 51.11 25.05 Deeplabv3+ (Chen et al. 2018) 26.47 41.44 23.44 26.46 14.55 40.95 58.59 37.97 41.83 25.40 42.34 62.32 40.79 41.83 24.39 L-seg (Guo et al. 2019) 32.08 55.46 35.86 26.48 10.52 DNL (Yin et al. 2020) 24.33 36.39 27.15 25.33 8.46 38.02 53.36 42.71 40.40 15.60 40.14 56.05 47.81 42.01 14.71 HRNetV2 (Wang et al. 2020) 28.84 41.82 29.01 28.94 15.60 43.95 58.98 44.96 44.86 26.99 45.21 61.55 45.68 46.91 26.70 Twins-SVT-B (Chu et al. 2021) 29.28 39.70 29.08 36.24 12.07 44.15 56.83 45.04 53.19 21.54 46.11 59.71 49.96 52.72 21.54 TransUnet (Chen et al. 2021) 27.78 39.76 37.43 22.46 11.47 42.15 56.89 54.47 36.68 20.57 44.03 60.57 55.46 41.49 18.58 Swin-Unet (Cao et al. 2022) 30.07 42.64 33.82 30.62 13.19 45.10 59.79 50.53 46.77 23.31 46.72 62.71 54.39 46.12 23.67 Swin-Tv2 (Liu et al. 
2022) 32.10 44.07 36.12 32.77 15.44 47.86 61.18 54.14 49.36 26.75 48.59 63.05 55.01 50.11 26.20 PMCNet (He et al. 2022) 36.44 54.30 31.64 31.64 19.94 M2MRF (Liu et al. 2023c) 30.41 43.06 30.56 32.08 15.95 45.77 60.20 46.81 48.58 27.51 49.42 63.88 55.47 50.01 28.33 HACDR-Net (Ours) 33.70 44.13 38.54 36.95 15.17 49.30 61.24 55.64 53.96 26.34 50.36 65.15 56.75 55.02 24.50 Table 2: Comparison of our proposed HACDR-Net with the state-of-the-arts methods on the DDR dataset. The best results are highlighted in bold and the second best results are underlined. (Unit: %) Figure 5: Visual Comparison of 4 methods on the DDR dataset. The colored boxes show the main lesions. (a) Fundus Image, (b) DenseUnet, (c) SwinV2, (d) M2MRF, (e) HACDR-Net (Ours), (f) Ground Truth. dataset. These compared methods are mainly divided into three categories, including Convolutional networks, Transformer-based networks, and previous DR multi-lesion segmentation networks. Convolutional networks include HED (Xie and Tu 2015), PSPNet (Zhao et al. 2017), DenseUNet (Li et al. 2018), Deeplabv3+ (Chen et al. 2018), DNL (Yin et al. 2020), HRNetV2 (Wang et al. 2020). Transformer-Based networks include Swin-T-base (Liu et al. 2021), Twins-SVT-B (Chu et al. 2021), TransUnet (Chen et al. 2021), Swin-Unet (Cao et al. 2022), Swin-transformer V2 (Swin-Tv2) (Liu et al. 2022). Previous DR multi-lesion segmentation networks include L-seg (Guo et al. 2019), PMCNet (He et al. 2022), M2MRF (Liu et al. 2023c). Table 1 lists the performance of different comparison methods on the IDRiD dataset. Likewise, compared with the previous best method M2MRF, our metrics mAUPR, mDice, and mIoU improve respectively by 2.79%, 0.26%, and 0.56%. Table 2 lists the performance of different comparison methods on the DDR dataset. Our HACDR-Net shows the best performance with all methods. Compared with the previous best method M2MRF, our metrics mAUPR, mDice, and mIoU improve respectively by 0.94%, 3.53%, and 3.29%. In the two datasets, not only the mean metrics but also the category metrics have improved. All in all, these Methods mDice mIoU mAUPR MD 46.55 31.04 46.50 Bα 47.35 32.01 48.01 Bθ 46.70 31.32 46.89 Bα + MD 48.02 32.75 48.50 Bθ + MD 48.45 33.04 49.21 Bα + Bθ + Bκ + MD 49.30 33.70 50.36 Bα + Bθ + MD 49.04 33.26 49.92 w/o HAAA 45.31 30.43 45.70 w/o DFFN 49.09 33.42 49.70 Table 3: Ablation study in HACDR-Net on the DDR dataset. HACDR-Net overall is represented as ’Bα+Bθ+Bκ+MD’, where ’Bα’, ’Bθ’, ’Bκ’, and ’MD’ indicate MLKCC branch, DCC branch, ONFA branch, and the Modulated Deformable Convolution respectively. ’w/o HAAA’ and ’w/o DFFN’ denote that HAAA and DFFN are removed from the overall model, respectively. The best results are highlighted in bold. (Unit: %) quantitative results on two datasets substantiate the fine robustness of our HACDR-Net. Visual Comparison. Fig. 5 shows the qualitative results of different methods on the DDR dataset, including DenseUnet, M2MRF, Swin-transformer v2 (Swin-Tv2), and our HACDR-Net. It demonstrates that our HACDR-Net can precisely segment lesions of different shapes and sizes compared with other methods. Ablation Study Through experiments, we analyze the contributions of each component in our model, detailed in Table 3. We assess the model performance by removing HAAA, DFFN, and NALoss components. Additionally, we employ T-SNE (Van der Maaten and Hinton 2008) for feature visualization and conduct a hyperparameter analysis. 
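Since the ablation also inspects the encoder features with T-SNE, a small, generic visualization sketch is given below; the feature and label arrays are random placeholders standing in for pixels sampled from the last encoder layer, and the plotting choices are ours.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# feats: (num_pixels, channels) features sampled from the last encoder layer,
# labels: (num_pixels,) background/lesion class of each sampled pixel (placeholders here).
feats = np.random.randn(2000, 64)
labels = np.random.randint(0, 5, size=2000)

emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(feats)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=3, cmap="tab10")
plt.savefig("tsne_features.png", dpi=200)
```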
Ablation experiments were conducted on two public datasets, with a focus on the more representative DDR dataset due to its diverse The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6347 Structure mDice mIoU mAUPR ψ 48.55 32.71 49.10 τ 48.02 32.01 48.01 υ 47.72 31.32 48.05 ψ + τ 49.07 32.52 49.44 ψ + υ 48.60 32.79 49.50 τ + υ 49.10 33.04 50.03 τ + υ + ψ 49.30 33.70 50.36 Table 4: Ablation study in HAAA on the DDR dataset. ψ is represented as 1×1 convolution, τ is represented as 5×5 modulated deformable convolution, and υ is represented as attention. The best results are highlighted in bold. (Unit: %) Loss Function mDice mIoU mAUPR ζCE 47.04 31.67 47.32 ζW CE 47.88 32.34 48.51 ζDice 46.50 30.81 46.51 ζCE + ζDice 46.95 31.34 46.75 ζOHEM 47.12 31.71 47.95 ζNA + ζW CE 49.08 33.41 50.01 ζNA 49.30 33.70 50.36 Table 5: Ablation study of loss function on the DDR dataset. The top outcomes are shown in bold. (Unit: %) image resolutions and qualities. Effectiveness of Heterogeneous Branches. Table 3 shows that the heterogeneous features formed by HACDRNet have greatly improved various metrics compared with a single branch. Among them, the combination of branches represented by ’Bα’, ’Bθ’, ’Bκ’, compared with a single branch, our metrics mAUPR, mDice, and mIoU improve respectively by 2.35%∼3.86%, 1.95%∼2.75%, and 1.69%∼2.66%. ’Bκ’ (k = 17) has a significant biased effect on lesions of different scales. The details of ’Bκ’ will be introduced in Hyper-Parameters. Overall, our results shed new light on the importance of heterogeneous-aware convolution for improving DR multi-lesion segmentation performance. Effectiveness of HAAA and DFFN Module. HAAA collects and combines heterogeneous features in HACDR-Net. Table 3 demonstrates its significance, as mAUPR, mDice, and mIoU drop by 4.66%, 3.99%, and 3.27% without HAAA. Table 4 verifies the essentiality of HAAA’s key components: 1×1 convolution for aggregation, 5×5 deformable convolution for adaptive features, and attention mechanism. DFFN is crucial for aggregating heterogeneous features and without it, mAUPR, mDice, and mIoU decrease by 0.66%, 0.21%, and 0.28%, respectively. Effectiveness of NALoss. Table 5 illustrates that our NALoss outperforms the DR multi-lesion segmentation dataset with imbalanced data. Abbreviations used include ζCE for Cross-Entropy loss, ζW CE for Weighted CrossEntropy loss (Ronneberger, Fischer, and Brox 2015), ζDice for Dice loss (Li et al. 2020), ζOHEM for Ohem loss (Shrivastava, Gupta, and Girshick 2016), and ζNA for our Figure 6: The deep features are visualized using T-SNE in (a) DenseU-net, (b) HACDR-Net+WCEloss, and (c) HACDR-Net+NALoss. Figure 7: Evaluation of the hyperparameters. Comparative analysis of (a) the kernel k of Optional near-far-aware Convolution Bκ, and (b) the threshold for NALoss θ. (Unit: %) NALoss. Our HACDR-Net is the baseline network. We surpass mainstream loss functions, achieving optimal outcomes (mAUPR:+1.85%, mIoU:+1.36%, mDice:+1.42%). Visualization of Deep Features. T-SNE (Van der Maaten and Hinton 2008) is used to obtain 2D embeddings and visualize the deep features of the last encoder layer. As shown in Fig. 6, the lesion feature class generated by our network with NALoss is more compact, the difference between different classes is clearer, and the segmentation effect is improved. Hyper-Parameters. 
We evaluate the influence of two core parameters on the model, one is the size of the convolution kernel k of ONFA branch Bκ, and the other is the different threshold for NALoss θ, As in Fig. 7. ONFA branch adapts receptive field based on lesion size. The kernel k denotes convolution receptive field size. A larger kernel (k = 17) works well for EX segmentation, while a smaller kernel (k = 5) is good for MA. However, the best overall performance is achieved with a moderate k value (k = 9). Next, we examine the impact of various thresholds θ for NALoss. Setting θ to 0.5 yields the best results. Conclusion We introduce a new network, HACDR-Net, for DR multilesion segmentation. This network addresses the challenge of segmenting lesions of different shapes and sizes in DR images. Moreover, we propose a new loss function, NALoss, to handle imbalanced segmentation requirements. Our experiments show that HACDR-Net outperforms other methods in DR multi-lesion segmentation. In future work, we aim to enhance multi-lesion segmentation in DR images using multi-modal technology. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6348 Acknowledgments This work was supported in part by the National Natural Science Foundation of China under Grant 62301621, the Stable Support Projects for Shenzhen Higher Education Institutions under grant no. 20231122005530001, the Science and Technology Innovation Committee of Shenzhen Municipality under grant no. GJHZ20210705141812038. References Boyd, K.; Eng, K. H.; and Page, C. D. 2013. Area under the precision-recall curve: point estimates and confidence intervals. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2013, Prague, Czech Republic, September 23-27, 2013, Proceedings, Part III 13, 451–466. Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; and Wang, M. 2022. Swin-unet: Unet-like pure transformer for medical image segmentation. In European conference on computer vision, 205–218. Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A. L.; and Zhou, Y. 2021. Transunet: Transformers make strong encoders for medical image segmentation. arXiv preprint arXiv:2102.04306. Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; and Adam, H. 2018. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV), 801–818. Chu, X.; Tian, Z.; Wang, Y.; Zhang, B.; Ren, H.; Wei, X.; Xia, H.; and Shen, C. 2021. Twins: Revisiting the design of spatial attention in vision transformers. Advances in Neural Information Processing Systems, 34: 9355–9366. Contributors, M. 2020. MMSegmentation: Openmmlab semantic segmentation toolbox and benchmark. Cui, C.; Ren, Y.; Pu, J.; Li, J.; Pu, X.; Wu, T.; Shi, Y.; and He, L. 2023. A Novel Approach for Effective Multi-View Clustering with Information-Theoretic Perspective. arXiv preprint arXiv:2309.13989. Diakogiannis, F. I.; Waldner, F.; Caccetta, P.; and Wu, C. 2020. ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data. ISPRS Journal of Photogrammetry and Remote Sensing, 162: 94–114. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Guo, M.-H.; Lu, C.-Z.; Liu, Z.-N.; Cheng, M.-M.; and Hu, S.-M. 2023. Visual attention network. 
Computational Visual Media, 1–20. Guo, S.; Li, T.; Kang, H.; Li, N.; Zhang, Y.; and Wang, K. 2019. L-Seg: An end-to-end unified framework for multilesion segmentation of fundus images. Neurocomputing, 349: 52–63. He, A.; Wang, K.; Li, T.; Bo, W.; Kang, H.; and Fu, H. 2022. Progressive Multiscale Consistent Network for Multiclass Fundus Lesion Segmentation. IEEE Transactions on Medical Imaging, 41(11): 3146–3157. Huang, S.; Li, J.; Xiao, Y.; Shen, N.; and Xu, T. 2022. RTNet: relation transformer network for diabetic retinopathy multi-lesion segmentation. IEEE Transactions on Medical Imaging, 41(6): 1596–1607. Li, T.; Gao, Y.; Wang, K.; Guo, S.; Liu, H.; and Kang, H. 2019. Diagnostic assessment of deep learning algorithms for diabetic retinopathy screening. Information Sciences, 501: 511–522. Li, X.; Chen, H.; Qi, X.; Dou, Q.; Fu, C.-W.; and Heng, P.A. 2018. H-DenseUNet: hybrid densely connected UNet for liver and tumor segmentation from CT volumes. IEEE Transactions on Medical Imaging, 37(12): 2663–2674. Li, X.; Sun, X.; Meng, Y.; Liang, J.; Wu, F.; and Li, J. 2020. Dice Loss for Data-imbalanced NLP Tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 465–476. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; and Doll´ar, P. 2017. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, 2980–2988. Ling, Y.; Chen, J.; Ren, Y.; Pu, X.; Xu, J.; Zhu, X.; and He, L. 2023. Dual label-guided graph refinement for multi-view graph clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 8791–8798. Liu, C.; Wen, J.; Luo, X.; Huang, C.; Wu, Z.; and Xu, Y. 2023a. DICNet: Deep Instance-Level Contrastive Network for Double Incomplete Multi-View Multi-Label Classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 8807–8815. Liu, C.; Wen, J.; Luo, X.; and Xu, Y. 2023b. Incomplete Multi-View Multi-Label Learning via Label-Guided Masked View- and Category-Aware Transformers. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 8816–8824. Liu, Q.; Liu, H.; Ke, W.; and Liang, Y. 2023c. Automated lesion segmentation in fundus images with many-to-many reassembly of features. Pattern Recognition, 136: 109191. Liu, Z.; Hu, H.; Lin, Y.; Yao, Z.; Xie, Z.; Wei, Y.; Ning, J.; Cao, Y.; Zhang, Z.; Dong, L.; et al. 2022. Swin transformer v2: Scaling up capacity and resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 12009–12019. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision, 10012–10022. Loshchilov, I.; and Hutter, F. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. Milletari, F.; Navab, N.; and Ahmadi, S.-A. 2016. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 fourth international conference on 3D vision (3DV), 565–571. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6349 Porwal, P.; Pachade, S.; Kamble, R.; Kokare, M.; Deshmukh, G.; Sahasrabuddhe, V.; and Meriaudeau, F. 2018. Indian diabetic retinopathy image dataset (IDRiD): a database for diabetic retinopathy screening research. Data, 3(3): 25. Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; and Savarese, S. 2019. 
Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 658–666. Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, 234–241. Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; and Batra, D. 2017. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, 618–626. Shrivastava, A.; Gupta, A.; and Girshick, R. 2016. Training region-based object detectors with online hard example mining. In Proceedings of the IEEE conference on computer vision and pattern recognition, 761–769. Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of machine learning research, 9(11). Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al. 2020. Deep high-resolution representation learning for visual recognition. IEEE transactions on pattern analysis and machine intelligence, 43(10): 3349–3364. Wang, Q.; Tao, Z.; Gao, Q.; and Jiao, L. 2022. Multi-view subspace clustering via structured multi-pathway network. IEEE Transactions on Neural Networks and Learning Systems. Wang, Y.; Fei, J.; Wang, H.; Li, W.; Bao, T.; Wu, L.; Zhao, R.; and Shen, Y. 2023. Balancing Logit Variation for Long-tailed Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19561–19573. Xie, S.; and Tu, Z. 2015. Holistically-nested edge detection. In Proceedings of the IEEE international conference on computer vision, 1395–1403. Xu, J.; Tang, H.; Ren, Y.; Peng, L.; Zhu, X.; and He, L. 2022. Multi-level feature learning for contrastive multi-view clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16051–16060. Yin, M.; Yao, Z.; Cao, Y.; Li, X.; Zhang, Z.; Lin, S.; and Hu, H. 2020. Disentangled non-local neural networks. In Proceedings of the European conference on computer vision (ECCV), 191–207. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; and Jia, J. 2017. Pyramid scene parsing network. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2881– 2890. Zhou, Y.; He, X.; Huang, L.; Liu, L.; Zhu, F.; Cui, S.; and Shao, L. 2019a. Collaborative Learning of Semi-Supervised Segmentation and Classification for Medical Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Zhou, Z.; Siddiquee, M. M. R.; Tajbakhsh, N.; and Liang, J. 2019b. Unet++: Redesigning skip connections to exploit multiscale features in image segmentation. IEEE Transactions on Medical Imaging, 39(6): 1856–1867. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6350 | 2024 | 705 |
18,524 | Learning Invariant Inter-pixel Correlations for Superpixel Generation Sen Xu1,2, Shikui Wei*1,2, Tao Ruan3, Lixin Liao4 1Institute of Information Science, Beijing Jiaotong University 2Beijing Key Laboratory of Advanced Information Science and Network Technology 3Frontiers Science Center for Smart High-speed Railway System, Beijing Jiaotong University 4 DaoAI Robotics Inc. {senxu1, shkwei, ruantao} @bjtu.edu.cn, [email protected] Abstract Deep superpixel algorithms have made remarkable strides by substituting hand-crafted features with learnable ones. Nevertheless, we observe that existing deep superpixel methods, serving as mid-level representation operations, remain sensitive to the statistical properties (e.g., color distribution, high-level semantics) embedded within the training dataset. Consequently, learnable features exhibit constrained discriminative capability, resulting in unsatisfactory pixel grouping performance, particularly in untrainable application scenarios. To address this issue, we propose the Content Disentangle Superpixel (CDS) algorithm to selectively separate the invariant inter-pixel correlations and statistical properties, i.e., style noise. Specifically, We first construct auxiliary modalities that are homologous to the original RGB image but have substantial stylistic variations. Then, driven by mutual information, we propose the local-grid correlation alignment across modalities to reduce the distribution discrepancy of adaptively selected features and learn invariant inter-pixel correlations. Afterwards, we perform globalstyle mutual information minimization to enforce the separation of invariant content and train data styles. The experimental results on four benchmark datasets demonstrate the superiority of our approach to existing state-of-theart methods, regarding boundary adherence, generalization, and efficiency. Code and pre-trained model are available at https://github.com/rookiie/CDSpixel. Introduction Superpixel segmentation (Achanta et al. 2012; Liu et al. 2011; Achanta and Susstrunk 2017; Yang et al. 2020; Wang et al. 2021) divides an image into compact and contiguous regions of pixels based on specific criteria such as color similarity, texture, or brightness. Compared to pixel-level processing, superpixels significantly reduce the number of image primitives and preserve the structural information, thus improving the efficiency and accuracy of many downstream tasks, i.e., semantic segmentation (He et al. 2015; Zhu et al. 2014; Kwak, Hong, and Han 2017; Gadde et al. 2016), stereo matching (Yang et al. 2020; Birchfield and Tomasi 1999; Wang and Zheng 2008), self-supervised pretraining (Sautier et al. 2022), image classification (Zhao, Zhu, and *Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 𝐻! 𝐻" domain shift 𝐻!"# (a) (b) decision boundary Figure 1: Motivation. (a) Visualization of the t-SNE distributions on the BSDS dataset. From left to right are the baseline and our CDS. After applying color inversion, the feature distribution of baseline displays a noticeable decision boundary. In contrast, the feature distribution extracted by CDS is more compact (i.e.,[-80, +80] vs [-40, +40]) and indivisible. (b) Gradually modifying the stylistic information of both auxiliary (Ht) and original data (Ho) enhances the purity of the shared invariant inter-pixel correlations (Hinv). Feng 2022; Guo et al. 2018), etc. 
The usual practice of traditional superpixel algorithms is to initialize a regular grid of superpixels and iteratively adjust the association of pixels and neighboring superpixels. Afterward, deep learning algorithms leverage image features learned by neural networks to replace hand-crafted features, i.e., XYlab used by traditional methods, significantly improving the performance of superpixel algorithms. However, although learnable features have been proven effective, they also introduce new challenges. Superpixel segmentation, as a mid-level image representation task, needs to adapt to a variety of open-world scenarios. Traditional superpixel algorithms employ independent online processes (e.g., clustering or graph-cut), which remain unaffected by inter-instance influences. In contrast, deep superpixel algorithms carry the potential risk of learning the unique data distribution present in the training set. To validate this potential risk, we conducted a straightforward experimental verification in Fig. 1(a). Given that this concern reflects the algorithm’s generalization ability, we opted for SCN (Yang et al. 2020), a deep superpixel algorithm renowned for its solid generalization, for experimental validation. We applied an inversion operation (subtracting each pixel value from 255) to simulate variations in the data distribution within the test set. While the inter-pixel correlaThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6351 tions of identical images should remain constant through the linear transformation of RGB values during superpixel generation, the features extracted by the baseline algorithm exhibit a discernible decision boundary. Hence, learnable superpixels not only encompass a pure representation of interpixel correlations, but are also influenced by stylistic noise from the training set, such as high-level semantics and color distribution. Among the previous works, this issue is overlooked even though necessary. To address this issue, we introduce a novel deep superpixel method named the Content Disentangle Superpixel (CDS) algorithm to disentangle the dataset-specific style from the invariant content, i.e., the inter-pixel correlation used for superpixel segmentation. Our CDS achieve the objective of decoupling by constructing auxiliary data. As superpixel segmentation focuses not on high-level semantics but on pixel correlations, auxiliary data, despite altering style, do not disrupt the inherent pixel relationships. As illustrated in Fig.1(b), gradually modifying the stylistic information of both auxiliary and original data enhances the purity of the shared invariant inter-pixel correlations. Concretely, our method consists of three parts in order: feature extraction, content disentangle (CD), and superpixel generation. In the CD phrase, we feed the modality embeddings into the content selective gate to adaptively decouple content features and style noise. Afterwards, we propose the superpixel-grid correlation alignment to ensure that the selected content features from two modalities have the same pixel correlations. To prevent the occurrence of degenerate solutions in the content selective gate above. We enforce constraints to encourage the dataset/modal-specific features to have smaller mutual information while avoiding degradation of the selected modality style. Finally, a modalityshared superpixel decoder is designed to predict superpixel associations. Moreover, the auxiliary modality is only used in the training phase. 
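For concreteness, the style-shift probe used in Fig. 1(a) amounts to a single linear transform of the RGB values; a minimal sketch (with a random stand-in image) is shown below.

```python
import numpy as np

def invert_rgb(image_uint8):
    """Color-inversion probe: subtract each pixel value from 255.
    A linear transform of the RGB values, so inter-pixel relations are preserved
    while the color statistics ("style") of the test data shift drastically."""
    return 255 - image_uint8

img = (np.random.rand(208, 208, 3) * 255).astype(np.uint8)  # stand-in image
img_inv = invert_rgb(img)
# Feeding both img and img_inv through a superpixel network and comparing the
# extracted features (e.g., with t-SNE as in Fig. 1(a)) exposes style sensitivity.
```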
In summary, our contributions are: • We discover that existing deep superpixel algorithms depend on the distribution of training data and propose the CDS algorithm to ensure that learnable superpixels have a high generalization and boundary adherence power. • During the training phase, CDS introduces an auxiliary modality to help decouple the correlated features among pure pixels in the RGB modality; while performing inference using only RGB. Compared to previous work, our algorithm does not increase additional computational burden but effective. • Experimental results on datasets from four different domains indicate the superiority of our approach to existing superpixel algorithms. In addition, we demonstrate that our method improves the performance of downstream tasks. Related Work Traditional Superpixel Algorithms. The research of traditional superpixel algorithms (Achanta et al. 2012; Bergh et al. 2012; Felzenszwalb and Huttenlocher 2004; Li and Chen 2015; Liu et al. 2011; Grady 2006; Meyer 1992) has a long timeline. In times of scarce parallel computing resources, traditional superpixel algorithms can reduce image information redundancy and are often used to improve the computational efficiency of downstream tasks. Traditional superpixel algorithms are mainly classified into graphbased, clustering-based, and energy-based approaches. Concretely, graph-based methods, e.g., ERS (Liu et al. 2011), treat superpixel segmentation as a graph partitioning problem. The image is formulated as an undirected graph, and the edge weight indicates the inter-pixel similarity. Clusteringbased methods, e.g., SLIC (Achanta et al. 2012), SNIC (Achanta and Susstrunk 2017), and LSC (Li and Chen 2015), initial the superpixels with seed pixels and apply cluster algorithms like k-means to adjust the pixel association. Energy-based methods, e.g., SEEDS (Bergh et al. 2012) and ETPS (Yao et al. 2015), first partition the image into regular grids and leverage different energy functions as an objective to exchange the pixels between neighboring superpixels. Traditional superpixel algorithms usually use CIELAB colors, concatenated with two-dimensional position encoding as pixel features. Deep Superpixel Algorithms. SEAL (Tu et al. 2018) combines the neural networks with the ERS (Liu et al. 2011) by proposing the segmentation-aware affinity loss to learn cluster-friendly features. SEAL is not end-to-end trainable. To address this problem, Jampani et al. relaxes the nearest neighbor constraints of SLIC and develops the first differentiable deep algorithm SSN (Jampani et al. 2018). Since both SEAL and SSN require iterative traditional superpixel operations to complete the segmentation, SCN (Yang et al. 2020) model the superpixel segmentation as a classification problem between each pixel and its neighboring nine superpixels, which significantly improves the computational efficiency of superpixel segmentation. Based on the SCN, Wang et al. (Wang et al. 2021) propose the association implantation module, i.e., AINet, to enable the network to enhance the relations between the pixel and its surrounding grids. Among them, SCN and AINet are the SOTA algorithms with optimal performance. Style Removal Algorithms. To better understand our work, we introduce the style removal techniques related to our Motivation. Style removal is not an independent visual task, often used as a technical means to solve problems such as style transfer and domain generalization. Existing methods mainly focus on two aspects: Normalization (Pan et al. 
2018; Ulyanov, Vedaldi, and Lempitsky 2017; Huang and Belongie 2017) and Whitening (Li et al. 2017; Pan et al. 2019; Cho et al. 2019; Choi et al. 2021). In particular, (Ulyanov, Vedaldi, and Lempitsky 2017) propose instance normalization to prevent overfitting to the domain-specific style of the training data. (Pan et al. 2018) achieve significant performance improvements by incorporating IN layers to capture style-invariant information. (Li et al. 2017) visually validate that image style resides mainly in the correlations between feature channels and propose the whitening transform to extract image content. Channel whitening computes the covariance matrix across all channels, which incurs excessive computational complexity (O(n²)). In (Cho et al. 2019), the authors introduce the group-wise instance whitening (GIW) transform to improve time efficiency. Overall, both instance-level normalization and whitening make the feature channels independent of each other.

Preliminaries

Superpixel segmentation partitions an input image $I \in \mathbb{R}^{H\times W\times 3}$ into a set of $n$ superpixels $S = \{S_1, S_2, \ldots, S_n\}$, where each superpixel $S_i$ consists of a group of adjacent pixels with similar characteristics. Mathematically, superpixel algorithms aim to compute the pixel-superpixel association map $Q$. Since computing the association between each pixel $p$ and all $n$ superpixels has a high computational complexity, the latest deep superpixel approaches (Yang et al. 2020) formulate superpixel segmentation as a local classification problem between a pixel $p$ and its nine neighboring superpixel grids, which improves time efficiency. This also serves as our theoretical foundation. Concretely, the image $I$ is initialized into regular grids, and $Q \in \mathbb{R}^{H\times W\times 9}$ gives the probability that each pixel $p$ is assigned to each of its nine nearby superpixel grids. The association map $Q$ is predicted by a series of CNNs. Since no label is available for this output, deep algorithms design a superpixel loss inspired by the k-means convergence condition. Formally, let $f(p)$ be the pixel properties, i.e., a semantic one-hot vector and a positional encoding; we first obtain the superpixel cluster-center properties $g(s)$ from $Q$ and $f(p)$, and then reconstruct the pixel property:

$g(s) = \frac{\sum_{\{p \mid s\in N_p\}} f(p)\cdot Q_s(p)}{\sum_{\{p \mid s\in N_p\}} Q_s(p)}, \qquad f'(p) = \sum_{s\in N_p} g(s)\cdot Q_s(p). \quad (1)$

Here $N_p$ denotes the set of superpixels adjacent to $p$, and $Q_s(p)$ is the predicted probability that pixel $p$ is assigned to superpixel $s$. Superpixel segmentation then amounts to minimizing the reconstruction distance

$L_{sp}(Q) = \sum_{p} \mathrm{Dis}\big(f(p), f'(p)\big). \quad (2)$

Following previous works (Yang et al. 2020; Wang et al. 2021), we use the cross-entropy and the Euclidean distance as the distance measures for the semantic label and the position vector, respectively.
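A compact sketch of the reconstruction objective in Eqs. (1)-(2) is given below. It is a simplification rather than the reference implementation: it uses a dense pixel-superpixel association instead of the sparse nine-neighbour map Q, and a plain Euclidean distance for all property channels; all names are ours.

```python
import torch

def superpixel_reconstruction_loss(f, assoc):
    """Simplified dense form of Eqs. (1)-(2): `assoc` is a (P, S) soft association
    between P pixels and S candidate superpixels (the paper restricts it to the
    nine neighbouring grids); f is a (P, D) matrix of pixel properties f(p)."""
    centres = (assoc / assoc.sum(dim=0, keepdim=True).clamp_min(1e-8)).t() @ f  # g(s), Eq. (1)
    f_rec = assoc @ centres                                                     # f'(p), Eq. (1)
    return ((f - f_rec) ** 2).sum(dim=1).mean()                                 # Eq. (2), Euclidean Dis

# toy usage: 100 pixels, 9 candidate superpixels, 5-dimensional property vectors
assoc = torch.softmax(torch.randn(100, 9), dim=1)
f = torch.randn(100, 5)
print(superpixel_reconstruction_loss(f, assoc))
```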
Proposed Algorithm

In this section, we present the Content Disentangle Superpixel (CDS) algorithm (shown in Fig. 2), which aims to learn invariant inter-pixel correlations and suppress style noise for superpixel generation. We first introduce the auxiliary modality and feature extraction, then present the content disentangle mechanism, and finally describe the superpixel decoder. Figure 2: Flowchart of the proposed content disentangle superpixel algorithm.

Auxiliary Modal and Feature Extraction

Auxiliary Modal. As shown in Fig. 1(b), we construct samples with large domain offsets that are homogeneously sourced from the raw data, in order to separate the inter-pixel correlation information. The constructed auxiliary samples need to (1) preserve the pixel interrelations intact and (2) exhibit significant stylistic differences. Consequently, we adopt an auxiliary modality. Modality refers to the way in which something happens or is experienced (Baltrušaitis, Ahuja, and Morency 2018), and different modalities, i.e., the HSV and LAB color-space transforms and the gradient map, describe objects from different perspectives. Although these modalities do not exhibit modality barriers as large as heterogeneous data such as text and images, they vary substantially in their pixel-level descriptions.

Modal Encoder. Different from (Yang et al. 2020; Wang et al. 2021), to facilitate the decoupling of learnable image features, we remain in an explicit pixel-level feature extraction phase. For an image $I$ and its auxiliary modality sample $A \in \mathbb{R}^{H\times W\times 3}$, the process is formulated as

$\Psi(I) = P\{\Phi(I; \theta) + I;\ \theta^*\}, \quad (3)$
$\Psi(A) = P\{\Phi(A; \gamma) + A;\ \gamma^*\}, \quad (4)$

where $\Phi$ is a series of convolution layers modified from the CNN part of SSN (Jampani et al. 2018), adopted for simplicity and efficiency. Since we formulate superpixels as a local classification problem, we do not concatenate positional encodings as SSN does. Additionally, to prevent the loss of pixel information, we use a non-linear mapping function $P\{\cdot\}$ to aggregate the original pixel values. $\theta, \theta^*$ and $\gamma, \gamma^*$ denote the parameters for the two modalities, respectively.

Content Disentangle Mechanism

After feature extraction, we obtain the pixel-level embeddings of the inputs, i.e., $F_I, F_A \in \mathbb{R}^{C\times H\times W}$. In this section, we introduce the content disentangle mechanism, which adaptively selects the inter-pixel correlation information and consists of three operations.

Content Selective Gate is a learnable filter that separates the pixel embedding $F_{A,I}$ into content features $C_{A,I} \in \mathbb{R}^{C\times H\times W}$ and global modality/dataset style vectors $V_{A,I} \in \mathbb{R}^{C}$. As discussed in the related work, image style resides mainly in the correlations between feature channels (Li et al. 2017). More importantly, superpixels do not consider global semantics as classification tasks do, and each position in the spatial domain is equally significant. Therefore, the content selective gate is designed as a shared-weight channel-wise attention strategy:

$C_i = \mathrm{Gate}(F_i)\cdot F_i, \qquad V_i = \mathrm{avgpool}\big((1-\mathrm{Gate}(F_i))\cdot F_i\big), \quad (5)$

where $i \in \{A, I\}$, avgpool is channel-wise average pooling, and

$\mathrm{Gate}(F_i) = \sigma\big(W_2 \cdot \delta(W_1 \cdot \mathrm{avgpool}(F_i))\big). \quad (6)$

Here, $W_1$ and $W_2$ are the parameters of two fully-connected layers, and $\delta(\cdot)$ and $\sigma(\cdot)$ are the ReLU and sigmoid functions, respectively.
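A minimal PyTorch sketch of the content selective gate (Eqs. (5)-(6)) follows; the bottleneck reduction ratio and the module name are our assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class ContentSelectiveGate(nn.Module):
    """Shared channel-wise gate that splits a feature map into content features C
    and a global style vector V (Eqs. (5)-(6))."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, feat):                                  # feat: (B, C, H, W)
        pooled = feat.mean(dim=(2, 3))                        # channel-wise average pooling
        gate = self.fc(pooled).view(feat.size(0), -1, 1, 1)   # Gate(F), Eq. (6)
        content = gate * feat                                 # C = Gate(F) * F
        style = ((1.0 - gate) * feat).mean(dim=(2, 3))        # V = avgpool((1 - Gate(F)) * F)
        return content, style

gate = ContentSelectiveGate(channels=20)
c, v = gate(torch.randn(2, 20, 32, 32))
print(c.shape, v.shape)   # (2, 20, 32, 32) and (2, 20)
```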
Local-grid Correlation Alignment is proposed to enforce the auxiliary and primary modalities to learn similar superpixel-friendly invariant features, i.e., the correlations between pixels. Figure 3: Illustration of the Local-grid Correlation Alignment (LCA) mechanism. LCA performs spatial-domain distribution alignment at the superpixel level. For superpixels, our objective is to ensure that embeddings of neighboring pixels with similar attributes are highly similar, while those of dissimilar pixels exhibit pronounced distinctions. This perspective highlights that superpixels prioritize capturing variations in feature dissimilarity at each image position rather than fixating on specific numerical values. Consequently, it is essential to guarantee the congruence of the spatial distributions of the auxiliary and primary modalities. As illustrated in Fig. 3, given content features $C_I, C_A \in \mathbb{R}^{C\times H\times W}$ and an initial superpixel grid of size $d\times d$, we first apply average pooling to reduce dimensions, then traverse all superpixel grids and unfold the features in the spatial domain, yielding the feature matrix $Z \in \mathbb{R}^{K\times d^2}$, where $K = (H/d)\times(W/d)$ is the initial number of superpixels. The local-grid correlation alignment constraint is then formulated as

$L_{align} = \frac{1}{K}\sum_{i}^{K} D_{KL}\big(Z_i^I \,\|\, Z_i^A\big). \quad (7)$

Algorithm 1: Training pseudocode for CDS.
Input: input image I and auxiliary modal A.
Output: the superpixel association maps QI and QA.
Initialize components: ΨI, ΨA, Gate, and superpixel decoder D.
Initialize the variational distribution network hθ.
for each training iteration do
  Step 1: update the main network (fix hθ).
    Construct the auxiliary modal A.
    Calculate FI and FA by Eq. 3 and Eq. 4.
    Calculate Ci and Vi by Eq. 5 and Eq. 6.
    Predict QI and QA by feeding Ci into the superpixel decoder D.
    Calculate Lalign, LMI, and Lsp, respectively.
    Update the CDS.
  Step 2: update the variational distribution network.
    Detach VI and VA from the computational graph.
    Calculate L(θ).
    Update the parameters of hθ.
end for

Here, we do not directly compute the global spatial distribution, for three main reasons: (1) to enhance parallel computing efficiency; (2) to prevent the loss of local information, since softmax normalization is applied before computing the KL divergence and a large number of pixels would lead to excessively small local probabilities; and (3) the deep superpixel algorithm is defined as a neighborhood classification problem.

Global-style Mutual Information Minimization. Since alignment alone can lead to information loss or confusion between modalities, and to prevent degenerate solutions of the content selective gate above, we enforce constraints that encourage the dataset/modal-specific features $V$ to have small mutual information. Mathematically, given the style vectors $V_{A,I}$, (Cheng et al. 2020) provide an upper bound on their mutual information:

$I(V^I; V^A) \le \mathbb{E}_{p(V^A, V^I)}\big[\log p(V^I \mid V^A)\big] - \mathbb{E}_{p(V^A)}\mathbb{E}_{p(V^I)}\big[\log p(V^I \mid V^A)\big], \quad (8)$

where $I(V^I; V^A)$ denotes the mutual information. However, $V_{A,I}$ are learnable variables and the conditional distribution $p(V^I \mid V^A)$ is unavailable, so we use a variational distribution $h_\theta$ to approximate $p(\cdot\mid\cdot)$. The proposed MI loss, which minimizes this upper bound, is

$L_{MI} = \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\Big[\log h_\theta(V_i^I \mid V_i^A) - \log h_\theta(V_j^I \mid V_i^A)\Big]. \quad (9)$

Specifically, the variational distribution $h_\theta(V^I \mid V^A)$ is implemented with neural networks and optimized independently with the log-likelihood objective $L(\theta) = -\frac{1}{N}\sum_{i=1}^{N} h_\theta(V_i^I \mid V_i^A)$ (Yue et al. 2022).
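To make the two auxiliary objectives concrete, the sketch below implements the grid-wise alignment loss of Eq. (7) and a sampled form of the MI upper bound of Eq. (9) with a simple Gaussian variational network for h_theta. The channel-wise pooling used to build the grid matrices, the Gaussian parameterization of h_theta, and all module names are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def lca_loss(content_img, content_aux, d=16):
    """Eq. (7): channel-average-pool, unfold into d x d superpixel grids, softmax
    inside each grid, then a grid-averaged KL divergence between the two modalities."""
    def to_grids(feat):                                      # feat: (B, C, H, W)
        pooled = feat.mean(dim=1, keepdim=True)              # average pooling over channels (assumption)
        return F.unfold(pooled, kernel_size=d, stride=d).transpose(1, 2)  # (B, K, d*d)
    z_img = F.softmax(to_grids(content_img), dim=-1)
    log_z_aux = F.log_softmax(to_grids(content_aux), dim=-1)
    kl = F.kl_div(log_z_aux, z_img, reduction="none").sum(dim=-1)         # D_KL(Z_i^I || Z_i^A) per grid
    return kl.mean()                                                      # average over grids (and batch)

class VariationalNet(nn.Module):
    """h_theta(V^I | V^A): a small Gaussian conditional used to bound the MI."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.logvar = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def log_prob(self, v_img, v_aux):
        mu, logvar = self.mu(v_aux), self.logvar(v_aux)
        # Gaussian log-density up to an additive constant (the constant cancels in Eq. 9)
        return (-0.5 * (v_img - mu) ** 2 / logvar.exp() - 0.5 * logvar).sum(dim=-1)

def mi_upper_bound(h, v_img, v_aux):
    """Sampled form of Eq. (9): positive pairs minus all cross pairs."""
    n = v_img.size(0)
    pos = h.log_prob(v_img, v_aux)                                        # log h(V_i^I | V_i^A)
    neg = h.log_prob(v_img.unsqueeze(0).expand(n, n, -1).reshape(n * n, -1),
                     v_aux.unsqueeze(1).expand(n, n, -1).reshape(n * n, -1))
    return pos.mean() - neg.mean()

def variational_objective(h, v_img, v_aux):
    """Independent objective L(theta) used to fit h_theta; style vectors are detached."""
    return -h.log_prob(v_img.detach(), v_aux.detach()).mean()

# toy usage
h = VariationalNet(dim=20)
v_i, v_a = torch.randn(8, 20), torch.randn(8, 20)
print(lca_loss(torch.randn(2, 20, 64, 64), torch.randn(2, 20, 64, 64)),
      mi_upper_bound(h, v_i, v_a), variational_objective(h, v_i, v_a))
```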
When participating in the computation of L_MI and in the updating of the main network, the parameters of h_θ are frozen, and only gradients are passed.
Figure 4: Performance comparison on four datasets from different domains. From left to right: the BSDS, NYU, KITTI, and VOC datasets. From top to bottom: the ASA, BR-BP, and UE metrics. Except for UE, higher values indicate that the algorithm is more effective.
Superpixel Generation
Since the features encoding local pixel correlations, i.e., C_{I,A}, have the same size as the original image, we initially formulate hierarchical features {c_1, c_2, ..., c_n}, where n = log_2(d), employing bilinear interpolation. The size of c_i is (1/2^{i−1}) · (H, W), i ∈ [1, n]. Then, similar to U-Net, we construct the decoder in a bottom-up manner, and the predicted association map Q is given as:
Q^i = D(c_1, c_2, ..., c_n; ϵ), ∀i ∈ {A, I}, (10)
where ϵ indicates the parameters of D. In this way, we extend the network's receptive field to encompass the adjacent vicinity of nine d × d superpixel grids, thereby enabling a more nuanced prediction of the relationship map.
L_total = L_align + L_MI + L_sp(Q^I) + L_sp(Q^A). (11)
Finally, the overall optimization objective of the proposed CDS is defined in Eq. 11, which comprises three components: the loss for the proposed local-grid correlation alignment, the mutual information loss, and the superpixel loss. Algorithm 1 demonstrates the pseudocode for training our algorithm.
Experiments
Experiment Settings
Dataset. Since the proposed CDS superpixel algorithm is introduced to mitigate the influence of attribute noise from the training set, we evaluate our method on four segmentation datasets from different domains: BSDS500 (Arbelaez et al. 2010), NYUv2 (Silberman et al. 2012), KITTI (Geiger, Lenz, and Urtasun 2012), and Pascal VOC2012 (Everingham et al. 2015). Concretely, BSDS is an object edge detection dataset that contains five segmentation labels from different annotators for each image.
NYUv2 is a semantic segmentation dataset for indoor scenes. KITTI is a commonly used street scene segmentation dataset for autonomous driving. Pascal VOC is a benchmark segmentation dataset with twenty object categories, which can be used for tasks such as object detection, instance segmentation. Evaluation protocol. Follow the evaluation protocol of previous works (Yang et al. 2020; Wang et al. 2021), we only train our model on the BSDS500 dataset and run inference on the other datasets. The clustering performance is mainly evaluated by three public metrics: Achievable Segmentation Accuracy (ASA), Boundary Recall-Precision (BR-BP) curve, Under-segmentation Error (UE). Implementation details. During the training phase, we apply data augmentation through random resize, random cropping to 208 × 208, and random horizontal/vertical flipping for our CDS. We trained the models using Adam optimizer. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6355 Model Time(ms) Device SLIC(Achanta et al. 2012) 120 CPU ERS(Liu et al. 2011) 940 CPU ETPS(Yao et al. 2015) 82 CPU SEAL(Tu et al. 2018) 2658 GPU&CPU SSN(Jampani et al. 2018) 278 GPU SCN(Yang et al. 2020) 5 GPU AINet(Wang et al. 2021) 29 GPU Ours 6 GPU Table 1: Runtime comparison for generating about 600 superpixels on NYUv2 with image size 608 × 448. The learning rate starts at 5e-4 and is updated by the poly learning rate policy. We trained our model for 150k iterations with batch size eight, and superpixel grid size d is set to 16. We use the gradient map as the auxiliary modality and conduct all the experiments on single RTX3090 GPU. Comparison with the State-of-the-Arts Fig.4 and Tab.1 report the metrics and runtime comparisons with representative superpixel algorithms, respectively, including three traditional approaches, i.e., clustering-based method SLIC (Achanta et al. 2012), graph-based method ERS (Liu et al. 2011), and energy-based method ETPS (Yao et al. 2015), and four deep superpixel approaches, i.e., SEAL (Tu et al. 2018), SSN (Jampani et al. 2018), SCN (Yang et al. 2020) and AINet (Wang et al. 2021). For traditional superpixels, we leverage the hyperparameters posted by (Stutz, Hermans, and Leibe 2018). For deep superpixel algorithms, we conduct the experiment with their official implementations. Among them, SCN and AINet are the SOTA algorithms with optimal performance. For a fair comparison, we employ the actual number of generated superpixels due to potential variations from manually specified counts. Our observations reveal the following: • (1) Our method consistently outperforms others across all datasets, with a widening and stabilizing lead as superpixel count increases. This implies that style noise reduction enhances contour accuracy. • (2) Across diverse domains, our superiority becomes more pronounced, highlighting superior generalization. • (3) The auxiliary modality only participates in computations during the training phase, ensuring both improved algorithm performance and preserved inference speed. Qualitative comparison. Fig.7 shows the qualitative results of six state-of-the art methods on dataset BSDS, NYUv2, KITTI, and Pascal VOC. Our method has better performance when facing critical object contours. Please see more visual results in the supplementary. Ablation Study We conduct experiments primarily to address the following key questions: • Q1: The choices of auxiliary modalities, and the difference from using them as a form of data augmentation. 
• Q2: Since our goal is to extract shared information from image pairs with significant style differences, why not use alignment directly?
• Q3: Can other universal style-removal methods improve the performance of the CDS model structure?
Figure 5: Component analysis. From left to right: ASA score on the BSDS and NYU datasets.
Figure 6: Comparison with other style-removal methods. From left to right: ASA score on the BSDS and NYU datasets.
Component Analysis. As illustrated in Fig. 5, we first study the choice of auxiliary modality, i.e., HSV, LAB, and the gradient map (Ours), for question Q1. The performance ranking is Ours > HSV > LAB. As expected, the dissimilarity between the auxiliary modality and the RGB modality is positively correlated with the experimental performance: the LAB color space retains two color channels, while the image gradient map does not carry the same color information as RGB. Then, to answer question Q2, the experiments show that employing either the feature alignment constraint or the style mutual information minimization alone reduces the performance of the method, although both variants remain superior to the baseline method (Yang et al. 2020). We consider that (1) when only the alignment constraint is used, it does not actually decouple and remove the attribute noise of the dataset, but forcibly brings the two modalities closer in the feature space, resulting in the retention of some information irrelevant to superpixel segmentation; and (2) since the two modalities share the parameters of the superpixel generator, this is equivalent to implicit alignment; however, without the pixel-correlation guidance for shared information extraction, performance decreases.
Figure 7: Visualization results of ours and previous methods ((a) Image, (b) Label, (c) SLIC, (d) SSN, (e) SEAL, (f) SCN, (g) AINet, (h) Ours). Compared to other superpixel algorithms, our method achieves better boundary adherence when facing unseen images. (From top to bottom: examples from BSDS, NYUv2, KITTI, VOC2012.)
Comparison with universal style-removal approaches. We compare our method with other style-removal approaches in Fig. 6 to further verify our effectiveness. Firstly, to supplement question Q1, we employ the auxiliary modality transform as a data augmentation strategy (model Aug), i.e., applying a random modality transition with probability 0.3 when training the single-modality model. Due to the significant style differences between these modalities and the original RGB images, this confuses the lightweight feature extractor of the superpixel algorithm, resulting in a severe performance decline. Then we compare our method with IN (Ulyanov, Vedaldi, and Lempitsky 2017), IW (Li et al. 2017), and GIW (Cho et al. 2019) for question Q3. As shown in the figure, our method achieves the best performance. Specifically, the generalization performance of Instance Normalization (IN) falls behind our baseline.
We believe this is because during the training phase, Instance Normalization introduces additional errors due to significant individual variations. Application Superpixel algorithms provide better feature representation for downstream tasks, improving model performance and efficiency. The performance of superpixels in downstream tasks also demonstrates the effectiveness of superpixel algorithms. Table.2 reports the performance comparison on downstream semantic segmentation tasks. We leverage the pretrained Deeplab(Chen et al. 2014) and Bilateral Inception (BI) module(Gadde et al. 2016) and directly replace the superpixel segmentation generated by gSLICr(Ren, Prisacariu, and Reid 2015) with different superpixel results. The first row indicates the official implementation with 1000 superpixels. The following four lines show the results using 600 superpixels. Our method achieved an mIOU of 79.02 on the reduced validated set used by (Gadde et al. 2016) of Pascal VOC2012, outperforming other superpixel algorithms. Base Modal Methods IoU DeepLab BI (gSLICr) 78.54 BI (ETPS) 77.67 BI (SCN) 78.90 BI (AINet) 78.96 BI (Ours) 79.02 Table 2: Superpixel algorithms with downstream segmentation. IoU comparison on the Pascal VOC dataset. Conclusion In this paper, we propose the Content Disentangle Superpixel algorithm to eliminate the dataset style noise that exists in the learnable superpixel features and reduce the featurelevel distribution difference between training data and openworld data for superpixel segmentation. Unlike other deep superpixel algorithms that use single-modal training, we introduce auxiliary modalities to assist in decoupling the RGB image features into invariant image content and style noise. Specifically, we propose local-grid correlation alignment to maximize the selected image content information across modalities and learn the invariant inter-pixel correlations for superpixel generation. Then, we propose the global-style mutual information minimization to the minimize the upper bound of mutual information for style information, and prevent the degenerate solution during the disentangle process. Experimental results demonstrate that our method effectively mitigates the impact of style noise on deep superpixel algorithms and achieves superior results compared to existing methods on four different datasets. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6357 Acknowledgements This work was supported in part by the National Key R&D Program of China (No.2021ZD0112100), and National Natural Science Foundation of China (No.61972022, No.U1936212, No.62120106009, No.52202486). References Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; and S¨usstrunk, S. 2012. SLIC superpixels compared to state-ofthe-art superpixel methods. IEEE transactions on pattern analysis and machine intelligence, 34(11): 2274–2282. Achanta, R.; and Susstrunk, S. 2017. Superpixels and polygons using simple non-iterative clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4651–4660. Arbelaez, P.; Maire, M.; Fowlkes, C.; and Malik, J. 2010. Contour detection and hierarchical image segmentation. IEEE transactions on pattern analysis and machine intelligence, 33(5): 898–916. Baltruˇsaitis, T.; Ahuja, C.; and Morency, L.-P. 2018. Multimodal machine learning: A survey and taxonomy. IEEE transactions on pattern analysis and machine intelligence, 41(2): 423–443. Bergh, M. V. d.; Boix, X.; Roig, G.; Capitani, B. d.; and Gool, L. V. 2012. 
Seeds: Superpixels extracted via energydriven sampling. In European conference on computer vision, 13–26. Springer. Birchfield, S.; and Tomasi, C. 1999. Multiway cut for stereo and motion with slanted surfaces. In Proceedings of the seventh IEEE international conference on computer vision, volume 1, 489–495. IEEE. Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; and Yuille, A. L. 2014. Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv preprint arXiv:1412.7062. Cheng, P.; Hao, W.; Dai, S.; Liu, J.; Gan, Z.; and Carin, L. 2020. Club: A contrastive log-ratio upper bound of mutual information. In International conference on machine learning, 1779–1788. PMLR. Cho, W.; Choi, S.; Park, D. K.; Shin, I.; and Choo, J. 2019. Image-to-image translation via group-wise deep whitening-and-coloring transformation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10639–10647. Choi, S.; Jung, S.; Yun, H.; Kim, J. T.; Kim, S.; and Choo, J. 2021. Robustnet: Improving domain generalization in urban-scene segmentation via instance selective whitening. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11580–11590. Everingham, M.; Eslami, S. A.; Van Gool, L.; Williams, C. K.; Winn, J.; and Zisserman, A. 2015. The pascal visual object classes challenge: A retrospective. International journal of computer vision, 111: 98–136. Felzenszwalb, P. F.; and Huttenlocher, D. P. 2004. Efficient graph-based image segmentation. International journal of computer vision, 59(2): 167–181. Gadde, R.; Jampani, V.; Kiefel, M.; Kappler, D.; and Gehler, P. V. 2016. Superpixel convolutional networks using bilateral inceptions. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I 14, 597–613. Springer. Geiger, A.; Lenz, P.; and Urtasun, R. 2012. Are we ready for autonomous driving? the kitti vision benchmark suite. In 2012 IEEE conference on computer vision and pattern recognition, 3354–3361. IEEE. Grady, L. 2006. Random walks for image segmentation. IEEE transactions on pattern analysis and machine intelligence, 28(11): 1768–1783. Guo, Y.; Jiao, L.; Wang, S.; Wang, S.; Liu, F.; and Hua, W. 2018. Fuzzy superpixels for polarimetric SAR images classification. IEEE Transactions on Fuzzy Systems, 26(5): 2846–2860. He, S.; Lau, R. W.; Liu, W.; Huang, Z.; and Yang, Q. 2015. Supercnn: A superpixelwise convolutional neural network for salient object detection. International journal of computer vision, 115: 330–344. Huang, X.; and Belongie, S. 2017. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE international conference on computer vision, 1501–1510. Jampani, V.; Sun, D.; Liu, M.-Y.; Yang, M.-H.; and Kautz, J. 2018. Superpixel sampling networks. In Proceedings of the European Conference on Computer Vision (ECCV), 352–368. Kwak, S.; Hong, S.; and Han, B. 2017. Weakly supervised semantic segmentation using superpixel pooling network. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31. Li, Y.; Fang, C.; Yang, J.; Wang, Z.; Lu, X.; and Yang, M.-H. 2017. Universal style transfer via feature transforms. Advances in neural information processing systems, 30. Li, Z.; and Chen, J. 2015. Superpixel segmentation using linear spectral clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1356– 1363. 
Liu, M.-Y.; Tuzel, O.; Ramalingam, S.; and Chellappa, R. 2011. Entropy rate superpixel segmentation. In CVPR 2011, 2097–2104. IEEE. Meyer, F. 1992. Color image segmentation. In 1992 international conference on image processing and its applications, 303–306. IET. Pan, X.; Luo, P.; Shi, J.; and Tang, X. 2018. Two at once: Enhancing learning and generalization capacities via ibn-net. In Proceedings of the European Conference on Computer Vision (ECCV), 464–479. Pan, X.; Zhan, X.; Shi, J.; Tang, X.; and Luo, P. 2019. Switchable whitening for deep representation learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1863–1871. Ren, C. Y.; Prisacariu, V. A.; and Reid, I. D. 2015. gSLICr: SLIC superpixels at over 250Hz. ArXiv e-prints. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6358 Sautier, C.; Puy, G.; Gidaris, S.; Boulch, A.; Bursuc, A.; and Marlet, R. 2022. Image-to-lidar self-supervised distillation for autonomous driving data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9891–9901. Silberman, N.; Hoiem, D.; Kohli, P.; and Fergus, R. 2012. Indoor segmentation and support inference from rgbd images. In European conference on computer vision, 746–760. Springer. Stutz, D.; Hermans, A.; and Leibe, B. 2018. Superpixels: An evaluation of the state-of-the-art. Computer Vision and Image Understanding, 166: 1–27. Tu, W.-C.; Liu, M.-Y.; Jampani, V.; Sun, D.; Chien, S.-Y.; Yang, M.-H.; and Kautz, J. 2018. Learning superpixels with segmentation-aware affinity loss. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 568–576. Ulyanov, D.; Vedaldi, A.; and Lempitsky, V. 2017. Improved texture networks: Maximizing quality and diversity in feedforward stylization and texture synthesis. In Proceedings of the IEEE conference on computer vision and pattern recognition, 6924–6932. Wang, Y.; Wei, Y.; Qian, X.; Zhu, L.; and Yang, Y. 2021. AINet: Association Implantation for Superpixel Segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 7078–7087. Wang, Z.-F.; and Zheng, Z.-G. 2008. A region based stereo matching algorithm using cooperative optimization. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, 1–8. IEEE. Yang, F.; Sun, Q.; Jin, H.; and Zhou, Z. 2020. Superpixel segmentation with fully convolutional networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13964–13973. Yao, J.; Boben, M.; Fidler, S.; and Urtasun, R. 2015. Realtime coarse-to-fine topologically preserving segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2947–2955. Yue, L.; Liu, Q.; Du, Y.; An, Y.; Wang, L.; and Chen, E. 2022. DARE: Disentanglement-Augmented Rationale Extraction. Advances in Neural Information Processing Systems, 35: 26603–26617. Zhao, C.; Zhu, W.; and Feng, S. 2022. Superpixel guided deformable convolution network for hyperspectral image classification. IEEE Transactions on Image Processing, 31: 3838–3851. Zhu, W.; Liang, S.; Wei, Y.; and Sun, J. 2014. Saliency optimization from robust background detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2814–2821. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6359 | 2024 | 706 |
18,525 | Direction-Aware Video Demoir´eing with Temporal-Guided Bilateral Learning Shuning Xu1*, Binbin Song1*, Xiangyu Chen1,2, Jiantao Zhou1† 1State Key Laboratory of Internet of Things for Smart City, Department of Computer and Information Science, University of Macau 2Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences {yc07425, yb97426, jtzhou}@um.edu.mo, [email protected] Abstract Moir´e patterns occur when capturing images or videos on screens, severely degrading the quality of the captured images or videos. Despite the recent progresses, existing video demoir´eing methods neglect the physical characteristics and formation process of moir´e patterns, significantly limiting the effectiveness of video recovery. This paper presents a unified framework, DTNet, a direction-aware and temporal-guided bilateral learning network for video demoir´eing. DTNet effectively incorporates the process of moir´e pattern removal, alignment, color correction, and detail refinement. Our proposed DTNet comprises two primary stages: Frame-level Direction-aware Demoir´eing and Alignment (FDDA) and Tone and Detail Refinement (TDR). In FDDA, we employ multiple directional DCT modes to perform the moir´e pattern removal process in the frequency domain, effectively detecting the prominent moir´e edges. Then, the coarse and finegrained alignment is applied on the demoir´ed features for facilitating the utilization of neighboring information. In TDR, we propose a temporal-guided bilateral learning pipeline to mitigate the degradation of color and details caused by the moir´e patterns while preserving the restored frequency information in FDDA. Guided by the aligned temporal features from FDDA, the affine transformations for the recovery of the ultimate clean frames are learned in TDR. Extensive experiments demonstrate that our video demoir´eing method outperforms state-of-the-art approaches by 2.3 dB in PSNR, and also delivers a superior visual experience. Introduction Moir´e patterns appear when two similar repetitive patterns interfere with each other, a phenomenon commonly observed during image capture on screens. The occurance of moir´e patterns can be intricate and multifaceted, leading to an unsatisfactory visual experience. Eliminating moir´e patterns can be arduous owing to their ambiguous shapes, diverse colors, and variable frequencies. Image demoir´eing has received increasing attention from the research community. DMCNN (Sun, Yu, and Wang 2018) proposes the first deep end-to-end network that uses *These authors contributed equally. †Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: A visual comparison with the existing methods for video demoir´eing. (a) Moir´e input, (b-d) Outputs of state-ofthe-art image demoir´eing methods, (e) Output of the video demoir´eing method VDmoir´e, (f) Our result, capable of removing the moir´e patterns, while restoring color deviations and fine details. (Zoom in for a better view.) a multi-scale architecture to remove moir´e patterns of various sizes. MBCNN (Zheng et al. 2020), WDNet (Liu et al. 2020a) and FHDe2Net (He et al. 2020) compensate the fine detail distortions in demoir´eing by exploiting signal properties in the frequency domain. ESDNet (Yu et al. 2022) develops a lightweight model for high-resolution image demoir´eing. 
The aforementioned models are designed for removing moir´e patterns from a single image, and there are relatively fewer algorithms targeting at removing moir´e patterns from videos. If we directly apply image demoir´eing methods to remove moir´e patterns from videos, it may result in poor temporal consistency. This limitation arises from the inability of image demoir´eing methods to make use of information from neighboring frames for restoring moir´e patterns in the current frame (Dai et al. 2022). Hence, the removal The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6360 of moir´e patterns from videos necessitates the development of algorithms tailored to address this issue. VDmoir´e (Dai et al. 2022) builds the first video demoir´eing dataset captured by hand-held cameras and designs a baseline video demoiring model that can effectively leverage nearby frames with the relation-based consistency regularization. FPANet (Oh et al. 2023) removes moir´e patterns and recovers the original color in the frequency domain using amplitude and phase. However, existing restoration methods lack modules specifically designed for the unique characteristics of moir´e patterns, and they neglect the formation process of moir´e patterns in videos. As a result, unsatisfactory video demoir´eing performance has been observed in these methods. This work presents a direction-aware and temporalguided bilateral learning video demoir´eing network (DTNet), a unified architecture that combines moir´e pattern removal, alignment, color correction, and detail refinement. Our proposed DTNet is structured into two key stages: Frame-Level Direction-Aware Demoir´eing and Alignment (FDDA), and Tone and Detail Refinement (TDR). Within FDDA, a direction-aware demoir´eing module with eight predefined directions is designed to eliminate moir´e patterns from each frame of the input moir´e video, identifing the prominent moir´e edges within each block in the Discrete Cosine Transform (DCT) domain. Furthermore, we introduce a learnable band reject filter (LBRF) in the directionaware demoir´e module to attenuate the specific frequencies of moir´e patterns. Also, the coarse-to-fine alignment is applied on the demoir´ed features for facilitating better utilization of the neighboring information. In TDR, a temporalguided bilateral algorithm is formed to address the color deviation and restore the texture of the original content. We propose to learn a 3D bilateral grid, storing affine coefficients and biases, based on pixel position and intensity for spatially variant color restoration while respecting the edges. For better utilization of temporal information, guidance maps are extracted from adjacent frames, furnishing valuable temporal cues for effective color and detail refinement. Fig. 1 presents a visual comparison with existing methods on a video with moir´e patterns. Notably, our proposed DTNet effectively eliminates moir´e patterns, simultaneously restoring color deviations and fine details. In summary, our contributions are listed as follows: • We design a direction-aware and temporal-guided bilateral learning network (DTNet) for video demoir´eing. DTNet effectively incorporates moir´e pattern removal, alignment, color correction, and detail refinement. • With the consideration of the unique characteristics of moir´e patterns, we propose a directional-aware demoir´eing module for moir´e pattern removal. 
• For the color restoration and detail refinement of the final output, we propose a novel temporal-guided bilateral learning strategy, promoting the spatially variant color recovery while respecting the edges of latent clean images. • Extensive experiments are conducted on the video demoir´eing dataset. Compared with state-of-the-art methods, our DTNet achieves a dominant performance gain in both qualitative and quantitative evaluations. Related Works Image and Video Demoir´eing Moir´e arises from the interference of two patterns with similar frequencies and often occurs when capturing screen images, resulting in significant degradation of image quality. To remove the moir´e patterns from the original images, many end-to-end image demoir´eing solutions have been proposed (Liu, Shu, and Wu 2018; Cheng, Fu, and Yang 2019; Liu et al. 2020b,c; Xu, Chu, and Sun 2020; Wang et al. 2021; Niu et al. 2023; Wang et al. 2023; Zhang et al. 2023). DMCNN (Sun, Yu, and Wang 2018) presents the first real-world dataset for image demoir´eing and a multiresolution convolutional neural network. MopNet (He et al. 2019) designs a multi-scale aggregated, edge-guided, and pattern attribute-aware network for moir´e pattern removal. In addition to the elaborate designs on the spatial domain, several studies utilize frequency domain learning for image demoir´eing (Liu et al. 2020a; He et al. 2020; Zheng et al. 2020). WDNet (Liu et al. 2020a) decomposes the input image with moir´e patterns into different frequency bands using a wavelet transform and designs a dual-branch network for restoring the close-range and far-range information. MBCNN (Zheng et al. 2020) utilizes learnable bandpass filters to acquire a frequency prior to separate moir´e patterns from normal image texture. In addition, there are also networks specifically designed for video demoir´eing (Dai et al. 2022; Oh et al. 2023). VDMoir´e (Dai et al. 2022) presents a simple video demoir´eing model that utilizes multiple video frames. It also employs a novel relation-based consistency loss, which enhances the temporal consistency of videos. FPANet (Oh et al. 2023) learns filters in both frequency and spatial domains, enhancing the restoration quality by eliminating moir´e patterns of different sizes. Note that existing demoir´eing methods have not been designed with modules specific to the directional characteristics of moir´e patterns for restoration. This issue will be explicitly addressed in our proposed DTNet. Bilateral Filtering The bilateral filter is a non-linear smoothing filter that preserves image edges and reduces noise (Tomasi and Manduchi 1998; Banterle et al. 2012). It has garnered significant attention for its ability to accelerate the edge-aware manipulation of images in the bilateral space (Barron and Poole 2016; Chen, Paris, and Durand 2007; Chen, Xu, and Koltun 2017; Pham and Van Vliet 2005; Gavaskar and Chaudhury 2018; Yang 2012; Zhang and Allebach 2008). Recently, various studies focus on the application of bilateral filters for image enhancement (Gharbi et al. 2017; Xia et al. 2020; Zheng et al. 2021b; Xu et al. 2021b; Ren et al. 2020). Zhu et al. (Zheng et al. 2021a) reconstructs bilateral coefficients on a reduced resolution of the input and generates high-quality feature maps under the guidance of full-resolution features. Xu et al. (Xu et al. 2021a) proposes an edge-aware cost volume upsampling module, regressing the disparity map at a high resolution to keep the high accuracy, while maintaining high efficiency. 
As mentioned above, the bilateral filter has been previously utilized mainly for image restoration purThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6361 Figure 2: The overview of our proposed DTNet for video demoir´eing. The whole framework consists of two stages: (i) Framelevel Direction-aware Demoir´eing and Alignment (FDDA), and (ii) Tone and Detail Refinement (TDR). poses, while no applications have yet been found in video restoration problems. In this paper, we devise a novel technique called temporal-guided bilateral learning by extending the use of bilateral filters from individual frames to multiple frames. The temporal-guided bilateral module will be shown effective in restoring both the color and edges in videos. Proposed Method For the image with moir´e patterns Im, the removal process of moir´e patterns M (Zheng et al. 2020) can be modeled as: ˆIc = T (Im −M), (1) whereˆIc denotes the estimation of the clean image displayed on the screen. T refers to the correction process of the color shift, caused by the ambient light and the screen itself. When it comes to removing moir´e patterns from videos, the input consists of multiple frames with moir´e patterns. Here, we take the example of three consecutive frames at timestamp {t−1, t, t+1} as input. The process of removing moir´e patterns from videos can be inferred as: At+i = Λ((It+i m −Mt+i), (It m −Mt)), i ∈{−1, 0, 1}, (2) ˆIt c = Γ(At−1, At, At+1), (3) where It+i m and Mt+i respectively denote the frames of Im and M at the specific timestamp t + i. Λ is the alignment process, where the consecutive demoir´ed frames are aligned to the reference one at timestamp t. At+i denotes the aligned deep features of the frame at timestamp t + i. Γ denotes the color refinement process on the aligned features. In this manner, the task of video demoir´eing can be separated into two steps: i) moir´e pattern removal and alignment, and ii) color restoration and detail refinement. In this work, we propose a direction-aware and temporalguided bilateral learning network (DTNet) for video demoir´eing, where a single network incorporates moir´e pattern removal, alignment, color correction, and detail refinement. Abiding by (2) and (3), we categorize the implementation of the aforementioned functions into FDDA and TDR. As demonstrated in Fig. 2, the consecutive input frames with moir´e patterns {It−1 m , It m, It+1 m } undergo processing in the direction-aware demoir´eing module and the coarse-to-fine alignment process within FDDA. In TDR, color and texture features are adaptively fused in the dual-path network to generate the final output image ˆIt c. Design of FDDA Given three consecutive frames {It−1 m , It m, It+1 m }, we propose FDDA to remove moir´e patterns in each frame and then align the features of neighboring frames, i.e., It−1 m and It+1 m , to the reference frame It m. Considering the formation process of moir´e patterns, various methods have been proposed to separate moir´e patterns and image content using frequency domain analysis, which is considered as a preferable approach. This is beneficial as image signal and moir´e patterns are usually more separable from a frequency perspective. Therefore, to obtain the aligned features {At−1, At, At+1}, we firstly extract the shallow feature, noted as F(It+i m ), from It+i m . Then BlockDCT is adopted to handle the moir´e pattern removal on F(It+i m ) in the frequency domain. 
Here, we denote St+i j and Rt+i j respectively as the frequency spectrum of F(It+i m ) and F(Mt+i) at the frequency j, where F(Mt+i) is the shallow feature of Mt+i. Thus, the DCT transformations of F(It+i m ) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6362 and F(Mt+i) can be computed by: DCT(F(It+i m ))= X j St+i j , DCT(F(Mt+i))= X j Rt+i j . (4) Owning to the linear property of Block-DCT/IDCT, the demoir´ed features Dt+i can be acquired by: Dt+i =F(It+i m ) −F(Mt+i) =IDCT(DCT(F(It+i m )))−IDCT(DCT(F(Mt+i m ))) =IDCT( X j St+i j − X j Rt+i j ), (5) where IDCT indicates the inverse function of block-DCT. We assume that the frequency spectrum of moire patterns tends to be consistent within a small patch. In signal processing, a band reject filter (BRF) is a filter that passes most frequencies unaltered, but attenuates those in a specific range to very low levels. Hence, the application of a BRF becomes a viable means for effective moir´e pattern removal. Since different patches necessitate the removal of distinct frequencies, determining a frequency prior for each moir´e image patch becomes time-consuming. Consequently, we construct a learnable BRF (LBRF), denoted as B(·), with several convolutional layers to attenuate the specific frequencies of moir´e patterns while preserving the original image contents. The removal of moir´e patterns in the frequency domain is defined as: B( X j St+i j ) = X j St+i j − X j Rt+i j . (6) Substituting (6) into (5), the moir´e pattern removal process in the deep feature space can be rewritten as: Dt+i = IDCT(B( X j St+i j )) = IDCT(B(DCT(F(It+i m )))). (7) Inherently, the human eye possesses high sensitivity in detecting edges within an image. For conventional DCT, it is executed through two separate 1-D transformations, applied along the vertical and horizontal directions. Nevertheless, the moir´e patterns exhibit irregular shapes characterized by distinct directions beyond the vertical or horizontal ones. Inspired by the directional DCT works (Zeng and Fu 2008), we introduce eight directional modes in DCT and IDCT, which are defined similarly to those used in the H.264 standard (the dc mode, Mode 2, is not counted), as depicted in Fig. 3 (a). In Zeng’s work, they adopt a brute-force method in choosing the most appropriate directional mode, which is timeconsuming and vulnerable to a wrong decision. Therefore, we propose a new direction-aware demoir´eing (DD) module that incorporates eight pre-defined directions within individual branches to effectively detect prominent moir´e edges in the DCT domain, as demonstrated in Fig. 3 (b). In FDDA, with the input features of It+i m , DD aims at acquiring the demoir´ed features Dt+i, i ∈{−1, 0, +1}. To distinguish the importance of demoir´ed features in different directions and facilitate feature aggregation, we generate weight maps ω with two convolutional layers and a sigmoid function and then split them into eight attention weights {ω0, ω1, ω3, ..., ω8} for each branch. Followed by the multiplication of spatial content features with the corresponding weight map in an element-wise manner, we concatenate the results and adopt a 3×3 convolution to generate the output features, effectively suppressing moir´e patterns. (a) Eight directional DCT modes. (b) Direction-aware Demoir´eing module. Figure 3: (a) Eight directional DCT modes. (b) The structure of the proposed Direction-aware Demoir´eing module. 
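As a rough illustration of the block-DCT filtering described above, the sketch below applies a standard (single-direction) 8×8 block DCT to a feature map, runs a learnable filter over the frequency coefficients as a stand-in for the LBRF B(·), and transforms back. The eight directional DCT modes and the attention-weighted fusion of the DD module are omitted, and the module name, block size, and filter design are assumptions rather than the paper's implementation (spatial sizes are assumed divisible by the block size).

```python
import math
import torch
import torch.nn as nn

def dct_matrix(n: int) -> torch.Tensor:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = torch.arange(n).float()
    basis = torch.cos(math.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] *= 1.0 / math.sqrt(2)
    return basis * math.sqrt(2.0 / n)

class BlockDCTBandReject(nn.Module):
    """Sketch of one demoireing branch: block DCT -> learnable filter over the
    frequency coefficients -> inverse block DCT back to the spatial domain."""
    def __init__(self, channels: int, block: int = 8):
        super().__init__()
        self.block = block
        self.register_buffer("D", dct_matrix(block))   # DCT basis
        # Learnable stand-in for the band-reject filter B(.) of Eq. 6.
        self.lbrf = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        n = self.block
        # Split into non-overlapping n x n blocks: (b, c, h//n, w//n, n, n).
        x = feat.view(b, c, h // n, n, w // n, n).permute(0, 1, 2, 4, 3, 5)
        freq = self.D @ x @ self.D.t()                 # 2-D block DCT
        # Lay the coefficients back out as a (b, c, h, w) map for the filter.
        freq = freq.permute(0, 1, 2, 4, 3, 5).reshape(b, c, h, w)
        freq = self.lbrf(freq)                         # attenuate moire frequencies
        y = freq.view(b, c, h // n, n, w // n, n).permute(0, 1, 2, 4, 3, 5)
        y = self.D.t() @ y @ self.D                    # inverse block DCT
        return y.permute(0, 1, 2, 4, 3, 5).reshape(b, c, h, w)
```

In the full DD module, eight such branches (one per directional mode) would be weighted by the learned attention maps and fused with a 3×3 convolution, as described above.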
After we perform the aforementioned pre-demoir´e process for each frame, shown in the left part of FDDA in Fig. 2, we adopt the multi-scale alignment operation in EDVR (Wang et al. 2019) to acquire coarsely aligned features, denoted as CAt+i, i ∈{−1, 0, +1}. To further refine the coarsely aligned features, we predict the learnable offset and utilize deformable convolution (DConv) to align each neighboring frame to the reference frame at the feature level, generating the aligned feature At+i, i ∈{−1, 0, +1}, at each position. Followed by the aforementioned steps, we can obtain the moir´e-suppressed and aligned features {At−1, At, At+1}. Design of TDR TDR is designed to restore the ultimate clean result ˆIt c from the aligned features {At−1, At, At+1}. In TDR, we aim to refine the tone and color details degraded by the moir´e patterns while preserving the edges of the latent clean image. To this end, inspired by the bilateral filtering (Chen, Paris, and Durand 2007; Gharbi et al. 2017; Xia et al. 2020), we propose a temporal-guided bilateral learning framework for the video restoration in TDR. The specific design of TDR is depicted in the right portion of Fig. 2. For the color refinement in the RGB space, we first generate a full-size intermediate result ˆIi from the aligned features {At−1, At, At+1} with a combination of two convolutional layers and a PReLU layer. The color refinement can be achieved by fitting local affine transformations (Luan The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6363 Method TCL-V1 TCL-V2 iPhone-V1 iPhone-V2 PSNR↑ SSIM↑ LPIPS↓ PSNR↑ SSIM↑ LPIPS↓ PSNR↑ SSIM↑ LPIPS↓ PSNR↑ SSIM↑ LPIPS↓ DMCNN 20.321 0.703 0.321 20.707 0.793 0.385 21.967 0.712 0.280 21.816 0.749 0.496 WDNet 19.650 0.726 0.289 20.334 0.847 0.288 19.818 0.722 0.300 20.613 0.832 0.297 ESDNet 22.026 0.734 0.199 24.896 0.874 0.165 22.537 0.731 0.218 25.064 0.853 0.165 VDmoir´e 21.725 0.733 0.202 23.460 0.857 0.163 21.990 0.707 0.221 25.230 0.860 0.157 Ours 24.119 0.801 0.163 26.153 0.877 0.128 24.821 0.794 0.172 26.503 0.854 0.149 Table 1: Quantitative comparison with the state-of-the-art image (or video) demoir´eing approaches. The best results are highlighted with bold. The second-best results are highlighted with underline. et al. 2017; Xia et al. 2020) from the intermediate result ˆIi to the ultimate output ˆIt c. To this end, we propose to learn a 3D bilateral grid Z to store the affine coefficients and biases. Then, to exploit the temporal information for the guidance of the color refinement, we construct guidance maps from {At−1, At, At+1} to query the affine parameters in Z for the mapping from ˆIi to ˆIt c. The upper part of TDR demonstrates the diagram of the generation of Z. To save the memory consumption and accelerate the computation, we learn Z from the downsampled ˆIi, which is denoted as ˆIid with the size 256 × 256 × 3, rather than the original full-resolution version. The bilateral grid Z is learned through a U-Net and reorganized into a 3D array with the size 16 × 16 × 16. In the bottom part of TDR, to exploit the information of adjacent frames to guide the color and detail restoration of the reference frame, we use a single convolutional layer to generate three guidance maps {Gt−1, Gt, Gt+1} from the aligned features {At−1, At, At+1}. 
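Continuing the TDR description, the sketch below shows one possible way to produce the low-resolution bilateral grid and the single-channel guidance maps mentioned above. The grid channel layout (12 = 3×3 affine coefficients plus 3 biases), the small stand-in encoder used instead of the paper's U-Net, and the sigmoid on the guidance maps are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GridAndGuidance(nn.Module):
    """Sketch of the TDR inputs: a 16x16x16 bilateral grid predicted from the
    downsampled intermediate image, plus one guidance map per aligned feature."""
    def __init__(self, feat_channels: int = 64, grid_depth: int = 16, grid_channels: int = 12):
        super().__init__()
        self.grid_channels, self.grid_depth = grid_channels, grid_depth
        # Stand-in for the U-Net that maps the 256x256 intermediate to the grid.
        self.grid_net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(16),
            nn.Conv2d(64, grid_channels * grid_depth, 1),
        )
        # Single convolutional layer producing a guidance map from each feature.
        self.guide = nn.Conv2d(feat_channels, 1, kernel_size=1)

    def forward(self, intermediate, aligned_feats):
        low = F.interpolate(intermediate, size=(256, 256),
                            mode="bilinear", align_corners=False)
        grid = self.grid_net(low)                                   # (B, 12*16, 16, 16)
        grid = grid.view(-1, self.grid_channels, self.grid_depth, 16, 16)
        guides = [torch.sigmoid(self.guide(f)) for f in aligned_feats]  # values in [0, 1]
        return grid, guides
```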
For a specific pixel with the coordinate (x, y) in the single-channel temporal guidance map Gt+i, we slice Z and look up the affine transformation coefficients Wt+i and biases Bt+i by: Wt+i, Bt+i = Z⌊x h ×16⌋,⌊y w ×16⌋,⌊Gt+i x,y ×16⌋, i ∈{−1, 0, 1}, (8) where w and h respectively denote the weight and height of Gt+i, and ⌊·⌋means the rounding operator. By taking into account the query pixel intensities, we preserve the background edges recovered in FDDA. Lastly, the ultimate result ˆIt c is generated by: ˆIt c =Conv([Wt−1, Wt, Wt+1])⊗Ii+Conv([Bt−1, Bt, Bt+1]), (9) where ⊗represents the Hadamard product and [·] means the concatenation operation. Training Objectives We train our framework in an end-to-end manner, and the overall training objective can be expressed as: L = ∥ˆIt c −It c∥1 + ∥Φl(ˆIt c) −Φl(It c)∥1, (10) where It c is the ground-truth moir´e-free image of the frame t. We employ L1 loss in conjunction with perceptual loss (Johnson, Alahi, and Fei-Fei 2016), which can reflect the human visual system’s perception of image quality. Φl(·) denotes a set of VGG-16 layers. Experiments Experimental Setup Dataset We evaluate the effectiveness of our proposed methods using the VDmoir´e dataset (Dai et al. 2022). This dataset consists of 290 clean source videos and the corresponding moir´ed videos. The source videos, with a resolution of 720p (1080×720), are displayed on either the MacBook Pro display or the Huipu v270 display. A hand-held camera (either iPhoneXR or TCL20 pro camera) captures the screen, resulting in recorded frames with moir´e patterns. To minimize the impact of misaligned frame correspondences, two methods are employed: estimating the homography matrix (referred as V1) and applying optical flow (referred as V2) to align the frames. To compare our proposed methods with state-of-the-art approaches in diverse settings, we carry out our experiments on four different dataset settings: TCL-V1, TCL-V2, iPhone-V1, and iPhone-V2. Training Details The video demoir´eing network utilizes three consecutive frames as input to produce one restored clean image. We adopt the AdamW optimizer with β1 = 0.9 and β2 = 0.999 to train the model. The learning rate is initialized as 4×10−4. We apply the cyclic cosine annealing learning rate schedule (Loshchilov and Hutter 2016), which allows partial warm restart optimization, generally improving the convergence rate in gradient-based optimization. In total, we train our model with batch size 16 on four NVIDIA Tesla A100 GPUs. Frame-Level Comparison We compare our approach with several demoir´eing methods: DMCNN (Sun, Yu, and Wang 2018), WDNet (Liu et al. 2020a), ESDNet (Yu et al. 2022), and VDmoir´e (Dai et al. 2022). It is important to note that both VDMoir´e and our method utilize multi-frame inputs, whereas other methods can only accept single-frame images as input. Quantitative Results The performance of demoir´eing is quantitatively measured using PSNR, SSIM, and LPIPS. In Table 1, our proposed DTNet achieves leading video demoir´eing performance on all four datasets. For instance, on the TCL-V1 dataset, we achieve a PSNR gain of 2.093 dB and an SSIM improvement of 0.067. Also, our approach significantly outperforms previous methods in terms of PSNR on both the TCL-V2 and iPhone-V2 datasets, exceeding 26 The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6364 Figure 4: Visual quality comparison among DMCNN, WDNet, ESDNet, VDMoire, and our proposed DTNet. dB. 
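For reference, the grid slicing of Eq. 8 and the per-pixel affine mapping of Eq. 9 (shown here in a single-frame form, without the convolutional fusion of the three frames' coefficients) might be implemented as below; the 12-channel grid layout and the floor-based lookup indices follow our reading of Eq. 8 and are assumptions.

```python
import torch

def slice_bilateral_grid(grid: torch.Tensor, guide: torch.Tensor):
    """Look up per-pixel affine parameters from the bilateral grid (Eq. 8).
    grid:  (B, 12, 16, 16, 16) -- assumed 3x3 coefficients + 3 biases per cell
    guide: (B, 1, H, W) single-channel guidance map with values in [0, 1]"""
    b, _, gd, gh, gw = grid.shape
    _, _, h, w = guide.shape
    ys = torch.arange(h, device=guide.device).view(1, h, 1).expand(b, h, w)
    xs = torch.arange(w, device=guide.device).view(1, 1, w).expand(b, h, w)
    iy = (ys.float() / h * gh).long().clamp(max=gh - 1)     # floor(row / h * 16)
    ix = (xs.float() / w * gw).long().clamp(max=gw - 1)     # floor(col / w * 16)
    ig = (guide.squeeze(1) * gd).long().clamp(max=gd - 1)   # floor(G * 16)
    ib = torch.arange(b, device=guide.device).view(b, 1, 1).expand(b, h, w)
    params = grid[ib, :, ig, iy, ix]                        # (B, H, W, 12)
    coeffs, bias = params[..., :9], params[..., 9:]         # W_{t+i}, B_{t+i}
    return coeffs.view(b, h, w, 3, 3), bias

def apply_affine(intermediate: torch.Tensor, coeffs: torch.Tensor, bias: torch.Tensor):
    """Per-pixel affine mapping of the intermediate image (single-frame form of Eq. 9)."""
    rgb = intermediate.permute(0, 2, 3, 1).unsqueeze(-1)    # (B, H, W, 3, 1)
    out = (coeffs @ rgb).squeeze(-1) + bias                 # (B, H, W, 3)
    return out.permute(0, 3, 1, 2)
```

In the paper's formulation the coefficients and biases sliced from the three guidance maps are concatenated and fused by convolutions before the affine mapping; the sketch keeps only a single frame for clarity.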
Moreover, our method accomplishes a significant decrease in LPIPS, indicating a higher perceptual quality of the recovered images. Method FVD↓ FSIM↑ DMCNN 686.23 0.927 WDNet 713.52 0.936 ESDNet 201.79 0.966 VDmoir´e 269.13 0.960 Ours 158.24 0.972 Table 2: Quantitative comparison in terms of FVD and FSIM metrics to further analyze the quality of generated videos (or image frames) on TCL-V2 dataset. Note that lower FVD scores and higher FSIM scores indicate better performance. Qualitative Results We present visual comparisons between our DTNet and the existing methods in Fig. 4. The results clearly demonstrate the advantages of our approach in removing moir´e artifacts, particularly in the case of moir´e patterns on blank walls or white T-shirts. Also, for scenes with rich details and textures, our method excels not only in correcting color shifts but also in restoring details. Video-Level Comparison Additionally, we employ two metrics, FVD (Unterthiner et al. 2018) and FSIM (Zhang et al. 2011), to assess the quality of video outputs. FVD adapts Frechet Inception Distance (FID) to evaluate the temporal coherence of a video, whereas FSIM emphasizes low-level features as an image quality assessment metric inspired by the human visual system. As demonstrated in Table 2, our model surpasses other existing approaches in both metrics, indicating that our outputs exhibit a greater similarity to the target distribution of the entire video sequences, while preserving per-pixel and structural visual information. Ablation Study Table 3 presents an assessment of the effectiveness of our proposed FDDA and TDR through ablation experiments involving diverse combinations of these foundational components. Furthermore, we investigate the necessity of multiframe input by altering the input to repetitions of a single frame (Ours S). Note that, without loss of generality, we use the TCL-V2 dataset for the ablation study. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6365 Structure PSNR↑ SSIM↑ LPIPS↓ w/o Directional DCT 26.005 0.872 0.140 w/o Alignment 23.635 0.848 0.196 w/o Temporal-guided BL 25.813 0.871 0.144 Ours S 25.914 0.870 0.145 Ours 26.153 0.877 0.128 Table 3: Ablation Study on overall DTNet. Directional DCT. Without applying directional DCT in the direction-aware demoir´eing module, there is a decline in all three qualitative metrics. Upon observing the generated demoir´ed images visually in complex scenarios, directional DCT demonstrates significant contributions to moir´e pattern removal. Fig. 5 (a) displays an image of a hedgehog with alternating red and green moir´e stripes. Due to the rich structural details, which consist of numerous edge features, the separation of moir´e patterns from the original image content becomes highly challenging. When the directional DCT is not applied, Fig. 5 (b) exhibits noticeable residual moir´e stripes on the hedgehog’s back. In contrast, Fig. 5 (c) reveals a clear content which is less affected by moir´e patterns, resembling the ground truth shown in Fig. 5 (d). Figure 5: Effect of Directional DCT. Alignment. In order to facilitate the utilization of information from neighboring frames and effectively handle significant and intricate motions, we employ both multi-scale alignment and deformable convolution for the alignment process. To demonstrate the indispensability of alignment process, we substitute the original design with a simple concatenation and two convolutional layers, resulting in a PSNR of only 23.635 dB. Fig. 
6 (b) represents the frame we aim to restore, with extensive colorful moir´e patterns on the blank wall in the center of the image. However, in the same location of the neighboring frames, as depicted in Fig. 6 (a, c), are less affected by moir´e patterns. By utilizing a suitable alignment structure, we can effectively leverage the information from neighboring frames to assist in restoring the current frame, as demonstrated in Fig. 6 (e). Temporal-guided Bilateral Learning. In TDR, we propose the temporal-guided bilateral learning to address the Figure 6: Effect of Alignment. color deviation and preserve image details. In the ablation study, we replace the proposed temporal-guided bilateral learning (Temporal-guided BL) with a concatenation followed by two convolutional layers for feature reconstruction, resulting in a PSNR drop of 0.35 dB. Also, the perceptual quality of the recovered images, as indicated by LPIPS scores, suffers a great decline. Arising from the ambient light and the screen itself, severe color degradation can be observed in Fig. 7 (a). With the absence of Temporal-guided BL, the color deviations remain challenging to rectify effectively. As a comparison, in Fig. 7 (c), the image generated using the Temporal-guided BL exhibit the correct tone, approaching the ground truth shown in Fig. 7 (d). Figure 7: Effect of Temporal-guided Bilateral Learning. Conclusion In this paper, we introduce DTNet, a unified framework for video demoir´eing. In the FDDA process, we analyze the directional characteristics of moir´e patterns and employ multiple directional DCT modes to perform the moir´e pattern removal process in the frequency domain, thereby aiding the coarse-to-fine alignment process. In TDR, we learn localized tone curves guided by temporally aligned features to reduce color and detail loss caused by screen capture, while respecting the edges of latent clean images. Extensive frame-level and video-level experiments demonstrate that our video demoir´eing method outperforms state-of-the-art approaches in both quantitative and qualitative evaluations. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6366 Acknowledgments This work was supported in part by Macau Science and Technology Development Fund under SKLIOTSC2021-2023, 0072/2020/AMJ and 0022/2022/A1; in part by Research Committee at University of Macau under MYRG2022-00152-FST and MYRG-GRG2023-00058FST-UMDF; in part by Natural Science Foundation of China under 61971476; and in part by Alibaba Group through Alibaba Innovative Research Program. References Banterle, F.; Corsini, M.; Cignoni, P.; and Scopigno, R. 2012. A low-memory, straightforward and fast bilateral filter through subsampling in spatial domain. In Computer Graphics Forum, volume 31, 19–32. Wiley Online Library. Barron, J. T.; and Poole, B. 2016. The fast bilateral solver. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part III 14, 617–632. Springer. Chen, J.; Paris, S.; and Durand, F. 2007. Real-time edgeaware image processing with the bilateral grid. ACM Transactions on Graphics (TOG), 26(3): 103–es. Chen, Q.; Xu, J.; and Koltun, V. 2017. Fast image processing with fully-convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, 2497– 2506. Cheng, X.; Fu, Z.; and Yang, J. 2019. Multi-scale dynamic feature encoding network for image demoir´eing. 
| 2024 | 707 |
18,526 | Spectral Prompt Tuning: Unveiling Unseen Classes for Zero-Shot Semantic Segmentation Wenhao Xu1,*, Rongtao Xu2,3,*, Changwei Wang2,3, Shibiao Xu1,†, Li Guo1, Man Zhang1, Xiaopeng Zhang2 1School of Artificial Intelligence, Beijing University of Posts and Telecommunications 2State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences 3School of Artificial Intelligence,University of Chinese Academy of Sciences [email protected] Abstract Recently, CLIP has found practical utility in the domain of pixel-level zero-shot segmentation tasks. The present landscape features two-stage methodologies beset by issues such as intricate pipelines and elevated computational costs. While current one-stage approaches alleviate these concerns and incorporate Visual Prompt Training (VPT) to uphold CLIP’s generalization capacity, they still fall short in fully harnessing CLIP’s potential for pixel-level unseen class demarcation and precise pixel predictions. To further stimulate CLIP’s zero-shot dense prediction capability, we propose SPT-SEG, a one-stage approach that improves CLIP’s adaptability from image to pixel. Specifically, we initially introduce Spectral Prompt Tuning (SPT), incorporating spectral prompts into the CLIP visual encoder’s shallow layers to capture structural intricacies of images, thereby enhancing comprehension of unseen classes. Subsequently, we introduce the Spectral Guided Decoder (SGD), utilizing both high and low-frequency information to steer the network’s spatial focus towards more prominent classification features, enabling precise pixel-level prediction outcomes. Through extensive experiments on two public datasets, we demonstrate the superiority of our method over state-of-the-art approaches, performing well across all classes and particularly excelling in handling unseen classes. Introduction Semantic segmentation is one of the fundamental tasks in computer vision, aiming to predict the class for each pixel in an image (Xu et al. 2023d, 2021b; Chen et al. 2021; Dong et al. 2021). Despite the existence of numerous related works (Lu et al. 2020; Dong et al. 2020; Xu et al. 2023b; Wang et al. 2023a), the success of deep semantic segmentation models heavily relies on a large amount of annotated training images, which requires significant efforts. In recent years, interest has been growing in unsupervised or weakly supervised semantic segmentation methods, including semi-supervised (Chen et al. 2021), weakly supervised (Xu et al. 2023a,c; Wang et al. 2023b), few-shot (Xie et al. 2021), and zero-shot semantic segmentation (Bucher et al. 2019; Pastore et al. 2021; Xian et al. 2019). Among *These authors contributed equally. †Shibiao Xu is the corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Loss b. Plain one-stage methods Image Encoder Text Encoder Text-Patch Matching Decoder a.SPT-SEG (Ours) Loss Image Encoder Text Encoder Spectral Prompts Spectral Guided Decoder Loss c. Two-stage methods Image Encoder Text Encoder Proposal Generator Proposal-level Classification Semantic Masks Assembly Stage1 Stage2 Figure 1: (a) Our SPT-SEG method demonstrates outstanding performance across all classes. (b) While yielding favorable results within the seen classes, it exhibits relatively poorer performance in the unseen classes. (c) Its performance is unsatisfactory across all classes. 
them, zero-shot semantic segmentation tasks are particularly challenging and appealing, as they require generating accurate semantic segmentation results with only the semantic descriptions of the classes given. To incorporate zero-shot capability into visual systems, researchers have proposed large-scale vision-and-language pretraining models, such as CLIP (Radford et al. 2021) and ALIGN (Jia et al. 2021a). Specifically, CLIP encodes semantic concepts into model parameters by contrastive training on a massive collection of image-text pairs, forming a zero-shot knowledge base for downstream tasks. However, contrastive pretraining mainly focuses on capturing imagelevel concepts. In CLIP, the training texts primarily describe the global context of images, and the encoded image and text embeddings are used together to compute contrastive losses. Consequently, CLIP is more suitable for image-level classification tasks (Zhou et al. 2022b,a; Lu et al. 2022; Zhang et al. 2022). The pretrained visual-language model CLIP (Radford et al. 2021) has recently found applications in various dense prediction tasks, including semantic segmentation (Pakhomov et al. 2021), referring segmentation (Wang et al. 2022), and object detection (Esmaeilpour The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6369 et al. 2022). In the zero shot semantic segmentation task, approaches like zsseg (Xu et al. 2021a) and Zegformer (Ding et al. 2022) adopt a similar strategy that requires two-stage processing: first generating region proposals and then feeding the cropped regions into CLIP for zero-shot classification. However, this strategy involves encoding images twice as FI 1(c), once for proposal generation and another for CLIP encoding of each proposal. This design introduces additional computational overhead and fails to fully leverage the knowledge in the CLIP encoder to guide the proposal generation stage. To streamline the process, ZegCLip (Zhou et al. 2023) introduces a one-stage approach by incorporating visual prompt tuning into CLIP, then extending CLIP’s zero-shot capabilities from image-level to pixel-level. The inclusion of Visual Prompt Tuning (VPT) in CLIP significantly enhances its downstream task generalization with few learnable parameters. However, since the original CLIP’s training primarily revolves around image-level contrastive learning, its features tend to emphasize only the most discriminative parts of objects. Even with the introduction of VPT, the observed phenomenon persists even during pre-training with image-level contrastive loss. Consequently, this phenomenon leads to incomplete and biased segmentation in dense prediction tasks. Based on the aforementioned observations, we believe that further enhancing the image-to-pixel adaptability of CLIP (Radford et al. 2021) would contribute to improved zero-shot segmentation performance. Therefore, we propose an innovative one-stage method called SPT-SEG, as shown in Fig. 1(b). SPT-SEG differs from plain one-stage methods, as depicted in Fig.1(a). In our approach, we integrate spectral cues into the shallow layers of the CLIP visual encoder, which provides additional structural information that enhances the model’s comprehension of various object components. We also utilize high-frequency and low-frequency information to guide the alignment of text and pixels, directing the network’s spatial focus towards more salient classification features. 
The synergy of these two designs enhances the model’s semantic understanding and reasoning capabilities, effectively addressing the issues of inadequate pixel generalization and incomplete segmentation present in the current CLIP-based zero-shot semantic segmentation methods. In summary, our contributions are listed as follows: • We introduce Spectral Prompt Tuning (SPT), which builds upon VPT by incorporating a small set of learnable spectral parameters. These parameters are integrated into the shallow layers of the CLIP visual encoder to introduce spectral information. • We propose the Spectral Guided Decoder (SGD) layer, which is a novel component that utilizes high-frequency and low-frequency information to guide the matching process between textual and pixel representations. • We comprehensively assess our method on two public datasets, and the results clearly show that our approach significantly surpasses state-of-the-art methods. Related Work Vision-Language Model. Extensive research has been conducted on Visual-Language Models (VLM)(Hong et al. 2021; Huang et al. 2021; Kamath et al. 2021; Kim, Son, and Kim 2021), showcasing significant advancements in downstream vision tasks, especially in settings with unannotated or restricted data. These tasks encompass diverse areas such as image retrieval(Liu et al. 2021), dense prediction (Rao et al. 2022), visual referring expression (Wang et al. 2022), and visual question answering (Jiang, Liu, and Zheng 2022). CLIP (Radford et al. 2021) is widely recognized as one of the most popular vision-language models. It is pretrained using contrastive learning on a massive dataset of 400 million text-image pairs. ALIGN (Jia et al. 2021b) utilized an even larger dataset, comprising 1.8 billion pairs, for pre-training its model. However, this larger dataset also introduced a significant amount of noise. In more recent works, CoCa (Yu et al. 2022) and Beit-V3 (Wang et al. 2023c) have further emphasized the superior performance of VLM pre-trained features. Prompt Tuning. The concept of prompts originated from natural language processing and is mainly used in VLM to enhance its understanding of downstream specific tasks. By providing prompts, we can avoid massive parameter learning for VLM and instead use it as a fixed knowledge base, focusing only on task-relevant information. These prompts can be manually created for downstream tasks or automatically learned during fine-tuning. Full fine-tuning and linear probe (Gao et al. 2021) are two typical methods for adapting the VLM (i.e. CLIP) to downstream tasks. Full fine-tuning leads to a reduced VL representation of previously learned, while linear probe limits the zero-shot capability of CLIP. Inspired by the prompt learning in NLP, many works propose to adapt VLM by adding learnable tokens during endto-end training. CoOp (Zhou et al. 2022b) introduced continuous prompt learning, where a set of continuous vectors are optimized end-to-end with down-stream supervision . Additionally, learnable prompts are applied by CoOp on the text encoder of CLIP to replace sub-optimal hand-crafted templates. Co-CoOp (Zhou et al. 2022a) highlights the poor performance of CoOp on novel classes and addresses the generalization problem by explicitly conditioning the prompts on image instances. Recently, prompting (Jia et al. 2022; Sandler et al. 2022) has been adapted to vision tasks. (Sandler et al. 2022) proposes memory tokens which is a set of learnable embedding vectors for each transformer layer. VPT (Jia et al. 
2022) proposes similar ideas and investigates the generality and feasibility of visual prompting via extensive experiments spanning multiple kinds of recognition tasks across multiple domains and backbone architectures. Our research further extends the paradigm of visual prompt learning by introducing spectral prompt, addressing the limitations of previous visual prompt learning methods in fully leveraging the structural information of images and their limited adaptability to pixel-level tasks. Zero-shot Semantic Segmentation. It remains a challenging task to achieve zero-shot semantic segmentation due to the presence of an imbalance problem in seen classes. Previous studies such as SPNet (Xian et al. 2019), ZS3 (Bucher The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6370 CLIP Text Encoder CLIP Image Encoder … A photo of a {…} . . ×3 SPT ··· b. Spectral Guided Decoder Spectral Guided Decoder Layer cls Relationship Descriptor a. Spectral Prompt Tuning Token selection Channel selection … Mat Mul Q Dot-Product Attention Q K low-frequency branch high-frequency branch Avg Pool C Loss ... V K V Dot-Product Attention GT Masks argmax upsampling Figure 2: Overview of our proposed SPT-SEG. The main contribution of our work lies in two simple but effective designs (Red marks a,b in the figure): (a) Spectral prompt tuning which adds learnable spectral prompts to the first two layers of the CLIP’s visual encoder; (b) Spectral guided decoder which utilizes high- and low-frequency feature information to guide the text to match with pixels, and decodes the predicted results. . et al. 2019), CaGNet (Gu et al. 2020) and STRICT (Pastore et al. 2021) adopt strategies to improve the generalization ability of semantic mappings from visible to invisible classes. Since the popular pre-trained visual language model CLIP has shown powerful zero-shot classification capabilities, it has recently been applied to zero-shot semantic segmentation as well. Zegformer (Ding et al. 2022) and zsseg (Xu et al. 2021a) developed an extensive proposal generator and used CLIP to classify each region and then integrate the predictions. Previous studies, such as SPNet (Xian et al. 2019), ZS3 (Bucher et al. 2019), CaGNet (Gu et al. 2020), SIGN (Cheng et al. 2021), Joint (Baek, Oh, and Ham 2021), and STRICT (Pastore et al. 2021), adopt the approach of improving the generalization capability of semantic mapping from the classes that have been encountered to unseen ones. Recently, a two-stage paradigm (Ding et al. 2022; Xu et al. 2021a) has been proposed to explore the use of CLIP for zero-shot segmentation. They leveraged the CLIP model to classify individual regions following a comprehensive proposal generator and then integrate the resulting predictions. Although effective, this design requires two image encoding processes, resulting in expensive computational costs. In order to simplify the pipeline of the two stages, ZegCLIP (Zhou et al. 2023) proposed a one-stage method that transfers CLIP’s powerful generalization ability from images to pixel-level classification. In this work, we use a one-stage method and achieve outstanding zero-shot segmentation performance through two effective designs. Method Problem Definition We adopt the generalized zero-shot semantic segmentation (GZLSS) method (Xian et al. 2019), which requires to segment both seen classes Cs and unseen classes Cu after only training on a dataset with pixel-annotations of seen part. 
During training, the model generates per-pixel classification results based on the semantic descriptions of all visible classes. During testing, the model is evaluated on both seen and unseen classes. It is important to note that Cs ∩Cu = ⊘ and that the label of Cu is not available during training. SPT-SEG The architecture of SPT-SEG is illustrated in Fig. 2. The basic one-stage methodology comprises four key components: the CLIP encoder that incorporates the text and visual encoders, the relationship descriptor between the cls token and the text embeding, a decoder, and a loss function. Our enhancements focus on two pivotal components: (1) Introducing an innovative Spectral Prompt Tuning approach within the visual encoder, aimed at extracting structural insights to bolster CLIP’s adaptability to dense prediction tasks, (2) Integrating a Spectral Guided Decode Layer into the decoder, which adeptly captures high and low-frequency features specific to the task. Spectral Prompt Tuning Prompt tuning is a recently proposed fine-tuning technique that offers a valuable approach to adapt pre-trained transformer models to target The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6371 Tranform Encoder Layer N ··· ··· ··· cls ··· + + + + V ··· ··· ··· + + + + ··· ··· ··· ··· Spectral prompts Visual prompts Tuned Frozen S Tranform Encoder Layer 3 Tranform Encoder Layer 2 Tranform Encoder Layer 1 Embed Figure 3: Overview of our proposed Spectral-Prompt Tuning. During training on downstream tasks, only the parameters of prompts and the linear head are updated while the whole Transformer encoder is frozen. domains (Xing et al. 2022). However, fine-tuning zeroshot segmentation models solely on a limited set of visible classes often leads to overfitting. This occurs because the optimization process focuses solely on visible classes, disregarding knowledge relevant to visual concepts that cannot be obtained from the training set. To address this issue, Visual Prompt Tuning (VPT) (Jia et al. 2022) has emerged as a potential solution. VPT introduces a small number of taskspecific learnable parameters in the input space while keeping the backbone frozen during downstream training. While VPT has shown promising results in certain cases, it does not fully leverage the intrinsic properties and structural characteristics of images, which may not be fully manifested in the spatial domain, thereby limiting its effectiveness in handling structure-aware tasks. To address this limitation, we propose the Spectral Prompt Tuning (SPT) method, as shown in Fig. 3. SPT extends the concept of VPT by incorporating prompt parameters learned from a spectral perspective. In contrast to VPT’s exclusive reliance on visual prompts for fine-tuning, SPT capitalizes on frequency domain features to offer supplementary understanding of intricate attributes and structural characteristics. The features learned by SPT in the spectrum allow it to better capture and distinguish subtle visual features of different classes, even for those classes that do not have direct examples in the training data. In this way, when the model encounters images of completely new classes, it can extract common information about these classes from the spectrum features, enabling more accurate segmentation. This ability can alleviate the ”partial” or ”ambiguous” segmentation issues that occur in zero-shot scenarios, thus ensuring a more precise capture of unknown classes. 
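To make this concrete, the following is a minimal PyTorch-style sketch of how such an FFT-based spectral prompt could be computed from the patch embeddings and the [cls] token; the module name, the use of a real-valued FFT, the initialization, and the 32 × 32 patch grid are illustrative assumptions rather than the released implementation, and the formal definition is given in Eqs. (1)–(3) below.

```python
# Hypothetical sketch of a spectral prompt: patch embeddings are modulated by
# the [cls] token, filtered in the frequency domain with a learnable filter,
# and returned as an additive prompt for the shallow transformer layers.
import torch
import torch.nn as nn


class SpectralPrompt(nn.Module):
    """Assumed spectral prompt for one shallow layer of the CLIP image encoder."""

    def __init__(self, grid_size: int, dim: int):
        super().__init__()
        self.grid_size = grid_size
        # Learnable frequency-domain filter w_f, stored as real/imaginary parts;
        # rfft2 keeps only grid_size // 2 + 1 columns along the last axis.
        self.w_f = nn.Parameter(
            torch.randn(dim, grid_size, grid_size // 2 + 1, 2) * 0.02
        )

    def forward(self, patches: torch.Tensor, cls_token: torch.Tensor) -> torch.Tensor:
        # patches: (B, N, C) with N = grid_size ** 2; cls_token: (B, C)
        b, n, c = patches.shape
        x = patches * cls_token.unsqueeze(1)                 # H ⊙ g
        x = x.transpose(1, 2).reshape(b, c, self.grid_size, self.grid_size)
        freq = torch.fft.rfft2(x, norm="ortho")              # F(H ⊙ g)
        freq = freq * torch.view_as_complex(self.w_f)        # ... ⊙ w_f
        prompt = torch.fft.irfft2(freq, s=(self.grid_size, self.grid_size), norm="ortho")
        return prompt.reshape(b, c, n).transpose(1, 2)       # back to (B, N, C)


if __name__ == "__main__":
    # ViT-B/16 on a 512 x 512 crop gives a 32 x 32 patch grid (an assumption here).
    spt = SpectralPrompt(grid_size=32, dim=768)
    s = spt(torch.randn(2, 32 * 32, 768), torch.randn(2, 768))
    print(s.shape)                                           # torch.Size([2, 1024, 768])
```

In such a design the prompt is simply added to the patch tokens of the first two encoder layers, while deeper layers are left untouched.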
The input embeddings from the l-th layer of the image encoder in the CLIP model are denoted as g^l, h^l_1, h^l_2, · · · , h^l_N. Here, g^l represents the embedding for the [cls] token, and H^l = {h^l_1, h^l_2, · · · , h^l_N} corresponds to the embeddings of image patches. In the context of SPT, the CLIP image encoder's token sequences are extended with learnable tokens V^l = {v^l_1, v^l_2, · · · , v^l_M} in each layer. Furthermore, learnable spectral prompts S^l = {s^l_1, s^l_2, · · · , s^l_N} are added in the first two layers. These additions enhance the model's ability to process image features at multiple levels of abstraction. S^l is calculated from H^l, g^l, and a set of learnable filter parameters w_f; the process can be expressed as:

S^l = F^{-1}(F(H^l ⊙ g^l) ⊙ w_f),   (1)

where F is the 2D fast Fourier transform (FFT) and F^{-1} is the inverse FFT (IFFT). Then, when l ≤ 2, the layer processes the input tokens as:

[g^l, −, H^l] = Layer^l([g^{l-1}, V^{l-1}, H^{l-1} + S^{l-1}]),   (2)

and when l > 2, the transformer layer processes the input tokens as:

[g^l, −, H^l] = Layer^l([g^{l-1}, V^{l-1}, H^{l-1}]).   (3)

Spectral Guided Decode Layer In practical semantic segmentation applications, high-quality segmentation results are crucial for the success of the task. Recent work (Patro, Namboodiri, and Agneeswaran 2023) combined spectral layers with multi-head attention in a transformer architecture to capture relevant features in initial layers. LiTv2 (Pan, Cai, and Zhuang 2022) introduced a novel attention mechanism that processes high- and low-frequency components separately in attention layers, capturing local and global relationships effectively in classification and segmentation tasks. Drawing inspiration from these insights, we propose an innovative decoding method, as shown in Fig. 2(b), that introduces frequency-domain features during the decoding stage and significantly enhances segmentation performance. Firstly, the frequency-domain-guided decoder balances attention between small details and global structure, enabling the model to focus on local and overall features simultaneously. Secondly, guided by frequency-domain features, the decoder captures object boundaries and textures more accurately, improving the precision of the segmentation results. Most importantly, this decoder exhibits stronger generalization to unseen classes, which is crucial for unknown situations in real-world applications. The design comprises the following steps: (1) The high-frequency branch captures fine-grained local dependencies through local window self-attention, while the low-frequency branch applies average pooling to each window, obtaining low-frequency signals that capture the global dependencies of the input. This high- and low-frequency capturing is built on the multi-head self-attention (MSA) mechanism, which allows for capturing relations between distant locations in the input sequence X ∈ R^{N×D}. Here, N is the length of the input sequence, and D represents the hidden dimension. To achieve this, we divide the N_h heads in MSA into two groups with a split ratio α: αN_h heads are used for the high-frequency branch, and the remaining (1 − α)N_h heads are utilized for the low-frequency branch.
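As a rough illustration of this split (formalized in Eqs. (4)–(6) below), the sketch here keeps full-resolution queries in the low-frequency branch and lets them attend to average-pooled windows, following the LiTv2-style design cited above, so that the two branch outputs can be concatenated token-wise; the head ratio α, the 3 × 3 window, and all names are assumptions rather than the paper's exact implementation.

```python
# Rough sketch (under the stated assumptions) of splitting attention heads into
# a high-frequency branch (window self-attention) and a low-frequency branch
# (attention over average-pooled windows), then concatenating the two outputs.
import torch
import torch.nn as nn


def _attend(q, k, v):
    # Plain scaled dot-product attention over (..., heads, tokens, head_dim).
    attn = (q @ k.transpose(-2, -1)) * q.shape[-1] ** -0.5
    return attn.softmax(dim=-1) @ v


class HiLoAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8, alpha: float = 0.5, window: int = 3):
        super().__init__()
        self.hi_heads = max(1, int(alpha * num_heads))       # alpha * Nh heads
        self.lo_heads = max(1, num_heads - self.hi_heads)    # (1 - alpha) * Nh heads
        self.head_dim = dim // num_heads
        self.window = window
        self.hi_qkv = nn.Linear(dim, 3 * self.hi_heads * self.head_dim)
        self.lo_q = nn.Linear(dim, self.lo_heads * self.head_dim)
        self.lo_kv = nn.Linear(dim, 2 * self.lo_heads * self.head_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C); H and W are assumed divisible by the window size.
        b, h, w, c = x.shape
        s, hh, lh, d = self.window, self.hi_heads, self.lo_heads, self.head_dim

        # High-frequency branch: self-attention inside each non-overlapping window.
        xw = x.reshape(b, h // s, s, w // s, s, c).permute(0, 1, 3, 2, 4, 5)
        xw = xw.reshape(-1, s * s, c)
        q, k, v = self.hi_qkv(xw).reshape(-1, s * s, 3, hh, d).permute(2, 0, 3, 1, 4)
        hi = _attend(q, k, v).transpose(1, 2).reshape(b, h // s, w // s, s, s, hh * d)
        hi = hi.permute(0, 1, 3, 2, 4, 5).reshape(b, h, w, hh * d)

        # Low-frequency branch: full-resolution queries, average-pooled keys/values.
        pooled = nn.functional.avg_pool2d(x.permute(0, 3, 1, 2), s)
        pooled = pooled.flatten(2).transpose(1, 2)            # (B, HW / s^2, C)
        q = self.lo_q(x.reshape(b, h * w, c)).reshape(b, h * w, lh, d).transpose(1, 2)
        k, v = self.lo_kv(pooled).reshape(b, -1, 2, lh, d).permute(2, 0, 3, 1, 4)
        lo = _attend(q, k, v).transpose(1, 2).reshape(b, h, w, lh * d)

        # z = [MSA_alpha(x); MSA_(1-alpha)(x)]: channel-wise concatenation.
        return torch.cat([hi, lo], dim=-1)
```

With α = 0.5 and a head dimension that divides C evenly, the concatenation restores the original channel width.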
The high-frequency branch performs a simple non-overlapping (3 × 3) window partition of the input X and then computes its output by applying the αN_h self-attention heads within each window, linearly projecting their outputs, and concatenating them as follows:

MSA_α(X̂) = Concat_{h∈[αN_h]}[SA_h(X̂)],   (4)

where SA_h(X̂) denotes the output of the h-th self-attention head, and X̂ denotes the input with the non-overlapping windows already divided. Meanwhile, the low-frequency branch utilizes average pooling to extract low-frequency signals within each window, and its computation can be expressed as:

MSA_{1−α}(X̂) = Concat_{h∈[(1−α)N_h]}[SA_h(AvgPool(X̂))],   (5)

Finally, the overall output is obtained by concatenating the outputs from the two branches as follows:

z = [MSA_α(X̂); MSA_{1−α}(X̂)],   (6)

where [·] denotes the concatenation operation. (2) We emphasize task-relevant tokens and channels through frequency-domain feature extraction to select specific characteristics. We perform frequency-domain feature extraction on z ∈ R^{N×D} to identify task-related tokens and channels. The output is obtained using the following operation:

ẑ = P · sim(z, ξ) · z,   (7)

where ξ ∈ R^d and P ∈ R^{d×d} are task-specific parameters, and sim(·, ·) represents the cosine similarity ranging between [0, 1]. The resulting ẑ can be represented as [ẑ_1, ẑ_2, ..., ẑ_N] ∈ R^{N×D}, where ẑ_j denotes the embedding of the j-th patch. The matrix t = [t_1, t_2, ..., t_C] ∈ R^{C×D} represents the C classes, with d as the feature dimension of the CLIP model. Here, t_i denotes the representation of the i-th class, and [cls] corresponds to the global feature represented as g ∈ R^{N×D}. The relationship descriptor can be represented as:

t̂ = ϕ([t · g; t]),   (8)

where ϕ(·) projects [t · g; t] to the same dimension as ẑ. Semantic masks are calculated using the matrix product:

Masks = t̂ · ẑ^T ∈ R^{C×N},   (9)

The final segmentation results are obtained by applying the Argmax operation along the class dimension of Masks.

Loss Function We employ a combination of the focal loss (Lin et al. 2017) and the structural similarity (SSIM) loss (Wang, Simoncelli, and Bovik 2003). The total loss L is a linear combination of the focal loss and the SSIM loss, with coefficients γ and σ to balance their contributions:

L = γ · L_focal + σ · L_ssim,   (10)

The coefficients γ, σ control the relative importance of the focal loss and the SSIM loss in the overall loss function.

Experiments Datasets We conducted extensive experiments on two benchmark datasets to evaluate the effectiveness of our proposed method: PASCAL VOC 2012 (20) and COCO-Stuff 164K. Here are the details of each dataset: 1. PASCAL VOC 2012: This dataset consists of 10,582 augmented images for training and 1,449 for validation. We focus on 15 seen classes, ignoring the "background" class, and 5 unseen classes. 2. COCO-Stuff 164K: It is a large-scale dataset with 118,287 training images and 5,000 testing images, covering 171 classes. Among them, 156 classes are seen, and 15 classes are unseen. Evaluation Metrics As in previous studies, we assess the performance using pixel-wise classification accuracy (pAcc) and the mean intersection over union (mIoU) for both seen and unseen classes, referred to as mIoU(S) and mIoU(U), respectively. Additionally, we calculate the harmonic mean IoU (hIoU) between the seen and unseen classes as in ZegCLIP (Zhou et al. 2023), which is formulated as:

hIoU = (2 × mIoU(S) × mIoU(U)) / (mIoU(S) + mIoU(U)).
(11) Implementation Details Our proposed method is implemented using the MMSegmentation open-source toolbox(Contributors 2020) with PyTorch 1.10.1. All experiments were conducted on two H800 GPUs using the pre-trained CLIP ViT-B/16 model. The batch size was set to 16, and the images were resized to a resolution of 512 × 512. We performed a total of 20,000 training iterations on the PASCAL VOC 2012 dataset, and 96,000 iterations on the COCO-Stuff 164K dataset. Based on previous research works (Gu et al. 2020; Xu et al. 2021a; Ding et al. 2022; Zhou, Loy, and Dai 2022), we have set up the unseen classes. The optimizer used was AdamW, and we followed the default training schedule provided by the MMSeg toolbox. In SPT-SEG, it should be noted that the model learns multiple prompts exclusively from seen classes during training. The optimizer used was AdamW, and we followed the default training schedule provided by the MMSeg toolbox. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6373 Methods PASCAL VOC 2012 COCO-Stuff 164K pAcc mIoU(S) mIoU(U) hIoU pAcc mIoU(S) mIoU(U) hIoU SPNetCV PR′19 / 78.0 15.6 26.1 / 35.2 8.7 14.0 ZS3NeurIPS′19 / 77.3 17.7 28.7 / 34.7 9.5 15.0 CaGNetACMMM ′20 80.7 78.4 26.6 39.7 56.6 33.5 12.2 18.2 SIGNICCV ′21 / 75.4 28.9 41.7 / 32.3 15.5 20.9 JointICCV ′21 / 77.7 32.5 45.9 / / / / ZegFormerCV PR′22 / 86.4 63.6 73.3 / 36.6 33.2 34.8 zssegarXiv′21 90.0 83.5 72.5 77.5 60.3 39.3 36.3 37.8 ZegCLIPCV PR′23 94.6 91.9 77.8 84.3 62.0 40.2 41.4 40.8 SPT-SEG (Ours) 96.7 (+2.1) 92.9 (+1.0) 87.4 (+9.6) 90.1 (+5.8) 62.9 (+0.9) 40.6 (+0.4) 43.8 (+2.4) 42.1 (+1.3) ZegCLIP *CV PR′23 96.3 92.4 90.9 91.6 69.9 40.7 63.2 49.6 SPT-SEG * (Ours) 97.6 93.6 92.9 93.2 72.5 41.6 66.0 51.0 Table 1: Comparison with state-of-the-art methods on the PASCAL VOC 2012 and COCO-Stuff 164K datasets. The asterisk (*) denotes training involving all classes. The best results are highlighted in bold. Comparison with State-of-the-Art Methods To showcase the effectiveness of our method, we present the evaluation results in comparison with previous state-of-theart approaches, as shown in Tab. 1. Additionally, we include the results of fully supervised learning as an upper bound to demonstrate the performance gap between fully supervised segmentation and zero-shot segmentation on unseen classes. We provide qualitative results on the COCO-Stuff 164K dataset, depicted in Fig. 4. Our proposed method exhibits significant performance improvements, particularly for unseen classes, surpassing previous approaches, as depicted in Tab. 1. This highlights the superior generalization capability of our method compared to existing methods. Particularly noteworthy is the significant increase in mIoU for unseen classes in the VOC dataset 9.6% and for unseen classes in the COCO dataset 2.4% Fig. 4 showcases the segmentation outcomes of the ZegCLIP (Zhou et al. 2023) and our proposed SPT-SEG, both on seen and unseen classes. With the integration of our proposed designs, SPT-SEG demonstrates impressive segmentation capabilities on both seen and unseen classes, effectively distinguishing similar unseen classes. For example, our approach effectively segments small target ’sport ball’ objects and achieves full recognition of the unseen class ’playing field’ (Fig. 4(1)). Furthermore, our method successfully discriminates “plastic” classes from skateboard regions (Fig. 4(2)), and accurately segments “dog” instances bearing resemblance to “horses” (Fig. 4(3)). 
Overall, SPT-SEG completely segments the unseen classes(“playing field”, “plastic”) and significantly outperforms other methods in terms of segmentation details. These results confirm the effectiveness of our proposed method in achieving superior segmentation performance, especially for unseen classes. Ablation Study Detailed results of applying designs on baseline To demonstrate the effectiveness of our proposed designs, we further report the improvements of applying designs on baseline (ZegCLIP) in Tab. 2. The addition of the SPT sigBas. SPT SGD PASCAL VOC 2012 mIoU(S) mIoU(U) hIoU ✓ 91.9 77.8 84.3 ✓ ✓ 92.6 86.7 89.6 ✓ ✓ 92.0 79.9 85.5 ✓ ✓ ✓ 92.9 87.4 90.1 Table 2: Quantitative results on VOC dataset to demonstrate the effectiveness of our proposed two designs. Here ✓means that this component is applied. Note that our baseline (Bas.) method is ZegCLIP (Zhou et al. 2023). The best results are highlighted in bold. Depth PASCAL VOC 2012 mIoU(S) mIoU(U) hIoU 1-6 92.5 86.4 89.3 6-12 92.1 80.9 86.1 1-12 92.6 86.5 89.4 11-12 92.0 78.3 84.6 1-2 92.9 87.4 90.1 Table 3: Ablation on Spectral Prompt Tuning depth. The 1-st layer refers to the one closest to input. ViT-B has 12 layers in total.The best results are highlighted in bold. nificantly enhances the model’s performance on unseen data. When both SPT and SGD are utilized, the SPT-SEG model exhibits excellent results on the VOC test dataset. Effect of the depth of SPT Tab. 3 demonstrates the impact of SPT insertion positions and layers on SPT-SEG performance. The performance of SPT is notably superior when inserted in the earlier layers compared to the later ones. However, its overall performance is comparable when applied across all layers as well as with its application limited to the first two layers. This finding indicates the greater significance of early transformer layer spectral prompts over later layers’ prompts. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6374 (1) (3) (2) (b) GT (c) ZegCLIP (d) Ours (4) (5) (6) (7) (a) Image Figure 4: Qualitative results on COCO-Stuff 164K. (a) are the original testing images; (b) are the ground truths of each image.(c) represent the performance of ZegCLIP; (d) are the visualization results of our proposed SPT-SEG. Note that we have highlighted prominent regions using yellow arrows and marked other significant areas with yellow stars for emphasis. Layers PASCAL VOC 2012 mIoU(S) mIoU(U) hIoU 1 91.9 82.8 87.1 3 92.9 87.4 90.1 5 92.2 83.7 87.7 Table 4: Ablation on layers of Spectral Guided Decode Layer. The best results are highlighted in bold. Effect of Spectral Guided Decode layers To investigate the impact of decoder layers on the performance of SPTSEG, we conducted an ablation study on decoder layer depth. Tab. 4 demonstrates that within our research settings, the model achieved its optimal performance with 3 decoder layers. At this layer depth, the model exhibited excellent performance both at the pixel-level and class-level. However, when the decoder layers were increased to 5, we observed signs of overfitting, resulting in a decline in performance on the test set. Conversely, employing only 1 decoder layer significantly reduced the model’s performance. Limitations Limited by the recognition capability and resolution of CLIP, pixel classification may be prone to errors in complex scenes such as object occlusion and glass reflection (e.g. (Fig. 4(5))). Additionally, the ability to recognize details, such as object edges, also needs improvement. 
Resolving these limitations and enhancing the robustness of the SPTSEG method are important directions for future research. Conclusion In this work, we present an efficient one-stage direct zeroshot semantic segmentation method based on the pre-trained vision-language model CLIP. We introduce two innovative designs to transfer image classification capabilities to dense prediction tasks while maintaining a leading edge in zeroshot knowledge. These designs enable us to achieve competitive results on known classes and significantly improve performance on novel classes. To demonstrate the effectiveness of our approach, we comprehensively test its performance on two widely-used benchmark datasets, outperforming the previous state-of-the-art methods. Our research aims to explore the use of pre-trained visual language models for semantic segmentation. By integrating spectral information and enhancing the capability of CLIP, we successfully apply its zero-shot knowledge to downstream tasks, providing a flexible and accurate solution for zero-shot semantic segmentation. Acknowledgements This work was supported by Beijing Natural Science Foundation No. JQ23014, and in part by the National Natural Science Foundation of China (Nos. U21A20515, 62271074 and 62276031). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6375 References Baek, D.; Oh, Y.; and Ham, B. 2021. Exploiting a joint embedding space for generalized zero-shot semantic segmentation. In Proceedings of the IEEE/CVF international conference on computer vision, 9536–9545. Bucher, M.; Vu, T.-H.; Cord, M.; and P´erez, P. 2019. Zeroshot semantic segmentation. Advances in Neural Information Processing Systems, 32. Chen, X.; Yuan, Y.; Zeng, G.; and Wang, J. 2021. Semisupervised semantic segmentation with cross pseudo supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2613–2622. Cheng, J.; Nandi, S.; Natarajan, P.; and Abd-Almageed, W. 2021. Sign: Spatial-information incorporated generative network for generalized zero-shot semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9556–9566. Contributors, M. 2020. MMSegmentation: OpenMMLab Semantic Segmentation Toolbox and Benchmark. https: //github.com/open-mmlab/mmsegmentation. Ding, J.; Xue, N.; Xia, G.-S.; and Dai, D. 2022. Decoupling zero-shot semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11583–11592. Dong, J.; Cong, Y.; Sun, G.; Fang, Z.; and Ding, Z. 2021. Where and How to Transfer: Knowledge AggregationInduced Transferability Perception for Unsupervised Domain Adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–1. Dong, J.; Cong, Y.; Sun, G.; Zhong, B.; and Xu, X. 2020. What Can Be Transferred: Unsupervised Domain Adaptation for Endoscopic Lesions Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4022–4031. Esmaeilpour, S.; Liu, B.; Robertson, E.; and Shu, L. 2022. Zero-shot out-of-distribution detection based on the pretrained model clip. In Proceedings of the AAAI conference on artificial intelligence, volume 36, 6568–6576. Gao, P.; Geng, S.; Zhang, R.; Ma, T.; Fang, R.; Zhang, Y.; Li, H.; and Qiao, Y. 2021. Clip-adapter: Better visionlanguage models with feature adapters. arXiv preprint arXiv:2110.04544. Gu, Z.; Zhou, S.; Niu, L.; Zhao, Z.; and Zhang, L. 2020. Context-aware feature generation for zero-shot semantic segmentation. 
In Proceedings of the 28th ACM International Conference on Multimedia, 1921–1929. Hong, Y.; Wu, Q.; Qi, Y.; Rodriguez-Opazo, C.; and Gould, S. 2021. Vln bert: A recurrent vision-and-language bert for navigation. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, 1643–1653. Huang, Z.; Zeng, Z.; Huang, Y.; Liu, B.; Fu, D.; and Fu, J. 2021. Seeing out of the box: End-to-end pre-training for vision-language representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12976–12985. Jia, C.; Yang, Y.; Xia, Y.; Chen, Y.-T.; Parekh, Z.; Pham, H.; Le, Q.; Sung, Y.-H.; Li, Z.; and Duerig, T. 2021a. Scaling up visual and vision-language representation learning with noisy text supervision. In International conference on machine learning, 4904–4916. PMLR. Jia, C.; Yang, Y.; Xia, Y.; Chen, Y.-T.; Parekh, Z.; Pham, H.; Le, Q.; Sung, Y.-H.; Li, Z.; and Duerig, T. 2021b. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, 4904–4916. PMLR. Jia, M.; Tang, L.; Chen, B.-C.; Cardie, C.; Belongie, S.; Hariharan, B.; and Lim, S.-N. 2022. Visual prompt tuning. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIII, 709–727. Springer. Jiang, J.; Liu, Z.; and Zheng, N. 2022. Finetuning Pretrained Vision-Language Models with Correlation Information Bottleneck for Robust Visual Question Answering. arXiv preprint arXiv:2209.06954. Kamath, A.; Singh, M.; LeCun, Y.; Synnaeve, G.; Misra, I.; and Carion, N. 2021. Mdetr-modulated detection for endto-end multi-modal understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1780–1790. Kim, W.; Son, B.; and Kim, I. 2021. Vilt: Vision-andlanguage transformer without convolution or region supervision. In International Conference on Machine Learning, 5583–5594. PMLR. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; and Doll´ar, P. 2017. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, 2980–2988. Liu, Z.; Rodriguez-Opazo, C.; Teney, D.; and Gould, S. 2021. Image retrieval on real-life images with pretrained vision-and-language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2125–2134. Lu, X.; Wang, W.; Danelljan, M.; Zhou, T.; Shen, J.; and Van Gool, L. 2020. Video object segmentation with episodic graph memory networks. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part III 16, 661–679. Springer. Lu, Y.; Liu, J.; Zhang, Y.; Liu, Y.; and Tian, X. 2022. Prompt distribution learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5206– 5215. Pakhomov, D.; Hira, S.; Wagle, N.; Green, K. E.; and Navab, N. 2021. Segmentation in style: Unsupervised semantic image segmentation with stylegan and clip. arXiv preprint arXiv:2107.12518. Pan, Z.; Cai, J.; and Zhuang, B. 2022. Fast Vision Transformers with HiLo Attention. In NeurIPS. Pastore, G.; Cermelli, F.; Xian, Y.; Mancini, M.; Akata, Z.; and Caputo, B. 2021. A closer look at self-training for zero-label semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2693–2702. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6376 Patro, B. N.; Namboodiri, V. P.; and Agneeswaran, V. S. 2023. 
SpectFormer: Frequency and Attention is what you need in a Vision Transformer. arXiv preprint arXiv:2304.06446. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, 8748–8763. PMLR. Rao, Y.; Zhao, W.; Chen, G.; Tang, Y.; Zhu, Z.; Huang, G.; Zhou, J.; and Lu, J. 2022. Denseclip: Language-guided dense prediction with context-aware prompting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18082–18091. Sandler, M.; Zhmoginov, A.; Vladymyrov, M.; and Jackson, A. 2022. Fine-tuning image transformers using learnable memory. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12155–12164. Wang, C.; Xu, R.; Xu, S.; Meng, W.; and Zhang, X. 2023a. Automatic polyp segmentation via image-level and surrounding-level context fusion deep neural network. Engineering Applications of Artificial Intelligence, 123: 106168. Wang, C.; Xu, R.; Xu, S.; Meng, W.; and Zhang, X. 2023b. Treating Pseudo-labels Generation as Image Matting for Weakly Supervised Semantic Segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 755–765. Wang, W.; Bao, H.; Dong, L.; Bjorck, J.; Peng, Z.; Liu, Q.; Aggarwal, K.; Mohammed, O. K.; Singhal, S.; Som, S.; et al. 2023c. Image as a Foreign Language: BEiT Pretraining for Vision and Vision-Language Tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19175–19186. Wang, Z.; Lu, Y.; Li, Q.; Tao, X.; Guo, Y.; Gong, M.; and Liu, T. 2022. Cris: Clip-driven referring image segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 11686–11695. Wang, Z.; Simoncelli, E. P.; and Bovik, A. C. 2003. Multiscale structural similarity for image quality assessment. In The Thrity-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, volume 2, 1398–1402. Ieee. Xian, Y.; Choudhury, S.; He, Y.; Schiele, B.; and Akata, Z. 2019. Semantic projection network for zero-and few-label semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8256–8265. Xie, G.-S.; Liu, J.; Xiong, H.; and Shao, L. 2021. Scaleaware graph neural network for few-shot semantic segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5475–5484. Xing, Y.; Wu, Q.; Cheng, D.; Zhang, S.; Liang, G.; and Zhang, Y. 2022. Class-aware visual prompt tuning for vision-language pre-trained model. arXiv preprint arXiv:2208.08340. Xu, M.; Zhang, Z.; Wei, F.; Lin, Y.; Cao, Y.; Hu, H.; and Bai, X. 2021a. A simple baseline for zero-shot semantic segmentation with pre-trained vision-language model. arXiv preprint arXiv:2112.14757. Xu, R.; Wang, C.; Sun, J.; Xu, S.; Meng, W.; and Zhang, X. 2023a. Self Correspondence Distillation For End-to-End Weakly-Supervised Semantic Segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence. Xu, R.; Wang, C.; Xu, S.; Meng, W.; and Zhang, X. 2021b. DC-net: Dual context network for 2D medical image segmentation. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27– October 1, 2021, Proceedings, Part I 24, 503–513. Springer. Xu, R.; Wang, C.; Xu, S.; Meng, W.; and Zhang, X. 2023b. 
Dual-stream Representation Fusion Learning for accurate medical image segmentation. Engineering Applications of Artificial Intelligence, 123: 106402. Xu, R.; Wang, C.; Xu, S.; Meng, W.; and Zhang, X. 2023c. Wave-Like Class Activation Map With Representation Fusion for Weakly-Supervised Semantic Segmentation. IEEE Transactions on Multimedia. Xu, R.; Wang, C.; Zhang, J.; Xu, S.; Meng, W.; and Zhang, X. 2023d. Rssformer: Foreground saliency enhancement for remote sensing land-cover segmentation. IEEE Transactions on Image Processing, 32: 1052–1064. Yu, J.; Wang, Z.; Vasudevan, V.; Yeung, L.; Seyedhosseini, M.; and Wu, Y. 2022. Coca: Contrastive captioners are image-text foundation models. arXiv preprint arXiv:2205.01917. Zhang, R.; Zhang, W.; Fang, R.; Gao, P.; Li, K.; Dai, J.; Qiao, Y.; and Li, H. 2022. Tip-adapter: Training-free adaption of clip for few-shot classification. In European Conference on Computer Vision, 493–510. Springer. Zhou, C.; Loy, C. C.; and Dai, B. 2022. Extract free dense labels from clip. In European Conference on Computer Vision, 696–712. Springer. Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022a. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16816–16825. Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022b. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9): 2337–2348. Zhou, Z.; Lei, Y.; Zhang, B.; Liu, L.; and Liu, Y. 2023. Zegclip: Towards adapting clip for zero-shot semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11175–11185. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6377 | 2024 | 708 |
18,527 | SCTNet: Single-Branch CNN with Transformer Semantic Information for Real-Time Segmentation Zhengze Xu1*, Dongyue Wu1, Changqian Yu2, Xiangxiang Chu2, Nong Sang1, Changxin Gao1† 1National Key Laboratory of Multispectral Information Intelligent Processing Technology, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology 2Meituan {m202273191, dongyue wu nsang, cgao}@hust.edu.cn, [email protected], [email protected] Abstract Recent real-time semantic segmentation methods usually adopt an additional semantic branch to pursue rich long-range context. However, the additional branch incurs undesirable computational overhead and slows inference speed. To eliminate this dilemma, we propose SCTNet, a single branch CNN with transformer semantic information for real-time segmentation. SCTNet enjoys the rich semantic representations of an inference-free semantic branch while retaining the high efficiency of lightweight single branch CNN. SCTNet utilizes a transformer as the training-only semantic branch considering its superb ability to extract long-range context. With the help of the proposed transformer-like CNN block CFBlock and the semantic information alignment module, SCTNet could capture the rich semantic information from the transformer branch in training. During the inference, only the single branch CNN needs to be deployed. We conduct extensive experiments on Cityscapes, ADE20K, and COCO-Stuff10K, and the results show that our method achieves the new state-of-the-art performance. The code and model is available at https://github.com/xzz777/SCTNet. Introduction As a fundamental task in computer vision, semantic segmentation aims to assign a semantic class label to each pixel in the input image. It plays a vital role in autonomous driving, medical image processing, mobile applications, and many other fields. In order to achieve better segmentation performance, recent semantic segmentation methods pursue abundant long-range context. Different methods have been proposed to capture and encode rich contextual information, including large receptive fields (Chen et al. 2014, 2017, 2018), multi-scale feature fusion (Ronneberger, Fischer, and Brox 2015; Zhao et al. 2017), self-attention mechanism (Fu et al. 2019; Huang et al. 2019; Yuan et al. 2018; Zhao et al. 2018b; Dosovitskiy et al. 2020), etc. Among them, the selfattention mechanism, as an essential component of transformers, has been proven to have a remarkable ability to model long-range context. Although these works improve significantly, they usually lead to high computational costs. *Work strenthened during an internship at Meituan. †Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. STDC2-75 PIDNet-M AFFormer-B-100 AFFormer-B-75 SFNet-R18 RTFormer-S RTFormer-B TopFormer-B-100 SeaFormer-B-100 SCTNet-S-75 SCTNet-B-100 SCTNet-B-75 SCTNet-B-50 BiSeNetV2-L SegNext-T-100 SegNext-T-75 DDRNet23 DDRNet23Slim PIDNet-S 75 76 77 78 79 80 81 20 30 40 50 60 70 80 90 100 110 120 130 140 150 Accuracy (mIoU%) Inference Speed (FPS) Figure 1: The speed-accuracy performance on Cityscapes validation set. Our methods are presented in red stars, while others are presented in blue dots. Our SCTNet establishes a new state-of-the-art speed-accuracy trade-off. 
Note that self-attention-based works even have square computation complexity with respect to image resolution, which significantly increases latency in processing high-resolution images. These limitations hinder their application in realtime semantic segmentation. Many recent real-time works adopt a bilateral architecture to extract high-quality semantic information at a fast speed. BiSeNet (Yu et al. 2018) proposes a bilateral network to separate the detailed spatial features and ample contextual information at early stages and process them in parallel, which is shown in Figure 2(a). Following BiseNet (Yu et al. 2018), BiSeNetV2 (Yu et al. 2021) and STDC (Fan et al. 2021) make further efforts to strengthen the capability to extract rich long-range context or reduce the computational costs of the spatial branch. To balance inference speed and accuracy, DDRNet (Pan et al. 2022), RTFormer (Wang et al. 2022), and SeaFormer (Wan et al. 2023) adopt a feature-sharing architecture that divides spatial and contextual features at the deep stages, as shown in Figure 2(b). However, these methods introduce dense fusion modules between two branches to boost the semantic information of extracted features. In conclusion, all these bilateral methods suffer from limited The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6378 Input 1/4 1/8 1/16 1/32 FM Output 1/8 1/8 FM FM (a) (b) (c) Input 1/4 1/8 1/16 1/32 1/2 FM 1/4 1/8 Output Input Output 1/16 1/32 SIAM SIAM 1/4 1/8 1/4 1/8 1/16 1/32 Figure 2: Real-time semantic segmentation paradigms. (a) Decoupled bilateral network divides a semantic branch and a spatial branch at the early stage. (b) Feature sharing bilateral network separates the two branches at the latter stage and adopts dense fusion modules. (c) Our SCTNet applies a single hierarchy branch with a semantic extraction transformer, free from the extra branch and costly fusion module in inference. FM: Fusion Module, SIAM: Semantic Information Alignment Module. Dashed arrows and boxes denote training-only. inference speed and high computational costs due to the additional branch and multiple fusion modules. To eliminate the aforementioned dilemma, we propose a single-branch CNN with transformer semantic information for real-time segmentation (SCTNet). It can extract semantic information efficiently without heavy computation caused by the bilateral network. Specifically, SCTNet learns long-range context from a training-only transformer semantic branch to the CNN branch. To mitigate the semantic gap between the transformer and CNN, we elaborately design a transformer-like CNN block called CFBlock and utilize a shared decoder head before the alignment. With the aligned semantic information in training, the single-branch CNN can encode the semantic information and spatial details jointly. Therefore, SCTNet could align the semantic representation from the large effective receptive field of transformer architecture while maintaining the high efficiency of a lightweight single branch CNN architecture in inference. The overall architecture is illustrated in Figure 2(c). Extensive experimental results on three challenging datasets demonstrate that the proposed SCTNet has a better trade-off between accuracy and speed than previous works. Figure 1 intuitively shows the comparison between SCTNet and other real-time segmentation methods on the Cityscapes val set. 
The main contributions of the proposed SCTNet can be summarized as the following three aspects: • We propose a novel single-branch real-time segmentation network called SCTNet. By learning to extract rich semantic information utilizing semantic information alignment from the transformer to CNN, SCTNet enjoys high accuracy of the transformer while maintaining fast inference speed of the lightweight single branch CNN. • To alleviate the semantic gap between CNN features and transformer features, we design the CFBlock (ConvFormer Block), which could capture long-range context as a transformer block using only convolution operations. Moreover, we propose SIAM(Semantic Information Alignment Module) to align the features in a more effective way. • Extensive experimental results show that the proposed SCTNet outperforms existing state-of-the-art methods for real-time semantic segmentation on Cityscapes, ADE20K, and COCO-Stuff-10K. SCTNet provides a new view of boosting the speed and improving the performance for real-time semantic segmentation. Related Work Semantic Segmentation. FCN (Long, Shelhamer, and Darrell 2015) leads to the tendency to utilize CNN for semantic segmentation. Following FCN, a series of improved CNN-based semantic segmentation methods are proposed. DeepLab (Chen et al. 2017) enlarges the receptive field with dilated convolution. PSPNet (Zhao et al. 2017), U-Net (Ronneberger, Fischer, and Brox 2015), and RefineNet (Lin et al. 2017) fuse different level feature representations to capture multi-scale context. Some methods (Fu et al. 2019; Huang et al. 2019; Yuan et al. 2018; Zhao et al. 2018b)propose various attention modules to improve segmentation performance. In recent years, transformer has been adopted for semantic segmentation and shows promising performance. SETR (Zheng et al. 2021) directly applies the vision transformer to image segmentation for the first time. PVT (Wang et al. 2021) introduces the typical hierarchical architecture in CNN into the transformer-based semantic segmentation model. SegFormer (Xie et al. 2021) proposes an efficient multi-scale transformer-based segmentation model. Real-time Semantic Segmentation. Early real-time semantic segmentation methods (Paszke et al. 2016; Wu, Shen, and Hengel 2017) usually accelerate inference by compressing channels or fast down-sampling. ICNet (Zhao et al. 2018a) first introduces a multi-resolution image cascade network to accelerate the speed. BiSeNetV1 (Yu et al. 2018) and BiSeNetV2 (Yu et al. 2021) adopt two-branch architecture and feature fusion modules to achieve a better tradeoff between speed and accuracy. STDC (Fan et al. 2021) rethinks the two-branch network of BiSeNet, removes the spatial branch, and adds a detailed guidance module. DDRNets (Pan et al. 2022) achieves a better trade-off by sharing branches in the early stages. Very recently, some efficient transformer methods for real-time segmentation have been proposed, but they still have unresolved problems. TopFormer (Zhang et al. 2022) only uses transformer on 1/64 scale of the feature maps, leading to low accuracy. RTFormer (Wang et al. 2022) and SeaFormer (Wan et al. 2023) need frequent interaction between the two branches. This additional computation slows down the inference speed. In addition, there are also some single-branch and multi-branch methods in real-time segmentation. Attention mechanism. Attention mechanism has been widely used in computer vision in recent years. 
Many methods contribute to the attention mechanism with linear complexity. Among them, some classical linear attention designs like Swin (Liu et al. 2021) and MSG (Fang et al. 2022) contain frequent shift or reshape operations, which bring considerable latency. MSCA (Guo et al. 2022b) shows promising performance, but its large kernels are not deployment-friendly, and the multi-scale design of its attention further slows inference. External attention (Guo et al. 2022a) has a very simple form: it uses external parameters as the key and value and implements the attention mechanism with two linear layers. GFA (GPU-Friendly Attention) (Wang et al. 2022) improves external attention by replacing the head split in EA with grouped double normalization, which is more friendly to GPU devices.

Figure 3: The architecture of SCTNet. CFBlock (Conv-Former Block, detailed in Figure 4) takes advantage of the training-only Transformer branch (greyed-out in the dashed box) via SIAM (Semantic Information Alignment Module), which is composed of BFA (Backbone Feature Alignment) and SDHA (Shared Decoder Head Alignment).

Methodology Motivation Removing the semantic branch of bilateral networks can significantly speed up inference. However, this results in shallow single-branch networks that lack long-range semantic information, leading to low accuracy. While using deep encoders and powerful decoders or complex enhancement modules can recover accuracy, it slows down the inference process. To address this issue, we propose a training-only alignment method that enriches semantic information without sacrificing inference speed. Specifically, we propose SCTNet, a single-branch convolutional network with a training-only semantic extraction transformer, which enjoys the high accuracy of a transformer and the fast inference speed of a CNN. The overview of SCTNet is presented in Figure 3.

Conv-Former Block As different types of networks, CNNs and transformers extract significantly different feature representations. Directly aligning the features between the CNN and the transformer makes the learning process difficult, resulting in limited performance improvement. To make it easier for the CNN branch to learn how to extract high-quality semantic information from the transformer branch, we design the Conv-Former Block, which simulates the structure of the transformer block as closely as possible so as to better learn the semantic information of the transformer branch. Meanwhile, the Conv-Former Block implements the attention function using only efficient convolution operations. The structure of the Conv-Former Block is similar to that of a typical transformer encoder (Vaswani et al. 2017), as presented in the left of Figure 4. The process can be described as follows:

f = Norm(x + ConvAttention(x)),
y = Norm(f + FFN(f)),   (1)

where Norm(·) refers to batch normalization (Ioffe and Szegedy 2015), and x, f, y denote the input, hidden feature, and output, respectively.

Convolutional Attention.
Convolutional Attention. Attention mechanisms used for real-time segmentation should have the property of low latency and powerful semantic extraction ability. As discussed in the related work, we believe GFA is a potential candidate. Our convolutional attention is derived from GFA. There are two main differences between GFA and the proposed convolutional attention. Firstly, we replace the matrix multiplication in GFA with pixel-wise convolution operations. Point convolution is equivalent to pixel-to-pixel multiplication but without feature flattening and reshaping operations. These operations are detrimental to maintaining the inherent spatial structure and bring in extra inference latency. Moreover, convolution provides a more flexible way to extend external parameters.
Figure 4: Design of Conv-Former Block (left) and the details of convolutional attention (right). GDN means Grouped Double Normalization. ⊗ means convolution operations, ⊕ stands for addition, and k means the kernel size.
Then, due to the semantic gap between the transformer and the CNN, simply calculating the similarity between several learnable vectors and each pixel and then enhancing the pixels according to the similarity map and the learnable vectors is not enough to capture rich context. To better align the semantic information of the transformer, we enlarge the learnable vectors to learnable kernels. On the one hand, this converts the similarity calculation between pixels and learnable vectors into one between pixel patches and learnable kernels. On the other hand, the convolution operation with learnable kernels retains more local spatial information to some extent. The operations of convolutional attention can be summarized as follows:
X = θ(X ⊗ K) ⊗ K^T, (2)
where X ∈ R^{C×H×W}, K ∈ R^{C×N×k×k}, and K^T ∈ R^{N×C×k×k} represent the input feature map and the learnable query and key, respectively. C, H, W denote the channel, height, and width of the feature map, respectively. N denotes the number of learnable parameters, and k denotes the kernel size of the learnable parameters. θ symbolizes the grouped double normalization, which applies softmax on the dimension of H × W and a grouped L2 norm on the dimension of N. ⊗ means convolution operations. Taking efficiency into consideration, we implement the convolutional attention with stripe convolutions rather than standard convolutions. More specifically, we utilize a 1 × k and a k × 1 convolution to approximate a k × k convolution layer. Figure 4 illustrates the implementation details of the convolutional attention.
Feed Forward Network. A typical FFN plays a vital role in providing position encoding and embedding channels. The typical FFN (Feed Forward Network) in recent transformer models consists of an expansion point convolution, a depth-wise 3×3 convolution, and a squeeze point convolution. Different from the typical FFN, our FFN is made up of two standard 3 × 3 convolution layers. Compared with the typical FFN, our FFN is more efficient and provides a larger receptive field.
Semantic Information Alignment Module
A simple yet effective alignment module is proposed to conduct the feature learning in the training process, as shown in Figure 3. It can be divided into backbone feature alignment and shared decoder head alignment.
Backbone Feature Alignment.
Thanks to the transformerlike architecture of the Conv-Former Block, the alignment loss can easily align the Conv-Former Block’s features with the features of transformers. In short, the backbone feature alignment first down-sample or up-sample the feature from the transformer and CNN branches for alignment. Then it projects the feature of the CNN to the dimension of the transformer. The projection can: 1) unify the number of channels and 2) avoid direct alignment of features, which damages the supervision of ground truth for the CNN in the training process. Finally, a semantic alignment loss is applied to the projected features to align the semantic representations. Shared Decoder Head Alignment. Transformer decoders often use the features of multiple stages for complex decoding, while SCTNet decoder only picks the features of stage2&stage4. Considering the significant difference in decoding space between them, direct alignment of the decoding features and output logits can only get limited improvement. Therefore, we propose shared decoder head alignment. Specifically, the concatenation stage2&stage4 features of the single-branch CNN are input into a point convolution to expand the dimension. Then the high-dimension features are passed through the transformer decoder. The transformer decoder’s new output features and logits are used to calculate alignment loss with its origin outputs. Overall Architecture To reduce computational costs while obtain rich semantic information, we simplify the popular two-branches architecture to one swift CNN branch for inference and a transformer branch for semantic alignment only for training. Backbone. To improve the inference speed, SCTNet adopts a typical hierarchical CNN backbone. SCTNet starts from a stem block consisting of two sequential 3×3 convolution layers. The former two stages consist of stacked residual blocks (He et al. 2016), and the latter two stages include the proposed transformer-like blocks called Conv-Former Blocks (CFBlocks). The CFBlock employs several elaborately designed convolution operations to perform the similar long-range context capturing function of the transformer block. We apply a convdown layer consisting of a stridden convolution with batch normal and ReLu activation for down-sampling at the beginning of stage 2 ∼4, which is omitted in Figure 3 for clarity. Decoder Head. The decoder head consists of a DAPPM (Pan et al. 2022) and a segmentation head. To further enrich the context information, we add a DAPPM after the output of stage 4. Then we concatenate the output with the feature map of Stage 2. Finally, this output feature is passed into a segmentation head. Precisely, the segmentation head consists of a 3×3 Conv-BN-ReLU operator followed by a 1×1 convolution classifier. 
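As a concrete illustration of the backbone feature alignment step, here is a minimal sketch under stated assumptions: the feature shapes, the 1×1 projection, and the bilinear resizing are placeholders, and the alignment loss is passed in (the CWD-based loss used by the paper is defined in the Alignment Loss section below).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BackboneFeatureAlignmentSketch(nn.Module):
    """Training-only BFA sketch: resize the CNN feature to the transformer feature's
    spatial size, project it to the transformer's channel dimension, then apply a
    semantic alignment loss on the projected feature."""
    def __init__(self, cnn_channels, trans_channels, align_loss):
        super().__init__()
        # projecting (rather than aligning raw CNN features) avoids damaging the
        # ground-truth supervision of the CNN branch during training
        self.proj = nn.Conv2d(cnn_channels, trans_channels, kernel_size=1)
        self.align_loss = align_loss

    def forward(self, cnn_feat, trans_feat):
        cnn_feat = F.interpolate(cnn_feat, size=trans_feat.shape[-2:],
                                 mode="bilinear", align_corners=False)
        # the transformer feature is treated as a fixed target here (an assumption)
        return self.align_loss(self.proj(cnn_feat), trans_feat.detach())
```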
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6381 Method Reference #Params↓ Resolution FPS(TRT)↑ FPS(Torch)↑ mIoU(%)↑ SFNet-ResNet18 ECCV 2020 12.3M 2048 × 1024 50.5 24.0 79.0 AFFormer-B-Seg100 AAAI 2023 3.0M 2048 × 1024 58.3 28.4 78.7 AFFormer-B-Seg75 AAAI 2023 3.0M 1536 × 768 96.4 38.6 76.5 AFFormer-B-Seg50 AAAI 2023 3.0M 1024 × 512 148.4 49.5 73.5 SegNext-T-Seg100 NeurIPS 2022 4.3M 2048 × 1024 46.5 28.1 79.8 SegNext-T-Seg75 NeurIPS 2022 4.3M 1536 × 768 78.3 45.6 78.0 PIDNet-S CVPR 2023 7.6M 2048 × 1024 127.1 93.2 78.8 PIDNet-M CVPR 2023 34.4M 2048 × 1024 90.7 39.8 80.1 CNN-based Bilateral Networks BiSeNet-ResNet18 ECCV 2018 49.0M 1536 × 768 182.9 112.3 74.8 BiSeNetV2-L IJCV 2021 1024 × 512 102.3 67.6 75.8 STDC1-Seg75 CVPR 2021 14.2M 1536 × 768 209.5 101.9 74.5 STDC2-Seg75 CVPR 2021 22.2M 1536 × 768 149.2 84.3 77.0 STDC1-Seg50 CVPR 2021 14.2M 1024 × 512 397.6 146.2 72.2 STDC2-Seg50 CVPR 2021 22.2M 1024 × 512 279.7 94.6 74.2 DDRNet-23-S TIP 2022 5.7M 2048 × 1024 138.9 106.7 77.8 DDRNet-23 TIP 2022 20.1M 2048 × 1024 101.9 56.7 79.5 Transformer-based Bilateral Networks TopFormer-B-Seg100 CVPR 2022 5.1M 2048 × 1024 128.4 81.4 76.3 TopFormer-B-Seg50 CVPR 2022 5.1M 1024 × 512 410.9 95.7 70.7 SeaFormer-B-Seg100 ICLR 2023 8.6M 2048 × 1024 103.6 37.5 77.7 SeaFormer-B-Seg50 ICLR 2023 8.6M 1024 × 512 231.6 45.2 72.2 RTFormer-S NeurIPS 2022 4.8M 2048 × 1024 89.6 76.3 RTFormer-B NeurIPS 2022 16.8M 2048 × 1024 50.2 79.3 SCTNet-S-Seg50 Ours 4.6M 1024 × 512 451.2 160.3 72.8 SCTNet-S-Seg75 Ours 4.6M 1536 × 768 233.3 149.2 76.1 SCTNet-B-Seg50 Ours 17.4M 1024 × 512 374.6 144.9 76.5 SCTNet-B-Seg75 Ours 17.4M 1536 × 768 186.6 105.2 79.8 SCTNet-B-Seg100 Ours 17.4M 2048 × 1024 105.0 62.8 80.5 Table 1: Comparisons with other state-of-the-art real-time methods on Cityscapes val set. Seg100, Seg75, Seg50 denote the input size of 1024 × 2048, 768 × 1536, 512 × 1024, respectively. #Params refers to the number of parameters. Training Phase. It is well known that transformer excels at capturing global semantic context. On the other hand, CNN has been widely proven to be better at modeling hierarchical locality information than transformers. Motivated by the advantages of transformer and CNN, we explore equipping a real-time segmentation network with both merits. We propose a single-branch CNN that learns to align its features with those of a powerful transformer, which is illustrated in the blue dotted box in Figure 3. This feature alignment enables the single-branch CNN to extract both rich global context and detailed spatial information. Specifically, there are two streams in the training phase. SCTNet adopts a train-only transformer as the semantic branch to extract powerful global semantic context. The semantic information alignment module supervises the convolution branch to align high-quality global context from the transformer. Inference Phase. To avoid the sizeable computation costs of two branches, only the CNN branch is deployed in the inference. With the transformer-aligned semantic information, the single-branch CNN can generate accurate segmentation results without the extra semantic extraction or costly dense fusion. Specifically, the input image is fed into a singlebranch hierarchy convolution backbone. Then the decoder head picks up the features in the backbone and conducts simple concatenation followed by pixel-wise classification. 
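The training/inference asymmetry described above can be summarized in a short sketch. Everything here is a hedged illustration: the callables `cnn`, `transformer`, `decoder_head`, `bfa`, `sdha`, `seg_loss_fn`, and the loss weight `alpha` are placeholders, and treating the transformer as a fixed teacher is one possible choice rather than the paper's stated setting.

```python
import torch

def sctnet_training_step(image, label, cnn, transformer, decoder_head,
                         bfa, sdha, seg_loss_fn, alpha=1.0):
    """One training iteration of the two-stream setup (sketch).
    At inference only decoder_head(cnn(image)) is executed."""
    cnn_feats = cnn(image)                  # list of hierarchical CNN features (assumed)
    with torch.no_grad():                   # transformer acts as a semantic teacher here
        trans_feats = transformer(image)
    logits = decoder_head(cnn_feats)
    loss = seg_loss_fn(logits, label)                            # supervised segmentation loss
    loss = loss + alpha * bfa(cnn_feats[-1], trans_feats[-1])    # backbone feature alignment
    loss = loss + alpha * sdha(cnn_feats, trans_feats)           # shared decoder head alignment
    return loss
```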
Alignment Loss
For better alignment of semantic information, an alignment loss focusing on semantic information rather than spatial information is needed. In the implementation, we use the CWD loss (channel-wise distillation loss) (Shu et al. 2021) as the alignment loss, which shows better results than other loss functions. The CWD loss can be summarized as follows:
ϕ(x_c) = exp(x_{c,i} / T) / Σ_{i=1}^{W·H} exp(x_{c,i} / T), (3)
L_cwd = (T^2 / C) Σ_{c=1}^{C} Σ_{i=1}^{H·W} ϕ(x_{c,i}^T) · log[ ϕ(x_{c,i}^T) / ϕ(x_{c,i}^S) ], (4)
where c = 1, 2, ..., C indexes the channel, i = 1, 2, ..., H·W denotes the spatial location, and x^T and x^S are the feature maps of the transformer branch and the CNN branch, respectively. ϕ converts the feature activation into a channel-wise probability distribution, removing the influence of scale differences between the transformer and the compact CNN. To minimize L_cwd, ϕ(x_{c,i}^S) should be large when ϕ(x_{c,i}^T) is large. But when ϕ(x_{c,i}^T) is small, the value of ϕ(x_{c,i}^S) does not matter. This forces the CNN to learn the distribution of the foreground salience, which contains the semantic information. T denotes a hyper-parameter called temperature; the larger T is, the softer the probability distribution is.
Experiments
Datasets and Implementation Details
We conduct experiments with SCTNet on three datasets, i.e., Cityscapes (Cordts et al. 2016), ADE20K (Zhou et al. 2017), and COCO-Stuff-10K (Caesar, Uijlings, and Ferrari 2018), to demonstrate the effectiveness of our method. For a fair comparison, we build our base model SCTNet-B with a size comparable to RTFormer-B/DDRNet-23/STDC2. Furthermore, we also introduce a smaller variant called SCTNet-S. We first pre-train our CNN backbones on ImageNet (Deng et al. 2009), then fine-tune them on the semantic segmentation datasets. The semantic transformer branch in the training phase can be any hierarchical transformer network. In our implementation, we choose SegFormer as the transformer branch for all experiments. We measure the inference speed of all methods on a single NVIDIA RTX 3090. All reported FPS results are obtained under the same input resolution for fair performance comparison unless specified. For Cityscapes, we measure the speed of implementations with both Torch and TensorRT.
Comparison with State-of-the-art Methods
Results on Cityscapes. The corresponding results on Cityscapes (Cordts et al. 2016) are shown in Table 1. Our SCTNet outperforms other real-time methods by a large margin and attains the best speed-accuracy trade-off with both TensorRT and Torch implementations. For example, our SCTNet-B-Seg100 achieves 80.5% mIoU at 62.8 FPS, which is a new state-of-the-art performance for real-time segmentation. Our SCTNet-B-Seg75 reaches 79.8% mIoU, which is better in accuracy than the state-of-the-art transformer-based bilateral network RTFormer-B and the CNN-based bilateral network DDRNet-23, while being about twice as fast. Our SCTNet-B is faster at all input resolutions with better mIoU results than all other methods. Besides, our SCTNet-S also achieves a better trade-off compared with STDC2 (Fan et al. 2021), RTFormer-S (Wang et al. 2022), SeaFormer-B (Wan et al. 2023) and TopFormer-B (Zhang et al. 2022).
Results on ADE20K. On ADE20K (Zhou et al. 2017), our SCTNet achieves the best accuracy with the fastest speed. For instance, our SCTNet-B achieves 43.0% mIoU at 145.1 FPS, which is about 1.6 times faster than RTFormer-B (Wang et al. 2022) with 0.9% higher mIoU performance.
Our SCTNet-S reaches 37.7% mIoU while keeping the highest FPS among all other methods on ADE20K (Zhou et al. 2017). Considering the large variety of images and various semantic categories in ADE20K (Zhou et al. 2017), this outstanding results further also demonstrate the generalization capability of our SCTNet. Results on COCO-Stuff-10K. The corresponding results on COCO-Stuff-10K are shown in Table 3. SCTNet shows SOTA performance and maintains the highest inference Method #Params↓ FPS↑ mIoU(%)↑ FCN(MV2) 9.8M 64.4∗ 19.7 PSPNet(MV2) 13.7M 57.7∗ 29.6 DeepLabV3+(MV2) 15.4M 43.1∗ 34.0 SegFormerB0 3.8M 84.4 37.4 TopFormer-B 5.1M 96.2 39.2 SeaFormer-B 8.6M 44.5 41.0 SegNext-T 4.3M 60.3 41.1 AFFormer-B 3.0M 49.6 41.8 RTFormer-S 4.8M 95.2 36.7 RTFormer-B 16.8M 93.4 42.1 SCTNet-S 4.7M 158.4 37.7 SCTNet-B 17.4M 145.1 43.0 Table 2: Comparisons with other state-of-the-art real-time methods on ADE20K. The FPS is measured at resolution 512 × 512. * means speed from other papers, MV2 stands for MobileNetV2. Method #Params↓ FPS↑ mIoU(%)↑ PSPNet50 6.6∗ 32.6 ICNet 35.7∗ 29.1 BiSeNetV2-L 65.1 28.7 TopFormer-B 5.1M 94.7 33.4 SeaFormer-B 8.6M 41.9 34.1 AFFormer-B 3.0M 46.5 35.1 DDRNet23 20.1M 108.8 32.1 RTFormer-B 16.8M 90.9 35.3 SCTNet-B 17.4M 141.5 35.9 Table 3: Comparisons with other state-of-the-art real-time methods on COCO-Stuff-10K test set. The FPS is measured at resolution 640 × 640. speed on COCO-Stuff-10K in real-time semantic segmentation methods. With the input size 640 × 640, SCTNet-B achieves 35.9% mIoU at 141.5 FPS, which is 0.6% higher than RTFormer-B, and about 1.6 times faster. Ablation Study Comparison on Different Types of Blocks. To verify the effectiveness of our proposed CFBlock, we replace the CFBlocks with other kinds of convolution blocks and transformer blocks in real-time segmentation. For quick evaluations, all these results in Table 4 are not pre-trained on ImageNet. We select four kinds of blocks for comparison. As shown in Table 4, our CFBlock outperforms the typical ResBlock FPS↑ mIoU(%)↑ param ResBlock 66.7 77.9 15.3M SegFormerBlock 57.3 77.7 22.2M GFABlock 66.2 78.5 16.3M MSCANBlock 60.5 79.3 19.8M CFBlock (Ours) 62.8 79.4 17.4M Table 4: Comparison of different blocks. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6383 (a) Image (b) GT (c) DDRNet-23 (d) RTFormer-B (e) SCTNet-B Figure 5: Visualization results on Cityscapes validation set. Compared with DDRNet-23(Pan et al. 2022) and RTFormerB (Wang et al. 2022), SCTNet-B generates masks with finer details as highlighted in the light blue box and more accurate large-area predictions, as highlighted in the yellow box. Block and the lightweight SegFormer Block by a significant mIoU margin. Moreover, compared with the state-of-theart GFABlock (Wang et al. 2022) and MSCANBlock from SegNext (Guo et al. 2022b), our CFBlock get better speed and accuracy trade-off. Our CFBlock has 0.9% higher mIoU than GFABlock and maintains the similar performance with fewer parameters and faster speed than MSCANBlock. This also demonstrates that our SCTNet can better mitigate the gap of semantic information between CNN and transformer while getting rid of the high computation cost. Effectiveness of the Semantic Information Alignment Module. Although our SIAM(semantic information alignment module) is closely related to the elaborately designed SCTNet, it can also improve the performance of other CNN and transformer segmentation methods. 
As presented in Table 5, employing our SIAM attains consistent improvements on SegFormer, SegNext, SeaFormer, and DDRNet, which proves the effectiveness and generalization capability of our proposed SIAM. At the same time, as representatives of the bilateral-branch transformer and the bilateral-branch CNN network, the improvements of SeaFormer and DDRNet are relatively slim. This may be attributed to the fact that their bilateral-branch network structure already benefits from the additional semantic branch. And this also confirms that the cooperation of our SIMA and training-only transformer does play the role of the semantic branch in the bilateral-branch network, leading to improvements in the accuracy of the single-branch network. Components Ablation. We explore the effect of the proposed components in Table 6. Take Seg100 as an example, simply replacing the Resblock with our CFBlock brings a 2.1% improvement of mIoU with little speed loss. The BFA leads to a 1.2% higher mIoU, and the SDHA further attains a 0.8% improvement of mIoU without sacrificing speed. Visualization Results Figure 5 shows visualization results on Cityscapes (Cordts et al. 2016) validation set. Compared with DDRNet and Block Seg100(%) Seg75(%) Seg50(%) SegNext-T 79.8 78.0 SegNext-T+SIAM 80.1(+0.3) 78.2(+0.2) SegFormer-B0 74.7 74.4 70.7 SegFormer-B0+SIAM 77.3(+2.6) 76.8(+2.4) 72.5(+1.8) SeaFormer-B 77.7 72.2 SeaFormer-B+SIAM 78.1(+0.4) 72.5(+0.3) DDRNet-23 79.5 DDRNet-23+SIAM 79.6(+0.1) SCTNet-B-SIAM 78.5 77.5 75.2 SCTNet-B (Ours) 80.5(+2.0) 79.8(+2.3) 76.5(+1.3) Table 5: Comparison of the effect of the SIAM. Components Seg100(%) Seg75(%) Seg50(%) FPS(Seg100) Baseline 76.4 76.0 73.0 66.7 +CFBlock 78.5(+2.1) 77.5(+1.5) 75.2(+2.2) 62.8 +BFA∗ 79.7(+1.2) 79.1(+1.6) 75.7(+0.5) 62.8 +SDHA 80.5(+0.8) 79.8(+0.7) 76.5(+0.8) 62.8 Table 6: Ablation studies on the components of SCTNet RTFormer, our SCTNet provides not only better results for those classes with large areas like roads, sidewalks, and big trucks but also more accurate boundaries for small or thin objects such as poles, traffic lights, traffic signs, and cars. This indicates that SCTNet extracts high-quality long-range context while preserving fine details. Conclusion In this paper, we propose SCTNet, a novel single-branch architecture that can extract high-quality long-range context without extra inference computation cost. Extensive experiments demonstrate that SCTNet achieves new state-ofthe-art results. Moreover, by demonstrating the efficiency of SCTNet, we provide a novel insight for the semantic branch in the bilateral-branch network and a new way to boost the real-time segmentation community by not only adopting the structure of the transformer but also unitizing its knowledge. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6384 Acknowledgments This work is supported by Hubei Provincial Natural Science Foundation of China No.2022CFA055 and the National Natural Science Foundation of China No.62176097. References Bo, D.; Pichao, W.; and Wang, F. 2023. AFFormer: Head-Free Lightweight Semantic Segmentation with Linear Transformer. In Proceedings of the AAAI Conference on Artificial Intelligence. Caesar, H.; Uijlings, J.; and Ferrari, V. 2018. Coco-stuff: Thing and stuff classes in context. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1209–1218. Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; and Yuille, A. L. 2014. 
Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv preprint arXiv:1412.7062. Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; and Yuille, A. L. 2017. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4): 834–848. Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; and Adam, H. 2018. Encoder-decoder with atrous separable convolution for semantic image segmentation. In European Conference on Computer Vision, 801–818. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; and Schiele, B. 2016. The cityscapes dataset for semantic urban scene understanding. In IEEE Conference on Computer Vision and Pattern Recognition, 3213–3223. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and FeiFei, L. 2009. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, 248–255. Ieee. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Fan, M.; Lai, S.; Huang, J.; Wei, X.; Chai, Z.; Luo, J.; and Wei, X. 2021. Rethinking bisenet for real-time semantic segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9716–9725. Fang, J.; Xie, L.; Wang, X.; Zhang, X.; Liu, W.; and Tian, Q. 2022. Msg-transformer: Exchanging local spatial information by manipulating messenger tokens. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12063–12072. Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; and Lu, H. 2019. Dual attention network for scene segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3146–3154. Guo, M.-H.; Liu, Z.-N.; Mu, T.-J.; and Hu, S.-M. 2022a. Beyond self-attention: External attention using two linear layers for visual tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence. Guo, M.-H.; Lu, C.-Z.; Hou, Q.; Liu, Z.; Cheng, M.-M.; and Hu, S.-M. 2022b. Segnext: Rethinking convolutional attention design for semantic segmentation. Advances in Neural Information Processing Systems, 35: 1140–1156. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 770–778. Huang, Z.; Wang, X.; Huang, L.; Huang, C.; Wei, Y.; and Liu, W. 2019. Ccnet: Criss-cross attention for semantic segmentation. In IEEE International Conference on Computer Vision, 603–612. Ioffe, S.; and Szegedy, C. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 448–456. pmlr. Li, X.; You, A.; Zhu, Z.; Zhao, H.; Yang, M.; Yang, K.; Tan, S.; and Tong, Y. 2020. Semantic flow for fast and accurate scene parsing. In European Conference on Computer Vision, 775–793. Springer. Lin, G.; Milan, A.; Shen, C.; and Reid, I. 2017. Refinenet: Multi-path refinement networks for high-resolution semantic segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1925–1934. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. 
In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10012– 10022. Long, J.; Shelhamer, E.; and Darrell, T. 2015. Fully convolutional networks for semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, 3431– 3440. Pan, H.; Hong, Y.; Sun, W.; and Jia, Y. 2022. Deep dualresolution networks for real-time and accurate semantic segmentation of traffic scenes. IEEE Transactions on Intelligent Transportation Systems, 24(3): 3448–3460. Paszke, A.; Chaurasia, A.; Kim, S.; and Culurciello, E. 2016. Enet: A deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147. Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 234–241. Springer. Shu, C.; Liu, Y.; Gao, J.; Yan, Z.; and Shen, C. 2021. Channel-wise knowledge distillation for dense prediction. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5311–5320. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Neural Information Processing Systems, 30. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6385 Wan, Q.; Huang, Z.; Lu, J.; Yu, G.; and Zhang, L. 2023. SeaFormer: Squeeze-enhanced Axial Transformer for Mobile Semantic Segmentation. arXiv preprint arXiv:2301.13156. Wang, J.; Gou, C.; Wu, Q.; Feng, H.; Han, J.; Ding, E.; and Wang, J. 2022. RTFormer: Efficient Design for Real-Time Semantic Segmentation with Transformer. In Advances in Neural Information Processing Systems. Wang, W.; Xie, E.; Li, X.; Fan, D.-P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; and Shao, L. 2021. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 568–578. Wu, Z.; Shen, C.; and Hengel, A. v. d. 2017. Real-time semantic image segmentation via spatial sparsity. arXiv preprint arXiv:1712.00213. Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J. M.; and Luo, P. 2021. SegFormer: Simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems, 34: 12077–12090. Xu, J.; Xiong, Z.; and Bhattacharyya, S. P. 2023. PIDNet: A Real-Time Semantic Segmentation Network Inspired by PID Controllers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19529– 19539. Yu, C.; Gao, C.; Wang, J.; Yu, G.; Shen, C.; and Sang, N. 2021. Bisenet v2: Bilateral network with guided aggregation for real-time semantic segmentation. International Journal of Computer Vision, 129: 3051–3068. Yu, C.; Wang, J.; Peng, C.; Gao, C.; Yu, G.; and Sang, N. 2018. BiSeNet: Bilateral segmentation network for realtime semantic segmentation. In European Conference on Computer Vision, 325–341. Yuan, Y.; Huang, L.; Guo, J.; Zhang, C.; Chen, X.; and Wang, J. 2018. Ocnet: Object context network for scene parsing. arXiv preprint arXiv:1809.00916. Zhang, W.; Huang, Z.; Luo, G.; Chen, T.; Wang, X.; Liu, W.; Yu, G.; and Shen, C. 2022. TopFormer: Token pyramid transformer for mobile semantic segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12083–12093. Zhao, H.; Qi, X.; Shen, X.; Shi, J.; and Jia, J. 2018a. Icnet for real-time semantic segmentation on high-resolution images. In European Conference on Computer Vision, 405–420. 
Zhao, H.; Shi, J.; Qi, X.; Wang, X.; and Jia, J. 2017. Pyramid scene parsing network. In IEEE Conference on Computer Vision and Pattern Recognition, 2881–2890. Zhao, H.; Zhang, Y.; Liu, S.; Shi, J.; Loy, C. C.; Lin, D.; and Jia, J. 2018b. Psanet: Point-wise spatial attention network for scene parsing. In European Conference on Computer Vision, 267–283. Zheng, S.; Lu, J.; Zhao, H.; Zhu, X.; Luo, Z.; Wang, Y.; Fu, Y.; Feng, J.; Xiang, T.; Torr, P. H.; et al. 2021. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6881–6890. Zhou, B.; Zhao, H.; Puig, X.; Fidler, S.; Barriuso, A.; and Torralba, A. 2017. Scene parsing through ade20k dataset. In IEEE Conference on Computer Vision and Pattern Recognition, 633–641. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6386 | 2024 | 709 |
18,528 | DMMR: Cross-Subject Domain Generalization for EEG-Based Emotion Recognition via Denoising Mixed Mutual Reconstruction Yiming Wang, Bin Zhang∗, Yujiao Tang School of Software Engineering, Xi’an Jiaotong University, Xi’an 710049, China {yimingwang, 3121358009}@stu.xjtu.edu.cn, [email protected] Abstract Electroencephalography (EEG) has proven to be effective in emotion analysis. However, current methods struggle with individual variations, complicating the generalization of models trained on data from source subjects to unseen target subjects. To tackle this issue, we propose the Denoising Mixed Mutual Reconstruction (DMMR) model, employing a two-stage pre-training followed by fine-tuning approach. During the pre-training phase, DMMR leverages self-supervised learning through a multi-decoder autoencoder, which encodes and reconstructs features of one subject, aiming to generate features resembling those from other subjects within the same category, thereby encouraging the encoder to learn subject-invariant features. We introduce a hidden-layer mixed data augmentation approach to mitigate the limitations posed by the scarcity of source data, thereby extending the method to a two-stage process. To bolster stability against noise, we incorporate a noise injection method, named “Time Steps Shuffling”, into the input data. During the fine-tuning phase, an emotion classifier is integrated to extract emotionrelated features. Experimental accuracy on the SEED and SEED-IV datasets reached 88.27% (±5.62) and 72.70% (±8.01), respectively, demonstrating state-of-the-art and comparable performance, thereby showcasing the superiority of DMMR. The proposed data augmentation and noise injection methods were observed to complementarily enhance accuracy and stability, thus alleviating the aforementioned issues. Introduction Human emotions are closely related to health conditions and behavioral patterns, such as Autism Spectrum Disorder (Mayor-Torres et al. 2021) and depression (Bocharov, Knyazev, and Savostyanov 2017), as well as malicious behaviors resulting from the accumulation of negative emotions. Real-time monitoring of individuals’ emotional states can contribute to objective health assessments and early warning of malicious behaviors. Due to the difficulty in disguising physiological signals, wearable devices are commonly used to monitor emotion-related physiological signals, such as EEG, eye movements (Lu et al. 2015), facial ∗Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. electromyography (Chen et al. 2015), etc. Among them, EEG-based emotion recognition has become a crucial means of emotion identification due to its high temporal resolution and accuracy. EEG signals from specific channels and frequency bands demonstrate different responses to various emotional stimuli (Zheng and Lu 2015), making it possible to detect finegrained emotional tendencies. However, individuals differ in their cranial structure and emotional experiences, which result in varying sensitivity to the same emotion. Consequently, there are significant distributional differences among data from different subjects, making it challenging for a model trained on source subject data to generalize effectively to target subjects. 
To solve this problem, transfer learning and other methods are employed to extract subjectinvariant emotion features, with the expectation that emotional knowledge can be effectively transferred to target subjects from data of source subjects. Early works assumed that a large amount of unlabeled data or a small amount of labeled data from the target subjects is available, explicitly narrowing the distribution gap between source and target subjects. These methods are known as unsupervised domain adaptation (Luo and Lu 2021; Li et al. 2018a) and semisupervised domain adaptation approaches (Li et al. 2020a). However, these approaches rely on the target subject's data to train the model, making them less user-friendly. A more challenging task is domain generalization (DG), which assumes that the target subjects are entirely unseen during the training process, so the model is trained solely on the source subject’s data to create a subject-invariant and robust model. There are two main types of cross-subject DG methods for EEG-based emotion recognition: Sample-intrinsic subject-invariant feature extraction methods (SI-SIFE) and Cross-subject subject-invariant feature extraction methods (CS-SIFE). The two methods are related to two non-inclusive categories of features: internally-invariant and mutually-invariant (Lu et al. 2022). The former extracts features The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 628 from a single subject, the later takes into account the differences and traits among subjects, treating each subject as a separate domain, thereby incorporating traditional DG techniques to extract subject-invariant features. Despite related methods’ great achievements, several issues remain: • Involve joint training (simultaneously training subject-independent feature extraction and emotion recognition tasks), where task interference may impede the extraction of subject-independent features. • Tend to overfit to the scarce source data, resulting in poor generalization performance on unseen subject data. • Not emphasize the model’s robustness against potential noise in the data collection process. Inspired by DiMAE (Yang et al. 2022), this paper proposes a new CS-SIFE model named DMMR to address the aforementioned issues. We introduce the “mutual reconstruction” method for EEG-based cross-subject emotion recognition. It reconstructs one subject’s features into another’s, which is a novel approach in the field. Unlike the Joint training mode, DMMR combines two-stage pretraining with mutual reconstruction self-supervised learning for the first time, addressing individual variability in EEG signals. To tackle the problem of scarce source data, we propose a mixed data augmentation strategy extending the mixup technique (Zhang et al. 2018), boosting data generalization without the need for extra parameters by creating new subject features in the hidden layer, improving recognition accuracy. To enhance the model’s stability against noise, we incorporate the denoising task proposed by (Vincent et al. 2008). This involves relearning clean samples from noise-distorted features. We propose an EEG-tailored noise injection method enhancing denoising while preserving essential information, improving stability. 
The main contributions of this paper are as follows: • Proposing the DMMR model with a pre-training-fine-tuning paradigm, which extends the mutual reconstruction method by proposing a novel mix data augmentation approach in the hidden layer and a noise injection method named Time Steps Shuffling. • The experimental accuracy on the SEED (Zheng and Lu 2015) and SEED-IV (Zheng et al. 2019) datasets reached 88.27% (±5.62) and 72.70% (±8.01), respectively, achieving state-of-the-art and comparable performance, demonstrating the superiority of DMMR. • The proposed data augmentation and noise injection methods are observed to complementarily enhance accuracy and stability, alleviating the issues of the scarcity of source data and potential noise interference. Related Work This section describes the two subject-invariant feature extraction methods (SIFE): SI-SIFE and CS-SIFE. And analyze the differences between our approach and related work. SI-SIFE methods always consider the correlations between different channels from a single subject. The method assumes cross-subject invariance in channel correlations. One strategy constructs inter-channel connections into a sequence manually, utilizing LSTM for high-dimensional emotional features (Li et al. 2020b). Others employ distance metrics or trainable parameters to create adjacency matrices, employing graph neural networks for high-dimensional emotional semantics (Song et al. 2018; Zhong, Wang, and Miao 2020; Zhang et al. 2021; Priyasad et al. 2022). Building upon this, GMSS (Li et al. 2022) enhances data using self-supervised learning, employing jigsaw tasks for robust intrinsic feature extraction and utilizes unsupervised contrastive learning methods for distance manipulation. CS-SIFE methods treat each subject as a different domain. DG-DANN (Ma et al. 2019) employs gradient reversal (Ganin and Lempitsky 2015) to confuse subject discriminators and yielding subject-invariant features. Notably, it aligns the Jensen-Shannon divergence, implicitly aligns the marginal probability distributions across multiple subjects (Li et al. 2018b). DResNet (Ma et al. 2019) further combines residuals from subject-specific and subject-shared encoders for emotion classification. In the case of shared features in same-label samples across subjects, methods like contrastive learning manipulate sample distances. Clisa (Shen et al. 2022) uses contrastive learning to minimize distances between samples from the same emotional stimulus and maximize distances between different stimuli, ensuring consistent representations for identical emotional stimuli. However, the aforementioned methods did not address three critical issues: constrained joint training, overfitting to scarce source data, and practical noise robustness needs. In image processing, Ghifary et al. (Ghifary et al. 2015) proposed MTAE using multiple decoders for mutual reconstruction. During the pre-training process, they employed a single domain’s feature as input and assigned specific decoders for each source domain, aiding domain-invariant feature generation. Yang et al. (Yang et al. 2022) extended this with DiMAE, using CP-styleMix for data augmentation and a mask mechanism for visible parts. Differing from these methods, this paper further proposes hidden layer output mixing for data augmentation and a custom noise injection method for EEG signals to address the mentioned challenges. Methods We let 𝑋! = {𝑋! ", 𝑌! 
"}"#$ % to be the data and labels of n source subjects, the model trained with the data of source subjects is used to predict the emotion class for the unseen target subject 𝑋&. Similar to the data preprocessing method of (Zhao, Yan, and Lu 2021), to fully exploit the EEG data’s temporal features, we employ overlapping sliding windows along the time axis with time steps 𝑇, thus a sample to be fitted into The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 629 Figure 1. The pre-training phase of DMMR, it aimed at extracting subject-invariant features from multiple source subjects’ data in a self-supervised mode, which consists of a noise injection process and three different modules, namely the ABP module, the MMR module and the DG-DANN module Figure 2. The fine-tuning phase of DMMR. Subject-invariant emotional features are further extracted, which are classified into different emotions by a new emotion classifier. the model is represented as 𝑥= (𝑥$, 𝑥' ⋯𝑥() ∈𝑅(∗* , among them, 𝐷= 𝐶∗𝐵 represents the number of feature dimensions, 𝐶 represents the number of channels, and 𝐵 represents the number of frequency bands. Figure 1 and Figure 2 show the pre-training and fine-tuning phases of the proposed framework DMMR, respectively. The solid arrows indicate the flow of data, while the dashed arrows represent the losses to be computed. In the pre-training phase, to solve the potential noise problem, a noise injection process is used to the input, followed by three different modules, namely the ABP module, the DG-DANN module and the Mixed Mutual Reconstruction (MMR) module. In the ABP module, we introduce the way of (Zhao, Yan, and Lu 2021), use the self-attention method to add weights for important channels and frequent bands of a sample. In the MMR module, an encoder 𝐸! is used to extract shared features among subjects. Like the way of (Yang et al. 2022), we set n special decoder 𝐷! = {𝐷! "}"#$ % to reconstruct features of different subjects within the same category. To solve the problem of the scarcity of source data, we further propose a two-stage mixed mutual reconstruction approach. In the first stage, the outputs of multiple decoders are mixed to generate a new mix-subject data. In the second stage, the mixed output is fed back into the multi-decoder autoencoder. To supervise the decoders with self-supervised learning, we utilize sampled samples of the same category taken from different subjects. These samples are then processed through the ABP module to generate weighted features, which can be considered as representations of the same category features in different subjects. The DG-DANN module (Ma et al. 2019) is an application of the DANN model (Li et al. 2018a) to DG problems. It leverages a multi-domain discriminator (since we treat each subject as an individual domain, we rename it as the Subject Discriminator 𝑆𝐷) and domain adversarial techniques to extract subject-invariant features. In the fine-tuning phase, only the ABP module and encoder 𝐸! is preserved, and a new emotion classifier 𝐶 is added to achieve emotion classification. It further leverages emotion category labels for supervised learning, enabling the model to extract emotion-related but subject-invariant features. Accordingly, algorithm 1 summarizes the pseudocode of DMMR. The testing process directly employs the fine-tuned model for evaluation, so we won’t provide additional details in the following text and pseudo-code. 
Noise Injection
In order to enhance the stability of the model against noise, we propose a method called "Time Steps Shuffling" to inject noise into the input samples. This method shuffles only the order of the temporal dimension, leaving the feature dimensions within individual time steps unaffected. Considering that the encoder structure utilizes a unidirectional LSTM, the final time step is retained more than other time steps and represents the current moment's emotional state. Hence, we fix the final time step and only shuffle the others to preserve the essential characteristics of the input samples. To distinguish it from the original input feature x, we use x_noised to represent the noise-injected feature.
The ABP Module
The ABP module is a bottom-layer feature weighting method proposed by (Zhao, Yan, and Lu 2021). It assigns weights to the channels and frequency bands of samples automatically using a self-attention weighting method. This method introduces a linear layer to map the original features to a new feature space. The weights are obtained by normalizing the mapped features using the Softmax function, as shown in Formula 1, in which W ∈ R^{D×D} and b ∈ R^{D×1} represent the trainable weights and bias in the linear layer, respectively, and α ∈ R^{D×1} is a one-dimensional attention weight. The weighted features are obtained through element-wise multiplication of the normalized weights with the original features, as shown in Formula 2, where x̃ ∈ R^{T×D} represents the weighted feature:
α = softmax(W · x_noised + b), (1)
x̃ = α · x_noised. (2)
The MMR Module
The MMR module defines a shared encoder for all subjects and different decoders for each source subject. After extracting high-dimensional features with the encoder, these features are reconstructed into features of different subjects within the same category using different decoders. This forces the encoder to learn subject-invariant representations for specific emotions. As the reconstruction loss requires the use of emotional labels from the source subjects, this method is a self-supervised learning approach. Both the encoder and the multiple decoders take the weighted features from the ABP module's output corresponding to the subject as their input and supervision, respectively, creating a symmetric and mutually reconstructive autoencoder structure. The encoder and decoders in this paper are single-layer LSTM models. The construction of the decoder is consistent with that in (Zhao, Yan, and Lu 2021), generating features for each time step in reverse order and then using a linear layer to map the features to the same dimension as the input of the encoder. To simplify the description, we formally define the encoder E_s and decoders D_s^i in Equation 3, where o_1^i ∈ R^{T×D} represents the output representation of the i-th decoder in the first stage:
o_1^i = D_s^i(E_s(x̃)), i ∈ {1, 2, ⋯, n}. (3)
To further address the scarcity of source data, we draw inspiration from the mixup technique (Zhang et al. 2018), which linearly combines different samples and labels to create new samples for data augmentation, and propose a two-stage mixed mutual reconstruction method. In the first stage, the outputs of all decoders are summed directly to obtain a new mix-subject feature x_mix ∈ R^{T×D}, as shown in Equation 4:
x_mix = Σ_{i=1}^{n} o_1^i. (4)
These features are linear combinations of same-category features from different subjects, creating new subject features unseen by the model while preserving their label information.
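The two first-stage ingredients described so far, "Time Steps Shuffling" and the ABP weighting of Eqs. (1)-(2), can be sketched as follows. This is a hedged illustration rather than the released code: the batch layout (batch, T, D), the softmax axis used for α, and the module names are assumptions. The second-stage reconstruction and the adversarial branch follow in the next paragraphs.

```python
import torch
import torch.nn as nn

def time_steps_shuffling(x):
    """Noise injection: randomly permute all time steps except the last one,
    which is kept in place because it carries the current emotional state."""
    T = x.size(1)                                              # x: (batch, T, D)
    perm = torch.randperm(T - 1, device=x.device)
    idx = torch.cat([perm, torch.tensor([T - 1], device=x.device)])
    return x[:, idx, :]

class ABPSketch(nn.Module):
    """Eqs. (1)-(2): a linear map followed by Softmax yields per-dimension weights
    that rescale the input features element-wise (normalization axis assumed to be D)."""
    def __init__(self, dim=310):                               # D = 62 channels x 5 bands
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x):                                      # x: (batch, T, D)
        alpha = torch.softmax(self.linear(x), dim=-1)          # attention weights
        return alpha * x                                       # weighted feature x~
```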
In the second stage, we reconstruct these mix-subject features into features of different subjects within the same category using the encoder and multi-decoder structure defined above, as shown in Equation 5, where o_2^i ∈ R^{T×D} represents the output representation of the i-th decoder in the second stage:
o_2^i = D_s^i(E_s(x_mix)), i ∈ {1, 2, ⋯, n}. (5)
Since the encoder and decoders are parameter-shared between the two stages, we only need to calculate the reconstruction loss for each subject after the second stage. The Mean Squared Error (MSE) loss is employed to quantify the differences between the generated features and the corresponding subject features, as shown in Equation 6, in which r^i ∈ R^{T×D} represents the representation of the i-th subject's features (without noise injection) after being weighted by the ABP module. These representations share the same labels as x and are used as supervision for the corresponding decoder. The final reconstruction loss l_recon is the sum of the n individual reconstruction losses, as shown in Equation 7:
l_recon^i = MSE(o_2^i, r^i), i ∈ {1, 2, ⋯, n}, (6)
l_recon = Σ_{i=1}^{n} l_recon^i. (7)
The DG-DANN Module
We utilize the DG-DANN method from (Ma et al. 2019) to establish a multi-class subject discriminator SD (a single-layer fully connected network) for discerning feature ownership. Each subject receives a distinct ID for supervision during training. A gradient reversal layer (GRL) is added before the discriminator, multiplying gradients by −λ in the backpropagation process. This confounds the discriminator and encourages subject-insensitive feature extraction by the encoder. This adversarial interplay between encoder and discriminator strives for a Nash equilibrium, enabling the encoder to generate subject-invariant features. It is important to note that the DG-DANN module is limited to the first stage to avoid interfering with decoder feature generation. The process for feature ownership prediction is shown in Equation 8, where d̂^i represents the Softmax prediction of SD. The computation of the adversarial loss l_adv is presented in Equation 9, in which d^i is the ID of the i-th subject:
d̂^i = SD(E_s(x̃)), i ∈ {1, 2, ⋯, n}, (8)
l_adv = −λ d^i log(d̂^i), i ∈ {1, 2, ⋯, n}. (9)
Algorithm 1: DMMR method
Input: Iterations T1, T2. Source data X_s = {X_s^i, Y_s^i}_{i=1}^n.
Output: optimal DMMR model
The Pre-Training Phase:
1: Randomly initialize ABP, E_s, D_s^{1~n} and SD.
2: for t = 1 : T1 do
3:   Inject noise into the source data.
4:   Optimize ABP, E_s, D_s^{1~n} and SD by minimizing Equation (10).
5: end for
6: return ABP, E_s.
The Fine-Tuning Phase:
7: Randomly initialize C.
8: Obtain the pre-trained ABP, E_s.
9: for t = 1 : T2 do
10:   Optimize ABP, E_s and C by minimizing Equation (12).
11: end for
12: return ABP, E_s and C.
Learning Loss in the Pre-Training Phase
The final pre-training loss combines the reconstruction loss and the adversarial loss using a balancing hyperparameter β, as shown in Equation 10:
l_pretraining = l_recon + β · l_adv. (10)
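For clarity, one pre-training step combining Eqs. (4)-(10) can be sketched as below. It reuses the `time_steps_shuffling` helper from the earlier sketch; the module interfaces (`abp`, `encoder`, `decoders`, `subject_disc`), the per-subject supervision list `targets` (the r^i features), and the batch of `subject_id` labels are all placeholders, and folding λ into the gradient reversal layer is one standard way to realize the adversarial term.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity in the forward pass,
    multiplies gradients by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def dmmr_pretraining_step(x, subject_id, targets, abp, encoder, decoders,
                          subject_disc, lamb=1.0, beta=0.05):
    """Two-stage mixed mutual reconstruction with the adversarial branch (sketch)."""
    x_tilde = abp(time_steps_shuffling(x))              # noise injection + ABP weighting
    h = encoder(x_tilde)                                 # shared encoder features
    # stage 1: every decoder rebuilds the sample as a different source subject
    outs1 = [dec(h) for dec in decoders]
    x_mix = torch.stack(outs1, dim=0).sum(dim=0)         # Eq. (4): mix-subject feature
    # stage 2: encode the mixed feature and reconstruct each subject again
    outs2 = [dec(encoder(x_mix)) for dec in decoders]
    l_recon = sum(F.mse_loss(o, r) for o, r in zip(outs2, targets))   # Eqs. (6)-(7)
    # adversarial branch (first stage only): confuse the subject discriminator
    logits = subject_disc(GradReverse.apply(h, lamb))
    l_adv = F.cross_entropy(logits, subject_id)                       # Eqs. (8)-(9)
    return l_recon + beta * l_adv                                     # Eq. (10)
```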
The Fine-Tuning Phase
The fine-tuning stage aims to further extract subject-invariant emotion features. The input data is the same as in the pre-training phase, and noise injection is no longer needed. We take the ABP module and encoder from the pre-trained model and add an emotion classifier C (a single-layer fully connected network) on top of the encoder's output to get the emotion prediction result. All parameters need to be fine-tuned in order to obtain distinct emotion category boundaries in the extracted features. We use the original emotion labels for supervised learning, as shown in Formula 11, where ŷ ∈ R^{c×1} is the Softmax prediction of emotion and c is the number of emotion classes. l_cls is the cross-entropy loss for emotion classification, as shown in Formula 12, where y ∈ R^{c×1} is the ground-truth emotion label:
ŷ_j = C(E_s(ABP(x))), j ∈ {1, 2, ⋯, c}, (11)
l_cls = −y_j log(ŷ_j), j ∈ {1, 2, ⋯, c}. (12)
Experiments
Datasets
We evaluated the performance of the DMMR model on two publicly available datasets, SEED and SEED-IV. Both datasets involve inducing EEG signals by presenting specific emotional videos. The SEED dataset consists of 15 different emotional videos, covering three emotion categories (positive, negative, and neutral). The SEED-IV dataset includes 24 different emotional videos, covering four emotion categories (happy, sad, neutral, and fearful). The experiments were conducted across three separate sessions. For data acquisition, both datasets utilized the ESI NeuroScan system, following the international 10-20 standard, to collect 62-channel EEG signals. The EEG signals were down-sampled and band-pass filtered to 0-75 Hz. These filtered signals were further divided into five frequency bands: δ: 1-3 Hz, θ: 4-7 Hz, α: 8-13 Hz, β: 14-30 Hz, and γ: 31-50 Hz. To extract frequency-domain features from the EEG signals, a non-overlapping sliding window of 1-second size was applied to the raw signals, and in each window the Differential Entropy (DE) feature was extracted. This process maps every 1-second segment to a 310-dimensional feature space (62 channels * 5 frequency bands); the data sizes for a single experiment are approximately 3400 and 830 for the two datasets, respectively.
Method / SEED Avg. / SEED Std. / SEED-IV Avg. / SEED-IV Std.
DGCNN 79.95 9.02 - -
BiHDM* 81.55 9.74 67.47 8.22
RGNN* 81.92 9.35 71.65 9.34
GMSS 86.52 6.22 73.48 7.41
DG-DANN 84.30 8.32 - -
DResNet 85.30 7.97 - -
Clisa 86.40 6.40 - -
PPDA-NC 85.40 7.10 - -
PPDA 86.70 7.10 - -
DMMR 88.27 5.62 72.70 8.01
Table 1. Performance comparison of our proposed DMMR with baselines (%)
Implementation Details
We utilized the DE features from the first session of all subjects in both datasets. For evaluation, we adopted a leave-one-subject-out cross-validation approach, where we used one subject as the test set while using the remaining subjects as the training set. The average accuracy (avg.) and standard deviation (std.) were calculated across all subjects after each subject served as the target subject once. Regarding hyperparameters, for both datasets the input data has a feature dimension of 310, and the hidden size of the encoder is 64. The balancing hyperparameter β is set to 0.05. We utilize the Adam optimizer with a learning rate of 1e-3 and a weight decay rate of 5e-4. Due to the different sizes of the SEED and SEED-IV datasets, the batch sizes are set to 512 and 256, while the time steps T are set to 30 and 10, respectively. To ensure reproducibility, the random seed for all experiments is fixed at 3. All experiments were implemented using PyTorch on an NVIDIA Tesla V-100 GPU. Code of DMMR is at https://github.com/CodeBreathing/DMMR.
Experiment Results
Comparison with baseline methods
The performance comparison between DMMR and baseline models is shown in Table 1. In order to comprehensively compare with relevant baseline methods, we selected the best-performing models in two categories of SIFE methods. Among them, DGCNN (Song et al. 2018), BiHDM (Li et al.
2020b), RGNN (Zhong, Wang, and Miao 2020), and GMSS (Li et al. 2022) are SI-SIFE methods. In which BiHDM and RGNN are unsupervised domain adaptation models, which The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 632 Method SEED SEED-IV Avg. Std. Avg. Std. DMMR 88.27 5.62 72.70 8.01 w/o noise 87.15 6.02 72.93 12.16 w/o mix 85.42 5.89 71.12 10.06 w/o both 85.25 5.29 71.12 11.63 Mask Time Steps 85.46 5.61 70.83 10.79 Channels Shuffling 86.83 5.79 71.65 9.49 Mask Channels 86.54 4.79 72.41 9.69 Dropout 86.37 5.43 71.21 11.54 Table 2. Ablation Studies and Performance Comparison of Different Noise Injection Methods (%) can be used for DG tasks when the gradient reversal layer is removed. We use an asterisk “*” to indicate their performance in DG tasks. DG-DANN (Ma et al. 2019), DResNet (Ma et al. 2019), and Clisa (Shen et al. 2022) are CS-SIFE methods, where the first two are classical joint learning DG methods, and Clisa is a special method that generates more robust DE features through contrastive learning. Additionally, Some basic modules of DMMR and PPDA (Zhao, Yan, and Lu 2021) are shared for their excellent performance shown in PPDA, like the ABP module and DG-DANN module. Specifically, the two models both extract temporal features from the DE features using sliding windows and employ LSTM autoencoders. Therefore, PPDA is included as one of the baselines for comparison. However, unlike DG methods, PPDA calibrates the pre-trained model using a small amount of unlabeled data from the target subject. When the calibration part is eliminated, the resulting PPDA_NC model can achieve DG tasks. Differently, DMMR utilizes inter-subject mutual reconstruction to extract subject-invariant features, rather than PPDA’s self-reconstruction approach, which only ensures the robustness of the generated features. By comparing with the optimal results of two SIFE methods, the proposed DMMR model achieves the highest accuracy of 88.27% and the lowest standard deviation of 5.62 on the SEED dataset. Particularly, the performance of the DMMR model outperforms the PPDA model. However, there have been almost no reports on the performance of CSSIFE methods on the SEED-IV dataset. By comparing with the current state-of-the-art SI-SIFE methods, we find that our method’s performance of 72.70% (±8.01) is only slightly inferior to the GMSS model, which could be due to the smaller dataset, which limits the efficiency of the tailored data augmentation method. The performance comparison on both datasets demonstrates the effectiveness of our approach. Ablation studies To analyze the effectiveness of our proposed noise injection method and data augmentation technique, we conducted ablation experiments on both datasets, Figure 3. Different Noise injection methods, with the Time Steps Shuffling being the method used in DMMR. as shown in the upper side of the double solid line in Table 2. “w/o noise” refers to the ablation of the noise injection module; “w/o mix” refers to the ablation of the mixed data augmentation module. In this case, the two-stage mixed mutual reconstruction reduces to a one-stage mutual reconstruction, where the reconstruction loss is directly applied to the first output of multiple decoders; “w/o both” means that both the two modules are ablated. It is observed that both modules contribute to the overall performance improvement. 
The data augmentation method based on multi-decoder fusion shows a more significant effect on accuracy enhancement, while the noise injection method significantly reduces the standard deviation, indicating that both modules complement each other. Particularly, although ablating the noise injection module leads to slightly higher accuracy on the SEED-IV dataset compared to the DMMR model, its standard deviation is considerably higher, indicating that this method lacks stability. Noise injection methods comparison To validate that our proposed noise injection method is more suitable for subject-invariant feature extraction in the EEG based emotion recognition task, Figure 3 illustrates the Time Steps Shuffling method proposed in this paper, along with three other random noise injection methods, including “Mask Time Steps” (excluding the last time step), “Channels Shuffling”, “Mask Channels”. It represents different time steps with various dashed and solid lines, and different colored circles indicate different channels, with white color indicating a situation where a time step or channel is masked out. In this case, the features of each time step are composed of C-dimensional channels and B-dimensional frequency bands. In order to manipulate the channel dimension, it is necessary to reshape the shape into (𝑇, 𝐶, 𝐵), and then apply noise injection methods only along the C dimension. Additionally, Dropout is also a commonly used noise augmentation technique, which randomly drops a corresponding rate of channels or frequency bands at each time step. Therefore, we also include this method for comparison. Both the masking and dropout ratios are set to 20%. From the results in the lower side of the double solid line in Table 2, we found that our The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 633 proposed noise injection method performs the best, while the other four noise injection methods even perform worse than the “w/o noise” baseline, suggesting that for EEG data, a careful selection of the noise injection method is essential. In particular, the mask-based and dropout methods both lead to a certain degree of information loss. We speculate that due to the relatively small size of the dataset, the negative impact of information loss is more pronounced. Moreover, the “Channels Shuffling” method confuses the boundaries among dimensions, making the data boundaries within the same dimension less distinct. The “Time Steps Shuffling” used in DMMR does not cause any information loss and has no impact on the features at individual time steps. As a result, it achieves better performance compared to other methods. Visualization of extracted features To assess whether our model can extract subject-invariant features and clearly delineate the boundaries of emotion categories, we employed the T-SNE algorithm (Van der Maaten and Hinton 2008) on the randomly selected 50 samples from data of each source subject of the SEED dataset. We visualized the subject distribution and emotion distribution of the original features and the features extracted by our pre-trained model and finetuned model. The resulting plots are shown in Figure 4. In the subject distribution, each subject is represented by a unique color, notably, the target subject is represented in red; in the emotion distribution, positive, neutral, and negative correspond to orange, light blue, and dark blue, respectively. 
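The feature visualization described here can be reproduced with a short script. The sketch below assumes scikit-learn's t-SNE and matplotlib; the inputs `features`, `subject_ids`, and `emotion_labels` would come from running the pre-trained or fine-tuned encoder on the 50 randomly selected samples per source subject mentioned above, and the fixed `random_state` simply mirrors the paper's use of a fixed seed.

```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne(features, subject_ids, emotion_labels):
    """Project extracted features to 2-D with t-SNE and colour the points twice:
    once by subject and once by emotion (sketch of the Figure 4 procedure)."""
    emb = TSNE(n_components=2, random_state=3).fit_transform(features)
    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    axes[0].scatter(emb[:, 0], emb[:, 1], c=subject_ids, cmap="tab20", s=8)
    axes[0].set_title("subject distribution")
    axes[1].scatter(emb[:, 0], emb[:, 1], c=emotion_labels, cmap="viridis", s=8)
    axes[1].set_title("emotion distribution")
    return fig
```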
From the perspective of subject distributions, the original distribution of subjects is relatively scattered with little overlap. However, following self-supervised pretraining, there is a high degree of feature overlap among subjects, and their distributions tend to become consistent. After fine-tuning, influenced by emotional categories, three clusters emerge, yet the distributions of subjects within each category still exhibit significant overlap. Regarding emotional distributions, the original emotional distribution shows considerable overlap. Since the pretraining process only focuses on extracting subject-invariant features, the issue of overlapping emotional distributions remains unaddressed. Nevertheless, through the fine-tuning process, features corresponding to the distinct emotional categories cluster separately with clear boundaries. The aforementioned observations show that the pretrained model captures subject-invariant features. The finetuning process further delineates category boundaries and captures subject-invariant features within each category. Particularly, based on the post-fine-tuning subject distribution graph (bottom left corner), the distribution of target subject data highly overlaps with the distribution of source subjects, indicating that the DMMR is effective in aligning the distributions between source subjects and unseen target subjects. From the post-fine-tuning emotional distribution plot (bottom right corner), it is observed that the features corresponding to positive-neutral and negative-neutral emotional Figure 4. Features visualization. From left to right, we have the subject distribution and emotional distribution. From top to bottom, we show the original DE features, features after pre-training, and features after fine-tuning. pairs are more prone to confusion, aligning with the process of continuous emotional transitions. Conclusion and Future Work In conclusion, this paper introduces the DMMR model, a novel approach for EEG-based emotion recognition that addresses critical challenges in DG. The model leverages a two-stage pre-training followed by fine-tuning approach, incorporating self-supervised learning through mutual reconstruction to extract subject-invariant features. To address the problem of the scarcity of source data and potential noise in the data, this paper proposes a method of mixed data augmentation at the hidden layer and a noise injection method called Time Steps Shuffling. The experimental accuracy on the SEED and SEED-IV datasets reached 88.27% (±5.62) and 72.70% (±8.01), respectively, achieving state-of-the-art and comparable performance. The methods for data augmentation and noise injection have been observed to effectively enhance both accuracy and stability, providing complementary solutions to the aforementioned issues. In the future, we plan to explore DG methods under the condition of limited annotated EEG data for source subjects, accelerating the practical application of relevant techniques. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 634 Acknowledgments This work is supported by the Key Research and Development Program of Shaanxi (ProgramNo.2022GY-075), and the National Key Projects of China (ProgramNo. 2021XJTU0016). References Bocharov, A. V.;Knyazev, G. G.; and Savostyanov, A. N. 2017. Depression and implicit emotion processing: An EEG study. Neurophysiologie Clinique/Clinical Neurophysiology 47(3):225-230. Chen, J.;Hu, B.;Xu, L.;Moore, P.; and Su, Y. 2015. 
Featurelevel fusion of multimodal physiological signals for emotion recognition. In 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM): IEEE, 395-399. Ganin, Y., and Lempitsky, V. 2015. Unsupervised domain adaptation by backpropagation. In International conference on machine learning: PMLR, 1180-1189. Ghifary, M.;Kleijn, W. B.;Zhang, M.; and Balduzzi, D. 2015. Domain generalization for object recognition with multi-task autoencoders. In Proceedings of the IEEE international conference on computer vision, 2551-2559. Li, H.;Jin, Y.-M.;Zheng, W.-L.; and Lu, B.-L. 2018a. CrossSubject Emotion Recognition Using Deep Adaptation Networks. In Neural Information Processing: 25th International Conference, ICONIP 2018, 403-413. doi.org/10.1007/978-3-030-04221-9_36. Li, J.;Qiu, S.;Shen, Y. Y.;Liu, C. L.; and He, H. 2020a. Multisource Transfer Learning for Cross-Subject EEG Emotion Recognition. IEEE Transactions on Cybernetics 50(7):3281-3293. doi.org/10.1109/TCYB.2019.2904052. Li, Y.;Chen, J.;Li, F.;Fu, B.;Wu, H.;Ji, Y.;Zhou, Y.;Niu, Y.;Shi, G.; and Zheng, W. 2022. GMSS: Graph-Based Multi-Task Self-Supervised Learning for EEG Emotion Recognition. IEEE Transactions on Affective Computing. Li, Y.;Tian, X.;Gong, M.;Liu, Y.;Liu, T.;Zhang, K.; and Tao, D. 2018b. Deep domain generalization via conditional invariant adversarial networks. In Proceedings of the European conference on computer vision (ECCV), 624-639. Li, Y.;Wang, L.;Zheng, W.;Zong, Y.;Qi, L.;Cui, Z.;Zhang, T.; and Song, T. 2020b. A novel bi-hemispheric discrepancy model for EEG emotion recognition. IEEE Transactions on Cognitive and Developmental Systems 13(2):354-367. Lu, W.;Wang, J.;Li, H.;Chen, Y.; and Xie, X. 2022. Domain-invariant Feature Exploration for Domain Generalization. Transactions on Machine Learning Research. Lu, Y.;Zheng, W.-L.;Li, B.; and Lu, B.-L. 2015. Combining eye movements and EEG to enhance emotion recognition. In Twenty-Fourth International Joint Conference on Artificial Intelligence, 1170-1176. Luo, Y., and Lu, B.-L. 2021. Wasserstein-Distance-Based Multi-Source Adversarial Domain Adaptation for Emotion Recognition and Vigilance Estimation. In 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 1424-1428. doi.org/10.1109/bibm52615.2021.9669383. Ma, B.-Q.;Li, H.;Zheng, W.-L.; and Lu, B.-L. 2019. Reducing the subject variability of eeg signals with adversarial domain generalization. In International Conference on Neural Information Processing: Springer, 30-42. Mayor-Torres, J. M.;Ravanelli, M.;Medina-DeVilliers, S. E.;Lerner, M. D.; and Riccardi, G. 2021. Interpretable sincnet-based deep learning for emotion recognition from eeg brain activity. In 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC): IEEE, 412-415. Priyasad, D.;Fernando, T.;Denman, S.;Sridharan, S.; and Fookes, C. 2022. Affect recognition from scalp-EEG using channel-wise encoder networks coupled with geometric deep learning and multi-channel feature fusion. KnowledgeBased Systems 250:109038. Shen, X.;Liu, X.;Hu, X.;Zhang, D.; and Song, S. 2022. Contrastive learning of subject-invariant EEG representations for cross-subject emotion recognition. IEEE Transactions on Affective Computing. Song, T.;Zheng, W.;Song, P.; and Cui, Z. 2018. EEG emotion recognition using dynamical graph convolutional neural networks. IEEE Transactions on Affective Computing 11(3):532-541. Van der Maaten, L., and Hinton, G. 2008. Visualizing data using t-SNE. 
Journal of machine learning research 9(11):2579-2605. Vincent, P.;Larochelle, H.;Bengio, Y.; and Manzagol, P.-A. 2008. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, 1096-1103. Yang, H.;Tang, S.;Chen, M.;Wang, Y.;Zhu, F.;Bai, L.;Zhao, R.; and Ouyang, W. 2022. Domain Invariant Masked Autoencoders for Self-supervised Learning from Multidomains. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXI: Springer, 151-168. Zhang, G.;Yu, M.;Liu, Y.-J.;Zhao, G.;Zhang, D.; and Zheng, W. 2021. SparseDGCNN: Recognizing Emotion from Multichannel EEG Signals. IEEE Transactions on Affective Computing:537–548. doi.org/10.1109/taffc.2021.3051332. Zhang, H.;Cisse, M.;Dauphin, Y. N.; and Lopez-Paz, D. 2018. mixup: Beyond Empirical Risk Minimization. In International Conference on Learning Representations, 1-13. Zhao, L. M.;Yan, X.; and Lu, B. L. 2021. Plug-and-play domain adaptation for cross-subject EEG-based emotion recognition. In the AAAI Conference on Artificial Intelligence, 863-870. Zheng, W. L.;Liu, W.;Lu, Y.;Lu, B. L.; and Cichocki, A. 2019. EmotionMeter: A Multimodal Framework for The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 635 Recognizing Human Emotions. IEEE Trans Cybern 49(3):1110-1122. doi.org/10.1109/TCYB.2018.2797176. Zheng, W. L., and Lu, B. L. 2015. Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks. IEEE Transactions on Autonomous Mental Development 7(3):162-175. doi.org/10.1109/tamd.2015.2431497. Zhong, P.;Wang, D.; and Miao, C. 2020. EEG-Based Emotion Recognition Using Regularized Graph Neural Networks. IEEE Transactions on Affective Computing 13(3):1290-1301. doi.org/10.1109/taffc.2020.2994159. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 636 | 2024 | 71 |
18,529 | Chain of Generation: Multi-Modal Gesture Synthesis via Cascaded Conditional Control Zunnan Xu, Yachao Zhang †, Sicheng Yang, Ronghui Li, Xiu Li† Tsinghua Shenzhen International Graduate School, Tsinghua University University Town of Shenzhen, Nanshan District, Shenzhen, Guangdong, P.R. China [email protected], {yachaozhang, li.xiu}@sz.tsinghua.edu.cn Abstract This study aims to improve the generation of 3D gestures by utilizing multimodal information from human speech. Previous studies have focused on incorporating additional modalities to enhance the quality of generated gestures. However, these methods perform poorly when certain modalities are missing during inference. To address this problem, we suggest using speech-derived multimodal priors to improve gesture generation. We introduce a novel method that separates priors from speech and employs multimodal priors as constraints for generating gestures. Our approach utilizes a chainlike modeling method to generate facial blendshapes, body movements, and hand gestures sequentially. Specifically, we incorporate rhythm cues derived from facial deformation and stylization prior based on speech emotions, into the process of generating gestures. By incorporating multimodal priors, our method improves the quality of generated gestures and eliminate the need for expensive setup preparation during inference. Extensive experiments and user studies confirm that our proposed approach achieves state-of-the-art performance. Introduction Gesture synthesis is a significant area of research within the realm of human-computer interaction (HCI), with diverse applications across various fields such as movies, robotics, virtual reality, and digital humans (Kucherenko et al. 2021). It is a challenging task that requires accounting for the dynamic movements of the human body, as well as the underlying rhythm, emotion, and intentionality (Nyatsanga et al. 2023). In co-speech gesture generation, three key indicators have emerged: (i) generating gestures that synchronize with the audio and accurately depict the semantic content of the spoken text, (ii) generating gestures that are consistent with the speaker’s style, and (iii) aligning the generated gestures with the speaker’s intentions, including symbolic actions that may resemble sign language. While substantial progress has been made in generating gestures synchronized to audio (Ginosar et al. 2019; Qian et al. 2021; Yazdian, Chen, and Lim 2022; Yang et al. 2023d; Ao, Zhang, and Liu 2023; Yang et al. 2023a), there has been limited exploration of emotive gesture generation that Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. †Corresponding author Ours CaMN w retraining CaMN w/o retraining MultiContext FGD(The lower, the better) Speech+Face+Emotion+SpeakerID Speech+Face+Emotion Speech+Face Speech+SpeakerID Speech+Emotion+SpeakerID Speech+Face+SpeakerID Speech+Emotion Only Speech(Audio+Text) Figure 1: Performance comparison with limited modal during inference. The performance of existing multimodal methods is significantly hindered by the inadequate incorporation of multiple modalities during the inference stage. Our method addresses this limitation by utilizing prior information from speech to enable multimodal conditional control. matches the rhythm and intention of the speech. Previous studies (Yoon et al. 2020; Liu et al. 2022a) have demonstrated the effectiveness of introducing more modalities for gesture synthesis. 
However, most of these studies have not fully explored the potential of multimodal gesture synthesis modeling. In CaMN (Liu et al. 2022a), facial blendshapes and emotion labels are incorporated as additional inputs and represented as embeddings that are concatenated with the existing inputs to generate more native and expressive gestures. However, as shown in Figure 1, when the number of input modalities decreases, the performance of the model experiences a notable decline. In practical applications, it is common to encounter scenarios where some modalities are missing partially. Retraining a new model to accommodate these missing modalities can be a costly endeavor. Recent studies (Yang et al. 2023b; Qi et al. 2023) have explored incorporating emotion labels into the generation of human gestures using various techniques (e.g., random mask, cross attention), resulting in diverse and emotive gestures. However, these approaches still rely on using additional modalities, such as emotion labels, during the inference process. The inconsistency between the assigned emotions and the context of speech may also lead to unnatural gestures. Furthermore, existing works overlook the importance of facial expressions in capturing speaker rhythm and The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6387 intent, leading to suboptimal gesture generation. To address these problems, we propose a cascaded conditional control method for gesture synthesis. It aims to improve gesture generation by utilizing prior knowledge extracted from the speech, and maintain multimodal performance during inference when some modal data is missing (inference-efficient). Specifically, we introduce a novel method for extracting facial deformation and emotion-aware stylization priors from speech. By incorporating these priors, we greatly enhance the quality of gesture synthesis compared to previous methods. We incorporate an emotionaware style injector to highlight the importance of emotional expression within the larger speech context. Firstly, we train a classifier to extract emotion information from speech. The extracted emotion information is then transformed into style features. We further introduce gesture adaptive layer normalization to apply the generated style features to all input features used by the decoder, resulting in more expressive gestures. Meanwhile, we suggest using a face decoder to convert speech into facial blendshapes. Considering that facial blendshapes provide important prior knowledge of facial deformations (e.g., lip language, the rate of deformation related to speech rhythm), we propose a temporal facial feature decoder and a rhythmic identification loss to extract facial deformation-aware priors from speech, which helps generate more rhythmic gestures. To ensure consistency in the generated results, we propose a chain-like generation framework, as shown in Figure 2, where the output of each stage serves as input for the next stage, ensuring a coherent generation across the face, body, and hands. Our approach eliminates the need for costly preparation during inference. This is achieved by training the model to generate facial blendshapes and emotion information from speech during the training phase. Furthermore, by integrating these generated modalities into the cascaded generation process, our method improves the quality of the resulting emotional gestures. 
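To make the cascaded control flow concrete, the sketch below traces the speech-only inference path described above: emotion/style, facial blendshapes, body motion and hand motion are produced in sequence, each stage conditioning the next. All module names and signatures are placeholders standing in for the components detailed in the Methodology section; this is not the authors' implementation.

```python
# Hypothetical inference-time flow of the cascaded, speech-only pipeline.
def chain_of_generation(audio, text, seed_pose, m):
    a = m.audio_encoder(audio)                    # wav2vec2-style audio features
    t = m.text_encoder(text)                      # FastText-based word features

    # Priors recovered from speech alone (no extra capture devices at inference)
    emotion, speaker_id = m.emotion_speaker_classifier(a)
    style = m.hypernetwork(emotion, speaker_id)   # emotion-aware style vector

    # Chain: speech -> facial blendshapes -> body -> hands
    blendshapes = m.face_decoder(a)               # face generator
    face_feat = m.tffd(blendshapes)               # temporal facial feature decoder
    body = m.body_generator(a, t, face_feat, seed_pose, style)
    hands = m.hand_generator(a, t, face_feat, body, style)
    return blendshapes, body, hands
```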
The main contributions of our work are: • We propose a novel inference-efficient approach that enables the model to learn to extract multimodal prior information during training, eliminating the need for costly setup preparation (e.g., blendshape capture devices) during inference. Simplification of modalities can greatly reduce the difficulty of application, as in most cases, it is not guaranteed that all modalities can be collected. • We introduce a rhythmic identification loss to incorporate facial deformation as a guiding factor for generating gestures that synchronize with the speaker’s speech rhythm. • To enhance emotional expression in generated gestures, we propose an emotion-aware style injector. This component extracts emotional information from speech and incorporates it as a stylization prior to gesture synthesis. • Extensive experiments and analyses demonstrate the effectiveness of the proposed approach. Related Work This work aims to develop a multimodal conditional approach for co-speech gesture generation. In this section, we summarize previous studies and discuss the relations and differences. Multimodal Conditional Generation aims to generate content conditioned on various input modalities. This requires models to understand the relationships between different modalities and use them to guide the generation process (Qian et al. 2021). Mainstream approaches for achieving multimodal fusion include: (i) Attention mechanisms, which can model fine-grained inter-modality interactions (Li et al. 2021a; Zhang et al. 2022a; Li et al. 2023); (ii) Variational autoencoders, which can capture multimodal distributions (Liu et al. 2022b; Qi et al. 2023); (iii) Concatenation of representations from multiple modality-specific encoders (Yoon et al. 2020; Yang et al. 2022). Recent studies (Liu et al. 2022a; Yang et al. 2023b) have included emotional embeddings as inputs. Some pioneering works (Qi et al. 2023; Yin et al. 2023) achieve gesture diversity by learning emotion distributions and modifying emotion inputs during inference. However, these studies fail to consider the connection between emotion and the context in which speech occurs. Inconsistency between assigned emotions and the context of speech can result in unnatural gestures. Moreover, these methods rely on a large amount of modalities as input. They suffer a significant drop in performance when the number of input modalities decreases. Additionally, previous research has neglected the utilization of important priors that can be derived from facial blendshapes, leading to suboptimal gesture generation. Our approach to multimodal conditional generation focuses on extracting multimodal priors from speech. This allows the model to efficiently utilize information from the speech context and generate emotional gestures that align with the speech rhythm. Co-speech Gesture Generation focuses on generating gestures based on speech input. Previous methods can be categorized into three types: (i) Linguistic rule-based methods convert speech into predefined gesture fragments and generate gestures using these rules (Cassell et al. 1994; Kopp and Wachsmuth 2004; Wagner, Malisz, and Kopp 2014); (ii) Statistical models that learn mapping rules from data and combine them with predefined gesture units to generate gestures (Kipp et al. 2007; Levine, Theobalt, and Koltun 2009; Levine et al. 2010); (iii) Deep learning methods that use neural networks to model the relationship between speech and gesture (Yoon et al. 2019; Yang et al. 2023c). 
While rulebased approaches can yield results that are easy to understand and control, they require a substantial amount of manual effort to create gesture datasets and engineer rules. Datadriven methods have become predominant for this task. Recent advances (Hu et al. 2022; Zhang et al. 2021) in deep learning have allowed neural networks to directly learn the complex relationships between speech and gestures from raw multimodal data (Zhang et al. 2022b). Previous research has shown that increasing the number of input modalities allows models to generate a wider range of expressive gestures (Yoon et al. 2020; Liu et al. 2022a). Although incorporating additional input modalities (e.g., facial blendshapes, emotion) improved performance, it also led to greater application difficulty during inference (e.g., requiring more capture devices, longer preprocessing times). Additionally, when there are multiple input modalities, there is often redundant modal information, allowing models to take shortThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6388 TFFD Face Generator Face Decoder Audio Encoder Text Encoder Body Generator CW-Attn Body Decoder Hand Generator CW-Attn Hand Decoder 𝐴𝑢𝑑𝑖𝑜 𝑇𝑒𝑥𝑡 𝑆𝑒𝑒𝑑𝑃𝑜𝑠𝑒 Training Objects 𝐿𝑅ℎ𝑦 𝐿𝑀𝑠𝑒 Speech Feature Extractor Gesture AdaLN Speaker Embedding Emotion Embedding Hypernetwork Emotion-Aware Style Injector 𝐿𝑅𝑒𝑐 Emotion & SpeakerID Classifier Nice to meet you! … … … Figure 2: Overall architecture of the proposed CoG. Given audio sequences, text sequences, and seed poses, we use a chainlike framework to generate facial blendshapes, body movements, and hand gestures sequentially. We incorporate a classifier to extract emotion labels and speaker IDs from the speech, and use a hypernetwork to learn the style features. We further introduce gesture adaptive layer normalization to apply the generated style features to all input features used by the body and hand decoder. We leverage the ground truth of facial blendshapes as extra guidance during the training of the face generator to facilitate the transition from speech to facial blendshape. The features from the temporal facial feature decoder (TFFD) are utilized to calculate rhythmic identification losses, aiming to enhance the rhythm of generated gestures. (Best viewed in color.) cuts without fully utilizing all modalities. To overcome these limitations, our method investigates the capacity of models to generate supplementary modalities exclusively from speech. This approach not only reduces inference costs but also enhances the utilization of multimodal features. Methodology Overall Framework To reduce the dependence on multimodal information that must be complete and consistent with training data during model prediction (Lin et al. 2023), we aim to design a gesture synthesis method that can utilize multimodal prior information for better training, while still maintaining high inference performance when lacks partial modal information. Such a modal that eliminates the costly setup preparation during inference, can improve its flexibility and applicability. We propose a chain of generation method: Cascaded Gesture Synthesizer, which models the gesture in order from simple to complex, beginning with the facial blendshape, followed by the body, and concluding with the hand. Specifically, as a common approach in speech-driven gesture synthesis, we use audio sequences A = {a1, ..., aN}, text sequences T = {t1, ..., tN} and previous gestures as input (e.g. 
seed poses) to guide the continuous sequence of co-speech gestures denoted as G = {g1, ..., gN}, as inputs. Here, N represents the total number of frames, and gi ∈RJ×3 represents the 3D rotation pose state of the i-th frame. For speech input words, we use a pre-trained FastText (Bojanowski et al. 2017) to convert them into a word embedding set. Then, the word sets are fine-tuned by an encoder ET to generate the text feature T ∈R128. For audio feature extraction, we utilize a semi-supervised speech model, wav2vec2.0 (Baevski et al. 2020). We found that it is difficult to maintain the performance during the inference phase if some modal information is missing. Therefore, we first introduce a linear projection layer at the end of the audio encoder to obtain the 128dimensional latent audio feature A ∈R128, for refining the audio feature. In order to guide the transition, we present a supervision signal (annotated facial blendshapes) during training to encourage the separation of facial blendshapes from speech. Then, we also extract rhythmic features from facial deformation using a temporal convolutional decoder and a rhythmic identification loss. Finally, to ensure accurate emotional expressions in the generated poses, we propose an emotion-aware style injector, which incorporates an additional classifier that generates emotion information from the speech input, and a hypernetwork to learn the style features. We further introduce gesture adaptive layer normalization to apply the generated style features to all input features used by the body and hand decoder. In this way, we utilize the deformation prior of facial blendshapes and the stylization The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6389 prior derived from speech to improve gesture synthesis. Cascaded Gesture Synthesizer Inspired by the concept of chain of thought and modal interaction (Wei et al. 2022; Xu et al. 2023), we propose to decompose the problem of speech-to-gesture generation into several stages. These stages include speech-to-emotionbased style recognition, speech-to-facial blendshape generation, facial blendshape-to-body generation, and body-tohands generation. With emotion-based style features integrated into the latent features, our approach utilizes a chainlike modeling method, sequentially generating facial blendshapes, body movements, and hand gestures. Face Generator. Our face generator is designed to generate a sequence of facial blendshapes F = {f1, ..., fN} that are synchronized with audio features A = {a1, ..., aN}, where N denotes the total number of frames. To achieve this goal, we formulate 3D facial blendshape synthesis as a speech-driven sequence-to-sequence learning problem. Specifically, we adopt a 3-layer temporal convolutional decoder ( FaceDec(·)) to transfer audio features into facial blendshape Ft. Considering the temporal transformation of facial blendshape (e.g., the rate and degree of deformation), which can help the model understand the rhythm of speech, we decode the blendshapes into facial features ˆFt using a temporal facial feature decoder (TFFDec(·)), which consists of 4 layers of temporal convolutional layers. The features will be further used to generate gestures in the body and hand generator. This allows us to capture local facial deformation over time, which can be formalized as: Ft = FaceDec(At), ˆFt = TFFDec(Ft), (1) where t ∈N denotes the t-th frame and N represents the total number of frames. 
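A minimal PyTorch sketch of the face generator in Eq. (1) is given below: a 3-layer temporal convolutional decoder (FaceDec) mapping audio features to blendshapes, followed by the 4-layer temporal facial feature decoder (TFFDec). Only the layer counts and the 128-dimensional feature size follow the text; kernel sizes, activations, and the blendshape dimensionality are illustrative guesses.

```python
import torch.nn as nn

def temporal_conv_stack(in_dim, out_dim, num_layers, hidden=128, k=3):
    # Simple stack of 1D temporal convolutions with length-preserving padding.
    layers, d = [], in_dim
    for i in range(num_layers):
        last = (i == num_layers - 1)
        layers.append(nn.Conv1d(d, out_dim if last else hidden, k, padding=k // 2))
        if not last:
            layers.append(nn.LeakyReLU(0.2))
        d = hidden
    return nn.Sequential(*layers)

class FaceGenerator(nn.Module):
    def __init__(self, audio_dim=128, num_blendshapes=51, feat_dim=128):
        super().__init__()
        self.face_dec = temporal_conv_stack(audio_dim, num_blendshapes, num_layers=3)  # FaceDec
        self.tffd = temporal_conv_stack(num_blendshapes, feat_dim, num_layers=4)       # TFFDec

    def forward(self, audio_feat):           # audio_feat: (batch, T, audio_dim)
        a = audio_feat.transpose(1, 2)       # Conv1d expects (batch, dim, T)
        f = self.face_dec(a)                 # blendshapes F_t, first line of Eq. (1)
        f_hat = self.tffd(f)                 # facial features for the body/hand stages
        return f.transpose(1, 2), f_hat.transpose(1, 2)
```

The returned blendshapes are supervised against the annotated ground truth, while the TFFDec output feeds the downstream generators, mirroring the two roles described above.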
Due to the temporal alignment of the annotated blendshape and speech, we add supervision on Ft to ensure the time consistency of the generated facial features with the speech. Body and Hand Generator. As two generators have the same structure, we use body generator as an example to introduce this method for simplicity. In this generator, two modules (Channel-wise Attention Module and Body/Hand Decoer) are introduced detailed as follows. Channel-wise Attention Module (CW-Attn). In order to facilitate the learning of features, we introduce the channelwise attention module to perform adaptive reweighting of channel information to focus on more salient features. These attention weights selectively enhance informative channels via channel-wise multiplication of the latent features, which we refer to as “attended features”. This module emphasizes useful channels while suppressing less informative ones. It contains average and max pooling layers to aggregate channel-wise statistics. The pooled outputs are fed into convolutional layers to generate a per-channel attention vector, activated by a sigmoid to range from 0 to 1, formulated as CW-Attn(·). We concatenate the multi-modal features and feed them into CW-Attn as: Ct = CW-Attn([ ˆAt : ˆTt : ˆFt]), (2) where t ∈N denotes the i-th frame, N represents the total number of frames, and [:] denotes the concatenation operation. The attention vector is generated by convolving the pooled features, which selectively enhances salient channels via element-wise multiplication with the input. Body Decoder. We adopt the separated, cascaded LSTM structure from previous work (Liu et al. 2022a) as decoder to capture the body/hand feature. Unlike these approaches, we incorporate multimodal priors as supplementary conditions to enhance gesture naturalness. For reconstruction, the independent MLPs are added to the end of the body decoders to enable body synthesis. The process can be formalized as: Mt = StyleInject(Ct, St), ˆBt = BodyDec(Mt), Bt = BodyMlp( ˆBt). (3) where StyleInject refers to the operation of injecting emotive and personal gesture style, which is given in (§). Hand Decoder. The difference is that the input feature of the hand generator concatenates the output of the body generator, denoted as: ˆCt = CW-Attn([Mt : ˆBt]). (4) We further model the hand gesture synthesis and get the ˆ Mt, ˆHt, and Ht same as body decoder. Emotion-Aware Style Injector We introduce an emotion-aware style injector to enable more expressive gestures. Contrary to previous works (Liu et al. 2022a; Yang et al. 2023b) that simply converts emotional labels into embedded features and concatenates them with other modal features as inputs. We consider emotional gesture generation as a stylized task and utilize a classifier to derive emotional information from speech. Emotion & Speaker ID Classifier. To extract emotional information from the speech input, we first train a classifier on the training set using speech, emotion labels and speaker IDs. The classifier consists of a 3-layer temporal convolutional network with two linear projection layers, enabling the prediction of probabilities. We utilize cross-entropy loss to optimize the alignment between the predicted probability and the true emotion and speaker category. Then, we freeze the weights of the classifier and use its predictions as a guide to learn the style vector for the cascaded gesture synthesizer. Gesture Adaptive Layer Normalization. 
We build upon the concept of adaptive layer normalization (Huang and Belongie 2017) and propose GestureAdaLN for stylized gesture generation. GestureAdaLN utilizes a hypernetwork to leverage inter-speaker and inter-emotion priors. The hypernetwork takes speaker and emotion embeddings as inputs and employs a 2-layer temporal convolution network to generate a style vector St for representing gesture style. Through two linear projection layers (f(·) and g(·)), the style vector St is mapped to channel-wise mean and standard deviation parameters. These parameters are then used to modulate the latent feature X:
$S_t = \mathrm{HyNet}(E_t, I_t), \quad \hat{X} = f(S_t) \cdot X + g(S_t),$ (5)
where HyNet(·) denotes the hypernetwork, and $E_t$ and $I_t$ represent the emotion label and speaker ID at frame t. By considering the speaker identity and emotions, our method effectively captures the style differences in gesture content across different contexts, resulting in gestures that better align with the current speech content. Training Objective Rhythmic Identification Loss. Given the significance of a speaker's speech rhythm in their nonverbal communication, we propose a novel way to aid the generation of gestures from audio by integrating speech rhythms. A key insight is that the rhythm and prosody of speech provide important cues for generating natural gestures, while facial blendshapes provide additional contextual information about the tone and intent. Therefore, we apply the InfoNCE loss (Chen et al. 2020) to encourage temporal synchronization between the facial features $\hat{F}$ and audio features $\hat{A}$ to extract the speaking rhythm. Specifically, we compute the InfoNCE loss between encoded representations of the two modalities, obtained using neural network encoders f and g, respectively. This acts as a multimodal alignment loss that matches rhythmic cues in the audio with facial expressions, thereby producing features that better encode rhythmic information. The loss is defined as:
$\ell_{Rhy} = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{\exp(\mathrm{sim}(f(\hat{F}_i), g(\hat{A}_i))/\tau)}{\sum_{j=1}^{N} \exp(\mathrm{sim}(f(\hat{F}_i), g(\hat{A}_j))/\tau)},$ (6)
where N is the number of frames in the aligned sequences, and $\hat{F}_i$ and $\hat{A}_i$ are the facial and audio features at the i-th frame, respectively. sim(·) denotes the cosine similarity between the encoded representations, and τ represents the temperature hyperparameter. The synchronization loss encourages the neural network encoders to learn representations that capture meaningful correlations between the two modalities while disregarding irrelevant variations in the speech. Face Reconstruction Loss. We utilize the mean squared error (MSE) loss as the facial blendshape reconstruction loss. Specifically, the loss is defined as:
$\ell_{MSE} = \frac{1}{N} \sum_{i=1}^{N} (F_i - \hat{F}_i)^2,$ (7)
where N indicates the number of frames, $F_i$ denotes the ground-truth blendshape parameters of the i-th frame, and $\hat{F}_i$ denotes the corresponding blendshapes predicted by our model. By minimizing the MSE during training, we aim to improve the accuracy and fidelity of the facial blendshapes in capturing facial expressions and deformations. This, in turn, enables the generation of expressive gestures by the cascaded gesture synthesizer. Body & Gesture Reconstruction Loss. For the body and hand generators, we utilize the L1 loss as the reconstruction loss function. The L1 loss measures the absolute difference between the predicted and ground-truth values of the body and hand gesture parameters, providing a robust and efficient metric for reconstruction quality.
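Before turning to the reconstruction terms, the rhythmic identification loss of Eq. (6) can be written compactly as a frame-level InfoNCE objective. The sketch below is generic: it assumes per-frame features of shape (N, d) already produced by the encoders f and g, and omits batching over sequences and the exact projection heads.

```python
import torch
import torch.nn.functional as F

def rhythmic_identification_loss(face_feat, audio_feat, tau=0.1):
    """InfoNCE between temporally aligned facial and audio features (Eq. 6).

    face_feat, audio_feat: (N, d) per-frame features after the encoders f and g.
    The positive for frame i is the audio feature of the same frame; all other
    frames of the sequence act as negatives.
    """
    f = F.normalize(face_feat, dim=-1)
    g = F.normalize(audio_feat, dim=-1)
    logits = f @ g.t() / tau                          # cosine similarity / temperature
    targets = torch.arange(f.size(0), device=f.device)
    return F.cross_entropy(logits, targets)           # -log softmax of each matched pair
```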
The body and hand reconstruction losses are defined as:
$\ell_{rec}^{B} = \mathbb{E}\left[ \left\| B - \hat{B} \right\|_{1} \right], \quad \ell_{rec}^{H} = \mathbb{E}\left[ \left\| H - \hat{H} \right\|_{1} \right], \quad \ell_{Rec} = \ell_{rec}^{B} + \alpha \ell_{rec}^{H},$ (8)
where the weight α balances the body and hand penalties, and $\mathbb{E}$ denotes the expectation over samples. Our objective is to improve the quality of the generated gestures by minimizing the L1 loss during training. This, in turn, enhances the model's capability to capture the dynamics and subtle nuances of human motion. The Overall Objective. In summary, the overall optimization objective for our proposed method is formalized as:
$\mathcal{L} = \lambda_{Rhy}\ell_{Rhy} + \lambda_{MSE}\ell_{MSE} + \lambda_{Rec}\ell_{Rec},$ (9)
where $\lambda_{Rhy} = 1$, $\lambda_{MSE} = 1000$, and $\lambda_{Rec} = 500$. Experiments Experiments Setting Dataset. To evaluate the effectiveness of each component in our approach, we conducted comprehensive experiments on a large-scale multimodal dataset called BEAT (Body-Expression-Audio-Text) (Liu et al. 2022a). The dataset comprises 76 hours of multi-modal data captured from 30 speakers conversing in four different languages while expressing eight distinct emotions. This includes conversational gestures accompanied by facial expressions, emotions, and semantics, as well as annotations for audio, text, and speaker identity. To ensure a fair comparison, we followed CaMN (Liu et al. 2022a) and utilized approximately 16 hours of speech data from English speakers. Additionally, we followed the established practice of dividing the dataset into separate training, validation, and testing subsets, while maintaining the same data partitioning scheme as in previous work to ensure the fairness of the comparison. Implementation Details. We use the Adam optimizer with an initial learning rate of 0.00025, and set the batch size to 512. To ensure a fair comparison, we use N = 34 frame clips with a stride of 10 during training. The initial four frames are used as seed poses, and the model is trained to generate the remaining 30 poses, which correspond to a duration of 2 seconds. Our models utilize 47 joints in the BEAT dataset, including 38 hand joints and 9 body joints. The latent dimensions of the facial blendshape, audio, text, and gesture features are all set to 128, while the speaker embedding and emotion embedding are set to 8. We set τ = 0.1 in the rhythmic identification loss. All experiments are conducted using NVIDIA A100 GPUs. Further analysis of the hyperparameters is given in the supplementary materials. Evaluation Metrics Fréchet Gesture Distance (FGD). We use the Fréchet Gesture Distance (Yoon et al. 2020) to evaluate the distribution distance between the synthesized and ground-truth gestures. To compute this metric, we utilize the autoencoder pre-trained by BEAT (Liu et al. 2022a).
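FGD follows the usual Fréchet distance between Gaussian fits of real and generated latent codes; a minimal NumPy/SciPy sketch of that computation is shown below, with the pre-trained feature autoencoder assumed and omitted. The released evaluation code may differ in numerical details.

```python
import numpy as np
from scipy import linalg

def frechet_gesture_distance(real_feats, gen_feats):
    """real_feats, gen_feats: (num_samples, dim) latent codes from the pre-trained autoencoder."""
    mu_r, mu_g = real_feats.mean(0), gen_feats.mean(0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
    if np.iscomplexobj(covmean):          # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```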
SRGR leverages the semantic scores as weights for the Probability of Correct Keypoint metric between the generated gestures and the ground truth gestures. It can accurately capture the semantic aspects related to the generated gestures. Beat Alignment Score (BeatAlign). To evaluate the correlation between gestures and audio, we utilized the Beat Alignment Score (Li et al. 2021b) to calculate the similarity between gesture beats and audio beats. BeatAlign provides a measure of alignment between the two modalities. Qualitative Results We compared our proposed method with existing multimodal gesture synthesis methods on the BEAT dataset in Table 1. Our approach achieves competitive performance across all metrics when compared to the state-of-the-art methods, which validates the effectiveness of our cascaded conditional framework for this task. In our method, we adopt an incremental modeling approach for human gestures. We start by generating facial blendshapes, then move on to the body, and finally the gestures. The chain-like generation pipeline enables lower FGD, leading to high-fidelity gesture reconstruction by gradually modeling gestures from simple to complex. Additionally, we effectively utilize multimodal information by decoupling stylization and rhythm cues from speech, resulting in significant improvements in the SRGR and BeatAlign scores. Ablation Study We validate the effectiveness of our proposed approach by conducting ablation studies on different components of our proposed method. Effect of Cascaded Gesture Synthesizer. We evaluated the effectiveness of the cascaded gesture synthesizer through experiments conducted under various settings, as shown in Table 2. We examined the significance of the chain structure by conducting ablative experiments on the face generator. Specifically, “−facecog” refers to the method that does not join the face generator in our cascaded framework. In We were … … long time … each other … to see … Ground Truth Trimodal CaMN CoG (Ours) wow what … Figure 3: Visualization of our predicted 3D gestures against various baseline methods. The results of different methods are presented in separate rows, with each row representing the generated results of a method at different time frames. this setting, the facial blendshapes are converted into embeddings using a facial encoder and then concatenated with the encoding features of other modalities. “+ facecog” represents the method that incorporates a face generator, where facial blendshape is included as a generation condition using the temporal facial feature decoder. The results of the ablation study show that incorporating the temporal facial feature decoder to process the generated features leads to a notable 31.7% decrease in FGD. We also examine the impact of integrating a channel-wise attention module, which highlights important channels while suppressing less informative ones. By incorporating the channel-wise attention module, indicated as ”+ cw-attn,” we achieve adaptive reweighting of channel information, giving higher priority to more prominent features and leading to enhanced overall metrics. Effect of Emotion-Aware Style Injector. We validated the effectiveness of the emotion-aware style injector, as shown in Table 2, the method labeled as “+ style” includes the emotion-aware style injector to enable more expressive gestures. 
The ablation results demonstrate that incorporating emotional information from speech as a stylized prior enhances our method’s capability to generate natural and contextually appropriate gestures. Moreover, the results confirm that considering gesture synthesis as a stylized task can enhance the expressiveness of the generated gestures. This improvement is clearly demonstrated in the overall metrics, particularly in the case of BeatAlign, which experienced a Settings FGD↓ SRGR↑ BeatAlign↑ −facecog 88.50 0.228 0.762 + facecog 60.44 0.240 0.834 + cw-attn 58.74 0.238 0.842 + style 52.36 0.241 0.916 + LRhy 45.87 0.308 0.931 Table 2: Ablation study on different components of our proposed method. ↓denotes the lower the better, and ↑denotes the higher the better. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6392 significant increase of 8.79%. Effect of Rhythmic Identification Loss. We validate the effectiveness of the rhythmic identification loss, as listed in Table 2, “+ LRhy” signifies the method that incorporates the rhythmic identification loss for further exploration of the rhythm of gesture. This improvement provides evidence for the correlation between facial blendshapes and gesture rhythm, as well as the successful separation of speech and facial features through contrastive learning. The benefits of investigating rhythm through facial features are clearly illustrated by the significant 27.8% increase in the SRGR score. Qualitative Analysis User Study. We conducted user study to evaluate the visual quality of the generated co-speech 3D gestures. For each compared method, we generated 10 results, and these gestures were converted into videos for evaluation by 22 participants. In each test, participants are presented with 20-second video clips synthesized by different models. Then, the participants are asked to provide ratings based on four dimensions: (i) naturalness, (ii) appropriateness, (iii) style correctness, and (iv) synchrony. In the naturalness test, participants are asked to evaluate the similarity of the generated gestures to those made by humans. This assessment primarily focuses on the naturalness and smoothness of the movements. In the appropriateness test, participants are required to assess the consistency of the gestures with the speech content, including both the literal content and the conveyed semantic meaning. In the style correctness test, participants are provided with emotion labels and asked to determine whether the generated gestures align with the intended style. In the synchrony test, participants assess the level of synchronization between gestures, speech rhythm, and accompanying audio and facial movements. They evaluate how well the gestures synchronize with these elements, ensuring a cohesive and harmonious overall presentation. We compared three methods, including MultiContext (Yoon et al. 2020), CaMN (Liu et al. 2022a), our method, and ground truth. As shown in Table 3, the average results demonstrate that our method achieves a significant advantage over the compared methods, performing better across all metrics. Visualization. As illustrated in Figure 3, our method generates gestures that are more rhythmic and natural, aligning well with the speaking cadence. 
The gestures highlighted Methods N↑ M↑ C↑ S↑ MultiContext 3.127 2.901 3.129 3.110 CaMN 3.588 3.261 3.225 3.414 Ours 3.916 3.624 3.541 3.685 Ground Truth 4.492 4.570 4.382 4.413 Table 3: The user study on naturalness (N, human likeness), matchness (M, the degree of consistency with the speech content), style correctness (C, with emotion labels), and synchrony (S, the level of synchronization with the speech rhythm). The rating score range is 1-5, with 5 being the best. ↑indicates the higher the better. facial blendshape w/o face w face Figure 4: Visualization of the gestures generated by our method, the results of without/with a facial generator and rhythmic identification loss (denoted as w/o face and w face). The facial blendshape results are shown in the first row, and w/o face and w face are given in the second and third rows, respectively. within the green rectangle indicate that previous methods were lacking in terms of diversity. In contrast, the gestures generated by our method exhibit a greater range of diversity. The gestures circled within the red rectangle indicate that our method is capable of recognizing gestures corresponding to expressive words. As shown in the figure, when the speaker utters an expressive word like “wow”, the corresponding audio exhibits rhythmic fluctuations. Our method responds by performing a simultaneous upward movement of both hands, resembling the human gestures represented by the ground truth. In contrast, previous methods have been deficient in capturing this aspect. The gestures highlighted within the blue rectangle reveal that our method is capable of producing gestures that correspond to the intended meaning of the speaker. It can be observed that when the speaker mentions the word “long”, our method generates a gesture of both hands spreading outwards, which aligns with the semantic meaning of the word. This gesture closely resembles the ground truth. In contrast, other methods lack a similar response in this regard. Our approach enables the generation of expressive gestures that not only appear natural but also synchronize with the rhythm of speech. We compared the results generated without the face generator to those generated with the face generator in Figure 4. It can be observed that incorporating facial features aligns gesture rhythms with speech pace, demonstrating the advantages gained from including facial deformation priors. Conclusion In this study, we propose a framework to enhance the generation of 3D gestures by leveraging multimodal information from human speech. Our approach incorporates multimodal priors as constraints to enhance gesture generation. We adopt a chain-like modeling approach to sequentially generate facial blendshapes, body movements, and hand gestures. By incorporating rhythm cues from facial blendshapes and stylization priors into the generation process, our approach improves the quality of the generated gestures and reduces the number of modalities needed during inference. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6393 Acknowledgements This research was partly supported by Shenzhen Key Laboratory of next generation interactive media innovative technology (Grant No: ZDSYS20210623092001004), the China Postdoctoral Science Foundation (No.2023M731957), the National Natural Science Foundation of China under Grant 62306165. References Ao, T.; Zhang, Z.; and Liu, L. 2023. GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents. arXiv preprint arXiv:2303.14613. 
Baevski, A.; Zhou, Y.; Mohamed, A.; and Auli, M. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in neural information processing systems, 33: 12449–12460. Bojanowski, P.; Grave, E.; Joulin, A.; and Mikolov, T. 2017. Enriching word vectors with subword information. Transactions of the association for computational linguistics, 5: 135–146. Cassell, J.; Pelachaud, C.; Badler, N.; Steedman, M.; Achorn, B.; Becket, T.; Douville, B.; Prevost, S.; and Stone, M. 1994. Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents. In Proceedings of the 21st annual conference on Computer graphics and interactive techniques, 413–420. Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020. A simple framework for contrastive learning of visual representations. In International conference on machine learning, 1597–1607. PMLR. Ginosar, S.; Bar, A.; Kohavi, G.; Chan, C.; Owens, A.; and Malik, J. 2019. Learning individual styles of conversational gesture. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3497–3506. Hu, R.; Monebhurrun, V.; Himeno, R.; Yokota, H.; and Costen, F. 2022. An uncertainty analysis on finite difference time-domain computations with artificial neural networks: improving accuracy while maintaining low computational costs. IEEE Antennas and Propagation Magazine, 65(1): 60–70. Huang, X.; and Belongie, S. 2017. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE international conference on computer vision, 1501–1510. Kipp, M.; Neff, M.; Kipp, K. H.; and Albrecht, I. 2007. Towards natural gesture synthesis: Evaluating gesture units in a data-driven approach to gesture synthesis. In Intelligent Virtual Agents: 7th International Conference, IVA 2007 Paris, France, September 17-19, 2007 Proceedings 7, 15– 28. Springer. Kopp, S.; and Wachsmuth, I. 2004. Synthesizing multimodal utterances for conversational agents. Computer animation and virtual worlds, 15(1): 39–52. Kucherenko, T.; Jonell, P.; Yoon, Y.; Wolfert, P.; and Henter, G. E. 2021. A large, crowdsourced evaluation of gesture generation systems on common data: The GENEA Challenge 2020. In 26th international conference on intelligent user interfaces, 11–21. Levine, S.; Kr¨ahenb¨uhl, P.; Thrun, S.; and Koltun, V. 2010. Gesture controllers. In ACM SIGGRAPH 2010 papers, 1– 11. Levine, S.; Theobalt, C.; and Koltun, V. 2009. Real-time prosody-driven synthesis of body language. In ACM SIGGRAPH Asia 2009 papers, 1–10. Li, J.; Kang, D.; Pei, W.; Zhe, X.; Zhang, Y.; He, Z.; and Bao, L. 2021a. Audio2gestures: Generating diverse gestures from speech audio with conditional variational autoencoders. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 11293–11302. Li, R.; Yang, S.; Ross, D. A.; and Kanazawa, A. 2021b. Ai choreographer: Music conditioned 3d dance generation with aist++. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 13401–13412. Li, R.; Zhao, J.; Zhang, Y.; Su, M.; Ren, Z.; Zhang, H.; Tang, Y.; and Li, X. 2023. FineDance: A Fine-grained Choreography Dataset for 3D Full Body Dance Generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10234–10243. Lin, Y.; Han, H.; Gong, C.; Xu, Z.; Zhang, Y.; and Li, X. 2023. Consistent123: One image to highly consistent 3d asset using case-aware diffusion priors. arXiv preprint arXiv:2309.17261. 
Liu, H.; Zhu, Z.; Iwamoto, N.; Peng, Y.; Li, Z.; Zhou, Y.; Bozkurt, E.; and Zheng, B. 2022a. BEAT: A Large-Scale Semantic and Emotional Multi-Modal Dataset for Conversational Gestures Synthesis. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part VII, 612–630. Springer. Liu, X.; Wu, Q.; Zhou, H.; Du, Y.; Wu, W.; Lin, D.; and Liu, Z. 2022b. Audio-Driven Co-Speech Gesture Video Generation. arXiv preprint arXiv:2212.02350. Nyatsanga, S.; Kucherenko, T.; Ahuja, C.; Henter, G. E.; and Neff, M. 2023. A Comprehensive Review of Data-Driven Co-Speech Gesture Generation. In Computer Graphics Forum, volume 42, 569–596. Wiley Online Library. Qi, X.; Liu, C.; Li, L.; Hou, J.; Xin, H.; and Yu, X. 2023. EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation. arXiv preprint arXiv:2305.18891. Qian, S.; Tu, Z.; Zhi, Y.; Liu, W.; and Gao, S. 2021. Speech drives templates: Co-speech gesture synthesis with learned templates. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 11077–11086. Wagner, P.; Malisz, Z.; and Kopp, S. 2014. Gesture and speech in interaction: An overview. Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Xia, F.; Chi, E.; Le, Q. V.; Zhou, D.; et al. 2022. Chain-ofthought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35: 24824–24837. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6394 Xu, Z.; Chen, Z.; Zhang, Y.; Song, Y.; Wan, X.; and Li, G. 2023. Bridging vision and language encoders: Parameterefficient tuning for referring image segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 17503–17512. Yang, S.; Wang, Z.; Wu, Z.; Li, M.; Zhang, Z.; Huang, Q.; Hao, L.; Xu, S.; Wu, X.; Yang, C.; et al. 2023a. UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons. In Proceedings of the 31st ACM International Conference on Multimedia, 1033–1044. Yang, S.; Wu, Z.; Li, M.; Zhang, Z.; Hao, L.; Bao, W.; Cheng, M.; and Xiao, L. 2023b. DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models. arXiv preprint arXiv:2305.04919. Yang, S.; Wu, Z.; Li, M.; Zhang, Z.; Hao, L.; Bao, W.; and Zhuang, H. 2023c. QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, 2321–2330. IEEE. Yang, S.; Wu, Z.; Li, M.; Zhao, M.; Lin, J.; Chen, L.; and Bao, W. 2022. The ReprGesture entry to the GENEA Challenge 2022. In Proceedings of the 2022 International Conference on Multimodal Interaction, 758–763. Yang, S.; Xue, H.; Zhang, Z.; Li, M.; Wu, Z.; Wu, X.; Xu, S.; and Dai, Z. 2023d. The DiffuseStyleGesture+ entry to the GENEA Challenge 2023. In Proceedings of the 25th International Conference on Multimodal Interaction, 779– 785. Yazdian, P. J.; Chen, M.; and Lim, A. 2022. Gesture2Vec: Clustering gestures using representation learning methods for co-speech gesture generation. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 3100–3107. IEEE. Yi, H.; Liang, H.; Liu, Y.; Cao, Q.; Wen, Y.; Bolkart, T.; Tao, D.; and Black, M. J. 2022. Generating Holistic 3D Human Motion from Speech. arXiv preprint arXiv:2212.04420. Yin, L.; Wang, Y.; He, T.; Liu, J.; Zhao, W.; Li, B.; Jin, X.; and Lin, J. 2023. EMoG: Synthesizing Emotive Cospeech 3D Gesture with Diffusion Model. arXiv preprint arXiv:2306.11496. 
Yoon, Y.; Cha, B.; Lee, J.-H.; Jang, M.; Lee, J.; Kim, J.; and Lee, G. 2020. Speech gesture generation from the trimodal context of text, audio, and speaker identity. ACM Transactions on Graphics (TOG), 39(6): 1–16. Yoon, Y.; Ko, W.-R.; Jang, M.; Lee, J.; Kim, J.; and Lee, G. 2019. Robots learn social skills: End-to-end learning of co-speech gesture generation for humanoid robots. In 2019 International Conference on Robotics and Automation (ICRA), 4303–4309. IEEE. Zhang, M.; Cai, Z.; Pan, L.; Hong, F.; Guo, X.; Yang, L.; and Liu, Z. 2022a. Motiondiffuse: Text-driven human motion generation with diffusion model. arXiv preprint arXiv:2208.15001. Zhang, Y.; Qu, Y.; Xie, Y.; Li, Z.; Zheng, S.; and Li, C. 2021. Perturbed Self-Distillation: Weakly supervised large-scale point cloud semantic segmentation. In ICCV, 15520–15528. Zhang, Y.; Xie, Y.; Li, C.; Wu, Z.; and Qu, Y. 2022b. Learning All-In Collaborative Multiview Binary Representation for Clustering. IEEE Transactions on Neural Networks and Learning Systems. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6395 | 2024 | 710 |
18,530 | Decoupled Contrastive Learning for Long-Tailed Recognition Shiyu Xuan, Shiliang Zhang National Key Laboratory for Multimedia Information Processing School of Computer Science, Peking University, Beijing, China shiyu [email protected], [email protected] Abstract Supervised Contrastive Loss (SCL) is popular in visual representation learning. Given an anchor image, SCL pulls two types of positive samples, i.e., its augmentation and other images from the same class together, while pushes negative images apart to optimize the learned embedding. In the scenario of long-tailed recognition, where the number of samples in each class is imbalanced, treating two types of positive samples equally leads to the biased optimization for intra-category distance. In addition, similarity relationship among negative samples, that are ignored by SCL, also presents meaningful semantic cues. To improve the performance on long-tailed recognition, this paper addresses those two issues of SCL by decoupling the training objective. Specifically, it decouples two types of positives in SCL and optimizes their relations toward different objectives to alleviate the influence of the imbalanced dataset. We further propose a patch-based self distillation to transfer knowledge from head to tail classes to relieve the under-representation of tail classes. It uses patchbased features to mine shared visual patterns among different instances and leverages a self distillation procedure to transfer such knowledge. Experiments on different long-tailed classification benchmarks demonstrate the superiority of our method. For instance, it achieves the 57.7% top-1 accuracy on the ImageNet-LT dataset. Combined with the ensemble-based method, the performance can be further boosted to 59.7%, which substantially outperforms many recent works. Our code will be released. Introduction Thanks to the powerful deep learning methods, the performance of various vision tasks (Russakovsky et al. 2015; Long, Shelhamer, and Darrell 2015) on manually balanced dataset has been significantly boosted. In real-world applications, training samples commonly exhibit a long-tailed distribution, where a few head classes contribute most of the observations, while lots of tail classes are associated with only a few samples (Van Horn et al. 2018). Long-tailed distribution leads to two challenges for visual recognition: (a) the loss function designed for the balanced dataset can be easily biased toward the head classes. (b) each of tail classes contains too few samples to represent visual variances, leading to the under-representation of the tail classes. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. True Positive False Positive (a) Head Classes (b) Tail Classes Our Our SCL SCL SCL SCL Figure 1: Examples of retrieval results using features learned by SCL on head classes in (a) and tail classes in (b). In (b), features learned by SCL are biased to low-level appearance cues, while features learned by our method are more discriminative to semantic cues. By optimizing the intra-inter category distance, Supervised Contrastive Loss (SCL) (Khosla et al. 2020) has achieved impressive performance on balanced datasets. Given an anchor image, SCL pulls two kinds of positive samples together, i.e., (a) different views of the anchor image generated by the data augmentation, and (b) other images from the same classes. 
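As a point of reference for the analysis that follows, a generic SupCon-style implementation of SCL is sketched below; note how the positive set of an anchor mixes both kinds of positives, its own augmented view and every other image of the same class, without distinguishing them. This is a standard formulation, not the authors' code.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(feats, labels, tau=0.1):
    """feats: (2B, d) embeddings of two augmented views per image; labels: (2B,)."""
    feats = F.normalize(feats, dim=-1)
    sim = feats @ feats.t() / tau
    n = feats.size(0)
    logits_mask = 1.0 - torch.eye(n, device=feats.device)            # exclude the anchor itself
    pos_mask = (labels[:, None] == labels[None, :]).float() * logits_mask

    # numerically stable log-softmax over all non-self samples
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()
    exp_sim = torch.exp(sim) * logits_mask
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))

    # average log-likelihood over the pooled positive set of each anchor
    mean_log_prob_pos = (pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1)
    return -mean_log_prob_pos.mean()
```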
Those two types of positives supervise the model to learn different representations, i.e., images from the same categories enforce the learning of semantic cues, while samples augmented by appearance variances mostly lead to the learning of low-level appearance cues. Fig. 1 (a) shows that, SCL effective learns semantic features for head classes, e.g.,the learned semantic “bee” is robust to cluttered backgrounds. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6396 As shown in Fig. 1 (b), representations learned by SCL for tail classes are more discriminative to low-level appearance cues like shape, texture, and color. Our theoretical analysis in Section Overview indicates that, SCL poses imbalanced gradients on two kinds of positive samples, resulting a biased optimization for head and tail classes. We hence proposes the Decoupled Contrastive Learning, which adopts the Decoupled Supervised Contrastive Loss (DSCL) to handle this issue. Specifically, DSCL decouples two kinds of positive samples to re-formulate the optimization of intra-category distance. It alleviates the imbalanced gradients of two kinds of positive samples. We also give a theoretical proof that DSCL prevents the learning of a biased intra-category distance. In Fig. 1 (b), features learned by our method are discriminative to semantic cues, and substantially boost the retrieval performance on tail classes. To further alleviate the challenge of long-tailed distribution, we propose the Patch-based Self Distillation (PBSD) to leverage head classes to facilitate the representation learning in tail classes. PBSD adopts a self distillation strategy to better optimize the inter-category distance, through mining shared visual patterns between different classes and transferring knowledge from head to tail classes. We introduce patchbased features to represent visual patterns from an object. The similarity between patch-based features and instancelevel features is calculated to mine the shared visual patterns, i.e., if an instance shares visual patterns with a patch-based feature, they will have high similarity. We leverage the self distillation loss to maintain the similarity relationship among samples, and integrate the knowledge into the training. DSCL and PBSD are easy to implement, and substantially boosts the long-tailed recognition performance. We evaluate our method on several long-tailed datasets including ImageNet-LT (Liu et al. 2019), iNaturaList 2018 (Van Horn et al. 2018), and Places-LT (Liu et al. 2019). Experimental results show that our method improves the SCL by 6.5% and achieves superior performance compared with recent works. For example, it outperforms a recent contrastive learning based method TSC (Li et al. 2021) by 5.3% on ImageNet-LT. Our method can be flexibly combined with ensemble-based methods like RIDE (Wang et al. 2020), which achieves the overall accuracy of 74.9% on the iNaturaList 2018, outperforming the recent work CR (Ma et al. 2023) by 1.4% in overall accuracy. To the best of our knowledge, this is an original contribution decoupling two kinds of positives and using patch-based self distillation to boost the performance of SCL on long-tail recognition. The proposed DSCL decouples different types of positive samples to pursue a more balanced intra-category distance optimization across head and tail classes. It also introduces the similarity relationship cues to leverage shared patterns in head classes for the optimization in tail classes. 
Extensive experiments on three commonly used datasets have shown its promising performance. Our method is easy to implement and the code will be released to benefit the future research on long-tailed visual recognition. Related Work Long-tailed recognition aims to address the problem of the model training in the situation where a small portion of classes have massive samples but the others are associated with only a few samples. Current research can be summarized into four categories, e.g., re-balancing methods, decoupling methods, transfer learning methods and ensemble-based methods, respectively. Re-balancing methods use re-sampling or re-weighting to deal with long-tailed recognition. Re-sampling methods typically include over-sampling for the tail classes (Byrd and Lipton 2019) or under-sampling for the head classes (Japkowicz and Stephen 2002). Besides re-sampling, re-weighting the loss function is also an effective solution. For instance, Balanced-Softmax (Ren et al. 2020) presents the unbiased extension of Softmax based on the Bayesian estimation. Rebalancing methods could be harmful to the discriminative power of the learned backbone (Kang et al. 2019). Therefore, decoupling methods propose the two-stage training to decouple the representation learning and the classifier training. Transfer learning methods enhance the performance of the model by transferring knowledge from head classes to tail classes. BatchFormer (Hou, Yu, and Tao 2022) introduces a one-layer Transformer (Vaswani et al. 2017) to transfer knowledge by learning the sample relationship from each mini-batch. Ensemble-based methods leverage multi experts to solve long-tailed visual learning. RIDE (Wang et al. 2020) proposes a multi-branch network to learn diverse classifiers in parallel. Although ensemble-based method achieves superior performance, the introduction of multi experts increases the number of parameters and computational complexity. Contrastive learning has received much attention because of its superior performance on representation learning (He et al. 2020). Contrastive learning aims to find a feature space that can encode the semantic similarities by pulling the positive pairs together while pushing negative pairs apart. Some researchers have leveraged contrastive learning in the longtailed recognition. For example, KCL (Kang et al. 2020) finds that the self-supervised learning based on contrastive learning can learn a balanced feature space. To leverage the useful label information, they extend SCL (Khosla et al. 2020) by introducing a k-positive sampling method. TSC (Li et al. 2021) improves the uniformity of the feature distribution by making features of different classes converge to a pre-defined uniformly distributed targets. Some methods (Yun et al. 2022; Zhang et al. 2023) extend contrastive learning with localized information to benefit the dense prediction tasks. This work differs with previous ones in several aspects. Existing long-tailed recognition works using contrastive learning treat two kinds of positives equally. To the best of our knowledge, this is an early work revealing that treating two kinds of positives equally leads to a biased optimization across categories. We hence propose a decoupled supervised contrastive loss to pursue a balanced intra-category distance optimization. 
We further extend the contrastive learning by introducing patch-based self distillation to transfer knowledge between classes, mitigating the under-representation of the tailed classes and leading to a more effective optimization to inter-category distance. Different from other transfer The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6397 0 200 400 600 800 1000 Class index 0.00 0.05 0.10 0.15 0.20 0.25 Ratio of gradient L2 norm in Eq. (6) SCL Our SCL* Our* Tail Head Figure 2: Average ratio of gradient L2 norm computed by pulling the anchor with two types of positives as in Eq. (6) on ImageNet-LT. ‘*’ denotes the theoretical ratio. SCL treats two types of positives equally, and leads to the imbalanced optimization. Two types of positives denote the data argumentation and other images in the same category. learning methods, PBSD leverages patch-based features to mine shared patterns between different classes and designs a self distillation procedure to transfer knowledge. The self distillation procedure does not rely on a large teacher model or multi-expert models (Li et al. 2022), making it efficient. Compared with patch-based contrastive learning methods that only mine similar patches from different views of an image, PBSD transfers knowledge between different images. Those differences and the promising performance in extensive experiments highlight the contribution of this work. Methodology Analysis of SCL Given a training dataset D = {xi, yi}n i=1, where xi denotes an image and yi ∈{1, . . . , K} is its class label. Assuming that nk denotes the cardinality of class k in D, and indexes of classes are sorted by cardinality in decreasing order, i.e., if a < b, then na ≥nb. In long-tailed recognition, the training dataset is imbalanced, i.e., n1 ≫nK, and the imbalance ratio is defined as n1/nK. For the image classification task, algorithm aims to learn a feature extraction backbone vi = fθ(xi), and a linear classifier, which first maps an image xi into a global feature map ui and uses the global pooling to get a d-dim feature vector vi. It hence classifies the feature vector to a K-dim classification score. Typically, the testing dataset T is balanced. Supervised Contrastive Learning (SCL) is commonly adopted to learn the feature extraction backbone. Given an anchor image xi, defining zi = gγ(vi) as the normalized feature extracted with the backbone and an extra projection head gγ (He et al. 2020), z+ i as the normalized feature of a positive sample of xi generated by data augmentation. We use M to denote a set of sample features that can be acquired by the memory queue (He et al. 2020), and use Pi as the positive feature set of xi drawn from M with Pi = {zt ∈M : yt = yi}. SCL decreases the intra-category distance by pulling the anchor image and its positive samples together, meanwhile enlarges the inter-category distance through pushing images with different class labels apart, i.e., Lscl = −1 |Pi| + 1 X zt∈{z+ i ∪Pi} log p(zt|zi), (1) where |Pi| is the cardinality of Pi. Using τ to denote a predefined temperature parameter, the conditional probability p(zt|zi) is computed as, p(zt|zi) = exp(zt · zi/τ) P zm∈{z+ i ∪M} exp(zm · zi/τ). (2) Eq. (1) can be formulated as a distribution alignment task, Lalign = X zt∈{z+ i ∪M} −ˆp(zt|zi) log p(zt|zi), (3) where ˆp(zt|zi) is the probability of the target distribution. For z+ i and zt ∈Pi, SCL treats them equally as positive samples and sets their target probability as 1/(|Pi| + 1). 
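For concreteness, the SCL objective of Eqs. (1)-(2) can be written compactly in PyTorch. The snippet below is only an illustrative sketch under our own variable names (anchor feature, augmented view, memory queue), not the authors' released code.

```python
import torch

def scl_loss(z_i, z_aug, queue, queue_labels, y_i, tau=0.07):
    """Minimal sketch of SCL (Eqs. (1)-(2)).

    z_i, z_aug   : L2-normalized features of the anchor and its augmented view, shape (d,)
    queue        : L2-normalized features in the memory queue M, shape (|M|, d)
    queue_labels : class labels of the queue entries; y_i is the anchor's label
    """
    # Similarities of the anchor to {z_i^+} U M, scaled by the temperature tau.
    logits = torch.cat([(z_aug * z_i).sum().view(1), queue @ z_i]) / tau
    log_prob = logits - torch.logsumexp(logits, dim=0)          # log p(z_t | z_i), Eq. (2)

    # Positive set: the augmented view plus every queue entry sharing the anchor's label (P_i).
    pos_mask = torch.cat([torch.ones(1, dtype=torch.bool, device=queue.device),
                          queue_labels == y_i])
    # Eq. (1): average the negative log-probability over the |P_i| + 1 positives.
    return -log_prob[pos_mask].mean()
```

Note that `pos_mask` treats the augmented view (index 0) and the same-class queue entries identically; this equal weighting is exactly what the following analysis shows to be problematic on long-tailed data.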
For other images with different class labels in M, SCL treats them as negative samples and sets their target probability as 0. For the feature zi of an anchor image xi, the gradient of SCL is, ∂Lscl ∂zi = 1 τ ( X zj∈Ni zjp(zj|zi) + z+ i p(z+ i |zi) − 1 |Pi| + 1 + X zt∈Pi zt p(zt|zi) − 1 |Pi| + 1 ) , (4) where Ni is the negative set of xi containing features drawn from {zj ∈M : yj ̸= yi}. SCL involves two types of positive samples z+ i and zt ∈Pi. We compute gradients of pulling the anchor with two types of positive samples as, ∂Lscl ∂zi z+ i = z+ i p(z+ i | zi) − 1 |Pi| + 1 , ∂Lscl ∂zi zt = zt p(zt|zi) − 1 |Pi| + 1 , zt ∈Pi. (5) At the beginning of the training, the ratio of gradient L2 norm of two kinds of positive samples is,
$$\frac{\left\| \left.\frac{\partial L_{scl}}{\partial z_i}\right|_{z_i^{+}} \right\|_2}{\sum_{z_t \in P_i} \left\| \left.\frac{\partial L_{scl}}{\partial z_i}\right|_{z_t} \right\|_2} \approx \frac{1}{|P_i|}. \tag{6}$$
When SCL converges, the optimal conditional probability of $z_i^{+}$ is
$$p(z_i^{+}\mid z_i) = \frac{1}{|P_i|+1}. \tag{7}$$
A detailed proof of the above computations can be found in the Supplementary Material. In SCL, the memory queue $M$ is uniformly sampled from the training set, which leads to $|P_i| \approx \frac{n_{y_i}}{n}|M|$.

Figure 3: Illustration of the proposed method. Data augmentation is performed to get two global views of a training image. Then small patches are cropped from the global view. The backbone and the Exponential Moving Average (EMA) backbone (He et al. 2020) are used to extract normalized features. These features are used to calculate the similarity distribution with the memory queue $M$. $L_{dscl}$ optimizes the feature space by pulling the anchor image and its positive samples together and pushing the anchor image and its negative samples apart. $L_{pbsd}$ transfers knowledge by mimicking two similarity distributions.

In a balanced dataset, $n_1 \approx n_2 \approx \cdots \approx n_K$, resulting in a balanced $|P_i|$ across different categories. For a long-tailed dataset with imbalanced $|P_i|$, SCL makes the head classes pay more attention to pulling the anchor $z_i$ together with features from $P_i$, as the gradient is dominated by the third term in Eq. (4). As shown in Fig. 2, the ratio of gradient L2 norms of pulling the two kinds of positive samples is unbalanced. When the training of SCL converges, the optimal value of $p(z_i^{+}\mid z_i)$ is also influenced by $|P_i|$, as shown in Eq. (7). The inconsistency of learned features across categories is illustrated in Fig. 1(a) and (b). This phenomenon has also been validated by (Wei et al. 2020): pulling $z_i$ with $z_i^{+}$ and with samples from $P_i$ leads to learning different representations, i.e., appearance features for tail classes and semantic features for head classes, respectively. Eq. (4) also indicates that SCL equally pushes away all the negative samples to enlarge the inter-category distance. This strategy ignores the valuable similarity cues among different classes. To seek a better way to optimize intra- and inter-category distances, we propose the Decoupled Supervised Contrastive Loss (DSCL) to decouple the two kinds of positive samples and prevent the biased optimization, as well as the Patch-based Self Distillation (PBSD) to leverage similarity cues among classes.

Decoupled Supervised Contrastive Loss
DSCL is proposed to ensure a more balanced optimization of the intra-category distance across different categories. It decouples the two kinds of positive samples and adds different weights on them so that the gradient L2 norm ratio and the optimal value of $p(z_i^{+}\mid z_i)$ are not influenced by the number of samples in each category. We represent DSCL as
$$L_{dscl} = -\frac{1}{|P_i|+1} \sum_{z_t \in \{z_i^{+}\} \cup P_i} \log \frac{\exp\!\big(w_t\, z_t \cdot z_i/\tau\big)}{\sum_{z_m \in \{z_i^{+}\} \cup M} \exp(z_m \cdot z_i/\tau)}, \tag{8}$$
where
$$w_t = \begin{cases} \alpha(|P_i|+1), & z_t = z_i^{+}, \\[4pt] \dfrac{(1-\alpha)(|P_i|+1)}{|P_i|}, & z_t \in P_i, \end{cases} \tag{9}$$
and $\alpha \in [0, 1]$ is a pre-defined hyper-parameter. The proposed DSCL is a generalization of SCL in both the balanced and the imbalanced setting: if the dataset is balanced, DSCL is the same as SCL by setting $\alpha = 1/(|P_i|+1)$. We proceed to show why Eq. (8) leads to a more balanced optimization. At the beginning of the training, the gradient L2 norm ratio of the two kinds of positives is
$$\frac{\left\| \left.\frac{\partial L_{dscl}}{\partial z_i}\right|_{z_i^{+}} \right\|_2}{\sum_{z_t \in P_i} \left\| \left.\frac{\partial L_{dscl}}{\partial z_i}\right|_{z_t} \right\|_2} \approx \frac{\alpha}{1-\alpha}. \tag{10}$$
When DSCL converges, the optimal conditional probability of $z_i^{+}$ is $p(z_i^{+}\mid z_i) = \alpha$; a detailed proof can be found in the Supplementary Material. As shown in Eq. (10) and Fig. 2, the gradient ratio of the two kinds of positive samples is not influenced by $|P_i|$. DSCL also ensures that the optimal value of $p(z_i^{+}\mid z_i)$ is not influenced by $|P_i|$, and hence alleviates the inconsistent feature learning between head and tail classes.

Patch-based Self Distillation
Visual patterns can be shared among different classes, e.g., the visual pattern 'wheel' is shared by the classes 'truck', 'car', and 'bus'. Features of many visual patterns in tail classes can be learned from head classes that share these visual patterns, hence reducing the difficulty of representation learning in tail classes. SCL pushes two instances from different classes away in the feature space, even when they share meaningful visual patterns. As shown in Fig. 4, we extract query patch features from the yellow bounding boxes and retrieve the top-3 similar samples from the dataset. Retrieval results of SCL, denoted by 'w/o PBSD', are not semantically related to the query patch, indicating that SCL is not effective in learning and leveraging patch-level semantic cues. Inspired by the patch-based methods in fine-grained image recognition (Zhang et al. 2014; Quan et al. 2019; Sun et al. 2018), we introduce patch-based features to encode visual patterns. Given the global feature map $u_i$ of an image $x_i$ extracted by the backbone, we first randomly generate some patch boxes. The coordinates of those patch boxes are denoted $\{B_i[j]\}_{j=1}^{L}$, where $L$ is the number of patch boxes. We then apply ROI pooling (He et al. 2017) according to the coordinates of those patch boxes and send the pooled features into a projection head to get the normalized embedding features $\{c_i[j]\}_{j=1}^{L}$ with
$$c_i[j] = g_\gamma\big(\mathrm{ROI}(u_i, B_i[j])\big). \tag{11}$$
Similar to Eq. (2), the conditional probability is leveraged to calculate the similarity relationship between instances,
$$p(z_t \mid c_i[j]) = \frac{\exp(z_t \cdot c_i[j]/\tau)}{\sum_{z_m \in \{z_i^{+}\} \cup M} \exp(z_m \cdot c_i[j]/\tau)}. \tag{12}$$
If the image corresponding to $z_t$ shares visual patterns with the patch-based feature, $z_t$ and $c_i[j]$ will have a high similarity. Therefore, Eq. (12) encodes the similarity cues between each pair of instances. We use these similarity cues as knowledge to supervise the training procedure. To maintain such knowledge, we also crop multiple image patches from the image according to $\{B_i[j]\}_{j=1}^{L}$ and extract their feature embeddings $\{s_i[j]\}_{j=1}^{L}$ with the backbone,
$$s_i[j] = g_\gamma\big(f_\theta(\mathrm{Crop}(x_i, B_i[j]))\big). \tag{13}$$
PBSD enforces the feature embeddings of the image patches to produce the same similarity distribution as the patch-based features via the following loss,
$$L_{pbsd} = \frac{1}{L}\sum_{j=1}^{L} \sum_{z_t \in \{z_i^{+}\} \cup M} -p(z_t \mid c_i[j]) \log p(z_t \mid s_i[j]), \tag{14}$$
where $p(z_t \mid c_i[j])$ is detached from the computation graph to block the gradient. Local visual patterns of an object can be shared by different categories, so we use patch-based features to represent visual patterns. $p(z_t \mid c_i[j])$ is calculated to mine the relationships of shared patterns among images. Minimizing Eq. (14) maintains shared patterns to transfer knowledge and mitigate the under-representation of the tail classes. The retrieval results shown in Fig.
4 indicate that our method effectively reinforces the learning of patch-level features and patch-toimage similarity, making it possible to mine shared visual w/ PBSD w/o PBSD Figure 4: Patch-based image retrieval results (top 3 returned) on ImageNet-LT. Query patches are highlighted with yellow bounding boxes. The response map of query patch features on the retrieved images is also illustrated. patterns of different classes. Experiments also validate that PBSD loss is important to the performance gain. Multi-crop trick (Caron et al. 2020) is commonly used in self-supervised learning to generate more augmented samples of the anchor image. It introduces low resolution crops to reduce the computational complexity. Our motivation and loss design are different with the multi-crop strategy. PBSD is motivated to leverage shared patterns between head and tail classes to assist the learning of the tail classes. Patch-based features are obtained with ROI pooling to represent shared patterns. Eq. (14) performs self distillation to maintain shared patterns. We conduct an experiment by replacing PBSD with the multi-crop trick. As shown in Table 1, the performance on ImageNet-LT drops from 57.7% to 56.1%, indicating that PBSD is more effective than the multi-crop strategy. Training Pipeline We illustrate our method in Fig. 3. To maintain a memory queue, we use a momentum updated model as in (He et al. 2020). The training is supervised by two losses, i.e., the decoupled supervised contrastive loss and the patch-based self distillation loss. The overall training loss is denoted as, Loverall = Ldscl + λLpbsd, (15) where λ is the loss weight. Our method focuses on the representation learning, and can be applied in different tasks by concatenating their losses. Following (Li et al. 2021; Kang et al. 2020), after the training of the backbone, we discard the learned projection head gγ(·) and train a linear classifier on top of the learned backbone using the standard cross-entropy loss with the class-balanced sampling strategy (Kang et al. 2019). The following section proceeds to present our evaluation to the proposed methods. Experiments Experimental Setup Datasets. We use three popular datasets to evaluate the longtailed recognition performance. • ImageNet-LT (Liu et al. 2019) contains 115,846 training images of 1,000 classes sampled from the ImageNet1K (Russakovsky et al. 2015), with class cardinality ranging from 5 to 1,280. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6400 Settings Many Medium Few Overall Baseline 61.6 48.6 30.3 51.2 DSCL 63.4 50.0 31.4 52.6 + PBSD 67.2 53.7 33.7 56.3 DSCL + PBSD∗ 67.2 53.9 33.7 56.2 DSCL + PBSD† 68.0 53.3 32.3 56.1 DSCL + PBSD 68.5 55.2 35.4 57.7 Table 1: Effectiveness of each component in our method on ImageNet-LT. SCL is used as baseline. * denotes using features of the global view instead of patch-based features to calculate Eq. (14). † denotes using the multi-crop trick (Caron et al. 2020) instead of PBSD. • iNaturaList 2018 (Van Horn et al. 2018) is a real-world long-tailed dataset with 437,513 training images of 8,142 classes, with class cardinality ranging from 2 to 1,000. • Places-LT (Liu et al. 2019) contains 62,500 training images of 365 classes sampled from the Places (Zhou et al. 2017), with class cardinality ranging from 5 to 4,980. Evaluation Metrics. We follow the standard evaluation metrics that evaluate our models on the testing set and report the overall top-1 accuracy across all classes. 
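As a bridge between the objectives above and the experiments, Eqs. (8)-(9) and the overall loss of Eq. (15) can be rendered as a short sketch. This is an illustrative re-implementation under assumed variable names, not the authors' code; `pbsd_loss` stands in for Eqs. (11)-(14) and is assumed to be implemented separately.

```python
import torch

def dscl_loss(z_i, z_aug, queue, queue_labels, y_i, alpha=0.1, tau=0.07):
    """Sketch of the decoupled supervised contrastive loss, Eqs. (8)-(9)."""
    logits = torch.cat([(z_aug * z_i).sum().view(1), queue @ z_i]) / tau
    lse = torch.logsumexp(logits, dim=0)                # shared denominator of Eq. (8)
    pos_mask = (queue_labels == y_i)
    n_pos = pos_mask.sum().clamp(min=1).float()         # |P_i| (guarded against an empty P_i)

    w_aug = alpha * (n_pos + 1)                         # weight on z_i^+, Eq. (9)
    w_cls = (1 - alpha) * (n_pos + 1) / n_pos           # weight on z_t in P_i, Eq. (9)

    term_aug = w_aug * logits[0] - lse                  # log of the reweighted softmax term
    term_cls = (w_cls * logits[1:] - lse)[pos_mask].sum()
    return -(term_aug + term_cls) / (n_pos + 1)

# Eq. (15): the overall objective adds the patch-based self distillation term,
# weighted by lambda (1.5 in the paper's setting):
#   loss = dscl_loss(...) + 1.5 * pbsd_loss(...)
```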
To give a detailed analysis, we follow (Liu et al. 2019) that groups the classes into splits according to their number of images: Many (> 100 ), Medium (20 - 100), and Few (< 20). Implementation Details. For a fair comparison, we follow the implementations of TSC (Li et al. 2021) and KCL (Kang et al. 2020) that train the backbone at the first stage and train the linear classifier with the freezed learned backbone at the second stage. We adopt ResNet-50 (He et al. 2016) as backbone for all experiments except that using ResNet-152 pre-trained on ImageNet1K for Places-LT. The α in Eq. (9) is set as 0.1 and the loss weight λ in Eq. (15) is 1.5. At the first stage, the basic framework is the same as MoCoV2 (Chen et al. 2020), the momentum value for the updating of EMA model is 0.999, the temperature τ is 0.07, the size of memory queue M is 65536, and the output dimension of projection head is 128. The data augmentation is the same as MoCoV2 (Chen et al. 2020). Locations to get the patchbased features are sampled randomly from the global view with the scale of (0.05, 0.6). Image patches cropped from the global view are resized to 64. The number of patch-based feature L per anchor image is 5. SGD optimizer is used with a learning rate decays by cosine scheduler from 0.1 to 0 with batch size 256 on 2 Nvidia RTX 3090 in 200 epochs. For Places-LT, we only fine-tune the last block of the backbone for 30 epochs (Kang et al. 2019). At the second stage, the parameters are the same as (Li et al. 2021). The linear classifier is trained for 40 epochs with CE loss and class-balanced sampling (Kang et al. 2019) with batch size 2048 using SGD optimizer. The learning rate is initialized as 10, 30, 2.5 for ImageNet-LT, iNaturaList 2018, and Places-LT, respectively, and multiplied by 0.1 at epoch 20 and 30. Ablation Study Components analysis. We analyze the effectiveness of each proposed component on ImageNet-LT in Table 1. Settings ResNet50 ResNeXt50 Baseline 51.2 51.8 DSCL 52.6 53.2 + PBSD 56.3 57.7 DSCL + PBSD 57.7 58.7 Table 2: Ablation study of each component in our method on different backbones. 56 57 58 0 0.05 0.1 0.15 0.2 Acc(%) 52 54 56 58 0 1 3 5 7 α L 52 54 56 58 0 0.5 1 1.5 2 λ (a) (b) (c) Figure 5: Evaluation of α in Eq. (9), the number of patchbased features L per anchor image, and the loss weight λ on ImageNet-LT in (a), (b), and (c), respectively. Green dotted line in (a) denotes the baseline SCL. SCL (Khosla et al. 2020) is used as baseline. Compared with the SCL baseline, DSCL improves the top-1 accuracy by 1.4%. This result is already better than the recent contrastive learning based method TSC (Li et al. 2021). Many methods for long-tailed classification could improve the performance of tail classes but sacrifice the head classes performance. Different from those works, PBSD improves the performance of both head and tail classes. The Table 1 clearly indicates that, the combination of DSCL and PBSD achieves the best performance. The introduction of patch-based features are important to PBSD. We conduct experiment by using features of global view to calculate Eq. (14). It decreases the overall accuracy by about 1.5%. In addition, our method is also more effective than the multi-crop trick, i.e., it improves the overall accuracy by 1.6% over the multi-crop trick. In summarize, each component in our method is effective in boosting the performance. Components analysis on different backbones. 
To validate that our method generalizes well on different backbones, we further conduct experiments using the ResNeXt50 (Xie et al. 2017) as backbone on ImageNet-LT. Results are summarized in Table 2, where our proposed components are also effective on ResNext50. Both DSCL and PBSD can bring performance improvement over the baseline. The combination of them achieves the best performance. The impact of α in Eq. (9) is investigated in Fig. 5 (a). α determines the weight of pulling the anchor with its data augmented one. α = 0 means only pulling the anchor with other images from the same class. This setting decreases the accuracy from 57.7% to 56.8%, showing the importance of involving two kinds of positives. In addition, this setting still outperforms the SCL baseline, i.e., denoted by the green dotted line in the figure. It indicates that preventing the biased features is important. α = 1 degenerates the loss into the self-supervised loss. The accuracy is only 39.8% because of the lack of label information. We set α as 0.1, which gets the best performance. Setting α = 0.1 also gets competiThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6401 Methods Reference ImageNet-LT iNaturaList 2018 Places-LT Many Medium Few Overall Many Medium Few Overall Overall CE 64.0 33.8 5.8 41.6 72.2 63.0 57.2 61.7 30.2 Balanced NeurIPS20 61.1 47.5 27.6 50.1 38.7 cRT ICLR20 58.8 44.4 26.1 47.3 69.0 66.0 63.2 65.2 36.7 DisAlign CVPR21 59.9 49.9 31.8 51.3 67.8 39.3 BatchFormer CVPR22 61.4 47.8 33.6 51.1 38.2 KCL ICLR20 61.8 49.4 30.9 51.5 68.6 PaCo‡ ICCV21 59.7 53.2 38.1 53.6 66.3 70.8 70.6 70.2 41.2 TSC CVPR22 63.5 49.7 30.4 52.4 72.6 70.6 67.8 69.7 BCL CVPR22 56.0 71.8 Our* This paper 67.2 54.8 38.7 57.4 Our This paper 68.5 55.2 35.4 57.7 74.2 72.9 70.3 72.0 42.4 RIDE ICLR21 55.4 70.9 72.4 73.1 72.6 40.3 NCL† CVPR22 59.5 72.7 75.6 74.5 74.9 SADE NeurIPS22 72.9 40.9 Our + RIDE This paper 70.1 57.5 37.7 59.7 76.2 75.7 73.6 74.9 Table 3: Comparison with recent methods on ImageNet-LT, iNaturaList2018, and Places-LT. CE denotes training the model with the cross entropy loss. ∗denotes the learning rate at the second stage of our method is initialized as 2.5. ‡ denotes the model is trained without RandAug (Cubuk et al. 2020) and with 200 epochs for fair comparison. † denotes the model is trained with RandAug and 400 epochs, which is a more expensive training setup than ours. tive performance on different datasets as shown in following experiments. The impact of the number of patch-based features per anchor image is shown in Fig. 5 (b). The model benefits from involving more patch-based features into training. The top-1 accuracy improves from 55.0% to 57.7% when increasing L from 1 to 5. We set L as 5 for a reasonable trade-off between training cost and accuracy. The impact of the loss weight λ is shown in Fig. 5 (c). Because λ weights the influence of PBSD, the figure shows that PBSD is important. Setting λ from 1 to 2 gets similar performance. We set it as 1.5 for different datasets. Comparison with Recent Works We compare our method with recent works on ImageNet-LT, iNaturaList 2018, and Places-LT. The compared methods include re-balancing methods (Ren et al. 2020), decoupling methods (Kang et al. 2019; Zhang et al. 2021), transfer learning based methods (Hou, Yu, and Tao 2022), methods that extend SCL (Kang et al. 2020; Li et al. 2021; Cui et al. 2021; Zhu et al. 2022), and ensemble-based methods (Li et al. 2022; Zhang et al. 2022; Wang et al. 2020). 
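For completeness, the second-stage linear-classifier training described above (frozen backbone, cross-entropy loss, class-balanced sampling) can be sketched as follows. `WeightedRandomSampler` is one common way to realize class-balanced sampling; the names, and the omitted learning-rate schedule, are our assumptions rather than the released code.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, WeightedRandomSampler

def build_balanced_loader(dataset, labels, batch_size=2048):
    # Class-balanced sampling: draw each sample with probability inversely
    # proportional to its class size, so every class is visited equally often.
    labels = torch.as_tensor(labels)
    counts = torch.bincount(labels).float()
    weights = 1.0 / counts[labels]
    sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)

def train_linear_probe(backbone, classifier, loader, epochs=40, lr=10.0):
    backbone.eval()                                     # stage-1 weights stay frozen
    opt = torch.optim.SGD(classifier.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                feat = backbone(x)                      # frozen representation
            loss = ce(classifier(feat), y)
            opt.zero_grad(); loss.backward(); opt.step()
```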
Experimental results are summarized in Table 3. As shown in Table 3, directly using cross entropy loss leads to a poor performance on tail classes. Most long-tailed recognition methods improve the overall performance, but sacrifice the accuracy on ‘Many’ split. Compared with the rebalancing methods, decoupling methods adjust the classifier after the training, and achieve a better performance, showing the effectiveness of the two-stage training strategy. Compared with above works, transfer learning based methods get better performance on head classes. For instance, BatchFormer gets a higher accuracy on ‘Many’ split than DisAlign which has the same overall accuracy with it. Our method achieves the best overall accuracy of 57.7% on ImageNet-LT. It also outperforms PaCo (Cui et al. 2021) that uses stronger data augmentation and twice training epochs. To make a fair comparison, we train PaCo with the same data augmentation and training epochs as our method, which decreases it accuracy from 57.0% to 53.6%. We also found that the learning rate of the second stage linear classifier training can change the accuracy distribution on ‘Many’, ‘Medium’ and ‘Few’ splits, while maintaining the same overall accuracy. For instance, with a learning rate of 2.5 at the second training stage, the accuracy on ‘Few’ split increases from 35.4% to 38.7%, while the overall accuracy only decreases by about 0.3%. We hence note that, the overall accuracy could be more meaningful than individual accuracy on each split, which can be adjusted by hyperparameters. Our method can also be combined with ensemble-based method to further boost its performance. Combined with RIDE, our method achieves 59.7% overall accuracy on ImageNet-LT, outperforming all those compared ensemblebased methods. Our method also achieves superior performance on iNaturaList 2018, where it gets comparable performance with NCL that is trained with stronger data augmentation and twice training epochs. With only a single model, our method achieves the best performance on Places-LT. Conclusion To tackle the challenge of long-tailed recognition, this paper analyzed two issues in SCL and addressed them with DSCL and PBSD. The DSCL decouples two types of positives in SCL, and optimizes their relations toward different objectives to alleviate the influence of the imbalanced dataset. The PBSD leverages head classes to facilitate the representation learning in tail classes by exploring patch-level similarity relationship. Experiments on different benchmarks demonstrated the promising performance of our method, where it outperforms recent works using more expensive setups. Extending our method to long-tailed detection is considered as the future work. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6402 Acknowledgements This work is supported in part by Natural Science Foundation of China under Grant No. U20B2052, 61936011, in part by the Okawa Foundation Research Award. References Byrd, J.; and Lipton, Z. 2019. What is the effect of importance weighting in deep learning? In ICML, 872–881. PMLR. Caron, M.; Misra, I.; Mairal, J.; Goyal, P.; Bojanowski, P.; and Joulin, A. 2020. Unsupervised learning of visual features by contrasting cluster assignments. NeurIPS, 33: 9912–9924. Chen, X.; Fan, H.; Girshick, R.; and He, K. 2020. Improved baselines with momentum contrastive learning. arXiv:2003.04297. Cubuk, E. D.; Zoph, B.; Shlens, J.; and Le, Q. V. 2020. Randaugment: Practical automated data augmentation with a reduced search space. 
In CVPRW, 702–703. Cui, J.; Zhong, Z.; Liu, S.; Yu, B.; and Jia, J. 2021. Parametric contrastive learning. In ICCV, 715–724. He, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. 2020. Momentum contrast for unsupervised visual representation learning. In CVPR, 9729–9738. He, K.; Gkioxari, G.; Doll´ar, P.; and Girshick, R. 2017. Mask r-cnn. In ICCV, 2961–2969. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In CVPR, 770–778. Hou, Z.; Yu, B.; and Tao, D. 2022. BatchFormer: Learning to Explore Sample Relationships for Robust Representation Learning. arXiv:2203.01522. Japkowicz, N.; and Stephen, S. 2002. The class imbalance problem: A systematic study. Intelligent data analysis, 6(5): 429–449. Kang, B.; Li, Y.; Xie, S.; Yuan, Z.; and Feng, J. 2020. Exploring balanced feature spaces for representation learning. In ICLR. Kang, B.; Xie, S.; Rohrbach, M.; Yan, Z.; Gordo, A.; Feng, J.; and Kalantidis, Y. 2019. Decoupling representation and classifier for long-tailed recognition. arXiv:1910.09217. Khosla, P.; Teterwak, P.; Wang, C.; Sarna, A.; Tian, Y.; Isola, P.; Maschinot, A.; Liu, C.; and Krishnan, D. 2020. Supervised contrastive learning. arXiv:2004.11362. Li, J.; Tan, Z.; Wan, J.; Lei, Z.; and Guo, G. 2022. Nested Collaborative Learning for Long-Tailed Visual Recognition. In CVPR, 6949–6958. Li, T.; Cao, P.; Yuan, Y.; Fan, L.; Yang, Y.; Feris, R.; Indyk, P.; and Katabi, D. 2021. Targeted Supervised Contrastive Learning for Long-Tailed Recognition. arXiv:2111.13998. Liu, Z.; Miao, Z.; Zhan, X.; Wang, J.; Gong, B.; and Yu, S. X. 2019. Large-scale long-tailed recognition in an open world. In CVPR, 2537–2546. Long, J.; Shelhamer, E.; and Darrell, T. 2015. Fully convolutional networks for semantic segmentation. In CVPR, 3431–3440. Ma, Y.; Jiao, L.; Liu, F.; Yang, S.; Liu, X.; and Li, L. 2023. Curvature-Balanced Feature Manifold Learning for LongTailed Classification. In CVPR. Quan, R.; Dong, X.; Wu, Y.; Zhu, L.; and Yang, Y. 2019. Auto-reid: Searching for a part-aware convnet for person re-identification. In ICCV, 3750–3759. Ren, J.; Yu, C.; Ma, X.; Zhao, H.; Yi, S.; et al. 2020. Balanced meta-softmax for long-tailed visual recognition. NeurIPS, 33: 4175–4186. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. 2015. Imagenet large scale visual recognition challenge. IJCV, 115(3): 211–252. Sun, Y.; Zheng, L.; Yang, Y.; Tian, Q.; and Wang, S. 2018. Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline). In ECCV, 480–496. Van Horn, G.; Mac Aodha, O.; Song, Y.; Cui, Y.; Sun, C.; Shepard, A.; Adam, H.; Perona, P.; and Belongie, S. 2018. The inaturalist species classification and detection dataset. In CVPR, 8769–8778. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. NeurIPS, 30. Wang, X.; Lian, L.; Miao, Z.; Liu, Z.; and Yu, S. X. 2020. Long-tailed recognition by routing diverse distribution-aware experts. arXiv:2010.01809. Wei, L.; Xie, L.; He, J.; Chang, J.; Zhang, X.; Zhou, W.; Li, H.; and Tian, Q. 2020. Can Semantic Labels Assist Self-Supervised Visual Representation Learning? arXiv:2011.08621. Xie, S.; Girshick, R.; Doll´ar, P.; Tu, Z.; and He, K. 2017. Aggregated residual transformations for deep neural networks. In CVPR, 1492–1500. Yun, S.; Lee, H.; Kim, J.; and Shin, J. 2022. 
Patch-level representation learning for self-supervised vision transformers. In CVPR, 8354–8363. Zhang, N.; Donahue, J.; Girshick, R.; and Darrell, T. 2014. Part-based R-CNNs for fine-grained category detection. In ECCV, 834–849. Springer. Zhang, S.; Li, Z.; Yan, S.; He, X.; and Sun, J. 2021. Distribution alignment: A unified framework for long-tail visual recognition. In CVPR, 2361–2370. Zhang, S.; Zhou, Q.; Wang, Z.; Wang, F.; and Yan, J. 2023. Patch-level Contrastive Learning via Positional Query for Visual Pre-training. In ICML, 41990–41999. PMLR. Zhang, Y.; Hooi, B.; Hong, L.; and Feng, J. 2022. Selfsupervised aggregation of diverse experts for test-agnostic long-tailed recognition. NeurIPS, 35: 34077–34090. Zhou, B.; Lapedriza, A.; Khosla, A.; Oliva, A.; and Torralba, A. 2017. Places: A 10 million image database for scene recognition. TPAMI, 40(6): 1452–1464. Zhu, J.; Wang, Z.; Chen, J.; Chen, Y.-P. P.; and Jiang, Y.-G. 2022. Balanced contrastive learning for long-tailed visual recognition. In CVPR, 6908–6917. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6403 | 2024 | 711 |
18,531 | Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks Lulu Xue1, Shengshan Hu1*, Ruizhi Zhao1, Leo Yu Zhang2, Shengqing Hu3, Lichao Sun4, Dezhong Yao5 1 School of Cyber Science and Engineering, Huazhong University of Science and Technology 2 School of Information and Communication Technology, Griffith University 3 Department of Nuclear Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology 4 Department of Computer Science and Engineering, Lehigh University 5 School of Computer Science and Technology, Huazhong University of Science and Technology {lluxue,hushengshan,zhaoruizhi,dyao}@hust.edu.cn, [email protected], [email protected], [email protected] Abstract Collaborative learning (CL) is a distributed learning framework that aims to protect user privacy by allowing users to jointly train a model by sharing their gradient updates only. However, gradient inversion attacks (GIAs), which recover users’ training data from shared gradients, impose severe privacy threats to CL. Existing defense methods adopt different techniques, e.g., differential privacy, cryptography, and perturbation defenses, to defend against the GIAs. Nevertheless, all current defense methods suffer from a poor tradeoff between privacy, utility, and efficiency. To mitigate the weaknesses of existing solutions, we propose a novel defense method, Dual Gradient Pruning (DGP), based on gradient pruning, which can improve communication efficiency while preserving the utility and privacy of CL. Specifically, DGP slightly changes gradient pruning with a stronger privacy guarantee. And DGP can also significantly improve communication efficiency with a theoretical analysis of its convergence and generalization. Our extensive experiments show that DGP can effectively defend against the most powerful GIAs and reduce the communication cost without sacrificing the model’s utility. 1 Introduction Collaborative learning (CL) (Shokri and Shmatikov 2015) is a distributed learning framework, where multiple users train a model locally and share their gradients among the peers or to a centralized server. CL claims to protect user privacy since users do not need to share their local (private) data directly. However, recent studies reveal that gradients can be used to recover the original training data information via gradient inversion attacks (GIAs) (Zhu, Liu, and Han 2019; Geiping et al. 2020). To against GIAs, a large number of studies have been proposed, where they leverage the advanced privacy protection techniques, such as differential privacy (DP) (Dwork, Roth et al. 2014), cryptography (Bonawitz et al. 2017; Hardy et al. 2017; GiladBachrach et al. 2019) and perturbation defense (Gao et al. 2021; Sun et al. 2021; Scheliga, M¨ader, and Seeland 2022). *Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. However, none of the existing defense methods could take care of all privacy, utility, and efficiency difficulties in the CL framework. For example, traditional defenses such as DP and cryptography-based methods strike a balance among privacy protection, model performance, and efficiency simultaneously (Dwork, Roth et al. 2014; Bonawitz et al. 2017; Hardy et al. 2017; Gilad-Bachrach et al. 2019). To address this challenge, various perturbations-based methods have been proposed (Gao et al. 2021; Sun et al. 2021; Scheliga, M¨ader, and Seeland 2022). 
But they all rely on auxiliary optimization modules to reduce certain privacy leakage and cannot defend against all GIAs in practice (see Sec. 6.2 for details). For instance, perturbation-based defense methods (i.e., Precode (Scheliga, M¨ader, and Seeland 2022), Soteria (Sun et al. 2021)) can effectively defend against passive GIAs (Geiping et al. 2020; Wang et al. 2020; Wei et al. 2020b), but fail to work against the active GIAs (Boenisch et al. 2021; Pan et al. 2022), which is considered as the stateof-the-art attack method. On the contrary, the classic Topk based gradient pruning method (Lin et al. 2017; Alistarh et al. 2018) is generally ineffective for enhancing privacy against passive GIAs, and corresponding defenses (e.g., Outpost (Wang, Hugh, and Li 2023)) offer limited protection. But we find that they significantly outperform recent defense methods under the active attack. Tab. 1 gives a detailed experimental result for this observation. The new findings inspire us to seek a more practical and effective defense against both passive and active GIAs. In this paper, we propose a new gradient pruning-based method, Dual Gradient Pruning (DGP). Dual gradient pruning is a novel gradient pruning technique, which removes top-k1 largest gradient parameters and the bottom-k2 smallest gradient parameters from the local model. DGP leads to a strong privacy protection against both passive GIAs and active GIAs. To measure the level of protection, we present the theoretical analysis of reconstruction error from pruned gradients, showing that the error is proportional to gradient distance. So removing larger gradient parameters can rapidly enlarge the gradient distance, resulting in a significant reconstruction error. However, removing many larger parameters will The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6404 significantly impact the model’s utility. Thus, to improve the pruning ratio, which is essential to robustness against active attack (Boenisch et al. 2021; Fowl et al. 2021), we also remove smaller gradient parameters. In this way, our method could significantly mitigate GIAs without affecting the model’s utility. We conduct extensive experiments to evaluate our method. The quantitative and visualized results show that our design can effectively make recovered images unrecognizable under different attacks, and reduce the communication cost. Our contributions are as follows: 1) We revisit gradient pruning to show its potential for mitigating GIAs; 2) We propose an improved gradient pruning strategy to provide sufficient privacy guarantee while balancing the model accuracy and the system efficiency; 3) We conduct extensive experiments to show that our design outperforms existing defense methods w.r.t. privacy protection, model accuracy, and system efficiency. 2 Related Work Collaborative learning (Shokri and Shmatikov 2015) is considered to be a privacy-preserving framework for distributed machine learning as the training data is not directly outsourced. However, the emerging of GIAs (Zhu, Liu, and Han 2019; Fan et al. 2020; Zhao, Mopuri, and Bilen 2020; Geiping et al. 2020; Qian and Hansen 2020; Boenisch et al. 2021; Yin et al. 2021; Zhu and Blaschko 2020; Fowl et al. 2021) shatters this conception. It has been proven that the attacker (e.g., a curious server) can easily recover the private data from gradient to a great extent. The privacy guarantee of collaborative learning urgently needs to be strengthened. Traditional Defense. 
Traditionally, there are two approaches to construct privacy-preserving collaborative learning: using DP to disturb gradients (Dwork, Roth et al. 2014; Abadi et al. 2016; Geyer, Klein, and Nabi 2017; Yu et al. 2019; Chen, Wu, and Hong 2020) or using cryptographic tools to perform secure aggregation (Danner and Jelasity 2015; Bonawitz et al. 2017; Hardy et al. 2017; Mohassel and Zhang 2017; Sun, Qian, and Chen 2021; GiladBachrach et al. 2019). DP (Dwork, Roth et al. 2014) is a popular and effective privacy protection mechanism by adding random noise to the raw data, but it is well known that the noise introduced by DP can greatly degrade the model accuracy when meaningful privacy is enforced (Wei et al. 2020a). Cryptographic-based secure aggregation can guarantee both privacy and accuracy simultaneously, but it incurs expensive computation and communication costs (Kairouz et al. 2021). Using the shuffle model (Liu et al. 2020; Sun, Qian, and Chen 2021) can only provide anonymity. Moreover, it totally changes the system model of collaborative learning since an additional semi-trusted third party is introduced to work cooperatively with the server. Perturbation Defense. Recently, researchers have begun to explore the possibility of constructing new gradient perturbation mechanisms to better balance privacy and accuracy. (Sun et al. 2021) proposed Soteria, a scheme that perturbs the representation of inputs by pruning the gradients of a single layer. (Gao et al. 2021) proposed ATS, an optimized training data augmentation policy by transforming original sensitive images into alternative inputs, to reduce the visibility of reconstructed images. (Scheliga, M¨ader, and Seeland 2022) presented Precode to extend the model architecture by using variational bottleneck (VB) (Alemi et al. 2016) to prevent attackers from obtaining optimal solutions to reconstructed data. These works focus on the semi-honest setting (Zhu, Liu, and Han 2019; Wang et al. 2019; Wei et al. 2020b) but fail to protect privacy when an active server modifies the model to launch GIAs (Fowl et al. 2021). Moreover, these works suffer from high computation costs or a huge communication burden. Gradient Pruning Defense. From an independent research domain, gradient pruning has been commonly used for saving communication bandwidth. The most common pruning strategy is Top-k selection, which retains top k gradient parameters with the largest absolute values (Lin et al. 2017; Alistarh et al. 2018). It has been widely proved that gradient pruning provides very limited privacy protection ability (Zhu, Liu, and Han 2019; Gao et al. 2021; Huang et al. 2021; Sun et al. 2021; Scheliga, M¨ader, and Seeland 2022) unless a high pruning ratio (e.g., removing 99% of the gradients) is used at the cost of 10% accuracy drop (Huang et al. 2021). However, we emphasize that this is misunderstood as they only consider the Top-k selection strategy and it has never received an in-depth investigation in the field of security. It is originally designed for improving system efficiency, thus a direct application inherently suffers from many weaknesses. Recently, (Wang, Hugh, and Li 2023) proposed Outpost, a privacy-preserving method that combines Top-k gradient pruning with adaptive noise addition. However, our experiments indicate that Outpost cannot effectively defend against passive GIAs. In contrast, our work shows that a slight modification can unleash the potential of gradient pruning to provide a strong privacy guarantee, as shown in Sec. 4. 
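For reference, the Top-k selection just described can be sketched in a few lines; this is an illustrative rendering of the generic strategy (applied layer by layer in practice), not code from any of the cited works.

```python
import torch

def topk_prune(grad, k=0.20):
    """Keep only the fraction k of entries with the largest magnitude; zero the rest."""
    flat = grad.abs().flatten()
    num_keep = max(1, int(k * flat.numel()))
    threshold = torch.topk(flat, num_keep).values.min()
    return grad * (grad.abs() >= threshold)
```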
3 Threat Model and Gradient Attacks In this work, we consider a strong threat scenario, where an active server, after receiving gradients from users, tries to reconstruct the local training data and is motivated to modify model parameters in each iteration to strengthen the attack effect. As will be shown in Sec. 5 and Sec. 6, our method provides a theoretical guarantee against passive attacks and empirical protection against active attacks. So we briefly discuss both kinds of attacks below. Analytical Attack (Passive). Analytical attack exploits the structure of the gradients to recover the inputs, such as using gradient bias terms (Phong et al. 2017). Recently proposed R-gap attack (Zhu and Blaschko 2020) exploits the recursive relationship between gradient layers to solve the input. An effective analytical attack depends on the specific structure and parameters of gradients. Optimization Attack (Passive). Optimization attack is firstly proposed in (Zhu, Liu, and Han 2019), which approximates the desired data (x, y) with dummy data (x∗, y∗) by optimizing the euclidean distance between the gradients g∗ (generated by dummy data (x∗, y∗)) and the original gradients ∇W (produced by real private data (x, y)) with LBFGS optimizer. (Geiping et al. 2020) proposed IG, optiThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6405 mizing the cosine distance with Adam optimizer, and (Yin et al. 2021) proposed GI, optimizing the Euclidean distance with Adam optimizer. These methods are state-of-the-art optimization attacks. Furthermore, recent works (Yin et al. 2020; Li et al. 2022) utilize GANs to generate data approximating the input. However, these attacks are impractical as they necessitate training GANs with vast amounts of data that closely resemble private data. Despite different optimizers can be used to achieve better attack quality (Geiping et al. 2020; Wang et al. 2020; Wei et al. 2020b), the existing attacks are all measured by the distance between the virtual gradients g∗and the original gradients ∇W. We therefore propose a general definition for passive attacks to better evaluate their performance. From Definition 1, for a given success probability (1 −δ), a smaller ε indicates a better attack strategy A. Definition 1 A passive attack A is a (ε, δ)-passive attack, if it satisfies: P(E(DA(∇W, g∗)) ≤ε) ≥1 −δ. (1) where P represents the probability, E represents the expectation, DA is the distance (commonly instantiated with Euclidean or cosine distance) estimated under A. Active Server Attack. In this kind of attack, the server can actively modify the global model to realize a better attack result rather than honestly executing the protocols (Boenisch et al. 2021; Pan et al. 2022; Wen et al. 2022). Recently proposed Rob attack (Fowl et al. 2021) adds imprint modules to the model and uses the difference between the gradient parameters in adjacent rows of the imprint module to recover the data, achieving the best attack effect in the literature. 4 Dual Gradient Pruning 4.1 Analysis of Gradient Pruning We owe the failure of common Top-k gradient selection methods to two reasons: 1) the distance between the Topk pruned gradient g and the real gradient ∇W is small; and 2) large gradient parameters in ∇W also reveal label information about user data. The first reason stems from the intuitive observation that when the perturbed gradient is close to the true gradient, it becomes easier for the attacker to infer sensitive information about the true gradient. 
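To ground Definition 1, a minimal sketch of an IG-style optimization attack is given below: dummy data are optimized so that their gradients match the shared ones under the cosine distance DA. The optimizer choice, step count, and the assumption that the label is already known (or inferred beforehand) are illustrative, not the attack authors' exact settings.

```python
import torch

def cosine_gradient_distance(dummy_grads, target_grads):
    # 1 - cos(g*, grad W), summed over all parameter tensors.
    num = sum((dg * tg).sum() for dg, tg in zip(dummy_grads, target_grads))
    d1 = torch.sqrt(sum(dg.pow(2).sum() for dg in dummy_grads))
    d2 = torch.sqrt(sum(tg.pow(2).sum() for tg in target_grads))
    return 1.0 - num / (d1 * d2 + 1e-12)

def invert_gradients(model, loss_fn, target_grads, x_shape, y, steps=2000, lr=0.1):
    """Optimize a dummy input x* so that its gradient matches the shared gradient."""
    x_dummy = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([x_dummy], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        dummy_grads = torch.autograd.grad(loss_fn(model(x_dummy), y),
                                          model.parameters(), create_graph=True)
        dist = cosine_gradient_distance(dummy_grads, target_grads)
        dist.backward()
        opt.step()
    return x_dummy.detach()
```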
And we give a specific example to illustrate this point. In particular, Fig. 1 plots the recovery results of IG attack (in terms of PSNR (↓), MSE (↑), LPIPS (Zhang et al. 2018) (↑), SSIM (Wang et al. 2004) (↓) metrics) under various relative gradient distance ||∇W −g||2/||∇W||2 (measured in ratio). It is clear from the figure that greater distance leads to worse reconstruction for all metrics. To better support this observation, we propose the following non-rigorous proposition. Proposition 1 For any given input x and shared model W, the distance between the recovered data x′ and the real data x is bounded by: ||x −x′||2 ≥||φ(x, W) −φ(x′, W)||2 ||∂φ(x, W)/∂x||2 , (2) where φ is the mapping from input to the gradient, i.e., the reconstruction quality is limited by ||φ(x, W) − φ(x′, W)||2 = ||∇W −g||2. (a) PSNR (↓) (b) MSE (↑) (c) SSIM (↓) (d) LPIPS (↑) original 0% 17.77% 34.25% 53.67% 72.42% (e) Visualization of original and reconstructed data at various ratios Figure 1: Relationship between relative gradient distance and reconstruction quality under IG (CIFAR10(Krizhevsky, Hinton et al. 2009) with ResNet18 (He et al. 2016)). Referring to the proof technique of Lemma 1 in (Sun et al. 2021), we employ the first-order Taylor expansion in our proof. The specific proof of the above proposition is moved to the appendix due to space limit (the same hereinafter). And we will present a more rigorous analysis in our follow-up study. According to the above example and this proposition, it is clear that the reconstruction error is proportional to the gradient distance ||∇W −g||2, i.e., effective defense methods should enlarge the gradient distance as much as possible. However, for the Top-k gradient selection (Lin et al. 2017; Alistarh et al. 2018), the k largest parameters are retained, making the gradient distance small by nature. To explain the second reason, we consider a Llayer perceptron model trained with cross-entropy loss for classification. Let a column vector r = [r1, r2, .. . , rn] be the logits (the output of the L-th linear layer) that input to the softmax layer, the confidence score probability vector is thus h er1 P j erj , er2 P j erj , · · · , ern P j erj i and the succinct form of the cross-entropy loss becomes ℓ(x, y) = −log( ery P j erj ). Focus on the L-th layer WLx + bL = r, it is easy to find ∂ℓ(x, y) ∂bi = ∂ℓ(x, y) ∂ri · ∂ri ∂bi = ∂ℓ(x, y) ∂ri = eri P j erj −Ii=y, and ∇WL = ∂ℓ(x, y) ∂r · xT = [∂ℓ(x, y) ∂r1 , · · · , ∂ℓ(x, y) ∂rn ] · xT . For a given x (and so xt is fixed), the magnitude of certain elements of the gradient matrix ∇WL (i.e., the i-th row) is particularly large if i is the true label of the training data x due to reason that | ∂ℓ(x,y) ∂ri | = P j̸=i | ∂ℓ(x,y) ∂rj |. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6406 DGP Original Top-k (a) Recovered data under IG (b) ResNet18 on CIFAR dataset Figure 2: Comparison between Top-k and DGP on privacy and accuracy (20% of parameters are selected in Top-k). To summarize, due to the above two reasons, we conclude that common Top-k gradient selection cannot provide sufficient protection for user data against passive optimization attacks. From another point of view, a sufficient gradient pruning ratio also plays an important role in defending against active server attacks. As mentioned in Sec. 3, active attackers can exploit the correspondence of partial gradient parameters to recover the real data. 
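The observation about the L-th layer can be turned into a one-line label-inference check: with cross-entropy, the bias gradient of class i equals the softmax probability minus the indicator of the true class, so for a single sample only the true class has a negative bias gradient, and the corresponding row of the weight gradient dominates in magnitude. The helper below is our own illustrative sketch of this well-known observation.

```python
import torch

def infer_label_from_last_layer(weight_grad, bias_grad=None):
    """Single-sample heuristic: the true-class row of the last-layer gradient stands out."""
    if bias_grad is not None:
        # d loss / d b_i = softmax_i - 1{i = y}: only the true class is negative.
        return torch.argmin(bias_grad).item()
    # Fallback: grad W_L = (d loss / d r) x^T, so the true-class row has the largest L2 norm.
    return torch.argmax(weight_grad.norm(dim=1)).item()
```

This is also why plain Top-k selection, which retains the largest-magnitude entries, keeps exactly the most label-revealing part of the gradient.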
So, the gradient pruning will directly destroy the relationship among gradient parameters constructed by the active attacker. Intuitively, the higher the pruning rate, the stronger the impact. As will be validated in Sec.6, a high pruning rate can prevent the attacker from obtaining useful gradient information. 4.2 Dual Gradient Pruning Generally speaking, large gradient parameters of local model need to be removed to make the gradient distance larger, but the distance should also be appropriately bounded to maintain high model accuracy. Moreover, it is also necessary to delete gradient parameters to achieve a high pruning ratio, which can reduce the input information that the active server may retain on the gradient by modifying the model and improve communication efficiency. Considering the model performance, we choose to remove small gradient parameters to achieve this. With these observations, we propose dual gradients pruning (DGP), a new parameter selection strategy for gradient pruning. The users first layerwisely sort the absolute values of local gradient parameters ∇W in the descending order. Let Tk1(∇W) represent the set of top-k1 percents of elements of ∇W, Bk2(∇W) represent the set of its bottom-k2 percents. Then the users remove Tk1(∇W) and Bk2(∇W) from ∇W for gradient pruning. A detailed illustration of DGP is shown in Alg. 1. Note that we set p = k1/k2 as a hyperparameter to reguAlgorithm 1: Dual Gradient Pruning (DGP). Require: Original gradient matrix ∇W, values of k1 and k2. 1: for l ←1 to L do 2: Search sets Tk1(∇Wl) and Bk2(∇Wl). 3: Obtain gl by removing the parameters in Tk1(∇Wl) and Bk2(∇Wl) from ∇Wl. 4: end for 5: return Pruned gradient matrix g = gi L i=1. Algorithm 2: A Complete Illustration of Our Defense. Require: Initial model W0, value k1 and k2, total rounds T, total users N . 1: Set e0 = 0. 2: for t ←0 to T −1 do 3: for i ←1 to N do 4: The i-th user generates local gradient ∇Wt,i. 5: Pt,i = ∇Wt,i + et,i. 6: gt,i = DGP(k1, k2, Pt,i) 7: et+1,i = Pt,i −gt,i 8: end for 9: Sever side aggregation: 10: Wt+1 = Wt −η PN i=1 gt,i N 11: end for 12: return Shared global model WT . late the trade-off between privacy and accuracy. The authors in (Lin et al. 2017) show that large gradient parameters are more likely to have an impact on the model’s performance, hence removing these large parameters will reduce model’s accuracy. To reduce this negative impact and increase convergence speed, we introduce the error feedback mechanism (Karimireddy et al. 2019). In particular, at the iteration round t, after user i obtaining his local gradient ∇Wt,i, he will combine ∇Wt,i with an error term accumulated in the previous (t−1) rounds before performing the DGP. A complete illustration of our method is shown in Alg. 2, and the steps from et,i to et+1,i provide the implementation details of error feedback mechanism. We emphasize that although such dual gradients pruning strategy is very simple, it can significantly mitigate GIAs without affecting the model accuracy. Fig. 2(a) gives an example of ResNet18 showing the privacy guarantee when k1 = 5%, k2 = 75%. Fig. 2(b) gives a comparison of model performance. The convergence analysis of our method is shown in Sec. 5, and more experimental results can be found in Sec. 6. 5 Theoretical Analysis This section presents the security analysis with regard to passive GIAs, as well as the generalization and convergence analyses of our method. 
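Before the theoretical analysis, Alg. 1 and the client-side error feedback of Alg. 2 can be sketched directly in PyTorch. The quantile-based thresholds are our implementation choice for locating the top-k1 and bottom-k2 magnitude bands; very large layers may need a sort-based threshold instead.

```python
import torch

def dual_gradient_prune(layer_grads, k1=0.05, k2=0.75):
    """Alg. 1: per layer, drop the k1 fraction with the largest magnitudes and the
    k2 fraction with the smallest magnitudes, keeping only the middle band."""
    pruned = []
    for g in layer_grads:
        mag = g.abs().flatten()
        hi = torch.quantile(mag, 1.0 - k1)     # entries above hi form the top-k1 set
        lo = torch.quantile(mag, k2)           # entries below lo form the bottom-k2 set
        keep = (g.abs() < hi) & (g.abs() > lo)
        pruned.append(g * keep)
    return pruned

def client_step(local_grads, error, k1=0.05, k2=0.75):
    """Alg. 2, one client at round t: error feedback re-injects what pruning removed."""
    combined = [g + e for g, e in zip(local_grads, error)]        # P_{t,i}
    shared = dual_gradient_prune(combined, k1, k2)                # g_{t,i}
    new_error = [c - s for c, s in zip(combined, shared)]         # e_{t+1,i}
    return shared, new_error
```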
5.1 Assumptions

Following the literature studies in (Wilson et al. 2017; Karimireddy et al. 2019), for a given L-layer centralized model, we model the first (L − 1) layers as a robust feature extractor of any input sample. Thus, the function of this model is characterized by $f(x|W) = Wx + b$, and the optimization objective is the loss $\ell(x, y)$ (such as cross-entropy). To facilitate analyses, and following literature studies (Chen et al. 2020; Dai et al. 2019; Karimireddy et al. 2019), assumptions about the smoothness of DGP and $l$, as well as the variance of the stochastic gradient, are employed.

Assumption 1. The pruning mechanism $\mathrm{DGP}(k_1, k_2, \cdot)$ is Lipschitz, so the following condition holds:
$$\|\nabla W - \mathrm{DGP}(k_1, k_2, \nabla W)\|_2^2 = \|\mathrm{DGP}(0, 0, \nabla W) - \mathrm{DGP}(k_1, k_2, \nabla W)\|_2^2 \le \gamma_1 \|\nabla W\|_2^2,$$
where $\gamma_1$ is a constant related to $k_1$ and $k_2$ and satisfies $(1 - \sqrt{1 - k_1 k_2})^2 < \gamma_1 < 1$.

Assumption 2. The objective function $l : \mathbb{R}^d \to \mathbb{R}$ has a lower bound $l^*$ and is Lipschitz-smooth, i.e., for any $x_1, x_2$, $\|\nabla l(x_1) - \nabla l(x_2)\|_2 \le K \|x_1 - x_2\|_2$ and $l(x_1) \le l(x_2) + \langle \nabla l(x_2), x_1 - x_2 \rangle + \frac{K}{2}\|x_1 - x_2\|_2^2$.

Assumption 3. The collaborative stochastic gradient $\nabla W_{t,i}$ ($t = [0, T-1]$, $i = [1, N]$) is bounded, i.e., $\|\nabla W_{t,i}\|_2^2 \le G^2$, and the average aggregated gradient $\nabla W_t$ is the expectation of the collaborative stochastic gradient $\nabla W_{t,i}$, i.e., $\nabla W_t = \mathbb{E}(\nabla W_{t,i})$. Moreover, the variance between $\nabla W_{t,i}$ and $\nabla W_t$ is bounded: $\mathbb{E}\|\nabla W_{t,i} - \nabla W_t\|_2^2 \le \sigma^2$.

5.2 Security Analysis

When considering passive attacks, we prove that DGP achieves stronger privacy protection in the sense of Definition 1.

Theorem 1. For any $(\varepsilon, \delta)$-passive attack $\mathcal{A}$, under the presence of DGP, it will be degenerated to an $(\varepsilon + \sqrt{\gamma_1}\|\nabla W\|_2, \delta)$-passive attack if $D_{\mathcal{A}}$ is measured by Euclidean distance, and to an $(\varepsilon + (1 - \varepsilon)\sqrt{\gamma_1}, \delta)$-passive attack if $D_{\mathcal{A}}$ is measured by cosine distance.

Theorem 1 is based on Assumption 1 about DGP. It reveals that, with the same chance of success $(1 - \delta)$, DGP weakens the passive attack $\mathcal{A}$'s capability to obtain a better estimation of the true $\nabla W$. In particular, the error of $\mathcal{A}$'s estimation of $\nabla W$ is enlarged by $\sqrt{\gamma_1}\|\nabla W\|_2$ under Euclidean distance and by $(1 - \varepsilon)\sqrt{\gamma_1}$ under cosine distance.

5.3 Convergence Guarantee

We start the convergence analysis by proving the generalization of DGP. The generalization analysis aims to quantify how the trained model performs on the test data, and it is achieved by analyzing how DGP affects the properties of the optima reached (without gradient pruning) (Karimireddy et al. 2019; Wilson et al. 2017). For ease of expression, let CL-SGD represent training in CL with the SGD optimizer. Based on Assumptions 1 and 3, the following lemma can be obtained.

Lemma 1. Let $e_t = \sum_{i=1}^{N} e_{t,i}/N$ be the averaged accumulated error among all users at iteration $t$; the expectation of the norm of $e_t$ is bounded, i.e.,
$$\mathbb{E}\|e_t\|_2^2 \le \frac{3\gamma_1(2 + \gamma_1)}{2(1 - \gamma_1)^2} G^2. \qquad (3)$$

Note that the difference between the averaged pruned gradient $g_t = \sum_{i=1}^{N} g_{t,i}/N$ and the averaged collaborative SGD gradient $\nabla W_t = \sum_{i=1}^{N} \nabla W_{t,i}/N$ is simply $\|\sum_{t=0}^{T-1}(\nabla W_t - g_t)\|_2^2 = \|e_T\|_2^2$. So the lemma above indicates that the accumulated gradient difference between our algorithm and CL-SGD is bounded. That said, the optima reached by DGP and the optima reached by CL-SGD will eventually be very close if the algorithm converges. Armed with Lemma 1 and based on Assumptions 1, 2 and 3, we demonstrate the convergence of our algorithm.
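The ratio bounded by $\gamma_1$ in Assumption 1, $\|\nabla W - \mathrm{DGP}(k_1, k_2, \nabla W)\|_2^2 / \|\nabla W\|_2^2$, is just the fraction of squared gradient mass that DGP removes, so it can be measured directly for a given layer and pruning setting. A small self-contained sketch (ours, with an arbitrary layer size):

```python
# Empirical check (ours) of the pruning ratio bounded by gamma_1 in Assumption 1.
import torch

def empirical_gamma1(grad: torch.Tensor, k1: float = 0.05, k2: float = 0.75) -> float:
    """||grad - DGP(k1, k2, grad)||^2 / ||grad||^2 for one layer."""
    mags, _ = grad.abs().flatten().sort(descending=True)
    sq = mags ** 2
    n = sq.numel()
    removed = sq[: int(k1 * n)].sum() + sq[n - int(k2 * n):].sum()
    return (removed / sq.sum()).item()

print(empirical_gamma1(torch.randn(256, 512)))   # strictly below 1, as Assumption 1 requires
```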
Theorem 2. The averaged norm of the full gradient $\nabla l(W_t)$ derived from centralized training is related to our algorithm as follows:
$$\frac{\sum_{t=0}^{T-1} \mathbb{E}\|\nabla l(W_t)\|_2^2}{T} \le \frac{4(l_0 - l^*)}{\eta T} + 2K\eta(G^2 + \sigma^2) + 4\eta^2 K^2 \frac{3\gamma_1(2 + \gamma_1)}{2(1 - \gamma_1)^2} G^2, \qquad (4)$$
where $l_0$ is the initialization of $l$, and $\eta$ is the learning rate.

The implication of Theorem 2 is that, with an appropriate learning rate $\eta$, DGP converges similarly to CL-SGD (slower by a negligible term $O(\frac{1}{\sqrt{T}})$), as shown in Corollary 1.

Corollary 1. Let $\eta = \sqrt{(l_0 - l^*)/(KT(G^2 + \sigma^2))}$; we have
$$\frac{\sum_{t=0}^{T-1} \mathbb{E}\|\nabla l(W_t)\|_2^2}{T} \le 6\sqrt{\frac{K(l_0 - l^*)(\sigma^2 + G^2)}{T}} + O\!\left(\frac{1}{T}\right).$$

6 Experiments

6.1 Experimental Setup

We run the experiments with PyTorch using one RTX 2080 Ti GPU and a 2.10 GHz CPU. For a fair comparison, we follow the setting of (Gao et al. 2021), using ten users with the same data distribution. We assess model privacy against various attacks and evaluate model performance on CIFAR10 and CIFAR100, which is a common setting used in many studies (Huang et al. 2021; Gao et al. 2021). We follow (Huang et al. 2021; Jeon et al. 2021) to quantify the privacy effect of defenses, i.e., visualizing the reconstructed data and using learned perceptual image patch similarity (LPIPS) and structural similarity (SSIM) to measure the quality of the recovered data. A better defense should have a larger LPIPS (↑) and a smaller SSIM (↓).

Attack methods. We evaluate DGP against the IG, GI, R-gap, and Rob attacks, which represent state-of-the-art passive and active GIAs, as discussed in Sec. 3. We use the following default attack settings: ResNet18 for IG, GI, and Rob on CIFAR10, and R-gap with CNN6 (Zhu and Blaschko 2020) on CIFAR10, as this analytic attack is only suitable for models with simple structures. We provide additional attack details, more privacy evaluations (e.g., more models and datasets), and an efficiency evaluation (computation costs and communication costs) in the appendix.

Defense methods. We compare DGP with six state-of-the-art defenses: Soteria, ATS, Precode, Outpost, DP, and Top-k pruning. Besides, we set CL-SGD as the baseline that adopts no defense. Note that DP provides a privacy guarantee by adding noise to gradients in deep learning. We adhere to the DP settings of (Sun et al. 2021) and use Gaussian noise with standard deviation $\sigma = 10^{-2}$. When quantifying the defense performance of ATS, we not only evaluate the similarity between the raw images and the recovered data (ATS-T), but also the similarity between the disturbed training images (i.e., the real inputs) and the recovered data (ATS-R). For Top-k and DGP, we set $k = 20\%$ and $k_1 + k_2 = 80\%$ with the regulation hyperparameter $p = 1/15$. The rest of the defenses retain their original settings.

6.2 Privacy Evaluation

Tab. 1 shows the defense performance with SSIM and LPIPS under four attacks. For each metric, we bold the best result and underline the second-best result (the same hereinafter). The results show that ATS, Soteria, Precode, and DP perform poorly under the Rob attack, while Top-k and Outpost are vulnerable to the IG and GI attacks. In summary, DGP can provide excellent privacy protection under all attacks while still retaining high model accuracy. To perceptually demonstrate the defense performance, we also visualize the reconstructed images. Note that in Fig. 3, ATS-T refers to the processed raw data, while ATS-R represents the reconstructed raw data. Fig. 3(a) and Fig. 3(b) depict the recovered images under optimization attacks (e.g., IG, GI).
We can find that the attacker can still recover the outline of the inputs with ATS, Top-k, and Outpost, while Soteria, Precode, DP, and DGP can make the recovered images unrecognizable. Fig. 3(c) shows the recovered images from the R-gap attack. We can see that all defenses but ATS can defend well against R-gap, because ATS does not damage the gradient structure; this validates that a slight perturbation on gradients can easily mitigate the analytical attacks. We are not able to provide the result of Precode because its VB operation destroys the model structure, so that R-gap cannot be mounted. Fig. 3(d) plots the recovered images from the Rob attack. It shows that ATS, Precode, and Soteria fail to work and most inputs can be reconstructed. Fig. 3(d) also shows that DP cannot defend against Rob. This might be because the server calculates the inputs by superimposing a large number of the malicious imprint module's gradient parameters, and the noise added to the gradient follows a normal distribution, potentially canceling out when aggregated in large numbers. However, DGP, Top-k, and Outpost can effectively defend against the Rob attack because the gradients of all layers are pruned, including those of the malicious imprint modules. However, we reiterate that the main weakness of gradient pruning based on Top-k selection is its vulnerability to optimization attacks (e.g., IG, GI), as widely demonstrated in the literature (Gao et al. 2021; Sun et al. 2021).

[Figure 3: Data visualization on privacy evaluation by using multiple gradient inversion attacks. Each panel shows the original images and the reconstructions under the Baseline, ATS-T, ATS-R, Soteria, Precode, DP, Top-k, Outpost, and DGP defenses: (a) IG, CIFAR10; (b) GI, CIFAR10; (c) R-gap, CIFAR10 (no Precode column); (d) Rob, CIFAR10.]

6.3 Accuracy Evaluation

Tab. 1 lists the accuracy of ResNet18 on CIFAR10 under different defenses. Clearly, ATS, Soteria, Precode, Outpost, Top-k, and our method can achieve model accuracy similar to the unprotected baseline, while DP performs worst, as expected. Additionally, we evaluated the performance of more models with DGP, including ResNet18, VGG11 (Simonyan and Zisserman 2014), CNN6, and LeNet (Zhu) (Geiping et al. 2020), and we further performed ablation experiments to explore the role of the error feedback mechanism. Fig. 4 shows that the model performance of DGP with error feedback is close to the baseline. However, DGP without error feedback performs poorly and even fails to converge. This is because accumulated errors result in a larger disparity between the model's update direction and the correct update direction. Notably, this effect is mitigated in structurally complex models due to the presence of numerous redundant parameters. Prior research (Molchanov et al. 2016) indicated that even if these redundant parameters are not updated (i.e., their gradient parameters are set to 0), their impact on model performance is small. Our theoretical analysis and Fig. 4 show that the error feedback mechanism can effectively correct the negative effects caused by gradient pruning. The Top-k method can also enjoy this benefit since it is also based on pruning. However, further experiments (see details in the appendix) validate that, to achieve a level of privacy protection similar to that of DGP with 80% pruning, the pruning rate of Top-k must exceed 95%, which results in inferior accuracy.
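The SSIM and LPIPS numbers reported in Tab. 1 below can be computed along the lines of the following sketch; this is our own illustration (not the paper's evaluation script), and it assumes the `lpips` package, a recent `scikit-image`, and images given as HxWx3 float arrays in [0, 1]:

```python
# Scoring a reconstruction against the original image with SSIM (lower = better defense)
# and LPIPS (higher = better defense), as in the privacy evaluation.
import numpy as np
import torch
import lpips
from skimage.metrics import structural_similarity as ssim

lpips_fn = lpips.LPIPS(net="alex")   # perceptual metric used in the evaluation

def defense_scores(original: np.ndarray, reconstructed: np.ndarray):
    s = ssim(original, reconstructed, channel_axis=-1, data_range=1.0)
    # LPIPS expects NCHW tensors scaled to [-1, 1]
    to_t = lambda a: torch.from_numpy(a).permute(2, 0, 1).float().unsqueeze(0) * 2 - 1
    l = lpips_fn(to_t(original), to_t(reconstructed)).item()
    return {"SSIM": s, "LPIPS": l}

# toy usage with CIFAR-sized images
img = np.random.rand(32, 32, 3).astype(np.float32)
noisy = np.clip(img + 0.1 * np.random.rand(32, 32, 3).astype(np.float32), 0, 1)
print(defense_scores(img, noisy))
```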
Table 1: Evaluation of the defense performance under four attacks.
Attack | Metric    | Baseline | ATS-R    | ATS-T | Soteria  | Precode  | DP       | Top-k | Outpost | DGP
R-gap  | LPIPS     | 7.7E-4   | 1.3E-4   | 0.020 | 0.378    | n/a      | 0.373    | 0.379 | 0.378   | 0.375
       | SSIM      | 0.965    | 0.989    | 0.870 | 0.252    | n/a      | 0.259    | 0.249 | 0.250   | 0.248
IG     | LPIPS     | 0.003    | 4.5E-4   | 0.108 | 0.190    | 0.371    | 0.268    | 0.029 | 0.088   | 0.316
       | SSIM      | 0.954    | 0.981    | 0.566 | 0.368    | 0.257    | 0.333    | 0.769 | 0.640   | 0.287
GI     | LPIPS     | 0.004    | 0.003    | 0.094 | 0.201    | 0.453    | 0.343    | 0.045 | 0.111   | 0.382
       | SSIM      | 0.918    | 0.908    | 0.563 | 0.362    | 0.247    | 0.305    | 0.697 | 0.612   | 0.199
Rob    | LPIPS     | 0.023    | 0.028    | 0.150 | 0.023    | 0.025    | 0.023    | 0.523 | 0.295   | 0.527
       | Min LPIPS | 7.43E-15 | 5.03E-15 | 0.011 | 7.79E-15 | 5.52E-15 | 8.79E-07 | 0.231 | 0.195   | 0.243
       | SSIM      | 0.933    | 0.926    | 0.514 | 0.933    | 0.929    | 0.899    | 0.038 | 0.221   | 0.051
       | Max SSIM  | 1.000    | 1.000    | 0.931 | 1.000    | 1.000    | 1.000    | 0.224 | 0.310   | 0.365
Final Model Acc.: 93.62% (Baseline), 93.14% (ATS), 92.90% (Soteria), 92.83% (Precode), 76.01% (DP), 93.44% (Top-k), 92.96% (Outpost), 93.40% (DGP).

[Figure 4: Evaluation of model accuracy with different datasets and models (EF denotes the error feedback).]

Table 2: The impact of different parameters on DGP.
         | (k1 + k2): 48% | 80%   | 96%   | p = k1/k2: 1/15 | 1/7   | 1/3
LPIPS    | 0.426          | 0.527 | 0.531 | 0.316           | 0.351 | 0.383
SSIM     | 0.146          | 0.051 | 0.029 | 0.287           | 0.250 | 0.234
Acc. (%) | 93.42          | 93.40 | 92.91 | 93.40           | 93.21 | 92.82

6.4 Further Discussions

Choice of k1, k2, and p for DGP. According to the analysis in Sec. 4.1, the active GIA is greatly impacted by (k1 + k2) and the optimization GIA is greatly affected by p = k1/k2. With this in mind, we use the Rob attack to evaluate the privacy of DGP with different (k1 + k2) and the IG attack to evaluate DGP with different p. As shown in Tab. 2, a larger pruning rate (k1 + k2) leads to better privacy preservation, but the model's performance suffers as a consequence. Furthermore, a larger p, i.e., more large parameters being eliminated, can better defend against optimization GIAs but impacts accuracy.

Reducing download communication cost. Although DGP provides a sufficient privacy guarantee as well as reducing the upload cost, users' download cost could still be expensive. This is because different users have different sets of $T_{k_1}(\cdot)$ and $B_{k_2}(\cdot)$ when pruning their own local gradients, so the global model parameters will become dense after aggregation. We suggest aligned DGP (ADGP), an improved scheme that aligns the selected gradients to further reduce the download cost. Similar to DGP, for the best privacy, each user still first identifies his top-$k_1$ gradient location set $T_{k_1}$. Different from DGP, ADGP also aims to save users' download communication cost by ensuring that all users' uploaded pruned gradient parameters reside in the same location set. This is achieved by randomly selecting a user, who identifies a top-$2k$ ($k_1 < k$) location set $T_{2k}$ (represented with a binary location matrix $I$) and broadcasts $I$ to all other users. Note that $T_{k_1} \subset T_{2k}$ is not necessarily true. Upon receiving $I$, each user first discards the gradient parameters in $T_{k_1}$ and then only transmits the $k$ largest gradient parameters whose locations belong to $I$. After aggregation, users only need to download the global gradient parameters associated with $I$. We give the specific communication cost in the appendix and find that ADGP further reduces the overall communication cost. Moreover, with the error feedback mechanism, it can also maintain the model performance, as shown in Fig. 4. To summarize, ADGP can provide better communication efficiency while maintaining model performance. We leave the investigation of the privacy protection of ADGP as future work.
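To make the ADGP upload rule concrete, a minimal sketch follows; it is our own illustration under the description above (function names, shapes, and the toy values of k and k1 are ours, not from the paper):

```python
# Sketch (ours) of ADGP: one randomly chosen user broadcasts a binary location mask I
# covering its top-2k entries; every user then uploads only its k largest entries
# (after discarding its own T_{k1}) whose locations lie inside I.
import torch

def adgp_mask(reference_grad: torch.Tensor, k: int) -> torch.Tensor:
    """Binary location matrix I: top-2k positions of the broadcasting user's gradient."""
    idx = reference_grad.abs().flatten().topk(2 * k).indices
    mask = torch.zeros(reference_grad.numel(), dtype=torch.bool)
    mask[idx] = True
    return mask.view_as(reference_grad)

def adgp_upload(grad: torch.Tensor, mask: torch.Tensor, k1: float, k: int) -> torch.Tensor:
    """One user's upload: drop own top-k1 entries, then keep the k largest entries inside I."""
    flat = grad.flatten().clone()
    n = flat.numel()
    flat[flat.abs().topk(int(k1 * n)).indices] = 0.0        # discard own T_{k1}
    candidate = flat * mask.flatten()                        # restrict to broadcast locations
    keep = candidate.abs().topk(k).indices
    out = torch.zeros_like(flat)
    out[keep] = candidate[keep]
    return out.view_as(grad)

# toy usage: user 0 broadcasts the mask, user 1 uploads aligned sparse gradients
g0, g1 = torch.randn(64, 128), torch.randn(64, 128)
I = adgp_mask(g0, k=400)
upload = adgp_upload(g1, I, k1=0.05, k=400)
print(int((upload != 0).sum()))    # at most k nonzero entries, all inside I
```

Because every user's surviving entries lie in the same location set I, the aggregated global update stays sparse, which is what reduces the download cost.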
7 Conclusion, Limitation, and Future Contrary to the traditional belief that gradient pruning is not a good choice to protect privacy, this paper proposes DGP, a gradient pruning-based defense, to achieve a better trade-off among privacy protection, model performance, and communication efficiency for collaborative learning. This finding is built upon the analysis of how pruned gradients bound the attacker’s recovery error and why large gradient parameters leak more private information and should be pruned. By dual-pruning both large and small gradients, DGP guarantees theoretical convergence and better privacy protection against passive attackers. By comparing to state-of-the-art defenses, experimental results corroborate our theoretical analysis, as well as empirically demonstrating the advantage of DGP against active attackers. In terms of limitations, the success of ADGP relies on selecting a reliable user to broadcast its locations. When this user becomes malicious, the entire system will fail. In the future, we will provide more rigorous and more comprehensive privacy analysis, investigate the privacy property of ADGP under passive attacks, explore the applications of (A)DGP in federated learning and broaden our research to more domains like NLP. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6410 Acknowledgements Shengshan’s work is supported in part by the National Natural Science Foundation of China (Grant No.U20A20177) and Hubei Province Key R&D Technology Special Innovation Project under Grant No.2021BAA032. Shengqing’s work is supported in part by Hubei Provincial Natural Science Foundation Project (NO. 2023AFB342) and Open Program of Nuclear Medicine and Molecular Imaging Key Laboratory of Hubei Province (NO. 2022fzyx018). The work is supported by HPC Platform of Huazhong University of Science and Technology. Shengshan Hu is the corresponding author. References Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H. B.; Mironov, I.; Talwar, K.; and Zhang, L. 2016. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security (CCS’16), 308–318. Alemi, A. A.; Fischer, I.; Dillon, J. V.; and Murphy, K. 2016. Deep variational information bottleneck. arXiv preprint arXiv:1612.00410. Alistarh, D.; Hoefler, T.; Johansson, M.; Konstantinov, N.; Khirirat, S.; and Renggli, C. 2018. The convergence of sparsified gradient methods. In Proceedings of the 2018 Neural Information Processing Systems (NeurIPS’18), 5977–5987. Boenisch, F.; Dziedzic, A.; Schuster, R.; Shamsabadi, A. S.; Shumailov, I.; and Papernot, N. 2021. When the Curious Abandon Honesty: Federated Learning Is Not Private. arXiv preprint arXiv:2112.02918. Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H. B.; Patel, S.; Ramage, D.; Segal, A.; and Seth, K. 2017. Practical secure aggregation for privacypreserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS’17), 1175–1191. Chen, C.-Y.; Ni, J.; Lu, S.; Cui, X.; Chen, P.-Y.; Sun, X.; Wang, N.; Venkataramani, S.; Srinivasan, V. V.; Zhang, W.; et al. 2020. Scalecom: Scalable sparsified gradient compression for communication-efficient distributed training. In Proceedings of the 2020 Neural Information Processing Systems (NeurIPS’20), 13551–13563. Chen, X.; Wu, Z. S.; and Hong, M. 2020. Understanding gradient clipping in private SGD: A geometric perspective. 
In Proceedings of the 2020 Neural Information Processing Systems (NeurIPS’20), 13773–13782. Dai, X.; Yan, X.; Zhou, K.; Yang, H.; Ng, K. K.; Cheng, J.; and Fan, Y. 2019. Hyper-sphere quantization: Communication-efficient sgd for federated learning. arXiv preprint arXiv:1911.04655. Danner, G.; and Jelasity, M. 2015. Fully distributed privacy preserving mini-batch gradient descent learning. In Proceedings of the 15th International conference on distributed applications and interoperable systems (IFIP’15), 30–44. Dwork, C.; Roth, A.; et al. 2014. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci., 9(3-4): 211–407. Fan, L.; Ng, K. W.; Ju, C.; Zhang, T.; Liu, C.; Chan, C. S.; and Yang, Q. 2020. Rethinking privacy preserving deep learning: How to evaluate and thwart privacy attacks. In Federated Learning, volume 12500, 32–50. Springer. Fowl, L.; Geiping, J.; Czaja, W.; Goldblum, M.; and Goldstein, T. 2021. Robbing the fed: Directly obtaining private data in federated learning with modified models. arXiv preprint arXiv:2110.13057. Gao, W.; Guo, S.; Zhang, T.; Qiu, H.; Wen, Y.; and Liu, Y. 2021. Privacy-preserving collaborative learning with automatic transformation search. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’21), 114–123. Geiping, J.; Bauermeister, H.; Dr¨oge, H.; and Moeller, M. 2020. Inverting gradients-how easy is it to break privacy in federated learning? In Proceedings of the 2020 Neural Information Processing Systems (NeurIPS’20), 16937–16947. Geyer, R. C.; Klein, T.; and Nabi, M. 2017. Differentially private federated learning: A client level perspective. arXiv preprint arXiv:1712.07557. Gilad-Bachrach, R.; Laine, K.; Lauter, K.; Rindal, P.; and Rosulek, M. 2019. Secure data exchange: A marketplace in the cloud. In Proceedings of the 2019 ACM SIGSAC Conference on Cloud Computing Security Workshop (CCSW’19), 117–128. Hardy, S.; Henecka, W.; Ivey-Law, H.; Nock, R.; Patrini, G.; Smith, G.; and Thorne, B. 2017. Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption. arXiv preprint arXiv:1711.10677. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’16), 770–778. Huang, Y.; Gupta, S.; Song, Z.; Li, K.; and Arora, S. 2021. Evaluating gradient inversion attacks and defenses in federated learning. In Proceedings of the 2021 Neural Information Processing Systems (NeurIPS’21), 7232–7241. Jeon, J.; Lee, K.; Oh, S.; Ok, J.; et al. 2021. Gradient inversion with generative image prior. In Proceedings of the 2021 Neural Information Processing Systems (NeurIPS’21), 29898–29908. Kairouz, P.; McMahan, H. B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A. N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. 2021. Advances and open problems in federated learning. Foundations and Trends® in Machine Learning, 14(1–2): 1–210. Karimireddy, S. P.; Rebjock, Q.; Stich, S.; and Jaggi, M. 2019. Error feedback fixes signsgd and other gradient compression schemes. In Proceedings of the 36th International Conference on Machine Learning (ICML’19), 3252–3261. Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple layers of features from tiny images. Li, Z.; Zhang, J.; Liu, L.; and Liu, J. 2022. Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage. 
In Proceedings of the 2022 IEEE/CVF Conference The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6411 on Computer Vision and Pattern Recognition (CVPR’22), 10132–10142. Lin, Y.; Han, S.; Mao, H.; Wang, Y.; and Dally, W. J. 2017. Deep gradient compression: Reducing the communication bandwidth for distributed training. arXiv preprint arXiv:1712.01887. Liu, R.; Cao, Y.; Chen, H.; Guo, R.; and Yoshikawa, M. 2020. Flame: Differentially private federated learning in the shuffle model. arXiv preprint arXiv:2009.08063. Mohassel, P.; and Zhang, Y. 2017. Secureml: A system for scalable privacy-preserving machine learning. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP’17), 19–38. Molchanov, P.; Tyree, S.; Karras, T.; Aila, T.; and Kautz, J. 2016. Pruning convolutional neural networks for resource efficient inference. arXiv preprint arXiv:1611.06440. Pan, X.; Zhang, M.; Yan, Y.; Zhu, J.; and Yang, Z. 2022. Exploring the security boundary of data reconstruction via neuron exclusivity analysis. In Proceedings of the 31st USENIX Security Symposium (Security’22), 3989–4006. Phong, L. T.; Aono, Y.; Hayashi, T.; Wang, L.; and Moriai, S. 2017. Privacy-preserving deep learning: Revisited and enhanced. In Proceedings of the 8th International Conference on Applications and Techniques in Information Security (ATIS’17), 100–110. Qian, J.; and Hansen, L. K. 2020. What can we learn from gradients? arXiv preprint arXiv:2010.15718. Scheliga, D.; M¨ader, P.; and Seeland, M. 2022. PRECODEA Generic Model Extension to Prevent Deep Gradient Leakage. In Proceedings of the 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV’22), 1849– 1858. Shokri, R.; and Shmatikov, V. 2015. Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC conference on computer and communications security (CCS’15), 1310–1321. Simonyan, K.; and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Sun, J.; Li, A.; Wang, B.; Yang, H.; Li, H.; and Chen, Y. 2021. Soteria: Provable defense against privacy leakage in federated learning from representation perspective. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’21), 9311–9319. Sun, L.; Qian, J.; and Chen, X. 2021. Ldp-fl: Practical private aggregation in federated learning with local differential privacy. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI’21), 1571–1578. Wang, F.; Hugh, E.; and Li, B. 2023. More than Enough is Too Much: Adaptive Defenses against Gradient Leakage in Production Federated Learning. In Proceedings of the International Conference on Computer Communications (Infocom’ 23). Wang, Y.; Deng, J.; Guo, D.; Wang, C.; Meng, X.; Liu, H.; Ding, C.; and Rajasekaran, S. 2020. Sapag: A selfadaptive privacy attack from gradients. arXiv preprint arXiv:2009.06228. Wang, Z.; Bovik, A. C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4): 600–612. Wang, Z.; Song, M.; Zhang, Z.; Song, Y.; Wang, Q.; and Qi, H. 2019. Beyond inferring class representatives: User-level privacy leakage from federated learning. In Proceedings of the 2019 IEEE Conference on Computer Communications (INFOCOM’19), 2512–2520. Wei, K.; Li, J.; Ding, M.; Ma, C.; Yang, H. H.; Farokhi, F.; Jin, S.; Quek, T. Q.; and Poor, H. V. 2020a. 
Federated learning with differential privacy: Algorithms and performance analysis. IEEE Transactions on Information Forensics and Security, 15: 3454–3469. Wei, W.; Liu, L.; Loper, M.; Chow, K.-H.; Gursoy, M. E.; Truex, S.; and Wu, Y. 2020b. A framework for evaluating gradient leakage attacks in federated learning. arXiv preprint arXiv:2004.10397. Wen, Y.; Geiping, J.; Fowl, L.; Goldblum, M.; and Goldstein, T. 2022. Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification. arXiv preprint arXiv:2202.00580. Wilson, A. C.; Roelofs, R.; Stern, M.; Srebro, N.; and Recht, B. 2017. The marginal value of adaptive gradient methods in machine learning. In Proceedings of the 2017 Neural Information Processing Systems (NeurIPS’17), 4148–4158. Yin, H.; Mallya, A.; Vahdat, A.; Alvarez, J. M.; Kautz, J.; and Molchanov, P. 2021. See through gradients: Image batch recovery via gradinversion. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’21), 16337–16346. Yin, H.; Molchanov, P.; Alvarez, J. M.; Li, Z.; Mallya, A.; Hoiem, D.; Jha, N. K.; and Kautz, J. 2020. Dreaming to distill: Data-free knowledge transfer via deepinversion. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8715–8724. Yu, L.; Liu, L.; Pu, C.; Gursoy, M. E.; and Truex, S. 2019. Differentially private model publishing for deep learning. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP’19), 332–349. Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the 2018 IEEE conference on computer vision and pattern recognition (CVPR’18), 586–595. Zhao, B.; Mopuri, K. R.; and Bilen, H. 2020. idlg: Improved deep leakage from gradients. arXiv preprint arXiv:2001.02610. Zhu, J.; and Blaschko, M. 2020. R-gap: Recursive gradient attack on privacy. arXiv preprint arXiv:2010.07733. Zhu, L.; Liu, Z.; and Han, S. 2019. Deep leakage from gradients. In Proceedings of the 2019 Neural Information Processing Systems (NeurIPS’19), 14747–14756. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6412 | 2024 | 712 |
18,532 | A Convolutional Neural Network Interpretable Framework for Human Ventral Visual Pathway Representation Mufan Xue1, Xinyu Wu1, Jinlong Li1, Xuesong Li2, Guoyuan Yang1,3* 1Advanced Research Institute of Multidisciplinary Sciences, Beijing Institute of Technology, Beijing 100081, China 2School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China 3School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China [email protected] Abstract Recently, convolutional neural networks (CNNs) have become the best quantitative encoding models for capturing neural activity and hierarchical structure in the ventral visual pathway. However, the weak interpretability of these black-box models hinders their ability to reveal visual representational encoding mechanisms. Here, we propose a convolutional neural network interpretable framework (CNNIF) aimed at providing a transparent interpretable encoding model for the ventral visual pathway. First, we adapt the feature-weighted receptive field framework to train two highperforming ventral visual pathway encoding models using large-scale functional Magnetic Resonance Imaging (fMRI) in both goal-driven and data-driven approaches. We find that network layer-wise predictions align with the functional hierarchy of the ventral visual pathway. Then, we correspond feature units to voxel units in the brain and successfully quantify the alignment between voxel responses and visual concepts. Finally, we conduct Network Dissection along the ventral visual pathway including the fusiform face area (FFA), and discover variations related to the visual concept of ‘person’. Our results demonstrate the CNN-IF provides a new perspective for understanding encoding mechanisms in the human ventral visual pathway, and the combination of ante-hoc interpretable structure and post-hoc interpretable approaches can achieve fine-grained voxel-wise correspondence between model and brain. The source code is available at: https://github.com/BITYangLab/CNN-IF. Introduction The ventral visual pathway is a remarkable feat of nature, capable of processing complex visual stimuli with predominant efficiency and accuracy. Recently, CNNs have emerged as optimal quantitative encoding models for capturing neural activities and hierarchical structures in the ventral visual pathway (Yamins et al. 2014; Kriegeskorte 2015; Schrimpf et al. 2018; Cadena et al. 2019; Schrimpf et al. 2020; Storrs et al. 2021). These models provide a hierarchical correspondence to the early visual cortex (V1-V4) and inferior temporal (IT) (Khosla et al. 2022): early CNN layers predict V1 best, while intermediate and late layers predict V4 and IT best (Yamins et al. 2014; Cichy et al. 2016; G¨uc¸l¨u and *Corresponding author: Guoyuan Yang Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. van Gerven 2015). Comparable strategies have proven successful in understanding the human auditory cortex (Kell et al. 2018) and motor cortex (Sussillo et al. 2015), highlighting the universality of CNN encoding models (Zhuang et al. 2021; Konkle and Alvarez 2022). Nonetheless, the current understanding of the fundamental mechanisms within the brain and model systems remains incomplete. Unraveling the nature of representational transformations and computations in the ventral visual pathway has long been a vital aim in neuroscience. 
Importantly, the use of computational models enables the simulation of visual hierarchical processing and facilitates the exploration of hypotheses that may not be readily accessible in the human brain (Beguˇs, Zhou, and Zhao 2023). To date, CNNs have been predominantly regarded as black-box models, posing a significant challenge in terms of interpretability. The inherent inability to delve into the internal workings of these models impedes our progress in comprehending the fundamental mechanisms by which they encode visual representations (Ribeiro et al. 2022). Despite ongoing efforts from researchers to enhance the interpretability of CNNs, such as utilizing Grad-CAM (Gradient-weighted Class Activation Mapping) (Selvaraju et al. 2017) and LRP (Layer-wise Relevance Propagation) (Bach et al. 2015), striking a balance between accuracy and interpretability remains a major obstacle. With the emergence of large-scale brain imaging datasets (Chang et al. 2019; Allen et al. 2022), both goal-driven and data-driven approaches have the potential to provide advanced encoding models for the ventral visual pathway (Qiao et al. 2021; Gu et al. 2022). This creates a further appetite for model interpretability. The goal-driven approach is to characterize voxel responses through the feature space trained on high-level tasks, and the data-driven approach is to directly train the model with fMRI data to characterize voxel responses (Cadena et al. 2019; Xiao et al. 2022). It is essential to note that biological visual learning is a process of differentiation, wherein the learning involves discerning differences in existing visual features present in visual inputs rather than constructing new features for each new category (Konkle and Alvarez 2022). This highlights the need to strike a balance between predictive performance and interpretability. Overemphasizing the predictive performance of CNN models may lead to high accuracy in voxel response prediction but often at the cost of understanding The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6413 Figure 1: The architecture of CNN-IF. (A) All voxels within the ROI are characterized by a shared CNN extractor. To effectively predict voxel responses for input images, spatial pooling fields, and voxel-weighted matrices are specifically allocated to each voxel. The input images utilized in this study are obtained from the NSD experiment, leveraging the COCO dataset, comprising a total of 73,000 images. (B) The input images are segmented by a binary segmentation network, yielding up to six concept maps per image. The IoU (Intersection over Union) is then calculated between the concept maps and the thresholded voxel maps. (C) For each voxel, the Pearson correlation between the predictive responses and the measured responses is computed on a testing dataset and then corrected by the noise ceiling of that voxel. their underlying neural processing mechanisms (Cole et al. 2017; Kohoutov´a et al. 2020). Therefore, it is of utmost importance to give more attention to both goal-driven and datadriven approaches in terms of interpretability. Here, we propose CNN-IF which is used for the interpretation of CNN models in the human ventral visual pathway. First, we adapt the feature-weighted receptive filed (fwrf) model (St-Yves and Naselaris 2018) to establish an interpretable component for our encoding models. Then, we train encoding models on a large-scale fMRI dataset named the Natural Scenes Dataset (NSD) (Allen et al. 2022). 
Next, we utilize the voxel-weighted matrix derived from the parameters of the fwrf to correspond feature units to voxel units in the brain and successfully quantify the alignment between voxel responses with visual concepts. Finally, we conduct Network Dissection (Bau et al. 2017, 2020) along the visual ventral pathway and achieve similar results to a previous study (Khosla and Wehbe 2022) on the fusiform face area (FFA) (Kanwisher, McDermott, and Chun 1997), the extrastriate body area (EBA) (Downing et al. 2001), the visual word form area (VWFA) (Cohen et al. 2000), and the retrosplenial cortex (RSC) (Dilks et al. 2013). We demonstrate that our framework achieves fine-grained hierarchical alignment between the model and the brain. Overall, our main contributions are as follows: • The CNN-IF can provide transparent interpretable encoding models for the ventral visual pathway. • The CNN layer-wise predictions align with the functional hierarchy of the ventral visual pathway. • We captured fine-grained hierarchical alignment between voxel units and a set of visual concepts. • We discovered variations related to the visual concept of ‘person’ in V1-V2-V3-hV4-FFA brain regions. • We visualized the spatial pooling field and the activation map to explain this sensational finding. Related Work Human Ventral Visual Pathway The human ventral visual pathway is a major neural pathway in the brain that is responsible for object recognition and visual perception. It is located primarily in the ventral (lower) region of the brain, specifically the temporal lobe. The visual process of object recognition along the ventral visual pathway mainly includes four stages retinal imaging, feature extraction, feature combination, and finally object recognition. Among these brain functional regions, V1 is responsible for capturing information such as edges and curvatures as well as detecting simple features such as shape, color, and position (Hubel and Wiesel 1962; Cadena et al. 2019). V2 which receives most of the input from V1 to extract more complex local features has similar selectivity to direction and spatial scale (Levitt, Kiper, and Movshon 1994; Coggan et al. 2017) and shows stronger selectivity and tolerance to visual texture The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6414 (Ziemba et al. 2016). Finally, the high-level regions, including FFA, EBA, VWFA, and RSC, complete the final coding semantic category to achieve object recognition tasks such as faces, scenes, etc (Khosla et al. 2022). Network Dissection Network dissection is a technique used in computer vision to understand the inner workings of deep neural networks (Bau et al. 2017). It involves analyzing the intermediate layers of a neural network model to identify and interpret the function of individual units or groups of units. The goal of network dissection is to uncover the semantics and meanings learned by the network for specific tasks. It helps researchers gain insights into what the network has learned and how it has encoded and represented information. Network dissection has been used to analyze and interpret various deep network architectures, such as CNNs for image classification and generative adversarial networks (GANs) for image synthesis (Bau et al. 2020). Network dissection is also used to interpret the tumor segmentation results (Natekar, Kori, and Krishnamurthi 2020). 
In the human visual system, network dissection is applied to the last layer of their encoding model to demonstrate strong selectivity and functional specialization of the high-level visual areas (Khosla and Wehbe 2022; Sarch et al. 2023), but they don’t discover the encoding processing for this selectivity and specialization. Methods CNN-IF The architecture of CNN-IF is shown in Figure 1. We adopt the fwrf model to separate the “where” parameter, which indicates the location of feature pooling, from the “what” parameter, which fine-tunes the feature weights, establishing interpretable components called spatial pooling fields for each voxel of in human brain (Fig. 1A). The size of these spatial pooling fields matches the size of the feature map in each layer of the model. For the goal-driven model, a single isotropic 2D Gaussian pooling field is selected from a predefined set and applied to all feature maps. In contrast, for the data-driven model, an independent and flexible pooling field is applied to each layer of feature maps. Feature maps are grouped based on the model layers and then multiplied pixel-wise by the corresponding spatial pooling field gi v that determines the region of visual space that drives voxel response. The weighted pixel values in each feature map are then weighted by the voxel-weighted matrix Wv to yield predictive voxel responses. The CNN extractors remain identical for all voxels across the encoding models, while the spatial pooling fields are optimized and vary across voxels. After training, feature maps of each convolutional layer in the extractor are multiplied by the voxel-weighted matrix to obtain voxel maps. We then perform network dissection with these voxel maps. This allows us to quantify the alignment between a set of semantic concepts with all voxels in a specific region of interest (ROI) (Fig. 1B). Our procedure closely follows the work of (Bau et al. 2017). Dataset All encoding models were trained on the NSD1 (Allen et al. 2022), which consists of individual high-density sampling fMRI data obtained from 8 participants (6 females, aged 1932 years). During 30-40 sessions of 7T MRI (whole-brain gradient-echo EPI, 1.8 mm isotropic voxels, and 1.6 s TR), each participant viewed 9,000-10,000 different colored natural scenes, with each scene repeated 2-3 times. A special set of 1,000 images was shared across subjects, while the rest were mutually exclusive. The trained model is validated on these 1000 shared pictures to obtain validation accuracy. The images that subjects viewed (3 s on and 1 s off) were from the Microsoft Common Objects in Context (COCO) database (Lin et al. 2014) with a square crop resized to 8.4 × 8.4°. Regions of Interests We focus on modeling responses within 8 ROIs in the ventral visual in the study. Four ROIs belonging to the retinotopic early visual cortex, namely, V1, V2, V3, and hV4 are defined using a population receptive field (pRF) localizer scan session, and four higher-level visual ROIs, namely FFA, EBA, RSC, VWFA are manually drawn based on the results of the functional localizer (fLoc) experiment after a liberal thresholding procedure (Allen et al. 2022). We use the cortical flap map resent eight selected ROIs on the ventral visual pathway (Fig. 2C). To better demonstrate the generalization of CNN-IF, we register the ROI of each subject to a common anatomical space (MNI152), and the model predictions are presented in the same manner. 
Model Architecture We employ AlexNet (Krizhevsky, Sutskever, and Hinton 2012) and GNet (Allen et al. 2022; St-Yves et al. 2023) to predict voxel responses. These models possess intricate brain-inspired architectures and provide biologically plausible interpretations, enabling the effective capture of hierarchical visual representations in the human brain. AlexNet has previously been shown to deliver state-of-the-art performance in visual response modeling (Güçlü and van Gerven 2015; St-Yves and Naselaris 2018). GNet is a data-driven encoding model that has been shown to train models from scratch and accurately predict voxel responses for V1-V2-V3-hV4. Both AlexNet and GNet consist of a CNN feature extractor and an interpretable fwrf model used to predict the voxel response. The CNNs utilized in the AlexNet and GNet models are constructed by hierarchically composing functions that process an input image denoted as $t$:
$$f_l(t) = \xi_l\big(f_{l-1}(t)\big),$$
where $\xi_l$ is a CNN extractor that operates at layer $l$ on the output of $f_{l-1}(t)$, and $f_l(t)$ is the output of layer $l$, which is fed into the next layer as input. The encoding models leverage the intermediate representations $f_l(t)$, which are feature maps with pixels denoted by $[f_l(t)]_{k,i,j}$, where $(i, j)$ is the location of the pixel in the $k$-th feature map. (Footnote 1: http://naturalscenesdataset.org) The predictive response of voxel $v$ to the input $t$ is expressed by the following formula:
$$\tilde{R}_{t,v,l} = b_v + \sum_k W_{k,v} \cdot \sigma_{k,v,l}(t),$$
where $W_{k,v}$ is the feature weight for voxel $v$ and feature $k$. The summation $\sum_k W_{k,v}$ for voxel $v$ yields the voxel-weighted matrix by summing up the weights of all features in the encoding models, and $b_v$ is a bias term for voxel $v$. Here,
$$\sigma_{k,v,l}(t) = \sum_{i,j} [f_l(t)]_{k,i,j} \cdot g^l_{v,i,j},$$
where $g^l_{v,i,j}$ denotes the spatial pooling field of voxel $v$ in CNN-IF, which reduces the spatial dimensions of the feature maps while preserving important features; important features are located by pixel $(i, j)$. The spatial pooling field of each voxel in different layers is initialized with the same parameters.

Model Training and Testing Although we aimed to maximize the utilization of data from all eight subjects for model training, a rigorous evaluation resulted in excluding data from subjects 4, 6, 7, and 8 due to significantly lower signal-to-noise ratios, particularly in the higher-level visual brain areas focused on in this study. Finally, we selected data from subjects 1, 2, 3, and 5 for both training and testing. The dataset consists of a total of 37,000 natural scene images, with 1,000 images shared by all subjects and each subject contributing 9,000 unique images. The model is tested on the 1,000 shared images, while the remaining 36,000 images were split into a training set (90%) and a validation set (10%). To fully exploit the advantages of the NSD dataset, we jointly optimized our CNN extractor using data from the four subjects. Specifically, for the goal-driven encoding model, the CNN extractor parameters were pre-trained on object classification in the ImageNet database (Deng et al. 2009). As for the data-driven encoding model, the CNN extractor parameters, spatial pooling fields, and feature weights were all optimized using stochastic gradient descent and an L2-norm weighted loss function:
$$\mathrm{Loss}(\tilde{R}, R) = \sum_{t \in \mathrm{Batch}} \sum_s \sum_v \sum_l \big(\tilde{R}_{t,s,v,l} - R_{t,s,v}\big)^2,$$
where $t, s$ denote the image $t$ presented to subject $s$, and $\tilde{R}_{t,s,v,l}$ denotes the predictive response of model layer $l$ for stimulus $t$ received by voxel $v$ of subject $s$.
Throughout the voxels, the predictive accuracy is closely related to the noise ceiling, indicating that voxel differences in predictive accuracy simply reflect differences in signal-tonoise ratio (SNR). Additionally, the predictive accuracy approaches but does not reach the noise ceiling. Next, the cortical flapmap reveals voxel-wise predictive performance (Fig. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6416 Figure 2: Prediction of voxel responses in the ventral visual pathway. (A) The results of the change in the predictive accuracy of the training results of six models with the amount of training data. The validation accuracy is estimated as the Pearson correlation coefficient between measured voxel responses and predictive responses on the testing dataset. (B) The distribution of prediction accuracy per voxel relative to the corresponding noise ceiling shows voxel differences in predictive accuracy simply reflect differences in SNR. (C) Illustration of ROIs of the ventral visual pathway for encoding models. (D) The cortical flat map demonstrates the achieved predictive accuracy of our models across all voxels in the eight ROIs, revealing high accuracy across extensive regions within these ROIs. (E) The distribution of voxel-wise differences in predictive accuracy between goaldriven and data-driven approaches shows that pretrained parameters contribute to an increase in predictive accuracy. d-d-u, data-driven-unpretrained; d-d-p, data-driven-pretrained; g-d-p, goal-driven-pretrained. 2D). The predictive performance of the early visual cortex is higher than the predictive accuracy of the floc ROIs. This is due to the higher SNR in the primary visual cortex. Fine-grained Hierarchical Alignment between Model and the Ventral Visual Pathway Layer-wise hierarchy To evaluate the alignment between network layer-wise predictions and the functional hierarchy of the ventral visual pathway, we divided the eight ROIs into the hierarchy of early visual cortex (V1, V2, V3, hV4) and floc ROIs (FFA, EBA, RSC, VWFA) (Fig. 3A). Then, we quantified the correlation between predictive responses of all goal-driven encoding model layers of AlexNet and GNet with measured responses of the ventral visual pathway hierarchy to obtain the AlexNet hierarchy (Fig. 3B) and GNet hierarchy (Fig. 3C). The results of the correlation of datadriven encoding models are similar to goal-driven encoding models, which are provided in the Appendix2. Results from 2https://github.com/BIT-YangLab/CNN-IF both models consistently demonstrate a hierarchical alignment between the model and the brain. Specifically, the early layers of the model exhibit the strongest predictive performance for the early visual cortex, whereas the intermediate and late layers of the model exhibit the strongest predictive performance for the floc ROIs. Fine-grained voxel-wise hierarchy To further obtain fine-grained hierarchical alignment, we first used the voxelweighted matrix to correspond feature units to voxel units, quantifying the alignment between voxel responses and a set of visual concepts. Then, we performed network dissection for each of the four floc ROIs (FFA, EBA, RSC, VWFA). The alignment between each concept map and individual voxel map is quantified by the Intersection over Union (IoU), which is computed on Broden, a broadly and densely labeled dataset (Bau et al. 2017). The units with semantics are given labels across a range of objects, parts, scenes, and materials. 
We chose an IoU threshold of 0.04 and a voxel map activation threshold of 0.01 to quantify the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6417 Figure 3: Alignment between model and brain. (A) Visualization of the hierarchical structure of the ventral visual pathway, proceeding from posterior to anterior regions (including the early visual cortex and floc ROIs). (B) and (C) show the correlation between predictive voxel responses and measured voxel responses for each ROI from all goal-driven encoding model layers. The results were averaged across four subjects. (D) The number of voxels that are detected on AlexNet (goal-driven-pretrained). (E) The number of voxels that are detected on AlexNet (data-driven-pretrained). (F) The number of voxels that are detected on AlexNet (data-driven-unpretrained). Floc ROIs are aligned with the last convolutional layer of AlexNet for network dissection. The number of voxels reflects the highest alignment (IoU > 0.04) of all voxels within a certain ROI with a set of visual concepts. (G) Variations in the number of voxels aligned with ‘person’ are detected by different model layers. (H) Variations in the proportion of voxels aligned with the ‘wall’ are detected by different model layers. interpretability of a layer. Specifically, if IoUv,k calculated by voxel map for voxel v and concept map k for concept c exceed 0.4, we consider voxel v to have successfully detected representations encoded by the model feature space for the input image. To quantify the interpretability of a layer, we only select the concept with the highest IoU score per voxel and count the number of unique concepts aligned with voxel units. FFA, EBA, and VWFA all show selectivity for the concept of ‘person’ (Fig. 3D), but FFA is more concentrated on the face (Fig. 4A), EBA is more concentrated on the body parts (Fig. 4B), and VWFA is not only highly selective for ‘word’ but also for ‘person’. This is due to the local anatomical overlap of FFA, EBA, and VWFA. Due to the lack of a Word-related label in Broden, this concept was detected as ‘building’, which can be seen in the activation map visualized in Fig. 4D. RSC shows selectivity for the concepts of ‘wall’ and ‘street-s’ (Fig. 3D), which can be seen in the activation map visualized in Fig. 4C. We achieved similar results to a previous study by (Khosla and Wehbe 2022). Additionally, we found that the AlexNet (goal-driven-pretrained) model (Fig. 3D) detected more concepts than the AlexNet (data-driven-pretrained) model (Fig. 3E), and this alignment was more difficult to capture in the AlexNet (data-driven unpretrained) model (Fig. 3F). The results of the GNet network dissection are in the Appendix2. Variations of the ‘person’ concept in V1-V2-V3-hV4FFA ROIs Our model successfully simulated the face recognition function of human high-level visual areas. To further explore the variations of this process along the ventral visual pathway, we performed network dissection along the ventral visual pathway (V1-V2-V3-hV4-FFA ROIs) to seek variations related to the visual concept of ‘person’ that occur at the functional hierarchy learned by AlexNet (goalThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6418 Figure 4: Activation map of AlexNet (goal-drivenpretrained). For each region of floc ROIs, we took the top five activation maps with the highest activation from the top five voxels with the highest IoU score (top 1% quantile level). (A) Voxels in FFA are aligned with ‘person’. 
(B) Voxels in EBA are aligned with ‘person’ but more focused on the body. (C) Voxels in RSC are aligned with ‘wall’ and ‘street-s’. (D) Voxels in VWFA are aligned with ‘person’ and ‘building’ (the concept of ‘word’ is hidden in the label of ‘building’). (E) The activation of two images at each layer of AlexNet (goal-driven-pretrained) and they are aligned with ‘person’ only from conv5 and conv6. driven-pretrained) models. Our results show that the brain encodes the visual concept of ‘person’ along the ventral visual pathway and ultimately forms the representation of ‘person’ in the FFA in an unprecedented way (Fig. 3G). We also counted the proportion of voxels aligned with the ‘wall’ concept in the corresponding ROI in this hierarchy (Fig. 3H). According to the concept map of the ‘wall’, it contains some basic semantic information such as color and text, which is easier to detect in the early layer of the model. As expected, when a high-level concept contains more low-level semantic information, this problem may more easily occur. The inability of early layers to detect the ‘person’ label may be because these voxels are involved in encoding other concepts. Although V1-hV4 (corresponding to conv1-conv4 of the model) didn’t merge unique voxel units aligned with the ‘person’ concept, these ROI voxels are still involved in encoding representation that contains the semantic of ‘person’ when we paid attention to two images that contain ‘person’ (Fig. 4E). Conv1 and conv2 activate only the regions around the person, indicating a focus on some basic semantic information such as color and texture. Since V3, although voxels corresponding to ‘person’ has not been detected at this time, the activation map of conv4 shows that voxels in V3 and hV4 are sensitive to the circle (the face is also round). By comparFigure 5: Visualization of the spatial pooling field of AlexNet (goal-driven-pretrained). We fit the corresponding spatial pooling field for each voxel in each layer of the model to visualize this interpretable component. ing conv5 and conv6 (where voxel is starting to appear that can detect ‘person’) with other earlier layers, we can find that conv5 and conv6 have a larger whole of active areas, which can include all the information related to a ‘person’, while earlier layers are still a few scattered activated areas. This is the reason why we find that representations about the ‘person’ are ultimately formed at FFA rather than at the early visual cortex. Finally, we exhibited the variations of interpretable spatial pooling fields of the AlexNet (goal-driven-pretrained) model. The parameters of spatial pooling fields reflect where the corresponding model layer should focus the most when predicting voxel responses (Fig. 5). All receptive fields are initialized in the same way. As the model layer deepens, the activation region of the receptive field expands, however, the size of the activated area decreases. This observation suggests that the spatial pooling field prioritizes local pivotal information while fitting voxel responses. Conclusion We propose the CNN-IF which provides an interpretable encoding model for the human ventral visual pathway and well balances the predictive performance and interpretability of the model. By exploiting the interpretable hierarchical fwrf model of two high-performing encoding models, including AlexNet and GNet, we discover that the network layer-wise predictions align with the functional hierarchy of the ventral visual pathway. 
Using network dissection, we quantify the alignment between voxel responses and a set of visual concepts. The results show variations related to the visual concept of ‘person’ in the high-level visual area corresponding to higher layers of the model. Finally, we exhibit the spatial pooling field and the activation map to explain this sensational finding. We demonstrate that CNN-IF provides a new perspective on the interpretability of the CNN model for understanding encoding mechanisms in the human ventral visual pathway and achieving fine-grained hierarchical alignment between the model and the ventral visual pathway. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6419 Acknowledgments This work was supported by the National Natural Science Foundation of China (grants number 82302175, 62071049 and 62336002); the National Science and Technology Innovation 2030 Program (grant number 2021ZD0200500); the Beijing Municipal Science and Technology Commission (grants number Z171100000117012 and Z181100001518003); the Beijing Municipal Natural Science Foundation Project (grant number 4222018); the China Postdoctoral Science Foundation (grant number 2021M700015). References Allen, E. J.; St-Yves, G.; Wu, Y.; Breedlove, J. L.; Prince, J. S.; Dowdle, L. T.; Nau, M.; Caron, B.; Pestilli, F.; Charest, I.; et al. 2022. A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence. Nature neuroscience, 25(1): 116–126. Bach, S.; Binder, A.; Montavon, G.; Klauschen, F.; M¨uller, K.-R.; and Samek, W. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7): e0130140. Bau, D.; Zhou, B.; Khosla, A.; Oliva, A.; and Torralba, A. 2017. Network dissection: Quantifying interpretability of deep visual representations. In Proceedings of the IEEE conference on computer vision and pattern recognition, 6541– 6549. Bau, D.; Zhu, J.-Y.; Strobelt, H.; Lapedriza, A.; Zhou, B.; and Torralba, A. 2020. Understanding the role of individual units in a deep neural network. Proceedings of the National Academy of Sciences, 117(48): 30071–30078. Beguˇs, G.; Zhou, A.; and Zhao, T. C. 2023. Encoding of speech in convolutional layers and the brain stem based on language experience. Scientific Reports, 13(1): 6480. Cadena, S. A.; Denfield, G. H.; Walker, E. Y.; Gatys, L. A.; Tolias, A. S.; Bethge, M.; and Ecker, A. S. 2019. Deep convolutional models improve predictions of macaque V1 responses to natural images. PLoS computational biology, 15(4): e1006897. Chang, N.; Pyles, J. A.; Marcus, A.; Gupta, A.; Tarr, M. J.; and Aminoff, E. M. 2019. BOLD5000, a public fMRI dataset while viewing 5000 visual images. Scientific data, 6(1): 49. Cichy, R. M.; Khosla, A.; Pantazis, D.; Torralba, A.; and Oliva, A. 2016. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence. Scientific reports, 6(1): 27755. Coggan, D. D.; Allen, L. A.; Farrar, O. R.; Gouws, A. D.; Morland, A. B.; Baker, D. H.; and Andrews, T. J. 2017. Differences in selectivity to natural images in early visual areas (V1–V3). Scientific Reports, 7(1): 2444. Cohen, L.; Dehaene, S.; Naccache, L.; Leh´ericy, S.; Dehaene-Lambertz, G.; H´enaff, M.-A.; and Michel, F. 2000. The visual word form area: spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients. Brain, 123(2): 291–307. Cole, J. H.; Poudel, R. 
P.; Tsagkrasoulis, D.; Caan, M. W.; Steves, C.; Spector, T. D.; and Montana, G. 2017. Predicting brain age with deep learning from raw imaging data results in a reliable and heritable biomarker. NeuroImage, 163: 115–124. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and FeiFei, L. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, 248–255. Ieee. Dilks, D. D.; Julian, J. B.; Paunov, A. M.; and Kanwisher, N. 2013. The occipital place area is causally and selectively involved in scene perception. Journal of Neuroscience, 33(4): 1331–1336. Downing, P. E.; Jiang, Y.; Shuman, M.; and Kanwisher, N. 2001. A cortical area selective for visual processing of the human body. Science, 293(5539): 2470–2473. Gu, Z.; Jamison, K.; Sabuncu, M.; and Kuceyeski, A. 2022. Personalized visual encoding model construction with small data. Communications Biology, 5(1): 1382. G¨uc¸l¨u, U.; and van Gerven, M. A. 2015. Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream. Journal of Neuroscience, 35(27): 10005–10014. Hubel, D. H.; and Wiesel, T. N. 1962. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. The Journal of physiology, 160(1): 106. Kanwisher, N.; McDermott, J.; and Chun, M. M. 1997. The fusiform face area: a module in human extrastriate cortex specialized for face perception. Journal of neuroscience, 17(11): 4302–4311. Kell, A. J.; Yamins, D. L.; Shook, E. N.; Norman-Haignere, S. V.; and McDermott, J. H. 2018. A task-optimized neural network replicates human auditory behavior, predicts brain responses, and reveals a cortical processing hierarchy. Neuron, 98(3): 630–644. Khosla, M.; Jamison, K.; Kuceyeski, A.; and Sabuncu, M. 2022. Characterizing the ventral visual stream with response-optimized neural encoding models. Advances in Neural Information Processing Systems, 35: 9389–9402. Khosla, M.; and Wehbe, L. 2022. High-level visual areas act like domain-general filters with strong selectivity and functional specialization. bioRxiv, 2022–03. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Kohoutov´a, L.; Heo, J.; Cha, S.; Lee, S.; Moon, T.; Wager, T. D.; and Woo, C.-W. 2020. Toward a unified framework for interpreting machine-learning models in neuroimaging. Nature protocols, 15(4): 1399–1435. Konkle, T.; and Alvarez, G. A. 2022. A self-supervised domain-general learning framework for human ventral stream representation. Nature communications, 13(1): 491. Kriegeskorte, N. 2015. Deep neural networks: a new framework for modeling biological vision and brain information processing. Annual review of vision science, 1: 417–446. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6420 Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25. Levitt, J. B.; Kiper, D. C.; and Movshon, J. A. 1994. Receptive fields and functional architecture of macaque V2. Journal of neurophysiology, 71(6): 2517–2542. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Doll´ar, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In Computer Vision– ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, 740– 755. Springer. Natekar, P.; Kori, A.; and Krishnamurthi, G. 2020. 
Demystifying brain tumor segmentation networks: interpretability and uncertainty analysis. Frontiers in computational neuroscience, 14: 6. Qiao, K.; Zhang, C.; Chen, J.; Wang, L.; Tong, L.; and Yan, B. 2021. Effective and efficient roi-wise visual encoding using an end-to-end cnn regression model and selective optimization. In Human Brain and Artificial Intelligence: Second International Workshop, HBAI 2020, Held in Conjunction with IJCAI-PRICAI 2020, Yokohama, Japan, January 7, 2021, Revised Selected Papers 2, 72–86. Springer. Ribeiro, F. L.; Bollmann, S.; Cunnington, R.; and Puckett, A. M. 2022. An explainability framework for cortical surface-based deep learning. arXiv preprint arXiv:2203.08312. Sarch, G. H.; Tarr, M. J.; Fragkiadaki, K.; and Wehbe, L. 2023. Brain Dissection: fMRI-trained networks reveal spatial selectivity in the processing of natural images. bioRxiv, 2023–05. Schrimpf, M.; Kubilius, J.; Hong, H.; Majaj, N. J.; Rajalingham, R.; Issa, E. B.; Kar, K.; Bashivan, P.; Prescott-Roy, J.; Geiger, F.; et al. 2018. Brain-score: Which artificial neural network for object recognition is most brain-like? bioRxiv, 407007. Schrimpf, M.; Kubilius, J.; Lee, M. J.; Murty, N. A. R.; Ajemian, R.; and DiCarlo, J. J. 2020. Integrative benchmarking to advance neurally mechanistic models of human intelligence. Neuron, 108(3): 413–423. Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; and Batra, D. 2017. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, 618–626. St-Yves, G.; Allen, E. J.; Wu, Y.; Kay, K.; and Naselaris, T. 2023. Brain-optimized deep neural network models of human visual areas learn non-hierarchical representations. Nature communications, 14(1): 3329. St-Yves, G.; and Naselaris, T. 2018. The feature-weighted receptive field: an interpretable encoding model for complex feature spaces. NeuroImage, 180: 188–202. Storrs, K. R.; Kietzmann, T. C.; Walther, A.; Mehrer, J.; and Kriegeskorte, N. 2021. Diverse deep neural networks all predict human inferior temporal cortex well, after training and fitting. Journal of cognitive neuroscience, 33(10): 2044– 2064. Sussillo, D.; Churchland, M. M.; Kaufman, M. T.; and Shenoy, K. V. 2015. A neural network that finds a naturalistic solution for the production of muscle activity. Nature neuroscience, 18(7): 1025–1033. Xiao, W.; Li, J.; Zhang, C.; Wang, L.; Chen, P.; Yu, Z.; Tong, L.; and Yan, B. 2022. High-Level visual encoding model framework with hierarchical ventral stream-optimized neural networks. Brain Sciences, 12(8): 1101. Yamins, D. L.; Hong, H.; Cadieu, C. F.; Solomon, E. A.; Seibert, D.; and DiCarlo, J. J. 2014. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23): 8619–8624. Zhuang, C.; Yan, S.; Nayebi, A.; Schrimpf, M.; Frank, M. C.; DiCarlo, J. J.; and Yamins, D. L. 2021. Unsupervised neural network models of the ventral visual stream. Proceedings of the National Academy of Sciences, 118(3): e2014196118. Ziemba, C. M.; Freeman, J.; Movshon, J. A.; and Simoncelli, E. P. 2016. Selectivity and tolerance for visual texture in macaque V2. Proceedings of the National Academy of Sciences, 113(22): E3140–E3149. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6421 | 2024 | 713 |
18,533 | Self-Supervised 3D Human Mesh Recovery from a Single Image with Uncertainty-Aware Learning Guoli Yan, Zichun Zhong, Jing Hua Department of Computer Science, Wayne State University, Detroit, MI, USA {guoliyan, zichunzhong, jinghua}@wayne.edu Abstract Despite achieving impressive improvement in accuracy, most existing monocular 3D human mesh reconstruction methods require large-scale 2D/3D ground-truths for supervision, which limits their applications on unlabeled in-the-wild data that is ubiquitous. To alleviate the reliance on 2D/3D ground-truths, we present a self-supervised 3D human pose and shape reconstruction framework that relies only on selfconsistency between intermediate representations of images and projected 2D predictions. Specifically, we extract 2D joints and depth maps from monocular images as proxy inputs, which provides complementary clues to infer accurate 3D human meshes. Furthermore, to reduce the impacts from noisy and ambiguous inputs while better concentrate on the high-quality information, we design an uncertainty-aware module to automatically learn the reliability of the inputs at body-joint level based on the consistency between 2D joints and depth map. Experiments on benchmark datasets show that our approach outperforms other state-of-the-art methods at similar supervision levels. Introduction 3D human mesh recovery from monocular images is a challenging task in computer vision that can be used for a variety of human-centric applications such as augmented reality, human-robot interaction, computer-assisted coaching, etc. It has received increasing attention in recent years due to the availability of parametric 3D human body model, e.g. SCAPE (Anguelov et al. 2005) and SMPL (Loper et al. 2015), and advances in deep learning techniques (Tian et al. 2023). Although recent monocular 3D human mesh reconstruction methods have gained considerable improvement in accuracy, most of these works are in a fully-supervised setting (Kanazawa et al. 2018; Kolotouros et al. 2019; Lin, Wang, and Liu 2021a,b). Such approaches require largescale 2D/3D ground truth labels for supervision, restricting their applications on unlabeled in-the-wild data that is abundantly available. In the absence of 3D ground-truth labels, e.g., SMPL pose and shape parameters, several recent works leverage more easily obtained 2D ground-truth, such as 2D keypoints and silhouette (Pavlakos et al. 2018; Tan, Budvytis, and Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Cipolla 2017), for a weak supervision. To further alleviate the reliance on paired 2D and 3D ground-truths, attempts are made to regress 3D human pose and shape in a selfsupervised manner (Tung et al. 2017a; Kundu et al. 2020; Gong et al. 2022). However, there are still restrictions existing in the previous self-supervised approaches that hinder the generalizability. For example, Gong et al. (2022) requires SMPL data to generate synthetic training data for full supervision, which inherently induces domain gap between synthetic data and real data. Kundu et al. (2020) requires video datasets to generate sequential image pairs for appearance consistency-based self-supervision, which limits its application on datasets with only single-shot images. Different from these methods, we aim to achieve superior generalizability by designing a self-supervised framework that relies only on self-consistency between intermediate representations of images and projected 2D predictions. 
Recently, regressing 3D human mesh from intermediate representations (e.g., 2D joints, silhouettes and IUV maps) has achieved promising performance in the self-supervised setting (Tung et al. 2017a; Mugaludi et al. 2021; Gong et al. 2022). These representations can be automatically extracted from RGB images using off-the-shelf algorithms (Cao et al. 2019; Wu et al. 2019; G¨uler, Neverova, and Kokkinos 2018). Many previous works (Pavlakos et al. 2018; Sengupta, Budvytis, and Cipolla 2020) have explored 2D joints and silhouettes as a combination to provide the pose and shape clues. However, both of them are highly vulnerable to induce pose ambiguity since two different 3D poses may have the same 2D joints and silhouette projection. The depth maps can alleviate such ambiguities and thus can be viewed as a richer substitute to silhouettes. Therefore, we propose to utilize both 2D joints and depth maps that are automatically extracted from images as proxy inputs to infer accurate 3D human meshes. Although depth maps have been employed in the multihuman reconstruction problem using depth-ordering consistency constraints (Jiang et al. 2020), they are still unexplored for self-supervised 3D human reconstruction. To effectively use depth information, we design depth trimming and depthpoint sampling methods for better alignment between input depth and predicted depth. Furthermore, we propose an uncertainty-aware module to automatically learn the reliability of the inputs at body-joint level based on the conThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6422 sistency between 2D joints and depth map. By incorporating these reliability values in the self-consistency losses, the proposed approach can effectively reduce the impacts from noisy and ambiguous inputs while concentrate more on the high-quality information. The contributions of this work can be summarized as follows: • We propose a simple, novel self-supervised framework that relies only on self-consistency between intermediate representations of images and projected 2D predictions. Without using any 2D/3D ground-truths for supervision, our method can be applied on ubiquitous unlabeled inthe-wild data, achieving superior generalizability. • We incorporate depth maps in our framework to strengthen the self-consistency constraints, with depth trimming and depth-point sampling designed for better alignment between input depth and predicted depth. To our best knowledge, this work is the first to exploit depth maps for self-supervised 3D human mesh recovery. • We design an uncertainty-aware module to automatically learn the reliability of the intermediate representations at body-joint level based on the consistency between 2D joints and depth map. The impacts from noisy and ambiguous inputs are effectively reduced by incorporating the reliability values in the self-consistency losses. • We conduct extensive experiments on benchmark datasets and achieve state-of-the-art results against previous methods at similar supervision levels. Related Work 3D Human Pose Estimation The 3D human pose estimation task is commonly formulated as the problem of predicting the 3D positions of body joints from images. Recent approaches can be mainly categorized into image-based and 2D pose-based methods. The image-based approaches employ the end-to-end learning paradigm, estimating 3D joint locations directly from input images (Pavlakos et al. 2017; Tome, Russell, and Agapito 2017; Zhou et al. 2017; Mehta et al. 2017; Sun et al. 
2018; Pavlakos, Zhou, and Daniilidis 2018). For instance, Pavlakos et al. (2017) utilized a volumetric representation for 3D pose and adopted a coarse-to-fine prediction scheme to iteratively refine the 3D joint localization. Sun et al. (2018) proposed an integral regression approach and predicted 3D joint locations in a differentiable way. More recently, Pavlakos, Zhou, and Daniilidis (2018) proposed to use the ordinal depths of human joints as a weak supervision signal to mitigate the need of 3D annotations for 3D pose estimation. However, it is still sub-optimal to train end-to-end 3D pose estimation systems due to the limited availability of 3D captures in the wild and the appearance variations between train and test data. The 2D pose-based approaches take the intermediately predicted 2D pose as input and lift it to the 3D space (Tung et al. 2017b; Moreno-Noguer 2017; Martinez et al. 2017; Zhao et al. 2019; Wang et al. 2018). For example, Martinez et al. (2017) proposed to use a simple multi-layer perceptron network to regress 3D poses from 2D joint locations. Wang et al. (2018) predicted the depth rankings of body joints by a Pairwise Ranking CNN, and used that as a cue to estimate 3D poses from 2D human joint locations. Zhao et al. (2019) proposed a novel Semantic Graph Convolutional Networks (SemGCN) to capture the spatial relationships between joints for 3D pose regression. These methods gain the advantages of existing 2D pose estimation algorithms to obtain the intermediately estimated 2D poses. Different from the aforementioned methods, our goal is to estimate the whole surface geometry of the human body instead of only 3D joint locations, which is more challenging. Monocular 3D Human Mesh Recovery For parametric model-based 3D human pose and shape reconstruction, the goal is to estimate the parameters of the 3D body model, such as SCAPE (Anguelov et al. 2005) and SMPL (Loper et al. 2015). These model-based methods can be further categorized into optimization-based (Bogo et al. 2016; Lassner et al. 2017; Song, Chen, and Hilliges 2020) and regression-based methods (Guler and Kokkinos 2019; Kanazawa et al. 2018; Omran et al. 2018; Choutas et al. 2020; Pavlakos et al. 2018; Tung et al. 2017a). Optimization-based approaches aim to fit a 3D body model to 2D observations, such as body joints (Bogo et al. 2016) and silhouettes (Lassner et al. 2017). For example, Bogo et al. (2016) proposed a fully automatic approach, SMPLify, to fit the SMPL model to 2D keypoints that are detected by a CNN keypoint detector (Pishchulin et al. 2016). Lassner et al. (2017) extended SMPLify by fitting the SMPL model to body surface landmarks and silhouettes. However, their fitting process is typically very slow and sensitive to initialization. Regression-based approaches aim to regress the body model parameters from image pixels (Kanazawa et al. 2018; Omran et al. 2018) or intermediate representations such as 2D keypoints and silhouettes (Pavlakos et al. 2018; Tung et al. 2017a). For instance, Kanazawa et al. (2018) proposed HMR to regress SMPL pose and shape parameters directly from image pixels using joint reprojection loss and adversarial prior. Pavlakos et al. (2018) estimated 2D joint heatmaps and the silhouette first before regressing pose parameters from 2D joints and shape parameters from the silhouette. Kolotouros et al. (2019) combined both optimization and regression approaches in one framework. Within a training loop, they used the regressed estimate to initialize SMPLify (Bogo et al. 
2016), and used the optimized parameters from SMPLify to supervise the learning of the regressor. More recently, transformer models (Vaswani et al. 2017) have been applied to the 3D human mesh recovery domain (Lin, Wang, and Liu 2021a,b), which significantly improves the reconstruction performance.

Self-supervised 3D Human Mesh Recovery Recent model-based works have also provided self-supervised solutions by leveraging synthetic data (Tung et al. 2017a; Mugaludi et al. 2021; Gong et al. 2022, 2023) or paired appearance consistency (Jiang et al. 2020). Mugaludi et al. (2021) proposed a self-adaptive approach that uses synthetic data as a source domain to perform full supervision, and then leverages a topological skeleton extracted from the raw silhouette to perform self-supervised learning when applying the source-trained model to the unlabeled target domain. Gong et al. (2022) proposed a synthetic-training pipeline that utilizes SMPL data to generate synthetic training data for full supervision, and uses 2D joints and IUV maps as proxy inputs to alleviate the synthetic-to-real gap. Different from these methods, our proposed approach does not require any synthetic data to provide 2D/3D supervision, which inherently bypasses the synthetic-to-real gap issue. Kundu et al. (2020) introduced a self-supervised method that relies only on foreground (FG) appearance consistency. This work is close to ours. However, this method requires video datasets to generate sequential image pairs for appearance-consensus-based self-supervision, which prevents its application to datasets with only single-shot images.

Figure 1: Overview of the proposed approach. The entire framework is trained end-to-end with self-supervision from 2D joints and depth maps. Here ⊖ denotes operations to generate relational vectors from the two representations. These relational vectors are the input of the uncertainty-aware module. ⊗ means applying the outputs of the uncertainty-aware module as weights to refine the self-consistency constraints.

Method The overall framework of the proposed approach is illustrated in Fig. 1. Given a single image, we use off-the-shelf 2D joint detection algorithms (Cao et al. 2017; Cao et al. 2019; Wu et al. 2019) and depth map estimation methods (Tang et al. 2019; Jafarian and Park 2021) to generate 2D joints and depth maps of humans, respectively. These 2D joints and depth maps serve as the actual inputs of the network and as the pseudo-labels that guide the training of the whole network. We also introduce an uncertainty-aware module to automatically learn the reliability of the inputs at body-joint level based on the consistency between 2D joints and depth map. The entire framework is trained end-to-end with self-supervision.

SMPL Human Body Model Instead of reconstructing the 3D human mesh by the network directly, we estimate only a small number of SMPL (Loper et al. 2015) parameters, which are sufficient for generating a detailed 3D human mesh by the SMPL model. As a parametric statistical human body model, SMPL represents a 3D human body by Θ, which is composed of pose parameters θ ∈ R72 and shape parameters β ∈ R10. The pose parameters contain the relative rotation of 23 joints in axis-angle representation and the global rotation.
The shape parameters contain the first 10 coefficients of a PCA shape space. Given the parameters Θ = {θ, β}, a triangulated mesh M(θ, β) ∈ R3×N can be generated by the SMPL model, where N = 6890 denotes the number of vertices. The major body joints P3D are defined as a linear combination of mesh vertices. Specifically, P3D = WM, where W is a pretrained linear regressor.

Proxy Inputs Generation In the absence of any 2D/3D ground truths, it is still quite challenging to recover 3D human meshes directly from image pixels. To alleviate this issue, we employ intermediate representations of images (i.e., 2D joints and depth maps of humans) as proxy inputs to the regression network, which provides complementary clues to infer 3D human meshes. These representations focus on human bodies, filtering out the information from illumination, background clutter, etc., and thus can be viewed as a distillation of RGB images.

2D Joints Generation. With the advances in 2D pose detection approaches in recent years, it is convenient to acquire rather reliable human 2D joints using off-the-shelf methods. Specifically, given an input image, we use Keypoint R-CNN (He et al. 2017) to predict 2D joint locations. The 2D joints prediction is denoted as J ∈ RK×2, where K is the number of joints. The 2D joints are further transformed into 2D Gaussian joint heatmaps, F ∈ RH×W×K, where H and W represent image height and width, respectively.

Depth Map Trimming. Compared with other intermediate representations such as 2D joints and silhouettes, much less attention has been put on human depth information for 3D human shape and pose recovery. One main reason is that the estimation of human depth is typically less accurate and less robust compared to other intermediate representations. However, in the self-supervised setting with 2D images only, we find it is beneficial to employ estimated human depth maps for 3D human recovery, which can provide complementary information to 2D joints and alleviate the pose ambiguity issue. Specifically, we employ an unsupervised depth estimation algorithm (Jafarian and Park 2021) to extract depth maps of humans from RGB images. The estimated depth map is denoted as D ∈ RH×W. According to our observations, the original estimated depth maps D are likely to be contaminated by extreme depth values, which results in unreasonable depth ranges for such depth maps, making it hard to align the input depth with the predicted depth. To eliminate the impact of outliers (i.e., the extreme depth values), we introduce a depth trimmer TIQR in our framework. Specifically, we apply the interquartile range (IQR) method to calculate the lower and upper bounds of the depth values and trim off any values that fall outside of this range. Given a depth map D with H × W depth values, we first get the number of valid (i.e., non-zero) depth values, n; the IQR is then calculated as the difference between the third quartile Q3 and the first quartile Q1, where Q1 is the median of the ⌊n/2⌋ smallest values and Q3 is the median of the ⌊n/2⌋ largest values. The lower and upper bounds are defined as

Blower = Q1 − 1.5 × IQR, (1)
Bupper = Q3 + 1.5 × IQR. (2)

Figure 2: Depth maps before and after IQR trimming. The first row shows original depth maps, the second row shows corresponding trimmed depth maps. All depth maps are normalized for visualization.
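As a concrete illustration of the proxy-input preparation described in this subsection, the short NumPy sketch below converts detected joints J into Gaussian heatmaps F and computes the IQR bounds of Eqs. (1)-(2); the clipping rule that applies these bounds is formalized in Eq. (3) just below. The Gaussian bandwidth sigma and the use of percentiles for the quartiles are simplifying assumptions not fixed by the text.

import numpy as np

def joints_to_heatmaps(joints, H, W, sigma=3.0):
    # joints: (K, 2) array of (x, y) pixel locations; returns (H, W, K) Gaussian heatmaps
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    F = np.zeros((H, W, joints.shape[0]), dtype=np.float32)
    for k, (x, y) in enumerate(joints):
        F[..., k] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
    return F

def iqr_trim(depth):
    # Clip a depth map to the IQR-based bounds; zero pixels are treated as invalid background.
    # np.percentile stands in for the median-of-halves quartile definition used in the text.
    valid = depth[depth > 0]
    q1, q3 = np.percentile(valid, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    trimmed = depth.copy()
    trimmed[depth > upper] = upper
    trimmed[(depth > 0) & (depth < lower)] = lower
    return trimmed

# Example on a 224 x 224 crop with 17 COCO-style joints (shapes are assumptions for illustration):
# F = joints_to_heatmaps(J, 224, 224)        # (224, 224, 17)
# D_trimmed = iqr_trim(D)                    # same shape as D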
Finally, the trimmed depth map ˜D = TIQR(D) is formulated as

˜Dij = Bupper, if Dij > Bupper;
˜Dij = Blower, if 0 < Dij < Blower;
˜Dij = Dij, otherwise. (3)

Fig. 2 illustrates the difference between the original and trimmed depth maps, which are normalized for visualization. It shows that the original depth maps in the 1st row are dominated by dark-color areas due to the existence of outliers, while the trimmed depth maps in the 2nd row have a more balanced dark-to-bright color distribution, which is more faithful to the ground truth.

Uncertainty Modeling Due to the differences in viewpoint, occlusion condition, pose topology, etc., the quality of the pre-extracted 2D joints and depth maps usually varies among different image samples as well as among different human parts within a sample. In our self-supervised framework, we take the above intermediate representations as pseudo ground truth labels to guide the learning of the whole network. Therefore, it is critical to quantify the uncertainty of each pseudo ground-truth, so that the reliable ones can be concentrated on while the impact of noisy and ambiguous ones is alleviated. We define the uncertainty at body-joint level based on the consistency between the 2D joints and the depth map from the same image. The measure of consistency between the two representations is based on the symmetrical nature of human bodies, e.g., the length of the left upper-arm is typically equal to that of the right upper-arm. Thus, if the depth difference of the left upper-arm is close to that of the right upper-arm, their bone lengths should also be close in the 2D skeleton derived from the 2D joints. Specifically, we denote each bone in the left body as B^l_i, and the symmetrically corresponding bone in the right body as B^r_i, where i is a shared bone index between the left and right body parts. Since each bone is a connection between two joints, we define the 2D bone length Len(·) as the Euclidean distance between the two connected joints, and the bone depth discrepancy DD(·) as the depth difference between the two joints. According to the bone symmetry and 3D-to-2D projection properties, we have the following observations: (1) the smaller the DD(·), the larger the Len(·); (2) the closer DD(B^l_i) is to DD(B^r_i), the closer Len(B^l_i) is to Len(B^r_i). The conformity of such relationships among bone lengths and depth discrepancies reflects the consistency between the 2D joints and the depth map. Therefore, we can learn the uncertainty of the proxy inputs using the bone lengths and depth discrepancies. We utilize a multilayer perceptron (MLP) network that contains two fully connected layers and a Sigmoid function to automatically learn joint-level reliability:

w = Sigmoid(FC(FC(vLen ⊕ vDD))), (4)

where ⊕ denotes the concatenation operation, vLen is a vector of bone lengths calculated by Len(·), and vDD is a vector of depth discrepancies calculated by DD(·).

Self-Supervised Human Reconstruction Pose Prior. In order to prevent the network from producing physically implausible 3D bodies, we employ a pose prior model in our framework, similar to (Kundu et al. 2020; Jafarian and Park 2021). The pose prior is the decoder of an adversarial auto-encoder (Makhzani et al. 2015) trained on a large amount of 3D pose samples (Mahmood et al. 2019). It learns a latent pose feature ϕ ∈ [−1, 1]32 in the bottleneck, and the decoder is learned to recover the realistic SMPL pose θ ∈ R69 of 23 joints from ϕ.
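To make Eq. (4) above concrete, the sketch below builds the bone-length and depth-discrepancy vectors vLen and vDD from symmetric bone pairs and feeds them to a small two-layer MLP with a sigmoid output (hidden sizes 128 and 64 and 12 joint-level outputs, matching the implementation details reported in the Experiments section). The specific joint indices, the use of absolute depth differences, and the ReLU activations between layers are illustrative assumptions, not the authors' exact design.

import numpy as np
import torch
import torch.nn as nn

def bone_features(joints2d, depth, bone_pairs):
    # Per-bone 2D length Len(.) and depth discrepancy DD(.) used as input to the reliability MLP.
    # joints2d: (K, 2) pixel coordinates assumed to lie inside the depth map.
    v_len, v_dd = [], []
    for a, b in bone_pairs:
        pa, pb = joints2d[a], joints2d[b]
        v_len.append(np.linalg.norm(pa - pb))
        da = depth[int(pa[1]), int(pa[0])]
        db = depth[int(pb[1]), int(pb[0])]
        v_dd.append(abs(da - db))  # absolute difference; the text only says "depth difference"
    return np.array(v_len, dtype=np.float32), np.array(v_dd, dtype=np.float32)

class ReliabilityMLP(nn.Module):
    # Reliability head of Eq. (4): two FC layers plus an output layer with sigmoid.
    def __init__(self, n_bones, n_joints=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_bones, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_joints), nn.Sigmoid(),
        )
    def forward(self, v_len, v_dd):
        return self.net(torch.cat([v_len, v_dd], dim=-1))

# Usage with hypothetical COCO-style symmetric limb bones:
# pairs = [(5, 7), (6, 8), (7, 9), (8, 10), (11, 13), (12, 14), (13, 15), (14, 16)]
# vl, vd = bone_features(J, iqr_trim(D), pairs)
# w = ReliabilityMLP(len(pairs))(torch.from_numpy(vl)[None], torch.from_numpy(vd)[None])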
We only take the decoder from the trained auto-encoder as the pose prior model in our framework, and keep its weights frozen during our self-supervised training.

Network Architecture. Similar to HMR (Kanazawa et al. 2018), we use ResNet-18 (He et al. 2016) as the CNN backbone. The input is the concatenation of the joint heatmaps F and the trimmed depth map ˜D along the channel dimension, which results in a tensor of shape H × W × (K + 1). The output of ResNet is average pooled, which produces features f ∈ R512. The subsequent regression module consists of two fully connected layers with 512 neurons each, followed by an output layer with 48 neurons, which contains the camera parameters {s, R, T}, where s ∈ R and T ∈ R2 denote the scale and translation, respectively, and R ∈ R3 is the global rotation. The output layer also contains the SMPL shape parameter β ∈ R10 and a pose embedding vector ϕ ∈ R32, which is then sent to the pose prior to generate the SMPL pose parameter θ ∈ R69 of 23 joints.

Self-Consistency Loss. To conduct self-consistency on 2D joints, the estimated SMPL parameters Θ = {θ, β} are transformed into 2D joints J′ through 3D joint regression from the reconstructed mesh and weak-perspective projection using the estimated camera parameters. Then the self-consistency loss for 2D joints can be expressed as

Ljoint(J, J′) = Σ_{i=1}^{K} ∥Ji − Ji′∥_2^2 . (5)

For self-consistency on depth maps, the estimated SMPL parameters are transformed into a depth map D′ through differentiable rendering (Kato, Ushiku, and Harada 2018) and weak-perspective projection. However, we do not calculate the L2 distance between ˜D and D′ directly. Instead, we evenly sample depth points on bones in order to obtain better correspondences between ˜D and D′. Specifically, we keep the two depth points on the joints and sample the remaining ones evenly along each bone. The depth loss is calculated between each depth point ˜Di_p and its corresponding point Di′_p, i.e.,

Ldepth( ˜D, D′) = Σ_{i=1}^{m} ∥ ˜Di_p − Di′_p∥_2^2 , (6)

where m is the total number of depth points. The reliability of each depth point is correlated with the reliability of the two joints on the same bone. For simplicity, we define the reliability wpi of each depth point pi as the reliability of the closer of the two joints. We take the depth point reliability as a weight to boost network training. Then the uncertainty-aware depth consistency loss is defined as

L∗depth( ˜D, D′) = Σ_{i=1}^{m} wpi ∥ ˜Di_p − Di′_p∥
2 2 . (7) Our final loss function is defined as L = αjLjoint(J, J′) + αdL∗ depth( ˜D, D′), (8) where αj and αd are loss weights for the joint and depth, respectively. Experiments Datasets In our experiments, we use Human3.6M (Ionescu et al. 2013), 3DPW (Von Marcard et al. 2018) and UP-3D (Lassner et al. 2017) for training. For 3DPW (Von Marcard et al. 2018) and Human3.6M (Ionescu et al. 2013), we report evaluation results using mean per joint position error (MPJPE) and Procrustes-aligned mean per joint position error (PAMPJPE). Note that we only use image data and no 2D/3D annotations from these datasets are involved during training. More detailed information of these datasets is provided in the following. Human3.6M is a large-scale indoor dataset captured in a controlled environment. Its training set contains 7 subjects performing 4 types of actions under 4 camera views. Following the Protocol 2 (Kanazawa et al. 2018), we train our model on 5 subjects (S1, S5, S6, S7, S8) and test on the front-view samples of the rest 2 subjects (S9, S11). All videos are downsampled from 50fps to10fps. 3DPW is an in-the-wild dataset that contains both indoor and outdoor scenes. The dataset has 60 video sequences. Both training and testing sets contain 24 videos, and the rest 12 video are used for validation. Following (Kocabas, Athanasiou, and Black 2020), we use its training data when conducting experiments on 3DPW. UP-3D is an outdoor dataset. It contains more than 8K images. This dataset is only used for training. Implementation Details The MLP network for uncertainty modeling contains two fully connected layers with 128 and 64 neurons, respectively. The output layer contains 12 neurons, which is equal to the number of joints excluding joints on the head. We set the joint loss weight αj = 1 and the depth loss weight αd = 0.04. We use the Adam optimizer (Kingma and Ba 2014) with an initial learning rate of 10−5, and batch size of 64. After training for 10 epochs, we regularize β to remain close to the mean shape. Our experiments run on a single NVIDIA GeForce RTX 3090 GPU. Ablation Study In Table 1, we compare our proposed model with several variants on 3DPW dataset to investigate the contribution of each component. Specifically, we design the following baselines: (1) “Ours ⊖uncertainty” denotes our model using The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6426 Figure 3: Examples of qualitative results on two datasets. Left three columns: 3DPW dataset. Right three columns: Human3.6M dataset. The 3rd and 6th columns show another view of the 3D reconstruction results. Methods MPJPE PA-MPJPE Ours 159.4 89.5 Ours ⊖uncertainty 166.9 93.4 Ours ⊖uncertainty ⊖TIQR 208.2 113.5 Ours ⊖uncertainty ⊖sampling 182.6 99.2 Ours ⊖depth input 165.9 97.6 Table 1: Ablation study on 3DPW dataset. The MPJPE and PA-MPJPE (both in mm) are reported. Ldepth for depth loss instead of using uncertainty-weighted depth loss L∗ depth; (2) “Ours ⊖uncertainty ⊖TIQR” denotes “Ours ⊖uncertainty” using the original depth maps as pseudo ground truths instead of using IQR trimmed depth maps; (3) “Ours ⊖uncertainty ⊖sampling” denotes our method self-supervised on depth maps directly without depth points sampling and uncertainty awareness; (4) “Ours ⊖depth input” denotes our model using only 2D joints as input while keep the same loss functions. All these models are trained on the training images from 3DPW and UP3D. 
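Since all of the comparisons that follow are reported in MPJPE and PA-MPJPE, a brief illustrative NumPy sketch of the two metrics is given below (PA-MPJPE is MPJPE after a similarity Procrustes alignment of the predicted joints to the ground truth). This is a common reference implementation, not the authors' evaluation code.

import numpy as np

def mpjpe(pred, gt):
    # Mean per-joint position error for (K, 3) joint arrays, in the units of the inputs (here mm).
    return np.mean(np.linalg.norm(pred - gt, axis=-1))

def pa_mpjpe(pred, gt):
    # MPJPE after a similarity (Procrustes) alignment of pred to gt.
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(p.T @ g)      # cross-covariance of centered point sets
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    scale = S.sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return mpjpe(aligned, gt)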
Compared with our proposed model, the reconstruction error is increased by 7.5 (MPJPE) and 3.9 (PA-MPJPE) on the baseline “Ours ⊖uncertainty awareness”. This shows the effectiveness of using uncertainty-aware depth loss, which boosts the self-supervised learning more effectively while alleviates the impacts from high-uncertainty depth. The performance of “Ours ⊖uncertainty awareness ⊖TIQR” is further degraded by a large margin, which demonstrates the effectiveness of applying IQR trimming on the depth map. Without depth trimming, the outliers will cause unreasonable depth range, making it hard for the alignment between D and D′ along depth dimension. Actually, its results are even worse than not using depth for self-supervision. Compared with our model, the MPJPE and PA-MPJPE are increased by 6.5 and 8.1 on the baseline “Ours ⊖depth input”, which shows the complementary effect of depth to the 2D joints. In fact, it is highly possible to have pose ambiguSup. Methods MPJPE PAMPJPE Full HMR (2018) 128.1 81.3 SPIN (2019) 98.6 59.2 PyMAF (2021) 92.8 58.9 Weak SMPLify (2016) 199.2 106.1 Mugaludi et al. (2021) 126.3 79.1 (S→R, weak) Self-sup. (use syn.) RGB Only (2019) 105.6 Flow Only (2019) 100.1 Mugaludi et al. (2021) 159.0 95.1 Self-sup. Kundu et al. (2020) 187.1 102.7 Ours 159.4 89.5 Table 2: Comparison with the state-of-the-art methods on 3DPW in terms of MPJPE and PA-MPJPE (both in mm). ity using 2D joints only since two different 3D poses may have same 2D projection. The depth map can alleviate such ambiguities. Comparison with the State-of-the-Art We compare the reconstruction performance of our method with previous state-of-the-art methods of different supervision degrees on 3DPW and Human3.6M datasets. The results are shown in Table 2 and Table 3. On 3DPW dataset (Table 2), our method achieves the state-of-the-art performance among the self-supervised methods. Specifically, our method outperforms Kundu et al. (2020) by a large margin. The reconstruction error of our method is decreased by 27.7 (MPJPE) and 13.2 (PAMPJPE). Our method also outperforms Flow Only (Doersch and Zisserman 2019) and Mugaludi et al. (2021) in terms of PA-MPJPE. Note that Mugaludi et al. (2021) is fully supervised on the synthetic data (source domain) and adapt to a target domain with self-adaption, while we do not need any synthetic data for 2D/3D supervision. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6427 Sup. Methods PA-MPJPE Full Lassner et al. (2017) 93.9 Pavlakos et al. (2018) 75.9 HMR (2021a) 56.8 Kolotouros et al. (2019) 50.1 SPIN (2019) 41.1 Weak HMR (unpaired) (2021a) 66.5 SPIN (unpaired) (2019) 62.0 Mugaludi et al. (2021) 58.1 (S→R, weak) Self-sup. (use syn.) Tung et al. (2017a) 98.4 Mugaludi et al. (2021) 81.3 Self-sup. Rhodin et al. (2018) 98.2 Kundu et al. (2020) 90.5 Ours 85.4 Table 3: Comparison with the state-of-the-art methods on Human3.6M dataset using Protocol 2. On Human3.6 dataset (Table 3), our method consistently outperforms Kundu et al. (2020), which is directly comparable with our method due to the similar supervision setting. The reconstruction error of our method is decreased by 5.1 (PA-MPJPE) compared with Kundu et al. (2020), even though we do not require any paired images for training. In addition, our result is also comparable with Mugaludi et al. (2021), which requires synthetic data for full supervision, while we do not have such requirements. Qualitative Results We have also evaluated our models qualitatively on 3DPW and Human3.6 datasets. 
Some examples of our results are presented in Fig. 3. We observe that the postures of humans are well captured with our proposed method, although we do not use any 2D/3D ground-truths from these dataset for training. By incorporating the estimated depth maps in our framework, the pose ambiguity issue is relatively alleviated. In Fig. 4, we qualitatively compare the performance of our method with baseline models on the 3DPW dataset. It shows that the reconstruction results of our method are consistently better than all the baseline models. In addition, the performance of baseline “Ours ⊖uncertainty” is also better than the other two baselines qualitatively, which is in line with the quantitative results in Table 1. These further demonstrate the effectiveness of each proposed component. Limitations. In the circumstance of severe occlusion, it is quite challenging to get accurate intermediate representations from images, thus our method tends to fail in such cases (see Fig. 5). From the qualitative results in Fig. 3, we also observe that the reconstruction on feet and hands is still not so desirable compared with existing supervised methods. The main reason is that we use a sparse 2D pose representation which has no joint on feet or hands other than the ankle or wrist joints. This issue can be alleviated by adding more extra joints on the region of interest to enhance the accuracy based on the task need. Ours Input Baseline 1 Baseline 2 Baseline 3 Figure 4: Qualitative comparison with baseline models on 3DPW. Our results are in the 2nd column. The following columns show results of baseline “Ours ⊖uncertainty”, “Ours ⊖uncertainty ⊖TIQR”, and “Ours ⊖uncertainty ⊖ sampling”, respectively. Figure 5: Failure cases caused by severe occlusion. Conclusion In this work, we have presented a simple, novel selfsupervised framework for 3D human mesh recovery from monocular images with uncertainty-aware learning. The proposed method does not require any 2D/3D ground-truths for supervision, relying only on self-consistency between intermediate representations, i.e., 2D joints and depth maps, and projected ones after 3D human reconstruction. We incorporate depth maps in our framework to strengthen the self-consistency constraints, with depth trimming and depthpoint sampling methods designed for better alignment between input depth and predicted depth. Furthermore, we design an uncertainty-aware module to automatically learn the reliability of the intermediate representations at body-joint level based on the consistency between two representations. The impacts from noisy and ambiguous inputs are effectively reduced by incorporating the reliability values in the self-consistency losses. Experiments demonstrate the effectiveness of our proposed method. Acknowledgements We would like to thank the reviewers for their valuable comments. This work was partially supported by the NSFC 61972353 and the NSF under Grant Numbers IIS-1816511, OAC-1845962, OAC-1910469, and OAC-2311245. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6428 References Anguelov, D.; Srinivasan, P.; Koller, D.; Thrun, S.; Rodgers, J.; and Davis, J. 2005. Scape: shape completion and animation of people. In ACM SIGGRAPH 2005 Papers, 408–416. Bogo, F.; Kanazawa, A.; Lassner, C.; Gehler, P.; Romero, J.; and Black, M. J. 2016. Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image. In European conference on computer vision, 561–578. Springer. 
Cao, Z.; Hidalgo Martinez, G.; Simon, T.; Wei, S.; and Sheikh, Y. A. 2019. OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cao, Z.; Simon, T.; Wei, S.-E.; and Sheikh, Y. 2017. Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. In CVPR. Choutas, V.; Pavlakos, G.; Bolkart, T.; Tzionas, D.; and Black, M. J. 2020. Monocular expressive body regression through body-driven attention. In European Conference on Computer Vision, 20–40. Springer. Doersch, C.; and Zisserman, A. 2019. Sim2real transfer learning for 3d human pose estimation: motion to the rescue. Advances in Neural Information Processing Systems, 32. Gong, X.; Song, L.; Zheng, M.; Planche, B.; Chen, T.; Yuan, J.; Doermann, D.; and Wu, Z. 2023. Progressive Multi-View Human Mesh Recovery with Self-Supervision. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 676–684. Gong, X.; Zheng, M.; Planche, B.; Karanam, S.; Chen, T.; Doermann, D.; and Wu, Z. 2022. Self-supervised Human Mesh Recovery with Cross-Representation Alignment. In European Conference on Computer Vision, 212–230. Springer. Guler, R. A.; and Kokkinos, I. 2019. Holopose: Holistic 3D human reconstruction in-the-wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10884–10894. G¨uler, R. A.; Neverova, N.; and Kokkinos, I. 2018. Densepose: Dense human pose estimation in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7297–7306. He, K.; Gkioxari, G.; Doll´ar, P.; and Girshick, R. 2017. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, 2961–2969. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. Ionescu, C.; Papava, D.; Olaru, V.; and Sminchisescu, C. 2013. Human3. 6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE transactions on pattern analysis and machine intelligence, 36(7): 1325–1339. Jafarian, Y.; and Park, H. S. 2021. Learning high fidelity depths of dressed humans by watching social media dance videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12753–12762. Jiang, W.; Kolotouros, N.; Pavlakos, G.; Zhou, X.; and Daniilidis, K. 2020. Coherent reconstruction of multiple humans from a single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5579– 5588. Kanazawa, A.; Black, M. J.; Jacobs, D. W.; and Malik, J. 2018. End-to-end recovery of human shape and pose. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7122–7131. Kato, H.; Ushiku, Y.; and Harada, T. 2018. Neural 3D mesh renderer. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3907–3916. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Kocabas, M.; Athanasiou, N.; and Black, M. J. 2020. Vibe: Video inference for human body pose and shape estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5253–5263. Kolotouros, N.; Pavlakos, G.; Black, M. J.; and Daniilidis, K. 2019. Learning to reconstruct 3D human pose and shape via model-fitting in the loop. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2252–2261. 
Kolotouros, N.; Pavlakos, G.; and Daniilidis, K. 2019. Convolutional mesh regression for single-image human shape reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4501–4510. Kundu, J. N.; Rakesh, M.; Jampani, V.; Venkatesh, R. M.; and Venkatesh Babu, R. 2020. Appearance consensus driven self-supervised human mesh recovery. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, 794–812. Springer. Lassner, C.; Romero, J.; Kiefel, M.; Bogo, F.; Black, M. J.; and Gehler, P. V. 2017. Unite the people: Closing the loop between 3D and 2D human representations. In Proceedings of the IEEE conference on computer vision and pattern recognition, 6050–6059. Lin, K.; Wang, L.; and Liu, Z. 2021a. End-to-end human pose and mesh reconstruction with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1954–1963. Lin, K.; Wang, L.; and Liu, Z. 2021b. Mesh graphormer. In Proceedings of the IEEE/CVF international conference on computer vision, 12939–12948. Loper, M.; Mahmood, N.; Romero, J.; Pons-Moll, G.; and Black, M. J. 2015. SMPL: A skinned multi-person linear model. ACM transactions on graphics (TOG), 34(6): 1–16. Mahmood, N.; Ghorbani, N.; Troje, N. F.; Pons-Moll, G.; and Black, M. J. 2019. AMASS: Archive of motion capture as surface shapes. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5442–5451. Makhzani, A.; Shlens, J.; Jaitly, N.; Goodfellow, I.; and Frey, B. 2015. Adversarial autoencoders. arXiv preprint arXiv:1511.05644. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6429 Martinez, J.; Hossain, R.; Romero, J.; and Little, J. J. 2017. A simple yet effective baseline for 3D human pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, 2640–2649. Mehta, D.; Rhodin, H.; Casas, D.; Fua, P.; Sotnychenko, O.; Xu, W.; and Theobalt, C. 2017. Monocular 3D human pose estimation in the wild using improved cnn supervision. In 2017 international conference on 3D vision (3DV), 506– 516. IEEE. Moreno-Noguer, F. 2017. 3D human pose estimation from a single image via distance matrix regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2823–2832. Mugaludi, R. R.; Kundu, J. N.; Jampani, V.; et al. 2021. Aligning silhouette topology for self-adaptive 3D human pose recovery. Advances in Neural Information Processing Systems, 34: 4582–4593. Omran, M.; Lassner, C.; Pons-Moll, G.; Gehler, P.; and Schiele, B. 2018. Neural body fitting: Unifying deep learning and model based human pose and shape estimation. In 2018 international conference on 3D vision (3DV), 484– 494. IEEE. Pavlakos, G.; Zhou, X.; and Daniilidis, K. 2018. Ordinal depth supervision for 3D human pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7307–7316. Pavlakos, G.; Zhou, X.; Derpanis, K. G.; and Daniilidis, K. 2017. Coarse-to-fine volumetric prediction for single-image 3D human pose. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7025–7034. Pavlakos, G.; Zhu, L.; Zhou, X.; and Daniilidis, K. 2018. Learning to estimate 3D human pose and shape from a single color image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 459–468. Pishchulin, L.; Insafutdinov, E.; Tang, S.; Andres, B.; Andriluka, M.; Gehler, P. V.; and Schiele, B. 2016. 
Deepcut: Joint subset partition and labeling for multi person pose estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4929–4937. Rhodin, H.; Salzmann, M.; and Fua, P. 2018. Unsupervised geometry-aware representation for 3d human pose estimation. In Proceedings of the European conference on computer vision (ECCV), 750–767. Sengupta, A.; Budvytis, I.; and Cipolla, R. 2020. Synthetic training for accurate 3D human pose and shape estimation in the wild. arXiv preprint arXiv:2009.10013. Song, J.; Chen, X.; and Hilliges, O. 2020. Human body model fitting by learned gradient descent. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XX 16, 744– 760. Springer. Sun, X.; Xiao, B.; Wei, F.; Liang, S.; and Wei, Y. 2018. Integral human pose regression. In Proceedings of the European Conference on Computer Vision (ECCV), 529–545. Tan, J. K. V.; Budvytis, I.; and Cipolla, R. 2017. Indirect deep structured learning for 3D human body shape and pose prediction. BMVC. Tang, S.; Tan, F.; Cheng, K.; Li, Z.; Zhu, S.; and Tan, P. 2019. A neural network for detailed human depth estimation from a single image. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 7750–7759. Tian, Y.; Zhang, H.; Liu, Y.; and Wang, L. 2023. Recovering 3d human mesh from monocular images: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence. Tome, D.; Russell, C.; and Agapito, L. 2017. Lifting from the deep: Convolutional 3D pose estimation from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2500–2509. Tung, H.-Y.; Tung, H.-W.; Yumer, E.; and Fragkiadaki, K. 2017a. Self-supervised learning of motion capture. Advances in neural information processing systems, 30. Tung, H.-Y. F.; Harley, A. W.; Seto, W.; and Fragkiadaki, K. 2017b. Adversarial inverse graphics networks: Learning 2D-to-3D lifting and image-to-image translation from unpaired supervision. In 2017 IEEE International Conference on Computer Vision (ICCV), 4364–4372. IEEE. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30. Von Marcard, T.; Henschel, R.; Black, M. J.; Rosenhahn, B.; and Pons-Moll, G. 2018. Recovering accurate 3d human pose in the wild using imus and a moving camera. In Proceedings of the European conference on computer vision (ECCV), 601–617. Wang, M.; Chen, X.; Liu, W.; Qian, C.; Lin, L.; and Ma, L. 2018. Drpose3D: Depth ranking in 3D human pose estimation. arXiv preprint arXiv:1805.08973. Wu, Y.; Kirillov, A.; Massa, F.; Lo, W.-Y.; and Girshick, R. 2019. Detectron2. https://github.com/facebookresearch/ detectron2. Accessed: 2023-04-10. Zhang, H.; Tian, Y.; Zhou, X.; Ouyang, W.; Liu, Y.; Wang, L.; and Sun, Z. 2021. Pymaf: 3d human pose and shape regression with pyramidal mesh alignment feedback loop. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 11446–11456. Zhao, L.; Peng, X.; Tian, Y.; Kapadia, M.; and Metaxas, D. N. 2019. Semantic graph convolutional networks for 3D human pose regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3425–3435. Zhou, X.; Huang, Q.; Sun, X.; Xue, X.; and Wei, Y. 2017. Towards 3D human pose estimation in the wild: a weaklysupervised approach. In Proceedings of the IEEE International Conference on Computer Vision, 398–407. 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6430 | 2024 | 714 |
18,534 | HORIZON: High-Resolution Semantically Controlled Panorama Synthesis Kun Yan1, Lei Ji2, Chenfei Wu2, Jian Liang3, Ming Zhou4, Nan Duan2, Shuai Ma1 1SKLSDE Lab, Beihang University, 2Microsoft Reseach Asia, 3Peking University, 4Langboat Technology {kunyan,mashuai}@buaa.edu.cn, {leiji,chewu,nanduan}@microsoft.com, [email protected], [email protected] Abstract Panorama synthesis endeavors to craft captivating 360-degree visual landscapes, immersing users in the heart of virtual worlds. Nevertheless, contemporary panoramic synthesis techniques grapple with the challenge of semantically guiding the content generation process. Although recent breakthroughs in visual synthesis have unlocked the potential for semantic control in 2D flat images, a direct application of these methods to panorama synthesis yields distorted content. In this study, we unveil an innovative framework for generating high-resolution panoramas, adeptly addressing the issues of spherical distortion and edge discontinuity through sophisticated spherical modeling. Our pioneering approach empowers users with semantic control, harnessing both image and text inputs, while concurrently streamlining the generation of high-resolution panoramas using parallel decoding. We rigorously evaluate our methodology on a diverse array of indoor and outdoor datasets, establishing its superiority over recent related work, in terms of both quantitative and qualitative performance metrics. Our research elevates the controllability, efficiency, and fidelity of panorama synthesis to new levels. Introduction Panoramic images and videos are becoming increasingly popular, due to the ability to provide an unlimited field of view (FOV) compared with traditional, planar images. With panoramic images, viewers can navigate 360° views and shift the viewing perspective in all directions, capturing a wealth of environmental detail. Additionally, these images provide an immersive experience that opens up a range of possibilities for interactive applications in a variety of domains, such as advertising, entertainment, and the design industry. However, the process of panorama acquisition typically requires significant human efforts or specialized panoramic equipment. Thus, the development of automated panoramic synthesis techniques is becoming increasingly important as virtual and augmented reality technology and devices, such as head-mounted displays and glasses, continue to evolve. This technique not only helps designers save time and effort when creating and editing blueprints, but it also reduces the cost associated with specialized panoramic equipment. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Model Task 1: Panorama Generation Task 2: View Extrapolation Task 3: Guided Generation The sky is blue with cloud and there is water, and house around. Figure 1: HORIZON supports multitype panorama synthesis Panoramic images possess two unique characteristics: spherical distortion in distinct spatial locations and continuity between the left and right boundaries as compared to planar images. Current research on panoramic image synthesis primarily focuses on generating a spherical image with a large FOV from a single or a sequence of FOV images.(Sumantri and Park 2020) introduced the use of equirectangular projection for the generation of realistic spherical images from multiple images, addressing distorted projection issues commonly encountered when working with flat images. 
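For context, the equirectangular parameterization referred to above maps image columns to longitude and rows to latitude, which is where both the polar distortion and the left-right wrap-around continuity of panoramas come from. The short NumPy sketch below shows the standard mapping only; it is not the paper's own formulation, and the axis conventions are assumptions.

import numpy as np

def equirect_to_sphere(H, W):
    # Map each pixel of an H x W equirectangular panorama to longitude/latitude and a unit
    # 3D direction; columns 0 and W-1 are adjacent on the sphere, and rows near the poles
    # cover far less solid angle than rows near the equator (the source of spherical distortion).
    u = (np.arange(W) + 0.5) / W           # [0, 1) across the full 360 degrees
    v = (np.arange(H) + 0.5) / H           # [0, 1) from the north pole to the south pole
    lon = (u - 0.5) * 2.0 * np.pi          # [-pi, pi)
    lat = (0.5 - v) * np.pi                # (+pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)       # both (H, W)
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    return lon, lat, dirs                  # dirs has shape (H, W, 3)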
(Hara, Mukuta, and Harada 2021) proposed a method for generating spherical images without discontinuity. However, these methods lack the ability for user control, which is crucial in the synthesis of virtual worlds. In particular, the ability to control generated content with style or semantic guidance is crucial for achieving desired images. As designers often invest a significant amount of time into creating and editing images with similar backgrounds but different semantics or styles. Research has been conducted in order to mimic human’s capability to easily imagine 360-degree panoramic sceneries. This includes methods for image guidance for view extrapolation based on similar scene categories (Zhang et al. 2013), scene label guidance for controlling style using a co-modulated GAN (Karimi Dastjerdi et al. 2022), and text guidance for image synthesis (Chen, Wang, and Liu 2022). Both view extrapolation and panoramic image synthesis tasks can benefit from a variety of inputs to guide the generation process and make this technique more flexible and useful in real-world applications. As shown in Figure 1, the inputs can be text descripThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6431 tions and/or visual inputs for semantic and style guidance. It is worth noting that recent text-to-image generation methods, such as VAE (Ramesh et al. 2021), GAN (Esser, Rombach, and Ommer 2021), and Diffusion (Dhariwal and Nichol 2021), have achieved great success in terms of semantic relevance and controllability for planar images. These works leverage the capabilities of attention mechanism (Vaswani et al. 2017) to take various types of guidance as generating conditions. However, these methods are difficult to apply directly to panoramic image synthesis due to a lack of consideration for spherical characteristics. To the best of our knowledge, there are currently no existing frameworks that are general enough to handle all of these spherical properties and controlling guidance for panorama synthesis in a unified framework. This is the motivation behind our work, which aims to explore advanced guided image generation techniques and design specific modules, mechanisms, and training strategies for spherical structures. Additionally, it is imperative to generate high-resolution panoramic images in order to enhance the immersive experience for Virtual Reality and Augmented Reality. However, simply adopting current state-of-the-art methods such as DALL·E-like (Esser, Rombach, and Ommer 2021) for high-resolution panorama generation can result in spherical distortion and low efficiency. Recent efforts have attempted to address this issue by using separate models to first generate low-resolution panoramas and then upscaling to high-resolution, such as (Akimoto, Matsuo, and Aoki 2022a; Chen, Wang, and Liu 2022). These methods, however, can still result in artifacts caused by error accumulation. It is worth noting that we find properly managing highresolution context within a defined module can alleviate this problem and produce better results. In this paper, we introduce a novel, versatile framework for generating high-resolution panoramic images that have well-preserved spherical structure and easy-to-use semantic controllability. Specifically, we employ a two-stage procedure that includes learning an image encoder and decoder in the first step and a reconstruction model in the second step. 
In the second stage, we propose a new method called Spherical Parallel Modeling(SPM) that not only improves efficiency through parallel decoding, but also addresses the issue of local distortion through the use of spherical relative embedding and spherical conditioning improvement. Additionally, we have found that the panoramic pictures generated by SPM no longer have the problem of screen tearing when the left and right edges are spliced, which means that the generated panoramic pictures can be directly viewed in VR devices without the need for further editing. The contributions can be summarized as: • Alleviating the spherical distortion and edge incontinuity problem through spherical modeling. • Supporting semantic control through both image and text guidance. • Effectively generating high-resolution panoramas through parallel decoding. Related Work Panorama Synthesis Panorama synthesis, a wellestablished task in computer vision, involves various input types such as overlapped image sequences (Szeliski 2006; Brown and Lowe 2007), sparse images (Sumantri and Park 2020), and single images (Akimoto et al. 2019; Hara, Mukuta, and Harada 2021). Traditional methods employed image matching and stitching (Szeliski 2006), while recent generative models utilize GAN-based methods (Akimoto et al. 2019; Koh et al. 2022) and autoregressive models (Rockwell, Fouhey, and Johnson 2021) for panorama generation. In computer graphics, view synthesis techniques, including geometry and layout prediction, optical flow, depth, and illumination estimation (Song and Funkhouser 2019; Xu et al. 2021; Zhang, Wang, and Liu 2022; Wang et al. 2022; Somanath and Kurz 2021), are often studied. Spherical structure and texture are modeled using cube maps (Han and Suh 2020), cylinder convolution (Liao et al. 2022), predicted panoramic three-dimensional structures (Song et al. 2018), and scene symmetry (Hara, Mukuta, and Harada 2021).Most previous generative models are limited in handling fixed scenes and low-resolution images. Recent research (Sumantri and Park 2020; Akimoto, Matsuo, and Aoki 2022a) addresses these limitations using hierarchical synthesis networks (Sumantri and Park 2020), U-Net structures (Akimoto, Matsuo, and Aoki 2022a), and separate models for generating and upscaling low-resolution images (Chen, Wang, and Liu 2022). Our proposed method tackles high-resolution panorama context and preserves spherical structure within a single module. User-controlled semantic content generation is essential for interactive panorama generation. Recent works adopt scene symmetry with CVAE-based methods (Hara, Mukuta, and Harada 2021) or scene category with GAN-based methods (Karimi Dastjerdi et al. 2022) for view extrapolation. Our versatile framework addresses spherical, usercontrolling, and high-resolution panoramas in a unified manner, incorporating mechanisms to handle spherical distortion, continuity, and semantic guidance without additional tuning. We put a straightforward comparison of these most recent relevant efforts in Table 1, and detailed discussion can be found in the appendix. Image Generation Previous image generation works primarily employ generative adversarial networks (GAN) (Goodfellow et al. 2014; Reed et al. 2016; Xu et al. 2018; Qiao et al. 2019; Zhang et al. 2021a), VAEbased methods (Kingma and Welling 2013; van den Oord, Vinyals, and kavukcuoglu 2017), and denoising diffusion models (Nichol et al. 2021; Gu et al. 2021; Kim and Ye 2021). 
With the advent of transformer models, two-stage methods have emerged as a new paradigm for pretraining with web-scale image and text pairs, demonstrating effectiveness in generalizing high semantically related open-domain images (Ramesh et al. 2021; Ding et al. 2021; Zhang et al. 2021b). These methods tokenize images into discrete tokens using VQVAE (van den Oord, Vinyals, and kavukcuoglu 2017) or VAGAN (Esser, Rombach, and The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6432 Method High Resolution? Spherical Coherence? Global Semantic Condition? Multiscale Semantic Editing? Inference Efficiency? COCO-GAN (2019) ✗ ◗ ◗ ✗ ÷ InfinityGAN (2022) ✓ ✗ ✓ ✓ ÷ LDM (2021) ✓ ✗ ✓ ✗ ♂ Text2light (2022) ✓ ◗ ◗ ✗ ♂ OminiDreamer (2022b) ✗ ◗ ✗ ✗ ) Horizon ✓ ✓ ✓ ✓ ÷ Table 1: The comparison between our method and the most recent relevant methods. ◗represents partially satisfying the property. Find detailed evaluations and explanations in the appendix. Ommer 2021), then generate visual tokens for decoding into real images. Efforts have been made to improve high-resolution planar image generation (Esser, Rombach, and Ommer 2021; Chang et al. 2022; Wu et al. 2022). However, these models often produce blurred or teared artifacts, making them unsuitable for panoramic scenarios. Guided image generation methods achieve superior performance (Ramesh et al. 2021; Ding et al. 2021; Zhang et al. 2021b), but applying existing models to panoramic generation without considering the unique spherical structure remains challenging. Method The overall training is a two-stage procedure, similar to (Ramesh et al. 2021; Ding et al. 2021). The first stage is to train an encoder for image/view representation (discrete visual tokens in this paper) and a decoder for image generation, both of which are frozen in the second stage. The second stage is to learn a reconstruction model based on the discrete visual representation. • Stage 1. Every equirectangular projected panoramic image with a resolution of 768x1,536 is first divided into 3x6=18 RGB view patches, each with a resolution of 256x256. Then we train a VQGAN(Esser, Rombach, and Ommer 2021) on every view patch separately. The encoder of VQGAN compresses each RGB view patch into a 16×16 grid of view tokens. The overall view token dictionary has a size of 16,384 possible values. As a result, each panoramic image has 18×16×16 view tokens. • Stage 2. All 18 groups of view tokens from a single panoramic image are modeled as a whole context to incrementally learn the reconstruction of all view tokens. We progressively develop auto-regressive modeling, local parallel modeling, and the newly proposed spherical parallel modeling detailed described in the following subsections. We apply the off-the-shelf model in the first stage and mainly devote our effort to effectively modeling the prior in the second stage. Due to the large number of view tokens for a single panorama (18x16x16=4608), it is still a non-trivial problem to learn the reconstruction. We should balance the quality, efficiency, and controllability with considerable refinement. In the following subsections, we will describe our progressive attempts and corresponding design considerations. Auto-Regressive Modeling One intuitive way is to directly employ a auto-regressive transformer decoder to generate 4608 view tokens one by one. 
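To make the sequence length concrete, the Stage-1 layout above fixes the number of discrete tokens that the second stage must model. A minimal sketch of that arithmetic follows; the constant names are ours, not from the paper's code.

```python
# Stage-1 token layout: a 768x1536 equirectangular panorama is split into a
# 3x6 grid of 256x256 view patches, each compressed by the VQGAN encoder into
# a 16x16 grid of discrete tokens drawn from a shared dictionary.
PATCH_ROWS, PATCH_COLS = 3, 6      # 768/256 rows, 1536/256 columns
TOKENS_PER_SIDE = 16               # 256x256 patch -> 16x16 token grid
VOCAB_SIZE = 16_384                # size of the shared view-token dictionary

tokens_per_patch = TOKENS_PER_SIDE ** 2                    # 256
total_tokens = PATCH_ROWS * PATCH_COLS * tokens_per_patch  # 18 * 256 = 4608
print(tokens_per_patch, total_tokens)
```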
However, due to the quadraticity of the attention mechanism of the transformer itself, directly inputting 4608 tokens into the model will bring huge memory consumption and great difficulties to the training of the model. At the same time, the sequence is too long for the model to converge efficiently. We noticed that, as the relative distance increases, the impact of adjacent tokens becomes weak or even negative for the quality. Therefore, we shrink the attention scope for both efficiency and effectiveness consideration. Specifically, the range of attention for each view patch is limited to 2 surrounding view patches on the left and above, and autoregressively performs the prediction within the current patch. The interval of attention is shown at the top of Figure 2. After making this improvement, we have been able to generate decent high-resolution panoramas, which are also used as our first baseline ARM. Local Parallel Modeling Although ARM makes high-resolution panorama generation basically feasible, flattening the view patch into a onedimensional sequence of tokens in raster scan order is still not an optimal and efficient modeling solution. Since the length of the autoregressive sequence still grows quadratically, it not only presents a challenge for modeling long-term correlations but also makes decoding intractable. Inspired by MaskGIT(Chang et al. 2022), we adopt the Masked Visual Token Modeling(MVTM) into the view modeling process, which can be formulated as: LLPM = −E X ∀i∈[1,N],maski=1 log p (yi | YM; YW ) (1) For every training pass, sample a subset of tokens and replace them with a special [MASK] token. The number of masked tokens is parameterized by a scheduling function ⌈cos(r ∗π/2) · N⌉, r is a real number uniformly sampled from 0 to 1, N is the total number of view tokens in current view patch. Masked token sequence YM and ground truth view The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6433 tokens from surrounding view patch YW are fed into a multilayer bidirectional transformer to predict the probabilities p (yi | YM; YW ) for each masked token, where the negative log-likelihood is computed as the cross-entropy between the ground-truth and prediction. During inference, we use a similar iterative decoding technique with a constant step T, initially all tokens in the current patch are masked, at each step t only (1 −(cos πt 2T ))N tokens with higher confidence are kept, others will be refined during further steps. We named this adapted version of MaskGIT as Local Parallel Modeling (LPM), that is, the modeling is applied in parallel for each local view patch. By applying this strategy, we shorten the inference speed by 64 times. However, we also observe a significant performance drop compared with ARM. Spherical Parallel Modeling Both ARM and LPM modules regard each patch view equally in the image, which do not take the spherical characteristics of panoramic images into consideration in model design. They assume that the visual features after spherical projection are translation-invariant on the two-dimensional plane. This is obviously not in line with the actual situation. We observed that under the ARM method, the model can still maintain a strong relative positional relationship, and according to the current sequence order, it can be deduced what degree of deformation should be used to generate the current view token. 
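For reference, the LPM masking and decoding schedules just described reduce to two short formulas. The sketch below follows them literally; the exact rounding and how confidence ties are broken during iterative decoding are our assumptions.

```python
import math, random

N = 16 * 16          # tokens in a single 256x256 view patch

def mvtm_mask_count(n_tokens=N):
    """Training-time number of masked tokens: ceil(cos(r*pi/2) * N), r ~ U(0, 1)."""
    r = random.random()
    return math.ceil(math.cos(r * math.pi / 2) * n_tokens)

def kept_per_step(T, n_tokens=N):
    """Cumulative number of tokens kept after decoding step t = 1..T:
    (1 - cos(pi*t/(2*T))) * N, so every token is fixed by the final step."""
    return [round((1 - math.cos(math.pi * t / (2 * T))) * n_tokens)
            for t in range(1, T + 1)]

print(mvtm_mask_count())   # somewhere in [1, 256]
print(kept_per_step(8))    # monotonically increasing, ends at 256
```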
However, under the LPM method, the generation of the current token is no longer strictly constrained by the previous token, and sequence order is no longer an important factor for model learning goals. Naturally, the panoramic images generated by LPM have distorted local details, which are relatively weak in performance indicators such as FID. In this section, we describe a new Spherical Parallel Modeling(SPM) method that not only maintains the efficiency of parallel decoding but also alleviates the local distortion problem through the spherical relative embedding and spherical conditioning improvement. Besides, we found that the panoramic images generated by SPM no longer have the problem of screen tearing when the left and right edges are spliced. Spherical Relative Embedding Relative positional embedding effectively captures positional information during attention, particularly for spherical properties. Formally, a positional encoding function f(x, l) is defined for item x at position l. For items q and k at positions m and n, the inner product between f(q, m) and f(k, n) depends on q, k, and their relative position m −n. The dot product between two vectors is a function of their magnitudes and the angle between them. Rotary Position Embedding (RoPE) (Su et al. 2021) encodes text token embedding with an absolute position using a rotation matrix, incorporating explicit relative position dependency in self-attention. Embeddings are treated as complex numbers and positions as pure rotations. During attention, if both query and key are shifted by the same amount, changing the absolute but not relative position, both representations are rotated similarly, maintaining the angle and dot product between them. The function solution that satisfies the above requirement can be formulated as below: f(q, m) = M1 M2 ... Md/2 q1 q2 ... qd = RmQm = RmWqXm (2) where Mj = cos mθj −sin mθj sinmθj cos mθj ! , Rm is the block diagonal rotation matrix, Wq is the learned query weights, and Xm is the embedding of the m-th token. For query k, a similar corresponding equation is applied. When extending to the 2dimensional case, the rotation matrix should correlate with both coordinates x and y: M x,y = cos xθ −sin xθ 0 0 sin xθ cos xθ 0 0 0 0 cos yθ −sin yθ 0 0 sin yθ cos yθ (3) However, only two-dimensional relative position embedding cannot represent the relative positional relationship of the spherical surface. This is manifested in two aspects. One is that the distance between the plane and the spherical surface is measured in different ways. Second, the coordinates of the same latitude have a ring-shaped positional relationship, that is, for a token sequence 0 . . . m at the same latitude, the positional embedding of token 0 and the positional embedding of token m should be as close as possible. To satisfy this property, we re-derived the rotation matrix, instead of the Θ in the original RoPE: Θ = θi = 10000−2(i−1)/d, i ∈[1, 2, . . . , d/2] (4) We define Θsphere as: Θshpere,x = θi = −2(i −1) ∗2π d ∗w , i ∈[1, 2, . . . , d/2] , (5) Θshpere,y = θi = −2(i −1) ∗π d ∗h , i ∈[1, 2, . . . , d/2] , (6) where x, y are the different axis of the spherical surface, w is the length of token sequences along the x axis, and h is the length along the y axis. Note that as Θshpere,x represents latitude, the numerator has a factor 2π, which makes the rotation of the sequence head and tail as close as possible. While Θshpere,y does not keep this property, as the poles of a sphere are far from each other naturally. 
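The modified frequencies of Eq. (5)-(6) and the block-wise rotations of Eq. (2)-(3) can be sketched as follows. How the channel pairs are divided between the x and y axes is our assumption (the text only fixes the block-diagonal structure), so this is an illustration rather than the paper's implementation.

```python
import numpy as np

def sphere_thetas(d, w, h):
    """Per-pair rotation frequencies from Eq. (5)-(6): d is the feature dim,
    w and h the token-grid lengths along the x (longitude) and y (latitude) axes."""
    i = np.arange(1, d // 2 + 1)
    theta_x = -2 * (i - 1) * 2 * np.pi / (d * w)   # extra 2*pi: x wraps around the ring
    theta_y = -2 * (i - 1) * np.pi / (d * h)       # only pi: the poles do not wrap
    return theta_x, theta_y

def rotate_pairs(vec, angles):
    """Apply the 2x2 rotation blocks M_j of Eq. (2) to consecutive channel pairs."""
    c, s = np.cos(angles), np.sin(angles)
    out = np.empty_like(vec, dtype=float)
    out[0::2] = c * vec[0::2] - s * vec[1::2]
    out[1::2] = s * vec[0::2] + c * vec[1::2]
    return out

def spherical_embed(vec, x, y, w, h):
    """Rotate the first half of the channel pairs by x*theta_x and the second
    half by y*theta_y; the split between the two axes is our assumption."""
    d = vec.shape[0]
    tx, ty = sphere_thetas(d, w, h)
    half = d // 4
    angles = np.concatenate([x * tx[:half], y * ty[half:]])
    return rotate_pairs(vec, angles)

d, w, h = 64, 6 * 16, 3 * 16          # feature dim and the 96x48 view-token grid
q, k = np.random.randn(d), np.random.randn(d)
# As with RoPE, the attention score between two embedded tokens depends on
# their relative grid offset rather than their absolute positions.
print(spherical_embed(q, 3, 5, w, h) @ spherical_embed(k, 4, 5, w, h))
```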
Spherical Relative Embedding(SRE) applies to selfattention as follows: SRE(q⊤ (x1,y1))SRE(k(x2,y2)) = Rd Θ,(x1,y1)W qx(x1,y1) ⊤ Rd Θ,(x2,y2)W kx(x2,y2) (7) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6434 Parallelly Generate View Patches Local Condition Window Auto-Regressively Generate View Patches Local Condition Window ARM LPM Unidirect Transfor mer Bidirect Transfor mer (a) ARM and LPM SPM Spherical Condition Window Parallelly Generate & Refine View Patches Top Down Left Right Predicted Tokens Masked Tokens Refined Tokens Shared Weights Bidirectional Transformer Bidirectional Transformer (b) SPM Figure 2: Modeling Strategy: we progressively improve modeling strategy from ARM to LPM and eventually SPM, achieving both high efficiency and high fidelity. In this Figure, the red boxes show the current view patch to be generated, while the blue boxes present the condition window. Spherical Conditioning The autoregressive onedirectional transformer decoder generates the tokens from left to right, which leads to discontinuity between the left-most and right-most boundaries. When viewing these images with the panorama viewer, we see obvious tearing artifacts at the stitching seams. To make the left-most and right-most pixels consistent, these pixels should be generated with consideration of each other. This local detail also implies a deeper defect of the aforementioned method. If the context information of the complete spherical structure is not considered when predicting the token of the current position, the consistency and integrity of the final overall result cannot be guaranteed. In order to fix this defect, we redesign the conditions for generating each view patch. Through the two-pass mechanism, the model no longer only autoregressively focuses on the small window on the upper left but also focuses on the entire hemispherical area around the current patch view. Specifically, in the training phase, the model learns both LLPM and LSPM in each iteration step. The form of LSPM is as follows: LSPM = −E X ∀i∈[1,N],maski=1 log p (yi | YM; YS) (8) It is worth noting that YS is different from YW in LLPM. For the view patch at row i and column j, the corresponding YW contains view patches of upper and left, while YS contains these of upper, down, left, and right. Specifically, YS comprises of [(i −1, j), (i, j −1), (i −1, j −1)], and YS comprises of [(i −1, j −1), (i −1, j), (i −1, j + 1), (i, j −1), (i, j + 1), (i + 1, j −1), (i + 1, j), (i + 1, j + 1)]. When the above coordinates are out of bounds, YW will not consider the part beyond the boundary, and YS will extend the part beyond the x-axis to the other side. In addition, each YS also applies the spherical relative embedding transformation described above and a special learnable phase embedding I to let the model know which pass it is currently in. During inference, the model first performs a complete LPM decoding and retains all tokens, and then superimposes the spherical relative embedding and phase embedding I for each token in the second pass to optimize the generation result of the first pass. In this way, although the inference time is doubled, it is still faster than ARM and can further greatly improve the generation quality, surpassing both ARM and LPM. Guided Semantic-Condition To further enhance the capabilities of the HORIZON model, we employ semantics as an additional input condition to guide the panoramic image generation process. 
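Before turning to how the semantic guidance itself is encoded, the two condition windows used in L_LPM and L_SPM above can be made concrete. In the sketch below the grid size comes from Stage 1, columns wrap around the sphere for Y_S as described, and dropping rows that fall beyond the poles is our assumption.

```python
PATCH_ROWS, PATCH_COLS = 3, 6      # the Stage-1 view-patch grid

def local_window(i, j):
    """Y_W for L_LPM: upper, left, and upper-left neighbours, with anything
    beyond the grid boundary simply dropped."""
    cands = [(i - 1, j), (i, j - 1), (i - 1, j - 1)]
    return [(r, c) for r, c in cands if 0 <= r < PATCH_ROWS and 0 <= c < PATCH_COLS]

def spherical_window(i, j):
    """Y_S for L_SPM: the full 8-neighbourhood; columns beyond the x-axis wrap
    to the other side of the panorama, rows beyond the poles are dropped
    (the row handling is our assumption)."""
    window = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            r, c = i + dr, (j + dc) % PATCH_COLS
            if 0 <= r < PATCH_ROWS:
                window.append((r, c))
    return window

print(local_window(0, 0))       # [] -- the very first patch has no causal context
print(spherical_window(0, 0))   # [(0, 5), (0, 1), (1, 5), (1, 0), (1, 1)]
```

The column wrap-around in Y_S is what lets the left-most and right-most patches condition on each other, which is the mechanism behind the seam-free boundaries discussed in the experiments.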
Specifically, we use FOV=90 degrees to cut out the front, back, left, and right perspective pictures of the panoramic image and encode them with the pretrained CLIP(Radford et al. 2021) visual module to get four semantic vectors for each panorama. We then take those vectors as semantic conditions sequentially appended to sphere conditions, and train in an end-to-end way. Similarly, we also employ the pretrained CLIP text module to encode text as a semantic condition to control the generation. During inference, we can either input visual conditions or text conditions respectively. Experiment Setup Dataset We evaluate our model on the high-resolution StreetLearn dataset (Mirowski et al. 2019), which consists of Google Street View panoramas. We use the Pittsburgh dataset containing 58k images, split into 52.2k for training and 5.8k for testing. The equirectangular panorama RGB images are stored as high-quality JPEGs with dimensions of 1664 x 832. In our experiments, we resize all panoramas to 1536 x 768. The experiments are conducted on 64 V100 GPUs, each with 32GiB memory. Our model is evaluated on three typical panorama tasks: panorama generation, view extrapolation, and guided generation. To further show the flexibility of our method on arbitrary resolutions, we also conducted experiments on a higher resolution setting of 3072x1536. However, as most previous works are unable to handle such large images, we present The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6435 only qualitative demonstrations in the appendix.To show our method are applicable to diverse scenes, we conduct experiments on Matterport 3D with different available baselines(Chen, Wang, and Liu 2022; Lin et al. 2019, 2022). Please find it in the appendix. Task 1: Panorama Generation The widely used Frechet Inception Distance (FID) score (Heusel et al. 2017) evaluates image quality by measuring feature distribution distances between real and fake images. However, FID treats all image positions equally, while information density in spherical images varies spatially. The top and bottom of an image often contain sparse information, while the middle holds denser information. To account for this, we propose a novel spherical FID for evaluating panoramic image quality, which dynamically considers information density variation. FID is calculated on different view patch sets within an image. For a panorama with a 3-row and 6-column grid of view patches, we label each row as top, middle, and bottom, and calculate FID scores for these subsets. Table 2 reports spherical FID scores for various locations. As a baseline, we train the text2light model (Chen, Wang, and Liu 2022) on the Streetlearn dataset, with detailed settings in the appendix. Our SPM(+SRE+SC) model generates state-of-the-art results, significantly outperforming baseline models. Both spherical relative embedding and spherical conditioning are effective mechanisms.The second pass balances efficiency and effectiveness by revising the first pass. LPM is the most efficient model, albeit with lower performance. View Discontinuity Problem The discontinuity of synthesis occurs when the left-most and the right-most boundary merged into one spherical image. Among the four snapshots from the viewer tool, the last (4th) image rendered is the merged image in which the middle is exactly the boundary between the left most and right most. 
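As one concrete reading of the spherical FID defined above: because each of the three patch rows spans a third of the image height, the per-row scores can be obtained by cropping horizontal bands and running an ordinary FID on each band (the mean column in Table 2 is consistent with averaging the three per-row scores). The helper below takes the FID implementation as a caller-supplied fid_fn; that interface and the band-level cropping are our assumptions.

```python
import numpy as np

def spherical_fid(real_imgs, fake_imgs, fid_fn, rows=3):
    """Split each panorama (N, H, W, 3) into `rows` horizontal bands -- top,
    middle, bottom for the 3x6 patch grid -- and score every band with an
    ordinary FID supplied by the caller as fid_fn(real_band, fake_band) -> float."""
    band_h = real_imgs.shape[1] // rows
    names = ["top", "middle", "bottom"] if rows == 3 else [str(r) for r in range(rows)]
    scores = {}
    for r, name in enumerate(names):
        band = slice(r * band_h, (r + 1) * band_h)
        scores[name] = fid_fn(real_imgs[:, band], fake_imgs[:, band])
    scores["mean"] = sum(scores[n] for n in names) / len(names)
    return scores

# Dry run with a stand-in distance instead of a real Inception-based FID.
toy_fid = lambda a, b: float(np.abs(a.mean() - b.mean()))
real = np.random.rand(4, 768, 1536, 3)
fake = np.random.rand(4, 768, 1536, 3)
print(spherical_fid(real, fake, toy_fid))
```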
From the showcases in Figure 3, we can see that the results of the baseline algorithm have an obvious separator in the middle while our algorithm considering the spherical attention generate a smooth connection. Moreover, we evaluate this continuity quantitatively by gradient based metrics as shown in Table 2. Inspired by the metric Left-Right Consistency Error (LRCE) (Shen et al. 2022) for depth estimation, we evaluate the consistency of the left-right boundaries by calculating the horizontal gradient between the both sides of the panorama. In details, the horizontal gradient GH I of the image I can be written as GI = max dim=−1 |Icol first−Icol last|, where Icol first/Icol last represents the RGB values in the first/last columns of the image I. Note that, different from LRCE, the generated panorama can not minus ground truth gradient to alleviate natural discontinuity. We choose to calculate the distribution distance instead of the absolute distance between the predicted horizontal gradient and real panorama gradient to measure the boundary continuity. The final calculation of LRCS(left-right continuity score) is as follows: LRCS = KL(NP, NGT ) (9) NP and NGT are two normal distributions estimated from the horizontal gradient of the predicted panorama({GP }) and ground truth({GGT }), respectively. KL means KL-distance. The lower LRCS means the panorama is more seamless. We show the result in Table 2, though Parallel Decoding increase the discontinuity, after using SRE and SC our final results has significantly resolve the problem and achieves 10 times lower LRCS than Text2light. All the above results demonstrates the effectiveness of our proposed spherical attention module. Baseline Ours Figure 3: Discontinuity v.s. Continuity. The three randomly selected cases present results from baseline and our models. The top images are the generated images of the baseline(LPM) method and the bottom examples are from our model(SPM). There is an obvious split line in the middle of each image on the top examples while the boundary is smooth on the bottom examples. Task 2: View Extrapolation We conduct quantitative experiments and adopt structural similarity (SSIM) and peak-to-signal-noise-ratio (PSNR) and FID as evaluation metrics specific for the view extrapolation tasks. Our generative model demonstrates superior performance compared to baseline methods as demonstrated in Table 3. To further illustrate the effectiveness of our approach, we have included a comparison of our method with Omnidream (Akimoto, Matsuo, and Aoki 2022a) in the appendix, where we have constrained the resolution to 1024x512 in accordance with their capabilities. Our generative model demonstrated exceptional performance in view extrapolation, as validated by the examples shown in Figure 4. The generated panoramas not only seamlessly filled in unseen content, but also possessed reasonable structure and rich semantics. These results showcase the superior capabilities of our model in generating highresolution, coherent panoramas. Task 3: Guided Generation We illustrate the guided generation showcases in Figure 5 and Figure 6. The visual guidance and text guidance are encoded by CLIP model. From these examples, we can observe that our framework can edit and modify semantic elements in the panorama by providing reference view images or text hints at specific locations. More cases can be found in the supplemental material. Visual Guidance As shown in Figure 5, the case presents the results given the visual guidance. 
The left case shows the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6436 FID↓ Spherical FID↓ Continuity mean top middle bottom LRCS↓ Text2light(Chen, Wang, and Liu 2022) 36.33 56.31 48.05 60.28 60.62 0.0224 ARM 25.36 41.21 26.16 32.17 65.32 0.0283 LPM 45.71 57.13 64.34 46.71 60.35 0.2032 SPM(+SRE) 10.74 39.18 28.72 26.44 62.39 0.0726 SPM(+SRE+SC) 7.79 20.97 15.82 21.80 25.29 0.0020 Table 2: Generation results. SPM is the spherical model in our paper, SRE is spherical relative embedding and SC is spherical conditioning. SSIM ↑ PSNR ↑ FID↓ ARM 0.521 15.39 11.78 LPM 0.508 14.94 17.62 SPM 0.542 15.49 5.53 Table 3: View Extrapolation Results. Input View Ground Truth Generated Image Figure 4: View Extrapolation. The first column gives the input samples, the second column presents the ground truth examples, and the third column demonstrates the generated panorama. original panoramic image (bottom) as well as 4 FOV images rendered (top 4 images). The middle and right cases demonstrate the generated results given the 4th guided image highlighted in the red box. The middle case edits the final view with “a single tree”, and the right case edits the final view with “lush trees”.This demonstrates the model is capable of generating panoramic images with both semantic and style controls. Please note, in order to generate a consistent image, the context view may be changed accordingly. Text Guidance As shown in Figure 6,the cases presents the results given the text guidance. In the Figure, the first row contains the original panorama images, the second row presents the text guidance and the third row illustrates the generated panoramic images. As shown in these cases, the highlighted red box shows the corresponding region modified. The semantic text guidance is “lush trees”, and the trees in these images are modified accordingly. Conclusion In this study, we present an innovative framework for crafting high-resolution panoramic visuals, skillfully integratFigure 5: Visual Guided generation. When we replace the guidance with different visual semantics as shown in the middle and right columns, we can manipulate the generated panoramas as we need. Lush trees grow on the side of the street Figure 6: Textual Guided generation. We can also use natural language to edit or embellish target panoramas. In each case, the top role are the original panoramas, and the bottom role are the embellished panoramas according to the text hint shows in the middle. ing spherical structure and semantic control. By employing spherical modeling, we adeptly tackle spherical distortion and edge continuity challenges while facilitating generation through image and text cues. Future endeavors will focus on embedding interactive features and enhancing inference speed, ultimately positioning the model as a viable alternative to current human-built interfaces. Acknowledgements This work was conducted during Kun Yan’s internship at Microsoft Research Asia and supported in part by NSFC 61925203 and U22B2021. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6437 References Akimoto, N.; Kasai, S.; Hayashi, M.; and Aoki, Y. 2019. 360-degree image completion by two-stage conditional gans. In 2019 IEEE International Conference on Image Processing (ICIP), 4704–4708. IEEE. Akimoto, N.; Matsuo, Y.; and Aoki, Y. 2022a. Diverse Plausible 360-Degree Image Outpainting for Efficient 3DCG Background Creation. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11441–11450. Akimoto, N.; Matsuo, Y.; and Aoki, Y. 2022b. Diverse Plausible 360-Degree Image Outpainting for Efficient 3DCG Background Creation. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 11431– 11440. Brown, M.; and Lowe, D. G. 2007. Automatic panoramic image stitching using invariant features. International journal of computer vision, 74(1): 59–73. Chang, H.; Zhang, H.; Jiang, L.; Liu, C.; and Freeman, W. T. 2022. MaskGIT: Masked Generative Image Transformer. ArXiv, abs/2202.04200. Chen, Z.; Wang, G.; and Liu, Z. 2022. Text2Light: ZeroShot Text-Driven HDR Panorama Generation. ACM Transactions on Graphics (TOG), 41(6): 1–16. Dhariwal, P.; and Nichol, A. 2021. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34. Ding, M.; Yang, Z.; Hong, W.; Zheng, W.; Zhou, C.; Yin, D.; Lin, J.; Zou, X.; Shao, Z.; Yang, H.; et al. 2021. Cogview: Mastering text-to-image generation via transformers. Advances in Neural Information Processing Systems, 34: 19822–19835. Esser, P.; Rombach, R.; and Ommer, B. 2021. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12873–12883. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. Advances in neural information processing systems, 27. Gu, S.; Chen, D.; Bao, J.; Wen, F.; Zhang, B.; Chen, D.; Yuan, L.; and Guo, B. 2021. Vector quantized diffusion model for text-to-image synthesis. arXiv preprint arXiv:2111.14822. Han, S. W.; and Suh, D. Y. 2020. PIINET: A 360-degree Panoramic Image Inpainting Network Using a Cube Map. arXiv preprint arXiv:2010.16003. Hara, T.; Mukuta, Y.; and Harada, T. 2021. Spherical Image Generation from a Single Image by Considering Scene Symmetry. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 1513–1521. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; and Hochreiter, S. 2017. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In NIPS. Karimi Dastjerdi, M. R.; Hold-Geoffroy, Y.; Eisenmann, J.; Khodadadeh, S.; and Lalonde, J.-F. 2022. Guided CoModulated GAN for 360° Field of View Extrapolation. arXiv e-prints, arXiv–2204. Kim, G.; and Ye, J. C. 2021. Diffusionclip: Text-guided image manipulation using diffusion models. arXiv preprint arXiv:2110.02711. Kingma, D. P.; and Welling, M. 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Koh, J. Y.; Agrawal, H.; Batra, D.; Tucker, R.; Waters, A.; Lee, H.; Yang, Y.; Baldridge, J.; and Anderson, P. 2022. Simple and Effective Synthesis of Indoor 3D Scenes. arXiv preprint arXiv:2204.02960. Liao, K.; Xu, X.; Lin, C.; Ren, W.; Wei, Y.; and Zhao, Y. 2022. Cylin-Painting: Seamless 360 {\deg} Panoramic Image Outpainting and Beyond with Cylinder-Style Convolutions. arXiv preprint arXiv:2204.08563. Lin, C. H.; Chang, C.; Chen, Y.; Juan, D.; Wei, W.; and Chen, H. 2019. COCO-GAN: Generation by Parts via Conditional Coordinating. In IEEE International Conference on Computer Vision (ICCV). Lin, C. H.; Cheng, Y.-C.; Lee, H.-Y.; Tulyakov, S.; and Yang, M.-H. 2022. InfinityGAN: Towards Infinite-Pixel Image Synthesis. In International Conference on Learning Representations. Mirowski, P.; Banki-Horvath, A.; Anderson, K.; Teplyashin, D.; Hermann, K. M.; Malinowski, M.; Grimes, M. 
K.; Simonyan, K.; Kavukcuoglu, K.; Zisserman, A.; et al. 2019. The streetlearn environment and dataset. arXiv preprint arXiv:1903.01292. Nichol, A.; Dhariwal, P.; Ramesh, A.; Shyam, P.; Mishkin, P.; McGrew, B.; Sutskever, I.; and Chen, M. 2021. Glide: Towards photorealistic image generation and editing with textguided diffusion models. arXiv preprint arXiv:2112.10741. Qiao, T.; Zhang, J.; Xu, D.; and Tao, D. 2019. Mirrorgan: Learning text-to-image generation by redescription. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1505–1514. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 8748–8763. PMLR. Ramesh, A.; Pavlov, M.; Goh, G.; Gray, S.; Voss, C.; Radford, A.; Chen, M.; and Sutskever, I. 2021. Zero-shot text-toimage generation. In International Conference on Machine Learning, 8821–8831. PMLR. Reed, S.; Akata, Z.; Yan, X.; Logeswaran, L.; Schiele, B.; and Lee, H. 2016. Generative adversarial text to image synthesis. In International Conference on Machine Learning, 1060–1069. PMLR. Rockwell, C.; Fouhey, D. F.; and Johnson, J. 2021. Pixelsynth: Generating a 3d-consistent experience from a single image. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 14104–14113. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6438 Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2021. High-Resolution Image Synthesis with Latent Diffusion Models. arXiv:2112.10752. Shen, Z.; Lin, C.; Liao, K.; Nie, L.; Zheng, Z.; and Zhao, Y. 2022. PanoFormer: Panorama transformer for indoor 360 depth estimation. arXiv e-prints, arXiv–2203. Somanath, G.; and Kurz, D. 2021. HDR Environment Map Estimation for Real-Time Augmented Reality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11298–11306. Song, S.; and Funkhouser, T. 2019. Neural illumination: Lighting prediction for indoor environments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6918–6926. Song, S.; Zeng, A.; Chang, A. X.; Savva, M.; Savarese, S.; and Funkhouser, T. 2018. Im2pano3d: Extrapolating 360 structure and semantics beyond the field of view. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3847–3856. Su, J.; Lu, Y.; Pan, S.; Wen, B.; and Liu, Y. 2021. RoFormer: Enhanced Transformer with Rotary Position Embedding. arXiv preprint arXiv:2104.09864. Sumantri, J. S.; and Park, I. K. 2020. 360 panorama synthesis from a sparse set of images with unknown field of view. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2386–2395. Szeliski, R. 2006. Image alignment and stitching: a tutorial, foundations and trends in computer graphics and computer vision. Now Publishers, 2(1): 120. van den Oord, A.; Vinyals, O.; and kavukcuoglu, k. 2017. Neural Discrete Representation Learning. In Guyon, I.; Luxburg, U. V.; Bengio, S.; Wallach, H.; Fergus, R.; Vishwanathan, S.; and Garnett, R., eds., Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30. Wang, G.; Yang, Y.; Loy, C. C.; and Liu, Z. 2022. 
StyleLight: HDR Panorama Generation for Lighting Estimation and Editing. arXiv preprint arXiv:2207.14811. Wu, C.; Liang, J.; Hu, X.; Gan, Z.; Wang, J.; Wang, L.; Liu, Z.; Fang, Y.; and Duan, N. 2022. NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis. Xu, J.; Zheng, J.; Xu, Y.; Tang, R.; and Gao, S. 2021. Layout-guided novel view synthesis from a single indoor panorama. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16438–16447. Xu, T.; Zhang, P.; Huang, Q.; Zhang, H.; Gan, Z.; Huang, X.; and He, X. 2018. AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1316–1324. Zhang, H.; Koh, J. Y.; Baldridge, J.; Lee, H.; and Yang, Y. 2021a. Cross-Modal Contrastive Learning for Text-toImage Generation. In CVPR. Zhang, W.; Wang, Y.; and Liu, Y. 2022. Generating HighQuality Panorama by View Synthesis Based on Optical Flow Estimation. Sensors, 22(2): 470. Zhang, Y.; Xiao, J.; Hays, J.; and Tan, P. 2013. Framebreak: Dramatic image extrapolation by guided shift-maps. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1171–1178. Zhang, Z.; Ma, J.; Zhou, C.; Men, R.; Li, Z.; Ding, M.; Tang, J.; Zhou, J.; and Yang, H. 2021b. M6-UFC: Unifying Multi-Modal Controls for Conditional Image Synthesis. arXiv preprint arXiv:2105.14211. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6439 | 2024 | 715 |
18,535 | CF-NeRF: Camera Parameter Free Neural Radiance Fields with Incremental Learning Qingsong Yan1, Qiang Wang2,*, Kaiyong Zhao3, Jie Chen4, Bo Li5, Xiaowen Chu6,5,*, Fei Deng1,7 1Wuhan University, Wuhan, China, 2Harbin Institute of Technology (Shenzhen), Shenzhen, China 3XGRIDS, Shenzhen, China, 4Hong Kong Baptist University, Hong Kong SAR, China 5The Hong Kong University of Science and Technology, Hong Kong SAR, China 6The Hong Kong University of Science and Technology (Guangzhou),Guangzhou, China 7Hubei Luojia Laboratory, Wuhan, China yanqs [email protected], [email protected], [email protected], [email protected] [email protected], [email protected], [email protected] Abstract Neural Radiance Fields have demonstrated impressive performance in novel view synthesis. However, NeRF and most of its variants still rely on traditional complex pipelines to provide extrinsic and intrinsic camera parameters, such as COLMAP. Recent works, like NeRFmm, BARF, and L2GNeRF, directly treat camera parameters as learnable and estimate them through differential volume rendering. However, these methods work for forward-looking scenes with slight motions and fail to tackle the rotation scenario in practice. To overcome this limitation, we propose a novel camera parameter free neural radiance field (CF-NeRF), which incrementally reconstructs 3D representations and recovers the camera parameters inspired by incremental structure from motion. Given a sequence of images, CF-NeRF estimates camera parameters of images one by one and reconstructs the scene through initialization, implicit localization, and implicit optimization. To evaluate our method, we use a challenging realworld dataset, NeRFBuster, which provides 12 scenes under complex trajectories. Results demonstrate that CF-NeRF is robust to rotation and achieves state-of-the-art results without providing prior information and constraints. Introduction 3D reconstruction is a hot topic in computer vision that aims to recover 3D geometry from RGB images. However, traditional methods contain lots of complex procedures, such as feature extraction and matching (Lowe 2004; Yi et al. 2016), sparse reconstruction (Agarwal et al. 2011; Wu 2013; Schonberger and Frahm 2016; Moulon et al. 2016), and dense reconstruction (Yao et al. 2018; Mi, Di, and Xu 2022; Yan et al. 2023). Consequently, traditional methods are not a differential end-to-end reconstruction pipeline and require high-quality results from each sub-module to achieve accurate results. When the quality of results is poor, it is challenging to identify which module is causing the problem. Recently, Neural Radiance Fields (NeRF) (Mildenhall et al. 2020; Yu et al. 2021a; M¨uller et al. 2022) have demonstrated a novel way to render highly realistic novel views *Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. (a) NeRFmm (b) SiRENmm (c) BARF (d) GARF (e) L2G-NeRF (f) CF-NeRF Figure 1: We select a sequence from NeRFBuster (Warburg et al. 2023) and use novel views synthesis to compare the quality of camera parameters from NeRFmm (Wang et al. 2021b), SiRENmm (Guo and Sherwood 2021), BARF (Lin et al. 2021), GARF (Chng et al. 2022), L2G-NeRF (Chen et al. 2023) and our method CF-NeRF. with impressive quality. 
Without recovering 3D geometry, NeRF relies on multi-layer perception (MLP) to predict color and sigma for each point in the scene and samples several points along a ray to render a pixel through differential volume rendering. Unlike traditional 3D reconstruction, NeRF simplifies the reconstruction into one step and implicitly represents the 3D scene. Benefiting from the excellent ability of NeRF, it has been further extended to dynamic scenes (Pumarola et al. 2021), large-scale (Turki, Ramanan, and Satyanarayanan 2022), and even surface (Wang et al. 2021a) and material reconstruction (Boss et al. 2021a). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6440 Despite the remarkable performance of NeRF and its variants in novel view synthesis, they still require camera parameters before training. The most common processing pipeline is first recovering camera parameters using traditional complex methods (Schonberger and Frahm 2016; Moulon et al. 2016), and then training the NeRF through differential volume rendering. In other words, the differentiability of the whole reconstruction pipeline is destroyed and divided into two separate parts, resulting in the NeRF not being end-toend and the reconstruction quality being unidirectionally dependent on traditional methods. To unify camera parameter estimation and reconstruction, researchers have tried to recover or optimize camera parameters along with NeRF. The straightforward idea is to treat camera parameters as learnable, as NeRFmm (Wang et al. 2021b) does. BARF (Lin et al. 2021) recovers extrinsic camera parameters and the NeRF model by dynamically adjusting weights of different frequencies of positional encoding. GARF (Chng et al. 2022) replaces ReLU with Gaussian activations to obtain high-accuracy results. NeROIC (Kuang et al. 2022) and NeRFStudio (Tancik et al. 2023) optimize camera parameters and the NeRF simultaneously. However, these methods are only suitable for forward-looking scenes or scenes with initial camera parameters and cannot be directly used in the real world with complex movement. This paper proposes a new end-to-end approach called camera parameter free NeRF (CF-NeRF) to address the limitations of existing NeRF-based methods in estimating camera parameters. Figure 1 compares rendered novel views by camera parameters estimated by several methods (Wang et al. 2021b; Guo and Sherwood 2021; Lin et al. 2021; Chng et al. 2022; Chen et al. 2023) and our method CF-NeRF, where CF-NeRF is the only method that successfully reconstructs the 3D scene with rotation. Unlike other methods that simultaneously estimate all camera parameters, CFNeRF inherits ideas from incremental structure from motion (SfM) and recovers camera parameters one by one. CFNeRF contains three major components: initialization, implicit localization, and implicit optimization. CF-NeRF uses initialization to recover camera parameters and NeRF by a few images and estimates camera parameters of other images through two steps: the implicit localization provides an initial camera parameter for the newly added image, and the implicit optimization optimizes camera parameters of all images to reduce drift. Our contributions are as follows: 1. We propose a novel end-to-end method, CF-NeRF, that does not need prior information or constraints to recover the intrinsic and extrinsic camera parameters and the NeRF simultaneously. 2. 
We design an incremental training pipeline for the CFNeRF, inspired by the incremental SfM, to avoid trapping to local minimal and is suitable for complex trajectories. 3. Experiments of our method achieve state-of-the-art results on the NeRFBuster dataset (Warburg et al. 2023) captured in the real world, proving that the CF-NeRF can estimate accurate camera parameters with the specifically designed training procedure. Related Work In this section, we introduce the development of NeRFrelated methods with known camera parameters and several camera parameter estimation methods using SfM&SLAM (simultaneous localization and mapping) and the NeRF. NeRF NeRF (Mildenhall et al. 2020) uses the MLP to represent the 3D scene implicitly and can be trained through differential volume rendering from a set of images with known camera parameters. However, NeRF suffers from efficiency and needs around 1-2 days to train a scene and several minutes to render a novel view at the testing. Instant-NGP (M¨uller et al. 2022) builds a multi-resolution hash table to store space-aware feature vectors and reduces the complexity of the MLP network. Meanwhile, (Sun, Sun, and Chen 2022; Fridovich-Keil et al. 2022) try to use the coarse-to-fine strategy and (Yu et al. 2021a; Chen et al. 2022; Garbin et al. 2021) update the network structure to speed up training or testing. Besides, NeRF faces another problem that it cannot work for large-scale, unbounded 3D scenes. NeRF++ (Zhang et al. 2020) and MipNeRF360 (Barron et al. 2021, 2022) utilize different sampling strategies for foreground and background to model unbounded 3D scenes by a finite volume. MegaNeRF (Turki, Ramanan, and Satyanarayanan 2022) and BlockNeRF (Tancik et al. 2022) split a large scene into multiple small regions and assign a network for each part. Moreover, (Martin-Brualla et al. 2021; Pumarola et al. 2021; Attal et al. 2021) extend NeRF to dynamic scenes and (Jain, Tancik, and Abbeel 2021; Yu et al. 2021b; Niemeyer et al. 2022; Kim, Seo, and Han 2022) introduce context or geometry information into NeRF to suit scenes with sparse views. In addition to the advances in novel view synthesis, NeRF has made significant progress in geometric reconstruction (Yariv et al. 2021; Wang, Skorokhodov, and Wonka 2022; Darmon et al. 2022; Long et al. 2023; Fu et al. 2022). UniSURF (Oechsle, Peng, and Geiger 2021) and NeUS (Wang et al. 2021a) estimate the zero-level set of an implicit signed distance function instead of the space density. Furthermore, some work (Zhang et al. 2021; Verbin et al. 2022; Boss et al. 2021a; Kuang et al. 2022; Boss et al. 2021b, 2022) even combines BRDF and NeRF to decompose a scene into shape, reflectance, and illumination. However, all of these methods split the reconstruction into two steps and require traditional methods to provide camera parameters, which significantly limits the application of NeRF. Camera Parameter Estimation Traditional SfM (Wu 2013; Moulon, Monasse, and Marlet 2013; Schonberger and Frahm 2016; Moulon et al. 2016) and SLAM (Mur-Artal, Montiel, and Tardos 2015; Engel, Koltun, and Cremers 2017) can estimate camera parameters for given images. However, these methods divide the reconstruction pipeline into several non-differentiable modules that need hand-crafted features (Lowe 2004) or learningbased methods (Yi et al. 2016; Teed and Deng 2020) to establish image correspondences, and then reconstruct a sparse scene and camera parameters through multi-view geometry. 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6441 Figure 2: The pipeline of CF-NeRF. CF-NeRF can estimate the weight θ of NeRF F and the camera parameter δ. After initializing through a few selected images, CF-NeRF recovers δ of the image one by one through implicit localization that only optimizes the newly added image and implicit optimization that refines θ and δ. Implicit optimization can be divided into partial and global optimization depending on the number of images used. We visualize δ reconstructed by CF-NeRF and sparse points from COLMAP (Schonberger and Frahm 2016) to show that CF-NeRF can reconstruct rotation in image sequences. In light of these limitations, it is worth exploring to estimate camera parameters during the training process of NeRF. The most direct attempt to utilize NeRF is the visual localization, where iNeRF (Yen-Chen et al. 2021), NeDDF (Ueda et al. 2022), and PNeRFP (Lin et al. 2023) try to estimate the extrinsic camera parameter of a new image by a pre-trained NeRF model. Then, NeRFmm (Wang et al. 2021b) and SiRENmm (Guo and Sherwood 2021) take the NeRF and camera parameters as learnable and prove that it is possible to train the NeRF model from scratch without camera parameters, but they only work for forward-looking scenes. To further enhance accuracy in forward-looking or rotation scenes with initial camera parameters, BARF (Lin et al. 2021) dynamically adjusts the weight of the positional encoding, GARF (Chng et al. 2022) replaces the ReLU activate function with the Gaussian activation function, and L2G-NeRF (Chen et al. 2023) introduces a local-to-global registration. Interestingly, GNeRF (Meng et al. 2021) and VMRF (Zhang et al. 2022) assume there is a prior known distribution of camera parameters to decrease the freedom of camera parameters during training the NeRF model. Meanwhile, other researchers try to add different external restrictions to guide the camera parameter estimation. SCNeRF (Jeong et al. 2021) and Level-S2fM (Xiao et al. 2023) rely on feature matches to guide camera parameters estimation. NoPe-NeRF (Bian et al. 2023), iMap (Sucar et al. 2021), NeRF-SLAM (Rosinol, Leonard, and Carlone 2022), NiceSLAM (Zhu et al. 2022), and Nicer-SLAM (Zhu et al. 2023) integrate depth maps from active sensors or CNN networks to tune the NeRF. Additionally, LocalLR (Meuleman et al. 2023) combines depth maps and optical flow to train NeRF. Regrettably, images acquired from real-world scenarios often exhibit a multitude of challenges. These challenges include rotations and the absence of prior information of camera parameters. Furthermore, the introduction of external constraints can augment the intricacy and unpredictability of the reconstruction process. To solve these problems, we propose CF-NeRF inspired by the traditional incremental SfM, which does not require any prior information or external constraints while reconstructing the 3D scene and camera parameters end-to-end from image sequences, demonstrating the powerful reconstruction capability of the NeRF after using a specific training strategy. Method In this section, we provide an overview of the proposed method. Firstly, we introduce the preliminary background of the NeRF and the traditional incremental SfM. Then, we explain the details of CF-NeRF that can recover camera parameters from image sequences. Preliminary Background NeRF NeRF can generate realistic images from a set of images I = (I1, I2, ..., IN) from N different places without explicitly reconstructing. 
However, NeRF needs associated camera parameters δ, including camera rotation δR = (δR1, δR2, ..., δRN ), camera translation δT = (δT1, δT2, ..., δTN ), and intrinsic camera parameter δK. Given a NeRF model F and corresponding weight θ , it can estimate color c and density σ through a implicit function c(x, ⃗d), σ(x) = Fθ(x, ⃗d) with a point x and a view direction The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6442 Figure 3: Estimated Parameters. CF-NeRF estimates the weight θ of NeRF model and the camera parameter δ, which include the camera rotation δR, the camera translation δT , and camera intrinsic parameter δK. ⃗d. To render a pixel p, NeRF needs to sample several points xp(t) = o+ ⃗dt along a ray shooting from the view position o and generate the color cp by the volume rendering function R as Eq. 1 shows, where T (t) = exp(− R t tn σ(xp(s))ds) indicates the accumulated transmittance along the ray. tn and tf are the near and far bounds of the ray. cp = R(p|θ) = Z tf tn T (t)σ(xp(t))c(xp(t), ⃗d)dt (1) Benefiting from the differential property of the volume rendering, NeRF can be trained end to end by minimizing the difference between cp and observed color I(p) as Eq. 2 shows, where L is the loss function. To be noted, NeRF only estimates θ and borrows δ from traditional SfM methods. However, NeRFmm (Wang et al. 2021b) prove that it is possible to estimate θ and δ simultaneously under the forwardlooking situation. arg min θ { X Ii∈I X p∈Ii L(R(p|θ), Ii(p))} (2) Incremental SfM Given a set of images, the incremental SfM can recover δ one by one in a linear time (Wu 2013) and contains four steps (Schonberger and Frahm 2016): Initialization The selection of an initial two-view is essential because a suitable initial two-view improves the robustness and quality of the reconstruction. With a given twoview and its matched features, incremental SfM computes the relative pose by multi-view geometry (MVG) and triangulates 3D points to initial the scene. Image Registration After initialization, incremental SfM adds images to the scene in order. Given a new image, incremental SfM builds the 2D-3D relationship by matching its features with images in the scene and recovers the camera parameter by Perspective-n-Point (PnP). Triangulation As a newly added image observes additional information that can extend the scale of the scene, incremental SfM triangulates more 3D points based on the new image and matched features. Bundle Adjustment Adding new images and 3D points without refinement leads to drift. Therefore, it is essential to apply bundle adjustment (BA) by minimizing the reprojection error. In terms of efficiency, incremental SfM proposes partial BA that refines only a subset of images, and global BA that optimizes all images. CF-NeRF Fusing the differentiability of NeRF and the reconstruction strategy of SfM, we propose CF-NeRF, which is capable of estimating the camera parameter under complex movement from sequential images. CF-NeRF consists of three modules: initialization, implicit localization, and implicit optimization, as Figure 2 shows. To convenient later introduction, we define the set of images we have completed estimating the camera parameter as E, which starts from ∅. Parameter CF-NeRF estimates camera parameter δ, which includes δR, δT , and δK, and the weight θ of NeRF, as Figure 3 shows. During the differential volume rendering, we calculate the ray ⃗rp(t) = δTi + δRiδ−1 Ki ˜pt of pixel p in image Ii ∈I, where ˜p is the homogeneous expression of p. 
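Since every term in the ray equation above is learnable, it helps to see how gradients can reach δR, δT, and δK through ray construction. Below is a minimal PyTorch sketch (CF-NeRF is implemented in PyTorch) using the axis-angle rotation the paper adopts; the pixel-coordinate sign convention, the image-centre principal point, and all function names are our assumptions.

```python
import torch

def axis_angle_to_matrix(r):
    """Rodrigues' formula: axis-angle 3-vector -> 3x3 rotation matrix, kept
    differentiable so gradients can flow back into delta_R."""
    theta = r.norm() + 1e-8
    k = r / theta
    zero = torch.zeros((), dtype=r.dtype)
    K = torch.stack([torch.stack([zero, -k[2], k[1]]),
                     torch.stack([k[2], zero, -k[0]]),
                     torch.stack([-k[1], k[0], zero])])
    return torch.eye(3, dtype=r.dtype) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

def pixel_rays(rot_vec, trans, focal, H, W):
    """Build o + t*d for every pixel: o = delta_T_i and d = delta_R_i * K^-1 * p_tilde,
    with a pinhole K holding one shared focal length and an image-centre principal
    point (the sign convention of the camera axes is our assumption)."""
    R = axis_angle_to_matrix(rot_vec)
    v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    dirs = torch.stack([(u - W / 2) / focal, (v - H / 2) / focal, torch.ones_like(u)], dim=-1)
    rays_d = dirs @ R.T                   # rotate camera-frame directions into the world
    rays_o = trans.expand_as(rays_d)      # every ray starts at the camera centre
    return rays_o, rays_d

# All three quantities are learnable, so the photometric loss of Eq. (2)
# reaches them through differential volume rendering.
rot = torch.zeros(3, requires_grad=True)
t = torch.zeros(3, requires_grad=True)
f = torch.tensor(300.0, requires_grad=True)
rays_o, rays_d = pixel_rays(rot, t, f, H=270, W=480)
print(rays_o.shape, rays_d.shape)         # torch.Size([270, 480, 3]) twice
```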
Following NeRFmm (Wang et al. 2021b), we use the axis-angle to represent δR and assume all images have the same camera intrinsic parameter without distortion so that δK only contains the focal length. We initialize δR and δT to zero, and set δK to 53◦by a common field of view. The activation function determines how to initialize θ. NeRF using ReLU are initialized according to NeRF (Mildenhall et al. 2020), while NeRF using sine are initialized according to SIREN (Sitzmann et al. 2020). Initialization Similar to incremental SfM, CF-NeRF requires initialize θ, δR1, δT1, and δK before adding images to E. We select the first Ninit images Iinit from I to optimise these parameters by Eq. 3 with ξinit iterations. Since the rotation between adjacent images is not large and NeRF is hard to estimate rotation (Lin et al. 2021), we do not estimate the rotation in the initialization to reduce the freedom. After initialization, we add I1 to E and keep θ, δR1, δT1, and δK but discard other camera parameters. Note that, unlike the initialization in the previous section, the initialization here is data-specific, similar to the warm-up procedure. arg min θ,δT ,δK { X Ii∈Iinit X p∈Ii L(R(p|θ, δTi, δK), Ii(p))} (3) Implicit Localization After initialization, CF-NeRF estimates the camera parameter of the remaining images one by one and determines δRn and δTn for each new image In by localization. Specifically, we first initialize δRn and δTn by δRn−1 and δTn−1, and then optimize them by minimizing Eq. 4 with fixed θ through ξloc iterations. The localization is similar to iNeRF (Yen-Chen et al. 2021), but CF-NeRF does not have a pre-trained F. arg min δRn,δTn { X p∈In L((p|δRn, δTn), In(p))} (4) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6443 (a) GT (b) NeRFmm (c) SiRENmm (d) BARF (e) GARF (f) L2G-NeRF (g) CF-NeRF Figure 4: We select two sequences from NeRFBuster (Warburg et al. 2023) and render novel views to evaluate camera parameters. Our method CF-NeRF generates high-quality images, while results of NeRFmm (Wang et al. 2021b), SiRENmm (Guo and Sherwood 2021), BARF (Lin et al. 2021), GARF (Chng et al. 2022) and L2G-NeRF (Chen et al. 2023) contain lots of noise. Implicit Optimization Although implicit localization can roughly determine δRn and δTn, it faces two problems: the observation from In is not added to NeRF, and the localization does not take the multi-view consistency into account to reduce drift. Incremental SfM solves these problems using two separate steps: triangulation and BA, while CF-NeRF benefits from the volume rendering and deals with these problems together. However, it is time-consuming to optimize all images in E every time a new image is added. Therefore, CF-NeRF splits optimization into implicit partial optimization and implicit global optimization. Each time localizing a new image In, CF-NeRF performs implicit partial optimization. We select In and previous Npart−1 images to construct the partial image set Ipart, then optimizes them with ξpart iterations, as Eq.5 shows. arg min θ,δR,δT { X Ii∈Ipart X p∈Ii L(R(p|θ, δRi, δTi), Ii(p))} (5) When the number of images in E can be evenly divided by Nglob, CF-NeRF employs implicit global optimization for θ and all images in E to enhance the overall accuracy and reduce drifts with ξglob iterations, as Eq. 6 shows. arg min θ,δR,δT ,δK { X Ii∈IE X p∈Ii L(R(p|θ, δRiδTi, δK), Ii(p))} (6) Coarse-to-Fine CF-NeRF uses a coarse-to-fine strategy to improve robustness. 
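Before the coarse-to-fine schedule is detailed, the incremental core of the method (Eqs. 3 to 6) can be summarized as the training-loop skeleton below. The render_loss callable, the optimizer settings, and the bookkeeping of registered images are illustrative placeholders rather than the authors' exact implementation.

import torch

def incremental_reconstruction(images, render_loss, theta, cams,
                               n_init=3, n_part=3, n_glob=5,
                               it_init=3000, it=900, lr=1e-3):
    # images:      list of frames I_1..I_N (only their indices are used here)
    # render_loss: callable(i, theta, cams) -> scalar photometric loss of image I_i,
    #              standing in for sum_p L(R(p | .), I_i(p))
    # theta:       list of NeRF weights; cams: {'R': [...], 'T': [...], 'K': tensor}
    def optimise(params, idxs, iters):
        opt = torch.optim.Adam(params, lr=lr)
        for _ in range(iters):
            opt.zero_grad()
            loss = sum(render_loss(i, theta, cams) for i in idxs)
            loss.backward()
            opt.step()          # only the passed params move; everything else stays fixed

    # Initialization (Eq. 3): theta, delta_T and delta_K on the first N_init images,
    # rotations kept at their zero initialization; the paper keeps only I_1's camera afterwards.
    optimise(list(theta) + cams['T'][:n_init] + [cams['K']], range(n_init), it_init)
    registered = [0]

    for n in range(1, len(images)):
        # Implicit localization (Eq. 4): warm-start from the previous camera, theta untouched.
        with torch.no_grad():
            cams['R'][n].copy_(cams['R'][n - 1])
            cams['T'][n].copy_(cams['T'][n - 1])
        optimise([cams['R'][n], cams['T'][n]], [n], it)
        registered.append(n)

        # Implicit partial optimization (Eq. 5): the new image and the previous N_part - 1.
        part = registered[-n_part:]
        optimise(list(theta) + [cams['R'][i] for i in part] + [cams['T'][i] for i in part],
                 part, it)

        # Implicit global optimization (Eq. 6): every N_glob registered images.
        if len(registered) % n_glob == 0:
            optimise(list(theta)
                     + [cams['R'][i] for i in registered]
                     + [cams['T'][i] for i in registered]
                     + [cams['K']], registered, it)
    return theta, cams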
CF-NeRF first constructs a Gaussian pyramid with depth dG, then recovers all parameters at a low-resolution image through the incremental pipeline. Finally, CF-NeRF directly performs implicit global optimization with a higher resolution in each scale of the Gaussian pyramid with ξG iterations. Loss Function To improve robustness, we employ the Smooth-L1 loss function, as Eq. 7 shows, where gt represents the ground truth, pr is the estimated value, and β is the set to 1.0 by default. L(pr, gt) = 0.5 ∗(gt −pr)2/β if|gt −pr| < β |gt −pr| −0.5 ∗β otherwise (7) Experiments Dataset We evaluate our method using a real-world dataset NeRFBuster (Warburg et al. 2023), mainly rotating around an object. We sample around 50 frames for each scene and resize all images to 480×270 with ground truth (GT) camera parameters from COLMAP (Schonberger and Frahm 2016). Implementation CF-NeRF is implemented using PyTorch. Similar to NeRFmm (Wang et al. 2021b), CF-NeRF does not have hierarchical sampling and uses the coarse network, which has eight layers and the dimension of the hidden layers is set to 128. Moreover, we use the sine activation function instead of the ReLU, as SiRENmm (Guo and Sherwood 2021) is more robust than NeRFmm. We utilize the Adam optimizer to optimize all learnable parameters. Specifically, we set the learning rate of θ to 0.001, which undergoes a decay The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6444 aloe art car century flowers garbage picnic pikachu pipe plant roses table ∆R↓ NeRFmm 159.973 177.591 129.580 119.626 106.920 150.823 154.778 113.700 164.821 165.030 102.275 115.299 SiRENmm 155.151 177.364 127.267 89.0172 103.874 82.9375 44.3671 25.3603 159.757 114.076 132.538 93.2612 BARF 158.669 59.1868 133.453 101.601 88.6842 88.7832 69.7201 41.0302 64.5250 143.198 133.757 111.288 GARF 125.980 171.917 153.559 105.187 106.060 84.4992 49.8503 32.7285 126.960 156.606 118.975 164.656 L2G-NeRF 124.237 24.3753 55.7291 131.968 96.3498 110.478 146.312 116.891 70.9131 70.7562 95.9982 116.497 CF-NeRF 12.1226 19.2496 17.5570 9.6811 8.2556 9.7658 12.6501 11.3067 19.9926 4.8968 5.1229 4.5837 ∆T↓ NeRFmm 11.5935 15.0762 23.9514 24.5934 12.8753 16.3842 12.9675 25.6841 19.3563 23.6613 8.0367 13.3849 SiRENmm 11.4912 14.8720 27.7235 28.8582 15.4841 13.3099 8.3607 15.8052 20.4572 31.2943 9.0498 13.8350 BARF 9.6196 17.3299 36.0351 26.0549 15.3166 17.1936 9.6679 18.7184 18.8936 31.8629 8.5266 17.5453 GARF 13.9100 17.1527 26.5184 26.0096 14.1459 16.1054 11.7021 17.4975 18.8366 32.3170 11.2654 15.0367 L2G-NeRF 14.4012 13.4240 20.5634 23.2650 7.4559 17.7167 12.9408 32.4048 11.4012 18.5012 10.7110 12.8061 CF-NeRF 3.3788 2.2821 6.5452 2.7383 2.7026 4.0535 1.2833 4.0586 9.4491 3.4346 1.1945 1.2127 PSNR↑ NeRFmm 20.6912 16.8220 17.7504 16.0326 18.4377 17.7229 18.4300 25.3819 20.0978 21.1558 17.7735 14.1651 SiRENmm 22.7462 20.3890 22.0268 18.0252 19.4640 17.2283 21.5628 27.8706 20.9538 23.4980 16.7480 18.8135 BARF 22.4366 21.1947 16.7665 15.3436 17.8350 15.9065 19.1846 23.0386 19.9728 25.5135 13.6741 13.8227 GARF 19.0241 19.3556 15.4460 14.4117 16.2955 15.3383 15.4035 20.9663 18.5371 20.5600 13.1274 12.6677 L2G-NeRF 21.3398 19.8099 17.3255 16.6476 18.0016 13.6077 18.5268 22.4939 18.1787 19.0160 17.2614 15.5658 CF-NeRF 26.9367 26.5293 22.4654 21.7072 21.6950 22.4736 22.5475 32.3661 22.2719 25.7312 24.3918 26.8491 LPIPS↓ NeRFmm 0.5560 0.4954 0.5991 0.5793 0.5778 0.5661 0.6113 0.3683 0.5614 0.4927 0.5371 0.6073 SiRENmm 0.4508 0.4034 0.4450 0.4785 0.5048 0.5193 0.5227 0.2883 0.5170 0.3256 0.5333 
0.4659 BARF 0.3328 0.3511 0.5361 0.5394 0.5552 0.5480 0.5358 0.3440 0.5198 0.3217 0.6138 0.5913 GARF 0.5257 0.4055 0.5984 0.5845 0.6158 0.5931 0.6086 0.3987 0.5688 0.4345 0.6189 0.6356 L2G-NeRF 0.4620 0.4186 0.5409 0.5116 0.5466 0.6016 0.5530 0.4051 0.4741 0.3840 0.4788 0.5309 CF-NeRF 0.1939 0.2316 0.3983 0.3627 0.3983 0.3859 0.4686 0.1679 0.4453 0.2594 0.2831 0.3011 Table 1: We conduct experiments on the NeRFBuster (Warburg et al. 2023), which is captured in the real world with complex trajectories. CF-NeRF achieves state-of-the-art results compared to NeRFmm (Wang et al. 2021b), SiRENmm (Guo and Sherwood 2021), BARF (Lin et al. 2021), GARF (Chng et al. 2022), L2G-NeRF (Chen et al. 2023). of 0.9954 every 200 epochs. Similarly, the learning rate of δ is set to 0.001 and undergoes a decay of 0.9000 every 2000 epochs. Here, we describe how to set the hyper-parameters in CF-NeRF. We set Ninit and Npart to 3 to meet the minimum requirements that can filter outliers based on MVG. To balance drift and efficiency, we set Nglob to 5. Considering the input image resolution, we set dG to 3 to reconstruct all parameters by coarse-to-fine strategy. The most important parameter in CF-NeRF is iteration, which is the epoch number for each image. During initialization, we set ξinit to 3000 to guarantee that θ and δ can be correctly initialized with fewer images. Subsequently, during the incremental training, we maintain a consistent value of ξ, setting ξ = ξloc = ξpart = ξglob = ξG to 900, thus reconstructing the scene from images one by one. Throughout all our experiments, we use the NVIDIA RTX3090. Evaluation To demonstrate the performance of the proposed method, we conduct a comprehensive comparison between CF-NeRF and several state-of-the-art models, including NeRFmm (Wang et al. 2021b) SiRENmm (Guo and Sherwood 2021), BARF (Lin et al. 2021), GARF (Chng et al. 2022), and L2GNeRF (Chen et al. 2023). We use all images for camera parameter estimation without employing a train/test split. To evaluate the quality of the camera parameters, we calculate the average translation error ∆T and the average rotation error ∆R by aligning the estimated camera parameters δR and δT with COLMAP using a similarity transformation Sim(3) (Lin et al. 2021). It is worth noting that δT represents a relative translation error rather than an absolute measurement, as COLMAP can not reconstruct an absolute scale of the scene. We further evaluate the estimated camera parameters through a novel view synthesis by PSNR and LPIPS. To ensure a fair comparison and avoid the influence of varying network backbones across different methods, we uniformly use the NerfAcc (Li, Tancik, and Kanazawa 2022), where we select one image for testing in every eight images and the remaining is for training. Results We performed qualitative and quantitative evaluations of these methods on 12 scenes of the NeRFBuster (Warburg et al. 2023) dataset. Notably, BARF (Lin et al. 2021), GARF (Chng et al. 2022), and L2G-NeRF (Chen et al. 2023) require manual setting the focal length. In contrast, NeRFmm (Wang et al. 2021b), SiRENmm (Guo and Sherwood 2021), and CF-NeRF have the ability to estimate the focal length. Table 1 shows the results of qualitative experiments. Our method obtains the highest accuracy camera parameters, while all other methods fail outright. It is important to understand that ∆R and ∆T are calculated by aligning the camera positions with Sim(3) and that a slight difference in camera position can lead to huge errors. 
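For reference, the rotation part of this error can be measured as the geodesic angle between each estimated rotation and its COLMAP counterpart. The sketch below assumes the Sim(3) alignment has already been applied and that rotations are given as 3x3 matrices; it is only meant to illustrate how ∆R and ∆T could be computed.

import torch

def rotation_error_deg(R_est, R_gt):
    # Mean geodesic angle (degrees) between aligned (B, 3, 3) rotation matrices.
    trace = torch.einsum('bij,bij->b', R_est, R_gt)        # trace(R_est^T R_gt)
    cos = ((trace - 1.0) / 2.0).clamp(-1.0, 1.0)           # clamp for numerical safety
    return torch.rad2deg(torch.arccos(cos)).mean()

def translation_error(t_est, t_gt):
    # Mean Euclidean distance between aligned (B, 3) camera positions (relative scale only).
    return (t_est - t_gt).norm(dim=-1).mean()

# e.g. rotation_error_deg(torch.eye(3)[None], torch.eye(3)[None]) == 0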
The rotation error ∆R of our method CF-NeRF is roughly around 10◦, while the other methods are around 100◦. Moreover, the translation error δT The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6445 G, ξ, Nglob aloe art car century flowers garbage picnic pikachu pipe plant roses table ∆R↓ F, 600, 10 17.8029 24.3389 17.1692 11.5924 11.6163 9.1240 14.6452 13.0037 19.0749 5.3354 5.9091 6.4731 F, 900, 10 14.8730 22.8142 17.8879 11.1201 10.4707 8.6973 11.4209 12.0625 18.4305 4.7303 5.8538 6.8481 C, 900, 5 12.4862 19.1647 17.4755 9.7177 8.4555 9.6460 12.3162 10.9802 19.9855 5.1579 5.5133 5.2821 F, 900, 5 12.1226 19.2496 17.5570 9.6811 8.2556 9.7658 12.6501 11.3067 19.9926 4.8968 5.1229 4.5837 ∆T↓ F, 600, 10 4.5457 5.9307 7.5697 2.9652 3.6234 4.5340 2.6677 5.3105 9.3544 4.4384 1.3324 1.9535 F, 900, 10 3.9111 6.1190 7.3752 3.6834 3.3956 4.3080 3.4918 3.2682 8.4666 3.6109 1.3013 2.3973 C, 900, 5 3.4681 2.2770 6.6250 2.8224 2.7405 4.1085 1.2886 4.2462 9.6998 3.5309 1.2182 1.2232 F, 900, 5 3.3788 2.2821 6.5452 2.7383 2.7026 4.0535 1.2833 4.0586 9.4491 3.4346 1.1945 1.2127 Table 2: Ablation experiments. We compare the accuracy of camera parameters of CF-NeRF under different hyper-parameter settings, including the iteration ξ, the global optimization frequency Nglob and the coarse-to-fine strategy, where C means the coarse stage and F means the fine stage. of CF-NeRF is approximately about 4, while all other methods are around 15. Although NeRFmm, SiRENmm, BARF, GARF, and L2G-NeRF claim high accuracy on forwardlooking scenes from scratch, they are unsuitable for scenes with rotation and are prone to be trapped in a local minimum. In contrast, CF-NeRF recovers the camera parameters sequentially and can effectively handle image sequences with complex trajectories. Furthermore, SiRENmm outperforms NeRFmm in camera parameter estimation, which is why CF-NeRF uses the sine activate function. Table 1 also shows the quality of the novel view synthesis, which serves as an additional evaluation criterion for the quality of camera parameters. CF-NeRF achieves state-ofthe-art results on PSNR and LPIPS. Interestingly, the reconstruction results of other methods appear reasonable compared to their poor camera parameters, mainly due to the high over-fitting ability of NeRF and partial camera parameters are correctly reconstructed. We further visualize the rendering results of three scenes from different methods in Figure 1 and Figure 4. CF-NeRF can generate high-quality results, while other methods have lots of noise in their results due to their inability to provide accurate camera parameters. Ablation Experiments We conduct several ablation experiments on the iteration ξ, the global optimization frequency Nglob, and the coarse-tofine strategy to validate the influence of hyper-parameters in CF-NeRF, and results are presented in Table 2. The iteration ξ The iteration ξ is the most important hyper-parameter in our method, determining how many times to optimize the camera parameter for each image. We compare two configurations: F, ξ = 600, Nglob = 10 and F, ξ = 900, Nglob = 10. Table 2 reveals that increasing ξ improves the final results for almost all scenes. This observation aligns with NeRF (Mildenhall et al. 2020) and iNeRF (Yen-Chen et al. 2021), where NeRF requires a large number of iterations to converge, and iNeRF enhances the quality of camera parameters through more iterations. 
The global optimization frequency Nglob To mitigate drift while maintaining efficiency, CF-NeRF employs the implicit global optimization when every Nglob image is added E. We conduct two experiments F, ξ = 900, Nglob = 10 and F, ξ = 900, Nglob = 5 to find out the influence of Nglob. As highlighted in Table 2, reducing Nglob yields improved final results, which can be attributed to the fact that global optimization ensures global consistency to avoid NeRF trap into a local minimum. The coarse-to-fine strategy CF-NeRF adopts a coarse-tofine strategy to avoid directly estimating camera parameters on high-resolution images, where the fine stage refines initial results from the coarse stage. We conduct two experiments C, ξ = 900, Nglob = 5 and F, ξ = 900, Nglob = 5. Results in Table 2 demonstrate that the fine stage outperforms the coarse stage across almost all scenes. The coarse-to-fine strategy facilitates the training process of CF-NeRF, as the pixel gradient is smoother at the coarse stage and has less RGB information to learn. Limitation Although CF-NeRF achieves state-of-the-art results in camera parameter estimation, surpassing other NeRF-based methods, there are still some gaps between CF-NeRF and COLMAP (Schonberger and Frahm 2016), and the accuracy can be further improved through the adjustment of the sample space (Wang et al. 2023) or the utilization of a more robust function (Sabour et al. 2023). Conclusion This paper presents CF-NeRF, a novel end-to-end method that does not require prior camera parameters to deal with image sequences with complex trajectories. Following the pipeline of incremental SfM, CF-NeRF contains three major sub-modules: initialization, implicit localization, and implicit optimization. Experiments on the NeRFBuster dataset demonstrate that CF-NeRF achieves state-of-the-art results, while NeRFmm, SiRENmm, BARF, GARF, and L2G-NeRF only work for forward-looking scenes and get trapped in the local minimum on the NeRFBuster dataset. More importantly, CF-NeRF highlights the unlimited potential of NeRF and differential volume rendering, showing that NeRF has impressive reconstruction capabilities and can also be used to estimate camera parameters in complex trajectories. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6446 Acknowledgments The research was supported in part by a RGC RIF grant under the contract R6021-20, RGC CRF grants under the contracts C7004-22G and C1029-22G, and RGC GRF grants under the contracts 16209120, 16200221, and 16207922. This research was also supported by the National Natural Science Foundation of China (No. 62302126), and the Shenzhen Science and Technology Program (No. RCBS20221008093125065, No. JCYJ20220818102414030). References Agarwal, S.; Furukawa, Y.; Snavely, N.; Simon, I.; Curless, B.; Seitz, S. M.; and Szeliski, R. 2011. Building rome in a day. Communications of the ACM, 54: 105–112. Attal, B.; Laidlaw, E.; Gokaslan, A.; Kim, C.; Richardt, C.; Tompkin, J.; and O’Toole, M. 2021. T¨orf: Time-of-flight radiance fields for dynamic scene view synthesis. NeurIPS. Barron, J. T.; Mildenhall, B.; Tancik, M.; Hedman, P.; Martin-Brualla, R.; and Srinivasan, P. P. 2021. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In ICCV, 5855–5864. Barron, J. T.; Mildenhall, B.; Verbin, D.; Srinivasan, P. P.; and Hedman, P. 2022. Mip-nerf 360: Unbounded antialiased neural radiance fields. In CVPR, 5470–5479. Bian, W.; Wang, Z.; Li, K.; Bian, J.-W.; and Prisacariu, V. A. 2023. 
Nope-nerf: Optimising neural radiance field with no pose prior. In CVPR, 4160–4169. Boss, M.; Braun, R.; Jampani, V.; Barron, J. T.; Liu, C.; and Lensch, H. 2021a. Nerd: Neural reflectance decomposition from image collections. In ICCV, 12684–12694. Boss, M.; Engelhardt, A.; Kar, A.; Li, Y.; Sun, D.; Barron, J.; Lensch, H.; and Jampani, V. 2022. Samurai: Shape and material from unconstrained real-world arbitrary image collections. NeurIPS, 26389–26403. Boss, M.; Jampani, V.; Braun, R.; Liu, C.; Barron, J.; and Lensch, H. 2021b. Neural-pil: Neural pre-integrated lighting for reflectance decomposition. NeurIPS, 10691–10704. Chen, A.; Xu, Z.; Geiger, A.; Yu, J.; and Su, H. 2022. Tensorf: Tensorial radiance fields. In ECCV, 333–350. Chen, Y.; Chen, X.; Wang, X.; Zhang, Q.; Guo, Y.; Shan, Y.; and Wang, F. 2023. Local-to-global registration for bundleadjusting neural radiance fields. In CVPR, 8264–8273. Chng, S.-F.; Ramasinghe, S.; Sherrah, J.; and Lucey, S. 2022. Gaussian activated neural radiance fields for high fidelity reconstruction and pose estimation. In ECCV. Darmon, F.; Bascle, B.; Devaux, J.-C.; Monasse, P.; and Aubry, M. 2022. Improving neural implicit surfaces geometry with patch warping. In CVPR, 6260–6269. Engel, J.; Koltun, V.; and Cremers, D. 2017. Direct sparse odometry. TPAMI, 40: 611–625. Fridovich-Keil, S.; Yu, A.; Tancik, M.; Chen, Q.; Recht, B.; and Kanazawa, A. 2022. Plenoxels: Radiance fields without neural networks. In CVPR, 5501–5510. Fu, Q.; Xu, Q.; Ong, Y. S.; and Tao, W. 2022. Geo-neus: Geometry-consistent neural implicit surfaces learning for multi-view reconstruction. NeurIPS, 3403–3416. Garbin, S. J.; Kowalski, M.; Johnson, M.; Shotton, J.; and Valentin, J. 2021. Fastnerf: High-fidelity neural rendering at 200fps. In ICCV, 14346–14355. Guo, J.; and Sherwood, A. 2021. imporved-nerfmm. github. com/ventusff/improved-nerfmm. Accessed: 2023-07-01. Jain, A.; Tancik, M.; and Abbeel, P. 2021. Putting nerf on a diet: Semantically consistent few-shot view synthesis. In ICCV, 5885–5894. Jeong, Y.; Ahn, S.; Choy, C.; Anandkumar, A.; Cho, M.; and Park, J. 2021. Self-calibrating neural radiance fields. In ICCV, 5846–5854. Kim, M.; Seo, S.; and Han, B. 2022. Infonerf: Ray entropy minimization for few-shot neural volume rendering. In CVPR, 12912–12921. Kuang, Z.; Olszewski, K.; Chai, M.; Huang, Z.; Achlioptas, P.; and Tulyakov, S. 2022. NeROIC: neural rendering of objects from online image collections. ACM TOG, 1–12. Li, R.; Tancik, M.; and Kanazawa, A. 2022. NerfAcc: A General NeRF Acceleration Toolbox. arXiv preprint arXiv:2210.04847. Lin, C.-H.; Ma, W.-C.; Torralba, A.; and Lucey, S. 2021. Barf: Bundle-adjusting neural radiance fields. In ICCV. Lin, Y.; M¨uller, T.; Tremblay, J.; Wen, B.; Tyree, S.; Evans, A.; Vela, P. A.; and Birchfield, S. 2023. Parallel inversion of neural radiance fields for robust pose estimation. In ICRA. Long, X.; Lin, C.; Liu, L.; Liu, Y.; Wang, P.; Theobalt, C.; Komura, T.; and Wang, W. 2023. Neuraludf: Learning unsigned distance fields for multi-view reconstruction of surfaces with arbitrary topologies. In CVPR, 20834–20843. Lowe, D. G. 2004. Distinctive image features from scaleinvariant keypoints. IJCV, 91–110. Martin-Brualla, R.; Radwan, N.; Sajjadi, M. S.; Barron, J. T.; Dosovitskiy, A.; and Duckworth, D. 2021. Nerf in the wild: Neural radiance fields for unconstrained photo collections. In CVPR, 7210–7219. Meng, Q.; Chen, A.; Luo, H.; Wu, M.; Su, H.; Xu, L.; He, X.; and Yu, J. 2021. Gnerf: Gan-based neural radiance field without posed camera. 
In ICCV, 6351–6361. Meuleman, A.; Liu, Y.-L.; Gao, C.; Huang, J.-B.; Kim, C.; Kim, M. H.; and Kopf, J. 2023. Progressively optimized local radiance fields for robust view synthesis. In CVPR. Mi, Z.; Di, C.; and Xu, D. 2022. Generalized binary search network for highly-efficient multi-view stereo. In CVPR. Mildenhall, B.; Srinivasan, P. P.; Tancik, M.; Barron, J. T.; Ramamoorthi, R.; and Ng, R. 2020. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In ECCV. Moulon, P.; Monasse, P.; and Marlet, R. 2013. Global fusion of relative motions for robust, accurate and scalable structure from motion. In ICCV, 3248–3255. Moulon, P.; Monasse, P.; Perrot, R.; and Marlet, R. 2016. OpenMVG: Open multiple view geometry. In International Workshop on RRPR, 60–74. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6447 M¨uller, T.; Evans, A.; Schied, C.; and Keller, A. 2022. Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics, 41: 1–15. Mur-Artal, R.; Montiel, J. M. M.; and Tardos, J. D. 2015. ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Transactions on Robotics, 1147–1163. Niemeyer, M.; Barron, J. T.; Mildenhall, B.; Sajjadi, M. S.; Geiger, A.; and Radwan, N. 2022. Regnerf: Regularizing neural radiance fields for view synthesis from sparse inputs. In CVPR, 5480–5490. Oechsle, M.; Peng, S.; and Geiger, A. 2021. Unisurf: Unifying neural implicit surfaces and radiance fields for multiview reconstruction. In ICCV, 5589–5599. Pumarola, A.; Corona, E.; Pons-Moll, G.; and MorenoNoguer, F. 2021. D-nerf: Neural radiance fields for dynamic scenes. In CVPR, 10318–10327. Rosinol, A.; Leonard, J. J.; and Carlone, L. 2022. NeRFSLAM: Real-Time Dense Monocular SLAM with Neural Radiance Fields. arXiv preprint arXiv:2210.13641. Sabour, S.; Vora, S.; Duckworth, D.; Krasin, I.; Fleet, D. J.; and Tagliasacchi, A. 2023. RobustNeRF: Ignoring Distractors with Robust Losses. In CVPR, 20626–20636. Schonberger, J. L.; and Frahm, J.-M. 2016. Structure-frommotion revisited. In ICCV, 4104–4113. Sitzmann, V.; Martel, J.; Bergman, A.; Lindell, D.; and Wetzstein, G. 2020. Implicit neural representations with periodic activation functions. NeurIPS, 33: 7462–7473. Sucar, E.; Liu, S.; Ortiz, J.; and Davison, A. J. 2021. iMAP: Implicit mapping and positioning in real-time. In ICCV. Sun, C.; Sun, M.; and Chen, H.-T. 2022. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In CVPR, 5459–5469. Tancik, M.; Casser, V.; Yan, X.; Pradhan, S.; Mildenhall, B.; Srinivasan, P. P.; Barron, J. T.; and Kretzschmar, H. 2022. Block-nerf: Scalable large scene neural view synthesis. In CVPR, 8248–8258. Tancik, M.; Weber, E.; Ng, E.; Li, R.; Yi, B.; Wang, T.; Kristoffersen, A.; Austin, J.; Salahi, K.; Ahuja, A.; et al. 2023. Nerfstudio: A modular framework for neural radiance field development. In ACM SIGGRAPH, 1–12. Teed, Z.; and Deng, J. 2020. Raft: Recurrent all-pairs field transforms for optical flow. In ECCV, 402–419. Turki, H.; Ramanan, D.; and Satyanarayanan, M. 2022. Mega-nerf: Scalable construction of large-scale nerfs for virtual fly-throughs. In CVPR, 12922–12931. Ueda, I.; Fukuhara, Y.; Kataoka, H.; Aizawa, H.; Shishido, H.; and Kitahara, I. 2022. Neural Density-Distance Fields. In ECCV, 53–68. Verbin, D.; Hedman, P.; Mildenhall, B.; Zickler, T.; Barron, J. T.; and Srinivasan, P. P. 2022. Ref-nerf: Structured viewdependent appearance for neural radiance fields. In CVPR. 
Wang, P.; Liu, L.; Liu, Y.; Theobalt, C.; Komura, T.; and Wang, W. 2021a. Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. arXiv preprint arXiv:2106.10689. Wang, P.; Liu, Y.; Chen, Z.; Liu, L.; Liu, Z.; Komura, T.; Theobalt, C.; and Wang, W. 2023. F2-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories. In CVPR, 4150–4159. Wang, Y.; Skorokhodov, I.; and Wonka, P. 2022. HF-NeuS: Improved Surface Reconstruction Using High-Frequency Details. In NeurIPS. Wang, Z.; Wu, S.; Xie, W.; Chen, M.; and Prisacariu, V. A. 2021b. NeRF–: Neural radiance fields without known camera parameters. arXiv preprint arXiv:2102.07064. Warburg, F.; Weber, E.; Tancik, M.; Holynski, A.; and Kanazawa, A. 2023. Nerfbusters: Removing Ghostly Artifacts from Casually Captured NeRFs. In ICCV. Wu, C. 2013. Towards linear-time incremental structure from motion. In 3DV, 127–134. IEEE. Xiao, Y.; Xue, N.; Wu, T.; and Xia, G.-S. 2023. Level-S2fM: Structure From Motion on Neural Level Set of Implicit Surfaces. In CVPR, 17205–17214. Yan, Q.; Wang, Q.; Zhao, K.; Li, B.; Chu, X.; and Deng, F. 2023. Rethinking Disparity: A Depth Range Free MultiView Stereo Based on Disparity. In AAAI, 3091–3099. Yao, Y.; Luo, Z.; Li, S.; Fang, T.; and Quan, L. 2018. Mvsnet: Depth inference for unstructured multi-view stereo. In ECCV, 767–783. Yariv, L.; Gu, J.; Kasten, Y.; and Lipman, Y. 2021. Volume rendering of neural implicit surfaces. NeurIPS, 4805–4815. Yen-Chen, L.; Florence, P.; Barron, J. T.; Rodriguez, A.; Isola, P.; and Lin, T.-Y. 2021. inerf: Inverting neural radiance fields for pose estimation. In IROS, 1323–1330. Yi, K. M.; Trulls, E.; Lepetit, V.; and Fua, P. 2016. Lift: Learned invariant feature transform. In ECCV, 467–483. Yu, A.; Li, R.; Tancik, M.; Li, H.; Ng, R.; and Kanazawa, A. 2021a. Plenoctrees for real-time rendering of neural radiance fields. In CVPR, 5752–5761. Yu, A.; Ye, V.; Tancik, M.; and Kanazawa, A. 2021b. pixelnerf: Neural radiance fields from one or few images. In CVPR, 4578–4587. Zhang, J.; Zhan, F.; Wu, R.; Yu, Y.; Zhang, W.; Song, B.; Zhang, X.; and Lu, S. 2022. Vmrf: View matching neural radiance fields. In ACM MM, 6579–6587. Zhang, K.; Riegler, G.; Snavely, N.; and Koltun, V. 2020. Nerf++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492. Zhang, X.; Srinivasan, P. P.; Deng, B.; Debevec, P.; Freeman, W. T.; and Barron, J. T. 2021. Nerfactor: Neural factorization of shape and reflectance under an unknown illumination. ACM Transactions on Graphics, 1–18. Zhu, Z.; Peng, S.; Larsson, V.; Cui, Z.; Oswald, M. R.; Geiger, A.; and Pollefeys, M. 2023. Nicer-slam: Neural implicit scene encoding for rgb slam. arXiv preprint arXiv:2302.03594. Zhu, Z.; Peng, S.; Larsson, V.; Xu, W.; Bao, H.; Cui, Z.; Oswald, M. R.; and Pollefeys, M. 2022. Nice-slam: Neural implicit scalable encoding for slam. In CVPR, 12786–12796. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6448 | 2024 | 716 |
18,536 | Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation Shilin Yan1,2*, Renrui Zhang2,3*, Ziyu Guo3*, Wenchao Chen1, Wei Zhang1†, Hongyang Li2† Yu Qiao2, Hao Dong4,5, Zhongjiang He6, Peng Gao2† 1School of Computer Science, Fudan University 2Shanghai Artificial Intelligence Laboratory 3The Chinese University of Hong Kong 4School of CS, Peking University 5PKU-agibot Lab 6China Telecom Corporation Ltd. Data&AI Technology Company [email protected], {zhangrenrui, guoziyu, gaopeng}@pjlab.org.cn, [email protected] Abstract Recently, video object segmentation (VOS) referred by multimodal signals, e.g., language and audio, has evoked increasing attention in both industry and academia. It is challenging for exploring the semantic alignment within modalities and the visual correspondence across frames. However, existing methods adopt separate network architectures for different modalities, and neglect the inter-frame temporal interaction with references. In this paper, we propose MUTR, a Multimodal Unified Temporal transformer for Referring video object segmentation. With a unified framework for the first time, MUTR adopts a DETR-style transformer and is capable of segmenting video objects designated by either text or audio reference. Specifically, we introduce two strategies to fully explore the temporal relations between videos and multi-modal signals. Firstly, for low-level temporal aggregation before the transformer, we enable the multi-modal references to capture multi-scale visual cues from consecutive video frames. This effectively endows the text or audio signals with temporal knowledge and boosts the semantic alignment between modalities. Secondly, for high-level temporal interaction after the transformer, we conduct inter-frame feature communication for different object embeddings, contributing to better object-wise correspondence for tracking along the video. On Ref-YouTube-VOS and AVSBench datasets with respective text and audio references, MUTR achieves +4.2% and +8.7% J &F improvements to state-of-the-art methods, demonstrating our significance for unified multi-modal VOS. Code is released at https://github.com/OpenGVLab/MUTR. Introduction Multi-modal video object segmentation (VOS) aims to track and segment particular object instances across the video sequence referred by a given multi-modal signal, including referring video object segmentation (RVOS) with language reference, and audio-visual video object segmentation (AV-VOS) with audio reference. Different from the vanilla *These authors contributed equally. †Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. VOS (Xu et al. 2018; Yan et al. 2023) with only visual information, the multi-modal VOS is more challenging and in urgent demand, which requires a comprehensive understanding of different modalities and their temporal correspondence across frames. There exist two main challenges in multi-modal VOS. Firstly, it requires to not only explore the rich spatial-temporal consistency in a video, but also align the multi-modal semantics among image, language, and audio. Current approaches mainly focus on the visual-language or visual-audio modal fusion within independent frames, simply by cross-modal attention (Chen et al. 2019; Hu et al. 2020; Shi et al. 2018) or dynamic convolutions (Margffoy-Tuay et al. 2018) for feature interaction. 
This, however, neglects the multi-modal temporal information across frames, which is significant for consistent object segmentation and tracking along the video. Secondly, for the given references of two modalities, language and audio, existing works adopt different architecture designs and training strategies to separately tackle their modal-specific characteristics. Therefore, a powerful and unified framework for multi-modal VOS still remains an open question. To address these challenges, we propose MUTR, a Multimodal Unified Temporal transformer for Referring video object segmentation. Our approach, for the first time, presents a generic framework for both language and audio references, and enhances the interaction between temporal frames and multi-modal signals. In detail, we adopt a DETR-like (Carion et al. 2020) encoder-decoder transformer, which serves as the basic architecture to process visual information within different frames. On top of this, we introduce two attention-based modules respectively for low-level multi-modal temporal aggregation (MTA), and high-level multi-object temporal interaction (MTI). Firstly before the transformer, we utilize the encoded multi-modal references as queries to aggregate informative visual and temporal features via the MTA module. We concatenate the visual features of adjacent frames and adopt sequential attention blocks for multi-modal tokens to progressively capture temporal visual cues of different image scales. This contributes to better low-level cross-modal alignment The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6449 and temporal consistency. Then, we regard the multi-modal tokens after MTA as object queries and feed them into the transformer for frame-wise decoding. After that, we apply the MTI module to conduct inter-frame object-wise interaction, and maintain a set of video-wise query representations for associating objects across frames inspired by (Heo et al. 2022). Such a module enhances the instance-level temporal communication and benefits the visual correspondence for segmenting the same object in a video. Finally, we utilize a segmentation head following previous works (Wu et al. 2022, 2021) to output the final object mask referred by multimodality input. To evaluate our effectiveness, we conduct extensive experiments on several popular benchmarks for multi-modal VOS. RVOS with language reference (Ref-YouTube-VOS (Seo, Lee, and Han 2020) and Ref-DAVIS 2017 (Khoreva, Rohrbach, and Schiele 2019)), and one benchmark for AVVOS with audio reference (AVSBench (Zhou et al. 2022)). On Ref-YouTube-VOS (Seo, Lee, and Han 2020) and RefDAVIS 2017 (Khoreva, Rohrbach, and Schiele 2019) with language references, MUTR surpasses the state-of-the-art method ReferFromer (Wu et al. 2022) by +4.2% and +4.1% J &F scores, respectively. On AV-VOS (Zhou et al. 2022) with audio references, we also outperform Baseline (Zhou et al. 2022) by +8.7% J &F score. Overall, our contributions are summarized as follows: • For the first time, we present a unified transformer architecture, MUTR, to tackle video object segmentation referred by multi-modal inputs, i.e., language and audio. • To better align the temporal information with multi-modal signals, we propose two attention-based modules, MTA and MTI, respectively for low-level multi-scale aggregation and high-level multi-object interaction, achieving superior cross-modal understanding in a video. 
• On benchmarks of two modalities, our approach both achieves state-of-the-art results, e.g., +4.2 % and +4.1% J &F for Ref-YouTube-VOS and Ref-DAVIS 2017, +8.7% J &F for AV-VOS. This fully indicates the significance and generalization ability of MUTR. Related Work Referring video object segmentation (R-VOS). R-VOS introduces the language expression for target object tracking and segmentation, following the trend of vision-language learning (Zhang et al. 2022, 2023b; Zhu et al. 2023; Fang et al. 2023). Existing R-VOS methods can be broadly classified into three categories. One of the most straightforward ideas is to apply referring image segmentation methods (Ding et al. 2021; Yang et al. 2022; Wang et al. 2022) independently to video frames, such as RefVOS (Bellver et al. 2020). Obviously, it disregards the temporal information, which makes it difficult to process common video challenges like object disappearance in reproduction. Another approach involves propagating the target mask detected from key frame and selecting the object to be segmented based on a visual grounding model (Kamath et al. 2021; Luo et al. 2020). Although it applies the temporal information to some extent, its complex multi-stage training approach is not desirable. The recent work MTTR (Botach, Zheltonozhskii, and Baskin 2022) and ReferFormer (Wu et al. 2022) have employed query-based mechanisms. Nevertheless, they are end-to-end frameworks, they perform R-VOS task utilizing image-level segmentation. Constrastly, our unified framework fully explores video-level visual-attended language information for low-level temporal aggregation. Audio-visual video object segmentation (AV-VOS). Inspired by recent multi-modality efforts (Zhang et al. 2023a; Gao et al. 2023; Lin et al. 2023; Wang et al. 2023; Guo et al. 2023; Han et al. 2023b,a), AV-VOS is proposed for predicting pixel-level individual positions based on a given sound signal. There is little previous work on audio-visual video object segmentation. Until recently (Zhou et al. 2022) proposed the audio-visual video object segmentation dataset. Different from it, (Mo and Tian 2023) is based on the recent visual foundation model Segment Anything Model (Kirillov et al. 2023; Zhang et al. 2023c) to achieve audio-visual segmentation. However, all of them lack the temporal alignment between multi-modal information. Method In this section, we illustrate the details of our MUTR for multi-modal video object segmentation. We first describe the overall pipeline in Section . Then, in Section and Section , we respectively elaborate on the proposed designs of the multiscale temporal aggregation module (MTA), and multi-object temporal interaction module (MTI). Overall Pipeline The overall pipeline of MUTR is shown in Figure 1. We adopt a DETR-based (Carion et al. 2020) transformer as our basic architecture, including a visual backbone, a visual encoder and a decoder, on top of which, two modules MTA and MTI are proposed for temporal multi-modal interaction. In this section, we successively introduce the pipeline of MUTR for video object segmentation. Feature Backbone. Given an input video-text/audio pair, we first sample T frames from the video clip, and utilize the visual backbone and a pre-trained text/audio backbone to extract the image and multi-modal features. Specifically, we utilize ResNet (He et al. 2016) or Swin Transformer (Liu et al. 2021) as the visual backbone, and obtain the multiscale visual features of the 2nd, 3rd, 4th stages. 
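As a concrete illustration of this step, the multi-scale maps could be taken from a standard torchvision ResNet-50 as follows; the node names, input resolution, and frame count are assumptions made for illustration, not the authors' code.

import torch
from torchvision.models import resnet50
from torchvision.models.feature_extraction import create_feature_extractor

# Pull the 2nd-4th stage maps of a standard ResNet-50 for every sampled frame.
backbone = create_feature_extractor(
    resnet50(weights=None),
    return_nodes={'layer2': 'stage2', 'layer3': 'stage3', 'layer4': 'stage4'})

frames = torch.randn(5, 3, 360, 640)     # T = 5 sampled frames of one clip
feats = backbone(frames)                 # dict of (T, C_i, H_i, W_i) feature maps
print({k: tuple(v.shape) for k, v in feats.items()})
# {'stage2': (5, 512, 45, 80), 'stage3': (5, 1024, 23, 40), 'stage4': (5, 2048, 12, 20)}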
Concurrently, for the text reference, we employ an off-the-shelf language model, RoBERTa (Liu et al. 2019), to encode the linguistic embedding tokens. For the audio reference, we first process it as a spectrogram transform via a short-time Fourier Transform and then feed it into a pre-trained VGGish (Hershey et al. 2017) model. After the text/audio encoding, a linear projection layer is adopted to align the multi-modal feature dimension with the visual features. Note that, following previous work (Wu et al. 2022), we adopt an early fusion module in the visual backbone to inject preliminary text/audio knowledge into visual features. MTA Module. On top of feature extraction, we feed the visual and text/audio features into the multi-scale temporal The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6450 “A person is skateboarding.” Visual Backbone Audio Backbone MTA Module Segmentation Head Input Video Input Audio MTI Module 𝑀 Multi-scale Features Visual Encoder … Text Backbone Repeat Initialize Visual Decoder … … Visual Decoder … … Visual Decoder … … 𝑀 … … Input Text Loss on: • Mask • Bbox • Class 1st frame 2nd frame 3rd frame 𝑄 … … Output Result 𝑄′ 𝑃 Figure 1: The Overall Pipeline of MUTR for referring video object segmentation. We present a unified transformer architecture to tackle video object segmentation referred by multi-modal inputs. We propose MTA module and MTI module for low-level multi-scale aggregation and high-level multi-object interaction, respectively. aggregation module (MTA). We concatenate the visual features of adjacent frames, and adopt cascaded cross-attention blocks to enhance the multi-scale and multi-modal feature fusion, which is specifically described in Section . Visual Encoder-decoder Transformer. The basic transformer consists of a visual encoder and a visual decoder, which processes the video in a frame-independent manner to focus on the feature fusion within a single frame. In detail, the visual encoder adopts vanilla self-attention blocks to encode the multi-scale visual features. The visual decoder regards the encoded visual features as the key and value, and the output references from the MTA module as learnable object queries for decoding. Unlike the randomly initialized queries in traditional DETR (Carion et al. 2020), ours are input-conditioned ones obtained via MTA module, which contains video-level multi-modal prior knowledge. With the visual decoder, the object queries gain rich instance information, which provides effective cues for the final segmentation process. MTI Module. After the visual transformer, a multi-object temporal interaction (MTI) module is proposed for objectwise interaction, which is described in Section . In detail, we utilize an MTI encoder to communicate temporal features of the same object in different views. Then an MTI decoder is proposed to grasp information into a set of video-wise query representations for associating objects across frames, inspired by (Heo et al. 2022). Segmentation Head and Loss Function. On top of the components introduced above, we obtain the final mask predictions from the extracted multi-modal features via a segmentation head. We follow previous works (Wu et al. 2022, 2021) to design the segmentation head that contains a bounding box head, a classification head, and a mask head. Then, we find the best assignment from the predictions of MUTR by using Hungarian Matching (Carion et al. 2020). During training, we calculate three losses in MUTR, which are focal loss (Lin et al. 
2017) Lcls on the predictions of referred object sequence, Lbox on the bounding box of predicted instance, and Lmask on the predicted object masks. In detail, Lbox is the combination of L1 loss and GIoU loss (Rezatofighi et al. 2019), and Lmask is the summation of the Dice (Milletari, Navab, and Ahmadi 2016) and binary focal loss. The whole loss function is formulated as L = λcls Lcls + λbox Lbox + λmask Lmask , (1) where λcls, λbox and λmask denote the weights for Lcls, Lbox and Lmask. Multi-scale Temporal Aggregation To boost both the multi-modal and multi-frame feature fusion, we introduce Multi-scale Temporal Aggregation module for low-level temporal aggregation. The proposed MTA module generates a set of object queries that contain multi-modal knowledge for subsequent transformer decoding. Multi-scale Temporal Transform. As shown in Figure ??, the MTA module take the text/audio features Fr, and multiscale visual features as input, i.e., the extracted features of 2nd, 3rd, 4th stages from the visual backbone. We first utilize linear projection layers on the multi-scale features to transform them into the same dimension. Specifically, we separately utilize 1 × 1 convolution layers on the 2nd, 3rd, 4th The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6451 frame 1 frame 2 frame 3 Self-Attn. Block 𝐌𝐓𝐈𝐄𝐧𝐜𝐨𝐝𝐞𝐫 𝑁 … … … 𝑁 𝑄′ … … … … … 𝐌𝐮𝐥𝐭𝐢-𝐬𝐜𝐚𝐥𝐞𝐓𝐞𝐦𝐩𝐨𝐫𝐚𝐥𝐀𝐠𝐠𝐫𝐞𝐠𝐚𝐭𝐢𝐨𝐧 𝐹! 1st Cross-Attn. Block 2nd Cross−Attn. Block 𝐹!" Multi-scale Features Project & Concat. query FFN Self Attn. Cross Attn. Random Init. 𝐌𝐓𝐈𝐃𝐞𝐜𝐨𝐝𝐞𝐫 1st frame 2nd frame 3rd frame 𝐌𝐮𝐥𝐭𝐢-𝐨𝐛𝐣𝐞𝐜𝐭𝐓𝐞𝐦𝐩𝐨𝐫𝐚𝐥𝐈𝐧𝐭𝐞𝐫𝐚𝐜𝐭𝐢𝐨𝐧 3rd Cross−Attn. Block 4th Cross−Attn. Block frame 1 frame 2 frame 3 frame 1 frame 2 frame 3 frame 1 frame 2 frame 3 1st frame 2nd frame 3rd frame 1st frame 2nd frame 3rd frame 𝑄 𝑃 𝑃′ 1st scale 2nd scale 3rd scale 4th scale Figure 2: Multi-scale Temporal Aggregation. For low-level multi-modal temporal aggregation, we propose MTA module for inter-frame interaction, which generates tokens with multi-modal knowledge as the input queries for transformer decoding. scale features, and an additional 3 × 3 convolution layer on the 4th stage features to obtain the 5th scale features. We denote the projected features as {F i vj}, where 2 ≤i ≤5, 1 ≤ j ≤T represent the stage number and frame number. After that, we concatenate the visual features of adjacent frames for each scale, formulated as F i v = Concat(F i v1, F i v2, ..., F i vj, ..., F i vT ), (2) where 2 ≤i ≤5, 1 ≤j ≤T, F i vj represents the projected jth frame features of ith scale, and {F i v}5 i=2 is the final transformed multi-scale visual feature. Then, the resulting multi-modal temporal features are regarded as the key and value in the following cross-attention blocks. Multi-modal Cross-attention. On top of this, we adopt sequential cross-attention mechanisms for multi-modal tokens to progressively capture temporal visual cues of different image scales. We adopt four cross-attention blocks that are assigned to each scale respectively for multi-scale temporal feature extracting. In each attention block, the text/audio features serve as the query, while the multi-scale visual features serve as the key and value. We formulate it as Ff = Blocki−1(Fr, F i v, F i v), 2 ≤i ≤5, (3) where Block represents the sequential cross-attention blocks in MTA module, Ff is the output multi-modal tokens that contain the multi-modal information. After that, we simply repeat the class token of Ff for T × N times, where T is the frame number and N is the query number. 
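A minimal sketch of this aggregation (Eqs. 2 and 3) is given below; the channel sizes, the residual connections, and the use of plain nn.MultiheadAttention blocks are illustrative assumptions rather than the exact MTA implementation.

import torch
import torch.nn as nn

class MTASketch(nn.Module):
    def __init__(self, in_dims=(512, 1024, 2048), d=256, heads=8):
        super().__init__()
        c2, c3, c4 = in_dims
        self.proj = nn.ModuleList([nn.Conv2d(c, d, 1) for c in (c2, c3, c4)])   # 1x1 convs
        self.proj5 = nn.Conv2d(c4, d, 3, stride=2, padding=1)                   # extra 5th scale
        self.blocks = nn.ModuleList(
            [nn.MultiheadAttention(d, heads, batch_first=True) for _ in range(4)])

    def forward(self, feats, ref, n_queries=5):
        # feats: three (T, C_i, H_i, W_i) maps from backbone stages 2-4
        # ref:   (1, L, d) text/audio tokens F_r, class token first
        scales = [p(f) for p, f in zip(self.proj, feats)] + [self.proj5(feats[-1])]
        q = ref
        for blk, f in zip(self.blocks, scales):
            kv = f.flatten(2).transpose(1, 2).flatten(0, 1)[None]   # frames concatenated (Eq. 2)
            q = q + blk(q, kv, kv)[0]                               # reference tokens as query (Eq. 3)
        T = feats[0].shape[0]
        return q[:, :1].repeat(1, T * n_queries, 1)                 # repeated class token -> object queries

# e.g. MTASketch()([torch.randn(5, 512, 45, 80), torch.randn(5, 1024, 23, 40),
#                   torch.randn(5, 2048, 12, 20)], torch.randn(1, 12, 256)).shape == (1, 25, 256)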
We adopt them as the initialized queries fed into the visual transformer for frame-wise decoding. With the frame 1 frame 2 frame 3 Self-Attn. Block 𝐌𝐓𝐈 𝐄𝐧𝐜𝐨𝐝𝐞𝐫 𝑁 … … … 𝑁 𝑄′ … … … … … 𝐌𝐮𝐥𝐭𝐢-𝐬𝐜𝐚𝐥𝐞 𝐓𝐞𝐦𝐩𝐨𝐫𝐚𝐥 𝐀𝐠𝐠𝐫𝐞𝐠𝐚𝐭𝐢𝐨𝐧 𝐹! 1st Cross-Attn. Block 2nd Cross−Attn. Block 𝐹!" Multi-scale Features Project & Concat. query FFN Cross Attn. Self Attn. Random Init. 𝐌𝐓𝐈 𝐃𝐞𝐜𝐨𝐝𝐞𝐫 1st frame 2nd frame 3rd frame 𝐌𝐮𝐥𝐭𝐢-𝐨𝐛𝐣𝐞𝐜𝐭 𝐓𝐞𝐦𝐩𝐨𝐫𝐚𝐥 𝐈𝐧𝐭𝐞𝐫𝐚𝐜𝐭𝐢𝐨𝐧 3rd Cross−Attn. Block 4th Cross−Attn. Block frame 1 frame 2 frame 3 frame 1 frame 2 frame 3 frame 1 frame 2 frame 3 1st frame 2nd frame 3rd frame 1st frame 2nd frame 3rd frame 𝑄 𝑃 𝑃′ 1st scale 2nd scale 3rd scale 4th scale Figure 3: Multi-object Temporal Interaction. We introduce MTI module for inter-frame object-wise interaction, and maintain a set of video-wise query representations for associating objects across frames. MTA module, the pre-initialized input queries obtain prior multi-scale knowledge and temporal information for better multi-modal alignment during subsequent decoding. Multi-object Temporal Interaction As the visual transformer adopts a frame-independent manner and fails to interact information among multiple frames, we further introduce a Multi-object Temporal Interaction module to conduct inter-frame object-wise interaction. This module enhances the high-level temporal communication of objects, and benefits the visual correspondence for effective segmentation. The details of MTI are shown in Figure ??, which consists of an MTI encoder and an MTI decoder. MTI Encoder. We obtain the object query outputs P of each frame from the transformer decoder, and feed them into the MTI encoder, which contains a self-attention layer to conduct object-wise interaction across multiple frames, and a feed-forward network layer for feature transformation. To achieve more efficient implementation, we adopt shifted window-attention (Liu et al. 2021) with linear computational complexity in the self-attention layer. The process of MTI encoder is formulated as P ′ = MTI_Encoder(P) (4) where MTI_Encoder denotes the MTI encoder, and P ′ is the outputs of MTI encoder. MTI Decoder. Based on the MTI encoder, we maintain a set of video-wise query Q for associating objects across frames, which are randomly initialized. We regard the outputs from MTI encoder as the key and value, and feed them and The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6452 Method Backbone Ref-YouTube-VOS Ref-DAVIS 2017 J &F J F J &F J F CMSA (Ye et al. 2019) ResNet-50 34.9 33.3 36.5 34.7 32.2 37.2 URVOS (Seo, Lee, and Han 2020) 47.2 45.3 49.2 51.5 47.3 56.0 LBDT-4 (Ding et al. 2022b) 48.2 50.6 49.4 YOFO (Li et al. 2022) 48.6 47.5 49.7 53.3 48.8 57.9 ReferFormer (Wu et al. 2022) 58.7 57.4 60.1 61.1 58.0 64.1 MUTR 61.9 60.4 63.4 65.3 62.4 68.2 CITD (Liang et al. 2021) ResNet-101 56.4 54.8 58.1 ReferFormer (Wu et al. 2022) 59.3 58.1 60.4 61.0 58.1 63.8 MUTR 63.6 61.8 65.4 65.3 61.9 68.6 ReferFormer (Wu et al. 2022) Swin-L 64.2 62.3 66.2 63.9 60.8 67.0 MUTR 68.4 66.4 70.4 68.0 64.8 71.3 MTTR (Botach, Zheltonozhskii, and Baskin 2022) Video-Swin-T 55.3 54.0 56.6 MANet (Chen et al. 2022) 55.6 54.8 56.5 ReferFormer (Wu et al. 2022) 62.6 59.9 63.3 62.8 60.8 67.0 MUTR 64.0 62.2 65.8 66.5 63.0 70.0 VLT (Ding et al. 2022a) Video-Swin-B 63.8 61.9 65.6 61.6 58.9 64.3 ReferFormer (Wu et al. 2022) 64.9 62.8 67.0 64.3 60.7 68.0 MUTR 67.5 65.4 69.6 66.4 62.8 70.0 Table 1: Performance of MUTR on Ref-YouTube-VOS and Ref-DAVIS 2017 Datasets. 
We report the results of MUTR and prior works on multiple backbones, where our MUTR shows the state-of-the-art performance on all datasets. video-wise queries Q into MTI decoder for video-wise decoding. The MTI decoder consists of a cross-attention layer, a self-attention layer, and a feed-forward network layer. We formulate them as Q′ = MTI_Decoder(Q, P ′, P ′) (5) where MTI_Decoder represents the MTI decoder, Q′ is the outputs of MTI decoder. In this way, the proposed MTI module promotes high-level temporal fusion and enhances the connection and interaction of the same objects in different frames, which further contributes to effective segmentation. Joint Training for Multi-modality As a unified VOS framework for multi-modality, MUTR has the potential to segment video objects referred by either text or audio reference. To achieve this, we conduct joint training by combining both text- and audio-referred datasets. Specifically, to balance the data amount of two modalities, the joint training data is composed of partial Ref-YouTube-VOS (Seo, Lee, and Han 2020) (text reference) and the entire AVSBench S4 (Zhou et al. 2022) (audio reference). We sample a subset of Ref-YouTube-VOS for training (10,093 clips (5 frames per clip) out of 72,920), for which we utilize only one description for videos with multiple text descriptions, and filter out half of the instances based on odd-index positions for training. For text or audio reference, we accordingly switch to their respective encoders for feature encoding, i.e., RoBERTa for text and VGGish for audio. Then, they share the same subsequent network modules, i.e., MTA, visual encoder, visual decoder, MTI, and segments head. By our temporal and crossmodality interaction modules, the jointly trained MUTR can obtain superior performance on either of the two modalities. Experiments Quantitative Results Ref-YouTube-VOS. As shown in Table 1, MUTR outperforms the previous state-of-the-art methods by a large margin under on all datasets. On Ref-YouTube-VOS, MUTR with a lightweight backbone ResNet-50 achieves the superior performance with overall J &F of 61.9%, an improvement of +3.2% than the previous state-of-the-art method Referformer. By adopting a more powerful backbone SwinTransformer (Liu et al. 2021), MUTR improves the performance to J &F 68.4%, which is +4.2% than the previous method ReferFormer (Wu et al. 2022). Using a more strong backbone, our method has a higher percentage of improvement, which better reflects the robustness of our method on the scaled-up model size. To reflect the powerful temporal modeling capability of MUTR, we therefore adopt the video Swin transformer (Liu et al. 2022) as the backbone, which is a spatial-temporal encoder that can effectively capture the spatial and temporal cues simultaneously, to compensate for the temporal limitations of the ReferFormer as discussed in (Hu et al. 2022). It can be observed that our method significantly outperforms the ReferFormer, which demonstrates the effectiveness of the temporal consistency in our model. Ref-DAVIS 2017. On the Ref-DAVIS 2017, our method also achieves the best results under the same backbone setting. Since ReferFormer (Wu et al. 2022) does not include the resultson Ref-DAVIS 2017, we report its results using the official pre-trained models provided by ReferFormer. AV-VOS. Table 2 shows the performance of our MUTR on the AVSBench dataset. 
MUTR significantly surpasses all the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6453 Method Backbone AVSBench S4 AVSBench MS3 J &F J F J &F J F LVS (Chen et al. 2021) ResNet-18 44.5 37.9 51.0 31.3 29.5 33.0 SST (Duke et al. 2021) ResNet-50 73.2 66.3 80.1 49.9 42.6 57.2 LGVT (Zhang et al. 2021) Swin-B 81.1 74.9 87.3 50.0 40.7 59.3 Baseline (Zhou et al. 2022) ResNet-50 78.8 72.8 84.8 52.9 47.9 57.8 Baseline (Zhou et al. 2022) PvT-V2 83.3 78.7 87.9 59.3 54.0 64.5 MUTR ResNet-50 83.0 78.6 87.3 61.6 57.0 66.1 ResNet-101 83.1 78.5 87.6 63.7 59.0 68.3 PvT-V2 85.1 80.7 89.5 67.9 63.7 72.0 Swin-L 85.7 81.5 89.8 69.0 65.0 73.0 Video-Swin-T 83.0 78.7 87.2 64.0 59.2 68.7 Video-Swin-S 84.1 79.8 88.3 67.3 62.7 71.8 Video-Swin-B 85.7 81.6 89.7 68.8 64.0 73.5 Table 2: Performance of MUTR on AVSBench Dataset. MUTR surpasses the state-of-the-art method. Methods J &F J F ReferFormer (Wu et al. 2022) 32.5 32.6 32.4 MUTR∗ 39.9 39.4 40.5 MUTR 41.3 40.6 42.0 Table 3: Performance of MUTR on Ref-YouTube-VOS by Multi-modality Joint Training. Methods J &F J F Baseline (Zhou et al. 2022) 78.8 72.8 84.8 MUTR∗ 79.7 74.5 84.9 MUTR 81.4 76.8 85.9 Table 4: Performance of MUTR on AVSBench S4 by Multi-modality Joint Training. Components Block Num. J &F J F Multi-scale Temporal ✓ 1 61.3 59.7 62.7 ✓ 1 60.4 58.9 61.9 ✓ ✓ 1 61.9 60.4 63.4 ✓ ✓ 2 60.7 59.3 62.2 ✓ ✓ 3 60.4 59.1 61.7 Table 5: Ablation Study of MTA Module. Components Block Num. J &F J F Encoder Decoder ✓ 3 60.3 58.8 61.9 ✓ 3 61.2 60.0 62.6 ✓ ✓ 3 61.9 60.4 63.4 ✓ ✓ 2 61.1 59.5 62.6 ✓ ✓ 1 60.8 59.3 62.3 Table 6: Ablation Study of MTI Module. previous best competitors (J &F 83.0% VS 78.8%; 61.6% VS 52.9%) with the same ResNet-50 backbone. We also achieve a new state-of-the-art performance with Swin-L (Liu et al. 2021) backbone. By employing a stronger backbone, we observe consistent performance improvement of MUTR, indicating the strong generalization of our approach. Joint Training Datasets. We keep most training hyperparameters consistent with our previous text-referred video object segmentation experiments, and adopt ResNet-50 as the visual backbone. Table 3 and 4 present the performance of MUTR by joint training on Ref-YouTube-VOS and AVSBench S4, respectively. Therein, ReferFormer, the ‘Baseline’, and MUTR∗are all trained exclusively on text- or audio-referred dataset, while MUTR is trained on the multimodality joint dataset. As shown, the single unified MUTR by joint training can achieve even better performance than their separate training. This indicates the effectiveness of our proposed architecture to serve as a unified framework simultaneously for text and audio input. Qualitative Results The first two columns of Figure 4 visualize some qualitative results in comparison with ReferFormer (Wu et al. 2022), which lacks inter-frame interaction in terms of temporal dimension. As demonstrated, along with multiple highly similar objects in the video, ReferFormer (Wu et al. 2022) is easier to misidentifies them. In contrast, MUTR can associate all the objects in temporal, which can better track and segment all targets accurately. The last column of Figure 4 visualizes the audio-visual result compared with Baseline (Zhou et al. 2022) on AVSBbench S4. With temporal consistency, MUTR successfully tracks and segments challenging situations that are surrounded or occluded by similar instances. 
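Since the ablations that follow vary the number of MTI encoder and decoder blocks, a minimal sketch of the MTI module described in the Method section (Eqs. 4 and 5) is provided here for reference. It uses plain attention in place of the shifted window attention adopted in the paper, and the dimensions and video-query count are illustrative assumptions.

import torch
import torch.nn as nn

class MTISketch(nn.Module):
    def __init__(self, d=256, heads=8, n_video_queries=5):
        super().__init__()
        self.enc_attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.enc_ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))
        self.video_q = nn.Parameter(torch.randn(1, n_video_queries, d))   # randomly initialized Q
        self.dec_cross = nn.MultiheadAttention(d, heads, batch_first=True)
        self.dec_self = nn.MultiheadAttention(d, heads, batch_first=True)
        self.dec_ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))

    def forward(self, P):
        # P: (B, T, N, d) frame-wise object queries from the visual decoder
        B, T, N, d = P.shape
        tokens = P.reshape(B, T * N, d)
        # MTI encoder: object-wise interaction across frames (Eq. 4)
        tokens = tokens + self.enc_attn(tokens, tokens, tokens)[0]
        P_prime = tokens + self.enc_ffn(tokens)
        # MTI decoder: video-wise queries attend to P' (Eq. 5)
        q = self.video_q.expand(B, -1, -1)
        q = q + self.dec_cross(q, P_prime, P_prime)[0]
        q = q + self.dec_self(q, q, q)[0]
        return q + self.dec_ffn(q)      # Q': video-level representations for object association

# e.g. MTISketch()(torch.randn(1, 5, 5, 256)).shape == (1, 5, 256)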
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6454 ReferFormer MUTR ReferFormer MUTR Baseline MUTR TimeStep Ref-YouTube-VOS Ref-DAVIS 2017 AVSBench ”A fire truck pulling out with another behind it.” References “A man in a black jacket wearing glasses." ”A box in the hands of a man wearing glasses." ”A man in the center wearing a blue and black jacket." Ambulance Siren Figure 4: Qualitative Results of MUTR. We visualize the results between ReferFormer (Wu et al. 2022) and MUTR on R-VOS benchmarks and between Baseline (Zhou et al. 2022) and MUTR on AV-VOS benchmark. Compared with ReferFormer, MUTR performs better on temporal consistency when segmenting multiple similar objects, i.e., fire truck in Ref-YouTube-VOS and box in Ref-DAVIS 2017. Also, compared with the baseline of AV-VOS (Zhou et al. 2022) that denoted as ‘Baseline’ in this figure, MUTR can handle serve occlusion. MTA MTI J &F J F FPS Parameters 60.2 58.7 61.7 19.64 168.1M ✓ 60.8 59.3 62.2 19.53 176.4M ✓ 61.5 60.1 63.0 19.44 169.3M ✓ ✓ 61.9 60.4 63.4 19.37 177.6M Table 7: Ablation Study of the MTA and MTI Modules. Ablation Studies In this section, we perform experiments to analyze the main components and hyper-parameters of MUTR. All the experiments are conducted with the ResNet-50 backbone and evaluate their impact by the Ref-YouTube-VOS performance. Effectiveness of Main Componenets. Table 7 demonstrates the effectiveness of MTA and MTI proposed in our framework. The performance will be seriously degraded from 61.9% to 60.2% by removing MTA and MTI modules. Besides, our MTA and MTI modules introduce a marginal increase in inference latency, demonstrating favorable implementation and parameter efficiency. Ablation Study on MTA. In Table 5, if either the singlescale temporal aggregation or multi-scale aggregation at the image level are adopted, the performance of MUTR would significantly drop to 60.4% and 61.3%, respectively, which demonstrates the necessity of the MTA module. We also ablate the number of MTA blocks. As seen in Table 5, more MTA blocks cannot bring further performance improvement, since (1) not enough videos for training; (2) the embedding space of visual and reference is only 256-dimensional, which is difficult to optimize so many parameters. Ablation Study on MTI. As shown in Table 6, the performance of MUTR is improved by using more MTI blocks. A possible reason is that the larger the MTI blocks, the more sufficient temporal communication between instance-level can be performed. Moreover, using only the encoder or decoder, the performance of MUTR would both decline. Conclusion This paper proposes a MUTR, a Multi-modal Unified Temporal transformer for Referring video object segmentation. A simple yet and effective Multi-scale Temporal Aggregation (MTA) is introduced for multi-modal references to explore low-level multi-scale visual information in videolevel. Besides, the high-level Multi-object Temporal Interaction (MTI) is designed for inter-frame feature communication to achieve temporal correspondence between the instancelevel across the entire video. Aided by the MTA and MTI, our MUTR achieves new state-of-the-art performance on three RVOS/AV-VOS benchmarks compared to previous solutions. We hope the MTA and MTI will help ease the future study of multi-modal VOS and related tasks (e.g., referring video object tracking and video instance segmentation). We do not foresee negative social impact from the proposed work. 
Acknowledgements This work was supported by National Natural Science Foundation of China (Grant No. 62206272). This work was also supported in part by Scientific and Technological Innovation Action Plan of Shanghai Science and Technology Committee (No.22511101502 and No.21DZ2203300). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6455 References Bellver, M.; Ventura, C.; Silberer, C.; Kazakos, I.; Torres, J.; and Giro-i Nieto, X. 2020. Refvos: A closer look at referring expressions for video object segmentation. arXiv preprint arXiv:2010.00263. Botach, A.; Zheltonozhskii, E.; and Baskin, C. 2022. Endto-end referring video object segmentation with multimodal transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4985–4995. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, 213–229. Springer. Chen, D.-J.; Jia, S.; Lo, Y.-C.; Chen, H.-T.; and Liu, T.-L. 2019. See-through-text grouping for referring image segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 7454–7463. Chen, H.; Xie, W.; Afouras, T.; Nagrani, A.; Vedaldi, A.; and Zisserman, A. 2021. Localizing visual sounds the hard way. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16867–16876. Chen, W.; Hong, D.; Qi, Y.; Han, Z.; Wang, S.; Qing, L.; Huang, Q.; and Li, G. 2022. Multi-Attention Network for Compressed Video Referring Object Segmentation. In Proceedings of the 30th ACM International Conference on Multimedia, 4416–4425. Ding, H.; Liu, C.; Wang, S.; and Jiang, X. 2021. Visionlanguage transformer and query generation for referring segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 16321–16330. Ding, H.; Liu, C.; Wang, S.; and Jiang, X. 2022a. VLT: Vision-Language Transformer and Query Generation for Referring Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence. Ding, Z.; Hui, T.; Huang, J.; Wei, X.; Han, J.; and Liu, S. 2022b. Language-bridged spatial-temporal interaction for referring video object segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4964–4973. Duke, B.; Ahmed, A.; Wolf, C.; Aarabi, P.; and Taylor, G. W. 2021. Sstvos: Sparse spatiotemporal transformers for video object segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5912– 5921. Fang, R.; Yan, S.; Huang, Z.; Zhou, J.; Tian, H.; Dai, J.; and Li, H. 2023. InstructSeq: Unifying Vision Tasks with Instruction-conditioned Multi-modal Sequence Generation. arXiv preprint arXiv:2311.18835. Gao, P.; Han, J.; Zhang, R.; Lin, Z.; Geng, S.; Zhou, A.; Zhang, W.; Lu, P.; He, C.; Yue, X.; et al. 2023. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010. Guo, Z.; Zhang, R.; Zhu, X.; Tang, Y.; Ma, X.; Han, J.; Chen, K.; Gao, P.; Li, X.; Li, H.; et al. 2023. Point-bind & point-llm: Aligning point cloud with multi-modality for 3d understanding, generation, and instruction following. arXiv preprint arXiv:2309.00615. Han, J.; Gong, K.; Zhang, Y.; Wang, J.; Zhang, K.; Lin, D.; Qiao, Y.; Gao, P.; and Yue, X. 2023a. OneLLM: One Framework to Align All Modalities with Language. arXiv preprint arXiv:2312.03700. 
Han, J.; Zhang, R.; Shao, W.; Gao, P.; Xu, P.; Xiao, H.; Zhang, K.; Liu, C.; Wen, S.; Guo, Z.; et al. 2023b. Imagebindllm: Multi-modality instruction tuning. arXiv preprint arXiv:2309.03905. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770– 778. Heo, M.; Hwang, S.; Oh, S. W.; Lee, J.-Y.; and Kim, S. J. 2022. Vita: Video instance segmentation via object token association. arXiv preprint arXiv:2206.04403. Hershey, S.; Chaudhuri, S.; Ellis, D. P.; Gemmeke, J. F.; Jansen, A.; Moore, R. C.; Plakal, M.; Platt, D.; Saurous, R. A.; Seybold, B.; et al. 2017. CNN architectures for large-scale audio classification. In 2017 ieee international conference on acoustics, speech and signal processing (icassp), 131–135. IEEE. Hu, Z.; Chen, B.; Gao, Y.; Ji, Z.; and Bai, J. 2022. 1st Place Solution for YouTubeVOS Challenge 2022: Referring Video Object Segmentation. arXiv preprint arXiv:2212.14679. Hu, Z.; Feng, G.; Sun, J.; Zhang, L.; and Lu, H. 2020. Bidirectional relationship inferring network for referring image segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4424–4433. Kamath, A.; Singh, M.; LeCun, Y.; Synnaeve, G.; Misra, I.; and Carion, N. 2021. Mdetr-modulated detection for endto-end multi-modal understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1780–1790. Khoreva, A.; Rohrbach, A.; and Schiele, B. 2019. Video object segmentation with language referring expressions. In Computer Vision–ACCV 2018: 14th Asian Conference on Computer Vision, Perth, Australia, December 2–6, 2018, Revised Selected Papers, Part IV 14, 123–141. Springer. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A. C.; Lo, W.-Y.; et al. 2023. Segment anything. arXiv preprint arXiv:2304.02643. Li, D.; Li, R.; Wang, L.; Wang, Y.; Qi, J.; Zhang, L.; Liu, T.; Xu, Q.; and Lu, H. 2022. You only infer once: Cross-modal meta-transfer for referring video object segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 1297–1305. Liang, C.; Wu, Y.; Zhou, T.; Wang, W.; Yang, Z.; Wei, Y.; and Yang, Y. 2021. Rethinking cross-modal interaction from a top-down perspective for referring video object segmentation. arXiv preprint arXiv:2106.01061. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; and Dollár, P. 2017. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, 2980–2988. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6456 Lin, Z.; Liu, C.; Zhang, R.; Gao, P.; Qiu, L.; Xiao, H.; Qiu, H.; Lin, C.; Shao, W.; Chen, K.; et al. 2023. SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models. arXiv preprint arXiv:2311.07575. Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision, 10012–10022. Liu, Z.; Ning, J.; Cao, Y.; Wei, Y.; Zhang, Z.; Lin, S.; and Hu, H. 2022. Video swin transformer. 
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 3202–3211. Luo, G.; Zhou, Y.; Sun, X.; Cao, L.; Wu, C.; Deng, C.; and Ji, R. 2020. Multi-task collaborative network for joint referring expression comprehension and segmentation. In Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition, 10034–10043. Margffoy-Tuay, E.; Pérez, J. C.; Botero, E.; and Arbeláez, P. 2018. Dynamic multimodal instance segmentation guided by natural language queries. In Proceedings of the European Conference on Computer Vision (ECCV), 630–645. Milletari, F.; Navab, N.; and Ahmadi, S.-A. 2016. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 fourth international conference on 3D vision (3DV), 565–571. Ieee. Mo, S.; and Tian, Y. 2023. AV-SAM: Segment Anything Model Meets Audio-Visual Localization and Segmentation. arXiv preprint arXiv:2305.01836. Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; and Savarese, S. 2019. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 658–666. Seo, S.; Lee, J.-Y.; and Han, B. 2020. Urvos: Unified referring video object segmentation network with a large-scale benchmark. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XV 16, 208–223. Springer. Shi, H.; Li, H.; Meng, F.; and Wu, Q. 2018. Key-wordaware network for referring expression image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), 38–54. Wang, K.; Ren, H.; Zhou, A.; Lu, Z.; Luo, S.; Shi, W.; Zhang, R.; Song, L.; Zhan, M.; and Li, H. 2023. MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning. arXiv preprint arXiv:2310.03731. Wang, Z.; Lu, Y.; Li, Q.; Tao, X.; Guo, Y.; Gong, M.; and Liu, T. 2022. Cris: Clip-driven referring image segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 11686–11695. Wu, J.; Jiang, Y.; Bai, S.; Zhang, W.; and Bai, X. 2021. SeqFormer: Sequential Transformer for Video Instance Segmentation. arXiv preprint arXiv:2112.08275. Wu, J.; Jiang, Y.; Sun, P.; Yuan, Z.; and Luo, P. 2022. Language as queries for referring video object segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4974–4984. Xu, N.; Yang, L.; Fan, Y.; Yue, D.; Liang, Y.; Yang, J.; and Huang, T. 2018. Youtube-vos: A large-scale video object segmentation benchmark. arXiv preprint arXiv:1809.03327. Yan, S.; Xu, X.; Zhang, R.; Hong, L.; Chen, W.; Zhang, W.; and Zhang, W. 2023. PanoVOS: Bridging Non-panoramic and Panoramic Views with Transformer for Video Segmentation. arXiv e-prints, arXiv–2309. Yang, Z.; Wang, J.; Tang, Y.; Chen, K.; Zhao, H.; and Torr, P. H. 2022. Lavt: Language-aware vision transformer for referring image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18155–18165. Ye, L.; Rochan, M.; Liu, Z.; and Wang, Y. 2019. Cross-modal self-attention network for referring image segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 10502–10511. Zhang, J.; Xie, J.; Barnes, N.; and Li, P. 2021. Learning generative vision transformer with energy-based latent space for saliency prediction. Advances in Neural Information Processing Systems, 34: 15448–15463. 
Zhang, R.; Han, J.; Zhou, A.; Hu, X.; Yan, S.; Lu, P.; Li, H.; Gao, P.; and Qiao, Y. 2023a. LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention. arXiv preprint arXiv:2303.16199. Zhang, R.; Hu, X.; Li, B.; Huang, S.; Deng, H.; Li, H.; Qiao, Y.; and Gao, P. 2023b. Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners. CVPR 2023. Zhang, R.; Jiang, Z.; Guo, Z.; Yan, S.; Pan, J.; Dong, H.; Gao, P.; and Li, H. 2023c. Personalize segment anything model with one shot. arXiv preprint arXiv:2305.03048. Zhang, R.; Zhang, W.; Fang, R.; Gao, P.; Li, K.; Dai, J.; Qiao, Y.; and Li, H. 2022. Tip-Adapter: Training-free Adaption of CLIP for Few-shot Classification. In ECCV 2022. Springer Nature Switzerland. Zhou, J.; Wang, J.; Zhang, J.; Sun, W.; Zhang, J.; Birchfield, S.; Guo, D.; Kong, L.; Wang, M.; and Zhong, Y. 2022. Audio– Visual Segmentation. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXVII, 386–403. Springer. Zhu, X.; Zhang, R.; He, B.; Zhou, A.; Wang, D.; Zhao, B.; and Gao, P. 2023. Not all features matter: Enhancing few-shot clip with adaptive prior refinement. ICCV 2023. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6457 | 2024 | 717 |
18,537 | Embracing Language Inclusivity and Diversity in CLIP through Continual Language Learning Bang Yang1,2, Yong Dai2, Xuxin Cheng1, Yaowei Li1,2, Asif Raza1, Yuexian Zou1* 1 ADSPLAB, School of ECE, Peking University, Shenzhen, China 2 Pengcheng Laboratory, Shenzhen, China {yangbang, chengxx, ywl, asifraza151, zouyx}@pku.edu.cn, [email protected] Abstract While vision-language pre-trained models (VL-PTMs) have advanced multimodal research in recent years, their mastery in a few languages like English restricts their applicability in broader communities. To this end, there is an increasing interest in developing multilingual VL models via a joint-learning setup, which, however, could be unrealistic due to expensive costs and data availability. In this work, we propose to extend VL-PTMs’ language capacity by continual language learning (CLL), where a model needs to update its linguistic knowledge incrementally without suffering from catastrophic forgetting (CF). We begin our study by introducing a model dubbed CLL-CLIP, which builds upon CLIP, a prevailing VL-PTM that has acquired image-English text alignment. Specifically, CLL-CLIP contains an expandable token embedding layer to handle linguistic differences. It solely trains token embeddings to improve memory stability and is optimized under cross-modal and cross-lingual objectives to learn the alignment between images and multilingual texts. To alleviate CF raised by covariate shift and lexical overlap, we further propose a novel approach that ensures the identical distribution of all token embeddings during initialization and regularizes token embedding learning during training. We construct a CLL benchmark covering 36 languages based on MSCOCO and XM3600 datasets and then evaluate multilingual image-text retrieval performance. Extensive experiments verify the effectiveness of CLL-CLIP and show that our approach can boost CLL-CLIP, e.g., by 6.7% in text-to-image average Recall@1 on XM3600, and improve various state-ofthe-art methods consistently. Our code and data are available at https://github.com/yangbang18/CLFM. Introduction Large-scale vision-language pre-trained models (VL-PTMs) such as CLIP (Radford et al. 2021), Flamingo (Alayrac et al. 2022), and BLIP-2 (Li et al. 2023a) have made great strides in multimodal research (Gan et al. 2022; Chen et al. 2023a). Nevertheless, the majority of the current literature is biased toward a few languages, predominantly English, making it a barrier to the widespread adoption and accessibility of VLPTMs across different linguistic communities. Considering that we are living in a world with roughly 7,000 languages, *Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Our Work PTM Back Propagate & Optimize Token Embedding Pre-Trained Model (PTM) Token Embedding Regularization Prior Works Identical Distribution Back Propagate & Optimize Figure 1: For continual language learning, prior works in NLP (Garcia et al. 2021; Huang et al. 2022) train full model parameters to learn a new language, with new token embeddings initialized randomly without considering the distribution of prior ones. Our work requires the least amount of components to be trained (i.e., the token embedding layer) and targets token embedding initialization and regularization to avert catastrophic forgetting. Note that our frozen vision PTM is not plotted for clarity. 
it is indispensable to strive for greater language inclusivity and diversity in VL-PTMs. To endow VL-PTMs with an ability to understand multilingual contexts, there is an increasing interest in developing multilingual VL-PTMs via a joint-learning setup (Zhou et al. 2021; Zhang, Hu, and Jin 2022; Chen et al. 2023b; Li et al. 2023b), which has shown remarkable performance in tasks like multilingual image-text retrieval. However, two critical issues plague the joint learning. One is the high computational cost and inflexibility of learning new knowledge, as we need to re-train models on new data alongside all previous data. Another one is that data is not always available during the learning cycle due to privacy and other factors. Alternatively, continual language learning (CLL), also known as lifelong language learning, is a more practical setup to extend PTMs’ language capacity with low costs and high flexibility. The goal of CLL is to consolidate multilingual performance into a single, parameter- and memory-constrained model, ensuring that this model can The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6458 evolve under non-stationary data streams without suffering from catastrophic forgetting (McCloskey and Cohen 1989). While CLL has been extensively studied in natural language processing (NLP) (Biesialska, Biesialska, and Costa-juss`a 2020; Escolano, Costa-Juss`a, and Fonollosa 2021; Zhang et al. 2022; M’hamdi, Ren, and May 2023), the effective integration of VL-PTMs with CLL is still under-explored and it presents distinctive challenges like leveraging visual information to aid in language learning. In this paper, we study the multilingual acquisition of VL-PTMs in the CLL setup. We begin our study by selecting CLIP (Radford et al. 2021), a prevailing VL-PTM that can correlate images and English texts into the same latent space, as our backbone. Next, we propose a model dubbed CLL-CLIP to incrementally learn new languages. Specifically, our model contains an expandable token embedding layer to handle linguistic differences. Such design is crucial to prevent our model from encountering a high portion of out-of-vocabulary tokens. During training, CLL-CLIP keeps all pre-trained components frozen except its token embedding layer to retain previously acquired knowledge and is optimized under cross-modal and cross-lingual objectives to learn the alignment between images and multilingual texts. Next, we propose a CLL approach that targets Token Embedding Initialization and Regularization (TEIR) to alleviate catastrophic forgetting (CF). Figure 1 differentiates our TEIR from prior approaches in NLP (Garcia et al. 2021; Huang et al. 2022). In particular, to reduce CF raised by covariate shift (Shimodaira 2000; Ioffe and Szegedy 2015), our approach ensures the identical distribution of all token embeddings during initialization. To mitigate CF caused by the lexical overlap (Pfeiffer et al. 2021), our approach regularizes token embedding learning based on the number of times that tokens appear in the tasks they have already learned by CLL-CLIP. Our insight is that if a token is common in previously learned tasks, its embedding update should be penalized to avoid task interference. To evaluate the effectiveness of our CLL-CLIP model and TEIR approach, we first construct a benchmark covering 36 languages based on MSCOCO (Chen et al. 2015) and XM3600 (Thapliyal et al. 2022) datasets. 
We then reproduce various state-of-the-art (SOTA) continual learning and parameter-efficient fine-tuning methods based on our CLL-CLIP model on this benchmark. Extensive experiments verify the effectiveness of CLL-CLIP and show that TEIR can boost CLL-CLIP, e.g., by 6.7% in text-to-image average Recall@1 on XM3600, and improve the performance of SOTA methods consistently. Our main contributions are as follows. (1) To the best of our knowledge, we present the first systematic study on enhancing the language capacity of dual-stream VL-PTMs through continual language learning. (2) We design a model named CLL-CLIP for this challenging setup and introduce a novel approach called TEIR that underscores the initialization and regularization of token embeddings to mitigate catastrophic forgetting. (3) We construct a CLL benchmark for evaluating image-text retrieval across 36 languages. Extensive experiments verify the effectiveness of our CLLCLIP and TEIR and demonstrate the generality of TEIR on various SOTA methods. Related Work Multilingual VL Pre-Training As monolingual visuallanguage pre-training models (VL-PTMs) continue to evolve, an increasing amount of effort is directed toward enhancing the adaptability of these models for multilingual scenarios via pre-training. M3P (Ni et al. 2021) and UC2 (Zhou et al. 2021) adopt a BERT-like single-stream architecture (Devlin et al. 2019) for pre-training, yet they diverge in their data augmentation strategies. M3P uses word-level augmentation to obtain code-switched VL pairs, whereas UC2 utilizes translation engines to transform English image captions into other languages. In contrast, MURAL (Jain et al. 2021), M-CLIP (Carlsson et al. 2022), MLA (Zhang, Hu, and Jin 2022), and mCLIP (Chen et al. 2023b) build their model on a dual-stream model like CLIP for better efficiency on retrieval tasks. These models use the same data augmentation strategy as UC2, but MURAL and mCLIP additionally consider annotated translation pairs. Besides retrieval tasks, recent encoder-decoder-based PaLI (Chen et al. 2023c) and WS-mVLP (Li et al. 2023b) have shown their superiority in multilingual VL generation tasks. However, all the above methods develop multilingual VL-PTMs via a joint-learning setup and thus suffer from high costs and inflexibility of learning new languages. In this paper, we focus on endowing dual-stream VL-PTMs with a multilingual understanding ability via a more practical and flexible setup, i.e., continual language learning. Continual Learning (CL) The core aspiration of CL is to enable machines to mimic the strong adaptability of humans to continually acquire, update, organize, and exploit knowledge (Wang et al. 2023). The computer vision (CV) community has witnessed significant advances in CL, which can be mainly divided into four categories. Specifically, regularization-based methods penalize changes to model parameters or predictions (Kirkpatrick et al. 2017; Lee et al. 2019; Ahn et al. 2021); rehearsal-based methods store historical data or features to retain previously acquired knowledge (Chaudhry et al. 2019; Buzzega et al. 2020; Cha, Lee, and Shin 2021); architecture-based methods assign isolated parameters for different tasks (Yoon et al. 2018; Li et al. 2019; Ke, Liu, and Huang 2020); prompt-based methods add parameter-efficient modules into frozen PTMs to harness their power (Wang et al. 2022a,b; Smith et al. 2023; Gao et al. 2023). 
The success of CL in CV inspires related research in NLP (Biesialska, Biesialska, and Costa-juss`a 2020; Wu et al. 2022; M’hamdi, Ren, and May 2023). In particular, a line of research studies on how to add new languages to pre-trained neural machine translation models. One attempt is to add and train language-specific components, like encoder/decoder (Escolano, Costa-Juss`a, and Fonollosa 2021) and adapter (Berard 2021). Another attempt proposes to substitute models’ vocabulary dynamically (Garcia et al. 2021; Huang et al. 2022). In this paper, we differentiate our work from prior ones in NLP in Figure 1. Unlike those regularization methods that need to estimate parameter importance by feeding data into the model, our approach only requires the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6459 A girl sings on the stage (a) CLL-CLIP ⼀位⼥孩在 舞台上唱歌 ℒ!" VL-PTM ℒ!# Image Encoder Text Encoder Token Embedding Expandable Token Embedding Shared "$ "% "& #$ #% #& $'() Embeddings Copy Expandable Token Embedding Vocab of %' Vocab of the last task $'() &$' Merge Initialize Randomly Initialization CLL-CLIP that has learned ! −1 tasks Training Dataset at the '-th task (%') that covers new language(s) to be learned # = (#$, #%, #&) Sampling (b) TEIR Expandable Token Embedding Regularization Gradients L2 Weight Decay Re-Scaled by , $'()\ &$' $'()⋂&$' &$'\$'() , = / , < 1 , = 1 Preprocess $~&! Ensure Identical Distribution $'()\ &$' $'()⋂&$' &$'\$'() Text Encoder BP & Optimize Figure 2: Overview of our proposals. (a): CLL-CLIP builds upon a two-tower VL-PTM (i.e., CLIP), keeps all pre-trained components frozen, and contains an expandable and trainable token embedding layer for continual language learning. (b): Our TEIR approach eases catastrophic forgetting by underscoring the initialization and regularization of token embeddings. lexical statistics of data. By contrast with the CL of CLIP in visual recognition (Ding et al. 2022; Thengane et al. 2022), we value the CL of CLIP in language acquisition. Approach In our continual language learning (CLL) setting, a model needs to sequentially learn T tasks, each with its corresponding training dataset Dt(t ∈[1, T]) that covers nonoverlapping subsets of languages. After training a model parameterized by ϕt on Dt, the goal of CLL is to ensure the model can perform well in previous t tasks. To achieve that, we propose CLL-CLIP and TEIR, as introduced next. CLL-CLIP Architecture As shown in Figure 2(a), our model builds upon CLIP to avoid from-scratch training and contains an expandable token embedding layer parameterized by θt to vectorize multilingual texts. In particular, CLIP consists of a vision encoder, a text encoder, and a token embedding layer mainly for English1. Let denote their parameters as Ωve, Ωte, and Ωemb, respectively. Then parameters of our model at the t-th task are ϕt = {Ωve, Ωte, Ωemb, θt}, where Ωemb can be discarded during inference. We keep all CLIP parameters Ω∗frozen and solely train θt. This choice is in line with the research on efficient VL pre-training (Zhai et al. 2022; Zhang, Hu, and Jin 2022) and also benefits the preservation of previously acquired knowledge during continual learning (Wang et al. 2022b; Smith et al. 2023). Vocab Substitution Let denote the vocab corresponding to θt as Vt. Before training, V0 is identical to CLIP’s vocab, and θ0 = Ωemb. 
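To make the trainable-component split concrete, the following is a minimal PyTorch-style sketch of a frozen CLIP backbone paired with an expandable, trainable token-embedding layer. Names such as `clip_model.token_embedding` and `encode_text_from_embeddings` are illustrative assumptions rather than the authors' released code or an official CLIP API.

```python
import torch
import torch.nn as nn

class CLLTextTower(nn.Module):
    def __init__(self, clip_model):
        super().__init__()
        self.clip = clip_model
        for p in self.clip.parameters():                  # freeze Ω_ve, Ω_te, Ω_emb
            p.requires_grad_(False)
        w = self.clip.token_embedding.weight.data         # assumed attribute name
        self.token_embedding = nn.Embedding(w.size(0), w.size(1))   # θ_0 = Ω_emb
        self.token_embedding.weight.data.copy_(w)

    def expand(self, num_new_tokens: int):
        """Grow θ_t when the merged vocab V_t gains new tokens."""
        old = self.token_embedding.weight.data
        grown = nn.Embedding(old.size(0) + num_new_tokens, old.size(1))
        grown.weight.data[: old.size(0)] = old            # keep previously learned rows
        self.token_embedding = grown                      # new rows get (re)initialized later

    def encode_foreign(self, token_ids):
        # r^F = g(x^F; Ω_te, θ_t): frozen text encoder applied on trainable embeddings
        x = self.token_embedding(token_ids)
        return self.clip.encode_text_from_embeddings(x)   # assumed helper, see lead-in
```

Note that after expanding the embedding, the optimizer has to be rebuilt (or its parameter groups refreshed) so that the newly added rows are actually updated.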
For t ∈ [1, T], Vt needs to be dynamically updated to accommodate the lexicon of new languages. (Footnote 1: We separate token embeddings from CLIP for clarity, and the text encoder refers to the rest of the components (positional embeddings, Transformer blocks, projection head) used to obtain text features.) Thus, we first adopt the same BPE procedure (Sennrich, Haddow, and Birch 2016) as CLIP to build a vocab V̂t from Dt and then follow (Garcia et al. 2021) to obtain Vt by merging Vt−1 and V̂t, i.e., Vt = Vt−1 ∪ V̂t. There are two issues to be noted: (1) the embedding initialization of V̂t\Vt−1 (new tokens that only exist in V̂t) and (2) the sub-optimal nature of Vt due to the lack of comprehensive text statistics. We will address (1) in TEIR and discuss (2) in later experiments.

Training Objectives
Each training sample for our CLL-CLIP is a triplet x = (xI, xE, xF) that includes an image xI, a text in the native language xE (i.e., an English text), and a foreign text xF. At the t-th task, we obtain global representations of the triplet x as follows:

rI = g(xI; Ωve),  rE = g(xE; Ωte, Ωemb),  rF = g(xF; Ωte, θt),   (1)

where g(·) indicates the feed-forward transformation. We suggest training CLL-CLIP with cross-modal and cross-lingual objectives, i.e., Lcm and Lcl, so that CLL-CLIP can correlate rI with rF based on the already acquired knowledge, i.e., the alignment between rI and rE. Following CLIP, we implement Lcm as InfoNCE-based image-text contrast (van den Oord, Li, and Vinyals 2018):

Lcm = (1/2) (L^{I→F}_InfoNCE + L^{F→I}_InfoNCE),
L^{Y→Z}_InfoNCE = −(1/K) Σ_{k=1}^{K} log [ exp(⟨rY_k, rZ_k⟩/τ) / Σ_{l=1}^{K} exp(⟨rY_k, rZ_l⟩/τ) ],   (2)

where K denotes the batch size, ⟨·, ·⟩ the cosine similarity, and τ a temperature hyper-parameter. Motivated by
Let denote token statistics till the t-th task as ct ∈R|Vt|, where ct,j is the number of times that the j-th token appears in prior t−1 tasks and c1,j is initialized as 1. To overcome CF raised by lexical overlap, we re-scale the rate of L2 weight decay β and gradients ∇L(θt) w.r.t. token embeddings θt as follows, with standard stochastic gradient descent (SGD) with L2 weight decay as an example: θt,j ←(1 −αβλt,j)θt,j −αλt,j∇L(θt,j) (5) where α is a learning rate, and λt,j is defined as: λt,j = 0, if tokenj ∈Vt,old 1/(ct,j + 1), if tokenj ∈Vt,∩ 1, if tokenj ∈Vt,new (6) For sophisticated optimizers with momentum, the scaling operation is still applied on β and ∇L(θt) directly. As indicated by Equation (5) and (6), we keep token embeddings unrelated to the t-th task intact, penalize embedding learning of Vt,∩, while updating embeddings of Vt,new as usual. This method averts task interference and ensures the effective learning of text features (rF ), leading to a better tradeoff between memory stability and learning plasticity. MSCOCO36 XM3600 # Train/Val/Test Images 113,287/5,000/5,000 -/-/3,600 # Languages 1 + 35 36 # Captions per Language 616,767 ≈7260 Table 1: Dataset Statistics. # means “The number of”. MSCOCO36 is obtained by translating the English captions of MSCOCO into the other 35 languages in XM3600 via Google Translator, following (Thapliyal et al. 2022). Experiments Experimental Settings Benchmark We build a CLL benchmark based on MSCOCO (Chen et al. 2015) and XM3600 (Thapliyal et al. 2022) to evaluate the effectiveness of our proposals. Here are the reasons: (1) MSCOCO is a popular VL benchmark and it contains high-quality image-English caption pairs. (2) XM3600 consists of image-caption pairs in 36 languages2 spoken by geographically-diverse people. This dataset covers the most diverse languages to our best knowledge. (3) The multi-lingual VL benchmark IGLUE (Bugliarello et al. 2022) varies in both task types and languages, making it hard to justify the effect of linguistic differences. As shown in Table 1, we use Google Translator3 for data augmentation and thus obtain a multilingual dataset named MSCOCO36. We train models on MSCOCO36 based on the Karpathy split (Karpathy and Fei-Fei 2015). Then, we report in-domain and out-of-domain results on MSCOCO36 and XM3600, respetively. Tasks and Task Order We treat each language as a task and thus obtain T = 36 tasks. Models are trained on the English task first and then the rest 35 tasks in a random order. Metrics Let aj,i (j ≥i) denotes Recall@1 (a popular metric in information retrieval) on the i-th task after training on the j-th task. In line with the continual learning research (Wang et al. 2023), we compute two metrics: • Average Recall: ARj = 1 j Pj i=1 aj,i, a composite metric for a model’s learning capacity and memory stability. • Forgetting: Fj = 1 j−1 Pj−1 i=1 maxk∈[1,j−1](ak,i −aj,i), whose lower value means less catastrophic forgetting. Unless otherwise specified, we report the-end ART and FT performance in percentile and omit the subscript. Implementation Details We follow (Zhang, Hu, and Jin 2022; Yang et al. 2023) to adopt the ViT-B/16 variant of CLIP as the backbone. We follow OpenCLIP (Ilharco et al. 2021) and set the initial temperature of Lcm to 0.07. 
We 2The 36 languages are Arabic, Bengali, Czech, Danish, German, Greek, English, Spanish, Farsi, Finnish, Filipino, French, Hebrew, Hindi, Croatian, Hungarian, Indonesian, Italian, Japanese, Korean, Maori, Dutch, Norwegian, Polish, Portuguese, Cusco Quechua, Romanian, Russian, Swedish, Swahili, Telugu, Thai, Turkish, Ukrainian, Vietnamese, and Chinese-Simplified. 3https://translate.google.com/ The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6461 Setting Model MSCOCO36 (In-Domain) XM3600 (Out-of-Domain) Image-to-Text Text-to-Image Image-to-Text Text-to-Image AR (↑) F (↓) AR (↑) F (↓) AR (↑) F (↓) AR (↑) F (↓) Joint Learning CLL-CLIP 53.3 31.4 50.7 37.1 M-CLIP (2022) 42.7 25.9 53.6 41.1 PaLI (2023c) 36.0 28.5 Continual Learning CLL-CLIP 29.6 23.2 15.2 15.6 26.4 23.1 17.6 18.4 with TEIR 38.3 (+8.7) 14.7 (+8.5) 20.5 (+5.3) 10.5 (+5.1) 35.0 (+8.6) 15.3 (+7.8) 24.3 (+6.7) 12.5 (+5.9) oEWC (2018) 37.0 15.7 19.3 11.3 32.3 17.2 21.8 14.1 with TEIR 40.2 (+3.2) 12.7 (+3.0) 21.6 (+2.3) 9.3 (+2.0) 36.7 (+4.4) 13.4 (+3.8) 25.6 (+3.8) 11.2 (+2.9) ER (2019) 34.1 17.9 17.8 12.3 29.0 20.0 19.4 16.0 with TEIR 39.3 (+5.2) 12.8 (+5.1) 21.5 (+3.7) 8.8 (+3.5) 35.4 (+6.4) 13.9 (+6.1) 24.7 (+5.3) 11.2 (+4.8) DER (2020) 37.6 14.6 19.5 10.6 31.6 17.4 21.0 14.4 with TEIR 42.7 (+5.1) 9.4 (+5.2) 23.4 (+3.9) 6.9 (+3.7) 38.3 (+6.7) 10.9 (+6.5) 26.7 (+5.7) 9.3 (+5.1) MLA† (2022) 35.9 20.9 18.4 15.0 30.7 21.8 20.6 18.1 with TEIR 46.0 (+10.1) 11.2 (+9.7) 25.2 (+6.8) 8.6 (+6.4) 41.1 (+10.4) 12.3 (+9.5) 29.0 (+8.4) 10.7 (+7.4) P-Tuning† (2022) 30.1 23.9 15.0 16.3 24.9 23.9 16.4 19.3 with TEIR 41.1 (+11.0) 13.3 (+10.6) 22.2 (+7.2) 9.6 (+6.7) 35.5 (+10.6) 13.8 (+10.1) 25.4 (+9.0) 11.5 (+7.8) LoRA† (2022) 31.8 22.5 16.2 15.9 28.0 22.7 18.7 18.9 with TEIR 41.6 (+9.8) 12.9 (+9.6) 22.8 (+6.6) 9.7 (+6.2) 38.0 (+10.0) 13.9 (+8.8) 27.0 (+8.3) 11.7 (+7.2) DualPrompt (2022a) 28.4 23.6 14.1 15.8 25.5 22.9 16.4 18.4 with TEIR 38.3 (+9.9) 14.0 (+9.6) 19.7 (+5.6) 10.6 (+5.2) 35.3 (+9.8) 14.1 (+8.8) 23.6 (+7.2) 12.1 (+6.3) CodaPrompt (2023) 28.9 22.6 14.4 15.2 24.6 22.2 15.9 17.6 with TEIR 41.4 (+12.5) 9.7 (+12.9) 22.3 (+7.9) 7.1 (+8.1) 36.7 (+12.1) 9.3 (+12.9) 25.3 (+9.4) 7.9 (+9.7) Table 2: Retrieval performance on MSCOCO36 and XM3600. †: Task identity is needed during inference. All results are reproduced by ourselves except that of PaLI. Note that PaLI is not optimized for image-text retrieval, but we draw its results from (Chen et al. 2023c) for completeness. The numbers in brackets indicate the absolute improvements brought by our approach. search the hyperparameters γ1 and γ2 in Equation (4) from values {1, 0.1, 0.01} and set γ1 = 0.01 and γ2 = 1 based on the AR metric on the validation set. For models without TEIR, we initialize new token embeddings with N(0, 0.022) following OpenCLIP and set ∀t, ∀j, λt,j = 1 (Equation (6)). For each task, we set the vocab size to 10K. We use batches of 128 samples and AdamW (Loshchilov and Hutter 2019) with L2 weight decay of 0.05 to train models for 3 epochs. We set the learning rate fixed to 5e-5 after 10% warm-up iterations. The model achieving the highest summation of Recall@{1, 5, 10} on the current-task validation set is selected for training on the next task. We conduct experiments in PyTorch on a single NVIDIA V100 card and every run of an experiment takes less than 20 hours. 
Comparing Methods We reproduce the following SOTA continual learning (CL) and parameter-efficient fine-tuning (PEFT) methods for comparisons: (1) regularization-based online Elastic Weight Consolidation (oEWC) (Schwarz et al. 2018) that penalizes the changes in model parameters; (2) rehearsal-based ER (Chaudhry et al. 2019) that stores historical training samples for current-task learning; (3) rehearsal- and regularization-based DER (Buzzega et al. 2020) that stores features of previously learned samples for knowledge distillation; (4) architecture-based MLA (Zhang, Hu, and Jin 2022), P-Tuning (Liu et al. 2022), and LoRA (Hu et al. 2022) that inserts task-specific adapters (Houlsby et al. 2019), learnable prompt tokens, and decomposed matrices into frozen PTMs, respectively. (5) prompt-based DualPrompt (Wang et al. 2022a) and CodaPrompt (Smith et al. 2023) that rely on a key-query mechanism to generate proper prompts for frozen PTMs. We reproduce all the above methods in the text branch of CLLCLIP with the aforementioned implementation details. Main Results Table 2 provides retrieval results of different models. Specifically, joint-learning models CLL-CLIP and M-CLIP (Carlsson et al. 2022) respectively achieve the highest AR scores on MSCOCO36 and XM3600. As the joint-learning setting covers all languages at the (pre-)training stage, its results can be regarded as the upper bound of CL models. When learning different languages incrementally, all CL models experience different levels of forgetting. Notably, our TEIR can consistently boost all CL models across all metrics and datasets, e.g., with absolute improvements ranging from 3.7% to 10.2% in text-to-image AR on XM3600. The imThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6462 Setting Initialization Regularization Oracle Vocab MSCOCO36 (In-Domain) XM3600 (Out-of-Domain) Identical Distribution Gradient Weight Decay Image-to-Text Text-to-Image Image-to-Text Text-to-Image AR (↑) F (↓) AR (↑) F (↓) AR (↑) F (↓) AR (↑) F (↓) (1): CLL-CLIP × 29.6 23.2 15.2 15.6 26.4 23.1 17.6 18.4 (2) √ × 32.4 21.9 16.8 15.1 29.9 22.5 20.5 18.2 (3) √ × 31.9 20.3 16.9 13.5 28.4 20.2 19.5 16.1 (4) √ × 33.3 19.2 17.0 13.5 30.0 19.0 19.9 15.5 (5) √ √ × 37.2 14.9 19.7 10.6 33.3 15.4 22.8 12.6 (6): (1) + TEIR √ √ √ × 38.3 14.7 20.5 10.5 35.0 15.3 24.3 12.5 (7) √ √ √ √ 42.4 10.5 23.2 7.9 38.4 12.0 27.1 10.0 Table 3: Ablation study on MSCOCO36 and XM3600. By default, we dynamically substitute the model’s vocab when new languages arrive, whereas setting (7) requires the accessibility of corpora of all languages to construct a task-shared vocab. Figure 3: Convergence analysis for different settings in Table 3, focusing (left) the training loss and (right) the Fisher eigenvalues. Lower values respectively indicate closer to global minima and the convergence to flatter minima. proved performance demonstrates the generality of TEIR across various CL and PEFT methods, proves the validity of our approach to maintaining acquired language skills, and highlights the importance of proper token embedding initialization and regularization. Ablations and Additional Analyses In the following, we delve deeper into our proposals via ablation studies and additional analyses, with “CLL-CLIP with TEIR” as the default model unless otherwise specified. Effect of Initialization Table 3(1,2) shows that ensuring identical distribution of new and prior token embeddings during initialization improves AR and F metrics by large margins. 
Compared with setting (5), setting (6) can still improve the model’s learning capacity without sacrificing memory stability. These results suggest the importance of addressing the covariate shift problem in CLL. Effect of Regularization Table 3(3,4) shows that imposing constraints on gradients or L2 weight decay when updating token embeddings can effectively mitigate the catastrophic forgetting problem of CLL-CLIP. So, it is crucial to penalize the embedding learning of lexically overlapping tokens and keep unrelated token embeddings intact. Moreover, the superiority of setting (5) against (3,4) indicates the complementary nature of these two strategies. Since our regularization method solely relies on the lexical statistics of the Figure 4: Analysis of CLL-CLIP’s core designs: (left) trainable components and (right) training objectives. data, it incurs negligible additional costs, e.g., the training time of settings (1,6) is 11.1 and 11.3 hours, respectively. Effect of Vocab Substitution Strategy We stick to the principle of continual learning and thus dynamically substitute the model’s vocab when new languages arrive. In contrast, if we are allowed to access corpora of all languages, we can build an oracle vocab and only need to substitute the model’s vocab at the beginning. As shown in Table 3(6,7), using the oracle vocab contributes to a boost in performance. Since we employ BPE to construct vocab in this work, the improvements confirm BPE’s capacity to learn more accurate merging operations of sub-word units from extensive text statistics. Therefore, the exploration of refined vocab substitution strategies is a valuable avenue in future studies. Effect of TEIR on Model Convergence We consider the model at the end of training and measure the property of the training minima of settings (1-7) in Table 3. Firstly, we calculate the average loss across all training samples of MSCOCO36. As depicted in Figure 3(left), the training loss of settings (2-7) is lower than that of (1), illustrating that TEIR facilitates the convergence of CLL-CLIP towards a global minimum. Furthermore, we compute the trace of the empirical Fisher information matrix w.r.t. all training samples of MSCOCO36 and treat it as a proxy for Hessian eigenvalues following (Chaudhari et al. 2017; Kirkpatrick et al. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6463 Figure 5: Translate-test performance on Hebrew data in XM3600. Although translate-test CLIP is a strong pipeline system, our model can process foreign texts directly (η = 0) or achieve better retrieval performance via score fusion when translations are available (η > 0). 2017; Buzzega et al. 2020). As depicted in Figure 3(right), settings (2-7) produce lower eigenvalues than (1), revealing that TEIR helps CLL-CLIP converge to a flatter minimum. Effectiveness of CLL-CLIP We here ablate the core designs of CLL-CLIP, including the trainable components and training objectives. As shown in Figure 4(left), training the full model obtains dramatically degrades as more tasks are processed. Instead, our proposal of solely training token embeddings can preserve knowledge effectively. In Figure 4(right), we can find that the cross-lingual objective is more efficient than the cross-modal objective to align images with multi-lingual texts, and leveraging both of them can achieve better results. This observation indicates the potential of utilizing text-only pairs for CLL. 
Comparison with Translate-Test CLIP To enable the original CLIP to understand multilingual texts, one intuitive approach is translating foreign texts into English, which is known as Translate Test in the literature (Bugliarello et al. 2022; Li et al. 2023b). In Figure 5, we compare our model with the translate-test CLIP under different η for score fusion. Specifically, we can see that translate-test CLIP is a strong pipeline system that can achieve 44.3 text-to-image Recall@1 on Hebrew. In contrast, our CLL model can process Hebrew texts directly (η = 0). Encouragingly, our single model can even surpass the translate-test CLIP (46.7 vs. 44.3) when including an additional augmented CC3M dataset (Sharma et al. 2018) for training. Therefore, continual language learning presents a viable avenue to evade the computation costs of translation and the error accumulation problem of a translation-based pipeline system. Moreover, when translations are available (η > 0), our model can simultaneously measure image-English text and imageHebrew text similarities to achieve score fusion, leading to better retrieval performance compared with the case η = 0. Comparisons with Multilingual VL-PTMs In Table 4, we compare CLL models with multilingual VL-PTMs on Multi30K (Elliott et al. 2016). As we can see, although Setting Model en de fr cs Avg. Learn in English CLIP (2021) 86.3 38.4 48.9 8.1 45.4 Joint Learning (<10 languages) M3P (2021) 57.9 36.8 27.1 20.4 35.6 UC2 (2021) 66.6 62.5 60.4 55.1 61.2 MLA (2022) 86.4 80.8 80.9 72.9 80.3 (>60 languages) M-CLIP (2022) 84.1 79.1 77.5 76.3 79.3 Continual Learning (36 languages) CLL-CLIP 75.1 36.2 46.5 57.6 53.8 with TEIR 82.5 48.6 60.4 66.5 64.5 MLA (2022) 73.8 42.7 52.7 64.6 58.4 with TEIR 82.4 58.1 68.0 74.1 70.7 Table 4: Zero-shot image-text retrieval results (averaged over recall@{1,5,10} on two directions) on Multi30K under English (en), German (de), French (fr), and Czech (cs). the joint-learning MLA model performs generally the best among the four languages, it obtains inferior performance in Czech compared with “MLA with TEIR” which has learned 36 languages in a continual learning manner. Given the gap between joint learning and continual learning, there is much room for improving CLL models. Conclusion In this paper, we present to our best knowledge the first systematical study on extending the language capacities of dual-stream vision-language pre-trained models (VL-PTMs) under the practical continual language learning setting. We introduce a CLL-CLIP model and a TEIR approach to learn the alignment between images and multilingual texts while mitigating catastrophic forgetting raised by the covariate shift and lexical overlap problems. To comprehensively validate our proposals, we construct a benchmark spanning 36 languages and conduct evaluations on multilingual imagetext retrieval. Through a series of experiments and analyses, we verify the effectiveness of CLL-CLIP and TEIR and gain insights into their inner workings. We hope our research can serve as a basis to enhance the accessibility of VL-PTMs across different linguistic communities. Limitations This paper focuses exclusively on the continual language learning of CLIP-like VL-PTMs, emphasizing evaluations for image-text retrieval. Nonetheless, we posit that our ideas hold the potential to be adaptable to encoderdecoder-based VL-PTMs and generation tasks like visual captioning (Yang, Cao, and Zou 2023). We leave it to our future study. 
Moreover, TEIR requires current-task text statistics to compute Equation (6), making it difficult to handle the challenges posed by, e.g., boundary-free continual learning (Aljundi, Kelchtermans, and Tuytelaars 2019). Acknowledgements This paper was partially supported by NSFC (No. 6217600 8), the project of Pengcheng Laboratory (PCL2023A08), and Shenzhen Science and Technology Research Program (No. GXWD20201231165807007-20200814115301001). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6464 References Ahn, H.; Kwak, J.; Lim, S.; Bang, H.; Kim, H.; and Moon, T. 2021. SS-IL: Separated Softmax for Incremental Learning. In ICCV, 844–853. Alayrac, J.-B.; Donahue, J.; Luc, P.; Miech, A.; Barr, I.; Hasson, Y.; Lenc, K.; Mensch, A.; Millican, K.; Reynolds, M.; Ring, R.; Rutherford, E.; Cabi, S.; Han, T.; Gong, Z.; Samangooei, S.; Monteiro, M.; Menick, J.; Borgeaud, S.; Brock, A.; Nematzadeh, A.; Sharifzadeh, S.; Binkowski, M.; Barreira, R.; Vinyals, O.; Zisserman, A.; and Simonyan, K. 2022. Flamingo: A Visual Language Model for Few-Shot Learning. In NeurIPS, 23716–23736. Aljundi, R.; Kelchtermans, K.; and Tuytelaars, T. 2019. Task-Free Continual Learning. In CVPR, 11246–11255. Berard, A. 2021. Continual Learning in Multilingual NMT via Language-Specific Embeddings. In WMT, 542–565. Biesialska, M.; Biesialska, K.; and Costa-juss`a, M. R. 2020. Continual Lifelong Learning in Natural Language Processing: A Survey. In COLING, 6523–6541. Bugliarello, E.; Liu, F.; Pfeiffer, J.; Reddy, S.; Elliott, D.; Ponti, E. M.; and Vuli´c, I. 2022. IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages. In ICML, 2370–2392. Buzzega, P.; Boschini, M.; Porrello, A.; Abati, D.; and CALDERARA, SIMONE. 2020. Dark Experience for General Continual Learning: A Strong, Simple Baseline. In NeurIPS, 15920–15930. Carlsson, F.; Eisen, P.; Rekathati, F.; and Sahlgren, M. 2022. Cross-Lingual and Multilingual CLIP. In LREC, 6848–6854. Cha, H.; Lee, J.; and Shin, J. 2021. Co2L: Contrastive Continual Learning. In ICCV, 9516–9525. Chaudhari, P.; Choromanska, A.; Soatto, S.; LeCun, Y.; Baldassi, C.; Borgs, C.; Chayes, J.; Sagun, L.; and Zecchina, R. 2017. Entropy-SGD: Biasing Gradient Descent Into Wide Valleys. In ICLR, 1–19. Chaudhry, A.; Rohrbach, M.; Elhoseiny, M.; Ajanthan, T.; Dokania, P. K.; Torr, P. H. S.; and Ranzato, M. 2019. On Tiny Episodic Memories in Continual Learning. arxiv:1902.10486. Chen, F.-L.; Zhang, D.-Z.; Han, M.-L.; Chen, X.-Y.; Shi, J.; Xu, S.; and Xu, B. 2023a. VLP: A Survey on VisionLanguage Pre-Training. Mach. Intell. Res., 20(1): 38–56. Chen, G.; Hou, L.; Chen, Y.; Dai, W.; Shang, L.; Jiang, X.; Liu, Q.; Pan, J.; and Wang, W. 2023b. mCLIP: Multilingual CLIP via Cross-Lingual Transfer. In ACL, 13028–13043. Chen, X.; Fang, H.; Lin, T.-Y.; Vedantam, R.; Gupta, S.; Dollar, P.; and Zitnick, C. L. 2015. Microsoft COCO Captions: Data Collection and Evaluation Server. arxiv:1504.00325. Chen, X.; Wang, X.; Changpinyo, S.; Piergiovanni, A. J.; Padlewski, P.; Salz, D.; Goodman, S.; Grycner, A.; Mustafa, B.; Beyer, L.; Kolesnikov, A.; Puigcerver, J.; Ding, N.; Rong, K.; Akbari, H.; Mishra, G.; Xue, L.; Thapliyal, A.; Bradbury, J.; Kuo, W.; Seyedhosseini, M.; Jia, C.; Ayan, B. K.; Riquelme, C.; Steiner, A.; Angelova, A.; Zhai, X.; Houlsby, N.; and Soricut, R. 2023c. PaLI: A Jointly-Scaled Multilingual Language-Image Model. In ICLR, 1–33. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. 
BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT, 4171–4186. Ding, Y.; Liu, L.; Tian, C.; Yang, J.; and Ding, H. 2022. Don’t Stop Learning: Towards Continual Learning for the CLIP Model. arxiv:2207.09248. Elliott, D.; Frank, S.; Sima’an, K.; and Specia, L. 2016. Multi30K: Multilingual English-German Image Descriptions. In ACL Workshop: VL, 70–74. Escolano, C.; Costa-Juss`a, M. R.; and Fonollosa, J. A. R. 2021. From Bilingual to Multilingual Neural-based Machine Translation by Incremental Training. J. Assoc. Inf. Sci. Technol, 72(2): 190–203. Gan, Z.; Li, L.; Li, C.; Wang, L.; Liu, Z.; and Gao, J. 2022. Vision-Language Pre-Training: Basics, Recent Advances, and Future Trends. Found. Trends Comput. Graph. Vis., 14(3–4): 163–352. Gao, Q.; Zhao, C.; Sun, Y.; Xi, T.; Zhang, G.; Ghanem, B.; and Zhang, J. 2023. A Unified Continual Learning Framework with General Parameter-Efficient Tuning. In ICCV, 11483–11493. Garcia, X.; Constant, N.; Parikh, A.; and Firat, O. 2021. Towards Continual Learning for Multilingual Machine Translation via Vocabulary Substitution. In NAACL-HLT, 1184– 1192. Houlsby, N.; Giurgiu, A.; Jastrzebski, S.; Morrone, B.; Laroussilhe, Q. D.; Gesmundo, A.; Attariyan, M.; and Gelly, S. 2019. Parameter-Efficient Transfer Learning for NLP. In ICML, 2790–2799. Hu, E. J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; and Chen, W. 2022. LoRA: Low-Rank Adaptation of Large Language Models. In ICLR, 1–13. Huang, K.; Li, P.; Ma, J.; and Liu, Y. 2022. Entropy-Based Vocabulary Substitution for Incremental Learning in Multilingual Neural Machine Translation. In EMNLP, 10537– 10550. Ilharco, G.; Wortsman, M.; Carlini, N.; Taori, R.; Dave, A.; Shankar, V.; Namkoong, H.; Miller, J.; Hajishirzi, H.; Farhadi, A.; and Schmidt, L. 2021. OpenCLIP. Zenodo. Ioffe, S.; and Szegedy, C. 2015. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In ICML, 448–456. Jain, A.; Guo, M.; Srinivasan, K.; Chen, T.; Kudugunta, S.; Jia, C.; Yang, Y.; and Baldridge, J. 2021. MURAL: Multimodal, Multitask Representations Across Languages. In EMNLP, 3449–3463. Karpathy, A.; and Fei-Fei, L. 2015. Deep Visual-Semantic Alignments for Generating Image Descriptions. In CVPR, 3128–3137. Ke, Z.; Liu, B.; and Huang, X. 2020. Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks. In NeurIPS, 18493–18504. Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A. A.; Milan, K.; Quan, J.; Ramalho, T.; Grabska-Barwinska, A.; Hassabis, D.; Clopath, C.; Kumaran, D.; and Hadsell, R. 2017. Overcoming Catastrophic Forgetting in Neural Networks. PNAS, 114(13): 3521–3526. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6465 Lee, K.; Lee, K.; Shin, J.; and Lee, H. 2019. Overcoming Catastrophic Forgetting With Unlabeled Data in the Wild. In ICCV, 312–321. Li, J.; Li, D.; Savarese, S.; and Hoi, S. 2023a. BLIP-2: Bootstrapping Language-Image Pre-Training with Frozen Image Encoders and Large Language Models. In ICML, 19730– 19742. Li, X.; Zhou, Y.; Wu, T.; Socher, R.; and Xiong, C. 2019. Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting. In ICML, 3925– 3934. Li, Z.; Fan, Z.; Chen, J.; Zhang, Q.; Huang, X.; and Wei, Z. 2023b. Unifying Cross-Lingual and Cross-Modal Modeling Towards Weakly Supervised Multilingual Vision-Language Pre-Training. In ACL, 5939–5958. Liu, X.; Ji, K.; Fu, Y.; Tam, W.; Du, Z.; Yang, Z.; and Tang, J. 2022. 
P-Tuning: Prompt Tuning Can Be Comparable to Fine-Tuning Across Scales and Tasks. In ACL, 61–68. Loshchilov, I.; and Hutter, F. 2019. Decoupled Weight Decay Regularization. In ICLR, 1–18. McCloskey, M.; and Cohen, N. J. 1989. Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem. Psychol. Learn. Motiv., 24: 109–165. M’hamdi, M.; Ren, X.; and May, J. 2023. Cross-Lingual Continual Learning. In ACL, 3908–3943. Ni, M.; Huang, H.; Su, L.; Cui, E.; Bharti, T.; Wang, L.; Zhang, D.; and Duan, N. 2021. M3P: Learning Universal Representations via Multitask Multilingual Multimodal PreTraining. In CVPR, 3976–3985. Pfeiffer, J.; Vuli´c, I.; Gurevych, I.; and Ruder, S. 2021. UNKs Everywhere: Adapting Multilingual Language Models to New Scripts. In EMNLP, 10186–10203. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; Krueger, G.; and Sutskever, I. 2021. Learning Transferable Visual Models From Natural Language Supervision. In ICML, 8748–8763. Ramasesh, V. V.; Dyer, E.; and Raghu, M. 2021. Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics. In ICLR, 1–31. Reimers, N.; and Gurevych, I. 2020. Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation. In EMNLP, 4512–4525. Schwarz, J.; Czarnecki, W.; Luketina, J.; GrabskaBarwinska, A.; Teh, Y. W.; Pascanu, R.; and Hadsell, R. 2018. Progress & Compress: A Scalable Framework for Continual Learning. In ICML, 4528–4537. Sennrich, R.; Haddow, B.; and Birch, A. 2016. Neural Machine Translation of Rare Words with Subword Units. In ACL, 1715–1725. Sharma, P.; Ding, N.; Goodman, S.; and Soricut, R. 2018. Conceptual Captions: A Cleaned, Hypernymed, Image AltText Dataset For Automatic Image Captioning. In ACL, 2556–2565. Shimodaira, H. 2000. Improving Predictive Inference under Covariate Shift by Weighting the Log-Likelihood Function. J. Stat. Plan. Inference, 90(2): 227–244. Smith, J. S.; Karlinsky, L.; Gutta, V.; Cascante-Bonilla, P.; Kim, D.; Arbelle, A.; Panda, R.; Feris, R.; and Kira, Z. 2023. CODA-Prompt: COntinual Decomposed AttentionBased Prompting for Rehearsal-Free Continual Learning. In CVPR, 11909–11919. Thapliyal, A. V.; Pont Tuset, J.; Chen, X.; and Soricut, R. 2022. Crossmodal-3600: A Massively Multilingual Multimodal Evaluation Dataset. In EMNLP, 715–729. Thengane, V.; Khan, S.; Hayat, M.; and Khan, F. 2022. CLIP Model Is an Efficient Continual Learner. arxiv:2210.03114. van den Oord, A.; Li, Y.; and Vinyals, O. 2018. Representation Learning with Contrastive Predictive Coding. arxiv:1807.03748. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention Is All You Need. In NeurIPS, 5998–6008. Wang, L.; Zhang, X.; Su, H.; and Zhu, J. 2023. A Comprehensive Survey of Continual Learning: Theory, Method and Application. arxiv:2302.00487. Wang, Z.; Zhang, Z.; Ebrahimi, S.; Sun, R.; Zhang, H.; Lee, C.-Y.; Ren, X.; Su, G.; Perot, V.; Dy, J.; and Pfister, T. 2022a. DualPrompt: Complementary Prompting for Rehearsal-Free Continual Learning. In ECCV, 631–648. Wang, Z.; Zhang, Z.; Lee, C.-Y.; Zhang, H.; Sun, R.; Ren, X.; Su, G.; Perot, V.; Dy, J.; and Pfister, T. 2022b. Learning to Prompt for Continual Learning. In CVPR, 139–149. Wu, T.; Caccia, M.; Li, Z.; Li, Y.-F.; Qi, G.; and Haffari, G. 2022. Pretrained Language Model in Continual Learning: A Comparative Study. In ICLR, 1–17. Yang, B.; Cao, M.; and Zou, Y. 2023. 
Concept-Aware Video Captioning: Describing Videos With Effective Prior Information. IEEE Trans. Image Process., 32: 5366–5378. Yang, B.; Liu, F.; Wu, X.; Wang, Y.; Sun, X.; and Zou, Y. 2023. MultiCapCLIP: Auto-Encoding Prompts for ZeroShot Multilingual Visual Captioning. In ACL, 11908–11922. Yoon, J.; Yang, E.; Lee, J.; and Hwang, S. J. 2018. Lifelong Learning with Dynamically Expandable Networks. In ICLR, 1–11. Zhai, X.; Wang, X.; Mustafa, B.; Steiner, A.; Keysers, D.; Kolesnikov, A.; and Beyer, L. 2022. LiT: Zero-Shot Transfer With Locked-Image Text Tuning. In CVPR, 18123–18133. Zhang, H.; Zhang, S.; Xiang, Y.; Liang, B.; Su, J.; Miao, Z.; Wang, H.; and Xu, R. 2022. CLLE: A Benchmark for Continual Language Learning Evaluation in Multilingual Machine Translation. In EMNLP, 428–443. Zhang, L.; Hu, A.; and Jin, Q. 2022. Multi-Lingual Acquisition on Multimodal Pre-Training for Cross-Modal Retrieval. In NeurIPS, 29691–29704. Zhou, M.; Zhou, L.; Wang, S.; Cheng, Y.; Li, L.; Yu, Z.; and Liu, J. 2021. UC2: Universal Cross-Lingual Cross-Modal Vision-and-Language Pre-Training. In CVPR, 4153–4163. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6466 | 2024 | 718 |
18,538 | Geometry-Guided Domain Generalization for Monocular 3D Object Detection Fan Yang1,2,3, Hui Chen1,2*, Yuwei He1,2, Sicheng Zhao1,2, Chenghao Zhang1,2, Kai Ni4, Guiguang Ding1,2* 1Tsinghua University 2BNRist 3Hangzhou Zhuoxi Institute of Brain and Intelligence 4HoloMatic Technology [email protected], {jichenhui2012, heyuwei403, schzhao}@gmail.com, [email protected], [email protected], [email protected] Abstract Monocular 3D object detection (M3OD) is important for autonomous driving. However, existing deep learning-based methods easily suffer from performance degradation in realworld scenarios due to the substantial domain gap between training and testing. M3OD’s domain gaps are complex, including camera intrinsic parameters, extrinsic parameters, image appearance, etc. Existing works primarily focus on the domain gaps of camera intrinsic parameters, ignoring other key factors. Moreover, at the feature level, conventional domain invariant learning methods generally cause the negative transfer issue, due to the ignorance of dependency between geometry tasks and domains. To tackle these issues, in this paper, we propose MonoGDG, a geometryguided domain generalization framework for M3OD, which effectively addresses the domain gap at both camera and feature levels. Specifically, MonoGDG consists of two major components. One is geometry-based image reprojection, which mitigates the impact of camera discrepancy by unifying intrinsic parameters, randomizing camera orientations, and unifying the field of view range. The other is geometrydependent feature disentanglement, which overcomes the negative transfer problems by incorporating domain-shared and domain-specific features. Additionally, we leverage a depth-disentangled domain discriminator and a domainaware geometry regression attention mechanism to account for the geometry-domain dependency. Extensive experiments on multiple autonomous driving benchmarks demonstrate that our method achieves state-of-the-art performance in domain generalization for M3OD. Introduction Monocular 3D object detection (M3OD) enables inferring 3D bounding boxes from a single image, considerably reducing the perception cost for autonomous driving (Mousavian et al. 2017). Many research works have focused on deep learning-based methods for M3OD, but they often suffer from performance degradation in real-world scenarios due to the presence of domain gap (Hendrycks and Dietterich *Corresponding Authors. https://MonoGDG.github.io/ Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 2019; Recht et al. 2019). To cope with this challenge, recent studies have started exploring cross-domain techniques for M3OD. STMono3D (Li et al. 2022b) proposes the domain adaptation in M3OD. DGMono3D (Li et al. 2022a) investigates domain generalization (DG). Despite remarkable progress, existing methods still suffer from some limitations. In this context, we aim to highlight two vital issues in domain generalization for M3OD. First, the domain gap in M3OD can be attributed to complex factors. We systematically analyze three crucial domain gaps in M3OD. (1) The intrinsic parameter gap, including focal length and field of view (FOV), influences the predicted depth by the detector (Fig. 1(a)). (2) Extrinsic parameter gap: (Fig. 1(b)) demonstrates that the camera’s orientation can impact the M3OD results. (3) Image appearance gap: the variations in image style and environmental conditions, such as weather and lighting (Fig. 
1(c)), can considerably affect the features extracted by the model. Existing approaches primarily focus on the first gap, i.e., camera intrinsic parameters, neglecting other domain gaps and thus lacking robustness in real-world scenarios. Secondly, at the feature level, commonly used domain invariant learning methods often lead to the negative transfer issue in M3OD. Li et al. point out that DG aims to learn domain-invariant representations for varying domains (Li et al. 2018a). However, popular feature invariant learning techniques, such as domain adversarial training (Ganin et al. 2016; Chen et al. 2019) and statistical matching (Sun and Saenko 2016; Long et al. 2015), often result in a severe negative transfer problem, leading to a significant performance decay for M3OD (Fig. 2(a)). In light of this, we argue that the fundamental assumption of feature invariant learning, which assumes independence between domains and labels (Ghifary et al. 2017; Xie et al. 2017), does not hold true in M3OD. Specifically, M3OD exhibits a significant geometry-domain dependency, with substantial differences in geometry depth, dimensions, and object rotation across domains (Fig. 2). Through an entropy perspective analysis, we demonstrate that in M3OD, eliminating domainspecific features can disrupt geometry features, and using only domain-shared features is insufficient for geometry prediction. Moreover, depth prediction, widely recognized as The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6467 Image View BEV Side View Image View BEV (a) Focal length gap (b) Camera orientation gap (c) Adverse weather and simulation data 2D width depth depth 3D width 3D width 2D width 2D width 2D width Figure 1: The domain gaps in monocular 3D object detection are very complex, including focal length gap, camera orientation gap, image appearance gap, etc. (a) Two vehicles of the same 2D and 3D size are taken at different focal lengths, and their depths vary dramatically. (b) A higher pitch angle of the camera causes objects to appear lower in the image, leading to the trained model predicting closer depths for the objects. (c) Variations in image appearance, such as adverse weather and simulation data can considerably affect the perceived contextual visual information for the M3OD model. (c) Height in different datasets (a) Negative transfer issue (b) Depth in different datasets (d) Depth in different cameras (e) Rotation in different cameras Figure 2: (a) Conventional domain invariant learning techniques often lead to the negative transfer issue in M3OD. The chart shows the DG performance of models trained on nuScenes and Lyft, tested on KITTI. As the extent of domain invariance increases, the accuracy of M3OD will significantly decrease. (b-e) M3OD demonstrates significant geometry-domain dependency, with notable disparity in the geometry distribution of various domains, such as objects’ depth, dimension, and rotation. the most critical task in M3OD, is severely affected by the misalignment caused by domain dependency, consequently impacting the generalization performance. To tackle these challenges, we propose MonoGDG, a Geometry-Guided Domain Generalization framework for Monocular 3D Object Detection. We address the DG challenges in M3OD from the camera and the feature aspects. Specifically, at the camera level, we propose a geometrybased image reprojection strategy that unifies intrinsic parameters, randomizes camera orientations, and unifies the FOV range. 
These simple yet effective techniques significantly alleviate the camera domain gaps (Table 1). At the feature level, we propose a geometry-dependent feature disentanglement algorithm to mitigate the negative transfer issue. Rather than excluding domain-specific information, which can potentially disrupt geometry features, our algorithm disentangles and integrates both domain-shared and domain-specific features. Moreover, a depth-disentangled domain discriminator is utilized within the domain-shared branch to reduce misalignment among objects with varying depths. A domain-aware geometry regression attention is further used to integrate domain and geometry features. We conduct extensive experiments on multiple datasets for domain-generalizable M3OD in autonomous driving. The results demonstrate that our proposed method considerably outperforms existing methods, achieving state-of-the-art performance. Moreover, we utilize simulation data to enhance the model's DG performance in adverse weather conditions, improving its robustness in real-world scenarios.

Domain Gap          | STMono3D | DGMono3D | MonoGDG (ours)
Focal Length        |    ✓     |    ✓     |      ✓
FOV Distortion      |    ✗     |    ✓     |      ✓
FOV Range           |    ✗     |    ✗     |      ✓
Camera Orientation  |    ✗     |    ✗     |      ✓
Image Appearance    |    ✗     |    ✗     |      ✓
Target Data Free    |    ✗     |    ✓     |      ✓
Table 1: Comparison of the proposed domain generalization method with other state-of-the-art baselines.

In summary, our contributions are three-fold: (1) We systematically analyze the complex domain gaps in M3OD, identify the negative transfer issue caused by the geometry-domain dependency, and propose a geometry-guided strategy for domain generalization. (2) At the camera level, we introduce a geometry-based image reprojection mechanism to address the camera parameter disparity. At the feature level, we propose geometry-dependent feature disentanglement to tackle the negative transfer issue. (3) Through extensive experiments on various datasets, our proposed MonoGDG achieves state-of-the-art performance and significantly enhances the models' domain generalization capabilities.

Related Work

Domain Generalization. Domain generalization aims to generalize to unseen target domains (Erfani et al. 2016). Feature alignment is widely used for acquiring domain-invariant representations (Xiong et al. 2023). There are many approaches to achieving feature alignment (Zhou et al. 2023), such as minimizing moments (Muandet, Balduzzi, and Schölkopf 2013), minimizing contrastive loss (Motiian et al. 2017), minimizing KL divergence (Wang, Loog, and van Gemert 2020), minimizing maximum mean discrepancy (Li et al. 2018a), and domain adversarial learning (Li et al. 2018b; Shao et al. 2019).

Domain Generalization in M3OD. FCOS3D (Wang et al. 2021) and CenterNet (Zhou, Wang, and Krähenbühl 2019) realize M3OD through a simple network architecture. Recently, researchers have attempted to incorporate geometry priors into 3D object detection and achieved encouraging results (Lu et al. 2021; Shi et al. 2021; Yang et al. 2022, 2023). Deep3DBox (Mousavian et al. 2017) employs a neural network to predict objects' rotation, dimension, and 2D bounding box, providing constraints for 3D bounding box estimation. There is limited work on domain generalization in M3OD. STMono3D (Li et al. 2022b) explores domain adaptation in M3OD and utilizes geometry-aligned multi-scale training and self-teacher methods.
DGMono3D (Li et al. 2022a) explores single-source domain generalization in M3OD and employs object scaling and 2D-3D geometry-consistent strategies. OMNI3D (Brazil et al. 2023) combines existing M3OD datasets and proposes CubeRCNN, aimed at multi-dataset fully supervised training rather than domain generalization in unknown target domains. These methods primarily consider the domain gap within the camera, without considering the other complex gap factors of M3OD, such as camera orientation, image appearance, etc.

Methodology

Our approach, as depicted in Fig. 3, comprises geometry-based image reprojection at the camera level and geometry-dependent feature disentanglement at the feature level.

Geometry-Based Image Reprojection

Variations in camera parameters across different domains can significantly impact M3OD (Fig. 1). To enhance the robustness of the detector, we propose an image reprojection mechanism, which aims to transform the image into a generalizable meta-camera, thereby improving the generalization capability at the camera level.

Intrinsic parameter unification. To address the intrinsic parameter gap among different cameras, we utilize a reprojection approach that aligns all images from different domains to a common perspective meta-camera. This perspective meta-camera possesses uniform intrinsic parameters, effectively mitigating the domain gap in intrinsic parameters. Given a source domain camera C_i with intrinsic parameter matrix K_i, its projection formula is

Z [x_i, y_i, 1]^\top = K_i [X, Y, Z]^\top,    (1)

K_i = \begin{pmatrix} f_x^i & 0 & c_x^i \\ 0 & f_y^i & c_y^i \\ 0 & 0 & 1 \end{pmatrix},    (2)

where (x_i, y_i) is a pixel's image coordinate in camera C_i, (X, Y, Z) is its spatial position, and f_x^i, f_y^i, c_x^i, c_y^i are the intrinsic parameters of camera C_i. We define the intrinsic parameter matrix of the perspective meta-camera C_m as K_m, with intrinsic parameters f_x^m, f_y^m, c_x^m, c_y^m. We project the above spatial point [X, Y, Z] onto the meta-camera C_m's image pixel (x_m, y_m):

Z [x_m, y_m, 1]^\top = K_m [X, Y, Z]^\top.    (3)

Using Eq. (1), Eq. (3) can be rewritten as

Z [x_m, y_m, 1]^\top = K_m K_i^{-1} Z [x_i, y_i, 1]^\top.    (4)

Since Z is a scalar, it can be eliminated from both sides of Eq. (4). Solving Eq. (4) yields the reprojection conversion formula from the source camera pixel coordinate (x_i, y_i) to the perspective meta-camera pixel (x_m, y_m):

x_m = \frac{f_x^m}{f_x^i} x_i + c_x^m - \frac{f_x^m}{f_x^i} c_x^i,
y_m = \frac{f_y^m}{f_y^i} y_i + c_y^m - \frac{f_y^m}{f_y^i} c_y^i.    (5)

Eq. (5) is simply a linear transformation with scaling and translation. As a result, we can easily reproject images from different cameras to the same perspective meta-camera, achieving intrinsic parameter unification.
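To make Eq. (5) concrete, the following is a minimal NumPy sketch of intrinsic parameter unification by inverse warping each meta-camera pixel back to the source camera. The intrinsic values, the shared canvas size, and the function name are illustrative assumptions; the paper does not release its implementation, so this is one possible realization rather than the authors' code.

```python
import numpy as np

def unify_intrinsics(img, K_src, K_meta):
    """Reproject an image from a source pinhole camera to the perspective
    meta-camera via the affine pixel map of Eq. (5) (nearest-neighbour,
    inverse warping)."""
    H, W = img.shape[:2]
    fx_s, fy_s, cx_s, cy_s = K_src[0, 0], K_src[1, 1], K_src[0, 2], K_src[1, 2]
    fx_m, fy_m, cx_m, cy_m = K_meta[0, 0], K_meta[1, 1], K_meta[0, 2], K_meta[1, 2]

    # Target pixel grid of the meta-camera (same canvas size for simplicity).
    xm, ym = np.meshgrid(np.arange(W), np.arange(H))

    # Invert Eq. (5): x_i = (x_m - c_x^m) * f_x^i / f_x^m + c_x^i (same for y).
    xi = (xm - cx_m) * fx_s / fx_m + cx_s
    yi = (ym - cy_m) * fy_s / fy_m + cy_s

    xi = np.clip(np.round(xi).astype(int), 0, W - 1)
    yi = np.clip(np.round(yi).astype(int), 0, H - 1)
    return img[yi, xi]

if __name__ == "__main__":
    # Hypothetical intrinsics: a KITTI-like source camera and a chosen meta-camera.
    K_src = np.array([[720.0, 0.0, 620.0], [0.0, 720.0, 185.0], [0.0, 0.0, 1.0]])
    K_meta = np.array([[600.0, 0.0, 640.0], [0.0, 600.0, 180.0], [0.0, 0.0, 1.0]])
    image = np.random.rand(375, 1242, 3)          # stand-in for a real frame
    unified = unify_intrinsics(image, K_src, K_meta)
    print(unified.shape)
```

In practice a bilinear or library-based remap would replace the nearest-neighbour sampling used here for brevity.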
Camera orientation randomization. Camera extrinsic parameters, including camera orientation and position, considerably impact 3D detection. However, perspective transformations on the camera position are not viable due to the lack of pixel-wise depth annotation information, as dictated by geometry principles (Zhao, Kong, and Fowlkes 2021). Additionally, transforming the camera orientation is also a challenging process (Dubrofsky 2009).

Figure 3: Overview of the proposed MonoGDG. At the camera level, the Geometry-Based Image Reprojection process is applied to images to address the domain gap of the camera, including Intrinsic Parameter Unification (IPU), Spherical Reprojection (SR), Camera Orientation Randomization (COR), and FOV Range Unification (FOVRU). The extracted features from images then undergo Geometry-Dependent Feature Disentanglement, which disentangles the feature into domain-shared and domain-specific branches. The Depth-Disentangled Domain Discriminator disentangles the depth from domain alignment, and Domain-Aware Geometry Regression Attention is employed to integrate the domain and geometry features. GRL denotes the gradient reversal layer.

Figure 4: Left: the camera orientations and image fields during training and testing are different. Right: camera orientation randomization is performed in the spherical camera during training to make the model agnostic to the camera orientation.

The spherical camera eliminates the variance in different view angles (Gu et al. 2021). By reprojecting images onto a spherical camera, we can achieve camera orientation randomization through simple image translation and rotation (Fig. 4), ensuring that the detector becomes agnostic to the camera orientation. Our approach does not introduce any perspective stretch or distortion, making it not only simple to implement but also mathematically rigorous. First of all, we reproject the coordinates (x_m, y_m) from the perspective meta-camera onto the spherical meta-camera coordinates (u_m, v_m):

u_m = f_x^m \arctan\!\left(\frac{x_m - c_x^m}{f_x^m}\right) + c_x^s,
v_m = f_y^m \arctan\!\left(\frac{y_m - c_y^m}{f_y^m}\right) + c_y^s,    (6)

where c_x^s, c_y^s are the new principal point of the spherical meta-camera; the spherical meta-camera has the same focal length as the perspective meta-camera. The camera orientation can be described by Euler angles, including pitch, roll, and yaw. In the spherical meta-camera, we can randomize the camera's roll angle by rotating the image field, the pitch angle by translating the image field in the vertical direction, and the yaw angle by translating the image field in the horizontal direction. Assuming the roll transformation is θ_r, the matrix of image rotation around the principal point (c_x^s, c_y^s) is

M_R = \begin{pmatrix} \cos\theta_r & -\sin\theta_r & (1 - \cos\theta_r)\, c_x^s + \sin\theta_r\, c_y^s \\ \sin\theta_r & \cos\theta_r & (1 - \cos\theta_r)\, c_y^s + \sin\theta_r\, c_x^s \\ 0 & 0 & 1 \end{pmatrix}.    (7)

Assuming the pitch transformation is θ_p and the yaw transformation is θ_y, the image translation matrix is

M_T = \begin{pmatrix} 1 & 0 & f_x^m \theta_y \\ 0 & 1 & f_y^m \theta_p \\ 0 & 0 & 1 \end{pmatrix}.    (8)

Therefore, in the spherical meta-camera, the image transformation resulting from camera orientation randomization is

[\hat{u}_m, \hat{v}_m, 1]^\top = M_T M_R [u_m, v_m, 1]^\top,    (9)

where \hat{u}_m, \hat{v}_m are the image pixel coordinates after camera orientation randomization.
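The pixel-level transforms of Eqs. (6)-(9) can be sketched as follows. The meta-camera intrinsics, the sampled angle range, and the function names are illustrative assumptions; a full pipeline would apply the composed matrix to whole images by inverse warping, as in the previous sketch.

```python
import numpy as np

def to_spherical(xm, ym, K_meta, cs):
    """Eq. (6): perspective meta-camera pixels -> spherical meta-camera pixels."""
    fx, fy = K_meta[0, 0], K_meta[1, 1]
    cx, cy = K_meta[0, 2], K_meta[1, 2]
    um = fx * np.arctan((xm - cx) / fx) + cs[0]
    vm = fy * np.arctan((ym - cy) / fy) + cs[1]
    return um, vm

def orientation_randomization_matrix(fx, fy, cs, roll, pitch, yaw):
    """Compose the in-plane rotation M_R (Eq. 7) and translation M_T (Eq. 8)
    that emulate random roll / pitch / yaw in the spherical meta-camera."""
    cx, cy = cs
    cr, sr = np.cos(roll), np.sin(roll)
    M_R = np.array([
        [cr, -sr, (1 - cr) * cx + sr * cy],
        [sr,  cr, (1 - cr) * cy + sr * cx],
        [0.0, 0.0, 1.0],
    ])
    M_T = np.array([
        [1.0, 0.0, fx * yaw],
        [0.0, 1.0, fy * pitch],
        [0.0, 0.0, 1.0],
    ])
    return M_T @ M_R                      # composed transform used in Eq. (9)

if __name__ == "__main__":
    # Hypothetical meta-camera and one random orientation draw per training image.
    K_meta = np.array([[600.0, 0.0, 640.0], [0.0, 600.0, 180.0], [0.0, 0.0, 1.0]])
    cs = (640.0, 180.0)
    rng = np.random.default_rng(0)
    roll, pitch, yaw = rng.uniform(-0.05, 0.05, size=3)   # radians, illustrative range
    M = orientation_randomization_matrix(600.0, 600.0, cs, roll, pitch, yaw)

    u, v = to_spherical(np.array([100.0, 640.0]), np.array([50.0, 180.0]), K_meta, cs)
    uv1 = np.stack([u, v, np.ones_like(u)])               # homogeneous pixel coordinates
    uv1_aug = M @ uv1                                     # randomized pixels, Eq. (9)
    print(uv1_aug[:2].T)
```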
Dataset  | Cams | Images (train+val) | Resolution | Horizon FOV | Vertical FOV | Locations
KITTI    | 1    | 7481               | 1242x375   | 81          | 29           | Germany
nuScenes | 5    | 204894             | 1600x900   | 65          | 39           | BO, SG
         | 1    |                    |            | 90          | 59           |
Lyft     | 6    | 136080             | 1224x1024  | 70          | 60           | Palo Alto
         |      |                    | 1920x1080  | 82          | 52           |
PreSIL   | 1    | 51075              | 1920x1080  | 90          | 59           | GTA V
Table 2: Different M3OD datasets have different cameras and fields of view. "BO" and "SG" are short for "Boston" and "Singapore", respectively.

FOV range unification and image appearance domain discriminator. Different cameras have different field of view (FOV) ranges (Table 2). To ensure that images from different domains possess a consistent FOV range, we simply randomly crop the images to the same FOV range. Aligning the FOV ranges is essential for domain adversarial learning at the feature level. The discrepancies among different FOV ranges are highly noticeable, which can impact the ability of the domain discriminator to focus on discrepancies in image appearance and geometry clues. Additionally, to confuse the discriminator, the feature extractor would have to equalize the global information content of images from different FOV ranges, which would undermine the global information. To address image appearance domain gaps, such as fog, rain, and simulation images, we propose an image appearance domain discriminator with a gradient reversal layer (GRL). This reduces the impact of texture changes on the neural network, enabling the detector to focus on object shapes instead of texture, and enhances the detector's generalization ability when facing unknown appearance variations.

Geometry-Dependent Feature Disentanglement

Theoretical analysis. The distribution of objects' 3D geometry, including depth, dimension, and rotation, is dependent on the domain (Fig. 2). This correlation is influenced by multiple factors: (1) Camera hardware and annotation standards vary across datasets, affecting the ability to capture distant objects and the distribution of depth annotations. Newer datasets employ advanced high-resolution cameras, enabling better visibility and clearer annotations for distant objects; older datasets primarily focus on nearby objects, while synthetic datasets include annotations for objects at much greater distances. (2) Datasets collected in diverse regions exhibit variations in vehicle sizes, and the depth distribution varies significantly between urban roads, rural roads, and highways. (3) Cameras at different viewpoints capture different scenes. Object vehicles exhibit varied rotation angles from different viewpoints; moreover, front-facing and rear-facing cameras tend to capture objects at greater depths, while side-facing cameras capture objects at closer depths.

Due to the geometry-domain dependency, using conventional domain invariant learning would undermine geometry features. Taking domain adversarial learning (Goodfellow et al. 2020; Li et al. 2018b) as an example, we define E, M, D as the parameters of the encoder, 3D detection head, and domain discriminator, respectively. Then the objective is

\min_{E,M} \max_{D} L(E, M, D) = \mathbb{E}_{p(x,d,y)} \left[ -\lambda L_d + L_{3D} \right],    (10)

where x, d, y are the input data, domain, and geometry label, λ is a hyperparameter, L_{3D} is the 3D detection loss, and L_d is the cross-entropy loss of the domain discriminator. According to Akuzawa et al. (Akuzawa, Iwasawa, and Matsuo 2019), the optimization goal of the encoder is

\min_{E} L(E) = -\lambda H(d \mid h) + \mathbb{E}_{p_E(h,d,y)} L_{3D},    (11)

where H and h denote entropy and the latent feature, respectively. The encoder aims to maximize H(d|h), which has H(d) as its upper bound in light of the entropy properties. Therefore, when domain invariance is achieved, H(d|h) equals H(d). Additionally, the geometry-domain dependency implies that their mutual information, denoted as I(y, d), is greater than 0. Based on these two prerequisites, we propose the following theorem:

Theorem 1. If I(y, d) = H(d) - H(d|y) > 0, then when H(d|h) = H(d), we have H(y|h) > 0.

Proof. According to the properties of entropy,

H(d|h) \le H(d, y|h) = H(d|h, y) + H(y|h).    (12)

Since H(d|h) = H(d),

H(d|h, y) = H(d|y).    (13)

Substituting Eq. (13) into Eq. (12),

H(d|h) \le H(d|y) + H(y|h).    (14)

Because H(d|y) < H(d), Eq. (14) can be expanded as

H(d|h) \le H(d|y) + H(y|h) < H(d) + H(y|h).    (15)

When H(d|h) = H(d), Eq. (15) becomes

H(d) < H(d) + H(y|h),    (16)

and therefore

H(y|h) > 0.    (17)

Consequently, in the context of monocular 3D object detection, if the geometry label and domain are interdependent, then I(y, d) > 0. When the feature h removes domain information, H(d|h) = H(d). According to Theorem 1, we then have H(y|h) > 0, indicating that the geometry features suffer damage. As a result, a negative transfer issue appears when traditional methods are applied to M3OD.
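For reference, the conventional domain-adversarial objective of Eq. (10), whose negative transfer the analysis above predicts, reduces to a domain discriminator trained through a gradient reversal layer. A minimal PyTorch sketch is given below; the layer sizes, λ value, and class names are illustrative and are not taken from the paper.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity in the forward pass, -lambda * grad backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class DomainAdversarialHead(nn.Module):
    """Conventional domain-invariant learning of Eq. (10): a discriminator fed
    through a GRL so that the encoder is pushed to confuse it."""
    def __init__(self, feat_dim, num_domains, lam=0.1):
        super().__init__()
        self.lam = lam
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, num_domains),
        )

    def forward(self, features, domain_labels):
        logits = self.classifier(grad_reverse(features, self.lam))
        return nn.functional.cross_entropy(logits, domain_labels)  # the L_d term

if __name__ == "__main__":
    head = DomainAdversarialHead(feat_dim=256, num_domains=3)
    feats = torch.randn(8, 256, requires_grad=True)   # pooled object features
    domains = torch.randint(0, 3, (8,))
    loss_d = head(feats, domains)
    loss_d.backward()                                 # gradient reaching feats is reversed
    print(loss_d.item())
```

The proposed depth-disentangled discriminator described next replaces this single head with one such discriminator per depth bin, each applied only to objects whose depth falls in its bin.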
Algorithm. To tackle the negative transfer issue, we propose a geometry-dependent feature disentanglement approach. Rather than using conventional domain invariant learning to eliminate domain information, we disentangle the features into domain-specific and domain-shared branches, and leverage both features to enhance the geometry tasks effectively. In the domain-specific branch, we use a domain classifier (without the gradient reversal layer, GRL) to extract domain-specific features. For domain-shared features, we employ the GRL and object-level depth-disentangled domain discriminators.

Depth is widely recognized as a crucial and challenging factor in M3OD, and depth misalignment poses a significant issue. To tackle this, we propose the object-level depth-disentangled domain discriminator, decoupling domain adversarial learning from depth. In detail, we partition the continuous depth into K bins [d_1, d_2, \ldots, d_K] and utilize K domain discriminators, assigning one discriminator to each bin. Each discriminator performs domain adversarial alignment on objects whose depth falls in the dedicated bin. This ensures accurate alignment without misaligning objects with significantly different depths.

Semantic classification tasks in autonomous driving scenes are relatively simple, making it easy to align decision boundaries between domains, so we use the feature from the domain-shared branch for semantic classification. In contrast, geometry regression tasks are continuous, and their decision boundaries are challenging to align across domains due to the strong domain dependency, so we utilize features from both the domain-specific and domain-shared branches. Furthermore, we propose a domain-aware geometry regression attention mechanism to enhance the integration of domain-specific and domain-shared information for 3D geometry regression. In detail, as shown in Fig. 3, we first concatenate features from the domain-specific and domain-shared branches. These concatenated features are then divided into N groups, denoted as F_1, F_2, \ldots, F_N. Each group of features is responsible for the geometry regression task within a specific region of the feature space. We compute the keys and values (Vaswani et al.
2017) using these N groups of features: Ki = Fi × WK, Vi = Fi × WV (18) We utilize the domain information h extracted by the domain classifier as the query: Q = h × WQ (19) Finally, the domain-aware geometry regression attention feature Z is obtained through the following computation and used for geometry regression tasks: Z = Softmax(Q KT / p dk) V (20) Experiments Setup and Implementation Details Following (Li et al. 2022b,a), we subsample 1/4 data for nuScenes (Caesar et al. 2020), Lyft (Kesten et al. 2019), and PreSIL (Hurl, Czarnecki, and Waslander 2019) datasets, and use the FCOS3D as the detector for experiments. Following the evaluation metrics in STMono3D (Li et al. 2022b), we use the official AP11 and AP40 and the IoU 0.5 for KITTI (Geiger, Lenz, and Urtasun 2012). Both nuScenes and Lyft provide images from six different camera views. The images within a camera view constitute a source domain. NuScenes and Lyft datasets are each divided into 6 source domains. We employ cross-entropy loss for classification and SmoothL1Loss for regression task, with SGD optimizer and learning rate 0.001 (Ruder 2016). Comparison with State-of-the-art Methods DG performance in common autonomous driving benchmarks Table 3 illustrates the DG performance of different methods. Training Source Only on the source domain yields poor generalization performance. Oracle denotes full supervision training on the target domain. STMono3D follows the domain adaptation setting by incorporating target domain data during training. The KITTI, nuScenes, and Lyft datasets are collected from distinct cities with diverse road environments (Table 2), exhibiting considerable camera orientation and image appearance shifts. Unfortunately, the existing methods inadequately handle these gaps, resulting in mediocre performance. In contrast, our approach comprehensively tackles domain gaps and mitigates the geometrydomain dependency, achieving state-of-the-art performance. Notably, our method surpasses Oracle in most scenarios. DG performance with simulation and adverse weather data Both Table 4 and Table 5 utilize the nuScenes and simulation dataset PreSIL as source domains. Table 5 shows the DG performance in heavy fog and rain. The simulation, heavy fog (Mai et al. 
2021), and rain (Halder, Lalonde, and de Charette 2019) considerably affect the appearance of images, posing a challenge to the robustness of neural networks The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6472 nuScenes→KITTI AP11 AP40 Method BEV 3D BEV 3D Easy Mod Hard Easy Mod Hard Easy Mod Hard Easy Mod Hard Source Only 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Oracle 33.46 23.62 22.18 29.01 19.88 17.17 33.70 23.22 20.68 28.33 18.97 16.57 STMono3D 35.63 27.37 23.95 28.65 21.89 19.55 31.85 22.82 19.30 24.00 16.85 13.66 DGMono3D 34.22 28.99 27.82 28.77 24.82 23.67 31.90 26.33 25.60 23.20 18.55 17.72 MonoGDG (ours) 39.50 32.39 32.07 33.48 27.14 26.37 35.73 28.16 27.71 29.30 22.01 21.40 Lyft→KITTI AP11 Lyft→nuScenes Metrics Method BEV 3D Method AP ATE ASE AOE Easy Mod Hard Easy Mod Hard Source Only 0.00 0.00 0.00 0.00 0.00 0.00 Source Only 2.40 1.302 0.190 0.802 Oracle 33.46 23.62 22.18 29.01 19.88 17.17 Oracle 28.20 0.798 0.160 0.209 STMono3D 26.46 20.71 17.66 18.14 13.32 11.83 STMono3D 21.30 0.911 0.170 0.355 DGMono3D 36.18 28.30 27.16 30.03 23.38 22.23 DGMono3D 25.50 0.842 0.169 0.208 MonoGDG (ours) 38.47 30.89 29.58 32.48 26.02 24.96 MonoGDG (Ours) 25.97 0.828 0.158 0.194 Table 3: DG performance of various methods when generalizing from nuScenes to KITTI, Lyft to KITTI, and Lyft to nuScenes. P+n→KIT AP40 BEV AP40 3D Method Easy Mod Hard Easy Mod Hard Oracle 33.70 23.22 20.68 28.33 18.97 16.57 STMono3D 32.47 23.35 19.81 24.43 17.37 14.29 DGMono3D 33.81 27.27 26.84 24.75 20.08 19.58 Ours 38.35 29.48 28.71 31.55 23.31 22.25 Table 4: DG performance of PreSIL+nuScenes→KITTI. in handling texture variations. By utilizing our method’s image appearance domain discriminator and feature decoupling, the neural network can recognize objects based on their shape rather than texture, thereby improving its generalization when facing changes in image appearance. Ablation Study and Analysis The effectiveness of geometry-based image reprojection. In Table 6, the baseline, Exp. (a), incorporates geometrydependent feature disentanglement at the feature level but does not utilize any image reprojection techniques. The significant improvement from Exp. (a) to Exp. (b) illustrates that a unified camera intrinsic parameter can solve the intrinsic parameter gap, which is crucial for M3OD. Exp. (c) further improves the accuracy compared to Exp. (b) by employing FOV range unification. It allows the domain discriminator to focus on more fundamental domain discrepancy rather than simply distinguishing FOV ranges, thus enabling it to perform the intended function. Exp. (d) incorporates camera orientation randomization, which effectively prevents the detector from being biased towards specific camera orientation settings, thus achieving camera orientation agnosticism. By integrating the three aforementioned image reprojection techniques, Exp. (e) successfully eliminates domain gaps at the camera level, leading to the best experimental performance. Figure 5: The 3D BBox and BEV prediction from DGMono3D (in red), MonoGDG (in blue), and ground truth (green in BEV) in PreSIL+nuScenes→KITTI setting. Zoom in for a clear comparison. The effectiveness of geometry-dependent feature disentanglement. In Table 7, Exp. (a) does not use the feature disentanglement. Exp. (b) add domain invariant learning, decreasing AP due to the identified negative transfer issue. Exp. (c) disentangles the features and utilizes domain-specific and domain-shared features, resulting in much improvement. 
Comparing Exp. (d) and Exp. (c), the domain-aware geometry regression attention integrates domain and geometry features more effectively. Exp. (e) replace the domain discriminator with a depth-disentangled version, resulting in improvements over Exp. (c). The depthdisentangled discriminator effectively mitigates the misalignment issue among objects with varying depths. Exp. (f) is the complete version, tackling the geometry-domain dependency, and achieving the best performance. The effectiveness of image appearance domain discriminator. As shown in Table 8, the image appearance domain discriminator encourages the detector to focus more on object shape rather than texture features, thus improving its generalization when facing appearance variations. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6473 Training: Pre+nus Test: fog KITTI AP40 3D Test: rain KITTI AP40 3D Method Easy Mod Hard Easy Mod Hard Oracle 22.25 15.88 14.10 25.58 15.71 14.21 STMono3D 19.59 14.63 13.35 23.18 16.46 13.59 DGMono3D 17.64 12.52 11.84 22.30 15.92 15.26 MonoGDG (ours) 23.58 15.97 15.28 27.69 18.33 17.79 Table 5: DG performance of PreSIL+nuScenes→fog KITTI and PreSIL+nuScenes→rain KITTI dataset. n→K IPU COR FOVRU AP40 BEV AP40 3D Exp Easy Mod. Hard Easy Mod. Hard (a) 13.76 8.92 7.53 9.48 5.27 4.84 (b) ✓ 28.72 22.17 21.51 22.10 16.94 16.58 (c) ✓ ✓ 32.69 25.47 23.94 25.85 19.48 18.71 (d) ✓ ✓ 31.74 25.61 24.73 25.21 21.62 20.11 (e) ✓ ✓ ✓ 35.73 28.16 27.71 29.30 22.01 21.40 Table 6: Ablation study on Intrisic Parameter Unification (IPU), Camera Orientation Randomization (COR), and Field of View Range Unification (FOVRU). n→K DIL DR DAGRA DDDD AP40 BEV AP40 3D Exp Easy Mod. Hard Easy Mod. Hard (a) 26.43 19.42 19.06 20.37 15.82 15.25 (b) ✓ 23.57 16.97 16.31 16.26 10.41 9.93 (c) ✓ 28.43 20.95 20.14 22.15 16.36 15.87 (d) ✓ ✓ 31.54 27.02 26.25 26.82 20.73 20.02 (e) ✓ ✓ 31.88 26.45 25.89 26.92 20.59 19.30 (f) ✓ ✓ ✓ 35.73 28.16 27.71 29.30 22.01 21.40 Table 7: Ablation study on domain invariant learning (DIL), disentangled representation (DR), domain-aware geometry regression attention (DAGRA), and depth-disentangled domain discriminator (DDDD). P+n→KIT AP40 BEV AP40 3D IADD Easy Mod Hard Easy Mod Hard 35.48 28.14 27.56 28.17 21.94 21.37 ✓ 38.35 29.48 28.71 31.55 23.31 22.25 Table 8: Effectiveness of image appearance domain discriminator (IADD). Comparison with other focal length processing methods. Table 9 demonstrates the superiority of our proposed intrinsic parameter unification over the GAMS in STMono3D (Li et al. 2022b). GAMS preset multiple fixed focal lengths during training, which may lead to a performance decline when the focal length deviates from the preset values in unknown target domains. Visualization Results In Fig. 5, we compare the 3D BBox predictions from DGMono3D and MonoGDG. When the camera is tilted relative to the ground, DGMono3D exhibits inaccurate depth predictions due to its lack of consideration for camera orientation gaps. Moreover, DGMono3D fails to handle the image apn→K AP40 BEV AP40 3D Method Easy Mod Hard Easy Mod Hard GAMS 32.52 24.96 24.49 27.15 19.84 19.03 IPU 35.73 28.16 27.71 29.30 22.01 21.40 Table 9: For the camera focal length gap, the comparison between our IPU and STMono3D’s GAMS. pearance gap, leading to a failure in detecting the red van in the third image. Additionally, our approach addresses the geometry-domain dependency, improving the generalization performance of rotation, dimensions, and depth of objects. 
Conclusion We propose MonoGDG to address the domain generalization challenges for M3OD. Firstly, we introduce geometrybased image reprojection to bridge domain gaps at the camera level. Furthermore, we propose geometry-dependent feature disentanglement to mitigate the negative transfer issue at the feature level. Extensive experimental results demonstrate the remarkable effectiveness of the proposed method. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6474 Acknowledgements This work was supported by National Natural Science Foundation of China (Nos. 61925107, 62271281, U1936202, 62021002), Zhejiang Provincial Natural Science Foundation of China under Grant (No. LDT23F01013F01), and CCFDiDi GAIA Collaborative Research Funds for Young Scholars. References Akuzawa, K.; Iwasawa, Y.; and Matsuo, Y. 2019. Adversarial Invariant Feature Learning with Accuracy Constraint for Domain Generalization. In Machine Learning and Knowledge Discovery in Databases, volume 11907, 315–331. Brazil, G.; Kumar, A.; Straub, J.; Ravi, N.; Johnson, J.; and Gkioxari, G. 2023. Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13154–13164. Caesar, H.; Bankiti, V.; Lang, A. H.; Vora, S.; Liong, V. E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; and Beijbom, O. 2020. nuScenes: A Multimodal Dataset for Autonomous Driving. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11618–11628. Chen, X.; Wang, S.; Long, M.; and Wang, J. 2019. Transferability vs. Discriminability: Batch Spectral Penalization for Adversarial Domain Adaptation. In International Conference on Machine Learning, volume 97, 1081–1090. Dubrofsky, E. 2009. Homography estimation. Diplomov´a pr´ace. Vancouver: Univerzita Britsk´e Kolumbie, 5. Erfani, S. M.; Baktashmotlagh, M.; Moshtaghi, M.; Nguyen, V.; Leckie, C.; Bailey, J.; and Ramamohanarao, K. 2016. Robust Domain Generalisation by Enforcing Distribution Invariance. In International Joint Conference on Artificial Intelligence, 1455–1461. Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; and Lempitsky, V. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(1): 2096–2030. Geiger, A.; Lenz, P.; and Urtasun, R. 2012. Are we ready for autonomous driving? The KITTI vision benchmark suite. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3354–3361. Ghifary, M.; Balduzzi, D.; Kleijn, W. B.; and Zhang, M. 2017. Scatter Component Analysis: A Unified Framework for Domain Adaptation and Domain Generalization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(7): 1414–1430. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2020. Generative adversarial networks. Communications of the ACM, 63(11): 139–144. Gu, Q.; Zhou, Q.; Xu, M.; Feng, Z.; Cheng, G.; Lu, X.; Shi, J.; and Ma, L. 2021. PIT: Position-Invariant Transform for Cross-FoV Domain Adaptation. In IEEE/CVF International Conference on Computer Vision, 8741–8750. Halder, S. S.; Lalonde, J.; and de Charette, R. 2019. PhysicsBased Rendering for Improving Robustness to Rain. In IEEE/CVF International Conference on Computer Vision, 10202–10211. Hendrycks, D.; and Dietterich, T. G. 2019. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations. In International Conference on Learning Representations. 
Hurl, B.; Czarnecki, K.; and Waslander, S. L. 2019. Precise Synthetic Image and LiDAR (PreSIL) Dataset for Autonomous Vehicle Perception. In IEEE Intelligent Vehicles Symposium, 2522–2529. Kesten, R.; Usman, M.; Houston, J.; Pandya, T.; Nadhamuni, K.; Ferreira, A.; Yuan, M.; Low, B.; Jain, A.; Ondruska, P.; Omari, S.; Shah, S.; Kulkarni, A.; Kazakova, A.; Tao, C.; Platinsky, L.; Jiang, W.; and Shet, V. 2019. Lyft Level 5 AV Dataset 2019. https://level5.lyft.com/dataset/. Accessed: 2023-02-07. Li, H.; Pan, S. J.; Wang, S.; and Kot, A. C. 2018a. Domain Generalization With Adversarial Feature Learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5400–5409. Li, Y.; Tian, X.; Gong, M.; Liu, Y.; Liu, T.; Zhang, K.; and Tao, D. 2018b. Deep Domain Generalization via Conditional Invariant Adversarial Networks. In European Conference on Computer Vision, volume 11219, 647–663. Li, Z.; Chen, Z.; Li, A.; Fang, L.; Jiang, Q.; Liu, X.; and Jiang, J. 2022a. Towards model generalization for monocular 3d object detection. arXiv:2205.11664. Li, Z.; Chen, Z.; Li, A.; Fang, L.; Jiang, Q.; Liu, X.; and Jiang, J. 2022b. Unsupervised Domain Adaptation for Monocular 3D Object Detection via Self-training. In European Conference on Computer Vision, volume 13669, 245– 262. Long, M.; Cao, Y.; Wang, J.; and Jordan, M. I. 2015. Learning Transferable Features with Deep Adaptation Networks. In International Conference on Machine Learning, volume 37, 97–105. Lu, Y.; Ma, X.; Yang, L.; Zhang, T.; Liu, Y.; Chu, Q.; Yan, J.; and Ouyang, W. 2021. Geometry Uncertainty Projection Network for Monocular 3D Object Detection. In IEEE/CVF International Conference on Computer Vision, 3091–3101. Mai, N. A. M.; Duthon, P.; Khoudour, L.; Crouzil, A.; and Velastin, S. A. 2021. 3D Object Detection with SLS-Fusion Network in Foggy Weather Conditions. Sensors, 21(20): 6711. Motiian, S.; Piccirilli, M.; Adjeroh, D. A.; and Doretto, G. 2017. Unified Deep Supervised Domain Adaptation and Generalization. In IEEE International Conference on Computer Vision, 5716–5726. Mousavian, A.; Anguelov, D.; Flynn, J.; and Kosecka, J. 2017. 3D Bounding Box Estimation Using Deep Learning and Geometry. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5632–5640. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6475 Muandet, K.; Balduzzi, D.; and Sch¨olkopf, B. 2013. Domain Generalization via Invariant Feature Representation. In International Conference on Machine Learning, volume 28, 10–18. Recht, B.; Roelofs, R.; Schmidt, L.; and Shankar, V. 2019. Do ImageNet Classifiers Generalize to ImageNet? In International Conference on Machine Learning, volume 97, 5389–5400. Ruder, S. 2016. An overview of gradient descent optimization algorithms. arXiv:1609.04747. Shao, R.; Lan, X.; Li, J.; and Yuen, P. C. 2019. Multiadversarial discriminative deep domain generalization for face presentation attack detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10023–10031. Shi, X.; Ye, Q.; Chen, X.; Chen, C.; Chen, Z.; and Kim, T. 2021. Geometry-based Distance Decomposition for Monocular 3D Object Detection. In IEEE/CVF International Conference on Computer Vision, 15152–15161. Sun, B.; and Saenko, K. 2016. Deep CORAL: Correlation Alignment for Deep Domain Adaptation. In European Conference on Computer Vision, volume 9915, 443–450. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. 
Advances in neural information processing systems, 30. Wang, T.; Zhu, X.; Pang, J.; and Lin, D. 2021. FCOS3D: Fully Convolutional One-Stage Monocular 3D Object Detection. In IEEE/CVF International Conference on Computer Vision Workshops, 913–922. Wang, Z.; Loog, M.; and van Gemert, J. 2020. Respecting Domain Relations: Hypothesis Invariance for Domain Generalization. In International Conference on Pattern Recognition, 9756–9763. Xie, Q.; Dai, Z.; Du, Y.; Hovy, E. H.; and Neubig, G. 2017. Controllable Invariance through Adversarial Feature Learning. In Advances in Neural Information Processing Systems, 585–596. Xiong, Y.; Chen, H.; Lin, Z.; Zhao, S.; and Ding, G. 2023. Confidence-based Visual Dispersal for Few-shot Unsupervised Domain Adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 11621–11631. Yang, F.; Xu, X.; Chen, H.; Guo, Y.; Han, J.; Ni, K.; and Ding, G. 2022. Ground Plane Matters: Picking Up Ground Plane Prior in Monocular 3D Object Detection. arXiv:2211.01556. Yang, F.; Xu, X.; Chen, H.; Guo, Y.; He, Y.; Ni, K.; and Ding, G. 2023. GPro3D: Deriving 3D BBox from ground plane in monocular 3D object detection. Neurocomputing, 562: 126894. Zhao, Y.; Kong, S.; and Fowlkes, C. C. 2021. Camera Pose Matters: Improving Depth Prediction by Mitigating Pose Distribution Bias. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15759–15768. Zhou, K.; Liu, Z.; Qiao, Y.; Xiang, T.; and Loy, C. C. 2023. Domain Generalization: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4): 4396– 4415. Zhou, X.; Wang, D.; and Kr¨ahenb¨uhl, P. 2019. Objects as points. arXiv:1904.07850. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6476 | 2024 | 719 |
18,539 | Transient Glimpses: Unveiling Occluded Backgrounds through the Spike Camera Jiyuan Zhang1, 2, Shiyan Chen1, 2, Yajing Zheng1, 2*, Zhaofei Yu1, 2, 3*, Tiejun Huang1, 2, 3 1School of Computer Science, Peking University 2National Key Laboratory for Multimedia Information Processing, Peking University 3Institute for Artificial Intelligence, Peking University {jyzhang,2301112005}@stu.pku.edu.cn, {yj.zheng,yuzf12,tjhuang}@pku.edu.cn Abstract The de-occlusion problem, involving extracting clear background images by removing foreground occlusions, holds significant practical importance but poses considerable challenges. Most current research predominantly focuses on generating discrete images from calibrated camera arrays, but this approach often struggles with dense occlusions and fast motions due to limited perspectives and motion blur. To overcome these limitations, an effective solution requires the integration of multi-view visual information. The spike camera, as an innovative neuromorphic sensor, shows promise with its ultra-high temporal resolution and dynamic range. In this study, we propose a novel approach that utilizes a single spike camera for continuous multi-view imaging to address occlusion removal. By rapidly moving the spike camera, we capture a dense stream of spikes from occluded scenes. Our model, SpkOccNet, processes these spikes by integrating multi-view spatial-temporal information via long-short-window feature extractor (LSW) and employs a novel cross-view mutual attention-based module (CVA) for effective fusion and refinement. Additionally, to facilitate research in occlusion removal, we introduce the S-OCC dataset, which consists of real-world spike-based data. Experimental results demonstrate the efficiency and generalization capabilities of our model in effectively removing dense occlusions across diverse scenes. Public project page: https://github.com/Leozhangjiyuan/SpikeDeOcclusion. Introduction The presence of dense occlusions poses challenges to visual algorithms. Recently, frame-based algorithms have been proposed to see the background scenes through occlusions assisted by leveraging multi-view image (Zhang et al. 2017; Wang et al. 2020; Zhang, Shen, and Lin 2021; Li et al. 2021a; Zhang et al. 2022b; Hur et al. 2023). The task is named Synthetic Aperture Imaging (SAI). However, these algorithms often rely on discrete multi-view exposures, which may not provide sufficient background information. Moreover, obtaining sharp frames in high-speed scenarios poses further challenges. In applications such as autonomous driving, effectively removing foreground occlusions (e.g., fences) is crucial for enhancing environment per*Corresponding authors Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Moving Trajectory Spike Camera (a) Occlusions (b) Scened (c) Recovered Background (d) Capturing with a spike camera Figure 1: Outline of occlusion removal with a single spike camera. (a) Selected challenging occlusions, (b) Real-world Scenes with occlusions, (c) Recovered backgrounds with the proposed SpkOccNet, (d) We capture dataset with a fastmoving spike camera. ception, particularly at high driving speeds. Consequently, the acquisition of sharp and continuous-view information in high-speed scenarios remains a significant and ongoing challenge. Recently, neuromorphic sensors (Gallego et al. 2020; Brandli et al. 2014; Huang et al. 2022) show remarkable performance in visual tasks. 
These sensors generate continuous signals asynchronously, enabling high temporalresolution sampling. Two commonly used types of neuromorphic sensors are event cameras and spike cameras. Event cameras (Brandli et al. 2014; Lichtsteiner, Posch, and Delbruck 2008) asynchronously fire events in a differential manner when the light change surpasses a threshold, thus capturing rich motion information. Some studies have utilized event cameras for occlusion removal tasks (Zhang et al. 2021; Yu et al. 2022; Liao et al. 2022). However, event-based algorithms often require refocusing events to align them and provide accurate information for background reconstruction. This reliance on camera intrinsic parameters and distance information between objects and the camera restricts its applicability. Spike cameras (Huang et al. 2022) mimic the sampling mechanism of the fovea in the retina (Masland 2012; W¨assle 2004), with each pixel capturing photons and asynchronously firing spikes when the accumulated intensity surpasses a threshold. The integration mechanism of spikes enables the recording of absolute light The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 637 intensity (Huang et al. 2022; Zheng et al. 2021; Zhu et al. 2019), providing more texture information for reconstructing occluded regions. Compared to frame-based cameras, spike cameras offer more continuous and dense motion cues for occlusion removal (as elaborated in Sec.). In this paper, we propose, for the first time, to utilize spike cameras for foreground occlusion removal tasks, demonstrating their potential in effectively removing occlusions and reconstructing sharp backgrounds with camera motion. How do we define the spike-based SAI task? Conventional cameras often suffer from motion blur when capturing scenes in motion, which hinders the acquisition of multiple perspective views with a single camera. Frame-based algorithms rely on camera arrays to compensate for the limited viewpoints, restricting their applicability in real-world scenarios. Our goal is to achieve foreground removal using only one fast-moving spike camera without complex equipment or calibration. Therefore, the spike-based SAI we defined possesses the following advantages: a) The high temporal resolution of the spike camera allows us to overcome the constraints imposed by the motion speed of the scene; b) a single spike camera is sufficient to capture continuous views, eliminating the need for multiple cameras. How do we design the model? To deal with the spikebased SAI task, we build an end-to-end model named SpkOccNet. Specifically, to exploit the rich spatial-temporal information in spikes, we propose the Long-Short-Window (LSW) module to excavate and ensemble spatial-temporal features with different representations of long/short time windows from various views. To be specific, we segment spikes into three segments: one central and two end parts. We then utilize dense windows to transform spikes within three segments into dense representations while preserving their temporal characteristics, and simultaneously adopt a longer window to transform the central segment into a blurred image-like representation. Due to the motion displacement of foreground occlusions relative to the background being more significant and various as the view changes, spikes from different parts are complementary. Features from various views are fused using the cross-view mutual attention-based (Li et al. 2021b) module (CVA). 
To enhance the generalization of our method in real-world scenarios, we construct the first real spike-based occlusion removal dataset S-OCC. As depicted in Fig.1(d), we mount a spike camera on a slider and moved it rapidly to capture diverse outdoor scenes featuring different occlusions. Fig.1(a) illustrates the various occlusion types included in the dataset, while Fig.1(b) showcases the occluded scenes contained in S-OCC. Furthermore, Fig.1(c) presents the deocclusion results obtained using the proposed SpkOccNet. Our contributions are summarized as follows: • We explore the spike-based SAI for the first time, utilizing sharp and continuous-view information from spike streams. Our approach incorporates information from dense viewpoints with long/short representations, leveraging mutual attention to fuse features. • We contribute the first real-world spike-based dataset for occlusion removal, verifying the algorithm’s generalization in real-world scenes. • Experiments demonstrate the effectiveness of our method in occlusion removal, relying solely on a single camera with fast motion. Related Works Synthetic Aperture Imaging Image-Based SAI For frame-based cameras, an earlier work (Vaish et al. 2004) utilized a camera array to align the information from multiple viewpoints to a reference viewpoint using coordinate relationships. However, its planar camera array requires stringent hardware calibrations. Vaish et al. (Vaish et al. 2006) take medians and entropy into consideration and proposed a more robust cost function. Zhao et al. (Pei et al. 2013) formulate an energy minimization problem to recognize each pixel from various views whether it belongs to the occlusion. Zhang et al. (Zhang et al. 2017) utilize a moving camera with its IMU data as the clue. Later method (Yang et al. 2014) is capable of predicting all-infocus images. DeOccNet (Wang et al. 2020) includes a residual atrous spatial pyramid pooling module to enlarge receptive fields. Zhang et al. (Zhang, Shen, and Lin 2021) use shifted micro-lens images with a dynamic filter to explore information in the light field. Recent works mainly utilize stronger CNNs to remove occlusions (Li et al. 2021a; Zhang et al. 2022b; Hur et al. 2023). Event-Based SAI Discrete images captured with traditional cameras fail to provide sufficient information in scenarios with extremely dense occlusions due to limited viewpoints. Event cameras show their potential to see through dense occlusions (Zhang et al. 2021; Yu et al. 2022; Liao et al. 2022) due to high temporal resolution. Zhang et al. (Zhang et al. 2021) propose to use SNNs as the encoder and CNNs as the decoder. Later work (Liao et al. 2022) combines events and images. However, the event-based SAI approaches require the camera intrinsic. The translation matrices of the camera and the target depth prior are complicated settings. Spike-based Image Reconstruction Spike cameras possess several advantages, including high temporal resolution (Zheng et al. 2023b), high dynamic range, and rich preservation of spatial texture. These advantages have led to wide applications in various downstream tasks, such as optical flow estimation (Hu et al. 2022; Zhao et al. 2022; Chen, Yu, and Huang 2023), object tracking (Zheng et al. 2023a), and depth estimation (Zhang et al. 2022a; Wang et al. 2022). Among these tasks, the reconstruction task (Zhang et al. 2023) serves as the fundamental basis. In the early stages, Zhu et al. (Zhu et al. 
2019) propose to approximate the light intensity by statistically analyzing the spike stream. Zhu et al. (Zhu et al. 2021) and Zheng et al. (Zheng et al. 2021, 2023c) also develop biologically inspired reconstruction algorithms. Recently, Chen et al. explore self-supervised reconstruction methods (Chen et al. 2022) and spike-guided image deblurring (Chen et al. 2023). Existing methods have demonstrated the advantages of spike cameras in recovering textures from high-speed scenes. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 638 0 1 0 1 0 1 0 1 1 1 1 0 1 1 0 0 0 1 0 1 Voltage Light Intensity Readout Threshold Photons Figure 2: Illustration of the spike camera outputs spikes. The voltage always increases and resets with the light change, and spikes are read out with very short intervals. Spike-based SAI Theoretical Analysis We focus on the physical imaging process to explain why spike cameras exhibit superior potential than frame-based cameras in addressing occlusion removal tasks. The photosensitive units of a spike camera consist of an array of H ×W pixels, with each pixel independently capturing photons continuously. The photoelectric conversion unit transforms the captured photons into electrical current Ix,y(t) and accumulates voltage Vx,y. When the voltage Vx,y exceeds a dispatch threshold Θ, the pixel fire a spike, and subsequently, the voltage Vx,y is reset to zero, shown as in Fig. 2. The entire process can be formulated as: V + x,y(t) = ( V − x,y(t) + Ix,y(t), if V − x,y(t) < Θ, 0, otherwise, (1) Sx,y(k) = 1, if ∃t ∈((k −1)η, kη], Vx,y(t) = 0, 0, if ∀t ∈((k −1)η, kη], Vx,y(t) > 0, (2) where V − x,y(t) and V + x,y(t) denotes the voltage before and after receiving the electric current Ix,y(t), k ∈R. The voltage is read out with the very short interval η = 50µs and outputs a spike stream S with the size of H × W × K after K times readout during T µs (K = T η ). While capturing frame-based videos, the actual time interval between successive frames is Tshutter. To reduce motion blur, the exposure time Texpo per frame is kept shorter than Tshutter. Therefore, the continuous changes in light dynamics taking place during the time interval Tvoid = Tshutter −Texpo are not captured. Not considering the dynamic range, the image frame Bi in i-th capture can be formulated as: Bi = 1 Texpo Z Ti+Texpo Ti Ltdt ≃ XTexpo/η n=0 S(n), (3) where Ti is the timestamp exposure begins, and the Lt is the hidden sharp image at any exact moment t. With inappropriate Texpo, Bi would be blurry under fast motion from Eq. 3. As depicted in Fig. 3(c), a moving camera during the Frame-Based Views Spike-Based Views (a) (b) (c) Figure 3: Illustration of the advantages of spike cameras over traditional cameras in seeing through backgrounds from the perspective of the imaging process. Orange regions represent the visible area of a frame or spike camera. exposure time Texpo results in motion blur, thereby causing an enlargement of the foreground occlusion area. Due to the integrative sampling of the spike camera, the sum of spikes directly reflects the light intensity, thus Bi can be also approximately written in terms of S. Imaging process comparison In Tshutter, we denoted the quantity of information captured by frames and spikes as Ωimage and Ωspike, which can be formulated as: Ωimage = Bi + ∅ , Ωspike = XTexpo/η k=0 S(k) + X(Texpo+Tvoid)/η k=Texpo/η S(k) , (4) where ∅denotes empty information. From Eq. 
4, the first term indicates that within Texpo, the image records an B, while the spike gets S. However, B loses the temporal dimension, whereas S retains dense temporal information; the second term indicates that within Tvoid, the image captures nothing while the spike camera records continuously. For SAI, in Fig. 3, the orange region represents the visible area during camera motion. The spike camera, due to its dense information as in Eq. 4, offers more clues from continuous viewpoints, allowing the observation of background objects xa and xb, while the frame-based camera lose. New Spike-based Dataset: S-OCC Occlusion removal with spike cameras is a previously unexplored area, lacking relevant existing datasets to this work. Thus, we are dedicated to constructing the first dataset based on spike cameras with various occlusions and ground truths. We name the new dataset as S-OCC. How We Set the Camera. In contrast to event-based and image-based approaches, this work strives for independence from camera intrinsic and extrinsic parameters, and scene prior knowledge, relying on a single spike camera. To record the scene, we place a spike camera on a slider and move it The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 639 Spikes DeOcc Extracter Spike Camera Moving Trajectory Dense Window Representation Long Window Representation Dense Window Representation Dense Window Representation DeOcc Extracter DeOcc Extracter Ensemble Learner Cross-View Attentionbased fusion (CVA) Cross-View Attentionbased fusion (CVA) Cross-View Attentionbased fusion (CVA) Predict Layer Predict Layer Predict Layer Predicted Background Representations Long-Short-Window Feature Extractor Figure 4: Architecture of the SpkOccNet. With the spike camera moving, continuous spikes are fed into the network, processed by the Long-Short-Window feature extractor (LSW) first then Cross-View Attention-based fusion (CVA). quickly during each shot, taking about 0.1 seconds for each movement. What the Occlusions are. We set five intricate occlusions to both enhance model robustness and unveil the potential of spike cameras. They encompassed (1) a square iron mesh, (2) a hexagonal iron mesh, (3) a dense iron frame, (4) a fence, and (5) an irregular fabric net. As Fig. 1(a) visually depicted, these occlusions characterize diverse levels of sparsity and density, thereby introducing complexity to this work. Notably, occluding objects are allowed their own motion during the camera capturing process. How We Construct the Dataset. We record various outdoor scenes utilizing the aforementioned camera motion and occlusion setups, yielding a total of 128 sequences. Among these, 108 sequences are randomly picked for training, while the remaining 20 sequences are for testing. For static scenes without occlusion, We fix the spike camera to capture the background scenes and obtained grayscale images by calculating the spike firing rate (Zhu et al. 2019), which served as the ground truth for the dataset. As a result, each sample in this dataset comprises a spike stream alongside a clear background image with no occlusions. Besides, we claim that our dataset is captured with no sensitive, private information or societal implications involved. Overall Architecture We aim to predict the background image I through continuous spikes. We build the model called SpkOccNet, as shown in Fig. 4. Each input sample is a spike stream denoted as S, generated by the camera through rapid sliding motion. 
S is in size of H × W × T, where H × W = 250 × 400 represents the spatial resolution and T is the number of time steps (T = 0.1s/50µs = 2000). With camera motion, S records the dense and continuous changes in viewpoints. Upon the analysis from the previous section, we assert S contains all information for reconstructing the background. In S, spikes at the two ends correspond to the largest viewpoint change, where the motion displacement of foreground occlusions relative to the background is more significant. Thus, the background texture captured by the spikes at the two ends is likely to complement the information recorded by the spikes in the middle. Motivated by this, we partition S into three segments for processing: S = Saux − + Sc + Saux + , (5) where Saux − = H × W × T−, Sc = H × W × Tc, Saux + = H ×W ×T+, and T−∈[0, Waux], Tc ∈[Waux, T −Waux], T+ ∈[T −Waux, T], as shown in Fig.4. Waux is the parameter that controls the window length. We set Waux to 300, which is much shorter than T. The SpkOccNet includes two stages, long-short-window feature extractor (LSW) and cross-view attention-based fusion (CVA). In the first stage, each segment of spikes is first pre-processed with image-like representations, then extracted features with various modules. Saux + and Saux − is processed with dense window reprensentaion then DeOcc extracter, and output features f+, f−. Sc is processed with two branches, one is long window reprensentaion with DeOcc extractor, the other is dense window reprensentaion with Ensemble Learner, then output features f long c , f dense c . In the second stage, the CVA module mainly aims to complement the texture of the center moment (f long c , f dense c ) from the texture of two ends (f+, f−). Firstly, f long c , f dense c are fused with a channel-attention layer to get f fuse c . Then, the CVA input with f fuse c , f+, f−for feature fusion with the proposed attention mechanism in which features are enhanced with attention from others along both spatial and channel dimensions. Finally, the prediction layer consisted of Conv layers outputs the final reconstructed background image ˆI. Long-Short-Window Feature Extractor As in Sec. , we split the input S into three windows, Saux −, Sc, Saux + , each of which contains continuous views. The timestamp for reconstruction is the center of Sc. Thus, Sc provides the spatial structure for reference while Saux + , Saux − provides complementary information for texture in Sc. Firstly, Saux + , Saux − are transformed with dense window representation (ReprDW). To be specific, we split spikes into The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 640 LayerNorm LayerNorm MLP LayerNorm Attn LayerNorm MLP Channel Attention Cross-View Attention-based fusion (CVA) ×2 ×2 Linear Linear MatMul Scale Mask SoftMax MatMul MatMul Scale Mask SoftMax MatMul Mutual-Window Attention (MWA) Mutual-Window Attention (MWA) MWA Attn Figure 5: The structure of the proposed Cross-View Attention-based fusion (CVA), consisting of two mutual window attention (MWA) and a channel attention module. dense groups by non-overlapping a sliding window whose length is Wdense = 100. In each group, spikes are accumulated in Sc along the time axis and transformed into Wdense/Waux image-like representations with one channel. After ReprDW, Saux + , Saux − are transformed to R+, R−, then processed by the DeOcc Extracter, a U-shape network comprised of Conv encoders and decoders, respectively. We denote them as M enc−dec + , M enc−dec − . 
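A minimal sketch of the two spike-to-image representations — ReprDW described above and ReprLW introduced below — is given here for concreteness, assuming binary spike tensors ordered as (T, H, W); the toy sizes are illustrative.

```python
import torch

def repr_dense_window(spikes, w_dense=100):
    # ReprDW: split the stream into non-overlapping windows of length w_dense
    # and accumulate spikes inside each window, yielding one image-like
    # channel per window; output shape (T // w_dense, H, W).
    T, H, W = spikes.shape
    n = T // w_dense
    return spikes[: n * w_dense].reshape(n, w_dense, H, W).sum(dim=1)

def repr_long_window(spikes):
    # ReprLW (used for S_c below): accumulate over the full window, giving a
    # single "long-exposure-like" frame of shape (1, H, W).
    return spikes.sum(dim=0, keepdim=True)

# Toy auxiliary window of length W_aux = 300 at a reduced 64x64 resolution
s_aux = (torch.rand(300, 64, 64) < 0.1).float()
print(repr_dense_window(s_aux).shape)  # torch.Size([3, 64, 64])
print(repr_long_window(s_aux).shape)   # torch.Size([1, 64, 64])
```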
After that, features f+, f−are obtained. The above process is formulated as: R+/−= ReprDW(Saux +/−), f+/−= fM enc−dec +/− (R+/−: θM enc−dec +/− ). (6) For Sc, as it contains spikes from longer time intervals around the central point, we employ two different representations for its transformation. (A) We utilize the long window representation(ReprLW) to transform spikes. The depth difference ensures that the foreground occlusions will exhibit larger displacements on the imaging plane than the background. For spikes at different moments, the visible regions in the background will change accordingly due to the variations in occlusions. Therefore, we accumulate spikes in Sc along the time axis with the fulllength Wlong = T−2Waux, getting the representation Rlong c . This transforms the foreground occlusions into a blurred effect similar to a long-exposure image, allowing the occluded background texture to have the “partially-see-through effect”. The same U-shape feature extractor M enc−dec c receives Rc and output feature f long c . Despite the presence of some motion blur in f long c , it still offers essential structural information for reconstructing the background. (B) Sc is also processed with the ReprDW to get representation Rdense c with continuous view changes. For Rdense c , we employ residual Conv layers M res c to ensemble effective features f dense c from dense viewpoints. Finally, we use a channel attention (Zamir et al. 2022) to fuse the two features (f long c , f dense c ) and get f fuse c . The above process can be formulated as: Rlong c , Rdense c = ReprLW(Sc), ReprDW(Sc), f long c = fM res c (Rlong c : θM res c ), f dense c = fM enc−dec c (Rdense c : θM enc−dec c ), f fuse c = Fuse([f long c , f dense c ]), (7) Cross-View Attention-Based Fusion In the second stage, we propose a novel Cross-View Attention-Based Fusion (CVA) module to fuse and refine the features from the first stage. The CVA takes three features as input, f fuse c , f+, f−. As shown in Fig. 5, the CVA includes two modules with different attention mechanisms: one is the channel attention (CA) MCA that is adapted from the multi-dconv head Transposed self-attention (MDTA) in Restormer (Zamir et al. 2022), the other is our proposed mutual window attention (MWA) MMWA block. Among [f fuse c , f+, f−], f fuse c represents the center longinterval textures of Sc while f−and f+ offering textures from the shorter time windows on two ends of S. We consider using the cross-view mutual attention mechanism for the following reasons: (A) The occluded regions in f−, f+ and f fuse c differ due to the differences in viewpoints. Mutual attention can help to compensate for the occluded parts in one feature map by referring to the non-occluded parts in other feature maps. (B) Due to the high-speed camera motion, there can still be some spatial displacement of the background scenes. The mutual attention mechanism can effectively align features to mitigate the impact of camera motion. The MWA approximately comprises two Transformer blocks. Specifically, the standard Transformer block takes one input f, obtains its Q(query), K(key), and V (value) matrices and operates self-attention. For our MVA, it takes two inputs f1 and f2, obtains Q1, K1, V1 and Q2, K2, V2 matrices, and operates mutual attentions as followings: f1,2, f2,1 = fMMWA(f1, f2 : θMMWA), = Attn(Q1, K2, V2), Attn(Q2, K1, V1). (8) Considering the computing cost, we adopt window-based attention operation in the Swin Transformer blocks (Liu et al. 2021). 
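For illustration, the mutual attention of Eq. 8 can be sketched as follows; this simplified version uses single-head global attention over flattened tokens, whereas the actual MWA is windowed (Swin-style) and wrapped in Transformer blocks with LayerNorm and MLP layers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MutualAttention(nn.Module):
    # Simplified sketch of Eq. 8: each feature queries the other one.
    def __init__(self, dim):
        super().__init__()
        self.to_qkv1 = nn.Linear(dim, dim * 3)
        self.to_qkv2 = nn.Linear(dim, dim * 3)
        self.scale = dim ** -0.5

    def forward(self, f1, f2):
        # f1, f2: (B, N, C) token sequences (flattened H*W positions)
        q1, k1, v1 = self.to_qkv1(f1).chunk(3, dim=-1)
        q2, k2, v2 = self.to_qkv2(f2).chunk(3, dim=-1)
        a12 = F.softmax(q1 @ k2.transpose(-2, -1) * self.scale, dim=-1) @ v2
        a21 = F.softmax(q2 @ k1.transpose(-2, -1) * self.scale, dim=-1) @ v1
        return a12, a21  # f_{1,2} and f_{2,1} in the notation of Eq. 8

# Toy usage: batch of 2, 64 spatial tokens, 64 channels
f_c, f_minus = torch.randn(2, 64, 64), torch.randn(2, 64, 64)
fc_m, fm_c = MutualAttention(dim=64)(f_c, f_minus)
print(fc_m.shape, fm_c.shape)
```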
In the MVA, we compute mutual attention twice, one between f fuse c and f−, the other between f fuse c and f+. The process can be formulated as: fc,−, f−,c = fMMWA(f fuse c , f−: θMMWA), fc,+, f+,c = fMMWA(f fuse c , f+ : θMMWA). (9) Features are then concatenated together along the channel axis and processed by the channel attention layer which contains one MDTA layer for attention and one Conv layer for reducing output channels. It can be formulated as: fCVA = fMCA({fc,−, f−,c, fc,+, f+,c} : θMCA). (10) Finally, the prediction layer consisted of Conv layers, input with fCVA, outputs the predicted background image ˆI. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 641 26.43db 28.21db 30.57db 20.01db 21.54db 22.77db 19.98db 20.30db 24.12db 28.85db 28.31db 29.11db 17.29db 20.84db 21.26db (a) Occlusions (b) E-SAI (c) DeOccNet (d) Ours (e) Ground Truth Figure 6: Results on S-OCC for our method compared with E-SAI (Zhang et al. 2021) and DeOccNet (Wang et al. 2020). Experiments We train all networks with PyTorch. L1 loss is used for optimization. Quantitative metrics are peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Detailed information about the S-OCC dataset, training and network implementation details, are included in the supplementary file. Quantitative and Qualitative Results We compare the proposed SpkOccNet with three other methods. Firstly, we use the U-shaped feature extractor used in our model as the baseline whose output channels are set to 1. Secondly, we include DeOccNet (Wang et al. 2020), a model that performs well in the image domain. For these two models, we simulated images as inputs. Specifically, we take the middle 300 spike frames (denoted as Smid c ) of Sc and process Smid c , S+, S−through ReprLW to obtain three accumulated grayscale images, which are similar to sequence captured by a camera with an exposure time of 15ms (300×50µs) and the frame rate closed to 25 fps. Thirdly, we consider the model E-SAI (Zhang et al. 2021), which is based on event cameras and trained with a hybrid SNN-CNN model. We split the spike S into 30 dense windows as input according to the settings in E-SAI. The quantitative results in terms of PSNR and SSIM are presented in Tab. 1. The table shows the performance of each model on the test set for five different occlusion scenarios, as well as the average performance. It can be observed that our model, SpkOccNet, achieves the best performance on the SOCC dataset compared to the other three methods. SpkOccNet achieves a PSNR of 26.83 dB, which is approximately 1.76dB higher than DeOccNet and 1.32 dB higher than the E-SAI method. It is worth noting that SpkOccNet has a parameter size of only about 4.9M, while DeOccNet and E-SAI have parameter size of 39.04M and 18.59M, which are 8.0 and 3.8 times more than ours. The performance and the parameter size demonstrate the stronger advantage of our proposed model for spike-based occlusion removal. Our model exhibits more pronounced advantages in dealing with ‘Fence’, ‘Hexagonal Mesh’, and ‘Fabric net’ occlusions. The ’Fence’ scenario involves extensive occlusions, whereas the ’Fabric Net’ exhibits densely-packed and irregular occlusions, both of which pose substantial challenges. Fig. 6 presents visualized results. Our method achieves higher-quality image reconstructions. Specifically, our method is able to recover clearer background textures and overcome the issue of the change of illumination caused by occlusions. 
Although the E-SAI can reconstruct relatively smooth images, the reconstructed images appear blurry and the effect of under/overexposure exists in some regions. Besides, DeOccNet performs badly in removing severe occlusions. The results demonstrate that our method outperforms the other two image-domain and event-domain methods. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 642 Occlusions Methods PSNR ↑ SSIM ↑ Fense Baseline U-Net 25.89 0.774 DeOccNet (2020) 26.06 0.782 E-SAI (2021) 26.68 0.767 SpkOccNet 28.38 0.792 Raster Baseline U-Net 22.71 0.690 DeOccNet (2020) 23.60 0.686 E-SAI (2021) 25.13 0.745 SpkOccNet 26.28 0.746 Square Mesh Baseline U-Net 26.63 0.766 DeOccNet (2020) 25.60 0.762 E-SAI (2021) 28.26 0.772 SpkOccNet 27.35 0.745 Hexagon Mesh Baseline U-Net 28.42 0.713 DeOccNet (2020) 28.17 0.759 E-SAI (2021) 27.48 0.776 SpkOccNet 29.67 0.846 Fabric Net Baseline U-Net 19.50 0.640 DeOccNet (2020) 20.90 0.660 E-SAI (2021) 19.69 0.626 SpkOccNet 22.33 0.683 Total Baseline U-Net 24.60 0.761 DeOccNet (2020) 24.77 0.771 E-SAI (2021) 25.51 0.765 SpkOccNet 26.83 0.805 Table 1: Comparison of various de-occlusion methods on SOCC. The table shows results on five occlusion types. M enc−dec c,+,− M res c MMVA PSNR ↑ SSIM ↑ ! 25.53 0.773 ! 23.72 0.737 ! ! 26.11 0.789 ! ! ! 26.83 0.805 Table 2: Ablation study of the proposed modules. Ablation Studies Ablation on Modules. To validate the effectiveness of the proposed modules in SpkOccNet, we first conduct ablation experiments on modules, and the results are shown in Tab.2. The effectiveness of the LSW is validated through Row 1∼3. Specifically, in the Row 1, only Rlong c , R+, R−are used as inputs which are processed through their M enc−dec c,+,− and the output features are simply concatenated for fusion. In the Row 2, only the dense representation Rdense c of Sc is used as input and processed by the residual feature extractor M res c to obtain features. Row 3 combines the approaches from the previous two rows. Row 1 and Row 3 demonstrate that the dense representation of spikes preserves temporally dense viewpoint information, thereby enhancing performance. Row 2 and Row 3 verify the effectiveness of the representations of the viewpoints at both ends and the longer time window representation in the middle. Row 3 and Row 4 validate the effectiveness of the CVA in the second stage. In Row 4, when performing feature fusion, the proposed MMVA in CVA is used, while in Row 3, both Occluded View LeŌ End Right End Center Figure 7: Visualized dense window representations of twoend spikes and long window representation of center spikes. Input Length 45ms 75ms 105ms Image PSNR ↑ 26.42 26.77 26.83 25.90 SSIM ↑ 0.795 0.807 0.805 0.794 Table 3: Ablation study of input length of spikes and comparison with images as input. simple concatenation and the Conv layer are employed for fusion. These two rows illustrate the effectiveness of CVA in facilitating feature complementarity across different viewpoints. Fig. 7 illustrates the dense window representations of Saux + and Saux −, along with the long window representation Rc of Sc. It is evident that Saux + and Saux − encompass complementary information from occluded viewpoints (red and yellow circles). Rc simulates the overall blurred texture similar to a “long exposure” effect. Analysis on Input. We conducted experiments on the input spike length which represents the extent of viewpoint changes during camera motion. 
Additionally, to contrast the advantages of spike cameras against frame-based cameras, we simulated images as inputs of SpkOccNet, as described in Sec. . As reported in Tab. 3. In general, even with spikes only recording 45 ms (equivalent to only the time of one captured by a frame-based camera), the performance remains superior to that of image inputs. The result underscores the significance of viewpoint continuity in background reconstruction. As the input length increases, the PSNR improves, indicating that the magnitude of viewpoint changes also influences background reconstruction. Conclusion We explore the first spike-based SAI, utilizing spikes for recovering backgrounds from dense occlusions. Our model SpkOccNet integrates information from different viewpoints and window lengths, employing mutual attention for effective fusion and refinement. We contribute the first realworld spike-based dataset S-OCC for occlusion removal. Remarkably, our algorithm achieves impressive occlusion removal results using a single camera with fast motion. Acknowledgments This work was supported by the National Natural Science Foundation of China (62176003, 62088102, 6230070113) and the China Postdoctoral Science Foundation (2022M720238, 2023T160015). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 643 References Brandli, C.; Berner, R.; Yang, M.; Liu, S.-C.; and Delbruck, T. 2014. A 240× 180 130 db 3 µs latency global shutter spatiotemporal vision sensor. IEEE Journal of Solid-State Circuits, 49(10): 2333–2341. Chen, S.; Duan, C.; Yu, Z.; Xiong, R.; and Huang, T. 2022. Self-Supervised Mutual Learning for Dynamic Scene Reconstruction of Spiking Camera. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI), 2859–2866. Chen, S.; Yu, Z.; and Huang, T. 2023. Self-Supervised Joint Dynamic Scene Reconstruction and Optical Flow Estimation for Spiking Camera. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1): 350–358. Chen, S.; Zhang, J.; Zheng, Y.; Huang, T.; and Yu, Z. 2023. Enhancing Motion Deblurring in High-Speed Scenes with Spike Streams. In Thirty-seventh Conference on Neural Information Processing Systems. Gallego, G.; Delbr¨uck, T.; Orchard, G.; Bartolozzi, C.; Taba, B.; Censi, A.; Leutenegger, S.; Davison, A. J.; Conradt, J.; Daniilidis, K.; et al. 2020. Event-based vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1): 154–180. Hu, L.; Zhao, R.; Ding, Z.; Ma, L.; Shi, B.; Xiong, R.; and Huang, T. 2022. Optical flow estimation for spiking camera. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 17844–17853. Huang, T.; Zheng, Y.; Yu, Z.; Chen, R.; Li, Y.; Xiong, R.; Ma, L.; Zhao, J.; Dong, S.; Zhu, L.; Li, J.; Jia, S.; Fu, Y.; Shi, B.; Wu, S.; and Tian, Y. 2022. 1000× Faster Camera and Machine Vision with Ordinary Devices. Engineering. Hur, J.; Lee, J. Y.; Choi, J.; and Kim, J. 2023. I See-Through You: A Framework for Removing Foreground Occlusion in Both Sparse and Dense Light Field Images. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 229–238. Li, Y.; Yang, W.; Xu, Z.; Chen, Z.; Shi, Z.; Zhang, Y.; and Huang, L. 2021a. Mask4D: 4D convolution network for light field occlusion removal. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2480–2484. IEEE. Li, Z.; Sun, Y.; Zhang, L.; and Tang, J. 2021b. 
CTNet: Context-based tandem network for semantic segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(12): 9904–9917. Liao, W.; Zhang, X.; Yu, L.; Lin, S.; Yang, W.; and Qiao, N. 2022. Synthetic aperture imaging with events and frames. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 17735–17744. Lichtsteiner, P.; Posch, C.; and Delbruck, T. 2008. A 128×128 120 dB 15µs Latency Asynchronous Temporal Contrast Vision Sensor. IEEE Journal of Solid-State Circuits, 43(2): 566–576. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 10012–10022. Masland, R. H. 2012. The Neuronal Organization of the Retina. Neuron, 76(2): 266–280. Pei, Z.; Zhang, Y.; Chen, X.; and Yang, Y.-H. 2013. Synthetic aperture imaging using pixel labeling via energy minimization. Pattern Recognition, 46(1): 174–187. Vaish, V.; Levoy, M.; Szeliski, R.; Zitnick, C. L.; and Kang, S. B. 2006. Reconstructing occluded surfaces using synthetic apertures: Stereo, focus and robust measures. In Proceedings of the 2004 IEEE 2006 Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, 2331–2338. IEEE. Vaish, V.; Wilburn, B.; Joshi, N.; and Levoy, M. 2004. Using plane+ parallax for calibrating dense camera arrays. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, I–I. IEEE. Wang, Y.; Li, J.; Zhu, L.; Xiang, X.; Huang, T.; and Tian, Y. 2022. Learning stereo depth estimation with bio-inspired spike cameras. In IEEE International Conference on Multimedia and Expo (ICME), 1–6. Wang, Y.; Wu, T.; Yang, J.; Wang, L.; An, W.; and Guo, Y. 2020. DeOccNet: Learning to see through foreground occlusions in light fields. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 118–127. W¨assle, H. 2004. Parallel processing in the mammalian retina. Nature Reviews Neuroscience, 5(10): 747–757. Yang, T.; Zhang, Y.; Yu, J.; Li, J.; Ma, W.; Tong, X.; Yu, R.; and Ran, L. 2014. All-in-focus synthetic aperture imaging. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part VI 13, 1–15. Springer. Yu, L.; Zhang, X.; Liao, W.; Yang, W.; and Xia, G.-S. 2022. Learning to See Through with Events. IEEE Transactions on Pattern Analysis and Machine Intelligence. Zamir, S. W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F. S.; and Yang, M.-H. 2022. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5728–5739. Zhang, J.; Jia, S.; Yu, Z.; and Huang, T. 2023. Learning Temporal-Ordered Representation for Spike Streams Based on Discrete Wavelet Transforms. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1): 137–147. Zhang, J.; Tang, L.; Yu, Z.; Lu, J.; and Huang, T. 2022a. Spike Transformer: Monocular Depth Estimation for Spiking Camera. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part VII, 34–52. Springer. Zhang, S.; Chen, Y.; An, P.; Huang, X.; and Yang, C. 2022b. Light field occlusion removal network via foreground location and background recovery. Signal Processing: Image Communication, 109: 116853. 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 644 Zhang, S.; Shen, Z.; and Lin, Y. 2021. Removing Foreground Occlusions in Light Field using Micro-lens Dynamic Filter. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI), 1302–1308. Zhang, X.; Liao, W.; Yu, L.; Yang, W.; and Xia, G.-S. 2021. Event-based synthetic aperture imaging with a hybrid network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 14235– 14244. Zhang, X.; Zhang, Y.; Yang, T.; and Yang, Y.-H. 2017. Synthetic aperture photography using a moving camera-IMU system. Pattern Recognition, 62: 175–188. Zhao, R.; Xiong, R.; Zhao, J.; Yu, Z.; Fan, X.; and Huang, T. 2022. Learning optical flow from continuous spike streams. Advances in Neural Information Processing Systems (NeurIPS), 35: 7905–7920. Zheng, Y.; Yu, Z.; Wang, S.; and Huang, T. 2023a. SpikeBased Motion Estimation for Object Tracking Through BioInspired Unsupervised Learning. IEEE Transactions on Image Processing, 32: 335–349. Zheng, Y.; Zhang, J.; Zhao, R.; Ding, J.; Chen, S.; Xiong, R.; Yu, Z.; and Huang, T. 2023b. SpikeCV: Open a Continuous Computer Vision Era. arXiv preprint arXiv:2303.11684. Zheng, Y.; Zheng, L.; Yu, Z.; Huang, T.; and Wang, S. 2023c. Capture the moment: High-speed imaging with spiking cameras through short-term plasticity. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(7): 8127–8142. Zheng, Y.; Zheng, L.; Yu, Z.; Shi, B.; Tian, Y.; and Huang, T. 2021. High-speed image reconstruction through shortterm plasticity for spiking cameras. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 6358–6367. Zhu, L.; Dong, S.; Huang, T.; and Tian, Y. 2019. A retinainspired sampling method for visual texture reconstruction. In IEEE International Conference on Multimedia and Expo (ICME), 1432–1437. Zhu, L.; Li, J.; Wang, X.; Huang, T.; and Tian, Y. 2021. NeuSpike-Net: High Speed Video Reconstruction via Bioinspired Neuromorphic Cameras. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2400–2409. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 645 | 2024 | 72 |
18,540 | Diversity-Authenticity Co-constrained Stylization for Federated Domain Generalization in Person Re-identification Fengxiang Yang1,2, Zhun Zhong3, Zhiming Luo1*, Yifan He4, Shaozi Li1, Nicu Sebe2 1Department of Artificial Intelligence, Xiamen University, China 2Department of Information Engineering and Computer Science, University of Trento, Italy 3School of Computer Science, University of Nottingham, UK 4Reconova Technologies Co., Ltd., China Abstract This paper tackles the problem of federated domain generalization in person re-identification (FedDG re-ID), aiming to learn a model generalizable to unseen domains with decentralized source domains. Previous methods mainly focus on preventing local overfitting. However, the direction of diversifying local data through stylization for model training is largely overlooked. This direction is popular in domain generalization but will encounter two issues under federated scenario: (1) Most stylization methods require the centralization of multiple domains to generate novel styles but this is not applicable under decentralized constraint. (2) The authenticity of generated data cannot be ensured especially given limited local data, which may impair the model optimization. To solve these two problems, we propose the Diversity-Authenticity Co-constrained Stylization (DACS), which can generate diverse and authentic data for learning robust local model. Specifically, we deploy a style transformation model on each domain to generate novel data with two constraints: (1) A diversity constraint is designed to increase data diversity, which enlarges the Wasserstein distance between the original and transformed data; (2) An authenticity constraint is proposed to ensure data authenticity, which enforces the transformed data to be easily/hardly recognized by the local-side global/local model. Extensive experiments demonstrate the effectiveness of the proposed DACS and show that DACS achieves state-of-the-art performance for FedDG re-ID. Project: https://github.com/FlyingRoastDuck/DACS official.git 1 Introduction Person Re-identification (re-ID) aims at retrieving target pedestrian in a non-overlapped camera system, which can largely benefit the smart city construction, e.g., finding lost children or escaped criminals. It is reported that deep-based methods (He et al. 2016; Huang et al. 2019; Wang et al. 2018; Ye et al. 2021; Sun et al. 2018) have drastically promoted the development of re-ID. However, these modern methods still suffer from the domain shift caused by different domains, leading to unsatisfactory performance when deployed in novel domains. *Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Diversity Authenticity Feature Space : Local Data : Novel Data : Novel but Unreal Data : Diversity Constraint : Our DACS D A C S (2) Reducing Global Uncertainty for Authenticity Local-side Global Model Local Model Local-side Global Model Local Model (1) Enlarging Wasserstein Distance for Diversity Enlarging Wasserstein Distance !x = ⁄ (x−μ) σ x′=#σ ∗!x + %μ x′ x %μ %σ Figure 1: Schematic illustration of the proposed DiversityAuthenticity Co-constrained Stylization (DACS). Middle: We introduce a style transformation model (STM) for each domain to hallucinate novel data with two constraints. (1) Diversity Constraint: STM is encouraged to generate diverse images . 
(2) Authenticity Constraint: We ensure data authenticity by enforcing the transformed data to be easily / hardly recognized by local-side global model / local model. Recent studies (Zhao et al. 2021; Song et al. 2019; Dai et al. 2021) attempt to solve domain shift issue by designing domain generalizable (DG) algorithms, where they learn generalized re-ID models by training on several labeled source domains. Despite their success, all of them require the centralization of training data, raising data privacy concerns. One promising solution is federated learning (McMahan et al. 2017), which aims to learn a generalized model by accumulating (e.g., averaging) the knowledge of models independently trained on each domain. In such a learning paradigm, the data privacy issue can be largely alleviated. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6477 Nevertheless, it is hard to achieve good generalization without centralizing training data due to the data heterogeneity under different environment conditions (Wu and Gong 2021). Moreover, since re-ID is an open-set (Panareda Busto and Gall 2017) retrieval problem, where each domain has completely different pedestrians, the optimization of federated domain generalization in re-ID (FedDG re-ID) is more challenging than other closed-set tasks like image classification (Li, He, and Song 2021) or segmentation (Liu et al. 2021). To tackle the FedDG re-ID problem, recent works focus on alleviating the overfitting of each domain’s local training (Wu and Gong 2021), or adapting the vanilla federated techniques to the re-ID task (Zhuang et al. 2020). However, the direction of generating novel data through style transfer (Dumoulin, Shlens, and Kudlur 2017; Huang and Belongie 2017) is largely ignored. In the literature, data stylization is a commonly used strategy in DG (Zhou et al. 2021; Zhong et al. 2022; Tang et al. 2021), while most of them are not applicable in FedDG re-ID due to their requirement of centralizing source domains. Moreover, since the data authenticity is not explicitly ensured in these methods, unrealistic data may be generated, which will negatively impact the optimization. In this paper, we tackle the FedDG re-ID problem in the view of data stylization and propose the DiversityAuthenticity Co-constrained Stylization (DACS) to generate diverse and authentic data for learning robust local models. Specifically, we introduce a style transformation model (STM) for each domain and jointly constrain the STM with two losses. (1) To generate diverse data, we encourage STM to produce novel data with a different distribution from the current domain. This is achieved by enlarging the simplified 2-Wasserstein distance (He et al. 2018) between the original and transformed data (see Fig. 1(1)). The larger the distance, the more diverse the generated data will be. In this way, the local model can see as diverse styles as possible during local optimization. However, the unconstrained enlargement of Wasserstein distance may lead to unrealistic stylization, which may impair the optimization. We thus propose the authenticity constraint to solve this problem. (2) Concretely, to ensure data authenticity, we require the generated data to be hardly / easily recognized by the local model / local-side global model. This is achieved by measuring and controlling the entropy produced by these two models (see Fig. 1(2)). 
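A minimal sketch of this entropy-based check is given below, anticipating the loss formalized later in Eq. 6; per-identity logits produced by the two classifiers are assumed as inputs, and the class count is illustrative.

```python
import torch
import torch.nn.functional as F

def entropy(logits):
    # H(.) = -sum_j p_j * log p_j over the identity classes
    p = F.softmax(logits, dim=1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=1)

def authenticity_loss(logits_g_x, logits_g_xp, logits_l_xp):
    # Softplus surrogate of the ordering H(f_G(x)) < H(f_G(x')) < H(f_L(x'));
    # gradients are meant to update only the style transformation model.
    h_gx, h_gxp, h_lxp = entropy(logits_g_x), entropy(logits_g_xp), entropy(logits_l_xp)
    return (F.softplus(h_gx - h_gxp) + F.softplus(h_gxp - h_lxp)).mean()

# Toy usage: 8 images, 751 identities (an illustrative class count)
l_au = authenticity_loss(torch.randn(8, 751), torch.randn(8, 751), torch.randn(8, 751))
```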
By jointly considering the above two constraints, STM can generate diverse and authentic data, which helps us to learn more generalized local model and thereby improves FedDG re-ID accuracy. The main contributions of this paper are three-fold: • We propose a novel data stylization approach for FedDG re-ID, which enables us to generate novel data for local training and promote FedDG re-ID accuracy. • We design a diversity loss to enlarge the distributional discrepancies between original and generated data, which enables STM to generate novel data and avoids overfitting of local models. • We further introduce an authenticity loss to enforce the STM to produce authentic data, allowing the model to better benefit from the stylized images. Extensive experiments conducted on four large-scale reID benchmarks demonstrate the advantage of our method in improving the generalization capability of local models. In addition, our method establishes new state-of-the-art results for FedDG re-ID. 2 Related Work Domain Generalization. Deep neural networks are vulnerable to domain shift among different training domains. Therefore, recent studies resort to domain generalization (DG) (Li et al. 2018; Zhao et al. 2021; Jin et al. 2020; Chattopadhyay, Balaji, and Hoffman 2020) to optimize generalizable models that can be directly deployed in unseen domains. Recently, many methods are proposed to solve the DG problem in person re-ID (Jin et al. 2020; Zhao et al. 2021; Dai et al. 2021). e.g., Zhao et al. (Zhao et al. 2021) adopt metalearning to improve the generalization of models with interpolated features. Dai et al. (Dai et al. 2021) optimize an additional voting network for model aggregation to achieve generalization. Despite their success, most of them require the centralization of data from source domains, raising the risk of privacy leakage. Different from them, this paper tries to solve DG problem under the federated learning scenario (FedDG re-ID), which is a more challenging task. Please refer to supplementary for more explanations. Federated Learning. Federated learning (McMahan et al. 2017; Karimireddy et al. 2020; Liu et al. 2021; Li, He, and Song 2021) aims at optimizing models with decentralized data to protect data privacy. FedAvg (McMahan et al. 2017) is the first federated learning algorithm, which averages locally trained models and redistributes the aggregated model to local clients for further training. Subsequently, FedProx (Li et al. 2020), MOON (Li, He, and Song 2021), and SCAFFOLD (Karimireddy et al. 2020) are proposed to prevent local overfit for better accuracies. These methods are originally designed for closed-set problems, e.g., imageclassification, where the training and testing sets share the same classes. To solve the open-set (Panareda Busto and Gall 2017) tasks, Zhuang et al. (Zhuang et al. 2020) propose FedPav, which adapts FedAvg to the re-ID by only exchanging feature extractors. Wu et al. (Wu and Gong 2021) first explicitly introduce the definition of FedDG re-ID and considers each source domain as an individual local client. They propose to solve the problem with model distillation. The above methods mainly focus on keeping the consistency between local and global models. Instead, this paper considers stylization for local data, which is largely overlooked. Style Transfer. Style transfer (Dumoulin, Shlens, and Kudlur 2017; Huang and Belongie 2017; Choi et al. 2018; Zhu et al. 
2017) is widely studied in the image translation, which aims to change the styles of images while retaining their semantic contents. Recently, the idea of style transfer is employed to generate data with different styles, which are used for learning generalized models. Specifically, (Zhou et al. 2021; Tang et al. 2021) interpolate styles of samples to synthesize novel ones while (Zhong et al. 2022; Wang et al. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6478 (a) Client-Server Collabirative Learning … ② ④ Unseen … ② ④ ⑤ ① Domain #1 fL1 fG1 Domain #N fLN fGN … Merge ③ Central Server ① ②Joint Optimization Lloc Local fL ①Expert Training (b) Diversity-Authenticity Co-constrained Stylization for Local Training !μ !σ !μ !σ fL fG x x’ fG (x’) fL(x’) Lau Lloc fG (x) Ldiv !μ !σ Figure 2: Illustration of our FedDG re-ID method. (a) Illustration of the overall client-server collaborative learning (CSCL), which contains five steps. 1⃝Local Training. 2⃝Model Upload. 3⃝Model Aggregation. 4⃝Redistribution. 5⃝Evaluation on Unseen Domains. Our paper mainly focuses on step 1⃝local training. (b) The overall process of our DACS for local training, which is comprised of 1⃝Expert Training and 2⃝Joint Optimization. 2021) adopt learnable modules to directly optimize novel styles. These methods commonly require centralization of multiple domains to ensure the diversity of the generated styles, which is not applicable in federated learning. In addition, the data authenticity is not explicitly considered, which may hamper the model optimization. In this work, we devise a novel strategy to jointly ensure the diversity and authenticity, which is tailored-made for FedDG re-ID. 3 Methodology Problem Definition. Given N labeled source domains S = {D1, D2, ...DN}, where the i-th domain Di = {Xi, Yi} is comprised of Mi training images Xi and their corresponding identity labels Yi. FedDG re-ID considers each domain as an isolated client and can not exchange data with others1. The objective for FedDG re-ID is optimizing a generalizable re-ID model that performs well on unseen target domains by collaboratively utilizing these isolated clients with the help of central server i.e., client-server collaborative learning (CSCL). The whole process formulates the cross-silo federated learning setting (Kairouz et al. 2021). As data centralization is not allowed, FedDG re-ID becomes more challenging than vanilla domain generalizable re-ID. 3.1 Overview The overall process of CSCL for FedDG re-ID is illustrated in Fig. 2(a), which contains five steps. In step 1 ⃝ Local Training, we deploy two re-ID models and one style transformation model (STM) for each domain. The two re-ID models, including “local model” and “local-side global model”, are designed with different purposes. “Local model” is trained with only local data to retain domainspecific knowledge and will not be shared. On the contrary, “local-side global model” can be uploaded to central server. We adopt STM to generate novel data through our “diversity-authenticity co-constraint stylization” (DACS), which is achieved by using these two re-ID models as the supervision for image stylization. The generated data are subsequently leveraged for the optimization of “local-side global model”. In step 2 ⃝Model Upload, the optimized 1In this paper, “domain” and “client” are interchangeable. “local-side global model” is uploaded to the central server. In step 3 ⃝Model Aggregation, we aggregate the collected models to obtain the “server-side global model”. 
In step 4 ⃝ Redistribution, the aggregated “server-side global model” is redistributed to each domain to update “local-side global model”. By iterating step 1⃝to 4⃝until convergence, we deploy the “server-side global model” to target domains for evaluation, i.e., step 5 ⃝Evaluation on Unseen Domains. Our paper mainly focuses on step 1⃝and tries to generate novel data with our diversity-authenticity co-constrained stylization (DACS) for local generalization. The details of our two-stage DACS for local training are demonstrated in Fig. 2(b). Concretely, in stage 1⃝“Expert Training”, we conduct vanilla optimization on local model to maintain domain-specific knowledge. Then, in stage 2⃝“Joint Optimization”, we devise diversity and authenticity losses to synthesize novel data for the joint optimization of STM and local-side global model. The former is designed to generate diverse data by enlarging Wasserstein distance between the original and transformed counterparts. The latter ensures the authenticity of generated images by enforcing the them to be hard/easy for the local/local-side global model to recognize through entropy function. Next, we introduce our DACS for local training in detail. Since DACS will be applied to each domain, we omit the subscript i for simplicity. 3.2 Diversity-Authenticity Constrained Stylization Style Transformation Model. We deploy a style transformation model (STM) for each domain to generate data with novel distributions. The STM of each domain is comprised of two trainable parameters ˆµ ∈RC×H×W and ˆσ ∈ RC×H×W , which can transform local data to novel styles through simple scaling and shifting. Specifically, given a batch of local images x ∈RB×C×H×W , we first compute their channel-wise data statistics µ and σ with: µ = 1 HW X h∈H,w∈W xh,w, σ = s 1 HW X h∈H,w∈W (xh,w −µ)2, (1) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6479 where H and W are the spatial dimensions of input images, C equals to 3 for RGB images, and B is the batch size. The obtained channel-wise µ ∈RB×C and σ ∈RB×C are utilized for stylization with the following equation: x′ = ϕ(x; ˆµ, ˆσ) = ˆσ ∗(x −µ) σ + ϵ + ˆµ, (2) where x′ is the transformed data and ϵ is a small value to avoid zero division. ˆµ and ˆσ are the trainable tensors in STM, which are initialized with Gaussian noise and will be optimized to explore the most plausible styles. Diversity Loss. To generate novel data, we attempt to enlarge the distributional difference between x and x′. Based on our STM design, we can readily estimate the distribution of x′ as P(x′) ∼Nnovel(ˆµ, ˆσ2). Similarly, the distribution of original data x can also be formulated as P(x) ∼ Nori(µ, σ2), where µ and σ are obtained through Eq. 1. We thus define the diversity loss Ldiv as: Ldiv(x; ϕ) = −D(P(x), P(x′)), (3) where D(·, ·) is the distributional metric with various choices like Jensen–Shannon divergence (JSD) (Lin 1991) or KL divergence (Shannon 1948). We adopt simplified 2Wasserstein distance (He et al. 2018) as the distributional metric due to its low computational burden and simple form. Therefore, the final diversity loss can be formulated as: Ldiv(x; ϕ) = Ldiv(x; ˆµ, ˆσ) = −||µ−ˆµ||2 2 −||σ−ˆσ||2 2. (4) By reducing the above diversity loss, we generate novel data x′ with a different data distribution from current domain. However, the unconstrained enlargement of Wasserstein distance may generate unrealistic samples and bring negative effects on local optimization. 
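For concreteness, a minimal PyTorch sketch of the STM (Eqs. 1-2) and the diversity term (Eq. 4) is given below; the input resolution and the reduction of the spatial-level parameters ˆµ, ˆσ to channel statistics for the Wasserstein-style distance are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class STM(nn.Module):
    # Style transformation model: learnable spatial-level mu_hat / sigma_hat
    # of shape (C, H, W), initialized with Gaussian noise.
    def __init__(self, c=3, h=256, w=128, eps=1e-6):
        super().__init__()
        self.mu_hat = nn.Parameter(torch.randn(c, h, w))
        self.sigma_hat = nn.Parameter(torch.randn(c, h, w))
        self.eps = eps

    def forward(self, x):                       # x: (B, C, H, W)
        mu = x.mean(dim=(2, 3), keepdim=True)   # Eq. 1: per-image channel stats
        sigma = x.std(dim=(2, 3), keepdim=True)
        x_prime = self.sigma_hat * (x - mu) / (sigma + self.eps) + self.mu_hat  # Eq. 2
        return x_prime, mu.squeeze(), sigma.squeeze()

def diversity_loss(mu, sigma, stm):
    # Negative simplified 2-Wasserstein distance (Eq. 4); the spatial-level
    # parameters are averaged to channel statistics before the comparison.
    mu_hat = stm.mu_hat.mean(dim=(1, 2))
    sigma_hat = stm.sigma_hat.mean(dim=(1, 2))
    return -((mu - mu_hat) ** 2).sum(dim=1).mean() - ((sigma - sigma_hat) ** 2).sum(dim=1).mean()

# Toy usage on a batch of 4 pedestrian crops (the crop size is an assumption)
stm = STM()
x_prime, mu, sigma = stm(torch.rand(4, 3, 256, 128))
l_div = diversity_loss(mu, sigma, stm)
```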
We thus propose another constraint to ensure data authenticity. Authenticity Loss. In each domain, we have two re-ID models: “local model” and “local-side global model”. The former is solely optimized with local data while the latter is initialized with averaged global model. Therefore, the latter model has more generalized knowledge than the former model and can be utilized to assess the authenticity of input data. To this end, we propose to adopt both models for authenticity estimation. Concretely, (1) we require the transformed data to be well recognized by the localside global model for high authenticity (i.e., keep low global uncertainty). (2) However, consistently reducing global uncertainty is not sufficient as STM may be prone to trivial solution and keep the input images unchanged. To avoid this problem, we additionally enforce the transformed data to be hardly discriminated by the “local model” ( i.e., increasing local uncertainty). As the “local model” contains more domain-specific information, increasing the local uncertainty enables STM to generate out-of-domain data and thus avoid the trivial solution. (3) Moreover, to facilitate the optimization of STM for data transformation, the generated counterparts should also be harder for local-side global model to recognize than the original images. We thus further constrain the global uncertainty of the transformed data to be larger than the original data. Algorithm 1: The Process of Our Local Training. Inputs: Training data from the i-th domain Xi and labels Yi. Local iteration number iter. STM ϕi Outputs: Feature Extractor for i-th domain θGi. 1: function LOCALTRAIN(Domain i, STM ϕi, local model fLi, local-side global model fGi) 2: for iter num in iter do 3: Sample a batch of training data {xi, yi} ; 4: // Stage 1⃝: Expert Training. 5: Unfreeze fLi; 6: Optimize fLi with Eq. 7 and {xi, yi}; 7: // Stage 2⃝: Joint Optimization. 8: Freeze fLi; 9: Transform x to x′ with ϕi via Eq. 2; 10: Compute Eq. 8 with fLi, fGi, and ϕi; 11: Update fGi, and ϕi; 12: end for 13: Return Feature extractor θGi of fGi. 14: end function In this paper, we adopt entropy function H(·) = −P j pj(·) log pj(·) to measure the model’s uncertainty for the given data, where pj(·) is the probability of classifying the input data to the j-th class. Our authenticity constraint can be formulated as: H(fG(x)) < H(fG(x′)) < H(fL(x′)), (5) where x and x′ are original and transformed data. fG(·) and fL(·) are the logits predicted by “local-side global model” and “local model”, respectively. Since these inequalities can not be optimized by back-propagation, we convert Eq. 5 to the following loss: Lau(x; ϕ) = Softplus(H(fG(x)) −H(fG(ϕ(x)))) + Softplus(H(fG(ϕ(x))) −H(fL(ϕ(x)))), (6) where Softplus(·) = ln(1 + exp(·)) is a monotonically increasing function. Minimizing the first term of Eq. 6 is equivalent to achieving H(fG(x)) < H(fG(x′)). Meanwhile, minimizing the second term of Eq. 6 means satisfying H(fG(x′)) < H(fL(x′)). It should be noted that Lau is designed only for optimizing STM ϕ, and other models (local model fL and local-side global model fG) will not be updated in back-propagation. 3.3 Local Training We improve the generalization of local-side global model and optimize STM with our proposed DACS during the local training. As shown in Fig. 2(b), our local training includes two stages: “expert training” and “joint optimization”. Expert Training. In this stage (see Fig. 
2(b)- 1⃝), we train the local model with a sampled batch of local data x and their corresponding labels y with the following loss: Lloc(x, y; fL) = Ltri(θL(x), y) + Lce(fL(x), y), (7) where θL is the feature extractor part of local model fL and outputs intermediate features to compute triplet loss The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6480 Algorithm 2: The Process of Our FedDG re-ID method. Inputs: N decentralized domains. Their corresponding training data Xi and labels Yi (1 ≤i ≤N). Local iteration number iter. Total number of Epochs E. Outputs: Generalized Feature Extractor θS. 1: // Client-Server Collaborative Learning (CSCL). 2: function FEDDGTRAIN(Training epochs E, N decentralized domains) 3: Initialize all θL and θG with ImageNet-pretrained weights; 4: for epoch num in E do 5: // Step 1⃝: Local Train via DACS. 6: for i in N do 7: θGi = LocalTrain(i, ϕi, fLi, fGi); 8: end for 9: // Step 2⃝: Model Upload. 10: Upload {θG1, ..., θGN } to server; 11: // Step 3⃝: Model Aggregation. 12: Obtain θS via Eq. 9; 13: // Step 4⃝: Redistribution. 14: Redistribute θS to each domain; 15: Update {θG1, ..., θGN }; 16: end for 17: Return θS for evaluation. 18: end function (e.g., pool-5 features if we adopt ResNet-50 as the backbone). Ltri and Lce are commonly used triplet and crossentropy losses. This stage ensures fL’s capability of retaining domain-specific data distribution. Joint Optimization. In the second stage (see Fig. 2(b)- 2⃝), we adopt the original data and their generated counterparts to optimize both fG and STM. Meanwhile, STM will also be supervised by our diversity and authenticity losses to ensure the data quality used for joint optimization. The final loss for joint optimization is formulated as: LDACS(x, y; fG, ϕ) = Lloc(x, y; fG) + Lloc(ϕ(x), y; fG, ϕ) + λdivLdiv(x; ϕ) + λauLau(x; ϕ), (8) where λdiv and λau are balancing factors. It should be noted that both diversity and authenticity losses will not affect the optimization of local-side global model fG. Therefore, we jointly optimize STM and local-side global model in LDACS. The overall process is illustrated in Alg. 1. 3.4 Subsequent Learning Model Upload. We upload the local-side global model to the central server. Different from federated learning in image classification, each domain has different pedestrians in FedDG re-ID. Therefore, for the local-side global model fGi = {θGi, ψi} in the i-th domain, we only share the feature extractor θGi and keep classifier ψi in its own client. Model Aggregation. We aggregate {θG1, ..., θGN } from all clients to obtain server-side global model θS: θS = N X i=1 Mi Mtotal θGi, (9) where Mtotal = PN i=1 Mi is the total number of images for all clients. Redistribution. The obtained θS will be further redistributed to each domain to update local-side global models for the next epoch of training. Evaluation on Unseen Domains. After iterating previous steps until convergence, the obtained “server-side global model” will be directly deployed to unseen domains for evaluation. The overall process is shown in Alg. 2. 4 Experiments 4.1 Experiment Setup The details of all experiments, including the used datasets, evaluation protocols, and implementation details, are demonstrated in the supplementary. Note that, DukeMTMCreID has been withdrawn and is thus not used in this work. 4.2 Comparison with State of the Art We first compare our algorithm with state-of-the-art methods in Tab. 1. The compared algorithms can be divided into four categories. 
(1) Classical federated learning, such as SCAFFOLD (Karimireddy et al. 2020) and MOON (Li, He, and Song 2021). (2) Federated re-ID algorithms, including FedPav (Zhuang et al. 2020) and FedReID (Wu and Gong 2021). FedPav can be seen as the baseline for FedDG reID. (3) Single domain generalization (SDG) for re-ID like SNR (Jin et al. 2020) and TransMatcher (Liao and Shao 2021). SDG is the only type of normal DG algorithm that can be used under federated scenario, please find supplementary for more explanations. In Tab. 1, we report the results of SNR and leave the results of TransMatcher in the supplementary. (4) Stylization-based domain generalization, including MixStyle (Zhou et al. 2021) and CrossStyle (Tang et al. 2021). For “SNR”, we adopt its recommended hyperparameters and deploy SNR modules after each ResNet layer to ensure the best results are achieved. The independently trained models are then averaged for evaluation. For “MixStyle” and “CrossStyle”, we directly deploy them in each client to generate novel data by mixing or exchanging local styles because federated learning does not allow data centralization of source domains. “Joint” denotes training re-ID models with centralized source domains. We also report the results of using ViT (Dosovitskiy et al. 2021) as backbone and compare them with FedPav and CrossStyle. From Tab. 1, we have three conclusions. (1) Optimizing with more decentralized domains can achieve better accuracies. Here we take experiments evaluated on Market1501 to demonstrate. When optimizing with only one domain, the best mAP score is 23.3% (“MS→M”), which is lower than all FedDG re-ID methods trained with multiple domains. Therefore, FedDG re-ID with multiple domains is worth researching and has significant meaning for optimizing robust and safe re-ID system. (2) Our method achieves state-of-the-art performance. Specifically, for “MS+C2+C3→M”, we achieve 36.3% in mAP and 61.2% in rank-1 accuracy when using ResNet-50 as backbone, which are the highest accuracies among previous methods. Moreover, our method also outperforms “Joint”, demonstrating The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6481 Methods MS + C3 +C2 →M C2 + C3 +M →MS MS + C2 +M →C3 mAP rank-1 mAP rank-1 mAP rank-1 Single Model (1) 23.3 47.5 2.7 8.8 18.0 18.5 Single Model (2) 13.2 31.1 1.7 6.0 21.6 22.5 Single Model (3) 18.9 41.2 3.3 10.0 10.2 11.2 MOON 26.8 51.1 4.8 14.5 20.9 22.5 SCAFFOLD 26.0 50.5 5.3 15.8 22.9 26.0 FedPav 25.4 49.4 5.2 15.5 22.5 24.3 FedReID 30.1 53.7 4.5 13.7 26.4 26.5 MixStyle 31.2 53.5 5.5 16.0 28.6 31.5 CrossStyle 32.5 59.6 4.6 14.0 27.8 28.0 SNR 32.7 59.4 5.1 15.3 28.5 30.0 Ours 36.3 61.2 10.4 27.5 30.7 34.1 SNR+Ours 37.7 65.9 11.6 29.4 33.6 37.5 Joint 32.2 58.6 5.6 16.3 29.5 29.0 FedPav (ViT) 37.4 62.6 14.6 33.7 23.7 25.0 CrossStyle (ViT) 41.4 65.8 17.9 40.8 31.0 38.4 Ours (ViT) 45.4 70.7 20.3 44.2 36.6 42.1 Table 1: Comparison with state-of-the-arts, we compare our method with state-of-the-art federated learning algorithms. M: Market-1501, C2: CUHK02, C3: CUHK03, MS: MSMT17. Joint: the model jointly trained on all source domains without decentralization constraint. Single Model (N): the model is individually trained on the N-th source domain of each setting. “ViT”: Results of using ViT as Backbone. 
Method Attributes MS+C2 +C3→M MS+M +C2→C3 ϕ Ldiv Lau mAP rank-1 mAP rank-1 Baseline × × × 25.4 49.4 22.5 24.3 RS ✓ × × 25.1 50.6 20.4 22.8 DC ✓ ✓ × 31.9 57.1 29.1 30.0 AC ✓ × ✓ 34.5 59.7 27.3 28.0 DACS ✓ ✓ ✓ 36.3 61.2 30.7 34.1 Table 2: Ablation study on STM ϕ, diversity loss Ldiv, and authenticity loss Lau. RS: RandStyle. DC: diversity constraint. AC: authenticity constraint. DACS: Ours. its capability of learning generalized re-ID models under federated learning constraint. Similar results can also be found in other settings. When we change the backbone to “ViT” (Dosovitskiy et al. 2021), our method can also outperform FedPav and CrossStyle, demonstrating its effectiveness. (3) Our method is compatible with other domain generalization algorithm. Concretely, when compared with “SNR”, the mAP scores of “SNR+Ours” are improved. In sum, the above results demonstrate the superiority and compatibility of the proposed method for FedDG re-ID. 4.3 Ablation Study To better understand how STM ϕ, diversity loss Ldiv, and authenticity loss Lau affect the FedDG re-ID results, we gradually add them into the training process of “MS+C2+C3→M” and “MS+C2+M→C3” for ablation Method MS+C2 +C3→M MS+M +C2→C3 mAP rank-1 mAP rank-1 Baseline 25.4 49.4 22.5 24.3 (a) The choice of distributional metric JSD 34.2 58.8 29.1 32.8 KL 33.6 56.8 28.0 30.4 WD (ours) 36.3 61.2 30.7 34.1 (b) Data transfer in channel or spatial level Channel level 30.0 55.4 25.1 25.5 Spatial level (Ours) 36.3 61.2 30.7 34.1 (c) The design of authenticity loss H(fG(x)) < H(fG(x′)) 33.3 58.4 27.0 29.2 H(fG(x′)) < H(fL(x′)) 34.6 59.0 26.7 28.6 Complete (Ours) 36.3 61.2 30.7 34.1 Table 3: Further experiments. (a) The choice of distributional metric in Eq. 3. (b) The choice of channel-level or spatial-level transfer. (c) The design of authenticity loss. study. We compare five different training schemes in Tab. 2. Baseline: results of vanilla federated learning. RandStyle (RS): results of transforming data to random styles for optimization. The parameters of STM are randomly initialized and do not have any constraint during the optimization. Diversity Constraint (DC): results of DACS without authenticity constraint for local training. Authenticity Constraint (AC): results of DACS without diversity constraint in local training step. DACS: results of using complete DACS. Effectiveness of solely using diversity or authenticity constraint. By comparing “RS” and “DC” of Tab. 2, we note that diversity loss Ldiv is beneficial to improving the generalization of FedDG re-ID models. Specifically, in row “RS”, we do not use any constraint on STM and its parameters are randomly initialized. In this case, STM can not generate transformed data that are beneficial to the generalization, leading to the stagnant re-ID accuracies. The results in “RS” can not even outperform “baseline”, until taking diversity loss into optimization. Similarly, the comparison between “RS” and “AC” in Tab. 2 demonstrate the effectiveness of using authenticity loss in optimization. We thus conclude that either diversity or authenticity constraint is capable of improving FedDG re-ID accuracies. Effectiveness of diversity-authenticity co-constraint. By comparing “DC”, “AC”, and “DACS” in Tab. 2, we observe that jointly using diversity and authenticity losses can further improve FedDG re-ID accuracies. For example, in “MS+C2+C3→M”, only using diversity or authenticity constraint during model optimization achieves 31.9% and 34.5% mAP scores, respectively. 
However, after jointly using our two constrains, the mAP score becomes 36.3%. Therefore, we conclude that the proposed diversity and authenticity constraints are complementary to each other. Using both constraints can coherently improve performance. 4.4 Further Experiments The choice of distributional metric. In Eq. 3, we choose to enlarge Wasserstein distance between the original and The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6482 x’ in Epoch 10 x’ in Epoch 20 x’ in Epoch 30 x’ in Epoch 40 x Start MS-Novel MS C3 C2 Epoch 10 Epoch 20 Epoch 30 Epoch 40 (b) t-SNE visualizations for “MS+C2+C3→M” (a) Original and Transformed Images in “MS+C2+C3→M” Figure 3: (a) The original and transformed data during the optimization of “MS+C2+C3→M”. (b) t-SNE visualizations for “MS+C2+C3→M”. MS: MSMT-17. C3: CUHK03. C2: CUHK02. MS-Novel: Transformed MS data. transformed data to ensure the data diversity. However, the distributional metric has various choices, such as JSD (Lin 1991) or KL divergence (Shannon 1948). We conduct experiments of using these two metrics in “MS+C2+C3→M” and “MS+M+C2→C3” to find the best distributional metric for diversity loss. As shown in Tab. 3(a), using Wasserstein distance can achieve the best re-ID accuracies for both tasks and we thus use Wasserstein distance in diversity loss. The choice of channel- or spatial-level style transfer. The learnable parameters (ˆµ and ˆσ) in our STM are C × H × W tensors, which enable us to transfer styles for each pixel, i.e., spatial-level style transfer. There is also another type of strategy for data transfer, which defines ˆµ ∈RC and ˆσ ∈RC to uniformly transfer pixels in the same channel to the same style, i.e., channel-level transfer. In Tab. 3(b), we report the results of using channel-level style transfer on two FedDG re-ID tasks. From the results, we observe that using channel-level style transfer achieves lower accuracies than our spatial-level design. We thus choose to transfer styles of input images spatially as it is a more effective manner. Further study on the design of authenticity loss. The authenticity loss in Eq. 6 is proposed to achieve two inequalities, i.e., H(fG(x)) < H(fG(x′)) and H(fG(x′)) < H(fL(x′)). We conduct experiments of independently using the two inequalities as constraints and compare them with the results of using complete Ldiv in Tab. 3(c). From the results, we note that independently using each inequality can bring improvements on the re-ID accuracies, while jointly using both can achieve the best results. Therefore, we conclude that both inequalities are important for Ldiv. 4.5 Visualization We visualize the original / transformed images and features during the optimization of “MS+C2+C3→M” to better understand our algorithm. Specifically, given a batch of MS images x, we obtain their transformed counterparts x′ with STM and show them in Fig. 3(a). Moreover, these data, combined with data from other source domains (C2 and C3), are forwarded to server-side global model θS for feature extraction and t-SNE (Van der Maaten and Hinton 2008) visualization in Fig. 3(b). From these two figures, we observe that at the early stage of training (the first 10 epochs), transformed images x′ are quite different from their original counterparts x in both feature-level and image-level. In image-level (see Fig. 3(a)), the transformed images have high contrast and unrealistic illumination, which may lead to the performance degradation of re-ID model. In featurelevel (see Fig. 
3(b)), the transformed data have large discrepancies with their original images in the feature space. However, these discrepancies are gradually reduced at the later stage of training (after 20th epoch). Based on these observations, we conjecture that DACS first focuses on the diversity of the transformed data and gradually improves the data authenticity for further local training. The step-by-step transformation ensures DACS’s effectiveness. 5 Conclusion In this paper, we propose a diversity-reality co-constrained stylization (DACS) method for FedDG re-ID task. Specifically, we adopt STM to generate novel data by jointly consider diversity and authenticity constrains. The diversity loss requires the generated data to be different from local domain by enlarging Wasserstein distance. The authenticity loss enforces the transformed data to be hard / easy for local model / local-side global model to recognize for data authenticity. Extensive experiments show the efficacy of our method. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6483 Acknowledgments This work is supported by the National Natural Science Foundation of China (No. 62276221, No. 62376232), the Natural Science Foundation of Fujian Province of China (No. 2022J01002), and the Science and Technology Plan Project of Xiamen (No. 3502Z20221025). References Chattopadhyay, P.; Balaji, Y.; and Hoffman, J. 2020. Learning to balance specificity and invariance for in and out of domain generalization. In European Conference on Computer Vision. Choi, Y.; Choi, M.; Kim, M.; Ha, J.-W.; Kim, S.; and Choo, J. 2018. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the Conference on Computer Vision and Pattern Recognition. Dai, Y.; Li, X.; Liu, J.; Tong, Z.; and Duan, L.-Y. 2021. Generalizable person re-identification with relevance-aware mixture of experts. In Proceedings of the Conference on Computer Vision and Pattern Recognition. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representation. Dumoulin, V.; Shlens, J.; and Kudlur, M. 2017. A learned representation for artistic style. In International Conference on Learning Representation. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the Conference on Computer Vision and Pattern Recognition. He, R.; Wu, X.; Sun, Z.; and Tan, T. 2018. Wasserstein CNN: Learning invariant features for NIR-VIS face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. Huang, G.; Liu, Z.; Pleiss, G.; Van Der Maaten, L.; and Weinberger, K. 2019. Convolutional networks with dense connectivity. IEEE Transactions on Pattern Analysis and Machine Intelligence. Huang, X.; and Belongie, S. 2017. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision. Jin, X.; Lan, C.; Zeng, W.; Chen, Z.; and Zhang, L. 2020. Style normalization and restitution for generalizable person re-identification. In Proceedings of the Conference on Computer Vision and Pattern Recognition. Kairouz, P.; McMahan, H. B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A. N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. 2021. 
Advances and open problems in federated learning. Foundations and Trends® in Machine Learning. Karimireddy, S. P.; Kale, S.; Mohri, M.; Reddi, S.; Stich, S.; and Suresh, A. T. 2020. Scaffold: Stochastic controlled averaging for federated learning. In International Conference on Machine Learning. Li, D.; Yang, Y.; Song, Y.-Z.; and Hospedales, T. M. 2018. Learning to generalize: Meta-learning for domain generalization. In Proceedings of the AAAI Conference on Artificial Intelligence. Li, Q.; He, B.; and Song, D. 2021. Model-Contrastive Federated Learning. In Proceedings of the Conference on Computer Vision and Pattern Recognition. Li, T.; Sahu, A. K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; and Smith, V. 2020. Federated optimization in heterogeneous networks. In Proceedings of Machine Learning and Systems. Liao, S.; and Shao, L. 2021. TransMatcher: Deep Image Matching Through Transformers for Generalizable Person Re-identification. In Advances in Neural Information Processing Systems. Lin, J. 1991. Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory. Liu, Q.; Chen, C.; Qin, J.; Dou, Q.; and Heng, P.-A. 2021. FedDG: Federated domain generalization on medical image segmentation via episodic learning in continuous frequency space. In Proceedings of the Conference on Computer Vision and Pattern Recognition. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; and y Arcas, B. A. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics. Panareda Busto, P.; and Gall, J. 2017. Open set domain adaptation. In Proceedings of the IEEE International Conference on Computer Vision. Shannon, C. E. 1948. A mathematical theory of communication. The Bell System Technical Journal. Song, J.; Yang, Y.; Song, Y.-Z.; Xiang, T.; and Hospedales, T. M. 2019. Generalizable person re-identification by domain-invariant mapping network. In Proceedings of the Conference on Computer Vision and Pattern Recognition. Sun, Y.; Zheng, L.; Yang, Y.; Tian, Q.; and Wang, S. 2018. Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline). In European Conference on Computer Vision. Tang, Z.; Gao, Y.; Zhu, Y.; Zhang, Z.; Li, M.; and Metaxas, D. N. 2021. CrossNorm and SelfNorm for Generalization Under Distribution Shifts. In Proceedings of the International Conference on Computer Vision. Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research. Wang, G.; Yuan, Y.; Chen, X.; Li, J.; and Zhou, X. 2018. Learning discriminative features with multiple granularities for person re-identification. In Proceedings of the ACM International Conference on Multimedia. Wang, Z.; Luo, Y.; Qiu, R.; Huang, Z.; and Baktashmotlagh, M. 2021. Learning to diversify for single domain generalization. In Proceedings of the International Conference on Computer Vision. Wu, G.; and Gong, S. 2021. Decentralised Learning from Independent Multi-Domain Labels for Person ReIdentification. In Proceedings of the AAAI Conference on Artificial Intelligence. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6484 Ye, M.; Shen, J.; Lin, G.; Xiang, T.; Shao, L.; and Hoi, S. C. H. 2021. Deep Learning for Person Re-identification: A Survey and Outlook. IEEE Transactions on Pattern Analysis and Machine Intelligence. Zhao, Y.; Zhong, Z.; Yang, F.; Luo, Z.; Lin, Y.; Li, S.; and Sebe, N. 2021. 
Learning to generalize unseen domains via memory-based multi-source meta-learning for person reidentification. In Proceedings of the Conference on Computer Vision and Pattern Recognition. Zhong, Z.; Zhao, Y.; Lee, G. H.; and Sebe, N. 2022. Adversarial style augmentation for domain generalized urbanscene segmentation. In Advances in Neural Information Processing Systems. Zhou, K.; Yang, Y.; Qiao, Y.; and Xiang, T. 2021. Domain generalization with mixstyle. In International Conference on Learning Representation. Zhu, J.-Y.; Park, T.; Isola, P.; and Efros, A. A. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the International Conference on Computer Vision. Zhuang, W.; Wen, Y.; Zhang, X.; Gan, X.; Yin, D.; Zhou, D.; Zhang, S.; and Yi, S. 2020. Performance optimization of federated person re-identification via benchmark analysis. In Proceedings of the ACM International Conference on Multimedia. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6485 | 2024 | 720 |
18,541 | Semantic-Aware Transformation-Invariant RoI Align Guo-Ye Yang1, George Kiyohiro Nakayama2, Zi-Kai Xiao1, Tai-Jiang Mu1*, Xiaolei Huang3, Shi-Min Hu1 † 1BNRist Department of Computer Science and Technology, Tsinghua University 2Stanford University 3College of Information Sciences and Technology, Pennsylvania State University [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Great progress has been made in learning-based object detection methods in the last decade. Two-stage detectors often have higher detection accuracy than one-stage detectors, due to the use of region of interest (RoI) feature extractors which extract transformation-invariant RoI features for different RoI proposals, making refinement of bounding boxes and prediction of object categories more robust and accurate. However, previous RoI feature extractors can only extract invariant features under limited transformations. In this paper, we propose a novel RoI feature extractor, termed Semantic RoI Align (SRA), which is capable of extracting invariant RoI features under a variety of transformations for two-stage detectors. Specifically, we propose a semantic attention module to adaptively determine different sampling areas by leveraging the global and local semantic relationship within the RoI. We also propose a Dynamic Feature Sampler which dynamically samples features based on the RoI aspect ratio to enhance the efficiency of SRA, and a new position embedding, i.e., Area Embedding, to provide more accurate position information for SRA through an improved sampling area representation. Experiments show that our model significantly outperforms baseline models with slight computational overhead. In addition, it shows excellent generalization ability and can be used to improve performance with various state-ofthe-art backbones and detection methods. The code is available at https://github.com/cxjyxxme/SemanticRoIAlign. Introduction As a fundamental computer vision task, object detection aims to locate and recognize objects of interest in input images. In the last decade, great progress has been made in learning-based object detection methods, making them widely useful in our daily uses, such as face recognition, text detection, pedestrian detection, among others. Most existing detection methods can be grouped into two categories, i.e., one-stage detectors (Liu et al. 2016; Redmon et al. 2016) and two-stage detectors (Ren et al. 2015; *Corresponding author is Tai-Jiang Mu. †Arxiv version with appendices: https://arxiv.org/abs/2312. 09609 Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. RoI Pooling Semantic RoI Align … … 1 2 3 4 1 2 3 4 1 2 3 1 2 3 … … , , , , , , , , 4 4 Figure 1: Previous RoI feature extractor versus the proposed Semantic RoI Align (SRA). Top: RoI Pooling samples each feature in some specific positions, making extracted RoI features sensitive to object poses. Bottom: SRA samples each feature from different semantic regions, making it capable of extracting invariant RoI features under various transformations including object pose transformation. He et al. 2017). One-stage detectors directly predict objects with a single neural network in an end-to-end manner. In contrast, two-stage detectors first propose a list of object proposals and then predict the proposals’ labels and refine bounding boxes with extracted RoI features of each proposal. 
RoI features provide better transformation-invariance for different proposal regions and using them can thus better refine bounding boxes and predict the category of each proposal in the second stage of a two-stage detector. In this paper, we mainly focus on improving the RoI feature extractor for two-stage detectors. Similar objects may show great appearance differences in images due to different environmental conditions, object poses, etc., making detection difficult to be generalizable under various transformations (Girshick 2015). Therefore, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6486 many RoI feature extractors aim to extract transformationinvariant object features. RoI Pooling (Girshick 2015) is the pioneering work for RoI feature extraction, which pools features in a fixed number of sub-regions of the RoI and obtains scale-invariant features. RoI Align (He et al. 2017) further improves the positional accuracy of RoI Pooing via bilinear interpolation. RoI Transformer (Ding et al. 2019) extracts rotation-invariant features by rotating sampling positions with a regressive rotation angle. However, previous methods cannot extract invariant features for more complex transformations like perspective transformation and object pose transformation. Though there exist works like Deformable RoI Pooling (DRoIPooling) (Dai et al. 2017; Zhu et al. 2019) that can extract invariant features under some complex transformations by adaptively adding a regressive offset to each sampling position, experiments show that it only achieves invariance under scaling transformation and cannot easily be extended to handle others such as rotation. This is because the sampling position offsets are regressed with convolutional networks, which need to be trained with transformed data (Krizhevsky, Sutskever, and Hinton 2012) and different transformations would require different kernels since the same position of a convolutional kernel may correspond to different object regions when the object is transformed under different transformations. In this paper, we regard different transformations like perspective and pose transformations as being comprised of spatial transformations of different semantic parts, while also considering that high-level features of semantic regions are more stable under varying transformations. From this perspective, we propose a Semantic RoI Align (SRA) to extract transformation-invariant RoI features by sampling features from different semantic regions. RoI Pooling samples features at specific locations in a RoI. A limitation of such sampling can be seen from the example shown in Figure 1 (top row); the 4-th sampling location extracts features of the background in the red RoI, whereas that location extracts features of a player’s leg in the blue RoI. Such sampling loses invariance under pose transformations. In contrast, in our proposed SRA, we design a semantic attention module to obtain different semantic regions by leveraging the global and local semantic relationship within the RoI. Then we sample features from the semantic regions, and concatenate the sampled features as the RoI feature which is semanticaware and transformation-invariant. Since the computational efficiency of SRA determined by the feature sampling resolution, we propose a Dynamic Feature Sampler to dynamically sample features according to the aspect ratio of different RoIs, which speeds up SRA while minimizing the impact on accuracy. 
Furthermore, previous positional embedding methods (Zhao, Jia, and Koltun 2020) only encode information of the sampling center, which cannot accurately represent regional information. We thus propose a new positional embedding, namely Area Embedding, which embeds positions in a sampling area into a fixed-length vector, providing more accurate position information. SRA can replace the RoI feature extractor in most two-stage detectors and brings higher detection accuracy with a slight overhead in the number of network parameters and computation. By using SRA as RoI feature extractor for Faster RCNN (Ren et al. 2015), our method achieves 1.7% higher mAP in the COCO object detection task with only additional 0.2M parameters and 1.1% FLOPs compared to the baseline model. Meanwhile, it also exceeds other RoI feature extractors with less computational overhead. To verify the generalizability of SRA, we equip it to various state-ofthe-art backbones and detection methods. Results show that SRA can consistently boost their detection accuracy. In summary, our contributions are: • a novel RoI feature extractor, i.e., Semantic RoI Align, which is able to extract transformation-invariant RoI features and can be plugged into most two-stage detectors to improve detection accuracy with little extra cost, • a Dynamic Feature Sampler which makes SRA implementation efficient, and an Area Embedding which provides more comprehensive and accurate information of sampled positions. • Extensive experiments that demonstrate the superiority of the SRA and its great generalizability to various stateof-the-art backbones and detection methods. Related Work Object Detection and RoI Feature Extractors In recent years, deep learning techniques are dominant in object detection. Most deep object detection methods can be categorized into two types: one-stage detectors (Redmon et al. 2016) and two-stage detectors (Girshick 2015). Faster R-CNN (Ren et al. 2015) is a two-stage network with a Regional Proposal Network (RPN) predicting multiple RoI proposals; RoI features are then extracted by an RoI feature extractor to predict object bounding boxes and categories in the second-stage network. Mask R-CNN (He et al. 2017) proposed a general framework for object instance segmentation tasks. Dynamic head (Dai et al. 2021) proposed to use scale, spatial, and task-aware attention mechanisms to improve detection accuracy. RoI feature extractors are used to extract transforminvariant features in two-stage detectors, so that the secondstage network can refine the bounding boxes and predict object categories more accurately. RoI Pooling (Girshick 2015) performs scale-invariant feature extraction by dividing the RoI into a fixed number of bins, pooling the features in each bin, and concatenating them into a vector of fixed size. RoI Align (He et al. 2017) uses bilinear interpolation to more accurately extract features. RoI Transformer (Ding et al. 2019) extracts rotation-invariant features by correcting the extracted features using a learned rigid transformation supervised by ground-truth oriented bounding boxes. However, these methods only model features invariant to rigid transformations, while ignoring non-rigid transformations. Deformable RoI Pooling (Dai et al. 2017; Zhu et al. 2019) extracts features by adding a regressive offset to each sampling position of RoI Pooling. 
Our experiments show that it can only extract invariant features under scale transformation which could be due to its learning the regressive offset by convolution, making it hard to generalize to other The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6487 1, 1 Dynamic Feature Sampler Area Embedding 𝑓𝑖 [𝐶, ℎ𝑖, 𝑤𝑖] RoI Descriptor Regressor RoI Descriptor 𝑑𝑖 [𝐾] Conv1x1 … Semantic Features 𝑠𝑖 [𝐾, ℎ𝑖, 𝑤𝑖] Area Embeddings 𝑝𝑖 [𝑃, ℎ𝑖, 𝑤𝑖] ℎ𝑖, 𝑤𝑖 … 1, 2 1, 1 1, 2 ℎ𝑖, 𝑤𝑖 … n-th sub-mask regressor 0.1 0.8 0.0 … Sampling mask 𝑚𝑖,𝑛 [ℎ𝑖, 𝑤𝑖] ℎ𝑖× 𝑤𝑖 ℎ𝑖× 𝑤𝑖 ℎ𝑖× 𝑤𝑖 groups xN 𝑦𝑖,𝑛 [𝐶] Output 𝑦𝑖 [𝑁, 𝐶] Feature Map 𝐹 [𝐶, 𝐻, 𝑊] RoI proposal 𝑅𝑖 [4] Feature of head ℎ𝑖 𝑤𝑖 Figure 2: The network architecture of our Semantic RoI Align. N means matrix multiplication, and L means concatenation. transformations. RoIAttn (Liang and Song 2022) proposes to enhance the RoI features by passing them through multiple self-attention layers. However, simply doing so is limited w.r.t. the ability to obtain invariant RoI features for two reasons: 1) performing self-attention on RoI Align extracted features has more limited flexibility than sampling on the original feature map, 2) the regression ability of typical self-attention is insufficient for identifying specific semantic regions under different transformations. Our SRA obtains different semantic regions with a novel semantic attention structure by leveraging the global and local semantic relationship within the RoI. We then sample the RoI features from the semantic regions, which makes it easy to achieve invariance under more diverse transformations, and thus obtain higher detection accuracy than existing methods. Attention Mechanism In computer vision, attention can be regarded as an adaptive process, which mimics the human visual system’s ability to focus on important regions. RAM (Mnih et al. 2014) is the pioneering work to introduce the attention concept in computer vision. After that, there have been some works (Hu, Shen, and Sun 2018; Zhao, Jia, and Koltun 2020) exploring the use of attention mechanisms for different computer vision tasks. Recently, transformer networks, which have achieved great success in natural language processing (Vaswani et al. 2017), are explored in computer vision and have shown great potential. ViT (Dosovitskiy et al. 2020) is the first work to bring transformer into computer vision by regarding a 16×16 pixel region as a word and an image as a sentence. Due to the strong modeling capability of visual transformer networks, they have been applied to various vision tasks such as image recognition (Liu et al. 2021; Guo et al. 2022), object detection (Carion et al. 2020; Yang et al. 2021), etc. In this paper, we introduce a novel semantic attention mechanism to capture invariance RoI features under more variety of transformations. Methodology In this section, we will first introduce the general architecture of the proposed Semantic RoI Align (SRA). Next, we will detail how the semantic masks of SRA are obtained. We will then present a dynamic feature sampling method to dynamically sample features for SRA according to different RoI aspect ratios, which improves model accuracy and efficiency. Finally, the proposed Area Embedding is introduced to replace the previous position embedding, so as to provide more accurate position information for the model. Semantic RoI Align The pipeline of the proposed Semantic RoI Align (SRA) is shown in Figure 2. 
The SRA-extracted RoI feature of an object consists of N sub-features, each of which is sampled in a specific semantic region, making the sampling positions adaptive to image transformations and thereby improving transformation-invariance. In Figure 3, we visualize partial semantic masks (left 5 columns) produced by our SRA for 3 RoI proposals (one for each row). The semantic samplings of SRA land on the same semantic parts of the object under different perspective transformations such as rotation (top row and middle row) and object pose transformations (top row and bottom row), giving the extracted RoI feature better transformation-invariance and thus benefiting bounding box regression and semantic label prediction in the second-stage network.

Figure 3: Semantic sampling masks of SRA (columns 1 to 5), sampling locations of DRoIPooling (Zhu et al. 2019) (column 6), and sampling mask of directly passing features extracted by RoI Align through a standard self-attention layer (column 7). Each row represents an RoI in an image. The object in the middle row is rotated by 30 degrees from the top row, and the bottom row shows another object of the same class as the top row. Red and yellow sampling masks are overlaid on images and the sampling locations of DRoIPooling are indicated with different colors.

The inputs of SRA are a feature map F with shape (C, H, W), where C, H, and W represent the number of feature channels, the height, and the width of the feature map, respectively, and a list of RoI proposals R = {Ri}, where Ri = {xi,0, yi,0, xi,1, yi,1} indicates a bounding box in the feature map with (xi,0, yi,0) and (xi,1, yi,1) being the coordinates of the top-left and bottom-right corners, respectively. For each RoI proposal Ri, SRA first exploits the Dynamic Feature Sampler to sample a feature map fi from the input feature map F with the bounding box Ri. We then obtain N semantic masks mi = {mi,n}, 1 ≤ n ≤ N, which have the same size as fi. The output transformation-invariant RoI feature yi of our SRA is finally obtained by sampling fi using the semantic masks mi. More specifically, yi is calculated as the weighted sum of the elements of fi using the elements of mi as weights:

y_i(n, c) = \sum_{j=1}^{h_i} \sum_{k=1}^{w_i} f_i(c, j, k) \cdot m_{i,n}(j, k), \quad \text{for all } n \in \{1, \ldots, N\},\ c \in \{1, \ldots, C\}, \qquad (1)

where hi, wi are the height and width of the feature map fi. Next, we will introduce how the semantic masks mi are estimated in the SRA.

Obtaining Semantic Masks
The pipeline for obtaining the semantic masks of SRA is also shown in Figure 2. The goal of SRA is to generate N separable semantic part masks mi for the input RoI proposal Ri and the sampled feature map fi. To achieve this, we want the value of mi,n(j, k) to be positively correlated with the likelihood that position (j, k) in Ri belongs to the n-th semantic part of the object in Ri. Let us denote that likelihood as m′i,n(j, k). The likelihood is related to two factors, namely, 1) what is in Ri as a whole, and 2) what is at position (j, k) of Ri. The former is expressed by a K-dimensional RoI descriptor di representing the overall features of Ri, and the latter is characterized by a semantic feature map si with shape (K, hi, wi) describing the semantic feature at each position in Ri. To make the final RoI feature computed by Eq. 1 transformation-invariant, the mi should transform accordingly when the object transforms.
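For concreteness, the weighted sampling of Eq. 1 reduces to a single tensor contraction over the spatial dimensions. The sketch below assumes the masks mi have already been computed (how they are obtained is described next); the concrete sizes are placeholders:

```python
import torch

def sra_sample(f_i: torch.Tensor, m_i: torch.Tensor) -> torch.Tensor:
    """Eq. 1: y_i(n, c) = sum_{j,k} f_i(c, j, k) * m_{i,n}(j, k).

    f_i: (C, h_i, w_i) sub-sampled RoI feature map
    m_i: (N, h_i, w_i) semantic masks, each summing to 1 over (j, k)
    returns y_i: (N, C), one C-dimensional feature per semantic mask
    """
    return torch.einsum('cjk,njk->nc', f_i, m_i)

f_i = torch.randn(256, 8, 8)                                   # dummy RoI features
m_i = torch.softmax(torch.randn(49, 64), dim=-1).view(49, 8, 8)  # dummy masks
y_i = sra_sample(f_i, m_i)   # (49, 256) RoI feature, flattened for the detection head
```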
To this end, we obtain the likelihood m′i by applying the same regressor at every position (j, k):

m'_{i,n}(j, k) = \xi_n([d_i, s_i(:, j, k)]), \quad \text{for all } n \in \{1, \ldots, N\},\ j \in \{1, \ldots, h_i\},\ k \in \{1, \ldots, w_i\}, \qquad (2)

where the ξn are N learnable sub-mask regressors, each composed of two Norm-ReLU-Linear blocks, [·, ·] denotes concatenation, and si(:, j, k) is the semantic feature at position (j, k). By doing so, if some transformation of the object causes the feature at position (j, k) to move to position (j′, k′), the transformed mask value m̂i,n(j′, k′) = ξn([d̂i, ŝi(:, j′, k′)]) will be similar to mi,n(j, k), since the transformed d̂i and ŝi(:, j′, k′) are similar to di and si(:, j, k), respectively. This means the semantic masks transform accordingly with the transformation. We obtain si by applying a 1×1 convolution to fi, and we explored various forms of the RoI Descriptor Regressor to obtain di:

- Concatenation: d_i = \psi(\mathrm{Flatten}(f_i))
- Maximum: d_i = \psi\big(\max_{j=1}^{h_i} \max_{k=1}^{w_i} f_{i,(:,j,k)}\big)
- Average: d_i = \psi\big(\tfrac{1}{h_i \times w_i} \sum_{j=1}^{h_i} \sum_{k=1}^{w_i} f_{i,(:,j,k)}\big) \qquad (3)

Here ψ is a linear layer with K output channels. Sampling features based on semantic masks may cause the model to lose position information, which is important for the object detection task. We thus use a position embedding pi (see (Zhao, Jia, and Koltun 2020)) to provide position information to the model, and use the positionally embedded m′i,n(j, k) = ξn([di, si(:, j, k), pi(:, j, k)]) instead of Eq. 2. The pi is obtained by applying a 1×1 convolution with P output channels to p′i, where p′i, with shape (2, hi, wi), is the relative position of each location in the RoI, normalized to [−1, 1]:

p'_i(1, j, k) = j / h_i \times 2 - 1, \qquad p'_i(2, j, k) = k / w_i \times 2 - 1. \qquad (4)

The semantic masks mi are then obtained by mi = softmax(m′i · γ), where γ is an amplification factor that amplifies the backpropagation response of the masks, and the softmax acts over the last two dimensions to ensure that each mask sums to 1. Finally, the output RoI feature yi is obtained by summing the elements of fi weighted by the N semantic masks mi, as shown in Eq. 1.

Dynamic Feature Sampler
In SRA, the semantic masks mi are estimated from the sub-sampled feature map fi, and thus the computational overhead of SRA is proportional to the size of fi, i.e., hi × wi. The size of the feature map fi can be set to different values for different RoIs. A straightforward solution is to set it to the original resolution of Ri; however, this would lead to a large computational cost for some large RoIs. Another way is to set it to a fixed value. However, as shown in Figure 4(b), this may cause the aspect ratio of the region represented by each feature of fi to be inconsistent across RoIs, and experiments show that this leads to a loss of accuracy.

Figure 4: Different methods to determine the size of the sub-sampled feature map. (a) Using a size that is the same resolution as the original feature map is costly as it results in too many samples. (b) Using a fixed size may cause the aspect ratio of the region represented by each sub-sampled feature to be inconsistent for different RoIs, which we believe is harmful to the model, e.g., a ratio of approximately 2 for the upper RoI and 0.5 for the lower RoI. (c) Our Dynamic Feature Sampler overcomes the limitations of the above two methods, yielding both consistent and limited samples.
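Before turning to the Dynamic Feature Sampler itself, the mask-estimation pipeline of Eqs. 2-4 can be sketched as a small module. The sketch below is PyTorch-style for illustration only (not the official Jittor implementation); the channel widths K and P, the use of LayerNorm as the "Norm" in Norm-ReLU-Linear, and the hidden width of the sub-mask regressors are assumptions, while N = 49 and γ = 50 follow the settings reported in the experiments:

```python
import torch
import torch.nn as nn

class SemanticMaskHead(nn.Module):
    """Sketch of Eqs. 2-4: per-position sub-mask regression from the RoI
    descriptor d_i (average variant of Eq. 3), semantic features s_i, and a
    position embedding p_i, followed by the amplified softmax."""

    def __init__(self, C=256, K=64, P=16, N=49, gamma=50.0):
        super().__init__()
        self.psi = nn.Linear(C, K)                 # RoI Descriptor Regressor (average)
        self.sem = nn.Conv2d(C, K, kernel_size=1)  # semantic features s_i
        self.pos = nn.Conv2d(2, P, kernel_size=1)  # position / area embedding p_i
        D = 2 * K + P
        # N sub-mask regressors, each "two Norm-ReLU-Linear" blocks (Eq. 2).
        self.regressors = nn.ModuleList([
            nn.Sequential(nn.LayerNorm(D), nn.ReLU(), nn.Linear(D, D),
                          nn.LayerNorm(D), nn.ReLU(), nn.Linear(D, 1))
            for _ in range(N)])
        self.gamma = gamma

    def forward(self, f_i, p_rel):
        # f_i: (C, h, w) sampled RoI features; p_rel: (2, h, w) coords in [-1, 1] (Eq. 4)
        C, h, w = f_i.shape
        d_i = self.psi(f_i.mean(dim=(1, 2)))              # (K,) RoI descriptor
        s_i = self.sem(f_i.unsqueeze(0)).squeeze(0)       # (K, h, w)
        p_i = self.pos(p_rel.unsqueeze(0)).squeeze(0)     # (P, h, w)
        d_map = d_i[:, None, None].expand(-1, h, w)       # broadcast d_i to every cell
        feats = torch.cat([d_map, s_i, p_i], dim=0)       # (2K+P, h, w)
        feats = feats.flatten(1).t()                      # (h*w, 2K+P)
        logits = torch.cat([reg(feats) for reg in self.regressors], dim=1)  # (h*w, N)
        masks = torch.softmax(logits.t() * self.gamma, dim=-1)  # each mask sums to 1
        return masks.view(-1, h, w)                       # (N, h, w)
```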
To balance sampling quality and computational efficiency, we propose a Dynamic Feature Sampler to select the size of fi for each proposal, which keeps the aspect ratio of each sub-sampled region close to 1, and has a limited size. Specifically, for each RoI Ri, we pick the size that has the closest aspect ratio to Ri while not exceeding a maximal area M. Mathematically, this can be formulated as: hi, wi = arg min (h′,w′)∈Z+ 2 with h′·w′≤M h′ w′ −xi,1 −xi,0 yi,1 −yi,0 . (5) The sub-sampled feature map fi is then obtained by dividing Ri region of F into hi ×wi blocks and averaging the feature values in each block. With the Dynamic Feature Sampler, our SRA yields a good performance with a small computational overhead. Area Embedding In Eq. 4, we use position embedding of each grid center in the mask to provide sampling position information to the model. However, as shown in Figure 5(a), since we use a dynamic way to determine the size of the sub-sampled feature map, the same center position may represent different sampling areas. We thus propose Area Embedding to encode the Figure 5: Schematic diagram comparison of traditional Position Embedding and proposed Area Embedding. (a) Position embedding only embeds the coordinates of the sampling center, while the same center position may represent different sampling areas. (b) Our Area Embedding embeds the entire sampling area. sampled area of each point in the output feature with two fixed-length vectors, each representing both the position and the coverage on the horizontal and vertical axes. We set the length of this vector to M, which is the maximal number of samples per axis. For each point (j, k) ∈Z2 sampled by SRA, we calculate p′ i by: p′ i(1 · · · M, j, k) = Upsample (OneHot(j; hi); M) , p′ i((M + 1) · · · 2M, j, k) = Upsample (OneHot(k; wi); M) . (6) where the OneHot(b; a) operator takes an integer b less or equal to a as input and produces the one hot embedding of b within a vector of length a, and Upsample(v; M) upsamples vector v to a M-sized vector. The upsampling method can vary; in Figure 5, we use nearest sampling for convenience of illustration, while in our experiments we use linear sampling for higher accuracy. The Area Embedding provides the model more accurate sampling position information and experiments show that it improves the accuracy of the model. Experiments We conduct our experiments on the MS COCO dataset (Lin et al. 2014), and use the train2017 for training and use the val2017 and the test2017 for testing. We report the standard COCO evaluation metrics including mean Average Precision (AP) under different Intersection over Union (IoU) thresholds and at different object scales, denoted as AP for the object detection task and AP m for the instance segmentation task. Our model is implemented based on Jittor (Hu et al. 2020) and JDet1 library. The implementation details of our model are given in the supplementary material. Ablation Studies We first conduct a series of ablation experiments to verify the effectiveness of each part of the proposed model. The ablation experiments are conducted on the object detection task using the MS COCO validation set. We couple our proposed feature extractor with Faster-RCNN using ResNet-50 as the backbone. RoI Align (He et al. 2017) is used as our baseline model, if not specifically mentioned. 
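Returning to the two components introduced above, the size-selection rule of Eq. 5 and the Area Embedding of Eq. 6 can be sketched as follows. The brute-force search and the tie-breaking towards a larger area are illustrative assumptions (Eq. 5 only specifies the argmin under the area constraint); linear upsampling follows the text:

```python
import torch
import torch.nn.functional as F

def dynamic_size(roi_h: float, roi_w: float, max_area: int = 128):
    """Eq. 5 in spirit: among integer (h, w) with h * w <= M, pick the pair whose
    aspect ratio h/w is closest to the RoI's height/width ratio; ties are broken
    towards the larger area (the tie-breaking rule is an assumption)."""
    target = roi_h / roi_w
    best, best_err = (1, 1), float('inf')
    for h in range(1, max_area + 1):
        for w in range(1, max_area // h + 1):
            err = abs(h / w - target)
            if err < best_err or (err == best_err and h * w > best[0] * best[1]):
                best, best_err = (h, w), err
    return best

def area_embedding(j: int, k: int, h_i: int, w_i: int, M: int = 128):
    """Eq. 6: one-hot row/column indicators upsampled to length M, so the embedding
    encodes both where the sampled cell is and how much of each axis it covers."""
    row = F.one_hot(torch.tensor(j), num_classes=h_i).float().view(1, 1, -1)
    col = F.one_hot(torch.tensor(k), num_classes=w_i).float().view(1, 1, -1)
    row = F.interpolate(row, size=M, mode='linear', align_corners=False).view(-1)
    col = F.interpolate(col, size=M, mode='linear', align_corners=False).view(-1)
    return torch.cat([row, col])      # 2M-dimensional embedding of cell (j, k)

h_i, w_i = dynamic_size(roi_h=100.0, roi_w=200.0)  # -> (8, 16): ratio 0.5, area 128
emb = area_embedding(0, 3, h_i, w_i)               # shape (256,)
```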
1https://github.com/Jittor/JDet The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6490 Method AP AP50 AP75 APS APM APL Params FLOPs RoI Align 37.5 58.2 40.8 21.8 41.1 48.1 41.8M 340.9G DRoIPooling 37.9 59.4 41.8 22.4 41.4 49.3 149.4M 349.0G w/ Conv. 36.4 57.4 39.6 21.5 40.0 46.9 71.9M 350.1G w/ SA 37.6 58.0 40.8 20.9 41.1 48.7 42.7M 348.1G SRA (Ours) 39.2 59.6 42.6 22.5 42.6 51.9 42.0M 344.2G Table 1: Results of comparative and ablation experiments between our SRA and baseline models. The Effectiveness of SRA. To verify the effectiveness of the proposed SRA, we replaced it with RoI Align and Modulated Deformable RoI Pooling (DRoIPooling) (Zhu et al. 2019). The results in Table 1 show that, our model outperforms the baseline model by 1.7% AP with a minor computational and parameters cost, and also outperforms the DRoIPooling by 1.3% AP with a much smaller model. We also compared SRA with two other baselines, by performing some simple operations on the features extracted by RoI Align: applying a convolutional layer on the features to obtain the sampling masks and re-sampling the features with these masks (denoted as “w/ Conv.”), as well as directly passing the features through a standard multi-head self-attention layer (Vaswani et al. 2017) (denoted as “w/ SA”). The results in Table 1 show that our model achieves a gain of 2.8% and 1.6% in AP, respectively, with a smaller number of parameters and FLOPs. This improvement can be attributed to its enhanced capability in identifying consistent semantic parts across diverse transformations, leading to better transformation-invariance. We also visualize some samplings of SRA, DRoIPooling, and “w/ SA” in Figure 3. Rows 1 & 2 together show how samplings respond to different transformations of the same object, while rows 1 & 3 together indicate how samplings respond to different objects of the same class. We found the sampling masks of SRA (columns 1 to 5) can be divided into two classes. The first class samples on different semantic parts. For example, columns 1-4 show the samples on the human’s feet, head, and body, and around the human, respectively. The second class of sampling is for positioning, which is only activated in certain positions. For example, the 5th column is only activated on the bottom position of the RoI. Our semantic samplings can sample on the same semantic parts for the object under different transformations, which gives the extracted RoI feature better transformation invariance. In comparison, the sampling locations of DRoIPooling (column 6 of Figure 3) are distributed mostly inside the object as the object transforms, however, they will not vary accordingly with object pose transformations. Taking 3 samplings (in column 6) as an example, in the top row and middle row, the circles numbered from 1 to 3 do not rotate with the object, which means DRoIPooling can not always achieve transformation-invariance under some complex transformations like rotation. Also, simply passing features through a self-attention layer (w/ SA, column 7 of Figure 3) cannot ensure sampling on the same semantic parts, D S A AP AP50 AP75 APS APM APL ✓ ✗ ✗ 31.4 52.2 32.7 17.8 35.1 40.0 ✗ ✓ ✗ 36.2 57.5 38.6 21.0 39.8 46.6 ✗ ✗ ✓ 37.3 58.3 40.7 21.5 41.0 48.1 ✗ ✓ ✓ 38.9 59.3 42.3 22.5 42.5 51.3 ✓ ✗ ✓ 37.4 58.4 40.4 22.1 41.0 48.4 ✓ ✓ ✗ 36.4 57.3 39.0 20.9 39.9 46.7 ✓ ✓ PE 38.8 59.2 42.5 22.6 42.3 50.7 ✓ ✓ ✓ 39.2 59.6 42.6 22.5 42.6 51.9 Table 2: Ablation study on the effectiveness of each module in our SRA. 
D, S, and A denote RoI descriptor, semantic feature map, and Area Embedding respectively. Setting AP AP50 AP75 APS APM APL DR=Con. 38.9 59.3 41.9 22.5 42.3 51.1 DR=Max. 39.0 59.3 42.9 22.5 42.6 51.1 DR=Avg. 39.2 59.6 42.6 22.5 42.6 51.9 γ = 1 38.3 58.7 41.8 22.2 41.8 50.2 γ = 5 38.7 59.2 42.0 22.1 42.5 51.0 γ = 50 39.2 59.6 42.6 22.5 42.6 51.9 γ = 500 36.8 57.6 39.9 21.2 40.2 48.1 Table 3: Experiments on different module settings. DR denotes RoI Descriptor Regressor, and γ denotes the amplification factor. Setting AP AP50 AP75 APS APM APL Params FLOPs N = 9 38.0 58.2 41.3 21.8 41.3 49.9 31.5M 340.7G N = 25 38.6 59.0 41.9 22.8 42.3 50.5 35.7M 342.1G N = 49 39.2 59.6 42.6 22.5 42.6 51.9 42.0M 344.2G N = 100 39.4 59.9 42.9 22.9 42.6 51.7 55.4M 348.7G size = 32 36.3 57.3 39.0 20.6 39.9 46.4 31.3M 337.8G size = 52 37.2 58.0 40.7 21.4 40.8 47.8 35.5M 339.0G size = 72 37.3 58.3 40.7 21.5 41.0 48.1 41.8M 340.9G size = 102 37.6 58.4 40.8 22.0 41.3 48.8 55.1M 344.9G Table 4: Comparison between SRA with a different number of masks (N) and the baseline model with comparable RoI sizes. thus failing to obtain transformation-invariant RoI features. Structure of SRA and Area Embedding. We also conduct experiments to verify the effectiveness of different components in the SRA by controlling whether to concatenate the RoI descriptor (D), semantic feature map (S), and Area Embedding (A) when regressing the masks. The results are listed in Table 2. Comparing the last row with the 5th row in the table, our model with semantic feature map obtains a gain of 1.8% in AP, as our model determines the masks for sampling based on semantic features, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6491 Setting AP AP50 AP75 Avg. S Params FLOPs fixed 38.8 59.2 42.4 64 42.0M 344.1G M = 32 38.3 59.1 42.0 16 42.0M 341.7G M = 64 38.8 59.5 42.5 32 42.0M 342.5G M = 128 39.2 59.6 42.6 64 42.0M 344.2G M = 256 39.1 59.5 42.5 128 42.0M 347.9G Table 5: Experiments on the Dynamic Feature Sampler. M denotes the dynamic feature map size limit. Method AP AP50 AP75 APS APM APL RoI Pooling 37.2 58.9 40.3 21.5 40.2 46.0 RoI Align 37.7 58.9 40.6 21.9 40.7 46.4 DRoIPooling 38.1 60.0 42.0 22.0 41.2 47.2 Ada. RoI Align 37.7 58.8 40.7 21.8 40.7 46.5 Pr. RoI Align 37.8 58.9 40.9 22.1 40.8 46.7 RoIAttn 38.0 59.3 40.9 22.4 41.1 46.9 SRA 39.2 59.8 42.6 22.6 42.1 49.0 Table 6: Comparison with different RoI extractors on the MS COCO detection test-dev set. which makes the sampled features invariant under a variety of transformations, thus achieving better performance. We also tested our model with or without Area Embedding (8th row and 6th row respectively) and replaced the Area Embedding with position embedding (denoted as PE, 7th row). The results show that the model with AE obtains 2.8% higher AP than without AE, and 0.4% higher AP than with PE, which demonstrates AE can describe more accurately the sampling information of Dynamic Feature Sampler and provide better position information for the model. Choices of the RoI Descriptor Regressor. We tested various choices of the RoI Descriptor Regressor in Eq. 3, denoted as DR=Con., Max., and Avg., respectively, where the choice of concatenation is tested on the 8 × 8 fixed size feature sampler as it cannot be adapted to the Dynamic Feature Sampler. The results are shown in Table 3. Though the choice of DR=Con. 
shows slightly better performance than 8 × 8 fixed size feature sampler with the choice of average (38.8% AP in Table 5), considering that it cannot be adapted to the Dynamic Feature Sampler and will lead to a larger amount of calculation and more parameters, we finally use the average RoI Descriptor Regressor in our model. Parameters Setting. The number of masks N determines the size of the RoI features extracted by our model. We tested different settings of N (denoted as “N = x”) and compared them with the baseline model with different settings of RoI Align output size (denoted as “size = x”). The results in Table 4 show that, with the same RoI feature size (comparing the 1st row with the 5th row, the 2nd row with the 6th row, etc.), our model has a 1.4%-1.9% higher AP, which proves that the transformation-invariant features extracted by our model contain richer information under the same feature length and are more conducive to object detection. Considering the balance between model parameters and accuracy, we finally set N = 49. We also tested different settings of the amplification factor γ, denoted as “γ = x” in Table 3. Results show that setting it to an appropriate value is beneficial to the regression of the semantic mask, so we set γ = 50 according to the experiment. Dynamic Feature Sampler. To evaluate the effectiveness of the Dynamic Feature Sampler, we compared 8 × 8 fixed size feature sampler (denoted as fixed) with Dynamic Feature Sampler (M = 128), which has the same number of average samplings and similar FLOPs. As shown in the 1st and 4th row of Table 5, the Dynamic Feature Sampler obtained better results as its sub-sampled feature represented region has a more consistent aspect ratio. We also tested different dynamic feature map size limit M. A larger M brings a higher feature sampling resolution. In general, the accuracy increases with the increment in the resolution; however, the accuracy improvement brought by the increase in sampling resolution is limited by the resolution of the original feature map, and will gradually tend to zero. So, we choose M = 128 based on the experiment. Comparison with Other Methods We also compared our model with other methods on the COCO test set. We first compared ours with different RoI feature extractors: RoI Pooling (Girshick 2015), RoI Align (He et al. 2017), Adaptive RoI Align (Jung et al. 2018), Precise RoI Align (Jiang et al. 2018), DRoIPooling (Zhu et al. 2019), and RoIAttn (Liang and Song 2022) on Faster R-CNN with ResNet50 as backbone, trained for 12 epochs. The results are shown in Table 6. One can see that our model achieves the best performance. In particular, compared to the RoIAttn which incorporates several self-attention layers on the RoI Align extracted feature, our model obtains 1.2% higher AP. The results demonstrate the advantage of our SRA method for extracting RoI features that are invariant to more types of transformations. To verify the generalizability of our method, we also examined the performance gain by using SRA across different detection methods and backbones. Please refer to the supplementary material for more details. Our model improves the accuracy of various detection methods and backbone networks, demonstrating the generalizability of our model. Conclusions In this paper, we proposed SRA, a transformation-invariant RoI feature extractor. 
It regresses semantic masks based on a novel semantic attention structure, and obtains RoI features by sampling the feature map with these semantic masks, making it invariant under more diverse transformations. We further proposed the Dynamic Feature Sampler to speed up the process while minimizing the impact on accuracy, and proposed Area Embedding to provide more accurate sampling area information. Benefiting from the capability and generalizability of SRA, experiments show that its utilization can bring significant performance improvement to various baselines and state-of-the-art models with a small computational and parameter overhead. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6492 Acknowledgments This work was supported by the National Key Research and Development Program of China under Grant 2021ZD0112902, the National Natural Science Foundation of China under Grant 62220106003, and the Research Grant of Beijing Higher Institution Engineering Research Center and Tsinghua-Tencent Joint Laboratory for Internet Innovation Technology. References Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-End Object Detection with Transformers. Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; and Wei, Y. 2017. Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision, 764–773. Dai, X.; Chen, Y.; Xiao, B.; Chen, D.; Liu, M.; Yuan, L.; and Zhang, L. 2021. Dynamic head: Unifying object detection heads with attentions. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 7373– 7382. Ding, J.; Xue, N.; Long, Y.; Xia, G.-S.; and Lu, Q. 2019. Learning RoI transformer for oriented object detection in aerial images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2849–2858. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2020. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. Girshick, R. 2015. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, 1440–1448. Guo, M.-H.; Lu, C.-Z.; Liu, Z.-N.; Cheng, M.-M.; and Hu, S.-M. 2022. Visual attention network. arXiv preprint arXiv:2202.09741. He, K.; Gkioxari, G.; Doll´ar, P.; and Girshick, R. 2017. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, 2961–2969. Hu, J.; Shen, L.; and Sun, G. 2018. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7132–7141. Hu, S.-M.; Liang, D.; Yang, G.-Y.; Yang, G.-W.; and Zhou, W.-Y. 2020. Jittor: a novel deep learning framework with meta-operators and unified graph execution. Science China Information Sciences, 63: 1–21. Jiang, B.; Luo, R.; Mao, J.; Xiao, T.; and Jiang, Y. 2018. Acquisition of localization confidence for accurate object detection. In Proceedings of the European conference on computer vision (ECCV), 784–799. Jung, I.; Son, J.; Baek, M.; and Han, B. 2018. Real-time mdnet. In Proceedings of the European conference on computer vision (ECCV), 83–98. Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25: 1097–1105. Liang, X.; and Song, P. 2022. Excavating roi attention for underwater object detection. 
In 2022 IEEE International Conference on Image Processing (ICIP), 2651–2655. IEEE. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Doll´ar, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, 740–755. Springer. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; and Berg, A. C. 2016. Ssd: Single shot multibox detector. In European conference on computer vision, 21– 37. Springer. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10012–10022. Mnih, V.; Heess, N.; Graves, A.; et al. 2014. Recurrent models of visual attention. Advances in neural information processing systems, 27. Redmon, J.; Divvala, S.; Girshick, R.; and Farhadi, A. 2016. You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 779–788. Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30. Yang, G.-Y.; Li, X.-L.; Martin, R. R.; and Hu, S.-M. 2021. Sampling equivariant self-attention networks for object detection in aerial images. arXiv preprint arXiv:2111.03420. Zhao, H.; Jia, J.; and Koltun, V. 2020. Exploring selfattention for image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10076–10085. Zhu, X.; Hu, H.; Lin, S.; and Dai, J. 2019. Deformable convnets v2: More deformable, better results. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 9308–9316. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6493 | 2024 | 721 |
18,542 | FACL-Attack: Frequency-Aware Contrastive Learning for Transferable Adversarial Attacks Hunmin Yang1,2,*, Jongoh Jeong1,*, Kuk-Jin Yoon1 1Visual Intelligence Lab., KAIST 2Agency for Defense Development {hmyang, jeong2, kjyoon}@kaist.ac.kr Abstract Deep neural networks are known to be vulnerable to security risks due to the inherent transferable nature of adversarial examples. Despite the success of recent generative model-based attacks demonstrating strong transferability, it still remains a challenge to design an efficient attack strategy in a real-world strict black-box setting, where both the target domain and model architectures are unknown. In this paper, we seek to explore a feature contrastive approach in the frequency domain to generate adversarial examples that are robust in both crossdomain and cross-model settings. With that goal in mind, we propose two modules that are only employed during the training phase: a Frequency-Aware Domain Randomization (FADR) module to randomize domain-variant low- and highrange frequency components and a Frequency-Augmented Contrastive Learning (FACL) module to effectively separate domain-invariant mid-frequency features of clean and perturbed image. We demonstrate strong transferability of our generated adversarial perturbations through extensive crossdomain and cross-model experiments, while keeping the inference time complexity. Introduction Deep neural networks have brought forth tremendous improvements in visual recognition tasks. However, the inherent transferable nature of adversarial examples still exposes the security vulnerability to malicious attackers targeting such susceptible classifiers, causing serious threats and undesirable outcomes in real-world applications. The majority of current attack methods can be primarily classified into two main categories: iterative or optimization-based approaches, and generative model-based approaches. Over the past years, iterative attack methods (Goodfellow, Shlens, and Szegedy 2015; Madry et al. 2018, 2017; Croce and Hein 2020; Lorenz et al. 2021; Dong et al. 2018; Xie et al. 2019; Lu et al. 2020; Naseer et al. 2020) have been the standard attack protocol for its simplicity and effectiveness. However, this iterative approach is frequently constrained by inefficient time complexity and the potential risk of over-fitting to the training data and models. Moreover, it has shown limited applicability in practical situations due to the low transferability to unknown models and domains. *These authors contributed equally. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Data-level Input domain-variant (e.g., texture) High FCs Mid FCs Low FCs domain-invariant (e.g., shape, structure) domain-variant (e.g., color) Band-specific Characteristics in Frequency Domain Randomize Preserve Randomize Keep Perturb Keep Feature-level Figure 1: To boost the transferability of adversarial examples, we exploit band-specific characteristics of natural images in the frequency domain. Our method randomizes domain-variant low- and high-band frequency components (FCs) in the data space, and contrasts domain-invariant midrange clean and perturbed feature pairs in the feature space. Regarding the transferability of adversarial attacks, threat model is typically carried out in three different settings (i.e., white-box, black-box, and strict black-box) depending on the prior knowledge of the model architecture and data distributions by the adversary. 
In each respective setting, the adversary has either complete knowledge of the target model profile (i.e., architecture and weights) and data distributions reflecting the target domain, query access to the limited black-box only, or no information at all. In this work, we specifically consider the strict black-box case in which the victim attributes are completely unknown to the attacker since such a scenario is commonly encountered in practical real-world settings. We believe that crafting adversarial examples in this strict black box setting has practical values towards stronger transferabilty, as well as safe and reliable deployment of deep learning models. In this light, generative attacks (Poursaeed et al. 2018; Naseer et al. 2019; Salzmann et al. 2021; Naseer et al. 2021; Zhang et al. 2022) have recently gained attention, demonstrating the high transferability across unknown models and domains. Moreover, generator-based attacks yield lower time complexity than iterative or optimization-based methods in the inference stage, which is also a crucial part The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6494 for real-world attacks. While the current chain of generative attack methods (Poursaeed et al. 2018; Naseer et al. 2019, 2021; Salzmann et al. 2021; Zhang et al. 2022; Wu et al. 2020) are time-efficient and effective against various black-box settings, we remark that their methods do not actively leverage domain-related characteristics to facilitate more transferable attacks. In that sense, our work is inspired by frequency domain manipulations (Yin et al. 2019; Wang et al. 2020a,b) in domain adaptation (DA) (Yang and Soatto 2020) and generalization (DG) (Huang et al. 2021; Xu et al. 2021), demonstrating the superior generalization capabilities of the trained model. As we target transferable attack on unknown target domains and victim models to boost the transferability in a similar setting, we seek to exploit domain-related characteristics from simple yet effective frequency manipulations. Several recent studies have focused on frequency-based adversarial attacks to manipulate adversarial examples, aimed at deeper understanding of their dataset dependency (Maiya et al. 2021), adversarial robustness (Duan et al. 2021), and the security vulnerability (Dziugaite, Ghahramani, and Roy 2016). With a slightly different motive, SSAH (Luo et al. 2022) aims to improve the perceptual quality, whereas (Guo, Frank, and Weinberger 2019) designs low-frequency perturbations to enhance the efficiency of black-box queries. Although low-frequency perturbations are efficient, they are known to provide less effective transfer between models (Sharma, Ding, and Brubaker 2019). As such, we delve deeper into frequency-driven approaches that effectively enhance the transferability of adversarial examples, especially crafted in a generative framework. To this end, we propose a novel generative attack method, FACL-Attack, to facilitate transferable attacks across various domains and models from the frequency domain perspective. In our training, we introduce frequency-aware domain randomization and feature contrastive learning, explicitly leveraging band-specific characteristics of image attributes such as color, shape, and texture, as illustrated in Figure 1. 
We highlight our contributions as follows: • We propose two modules to boost the adversarial transferability, FADR and FACL, in which FADR randomizes domain-variant data components while FACL contrasts domain-invariant feature pairs in the frequency domain. • We achieve the state-of-the-art attack transferability across various domains and model architectures, demonstrating the effectiveness of our method. • Our plug-and-play modules can be easily integrated into existing generative attack frameworks, further boosting the transferability while keeping the time complexity. Related Work Generator-based Adversarial Attack Generative attack (Poursaeed et al. 2018; Naseer et al. 2019, 2021; Salzmann et al. 2021; Zhang et al. 2022) employs the concept of adversarial training (Goodfellow et al. 2020) to create perturbations across entire data distributions. This is achieved by regarding a pre-trained surrogate model as a discriminator, and it is advantageous due to the ability of generating diverse forms of perturbations across multiple images simultaneously. Existing methods aim to enhance the generator training by leveraging both the crossentropy (CE) loss (Poursaeed et al. 2018) and the relativistic CE loss (Naseer et al. 2019), improving the transferability across domains and models. Recent studies (Salzmann et al. 2021; Zhang et al. 2022) utilize features extracted from the mid-level layers of the surrogate model, which encompass a higher degree of shared information among different model architectures. We follow the traces of the recent works and explore a method to further enhance the transferability by introducing a novel perspective from the frequency domain. Frequency-based Approach for Generalization Convolutional neural networks are known to exhibit intriguing attributes within the frequency domain (Yin et al. 2019; Tsuzuku and Sato 2019; Yin et al. 2019; Wang et al. 2020a,b), demonstrating proficient generalization capability by effectively harnessing the band-specific information derived from Fourier filtering (Dziugaite, Ghahramani, and Roy 2016; Guo et al. 2017; Long et al. 2022). Spectral manipulations for enhancing the generalization capability can be achieved through simple yet powerful transformations like the Fast Fourier Transform (FFT), which dissects an image into amplitude components that vary across domains and phase components that remain consistent across different domains (Xu et al. 2021). The Discrete Cosine Transform (DCT) also serves as an efficient technique to decompose spectral elements into domain-agnostic mid-frequency components (mid-FCs) and domain-specific low- and highFCs, which contributed to the effective spectral domain randomization in FSDR (Huang et al. 2021). In our work, we also employ the DCT to decompose images into domainagnostic and domain-specific frequency components, facilitating the effective domain randomization and feature-level contrastive learning for transferable attacks. Feature Constrastive Learning Manipulating image representations in the feature space has demonstrated significant performance improvement in realworld scenarios characterized by domain shifts. In the field of DA and DG, common approaches such as feature alignment (Yang et al. 2022) and intra-class feature distance minimization with inter-class maximization (Kang et al. 2019; Luo et al. 2022; Jeong and Kim 2022) are successful in mitigating the domain discrepancies. Specifically, several studies (Wang et al. 2022; Kim et al. 
2021) have directly addressed the domain gap issue by manipulating pairs of domain-invariant representations from various domains that correspond to samples of the same class. Continuing in the realm of generative attacks, recent studies have employed CLIP-based (Aich et al. 2022) and object-centric (Aich et al. 2023) features for effective training of the perturbation generator. In our work, we leverage frequency-augmented feature contrastive learning on domain-agnostic mid-band feature pairs. Simultaneously, we reduce the significance of domain-specific features in the low- and high-bands to improve the adversarial transferability.

Figure 2: Overview of FACL-Attack. From the clean input image, our FADR module outputs the augmented image after spectral transformation, which is targeted to randomize only the domain-variant low/high FCs. The perturbation generator $G_\theta(\cdot)$ then produces the $\ell_\infty$-budget-bounded adversarial image $x'_s$ with the perturbation projector $P(\cdot)$ from the randomized image. The resulting clean and adversarial image pairs are decomposed into mid-band (domain-agnostic) and low/high-band (domain-specific) FCs, whose features $f_k(\cdot)$ extracted from the $k$-th layer of the surrogate model are contrasted in our FACL module to boost the adversarial transferability. The adversarial image $x'_s$ is colorized only for visualization.

Proposed Attack Method: FACL-Attack
Problem definition. Generating adversarial examples revolves around solving an optimization problem, whereas generating transferable adversarial examples addresses the challenge of generalization. Our goal is to train a generative model $G_\theta(\cdot)$ to craft adversarial perturbations $\delta$ that transfer well to arbitrary domains and victim models, aiming to trigger mispredictions by the image classifier $f(\cdot)$. Specifically, the generator maps the clean image $x$ to its corresponding adversarial example $x' = G_\theta(x)$, whose perturbation is constrained by $\|\delta\|_\infty \le \epsilon$.

Overview of FACL-Attack. Our method aims to train a robust perturbation generator that yields effective adversarial examples for arbitrary images from black-box domains, inducing the unknown victim model to output misclassifications. It consists of two key modular operations in the frequency domain, applied respectively to the input image data and to the features extracted from the surrogate model, and only during the training stage, as illustrated in Figure 2. Inspired by the power of frequency-domain augmentation in domain generalization (Huang et al. 2021; Xu et al. 2021), our first module, Frequency-Aware Domain Randomization (FADR), transforms a pixel-domain image to frequency-domain components using the DCT.
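To make the budget constraint concrete, the following is a minimal sketch of the perturbation projector $P(\cdot)$ applied around the generator output. It is an illustration rather than the authors' released code: the clamp-style l-infinity projection and the [0, 1] image range are assumptions.

```python
import torch

def project_linf(x_clean: torch.Tensor, x_raw: torch.Tensor, eps: float) -> torch.Tensor:
    """Projection operator P(.): keep the perturbation inside the l_inf ball of radius
    eps around the clean image, then clip to a valid image range (assumed [0, 1]).
    For a budget of 10 intensity levels, eps would be 10/255 in this scale."""
    x_adv = torch.min(torch.max(x_raw, x_clean - eps), x_clean + eps)
    return x_adv.clamp(0.0, 1.0)

def craft_adversarial(generator: torch.nn.Module, x_clean: torch.Tensor, eps: float) -> torch.Tensor:
    """x' = P(G(x)): run the perturbation generator and project its output onto the budget."""
    return project_linf(x_clean, generator(x_clean), eps)
```

With the projection fixed, the FADR module introduced above can now be described in more detail.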
The FADR module randomizes the domain-variant low- and high-frequency band components and preserves the domain-invariant mid-frequency components of the input image. A perturbation generator is then trained to craft bounded adversarial images $x'_s$, i.e., a perturbation $\delta$ added to the clean image $x_s$ and constrained by the perturbation projector $P(\cdot)$. We then spectrally decompose the randomized $x_s$ and $x'_s$ into their low/high-band and mid-band frequency components, which are inversely transformed to the image domain by the IDCT and passed through the pre-defined surrogate model for feature extraction. Following the recent line of work on transferable generative attacks (Salzmann et al. 2021; Zhang et al. 2022), we leverage the mid-layer features $f_k(\cdot)$ for feature contrastive learning. Each band-specific clean and perturbed feature pair is contrasted in our Frequency-Augmented Contrastive Learning (FACL) module, whereby the domain-agnostic mid-band FC pairs repel and the domain-specific low- and high-band FC pairs attract each other. This straightforward but effective data- and feature-level guidance in the frequency domain contributes significantly to boosting the adversarial transferability, as demonstrated in the following sections.

Frequency-Aware Domain Randomization
This subsection describes our FADR module, designed to boost the robustness of the perturbation generator $G_\theta(\cdot)$ against arbitrary domain shifts in practical real-world scenarios.

Figure 3: Visualization of the spectral transformation in FADR. From the clean input image (column 1), our FADR decomposes the image into mid-band (column 2) and low/high-band (column 3) FCs. FADR only randomizes the low/high-band FCs, yielding the augmented output in column 4. Here we demonstrate transformations with large hyper-parameters of ρ = 0.5 and σ = 8 for visualization.

Inspired by recent works that convert training images from pixel space into frequency space to boost domain generalization capabilities (Huang et al. 2021; Xu et al. 2021), we decompose the input training images into multiple-range FCs by leveraging the DCT, and apply a random masked filtering operation on the domain-specific image attributes that lie in the low- and high-frequency bands. While FSDR (Huang et al. 2021) and FACT (Xu et al. 2021) employ histogram matching and Fourier-based amplitude mix-up, respectively, our proposed FADR module explicitly manipulates the DCT coefficients to diversify input images, aligning with a recent work (Long et al. 2022) that narrows the gap between the surrogate model and possible victim models via spectrum transformation. We remark that our approach applies domain randomization exclusively to the domain-specific FCs that are subject to change across domains, whereas the existing work (Long et al. 2022) applies the spectral transformation over the whole frequency band, which contains not only domain-specific information but also domain-agnostic semantic details. In converting the input images into the frequency domain, we apply the DCT to each channel separately. We then apply random masked filtering to diversify the input images for boosting the cross-domain transferability.
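As a concrete reference for this band-wise decomposition, the sketch below splits an image into mid-band and low/high-band components with a channel-wise 2D DCT. Indexing a coefficient's frequency by the maximum of its two indices and the default thresholds are illustrative assumptions; the formal randomization operation follows in Eq. 1-2.

```python
import numpy as np
from scipy.fft import dctn, idctn

def band_masks(h: int, w: int, f_low: int = 7, f_high: int = 112):
    """Band-pass (mid) and band-reject (low/high) masks over the DCT coefficient grid.
    A coefficient (i, j) is assigned the frequency index max(i, j) -- an assumed convention."""
    freq = np.maximum(np.arange(h)[:, None], np.arange(w)[None, :])
    mid = (freq >= f_low) & (freq < f_high)
    return mid, ~mid

def split_bands(img: np.ndarray, f_low: int = 7, f_high: int = 112):
    """Split an HxWxC image into mid-band and low/high-band spatial components using a
    per-channel 2D DCT (type II, orthonormal) and its inverse; the two parts sum back
    to the original image up to floating-point error."""
    h, w, _ = img.shape
    mid, low_high = band_masks(h, w, f_low, f_high)
    coeff = dctn(img, type=2, norm="ortho", axes=(0, 1))
    img_mid = idctn(coeff * mid[..., None], type=2, norm="ortho", axes=(0, 1))
    img_lh = idctn(coeff * low_high[..., None], type=2, norm="ortho", axes=(0, 1))
    return img_mid, img_lh
```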
Our spectral transformation operation $T_{\mathrm{FADR}}(\cdot)$ for source images $x_s$ can be mathematically expressed as follows:

$$ T_{\mathrm{FADR}}(x_s) = \phi^{-1}\big(\phi(x_s + \xi) \odot M\big), \quad (1) $$

with the mask $M$ defined as follows:

$$ M = \begin{cases} U(1-\rho,\, 1+\rho), & \text{if } f < f_l, \\ 1, & \text{if } f_l \le f < f_h, \\ U(1-\rho,\, 1+\rho), & \text{if } f \ge f_h, \end{cases} \quad (2) $$

where $\odot$, $\phi$, and $\phi^{-1}$ denote the Hadamard product, DCT, and inverse DCT (IDCT) operations, respectively. The random noise $\xi \sim \mathcal{N}(0, \sigma^2 I)$ is sampled from a Gaussian distribution, and the mask values are randomly sampled from a uniform distribution, denoted as $U$. The random mask matrix $M$ has the same dimensions as the DCT output, and its entries are assigned as defined in Equation 2, where the low and high thresholds $f_l$ and $f_h$ distinguish the low-, mid-, and high-frequency bands. Note that the FADR module is parameterized by the hyper-parameters $\rho$ and $\sigma$. The spectral transformation in our FADR module is conceptually illustrated in Figure 3. The augmented image output from FADR is then fed as input to the generator $G_\theta(\cdot)$ to yield the adversarial image $x'_s = P(G_\theta(T_{\mathrm{FADR}}(x_s)))$ after the perturbation projection within the pre-defined budget of $\|\delta\|_\infty \le \epsilon$.

Frequency-Augmented Contrastive Learning
Recent works on multi-object scene attacks have highlighted the importance of feature-level contrast for transferable generative attacks. In a similar spirit to their ideas of exploiting local patch differences (Aich et al. 2023) or CLIP features (Aich et al. 2022), our FACL module applies feature contrast specifically in the domain-agnostic mid-frequency range to improve the generalization capability of the trained perturbation generator $G_\theta(\cdot)$.

Spectral decomposition. According to the training pipeline of our FACL-Attack in Figure 2, the generated adversarial image $x'_s$ undergoes spectral decomposition before feature extraction from the surrogate model. This process is carried out by using a band-pass filter $M_{bp}$ and a band-reject filter $M_{br}$ to decompose the surrogate model inputs into mid- and low/high-band FCs, respectively. The spectral decomposition operator is defined as follows:

$$ M_{bp} = \begin{cases} 1, & \text{if } f_l \le f < f_h, \\ 0, & \text{otherwise}, \end{cases} \quad (3) $$

where $M_{br}$ is the complement of $M_{bp}$, holding its values in reverse. The spectrally decomposed features from the surrogate model $f$ are then defined as:

$$ z_{\mathrm{band}} = f_k\big(\phi^{-1}(\phi(x_{\mathrm{input}}) \odot M_{\mathrm{band}})\big), \quad (4) $$

where $M_{\mathrm{band}}$ is set to either $M_{bp}$ or $M_{br}$, and $f_k(\cdot)$ denotes the $k$-th layer of $f$. Given $x_s$ and $x'_s$ as input, we finally obtain two pairs of band-specific frequency-augmented features to contrast, i.e., $(z_m, z'_m)$ for repelling and $(z_{lh}, z'_{lh})$ for attracting each other.

Loss function. The baseline loss $\mathcal{L}_{\mathrm{orig}}$ for attacking the surrogate model by contrasting clean and adversarial feature pairs is defined as follows:

$$ \mathcal{L}_{\mathrm{orig}} = \mathrm{sim}(f_k(x_s), f_k(x'_s)), \quad (5) $$

where $\mathrm{sim}$ refers to the standard cosine similarity metric. To boost the attack transferability, our FACL module exploits the spectrally decomposed feature pairs in the proposed FACL loss function:

$$ \mathcal{L}_{\mathrm{FACL}} = \mathrm{sim}(z_m, z'_m) - \mathrm{sim}(z_{lh}, z'_{lh}), \quad (6) $$

where the goal of $\mathcal{L}_{\mathrm{FACL}}$ is to reinforce the domain-agnostic mid-band feature contrast $(z_m, z'_m)$ while minimizing the importance of the domain-specific low- and high-band feature difference $(z_{lh}, z'_{lh})$. In this way, $\mathcal{L}_{\mathrm{FACL}}$ facilitates a push-pull action among the band-specific feature pairs, further guiding the perturbation generation towards a more robust regime, as shown in Figure 4.
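A NumPy sketch of the FADR transformation in Eq. 1-2 is given below as a hedged illustration, not the released implementation: the per-coefficient frequency index (maximum of the two DCT indices), the per-channel sampling of the mask, and the scaling of σ to a [0, 1] image range are assumptions, while ρ, f_l, and f_h follow the notation above.

```python
import numpy as np
from scipy.fft import dctn, idctn

def t_fadr(x_s: np.ndarray, rho: float = 0.01, sigma: float = 8.0 / 255.0,
           f_low: int = 7, f_high: int = 112) -> np.ndarray:
    """FADR spectral transformation (Eq. 1-2) for one HxWxC image in [0, 1].
    Low/high-band DCT coefficients are rescaled by factors drawn from U(1-rho, 1+rho),
    mid-band coefficients are kept as-is, and Gaussian noise xi ~ N(0, sigma^2 I) is
    added in the pixel domain before the forward DCT (phi)."""
    h, w, c = x_s.shape
    freq = np.maximum(np.arange(h)[:, None], np.arange(w)[None, :])  # assumed frequency index
    mid = (freq >= f_low) & (freq < f_high)

    # Random mask M of Eq. 2: 1 on the mid band, U(1 - rho, 1 + rho) on the low/high bands.
    m = np.random.uniform(1.0 - rho, 1.0 + rho, size=(h, w, c))
    m[mid] = 1.0

    xi = np.random.normal(0.0, sigma, size=x_s.shape)
    coeff = dctn(x_s + xi, type=2, norm="ortho", axes=(0, 1))        # phi(x_s + xi)
    return idctn(coeff * m, type=2, norm="ortho", axes=(0, 1))       # phi^{-1}(phi(.) Hadamard M)
```

The surrogate features of the band-split clean and adversarial images are then compared with cosine similarity exactly as in Eq. 5-6, which is what drives the push-pull behaviour described above.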
Figure 4: Clean image, unbounded adversarial images from the baseline and from baseline+FACL, and the final difference map Diff(baseline, baseline+FACL), from left to right. Our generated adversarial perturbations are more focused on domain-agnostic semantic regions such as shape, facilitating a more transferable attack.

Final learning objective. We train our perturbation generator by minimizing the total loss function:

$$ \min_\theta \; \big(\lambda_{\mathrm{orig}} \cdot \mathcal{L}_{\mathrm{orig}} + \lambda_{\mathrm{FACL}} \cdot \mathcal{L}_{\mathrm{FACL}}\big), \quad (7) $$

where $\lambda_{\mathrm{orig}}$ and $\lambda_{\mathrm{FACL}}$ are loss coefficients. The objective guides our generator $G_\theta(\cdot)$ to generate perturbations that are more robust to domain shifts as well as model variances.

Experiments
Experimental Setting
Datasets and attack settings. We evaluate our method over challenging strict black-box settings (i.e., cross-domain and cross-model) in the image classification task. We set the target domain and victim model to be different from the source domain and surrogate model. The perturbation generator is trained on ImageNet-1K (Russakovsky et al. 2015) and evaluated on CUB-200-2011 (Wah et al. 2011), Stanford Cars (Krause et al. 2013), and FGVC Aircraft (Maji et al. 2013). As BIA (Zhang et al. 2022) highlights the importance of using a large-scale dataset for training, we train on ImageNet-1K accordingly. For the cross-model setting, we evaluate our method over black-box models but a white-box domain (i.e., ImageNet-1K). The details of the datasets are described in Table 1.

Surrogate and victim models. Our perturbation generator is trained against ImageNet-1K pre-trained surrogate models (e.g., VGG-16 (Simonyan and Zisserman 2015)). For the cross-model evaluation, we investigate other architectures including VGG-19 (Simonyan and Zisserman 2015), ResNet50 (Res-50), ResNet152 (Res-152) (He et al. 2016), DenseNet121 (Dense-121), DenseNet169 (Dense-169) (Huang et al. 2017), and Inception-v3 (Inc-v3) (Szegedy et al. 2016). For the cross-domain setting (i.e., CUB-200-2011, Stanford Cars, and FGVC Aircraft), we use fine-grained classification models trained under the DCL framework (Chen et al. 2019) with three different backbones: Res-50, SENet154, and SE-ResNet101 (SE-Res101) (Hu, Shen, and Sun 2018).

Table 1: Description of datasets.
Dataset | # Class | # Train / Val. | Resolution
ImageNet-1K | 1,000 | 1.28 M / 50,000 | 224×224
CUB-200-2011 | 200 | 5,994 / 5,794 | 448×448
Stanford Cars | 196 | 8,144 / 8,041 | 448×448
FGVC Aircraft | 100 | 6,667 / 3,333 | 448×448

Implementation details. We closely follow the implementation of recent works on generative attacks (Naseer et al. 2019; Salzmann et al. 2021; Zhang et al. 2022; Aich et al. 2022) for a fair comparison. Our perturbation generator consists of down-sampling, residual, and up-sampling blocks that translate clean images into adversarial examples. The surrogate model layer from which we extract features is Maxpool.3 for VGG-16. We train with an Adam optimizer (β1 = 0.5, β2 = 0.999) (Kingma and Ba 2015) with a learning rate of 2 × 10^-4 and a batch size of 16 for 1 epoch. The perturbation budget for crafting the adversarial image is l∞ ≤ 10. For the FADR hyper-parameters, we follow a prior work (Huang et al. 2021) and set the low and high frequency thresholds to fl = 7 and fh = 112, respectively. We use ρ = 0.01 and σ = 8 for the spectral transformation and describe more details in the Supplementary.

Evaluation metric and competitors. We choose the top-1 classification accuracy after attacks as our main evaluation metric, unless otherwise stated.
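In code, this metric simply measures how often the victim model still predicts the ground-truth label on attacked images, as in the sketch below; the victim model, trained generator, and data loader are placeholders supplied by the caller, and the [0, 1] image range with a rescaled budget is again an assumption.

```python
import torch

@torch.no_grad()
def top1_accuracy_after_attack(victim, generator, loader, eps, device="cuda"):
    """Top-1 accuracy of a victim classifier on adversarial examples x' = P(G(x)).
    `victim` and `generator` are frozen nn.Modules; `loader` yields (image, label)
    batches with images in [0, 1]. Lower accuracy after the attack means a stronger attack."""
    victim.eval()
    generator.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = torch.min(torch.max(generator(x), x - eps), x + eps).clamp(0.0, 1.0)
        correct += (victim(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```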
The reported results are the average values obtained from three random seed runs. The competitors include the state-of-the-art generative attacks such as GAP (Poursaeed et al. 2018), CDA (Naseer et al. 2019), LTP (Salzmann et al. 2021), and BIA (Zhang et al. 2022). We set BIA as our baseline. Main Results Cross-domain evaluation results. We compare our FACL-Attack with the state-of-the-art generative-model based attacks on various black-box domains with blackbox models. During the training stage, we leverage the ImageNet-1K as the source domain to train a perturbation generator against a pre-trained surrogate model. In the inference stage, the trained perturbation generator is evaluated on various black-box domains (i.e., CUB-200-2011, Stanford Cars, and FGVC Aircraft) with black-box victim models. The victim models include pre-trained models which were trained via DCL framework with three different backbones (i.e., Res-50, SENet154, and SE-Res101). As shown in Table 2, our FACL-Attack outperforms on most cross-domain benchmarks, among which are also cross-model, by significant margins. This demonstrates the strong and robust transferable capability of the generator trained by our novel approach with data- and feature-level guidance in the frequency domain. We posit that the remarkable generalization ability of FACL-Attack owes to the synergy between our two proposed modules that effectively guide feature-level separation in the domain-agnostic mid-frequency band (i.e., FACL), complemented by datalevel randomization only applied to the domain-specific freThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6498 Method CUB-200-2011 Stanford Cars FGVC Aircraft AVG. Res-50 SENet154 SE-Res101 Res-50 SENet154 SE-Res101 Res-50 SENet154 SE-Res101 Clean 87.35 86.81 86.56 94.35 93.36 92.97 92.23 92.08 91.90 90.85 GAP (Poursaeed et al. 2018) 68.85 74.11 72.73 85.64 84.34 87.84 81.40 81.88 76.90 79.30 CDA (Naseer et al. 2019) 69.69 62.51 71.00 75.94 72.45 84.64 71.53 58.33 63.39 69.94 LTP (Salzmann et al. 2021) 30.86 52.50 62.86 34.54 65.53 73.88 15.90 60.37 52.75 49.91 BIA (Zhang et al. 2022) 32.74 52.99 58.04 39.61 69.90 70.17 28.92 60.31 46.92 51.07 FACL-Attack (Ours) 24.74 44.06 53.75 26.58 65.71 61.40 19.72 52.01 48.51 44.05 Table 2: Cross-domain evaluation results. The perturbation generator is trained on ImageNet-1K with VGG-16 as the surrogate model and evaluated on black-box domains with black-box models. We compare the top-1 classification accuracy after attacks with the perturbation budget of l∞≤10 (the lower, the better). Method Venue VGG-16 VGG-19 Res-50 Res-152 Dense-121 Dense-169 Inc-v3 AVG. Clean 70.14 70.95 74.61 77.34 74.22 75.75 76.19 74.17 GAP (Poursaeed et al. 2018) CVPR’18 23.63 28.56 57.87 65.50 57.94 61.37 63.30 55.76 CDA (Naseer et al. 2019) NeurIPS’19 0.40 0.77 36.27 51.05 38.89 42.67 54.02 32.01 LTP (Salzmann et al. 2021) NeurIPS’21 1.61 2.74 21.70 39.88 23.42 25.46 41.27 22.30 BIA (Zhang et al. 2022) ICLR’22 1.55 3.61 25.36 42.98 26.97 32.35 41.20 24.86 FACL-Attack (Ours) 1.45 2.92 19.72 36.61 21.34 25.61 29.97 19.66 Table 3: Cross-model evaluation results. The perturbation generator is trained on ImageNet-1K with VGG-16 as the surrogate model and evaluated on black-box models including white-box model (i.e., VGG-16). We compare the top-1 classification accuracy after attacks with the perturbation budget of l∞≤10 (the lower, the better). quency components (i.e., FADR). 
In other words, our spectral approach does help improve the generalization capability of the perturbation generator to other black-box domains as well as unknown network architectures. Moreover, our proposed training modules are complementary with existing generative-model based attack frameworks and can further improve the attack transferability, as demonstrated in Supplementary. Cross-model evaluation results. Although we demonstrated the effectiveness of FACL-attack on boosting the transferability in strict black-box settings (i.e., cross-domain as well as cross-model) as shown in Table 2, we further investigated on the black-box model scenario in a controlled white-box domain (i.e., ImageNet-1K). In other words, the generator is trained against a surrogate model (i.e., VGG16) and evaluated on various victim models which include VGG-16 (white-box), VGG-19, Res-50, Res-152, Dense121, Dense-169, and Inc-v3. As shown in Table 3, ours also outperforms on most generative attacks where they seem to partially overfit to the white-box model (i.e., VGG-16). Our outperforming results validate the strong transferability in cross-model attacks, in addition to cross-domain. We posit that the frequencyaugmented feature learning could help the perturbation generator craft more robust perturbations, which exhibit better generalization capability to unknown feature space. This aligns with a recent finding (Long et al. 2022) that spectral data randomization contributes to enhance the transferability via simulating diverse victim models. Method Lorig TFADR LFACL Cross-Domain Cross-Model Clean 90.85 74.17 Baseline ✓ 51.07 24.86 FADR ✓ ✓ 46.24 20.28 FACL ✓ ✓ 45.36 20.70 Ours ✓ ✓ ✓ 44.05 19.66 Table 4: Ablation study on our proposed modules. TFADR and LFACL are defined in Eq. 1 and 6, respectively. More Analyses Ablation study on our proposed modules. We examined different attack designs to find out how our proposed modules contribute to the attack transferability. As shown in Table 4, we trained the perturbation generator by employing each method and evaluated under realistic black-box settings. Cross-Domain is defined as ImageNet-1K →{CUB200-2011, Stanford Cars, FGVC Aircraft} and Cross-Model indicates VGG-16 →{VGG-16, VGG-19, Res-50, Res-152, Dense-121, Dense-169, Inc-v3}. Baseline is trained with Lorig without any data randomization or band-specific feature contrast. FADR is trained with Lorig and frequencyaware domain randomization using TFADR. FACL is trained with Lorig and band-specific feature contrast using LFACL. As shown in Table 4, Baseline trained with naive midlayer feature contrast (i.e., Lorig) does not perform well due to the domain bias and model over-fitting. FADR and FACL each outperforms Baseline by a large margin, demonstratThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6499 Method Clean Baseline All-Rand Ours Cross-Domain 90.85 51.07 47.24 44.05 Cross-Model 74.17 24.86 21.68 19.66 Table 5: Comparison with domain randomization on the entire frequency band. Method Accuracy ↓ SSIM↑PSNR↑LPIPS↓ BIA (l∞≤10) 24.86 0.73 28.71 0.49 Ours (l∞≤10) 19.66 0.72 28.61 0.49 Ours (l∞≤9) 23.85 0.75 29.48 0.47 Table 6: Comparison on image quality of adversarial examples with cross-model accuracy on ImageNet-1K. Figure 5: Average cross-domain evaluation results across various frequency thresholds. ing the importance of selectively randomizing the domainvariant data components and contrasting domain-invariant feature pairs for boosting the black-box transferability, respectively. 
Furthermore, Ours performs the best consistently. We speculate that FADR and FACL are complementary since data augmentation through our FADR facilitates the stable feature contrastive learning. Comparison with full-band randomization. We further investigated on the effectiveness of our domain randomization scheme, comparing with the full-band frequency randomization as practiced before (Long et al. 2022). As shown in Table 5, our novel domain-aware approach is superior to the naive full-range randomization method (i.e., All-Rand). Remarkably, All-Rand is closely related to a recent work, namely SSA (Long et al. 2022), which improves the iterative attack transferability by full-range spectral augmentation. Compared to SSA, our method exclusively randomizes the domain-specific low/high-FCs and exploits the frequency-augmented feature contrast. Ours outperforms All-Rand by 3.19%p and 2.02%p in each crossdomain and cross-model evaluation. Without identifying and preserving domain-agnostic information, even the state-ofthe-art method could excessively randomize images, resulting in the degradation of image semantics and leading to the sub-optimal adversarial perturbation generation. Sensitivity on frequency thresholds. We investigated the sensitivity of the chosen frequency thresholds to verify the robustness of our approach. As shown in Figure 5, our method shows robust attack performance across adjacent threshold values, surpassing the baseline performance. Figure 6: Qualitative results. Clean images (row 1), unbounded adversarial images (row 2), and bounded (l∞≤ 10) adversarial images (row 3) are shown for various domains. All of the final unbounded adversarial image samples cause victim classifier models to make incorrect predictions. This implies that mid-frequency range, in general, contains domain-agnostic information that is effective in generating more transferable perturbations against arbitrary domains and model architectures. Analysis on image quality. Although our work is focused on generating more powerful adversarial perturbations, the image quality of the crafted adversarial examples should also be carefully examined. As shown in Figure 6, FACLAttack can craft effective and high-quality adversarial images with imperceptible perturbations. We also conducted a quantitative evaluation of image dissimilarity metrics between clean and adversarial image pairs, including SSIM, PSNR, and LPIPS. As shown in Table 6, we found that ours with a lower perturbation of l∞≤9 demonstrates superior image quality than the baseline with l∞≤10 while achieving better attack performance. In other words, it can yield better attack transferability with lower perturbation power and better image quality, which are very remarkable assets for real-world black-box attacks. We refer to Supplementary for more qualitative and quantitative evaluation results. Conclusion In this paper, we have introduced a novel generator-based transferable attack method, leveraging spectral transformation and feature contrast in the frequency domain. Our work drew inspiration from domain generalization approaches that utilize frequency domain techniques, adapting and enhancing them for the attack framework. In our method, we target spectral data randomization on domain-specific image components, and domain-agnostic feature contrast for training a more robust perturbation generator. Extensive evaluation results validate the effectiveness in practical black-box scenarios with domain shifts and model variances. 
It can also be integrated into existing attack frameworks, further boosting the transferability while keeping the inference time. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6500 Acknowledgments This work was partially supported by the Agency for Defense Development grant funded by the Korean Government, and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF2022R1A2B5B03002636). References Aich, A.; Khang-Ta, C.; Gupta, A.; Song, C.; Krishnamurthy, S. V.; Asif, M. S.; and Roy-Chowdhury, A. K. 2022. GAMA: Generative Adversarial Multi-Object Scene Attacks. arXiv preprint arXiv:2209.09502. Aich, A.; Li, S.; Song, C.; Asif, M. S.; Krishnamurthy, S. V.; and Roy-Chowdhury, A. K. 2023. Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1308–1318. Chen, Y.; Bai, Y.; Zhang, W.; and Mei, T. 2019. Destruction and Construction Learning for Fine-Grained Image Recognition. In CVPR. Croce, F.; and Hein, M. 2020. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In International conference on machine learning, 2206–2216. PMLR. Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; and Li, J. 2018. Boosting Adversarial Attacks with Momentum. In CVPR. Duan, R.; Chen, Y.; Niu, D.; Yang, Y.; Qin, A. K.; and He, Y. 2021. Advdrop: Adversarial attack to dnns by dropping information. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 7506–7515. Dziugaite, G. K.; Ghahramani, Z.; and Roy, D. M. 2016. A study of the effect of jpg compression on adversarial images. arXiv preprint arXiv:1608.00853. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2020. Generative adversarial networks. Communications of the ACM, 63(11): 139–144. Goodfellow, I.; Shlens, J.; and Szegedy, C. 2015. Explaining and Harnessing Adversarial Examples. In International Conference on Learning Representations. Guo, C.; Frank, J. S.; and Weinberger, K. Q. 2019. Low Frequency Adversarial Perturbation. In Globerson, A.; and Silva, R., eds., Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI 2019, Tel Aviv, Israel, July 22-25, 2019, volume 115 of Proceedings of Machine Learning Research, 1127–1137. AUAI Press. Guo, C.; Rana, M.; Cisse, M.; and Van Der Maaten, L. 2017. Countering adversarial images using input transformations. arXiv preprint arXiv:1711.00117. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In CVPR. Hu, J.; Shen, L.; and Sun, G. 2018. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7132–7141. Huang, G.; Liu, Z.; van der Maaten, L.; and Weinberger, K. Q. 2017. Densely Connected Convolutional Networks. In CVPR. Huang, J.; Guan, D.; Xiao, A.; and Lu, S. 2021. Fsdr: Frequency space domain randomization for domain generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6891–6902. Jeong, J.; and Kim, J.-H. 2022. Doubly Contrastive End-toEnd Semantic Segmentation for Autonomous Driving under Adverse Weather. In 33rd British Machine Vision Conference 2022, BMVC 2022, London, UK, November 21-24, 2022. BMVA Press. Kang, G.; Jiang, L.; Yang, Y.; and Hauptmann, A. G. 2019. 
Contrastive adaptation network for unsupervised domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4893–4902. Kim, D.; Yoo, Y.; Park, S.; Kim, J.; and Lee, J. 2021. Selfreg: Self-supervised contrastive regularization for domain generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9619–9628. Kingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization. In ICLR. Krause, J.; Deng, J.; Stark, M.; and Fei-Fei, L. 2013. Collecting a Large-Scale Dataset of Fine-Grained Cars. Long, Y.; Zhang, Q.; Zeng, B.; Gao, L.; Liu, X.; Zhang, J.; and Song, J. 2022. Frequency domain model augmentation for adversarial attack. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part IV, 549–566. Springer. Lorenz, P.; Harder, P.; Straßel, D.; Keuper, M.; and Keuper, J. 2021. Detecting autoattack perturbations in the frequency domain. arXiv preprint arXiv:2111.08785. Lu, Y.; Jia, Y.; Wang, J.; Li, B.; Chai, W.; Carin, L.; and Velipasalar, S. 2020. Enhancing Cross-Task Black-Box Transferability of Adversarial Examples With Dispersion Reduction. In CVPR. Luo, C.; Lin, Q.; Xie, W.; Wu, B.; Xie, J.; and Shen, L. 2022. Frequency-driven imperceptible adversarial attack on semantic similarity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15315–15324. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2018. Towards Deep Learning Models Resistant to Adversarial Attacks. In ICLR. Maiya, S. R.; Ehrlich, M.; Agarwal, V.; Lim, S.-N.; Goldstein, T.; and Shrivastava, A. 2021. A frequency perspective of adversarial robustness. arXiv preprint arXiv:2111.00861. Maji, S.; Rahtu, E.; Kannala, J.; Blaschko, M. B.; and Vedaldi, A. 2013. Fine-Grained Visual Classification of Aircraft. volume abs/1306.5151. Naseer, M.; Khan, S.; Hayat, M.; Khan, F. S.; and Porikli, F. 2021. On generating transferable targeted perturbations. In The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6501 Proceedings of the IEEE/CVF International Conference on Computer Vision, 7708–7717. Naseer, M.; Khan, S. H.; Hayat, M.; Khan, F. S.; and Porikli, F. 2020. A Self-supervised Approach for Adversarial Robustness. In CVPR. Naseer, M. M.; Khan, S. H.; Khan, M. H.; Shahbaz Khan, F.; and Porikli, F. 2019. Cross-domain transferability of adversarial perturbations. Advances in Neural Information Processing Systems, 32. Poursaeed, O.; Katsman, I.; Gao, B.; and Belongie, S. 2018. Generative adversarial perturbations. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4422–4431. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M. S.; Berg, A. C.; and Li, F. 2015. ImageNet Large Scale Visual Recognition Challenge. IJCV. Salzmann, M.; et al. 2021. Learning transferable adversarial perturbations. Advances in Neural Information Processing Systems, 34: 13950–13962. Sharma, Y.; Ding, G. W.; and Brubaker, M. A. 2019. On the Effectiveness of Low Frequency Perturbations. Simonyan, K.; and Zisserman, A. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Bengio, Y.; and LeCun, Y., eds., ICLR. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; and Wojna, Z. 2016. 
Rethinking the Inception Architecture for Computer Vision. In CVPR. Tsuzuku, Y.; and Sato, I. 2019. On the structural sensitivity of deep convolutional networks to the directions of fourier basis functions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 51–60. Wah, C.; Branson, S.; Welinder, P.; Perona, P.; and Belongie, S. 2011. The Caltech-UCSD Birds-200-2011 Dataset. Technical report, California Institute of Technology. Wang, H.; Wu, X.; Huang, Z.; and Xing, E. P. 2020a. High-frequency component helps explain the generalization of convolutional neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8684–8694. Wang, R.; Wu, Z.; Weng, Z.; Chen, J.; Qi, G.-J.; and Jiang, Y.-G. 2022. Cross-domain contrastive learning for unsupervised domain adaptation. IEEE Transactions on Multimedia. Wang, Z.; Yang, Y.; Shrivastava, A.; Rawal, V.; and Ding, Z. 2020b. Towards frequency-based explanation for robust cnn. arXiv preprint arXiv:2005.03141. Wu, W.; Su, Y.; Chen, X.; Zhao, S.; King, I.; Lyu, M. R.; and Tai, Y.-W. 2020. Boosting the transferability of adversarial samples via attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1161– 1170. Xie, C.; Zhang, Z.; Zhou, Y.; Bai, S.; Wang, J.; Ren, Z.; and Yuille, A. L. 2019. Improving Transferability of Adversarial Examples with Input Diversity. In CVPR. Xu, Q.; Zhang, R.; Zhang, Y.; Wang, Y.; and Tian, Q. 2021. A fourier-based framework for domain generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14383–14392. Yang, C.; Cheung, Y.-M.; Ding, J.; Tan, K. C.; Xue, B.; and Zhang, M. 2022. Contrastive learning assisted-alignment for partial domain adaptation. IEEE Transactions on Neural Networks and Learning Systems. Yang, Y.; and Soatto, S. 2020. Fda: Fourier domain adaptation for semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4085–4095. Yin, D.; Gontijo Lopes, R.; Shlens, J.; Cubuk, E. D.; and Gilmer, J. 2019. A fourier perspective on model robustness in computer vision. Advances in Neural Information Processing Systems, 32. Zhang, Q.; Li, X.; Chen, Y.; Song, J.; Gao, L.; He, Y.; and Xue, H. 2022. Beyond imagenet attack: Towards crafting adversarial examples for black-box domains. arXiv preprint arXiv:2201.11528. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6502 | 2024 | 722 |
18,543 | Hybrid-SORT: Weak Cues Matter for Online Multi-Object Tracking Mingzhan Yang1,2*, Guangxin Han1*, Bin Yan1, Wenhua Zhang1, Jinqing Qi1, Huchuan Lu1, Dong Wang1† 1 Dalian University of Technology 2 Shenzhen Tvt Digital Technology Co., Ltd {18742525689, hanguangxindlut, yan bin, 1062894314zwh}@mail.dlut.edu.cn, {jinqing, lhchuan, wdice}@dlut.edu.cn Abstract Multi-Object Tracking (MOT) aims to detect and associate all desired objects across frames. Most methods accomplish the task by explicitly or implicitly leveraging strong cues (i.e., spatial and appearance information), which exhibit powerful instance-level discrimination. However, when object occlusion and clustering occur, spatial and appearance information will become ambiguous simultaneously due to the high overlap among objects. In this paper, we demonstrate this long-standing challenge in MOT can be efficiently and effectively resolved by incorporating weak cues to compensate for strong cues. Along with velocity direction, we introduce the confidence and height state as potential weak cues. With superior performance, our method still maintains Simple, Online and Real-Time (SORT) characteristics. Also, our method shows strong generalization for diverse trackers and scenarios in a plug-and-play and trainingfree manner. Significant and consistent improvements are observed when applying our method to 5 different representative trackers. Further, with both strong and weak cues, our method Hybrid-SORT achieves superior performance on diverse benchmarks, including MOT17, MOT20, and especially DanceTrack where interaction and severe occlusion frequently happen with complex motions. The code and models are available at https://github.com/ymzis69/HybridSORT. Introduction Recently, tracking-by-detection (Bewley et al. 2016; Wojke, Bewley, and Paulus 2017; Zhang et al. 2021, 2022; Du et al. 2023; Ren et al. 2023; Cao et al. 2023) has become the most popular paradigm in Multi-Object-Tracking (MOT), which divides the problem into two sub-tasks. The first task is to detect objects in each frame. The second task is to associate them in different frames. The association task is primarily solved by explicitly or implicitly utilizing strong cues, including spatial and appearance information. This design is reasonable because these strong cues provide powerful instance-level discrimination. However, the commonly used strong cues suffer from degradation under challenging situations such as occlusion and clustering (ID 1 and 2 in Figure 1). Specifically, when two objects are highly overlapped *These authors contributed equally. †Corresponding authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. in the current frame, the Intersection over Union (IoU) between detections and estimated tracklet locations becomes ambiguous, and the appearance features of both objects are dominated by the foreground ones (red dash arrow in the Strong Cues part of Figure 1). In the Weak Cues part of Figure 1, we demonstrate that weak cues, such as confidence state, height state, and velocity direction, can effectively alleviate the ambiguous associations where strong cues become unreliable. However, to the best of our knowledge, weak cues have been ignored in most methods except for very few (e.g., OC-SORT (Cao et al. 2023), MT-IOT (Yan et al. 2022)), as they only possess reliable discrimination among certain objects. 
As shown in Figure 1, the confidence state is only reliable for distinguishing ID 2 from other IDs. In this paper, we select the confidence state and height state as potential types of weak cues, in addition to the velocity direction used in OC-SORT (Cao et al. 2023). The confidence state can explicitly indicate the occluding/occluded (i.e., foreground/background) relations among clustered objects, providing a critical clue that strong cues (i.e., spatial and appearance information) lack. Height state is a stable property of objects which is usually robust to diverse object poses and contains some degree of depth information (i.e., reflects the distance from the camera to the objects). To maintain the Simple, Online and Real-Time (SORT) characteristics, we propose simple yet effective strategies to exploit the aforementioned weak cues, namely Tracklet Confidence Modeling (TCM) and Height Modulated IoU (HMIoU). For TCM, we use Kalman Filter and Linear Prediction to estimate the confidence state of tracklets, which is then used as a metric to associate with detections. For HMIoU, the height state is also modeled by Kalman Filter. The height cost matrix for the association is first defined as the IoU along the height axis for the estimated tracklet box and detection box, then fused with the standard IoU matrix based on the area metric. To evaluate the generalization ability of our design, we apply the proposed designs to 5 different representative trackers, including SORT (Bewley et al. 2016), DeepSORT (Wojke, Bewley, and Paulus 2017), MOTDT (Chen et al. 2018), ByteTrack (Zhang et al. 2022), and OC-SORT (Cao et al. 2023). Both of our designs for confidence state and height state consistently achieve significant improvements, demonThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6504 strating the importance of weak cues for MOT. Further, to advance the state-of-the-art performance of Simple, Online and Real-Time (SORT) MOT methods, we modify the current state-of-the-art SORT-like algorithm OCSORT (Cao et al. 2023) as our strong baseline. Firstly, we modify the velocity direction modeling in OC-SORT, namely Observation-Centric Momentum (OCM), by extending the box center to four box corners and the fixed temporal interval to multiple intervals. Secondly, we include an additional association stage for low-confidence detection following ByteTrack (Zhang et al. 2022). Along with the proposed TCM and HMIoU, our method Hybrid-SORT achieves superior performance on all DanceTrack, MOT17, and MOT20 benchmarks by leveraging both strong and weak cues, while still maintaining Simple, Online and Real-Time (SORT). We hope that the generalization ability, plug-and-play and training-free characteristics of Hybrid-SORT make it attractive for diverse scenarios and edge devices. • We demonstrate the long-standing challenges of occlusion and clustering in MOT can be substantially alleviated by incorporating weak cues (i.e., confidence state, height state and velocity direction) as compensation for commonly used strong cues. • We introduce simple Tracklet Confidence Modeling (TCM) and Height Modulated IoU (HMIoU) to model and leverage the confidence state and height state. With delicate modeling, the weak cues effectively and efficiently relieve the ambiguous matches generated by strong cues with negligible additional computation. • The plug-and-play and training-free design generalizes well over diverse scenarios and trackers. 
We implement our design on 5 representative trackers, achieving consistent and significant improvements. Finally, Our method Hybrid-SORT achieves superior performance on DanceTrack, MOT17, and MOT20 benchmarks. Related Work Heuristic Matcher Spatial-based Heuristic Matcher Spatial information is the most widely used strong cue in high-FPS benchmarks. When time intervals between frames are short, the movement of an object is also small and can be treated as linear. This makes spatial information an accurate metric in the short-term association. The pioneer work SORT (Bewley et al. 2016) uses Kalman Filter (Kalman et al. 1960) to predict the spatial locations of tracklets and perform associates based on the IoU metric. Subsequent works, such as CenterTrack (Zhou, Koltun, and Kr¨ahenb¨uhl 2020), ByteTrack (Zhang et al. 2022), MotionTrack (Qin et al. 2023), and OC-SORT (Cao et al. 2023), are all heuristic matching that only utilize spatial information for association. However, even the most advanced method, OC-SORT (Cao et al. 2023), still suffers from heavy occlusion and clustering. Appearance-based Heuristic Matcher Unlike spatial information, appearance information possesses relatively stable consistency throughout the whole video, thus benefiting 0.95 0.95 0.98 0.85 frame 𝑡+1 detections frame 𝑡tracklets 3 1 2 4 Reliable Unreliable Strong Cues 0.9 0.9 0.9 0.9 1.0 -0.1 IoU Discrimination 0.4 0.8 0.9 0.2 1.0 0.0 Appr. Discrimination Conf. Discrimination Height Discrimination Vel. Dire. Discrimination Weak Cues 0.2 0.6 0.6 1.0 0.3 0.4 0.3 1.0 0.1 0.9 0.0 0.8 0.8 0.1 1.0 0.0 1.0 0.5 Figure 1: The discrimination capacity of strong and weak cues. Green solid arrows represents reliable discrimination between pairwise objects, while red dashed arrows indicate unreliable discrimination. The higher the value of the arrow, the more reliable the discrimination is. long-term association. Following SORT, DeepSORT (Wojke, Bewley, and Paulus 2017) and GHOST (Seidenschwarz et al. 2023) utilize an independent ReID model to extract appearance features for the association. Then the following work JDE (Wang et al. 2020), FairMOT (Zhang et al. 2021), CSTrack (Liang et al. 2022), QDTrack (Pang et al. 2021), FineTrack (Ren et al. 2023) and UTM (You et al. 2023) integrated the detection and ReID models for joint training and designed improved network architectures to enhance performance. However, we observe that among clustered objects, both spatial and appearance cues suffer from severe discrimination degradation, even if delicate network architectures and association strategies are designed. Learnable Matcher Graph-based Learnable Matcher Graph-based learnable matchers formulate the association task as an edge classification task, where the edge label is 1 for tracklet nodes and detection nodes with the same ID and vice versa. MOTSolv (Bras´o and Leal-Taix´e 2020) and GMTracker (He et al. 2021) are based on Graph Neural Network (GNN) and make the data association step differentiable. Most recently, SUSHI (Cetintas, Bras´o, and Leal-Taix´e 2023) leverages graph models to hierarchically connect short tracklets into longer tracklets in an offline fashion. However, the major limitation of graph-based matchers is that the training and inference pipeline is often complicated or even offline, which restricts their practical use in online tracking scenarios that impose strict real-time demands, such as auThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6505 tonomous driving. 
Transformer-based Learnable Matcher Since the Transformer became popular in vision tasks, many works have been proposed to utilize its powerful attention mechanism to model the association task. TrackFormer (Meinhardt et al. 2022) and MOTR (Zeng et al. 2022) utilize both track queries and standard detection queries to jointly perform trajectory propagation and initialization. Most recently, MOTRv2 (Zhang, Wang, and Zhang 2023) introduces a separate detector to MOTR, trying to resolve the conflict between detection and association. However, Transformer-based matchers involve a significant number of self-attention and cross-attention operations, preventing the algorithm from achieving real-time capability.

Method
Hybrid-SORT and Hybrid-SORT-ReID follow the SORT paradigm, which utilizes a Kalman Filter for motion estimation of tracklets, with or without a ReID module for appearance modeling. The association task is solved by the Hungarian algorithm as bipartite graph matching. The cost matrices for the Hungarian algorithm are computed by measuring the pairwise representation similarity between tracklets and detections. The association pipeline is shown in Figure 2.

Weak Cues Modeling
Tracklet Confidence Modeling The reason why the confidence state helps association is straightforward. Specifically, when both commonly used strong cues (i.e., spatial and appearance information) fail because multiple objects are highly overlapped, the confidence of objects provides explicit foreground/background (i.e., occluding/occluded) relationships, which is exactly what strong cues lack. Based on this insight, we introduce two modeling approaches for tracklet confidence, used for association with high-confidence and low-confidence detections, respectively.

When objects are unobstructed or only slightly occluded, the Kalman Filter is an ideal model for estimating the continuous confidence state. Therefore, we extend the widely used standard Kalman Filter in SORT (Bewley et al. 2016) with two additional states: the tracklet confidence $c$ and its velocity component $\dot{c}$. For better clarity, we first revisit the standard Kalman Filter states in SORT, depicted in Eq. 1. Here, $u$ and $v$ denote the object's center, while $s$ and $r$ represent the object box's scale (area) and aspect ratio, respectively. The velocity components are denoted by $\dot{u}$, $\dot{v}$, and $\dot{s}$.

$$ x = [u, v, s, r, \dot{u}, \dot{v}, \dot{s}] \quad (1) $$

With the two newly introduced states $c$ and $\dot{c}$, the complete state of the Kalman Filter in TCM is shown in Eq. 2.

$$ x = [u, v, s, c, r, \dot{u}, \dot{v}, \dot{s}, \dot{c}] \quad (2) $$

For low-confidence detections in the second association step, we utilize Linear Prediction to estimate the tracklet confidence. The confidence of an object rapidly increases or decreases when occlusion ends or starts. Unfortunately, the Kalman Filter exhibits significant lag when attempting to estimate such sudden changes in the confidence state, as shown in Figure 3. However, we observe a clear directionality in the trend of confidence changes during this short period. Therefore, we use a simple Linear Prediction based on the trajectory history to address this issue. The formula for the linear modeling is given by Eq. 3, where $c_{trk}$ represents the confidence of tracklets saved in the Tracklet Memory.

$$ \hat{c}_{trk} = \begin{cases} c^{t-1}_{trk}, & \text{if } c^{t-2}_{trk} = \text{None}, \\ c^{t-1}_{trk} - (c^{t-2}_{trk} - c^{t-1}_{trk}), & \text{otherwise}. \end{cases} \quad (3) $$
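As an illustration of the confidence modeling above (not the authors' released implementation), the sketch below shows the TCM state layout of Eq. 2, the linear confidence extrapolation of Eq. 3, and the pairwise absolute-difference cost used in the association step described next; the Kalman Filter internals (transition and noise matrices) are omitted and would follow the standard SORT formulation, and the clipping of the extrapolated confidence to [0, 1] is an added safeguard.

```python
from typing import Optional
import numpy as np

# TCM Kalman state of Eq. 2: [u, v, s, c, r, du, dv, ds, dc],
# i.e., the standard SORT state of Eq. 1 extended with confidence c and its velocity dc.
def init_tcm_state(u: float, v: float, s: float, r: float, conf: float) -> np.ndarray:
    """Initial 9-dimensional TCM state with zero velocities."""
    return np.array([u, v, s, conf, r, 0.0, 0.0, 0.0, 0.0], dtype=float)

def linear_confidence(c_prev: float, c_prev_prev: Optional[float]) -> float:
    """Linear Prediction of the tracklet confidence (Eq. 3): extrapolate the latest trend,
    falling back to the last value when only one past observation exists."""
    if c_prev_prev is None:
        pred = c_prev
    else:
        pred = c_prev - (c_prev_prev - c_prev)   # = 2*c_{t-1} - c_{t-2}
    return float(np.clip(pred, 0.0, 1.0))        # clipping is an added safeguard

def confidence_cost(trk_conf: np.ndarray, det_conf: np.ndarray) -> np.ndarray:
    """Pairwise |c_trk - c_det| cost between N estimated tracklet confidences and
    M detection confidences, as used in the association stages."""
    return np.abs(trk_conf[:, None] - det_conf[None, :])
```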
CConf = |ˆctrk −cdet| (4) Height Modulated IoU Identifying the temporally stable properties of objects is one of the most critical aspects of multiple object tracking (MOT). The height state can provide informative clues that help to compensate for the discrimination of strong cues. Specifically, height state enhances association in two aspects. Firstly, the height of objects reflects depth information to some extent. For datasets such as DanceTrack, the heights of detection boxes mainly depend on the distance between objects and the camera. This makes the height state an effective cue for distinguishing highly overlapped objects. Secondly, the height state is relatively robust to diverse poses, making it an accurately estimated state and a high-quality representation of objects. Specifically, we define the two boxes as b1 = (x1 1, y1 1, x1 2, y1 2) and b2 = (x2 1, y2 1, x2 2, y2 2) in which x1 and y1 represents the top-left corner while x2 an y2 represents the bottom-right corner. Also, we define the areas of two boxes as A and B. The computation of conventional IoU is shown in Eq. 5, which is based on the area metric. Further, the Height IoU (HIoU) can be generated by computing the IoU based on the height metric, as in Eq. 6. IoU = |A ∩B| |A ∪B| (5) HIoU = min(y1 2, y2 2) −max(y1 1, y2 1) max(y1 2, y2 2) −min(y1 1, y2 1) (6) To better utilize the height state, we introduce Height Modulated IoU (HMIoU) by combining Height IoU (HIoU) with the conventional IoU, as shown in Eq. 7. The · means element-wise multiplication. Considering the HIoU represents the height state which is a weak cue, and IoU represents the spatial information which is a strong cue, we use HIoU to modulate the IoU by element-wise multiplication, achieving enhanced discrimination for clustered objects. HMIoU = HIoU · IoU (7) Hybrid-SORT Robust Observation-Centric Momentum In OC-SORT, the Observation-Centric Momentum (OCM) considers the velocity direction of object centers in the association. The The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6506 2 1 3 4 frame +1 trajectory 0.95 0.95 0.98 0.82 frame detections 3 1 2 4 frame trajectory Appearance Features Location Estimation Conf. Estimation Height Estimation Velocity Direction Kalman Filter Association 0.95 0.98 0.98 0.85 Cues Generation Location Estimation Conf. Estimation Height Estimation Velocity Direction Weak Cues Strong Cues Tracklet Memory [optional] Appr. Features 3 4 21 Figure 2: Pipeline of Hybrid-SORT and Hybrid-SORT-ReID. For strong cues, we utilize IoU as the metric for spatial information, and utilize cosine distance for appearance features. For weak cues, we incorporate the confidence state, height state, and velocity direction. Velocity direction is illustrated by centers instead of corners for better clarity. 420 0.97 0.85 0.68 0.48 0.4 0.6 0.8 1.0 0.51 0.48 0.84 Confidence 0.4 0.6 0.8 1.0 Detection Kalman Filter Linear Prediction Frames 425 435 430 434 435 436 Figure 3: The confidence curve of an object. Kalman Filter estimation lags behind the actual confidence during occlusion while Linear Prediction performs effectively. cost metric used in OCM is the absolute difference between the tracklet velocity direction θt and the tracklet-to-detection velocity direction θd in radians format, which is expressed as ∆θ = |θt −θd|. 
The tracklet velocity direction is obtained from two box centers in the tracklet at a temporal interval ∆t, and the tracklet-to-detection velocity direction is obtained from the centers of a tracklet historical box and a new detection box. Given two points (u1, v1) and (u2, v2), the velocity direction is computed as Eq. 8. However, the modeling of the original OCM is vulnerable to noise caused by fixed temporal intervals and sparse points (i.e., only object centers). θ = arctan v1 −v2 u1 −u2 (8) We improve the OCM by introducing more robust modeling of the velocity direction, namely Robust Observationframe 𝑡−1 frame 𝑡 frame 𝑡+ 1 tracklet tracklet-to-detection ID ID Figure 4: Velocity direction of the center and corners. While the velocity direction of some corners maintains high similarity, the direction of the center is completely opposite. Centric Momentum (ROCM). The modifications include two aspects. Firstly, we extend the fixed time interval of 3 frames to the stack of multiple intervals ranging from 1 to 3. Secondly, we use the four corners of the object instead of its center point to calculate the velocity direction. With multiple temporal intervals and points, the calculation formula for the ROCM is as Eq. 9. Figure 4 illustrates that for objects with complex motions, the velocity direction of corners maintains high similarity, while the direction of the center is nearly opposite. CV el = 3 X ∆t=1 (Clt ∆t + Crt ∆t + Clb ∆t + Crb ∆t) 4 (9) Appearance Modeling We incorporate appearance information using an independent ReID model, as illustrated in Figure 1. Following BoT-SORT, our pipeline first detects objects and then feeds the resulting cropped patches into the ReID model. We model tracklet appearance information using Exponential Moving Average (EMA), and utilize cosine distance as the metric for computing cost CAppr between The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6507 the tracklet and detection appearance features. Note that the ReID components are not the focus of our paper. Algorithm Framework The association stage primarily consists of three stages: the first association stage for highconfidence objects, the second association stage for lowconfidence objects (BYTE in ByteTrack), and the third association stage to recover lost tracklets with their last detection (OCR in OC-SORT). Taking into account all the strong and weak cues, the final cost matrix basically comprises the following terms: C = CHMIoU + λ1CV el + λ2CConf + λ3CAppr (10) Experiments Experimental Setting Datasets We evaluated our design on various MOT benchmarks, including DanceTrack (Sun et al. 2022), MOT20 (Dendorfer et al. 2020) and MOT17 (Milan et al. 2016). DanceTrack is currently one of the most challenging benchmarks in the MOT field, characterized by diverse non-linear motion patterns as well as frequent interactions and occlusions. It is noteworthy that the detection task in DanceTrack is relatively easy, making it an ideal benchmark to evaluate association performance. MOT20 was developed to evaluate algorithms under dense objects and severe occlusions. MOT17 is a widely used standard benchmark in MOT, in which the motion is mostly linear. Given the characteristics of these benchmarks, we primarily focus on comparing our method on DanceTrack as we aim to improve association performance with weak cues in challenging situations. We use MOT17 and MOT20 to evaluate the generalization ability of our method under diverse scenarios. 
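Before turning to the evaluation protocol, the sketch below recaps in one place how the cues defined in the Method section can be fused into the final cost of Eq. 10 for a single tracklet-detection pair. It is an illustration rather than the released Hybrid-SORT code: the velocity term uses arctan2 and a single temporal interval instead of the full multi-interval ROCM, overlaps are clipped at zero, the negative sign on the HMIoU similarity is a sign convention, and the appearance distance is passed in pre-computed.

```python
import numpy as np

def hmiou(b1, b2) -> float:
    """Height Modulated IoU (Eq. 5-7): area IoU modulated by the IoU along the y axis.
    Boxes are (x1, y1, x2, y2); non-overlapping extents are clipped to zero."""
    ix = max(0.0, min(b1[2], b2[2]) - max(b1[0], b2[0]))
    iy = max(0.0, min(b1[3], b2[3]) - max(b1[1], b2[1]))
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    iou = ix * iy / (a1 + a2 - ix * iy + 1e-9)
    hiou = iy / (max(b1[3], b2[3]) - min(b1[1], b2[1]) + 1e-9)
    return hiou * iou

def corners(box):
    """Four corners (lt, rt, lb, rb) of a box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return [(x1, y1), (x2, y1), (x1, y2), (x2, y2)]

def angle(p, q) -> float:
    """Direction from point p to point q in radians (an arctan2 variant of Eq. 8)."""
    return float(np.arctan2(q[1] - p[1], q[0] - p[0]))

def rocm_cost(trk_prev, trk_now, det) -> float:
    """Single-interval sketch of the ROCM term (Eq. 9): the difference between the tracklet
    direction and the tracklet-to-detection direction, averaged over the four corners;
    the full method additionally stacks temporal intervals of 1 to 3 frames."""
    diffs = []
    for cp, cn, cd in zip(corners(trk_prev), corners(trk_now), corners(det)):
        d = abs(angle(cp, cn) - angle(cn, cd))
        diffs.append(min(d, 2.0 * np.pi - d))     # wrap the difference into [0, pi]
    return float(np.mean(diffs))

def hybrid_cost(trk_prev, trk_now, det, trk_conf, det_conf,
                lam_vel=0.2, lam_conf=1.0, appr_cost=0.0, lam_appr=1.0) -> float:
    """Final association cost of Eq. 10 for one tracklet-detection pair. HMIoU is a
    similarity, so it enters with a negative sign here; the appearance term is optional
    (Hybrid-SORT vs. Hybrid-SORT-ReID)."""
    return (-hmiou(trk_now, det)
            + lam_vel * rocm_cost(trk_prev, trk_now, det)
            + lam_conf * abs(trk_conf - det_conf)
            + lam_appr * appr_cost)
```

In the tracker itself, these quantities are computed for all tracklet-detection pairs to build the cost matrices consumed by the Hungarian algorithm in the three association stages.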
The MOT17 validation set follows a widely adopted convention (Zhou, Koltun, and Kr¨ahenb¨uhl 2020), where the train set is split into halves for training and validation. Metrics We selected HOTA (Luiten et al. 2021) as our primary metric due to its higher-order nature. HOTA combines several sub-metrics that evaluate algorithms from different perspectives, providing a comprehensive assessment of algorithm performance. We also include other wellestablished metrics, such as MOTA (Bernardin and Stiefelhagen 2008) and IDF1 (Ristani et al. 2016). IDF1 reflects the association aspect of the tracker, while MOTA is primarily influenced by detection performance. Implementation Details To ensure a fair comparison and demonstrate the superiority of our Hybrid-SORT, we directly adapt publicly available detection and ReID models from existing works. Specifically, for the detection part, we use the same detection model (i.e., YOLOX (Ge et al. 2021)) as our baseline OC-SORT. Likewise, for the ReID part, we use the model (i.e., BoT (Luo et al. 2019)) in BoT-SORT (Aharon, Orfaig, and Bobrovsky 2022). The dimension of the appearance feature is 2048. The weight hyper-parameter of the confidence cost matrix in the first and second association stages are 1.5 and 1.0 on DanceTrack, 1.0 and 1.0 on other benchmarks. The weight of ROCM cost is 0.2, the same as OCM in OC-SORT. The IoU threshold to reject a match is set to 0.15 on DanceTrack, and 0.25 on other benchmarks. Following ByteTrack (Zhang et al. 2022), FPS is measured with FP16-precision (Micikevicius et al. 2018) with batchsize of 1. The hardware is a single V100 GPU with Intel Xeon(R) Silver 4214R CPU @ 2.40GHz. Benchmark Results In this section, we present benchmark results on DanceTrack, MOT20 and MOT17. Methods with identical detection results are grouped together at the bottom of each Table. We emphasize that Hybrid-SORT consistently outperforms the baseline OC-SORT in all three datasets with negligible additional computation and still maintains Simple, Online and Real-Time (SORT) characteristics, even though its performance lags slightly behind by a few works with much heavier models (i.e., MOTRv2), offline pipelines (i.e., SUSHI) or complex pipelines (i.e., MotionTrack and FineTrack) on certain datasets. The limited improvement of Hybrid-SORT on MOT17/20 largely attributes to the inherent shortcomings of the datasets themselves. Prominent studies such as DanceTrack (Sun et al. 2022) and PersonPath22 (Shuai et al. 2022) present two key arguments. First, the performance of methods may not be accurately assessed due to the limited sizes of MOT17/20, which are nearly 10× smaller than DanceTrack. Second, the two datasets mostly consist of simple linear motions and the performance becomes relatively saturated. DanceTrack Compared to the previous state-of-the-art heuristic tracker OC-SORT, Hybrid-SORT exhibits significantly superior performance (i.e., 7.6 HOTA), with identical association inputs and nearly identical computational complexity (refer to Table 1). The results provide convincing evidence that the introduction and modeling of multiple types of weak cues, such as confidence state and height state, can effectively and efficiently resolve ambiguous and incorrect matches where strong cues fail. Further, with an independent ReID model, Hybrid-SORT-ReID achieves a state-of-theart HOTA of 65.7 on DanceTrack for the heuristic tracker. 
For trackers with learnable matcher which show higher performance than Hybrid-SORT, MOTRv2 is also based on YOLOX detector but utilized a modified Deformable DETR (Zhu et al. 2020) with 6 layers of Transformer encoder and 6 layers of Transformer decoder as the matcher, while SUSHI employs GNNs as the matcher with a totally offline pipeline. MOT20 Hybrid-SORT achieves superior performance in the MOT20 test set (as shown in Table 2) with high inference speed. Specifically, Hybrid-SORT surpasses OC-SORT in all metrics (i.e., 0.4 HOTA, 0.3 IDF1, and 0.9 MOTA), with practically indistinguishable additional computation. By utilizing an independent ReID model, Hybrid-SORT achieves a state-of-the-art performance of HOTA 63.9 on MOT20 for the heuristic tracker. The results demonstrate the effectiveness, robustness, and generalization of the proposed method in modeling weak cues for clustered and heavily occluded scenarios with dense objects. MOT17 We present the performance of Hybrid-SORT on MOT17 in Table 3. Specifically, Hybrid-SORT surpasses The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6508 Tracker HOTA ↑ IDF1 ↑ MOTA ↑ Learnable Matcher: MOTR 54.2 51.5 79.7 MOTRv2 69.9 71.7 91.9 SUSHI 63.3 63.4 88.7 Heuristic Matcher: CenterTrack 41.8 35.7 86.8 FairMOT 39.7 40.8 82.2 QDTrack 45.7 44.8 83.0 FineTrack 52.7 59.8 89.9 SORT 47.9 50.8 91.8 DeepSORT 45.6 47.9 87.8 ByteTrack 47.3 52.5 89.5 GHOST 56.7 57.7 91.3 OC-SORT 54.6 54.6 89.6 Hybrid-SORT 62.2 63.0 91.6 Hybrid-SORT-ReID 65.7 67.4 91.8 Table 1: Results on DanceTrack test set. Methods in the gray block share the same detections. The highest-ranking heuristic matcher is emphasized in bold. the previous state-of-the-art tracker OC-SORT in all metrics (i.e., 0.4 HOTA, 0.9 IDF1, and 1.3 MOTA) with negligible additional computation. By incorporating an independent ReID model, Hybrid-SORT further accomplishes performance improvements, setting a superior HOTA of 64.0 on MOT17. It is important to note that our method is primarily designed to address the challenges of object clustering and complex motion patterns. Nevertheless, even when applied to the MOT17 dataset, which represents a more general and easier scenario of linear motion patterns, our method consistently exhibits enhanced tracking performance. Ablation Study Component Ablation As shown in Table 4. The results demonstrate the effectiveness and high efficiency of the proposed modules in Hybrid-SORT. The confidence state modeled by TCM significantly enhances the performance, with improvements of 4.0 HOTA. And notably, TCM only has a minor impact on inference speed (-0.7 FPS). Similarly, the utilization of height state by HMIoU leads to clear improvements in HOTA by 1.6 while barely affecting inference speed (-0.1 FPS). ROCM also enhances the association performance in HOTA by 0.6. However, ROCM reduces the inference speed by 1.5 FPS due to more temporal intervals and modeled points. With a commonly used ReID model, Hybrid-SORT-ReID further boosts the HOTA by 3.7, but the inference speed becomes near real-time. Note that the efficient incorporation of the ReID model into the MOT framework is beyond the scope of this paper. Modeling Strategies in TCM In Table 5, we investigate the performance of Kalman Filter and Linear Prediction for confidence state modeling on the DanceTrack-val. In the first association stage with high-confidence detections, Kalman Filter significantly boosts the association performance by 2.9 HOTA, while Linear Prediction decreases HOTA by 1.1. 
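To make explicit what the two strategies being compared here look like, below is a minimal sketch of confidence-state modeling with a one-dimensional Kalman filter versus linear extrapolation, together with a simple confidence cost. The noise settings and the exact form of the cost are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def predict_conf_linear(conf_history):
    """Linear Prediction: extrapolate the most recent confidence trend."""
    if len(conf_history) < 2:
        return float(conf_history[-1])
    step = conf_history[-1] - conf_history[-2]
    return float(np.clip(conf_history[-1] + step, 0.0, 1.0))

class ConfKalman1D:
    """A 1-D constant-velocity Kalman filter over detection confidence.
    Process/measurement noise values are illustrative, not the paper's."""
    def __init__(self, q=1e-2, r=1e-1):
        self.x = None                                  # state: [conf, conf_velocity]
        self.P = np.eye(2)
        self.F = np.array([[1.0, 1.0], [0.0, 1.0]])    # constant-velocity transition
        self.H = np.array([[1.0, 0.0]])                # only the confidence is observed
        self.Q = q * np.eye(2)
        self.R = np.array([[r]])

    def predict_and_update(self, observed_conf):
        if self.x is None:                             # initialize on first observation
            self.x = np.array([observed_conf, 0.0])
            return float(observed_conf)
        self.x = self.F @ self.x                       # predict
        self.P = self.F @ self.P @ self.F.T + self.Q
        predicted = float(np.clip(self.x[0], 0.0, 1.0))
        y = observed_conf - (self.H @ self.x)[0]       # correct with the new observation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K.flatten() * y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return predicted

def confidence_cost(predicted_track_conf, detection_conf):
    """TCM-style cost: gap between the tracklet's predicted confidence and the
    detection's confidence (the exact cost form is an assumption here)."""
    return abs(predicted_track_conf - detection_conf)
```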
We attribute the results to the fact that high-confidence deTracker HOTA ↑ IDF1 ↑ MOTA ↑ Learnable Matcher: TrackFormer 54.7 65.7 68.6 MOTRv2 61.0 73.1 76.2 UTM 62.5 76.9 78.2 SUSHI 64.3 79.8 74.3 Heuristic Matcher: FairMOT 54.6 67.3 61.8 CSTrack 54.0 66.6 68.6 FineTrack 63.6 79.0 77.9 MotionTrack 62.8 76.5 78.0 ByteTrack 61.3 75.2 77.8 BoT-SORT 63.3 77.5 77.8 GHOST 61.2 75.2 73.7 OC-SORT 62.1 75.9 75.5 Hybrid-SORT 62.5 76.2 76.4 Hybrid-SORT-ReID 63.9 78.4 76.7 Table 2: Results on MOT20-test with the private detections. Methods in the gray block share the same detections. The highest-ranking heuristic matcher is emphasized in bold. Tracker HOTA ↑ IDF1 ↑ MOTA ↑ Learnable Matcher: TrackFormer 57.3 68.0 74.1 MOTR 57.8 68.6 73.4 MOTRv2 62.0 75.0 78.6 UTM 64.0 78.7 81.8 SUSHI 66.5 83.1 81.1 Heuristic Matcher: CenterTrack 52.2 64.7 67.8 QDTrack 53.9 66.3 68.7 FairMOT 59.3 72.3 73.7 CSTrack 59.3 72.6 74.9 FineTrack 64.3 79.5 80.0 MotionTrack 65.1 80.1 65.1 ByteTrack 63.1 77.3 80.3 BoT-SORT 65.0 80.2 80.5 GHOST 62.8 77.1 78.7 OC-SORT 63.2 77.5 78.0 Hybrid-SORT 63.6 78.4 79.3 Hybrid-SORT-ReID 64.0 78.7 79.9 Table 3: Results on MOT17-test with the private detections. Methods in the gray block share the same detections. The highest-ranking heuristic matcher is emphasized in bold. tections usually do not suffer from heavy occlusion, thus the confidence is stable and does not exhibit a clear directional trend. So Kalman Filter models the confidence state well but Linear Prediction fails. In the second association stage with low-confidence detections, both Kalman Filter and Linear Prediction perform well (0.7 and 1.1 HOTA, respectively). The confidence of occluded objects can decrease or increase rapidly depending on whether the clustering starts or ends. Kalman Filter is relatively incapable of modeling such sudden changes and the estimations usually lag behind the actual confidence. However, Linear Prediction can model the directional changes well. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6509 ROCM TCM HMIoU ReID HOTA ↑ FPS ↑ 53.1 30.1 ✓ 53.7 28.6 ✓ ✓ 57.7 27.9 ✓ ✓ ✓ 59.3 27.8 ✓ ✓ ✓ ✓ 63.0 15.5 Table 4: Components ablation on DanceTrack-val. Consistent and significant improvements are observed using the proposed metrics TCM, HMIoU, and ROCM while maintaining real-time capacity. first stage second stage HOTA ↑ IDF1 ↑ MOTA ↑ – – 53.7 53.2 88.9 Kalman – 56.6 56.6 89.2 Kalman Kalman 57.3 57.9 89.2 Linear – 52.6 52.1 89.0 Linear Linear 53.9 53.1 89.2 Kalman Linear 57.7 58.5 89.4 Table 5: Different confidence modeling on DanceTrack-val. The Kalman Filter is effective for unobstructed objects, while Linear Prediction is suitable for occluded objects. Height State or Width State We argue the height state, rather than the width state, can benefit association. Similar to the HMIoU, We propose Width Modulated IoU (WMIoU) by replacing height with width. As shown in Table 6, width state significantly hurt association performance, whereas the height state is beneficial. The reason is the box width varies irregularly due to pose changes or limb movements, posing a challenge for precise estimation by the Kalman Filter. In contrast, the height state undergoes relatively short and continuous changes during squatting or standing up, making it effectively modeled by the Kalman Filter. Generality on Other Trackers We applied our design to other 4 representative heuristic trackers, namely SORT (Bewley et al. 2016), DeepSORT (Wojke, Bewley, and Paulus 2017), MOTDT (Chen et al. 
2018), and ByteTrack (Zhang et al. 2022). Among these trackers, SORT, and ByteTrack rely solely on spatial information, while MOTDT and DeepSORT jointly utilize both spatial and appearance information. The results are presented in Table 7 and Table 8, where a significant improvement can be observed in both DanceTrack and MOT17 datasets for all aforementioned trackers. For instance, our design TCM improves DeepSORT by 4.9 HOTA in DanceTrack and 0.9 HOTA in MOT17, while our HMIoU boosts SORT by 1.6 HOTA in DanceTrack and 1.0 HOTA in MOT17. These results provide convincing evidence that our insight of introducing weak cues like confidence state and height state as compensation for strong cues is effective and generalizes well across different trackers and scenarios. Moreover, our method can be readily applied to existing trackers in a plug-and-play and training-free manner for enhanced performance. HOTA ↑ IDF1 ↑ MOTA ↑ IoU 57.7 58.5 89.4 WMIoU 52.6 52.0 89.0 HMIoU 59.3 60.6 89.5 Table 6: Results of different IoU in DanceTrack-val. The regular height state provides benefits while the irregular width state causes harm. Tracker TCM DanceTrack MOT17 ByteTrack 47.06 67.85 ✓ 49.32 (+2.3) 68.03 (+0.2) SORT 48.34 66.32 ✓ 51.80 (+3.5) 66.52 (+0.2) MOTDT 36.47 65.32 ✓ 37.66 (+1.2) 65.62 (+0.3) DeepSORT 40.38 63.45 ✓ 45.29 (+4.9) 64.36 (+0.9) Table 7: TCM in other representative trackers. TCM consistently enhances tracking performance. Tracker HMIoU DanceTrack MOT17 ByteTrack 47.06 67.85 ✓ 49.68 (+2.6) 67.70 (-0.2) SORT 48.34 66.32 ✓ 49.96 (+1.6) 67.30 (+1.0) MOTDT 36.47 65.32 ✓ 36.83 (+0.4) 65.21 (-0.1) DeepSORT 40.38 63.45 ✓ 41.23 (+0.9) 63.64 (+0.2) Table 8: HMIoU in other representative trackers. HMIoU consistently enhances tracking performance. Conclusion In this paper, we demonstrate that the common and longstanding challenge of heavy occlusion and clustering can be effectively and efficiently alleviated with previously overlooked weak cues (e.g. confidence state, height state, and velocity direction). These weak cues can compensate for the limitations of strong cues. Then, we propose HybridSORT by introducing simple modeling for the newly incorporated weak cues and leveraging both strong and weak cues, which significantly improves the association performance. Furthermore, Hybrid-SORT still maintains Simple, Online and Real-Time (SORT) characteristics, and can be readily applied to existing trackers in a plug-and-play and training-free way. Extensive experiments demonstrate the strong generalization ability of Hybrid-SORT across diverse trackers and scenarios. With widely used appearance information, Hybrid-SORT achieves superior performance over state-of-the-art methods, with a much simpler pipeline and faster association. We hope that the aforementioned characteristics of Hybrid-SORT make it attractive for diverse scenarios and devices with limited computational resources. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6510 Acknowledgments The paper is supported in part by National Natural Science Foundation of China (Nos.U23A20384, 62293542), in part by Talent Fund of Liaoning Province (XLYC2203014), and in part by Fundamental Research Funds for the Central Universities (No.DUT22QN228). References Aharon, N.; Orfaig, R.; and Bobrovsky, B.-Z. 2022. BoTSORT: Robust Associations Multi-Pedestrian Tracking. arXiv:2206.14651. Bernardin, K.; and Stiefelhagen, R. 2008. Evaluating multiple object tracking performance: the clear mot metrics. 
EURASIP Journal on Image and Video Processing, 2008: 1–10. Bewley, A.; Ge, Z.; Ott, L.; Ramos, F.; and Upcroft, B. 2016. Simple online and realtime tracking. In 2016 IEEE international conference on image processing (ICIP), 3464–3468. IEEE. Bras´o, G.; and Leal-Taix´e, L. 2020. Learning a neural solver for multiple object tracking. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 6247–6257. Cao, J.; Pang, J.; Weng, X.; Khirodkar, R.; and Kitani, K. 2023. Observation-centric sort: Rethinking sort for robust multi-object tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9686– 9696. Cetintas, O.; Bras´o, G.; and Leal-Taix´e, L. 2023. Unifying Short and Long-Term Tracking with Graph Hierarchies. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 22877–22887. Chen, L.; Ai, H.; Zhuang, Z.; and Shang, C. 2018. Realtime multiple people tracking with deeply learned candidate selection and person re-identification. In 2018 IEEE international conference on multimedia and expo (ICME), 1–6. IEEE. Dendorfer, P.; Rezatofighi, H.; Milan, A.; Shi, J.; Cremers, D.; Reid, I.; Roth, S.; Schindler, K.; and Leal-Taix´e, L. 2020. MOT20: A benchmark for multi object tracking in crowded scenes. arXiv:2003.09003. Du, Y.; Zhao, Z.; Song, Y.; Zhao, Y.; Su, F.; Gong, T.; and Meng, H. 2023. Strongsort: Make deepsort great again. IEEE Transactions on Multimedia. Ge, Z.; Liu, S.; Wang, F.; Li, Z.; and Sun, J. 2021. YOLOX: Exceeding YOLO Series in 2021. arXiv:2107.08430. He, J.; Huang, Z.; Wang, N.; and Zhang, Z. 2021. Learnable graph matching: Incorporating graph partitioning with deep feature learning for multiple object tracking. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5299–5309. Kalman, R. E.; et al. 1960. Contributions to the theory of optimal control. Bol. soc. mat. mexicana, 5(2): 102–119. Liang, C.; Zhang, Z.; Zhou, X.; Li, B.; Zhu, S.; and Hu, W. 2022. Rethinking the competition between detection and ReID in multiobject tracking. IEEE Transactions on Image Processing, 31: 3182–3196. Luiten, J.; Osep, A.; Dendorfer, P.; Torr, P.; Geiger, A.; LealTaix´e, L.; and Leibe, B. 2021. Hota: A higher order metric for evaluating multi-object tracking. International journal of computer vision, 129: 548–578. Luo, H.; Jiang, W.; Gu, Y.; Liu, F.; Liao, X.; Lai, S.; and Gu, J. 2019. A strong baseline and batch normalization neck for deep person re-identification. IEEE Transactions on Multimedia, 22(10): 2597–2609. Meinhardt, T.; Kirillov, A.; Leal-Taixe, L.; and Feichtenhofer, C. 2022. Trackformer: Multi-object tracking with transformers. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8844–8854. Micikevicius, P.; Narang, S.; Alben, J.; Diamos, G.; Elsen, E.; Garcia, D.; Ginsburg, B.; Houston, M.; Kuchaiev, O.; Venkatesh, G.; and Wu, H. 2018. Mixed Precision Training. arXiv:1710.03740. Milan, A.; Leal-Taixe, L.; Reid, I.; Roth, S.; and Schindler, K. 2016. MOT16: A Benchmark for Multi-Object Tracking. arXiv:1603.00831. Pang, J.; Qiu, L.; Li, X.; Chen, H.; Li, Q.; Darrell, T.; and Yu, F. 2021. Quasi-dense similarity learning for multiple object tracking. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 164–173. Qin, Z.; Zhou, S.; Wang, L.; Duan, J.; Hua, G.; and Tang, W. 2023. MotionTrack: Learning Robust Short-term and Longterm Motions for Multi-Object Tracking. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17939–17948. Ren, H.; Han, S.; Ding, H.; Zhang, Z.; Wang, H.; and Wang, F. 2023. Focus On Details: Online Multi-object Tracking with Diverse Fine-grained Representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11289–11298. Ristani, E.; Solera, F.; Zou, R.; Cucchiara, R.; and Tomasi, C. 2016. Performance measures and a data set for multitarget, multi-camera tracking. In Computer Vision–ECCV 2016 Workshops: Amsterdam, The Netherlands, October 810 and 15-16, 2016, Proceedings, Part II, 17–35. Springer. Seidenschwarz, J.; Bras´o, G.; Serrano, V. C.; Elezi, I.; and Leal-Taix´e, L. 2023. Simple Cues Lead to a Strong MultiObject Tracker. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13813– 13823. Shuai, B.; Bergamo, A.; Buechler, U.; Berneshawi, A.; Boden, A.; and Tighe, J. 2022. Large scale real-world multiperson tracking. In European Conference on Computer Vision, 504–521. Springer. Sun, P.; Cao, J.; Jiang, Y.; Yuan, Z.; Bai, S.; Kitani, K.; and Luo, P. 2022. Dancetrack: Multi-object tracking in uniform appearance and diverse motion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 20993–21002. Wang, Z.; Zheng, L.; Liu, Y.; Li, Y.; and Wang, S. 2020. Towards real-time multi-object tracking. In Computer Vision– ECCV 2020: 16th European Conference, Glasgow, UK, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6511 August 23–28, 2020, Proceedings, Part XI 16, 107–122. Springer. Wojke, N.; Bewley, A.; and Paulus, D. 2017. Simple online and realtime tracking with a deep association metric. In 2017 IEEE international conference on image processing (ICIP), 3645–3649. IEEE. Yan, F.; Li, Z.; Luo, W.; jie, Z.; Liang, F.; Wei, X.; and Ma, L. 2022. Multiple Object Tracking Challenge Technical Report for Team MT IoT. arXiv:2212.03586. You, S.; Yao, H.; Bao, B.-K.; and Xu, C. 2023. UTM: A Unified Multiple Object Tracking Model With Identity-Aware Feature Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 21876–21886. Zeng, F.; Dong, B.; Zhang, Y.; Wang, T.; Zhang, X.; and Wei, Y. 2022. Motr: End-to-end multiple-object tracking with transformer. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVII, 659–675. Springer. Zhang, Y.; Sun, P.; Jiang, Y.; Yu, D.; Weng, F.; Yuan, Z.; Luo, P.; Liu, W.; and Wang, X. 2022. Bytetrack: Multiobject tracking by associating every detection box. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXII, 1–21. Springer. Zhang, Y.; Wang, C.; Wang, X.; Zeng, W.; and Liu, W. 2021. Fairmot: On the fairness of detection and re-identification in multiple object tracking. International Journal of Computer Vision, 129: 3069–3087. Zhang, Y.; Wang, T.; and Zhang, X. 2023. Motrv2: Bootstrapping end-to-end multi-object tracking by pretrained object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 22056–22065. Zhou, X.; Koltun, V.; and Kr¨ahenb¨uhl, P. 2020. Tracking objects as points. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IV, 474–490. Springer. Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2020. Deformable DETR: Deformable Transformers for End-to-End Object Detection. 
In International Conference on Learning Representations. | 2024 | 723 |
18,544 | Multi-Modal Prompting for Open-Vocabulary Video Visual Relationship Detection Shuo Yang1,2*, Yongqi Wang2*, Xiaofeng Ji2, Xinxiao Wu2,1† 1Guangdong Laboratory of Machine Perception and Intelligent Computing, Shenzhen MSU-BIT University, China 2Beijing Key Laboratory of Intelligent Information Technology, School of Computer Science & Technology Beijing Institute of Technology, China {shuoyang,3120230916,jixf,wuxinxiao}@bit.edu.cn Abstract Open-vocabulary video visual relationship detection aims to extend video visual relationship detection beyond annotated categories by detecting unseen relationships between objects in videos. Recent progresses in open-vocabulary perception, primarily driven by large-scale image-text pre-trained models like CLIP, have shown remarkable success in recognizing novel objects and semantic categories. However, directly applying CLIP-like models to video visual relationship detection encounters significant challenges due to the substantial gap between images and video object relationships. To address this challenge, we propose a multi-modal prompting method that adapts CLIP well to open-vocabulary video visual relationship detection by prompt-tuning on both visual representation and language input. Specifically, we enhance the image encoder of CLIP by using spatio-temporal visual prompting to capture spatio-temporal contexts, thereby making it suitable for object-level relationship representation in videos. Furthermore, we propose vision-guided language prompting to leverage CLIP’s comprehensive semantic knowledge for discovering unseen relationship categories, thus facilitating recognizing novel video relationships. Extensive experiments on two public datasets, VidVRD and VidOR, demonstrate the effectiveness of our method, especially achieving a significant gain of nearly 10% in mAP on novel relationship categories on the VidVRD dataset. Introduction The task of Open-vocabulary Video Visual Relationship Detection (Open-VidVRD) (Gao et al. 2023) aims to detect video relationships between two objects as triplet format ⟨subject, predicate, object⟩following an open-vocabulary setting, where the model is learned on base relationship categories during training and is applied to infer both base and novel relationship categories during testing. As shown in Figure 1, the novel categories of ⟨person, wear, hat⟩and ⟨person, pull, dog⟩, which are absent during training, can be recognized during testing. Different from the closed-set setting that does not involve novel categories, the OpenVidVRD task is more essential and practical for real-world *These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. TRAINING TEST base categories base categories novel categories person shirt shoes run front wear wear dog walk behind person shirt shoes wear wear dog hat wear stand next to pull Open-VidVRD (a) (b) Open-VidVRD Figure 1: Illustration of the Open-VidVRD task. Training is performed on base categories. Testing is performed on both base categories and novel categories that are absent in training. scenarios characterized by complex and diverse object relationships. The recent emergence and advancement of large-scale pre-trained image-text models (Jia et al. 2021; Pham et al. 2021; Radford et al. 
2021) present a promising avenue for Open-VidVRD, leveraging their learned vision-language joint embedding space, which contains rich semantic knowledge about objects, scenes, actions, and interactions (Gao et al. 2022b; Gu et al. 2022; Kuo et al. 2023; Ni et al. 2022; Weng et al. 2023; Xu et al. 2023). However, directly applying these models to open-vocabulary video visual relationship detection entails confronting two challenges. The first challenge pertains to handling the domain gap between images and video object relationships. Most image-text pretrained models are learned on images and their corresponding textual descriptions, which typically encompass the entire image content rather than specific local objects. Consequently, such models lack the ability to capture both temporal context information and object-level details, limiting their applicability in video object relationship analysis. The second challenge revolves around leveraging comprehensive semantic knowledge learned in these pre-trained models to discover novel video visual relationships. In this paper, we propose a multi-modal prompting The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6513 method for Open-VidVRD1, which prompts the pre-trained image-text model, i.e., CLIP (Radford et al. 2021), on both visual and language sides, to improve the alignment between visual and language representations of relationships. To address the first challenge, we propose spatio-temporal visual prompting to improve CLIP’s ability to capture spatial and temporal relationships between objects. Starting with using CLIP to identify and detect novel objects, we then introduce sequential Transformer blocks (Vaswani et al. 2017) to model the spatial and temporal context among these objects. Therefore, we adapt CLIP well to the spatio-temporal object-level relationship domain, ultimately enhancing its applicability to video visual relationship detection. To address the second challenge, we propose vision-guided language prompting to exploit CLIP’s comprehensive semantic knowledge for discovering novel relationships. We introduce two kinds of prompts, learnable continuous prompts and learnable conditional prompts, where the former imparts task-specific prior knowledge and the latter dynamically adapts to visual cues. Through the integration of these two prompts, we not only obtain shared task-specific priors, but also retain the ability to effectively incorporate novel categories. By integrating the spatio-temporal visual prompting and the vision-guided language prompting, our method successfully bridges the domain gap and exploits the semantic knowledge embedded in CLIP for video visual relationship detection in the open-vocabulary scenario. Extensive experiments show that our method achieves significant performance improvements over existing methods, especially achieving nearly 10% mAP gains on the novel relationship categories on the VidVRD dataset. To summarize, our main contributions are three-fold: (1) We propose a multi-modal prompting method for detecting video visual relationships in an open-vocabulary setting, which prompts CLIP on both the visual and language sides. (2) We propose spatio-temporal visual prompting to imbue CLIP with the capabilities of spatial and temporal modeling, effectively bridging the gap between images and video visual relationships. (3) We propose vision-guided language prompting, which exploits semantic knowledge from CLIP to discover novel visual relationships in videos. 
Related Work Open-vocabulary Visual Relationship Detection. The task of visual relationship detection in images (Lu et al. 2016) or videos (Shang et al. 2017), involving the classification and localization of relationship triplets, has become a hot topic in the field of computer vision (Tang et al. 2020; Li et al. 2022b; Cong, Yang, and Rosenhahn 2023; Zheng, Chen, and Jin 2022; Xu et al. 2022; Chen, Xiao, and Chen 2023). This field has also explored the concept of zero-shot detection (Shang et al. 2021), where all object and relationship categories are seen during training, but some certain triplet combinations remain unseen during test. 1Codes are at https://github.com/wangyongqi558/MMP OV VidVRD In recent years, open-vocabulary visual relationship detection (He et al. 2022; Gao et al. 2023) has emerged, aiming to recognize visual relationships involving objects or predicates that are completely unseen in the training data. SVRP (He et al. 2022) adopts a two-step method for open-vocabulary visual relationship detection, including visual-relationship pre-training and prompt-based finetuning. However, this method caters to relationships within static images, but not within videos. In contrast, Repro (Gao et al. 2023) pioneers the open-vocabulary video visual relationship detection by leveraging the video-language pretraining model ALPro (Li et al. 2022a), sidestepping the necessity of training from scratch. However, Repro predominantly aligns the embedding space of video relationships from a linguistic perspective, ignoring the inherent spatial and temporal contextual dependencies in object relationships. The multi-modal prompting method proposed in this paper adapts the more broadly applicable image-text pretrained CLIP on both visual and language sides to capture the spatio-temporal contexts of relationships and exploit prior semantic knowledge of novel relationships. Prompting CLIP. Recently, visual-language pre-trained models (Radford et al. 2021; Alayrac et al. 2022; Li et al. 2020; Luo et al. 2020) have demonstrated significant progress in many downstream vision-language tasks. As one of the most successful visual-language pre-training models, CLIP (Radford et al. 2021), is extensively pre-trained using 400 million image-text pairs from the Internet, resulting in a visual-language embedding space with comprehensive semantic knowledge. Various techniques for prompt learning (Gu et al. 2022; Ding, Wang, and Tu 2022; Xu et al. 2023; Kuo et al. 2023; Gao et al. 2022b; Wang, Xing, and Liu 2021) have emerged to facilitate the efficient transfer of knowledge from CLIP to the downstream tasks. These tasks range from few/zeroshot classification (Pham et al. 2021; Zhou et al. 2022), open-vocabulary recognition and detection (Gu et al. 2022; Du et al. 2022; Ding, Wang, and Tu 2022; Ma et al. 2022; Wang et al. 2022; Kuo et al. 2023), to video-related applications (Xu et al. 2021; Ju et al. 2022; Wang, Xing, and Liu 2021; Ni et al. 2022; Weng et al. 2023). However, these approaches prompt CLIP in a single modality, either visual or language, resulting in sub-optimal performance. A method closely related to our method is MaPLe (Khattak et al. 2023), which prompts CLIP in both vision and language to enhance the alignment between visual and language representations. It is worth noting that MaPLe primarily focuses on image recognition tasks. In contrast, our method is tailored for video visual relationship detection, which is more challenging than image domain tasks. 
Our Method
Overview
Video Visual Relationship Detection (VidVRD) aims to detect instances of visual relationships of interest in a video, where a visual relationship instance is represented by a triplet ⟨subject, predicate, object⟩ with the trajectories of subject and object. For the Open-vocabulary Video Visual Relationship Detection (Open-VidVRD), the categories of objects and predicates are divided into base and novel splits, i.e., base objects (C^O_b), novel objects (C^O_n), base predicate (C^P_b), and novel predicate (C^P_n). Only base object and predicate categories are used in the training stage, and all categories are used in the testing stage.

To address the Open-VidVRD task, we propose a multi-modal prompting method to prompt the well-known CLIP model on both visual representation and language input. Our method consists of three main components: an open-vocabulary object tracklet detection module for both base and novel objects, a spatio-temporal prompting module to enable the spatio-temporal modeling of object relationships, and a language prompting module with both learnable continuous prompts and learnable conditional prompts. The overview of our method is shown in Figure 2.

[Figure 2: Overview of proposed method, comprising open-vocabulary tracklet detection, spatio-temporal visual prompting, and visual-guided language prompting (VPN: visual-guided prompting network).]

Open-vocabulary Object Tracklet Detection
We use a pre-trained tracklet detector (Gao et al. 2022a) to obtain N class-agnostic visual object tracklets T = {T^i}_{i=1}^{N}, T^i = {B_t}_{t=1}^{T}, where T^i are visual object trajectories with T frames and B_t is the bounding box of object in frame t. We classify each tracklet using CLIP. Specifically, we extract the visual representations of cropped object regions corresponding to detected bounding boxes using CLIP’s image encoder and average the features along temporal axis of the tracklet. Meanwhile, we extract text embeddings for all object categories by feeding handcrafted prompts, i.e., “a photo of [CLS]” into the text encoder of CLIP, where [CLS] can be replaced with the names of objects. Then, we assign each tracklet an object category label c by its maximum scores within all objects:

p(c) = exp(cos(v_i, l̂_c)/τ) / Σ_{c′ ∈ C^O_b ∪ C^O_n} exp(cos(v_i, l̂_{c′})/τ),   (1)

where c ∈ C^O_b ∪ C^O_n, τ is a temperature parameter, and cos(v_i, l̂_c) is the cosine similarity between the visual features v_i of the i-th object trajectory and the text embedding features l̂_c of the object category c.

Spatio-temporal Visual Prompting
Pre-trained image-text models such as CLIP are typically trained using images and their associated textual descriptions. These descriptions usually encapsulate the entire content of the image, rather than focusing on individual objects. Therefore, these models lack detailed object-level information and temporal context.
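(A concrete reading of the tracklet classification step in Eq. 1 above, before turning to how the spatio-temporal prompting addresses this gap, is sketched below in NumPy. It assumes the per-frame CLIP features of the cropped boxes and the text embeddings of the handcrafted prompts have already been extracted, and the temperature value is illustrative.)

```python
import numpy as np

def classify_tracklet(frame_feats, text_feats, tau=0.01):
    """Open-vocabulary tracklet classification in the spirit of Eq. 1.
    frame_feats: (T, D) CLIP image features of the cropped object regions over time.
    text_feats:  (C, D) text embeddings of the "a photo of [CLS]" prompts,
                 one row per category in C_b^O ∪ C_n^O.
    Returns the predicted category index and the full probability vector."""
    v = frame_feats.mean(axis=0)                               # temporal average
    v = v / (np.linalg.norm(v) + 1e-8)
    t = text_feats / (np.linalg.norm(text_feats, axis=1, keepdims=True) + 1e-8)
    logits = (t @ v) / tau                                     # cos(v_i, l_c) / tau
    logits = logits - logits.max()                             # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return int(np.argmax(probs)), probs
```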
To address the disparity in relationships between the image and video domains, we introduce the spatio-temporal visual prompting, which is integrated into the image encoding process of CLIP, to bridge the domain gap and enhance CLIP's ability to capture both spatial and temporal visual relationships in videos. For each pair of tracklets, i.e., a subject tracklet and an object tracklet, we first set the regions outside the bounding boxes of the tracklets to 0, resulting in four masked frames: frames corresponding to the subject, object, their union, and background (the whole image). We then extract their features using CLIP and capture their spatio-temporal relationships by standard Transformer blocks. To reduce the computational complexity, we decouple the spatio-temporal modeling into separate and successive modules, namely spatial modeling and temporal modeling, as illustrated in Figure 3.

[Figure 3: Overview of spatial modeling and temporal modeling. b^k, r^k, k ∈ {s, o, u, b}, denote the spatial position embedding and role embedding corresponding to subject, object, their union, and background, respectively; t_t is the temporal embedding of the t-th frame. Note that the same spatial modeling is used across different frames, so the frame index is omitted for simplicity.]

Spatial Modeling. Spatial relationships between objects are typically defined by their positional orientations, such as being in front of or above each other. Therefore, spatial modeling requires combining four key elements: features of the subject region, features of the object region, features encompassing their union region (i.e., the smallest area covering subject and object), and features representing the background (i.e., the whole image). This process involves modeling interactions between objects and their backgrounds to enhance object features, thus capturing the spatial context. Specifically, given the masked frames of subject, object, their union and background, we first extract their features by the image encoder of CLIP, denoted by v^k, k ∈ {s, o, u, b}. We then add two types of learnable embeddings: positional embeddings b^k, k ∈ {s, o, u, b}, which are learned with respect to the normalized bounding box, and role embeddings r^k, k ∈ {s, o, u, b}. Both types of embeddings are learned and shared over all video frames. Finally, we update the visual features by

(v̇^s, v̇^o, v̇^u, v̇^b) = STrans(I^s, I^o, I^u, I^b),   (2)

where I^k = v^k + b^k + r^k, k ∈ {s, o, u, b}, and STrans(·) denotes the spatial Transformer blocks.

Temporal Modeling.
Temporal relationships of objects are time-dependent, such as toward or away, so the inputs of temporal modeling contain two items: visual features and temporal embeddings. Note that the same temporal modeling is used for different roles, i.e., subject, object, their union, and background. Through the exploration of dynamic state transformations, visual features are systematically updated. Specifically, given the spatial encoded visual features ˙v = { ˙vs t, ˙vo t, ˙vu t , ˙vb t}T t=0, we collect each role features separately from all frames as ˙vk = { ˙vk t }T t=0, k ∈{s, o, u, b}, and add temporal embedding tt, which are learned related to frame t and shared over all video frames. For each region, the visual features are then updated by ¨vk = {¨vk t }T t=0 = TTrans( ˙I k 0, ˙I k 1, · · · , ˙I k T ), (3) where ˙I k t = ˙vk t + tt, and TTrans(·) denotes the temporal Transformer blocks. The updated features are then reorganized by their frame indexes for the next layer. Vision-guided Language Prompting The main goal of the Open-VidVRD task is to discover novel video visual relationships. To achieve this goal, considering both task-related prior knowledge and visual-related prior knowledge, we propose vision-guided language prompting to leverage the rich semantic knowledge stored in CLIP by combining learnable continuous language prompts and learnable conditional language prompts. Learnable Continuous Language Prompts. For each predicate category [CLS], [CLS] ∈ CP b when training and [CLS] ∈CP n ∪CP b when testing, Nς-token language prompts corresponding to each of the four roles (i.e., subject, object, union, and background) are initialized as ςk = [ςk 1, ςk 2, · · · , ςk Nς], k ∈{s, o, u, b} and are learned by gradient backpropagation. Learnable Conditional Language Prompts. For each predicate category [CLS], [CLS] ∈CP b when training and [CLS] ∈CP n ∪CP b when testing, Nζ-token learnable conditional language prompts corresponding to each of the four roles (subject, object, union, and background) are learned by taking into account their visual features: ζk = [ζk 1, ζk 2, · · · , ζk Nζ] = φk(¨vk), (4) where φk(·) denotes the vision-guided prompting network, consisting of two linear layers. Learnable Vision-guided Language Prompts. We concatenate the token of learnable continuous language prompts and token of learnable conditional language prompts interlaced, and then insert the [CLS] token of predicate categories into the later middle (75% of the length) of token sequence, resulting in the final language prompts ℓk CLS = h ςk 1, ζk 1, ςk 2, ζk 2, · · · , CLS, · · · , ςk Nς, ζk Nζ i , k ∈{s, o, u, b}. The final language features of category c corresponding to each visual region are generated by lk c = π(ℓk c), (5) where π(·) is the text encoder of CLIP. Training Loss The training loss of our method consists of three parts: a relationship contrastive loss Lrel, an object contrastive loss Lobj, and an interaction loss Lint, as shown in Figure 2. The overall objective loss function is given by L = Lrel + αLobj + βLint. (6) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6516 Split Method DET Trajectory GT Trajectory mAP R@50 R@100 mAP R@50 R@100 Novel ALPro (Li et al. 2022a) 1.05 3.14 4.62 4.09 9.42 10.41 CLIP (Radford et al. 2021) 2.88 3.80 4.96 4.54 7.27 11.74 VidVRD-II (Shang et al. 2021) 3.57 8.59 12.39 7.35 18.84 26.44 RePro (Gao et al. 2023) 6.10 13.38 16.52 12.74 25.12 33.88 Ours 15.99 16.69 18.68 21.14 30.41 37.85 All ALPro (Li et al. 
2022a) 3.20 2.62 3.18 4.97 4.50 5.79 CLIP (Radford et al. 2021) 5.03 3.04 3.68 6.49 5.21 6.54 VidVRD-II (Shang et al. 2021) 12.74 9.90 12.59 19.73 18.17 24.90 RePro (Gao et al. 2023) 21.33 12.92 15.94 34.90 25.50 32.49 Ours 27.06 16.71 19.61 38.08 30.47 37.46 Table 1: Results of different methods on the VidVRD dataset.“DET” and “GT” denote using detected trajectory and groundtruth trajectory, respectively. where α and β represent balance factors. Relationship Contrastive Loss. Given the visual representations ¨vk and the language representations lk c, the prediction score of the relationship category c is calculated by ˆyrel c = σ(cos(ψ(˜v), ˜lc)) (7) where ˜v = ||¨vs, ¨vo, ¨vu, ¨vb||; ˜lc = ||ls c, lo c, lu c , lb c||; || · || denotes concatenation; σ(·) is the sigmoid function; cos(·, ·) is the cosine similarity and ψ(·) denotes the relationship mapping layer in Figure 2. Then the relationship contrastive loss is formulated by using the binary cross-entropy loss (BCE): Lrel = 1/|CP b | · X c∈CP b BCE(ˆyrel c , yrel c ), (8) where yrel c = 1 when the class c is the ground-truth predicate category, otherwise yrel c = 0. Object Contrastive Loss. To avoid the visual features drift caused by the proposed sptio-temporal prompting, we introduce an object contrastive loss to enforce the spatial encoded object features to have the ability to distinguish each other as the original CLIP. Specifically, after the spatial modeling, we collect subject and object features from all frames and average them as ¯vk = avg({ϕ( ˙vk t )}T t=0), k ∈{s, o} and ϕ(·) denotes the object mapping layer in Figure 2. Meanwhile, we extract the text embeddings for all subject or object categories by feeding the handcrafted prompts (i.e.,“a photo of [CLS]”) into the text encoder of CLIP, where [CLS] can be replaced with the names of subjects or objects. The similarity between the visual feature and the language features of category c is calculated by ˆyk c = cos(¯vk, ˆlc), k ∈{s, o}. Finally, the object contrastive loss is computed over all object categories using the crossentropy loss (CE): Lobj = CE(ˆys, ys) + CE(ˆyo, yo), (9) where ˆys is the predicted subject similarity between visual features and language features of base object categories (CO b ), and ˆyo is the corresponding predicted object similarity. ys and yo denote the ground-truth category labels of the subject and object, respectively. Interaction Loss. There may be no annotated relationships between some subjects and objects, that is, there is no interaction. For each pair of subject and object, if there are any relationship categories between them in video frame t, we set the ground-truth interaction by yint t = 1, otherwise yint t = 0. To learn this weak interaction, we concatenate all the features in frame t and predict the interaction probability by ˆyint t = ψ(||¨vs t, ¨vo t, ¨vu t , ¨vb t||), where ψ(·) denotes the relationship indication layer in Figure 2. The interaction loss is then computed using the binary cross-entropy loss (BCE): Lint = 1/T · XT t=1 BCE(ˆyint t , yint t ). (10) Experiment Datasets and Evaluation Metrics Datasets. We evaluate our method on the VidVRD (Shang et al. 2017) and VidOR (Shang et al. 2019) datasets. The VidVRD dataset contains 1000 videos, 800 videos for training and 200 for testing, covering 35 object categories and 132 predicate categories. 
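(Before continuing with the datasets, the loss composition defined in Eqs. 6-10 can be summarized in a short PyTorch-style sketch. The tensor shapes, the use of raw scores as logits, and the values of the balance factors α and β are assumptions made for illustration.)

```python
import torch
import torch.nn.functional as F

def total_loss(rel_scores, rel_targets,
               subj_scores, subj_labels,
               obj_scores, obj_labels,
               int_scores, int_targets,
               alpha=1.0, beta=1.0):
    """Overall objective of Eq. 6: L = L_rel + alpha * L_obj + beta * L_int.
    rel_scores/rel_targets: (B, |C_b^P|) pre-sigmoid relationship scores and
    float 0/1 targets (Eq. 8); subj/obj scores: (B, |C_b^O|) similarities used
    as logits with integer class labels (Eq. 9); int_scores/int_targets: (B, T)
    per-frame interaction scores and float 0/1 targets (Eq. 10).
    alpha and beta are placeholders; their values are not given in this excerpt."""
    l_rel = F.binary_cross_entropy_with_logits(rel_scores, rel_targets)
    l_obj = F.cross_entropy(subj_scores, subj_labels) + F.cross_entropy(obj_scores, obj_labels)
    l_int = F.binary_cross_entropy_with_logits(int_scores, int_targets)
    return l_rel + alpha * l_obj + beta * l_int
```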
The VidOR dataset contains 10000 videos, 7000 videos for training, 835 videos for validation, and 2165 videos for testing, covering 80 object categories and 50 predicate categories. Evaluation Settings. For the open-vocabulary evaluation, the base and novel categories are selected based on frequency. Following Repro (Gao et al. 2023), we choose the common object and predicate categories as base categories and the rare ones as novel categories. Training is performed on base categories. Test is performed under two settings: (1) Novel-split evaluation involves all object categories and novel predicate categories; (2) All-split evaluation involves all object categories and all predicate categories, which is a standard evaluation. Note that the test is performed on both the test set of VidVRD and the validation set of VidOR (the annotations of test set of VidOR are not available). To remove the impact of inaccurately detected trajectories, we also evaluate the methods using ground-truth trajectories, focusing on relationship detection with accurately detected objects. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6517 Split Method mAP R@50 R@100 Novel ALPro 8.35 9.79 CLIP 1.08 5.48 7.20 VidVRD-II 4.32 4.89 RePro 7.20 8.35 Ours 3.58 9.22 11.53 All ALPro 2.61 3.66 CLIP 1.29 1.71 3.13 VidVRD-II 24.81 34.11 RePro 27.11 35.76 Ours 38.52 33.44 43.80 Table 2: Results of different methods using ground-truth trajectory on the VidOR dataset. Vis Lan Novel All AVG DET GT DET GT 5.03 5.23 20.61 27.14 14.50 ✓ 11.04 16.83 26.35 37.10 22.83 ✓ 12.33 17.98 26.94 36.79 23.51 ✓ ✓ 15.99 21.14 27.06 38.08 25.57 Table 3: Performance (mAP) of ablation study for multimodal prompting on the VidVRD dataset. “Vis” and “Lan” denote visual prompting and language prompting, respectively. Metrics. We use mean Average Precision (mAP) and Recall@K (R@K) with K=50,100 for evaluation. The detected triplet is considered correct if it matches a triplet in the ground-truth and the IoU between them is greater than a threshold (i.e., 0.5). Implementation Details For all experiments, video frames are sampled every 30 frames. We adopt the ViT-B/16 version of CLIP while keeping the parameters fixed. The number of Transformer blocks of spatio-temporal visual prompting is set to 1 and 2 for the VidVRD dataset and the VidOR dataset, respectively. The head number of multi-head self-attention of Transformer blocks is set to 8, and the dropout rate is set to 0.1. For language prompting, we set the number of tokens for both learnable continuous prompts and conditional prompts to 8. The [CLS] token is positioned at 75% of the token length. For optimization, we use the AdamW (Loshchilov and Hutter 2019) algorithm with an initial learning rate of 0.001. A multi-step decay schedule is applied at epochs 15, 20, and 25, reducing the learning rate by a factor of 0.1 each time. The batch size is set to 32. Comparison with Existing Methods We conduct a comprehensive comparison of our method with the state-of-the-art methods, including RePro (Gao et al. 2023), VidVRD-II (Shang et al. 2021), and the pretrained models ALPro (Li et al. 2022a) and CLIP. The comparison results on the VidVRD dataset are shown in Table 1. It is interesting to observe that our method Spa Tem Novel All AVG DET GT DET GT 12.33 17.98 26.94 36.79 23.51 ✓ 13.92 17.54 26.79 36.73 23.75 ✓ 9.10 12.41 23.27 31.39 19.07 ✓ ✓ 15.99 21.14 27.06 38.08 25.57 Table 4: Performance (mAP) of ablation study for the spatiotemporal visual prompting on the VidVRD dataset. 
Note that we add linear layers to keep similar amount parameters when a module is absent. “Spa” and “Tem” denote spatial modeling and temporal modeling, respectively. Variants Novel All AVG DET GT DET GT Manual 11.04 16.83 26.35 37.10 22.83 Continuous 14.21 18.21 27.27 36.94 24.16 Conditional 15.29 21.88 25.85 36.44 24.87 Ours 15.99 21.14 27.06 38.08 25.57 Table 5: Performance (mAP) of ablation study for the language prompting on the VidVRD dataset. achieves the best performance in terms of all evaluation metrics, and specifically achieves nearly 10% improvement in mAP using both detected trajectories (denoted as “DET”) and ground-truth trajectories (denoted as “GT”) in the “Novel” split. This clearly validates the superiority of the proposed multi-modal prompting method in Open-VidVRD. The comparison results on the VidOR dataset are shown in Table 2. Since the existing methods only provide the results using ground-truth trajectory on the VidOR dataset, we only report the results using ground-truth trajectory for comparison. Our method outperforms the other methods by 0.87% and 1.74% on R@50 and R@100, respectively, in the “Novel” split. Moreover, our method achieves gains of 6.8% and 8.04% on R@50 and R@100, respectively, in the “All” split. Abaltion Studies We perform in-depth ablation studies on the VidVRD dataset to evaluate each component of our method. Effectiveness of multi-modal prompting. To evaluate the multi-modal prompting, we replace the visual prompting (denoted as “Vis”) with linear layers and replace the language prompting (denoted as “Lan”) with handcraft language prompting. The results are shown in Table 3. The consistent improvements in the “Novel” split and the “All” split demonstrate the effectiveness of the proposed visual promoting and language prompting. Effectiveness of spatio-temporal visual prompting. To evaluate the components of the spatio-temporal visual prompting, we replace the spatial modeling module (denoted as “Spa”) or the temporal modeling module (denoted as “Tem”) with linear layers. From the results shown in Table 4, the average gain in mAP exceeds 2% when performThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6518 24 24.5 25 25.5 26 26.5 27 0 25 50 75 100 mAP 8 tokens 16 tokens 32 tokens 12 12.5 13 13.5 14 14.5 15 15.5 16 0 25 50 75 100 mAP 8 tokens 16 tokens 32 tokens (a) Novel (b) All Figure 4: Effectiveness of tokens of vision-guided languge prompts on the VidVRD dataset. The x-axis represents the percentage of tokens from conditional prompts, from 0 (all tokens are from learnable continuous prompts) to 100% (all tokens are from learnable conditional prompts), and different colors denote different lengths. ing both spatial and temporal modeling. Furthermore, we observe that the performance drops significantly when only temporal modeling is performed without incorporating spatial modeling. This is in line with expectations, as it is difficult to recognize object relationships based only on the dynamic state changes of individual objects. Effectiveness of vision-guided language prompting. 
To evaluate the effectiveness of the vision-guided language prompting, we design three variants of our method for comparison: (1) “Manual” involves pre-defined templates for subjects (i.e., “An image of a person or object [CLS] something”), objects (i.e., “An image of something [CLS] a person or object”), unions and backgrounds (i.e., “An image of the visual relationship [CLS] between two entities”); (2) “Continuous” involves learnable continuous prompts; (3) “Conditional” tailors all prompts to input visual features. From the results shown in Table 5, it is obvious that the proposed viual-guided language prompting (“Ours”) outperforms the other variants on average. Specifically, an impressive improvement of nearly 5% is achieved in the “Novel” split using the detected trajectories. We also observe that the learnable prompts, including the learnable continuous prompts and the learnable conditional prompts, generally achieve better results than the manually designed prompts. Effectiveness of tokens of languge prompting. We analyze the effectiveness of tokens of vision-guided language prompts by alternately placing tokens of learnable continuous prompts and conditional prompts and inserting the [CLS] token at the 75% position of the length, as shown in Figure 4. We observe that the performance initially ascends and then declines with the increasing token number. And along with the increasing percentages of tokens from learnable conditional prompts, the results first increase and then become unstable. The best performance surfaces when the token number is 16, and half of the tokens are from learnable conditional prompts. These observations highlight the importance of combining task-specific knowledge and visual cues, further validating the effectiveness of our vision-guided prompting on combining learnable continuous prompts and learnable conditional prompts. (a) Feature distributions of base predicate categories. fly_with jump_beneath follow sit_above play ride left touch stop_behind walk_with (b) Feature distributions of novel predicate categories. fly_away sit_next_to drive walk_past stand_above swim_behind above creep_toward move_past stop_next_to CLIP Ours Ours CLIP Figure 5: Qualitative results of visual feature (union region of subject and object) distributions by T-SNE. Qualitative Analysis We visualize the feature distributions of randomly selected 10 predicate categories by projecting the features of the union regions onto a 2D plane using T-SNE (Hinton and Roweis 2002), to demonstrate how well our spatio-temporal visual prompting method adapts the image encoder of CLIP. As shown in Figure 5, features of our method (the right parts of Figure 5 (a) , (b)) within the same categories are pulled closer while features across different categories are pushed further apart, improving the discrimination on both base and novel categories. These qualitative results further verify the effectiveness of our spatio-temporal visual prompting method. Conclusion We have presented a multi-modal prompting method for open-vocabulary video visual relationship detection. By introducing spatio-temporal visual prompting and visionguided language prompting to leverage the large-scale pretrained image-text model, our method demonstrates remarkable potential in bridging the domain gap between image and video relationships and discovering novel objects and relationships. 
Extensive experiments conducted on public datasets show the superiority of our method, substantiated by a notable gain of performance on novel relationship categories while keeping the performance of base categories. A limitation of our method is the dependence on a pre-trained trajectory detector, so investigating an end-to-end pipeline to alleviate this dependency is an interesting avenue for future research.
Acknowledgements This work was supported in part by the Natural Science Foundation of Shenzhen under Grant No. JCYJ20230807142703006 and the Natural Science Foundation of China (NSFC) under Grant No. 62072041.
18,545 | Learning Dense Correspondence for NeRF-Based Face Reenactment Songlin Yang1,2, Wei Wang2*, Yushi Lan3, Xiangyu Fan4, Bo Peng2, Lei Yang4, Jing Dong2 1School of Artificial Intelligence, University of Chinese Academy of Sciences, China 2CRIPAC & MAIS, Institute of Automation, Chinese Academy of Sciences, China 3S-Lab, Nanyang Technological University, Singapore 4SenseTime, China [email protected], {wwang, bo.peng, jdong}@nlpr.ia.ac.cn, [email protected], {fanxiangyu, yanglei}@sensetime.com Abstract Face reenactment is challenging due to the need to establish dense correspondence between various face representations for motion transfer. Recent studies have utilized Neural Radiance Field (NeRF) as fundamental representation, which further enhanced the performance of multi-view face reenactment in photo-realism and 3D consistency. However, establishing dense correspondence between different face NeRFs is non-trivial, because implicit representations lack groundtruth correspondence annotations like mesh-based 3D parametric models (e.g., 3DMM) with index-aligned vertexes. Although aligning 3DMM space with NeRF-based face representations can realize motion control, it is sub-optimal for their limited face-only modeling and low identity fidelity. Therefore, we are inspired to ask: Can we learn the dense correspondence between different NeRF-based face representations without a 3D parametric model prior? To address this challenge, we propose a novel framework, which adopts tri-planes as fundamental NeRF representation and decomposes face tri-planes into three components: canonical triplanes, identity deformations, and motion. In terms of motion control, our key contribution is proposing a Plane Dictionary (PlaneDict) module, which efficiently maps the motion conditions to a linear weighted addition of learnable orthogonal plane bases. To the best of our knowledge, our framework is the first method that achieves one-shot multi-view face reenactment without a 3D parametric model prior. Extensive experiments demonstrate that we produce better results in finegrained motion control and identity preservation than previous methods. Introduction One-shot face reenactment (Hong et al. 2022) aims to utilize motion conditions from the driving image, such as facial expressions and head poses, to animate the face in the source image. The main challenge is establishing dense correspondence between different face representations to transfer motion conditions. Recent studies (Li et al. 2023b; Ma et al. 2023b) have utilized Neural Radiance Field (NeRF) (Mildenhall et al. 2021) as fundamental representation, which further enhanced the performance of multi-view face reenactment in photo-realism and 3D consistency. *Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. However, establishing dense correspondence between different face NeRFs is non-trivial. Unlike mesh-based representations which have index-aligned vertexes as groundtruth correspondence annotations, the NeRF-based representations lack an explicit surface descriptor that constructs correspondence of spatial points (Lan, Loy, and Dai 2022). Although introducing 3D parametric models (e.g., 3DMM (Blanz and Vetter 1999), FLAME (Li et al. 2017), and DECA (Feng et al. 2021)) as motion conditions make it feasible to achieve explicit motion control for cross-identity face reenactment (Zeng et al. 2022; Ma et al. 2023b; Li et al. 
2023b), aligning mesh-based parametric space with latent space of NeRF-based generative models brings a significant optimization burden. Additionally, the 3D parametric models themselves have some limitations, such as their focus being predominantly on the facial region, requiring additional processing for hair and eyes. These limitations inspire us to ask: Can we learn the dense correspondence between different NeRF-based face representations without a 3D parametric model prior? To address the challenge of learning dense correspondences between NeRF-based face representations, the first issue is the selection of NeRF representations. The vanilla NeRF (Mildenhall et al. 2021) employs an MLP network to capture the spatial information of the target object, which tends to suffer from overfitting and can lead to a loss of 3D consistency when animating the representation network. In order to strike a balance between 3D consistency and animation capabilities, we have opted to utilize the tri-plane representation proposed by EG3D (Chan et al. 2022) as our fundamental NeRF representation, which adopts three spatiallyorthogonal plane feature maps to represent an object. This choice allows us to maintain 3D consistency within the triplane representation itself, while also leveraging the strong modeling capacity of the StyleGAN-based (Karras et al. 2020) generator to handle feature deformations. In this work, we propose a novel framework that can realize one-shot multi-view face reenactment, as shown in Fig. 1. Our framework utilizes tri-planes as fundamental NeRF representation and decomposes face tri-planes into three components: canonical tri-planes, identity deformations, and motion. The plane feature deformations regarding motion conditions differ from those caused by identity conditions since the rules governing motion are shared beThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6522 Source Driving Ours 𝟎°(𝐘𝐚𝐰) −𝟑𝟎° 𝟑𝟎° Cross-Identity Reenactment Muti-View Synthesis (Ours) Source Driving Ours 𝟎°(𝐏𝐢𝐭𝐜𝐡) −𝟑𝟎° 𝟑𝟎° Fine-Grained Control & Identity Preservation Source Driving HiDeNeRF Ours Source Driving HiDeNeRF Ours 0.013/0.035 0.008/0.040 0.012/0.032 0.010/0.036 Figure 1: Results of one-shot multi-view face reenactment. We present both cross-identity reenactment and multi-view synthesis at various yaw and pitch angles. Through comparisons with state-of-the-art HiDeNeRF, we illustrate that our PlaneDict module excels in fine-grained motion control, particularly for non-facial elements such as eyes, and offers better identity preservation (vertex distance from source identity↓/ vertex distance from driving identity↑) than utilizing 3DMM as correspondence. tween various identities. Thus, we design a Plane Dictionary module, referred to as PlaneDict, to efficiently maps motion conditions to a linear weighted addition of learnable orthogonal plane bases. Extensive experiments demonstrate that our method achieves better results in fine-grained motion control and identity preservation than previous work. To summarize, the contributions of our approach are: • We propose a novel decomposition method of face triplane representation, making it suitable for learning the dense correspondence between different face tri-planes and realizing motion transfer. • We propose a Plane Dictionary (PlaneDict) module for tri-plane representation, which efficiently maps motion conditions to a linear weighted addition of learnable orthogonal plane bases. 
• To the best of our knowledge, we propose the first method to achieve one-shot multi-view face reenactment without a 3DMM prior, which achieves better results in fine-grained motion control and identity preservation than previous work. Related Work Face Implicit Representation Compared with 2D (Liu et al. 2015) and 3D parametric representation (Li et al. 2017), 3D implicit representation has advantages of photo-realism and 3D-consistency. Previous work based on 3D scene representation has tried to use Neural Radiance Field (Gu et al. 2022; Yang et al. 2023a), Signed Distance Field (Or-El et al. 2022; Ma et al. 2023a), and Tri-Planes (Chan et al. 2022) to model face as static objects. Considering the dynamic synthesis requirements of faces, two strategies have been proposed: First, constructing the deformable neural radiance fields, such as NeRFies (Park et al. 2021a) and HyperNeRF (Park et al. 2021b), which maps each observed point into a canonical space through a continuous deformation field, but it tends to handle small movements or person-specific rendering. Second, adopting NeRF with 3DMM (Li et al. 2017) prior for explicit motion control, such as RigNeRF (Athar et al. 2022), NeRFace (Gafni et al. 2021), MoFaNeRF (Zhuang et al. 2022), OmniAvatar (Xu et al. 2023), and some 3D GAN Inversion methods (Lin et al. 2022; Lan et al. 2023; Yang et al. 2023b). However, the dense correspondence provided by 3DMM has limitations in non-facial regions (e.g., eyes and hair) and brought an optimization burden to align the 3DMM representation and NeRF-based latent space. Therefore, a better dense correspondence of different 3D implicit representations is needed. One-Shot Face Reenactment Previous face reenactment methods can be divided into warping-based, mesh-based, and NeRF-based. Warpingbased methods (Dong et al. 2018; Geng et al. 2018; Liu et al. 2019; Ha et al. 2020; Drobyshev et al. 2022; Zhao and Zhang 2022; Wang et al. 2022) warp the source features by estimated motion field to transport driving expressions and poses into the source face for 2D generation. Among them, FOMM (Siarohin et al. 2019) builds a 2D motion field from the sparse keypoints detected by an unsupervised trained deThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6523 Canonical Tri-Planes 𝑷𝑷𝒄𝒄 Driven Tri-Planes 𝑷𝑷 Source 𝑰𝑰𝒔𝒔 Driving 𝑰𝑰𝒅𝒅 Driven 𝑰𝑰 Encoder 𝑬𝑬 ID Deformation Generator 𝑮𝑮𝒊𝒊𝒊𝒊 𝝎𝝎𝒔𝒔𝒊𝒊𝒊𝒊𝝎𝝎𝒔𝒔𝒎𝒎 𝝎𝝎𝒅𝒅 𝒊𝒊𝒊𝒊𝝎𝝎𝒅𝒅 𝒎𝒎 ID Deformations 𝚫𝚫𝑷𝑷𝒊𝒊𝒊𝒊 Motion 𝚫𝚫𝑷𝑷𝒎𝒎 Plane Dictionary 512 512 256x256x3 256x256x3 256x256 x32x3 512 512 Tri-Plane Decoder Volume Renderer 𝒇𝒇𝑿𝑿𝑿𝑿 𝒇𝒇𝑿𝑿Z 𝒇𝒇YZ 𝑪𝑪𝒐𝒐𝒐𝒐𝒐𝒐𝒐𝒐 𝑫𝑫𝑫𝑫𝑫𝑫𝑫𝑫𝑫𝑫𝑫𝑫𝑫𝑫 Raw Image 𝑰𝑰𝑹𝑹𝑹𝑹𝑹𝑹 Features 𝑰𝑰𝒇𝒇 128x128x32 Super Resolution 128x128x3 Camera Direction 𝑪𝑪 C 𝝎𝝎𝒔𝒔𝒊𝒊𝒊𝒊 𝝎𝝎𝒅𝒅 𝒎𝒎 𝝎𝝎𝒅𝒅 𝒎𝒎 512 Motion Generator 𝑮𝑮𝒎𝒎 … … 256x256x3 #1 #N #1 #N Orthogonal Plane Basis Dictionary 𝓑𝓑 Linear Decomposition Factors 𝑳𝑳 : Addition C : Concatenation : Hadamard Product Channel-Wise Summation Plane Dictionary Motion 𝚫𝚫𝑷𝑷𝒎𝒎 Neural Rendering Figure 2: Method overview. We decompose face tri-planes into three components: canonical tri-planes, identity deformations, and motion. In terms of motion control, which is the topology transformation rules shared by different identities, we propose a Plane Dictionary (PlaneDict) module to transfer motion conditions between different face tri-planes for face reenactment. tector, while DaGAN (Hong et al. 2022) incorporates the depth estimation to supplement the missing 3D geometry information in 2D motion field. 
Mesh-based methods employ 3DMM uses a single image to create realistic photos in a rigged mesh format such as ROME (Khakhulin et al. 2022). In terms of NeRF-based methods (Guo et al. 2021; Liu et al. 2022; Shen et al. 2022; Li et al. 2023a), using one image to build a 3D implicit representation is an ill-posed problem, because the lack of multi-view information makes the failure of learning the dense spatial information from one image. FDNeRF (Zhang et al. 2022) relaxes the constraint to the required number of images, while FNeVR (Zeng et al. 2022) takes the merits of 2D warping and 3D neural rendering. As for the motion control of tri-plane representation, OTAvatar (Ma et al. 2023b) designs a motion encoding strategy for pre-trained EG3D (Chan et al. 2022) with the 3DMM prior, while HiDeNeRF (Li et al. 2023b) proposes a multiscare tri-plane feature extractor, as well as 3DMM-based implicit motion-conditioned deformation field, to train a generative model from scratch. However, these 3DMM-based methods still suffer from the limitations brought by 3DMM itself. Therefore, we aim to tackle the more tricky challenge that is learning the dense correspondence between different tri-planes without 3DMM prior and achieving matchable or even better results than previous methods, which can have the potential of animating arbitrary objects which lack a sophisticated 3D parametric modeling like human faces. Method We propose a novel framework to achieve one-shot multiview face reenactment, which can learn the dense correspondence between different face tri-planes and realize motion transfer. In terms of motion control, our key contribution is to construct a Plane Dictionary (PlaneDict) module to efficiently map motion conditions to feature deformations of tri-planes, which realizes fine-grained motion transfer. Overview Tri-Plane Representation. Previous studies have utilized NeRF (Mildenhall et al. 2021) as fundamental implicit representation. Notably, 3D generative models like StyleNeRF (Gu et al. 2022) and EG3D (Chan et al. 2022) combine NeRF-based representation with StyleGAN-based (Karras et al. 2020) generator. These models extend identity-specific overfitting of vanilla NeRFs to a GAN space which can render 3D-consistent multi-identity face images with diverse expressions and poses. Among these NeRF variants, the tri-planes proposed by EG3D achieve a superior balance between information density and photo-realism, while also enabling the construction of a diverse latent space for manipulation. In contrast to the vanilla NeRF, which employs an MLP network to record spatial points in the space, EG3D adopts a latent vector to represent it and maps it to three spatially-orthogonal plane feature maps (i.e., triplanes) through a StyleGAN-based generator. The tri-planes effectively provide sufficient information for rendering a spatial point and can be queried efficiently. Consequently, we adopt tri-planes as our fundamental representation. Decomposition Strategy. The computer graphics researchers (Blanz and Vetter 1999; Li et al. 2017) typically decompose the canonical space (also known as the template mesh) and vertex deformations caused by identity and motion conditions in order to enhance the stability and interpretability of the learning process used in the meshThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6524 based pipeline for modeling human faces. 
Unfortunately, the EG3D pipeline did not incorporate this decomposition structure, resulting in a lack of explicit control over the identity and motion of human faces. This limitation poses challenges for downstream applications such as facial attribute editing and face reenactment. To address this issue, previous methods (Lin et al. 2022; Ma et al. 2023b) have employed GAN inversion to embed real face images into the latent space of EG3D, which tend to utilize the encoder of a face parametric model to obtain explicit control over identities, expressions, and poses. However, this roadmap only distills the correspondence from the parametric models and inherits their limitations. Therefore, we decompose the canonical space, identity deformations, and motion to learn dense correspondence of tri-planes for more flexible motion transfer. Specifically, as shown in Fig. 2, we adopt an encoder E to extract the style codes of the input image Iinput, which can be embedded as identity code ωid and motion code ωm: (ωid, ωm) = E(Iinput). (1) The dense correspondence and motion transfer are achieved by identity deformations ∆Pid and motion ∆Pm. We feed the identity code ωid into a StyleGAN-based generator Gid to obtain the identity deformations ∆Pid. However, the motion ∆Pm aims to achieve motion transfer between different face tri-planes, which means a shared deformation method should be proposed to handle this challenge. Therefore, we propose our PlaneDict module to obtain the motion ∆Pm, which will be presented in the next section. The tri-planes P which represents the driven face image to be rendered consist of the learnable canonical tri-planes Pc, the identity deformations ∆Pid from the source face image Is, and the motion ∆Pm from the driving face image Id, which can be formulated as follows: P (ωid s , ωm d ) = Pc + ∆Pid + ∆Pm, (2) ∆Pid = Gid(ωid s ), (3) ∆Pm = P laneDict (Gm(ωm d ); B), (4) where Gm and B are the motion deformation generator and the learnable plane bases of the PlaneDict module. Finally, when we query a spatial point according to its location (x, y, z) and the camera direction C, we sample features (fXY , fXZ, and fY Z) from the driven tri-planes P , aggregate by summation, and process the aggregated features with a lightweight tri-plane decoder. This decoder outputs a scalar density and a 32-channel feature, and both of them are then processed by a volume renderer to project the 3D feature volume into a 2D feature image. For training efficiency, we render 32-channel feature maps If at a resolution of 1282, with 96 total depth samples per ray. And we adopt the Super Resolution module to increase the final image size to 2562, which utilizes two blocks of StyleGAN-modulated convolution layers that upsample and refine the 32-channel feature maps If into the final RGB image I. Plane Dictionary (PlaneDict) Preliminary. In the 3D Morphable Model (3DMM) (Blanz and Vetter 1999), every face is represented by a shared topology which consists of vertexes with the aligned index. The hundreds of these 3D faces are high dimensional data and then reduced through Principal Component Analysis (PCA) to several orthogonal vector bases. These bases are further divided into identity-related and expression-related bases. When we fit the mesh model to a target face, we only need to linearly add these orthogonal bases to obtain the identity and expression deformations of every vertex relative to the template mesh, and finally get the 3D representation of the target face. 
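For concreteness, the PCA-based construction described in this preliminary is conventionally written as the following linear model (a standard 3DMM formulation; the symbols below are our own shorthand rather than notation introduced in this paper):

```latex
S \;=\; \bar{S} \;+\; B_{\mathrm{id}}\,\alpha \;+\; B_{\mathrm{exp}}\,\beta
```

where $\bar{S}$ is the template mesh, the columns of $B_{\mathrm{id}}$ and $B_{\mathrm{exp}}$ are the orthogonal identity and expression bases obtained by PCA, and $\alpha$, $\beta$ are the per-face linear coefficients.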
It is worth noting that the expression representation of different faces can be obtained through linear addition of expression bases, which means that through the above modeling method, we can achieve dense correspondence between different face representations. Motivation. The modeling approach of graphics inspires us to establish dense correspondence and realize motion transfer between different face tri-planes. However, if we have achieved such dense correspondence, how can we transform these motion conditions into deformations of each implicit spatial point? The previous methods (Lin et al. 2022; Li et al. 2023b; Ma et al. 2023b) either skipped this issue and directly learned the mapping relationship between 3DMM and their latent space of generative models, and then used dense correspondence of 3DMM to control their own generative model; or conducted the learning of dense correspondence and motion transfer in the latent vector space corresponding to each implicit representation. The former is limited by 3DMM and cannot handle non-facial areas such as hair and eyes; The latter, which ignores the inherent characteristics of implicit representation, cannot perform more fine-grained expression control. Therefore, we propose a Plane Dictionary (PlaneDict) module, which can obtain the plane feature deformations driven by motion conditions by linearly adding a set of orthogonal plane bases. Specifically, as shown in Fig. 2 and Eqn. (4), the driving motion code ωm d is first fed into the motion generator Gm to obtain the linear decomposition factors L, which consists of N feature maps. These decomposition factors are then multiplied by the orthogonal plane bases in the plane dictionary B through the Hadamard product. The plane bases in B are channel-wise orthogonal, i.e., N vectors that have the same channel index in these orthogonal plane feature maps are reduced by QR decomposition to maintain the orthogonality, and they are learnable in the training stage. Finally, the Hadamard product of L and B is channel-wisely summed to output the motion ∆Pm. Note that our face motion conditions include facial expressions and head poses. Optimization The goal of proposing our framework is to learn the dense correspondence. We have the assumption that different identities have different topological structures which is suitable for modeling by a StyleGAN-based generator, and the rules of topological transformations are shared among different identities which can be modeled by our PlaneDict module. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6525 So we only adopt a self-supervision manner to train our framework. If this self-supervision successfully disentangles the deformations brought by identity and motion conditions, it precisely indicates that our aforementioned assumptions are valid, and our framework can exactly learn dense correspondence without a 3D parametric model prior. The cross-identity driving is our ultimate goal, and we should fully utilize the paired data and learn the motion deformations with ground truth. Therefore, we adopt disentanglement loss Ldis to model the different deformations brought identity and motion conditions, and reconstruction loss Lrecons to improve the image quality and rendering details as follows: L = Ldis + λ1Lrecons. (5) Disentanglement Loss. We denote the input source image as Is, the input driving image as Id and the output driven image as I. We adopt the encoder E to extract ωid s & ωid d and ωm s & ωm d from Is and Id respectively. 
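To make the computation of Eq. (4) concrete, the following PyTorch-style sketch implements our reading of the PlaneDict module: the motion generator predicts N decomposition factors, which weight N learnable, channel-wise QR-orthogonalized plane bases and are summed over the basis dimension. The tensor shapes, default sizes, and the exact form of the orthogonalization are assumptions made for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class PlaneDict(nn.Module):
    """Sketch of the PlaneDict idea: the motion Delta_P_m is a weighted sum of N
    learnable, channel-wise orthogonalized plane bases (Eq. (4))."""

    def __init__(self, num_bases=20, planes=3, channels=32, size=64):
        super().__init__()
        # N learnable bases; each basis holds one feature map per tri-plane.
        self.bases = nn.Parameter(0.01 * torch.randn(num_bases, planes, channels, size, size))

    def orthogonalized_bases(self):
        # Our reading of the channel-wise QR step: for each plane and channel,
        # the N spatial maps (flattened to vectors) are replaced by an orthonormal set.
        n, p, c, h, w = self.bases.shape
        flat = self.bases.permute(1, 2, 3, 4, 0).reshape(p, c, h * w, n)  # (P, C, HW, N)
        q, _ = torch.linalg.qr(flat)                                      # orthonormal columns
        return q.reshape(p, c, h, w, n).permute(4, 0, 1, 2, 3)            # back to (N, P, C, H, W)

    def forward(self, factors):
        # factors: linear decomposition factors L predicted by the motion generator G_m,
        # assumed here to have shape (batch, N, P, C, H, W).
        bases = self.orthogonalized_bases().unsqueeze(0)                  # (1, N, P, C, H, W)
        return (factors * bases).sum(dim=1)                               # Delta_P_m: (batch, P, C, H, W)
```

Because the bases are shared across identities while only the factors depend on the driving motion code, the same dictionary realizes the transformation rules assumed to be shared above.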
Moreover, we use E to extract the ωid & ωm from I. Our optimization goal is to minimize the distance between ωid s and ωid, and the distance between ωm d and ωm. Therefore, we propose our disentanglement loss Ldis: Ldis = ∥ωid s −ωid∥2 + λ2∥ωm d −ωm∥2. (6) Reconstruction Loss. When we conduct the experiments, we found that the regions of eyes and mouths take longer training time to learn the distribution, which are highfrequency details with small areas and big variations. Therefore, as for the reconstruction loss Lrecons, we include L1 loss and mask loss, as follows: Lrecons = ∥Id −I∥1 + λ3∥M(Id) −M(I)∥1, (7) where M is a mask that indicates the eye and mouth part of the face image. Training Strategy. We use two types of paired data and use different losses to optimize the network parameters. One type is to use the source image and target image from the same identity, and we adopt L for training. The other is to use the source image and target image from different identities, and we only use Ldis for training. Experiments Experimental Settings Implementation Details. The encoder E is a ResNet10 (He et al. 2016) network. The generators Gid and Gm are two StyleGAN-based generators (Karras et al. 2020). The Super Resolution module utilizes two blocks of StyleGAN-modulated layers. To further improve the image resolution and quality, we adopt the pre-trained dual discriminator from EG3D (Chan et al. 2022) and sample data from FFHQ (Karras, Laine, and Aila 2019) as real images to fine-tune our networks in a GAN manner. The λ1, λ2, and λ3 are set as 1.0, 1.5, and 10. The iteration ratio of selfreenactment and cross-identity reenactment is better set at 2:1. Using Adam optimizer (set learning rate as 0.0001), the training takes about 4 days on 8 Tesla V100 GPUs while the fine-tuning takes 1 day. Source Driving FOMM DaGAN FNeVR ROME HiDeNeRF Ours Figure 3: Qualitative results of self-reenactment. Baselines. We select five state-of-the-art methods from different perspectives, including 2D-warping-based FOMM (Siarohin et al. 2019) & DaGAN (Hong et al. 2022), mesh-based ROME (Khakhulin et al. 2022), NeRF-based FNeVR (Zeng et al. 2022), and tri-plane-based HiDeNeRF (Li et al. 2023b). For fair comparisons, these methods are trained with VoxCeleb dataset (Nagrani, Chung, and Zisserman 2017; Chung, Nagrani, and Zisserman 2018). Datasets. We conduct experiments over three commonly used datasets: VoxCeleb1 (Nagrani, Chung, and Zisserman 2017), VoxCeleb2 (Chung, Nagrani, and Zisserman 2018), and TalkingHead-1KH (Wang, Mallya, and Liu 2021). We follow the FOMM to pre-process these videos, in which each frame is aligned and cropped into 2562 resolution. We follow the EG3D (Chan et al. 2022) to extract camera extrinsics, which is based on an off-the-shelf pose estimator (Deng et al. 2019). Furthermore, we use face-parsing.Pytorch (zllrunning 2019) to provide region masks of face, hair, and torso, and set the background region as black, which can reduce the impact of complex backgrounds. The selected videos for the test are not overlapped with the training videos. Metrics. We evaluate different methods from three perspectives: (1) Visual quality: We adopt SSIM (Wang et al. 2004), PSNR, LPIPS (Zhang et al. 2018), and FID (Heusel et al. 2017) as quality metrics. (2) Identity fidelity and motion accuracy: Following the previous works (Ha et al. 
2020; Hong The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6526 SSIM↑PSNR↑LPIPS↓ FOMM 0.690 19.2 0.112 DaGAN 0.807 23.2 0.088 FNeVR 0.901 21.1 0.092 ROME 0.833 21.6 0.085 HiDeNeRF 0.862 21.9 0.084 Ours 0.870 22.1 0.079 Table 1: Visual quality evaluation of self-reenactment. CSIM↑AUCON↑PRMSE↓AVD↓ET↓ FOMM 0.837 0.872 2.88 0.021 1.98 DaGAN 0.875 0.921 1.79 0.016 4.08 FNeVR 0.880 0.929 2.22 0.016 2.01 ROME 0.906 0.918 1.68 0.013 5.28 HiDeNeRF 0.931 0.956 1.66 0.010 5.44 Ours 0.946 0.961 1.60 0.009 1.72 Table 2: Identity fidelity and motion accuracy evaluation of self-reenactment. et al. 2022), we adopt CSIM, PRMSE, and AUCON to evaluate the identity preservation of the source image, the accuracy of head poses, and the precision of expression. (3) Multi-view consistency: We adopt the AVD proposed by HiDeNeRF (Li et al. 2023b) to evaluate multi-view identity preservation. Furthermore, we propose the ET (Eye Tracking) metric to evaluate fine-grained motion control of gaze, which calculate the error of eye locations (Antoine et al. 2022) between the source image and driving image (they are all aligned face images). Self-Reenactment The self-reenactment experiments are using the source and driving images of the same identity, which have the ground truth of the synthesized results for comparisons. As shown in Fig. 3, we show the qualitative results of different methods. Because we use the motion features extracted from the driving images, instead of 3DMM parameters, we can not only realize more fine-grained motion control than 3DMMbased methods but also handle special regions like hair and eyes. We list the quantitative results in Tab. 1 and Tab. 2, and we achieve matched or better scores than other state-of-theart methods which are based on the correspondence from the 3DMM prior. These results show that, instead of using 3DMM parameter control at the cost of missing details, we can directly establish dense correspondence between different tri-plane representations, which overcomes the optimization burden of aligning 3DMM space and the latent space of NeRF-based generative models. Cross-Identity Reenactment The cross-identity reenactment is using the source and driving images of different identities, which is a more difficult challenge for source identity preservation and fine-grained motion transfer between different face tri-planes. We first qualitatively compare different state-of-the-art methods and Source Driving FOMM DaGAN FNeVR ROME HiDeNeRF Ours Figure 4: Qualitative results of cross-identity reenactment. 0.037/0.026 0.028/0.032 0.031/0.029 0.025/0.037 0.011/0.038 0.004/0.041 0.040/0.021 0.028/0.019 0.031/0.020 0.019/0.024 0.021/0.027 0.016/0.029 HiDeNeRF Source Driving FOMM DaGAN FNeVR ROME Ours Figure 5: Mesh evaluation for identity preservation of crossidentity reenactment (vertex distance from source identity↓/ vertex distance from driving identity↑). show their synthesized results in Fig. 4. Although the fusion of identity and motion information is a hard problem, our framework with the PlaneDict module is able to generate cross-identity reenactment results with better image quality and identity fidelity without any artifacts. 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6527 VoxCeleb1 CSIM↑AUCON↑PRMSE↓FID↓AVD↓ET↓ FOMM 0.748 0.752 3.66 86 0.044 6.08 DaGAN 0.790 0.880 3.06 87 0.036 6.16 FNeVR 0.812 0.884 3.32 82 0.041 6.10 ROME 0.833 0.871 2.64 76 0.016 7.08 HiDeNeRF 0.876 0.917 2.62 57 0.012 7.02 Ours 0.911 0.928 2.50 49 0.011 5.18 VoxCeleb2 CSIM↑AUCON↑PRMSE↓FID↓AVD↓ET↓ FOMM 0.680 0.707 4.16 85 0.047 6.23 DaGAN 0.693 0.815 3.93 86 0.040 6.62 FNeVR 0.699 0.829 3.90 84 0.047 5.99 ROME 0.710 0.821 3.08 76 0.019 7.29 HiDeNeRF 0.787 0.889 2.91 61 0.014 7.30 Ours 0.790 0.894 2.83 58 0.012 5.33 TalkingHead-1KH CSIM↑AUCON↑PRMSE↓FID↓AVD↓ET↓ FOMM 0.723 0.741 3.71 76 0.039 6.17 DaGAN 0.766 0.872 2.98 73 0.035 6.59 FNeVR 0.775 0.879 3.39 73 0.037 6.03 ROME 0.781 0.864 2.66 68 0.017 6.97 HiDeNeRF 0.828 0.901 2.60 52 0.011 7.09 Ours 0.831 0.912 2.55 49 0.010 5.42 Table 3: Cross-identity reenactment evaluation. To further compare the identity preservation ability of our framework, as shown in Fig. 5, we evaluate and visualize the identity fidelity (i.e., the shape preservation of the source identity) using 3D face reconstruction models (Deng et al. 2019). The quantitative experiments are performed with VoxCeleb1 (Nagrani, Chung, and Zisserman 2017), VoxCeleb2 (Chung, Nagrani, and Zisserman 2018), and TalkingHead-1KH (Wang, Mallya, and Liu 2021), which are shown in Tab. 3 respectively. Multi-View Synthesis The one-shot multi-view cross-identity reenactment is the most challenging task. It requires not only using one face image from the source identity to construct a 3D face representation for multi-view rendering but also this representation can be controlled by motion conditions for novel expression and pose reenactment. We adopt the state-of-theart HiDeNeRF (Li et al. 2023b) as the baseline method for comparison. As shown in Fig. 6, we render the driven results from different view directions. Our method achieves better image quality than HiDeNeRF and does not show artifacts or 3D inconsistency in some angles. As shown in Tab. 4, we further evaluate ours and HiDeNeRF in quantitative experiments. Our framework with the PlaneDict module can replace the 3DMM to model the dense correspondence between different tri-plane representations. In this way, instead of aligning the implicit spaces of the two generative models, we learn the dense correspondence without any loss of the 3D consistency of the target NeRF-based model. Result 𝟎𝟎° −𝟏𝟏𝟏𝟏𝟏 −𝟑𝟑𝟎𝟎° 3𝟎𝟎° 𝟏𝟏𝟏𝟏° Ours Ours HiDeNeRF HiDeNeRF Source Driving Source Driving Figure 6: Qualitative results of multi-view synthesis. CSIM↑AUCON↑PRMSE↓AVD↓ HiDeNeRF 0.829 0.864 3.78 0.014 Ours 0.840 0.881 3.53 0.008 Table 4: Quantitative evaluation of multi-view synthesis. CSIM↑AUCON↑PRMSE↓AVD↓ET↓ w/o PlaneDict 0.763 0.809 3.10 0.035 7.58 w PlaneDict (5) 0.679 0.718 3.93 0.058 9.92 w PlaneDict (10) 0.802 0.824 3.18 0.038 7.36 w PlaneDict (15) 0.899 0.871 2.92 0.019 6.69 w PlaneDict (20) 0.911 0.928 2.50 0.011 5.18 Table 5: Ablation study. Ablation Study As shown in Tab. 5, in the ablation study of whether to use the PlaneDict module, we adopt the same structure as identity deformations for obtaining motion and they are trained with the same time and dataset. Limited by the hardware, the max number of plane bases is 23. However, since 20, there has been almost no improvement. Therefore, we adopt 20 as the number of plane bases in our PlaneDict module to balance the quality and optimization difficulty. 
Conclusions In this paper, we propose a novel framework to learn the dense correspondence between different face tri-planes without a 3D parametric model prior. With the PlaneDict module, our framework can achieve fine-grained motion driving of face tri-planes without any 3D inconsistency. Extensive experiments demonstrate our better image quality, fine-grained motion control, and identity fidelity of one-shot multi-view face reenactment than previous methods. Limitations and Ethical Concerns. Due to the inherent biases in the datasets, we are not able to handle extreme poses and expressions. We strongly oppose any misuse of our technology but we believe it has the potential to achieve multiview animation of diverse objects without relying on sophisticated 3D parametric models like human faces. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6528 Acknowledgments This work is supported by the National Key Research and Development Program of China under Grant No. 2021YFC3320103, the National Natural Science Foundation of China (NSFC) under Grants 62372452, U19B2038. References Antoine, L.; Shuvam, K.; Labintsev; Wing-Fung, K.; and Vazquez, R. 2022. Gaze Tracking. https://github.com/ antoinelame/GazeTracking. Eye Tracking library. Athar, S.; Xu, Z.; Sunkavalli, K.; Shechtman, E.; and Shu, Z. 2022. Rignerf: Fully controllable neural 3d portraits. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, 20364–20373. Blanz, V.; and Vetter, T. 1999. A morphable model for the synthesis of 3D faces. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, 187–194. Chan, E. R.; Lin, C. Z.; Chan, M. A.; Nagano, K.; Pan, B.; De Mello, S.; Gallo, O.; Guibas, L. J.; Tremblay, J.; Khamis, S.; et al. 2022. Efficient geometry-aware 3D generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16123– 16133. Chung, J. S.; Nagrani, A.; and Zisserman, A. 2018. Voxceleb2: Deep speaker recognition. INTERSPEECH. Deng, Y.; Yang, J.; Xu, S.; Chen, D.; Jia, Y.; and Tong, X. 2019. Accurate 3d face reconstruction with weaklysupervised learning: From single image to image set. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, 0–0. Dong, H.; Liang, X.; Gong, K.; Lai, H.; Zhu, J.; and Yin, J. 2018. Soft-gated warping-gan for pose-guided person image synthesis. Advances in neural information processing systems, 31. Drobyshev, N.; Chelishev, J.; Khakhulin, T.; Ivakhnenko, A.; Lempitsky, V.; and Zakharov, E. 2022. Megaportraits: Oneshot megapixel neural head avatars. In Proceedings of the 30th ACM International Conference on Multimedia, 2663– 2671. Feng, Y.; Feng, H.; Black, M. J.; and Bolkart, T. 2021. Learning an animatable detailed 3D face model from in-thewild images. ACM Transactions on Graphics (ToG), 40(4): 1–13. Gafni, G.; Thies, J.; Zollhofer, M.; and Nießner, M. 2021. Dynamic neural radiance fields for monocular 4d facial avatar reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8649– 8658. Geng, J.; Shao, T.; Zheng, Y.; Weng, Y.; and Zhou, K. 2018. Warp-guided gans for single-photo facial animation. ACM Transactions on Graphics (ToG), 37(6): 1–12. Gu, J.; Liu, L.; Wang, P.; and Theobalt, C. 2022. StyleNeRF: A style-based 3d-aware generator for high-resolution image synthesis. ICLR. Guo, Y.; Chen, K.; Liang, S.; Liu, Y.-J.; Bao, H.; and Zhang, J. 2021. 
Ad-nerf: Audio driven neural radiance fields for talking head synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5784–5794. Ha, S.; Kersner, M.; Kim, B.; Seo, S.; and Kim, D. 2020. Marionette: Few-shot face reenactment preserving identity of unseen targets. In Proceedings of the AAAI conference on artificial intelligence, volume 34, 10893–10900. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; and Hochreiter, S. 2017. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30. Hong, F.-T.; Zhang, L.; Shen, L.; and Xu, D. 2022. Depthaware generative adversarial network for talking head video generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 3397–3406. Karras, T.; Laine, S.; and Aila, T. 2019. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4401–4410. Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; and Aila, T. 2020. Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8110–8119. Khakhulin, T.; Sklyarova, V.; Lempitsky, V.; and Zakharov, E. 2022. Realistic one-shot mesh-based head avatars. In European Conference on Computer Vision, 345–362. Springer. Lan, Y.; Loy, C. C.; and Dai, B. 2022. Correspondence Distillation from NeRF-based GAN. arXiv preprint arXiv:2212.09735. Lan, Y.; Meng, X.; Yang, S.; Loy, C. C.; and Dai, B. 2023. Self-Supervised Geometry-Aware Encoder for StyleBased 3D GAN Inversion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 20940–20949. Li, J.; Zhang, J.; Bai, X.; Zhou, J.; and Gu, L. 2023a. Efficient Region-Aware Neural Radiance Fields for HighFidelity Talking Portrait Synthesis. arXiv preprint arXiv:2307.09323. Li, T.; Bolkart, T.; Black, M. J.; Li, H.; and Romero, J. 2017. Learning a model of facial shape and expression from 4D scans. ACM Trans. Graph., 36(6): 194–1. Li, W.; Zhang, L.; Wang, D.; Zhao, B.; Wang, Z.; Chen, M.; Zhang, B.; Wang, Z.; Bo, L.; and Li, X. 2023b. One-Shot High-Fidelity Talking-Head Synthesis With Deformable Neural Radiance Field. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 17969–17978. Lin, C. Z.; Lindell, D. B.; Chan, E. R.; and Wetzstein, G. 2022. 3d gan inversion for controllable portrait image animation. arXiv preprint arXiv:2203.13441. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6529 Liu, W.; Piao, Z.; Min, J.; Luo, W.; Ma, L.; and Gao, S. 2019. Liquid warping gan: A unified framework for human motion imitation, appearance transfer and novel view synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5904–5913. Liu, X.; Xu, Y.; Wu, Q.; Zhou, H.; Wu, W.; and Zhou, B. 2022. Semantic-aware implicit neural audio-driven video portrait generation. ECCV. Liu, Z.; Luo, P.; Wang, X.; and Tang, X. 2015. Deep learning face attributes in the wild. In Proceedings of the IEEE international conference on computer vision, 3730–3738. Ma, T.; Li, B.; He, Q.; Dong, J.; and Tan, T. 2023a. 
Semantic 3D-aware Portrait Synthesis and Manipulation Based on Compositional Neural Radiance Field. arXiv preprint arXiv:2302.01579. Ma, Z.; Zhu, X.; Qi, G.-J.; Lei, Z.; and Zhang, L. 2023b. OTAvatar: One-shot Talking Face Avatar with Controllable Tri-plane Rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16901–16910. Mildenhall, B.; Srinivasan, P. P.; Tancik, M.; Barron, J. T.; Ramamoorthi, R.; and Ng, R. 2021. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1): 99–106. Nagrani, A.; Chung, J. S.; and Zisserman, A. 2017. Voxceleb: a large-scale speaker identification dataset. INTERSPEECH. Or-El, R.; Luo, X.; Shan, M.; Shechtman, E.; Park, J. J.; and Kemelmacher-Shlizerman, I. 2022. Stylesdf: Highresolution 3d-consistent image and geometry generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13503–13513. Park, K.; Sinha, U.; Barron, J. T.; Bouaziz, S.; Goldman, D. B.; Seitz, S. M.; and Martin-Brualla, R. 2021a. Nerfies: Deformable neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5865–5874. Park, K.; Sinha, U.; Hedman, P.; Barron, J. T.; Bouaziz, S.; Goldman, D. B.; Martin-Brualla, R.; and Seitz, S. M. 2021b. Hypernerf: A higher-dimensional representation for topologically varying neural radiance fields. arXiv preprint arXiv:2106.13228. Shen, S.; Li, W.; Zhu, Z.; Duan, Y.; Zhou, J.; and Lu, J. 2022. Learning Dynamic Facial Radiance Fields for FewShot Talking Head Synthesis. In ECCV. Siarohin, A.; Lathuili`ere, S.; Tulyakov, S.; Ricci, E.; and Sebe, N. 2019. First order motion model for image animation. Advances in neural information processing systems, 32. Wang, T.-C.; Mallya, A.; and Liu, M.-Y. 2021. One-shot free-view neural talking-head synthesis for video conferencing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 10039–10049. Wang, Y.; Yang, D.; Bremond, F.; and Dantcheva, A. 2022. Latent Image Animator: Learning to Animate Images via Latent Space Navigation. In International Conference on Learning Representations. Wang, Z.; Bovik, A. C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4): 600–612. Xu, H.; Song, G.; Jiang, Z.; Zhang, J.; Shi, Y.; Liu, J.; Ma, W.; Feng, J.; and Luo, L. 2023. OmniAvatar: GeometryGuided Controllable 3D Head Synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12814–12824. Yang, S.; Wang, W.; Ling, J.; Peng, B.; Tan, X.; and Dong, J. 2023a. Context-Aware Talking-Head Video Editing. In Proceedings of the 31st ACM International Conference on Multimedia, 7718–7727. Yang, S.; Wang, W.; Peng, B.; and Dong, J. 2023b. Designing A 3d-Aware Stylenerf Encoder for Face Editing. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. IEEE. Zeng, B.; Liu, B.; Li, H.; Liu, X.; Liu, J.; Chen, D.; Peng, W.; and Zhang, B. 2022. FNeVR: Neural volume rendering for face animation. Advances in Neural Information Processing Systems, 35: 22451–22462. Zhang, J.; Li, X.; Wan, Z.; Wang, C.; and Liao, J. 2022. Fdnerf: Few-shot dynamic neural radiance fields for face reconstruction and expression editing. In SIGGRAPH Asia 2022 Conference Papers, 1–9. Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. 
The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, 586–595. Zhao, J.; and Zhang, H. 2022. Thin-plate spline motion model for image animation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3657–3666. Zhuang, Y.; Zhu, H.; Sun, X.; and Cao, X. 2022. Mofanerf: Morphable facial neural radiance field. In European Conference on Computer Vision, 268–285. Springer. zllrunning. 2019. face-parsing.PyTorch. https://github.com/zllrunning/face-parsing.PyTorch. Using modified BiSeNet for face parsing in PyTorch.
Motion Deblurring via Spatial-Temporal Collaboration of Frames and Events
Wen Yang1,2, Jinjian Wu1,2∗, Jupo Ma1,2∗, Leida Li1, Guangming Shi1,2
1School of Artificial Intelligence, Xidian University, Xi’an 710071, China 2Pazhou Lab, Huangpu, 510555, China
[email protected], {jinjian.wu, majupo, ldli, gmshi}@xidian.edu.cn
∗Corresponding authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract Motion deblurring can be advanced by exploiting informative features from supplementary sensors such as event cameras, which can capture rich motion information asynchronously with high temporal resolution. Existing event-based motion deblurring methods consider neither the modality redundancy in spatial fusion nor the temporal cooperation between events and frames. To tackle these limitations, a novel spatial-temporal collaboration network (STCNet) is proposed for event-based motion deblurring. Firstly, we propose a differential-modality based cross-modal calibration strategy to suppress redundancy for complementarity enhancement, and then bimodal spatial fusion is achieved with an elaborate cross-modal co-attention mechanism that weights their contributions for importance balance. Besides, we present a frame-event mutual spatio-temporal attention scheme to alleviate the errors of relying only on frames to compute cross-temporal similarities when the motion blur is significant, and then the spatio-temporal features from both frames and events are aggregated with the custom cross-temporal coordinate attention. Extensive experiments on both synthetic and real-world datasets demonstrate that our method achieves state-of-the-art performance. Project website: https://github.com/wyangvis/STCNet.

Figure 1: Comparison of visualized deblurring results with state-of-the-art event-based motion deblurring methods DS-Deblur (Yang et al. 2022), ERDNet (Chen et al. 2022a), EFNet (Sun et al. 2022), and our STCNet.

Introduction Motion blur is commonly inevitable due to camera shake or object motion over the period of exposure time, which not only deteriorates the visual experience for humans but also hinders other computer vision tasks such as tracking (Jin, Favaro, and Cipolla 2005; Mei and Reid 2008), video stabilization (Matsushita et al. 2006), etc. To eliminate these adverse effects, the task of motion deblurring has received much research attention recently. Traditional motion deblurring techniques explicitly utilize image priors and various constraints (Bar et al. 2007; Cho, Wang, and Lee 2012; Wulff and Black 2014; Bahat, Efrat, and Irani 2017; Kotera, Šroubek, and Milanfar 2013; Levin et al. 2009) that are handcrafted with empirical observations. However, it is challenging to design such priors and constraints to model the inherent properties of latent frames and motion blur. Due to the success of deep neural networks (DNNs), some deep convolutional neural network (CNN)-based methods (Zhang et al. 2019; Zamir et al. 2021; Chen et al. 2021; Cho et al. 2021), recurrent neural network (RNN)-based methods (Nah, Son, and Lee 2019; Zhong et al. 2020; Zhou et al. 2019; Zhu et al. 2022) and Transformer-based methods (Liang et al. 2021; Wang et al. 2022b; Liang et al. 2022b,a) have been proposed for motion deblurring, which implicitly learn more general prior information from large-scale training data.
Despite their good performance, these learning-based deblurring methods may fail to deal with severe blur. Motion deblurring cannot be solved trivially from the input blur set alone, as it is a highly ill-posed problem with infinite feasible solutions. Event cameras are bio-inspired sensors that can record per-pixel intensity changes asynchronously with high temporal resolution and output a stream of events encoding time, location and polarity of intensity changes (Vitoria et al. 2023) if the intensity changes surpass a threshold. Understandably, with the attractive properties that offer motion information with microsecond accuracy, event cameras have been attempted to address motion deblurring. Recently, some event-based motion deblurring methods are proposed (Pan et al. 2019; Jiang et al. 2020; Lin et al. 2020; The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6531 Sun et al. 2022; Chen et al. 2022a; Sun et al. 2023), and have achieved promising performance of deblurring. Crucially, these methods, on the one hand, adopt only simple fusion strategy for spatial complementary fusion and also do not consider the modality redundancy; on the other, neglect the role of the event itself and the event-frame interaction in the temporal domain. These insufficient collaboration of events and frames limits the overall performance. In this paper, we develop a novel spatial-temporal collaboration network (STCNet) to learn the collaborative fusion of frames and events both in spatial and temporal aspects for motion deblurring. Generally, different modalities are usually complementary but also redundant. We present a differential-modality guided cross-modal calibration strategy to enhance complementarity, which leverages the global interaction of differential-modality and two modalities. The calibration operation allows to later fuse the features better, and potentially avoids the modality redundancy. Then considering the disparity of the contributions of different modality features, we elaborate a cross-modal co-attention scheme to balance the contributions of multi-modality features for spatial complementary fusion. Besides, exploiting spatio-temporal dependencies is useful for motion deblurring. There may be errors in estimating the cross-temporal similarities relying only on frames when fast motions are present, i.e., spatio-temporal modeling lacks the guidance of motion information. Fortunately, benefiting from the rich motion information in the event, we propose a frame-event mutual spatio-temporal attention scheme to model the crosstemporal dependencies by conducting the communication of cross-frames and cross-events, alleviating that issue of cross-temporal similarities computation. Based on the mutual spatio-temporal attention, not only spatio-temporal features from frames, but also additional from events, are aggregated to the current feature with a custom cross-temporal coordinate attention. Coupled with the above spatial and temporal collaboration strategy, our framework achieves stateof-the-art performance of event-based motion deblurring (some visual comparisons are shown in Figure 1). The main contributions of our work are as follows. • We propose a novel spatial-temporal collaboration network (STCNet) for event-based motion deblurring, which facilitates the collaborative fusion of frames and events in both spatial and temporal domains. Extensive experiments show that our model outperforms state-ofthe-art event-based and image/video-based methods. 
• We present a differential-modality guided cross-modal calibration strategy to enhance complementarity and suppress redundancy of multi-modality features. Then the calibrated multi-modality features are fused by a crossmodal co-attention scheme to adaptively balance the modality contributions. • We propose a mutual spatio-temporal attention to model the cross-temporal dependencies by enjoying the extra assistance of motion information in events. Based on this, informative features from temporal neighbors of both frames and events are fused with current features via a custom cross-temporal coordinate attention. Related Work Motion Deblurring Early methods focus on explicitly using image priors and constraints (Cho, Wang, and Lee 2012; Hyun Kim and Mu Lee 2015; Bahat, Efrat, and Irani 2017; Kotera, ˇSroubek, and Milanfar 2013; Levin et al. 2009) that are handcrafted with empirical observations. With the development of deep learning, researchers have made significant progress on motion deblurring. State-of-the-art learning-based deblurring methods use a single image or multiple frames. Image Deblurring. Contemporary successful deep learning-based image deblurring methods can be roughly categorized as follows. 1) Single-Stage Approaches. These methods are based on a single-stage design, using the convolutional neural network (CNN) (Zhang et al. 2020) or Generative Adversarial Network (GAN) (Kupyn et al. 2018, 2019). 2) Multi-Stage Approaches. These methods aim to recover clean images in a progressive manner with multistage (Nah, Hyun Kim, and Mu Lee 2017; Tao et al. 2018; Zamir et al. 2021; Chen et al. 2021), which decompose the image deblurring task into smaller easier subtasks. 3) Coarse-to-Fine Strategies. These methods typically stack sub-networks with multi-scale inputs and gradually improve sharpness of images (Park et al. 2020; Cho et al. 2021). 4) Attention Modules. Attention mechanisms can help learn cross-spatial/channel correlations to better address deblurring (Suin, Purohit, and Rajagopalan 2020; Tsai et al. 2022; Purohit and Rajagopalan 2020; Liang et al. 2021). Video Deblurring. The spatio-temporal correlation between adjacent inputs is critical for video deblurring. Recurrent neural network (RNN) or convolutional neural network (CNN) are adopted to exploit temporal information (Nah, Son, and Lee 2019; Zhong et al. 2020; Zhou et al. 2019; Su et al. 2017; Zhu et al. 2022). To improve the deblurring performance further, some extra multiple frames aligning methods were proposed to model spatio-temporal correlation, such as optical flow based methods (Pan, Bai, and Tang 2020; Xiang, Wei, and Pan 2020), deformable and dynamic convolutions based methods (Wang et al. 2019; Zhou et al. 2019). Recently, the emergence of Transformer provides an alternative for effective temporal modeling for video deblurring (Liang et al. 2022b,a; Lin et al. 2022), due to its advantages of modeling long-range spatial dependencies. Event-Based Motion Deblurring Event cameras provide visual information with low latency and with strong robustness against motion blur, which offers great potential for motion deblurring. Event-based motion deblurring methods can be divided into two categories (Xu et al. 2021), i.e., model driven and data driven algorithms. Model driven methods formulate the relation from blurry images to sharp images with the physical event generation principle (Scheerlinck, Barnes, and Mahony 2018). Specifically, Pan et al. (Pan et al. 
2019) modeled the blur-generation process by associating event to a latent frame with an Event-based Double Integral (EDI) algorithm for deblurring. Scheerlinck et al. (Scheerlinck, Barnes, and Mahony 2018) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6532 Figure 2: Framework of our STCNet, containing two parts: modality spatial collaboration and modality temporal collaboration. presented a continuous-time formulation of event-based intensity estimation using complementary filtering to combine frames with events. Regrettably, there is inevitable noise in events due to the non-ideality of physical sensors (Zhang and Yu 2022), resulting in degraded performance. Data driven methods tackle above limitations by learningbased approaches (Lin et al. 2020). LEMD (Jiang et al. 2020) presented a sequential formulation of event-based motion deblurring, and unfolded its optimization with deep architecture. eSL-Net (Wang et al. 2020) proposed an event enhanced degeneration model for the high-quality image recovery. Shang et al. (Shang et al. 2021) proposed an event fusion module to utilize beneficial information from events, which can be incorporated into existing motion deblurring methods. EFNet (Sun et al. 2022) first introduced a symmetric cumulative event representation and then proposed a cross-modal attention module to fuse image and event. ERDNet (Chen et al. 2022a) proposed a residual learning approach to learn event-based motion deblurring. Method Problem Statement Given a blurry frame B and the corresponding event stream ET ≜{(xi, yi, pi, ti)}ti∈T containing all events triggered during exposure time T, where p = ±1 is polarity, which denotes the direction (increase or decrease) of the intensity changes at that pixel (x, y) and time t, the proposed method is to recover a sharp frame I by exploiting both blurry frame B and event stream ET , which can be modeled as I = G (B, ET ), where G is deep learning model. Principled Framework of STCNet In our work, a novel spatial-temporal collaboration network (STCNet) is proposed for event-based motion deblurring, which can facilitate the collaborative fusion of frames and events in both spatial and temporal domains. Figure 2 shows the overview of STCNet. We first use symmetric feature encoder to extract target features ΦB t and ΦE t from blurry frame and its corresponding events, separately. Next, we conduct modality spatial collaboration with first differentialmodality guided cross-modal calibration (CMC) for complementary enhancement and then cross-modal aggregation (CMA) for contribution balance, obtaining Ft. Besides, we conduct modality temporal collaboration with first frameevent mutual spatio-temporal attention (STA) for crosstemporal dependencies modeling and then cross-temporal fusion (CTF) for spatio-temporal features fusion, obtaining Ot. Finally, feature decoder reconstructs the deblurred result It. Below we detail the main parts: modality spatial collaboration and modality temporal collaboration. Modality Spatial Collaboration (MSC) Generally, different modalities usually have complementary features (discrepancy) for each other and also have their shared features (commonalities). Differential features are what cross-modal fusion focuses on, while common features are redundant information. 
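As a rough structural sketch of the pipeline just outlined (encoders, then CMC/CMA spatial collaboration, then STA/CTF temporal collaboration, then the decoder), the forward pass can be organized as below. All sub-modules are placeholders, and the dense event-tensor input (e.g., a voxelized event stack) is our assumption for illustration rather than a detail specified in this overview.

```python
import torch.nn as nn


class STCNetSketch(nn.Module):
    """Structural skeleton of I_t = G(B_t, E_T): encode both modalities, apply
    spatial collaboration (CMC + CMA), then temporal collaboration (STA + CTF), then decode."""

    def __init__(self, frame_encoder, event_encoder, msc, mtc, decoder):
        super().__init__()
        self.frame_encoder = frame_encoder  # Phi^B_t from the blurry frame
        self.event_encoder = event_encoder  # Phi^E_t from the event tensor
        self.msc = msc                      # modality spatial collaboration  -> F_t
        self.mtc = mtc                      # modality temporal collaboration -> O_t
        self.decoder = decoder              # reconstructs the deblurred frame I_t

    def forward(self, blurry_frames, event_tensors):
        # Both inputs are dicts keyed by time step, e.g. {"t-1": ..., "t": ..., "t+1": ...};
        # events are assumed to be pre-converted into a dense tensor representation.
        phi_b = {k: self.frame_encoder(v) for k, v in blurry_frames.items()}
        phi_e = {k: self.event_encoder(v) for k, v in event_tensors.items()}
        f_t = self.msc(phi_b["t"], phi_e["t"])          # CMC calibration + CMA aggregation
        o_t = self.mtc(f_t, phi_b, phi_e)               # STA attention + CTF fusion with neighbors
        return self.decoder(o_t)
```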
We advocate first calibrating the modality features to enhance complementarity and suppress redundancy, and then accounting for the disparate contributions of the different modality features during multi-modality fusion, as shown in Figure 3.

Figure 3: Details of modality spatial collaboration.

Firstly, considering that the differential modality contains complementary cues, a differential-modality guided cross-modal calibration strategy is presented to enhance complementarity. The main idea is to leverage the global interaction between the differential modality and the two modalities to infer attention maps; the attention maps are then multiplied with the respective input features for feature enhancement. The calibration process is realized by the CMC in Figure 3. Given the frame features $\Phi_t^B$ and the event features $\Phi_t^E$, we first obtain the differential-modality features $F_{dm}$ by direct subtraction of the two modalities:

$F_{dm} = \Phi_t^B - \Phi_t^E$,  (1)

then, based on traditional self-attention (Vaswani et al. 2017), we further put forward an efficient cross-attention mechanism applied to $\Phi_t^B$, $\Phi_t^E$, and $F_{dm}$ to infer the attention maps. $\Phi_t^E$ and $\Phi_t^B$ are transformed into Key $K_e$, Value $V_e$ and Key $K_b$, Value $V_b$, respectively, while $F_{dm}$ is transformed into Query $Q_d$. The attention maps can then be calculated as:

$A_B = \mathrm{Softmax}(Q_d K_b^T), \quad A_E = \mathrm{Softmax}(Q_d K_e^T)$,  (2)

where the attention maps $A_B$ and $A_E$ contain the complementarity clues of the frame and event modalities, respectively. Thus, the calibrated features can be represented as:

$\hat{\Phi}_t^B = A_B V_b, \quad \hat{\Phi}_t^E = A_E V_e$.  (3)

Next, considering the disparate contributions of the different modality features, we design a cross-modal co-attention scheme to balance the contributions of the multi-modality features for bimodal spatial complementary fusion. Meanwhile, we use a bi-directional fusion strategy, i.e., from frames to events as well as in the opposite direction. The aggregation process is realized by the CMA module in Figure 3. Specifically, taking the frame-to-event direction as an example, given the calibrated feature $\hat{\Phi}_t^B$ and the original $\Phi_t^E$, they are first combined using concatenation and a convolution operation. The combined features are then split evenly along the channel dimension into two sub-branches. The sigmoid function and global average pooling are performed on each sub-branch to obtain the co-attention scores $g_B$ and $g_E$, which model the importance of the different modal features for the further fusion:

$g = \mathrm{Avg}(\mathrm{Sig}(\mathrm{Conv}(\mathrm{Cat}(\hat{\Phi}_t^B, \Phi_t^E))))$,  (4)

where $\mathrm{Avg}(\cdot)$ is global average pooling, $\mathrm{Sig}(\cdot)$ denotes the sigmoid function, $\mathrm{Conv}(\cdot)$ refers to the convolution layer, and $\mathrm{Cat}(\cdot)$ is the concatenation operation. We apply the co-attention scores to the corresponding features to generate the gated features $\tilde{\Phi}_t^B$ and $\tilde{\Phi}_t^E$:

$\tilde{\Phi}_t^B = \hat{\Phi}_t^B * g_B, \quad \tilde{\Phi}_t^E = \Phi_t^E * g_E$.  (5)

Further, we use channel-wise and spatial-wise attentions to emphasize the supplementary features:

$\tilde{\Phi}_t^{B\prime} = CA(\tilde{\Phi}_t^B) * \tilde{\Phi}_t^B, \quad \tilde{\Phi}_t^{B\prime\prime} = SA(\tilde{\Phi}_t^{B\prime}) * \tilde{\Phi}_t^{B\prime}$,  (6)

then we devise the aggregation operation as an element-wise addition of the two modalities:

$\bar{E} = \tilde{\Phi}_t^E + \tilde{\Phi}_t^{B\prime\prime}$.  (7)

Similarly, we can obtain $\bar{B}$ in the opposite direction. The final fused feature is denoted as $F_t = \mathrm{Cat}(\bar{E}, \bar{B})$.
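To make the calibration-then-aggregation pipeline concrete, below is a minimal, single-direction PyTorch sketch of CMC (Eqs. 1-3) and CMA (Eqs. 4-7). It reflects our simplified reading rather than the released code: the attention is single-head and dense over spatial positions (the paper's efficient variant is not reproduced), and $CA(\cdot)$/$SA(\cdot)$ are stand-in channel/spatial attentions.

```python
import torch
import torch.nn as nn

class CrossModalCalibration(nn.Module):
    """Differential-modality guided calibration (Eqs. 1-3), single-head for clarity."""
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)                       # query from F_dm
        self.to_kb, self.to_vb = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.to_ke, self.to_ve = nn.Linear(dim, dim), nn.Linear(dim, dim)

    def forward(self, feat_b, feat_e):                        # both (B, C, H, W)
        B, C, H, W = feat_b.shape
        fb = feat_b.flatten(2).transpose(1, 2)                # (B, HW, C) spatial tokens
        fe = feat_e.flatten(2).transpose(1, 2)
        q = self.to_q(fb - fe)                                # Eq. (1): differential modality
        attn_b = (q @ self.to_kb(fb).transpose(-2, -1)).softmax(dim=-1)   # Eq. (2)
        attn_e = (q @ self.to_ke(fe).transpose(-2, -1)).softmax(dim=-1)
        calib_b = (attn_b @ self.to_vb(fb)).transpose(1, 2).reshape(B, C, H, W)  # Eq. (3)
        calib_e = (attn_e @ self.to_ve(fe)).transpose(1, 2).reshape(B, C, H, W)
        return calib_b, calib_e

class CrossModalAggregation(nn.Module):
    """One fusion direction (frame-to-event, Eqs. 4-7); the other direction is symmetric."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, 2 * channels, 3, padding=1)
        self.ca = nn.Sequential(nn.AdaptiveAvgPool2d(1),      # simple channel attention
                                nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.sa = nn.Sequential(nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, calib_b, feat_e):                       # (B, C, H, W) each
        g = torch.sigmoid(self.fuse(torch.cat([calib_b, feat_e], dim=1)))
        g = g.mean(dim=(2, 3), keepdim=True)                  # Eq. (4): Avg(Sig(Conv(Cat(.))))
        g_b, g_e = g.chunk(2, dim=1)                          # co-attention scores g_B, g_E
        gated_b, gated_e = calib_b * g_b, feat_e * g_e        # Eq. (5): gated features
        gated_b = self.ca(gated_b) * gated_b                  # Eq. (6): channel-wise attention
        gated_b = self.sa(gated_b) * gated_b                  # Eq. (6): spatial-wise attention
        return gated_e + gated_b                              # Eq. (7): aggregated branch
```

Running the two directions and concatenating their outputs along the channel dimension yields $F_t$, as stated above.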
Modality Temporal Collaboration (MTC)

Exploring useful information from neighboring inputs is crucial for motion deblurring. However, when fast motions are present, estimating cross-temporal similarities from frames alone may be unreliable, i.e., the spatio-temporal modeling lacks the guidance of motion information. Fortunately, on the one hand, events contain rich motion information that can assist frames in better modeling cross-temporal relevance; on the other hand, the spatio-temporal dependencies of event sequences can also be explored. Thus, enjoying the extra assistance of events, we propose a frame-event mutual spatio-temporal attention to model the cross-temporal dependencies, and then aggregate informative features from the temporal neighbors of both frames and events via a custom cross-temporal coordinate attention, as illustrated in Figure 4.

Figure 4: Details of modality temporal collaboration.

In our work, the features of the two adjacent moments (before and after) are fused into the features of the current moment. We take time t and time t+1 as an example. Given the frame features $\Phi_t^B$ and $\Phi_{t+1}^B$, as well as the event features $\Phi_t^E$ and $\Phi_{t+1}^E$, the key to capturing the proposed cross-temporal dependencies is to conduct communication across frames and across events. As shown in the STA of Figure 4, we first transform $\Phi_t^B$ into Query $Q_b$ and $\Phi_{t+1}^B$ into Key $K_b$, Value $V_b$, as well as $\Phi_t^E$ into Query $Q_e$ and $\Phi_{t+1}^E$ into Key $K_e$, Value $V_e$. The intra-modality individual cross-temporal attention is first estimated by multiplying the queries from one moment with the keys from the other moment:

$S_B = \mathrm{Softmax}(Q_b K_b^T), \quad S_E = \mathrm{Softmax}(Q_e K_e^T)$.  (8)

Then we combine them into the inter-modality cross-temporal attention to obtain the mutual spatio-temporal attention $S_M$:

$S_M = S_B S_E$.  (9)

The informative spatio-temporal features $\tilde{\Phi}_{t+1}^B$ and $\tilde{\Phi}_{t+1}^E$ from both the frame and event domains are then obtained under the guidance of the mutual attention $S_M$:

$\tilde{\Phi}_{t+1}^B = S_M V_b, \quad \tilde{\Phi}_{t+1}^E = S_M V_e$.  (10)

Next, we fuse the current features $F_t$ with $\tilde{\Phi}_{t+1}^B$ and $\tilde{\Phi}_{t+1}^E$ separately. Taking the fusion of $F_t$ and $\tilde{\Phi}_{t+1}^B$ as an example, the CTF is developed based on coordinate attention (Hou, Zhou, and Feng 2021), which can concurrently capture channel and location importance as well as long-range dependencies. Figure 4 shows the structure of CTF. Specifically, given $F_t$ and $\tilde{\Phi}_{t+1}^B$, we use two spatially scoped pooling kernels of size (H, 1) and (1, W) to encode each channel of the two features along the horizontal and vertical orientations. The aggregated features are represented as:

$F_t^h = \mathrm{XAP}(F_t), \; F_t^w = \mathrm{YAP}(F_t), \; B_{t+1}^h = \mathrm{XAP}(\tilde{\Phi}_{t+1}^B), \; B_{t+1}^w = \mathrm{YAP}(\tilde{\Phi}_{t+1}^B)$,  (11)

where XAP and YAP denote the average pooling along the vertical and horizontal directions, respectively. We then join the four aggregated features by spatial concatenation, followed by convolution and batch normalization (BN):

$R = \mathrm{BN}(\mathrm{Cat}(F_t^h, F_t^w, B_{t+1}^h, B_{t+1}^w))$.  (12)

Then, we split $R$ into four separate tensors $R_f^h$, $R_f^w$, $R_b^h$, and $R_b^w$, and use four 1×1 convolutions to transform each of the four split tensors into a tensor with the same number of channels as the input features:

$W_f^h = \sigma(\mathrm{Conv}(R_f^h)), \; W_f^w = \sigma(\mathrm{Conv}(R_f^w)), \; W_b^h = \sigma(\mathrm{Conv}(R_b^h)), \; W_b^w = \sigma(\mathrm{Conv}(R_b^w))$,  (13)

where $W_f^h$, $W_f^w$, $W_b^h$, and $W_b^w$ represent the coordinate attention weights of $F_t$ and $\tilde{\Phi}_{t+1}^B$ in the vertical and horizontal directions, respectively. The final weighted features can be defined as:

$M_f = F_t W_f^h W_f^w, \quad M_b = \tilde{\Phi}_{t+1}^B W_b^h W_b^w$,  (14)

then we devise the aggregation operation as an element-wise addition of the two features:

$M_{fb} = M_f + M_b$.  (15)

Similarly, we can obtain $M_{fe}$. The final fused feature is denoted as $O_t = \mathrm{Cat}(M_{fb}, M_{fe})$.
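As a companion sketch (again our simplified, hypothetical reading rather than the official implementation), the snippet below implements the mutual spatio-temporal attention of Eqs. (8)-(10) on flattened spatial tokens and the cross-temporal coordinate-attention fusion of Eqs. (11)-(15); multi-head projections and the second temporal neighbour are omitted for brevity, and the bottleneck width `mid` is our own choice.

```python
import torch
import torch.nn as nn

class MutualSpatioTemporalAttention(nn.Module):
    """Frame-event mutual attention (Eqs. 8-10) over (B, N, C) spatial-token sequences."""
    def __init__(self, dim):
        super().__init__()
        self.qb, self.kb, self.vb = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.qe, self.ke, self.ve = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)

    def forward(self, fb_t, fb_t1, fe_t, fe_t1):
        s_b = (self.qb(fb_t) @ self.kb(fb_t1).transpose(-2, -1)).softmax(-1)   # Eq. (8)
        s_e = (self.qe(fe_t) @ self.ke(fe_t1).transpose(-2, -1)).softmax(-1)
        s_m = s_b @ s_e                                                        # Eq. (9)
        return s_m @ self.vb(fb_t1), s_m @ self.ve(fe_t1)                      # Eq. (10)

class CrossTemporalFusion(nn.Module):
    """Coordinate-attention style fusion of F_t with one neighbour feature (Eqs. 11-15)."""
    def __init__(self, channels, mid=32):
        super().__init__()
        self.conv_bn = nn.Sequential(nn.Conv2d(channels, mid, 1),
                                     nn.BatchNorm2d(mid), nn.ReLU(inplace=True))
        self.w_hf, self.w_wf = nn.Conv2d(mid, channels, 1), nn.Conv2d(mid, channels, 1)
        self.w_hb, self.w_wb = nn.Conv2d(mid, channels, 1), nn.Conv2d(mid, channels, 1)

    def forward(self, f_t, f_nb):                              # both (B, C, H, W)
        B, C, H, W = f_t.shape
        # Eq. (11): average over width (height-indexed) and over height (width-indexed)
        fh, bh = f_t.mean(3, keepdim=True), f_nb.mean(3, keepdim=True)          # (B, C, H, 1)
        fw = f_t.mean(2, keepdim=True).permute(0, 1, 3, 2)                      # (B, C, W, 1)
        bw = f_nb.mean(2, keepdim=True).permute(0, 1, 3, 2)
        r = self.conv_bn(torch.cat([fh, fw, bh, bw], dim=2))                    # Eq. (12)
        rh_f, rw_f, rh_b, rw_b = torch.split(r, [H, W, H, W], dim=2)
        w_hf = torch.sigmoid(self.w_hf(rh_f))                                   # Eq. (13)
        w_wf = torch.sigmoid(self.w_wf(rw_f)).permute(0, 1, 3, 2)
        w_hb = torch.sigmoid(self.w_hb(rh_b))
        w_wb = torch.sigmoid(self.w_wb(rw_b)).permute(0, 1, 3, 2)
        return f_t * w_hf * w_wf + f_nb * w_hb * w_wb                           # Eqs. (14)-(15)
```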
Loss Function

In this paper, we use the Charbonnier loss (Charbonnier et al. 1994) to train our network in an end-to-end fashion:

$L_{char} = \frac{1}{CHW}\sqrt{\|I - G\|^2 + \varepsilon^2}$,  (16)

where $I$ and $G$ are the deblurred output and the ground truth, respectively, $C$, $H$, and $W$ are the dimensions of the frame, and the constant $\varepsilon$ is empirically set to $10^{-3}$ as in (Zamir et al. 2021).

Experiments

Experimental Settings

Datasets. Our STCNet is evaluated on 1) Synthetic datasets. The GoPro (Nah, Hyun Kim, and Mu Lee 2017) and DVD (Su et al. 2017) datasets are widely adopted for image-only and event-based deblurring such as (Sun et al. 2022); they contain synthetic blurred images and sharp ground-truth images, as well as synthetic events generated by the simulation algorithm ESIM (Rebecq, Gehrig, and Scaramuzza 2018). 2) Real dataset. REB is a real event dataset captured by us with a DAVIS346 event camera, including both real events and clear ground-truth images captured under various indoor and outdoor conditions that are well-exposed and minimally motion-blurred. The blurred images are generated using the same strategy as in GoPro. REB contains 60 videos, 40 of which are used for training and 20 for testing. In addition, several sequences are collected under fast camera movement or fast-moving scenes for qualitative comparison, without ground truth.

Implementation Details. Our method is implemented in PyTorch on an NVIDIA RTX 3090 GPU. The size of the training patch is 256 × 256 with a mini-batch size of 8. The optimizer is ADAM (Kingma and Ba 2015), and the learning rate is initialized at $2 \times 10^{-4}$ and decreased by the cosine learning rate strategy with a minimum learning rate of $10^{-6}$. For data augmentation, each patch is horizontally flipped with a probability of 0.5. The Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) are adopted as the evaluation metrics.

Comparison With State-of-the-Art Methods

We compare our STCNet to state-of-the-art image/video-only deblurring methods, including MemDeblur (Ji and Yao 2022), MMP-RNN (Wang et al. 2022a), MPRNet (Zamir et al. 2021), MIMO-UNet++ (Cho et al. 2021), Restormer (Zamir et al. 2022), RNN-MBP (Zhu et al. 2022), NAFNet (Chen et al. 2022b), VRT (Liang et al. 2022a), DFFN (Kong et al. 2023), and DSTN (Pan et al. 2023), and to event-based deblurring methods, including RED* (Xu et al. 2021), eSL-Net* (Wang et al. 2020), D2Nets* (Shang et al. 2021), DS-Deblur* (Yang et al. 2022), ERDNet* (Chen et al. 2022a), EFNet* (Sun et al. 2022), and REFID* (Sun et al. 2023).

GoPro: We report the performance of the compared motion deblurring approaches on the GoPro dataset in Table 1. Overall, our method achieves the best performance against the other algorithms (a 1.40dB improvement in PSNR over the best image/video-only method and a 0.54dB improvement over the best event-based method). Moreover, we show the qualitative visual quality comparison in Figure 5. Overall, the visual comparisons demonstrate that our method can recover sharper texture details that are closer to the ground truth, while the results restored by other methods still suffer from motion blur and lose sharp edge information.

DVD: STCNet is trained on the GoPro dataset and tested on the DVD dataset. Table 2 reports the quantitative results on the DVD dataset. Our method significantly outperforms other state-of-the-art competitors (a 2dB improvement in PSNR over the best image/video-only method and a 0.79dB improvement over the best event-based method), demonstrating the superior generalization ability of the proposed framework.

REB: The quantitative performance on the real-world REB dataset is shown in Table 3.
Our method significantly outperforms other competitors (2.69dB improvement in terms of PSNR over best image/video-only methods and 0.53dB improvement over best event-based methods). We show the deblurring visual comparison on real blurs in Figure 6. Our The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6535 Method RED* eSL-Net* D2Nets* MemDeblur MMP-RNN MPRNet MIMO-UNet++ Restormer DS-Deblur* PSNR 28.98 30.23 31.76 31.76 32.64 32.66 32.68 32.92 33.13 SSIM 0.8499 0.8703 0.9430 0.9230 0.9359 0.9590 0.9590 0.9610 0.9465 Method RNN-MBP NAFNet DFFN ERDNet* VRT DSTN EFNet* REFID* STCNet(Ours)* PSNR 33.32 33.69 34.21 34.25 34.81 35.05 35.46 35.91 36.45 SSIM 0.9627 0.9670 0.9692 0.9534 0.9724 0.9733 0.9720 0.9730 0.9809 Table 1: Comparison of motion deblurring methods on GoPro dataset. * denotes event-based methods. Blurry D2Nets* MPRNet DS-Deblur* ERDNet* Blurry Image Ground-truth VRT EFNet* REFID* STCNet(Ours)* Figure 5: Visual comparisons on GoPro datatset. * denotes event-based methods. Best viewed on a screen and zoomed in. Method D2Nets* MPRNet eSL-Net* DS-Deblur* NAFNet ERDNet* VRT EFNet* REFID* STCNet(Ours)* PSNR 26.64 27.80 27.50 31.63 27.94 32.29 31.94 32.85 33.15 33.94 SSIM 0.8819 0.9091 0.8914 0.9436 0.9126 0.9506 0.9602 0.9571 0.9611 0.9692 Table 2: Comparison of motion deblurring methods on DVD dataset. * denotes event-based methods. Method MMP-RNN Restormer D2Nets* NAFNet DS-Deblur* ERDNet* eSL-Net* REFID* EFNet* STCNet(Ours)* PSNR 30.66 32.21 32.47 32.75 32.84 34.02 34.55 34.84 34.91 35.44 SSIM 0.9122 0.9505 0.9585 0.9570 0.9583 0.9663 0.9710 0.9723 0.9720 0.9772 Table 3: Comparison of motion deblurring methods on REB dataset. * denotes event-based methods. method achieves the most visually plausible deblurring results with sharper textures while others produce results with more artifacts and cannot remove severe blur effectively. Complexity Comparison We further calculate the parameters and average runtime for complexity analysis. All experiments are conducted with image size of 1280 × 720 × 3. Results in average running time and parameters are presented in Table 4. It is obvious that our method has comparable parameters and running time with consideration of acceptable calculation consumption to achieve promising deblurring performance. Ablation Study To evaluate the effectiveness of the key components (MSC and MTC) in our model, we conduct ablation studies on GoPro dataset and REB dataset. A baseline is first experimented with, which simply concatenates frame features ΦB t and event features ΦE t and neglects the spatio-temporal correlation between successive inputs. First row of Table 5 shows the performance of baseline. Effectiveness of MSC Module. We append it to Baseline to conduct cross-modal fusion using a calibration-thenaggregation strategy. There is a great performance gap in the first two rows of Table 5, which shows that MSC can efficiently fuse events with frames. Then, we validate the importance of differential-modality guided cross-modal calibration (DM-CMC) strategy in MSC. The DM-CMC is appended to the baseline to calibrate ΦB t and ΦE t and the calibrated bi-modal features are simply concatenated, and the results are shown in the first two rows in Table 7, showing that DM-CMC can enhance complementarity. Further, the validity of CMA in MSC is tested, which is designed to weight different contribution of modalities. 
The CMA is appended to the baseline to adaptively fusion, ignoring modality redundancy problem, and the results are shown in the first and third rows in Table 7. Apparently, CMA can efficiently emphasize modality own importance for better fusion. Effectiveness of MTC Module. We append MTC module to baseline to capture sharp information from temporal neighbors of both frames and events, and the results are shown in the first and third rows in Table 5. Apparently, cross-temporal relevance can be modeled by MTC to improve the deblurring performance. Then the mutual spatiotemporal attention scheme in MTC is validated. We model spatio-temporal relevance with only frames, only events, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6536 Blurry Image MPRNet Restormer NAFNet Event REFID* EFNet* STCNet(Ours)* Figure 6: Visual comparison on real blur set of REB dataset. * denotes event-based methods. Best viewed on a screen and zoomed in. Method eSL-Net* D2Nets* MemDeblur MPRNet MIMO-UNet++ Restormer DS-Deblur* ERDNet* REFID* STCNet(Ours)* Params (M) 0.19 32.63 6.1020 20.10 16.10 26.09 15.60 18.08 15.9 16.25 Runtime (s) 0.015 1.340 0.9110 0.117 0.025 1.1546 0.292 0.020 0.072 0.098 PSNR(dB) 30.23 31.76 31.76 32.66 32.68 32.92 33.13 34.25 35.91 36.45 Table 4: Complexity comparison with other methods. * denotes event-based methods. MSC MTC Gropo REB PSNR SSIM PSNR SSIM 33.40 0.9615 33.12 0.9610 35.93 0.9780 35.00 0.9721 35.05 0.9733 34.20 0.9682 36.45 0.9809 35.44 0.9772 Table 5: Ablation study on MSC and MTC in STCNet. Cross-frame Cross-event Gropo REB PSNR SSIM PSNR SSIM 33.40 0.9615 33.12 0.9610 34.37 0.9681 33.58 0.9645 33.96 0.9658 33.15 0.9608 34.61 0.9706 33.79 0.9652 Table 6: Ablation study on mutual attention in MTC. joint them by STA, and the spatio-temporal features and current features are simply concatenated. Table 6 shows that mutual attention better captures spatio-temporal dependence. Besides, we test the validity of the CTF, which adaptively fuses above mutual attention-guided spatio-temporal features and current features. Table 8 shows the superiority of CTF. Conclusion In this work, we explore the complementary fusion of events and frames for motion deblurring. A novel spatial-temporal collaboration network is introduced to facilitate the crossmodal fusion both in spatial and temporal aspects. We first DM-CMC CMA Gropo REB PSNR SSIM PSNR SSIM 33.40 0.9615 33.12 0.9610 34.73 0.9712 33.97 0.9667 35.67 0.9768 34.66 0.9703 35.93 0.9780 35.00 0.9721 Table 7: Ablation study on calibration-aggregation in MSC. CTF Gropo REB PSNR SSIM PSNR SSIM 34.61 0.9706 33.79 0.9652 35.05 0.9733 34.20 0.9682 Table 8: Ablation study on cross-temporal fusion in MTC. conduct cross-modal spatial fusion with first differentialmodality guided cross-modal calibration for complementary enhancement and then co-attention based cross-modal aggregation for adaptive fusion. And then to attach importance to the temporal correlation among adjacent neighbors, we prepose the frame-event mutual spatio-temporal attention for cross-temporal dependencies modeling and then fuse spatio-temporal features with a cross-temporal coordinate attention based cross-temporal fusion. Extensive evaluations show that our method achieves state-of-the-art performance. Acknowledgments This work was partially supported by the National Natural Science Foundation of China under contract 62022063. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6537 References Bahat, Y.; Efrat, N.; and Irani, M. 2017. 
Non-uniform blind deblurring by reblurring. In ICCV. Bar, L.; Berkels, B.; Rumpf, M.; and Sapiro, G. 2007. A variational framework for simultaneous motion estimation and restoration of motion-blurred video. In ICCV. Charbonnier, P.; Blanc-Feraud, L.; Aubert, G.; and Barlaud, M. 1994. Two deterministic half-quadratic regularization algorithms for computed imaging. In ICIP. Chen, H.; Teng, M.; Shi, B.; Wang, Y.; and Huang, T. 2022a. A Residual Learning Approach to Deblur and Generate High Frame Rate Video With an Event Camera. IEEE TMM. Chen, L.; Chu, X.; Zhang, X.; and Sun, J. 2022b. Simple baselines for image restoration. ECCV. Chen, L.; Lu, X.; Zhang, J.; Chu, X.; and Chen, C. 2021. HINet: Half instance normalization network for image restoration. In CVPR. Cho, S.; Wang, J.; and Lee, S. 2012. Video deblurring for hand-held cameras using patch-based synthesis. ACM TOG. Cho, S.-J.; Ji, S.-W.; Hong, J.-P.; Jung, S.-W.; and Ko, S.-J. 2021. Rethinking Coarse-to-Fine Approach in Single Image Deblurring. In ICCV. Hou, Q.; Zhou, D.; and Feng, J. 2021. Coordinate attention for efficient mobile network design. In CVPR. Hyun Kim, T.; and Mu Lee, K. 2015. Generalized video deblurring for dynamic scenes. In CVPR. Ji, B.; and Yao, A. 2022. Multi-Scale Memory-Based Video Deblurring. In CVPR. Jiang, Z.; Zhang, Y.; Zou, D.; Ren, J.; Lv, J.; and Liu, Y. 2020. Learning event-based motion deblurring. In CVPR. Jin, H.; Favaro, P.; and Cipolla, R. 2005. Visual tracking in the presence of motion blur. In CVPR. Kingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization. In ICLR. Kong, L.; Dong, J.; Ge, J.; Li, M.; and Pan, J. 2023. Efficient Frequency Domain-based Transformers for HighQuality Image Deblurring. In CVPR. Kotera, J.; ˇSroubek, F.; and Milanfar, P. 2013. Blind deconvolution using alternating maximum a posteriori estimation with heavy-tailed priors. In CAIP. Kupyn, O.; Budzan, V.; Mykhailych, M.; Mishkin, D.; and Matas, J. 2018. Deblurgan: Blind motion deblurring using conditional adversarial networks. In CVPR. Kupyn, O.; Martyniuk, T.; Wu, J.; and Wang, Z. 2019. Deblurgan-v2: Deblurring (orders-of-magnitude) faster and better. In ICCV. Levin, A.; Weiss, Y.; Durand, F.; and Freeman, W. T. 2009. Understanding and evaluating blind deconvolution algorithms. In CVPR. Liang, J.; Cao, J.; Fan, Y.; Zhang, K.; Ranjan, R.; Li, Y.; Timofte, R.; and Van Gool, L. 2022a. Vrt: A video restoration transformer. arXiv preprint arXiv:2201.12288. Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; and Timofte, R. 2021. Swinir: Image restoration using swin transformer. In ICCV. Liang, J.; Fan, Y.; Xiang, X.; Ranjan, R.; Ilg, E.; Green, S.; Cao, J.; Zhang, K.; Timofte, R.; and Van Gool, L. 2022b. Recurrent Video Restoration Transformer with Guided Deformable Attention. NeurlPS. Lin, J.; Cai, Y.; Hu, X.; Wang, H.; Yan, Y.; Zou, X.; Ding, H.; Zhang, Y.; Timofte, R.; and Van Gool, L. 2022. Flowguided sparse transformer for video deblurring. ICML. Lin, S.; Zhang, J.; Pan, J.; Jiang, Z.; Zou, D.; Wang, Y.; Chen, J.; and Ren, J. 2020. Learning event-driven video deblurring and interpolation. In ECCV. Matsushita, Y.; Ofek, E.; Ge, W.; Tang, X.; and Shum, H.-Y. 2006. Full-frame video stabilization with motion inpainting. IEEE TPAMI. Mei, C.; and Reid, I. 2008. Modeling and generating complex motion blur for real-time tracking. In CVPR. Nah, S.; Hyun Kim, T.; and Mu Lee, K. 2017. Deep multiscale convolutional neural network for dynamic scene deblurring. In CVPR. Nah, S.; Son, S.; and Lee, K. M. 2019. 
Recurrent neural networks with intra-frame iterations for video deblurring. In CVPR. Pan, J.; Bai, H.; and Tang, J. 2020. Cascaded deep video deblurring using temporal sharpness prior. In CVPR. Pan, J.; Xu, B.; Dong, J.; Ge, J.; and Tang, J. 2023. Deep Discriminative Spatial and Temporal Network for Efficient Video Deblurring. In CVPR. Pan, L.; Scheerlinck, C.; Yu, X.; Hartley, R.; Liu, M.; and Dai, Y. 2019. Bringing a blurry frame alive at high framerate with an event camera. In CVPR. Park, D.; Kang, D. U.; Kim, J.; and Chun, S. Y. 2020. Multi-temporal recurrent neural networks for progressive non-uniform single image deblurring with incremental temporal training. In ECCV. Purohit, K.; and Rajagopalan, A. 2020. Region-adaptive dense network for efficient motion deblurring. In AAAI. Rebecq, H.; Gehrig, D.; and Scaramuzza, D. 2018. ESIM: an open event camera simulator. In CoRL. Scheerlinck, C.; Barnes, N.; and Mahony, R. 2018. Continuous-time intensity estimation using event cameras. In ACCV. Shang, W.; Ren, D.; Zou, D.; Ren, J. S.; Luo, P.; and Zuo, W. 2021. Bringing Events Into Video Deblurring With NonConsecutively Blurry Frames. In ICCV. Su, S.; Delbracio, M.; Wang, J.; Sapiro, G.; Heidrich, W.; and Wang, O. 2017. Deep video deblurring for hand-held cameras. In CVPR. Suin, M.; Purohit, K.; and Rajagopalan, A. 2020. Spatiallyattentive patch-hierarchical network for adaptive motion deblurring. In CVPR. Sun, L.; Sakaridis, C.; Liang, J.; Jiang, Q.; Yang, K.; Sun, P.; Ye, Y.; Wang, K.; and Van Gool, L. 2022. Event-Based Fusion for Motion Deblurring with Cross-modal Attention. In ECCV. Sun, L.; Sakaridis, C.; Liang, J.; Sun, P.; Cao, J.; Zhang, K.; Jiang, Q.; Wang, K.; and Van Gool, L. 2023. Event-Based Frame Interpolation with Ad-hoc Deblurring. In CVPR. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6538 Tao, X.; Gao, H.; Shen, X.; Wang, J.; and Jia, J. 2018. Scalerecurrent network for deep image deblurring. In CVPR. Tsai, F.-J.; Peng, Y.-T.; Tsai, C.-C.; Lin, Y.-Y.; and Lin, C.W. 2022. BANet: A Blur-aware Attention Network for Dynamic Scene Deblurring. IEEE TIP. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In NeurIPS. Vitoria, P.; Georgoulis, S.; Tulyakov, S.; Bochicchio, A.; Erbach, J.; and Li, Y. 2023. Event-Based Image Deblurring with Dynamic Motion Awareness. In ECCVW. Wang, B.; He, J.; Yu, L.; Xia, G.-S.; and Yang, W. 2020. Event enhanced high-quality image recovery. In ECCV. Wang, X.; Chan, K. C.; Yu, K.; Dong, C.; and Change Loy, C. 2019. Edvr: Video restoration with enhanced deformable convolutional networks. In CVPRW. Wang, Y.; Lu, Y.; Gao, Y.; Wang, L.; Zhong, Z.; Zheng, Y.; and Yamashita, A. 2022a. Efficient video deblurring guided by motion magnitude. ECCV. Wang, Z.; Cun, X.; Bao, J.; Zhou, W.; Liu, J.; and Li, H. 2022b. Uformer: A general u-shaped transformer for image restoration. In CVPR. Wulff, J.; and Black, M. J. 2014. Modeling blurred video with layers. In ECCV. Xiang, X.; Wei, H.; and Pan, J. 2020. Deep video deblurring using sharpness features from exemplars. IEEE TIP. Xu, F.; Yu, L.; Wang, B.; Yang, W.; Xia, G.-S.; Jia, X.; Qiao, Z.; and Liu, J. 2021. Motion Deblurring with Real Events. In ICCV. Yang, W.; Wu, J.; Ma, J.; Li, L.; Dong, W.; and Shi, G. 2022. Learning for Motion Deblurring with Hybrid Frames and Events. In ACM MM. Zamir, S. W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F. S.; and Yang, M.-H. 2022. 
Restormer: Efficient transformer for high-resolution image restoration. In CVPR. Zamir, S. W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F. S.; Yang, M.-H.; and Shao, L. 2021. Multi-stage progressive image restoration. In CVPR. Zhang, H.; Dai, Y.; Li, H.; and Koniusz, P. 2019. Deep stacked hierarchical multi-patch network for image deblurring. In CVPR. Zhang, X.; and Yu, L. 2022. Unifying Motion Deblurring and Frame Interpolation with Events. In CVPR. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; and Fu, Y. 2020. Residual dense network for image restoration. IEEE TPAMI. Zhong, Z.; Gao, Y.; Zheng, Y.; and Zheng, B. 2020. Efficient spatio-temporal recurrent neural network for video deblurring. In ECCV. Zhou, S.; Zhang, J.; Pan, J.; Xie, H.; Zuo, W.; and Ren, J. 2019. Spatio-temporal filter adaptive network for video deblurring. In ICCV. Zhu, C.; Dong, H.; Pan, J.; Liang, B.; Huang, Y.; Fu, L.; and Wang, F. 2022. Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In AAAI. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6539 | 2024 | 726 |
18,547 | DGL: Dynamic Global-Local Prompt Tuning for Text-Video Retrieval Xiangpeng Yang1, Linchao Zhu2, Xiaohan Wang2, Yi Yang2 * 1 ReLER, AAII, University of Technology Sydney 2 CCAI, Zhejiang University [email protected], [email protected], {zhulinchao,yangyics}@zju.edu.cn Abstract Text-video retrieval is a critical multi-modal task to find the most relevant video for a text query. Although pretrained models like CLIP have demonstrated impressive potential in this area, the rising cost of fully finetuning these models due to increasing model size continues to pose a problem. To address this challenge, prompt tuning has emerged as an alternative. However, existing works still face two problems when adapting pretrained image-text models to downstream video-text tasks: (1) The visual encoder could only encode frame-level features and failed to extract globallevel general video information. (2) Equipping the visual and text encoder with separated prompts failed to mitigate the visual-text modality gap. To this end, we propose DGL, a cross-modal Dynamic prompt tuning method with GlobalLocal video attention. In contrast to previous prompt tuning methods, we employ the shared latent space to generate local-level text and frame prompts that encourage intermodal interaction. Furthermore, we propose modeling video in a global-local attention mechanism to capture global video information from the perspective of prompt tuning. Extensive experiments reveal that when only 0.67% parameters are tuned, our cross-modal prompt tuning strategy DGL outperforms or is comparable to fully finetuning methods on MSR-VTT, VATEX, LSMDC, and ActivityNet datasets. Code will be available at https://github.com/knightyxp/DGL Introduction With the recent advancement of large-scale contrastive image-text pretraining methods i.e., CLIP (Radford et al. 2021), the field of TVR (Text-Video Retrieval) has experienced many works (Luo et al. 2022; Gorti et al. 2022; Zhao et al. 2022; Liu et al. 2022; Ma et al. 2022; Wang et al. 2022a) to adapt image-text pretrained models like CLIP to the video-text domain and already achieve the promising performance. These approaches incur a large storage burden in actual scenarios because they need to store distinct new models for different tasks. However, as the capacity of pretrained models is rapidly expanding nowadays, i.e., BEIT-3 (Wang et al. 2022b) has 1.9B parameters, and BLIP-L/14 has 578M parameters. Fully finetuning the entire model for *Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: In Fig (a), we observed that frame attention methods, like CLIP4Clip, often emphasize non-semantic corners, missing the protagonist’s action. This led us to design the global-local video attention for capturing global-level crossframe dynamics. Fig (b) showcases a performance comparison on MSRVTT: DGL outperforms six PEFL methods and fine-tuned CLIP4Clip while updating minimal parameters. each downstream task requires maintaining separate model weights for every dataset, hindering the feasibility of deployment given the growing model capacities. To address this problem, inspired by the recent success of prompt tuning in both NLP (Lester, Al-Rfou, and Constant 2021; Liu et al. 2021) and common visual recognition tasks (Jia et al. 2022), we continue to introduce prompt tuning to the cross-modal domain. 
In this way, we only need to store the parameters of a few prompt vectors for various retrieval tasks and keep the pretrained model backbone frozen, thus The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6540 reducing the total parameter cost. Efficient Prompt (Ju et al. 2022) is the first work that has attempted prompt tuning in this area, introducing learnable prompt vectors in the text input while viewing the video as separate frames. Despite incorporating an additional transformer for temporal encoding, the performance remains unsatisfactory. VoP (Huang et al. 2023), another recent prompt tuning approach, designs three kinds of videospecific prompts but optimizes the dual branches’ prompts independently. We argue these current methods still fail to handle two key challenges when applying prompt tuning in text-video retrieval. (1) Cross-modal alignment: Existing schemes, like those in VoP and Efficient Prompt, optimize the two branches separately, making it challenging for the model to learn mutual cross-modal information effectively. (2) General video information extraction: Since CLIP is pretrained on image-text pairs, its primary focus is on locallevel frame features rather than holistic, global-level video information. This inherent design leads to potential pitfalls when used directly for TVR tasks. In the top images of Fig 1 (a), the attention weights of CLIP4Clip’s CLS tokens reveal this limitation. Specifically, the CLS tokens overlook the action of “hit the ball” and instead allocate more attention to the upper-right corner – a semantically void region. To address the aforementioned issues, we propose dynamic global-local prompt tuning (coined as DGL) for textvideo retrieval. Our approach generates dynamic local-level prompts (text and frame prompts) from a shared latent space. This allows for joint optimization and ensures the alignment of the two modalities. Moreover, we propose global-local video attention to model videos from both the global and local levels, capturing inter-frame temporal information with the global prompt and focusing on each frame’s information with the local frame prompts. From a qualitative standpoint, the bottom image of Fig 1(a) clearly shows the effectiveness of DGL. In contrast to CLIP4Clip, our DGL can focus on the “hit” action and the ball’s trajectory into the pit. This demonstrates that our method can efficiently capture temporal dynamics. Besides, on the quantitative front, as shown in Fig 1(b), our method achieves the best trade-off between trainable parameters and performance. More specifically, with only 9.57M parameters updated, DGL achieves 45.8 R@1 on MSRVTT. These results demonstrate the importance of cross-modal interaction and a comprehensive understanding of video information. We undertake extensive experiments on four benchmarks, including MSR-VTT, VATEX, LSMDC, and ActivityNet. Our contributions can be summarized as follows: • We propose to generate dynamic cross-modal prompts from the shared latent space to ensure the cross-modal interaction. • We propose a global-local attention mechanism for a comprehensive understanding of input video, facilitating effective learning of cross-frame temporal dynamics. • Compared to the fully finetuning CLIP4Clip and other prompt tuning methods, our DGL achieves superior or equivalent performance on R@1 across four public datasets while reducing 99.3% of trainable parameters. Related Work Text-Video Retrieval. Text-video retrieval is a prevalent task in multimodal learning. 
Previous works like (Zhu and Yang 2020; Wang, Zhu, and Yang 2021; Sun et al. 2019; Bain et al. 2021; Lei et al. 2021; Liang et al. 2023) utilize abundant video information for multimodal learning. With the pretrained models like CLIP (Radford et al. 2021) gaining traction, CLIP4Clip (Luo et al. 2022) proposed to finetune CLIP on text-video retrieval by adding extensive similarity calculation mechanisms, which shows good performance on several benchmarks. This inspired follow-up research (Gorti et al. 2022; Bogolin et al. 2022; Liu et al. 2022; Zhao et al. 2022; Fang et al. 2021) which delved deeper into cross-modal learning. Recent research (Wu et al. 2023; Jin et al. 2023a,b) have introduced external tools for enhanced retrieval but predominantly utilize features extracted from CLIP. Our approach continues to build upon the foundation set by CLIP4Clip, emphasizing efficient parameter learning within the encoder. Parameter Efficient Methods. Fully fine-tuning is a common approach to adapting pretrained models into downstream tasks, but it can be inefficient due to large parameter sizes and time costs. To address this, parameter-efficient learning (PEFL) has been proposed, including adapter and prompt tuning methods. Adapters (Houlsby et al. 2019) offers a plug-and-play approach by adding modules to pretrained networks. VL-Adapter (Sung, Cho, and Bansal 2022) further extends the adapter to vision-and-language tasks. Recently, (Jiang et al. 2022) introduced a weight-share mechanism and adopted the query-scoring frame features reweighting method proposed in (Bain et al. 2022) to boost performance. (Zhang et al. 2023) proposed a temporal adaptation and cross-modal interaction modules. Prompt tuning (Lester, Al-Rfou, and Constant 2021) is another parameterefficient choice by introducing additional learnable parameters at the model’s input. (Liu et al. 2021) further applies prompts to each encoder layer for more knowledge probing. Adaptations to models like CLIP for specific tasks have been explored in (Zhou et al. 2022b,a), with further refinements in image and cross-modal domains (Jia et al. 2022; Zang et al. 2022; Khattak et al. 2022). In the text-video retrieval task, Efficient Prompt (Ju et al. 2022) tried incorporating additional prompts into text queries but overlooked the potential of visual prompts in this context. While (Huang et al. 2023) made advancements with video-specific prompts, they still hard to address cross-modal interactions in prompt tuning. In this paper, we delve deeper into cross-modal prompt tuning and seek effective ways to represent videos, considering their inherent complexity compared to text. Methods In this section, we will illustrate the details of our method. Firstly, we provide a comprehensive overview of the proposed DGL framework. Furthermore, we will introduce how to design global-local video attention to learn discriminative features from holistic video information. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6541 Figure 2: Overview of our Dynamic Global-Local prompt tuning Framework. DGL consists of the Local Prompt Generation, Text Encoder, and Visual Encoder. During downstream training, all the encoders are frozen, and only the parameter pictured with fire is trainable. The Local Prompt Generation ensures cross-modal interaction at the word-frame level, and the GlobalLocal Video Attention hints to the visual encoder to extract general video information from different perspectives. 
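As the caption notes, only the prompt-related parameters receive gradients during downstream training. A generic way to realize such a setup is sketched below; this is a hypothetical helper of our own, not part of DGL's released code, and it simply relies on a naming convention for the prompt parameters.

```python
import torch.nn as nn

def keep_only_prompt_params_trainable(model: nn.Module, keyword: str = "prompt") -> int:
    """Freeze the pretrained backbone; leave only prompt-related parameters trainable.
    Returns the number of trainable parameters as a quick sanity check."""
    trainable = 0
    for name, param in model.named_parameters():
        # e.g. parameters registered as 'frame_prompts', 'global_prompt', 'prompt_proj'
        param.requires_grad = keyword in name
        trainable += param.numel() if param.requires_grad else 0
    return trainable

# Usage sketch: optimize only the surviving parameters, with the settings reported
# later in the implementation details (AdamW, weight decay 0.2, lr 5e-3).
# optimizer = torch.optim.AdamW(
#     [p for p in model.parameters() if p.requires_grad], lr=5e-3, weight_decay=0.2)
```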
Global-Local Prompt Tuning Framework Current parameter-efficient methods in TVR, such as Efficient Prompt (Ju et al. 2022), and VoP (Huang et al. 2023) often neglect the crucial interaction between the visual and text modalities, as they focus only on inserting prompt vectors in the text input or prompting the dual branches separately. Additionally, deep prompts are necessary for more complex tasks like TVR, as evidenced by studies like VPT (Jia et al. 2022), which showed shallow prompts to be less effective for traditional visual tasks like classification and segmentation. Meanwhile, classic fully finetuning methods like CLIP4Clip (Luo et al. 2022), TS2Net, Cap4Video, and HBI (Liu et al. 2022; Wu et al. 2023; Jin et al. 2023a) still process videos as discrete frames, owing to CLIP’s textimage pretraining. This complicates modeling inter-frame relationships and capturing temporal information. To address these issues, we introduce DGL, a dynamic global-local prompt tuning method that facilitates global video-level information learning and ensures cross-modal alignment between frame-level visual features and wordlevel text features. Our DGL framework, as illustrated in Fig 2, consists of a local prompt generation module, a text encoder, and a visual encoder. In the text branch, following (Ju et al. 2022), we design to learn a set of deep text prompts, including prefix text prompts T pre i = {T pre;j i ∈Rd|j ∈N, 1 ≤j ≤npre} and postfix text prompts T post i = {T post;j i ∈Rd|j ∈N, 1 ≤ j ≤npost} for each layer index i. The prefix text prompts are added to the input text query before the word embedding, while the postfix text prompts are placed afterward. Here, d is the dimension of the prompt vectors, npre and npost denote the numbers of prefix and postfix prompts, respectively. In the visual branch, to perform global-local video attention, we consider learning a single layer of ng global prompts G = {Gj ∈Rd|j ∈N, 1 ≤j ≤ng} to capture the global information and a set of deep frame prompts F k i = {F k;j i ∈Rd|j ∈N, 1 ≤k ≤t, 1 ≤j ≤nf} to extract frame information. Here, k is frame index, t is the number of frames, j is the length index for frame prompts. nf and ng are the length of each frame prompts and global prompts. The visual encoder input contains global video prompts, frame patch tokens, and frame prompts. The text encoder input consists of text prompts and word tokens. Given the different choices of prompt generation modules, we dub our method as DGL-Transformer and DGL-Linear, respectively. Local Prompt Generation We utilize two methods to generate local-level cross-modal prompts from the shared latent space and optimize them jointly. The first approach is the unified prompt transformer, and the second approach is the unified linear projection. We describe the details of these two methods as follows: Unified Prompt Transformer. To exploit the crossmodal interaction at the fine-grained level, inspired by UPT (Zang et al. 2022), we propose to generate frame prompts and text prompts from a unified transformer (short as “trans”). For each layer, we merge text and visual prompts to form the unified prompt Ui = [T pre i , T post i , F k i ], processed in a unified prompt transformer for cross-modal interaction. We learn layer-wise unified prompts U trans i for both text and visual encoders. After transformation, U trans i splits into three parts {T pre i , T post i , F k i }, where the first two are sent into the text encoder and the last into the visual encoder. 
Notably, our unified prompt transformer only has a single layer. The hidden dimension matches the visual encoder’s. Besides, we use an MLP Layer to adjust the text The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6542 prompts’ dimension. Unified Linear Projection. To further reduce the parameter cost, we consider using two simple linear layers, U pre linear and U post linear, to map the frame prompts to the text prefix and text postfix prompts in each encoder layer respectively. This process can be formulated as follows: T pre i = U pre linear(F k i ) 1 ≤i ≤N (1) T post i = U post linear(F k i ) 1 ≤i ≤N (2) Here i is the layer index, N is the total layers, and is 12 in our DGL, the same as the total encoder layers in CLIP. The length of each frame prompt and text prefix/postfix prompts are the same. All layers’ multi-modal local prompts share the two projection layers. Therefore, the unified linear projection minimizes the parameter cost. The two local prompt generation modules enable efficient interaction by mapping different modal prompts from the shared latent space (either the lightweight transformer or the linear layers), which ensures the cross-modal alignment between video frames and text words. Text Encoder and Visual Encoder Text Encoder. In the text branch, combined with the text prefix prompts T pre i and text postfix prompts T post i , we get the ith layer’s text embedding as follows: [ , Wi, ] = Lt i([T pre i−1, Wi−1, T post i−1 ]) (3) where [·, ·, ·] refers to the concatenation operation, Lt i represents the ith text encoder layer, Wi is the word embedding of ith text encoder layer. The prefix text prompts and postfix prompts are updated by the local prompt generation module in each layer. We get the final text representation by projecting from the final layer’s word embedding WN. Visual Encoder. For the ith ViT layer, combined with the global prompts Gi and local frame prompts F k i , the prompt augmented ViT layer can be formulated as: [Gi, Ck i , , Ek i ] = Lv i ([Gi−1, Ck i−1, F k i−1, Ek i−1]) (4) where i is the layer index, and k is the frame index. Lv i represents the ith visual encoder layer. Ck i , Ek i represent each frame’s [CLASS] embeddings and patch embeddings, respectively. Besides, since the global prompts Gi are only prepended in the first visual layer, therefore, {Gi|i ̸= 1} means the global prompt embedding for the ith ViT layer. Leveraging Global-Local Video Attention In the text-video matching task, some text caption is related to single or short-latest frames. Thus the local frames feature is a must and basic. While some text query is a summary of behaviors of a video, therefore global video feature is also significant. Following (Xue et al. 2022), we propose to devise local frame and global video attention in a share-parameter manner to extract frame and global video features based on the frame prompts and global prompts, respectively. Local Frame Attention. Specifically, for the local frame attention, we want each frame prompt could perceive each G G F F F F F F F G F F F G F F Repeat𝐹𝑟𝑎𝑚𝑒 (𝑡 −1) 𝐹𝑟𝑎𝑚𝑒 (𝑡+ 1) 𝐹𝑟𝑎𝑚𝑒 𝑡 (a) Local Frame Attention F Query Key Frame Prompt Global Prompt Mask G (b) Global Video Attention F G F F F G F F Figure 3: Illustration of Global-Local Video Attention. The patches in the green mask serve as the queries in selfattention, and the patches in the orange mask are the key or value in self-attention. 
In the local frame attention, frame prompts serve as the query to investigate fine-grained local information in each frame; In global video attention, the global prompt acts as the query to excavate the global-level video information from all frames. local frame information. As shown in Fig 3 (a), in the ith ViT layer, we concatenate the [CLASS] embeddings, frame prompt embeddings, and frame patch embeddings along the temporal dimension k, therefore, [Ck i−1, F k i−1, Ek i−1] serve as the query Qloc i−1. To ensure each frame prompt could perceive global information, we repeat global prompt embeddings k times and concatenate them all. Therefore, we get [Gk i−1, Ck i−1, F k i−1, Ek i−1] as the key Kloc i−1. Our Local frame attention can be formulated as follows: Qloc i−1 = [Ck i−1, F k i−1, Ek i−1] (5) Kloc i−1 = [Gk i−1, Ck i−1, F k i−1, Ek i−1] (6) [Ck i , F k i , Ek i ] = Att(Qloc i−1, Kloc i−1, V loc i−1) (7) Global Video Attention. For global video attention, as shown in Fig 3 (b), global prompts Gi need to learn global discriminant information. Therefore global prompts are attended to all frames’ patch embeddings and prompt embeddings. This process can be formulated as follows: Qglo i−1 = Gi−1 (8) Kglo i−1 = [Gi−1, · · · , Xk i−1, V k i−1, Ek i−1] (9) Gi = Att(Qglo i−1, Kglo i−1, V glo i−1) (10) For each visual encoder layer, our local frame attention and global video attention are multi-head attention, using the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6543 CLIP pretrained visual encoder corresponding layer’s parameter. But the query, key, and value are not the same. We perform the two attentions in query mode so that the query in the proposed mechanisms can perceive local frame-level and global video-level representations. Additionally, the two attention mechanisms are implemented in the sharingparameter manner, which has two advantages: (1) Sharing parameter could reduce half parameters cost in the visual encoder. (2) Sharing parameter could excavate the pretrained CLIP visual encoder’s potential extremely, which is validated in Table 3 (b). Similarity Calculation. Since the global prompts perceive each frame’s information, we consider them a combination of fine-grained frames and global-discriminant video representations. Thus, we output the first global prompt, computing its similarity with the text representation. Compared to the parameter-rich calculator, such as using four transformer layers to fuse temporal information in CLIP4Clip (Luo et al. 2022), our method is parameterfree in similarity computing. In addition, compared to the video feature re-weighting methods, like computing queryrelated frame features by cross-attention in (Gorti et al. 2022), through inner-product (Bain et al. 2022) or TopK (Liu et al. 2022), and then re-weighting the output video feature according to the frame-query similarity, our method is faster and query-agnostic. Especially in realistic applications, we could save much inference time because we only compute video and text features once. In contrast, the above re-weight methods need to be computed twice. Objective Function. In the training process, following (Luo et al. 2022), we still adopt symmetric cross-entropy loss. During downstream training, both the text and visual encoders are frozen. For various text-video retrieval scenarios, only the parameters of text/visual prompts and the local prompt generation module need to be stored. We only need to reuse a copy of the pretrained model (i.e. 
CLIP), which reduces storage costs to the greatest extent. Discussion of other PEFL methods. Adapters employ down-projection with nonlinear activation and up-projection mappings in each layer. However, adapters need a large intermediate compression dimension to maintain performance, undermining efficiency. Our test (Table 1) shows that adapter methods exceed the GPU memory of fully finetuned CLIP4Clip by over 50%, using above 30GB against CLIP4Clip’s 20.8GB. Also, these methods alter the original model’s structure, complicating deployment. Therefore, we mainly focus on prompt tuning in this study. Experiments Datasets and Evaluation Metrics. We conduct experiments on four datasets including MSR-VTT (Xu et al. 2016), LSMDC (Rohrbach et al. 2015), ActivityNet (Heilbron et al. 2015) and VATEX (Wang et al. 2019). To measure the retrieval performance, we use standard metrics: recall at rank k (R@K, higher is better) and mean rank (MnR, lower is better). R@K computes the percentage of correct videos among the top K videos retrieved, we report the R@1, R@5, and R@10 results for each experiment. Mean rank computes the average rank of all correct answers. Compared Baselines. We evaluate our approach against six strong baselines. Efficient Prompt (Ju et al. 2022) introduces prompt tuning in TVR by adding text prompts to text encoder input and a two-layer transformer for temporal modeling. VPT (Jia et al. 2022) is a visual recognition method using prompt tuning, with VPT-deep showing notable results. UPT (Zang et al. 2022) generates both visual and text prompts from a unified transformer layer. Visual-Text Adapter. Following (Houlsby et al. 2019), we add visual/text adapters after self-attention in each layer. Video-Text Adapter. Based on Visual-Text Adapter, we replace the adapter in the visual encoder with ST-adapter (Pan et al. 2022) to enhance the capability of extracting temporal information, which is inserted before multi-head attention. CLIP4CLip (Luo et al. 2022) is the fully finetuning baseline. We only compare the mean-pooling type for fairness, since our similarity calculator is also parameter-free. Implementation Details. We use the CLIP (ViT-B/32) as the pre-trained model. During training, all the original parameters of CLIP are frozen unless explicitly mentioned. We apply a warm-up strategy followed by a cosine learning rate policy, using the AdamW optimizer with decoupled weight decay set to 0.2. The initial learning rate is 1e-2 for LSMDC and 5e-3 for the other three datasets. The max epochs are 10 for all datasets. Following CLIP4Clip, we uniformly sample 12 frames for MSRVTT, LSMDC, and VATEX and set the caption token length to 32. For ActivityNet, the frame length and caption length are set to 64. All the videos’ short sides resize to 224, and the frame per second (fps) is 3. By default, the lengths of the frame prompts, text prefix/postfix prompts, and global prompts are all set to 4. Also, the depth of frame prompts and text prefix/postfix prompts is set to 12 by default. The inner dim of the adapter is set to 368. All experiments are done with mixed precision. Results on Benchmarks Results on MSR-VTT. Table 1 presents MSRVTT-9K results. With only 0.83MB parameters trainable, DGL-Linear (ViT-B/16) enhances R@1 by 2.7% over CLIP4Clip. For ViT-B/32, DGL-Transformer tops the list and exceeds Efficient Prompt and VoP in T→V R@1 by 9.1% and 1.2%. Across the board, DGL surpasses all adapters and prompt methods. 
Notably, DGL-Linear consumes just 18.75 GB of GPU memory, less than CLIP4Clip’s 20.8 GB. Results on the other three datasets. Table 2 displays the retrieval results on VATEX, LSMDC, and ActivityNet. For ViT-B/32, tuning only 0.83MB parameters, DGL surpasses CLIP4Clip by 0.3% and 0.7% in T→V R@1 on VATEX and LSMDC and comparable performance on ActivityNet. This underscores our method’s efficiency. Notably, we outperform Efficient Prompt by 8.0% in LSMDC’s T→V R@1 and achieve a 4.0% lead over VoP on ActivityNet. Ablation Study In this section, we thoroughly ablate DGL on MSR-VTT-9K using DGL-Transformer (ViT-B/32) unless specified. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6544 Trainable Memory Text →Video Video →Text Type Methods Params(MB) ↓Usage(GB) ↓R@1↑R@5↑R@10↑MnR↓R@1↑R@5↑R@10↑MnR↓ CLIP-ViT-B/32 Finetune CLIP4Clip 123.54 20.80 43.1 70.4 80.8 16.2 43.1 70.5 81.2 12.4 Adapter Visual-Text Adapter 11.82 30.71 39.2 65.7 76.1 17.6 40.7 68.8 77.6 13.7 Video-Text Adapter 11.94 31.59 41.1 67.0 77.1 17.4 42.6 68.4 78.4 13.8 Prompt Efficient Prompt (Ju et al. 2022) 6.35 36.7 64.6 VPT (Jia et al. 2022) 0.18 20.98 42.0 66.6 77.3 19.2 39.4 66.8 77.2 16.2 UPT (Zang et al. 2022) 9.57 23.46 42.1 67.7 78.2 16.5 42.6 70.3 79.3 12.3 VoPF+C (Huang et al. 2023) 14.10 44.6 69.9 80.3 16.3 44.5 70.7 80.6 11.5 DGL-Linear(Ours) 0.83 18.75 44.7 70.5 79.2 16.2 42.1 70.0 80.6 13.4 DGL-Transformer(Ours) 9.57 20.69 45.8 69.3 79.4 16.3 43.5 70.5 80.7 13.1 + QB-Norm(Bogolin et al. 2022) 9.57 20.69 47.0 70.4 81.0 16.4 44.9 70.7 79.6 13.3 CLIP-ViT-B/16 CLIP4Clip 123.54 25.70 45.6 71.2 80.9 15.2 43.2 72.5 80.7 10.9 VoPF+C 14.10 47.7 72.4 82.2 12.0 DGL-Linear(Ours) 0.83 22.86 48.3 71.8 80.6 13.4 45.7 74.0 82.9 10.9 + QB-Norm(Bogolin et al. 2022) 0.83 22.86 49.7 73.1 82.3 15.1 47.8 74.1 83.3 10.6 Table 1: Retrieval results on the MSR-VTT-9K dataset. VATEX LSMDC ActivityNet Type Methods R@1↑R@5↑R@10↑MnR↓R@1↑R@5↑R@10↑MnR↓R@1↑R@5↑R@10↑MnR↓ Finetune CLIP4Clip 55.9 89.2 95.0 3.9 20.7 38.9 47.2 65.3 40.5 72.4 7.5 Adapter Visual-Text Adapter 53.1 85.0 92.3 4.9 18.0 34.4 43.5 75.2 33.5 64.8 77.5 10.9 Video-Text Adapter 53.5 85.0 92.4 4.7 18.3 35.5 44.0 74.8 36.4 66.1 79.6 10.0 Prompt Efficient Prompt (Ju et al. 2022) 13.4 29.5 VoPF+P (Huang et al. 2023) 20.7 40.7 49.7 59.1 36.1 65.5 78.5 10.9 DGL-Transformer(Ours) 54.3 85.5 92.3 4.9 21.2 37.8 48.8 66.5 40.1 69.5 80.9 9.1 DGL-Linear(Ours) 56.2 87.1 93.5 4.1 21.4 39.4 48.4 64.3 38.6 69.2 81.6 9.0 + QB-Norm(Bogolin et al. 2022) 57.3 87.0 93.3 4.2 21.6 39.3 49.0 64.4 43.1 72.3 82.7 8.6 Table 2: Combined Retrieval Results for VATEX, LSMDC, and ActivityNet Datasets. cartoon of a squid on a bike looking up at a treehouse a man runs into the crowd when trying to catch a basketball DGL R@1 CLIP4Clip R@1 DGL R@1 CLIP4Clip R@1 Figure 4: Visualization of text-video retrieval results. Frames in the green box are DGL R@1 results, while those in the red box are CLIP4clip R@1 results. The output of the visual encoder. We compare four different types of visual encoder output as the final video representation as shown in Table 3 (a). We found that using the first global prompt performs best, this is because our attention is designed for the global prompt, which can perceive global information and local frame details. Ablation of global-local video attention. We assess global-local video attention by testing global/local attention and shared parameters separately. 
Table 3 (b) shows that global-local video attention with sharing parameters outperforms the other methods, demonstrating that both global and local information is crucial for text-video retrieval. Effect of the postfix text prompt. Following Efficient Prompt (Ju et al. 2022), which adds text prefix and postfix prompts only in the input layer, we extend this to all encoder layers. Table 3 (c) shows [4+X+4] deep text prompts outperforming [8+X] or [4+X], maximizing prompt potential. Verify DGL on other baselines. We evaluated DGL on different structures and other CLIP-based methods. (1) Following Token Mix (Liu et al. 2023), we integrated DGL with BLIP (ViT-B/16) (Li et al. 2022), applying global-local video attention in the frozen visual encoder. Table 3 (d) top part shows that DGL surpasses the fully finetuning/PEFL method. (2) For CLIP-based method comparison, we focus on parameter-efficient designs and compare with X-CLIP (Ma et al. 2022) by freezing the CLIP backbone for fairness. The bottom part demonstrates DGL’s effectiveness. Why project Linear from visual to text? Visual features are more complex than textual, as videos typically contain more information. Projecting simpler text to complex visual features is challenging. Table 3 (e) shows Visual2Text proThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6545 Visual Output R@1↑R@5↑R@10↑MnR↓ Text →Video First Global Prompt 45.8 69.3 79.4 16.3 Avg Global Features 43.5 69.7 79.7 16.9 Avg Local Features 41.5 68.3 77.2 16.0 Avg GL Features 42.5 68.8 77.6 15.6 Video →Text First Global Prompt 43.5 70.5 80.7 13.1 Avg Global Features 43.1 70.0 79.9 12.5 Avg Local Features 41.9 70.9 79.3 12.4 Avg GL Features 44.3 69.6 79.9 12.4 (a) Comparison of different visual output. “GL” indicates Global-Local. global local share R@1↑R@5↑R@10↑MnR↓ Text →Video ✓ 40.4 67.8 77.3 18.3 ✓ 42.0 68.7 78.2 16.9 ✓ ✓ 43.5 70.1 79.0 17.1 ✓ ✓ ✓ 45.8 69.3 79.4 16.3 Video →Text ✓ 41.3 67.8 76.8 14.3 ✓ 42.2 69.7 78.6 12.6 ✓ ✓ 43.1 69.4 80.0 13.6 ✓ ✓ ✓ 43.5 70.5 80.7 13.1 (b) Ablation of global-local video attention Position R@1↑R@5↑R@10↑MnR↓ Text →Video 4+X 43.9 70.0 79.0 16.1 8+X 45.0 70.2 80.3 15.0 4+X+4 45.8 69.3 79.4 16.3 Video →Text 4+X 42.4 70.5 80.3 12.8 8+X 43.0 70.1 81.3 12.4 4+X+4 43.5 70.5 80.7 13.1 (c) Effect of text prompt position Text →Video Methods UP(M)↓R@1↑R@5↑R@10↑ BLIP(Li et al. 2022) Full(Liu et al. 2023) 226.51 47.6 73.4 81.8 Token Mix 7.07 47.1 70.8 80.5 DGL(ours) 0.30 48.6 71.4 79.7 X-CLIP(Ma et al. 2022) X-CLIP* 8.0 39.6 66.8 76.4 +DGL-Linear 2.9 44.0 69.9 79.6 (d) Verifying DGL on other baselines, “*” indicates freeze backbone. Direction R@1↑R@5↑R@10↑MnR↓ Text →Video T→V 43.4 69.6 79.7 16.2 V→T 44.7 70.5 79.2 16.2 Video →Text T→V 43.0 70.0 79.5 12.6 V→T 42.1 70.0 80.6 13.4 (e) Abalation experiment of DGL-Linear projection direction. Method R@1↑R@5↑R@10↑MnR↓ Text →Video Baseline 43.8 68.7 80.2 16.2 DGL-Transformer 45.8 69.3 79.4 16.3 Video →Text Baseline 43.9 69.4 80.1 12.2 DGL-Transformer 43.5 70.5 80.7 13.1 (f) Effect of generating cross-modal prompts from the shared latent space. Table 3: Ablation studies on the MSRVTT-9K dataset jection achieves higher R@1, validating our claim. Generating from the shared latent space. DGLTransformer enhances cross-modal interaction and local consistency by generating prompts from a shared latent space. Compared with the divided prompt baselines while maintaining global-local video attention, the 2% T→V R@1 improvement in Table 3 (f) demonstrates its effectiveness. 
Retrieval results comparison. In Fig 4 above, our DGL model captures global details like“look up at a tree house”, while CLIP4Clip sees local cues, such as “cartoon of a squid on a bike.” In Fig 4 below, DGL identifies actions like “run into the crowd” and “catch a basketball,” whereas CLIP4Clip only recognizes “catch a basketball.” Thus, the results show that DGL perceives global video information. What can global prompt learn? As shown in Fig 5, we visualize the attention weights of the global prompt on each frame, which is the output of the visual encoder. The top figure illustrates that our global prompt effectively focuses on the temporal dynamics of “wrestling.” The bottom figure demonstrates that our global prompt can associate local information to extract global information. For example, it attends to “talks” in the first frame, “two generals” in the second and third frames, “war” in the fourth frame, and “king” in the fifth frame, and after summarizing and generalizing this information, successfully retrieves the relevant video clip. Our visualization results demonstrate that the global prompt in DGL can effectively capture temporal dynamics and global information. men are doing wrestling a man talks about a war between two generals one of which became king Figure 5: Visualization of global prompt, we plot the global prompt’s attention weight on each frame. The red text in the query corresponds to the video’s discriminative features. Conclusion In this work, we propose DGL, which generates local-level prompts for text and vision branches from a shared latent space, enhancing cross-modal interaction. Also, we propose a new attention mechanism for creating local and global prompts tailored to videos, which stands out in comparison to the existing literature where each frame is encoded separately by a fixed encoder. Extensive experiments show that, compared to the fully finetuning method or naive PEFL methods, our method only trains 0.83M parameters and outperforms them on four text-video retrieval datasets. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6546 Acknowledgments This work was supported in part by the Australian Research Council (ARC) under Grant DP200100938. We thank Yiyuan Yang and Shuai Zhao for their helpful discussions. References Bain, M.; Nagrani, A.; Varol, G.; and Zisserman, A. 2021. Frozen in time: A joint video and image encoder for end-toend retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1728–1738. Bain, M.; Nagrani, A.; Varol, G.; and Zisserman, A. 2022. A CLIP-Hitchhiker’s Guide to Long Video Retrieval. arXiv preprint arXiv:2205.08508. Bogolin, S.-V.; Croitoru, I.; Jin, H.; Liu, Y.; and Albanie, S. 2022. Cross modal retrieval with querybank normalisation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5194–5205. Fang, H.; Xiong, P.; Xu, L.; and Chen, Y. 2021. Clip2video: Mastering video-text retrieval via image clip. arXiv preprint arXiv:2106.11097. Gorti, S. K.; Vouitsis, N.; Ma, J.; Golestan, K.; Volkovs, M.; Garg, A.; and Yu, G. 2022. X-pool: Cross-modal languagevideo attention for text-video retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5006–5015. Heilbron, F. C.; Escorcia, V.; Ghanem, B.; and Niebles, J. C. 2015. Activitynet: A large-scale video benchmark for human activity understanding. In 2015 IEEE conference on computer vision and pattern recognition (CVPR), 961–970. IEEE. 
Houlsby, N.; Giurgiu, A.; Jastrzebski, S.; Morrone, B.; De Laroussilhe, Q.; Gesmundo, A.; Attariyan, M.; and Gelly, S. 2019. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning, 2790–2799. PMLR. Huang, S.; Gong, B.; Pan, Y.; Jiang, J.; Lv, Y.; Li, Y.; and Wang, D. 2023. VoP: Text-Video Co-Operative Prompt Tuning for Cross-Modal Retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 6565–6574. Jia, M.; Tang, L.; Chen, B.-C.; Cardie, C.; Belongie, S.; Hariharan, B.; and Lim, S.-N. 2022. Visual prompt tuning. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIII, 709–727. Springer. Jiang, H.; Zhang, J.; Huang, R.; Ge, C.; Ni, Z.; Lu, J.; Zhou, J.; Song, S.; and Huang, G. 2022. Cross-modal adapter for text-video retrieval. arXiv preprint arXiv:2211.09623. Jin, P.; Huang, J.; Xiong, P.; Tian, S.; Liu, C.; Ji, X.; Yuan, L.; and Chen, J. 2023a. Video-text as game players: Hierarchical banzhaf interaction for cross-modal representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2472–2482. Jin, P.; Li, H.; Cheng, Z.; Li, K.; Ji, X.; Liu, C.; Yuan, L.; and Chen, J. 2023b. Diffusionret: Generative text-video retrieval with diffusion model. arXiv preprint arXiv:2303.09867. Ju, C.; Han, T.; Zheng, K.; Zhang, Y.; and Xie, W. 2022. Prompting visual-language models for efficient video understanding. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXV, 105–124. Springer. Khattak, M. U.; Rasheed, H.; Maaz, M.; Khan, S.; and Khan, F. S. 2022. Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117. Lei, J.; Li, L.; Zhou, L.; Gan, Z.; Berg, T. L.; Bansal, M.; and Liu, J. 2021. Less is more: Clipbert for video-andlanguage learning via sparse sampling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7331–7341. Lester, B.; Al-Rfou, R.; and Constant, N. 2021. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691. Li, J.; Li, D.; Xiong, C.; and Hoi, S. 2022. BLIP: Bootstrapping Language-Image Pre-training for Unified VisionLanguage Understanding and Generation. In ICML. Liang, C.; Wang, W.; Zhou, T.; Miao, J.; Luo, Y.; and Yang, Y. 2023. Local-global context aware transformer for language-guided video segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence. Liu, X.; Ji, K.; Fu, Y.; Du, Z.; Yang, Z.; and Tang, J. 2021. P-tuning v2: Prompt tuning can be comparable to finetuning universally across scales and tasks. arXiv preprint arXiv:2110.07602. Liu, Y.; Xiong, P.; Xu, L.; Cao, S.; and Jin, Q. 2022. Ts2net: Token shift and selection transformer for text-video retrieval. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XIV, 319–335. Springer. Liu, Y.; Xu, L.; Xiong, P.; and Jin, Q. 2023. Token Mixing: Parameter-Efficient Transfer Learning from ImageLanguage to Video-Language. In Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI). Luo, H.; Ji, L.; Zhong, M.; Chen, Y.; Lei, W.; Duan, N.; and Li, T. 2022. CLIP4Clip: An empirical study of CLIP for end to end video clip retrieval and captioning. Neurocomputing, 508: 293–304. Ma, Y.; Xu, G.; Sun, X.; Yan, M.; Zhang, J.; and Ji, R. 2022. 
X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval. In Proceedings of the 30th ACM International Conference on Multimedia, 638–647. Pan, J.; Lin, Z.; Zhu, X.; Shao, J.; and Li, H. 2022. Stadapter: Parameter-efficient image-to-video transfer learning. Advances in Neural Information Processing Systems, 35: 26462–26477. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, 8748–8763. PMLR. Rohrbach, A.; Rohrbach, M.; Tandon, N.; and Schiele, B. 2015. A dataset for movie description. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3202–3212. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6547 Sun, C.; Myers, A.; Vondrick, C.; Murphy, K.; and Schmid, C. 2019. Videobert: A joint model for video and language representation learning. In Proceedings of the IEEE/CVF international conference on computer vision, 7464–7473. Sung, Y.-L.; Cho, J.; and Bansal, M. 2022. Vladapter: Parameter-efficient transfer learning for vision-andlanguage tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5227–5237. Wang, Q.; Zhang, Y.; Zheng, Y.; Pan, P.; and Hua, X.-S. 2022a. Disentangled representation learning for text-video retrieval. arXiv preprint arXiv:2203.07111. Wang, W.; Bao, H.; Dong, L.; Bjorck, J.; Peng, Z.; Liu, Q.; Aggarwal, K.; Mohammed, O. K.; Singhal, S.; Som, S.; et al. 2022b. Image as a foreign language: Beit pretraining for all vision and vision-language tasks. arXiv preprint arXiv:2208.10442. Wang, X.; Wu, J.; Chen, J.; Li, L.; Wang, Y.-F.; and Wang, W. Y. 2019. Vatex: A large-scale, high-quality multilingual dataset for video-and-language research. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 4581–4591. Wang, X.; Zhu, L.; and Yang, Y. 2021. T2vlad: global-local sequence alignment for text-video retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5079–5088. Wu, W.; Luo, H.; Fang, B.; Wang, J.; and Ouyang, W. 2023. Cap4Video: What Can Auxiliary Captions Do for TextVideo Retrieval? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10704–10713. Xu, J.; Mei, T.; Yao, T.; and Rui, Y. 2016. Msr-vtt: A large video description dataset for bridging video and language. In Proceedings of the IEEE conference on computer vision and pattern recognition, 5288–5296. Xue, H.; Sun, Y.; Liu, B.; Fu, J.; Song, R.; Li, H.; and Luo, J. 2022. CLIP-ViP: Adapting Pre-trained Image-Text Model to Video-Language Representation Alignment. arXiv preprint arXiv:2209.06430. Zang, Y.; Li, W.; Zhou, K.; Huang, C.; and Loy, C. C. 2022. Unified vision and language prompt learning. arXiv preprint arXiv:2210.07225. Zhang, B.; Jin, X.; Gong, W.; Xu, K.; Zhang, Z.; Wang, P.; Shen, X.; and Feng, J. 2023. Multimodal video adapter for parameter efficient video text retrieval. arXiv preprint arXiv:2301.07868. Zhao, S.; Zhu, L.; Wang, X.; and Yang, Y. 2022. Centerclip: Token clustering for efficient text-video retrieval. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 970–981. Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022a. Conditional prompt learning for vision-language models. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16816–16825. Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022b. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9): 2337–2348. Zhu, L.; and Yang, Y. 2020. Actbert: Learning global-local video-text representations. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8746–8755. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6548 | 2024 | 727 |
18,548 | Diverse and Stable 2D Diffusion Guided Text to 3D Generation with Noise Recalibration Xiaofeng Yang1, Fayao Liu2, Yi Xu3, Hanjing Su4, Qingyao Wu5, Guosheng Lin1* 1Nanyang Technological University, Singapore 2Institute for Infocomm Research, A*STAR, Singapore 3OPPO US Research Center, USA 4Tencent, China 5South China University of Technology, China {yang.xiaofeng, gslin}@ntu.edu.sg Abstract In recent years, following the success of text guided image generation, text guided 3D generation has gained increasing attention among researchers. Dreamfusion is a notable approach that enhances generation quality by utilizing 2D text guided diffusion models and introducing SDS loss, a technique for distilling 2D diffusion model information to train 3D models. However, the SDS loss has two major limitations that hinder its effectiveness. Firstly, when given a text prompt, the SDS loss struggles to produce diverse content. Secondly, during training, SDS loss may cause the generated content to overfit and collapse, limiting the model’s ability to learn intricate texture details. To overcome these challenges, we propose a novel approach called Noise Recalibration algorithm. By incorporating this technique, we can generate 3D content with significantly greater diversity and stunning details. Our approach offers a promising solution to the limitations of SDS loss. Introduction Text guided 3D generation is a challenging task that aims to generate 3D content based on textual prompts. This approach has numerous applications in various fields such as gaming, virtual environments, automation, AI augmented design, and 3D data augmentations. However, the lack of annotated 3D data makes this task extremely difficult. Current 3D generation methods (Chan et al. 2022; Liao et al. 2020; Henzler, Mitra, and Ritschel 2019; Nguyen-Phuoc et al. 2019, 2020; Wu et al. 2016; Zhu et al. 2018; Zhou et al. 2021; Yu et al. 2021), which typically focus on generating categorical objects, often require pose supervision during training, resulting in a significant gap between 3D generation and text guided generation. To address this challenge and achieve photo-realistic 3D object and scene generation, recent methods have utilized 2D models trained on 2D image data. For example, Dreamfield (Jain et al. 2022) leverages the contrastive image text model CLIP (Radford et al. 2021) to train Neural Radiance Field (NeRF) (Mildenhall et al. 2021) by measuring the *Corresponding Author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. similarity between rendered images and text. Meanwhile, Dreamfusion (Poole et al. 2022) represents a significant breakthrough in improving generation quality by using 2D language-guided diffusion models to train NeRF. Specifically, Dreamfusion (Poole et al. 2022) starts from a random initialized NeRF and adjusts the NeRF weights by calculating the Score Distillation Sampling (SDS) loss between the NeRF rendered images and the text prompt based on the 2D diffusion model’s output. A detailed illustration of the Dreamfusion method can be found in the third Section. Despite the success of SDS loss, it faces two major issues. First, as also witnessed in Dreamfusion, the SDS loss can hardly generate diversified content given different random seeds during NeRF training. The authors attribute the reason to that the smoothed density may not contain many distinct modes at high noise levels (Poole et al. 2022). 
In experiments, we also witness the similar issue – given a fixed text prompt, the randomness in NeRF optimization does not guarantee sufficient generation variety. Second, the degeneration issue. The SDS loss can cause the learned NeRF to gradually collapse, preventing it from learning high-quality texture details. This issue occurs not only with the original SDS loss, but also with its successors like VSD loss (Wang et al. 2023). We show a visual illustration of the two problems in Fig. 2. In this paper, we thoroughly investigate the two issues and propose a Noise-Recalibration SDS (NR-SDS) algorithm to overcome them. The NR-SDS algorithm contains two parts: the single noise training and the Noise Recalibration loss. First, we propose a single noise training scheme to address the diversity issue. We demonstrate that the original SDS loss is searching for the optimal mode using random noises from the entire Gaussian space. However, the generation process can be limited to a single noise sampled from the Gaussian space, leading to more diverse results. Second, we propose Noise-Recalibration loss to address the degeneration issue. We attribute the degeneration problem to the high guidance weight used in SDS loss. While a high guidance factor is essential for the NeRF model to learn text-specific content, it tends to cause the learned NeRF to degrade during training. To resolve this dilemma, we make the “single noise” in the single noise training method learnable and The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6549 A sliced loaf of fresh bread A car made out of cheese A model of a house in Tudor style Fries and a hamburger Figure 1: Examples of generation results with NR-SDS loss. Given a text prompt, we show our method can generate highquality diversified 3D objects. Results are generated using NeRF only, without DMTet finetuning. gradually adjust the learnable noise based on the Noise Recalibration loss, such that the noise could generate high quality contents even when operating at a high guidance weight. By using the NR-SDS algorithm, we achieve impressive results with improved texture details and diverse 3D content generation. Some generation results can be found in Fig. 1. The paper is organized as follows: in the related works section, we first briefly review the 3D generation methods and text guided 3D generation methods. After that, we introduce the preliminaries of diffusion models, the Dreamfusion algorithm, the SDS loss and the problems of SDS loss. Consequently, we present our proposed NR-SDS algorithm. In the experiment section, we show the ablation experiments and more experimental results. Unless otherwise stated, our analysis and experiments are primarily based on the latent diffusion models (Rombach et al. 2022). However, we show in the experiment section that the identified issues are not unique to the latent diffusion models. Our proposed method could improve the Pixel-Pixel diffusion models (Saharia et al. 2022) as well. To summarize our contributions: • We identify and systematically study the diversity and degeneration issues of SDS loss. • We propose the NR-SDS algorithm. The NR-SDS algorithm consists of two key components: single noise training to solve the diversity issue and the Noise Recalibration loss to solve the degeneration issue. • With the proposed NR-SDS algorithm, we are able to generate high-quality, multi-view consistent 3D objects using 2D diffusion models. 
Related Works Our work belongs to the field of 3D generation, specifically in text guided 3D generation. Previous research (Chan et al. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6550 Degeneration Issue Diversity Issue 10000 15000 30000 45000 Seed 1 Seed 2 60000 Steps: SDS: VSD: Figure 2: An illustration of the two identified issues. For the diversity issue, we run SDS loss using text prompt “a dog” with different random seeds. It can be observed that the generated content barely changes with different seeds. For the degeneration issue, we observe that it happens with both SDS loss and VSD loss. 2022; Liao et al. 2020; Henzler, Mitra, and Ritschel 2019; Nguyen-Phuoc et al. 2019, 2020; Wu et al. 2016; Zhu et al. 2018; Zhou et al. 2021; Yu et al. 2021) has focused on generating synthetic data or data of a single category, requiring the network to be trained on multi-view images of the same scene or images with pose annotations. These supervised learning methods have limitations in training scale and generalizing ability. Moreover, the lack of annotated 3D data with language constraints supervised language-guided 3D generation (Liu et al. 2022; Canfes et al. 2023) to simple shapes or avatars. Compared to language-guided image generation models, which are usually trained on billions of images (Rombach et al. 2022; Ramesh et al. 2022; Saharia et al. 2022; Nichol et al. 2022), it seems impossible to achieve the same training scale for 3D data by supervised learning. As a result, researchers have turned to using large-scale trained multimodal 2D models to improve text-guided 3D generation. Previous works (Jain et al. 2022; Wang et al. 2022) have used CLIP (Radford et al. 2021) to guide NeRF training or editing, but CLIP as a contrastive model struggles to recover high-frequency surface details and accurate object shapes. Large-scale text-guided diffusion models (Rombach et al. 2022; Ramesh et al. 2022; Saharia et al. 2022; Nichol et al. 2022) provide a more tractable way to distill 2D generative priors. Dreamfusion (Poole et al. 2022), a pioneering work, proposes a score distillation sampling (SDS) method to distill the prior of a 2D diffusion model for training Neural Radiance Field (NeRF) since diffusion models are a type of score function. In the following section, we will introduce the preliminaries of Dreamfusion, the SDS loss, and the problems of Dreamfusion. Dreamfusion and SDS Loss Revisit Diffusion Model The diffusion models (Ho, Jain, and Abbeel 2020; SohlDickstein et al. 2015; Song, Meng, and Ermon 2020; Song et al. 2020) as a new family of state-of-the-art generative models treat the image generation process as a noise removing process. Starting from a randomly sampled noise from the Gaussian space, diffusion process gradually removes a small portion of Gaussian noise step by step. Next, we discuss the training and testing phases of the diffusion model following DDPM (Ho, Jain, and Abbeel 2020). Training of Diffusion Model. To generate training data, given a sample from the real data distribution x0 ∼q(x), the diffusion process adds random Gaussian noise by following: q(xt | xt−1) = N(xt; p 1 −βtxt−1, βtI), (1) where β is called a variance schedule with value βt ∈(0, 1). Since the process is defined as a Markov process, we can also get: q(x1:T | x0) = T Y t=1 q(xt | xt−1). (2) Considering Eq 2, Eq 1 can be further simplified as: q(xt | x0) = N(xt; √¯αtx0, (1 −¯αt)I), (3) where αt = 1 −βt and ¯αt = QT t=1 αt. 
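Eq. 3 can be read as a single sampling step: given x0 and a drawn noise ϵ, the noised sample at any timestep is available in closed form. A minimal sketch follows; the linear schedule values are illustrative assumptions, and ᾱt is the cumulative product of the α values up to step t.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)        # assumed linear variance schedule
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)     # \bar{alpha}_t = prod_{s<=t} alpha_s

def q_sample(x0: torch.Tensor, t: int, eps: torch.Tensor) -> torch.Tensor:
    """Draw x_t ~ q(x_t | x_0) directly (Eq. 3), skipping intermediate steps."""
    return alpha_bar[t].sqrt() * x0 + (1.0 - alpha_bar[t]).sqrt() * eps

x0 = torch.randn(1, 4, 64, 64)               # e.g. a latent produced by an encoder
x_t = q_sample(x0, t=500, eps=torch.randn_like(x0))
```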
In other words, the distribution at time step t can be directly calculated from x0 without considering intermediate steps. Once training data is generated, the diffusion model is trained by optimizing the MSE loss between the added random noise and the model prediction: L(ϕ) = E[∥ϵ −ϵϕ(√¯αtx0 + √ 1 −¯αtϵ, t) ∥2], (4) where ϵ is the noise added to the image and ϵϕ is the noise predicted by the diffusion model. The generation process is indeed the reverse process of diffusion process: to find p(xt−1) given p(xt). Formally, the reverse diffusion process is: pϕ(xt−1 | xt) = N(xt−1; µϕ(xt, t), Σϕ(xt, t)), (5) where µϕ and Σϕ can be calculated from the trained diffusion model ϵϕ given the output of the previous step and current step t. Dreamfusion and SDS Loss In this section, we provide a description of the Dreamfusion algorithm and the SDS loss. Methodologically, Dreamfusion shares the same underlying principles as other gradient inversion techniques, such as Deepdream (Mordvintsev, Olah, and Tyka 2015), Dreaming to distill (Yin et al. 2020), and Gradient Inversion (Yin et al. 2021). These methods seek to optimize the input of a trained model, rather than the model parameters. However, previous methods mainly employ pre-trained image classification networks like ResNet, whereas Dreamfusion uses a diffusion model for distillation The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6551 Guidance Weight W = 100 W = 50 W = 25 W = 1 Figure 3: A high guidance is necessary to learn good shapes. With the guidance weight reduced from 100, the learned shapes get weaker. purposes. The significant differences between a classification model and a diffusion model are twofold. Firstly, a diffusion model is a generative model that can potentially perform better in generation tasks when compared with a classification model. Secondly, a diffusion model is a score function that directly generates an update gradient. Formally, suppose g(θ) is the NeRF model to learn, Dreamfusion optimizes the parameter θ by the following SDS loss: ∇θLSDS(ϕ, x = g(θ)) ≜Et,ϵ (ˆϵϕ(zt; y, t) −ϵ) ∂x ∂θ , (6) where ϵ is a randomly sampled noise and ˆϵϕ is calculated from the trained diffusion model. Intuitively, the Dreamfusion algorithm adds a randomly sampled noise to the NeRF rendered image. The combined image is then fed into a trained diffusion model to predict the added noise. Ideally, if the rendered image is realistic, the diffusion model should predict the noise accurately. The difference between the predicted noise and the added noise is known as the SDS loss. It is witnessed in SDS loss that ignoring the UNet Jacobian will improve the generation quality. In practice, the 2D diffusion model is a conditional diffusion model and the calculation of ˆϵϕ is based on the classifier-free guidance (Ho and Salimans 2022): ˆϵϕ = ϵϕ(zt) + w(ϵϕ(zt; y) −ϵϕ(zt)), (7) where ϵϕ(zt; y) and ϵϕ(zt) represent the diffusion process using the given text prompt and the null embedding. w is the guidance weight. If the w is set too low, the generated image will be less related to the condition y. If the w is set too high, the generation quality will be reduced. In image generation tasks, a typical w choice is 5 to 20. However, SDS loss (Poole et al. 2022) requires training the 3D model with a guidance weight w as high as 100. Without such a high guidance value, the 3D models will not learn good shapes as described in Dreamfusion (Poole et al. 2022) (see Sec. 3.2 and Appendix Figure 9). 
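Putting the closed-form noising together with Eqs. 6 and 7, one SDS update can be summarized in a few lines. The sketch below is an illustration under stated assumptions (the function names, timestep range, and latent-space rendering are not taken from the paper's code); as in Dreamfusion, the U-Net Jacobian is ignored and the guidance-weighted residual is pushed back only through the rendering.

```python
import torch

def sds_step(render_fn, denoiser, y_emb, null_emb, alpha_bar, w=100.0):
    x = render_fn()                                        # NeRF-rendered image or latent
    t = int(torch.randint(20, 980, (1,)))                  # assumed timestep range
    eps = torch.randn_like(x)
    z_t = alpha_bar[t].sqrt() * x + (1 - alpha_bar[t]).sqrt() * eps
    with torch.no_grad():                                  # frozen diffusion model
        e_cond = denoiser(z_t, t, y_emb)
        e_uncond = denoiser(z_t, t, null_emb)
        e_hat = e_uncond + w * (e_cond - e_uncond)         # classifier-free guidance (Eq. 7)
    x.backward(gradient=(e_hat - eps))                     # SDS gradient (Eq. 6), no U-Net Jacobian
```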
In our experiments, we witness similar behaviours. We show a brief example in Fig. 3 using the text prompt “a cat”.
Understanding the Issues of SDS Loss
In this section, we discuss the two major issues of SDS loss: the diversity issue and the degeneration issue. These issues are summarized in Fig. 2.
Figure 4: The forward and backward path of our NR-SDS algorithm. The SDS loss directly applies its gradient on the NeRF rendered image, while the NR loss requires backpropagation through the diffusion model parameters. For the case of latent diffusion models, the rendered image first passes through an encoder and the noises are added to the latent features generated by the encoder.
Algorithm 1: The NR-SDS Algorithm.
Input: Diffusion Model ϵϕ, Language Prompt y, Hyper-parameters: Total Step N.
Output: NeRF Model g(θ)
1 Initialize NeRF model g(θ)
2 Initialize fixed anchor noise ϵa
3 Set initial learnable moving noise ϵm[t] = ϵa for all timesteps t
4 for n = 1 to N do
5   Sample diffusion time step t
6   Update θ based on Eq. 9 and ϵm[t]
7   Update ϵm[t] based on Eq. 8
8 Return g(θ)
The diversity issue has been previously observed in the original Dreamfusion work (see appendix Sec. A.5) and also in our own experiments. As shown in Fig. 2, even when adjusting the random seeds, the SDS loss struggles to generate diverse content. For example, in the case of the dog images, the shape and texture of the dog remain relatively unchanged despite varying the random seeds. The degeneration issue, as shown in Fig. 2 at the bottom, is another problem with SDS loss and its variants. Although the SDS loss can help the model learn good shapes in the early stages of training, the training process eventually collapses after longer iterations, resulting in a degeneration issue. Even for the VSD loss in ProlificDreamer (Wang et al. 2023), the texture of the generated object can sometimes gradually disappear. This issue restricts the NeRF model from learning object details such as dog fur. Furthermore, we observe that the degeneration issue occurs unpredictably rather than at a fixed training step, which makes it challenging to address through methods such as early stopping.
Our Method
In this section, we present our method – the Noise Recalibration SDS (NR-SDS) algorithm.
Resolving Diversity Issue with Single Noise Training
We hypothesize that the lack of diversity in SDS loss and Dreamfusion is primarily due to the large noise sampling space. To generate the 3D representation of a scene, the SDS algorithm samples random noise from the entire Gaussian space. This forces the generated NeRF to satisfy all noises, and the trained NeRF finally becomes an average model. In fact, the generation of one data instance, whether it is an image or a 3D scene, is simply a data point sampled from the data space. In the case of 2D generation based on Eq. 5, only T random noises are sampled to generate a single image, with T values ranging from 25 to 1000, depending on the generation steps. Therefore, we assume that in 3D generation the generated scene likewise does not need to satisfy all the random noises sampled from the entire Gaussian space.
In its extreme case, a 3D scene only needs to satisfy one single noise when using SDS loss. With that in mind, we propose a single noise training scheme to restrict the noise sampling process to a single random noise sampled from the Gaussian space, i.e., training one scene with one random noise. In the experiment section and Fig. 6, we show that the single noise training method helps to generate more varied content without reducing the generation quality. Resolving Degeneration Issue with Noise Recalibration Loss To understand the degeneration issue, we first consider the SDS loss given different input images. Ideally, the SDS should generate a high value for the unreal images and a zero value for the real images. However, this can not be achieved with the current SDS loss due to the high guidance weight w (Eq. 6, Eq. 7). The reason for this is as follows: Given the training and generation formula of the classifierfree guidance diffusion model (Eq. 4 and Eq. 7), the training of the original diffusion model is carried out by randomly choosing the correct language embedding and the null embedding. In other words, the training is carried out with a guidance weight w = 1. Therefore, if the diffusion model is well trained, it could only guarantee a zero SDS loss value with guidance weight w = 1. Under the setting of guidance weight w = 100, the difference between the language embedding output and the null embedding output will be greatly amplified. We show a detailed visual proof in Supplementary Material. Finally, the SDS loss will still generate a relatively high response to the real images, causing the 3D model to get worse and be unable to learn good details. Solving this problem is non-trivial because reducing the guidance weight in the SDS loss is not a viable option. As stated in the previous section and shown in Fig. 3, the NeRF model requires a large guidance weight to learn correct shapes. To address this issue, we propose the NoiseRecalibration (NR) loss. 2D Generation Comparison NR-SDS VSD SDS Ancestral Sampling A hamburger An astronaut riding a horse A tiger eating ice cream Figure 5: Generation comparison in 2D space. The intuition behind NR loss is that we would like to make the “single noise” in the single noise training method learnable and the learnable noise could operate well even under guidance weight w = 100. We call this noise the learnable moving noise. To achieve this, we first sample a fixed anchor noise, such that the learnable moving noise running at w = 100 will finally converge to the fixed anchor noise running at w = 1. Specifically, we define a fixed anchor noise ϵa and a learnable moving noise ϵm, where the moving noise ϵm is used to train the NeRF model and the anchor noise acts as an anchor to recalibrate the learnable moving noise. The Noise Recalibration loss is defined as: ∥ˆϵϕ(zt; y, t, w = 1, ϵa) −ˆϵϕ(zt; y, t, w = 100, ϵm[t]) ∥2 2 . (8) The NR loss gradually optimizes the moving noise to the direction of ϵa at w = 1. If fully optimized, the learnable moving noise operating at w = 100 will have the same behavior as the anchor noise operating at w = 1 and the degeneration issue can be resolved. The NR-SDS Algorithm To summarize the NR-SDS algorithm, our algorithm applies the following SDS loss and NR loss (Eq. 8): ∇θLSDS(ϕ, x = g(θ)) ≜Et,ϵ[(ˆϵϕ(zt; y, t, w = 100, ϵm[t]) −ϵa)∂x ∂θ ], (9) Concretely, Eq. 9 applies the SDS loss to update the NeRF parameters. 
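For concreteness, one NR-SDS iteration (Algorithm 1 with Eqs. 8-9) could be organized as below. This is a sketch under assumptions: the per-timestep storage of the moving noise, the optimizer handling, and the timestep range are illustrative, the denoiser is frozen, and only the NeRF and the moving noise receive gradients.

```python
import torch

def nr_sds_step(render_fn, nerf_opt, denoiser, y_emb, null_emb,
                alpha_bar, eps_anchor, eps_moving, noise_opt, w=100.0):
    t = int(torch.randint(20, 980, (1,)))
    x = render_fn()                                              # rendered image / latent

    def eps_hat(z, weight):                                      # classifier-free guidance
        e_c, e_u = denoiser(z, t, y_emb), denoiser(z, t, null_emb)
        return e_u + weight * (e_c - e_u)

    def noised(x_in, eps):                                       # closed-form q-sample
        return alpha_bar[t].sqrt() * x_in + (1 - alpha_bar[t]).sqrt() * eps

    # Eq. 9: update the NeRF with the moving noise at w = 100, anchored by eps_anchor.
    with torch.no_grad():
        grad = eps_hat(noised(x, eps_moving[t]), w) - eps_anchor
    nerf_opt.zero_grad()
    x.backward(gradient=grad)
    nerf_opt.step()

    # Eq. 8: recalibrate the moving noise toward the anchor's w = 1 prediction.
    x_d = x.detach()
    target = eps_hat(noised(x_d, eps_anchor), 1.0).detach()
    nr_loss = (target - eps_hat(noised(x_d, eps_moving[t]), w)).pow(2).mean()
    noise_opt.zero_grad()
    nr_loss.backward()            # backprop through the frozen denoiser (requires_grad=False)
    noise_opt.step()              # only eps_moving[t] is updated
```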
Different from the original SDS loss, instead of randomly sampling the added noise from the Gaussian space, we fix the noise to the learnable moving noise. Eq. 8 is the Noise Recalibration loss. We further summarize the NR-SDS algorithm in Algorithm 1. An illustration of the forward propagation and backward propagation can be found in Fig. 4. In a perfectly trained case, the Noise-Recalibration SDS algorithm ensures the NeRF model converges to a stable state. That is, given a real image, both Eq. 9 and Eq. 8 will generate values close to zero.
Experiments
Implementation Details
The experiments are carried out on NVIDIA A6000 GPUs with 48 GB memory. We use the code bases of (Tang 2022) and (Guo et al. 2023) for the implementation of SDS loss and VSD loss. We use Instant-NGP (Müller et al. 2022) as the backbone NeRF and Stable Diffusion (Rombach et al. 2022) as the 2D diffusion model. The NeRF model is trained for 25000 iterations with 2048 sampled rays and a batch size of 1. We use a learning rate of 1×10^-3 without learning rate decay. To ensure an equitable comparison and mitigate the potential influence of extraneous factors on the generated outcomes, we deliberately refrain from utilizing mesh finetuning to bolster the quality of our generation results. It is worth highlighting that our proposed approach seamlessly integrates with DMTet finetuning, as introduced in Magic3D (Lin et al. 2023) and Fantasia3D (Chen et al. 2023).
Qualitative Experiments
Comparison in 2D space. We first show the generation results in 2D space in Fig. 5 as a proof of concept. In this experiment, we start from a randomly initialized noise latent and optimize the latent with various optimization methods. After optimization, we use the decoder of the latent diffusion model to convert the latents to images. Compared with SDS loss, our proposed method can generate much better results with stunning details. Compared with the concurrently proposed VSD loss (Wang et al. 2023), our method generates comparably high-quality images. Meanwhile, our method only optimizes the noise space, while VSD loss requires finetuning a LoRA model. Therefore, our method only requires around 60% as much running memory as VSD loss.
3D Ablations on the identified issues. We demonstrate the ablation experiments on the identified issues in Fig. 6 and Fig. 7. For the diversity issue, we run the experiments with the baseline SDS loss, SDS + single noise training, and our proposed NR-SDS method using the same text prompt “a cat”. We observe that the single noise training method can effectively improve the diversity of generations. In Fig. 7, we observe that our method resolves the degeneration issue.
Figure 6: Ablation Experiments on Diversity. We run experiments with the SDS baseline, SDS + Single Noise Training (SNT), and our final NR-SDS algorithm. For the SDS baseline and SDS + SNT, we manually select the best time step before degeneration. The baseline method only generates cats with very similar gestures and textures. Our methods can better generate diversified content.
Figure 7: Ablation Experiments on Degeneration Issue (snapshots from 3000 to 60000 training steps). We observe that our method effectively resolves the degeneration issue. Our NeRF model learns object details when trained for long epochs, while the baseline method degenerates.
3D comparison with other methods. We also directly compare our method with Dreamfusion (Poole et al. 2022) by using the released images, and with VSD loss by running the method under the same setting. Results are shown in Fig. 8. We observe that our method can generate high-resolution results with better texture details compared with Dreamfusion and comparable results compared with VSD loss. More results on generation quality and diversity can be found in the Supplementary Materials.
Figure 8: Our Results vs. Other Methods (prompts: “a brightly colored mushroom growing on a log”, “a car made out of sushi”, “a small saguaro cactus planted in a clay pot”). We compare our method against Dreamfusion and the concurrently proposed VSD loss. We observe that our proposed method can generate high-fidelity 3D results.
Quantitative Results
We conduct user studies to quantitatively evaluate our method. Participants are given NeRF rendered videos or multi-view images to evaluate in four different experiments. Some generation videos can be found in the Supplementary Materials. First, a diversity comparison is conducted between the NR-SDS and SDS baseline. We generate 300 NeRFs using 100 text prompts, with each prompt training 3 NeRFs with different seeds. Participants are asked to select which set of three is more diverse. Second, we conduct quality experiments, with 100 NeRFs trained using the NR-SDS and SDS baseline (both use latent diffusion). Participants are asked to select which one is of higher quality. Thirdly, we compare our results to the Dreamfusion released generation examples by training NeRFs using the same prompts. The Dreamfusion released examples are generated with Imagen (Saharia et al. 2022), and the weights are not publicly available. Finally, we also compare ours against VSD loss. The user study preference scores are listed in Table 1. The results of our user study indicate that our method attains higher or comparable user preference scores in comparison to both baseline and contemporary methods.
Table 1: User Study Results (preference score for our method)
NR-SDS vs. SDS (Quality): 80%
NR-SDS vs. SDS (Diversity): 75%
NR-SDS vs. Dreamfusion: 67%
NR-SDS vs. VSD: 50%
NR-SDS with Even Larger Guidance Weight
In our experiments, we observe that our method remains effective with guidance weights exceeding 100, consistently producing accurate colors. We include these results in the Supplementary Materials.
Conclusions, Limitation and Future Works
In this work, we identify and study two commonly seen problems of SDS loss, the diversity issue and the degeneration issue. We propose the NR-SDS algorithm to tackle the two problems. With the NR-SDS loss, we can greatly improve generation diversity and quality. Future work can be done on improving the shapes of generated objects. One limitation of the current method is that 2D diffusion guided 3D generation often falls short in learning object shapes; for example, it can suffer from the multi-head Janus problem. One potential solution could be to add additional 3D priors to the generation process.
Acknowledgements
This research is supported by the Agency for Science, Technology and Research (A*STAR) under its MTC Programmatic Funds (Grant No. M23L7b0021). This research is also supported by an OPPO research grant.
References
Canfes, Z.; Atasoy, M. F.; Dirik, A.; and Yanardag, P. 2023. Text and image guided 3d avatar generation and manipulation.
In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 4421–4431. Chan, E. R.; Lin, C. Z.; Chan, M. A.; Nagano, K.; Pan, B.; De Mello, S.; Gallo, O.; Guibas, L. J.; Tremblay, J.; Khamis, S.; et al. 2022. Efficient geometry-aware 3D generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16123– 16133. Chen, R.; Chen, Y.; Jiao, N.; and Jia, K. 2023. Fantasia3D: Disentangling Geometry and Appearance for Highquality Text-to-3D Content Creation. arXiv preprint arXiv:2303.13873. Guo, Y.-C.; Liu, Y.-T.; Shao, R.; Laforte, C.; Voleti, V.; Luo, G.; Chen, C.-H.; Zou, Z.-X.; Wang, C.; Cao, Y.-P.; and Zhang, S.-H. 2023. threestudio: A unified framework for 3D content generation. https://github.com/threestudioproject/threestudio. Accessed: 2023-05-01. Henzler, P.; Mitra, N. J.; and Ritschel, T. 2019. Escaping plato’s cave: 3d shape from adversarial rendering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9984–9993. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33: 6840–6851. Ho, J.; and Salimans, T. 2022. Classifier-Free Diffusion Guidance. In NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications. Jain, A.; Mildenhall, B.; Barron, J. T.; Abbeel, P.; and Poole, B. 2022. Zero-shot text-guided object generation with dream fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 867–876. Liao, Y.; Schwarz, K.; Mescheder, L.; and Geiger, A. 2020. Towards unsupervised learning of generative models for 3d controllable image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5871–5880. Lin, C.-H.; Gao, J.; Tang, L.; Takikawa, T.; Zeng, X.; Huang, X.; Kreis, K.; Fidler, S.; Liu, M.-Y.; and Lin, T.-Y. 2023. Magic3d: High-resolution text-to-3d content creation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 300–309. Liu, Z.; Wang, Y.; Qi, X.; and Fu, C.-W. 2022. Towards implicit text-guided 3d shape generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17896–17906. Mildenhall, B.; Srinivasan, P. P.; Tancik, M.; Barron, J. T.; Ramamoorthi, R.; and Ng, R. 2021. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1): 99–106. Mordvintsev, A.; Olah, C.; and Tyka, M. 2015. Inceptionism: Going deeper into neural networks. Blog. M¨uller, T.; Evans, A.; Schied, C.; and Keller, A. 2022. Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics (ToG), 41(4): 1– 15. Nguyen-Phuoc, T.; Li, C.; Theis, L.; Richardt, C.; and Yang, Y.-L. 2019. Hologan: Unsupervised learning of 3d representations from natural images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 7588–7597. Nguyen-Phuoc, T. H.; Richardt, C.; Mai, L.; Yang, Y.; and Mitra, N. 2020. Blockgan: Learning 3d object-aware scene representations from unlabelled images. Advances in neural information processing systems, 33: 6767–6778. Nichol, A. Q.; Dhariwal, P.; Ramesh, A.; Shyam, P.; Mishkin, P.; Mcgrew, B.; Sutskever, I.; and Chen, M. 2022. GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models. In International Conference on Machine Learning, 16784–16804. PMLR. Poole, B.; Jain, A.; Barron, J. T.; and Mildenhall, B. 2022. 
DreamFusion: Text-to-3D using 2D Diffusion. arXiv. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, 8748–8763. PMLR. Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10684– 10695. Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E.; Ghasemipour, S. K. S.; Gontijo-Lopes, R.; Ayan, B. K.; Salimans, T.; et al. 2022. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. In Advances in Neural Information Processing Systems. Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; and Ganguli, S. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, 2256–2265. PMLR. Song, J.; Meng, C.; and Ermon, S. 2020. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502. Song, Y.; Sohl-Dickstein, J.; Kingma, D. P.; Kumar, A.; Ermon, S.; and Poole, B. 2020. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456. Tang, J. 2022. Stable-dreamfusion: Text-to-3D with Stablediffusion. Https://github.com/ashawkey/stable-dreamfusion. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6556 Wang, C.; Chai, M.; He, M.; Chen, D.; and Liao, J. 2022. Clip-nerf: Text-and-image driven manipulation of neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3835–3844. Wang, Z.; Lu, C.; Wang, Y.; Bao, F.; Li, C.; Su, H.; and Zhu, J. 2023. ProlificDreamer: High-Fidelity and Diverse Textto-3D Generation with Variational Score Distillation. arXiv preprint arXiv:2305.16213. Wu, J.; Zhang, C.; Xue, T.; Freeman, B.; and Tenenbaum, J. 2016. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. Advances in neural information processing systems, 29. Yin, H.; Mallya, A.; Vahdat, A.; Alvarez, J. M.; Kautz, J.; and Molchanov, P. 2021. See through gradients: Image batch recovery via gradinversion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16337–16346. Yin, H.; Molchanov, P.; Alvarez, J. M.; Li, Z.; Mallya, A.; Hoiem, D.; Jha, N. K.; and Kautz, J. 2020. Dreaming to distill: Data-free knowledge transfer via deepinversion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8715–8724. Yu, A.; Ye, V.; Tancik, M.; and Kanazawa, A. 2021. pixelnerf: Neural radiance fields from one or few images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4578–4587. Zhou, P.; Xie, L.; Ni, B.; and Tian, Q. 2021. Cips-3d: A 3daware generator of gans based on conditionally-independent pixel synthesis. arXiv preprint arXiv:2110.09788. Zhu, J.-Y.; Zhang, Z.; Zhang, C.; Wu, J.; Torralba, A.; Tenenbaum, J.; and Freeman, B. 2018. Visual object networks: Image generation with disentangled 3D representations. Advances in neural information processing systems, 31. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6557 | 2024 | 728 |
18,549 | Semantic Segmentation in Multiple Adverse Weather Conditions with Domain Knowledge Retention Xin Yang1, Wending Yan2, Yuan Yuan2, Michael Bi Mi2, Robby T. Tan1 1National University of Singapore 2Huawei International Pte Ltd [email protected], {yan.wending, yuanyuan10}@huawei.com, [email protected], [email protected] Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Figure 1: An illustration of our method, where the model adapts to each target domain sequentially. MIC (Hoyer et al. 2023) fails to retain previously learned knowledge, as its performance on the first target gradually deteriorates, e.g., the sky and the sidewalk in Target 1 are disappearing after the method learns Targets 2 and 3. Our method retains previously learned knowledge while adapting to new targets.
Abstract
Semantic segmentation's performance is often compromised when applied to unlabeled adverse weather conditions. Unsupervised domain adaptation is a potential approach to enhancing the model's adaptability and robustness to adverse weather. However, existing methods encounter difficulties when sequentially adapting the model to multiple unlabeled adverse weather conditions: they struggle to acquire new knowledge while also retaining previously learned knowledge. To address these problems, we propose a semantic segmentation method for multiple adverse weather conditions that incorporates adaptive knowledge acquisition, pseudo-label blending, and weather composition replay. Our adaptive knowledge acquisition enables the model to avoid learning from extreme images that could potentially cause the model to forget. In our approach of blending pseudo-labels, we not only utilize the current model but also integrate the previously learned model into the ongoing learning process. This collaboration between the current teacher and the previous model enhances the robustness of the pseudo-labels for the current target. Our weather composition replay mechanism allows the model to continuously refine its previously learned weather information while simultaneously learning from the new target domain. Our method consistently outperforms the state-of-the-art methods, and obtains the best performance with an averaged mIoU (%) of 65.7 and the lowest forgetting (%) of 3.6, against 60.1 and 11.3 (Hoyer et al. 2023), on the ACDC datasets for a four-target continual multi-target domain adaptation.
Introduction
Semantic segmentation methods face challenges in adverse weather conditions, as these conditions significantly degrade images. Adapting models to multiple adverse weather conditions in an unsupervised and successive manner causes additional difficulties due to substantial domain gaps among various weather conditions, and potentially leads to forgetting previously learned knowledge. Continual unsupervised domain adaptation emerges as a potential solution to address these challenges by adapting the model from the labeled source domain to the unlabeled target domains in a sequential manner, e.g., (Lin et al. 2022; Saporta et al. 2022). However, these methods are designed to acquire all information from the new target, without considering whether this information might lead to a forgetting of previously learned knowledge.
Moreover, as mentioned in (Kalb and Beyerer 2023), domain shifts across distinct adverse weather conditions are primarily induced by the deteriorated information within low-level features extracted from the early convolutional layers. Consequently, there is a necessity to develop a method that takes this factor into consideration to effectively address the challenges posed by adverse weather conditions in domain adaptation. In this paper, we present a semantic segmentation method that sequentially adapts the model to multiple unlabeled adverse weather domains, by progressively learning a new domain at a time while retaining the previous learned knowledge. Our method conceives three novel concepts: adaptive knowledge acquisition, pseudo-label blending, and weather composition replay. In contrast to single-target unsupervised domain adaptation, our sequential domain adaptation aims to both learn from the new target and retain previously acquired knowledge simultaneously. Because of this, our model needs to identify potentially detrimental input regions that could introduce significant domain gaps and lead to forgetting of previously learned knowledge (Yang et al. 2022a; Kalb and Beyerer 2023). To achieve this, we introduce adaptive The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6558 knowledge acquisition by utilizing the previous model and class-wise feature representations, resulting in a dynamic weighting map. This dynamic weighting map acts as a constraint, preventing the current model from learning potentially detrimental areas. Various adverse weather conditions may exhibit similar degradation effects (Kalb and Beyerer 2023; Li et al. 2023). For instance, both fog and the rain veiling effect visually look alike. Based on this similarity, models trained under different adverse weather conditions can collaborate to enhance learning from the shared degradation patterns (AllenZhu and Li 2020). Motivated by this idea, we propose a pseudo-label blending strategy. This involves employing the previous model as an auxiliary model to identify images from the new target that share similarities with those from the past targets. We then involve the auxiliary model (i.e., the previous model) to enhance our learning process in an ensemble manner. Note that, since we use a teacher-student framework, the term ”previous model” refers to the previous teacher. When operating in a sequential manner, we do not assume accessibility to the images from previously learned targets. In such a scenario, to maintain the previously acquired knowledge, we introduce a replay technique that serves as a continuous reminder of the weather degradation patterns encountered in the past. To implement this, we retain the weather information acquired from different targets in the previous steps, Once a new target is introduced, for each target image, we randomly augment each of the past weather information instances and integrate them into random segments of the current target images. Through training on these composite images, the model sustains and revises its comprehension of various weather degradations over time, even when direct access to previously learned target domain images is unavailable. In a summary, our contributions are as follows: • We present an adaptive knowledge acquisition method that guides the model to refrain from learning new contents that could potentially result in forgetting. 
• We introduce the concept of incorporating the previous model into the current learning process to enhance our method’s overall performance in an ensemble manner. • To retain the past weather information, we propose a continuous replay of previously learned weather degradation, by randomly augmenting and integrating it into the present target images. Our method consistently outperforms the state-of-the-art methods, and obtains the best performance with averaged mIoU (%) of 65.7 and the lowest forgetting (%) of 3.6 against 60.1 and 11.3 (Hoyer et al. 2023), on the ACDC datsets for a four-target continual multi-target domain adaptation. Related Work Unsupervised domain adaptation (UDA) has been explored extensively in recent years, with many applications ranging from different vision tasks (Ganin and Lempitsky 2015; Chen et al. 2018, 2021; Vu et al. 2019; Saito et al. 2019; Zou et al. 2018; Li, Yuan, and Vasconcelos 2019). However, UDA settings are limited to one source and one target, meaning that the trained domain adaptive model can only work on a certain target domain and will fail if more target domains are involved. Hence, some researchers start to explore methods to adapt a model into multiple target domains (Peng et al. 2019; Chen et al. 2019; Yu, Hu, and Chen 2018; Gholami et al. 2020; Nguyen-Meidine et al. 2021; Yao et al. 2022; Roy et al. 2021). (Isobe et al. 2021; Saporta et al. 2021; Lee et al. 2022) adapt the model into multi-target domains in a parallel way, where all the target domains are involved in every iteration. (Isobe et al. 2021) maintains an expert model for each target domain, they are trained on many augmented images and teach a common student model via a knowledge distillation loss. (Saporta et al. 2021) proposes two MTDA methods, Multi-Dis and MTKT. For each target, Multi-Dis uses two types of domain classifiers, a source vs. target classifier and a target vs. all the other targets classifier. MTKT has a target domain-specific decoder, and a corresponding targetspecific domain classifier for each target domain, the knowledge learned from different decoders is passed to a common decoder with knowledge distillation. (Lee et al. 2022) disentangles the input images into semantic contents and style contents, they obtain source images in target styles by swapping the contents. The drawbacks of the parallel approaches are, (1) They need additional modules/images for every target domain in every iteration, and the real-time memory consumption could be a bottleneck when the number of target domains becomes larger. (2) Similar to the single-target UDA model, when a new target domain is introduced, there is no way to update a pretrained model, and hence a new model will need to be trained from scratch with all the targets. (Saporta et al. 2022) proposes a continual way to adapt a model to multiple target domains, where the target domains are adapted through multiple steps, and the model can be updated at any time when a new target domain is introduced. The paper also points out that due to this problem, the continual approach usually performs worse than the parallel approach. Replay has been proven effective in addressing this preserving previously learned knowledge for adverse weather conditions (Kalb and Beyerer 2023). Existing replay techniques including storing representative exemplar images (Rebuffi et al. 2017; Hayes et al. 2020; Kang, Park, and Han 2022) and adversarially generating images in the style of previous domains (Shin et al. 2017). 
However, the weather degradation is closely coupled to the image’s physics properties (e.g., depth) (Sindagi et al. 2020; Hu et al. 2021; Yang et al. 2022b; Li et al. 2023), so the degradation in each image is unique and it is hard to identify a representative exemplar image. Moreover, generating synthetic weather degradation is still an unsolved problem (Sakaridis, Dai, and Van Gool 2018; Anoosheh et al. 2019), and there does not exist a solution to continuously generate different adverse weather conditions. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6559 Figure 2: Our architecture for adapting a model to n adverse weather conditions in n steps in a sequential manner. The architecture consists of several key components: (1) Adaptive knowledge acquisition, where the model is guided to avoid learning the areas that could lead to a forgetting problem. (2) Pseudo-label blending, where the previous teacher is involved for enhancing the pseudo-label. (3) Weather composition replay, where the weather vectors from previous steps are composed into the current target image for revising on previously learned knowledge. Proposed Method Our goal is to adapt our model sequentially to multiple adverse weather conditions, focusing on one weather domain at a time, all the while retaining the knowledge gained previously. This sequential learning is necessary since we do not assume that once we have learned a domain, the data from that particular domain is accessible. To achieve this goal, we begin by adapting our model to an initial weather domain. Subsequently, we utilize the knowledge gained from the first domain (previous domain), which includes the teacher model and weather vectors, to aid the model’s adaptation to the second weather domain (current domain). This process of acquiring and retaining knowledge is iterated for each weather domain. Fig. 2 shows our pipeline. We progressively adapt our model to n distinct adverse weather domains over n steps. We feed the source images to the student model in each step for supervised learning on semantic segmentation. As for the target parts, from step 1 to step n −1, our weather composition replay extracts and preserves weather vectors for every previously encountered target domain. Then, in step n, we compose the extracted weather vectors into the current target image. This composite image is subsequently fed into the student model for making predictions. We combine pseudo-labels generated by both the current teacher model and the previous teacher model to synergistically improve the quality of pseudo-labels. Our adaptive knowledge acquisition assesses pixel-wise domain shifts from both model-level and feature-level viewpoints. This dynamic reassessment enables us to adjust the learning process to the current target image, helping the model avoid incorporating detrimental information that may result in significant forgetting. Adaptive Knowledge Acquisition In this process of adaptive knowledge acquisition, we identify image regions with significant domain shifts that could potentially trigger model forgetting. Subsequently, we dynamically adjust the weighting of these areas, ensuring that while acquiring new knowledge, the model can still retain past knowledge. Model-Level Adjustment Following the standard unsupervised domain adaptation methods (Kennerley et al. 2023; Hoyer et al. 2023), in each step n (weather domain n), we obtain a teacher model through the EMA (Exponential Moving Average) of a student model. 
We then adapt the student model to the new target domain by learning from the soft pseudo-labels generated by the teacher. In this standard process, the student will quickly adapt to the new target domain. However, certain images in the new target domain can cause forgetting of previous knowledge, especially images that differ significantly from those in the previous target domain. To mitigate this risk, we incorporate the teacher model from a previous target domain as an auxiliary model. This teacher model has not been exposed to the new target images during training; we refer to it as Previous-Teacher. Previous-Teacher makes a prediction for each image in the current step. Consequently, the confidence of Previous-Teacher in areas affected by the new adverse weather degradation can be used as an indication of the degree of the domain shift, since its confidence is low for areas that have never been learned before. This allows our model to selectively learn from the new target, avoiding harmful (low-confidence) areas and thereby reducing the risk of forgetting previously acquired knowledge (Kalb and Beyerer 2023). To implement this idea, we define the confidence q as

q(x)_{ij} = \max\big(g(x)_{ij}\big), (1)

where g is the model and x is the input. We compute a pixel-wise dynamic weighting mask for the model-level adjustment, denoted as M^{mod}, as follows:

M^{mod}_{ij} = (1 - \alpha)\, q_{cur}(x^T)_{ij} + \alpha\, q_{pre}(x^T)_{ij}, (2)

where x^T represents the target image, and q_{pre} and q_{cur} represent the confidence scores from the previous teacher and the current teacher, respectively. \alpha is a parameter controlling the weight of each term; it is large initially and decreases with the number of iterations. For regions where Previous-Teacher exhibits significantly low confidence, M^{mod} dynamically reduces the learning weight in those areas. As a result, the model is encouraged to prioritize exploration in the regions that are less likely to lead to forgetting. As the model progressively strengthens its resilience to new weather degradations, the confidence q_{cur}(x^T)_{ij} in these challenging regions increases. Simultaneously, as the parameter \alpha decreases, the model becomes more inclined to adaptively learn from these areas.
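A minimal sketch of this model-level adjustment (Eqs. (1)-(2)) is given below. It assumes PyTorch-style segmentation models that output per-pixel logits; the function and variable names (e.g., `confidence`, `model_level_mask`, `alpha`) are illustrative and not taken from the paper's released code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def confidence(model, image):
    """Per-pixel confidence q(x): the maximum class probability (Eq. 1)."""
    logits = model(image)                      # (B, C, H, W), assumed output shape
    probs = F.softmax(logits, dim=1)
    q, _ = probs.max(dim=1)                    # (B, H, W)
    return q

def model_level_mask(cur_teacher, prev_teacher, target_image, alpha):
    """Pixel-wise dynamic weighting mask M_mod (Eq. 2).

    `alpha` is assumed to be scheduled from a large value to a small one
    outside this function, as described in the text.
    """
    q_cur = confidence(cur_teacher, target_image)
    q_pre = confidence(prev_teacher, target_image)
    return (1.0 - alpha) * q_cur + alpha * q_pre   # (B, H, W)
```

In practice, such a mask would re-weight the per-pixel pseudo-label loss on the target image, down-weighting regions where Previous-Teacher is uncertain.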
Feature-Level Adjustment
While model-level adjustment utilizes teacher models to guide the model's learning, feature-level adjustment utilizes class-specific feature representations to quantify the domain shift caused by weather degradation. When a model attempts to adapt to a new adverse weather domain, the low-level features it extracts from the early convolutional layers are prone to degradation under the new weather conditions. Consequently, predictions relying on these imprecise features could result in erroneous outcomes and induce substantial domain shifts within the feature space. In contrast, a model capable of accurately extracting information should avoid notable domain shifts in the feature space. Therefore, we leverage this feature information to assess the extent of domain shifts. To implement this, we first define the feature representations. Based on the ground truth of a source image, we partition the feature maps for the different classes and calculate their average for each class, yielding class-wise source feature vectors. We then compute the exponential moving average (EMA) of the source feature vectors, denoted as SR. Compared to the source feature vectors, SR can filter out images that differ from the typical source images and thus provides more representative features. Similarly, we partition and average the target feature maps according to the corresponding pseudo-labels, resulting in target feature vectors. As our objective involves dynamically assigning weights to each target image, we directly employ the target feature vectors as the target feature representation, denoted as TR, for every image. Following this, we construct a weighting map based on the distances between SR and TR:

M^{feat} = \max\Big(0,\; 1 - \frac{(TR_{c_1} - SR_{c_1})^2}{\sum_{c=1}^{C}(TR_{c} - SR_{c_1})^2}\Big), (3)

where c_1 represents the predicted class and C is the number of all predicted classes. In this equation, we calculate the relative distance between SR and TR for class c_1. Intuitively, as shown in Fig. 2, within the feature space, if the target feature representations of Class 1 closely resemble the source feature representations of the same class, the domain shift is probably minor. This implies that learning in these areas is less likely to cause significant forgetting. Conversely, when the target feature representations are considerably distant from the source feature representations of the same class (as illustrated by Class 2 in the figure), it indicates that the model might have captured degraded information from the image. Training on these regions can lead to substantial alterations in the feature space, potentially affecting the pretrained feature structures (Hoyer, Dai, and Van Gool 2022; Su et al. 2023). In such scenarios, our objective is to mitigate the possibility of forgetting by constraining the model from learning based on this information.

Pseudo-Label Blending
By integrating Previous-Teacher with the current teacher model, we can effectively ensemble them to enhance the robustness of the pseudo-labels (Allen-Zhu and Li 2020). We achieve this using both Previous-Teacher and its target feature representations. First, we utilize both Previous-Teacher and the current teacher model to generate predictions for the current target image. Based on the confidences of these predictions, we generate a binary mask, denoted as M^{con}, which indicates the regions where Previous-Teacher exhibits higher confidence than the current teacher:

M^{con}_{ij} = \begin{cases} 1, & \text{if } q_{pre}(x^T)_{ij} > q_{cur}(x^T)_{ij} \\ 0, & \text{otherwise,} \end{cases} (4)

As discussed in the feature-level adjustment, if Previous-Teacher demonstrates greater robustness than the current teacher on a specific image segment, the corresponding extracted target feature representations should be closer to the source feature representations. Hence, we also compute a weighting map using Eq. (3), but with the target feature representations generated from Previous-Teacher. We denote this weighting map as M^{feat}_{pre}. We can then obtain the refined pseudo-label, denoted as p:

p(x^T)_{ij} = \arg\max_{c}\big(q_{cur}(x^T)_{ijc} + M^{con} M^{feat}_{pre}\, q_{pre}(x^T)_{ijc}\big). (5)

We incorporate reliable predictions from the previously learned models into the current learning process. This blending of pseudo-labels enables the model to effectively learn the similar patterns across different targets, leveraging the knowledge and expertise accumulated by both Previous-Teacher and the current teacher model.
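Before moving on, a minimal sketch of the feature-level weighting (Eq. (3)) and the pseudo-label blending (Eqs. (4)-(5)) is shown below. The tensor shapes, the reduction of the squared distance over the feature dimension, and all names are assumptions of this sketch rather than details taken from the paper.

```python
import torch

def feature_level_mask(TR, SR, pred_class):
    """Relative-distance weighting map M_feat (Eq. 3).

    TR: (C, D) class-wise target feature vectors of one image
    SR: (C, D) EMA class-wise source feature vectors
    pred_class: (H, W) per-pixel predicted class indices
    """
    C = TR.shape[0]
    m = torch.zeros(C)
    for c1 in range(C):
        num = ((TR[c1] - SR[c1]) ** 2).sum()
        den = ((TR - SR[c1].unsqueeze(0)) ** 2).sum() + 1e-8
        m[c1] = torch.clamp(1.0 - num / den, min=0.0)
    return m[pred_class]                          # (H, W)

def blended_pseudo_label(q_cur, q_pre, m_feat_pre):
    """Pseudo-label blending (Eqs. 4-5).

    q_cur, q_pre: (C, H, W) class probability maps of the current / previous teacher
    m_feat_pre:   (H, W) feature-level weighting computed from Previous-Teacher
    """
    conf_cur, _ = q_cur.max(dim=0)                # (H, W)
    conf_pre, _ = q_pre.max(dim=0)
    m_con = (conf_pre > conf_cur).float()         # Eq. 4: binary mask
    blended = q_cur + m_con * m_feat_pre * q_pre  # Eq. 5, before the argmax
    return blended.argmax(dim=0)                  # refined pseudo-label, (H, W)
```

The refined pseudo-label would then supervise the student on the current target image, together with the model-level and feature-level weighting masks.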
Weather Composition Replay
We propose a replay technique utilizing the weather information obtained from different weather domains. As mentioned in (Kalb and Beyerer 2023), the domain shifts introduced by weather degradation can be captured in the frequency domain by the amplitude spectrum. Hence, when adapting to a new target domain, we begin by translating each new target image into the frequency domain and separating the amplitude from the phase. Although each target image may contain some image-specific information, they are all subjected to the same adverse weather conditions. By averaging the extracted amplitudes, we can suppress image-specific information while preserving the weather vectors in the frequency domain.

Figure 3: Examples of composing night and fog weather vectors into snow and rain images, respectively.

As illustrated in Fig. 2, once the weather vectors are extracted, we keep them for future steps. In step n, we apply augmentations to the stored weather vectors from step 1 to step n−1 and randomly inject them into the current target images as follows:

x^{T_2}_r = M_r x^{T_2} + (1 - M_r)\, iFT(P^{T_2}, \sigma A^{T_1}), (6)

where x^{T_2} is the current target image, x^{T_2}_r is the composed image for the model to learn from, and P^{T_2} and A^{T_1} represent the phase component of the current image and the weather vector of the previous domain, respectively. \sigma represents a randomly determined volume of A^{T_1}, where a larger volume indicates a stronger dominance of the previous weather information in the area. iFT represents an inverse Fourier transformation that translates the images back from the frequency domain. M_r represents a random segment of the image, as augmenting a segment can be more efficient than augmenting the whole image (Yun et al. 2019; Olsson et al. 2021). This process is repeated for each weather vector we have obtained, for a total of n−1 times. This replay technique, utilizing the composed images, ensures that the model continuously updates its understanding of adverse weather conditions while retaining and refining its knowledge from past experiences.
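A minimal sketch of this weather composition replay (Eq. (6)) is given below, using NumPy FFT routines on (H, W, C) float images; treating the channels independently, the helper names, and the way the random region mask is supplied are assumptions of this sketch.

```python
import numpy as np

def extract_weather_vector(target_images):
    """Average amplitude spectrum of one target domain (the stored weather vector)."""
    amplitudes = [np.abs(np.fft.fft2(img, axes=(0, 1))) for img in target_images]
    return np.mean(amplitudes, axis=0)            # image-specific content is averaged out

def compose_replay_image(current_image, weather_vector, sigma, region_mask):
    """Compose a previous weather vector into the current target image (Eq. 6).

    current_image:  (H, W, C) current target image x^{T_2}
    weather_vector: (H, W, C) stored amplitude A^{T_1} of a previous domain
    sigma:          randomly drawn scale of the injected weather information
    region_mask:    (H, W, 1) binary mask M_r selecting a random segment
    """
    spectrum = np.fft.fft2(current_image, axes=(0, 1))
    phase = np.angle(spectrum)                    # P^{T_2}
    replayed = np.real(np.fft.ifft2(sigma * weather_vector * np.exp(1j * phase),
                                    axes=(0, 1)))
    return region_mask * current_image + (1.0 - region_mask) * replayed
```

Repeating this composition once per stored weather vector reproduces the n−1 replay passes described above.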
Experimental Results
In this section, we present a comprehensive evaluation of our method under multiple adverse weather conditions in a sequential setting. We begin by describing the datasets used in our experiments, followed by the architectures and parameters employed. Next, we assess our model's performance both quantitatively and qualitatively, demonstrating its effectiveness in handling diverse weather conditions. Lastly, we conduct ablation studies to evaluate the importance of each factor in our method.

Datasets
We utilize Cityscapes (Cordts et al. 2016) as our source domain, consisting of real-world street scene images captured under daytime, clear weather conditions. For the target dataset, we employ ACDC (Sakaridis, Dai, and Van Gool 2021), which contains real-world street scene images captured under four adverse weather conditions: nighttime, rain, fog, and snow. In our experiments, each of these four adverse weather conditions is treated as a separate target, and our models are adapted to these targets sequentially.

Baseline Models
In our experiments, we conduct comparisons with the state-of-the-art unsupervised domain adaptation method, MIC (Hoyer et al. 2023). To ensure a fair comparison, we use the same architecture, DAFormer, in all comparisons. Additionally, we employ an identical optimization strategy, including the number of epochs, batch sizes, domain adaptation techniques, and the pretrained backbone, as suggested in MIC. Since our method is not limited by the model's architecture, we also compare it with two parallel and one continual general-purpose multi-target domain adaptation methods, MTKT, Multi-Dis, and MuHDi (Saporta et al. 2021, 2022). Note that the parallel-setting methods (Saporta et al. 2021) learn all the targets simultaneously, so their models are not affected by the forgetting problem. The continual-setting method (Saporta et al. 2022) provides a forgetting prevention technique, but it is not designed specifically for adverse weather conditions. Once again, to maintain fairness in the comparison, we utilize the same architecture, DeeplabV2 (Chen et al. 2017), and employ an identical optimization strategy, domain adaptation techniques, and pretrained backbone, following (Saporta et al. 2022). Regarding our method's parameters, we initialize α to 0.8 and gradually decrease it to 0.2 as the number of iterations progresses. When injecting the previous weather information into the new target image, we randomly generate σ between 0.2 and 1.2, and the size of the affected area is randomly selected in the target image, ranging from one-third to half of the image size. All these parameters are decided empirically.

Evaluation Metrics
Cityscapes and ACDC share the same segmentation class protocol, hence we apply the same protocol in our evaluation. We use the percentage of Intersection over Union (IoU %) as the evaluation metric for the effectiveness of knowledge acquisition from a new target, the higher the better (↑). For knowledge retention, we use Accumulated Forgetting, which is calculated as follows:

A.F. = \sum_{k=1}^{K-1} \big(mIoU_{k,k} - mIoU_{k,K}\big), (7)

where K is the number of targets. We adapt the model to K targets in K steps. mIoU_{k,k} represents the initial performance on target k at step k, and mIoU_{k,K} represents the final performance on target k at the last step, K. A smaller Accumulated Forgetting indicates a lesser degree of forgetting in the model (↓).

Quantitative Results
We conduct experiments on four ACDC adverse weather conditions: nighttime, rain, fog, and snow. As shown in Tab. 1, our models outperformed other methods on all targets. Specifically, our models surpassed AdvEnt (Vu et al.

Cityscapes→Night→Rain→Fog→Snow
Method | Forgetting Prevention | Night | Rain | Fog | Snow | mIoU Avg. ↑ | A.F. ↓
MTKT (Saporta et al. 2021) | - | 21.5 | 39.4 | 48.8 | 38.7 | 37.1 | -
Multi-Dis (Saporta et al. 2021) | - | 20.5 | 38.5 | 43.6 | 36.8 | 34.8 | -
AdvEnt (Vu et al. 2019) | ✗ | 16.9 (-9.0) | 36.9 (-3.2) | 40.8 (-10.3) | 35.1 | 32.4 | 22.4
MuHDi (Saporta et al. 2022) | ✓ | 17.6 (-8.3) | 37.2 (-3.4) | 44.4 (-6.7) | 36.3 | 33.9 | 18.4
Ours (DeeplabV2) | ✓ | 24.0 (-1.9) | 42.0 (-1.3) | 50.8 (-1.1) | 44.0 | 40.2 | 4.3
MIC (Hoyer et al. 2023) | ✗ | 34.7 (-7.2) | 65.8 (-2.8) | 78.4 (-1.3) | 65.2 | 60.1 | 11.3
Ours (DAFormer) | ✓ | 39.0 (-2.9) | 70.6 (-0.9) | 80.4 (-0.2) | 72.6 | 65.7 | 3.6

Table 1: Quantitative results of Ours compared to the existing unsupervised domain adaptation methods and continual multi-target domain adaptation methods, evaluated on four targets: ACDC nighttime, rain, fog, and snow. Bold numbers are the best scores for different backbones. The mIoU (%) of each target, the mIoU average over all the targets (the higher the better), and the accumulated forgetting, A.F. (the lower the better), are presented. The number in parentheses '()' indicates the change in performance, with a smaller number indicating a more pronounced forgetting effect.
Our method outperforms the best existing method with forgetting prevention on DeeplabV2 backbone, by 6.3 mIoU (%) in average across all the targets, and 14.1 in the accumulated forgetting. As for the DAFormer backbone, our method outperforms the best domain adaptation method by 5.6 mIoU (%) in average across all the targets, and 7.7 in the accumulated forgetting. 2019) and MIC (Hoyer et al. 2023) by 8.2 mIoU (%) and 5.6 mIoU (%) in mIoU Avg., and 18.1 and 7.7 in accumulated forgetting, respectively. The highest mIoU and the smallest forgetting across all previous targets indicate the effectiveness of our models’ knowledge acquisition and retention when adapting to different targets. We compared our method to MuHDi (Saporta et al. 2022), which offers a general-purpose forgetting prevention technique for multiple targets. Our method outperformed MuHDi by 6.3 mIoU (%) in mIoU Avg. and 14.1 in accumulated forgetting. Moreover, while the parallel multiple target domain adaptation methods do not suffer from the forgetting problem, our model still outperforms their performance in mIoU Avg. by 3.1 mIoU (%) and 5.4 mIoU (%), respectively. This highlights the importance of our method’s weather-specific knowledge acquisition and retention. Without Source In certain circumstances, access to the source data may not be available in different steps. In this section, we evaluate our method when we can only access the current target domain in each step. Since the source datasets are inaccessible, we do not apply M feat and M feat pre in this scenario. We use MIC as the baseline for comparison. Note, for both MIC and our method without source, we provide a model finetuned on the source, following (Kalb and Beyerer 2023). Both methods are required to adapt this model to different adverse weather conditions. For AdvEnt backbone, it requires the source dataset for domain adaptation, so models with this backbone are not involved. The results are presented in Tab. 2, with the absence of the source image dataset, MIC’s accumulated forgetting increased significantly from 11.3 (%) to 23.9 (%), by 12.6 (%), where our model is less affected, with only an increase of accumulated forgetting by 6.7 (%). Qualitative Results We first show our qualitative results of our knowledge acquisition ability in Fig. 4, for a four-target sequential doCityscapes→Night→Rain→Fog→Snow Method Night Rain Fog Snow mIoU Avg. ↑ A.F. ↓ MIC (w/o) 30.8 63.0 74.5 69.2 57.5 23.9 Ours (w/o) 36.7 69.8 75.9 69.6 63.0 10.3 Table 2: Quantitative comparisons between MIC (Hoyer et al. 2023) and our method, both without source (w/o). Cityscapes→Night→Rain→Fog→Snow Method mIoU Avg. ↑ A.F. ↓ AdvEnt 32.4 22.4 Blending 34.0 19.6 Table 3: Ablation studies of knowledge acquisition with pseudo-label blending. main adaptation (on night, rain, fog, and snow). We evaluate our best model Ours (DAFormer) against MIC, and the ground truths semantic segmentation maps. We can see that our method can predict more precise semantic segmentation maps, and also has fewer false positives compared to MIC in all the target domains. More qualitative results are provided in the supplementary materials. Ablation Studies Knowledge Acquisition and Retention We begin by evaluating the effectiveness of our adaptive knowledge acquisition and weather composition replay techniques, as they are specifically designed for both knowledge acquisition and knowledge retention. In Tab. 
3, we report the performance changes of four models: AdvEnt, AdvEnt with model-level assistance, AdvEnt with both model-level and feature-level assistance, and AdvEnt with adaptive knowledge acquisition and weather composition replay techThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6563 Image MIC (2023) Ours Ground Truth Figure 4: Comparisons on the semantic segmentation performance with MIC (Hoyer et al. 2023), Ours (DAFormer), and ground truths on ACDC (Val.) under rainy, foggy, and snowy weather conditions following sequential multitarget domain adaptation. Cityscapes→Night→Rain→Fog→Snow Progressive Replay A.F. ↓ Model Level Feature Level 22.4 ✓ 12.9 ✓ ✓ 9.6 ✓ ✓ ✓ 4.5 Table 4: Ablation studies of our knowledge retention techniques, including adaptive knowledge acquisition and weather composition replay. niques. All these models are trained to adapt to four adverse weather conditions using source data, and it is evident that each component contributes to preventing forgetting. Pseudo-Label Blending Pseudo-label blending is designed for exploring the similar patterns among different targets to improve the knowledge acquisition on the current target. We evaluate the knowledge acquisition with two approaches: AdvEnt and AdvEnt with pseudo-label blending. The results are presented in Tab. 3. We can observe that pseudo-label blending enhances the average mIoU across the four targets, without causing additional forgetting issues. Combining all our proposed techniques, we achieve an enhanced performance on the new target, while retain the most knowledge from the previous targets. Conclusion We have proposed a novel method that adapts a model to multiple unlabeled adverse weather conditions sequentially. We use both model-level and feature-level knowledge to assist the model avoid learning from the harmful contents from the new target image that can lead to a forgetting of the previously learned knowledge. To support our current learning process, we have also proposed a method to involve the previously obtained model to jointly improving the pseudolabels for the current target. We propose a weather composition replay technique, which compose the previously learned weather information to the current target image, enabling the model learn from the current target image while revising the previously learned weather information. We train two models using our method on both DeeplabV2 and DAFormer, respectively, to demonstrate that our method can be generalized to different architectures. We evaluate these models on several benchmark adverse weather conditions with different settings and found that our models outperform many methods in different settings. References Allen-Zhu, Z.; and Li, Y. 2020. Towards understanding ensemble, knowledge distillation and self-distillation in deep learning. arXiv preprint arXiv:2012.09816. Anoosheh, A.; Sattler, T.; Timofte, R.; Pollefeys, M.; and Van Gool, L. 2019. Night-to-day image translation for retrieval-based localization. In 2019 International Conference on Robotics and Automation (ICRA), 5958–5964. IEEE. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6564 Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; and Yuille, A. L. 2017. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence, 40(4): 834–848. Chen, Y.; Li, W.; Sakaridis, C.; Dai, D.; and Van Gool, L. 2018. 
Domain adaptive faster r-cnn for object detection in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3339–3348. Chen, Y.; Wang, H.; Li, W.; Sakaridis, C.; Dai, D.; and Van Gool, L. 2021. Scale-aware domain adaptive faster r-cnn. International Journal of Computer Vision, 129(7): 2223–2243. Chen, Z.; Zhuang, J.; Liang, X.; and Lin, L. 2019. Blendingtarget domain adaptation by adversarial meta-adaptation networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2248–2257. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; and Schiele, B. 2016. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3213–3223. Ganin, Y.; and Lempitsky, V. 2015. Unsupervised domain adaptation by backpropagation. In International conference on machine learning, 1180–1189. PMLR. Gholami, B.; Sahu, P.; Rudovic, O.; Bousmalis, K.; and Pavlovic, V. 2020. Unsupervised multi-target domain adaptation: An information theoretic approach. IEEE Transactions on Image Processing, 29: 3993–4002. Hayes, T. L.; Kafle, K.; Shrestha, R.; Acharya, M.; and Kanan, C. 2020. Remind your neural network to prevent catastrophic forgetting. In European Conference on Computer Vision, 466–483. Springer. Hoyer, L.; Dai, D.; and Van Gool, L. 2022. Daformer: Improving network architectures and training strategies for domain-adaptive semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9924–9935. Hoyer, L.; Dai, D.; Wang, H.; and Van Gool, L. 2023. MIC: Masked image consistency for context-enhanced domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11721–11732. Hu, X.; Zhu, L.; Wang, T.; Fu, C.-W.; and Heng, P.-A. 2021. Single-image real-time rain removal based on depth-guided non-local features. IEEE Transactions on Image Processing, 30: 1759–1770. Isobe, T.; Jia, X.; Chen, S.; He, J.; Shi, Y.; Liu, J.; Lu, H.; and Wang, S. 2021. Multi-target domain adaptation with collaborative consistency learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8187–8196. Kalb, T.; and Beyerer, J. 2023. Principles of Forgetting in Domain-Incremental Semantic Segmentation in Adverse Weather Conditions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19508–19518. Kang, M.; Park, J.; and Han, B. 2022. Class-incremental learning by knowledge distillation with adaptive feature consolidation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 16071–16080. Kennerley, M.; Wang, J.-G.; Veeravalli, B.; and Tan, R. T. 2023. 2PCNet: Two-Phase Consistency Training for Dayto-Night Unsupervised Domain Adaptive Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11484–11493. Lee, S.; Choi, W.; Kim, C.; Choi, M.; and Im, S. 2022. ADAS: A Direct Adaptation Strategy for Multi-Target Domain Adaptive Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19196–19206. Li, M.; Xie, B.; Li, S.; Liu, C. H.; and Cheng, X. 2023. VBLC: Visibility Boosting and Logit-Constraint Learning for Domain Adaptive Semantic Segmentation under Adverse Conditions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 8605–8613. 
Li, Y.; Yuan, L.; and Vasconcelos, N. 2019. Bidirectional learning for domain adaptation of semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6936–6945. Lin, H.; Zhang, Y.; Qiu, Z.; Niu, S.; Gan, C.; Liu, Y.; and Tan, M. 2022. Prototype-guided continual adaptation for class-incremental unsupervised domain adaptation. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIII, 351–368. Springer. Nguyen-Meidine, L. T.; Belal, A.; Kiran, M.; Dolz, J.; BlaisMorin, L.-A.; and Granger, E. 2021. Unsupervised multitarget domain adaptation through knowledge distillation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1339–1347. Olsson, V.; Tranheden, W.; Pinto, J.; and Svensson, L. 2021. Classmix: Segmentation-based data augmentation for semisupervised learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1369– 1378. Peng, X.; Huang, Z.; Sun, X.; and Saenko, K. 2019. Domain agnostic learning with disentangled representations. In International Conference on Machine Learning, 5102–5112. PMLR. Rebuffi, S.-A.; Kolesnikov, A.; Sperl, G.; and Lampert, C. H. 2017. icarl: Incremental classifier and representation learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2001–2010. Roy, S.; Krivosheev, E.; Zhong, Z.; Sebe, N.; and Ricci, E. 2021. Curriculum graph co-teaching for multi-target domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5351–5360. Saito, K.; Ushiku, Y.; Harada, T.; and Saenko, K. 2019. Strong-weak distribution alignment for adaptive object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6956–6965. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6565 Sakaridis, C.; Dai, D.; and Van Gool, L. 2018. Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision, 126: 973–992. Sakaridis, C.; Dai, D.; and Van Gool, L. 2021. ACDC: The adverse conditions dataset with correspondences for semantic driving scene understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10765–10775. Saporta, A.; Douillard, A.; Vu, T.-H.; P´erez, P.; and Cord, M. 2022. Multi-Head Distillation for Continual Unsupervised Domain Adaptation in Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3751–3760. Saporta, A.; Vu, T.-H.; Cord, M.; and P´erez, P. 2021. Multitarget adversarial frameworks for domain adaptation in semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9072–9081. Shin, H.; Lee, J. K.; Kim, J.; and Kim, J. 2017. Continual learning with deep generative replay. Advances in neural information processing systems, 30. Sindagi, V. A.; Oza, P.; Yasarla, R.; and Patel, V. M. 2020. Prior-based domain adaptive object detection for hazy and rainy conditions. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIV 16, 763–780. Springer. Su, W.; Han, Z.; He, R.; Wei, B.; He, X.; and Yin, Y. 2023. Neighborhood-based credibility anchor learning for universal domain adaptation. Pattern Recognition, 142: 109686. Vu, T.-H.; Jain, H.; Bucher, M.; Cord, M.; and P´erez, P. 2019. 
Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2517–2526. Yang, L.; Zhuo, W.; Qi, L.; Shi, Y.; and Gao, Y. 2022a. St++: Make self-training work better for semi-supervised semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4268–4277. Yang, Y.; Wang, C.; Liu, R.; Zhang, L.; Guo, X.; and Tao, D. 2022b. Self-augmented unpaired image dehazing via density and depth decomposition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2037–2046. Yao, C.-H.; Gong, B.; Qi, H.; Cui, Y.; Zhu, Y.; and Yang, M.-H. 2022. Federated multi-target domain adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1424–1433. Yu, H.; Hu, M.; and Chen, S. 2018. Multi-target unsupervised domain adaptation without exactly shared categories. arXiv preprint arXiv:1809.00852. Yun, S.; Han, D.; Oh, S. J.; Chun, S.; Choe, J.; and Yoo, Y. 2019. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF international conference on computer vision, 6023–6032. Zou, Y.; Yu, Z.; Kumar, B.; and Wang, J. 2018. Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In Proceedings of the European conference on computer vision (ECCV), 289–305. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6566 | 2024 | 729 |
18,550 | Open-Set Facial Expression Recognition Yuhang Zhang1, Yue Yao2, Xuannan Liu1, Lixiong Qin1, Wenjing Wang1, Weihong Deng1 1Beijing University of Posts and Telecommunications 2Australian National University {zyhzyh, liuxuannan, lxqin, wwj311, whdeng}@bupt.edu.cn, [email protected] Abstract Facial expression recognition (FER) models are typically trained on datasets with a fixed number of seven basic classes. However, recent research works (Cowen et al. 2021; Bryant et al. 2022; Kollias 2023) point out that there are far more expressions than the basic ones. Thus, when these models are deployed in the real world, they may encounter unknown classes, such as compound expressions that cannot be classified into existing basic classes. To address this issue, we propose the open-set FER task for the first time. Though there are many existing open-set recognition methods, we argue that they do not work well for open-set FER because FER data are all human faces with very small inter-class distances, which makes the open-set samples very similar to close-set samples. In this paper, we are the first to transform the disadvantage of small inter-class distance into an advantage by proposing a new way for open-set FER. Specifically, we find that small inter-class distance allows for sparsely distributed pseudo labels of open-set samples, which can be viewed as symmetric noisy labels. Based on this novel observation, we convert the open-set FER to a noisy label detection problem. We further propose a novel method that incorporates attention map consistency and cycle training to detect the open-set samples. Extensive experiments on various FER datasets demonstrate that our method clearly outperforms state-of-the-art open-set recognition methods by large margins. Code is available at https://github.com/zyh-uaiaaaa. Introduction Facial expression recognition (FER) is vital in humancentered computing as it helps machines understand human feelings (Li and Deng 2020). Existing FER models are trained with the fixed seven basic classes. However, as pointed out by recent research works published in Nature and top computer vision conferences (Cowen et al. 2021; Bryant et al. 2022; Kollias 2023), humans can display various expressions that go beyond the basic classes in realworld deployments, such as other different expressions and compound expressions. Close-set FER models trained on the basic classes are unreliable when encountering new unknown expressions as these samples are always misclassified as one of the given classes with high confidence. This limitation hinders the real-world deployment of FER models. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: We show the extracted features using CLIP on CIFAR-10 and RAF-DB. CIFAR-10 (RAF-DB) has a large (small) inter-class distance. The small inter-class distance of FER data makes open-set samples similar to close-set samples and degrades the performance of the SOTA open-set recognition method DIAS from 0.850 to 0.714. Our method outperforms DIAS by large margins (over 20% improvement based on the original AUROC) on the open-set FER task of three different FER datasets. To solve the above problem, we propose the open-set FER for the first time, which aims to maintain the high accuracy of FER models in the closed set while enabling them to identify samples that belong to unknown classes. 
Specifically, FER models should be able to detect samples that do not fit perfectly into the close-set classes. However, it is a non-trivial problem because of the overconfidence problem (Nguyen, Yosinski, and Clune 2015; Goodfellow, Shlens, and Szegedy 2014; Liu et al. 2023) of deep learning models. While some open-set recognition methods (Zhou, Ye, and Zhan 2021; Moon et al. 2022) have tried to solve a similar problem, we argue that they fail in open-set FER. We claim that open-set FER is a much more difficult task because FER data are all human faces with very small interclass distances. This characteristic makes the open-set samples very similar to the close-set samples, which largely degrades the performance of existing open-set recognition methods. In this paper, different from existing methods, we proThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 646 pose a new way to deal with the open-set FER. Though the small inter-class distance of FER data degrades the existing open-set recognition methods, we turn this characteristic into an advantage. Our motivation is shown in Fig. 2. We observe that for the data with relatively large inter-class distances like CIFAR-10, the predicted pseudo labels of samples from an unknown class fall into the most semantically similar known class. For example, following the open-set recognition setting (Wang et al. 2021; Zhou, Ye, and Zhan 2021; Zhang, Deng, and Zheng 2023), we consider the ’cat’ class as the unknown class and train models on other closed classes. In the test phase, the model predicts almost all unseen ’cat’ samples to the known class ’dog’. However, things are different in open-set FER. We observe that the close-set FER model predicts the samples from one unknown class to all known classes. The same phenomenon can easily generalize to the case of several open classes. This strikes us with the concept of symmetric noisy labels (Han et al. 2018) in the noisy label learning field, which is generated by flipping the labels of the original class to all other classes. Symmetric noisy labels are easy to be detected during training as they do not contain much semantic information (Han et al. 2018). On the contrary, asymmetric label noise which flips the labels of the original class to the most semantically similar class is difficult to be detected. For example, in the CIFAR10 training phase, if all ’cat’ samples are labeled as ’dog’ due to noisy labels, then the ’cat’ images are very likely to be confidently recognized as ’dog’ after training. Thus, it is infeasible to transform open-set recognition on CIFAR-10 or other datasets with large inter-class distances into noisy label detection, while it surprisingly works well under the open-set FER task. Inspired by the aforementioned discussion, we convert open-set FER to a noisy label detection problem for the first time. Unlike existing methods that use softmax scores to detect open-set samples, which tend to be overconfident, we believe that the pseudo labels contain valuable information that has been overlooked. Specifically, we use a close-set trained FER model to predict pseudo labels for all test samples, of which open-set samples will have wrong pseudo labels across all close-set classes. We then introduce cycle training with a cyclically set learning rate inspired by (Huang et al. 2019) and iteratively train two FER models to teach each other using the pseudo labels. During training, attention map consistency (Zhang et al. 
2022b) is utilized to prevent the model from memorizing the wrong pseudo labels of open-set samples. After training, the loss values of the entire set will form a bimodal distribution, with close-set (open-set) samples having small (large) loss values. We compare our proposed method with state-of-the-art open-set recognition methods on different open-set FER datasets with different open-set classes. Extensive experiment results show that our method outperforms these methods by large margins. We further show the online prediction ability of our method for one given sample without retraining. More analyses and visualization results of loss values, pseudo labels, and learned features are provided to help further understanding. Figure 2: We provide an illustration of our motivation by showing the predicted pseudo labels of the close-set model on CIFAR-10 and FER datasets. CIFAR-10 has relatively large inter-class distances, and the close-set trained model predicts unknown samples into the most similar known class. For example, if the unknown class is ’cat’, the trained model will predict almost all cat samples into the known class ’dog’. However, FER data are all human faces. The close-set trained FER model predicts samples of one unknown class to all known classes, which is similar to the concept of symmetric noisy label - a type of easy label noise commonly encountered in the noisy label field. • We propose the open-set FER for the first time and observe that existing open-set recognition methods are not effective under this new task due to the small inter-class distance of FER data. • Based on our observation of the different distributions of pseudo close-set labels under large and small interclass distances, we transform open-set FER to noisy label detection for the first time. We further design a novel method with attention map consistency and cycle training to separate close-set and open-set samples. • Extensive experiments on different open-set FER datasets show that our method outperforms SOTA openset recognition methods by large margins. Related Work Facial Expression Recognition Facial expression recognition (FER) is vital for humancomputer interaction, and there are numerous studies aimed at improving FER performance (Zhong et al. 2012; Li, Deng, and Du 2017; Li et al. 2021; Farzaneh and Qi 2021; Zhang, Wang, and Deng 2021; Ruan et al. 2021; Li et al. 2022; Zhang et al. 2023). For instance, Li et al. (Li, Deng, and Du 2017) use crowd-sourcing to simulate human expression recognition. Farzaneh et al. (Farzaneh and Qi 2021) propose a center loss variant to maximize intra-class similarity and inter-class separation. Ruan et al. (Ruan et al. 2021) acquire expression-relevant information during the decomposition of an expression feature. Zhang et al. (Zhang, Wang, and Deng 2021) trains FER models through relative comparison. However, these FER methods are typically evaluated on fixed close-set datasets and they produce highly confident close-set predictions for open-set data. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 647 Figure 3: The pipeline of our method. Given the input of both close-set and open-set samples, we utilize the trained closeset model to generate pseudo labels for them. Open-set samples will get noisy close-set labels. We then cyclically train two FER models from scratch with the pseudo labels and utilize attention map consistency loss (Cons.) to prevent the model from memorizing the noisy close-set labels. 
Each model selects clean samples for another model and teaches each other cyclically. We also utilize a cyclical learning rate (lr) to create an ensemble of models for better separation of close-set and open-set samples. After training, the open-set samples have large classification (Cls.) loss while close-set samples have small Cls. loss. Open-Set Recognition There are two main streams of open-set recognition methods, based on the type of models that they use: discriminative models and generative models (Geng, Huang, and Chen 2020). The first stream typically employs K+1 way classifiers to discriminate between close-set and open-set data (Scheirer et al. 2012; Bendale and Boult 2015, 2016; Wang et al. 2021; Zhou, Ye, and Zhan 2021). For example, Bendale et al. (Bendale and Boult 2016) replace the softmax layer with OpenMax and calibrate the output probability with Weibull distribution. Wang et al. (Wang et al. 2021) utilize an energy-based K+1 way softmax classifier for open-world setting. Zhou et al. (Zhou, Ye, and Zhan 2021) prepare for the unknown classes through learning placeholders for both data and classifier. The second stream of works leverages generative models to generate openset data and predict the distribution of novel classes (Ge et al. 2017; Neal et al. 2018; Oza and Patel 2019; Perera et al. 2020; Moon et al. 2022). C2AE (Oza and Patel 2019) utilizes class-conditioned auto-encoders and divides the training procedure into two sub-tasks. Perera et al. (Perera et al. 2020) use self-supervision and augment the inputs with generative models to solve the task. The most recent work by Moon et al. (Moon et al. 2022) generates openset samples with diverse difficulty levels to simulate realworld conditions. While open-set recognition methods have demonstrated good performance when inter-class distances are large, they are not suitable for open-set FER with small inter-class distances. Methods that work well under small inter-class distances are needed. Problem Definition Facial expression recognition (FER) models are trained with Dtr = {(xi, yi)}N i=1, where xi is a training image and yi ∈Y = {1, . . . , K} is the corresponding label, N and K represent the number of samples and classes. However, in the real-world test set D, there are novel classes, i.e., D = {(xi, yi)}N i=1, yi ∈ˆY = {1, . . . , K, K + 1}, where class K + 1 is the novel category which may contain one or more novel classes. We aim to detect the samples with the label K + 1, while classifying the samples with label yi ∈Y . However, as FER data are all human faces, the open-set data is very close to the close-set data. Furthermore, FER models have the overconfidence problem, which makes the confidence scores of both close-set and open-set samples close to 1 and drastically degrades the performance of openset recognition methods. Thus, a more effective method is needed to solve open-set FER. Method We notice that there is useful information from the predicted pseudo labels of the trained close-set model, which have been neglected, shown in Fig. 2. Unlike the large inter-class distance datasets like CIFAR-10 where most open-set samples are predicted by the trained model into the most similar close-set class, in FER, open-set samples are predicted across all close-set classes, which are similar to the concept of symmetric noisy labels in the noisy label learning field. Symmetric noisy labels are easier to detect than asymmetric noisy labels, which only distribute to the most semantically similar class (Han et al. 
2018; Kim et al. 2019; Zhang et al. 2022a) like the pseudo labels in CIFAR-10. Thus, for the first time, we transform open-set FER into noisy label detection based on the above discussions.

Pipeline
The pipeline of our proposed method is shown in Fig. 3. Given a pre-trained close-set FER model f_{close} and the input test set D, which contains both close-set and open-set samples, we first utilize f_{close} to generate close-set pseudo labels for all the samples in D. We then cyclically train two FER models f_1 and f_2 from scratch with the close-set pseudo labels. Specifically, given images from D, we first apply random erasing (Zhong et al. 2020) to them; the erased images are denoted as x. We then flip x to get \tilde{x}. The classification loss is calculated only with x as

l_{cls} = -\frac{1}{N}\sum_{i=1}^{N} \log\frac{e^{W_{y_i} f_1(x_i)}}{\sum_{j}^{K} e^{W_{j} f_1(x_i)}}, (1)

where W_{y_i} is the y_i-th weight from the FC layer and f_1(x_i) is the output logits of x_i. Inspired by (Zhang et al. 2022b), we utilize attention map consistency to prevent the model from memorizing the wrong pseudo labels of open-set samples, leading to a large classification loss for open-set samples. The attention maps are computed by multiplying the weights of the FC layer with the features extracted from the last convolution layer, following CAM (Zhou et al. 2016). We denote the attention maps of x and \tilde{x} as M \in R^{N \times K \times H \times W} and \tilde{M}, where N, K, H, W represent the number of samples, expression classes, height, and width. The consistency loss is calculated based on the transformation consistency of attention maps according to (Zhang et al. 2022b), which regularizes the model to focus on the whole facial features and prevents the model from overfitting the wrong pseudo labels of open-set samples:

l_{cons} = \frac{1}{NKHW}\sum_{i=1}^{N}\sum_{j=1}^{K} \| M_{ij} - Flip(\tilde{M})_{ij} \|_2. (2)

The training loss for f_1 and f_2 is computed as follows:

l_{train} = l_{cls} + \lambda l_{cons}. (3)

We set the consistency weight \lambda as 5 across all experiments. We further introduce cycle training to improve the performance of open-set detection. First, we cyclically reset the learning rate to the initial learning rate every 10 epochs, inspired by (Huang et al. 2019), which is similar to an ensemble of several models with different states that helps detect noisy (open-set) samples. Second, we cyclically set the training of f_1 and f_2. Specifically, in the first step, we train f_1; then we utilize Gaussian Mixture Models (Li, Socher, and Hoi 2020) to model the classification loss of f_1 and select clean samples for the training of f_2, with a selection threshold of 0.5. In the second step, f_2 selects clean samples for the training of f_1. We repeat the two steps until the two models converge. After training, the open-set samples are associated with large classification loss values, which can be easily separated from close-set samples with small classification loss values.

Novelty and Contribution
We claim our novelty and contribution as introducing a new FER task and designing a new pipeline, which outperforms SOTA open-set recognition methods by large margins, along with our new discoveries and insights, rather than simply introducing a new technical method. We are the first to propose the open-set FER task based on recent works (Cowen et al. 2021; Bryant et al. 2022; Kollias 2023) and find that existing SOTA methods do not perform well under this task. Our discovery that pseudo labels of open-set FER samples are similar to symmetric noisy labels is novel.
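As a brief aside before continuing, the training objective in Eqs. (1)-(3) above can be summarized in a short sketch; it assumes a model that returns both the logits and CAM-style attention maps of shape (N, K, H, W), and it approximates the consistency term with a mean-squared error over the flipped maps, so it is an illustration rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def train_step(model, images, pseudo_labels, lam=5.0):
    """Classification + attention-consistency loss (Eqs. 1-3) on erased images x."""
    flipped = torch.flip(images, dims=[3])             # horizontal flip: x -> x~

    logits, att = model(images)                         # predictions and CAMs of x
    _, att_flip = model(flipped)                        # CAMs of the flipped input

    l_cls = F.cross_entropy(logits, pseudo_labels)      # Eq. 1, only on x

    # Eq. 2: flip the attention maps of x~ back and align them with those of x.
    l_cons = F.mse_loss(att, torch.flip(att_flip, dims=[3]))

    return l_cls + lam * l_cons                         # Eq. 3, with lambda = 5
```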
Inspired by that, we design a new pipeline and propose a new method including cycle training and attention map consistency to address open-set FER from a noisy label detection perspective, which has not been done before. Our approach outperforms SOTA open-set recognition methods by large margins. Though with relatively small technical contribution, we believe that the new discoveries and good performance are our main contributions. Experiments Datasets RAF-DB (Li, Deng, and Du 2017) is annotated with seven basic expressions by 40 trained human coders, including 12,271 images for training and 3,068 images for testing. FERPlus (Barsoum et al. 2016) is extended from FER2013 (Goodfellow et al. 2013), which consists of 28,709 training images and 3,589 test images collected by the Google search engine. We utilize the same seven basic classes as RAF-DB in our experiments. AffectNet (Mollahosseini, Hasani, and Mahoor 2017) is a large-scale FER dataset, which contains eight expressions. There are 286,564 training images and 4,000 test images. Implementation Details Following open-set recognition setting (Geng, Huang, and Chen 2020), open-set samples should be semantically different from close-set samples while they do not have the domain gap. Specifically, we construct close-set and openset from the above FER datasets. We set some classes as open-set classes and the rest are close-set classes following (Geng, Huang, and Chen 2020; Moon et al. 2022). For training, we utilize close-set samples of the train set. The test set is the original test set containing both open-set and close-set samples plus the remaining open-set samples of the train set. We utilize ResNet-18 as the backbone. The learning rate η is set to 0.0002 and we use Adam (Kingma and Ba 2014) optimizer with weight decay of 0.0001. The max training epoch Tmax is set to 40. As our method does not affect the classification accuracy of the original closeset FER performance, we mainly focus on the detection of open-set data. Close-set classification accuracy of different methods is shown in the Supp. material. We utilize two widely used metrics AUROC (Liang, Li, and Srikant 2018) and FPR@TPR95 (Hendrycks and Gimpel 2017) in the open-set recognition field, they both range from 0 to 1. AUROC is the area under the Receiver Operating Characteristic (ROC) curve, the higher the better, while FPR@TPR95 measures the false positive rate (FPR) when the true positive rate (TPR) is 0.95, the lower the better. Open-Set FER With One or Several Basic Classes The open-set recognition performance is reported in Table 1. The baseline method is MSP (Hendrycks and Gimpel 2017) utilizing softmax score to detect open-set samples. EOW (Wang et al. 2021), PROSER(PROS) (Zhou, Ye, and Zhan 2021), DIAS (Moon et al. 2022) are state-of-the-art open-set recognition methods. Results show that our method not only outperforms all other methods on the mean performance but also achieves the best performance with different open-set classes. Furthermore, the improvements brought by our method are significant. For example, the mean performance of the baseline on the RAF-DB dataset is 0.497 AUROC, which is similar to a random guess. 
There is AUROC lower than 0.5 as we maintain the range meaning of the softmax score across different experiments, a lower softmax score always means the sample is more like open-set The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 649 Metric AUROC (↑) FPR@TPR95 (↓) Metric AUROC (↑) Method Baseline EOW PROS DIAS Ours Baseline EOW PROS DIAS Ours Method Baseline EOW PROS DIAS Ours R Sur. 0.517 0.648 0.806 0.725 0.918 0.926 0.897 0.730 0.850 0.608 F Sur. 0.406 0.641 0.676 0.710 0.933 R Fea. 0.411 0.577 0.706 0.660 0.907 0.980 0.946 0.882 0.918 0.444 F Fea. 0.370 0.581 0.664 0.634 0.899 R Dis. 0.473 0.609 0.788 0.728 0.910 0.925 0.914 0.730 0.863 0.462 F Dis. 0.352 0.596 0.771 0.711 0.871 R Hap. 0.554 0.606 0.695 0.703 0.892 0.852 0.904 0.881 0.823 0.528 F Hap. 0.476 0.645 0.731 0.726 0.855 R Sad. 0.506 0.654 0.738 0.668 0.911 0.930 0.940 0.798 0.879 0.519 F Sad. 0.413 0.575 0.681 0.665 0.862 R Ang. 0.450 0.720 0.704 0.734 0.906 0.937 0.814 0.877 0.857 0.684 F Ang. 0.410 0.578 0.798 0.753 0.891 R Neu. 0.566 0.606 0.803 0.778 0.917 0.866 0.977 0.777 0.819 0.587 F Neu. 0.544 0.529 0.852 0.767 0.864 Mean 0.497 0.631 0.749 0.714 0.909 0.917 0.913 0.811 0.858 0.547 Mean 0.424 0.592 0.739 0.709 0.882 Table 1: The detection performance of state-of-the-art open-set recognition methods on open-set FER. We start with the oneclass open-set FER and utilize two common metrics AUROC (higher the better) and FPR@TPR95 (lower the better) for evaluation. The expression class listed on the left is the open-set class (Sur.: Surprise, Fea.: Fear, Dis.: Disgust, Hap.: Happiness, Sad.: Sadness, Ang.: Anger, Neu.: Neutral). ’R’ represents RAF-DB and ’F’ represents FERPlus. ’PROS’ is ’PROSER’ for short. Our method outperforms the state-of-the-art open-set recognition methods on open-set FER tasks with very large margins. Open class Baseline EOW PROSER DIAS Ours Sur.+Fea. 0.436 0.561 0.763 0.706 0.916 Fea.+Dis. 0.445 0.583 0.764 0.688 0.884 Dis.+Hap. 0.575 0.632 0.727 0.700 0.887 Hap.+Sad. 0.609 0.536 0.726 0.717 0.865 Sad.+Ang. 0.486 0.691 0.718 0.702 0.879 Ang.+Neu. 0.552 0.634 0.825 0.769 0.895 Sur.+Fea.+Dis. 0.497 0.588 0.769 0.731 0.893 Fea.+Dis.+Hap. 0.592 0.534 0.667 0.685 0.880 Dis.+Hap.+Sad. 0.649 0.624 0.743 0.723 0.815 Hap.+Sad.+Ang. 0.637 0.698 0.705 0.718 0.840 Sad.+Ang.+Neu. 0.630 0.750 0.829 0.802 0.883 Mean 0.555 0.621 0.749 0.722 0.876 Table 2: The detection performance (AUROC) of different methods with two or three open classes. Our method achieves the best results under all settings. samples. PROSER (discriminative method) and DIAS (generative method) improve the baseline to 0.749 and 0.714, respectively. Our method further improves the AUROC from around 0.7 to around 0.9, which is impressive. We further carry out experiments to validate the effectiveness of our method when the open-set data contains more than one class with results displayed in Table 2. Our method outperforms other methods under all settings. We find that the number of open-set classes has little effect on our method. For instance, our method achieves a mean AUROC of 0.876 with more than one open class, which is only slightly lower than 0.909 with one open class. Compound Classes and Different Classes In the real-world deployment, FER models will encounter compound expressions, which cannot be simply classified into the basic classes (Du, Tao, and Martinez 2014). We utilize all basic expression images of RAF-DB as the closeset and all compound expression images of RAF-DB as the open set. 
Results in Table 3 illustrate that the performance of Dataset Baseline EOW PROSER DIAS Ours Compound 0.665 0.679 0.648 0.674 0.771 AffectNet 0.552 0.507 0.610 0.551 0.674 Table 3: The AUROC of different methods on compound classes and AffectNet. Compound classes are very similar to the basic classes and AffectNet is a large-scale FER dataset with lots of label noises. They both degrade the open-set detection performance, while our method still achieves the best performance. all methods drops compared with open basic classes. Compound classes usually contain several basic expressions, which are more similar to close-set classes than unseen basic classes. Though detecting compound expressions is harder, our method still achieves the best performance of 0.771 AUROC and outperforms other methods with large margins. To simulate the situation when different classes are encountered. We use the seven basic classes of AffectNet as close-set classes and the contempt class of AffectNet as the open-set class. In Table 3, our method reaches the best AUROC of 0.674. The detection performance of all methods drops as the labels of AffectNet are very noisy, leading to a low close-set classification accuracy of around 60%, which is significantly lower than the accuracy of around 90% achieved on RAF-DB. As claimed by work (?), a good closeset classifier leads to high detection performance, which means the performance drops of all methods on AffectNet are reasonable. Online Application for One Given Sample As our method needs to train a model from scratch, one may ask whether our method is suitable for the online detection of only one given test sample at a time. We show that once trained, our method can be utilized for online detection as the test time classification loss can still indicate whether a sample is open-set. The experiment details are in the Supp. material and the results are shown in Table 4. We train the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 650 Open class AUROC Open class AUROC Surprise 0.918/0.894 Sadness 0.911/0.882 Fear 0.907/0.870 Anger 0.906/0.898 Disgust 0.910/0.899 Neutral 0.917/0.889 Happiness 0.892/0.846 Mean 0.909/0.883 Table 4: Offline/Online detection performance of our method. The offline method achieves better performance, while it is slightly less efficient than the online method. The mean AUROC drops 2.6%, which is acceptable, as our online version still achieves better AUROC of 0.883 compared with other state-of-the-art open-set recognition methods, whose best AUROC is 0.749. Figure 4: Confidence scores of different methods. AUROC of each method is marked below. The baseline method fails as FER data have small inter-class distances, making openset data have the same range of confidence scores as closeset data. Close-set and open-set data are separated by DIAS and PROSER while they still overlap a lot. Our method transforms open-set FER to noisy label detection and effectively separates close-set and open-set samples. model only once and evaluate it on the fly to simulate the online detection of one test sample at a time. The mean AUROC of seven classes drops from 0.909 to 0.883 (2.6%), which is acceptable. Though with slightly lower AUROC, the online version is more efficient as we do not need to retrain our model each time we encounter new open-set data. Note that the online version achieves an AUROC of 0.883, which still outperforms other state-of-the-art open-set recognition methods as their best AUROC is 0.749 in Table 1. 
Further Analyses Visualization of Confidence Scores We visualize the confidence scores, which are utilized to detect open-set data in Fig. 4. The confidence score of our method is the classification loss value. We normalize confidence scores to [0, 1] to make comparisons. The results in Fig. 4 demonstrate that when no open-set recognition methAttention Cycle LR Cycle Training AUROC 0.517 ✓ 0.885 ✓ ✓ 0.912 ✓ ✓ ✓ 0.918 Table 5: The ablation study of our method. Figure 5: Hyperparameters analyses of our method on ResNet-18 and ResNet-50. ResNet-50 generally has better performance than ResNet-18. Our method is not sensitive to hyperparameters as AUROC slightly changes from 0.87 to 0.93. The best consistency weight is 5 and the best training epoch number is 40. ods are used, the confidence scores of close-set and open-set samples overlap considerably. Although open-set recognition methods such as DIAS (generative) and PROSER (discriminative) perform better than baseline, they still have significant overlap. In contrast, our method achieves the best performance and effectively separates close-set and openset samples. This is because we view open-set FER from a unique perspective of noisy label detection. By utilizing the useful information from pseudo labels, which implicitly encode the information of close/open set, we are able to mitigate the overconfidence problem. Ablation and Hyperparameter Study To show the effectiveness of each of the modules in our method, we carry out an ablation study on RAF-DB utilizing surprise as the open-set class. The AUROC in Table 5 illustrates that the most effective module in our method is the attention map consistency (Attention), which can prevent the FER model from memorizing the wrong pseudo labels of open-set samples. Cycle learning rate (LR) improves the performance by setting the learning rate cyclically to make an ensemble to detect wrong pseudo labels (Huang et al. 2019). Cycle training further utilizes two FER models to select clean samples to iteratively teach each other for better performance. Each of the introduced modules contributes to the good performance of our method. Our method has two main hyperparameters, which are the weight of consistency loss and the training epoch number. We carry out experiments with the consistency weight ranges from 1 to 10 and training epoch number ranges from 10 to 50, as shown in Fig. 5. Overall, our method is not sensitive to the two hyperparameters as the AUROC only The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 651 Figure 6: We design a comparison group to study the effect of the pseudo-label distribution. Our motivation is that the pseudo labels of open-set samples distribute across all closeset classes. The comparison group has the pseudo labels of open samples belonging to the most similar close-set class. The detection performance drops from 0.918 to 0.803. This illustrates that our method works well because the pseudo labels distribute across all close-set classes instead of centering to the class with the largest number of pseudo labels (happiness in the comparison group). changes from 0.87 to 0.93 under two different backbones and all different hyperparameters. ResNet-50 performs better than ResNet-18. However, in order to fairly compare with other methods, we report the performance using ResNet-18 in Table 1. 
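The cyclical learning rate and per-sample loss tracking referenced in this ablation (in the spirit of O2U-Net) can be sketched as follows; the schedule shape, its bounds, and the cycle-averaged ranking are illustrative assumptions rather than the exact training recipe.

```python
import math
import numpy as np

def cyclic_lr(step, steps_per_cycle, lr_min=1e-4, lr_max=1e-2):
    """Cosine-shaped cyclical learning rate (illustrative bounds)."""
    t = (step % steps_per_cycle) / steps_per_cycle
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(2 * math.pi * t))

class SampleLossTracker:
    """Accumulate per-sample training losses across learning-rate cycles.

    Samples whose loss stays high over many cycles are likely to carry
    wrong (open-set) pseudo labels; averaging over cycles provides the
    ensemble effect mentioned in the ablation.
    """
    def __init__(self, num_samples):
        self.loss_sum = np.zeros(num_samples)
        self.count = np.zeros(num_samples)

    def update(self, sample_indices, batch_losses):
        # batch_losses: per-sample (unreduced) losses for this mini-batch.
        self.loss_sum[sample_indices] += batch_losses
        self.count[sample_indices] += 1

    def suspect_ranking(self):
        avg = self.loss_sum / np.maximum(self.count, 1)
        return np.argsort(-avg)   # most suspicious (likely noisy) samples first
```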
As for the consistency weight, the performance increases and then decreases as a small consistency weight is not enough to prevent the model from memorizing the openset (noisy) samples and a very large consistency weight impedes the optimization of classification loss. We set the consistency weight as 5 in all our experiments. Training epoch number has little effect on the performance. We simply set the training epoch number as 40 in all our experiments. Analyses of Pseudo Label Distributions To dig deeper into why our method works well in open-set FER, we provide more analysis of the pseudo labels. We argue that our method is effective because it utilizes information from the pseudo labels which are neglected by previous methods. We first plot the distribution of the predicted pseudo labels of open-set samples in Fig. 6. We observe that the pseudo labels of the surprise (open-set) class are distributed across all close-set classes. To exclude the influence of the happiness class with the largest number of pseudo labels, we design a comparison group with all the pseudo labels of open-set (surprise) samples lying in the happiness class. Shown in Fig. 6, the open-set recognition performance drops from 0.918 to 0.803. We observe that though some open-set samples are correctly detected, there are many open-set samples confused with close-set samples. The results illustrate that semantically similar label noise, e.g., labeling all surprise samples to happiness, is harder to detect. They also demonstrate that our method works well because the pseudo labels distribute across all classes inFigure 7: The learned features of baseline and our method. The features are shown with the latent truth. Open-set features are marked as red. The learned open-set features of the baseline method are mixed with close-set features, while our method does not overfit the wrong pseudo labels of open-set samples and separates open-set features from close-set features. stead of centering on one class with the largest number of pseudo labels. Visualization of Learned Features We utilize t-SNE (Van der Maaten and Hinton 2008) to visualize the learned features. The results are shown in Fig. 7, which implies that the baseline method memorizes the wrong pseudo labels and the learned open-set features (marked as red) are mixed with other close-set features after training. However, our method prevents the model from memorizing the wrong labels of open-set samples, which avoids pushing open-set features into the close-set feature clusters. After training, the model learns useful features from the given samples and set the open-set features apart from the other close-set features. Conclusion We propose a new topic named open-set facial expression recognition (FER) to address the overconfidence problem of FER models and maintain their ability to reject unseen samples. As FER data have small inter-class distances, existing open-set recognition methods are not well suited for openset FER. However, we find that due to this characteristic, the pseudo labels assigned to open-set samples are distributed across all close-set classes, which is similar to the concept of symmetric noisy labels. Inspired by that, we propose a novel method to convert open-set FER into a noisy label detection problem. Utilizing extra information of pseudo labels and together with cycle training and attention map consistency, our method gets rid of the overconfidence problem of softmax scores and effectively detects the open-set samples. 
Extensive experiments on different open-set FER datasets and open-set classes show that our method outperforms state-ofthe-art open-set recognition methods by large margins. We believe that our work will enlighten more research works on the relationship between the open-set recognition field and the noisy label detection field. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 652 Acknowledgments We sincerely thank SPC who find our work valuable and the reviewers who have given us lots of valuable suggestions. This work was supported in part by the National Natural Science Foundation of China under Grant No.62276030 and 62236003, in part by the BUPT Excellent Ph.D. Students Foundation No.CX2023111 and in part by scholarships from China Scholarship Council (CSC) under Grant CSC No.202206470048. References Barsoum, E.; Zhang, C.; Ferrer, C. C.; and Zhang, Z. 2016. Training deep networks for facial expression recognition with crowd-sourced label distribution. In Proceedings of the 18th ACM international conference on multimodal interaction, 279–283. Bendale, A.; and Boult, T. 2015. Towards open world recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1893–1902. Bendale, A.; and Boult, T. E. 2016. Towards open set deep networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1563–1572. Bryant, D.; Deng, S.; Sephus, N.; Xia, W.; and Perona, P. 2022. Multi-Dimensional, Nuanced and SubjectiveMeasuring the Perception of Facial Expressions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 20932–20941. Cowen, A. S.; Keltner, D.; Schroff, F.; Jou, B.; Adam, H.; and Prasad, G. 2021. Sixteen facial expressions occur in similar contexts worldwide. Nature, 589(7841): 251–257. Du, S.; Tao, Y.; and Martinez, A. M. 2014. Compound facial expressions of emotion. Proceedings of the national academy of sciences, 111(15): E1454–E1462. Farzaneh, A. H.; and Qi, X. 2021. Facial expression recognition in the wild via deep attentive center loss. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, 2402–2411. Ge, Z.; Demyanov, S.; Chen, Z.; and Garnavi, R. 2017. Generative openmax for multi-class open set classification. arXiv preprint arXiv:1707.07418. Geng, C.; Huang, S.-j.; and Chen, S. 2020. Recent advances in open set recognition: A survey. IEEE transactions on pattern analysis and machine intelligence, 43(10): 3614–3631. Goodfellow, I. J.; Erhan, D.; Carrier, P. L.; Courville, A.; Mirza, M.; Hamner, B.; Cukierski, W.; Tang, Y.; Thaler, D.; Lee, D.-H.; et al. 2013. Challenges in representation learning: A report on three machine learning contests. In International conference on neural information processing, 117– 124. Springer. Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Han, B.; Yao, Q.; Yu, X.; Niu, G.; Xu, M.; Hu, W.; Tsang, I.; and Sugiyama, M. 2018. Co-teaching: Robust training of deep neural networks with extremely noisy labels. Advances in neural information processing systems, 31. Hendrycks, D.; and Gimpel, K. 2017. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks. In International Conference on Learning Representations. Huang, J.; Qu, L.; Jia, R.; and Zhao, B. 2019. O2u-net: A simple noisy label detection approach for deep neural networks. 
In Proceedings of the IEEE/CVF international conference on computer vision, 3326–3334. Kim, Y.; Yim, J.; Yun, J.; and Kim, J. 2019. Nlnl: Negative learning for noisy labels. In Proceedings of the IEEE/CVF international conference on computer vision, 101–110. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Kollias, D. 2023. Multi-Label Compound Expression Recognition: C-EXPR Database & Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5589–5598. Li, H.; Wang, N.; Ding, X.; Yang, X.; and Gao, X. 2021. Adaptively learning facial expression representation via cf labels and distillation. IEEE Transactions on Image Processing, 30: 2016–2028. Li, H.; Wang, N.; Yang, X.; Wang, X.; and Gao, X. 2022. Towards semi-supervised deep facial expression recognition with an adaptive confidence margin. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4166–4175. Li, J.; Socher, R.; and Hoi, S. C. 2020. Dividemix: Learning with noisy labels as semi-supervised learning. arXiv preprint arXiv:2002.07394. Li, S.; and Deng, W. 2020. Deep facial expression recognition: A survey. IEEE transactions on affective computing, 13(3): 1195–1215. Li, S.; Deng, W.; and Du, J. 2017. Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2852–2861. Liang, S.; Li, Y.; and Srikant, R. 2018. Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks. In International Conference on Learning Representations. Liu, X.; Zhong, Y.; Zhang, Y.; Qin, L.; and Deng, W. 2023. Enhancing generalization of universal adversarial perturbation through gradient aggregation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 4435–4444. Mollahosseini, A.; Hasani, B.; and Mahoor, M. H. 2017. Affectnet: A database for facial expression, valence, and arousal computing in the wild. IEEE Transactions on Affective Computing. Moon, W.; Park, J.; Seong, H. S.; Cho, C.-H.; and Heo, J.P. 2022. Difficulty-aware simulator for open set recognition. In European Conference on Computer Vision, 365– 381. Springer. Neal, L.; Olson, M.; Fern, X.; Wong, W.-K.; and Li, F. 2018. Open set learning with counterfactual images. In Proceedings of the European Conference on Computer Vision (ECCV), 613–628. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 653 Nguyen, A.; Yosinski, J.; and Clune, J. 2015. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE conference on computer vision and pattern recognition, 427–436. Oza, P.; and Patel, V. M. 2019. C2ae: Class conditioned auto-encoder for open-set recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2307–2316. Perera, P.; Morariu, V. I.; Jain, R.; Manjunatha, V.; Wigington, C.; Ordonez, V.; and Patel, V. M. 2020. Generativediscriminative feature representations for open-set recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11814–11823. Ruan, D.; Yan, Y.; Lai, S.; Chai, Z.; Shen, C.; and Wang, H. 2021. Feature decomposition and reconstruction learning for effective facial expression recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7660–7669. Scheirer, W. 
J.; de Rezende Rocha, A.; Sapkota, A.; and Boult, T. E. 2012. Toward open set recognition. IEEE transactions on pattern analysis and machine intelligence, 35(7): 1757–1772. Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of machine learning research, 9(11). Wang, Y.; Li, B.; Che, T.; Zhou, K.; Liu, Z.; and Li, D. 2021. Energy-based open-world uncertainty modeling for confidence calibration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9302–9311. Zhang, Y.; Deng, W.; Cui, X.; Yin, Y.; Shi, H.; and Wen, D. 2022a. Model and Data Agreement for Learning with Noisy Labels. arXiv preprint arXiv:2212.01054. Zhang, Y.; Deng, W.; and Zheng, L. 2023. Unsupervised Evaluation of Out-of-distribution Detection: A Data-centric Perspective. arXiv preprint arXiv:2302.08287. Zhang, Y.; Li, Y.; Qin, L.; Liu, X.; and Deng, W. 2023. Leave No Stone Unturned: Mine Extra Knowledge for Imbalanced Facial Expression Recognition. arXiv preprint arXiv:2310.19636. Zhang, Y.; Wang, C.; and Deng, W. 2021. Relative uncertainty learning for facial expression recognition. Advances in Neural Information Processing Systems, 34: 17616– 17627. Zhang, Y.; Wang, C.; Ling, X.; and Deng, W. 2022b. Learn from all: Erasing attention consistency for noisy label facial expression recognition. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVI, 418–434. Springer. Zhong, L.; Liu, Q.; Yang, P.; Liu, B.; Huang, J.; and Metaxas, D. N. 2012. Learning active facial patches for expression analysis. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2562–2569. IEEE. Zhong, Z.; Zheng, L.; Kang, G.; Li, S.; and Yang, Y. 2020. Random erasing data augmentation. In Proceedings of the AAAI conference on artificial intelligence, volume 34, 13001–13008. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; and Torralba, A. 2016. Learning Deep Features for Discriminative Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Zhou, D.-W.; Ye, H.-J.; and Zhan, D.-C. 2021. Learning placeholders for open-set recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4401–4410. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 654 | 2024 | 73 |
18,551 | Hyperspectral Image Reconstruction via Combinatorial Embedding of Cross-Channel Spatio-Spectral Clues Xingxing Yang1, Jie Chen1*, Zaifeng Yang2 1Department of Computer Science, Hong Kong Baptist University 2Institute of High Performance Computing, Agency for Science Technology and Research [email protected], [email protected], yang [email protected] Abstract Existing learning-based hyperspectral reconstruction methods show limitations in fully exploiting the information among the hyperspectral bands. As such, we propose to investigate the chromatic inter-dependencies in their respective hyperspectral embedding space. These embedded features can be fully exploited by querying the inter-channel correlations in a combinatorial manner, with the unique and complementary information efficiently fused into the final prediction. We found such independent modeling and combinatorial excavation mechanisms are extremely beneficial to uncover marginal spectral features, especially in the long wavelength bands. In addition, we have proposed a spatiospectral attention block and a spectrum-fusion attention module, which greatly facilitates the excavation and fusion of information at both semantically long-range levels and finegrained pixel levels across all dimensions. Extensive quantitative and qualitative experiments show that our method (dubbed CESST) achieves SOTA performance. Code for this project is at: https://github.com/AlexYangxx/CESST. Introduction Combining spectroscopy and image processing techniques, the hyperspectral imaging system (HIS) records rich spectral information along long-range-distributed spectral bands as well as spatial information. In the past few years, HIS has emerged as a powerful tool in remote sensing (Yuan, Zheng, and Lu 2017), medical image processing (Lu and Fei 2014), agriculture (Ad˜ao et al. 2017), etc. Nonetheless, HIS usually requires a long acquisition time and captures images with limited spatial resolution, which constrained its applications, especially in dynamic or real-time scenarios (Arad et al. 2020). To facilitate and promote the applications of HIS, recent studies have explored efficient data captures, e.g., snapshot compressive imaging system that records 3D hyperspectral cube into the 2D measurement (Channing 2022; Cai et al. 2022a). However, these methods require expensive, bulky equipment and complicated reconstruction processing for high-fidelity 3D hyperspectral cubes. To this end, an increased interest in the hyperspectral image (HSI) reconstruction from RGB images using deep learning methods has *Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. HSCNN++(Shi et al. 2018) HRNet+(Zhao et al. 2020) Restormer(Zamir et al. 2022) EDSR(Lim et al. 2017) AWAN(Li et al. 2020) HDNet(Hu et al. 2022) HINet(Chen et al. 2021b) MIRNet(Zamir et al. 2020) MPRNet(Zamir et al. 2021) MST-L(Cai et al. 2022b) MST++(Cai et al. 2022c) CESST (Our Method) Figure 1: HSI reconstruction on NTIRE2022 HSI dataset (Arad et al. 2022). emerged, which shows great potential due to the handy RGB image capture devices and satisfactory HSI reconstruction performance (Yan et al. 2020; Cai et al. 2022c). Conventional hyperspectral reconstruction methods are mainly model-based, e.g., sparse coding (Arad and BenShahar 2016), which fails in exploring the intrinsic spectral relations between input RGB images and the corresponding hyperspectral images and suffers from representation capacity. 
Over the years, an enormous amount of deep-learningbased research has been developed that mainly focuses on deep convolutional neural networks (CNNs) in an end-toend manner (Wang et al. 2018a, 2019). However, CNNs still fail to capture long-range dependencies and rely on delicately designed modules. Recently, vision transformer (Liu et al. 2021) (ViT) has been introduced into computer vision from natural language processing (NLP) and shows great potential in learning longrange dependencies and non-local self-similarities. However, existing frameworks for HSI reconstruction have two main limitations: (i) the complexity of the standard global transformer (Dosovitskiy et al. 2020) is quadratic to the spatial dimension, which occupies substantial computational reThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6567 RGB Green channel Red channel Blue channel (a) Channel differences. (b) RMSE loss. Figure 2: Limitation of existing methods. Fig. 2a shows an example of visual differences between R, G, and B channels of the same RGB image. As can be seen, the red channel contains more texture energy than the other two channels, and most existing methods show compromised performance over the long-wavelength range. Fig. 2b shows the RMSE loss across all bands in the validation set. All existing methods show a dramatic rise of RMSE loss in the long-wavelength bands, while our method slows such deterioration and achieves the lowest RMSE loss across all bands. sources and thus limits its applications for high-resolution inputs. (ii) Although the Swin-transformer (Liu et al. 2021) and spectral-wise transformer (Cai et al. 2022c) achieve linear complexity to the spatial dimension via window-based multi-head self-attention (MSA) or calculating spectral-wise self-attention maps, they all focus on one dimension of the 3D hyperspectral cube, i.e., spatial or spectral dimension. Most importantly, as shown in Fig. 2, all these existing reconstruction methods treat the problem trivially by directly studying the correlations in the hyperspectral space. Specifically, these methods brutally combine and project the features (with different energy and noise characteristics) from RGB channels into the high dimensional spectral space in the early stages, which would inevitably sacrifice some critical information from the R, G, or B channels. In this study, we propose a novel hyperspectral image reconstruction framework that excavates the unique and complementary information among the RGB input channels in a Combinatorial manner for efficient Embedding of SpatioSpectral clues based on a Transformer structure (CESST), which achieves the best PSNR-Params performance compared with SOTA methods in Fig. 1. The novelty and technical contributions are generalized as follows: • We propose a novel framework for hyperspectral image reconstruction, which first fully excavates the intrachannel spatio-spectral features in the projected highdimensional embedding space before inter-channel fusion. Such channel-wise independent modeling procedure ensures unique local spectral features are well uncovered and preserved; • We propose a novel Spectrum-Fusion Attention Module (SFAM) that exhaustively queries and explores crosschannel correlations in a combinatorial manner via six parallel transformer branches. 
SFAM fully excavates complementary information for comprehensive interchannel fusion; • An efficient plug-and-play spatio-spectral attention block (SSAB) is designed to simultaneously extract spatiospectral features at both semantically long-range levels and fine-grained pixel levels across all dimensions, while keeping the complexity linear to the spatial dimension; • Both quantitative and qualitative experiments demonstrate that our CESST framework significantly outperforms SOTA methods while requiring fewer Parmas. Related Work Hyperspectral Image Reconstruction HSI reconstruction methods can mainly be categorized into two groups: recover 3D HSI cubes from corresponding 2D measurements recorded by SCI systems(Huang et al. 2021; Miao et al. 2019; Wang et al. 2020, 2016; Meng, Ma, and Yuan 2020), or recover from corresponding RGB images(Shi et al. 2018; Yan et al. 2020; Cai et al. 2022c; Akhtar and Mian 2018). In terms of the former methodology, many efforts(Huang et al. 2021; Miao et al. 2019; Wang et al. 2020; Meng, Ma, and Yuan 2020) focus on the coded aperture snapshot spectral imaging (CASSI) system(Meng, Ma, and Yuan 2020; Wagadarikar et al. 2008). However, SCI systems are often expensive, which limits their applications. Compared with the former, given an HSI, the corresponding RGB image can be generated using its camera response function, so recovering HSI from its corresponding RGB image is much cheaper. Thus this topic has significant research The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6568 and practical value. In this work, we mainly focus on the latter one. Two basic approaches are currently being adopted in research into HSI reconstruction from RGB images: modelbased and deep learning-based methods. Most model-based methods (Arad and Ben-Shahar 2016; Aeschbacher, Wu, and Timofte 2017; Jia et al. 2017; Robles-Kelly 2015) mainly focus on using hand-crafted priors to conduct spectrum interpolation along the channel dimension. For example, Arad et al. (Arad and Ben-Shahar 2016) proposed to address this issue by leveraging hyperspectral prior to creating a sparse dictionary of HSIs and their corresponding RGB projections. Robles et al. (Robles-Kelly 2015) further proposed to employ color and texture information to assist the reconstruction process subject to the material properties of the objects in the scene. However, these model-based methods rely heavily on hand-crafted priors and suffer from poor representation capacities. Meanwhile, they do not take the spatial context into consideration. Recently, inspired by the rapid progress of deep learning in image restoration (Tu et al. 2022; Helminger et al. 2021; Chen et al. 2022; Shao et al. 2020), CNNs have been widely exploited to learn the implicit mapping relation from RGB to HSI (Wang et al. 2018a, 2019; Shi et al. 2018; Fubara, Sedky, and Dyke 2020), which learn the spatial contextual information in a statistic sense. For instance, HSCNN (Xiong et al. 2017) proposed to upsample the input RGB image along the spectral dimension and learn the corresponding enhanced HSI using deep residual convolutional blocks. Alvarez-Gila et al. (Alvarez-Gila, Van De Weijer, and Garrote 2017) further proposed a conditional generative adversarial framework to deal with the paireddata insufficiency. Many other methods have produced impressive results by designing delicate architectures, including UNet (Can and Timofte 2018), Resnet (Stiebel et al. 2018), and self-attention mechanisms (Wang et al. 2020). 
However, these CNN-based methods show limitations in capturing long-range inter-dependencies and non-local selfsimilarities. Vision Transformer Since the vision Transformer (ViT) (Dosovitskiy et al. 2020) was first introduced into vision tasks, there has been a wave of enthusiasm due to its strength in capturing longrange correlations between spatial contexts. Since the complexity of standard global transformers (Chen et al. 2021a; Dosovitskiy et al. 2020) is quadratic to the spatial dimension of input images, many researchers introduce the localprocessing idea of CNNs into transformer blocks (Liu et al. 2021; Zhou et al. 2021) to reduce the computational burden. For instance, Liu et al. proposed to leverage local windowbased MSA, whose computational complexity is linear to the spatial dimension. Cai et al. (Cai et al. 2022b) further proposed a spectral-wise MSA to calculate the selfattention map along the channel dimension for HSI reconstruction. Nonetheless, neither spatial window-based MSA nor spectral-wise MSA considers spatial and spectral information, limiting the representation capacity. The Proposed Method Motivation. Existing frameworks (Cai et al. 2022c; Wang et al. 2019; Shi et al. 2018; Fubara, Sedky, and Dyke 2020) project the RGB image directly to the high-dimensional hyperspectral space in an early stage. Such brutal transformation sacrifices potentially crucial intra-channel features, as shown in Fig. 2, and it would be more difficult to learn from the inter-channel correlations in subsequent stages. As such, we propose to first fully excavate the intra-channel spatiospectral features in the projected high-dimensional embedding space before inter-channel fusion, ensuring local spectral features are well uncovered and preserved. Network Architecture We propose a multi-scale encoder-decoder architecture for HSI reconstruction, which has three layers of similar structures as shown on the top row of Fig. 3, with each layer focusing on different scales (full, half, and quarter sizes). At each scale, three encoder-decoder feature extraction blocks (FEBs) are designed to learn the contextual features of each channel independently (e.g., R, G, or B). Unlike the other methods that brutally combine and project the RGB channels into the high-dimensional spectral space in the early stages, our channel-wise independent modeling procedure ensures unique local spectral features are well uncovered and preserved. FEBs adopt UNet (Yang et al. 2021) as the backbone to extract both contextual and spectral features crucial for spectrum reconstruction. Specifically, as shown in Fig. 3 (b), each FEB comprises two encoder blocks, one bottleneck block, and two decoder blocks. Each block consists of a spatio-spectral attention block (SSAB), shown in Fig. 3 (c). Subsequently, a spectrum-fusion attention module (SFAM), illustrated in Fig. 3 (a), is cascaded to the output of the three FEBs. The SFAM exhaustively queries and explores cross-channel correlations in a combinatorial manner via six parallel transformer branches and comprehensively fuses the complementary information from R, G, and B channels. Finally, inspired by (Zamir et al. 2021), a supervised spectrum-consistency module (SCCM) is cascaded to generate spectrum-consistent predictions as well as crossscale features with the supervision of ground-truth HSI signals (as shown in Fig. 3 (d)), which is only embedded in the middle and bottleneck branches. 
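To make the data flow described above concrete, the sketch below outlines a single scale: each of the R, G, and B channels is embedded by its own FEB before the three hyperspectral features are fused. FEB and SFAM are placeholders standing in for the modules defined in the following subsections; this is a schematic, not the released implementation.

```python
import torch
import torch.nn as nn

class SingleScaleBranch(nn.Module):
    """Schematic single-scale data flow: one FEB per input channel,
    followed by the spectrum-fusion module over the three embeddings."""

    def __init__(self, feb_cls, sfam_cls, out_bands=31):
        super().__init__()
        # Independent UNet-style feature extraction blocks, so that the
        # R, G, and B channels are modeled in their own embedding spaces.
        self.feb_r = feb_cls(in_ch=1, out_ch=out_bands)
        self.feb_g = feb_cls(in_ch=1, out_ch=out_bands)
        self.feb_b = feb_cls(in_ch=1, out_ch=out_bands)
        self.sfam = sfam_cls(bands=out_bands)

    def forward(self, rgb):                        # rgb: (B, 3, H, W)
        r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
        f_r = self.feb_r(r)                        # (B, 31, H, W) each
        f_g = self.feb_g(g)
        f_b = self.feb_b(b)
        return self.sfam(f_r, f_g, f_b)            # fused hyperspectral cube
```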
The cross-scale features provide informative guidance from a lower scale to a larger scale, thus formulating a coarse-to-fine reconstruction (indicated by the light blue lines in Fig. 3).

Figure 3: Illustration of the proposed CESST framework, which consists of three layers of similar structures as shown on the top row, which represents the top original resolution layer. A middle layer and a bottleneck layer share similar structures but with slight differences.

Spatio-Spectral Attention Block
HSI contains plentiful spatio-spectral clues; however, existing CNN-based feature extraction blocks struggle to model the non-local self-similarities. Meanwhile, transformer-based blocks only take one-dimensional features into account (i.e., spatial (Liu et al. 2021) or spectral (Cai et al. 2022c)). To address these issues, we propose a dual-dimensional transformer-based feature extraction block embedded in the FEB, denoted as spatio-spectral attention block (SSAB), to extract both spatial and spectral features and increase the learning capacity. As shown in Fig. 3 (c), the SSAB consists of a parallel spatial-MSA and spectral-MSA, which calculates both the spatial multi-head self-attention and the spectral multi-head self-attention in parallel, and then feeds both features to enhance cross-dimensional interaction. Note that our proposed spatial-MSA is different from conventional window-based MSA (Liu et al. 2021), which suffers from limited receptive fields within non-overlapping position-specific windows. As shown in Fig. 4, our spatial-MSA consists of a normal window-based MSA, followed by a shuffle-window MSA, to build long-range cross-window interactions. The main difference between conventional window-based MSA and shuffle-window MSA is the spatial shuffle mechanism. To be specific, we assume a window-based MSA with window size M whose input has N tokens; we first reshape the output spatial dimension into (M, N/M), transpose and then flatten it back as the input of the next layer. This operation puts the tokens from distant windows together and helps build long-range cross-window connections. Note that spatial shuffle requires the spatial alignment operation to adjust the spatial tokens into the original positions for spatially aligning features and image content.
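The spatial shuffle, and the alignment operation that undoes it (detailed next), amount to a reshape-transpose-flatten on the token dimension. A minimal sketch, assuming the token count N is divisible by the window size M:

```python
import torch

def spatial_shuffle(tokens, window_size):
    """tokens: (B, N, C). Mix tokens coming from distant windows.

    Reshape the N tokens to (M, N/M), transpose, and flatten, so that the
    following window-based MSA attends across formerly distant windows.
    """
    B, N, C = tokens.shape
    M = window_size
    assert N % M == 0, "token count must be divisible by the window size"
    return tokens.reshape(B, M, N // M, C).transpose(1, 2).reshape(B, N, C)

def spatial_alignment(tokens, window_size):
    """Inverse permutation: reshape to (N/M, M), transpose, and flatten,
    restoring every token to its original spatial position."""
    B, N, C = tokens.shape
    M = window_size
    return tokens.reshape(B, N // M, M, C).transpose(1, 2).reshape(B, N, C)

# Sanity check that alignment undoes the shuffle:
# x = torch.randn(2, 64, 31)
# assert torch.equal(spatial_alignment(spatial_shuffle(x, 8), 8), x)
```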
The spatial alignment operation first reshapes the output spatial dimension into (N/M, M), transposes it, and then flattens it, which is an inverse process of the spatial shuffle. Moreover, considering that the "grid issue" widely exists when using window-based transformers to deal with high-resolution images, we introduce a depth-wise convolution layer between the normal window-based MSA and shuffle-window MSA via a residual connection. The kernel size of the convolution layer is the same as the window size. On the other hand, the spectral-MSA is mostly inspired by (Zamir et al. 2022; Cai et al. 2022c), which treats the spectral feature map as a token and thus focuses on more non-local spectral self-similarities. We further summarize the main properties of existing transformer-based blocks, including ViT (Dosovitskiy et al. 2020) (global MSA), Swin-Transformer (Liu et al. 2021) (Window-based MSA), Restormer (Zamir et al. 2022) (spectral MSA), MST++ (Cai et al. 2022c) (spectral MSA), and our CESST (spatio-spectral MSA) in Table 1. Our CESST computes global receptive fields and models both spatial and spectral self-similarities with linear computational costs.

Table 1: Comparison of the properties of different MSAs.

| MSA scheme | Receptive Field | Complexity-HW | Calculation |
|---|---|---|---|
| ViT | Global | Quadratic | Spatial |
| Swin-T | Local | Linear | Spatial |
| Restormer | Global | Linear | Spectral |
| MST++ | Global | Linear | Spectral |
| CESST | Global | Linear | Spatio-spectral |

Figure 4: Comparison of traditional window-based MSA and our shuffle-window MSA. WMSA represents window-based multi-head self-attention. (a) two stacked window-based transformer blocks, where each output token only relates to tokens within the same window, without any cross-window interaction; (b) WMSA2 takes data from different windows after WMSA1 by spatial shuffle and alignment, which introduces global cross-window interaction.

Spectrum-Fusion Attention Module
To further improve the feature utilization and interactivity within the three learned hyperspectral representations (i.e., $F_R$, $F_G$, $F_B$), we design an effective feature fusion module, named spectrum-fusion attention module, including two parts: channel learning and spectrum fusion. In channel learning, we propose to extract the correlations between each two of the three learned hyperspectral representations and then fuse them to generate the final fine-grained reconstructed HSI. As shown in Fig. 3 (a), taking $F_R$ as an example, we take the $F_G$ as the Value and Query, and $F_R$ as the Key, and then feed them into the spectral-MSA to learn the correlation between $F_R$ and $F_G$. The learning of the correlation between $F_R$ and $F_B$ is also carried out in a similar way. Then, the $F_G$-enriched $F_R^{RG}$ and the $F_B$-enriched $F_R^{RB}$ are concatenated and fed into a convolutional layer to generate the $F_G$-$F_B$-enriched $F_R$, which can be formulated as:

$F_R^{RG} = F_{\text{S-MSA}}(Q_{F_R}, K_{F_G}, V_{F_G}) \in \mathbb{R}^{31 \times H \times W}$, (1)

$F_R^{RB} = F_{\text{S-MSA}}(Q_{F_R}, K_{F_B}, V_{F_B}) \in \mathbb{R}^{31 \times H \times W}$, (2)

$F_R = F_{\text{conv}}(F_{\text{concat}}[F_R^{RG}, F_R^{RB}]) \in \mathbb{R}^{31 \times H \times W}$, (3)

where $F_{\text{S-MSA}}(\cdot)$ is the spectral-MSA operation (including layernorm, residual connection, and feed-forward operations, which are omitted here for simplicity), $F_{\text{conv}}(\cdot)$ is a 3 × 3 convolutional layer, and $F_{\text{concat}}(\cdot)$ is a concatenation process. Similarly, the $F_R$-$F_B$-enriched $F_G$ and the $F_R$-$F_G$-enriched $F_B$ also have such two channel learning branches. Thus, the channel learning part has six branches.
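One of the six channel-learning branches can be sketched with a spectral-wise (band-as-token) cross-attention; the single-head form and the simple scaling below are simplifications of the full spectral-MSA with its LayerNorm/FFN wrappers, so treat this as an illustration of Eqs. (1)-(3) rather than the exact module.

```python
import torch
import torch.nn as nn

class SpectralCrossAttention(nn.Module):
    """Single-head spectral-wise cross-attention: each of the 31 bands is
    treated as one token, so the attention map is (bands x bands) and the
    cost stays linear in the spatial size H*W."""

    def __init__(self, bands=31):
        super().__init__()
        self.to_q = nn.Linear(bands, bands, bias=False)
        self.to_k = nn.Linear(bands, bands, bias=False)
        self.to_v = nn.Linear(bands, bands, bias=False)

    def forward(self, f_query, f_kv):              # both (B, bands, H, W)
        B, C, H, W = f_query.shape
        xq = f_query.flatten(2).transpose(1, 2)    # (B, HW, C)
        xkv = f_kv.flatten(2).transpose(1, 2)
        q, k, v = self.to_q(xq), self.to_k(xkv), self.to_v(xkv)
        attn = torch.softmax(q.transpose(1, 2) @ k / (H * W) ** 0.5, dim=-1)  # (B, C, C)
        out = attn @ v.transpose(1, 2)             # (B, C, HW)
        return out.reshape(B, C, H, W)

# One branch of channel learning in the spirit of Eqs. (1)-(3)
# (branch_rg, branch_rb, fuse_conv are hypothetical module instances):
# f_rg = branch_rg(f_r, f_g)                       # F_R queried against F_G
# f_rb = branch_rb(f_r, f_b)                       # F_R queried against F_B
# f_r_fused = fuse_conv(torch.cat([f_rg, f_rb], dim=1))   # 3x3 conv, 62 -> 31
```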
In spectrum fusion, the three representative hyperspectral features $F_R$, $F_G$, $F_B$ are concatenated first and then fed into a residual coordinate attention block (Yang et al. 2021) (RCAB) to generate fine-grained pixel-level HSI signals, which can be formulated as:

$X = F_{\text{RCAB}}(F_{\text{concat}}[F_R, F_G, F_B]) \in \mathbb{R}^{31 \times H \times W}$. (4)

Objective Function
To supervise reconstructed HSIs at any given scale s = 1, 2, ..., S, we employ the LMIX loss (Zhao et al. 2016), which combines both SSIM loss and L1 loss, as well as the mean relative absolute error (MRAE), to formulate a supervised consistency constraint on both pixel and feature levels:

$\mathcal{L}_{\text{MIX}} = \sum_{s=1}^{3} \left[ (X_s, Y_s)_{\text{mix}} \right]$, (5)

$\mathcal{L}_{\text{MRAE}} = \sum_{s=1}^{3} \frac{|Y_s - X_s|}{Y_s}$. (6)

Here $Y_s$ represents the ground-truth image at each scale. Total Loss. The full objective function is expressed as:

$\mathcal{L} = \mathcal{L}_{\text{MIX}} + \lambda_1 \mathcal{L}_{\text{MRAE}}$, (7)

where $\lambda_1$ is the hyperparameter that controls the relative importance of the two loss terms, empirically set to 100.

Experiments and Analysis
Experimental Settings
Datasets. We adopt two datasets, the NTIRE2022 HSI dataset (Arad et al. 2022) and the ICVL HSI dataset (Arad and Ben-Shahar 2016), to evaluate the performance of our CESST. In the NTIRE2022 HSI dataset, there are 950 available RGB-HSI pairs, including 900 for training and 50 for validation. All the HSIs are captured at 482×512 spatial resolution over 31 channels from 400nm to 700nm. Besides, the ICVL dataset contains 201 high-resolution HSIs. Considering that it does not provide aligned RGB images, we adopt the method proposed by Magnusson et al. (Magnusson et al. 2020) to produce corresponding RGB images. Since it contains 18 images with different resolutions, we only use the remaining 183 image pairs (147 pairs for training and 36 pairs for testing). Implementation Details. We implement our CESST with PyTorch. All the models are trained with the Adam (Kingma and Ba 2014) optimizer (β1 = 0.9 and β2 = 0.999) for 300 epochs. The learning rate is initialized as 0.0002, and the cosine annealing scheme is adopted. During the training phase, RGB-HSI pairs are first cropped into 128 × 128 and the input RGB images are linearly rescaled to [0, 1]. We employ random rotation and flipping to augment the training data. The whole training time of the proposed CESST is about 40 hours with a single NVIDIA Ampere A100-40G. All the RGB images are also rescaled to [0, 1] during the validation procedure. Our CESST takes 0.141s per image (size of 482 × 512 × 3) for HSI reconstruction. Evaluation Metrics. Mean relative absolute error (MRAE), Peak Signal-to-Noise Ratio (PSNR), error relative global dimensionless synthesis (ERGAS), and spectral angle mapper (SAM) are employed to evaluate HSI reconstruction methods.

Quantitative Results
We compared our CESST with both HSI reconstruction and image restoration methods, including four RGB-HSI reconstruction methods: HSCNN+ (Shi et al. 2018) (winner of

Figure 5: The reconstruction error images of two images chosen from the validation set of ICVL dataset. The error images are the heat maps of the root mean square error (RMSE) (along spectral direction) between ground truths and reconstructed HSIs.

Method NTIRE2022 HSI Dataset ICVL Dataset Params(M) FLOPs(G) ERGAS SAM MRAE PSNR ERGAS SAM MRAE PSNR HSCNN+ (Shi et al.
2018) 4.65 304.45 228.80 0.1093 0.3814 26.36 247.31 0.1108 0.2178 26.28 HRNet (Zhao et al. 2020) 31.70 163.81 107.23 0.0855 0.3476 26.89 101.02 0.0806 0.2155 26.93 AWAN (Li et al. 2020) 4.04 270.61 147.57 0.0960 0.2500 31.22 169.93 0.1054 0.1887 30.33 MST++ (Cai et al. 2022c) 1.62 23.05 107.23 0.0852 0.1645 34.32 118.33 0.0972 0.1776 31.41 EDSR (Lim et al. 2017) 2.42 158.32 212.51 0.0983 0.3277 28.29 235.83 0.1039 0.1972 27.51 HDNet (Hu et al. 2022) 2.66 173.81 133.72 0.1006 0.2048 32.13 177.47 0.1153 0.1942 29.28 HINet (Chen et al. 2021b) 5.21 31.04 140.82 0.0937 0.2032 32.51 152.45 0.0983 0.1663 30.43 MIRNet (Zamir et al. 2020) 3.75 42.95 115.38 0.0944 0.1890 33.29 131.43 0.0998 0.1797 30.76 Restormer (Zamir et al. 2022) 15.11 93.77 112.05 0.0983 0.1833 33.40 130.17 0.1003 0.1689 31.01 MPRNet (Zamir et al. 2021) 3.62 101.59 101.50 0.0901 0.1817 33.50 131.11 0.0979 0.2138 29.09 MST-L (Cai et al. 2022b) 2.45 32.07 112.57 0.0931 0.1772 33.90 122.36 0.0893 0.1845 30.75 CESST (ours) 1.54 90.18 98.74 0.0791 0.1497 35.14 109.47 0.0917 0.1230 33.25 Table 2: Comparison with SOTA methods on NTIRE2022 HSI dataset. The best results are highlighted in bold. NTIRE2018 HSI challenge), HRNet (Zhao et al. 2020), AWAN (Li et al. 2020) (winner of NTIRE2020 HSI challenge), MST++ (Cai et al. 2022c) (winner of NTIRE2022 HSI challenge); two compressive HSI recovery methods: HDNet (Hu et al. 2022) and MST-L (Cai et al. 2022b); five image restoration methods: EDSR (Lim et al. 2017), HINet (Chen et al. 2021b), MIRNet (Zamir et al. 2020), Restormer (Zamir et al. 2022), and MPRNet (Zamir et al. 2021). For fair comparisons, all the methods were retrained and tested with the same settings as MST++ (Cai et al. 2022c). As shown in Table. 2, it can be observed that our method obtains the best results of all five metrics while costing the least Params on NTIRE2022 HSI dataset. To more intuitively illustrate the competitiveness of our method, we provide the PSNR-Params comparisons in Fig. 1, including both HSI reconstruction methods and image restoration methods. The horizontal axis is the Params (memory cost), and the vertical axis is the PSNR (performance). As can be seen, our method takes up the upper-left corner, indicating the best efficiency. Note that although the FLOPs of our model are larger than MST++ (Cai et al. 2022c), benefited from our parallel calculation design, e.g., the three parallel branches of feature extraction, the parallel spatial-MSA and spectralMSA of SSAB, our model achieves comparable inference time compared with MST++ on the same GPU. Qualitative Results To evaluate the visual quality of our method, we provide visual comparisons in Fig. 5. Almost all existing methods fail to generate chromatic-consistent content and artifact-free results, especially for the high-frequency components (i.e., the sky area). In contrast, our method is capable of recovering more precise texture information and better pixel-level quality over other methods. This is because we treat each channel of input RGB images as a unique feature and model them into high dimensional space respectively, rather than The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6572 Baseline Independent SSAB SFAM Params(M) FLOPs(G) MRAE PSNR ✓ 0.92 54.24 0.3207 28.14 ✓ ✓ 1.03 74.75 0.2514 31.83 ✓ ✓ ✓ 1.37 81.38 0.1917 33.71 ✓ ✓ ✓ ✓ 1.54 90.18 0.1497 35.14 (a) Break-down ablation study of different pipelines. 
Method MRAE PSNR Baseline 0.1917 33.71 SFT 0.1652 34.22 EFF 0.1611 34.73 SFAM 0.1497 35.14 (b) Ablations of different fusion. Table 3: Ablation studies of break-down ablation and fusion scheme comparison. SFT EFF Baseline SFAM RGB SFT EFF Baseline SFAM RGB 35.14 34.73 34.22 33.71 PSNR Figure 6: The visual analysis of error images chosen from the validation set of NTIRE2022 HSI dataset. The error images are the heat maps of the root mean square error (RMSE) between the ground truth and the reconstructed HSI. equally modeling them and studying correlations in the high dimensional space directly (i.e., this is why our method performs better in the long-range spectral bands, as shown in 2b). Besides benefiting from the independent modeling and combinatorial excavation mechanism, as well as our spatiospectral attention mechanism, both spectral self-similarities and spatial details can be efficiently explored and well fused to generate fine-grained pixel-level predictions. Meanwhile, the intensity of our results is closest to ground truths compared with existing methods. Ablation Study In this section, we perform ablation studies to investigate the effectiveness of our proposed structure. The baseline model is derived by removing the independent modeling structure, including spatio-spectral attention block (SSAB), and spectrum-fusion attention module (SFAM) from CESST, and using the widely used ResNet (He et al. 2016) block. Break-down Ablation. To investigate the effect of each module, we first perform a break-down ablation and provide the quantitative results in Table 3a. From the first and the second rows, we find that the independent modeling significantly improves the performance of the whole model, which yields a 3.69dB improvement in PSNR. When we successively apply both SSAB and SFAM, the reconstruction performance further achieves 1.88dB and 1.43dB improvement. These results demonstrate the effectiveness of our independent modeling, SSAB, and SFAM. Fusion Scheme Comparison. As the fusion module is one of the main contributions of this work, we further compare our SFAM with the other popular fusion schemes in Table 3b, including concatenation-convolution (acts as the MSA scheme MRAE PSNR Spatial-MSA (Dosovitskiy et al. 2020) 0.1614 34.75 Shifted-window MSA (Liu et al. 2021) 0.1783 33.92 Spectral-MSA (Zamir et al. 2022) 0.1483 34.96 Shuffle-window MSA 0.1639 34.51 Spatio-spectral MSA (ours) 0.1497 35.14 Table 4: Ablation study of different MSAs. baseline in this scenario, which is the same settings as the third row in Table 3a), SFT layer (Wang et al. 2018b), and EFF layer (Hu et al. 2022). As can be seen, our module gains 1.43dB, 0.92dB, and 0.41dB in PSNR, which verifies the effectiveness of our SFAM. In addition, we further provide visual analysis in Fig. 6, which shows that our SFAM is more capable of fusing fine-grained details, especially in the salient regions. MSA Comparison. To further validate the effectiveness of our proposed spatio-spectral attention mechanism, we compare our SSAB with different MSA variations, including spatial-MSA (Dosovitskiy et al. 2020), shifted-window MSA (Liu et al. 2021), spectral-MSA (Cai et al. 2022c) and shuffle-window MSA (a variation of our original spatiospectral MSA, which disables the spectral-MSA and keeps other settings consistent). We switch these modules directly in our framework. As shown in Table 4, our spatio-spectral MSA performs best in RMSE and PSNR and is comparable with spectral-MSA in MRAE. 
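For completeness, the main quantities reported throughout these tables can be computed with the standard definitions below; this is a generic sketch with a small epsilon for numerical stability, not the challenge's official evaluation code.

```python
import numpy as np

def mrae(pred, gt, eps=1e-8):
    """Mean relative absolute error over all pixels and bands."""
    return float(np.mean(np.abs(gt - pred) / (np.abs(gt) + eps)))

def psnr(pred, gt, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = float(np.mean((gt - pred) ** 2))
    return 10.0 * float(np.log10(data_range ** 2 / (mse + 1e-12)))

def sam(pred, gt, eps=1e-8):
    """Spectral angle mapper: mean per-pixel angle (radians) between the
    predicted and ground-truth spectra. Inputs shaped (H, W, bands)."""
    dot = np.sum(pred * gt, axis=-1)
    norm = np.linalg.norm(pred, axis=-1) * np.linalg.norm(gt, axis=-1)
    angle = np.arccos(np.clip(dot / (norm + eps), -1.0, 1.0))
    return float(np.mean(angle))
```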
Conclusion In this paper, we have proposed a novel hyperspectral image reconstruction framework that excavates the unique and complementary information among the RGB input channels in a combinatorial manner for efficient embedding of spatiospectral clues based on a Transformer structure: CESST. We have proposed a spatio-spectral attention module, and a spectrum-fusion attention module, which greatly facilitates the excavation and fusion of information at both semantically long-range levels and fine-grained pixel levels across all dimensions. The effectiveness of each module has been validated by ablation studies. Extensive visual comparisons and quantitative experiments have demonstrated that our proposed method achieves superior HSI reconstruction performance compared with SOTA methods. Acknowledgments This research was supported by A*STAR C222812026. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6573 References Ad˜ao, T.; Hruˇska, J.; P´adua, L.; Bessa, J.; Peres, E.; Morais, R.; and Sousa, J. J. 2017. Hyperspectral imaging: A review on UAV-based sensors, data processing and applications for agriculture and forestry. Remote sensing, 9(11): 1110. Aeschbacher, J.; Wu, J.; and Timofte, R. 2017. In defense of shallow learned spectral reconstruction from RGB images. In ICCVW, 471–479. Akhtar, N.; and Mian, A. 2018. Hyperspectral recovery from RGB images using Gaussian processes. TPAMI, 42(1): 100– 113. Alvarez-Gila, A.; Van De Weijer, J.; and Garrote, E. 2017. Adversarial networks for spatial context-aware spectral image reconstruction from rgb. In ICCVW, 480–490. Arad, B.; and Ben-Shahar, O. 2016. Sparse Recovery of Hyperspectral Signal from Natural RGB Images. In ECCV. Arad, B.; Timofte, R.; Ben-Shahar, O.; Lin, Y.-T.; and Finlayson, G. D. 2020. Ntire 2020 challenge on spectral reconstruction from an rgb image. In CVPRW, 446–447. Arad, B.; Timofte, R.; Yahel, R.; Morag, N.; Bernat, A.; Cai, Y.; Lin, J.; Lin, Z.; Wang, H.; Zhang, Y.; et al. 2022. NTIRE 2022 Spectral Recovery Challenge and Data Set. In CVPRW, 863–881. Cai, Y.; Lin, J.; Hu, X.; Wang, H.; Yuan, X.; Zhang, Y.; Timofte, R.; and Van Gool, L. 2022a. Coarse-to-fine sparse transformer for hyperspectral image reconstruction. In ECCV, 686–704. Springer. Cai, Y.; Lin, J.; Hu, X.; Wang, H.; Yuan, X.; Zhang, Y.; Timofte, R.; and Van Gool, L. 2022b. Mask-guided spectralwise transformer for efficient hyperspectral image reconstruction. In CVPR, 17502–17511. Cai, Y.; Lin, J.; Lin, Z.; Wang, H.; Zhang, Y.; Pfister, H.; Timofte, R.; and Van Gool, L. 2022c. Mst++: Multi-stage spectral-wise transformer for efficient spectral reconstruction. In CVPRW, 745–755. Can, Y. B.; and Timofte, R. 2018. An efficient CNN for spectral reconstruction from RGB images. arXiv preprint arXiv:1804.04647. Channing, G. 2022. Spectral DefocusCam: Compressive Hyperspectral Imaging from Defocus Measurements. In AAAI, volume 36, 13128–13129. Chen, H.; Wang, Y.; Guo, T.; Xu, C.; Deng, Y.; Liu, Z.; Ma, S.; Xu, C.; Xu, C.; and Gao, W. 2021a. Pre-trained image processing transformer. In CVPR, 12299–12310. Chen, J.; Yang, Z.; Chan, T. N.; Li, H.; Hou, J.; and Chau, L.-P. 2022. Attention-Guided Progressive Neural Texture Fusion for High Dynamic Range Image Restoration. TIP, 31: 2661–2672. Chen, L.; Lu, X.; Zhang, J.; Chu, X.; and Chen, C. 2021b. HINet: Half instance normalization network for image restoration. In CVPR, 182–192. 
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Fubara, B. J.; Sedky, M.; and Dyke, D. 2020. Rgb to spectral reconstruction via learned basis functions and weights. In CVPRW, 480–481. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In CVPR, 770–778. Helminger, L.; Bernasconi, M.; Djelouah, A.; Gross, M.; and Schroers, C. 2021. Generic image restoration with flow based priors. In CVPR, 334–343. Hu, X.; Cai, Y.; Lin, J.; Wang, H.; Yuan, X.; Zhang, Y.; Timofte, R.; and Van Gool, L. 2022. Hdnet: High-resolution dual-domain learning for spectral compressive imaging. In CVPR, 17542–17551. Huang, T.; Dong, W.; Yuan, X.; Wu, J.; and Shi, G. 2021. Deep gaussian scale mixture prior for spectral compressive imaging. In CVPR, 16216–16225. Jia, Y.; Zheng, Y.; Gu, L.; Subpa-Asa, A.; Lam, A.; Sato, Y.; and Sato, I. 2017. From RGB to spectrum for natural scenes via manifold-based mapping. In ICCV, 4705–4713. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Li, J.; Wu, C.; Song, R.; Li, Y.; and Liu, F. 2020. Adaptive weighted attention network with camera spectral sensitivity prior for spectral reconstruction from RGB images. In CVPRW, 462–463. Lim, B.; Son, S.; Kim, H.; Nah, S.; and Mu Lee, K. 2017. Enhanced deep residual networks for single image superresolution. In CVPRW, 136–144. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 10012–10022. Lu, G.; and Fei, B. 2014. Medical hyperspectral imaging: a review. Journal of biomedical optics, 19(1): 010901. Magnusson, M.; Sigurdsson, J.; Armansson, S. E.; Ulfarsson, M. O.; Deborah, H.; and Sveinsson, J. R. 2020. Creating RGB images from hyperspectral images using a color matching function. In IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, 2045– 2048. Meng, Z.; Ma, J.; and Yuan, X. 2020. End-to-end low cost compressive spectral imaging with spatial-spectral selfattention. In ECCV, 187–204. Springer. Miao, X.; Yuan, X.; Pu, Y.; and Athitsos, V. 2019. l-net: Reconstruct hyperspectral images from a snapshot measurement. In ICCV, 4059–4069. Robles-Kelly, A. 2015. Single image spectral reconstruction for multimedia applications. In ACM MM, 251–260. Shao, Y.; Li, L.; Ren, W.; Gao, C.; and Sang, N. 2020. Domain adaptation for image dehazing. In CVPR, 2808–2817. Shi, Z.; Chen, C.; Xiong, Z.; Liu, D.; and Wu, F. 2018. Hscnn+: Advanced cnn-based hyperspectral recovery from rgb images. In CVPRW, 939–947. Stiebel, T.; Koppers, S.; Seltsam, P.; and Merhof, D. 2018. Reconstructing spectral images from rgb-images using a convolutional neural network. In CVPRW, 948–953. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6574 Tu, Z.; Talebi, H.; Zhang, H.; Yang, F.; Milanfar, P.; Bovik, A.; and Li, Y. 2022. Maxim: Multi-axis mlp for image processing. In CVPR, 5769–5780. Wagadarikar, A.; John, R.; Willett, R.; and Brady, D. 2008. Single disperser design for coded aperture snapshot spectral imaging. Applied optics, 47(10): B44–B51. Wang, L.; Sun, C.; Fu, Y.; Kim, M. H.; and Huang, H. 2019. Hyperspectral image reconstruction using a deep spatialspectral prior. In CVPR, 8032–8041. 
Wang, L.; Sun, C.; Zhang, M.; Fu, Y.; and Huang, H. 2020. Dnu: Deep non-local unrolling for computational spectral imaging. In CVPR, 1661–1671. Wang, L.; Xiong, Z.; Shi, G.; Wu, F.; and Zeng, W. 2016. Adaptive nonlocal sparse representation for dual-camera compressive hyperspectral imaging. TPAMI, 39(10): 2104– 2111. Wang, L.; Zhang, T.; Fu, Y.; and Huang, H. 2018a. Hyperreconnet: Joint coded aperture optimization and image reconstruction for compressive hyperspectral imaging. TIP, 28(5): 2257–2270. Wang, X.; Yu, K.; Dong, C.; and Loy, C. C. 2018b. Recovering realistic texture in image super-resolution by deep spatial feature transform. In CVPR, 606–615. Xiong, Z.; Shi, Z.; Li, H.; Wang, L.; Liu, D.; and Wu, F. 2017. Hscnn: Cnn-based hyperspectral image recovery from spectrally undersampled projections. In ICCVW, 518–525. Yan, L.; Wang, X.; Zhao, M.; Kaloorazi, M.; Chen, J.; and Rahardja, S. 2020. Reconstruction of hyperspectral data from RGB images with prior category information. TCI, 6: 1070–1081. Yang, X.; Chen, J.; Yang, Z.; and Chen, Z. 2021. AttentionGuided NIR Image Colorization via Adaptive Fusion of Semantic and Texture Clues. arXiv preprint arXiv:2107.09237. Yuan, Y.; Zheng, X.; and Lu, X. 2017. Hyperspectral image superresolution by transfer learning. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 10(5): 1963–1974. Zamir, S. W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F. S.; and Yang, M.-H. 2022. Restormer: Efficient transformer for high-resolution image restoration. In CVPR, 5728–5739. Zamir, S. W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F. S.; Yang, M.-H.; and Shao, L. 2020. Learning enriched features for real image restoration and enhancement. In ECCV, 492– 511. Springer. Zamir, S. W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F. S.; Yang, M.-H.; and Shao, L. 2021. Multi-stage progressive image restoration. In CVPR, 14821–14831. Zhao, H.; Gallo, O.; Frosio, I.; and Kautz, J. 2016. Loss functions for image restoration with neural networks. TCI, 3(1): 47–57. Zhao, Y.; Po, L.-M.; Yan, Q.; Liu, W.; and Lin, T. 2020. Hierarchical regression network for spectral reconstruction from RGB images. In CVPRW, 422–423. Zhou, Z.; Qiu, S.; Wang, Y.; Zhou, M.; Chen, X.; Hu, M.; Li, Q.; and Lu, Y. 2021. Swin-Spectral Transformer for Cholangiocarcinoma Hyperspectral Image Segmentation. In 2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), 1–6. IEEE. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6575 | 2024 | 730 |
18,552 | Decomposing Semantic Shifts for Composed Image Retrieval Xingyu Yang1,2*, Daqing Liu3, Heng Zhang4, Yong Luo1,2†, Chaoyue Wang3, Jing Zhang5 1School of Computer Science, National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, China, 2Hubei Luojia Laboratory, Wuhan, China, 3JD Explore Academy, JD.com, China, 4Gaoling School of Artifical Intelligence, Renmin University of China, China, 5School of Computer Science, The University of Sydney, Australia, {yangxingyu2021,luoyong}@whu.edu.cn, [email protected], [email protected], [email protected], [email protected] Abstract Composed image retrieval is a type of image retrieval task where the user provides a reference image as a starting point and specifies a text on how to shift from the starting point to the desired target image. However, most existing methods focus on the composition learning of text and reference images and oversimplify the text as a description, neglecting the inherent structure and the user’s shifting intention of the texts. As a result, these methods typically take shortcuts that disregard the visual cue of the reference images. To address this issue, we reconsider the text as instructions and propose a Semantic Shift Network (SSN) that explicitly decomposes the semantic shifts into two steps: from the reference image to the visual prototype and from the visual prototype to the target image. Specifically, SSN explicitly decomposes the instructions into two components: degradation and upgradation, where the degradation is used to picture the visual prototype from the reference image, while the upgradation is used to enrich the visual prototype into the final representations to retrieve the desired target image. The experimental results show that the proposed SSN demonstrates a significant improvement of 5.42% and 1.37% on the CIRR and FashionIQ datasets, respectively, and establishes a new state-of-the-art performance. Code is available at https://github.com/starxing-yuu/SSN. 1 Introduction Composed Image Retrieval (Vo et al. 2019) (CIR) is an emerging image retrieval task in that the users can provide a multi-modal query composed of a reference image and a text. Different from the traditional image retrieval (Weinzaepfel et al. 2022) where the users must provide the exact same image of the desired result or text-to-image retrieval (Wang et al. 2019) where the users should describe the target in a detailed language, as shown in Figure 1a, CIR relax the requirement of input thus the users can simply provide an example image that similar to the desired image as reference, and then describe the difference from the *Contribution during internship at JD Explore Academy. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Change color of blue car to red. Text as a description to red No colored Change color of blue car to red. Text as an instruction (a) (b) (c) Figure 1: (a) gives an example of the CIR task. (b) shows existing works treat text as a description connecting the reference image and target image. They follow the paradigm of Ir + L ↔It. (c) gives a brief illustration of our idea. We propose to consider the text as an instruction, inheriting the property of human language to express semantic shifts. 
With text instructions, the reference image is first degraded into visual prototypes and then enriched into the final representations to retrieve. This process can be described as Ir L− −→I0 r L+ −→It. reference to the target. Despite their diverse model architectures (Kim et al. 2021; Yang et al. 2021), the essence of this task is to fully understand the user’s intent conveyed by the reference image and the language and then find the most similar image from all candidates. Thanks to the development of vision features (Radford et al. 2021) and language representations (Devlin et al. 2019), we can accomplish the CIR task with more flexible free-form languages, including changing one specific attribute of one object and adding or removing some objects. However, capturing the user’s intentions is still a challenging problem because the instruction text is far different The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6576 from the description text that is commonly used in current vision-language tasks, e.g., visual grounding (Deng et al. 2018), cross-modal retrieval (Wang et al. 2019), or visual captioning (Liu et al. 2022a). For example, given the reference image and the text “Change the color of blue car to red” in Figure 1b, how to depict the desired image for us humans? One may have the following intuitive procedure: 1) identify the part of the reference image that should change, i.e., “the color of blue car”, 2) imagine the visual prototype of the reference image, i.e., this car without any color, and 3) picture the final desired target image as “the car in red”. Unfortunately, despite the complex model architecture design with cross-modal attention (Hosseinzadeh and Wang 2020), graph neural network (Zhang et al. 2022), or finegrained visual network (Hosseinzadeh and Wang 2020), existing methods (Baldrati et al. 2022; Zhao, Song, and Jin 2022; Goenka et al. 2022) generally oversimplify the CIR task as a composition learning of vision and language where the text is usually treated as a description (Figure 1b), disregarding the propriety of the structure of the text, which should be an instruction on how to modify the reference to the target. More seriously, composition learning typically introduces redundant or even incorrect information that may disrupt the final representations, e.g., simply combining the semantics ‘blue, car’ of the reference image and the semantics ‘red, car, blue’ of the text will depict ‘a red and blue car’ defectively. The fundamental cause of this issue lies in a lack of precise understanding of the language. In this paper, we propose to take the text as instructions that represent the semantic shifts from the reference image to the target image, and then decompose the instructions into two parts: the degradation and the upgradation. As illustrated in Figure 1c, based on the decomposition, we conduct the desired image representations in two steps: 1) degrading the reference image into the visual prototype that only contains the visual attributes that need to be preserved, and 2) upgrading the visual prototype into the final desired target. Thanks to the decomposition of instructions, we divide the complex task that models the user’s intentions into two simple and orthogonal sub-tasks which are easier to learn. Based on the final representations, we can directly find the nearest neighbors in the latent space as the final retrieval results. 
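As a rough, hedged illustration of this two-step shift (the actual SSN, described below, operates on token-level CLIP features and transformer fusion; the module names, gating, and feature sizes here are illustrative assumptions only):

```python
import torch
import torch.nn as nn

class TwoStepShift(nn.Module):
    """Toy sketch of: reference image -> visual prototype -> target representation."""
    def __init__(self, dim: int = 512):
        super().__init__()
        # Hypothetical stand-ins for the degrading / upgrading networks.
        self.degrade_gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.upgrade = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, img_ref, txt_minus, txt_plus):
        # Step 1: keep only the visual attributes that the instruction preserves.
        keep = self.degrade_gate(torch.cat([img_ref, txt_minus], dim=-1))
        prototype = keep * img_ref
        # Step 2: enrich the prototype with the "upgradation" part of the instruction.
        return self.upgrade(torch.cat([prototype, txt_plus], dim=-1))

model = TwoStepShift()
img_ref = torch.randn(4, 512)    # global reference-image features
txt_minus = torch.randn(4, 512)  # degradation part of the instruction
txt_plus = torch.randn(4, 512)   # upgradation part of the instruction
pred = model(img_ref, txt_minus, txt_plus)  # representation used for nearest-neighbour search
print(pred.shape)  # torch.Size([4, 512])
```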
Specifically, we implement the proposed method with a Semantic Shift Network (SSN) that is composed of four components: 1) the representation networks that extract visual and language features of reference images, target images, and instructions; 2) the decomposing network that decomposes the instruction text into the degradation part and upgradation part; 3) the degrading network that transforms the reference image to the visual prototype conditioned on the degradation part of the instruction text; 4) the upgrading network that transforms the visual prototype to the final representation of desired image conditioned on the upgradation part. To train the SSN, we design a traditional retrieval loss to guarantee the overall performance of composed image retrieval, as well as a regularization constraint that disciplines the language decomposing and the visual prototypes. We validate the effectiveness of SSN on two widely used composed image retrieval benchmarks, i.e., FashionIQ (Wu et al. 2021) and CIRR (Liu et al. 2021). SSN stands as a new state-of-the-art on all metrics. Specifically, we achieve impressive improvements of 5.42% and 1.37% on CIRR and FashionIQ mean recall metrics, respectively. In summary, our contributions include: • We reformulate the composed image retrieval task as a semantic shift problem based on the text instructions, with the shift path as reference image →visual prototype →desired target image. • We introduce a Semantic Shift Network for the CIR task that implements the decomposed semantic shifts with several well-designed components. • The proposed SSN achieves state-of-the-art performance with impressive improvements on two widely-used composed image retrieval datasets. 2 Related Work Image Retrieval. Image Retrieval is a fundamental task for the computer vision community since it has a wide range of application scenarios, e.g., search engines, and ecommerce (Zhang and Tao 2020). Given a query image, we need to return the most similar image. In the beginning, global image representation (Chen et al. 2022a) based retrieval methods were investigated. To achieve fine-grained matching (Sun et al. 2021) between images and thus improve retrieval performance, several approaches transformed images into several local representations (e.g., region features (Teichmann et al. 2019)). However, these pioneering works (Chen et al. 2022a; Sun et al. 2021; Weinzaepfel et al. 2022)’s queries are images only and they focus more on similarity matching between images. In reality, people usually convey query intent with text rather than images, so textimage retrieval is also a research focus in image search. Benefiting from the success of a large model for visual language pre-training (Kim, Son, and Kim 2021; Devlin et al. 2019), cross-modal representations have shown remarkable performance in text-image retrieval. Moreover, the query can also combine image and text, which is the direction we explore. Composed Image Retrieval. Composed Image Retrieval refers to searching target images given semantically related reference images and modification texts. One line of work (Zhang et al. 2023; Kim et al. 2021) introduces target images into the forward process during training, which greatly increases the training cost. The MCL&SAP (Zhang et al. 2023) method perceives the semantics of modification texts at multiple layers of the image and models the differences in the image. 
Another line of works aiming at efficient retrieval explores composition learning and the following introduced works belong to this type. Gated residual fusion is first proposed to combine image and text features for the CIR task in TIRG (Vo et al. 2019) and is commonly used for global fusion in later works (Wang et al. 2022; Kim et al. 2021; Chen and Bazzani 2020). MAAF (Dodds et al. 2020) method applies self-attention mechanisms to realize the interaction between image-text sequences. Conditioned on text, VAL (Chen, Gong, and Bazzani 2020) is proposed to obtain combined features through multi-grained crossmodal semantic alignment, and CosMo (Lee, Kim, and Han The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6577 Inner product Global text representation Image tokens Dog is barking, nothing blocking his mouth Visual-Language Representation Image Encoder Image Encoder Text Encoder MLP Layer ... ... ... Degradation Network Transformer Encoder Addition + Upgrading and Retrieval MLP Layer ... ... ... Sigmoid Concate Sigmoid Concate Transformer Encoder Addition Global image representation Text tokens GT Only for training barking blocked mouth Figure 2: The pipeline of our proposed Semantic Shift Network (SSN). Given a pair of reference images and text modifiers (also as an instruction), we aim at retrieving the correct target image from candidate images. At the stage of visual-language representation, we utilize the CLIP image and text encoders to obtain the respective features. Then the semantic shift features from text instruction are decomposed to direct the reference image features Ir into a visual prototype I0 r . The text features biased toward the target and reference image (namely L+ and L−respectively) are generated at the same time. In the upgrading process, L+ is fused with visual prototypes I0 r by transformer an encode layer and then are linearly added to global representations. Finally, similarity scores are measured by an inner-product operation to generate the ranked list. 2021) method refines reference image features from the term of style and content. Benefiting from the visual-language pre-training, several works (Liu et al. 2021; Goenka et al. 2022; Saito et al. 2023; Baldrati et al. 2022) transfer to the CIR task and achieved favorable performance. Representative works include CLIP-based models (Baldrati et al. 2022; Saito et al. 2023) but the work in (Saito et al. 2023) is under the zero-shot setting, which is different from our task. Following (Baldrati et al. 2022), we also use the same CLIP encoder. Different from previous works, our model considers the modification text as an instruction that guides the reference image semantically shift back to a visual prototype. 3 Approach In this section, we present the proposed Semantic Shift Network (SSN). We first briefly describe how to obtain the representations for image and text inputs. Then we present the technical details of degradation and upgradation, respectively, and finally depict our training objectives. 3.1 Preliminaries The Composed Image Retrieval (CIR) task replaces the query in traditional image retrieval with multi-modal input, usually an image plus a text modifier. In this task, the query image is referred to as a reference image r, and the text modifier is denoted as l. Given each query q = (r, l), the trained model returns a ranked list of the candidate images from a large image gallery D, in descending order of similarity to the joint query semantic representation. 
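A minimal sketch of this ranking protocol, assuming the joint query representation and the gallery features have already been computed (the inner-product similarity and Recall@K follow the conventions used later in the paper; this is not the authors' evaluation script):

```python
import torch

def rank_gallery(query_feat: torch.Tensor, gallery_feats: torch.Tensor) -> torch.Tensor:
    """Indices of gallery images sorted by descending inner-product similarity.

    query_feat:    (d,)    joint representation of the query q = (r, l)
    gallery_feats: (N, d)  features of all candidate images in the gallery D
    """
    sims = gallery_feats @ query_feat            # (N,) similarity scores
    return torch.argsort(sims, descending=True)  # ranked list of candidate indices

def recall_at_k(ranking: torch.Tensor, target_idx: int, k: int) -> float:
    """1.0 if the ground-truth target appears in the top-k positions, else 0.0."""
    return float(target_idx in ranking[:k].tolist())

# Toy example with random features (d = 512, a gallery of 1000 candidates).
ranking = rank_gallery(torch.randn(512), torch.randn(1000, 512))
print(recall_at_k(ranking, target_idx=42, k=10))
```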
An ideal retrieval system should rank the target image t at the first position. 3.2 Visual-Language Representation CLIP (Radford et al. 2021) is a recently successful visuallanguage pre-trained model learned contrastively from 400M associated image-text pairs crawled from the internet. To leverage the powerful representation capability of CLIP, we adopt CLIP matched encoder to yield image and text features. Formally, we denote the CLIP image encoder as ΦI and the CLIP text encoder as ΦL. Given a triplet (r, l, t) of reference image r, modification text l, and target image t, image features Ir = ΦI(r), It = ΦI(t) and text feature L is represented as ΦL(l). Unlike previous works (Baldrati et al. 2022), we also preserve the fine-grained token-level features in addition to the global representations, which facilitates the exploration of richer interactions between modalities. We use a linear layer projecting image token-level features to the same d-dimension as text modality representations (d = 512). Note that the features are a set of token-level features and projected global representations, they can be formulated in a unified way as follows: V = {proj(vcls), v1, v2, ..., vM}, (1) where M is the sequence length or the number of tokens and tokens in V can be from r, l, t and thus produces Ir, It or L. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6578 3.3 Degradation Process We propose to model the CIR as a degradation-upgradation learning process where we treat the text as an instruction. The degradation process is to decompose features, during which the reference image is degraded into visual prototypes with decomposed text features guidance. Inspired by the token selection (Liu et al. 2022b) in vision transformers, we propose a trainable cross-modal decompose module to direct the semantics towards the degradation part and upgradation part. With the help of reference images, we can distinguish this set of opposite semantic information. As shown in Figure 2, the inputs are a set of tokens from the provided text {vl 1, vl 2, ..., vl M} ∈RM×d (M is the text sequence length), we first concatenate them with the global semantic representation of the reference image Ig r = proj(vr cls). Then we feed the concatenated features Xl = {x1, x2, ..., xM}, xi = [Ig r , vl i] ∈RM×2d ([·] is the concatenation operation) to one MLP-sigmoid layer. This reflects the weight of each token contribution to the degradation and upgradation semantic parts: Cl = sigmoid(WlXl + bl) ∈RM×1. (2) Multiplied with the original token-level text features, we generate the positive and negative guiding text representations by Eq.( 3). L+ denotes the semantics related to the target, such as the expected attribute, “red color” (in Figure 1). Complementarily, L−implies the object needed to be modified in the reference image, e.g., “the car” (in Figure 1). The conflicting property (red and blue color) is removed. L+ = Cl ⊙L L−= (1 −Cl) ⊙L, (3) So far, we have obtained the guidance text features. Conditioned on this, to determine which among the reference images should be kept or discarded, we proceed with similar decomposition to produce contribution weights of reference image features but exchange the roles of the two modalities: Xr = {x1, x2, ..., xP }, xi = [L−, vr i ] ∈RP ×2d Cr = sigmoid(WrXr + br) ∈RP ×1, (4) where P is the number of patches and Xr is the concatenated features of reference image tokens {vr 1, vr 2, ..., vr P } and global semantic representations of L−. 
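A minimal sketch of the cross-modal decomposition in Eqs. (2)-(4), assuming token-level features have already been extracted; the single linear layer and the mean pooling used to obtain a global L− vector are simplifying assumptions for illustration, not the authors' released code:

```python
import torch
import torch.nn as nn

class Decompose(nn.Module):
    """Token-wise contribution weights for the text (Eqs. 2-3) and the image (Eq. 4)."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.text_mlp = nn.Linear(2 * dim, 1)  # W_l, b_l in Eq. (2)
        self.img_mlp = nn.Linear(2 * dim, 1)   # W_r, b_r in Eq. (4)

    def forward(self, text_tokens, img_tokens, img_global):
        # text_tokens: (M, d), img_tokens: (P, d), img_global: (d,)
        M, P = text_tokens.size(0), img_tokens.size(0)

        # Eq. (2): weight of each text token, conditioned on the global image feature.
        x_l = torch.cat([img_global.expand(M, -1), text_tokens], dim=-1)  # (M, 2d)
        c_l = torch.sigmoid(self.text_mlp(x_l))                           # (M, 1)

        # Eq. (3): split the instruction into upgradation (L+) and degradation (L-) parts.
        l_plus = c_l * text_tokens
        l_minus = (1.0 - c_l) * text_tokens

        # Eq. (4): weight of each image token, conditioned on the pooled L- semantics.
        l_minus_global = l_minus.mean(dim=0)                                 # pooling here is an assumption
        x_r = torch.cat([l_minus_global.expand(P, -1), img_tokens], dim=-1)  # (P, 2d)
        c_r = torch.sigmoid(self.img_mlp(x_r))                               # (P, 1)
        return l_plus, l_minus, c_r

module = Decompose()
l_plus, l_minus, c_r = module(torch.randn(20, 512), torch.randn(49, 512), torch.randn(512))
print(l_plus.shape, l_minus.shape, c_r.shape)  # (20, 512) (20, 512) (49, 1)
```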
After that, we obtain visual prototypes I0 r by I0 r = Cr ⊙Ir I 0 r = (1 −Cr) ⊙Ir, (5) where I0 r preserves the core information of the given reference image. I 0 r is the token features after removing the visual prototypes from the original reference image. 3.4 Upgrading Process Based on visual prototypes, the upgrading process aims to transform the visual prototype into the final representation close to the target image by compositional learning. In our late composition module, there are two parallel branches, one for processing positive guiding text and visual prototypes, and the other for negative guidance and irrelevant features in reference images. In each branch, we first add modality-specific embeddings El&Ei for inputs from different modalities, like modal-type embeddings in ViLT (Kim, Son, and Kim 2021). Note that these two learnable embeddings do not include degradation and upgrading information and they are used to indicate modality information. Then we fuse them via the transformer encode block F(·). In detail, take the top branch as an example, given token sequences [L+, I0 r ], the fused features represent as: Fen = F([L+ + El, I0 r + Ei]), (6) where El, Ei is the text modality embedding and image modality embedding respectively. The same fusion as in Eq.(6) is performed in the bottom branch for the inputs {L−, I 0 r}. Since the fusion layer takes sequence features as input, it can accept independent image token features rather than token features concatenated with text modality. Therefore we make the features of the target image also go through the fusion layer F to generate Ftg but without text inputs. Finally following the work in (Baldrati et al. 2022), the final predicted feature Fp is a linear addition between the convex combination of the global reference image Ig r and global text features L and the learned pooled fused features ˆ Fen. We denote the final fused features in the top branch as F + p and the fused features in the bottom branch as F − p . 3.5 Training and Inference As shown in Figure 2, we use the inner product to measure the similarity between the predicted features and the target image representation and then obtain the ranking list of candidate images. Following (Baldrati et al. 2022; Zhao, Song, and Jin 2022; Wang et al. 2022), the retrieval objective is to minimize batched-based classification loss as follows: Lc = 1 B B X i=1 −log exp(λ ∗s(F (i) p , F (i) tg )) PB j=1 exp(λ ∗s(F (i) p , F (j) tg )) , (7) where λ is a temperature parameter. Given the two sets of predicted features output by the late composite module, we obtain their similarity distribution to the target image. z−= softmax(sim(F − p , Ftg)), z+ = softmax(sim(F + p , Ftg)). Finally, we employ a Kullback-Leibler Divergence loss as a regularization constraint: Lk = KL(z+∥zgt) −KL(z−∥z+). (8) We aim to push away the distance between z+ and z−and thus optimize decompose learning. Thanks to the end-toend training, the overall objective L guides the learning of L+, L−and I0 r and is described as follows: L = Lc + wLkLk, (9) where wLk is the hyperparameter of the loss weights, and its default value is 1. It is worth noting that the bottom branch in Figure 2 used for fusing L−and I 0 r is only for training. During inference, we only use F + p , the composite features to retrieve the target image. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6579 Recall@K Recallsubset@K Method K=1 K=5 K=10 K=50 K=1 K=2 K=3 Average TIRG (Vo et al. 
2019) 14.61 48.37 64.08 90.03 22.67 44.97 65.14 35.52 TIRG+LastConv (Vo et al. 2019) 11.04 35.68 51.27 83.29 23.82 45.65 64.55 29.75 MAAF (Dodds et al. 2020) 10.31 33.03 48.30 80.06 21.05 41.81 61.60 27.04 MAAF+BERT (Dodds et al. 2020) 10.12 33.10 48.01 80.57 22.04 42.41 62.14 27.57 MAAF-IT (Dodds et al. 2020) 9.90 32.86 48.83 80.27 21.17 42.04 60.91 27.02 MAAF-RP (Dodds et al. 2020) 10.22 33.32 48.68 81.84 21.41 42.17 61.60 27.37 ARTEMIS (Delmas et al. 2022) 16.96 46.10 61.31 87.73 39.99 62.20 75.67 43.05 CIRPLANT (Liu et al. 2021) 15.18 43.36 60.48 87.64 33.81 56.99 75.40 38.59 CIRPLANT w/OSCAR (Liu et al. 2021) 19.55 52.55 68.39 92.38 39.20 63.03 79.49 45.88 CLIP4Cir (Baldrati et al. 2022) 38.53 69.98 81.86 95.93 68.19 85.64 94.17 69.09 SSN 43.91 77.25 86.48 97.45 71.76 88.63 95.54 74.51 Table 1: Comparisons with the state-of-the-art methods for composed image retrieval on the CIRR dataset. Here we show all Recall@K, Recallsubset@K and the average metrics. The average metric is the mean value of Recall@5 and Recallsubset@1. Our complete SSN model obtains significant improvement compared to other SOTA methods. The best results are in bold. Method Tops&Tees Dress Shirt Average R@10 R@50 R@10 R@50 R@10 R@50 R@10 R@50 mean TIRG (Vo et al. 2019) 19.08 39.62 14.87 34.66 18.26 37.89 17.40 37.39 27.40 JVSM (Chen and Bazzani 2020) 13.00 26.90 10.70 25.90 12.00 27.10 11.90 26.60 19.25 VAL (Chen, Gong, and Bazzani 2020) 27.53 51.68 22.53 44.00 22.38 44.15 24.15 46.61 35.38 CoSMo (Lee, Kim, and Han 2021) 29.21 57.46 25.64 50.30 24.90 49.18 26.58 53.21 39.90 CLVC-Net (Wen et al. 2021) 33.50 64.00 29.85 56.47 28.75 54.76 30.70 58.41 44.56 SAC (Jandial et al. 2022) 32.70 61.23 26.52 51.01 28.02 51.86 29.08 54.70 41.89 DCNet (Kim et al. 2021) 30.44 58.29 28.95 56.07 23.95 47.30 27.78 53.89 40.84 MAAF (Dodds et al. 2020) 27.90 53.60 23.80 48.60 21.30 44.20 24.30 48.80 36.55 CIRPLANT (Liu et al. 2021) 21.64 45.38 17.45 40.41 17.53 38.81 18.87 41.53 30.20 ARTEMIS (Delmas et al. 2022) 29.20 54.83 27.16 52.40 21.78 43.64 26.05 50.29 38.17 MUR (Chen et al. 2022b) 37.37 68.41 30.60 57.46 31.54 58.29 33.17 61.39 47.28 CLIP4Cir (Baldrati et al. 2022) 41.41 65.37 33.81 59.40 39.99 60.45 38.32 61.74 50.03 SSN 44.26 69.05 34.36 60.78 38.13 61.83 38.92 63.89 51.40 Table 2: Comparisons with the state-of-the-art methods for composed image retrieval on the FashionIQ dataset. Here we show all Recall@10 and Recall@50 across all categories. Our complete SSN model outperforms other state-of-the-art methods on most of the metrics. The best result is in bold. 4 Experiments 4.1 Datasets and Metrics CIRR Dataset (Liu et al. 2021) is the released dataset of open-domain for the CIR task. Each triplet consists of real-life images with human-generated modification sentences. The real-life images come from the popular NLVR2 dataset (Suhr et al. 2018), which contains real-world entities with reasonable complexity. In 36,554 triplets, 80% are for training, 10% are for validation, and 10% are for evaluation. FashionIQ Dataset (Wu et al. 2021) is a realistic dataset for interactive image retrieval in the fashion domain. Each query is composed of one reference image and two natural language descriptions about the visual differences of the target image. Following (Baldrati et al. 2022; Kim et al. 2021), we use the original evaluation split, which includes 5,373, 3,817, and 6,346 images for three specific fashion categories: Tops&Tees, Dresses, Shirts. Metrics. Following previous works (Baldrati et al. 2022; Delmas et al. 
2022; Zhao, Song, and Jin 2022), we employ Recall within top-K as the retrieval performance, which indicates the ratio of the ground-truth target image in the top-K ranking list that is correctly retrieved. 4.2 Implementation Details We utilize the CLIP (Radford et al. 2021) model to initialize the image encoder with ViT-B/32. The hidden dimension of the 1-layer 8-head transformer encoder is set to 512. The temperature λ of the main retrieval loss (in Eq.(7)) is equal to 100. Note that for FashionIQ, we fix the image encoder after one training epoch and fine-tune the text encoder only. We adopt AdamW optimizer with an initial learning rate of 5e-5 to train the whole model. We apply the step scheduler to decay the learning rate by 10 every 10 epochs. The batch The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6580 Recall@K Recall subset@K Method K=1 K=5 K=10 K=50 K=1 K=2 K=3 (R@5+R sub@1)/2 Baseline 42.62 76.7 87.06 97.54 68.98 86.73 94.16 72.84 SSN(Ir, L) 43.34 76.97 87.4 97.2 72.18 88.42 95.24 74.575 SSN(I0 r, L) 44.43 77.44 86.92 96.96 71.63 88.21 94.88 74.535 SSN(Ir, L+) 43.46 77.64 87.51 97.42 71.9 87.9 95.12 74.77 SSN 45.13 77.49 87.75 97.32 73.04 88.64 95.17 75.265 Table 3: Ablation Studies of our SSN model with different components and various settings for decomposition outputs. We report all Recall@K, Recallsubset@K, and the mean recall on the validation set of the CIRR dataset. shared Lk R@K R sub@1 mean K=1 K=5 1 ✓ ✓ 43.77 77.30 71.66 74.91 2 × × 44.32 77.42 71.92 74.67 3 × ✓ 45.13 77.49 73.04 75.27 Table 4: Ablation experiments on loss function terms Lk and an exploration of whether two decomposing layers (one for images and one for text) share parameters. We report all metrics on the validation set of CIRR dataset. size is set to 128 and the network is trained for 50 epochs. All experiments can be implemented with PyTorch on a single NVIDIA RTX 3090 Ti GPU. 4.3 Comparison with State-of-the-Arts Results on CIRR dataset are presented in Table 1 for the test set. Our model which learns to decompose visual prototypes and semantic shift outperforms the state-of-the-art in all metrics. Compared to CLIP4Cir (Baldrati et al. 2022), a strong competitor that has recently successfully applied the CLIP model to the CIR task, our model outperforms it by 5.42% mean recall (R@5 + Rsub@1)/2 and increases up to 5.38% in Top-1 recall metrics. As shown in Table 1, we also outperform other methods by a large margin. Results on FashionIQ dataset are reported in Table 2 for the validation set. Although the model is not improved as much as on the CIRR dataset, our proposed method achieves state-of-the-art results for all categories in most cases. Compared to the strongest method, we improved the mean recall by 1.37%. The limited improvement is due to the domain gap between the fashion data and the open domain CLIP, the small size of the data, and the specialization for fine-tuning the CLIP image encoder. 4.4 Ablation Studies Model architecture. In order to demonstrate the contributions of individual components in our design, we first conducted experiments about ablated models. Moreover, we also explored several various settings for the decomposition outputs that are used during upgradation. Table 3 presents the detailed results on the validation set of the CIRR dataset. The different ablated models are as follows: • Baseline: it is the model without any designed module. • SSN(Ir, L): it is the SSN model without degradation. 
That means the inputs for the upgradation are original dense tokens extracted from the visual and textual encoder. • SSN(I0 r, L): it is the complete SSN model where visual prototypes and original text features are decomposition outputs. • SSN(Ir, L+): it is the complete SSN model where the original reference image and positive guiding text features are decomposition outputs. • SSN: it is the full SSN model where positive guiding text features (L+) rich the degraded visual prototypes (I0 r) during upgradation and then produces the final representations. There are three following observations in Table 3: 1) the SSN(Ir, L) model slightly outperforms the baseline by 0.72% in Recall@1 because of fine-grained token features. Our proposed method (SSN) achieves the best performance and gains a more significant improvement over the baseline model (2.51% in Recall@1). This highlights the effectiveness of decomposing semantic shifts into two steps. 2) SSN(Ir, L+) is comparable to SSN (Ir, L) model. This is because when only decomposing text instructions without generating visual prototypes, semantic shifts lack a wellacted object. 3) Based on the SSN (Ir, L) model, two models (SSN(I0 r, L) & SSN) picturing visual prototypes from reference images achieved further improvements up to 1.79%. This supports the motivation discussed in Section 1 that the visual cue of the reference images should not be disregarded. Loss function. Our total training objective in Eq.(9) involves two aspects: the main retrieval loss and additional regularized loss Lk. To demonstrate the effectiveness of the regularized constraint decomposing features, we performed ablation experiments on Lk in Eq.(9). Comparing the second and third rows in Table 4, we observe that the model with a regularized loss Lk performs better than the one without Lk, despite only a slight improvement. This shows additional loss Lk helps to learn optimal decomposed features from original CLIP representations. Shared parameter of decomposing layer for image and text? In Figure 2, we employ the same structure: one MLP layer followed by a sigmoid activation function, to decompose the semantic shifts and obtain visual prototypes. To explore whether components with the same structure can share parameters, we conduct additional experiments. From The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6581 Sunlight hits the back of the furry animal on the rock. People in black cap and gown stand in a room with lights A short sleeve striped loose shirt and has no buttons Has green lettering and it has a print at the back Figure 3: Visualization of where to be concerned when picturing visual prototypes from reference images on two datasets, in the form of heatmaps. In the reference image in the first column, the back of the furry animal is highlighted in the visual prototype, indicating the main characteristics of the target image. In the reference image of the fourth column, the reference image changes the print (lettering and color) in the front of the T-shirt to bring it back to the visual prototype. This suggests that we are not just concerned with the salient objects in the image and that the visual prototype contains a rich set of visual cues. 𝐶": (1 −𝐶"): Remove the concreter to the right. Shot from a different angle. Query text: Figure 4: Examples of the learned Cl when decomposing text instructions to L+&L−. The darker the green box, the higher the corresponding weight. 
rows one and three in Table 4, we can see that MLP layers that share parameters hurt the performance of the model. This is because decomposing networks for reference images and text have different objectives despite the same architecture. Producing visual prototypes is to get invariant features (shared with target images) while decomposing semantics in the text instruction is to get semantically shifted features. 4.5 Qualitative Results Heatmaps in visual prototypes. As shown in Figure 3, we visualize what details are retained in the process of picturing visual prototypes from reference images on both datasets. In Section 3.3, we generate Cr to indicate whether certain features of the reference image are preserved or not. The heatmap is a merging of Cr and the original reference images. From the heatmaps, we can tell where the visual prototype and the original reference image have changed and the extent of these changes. With decomposing semantic shifts as guidance, we observe that the majority of important information in the reference image receives more attention. Normalized weights in L+&L−. We give examples of the learned weight Cl when decomposing text instructions to L+&L−in Figure 4. The word tokens with high weight in L+ are those words representing semantic shifts, e.g., “remove, to right”. While this type of words contribute little in L−, those with high weights in L−are some object words, corresponding to visual clues in the reference image. Query: Dog with human instead of another dog Retrieved by the reference image only (Top-4) Retrieved by the visual prototype (Top-4) Retrieved by the proposed SSN (Top-4) Figure 5: Top-4 retrieved results of the reference image, visual prototype and the proposed SSN. Comparison of the retrieved results between different image inputs and SSN. From Figure 5, we see that the images retrieved by the reference image only are still “Dog with dog”, while the images retrieved by the I0 r have removed another dog and preserved the most valuable cues, regardless of whether the dog was with something or not. Our SSN can put the correct image in the first position. 5 Conclusion In this paper, we focus on the composed image retrieval task, an extended image retrieval task. Given the provided reference image and text requirements pair, the goal is to retrieve the desired target image. We first rethink the text as an instruction and then propose a Semantic Shift Network (SSN) to decompose the text instructions into degradation and upgradation. The text first directs the reference image toward the visual prototype and then guides the visual prototype closer to the target image. Extensive experiments on two benchmark datasets verify the effectiveness of the proposed method and show that our model significantly outperforms state-of-the-art methods by 5.42% and 1.37% on the mean of Recall@K, respectively. In the future, we intend to explore other complex mechanisms to model the text instruction in the CIR task and extend to the zero-shot setting. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6582 Acknowledgments This work is supported in part by National Natural Science Foundation of China (Grant No. U23A20318 and 62276195), Special Fund of Hubei Luojia Laboratory under Grant 220100014 and The Fundamental Research Funds for the Central Universities (No. 2042023kf1033). References Baldrati, A.; Bertini, M.; Uricchio, T.; and Del Bimbo, A. 2022. 
Conditioned and composed image retrieval combining and partially fine-tuning CLIP-based features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4959–4968. Chen, W.; Liu, Y.; Wang, W.; Bakker, E. M.; Georgiou, T.; Fieguth, P.; Liu, L.; and Lew, M. S. 2022a. Deep learning for instance retrieval: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence. Chen, Y.; and Bazzani, L. 2020. Learning Joint Visual Semantic Matching Embeddings for Language-Guided Retrieval. In European Conference on Computer Vision. Chen, Y.; Gong, S.; and Bazzani, L. 2020. Image search with text feedback by visiolinguistic attention learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3001–3011. Chen, Y.; Zheng, Z.; Ji, W.; Qu, L.; and Chua, T.-S. 2022b. Composed Image Retrieval with Text Feedback via Multi-grained Uncertainty Regularization. arXiv preprint arXiv:2211.07394. Delmas, G.; de Rezende, R. S.; Csurka, G.; and Larlus, D. 2022. Artemis: Attention-based retrieval with textexplicit matching and implicit similarity. arXiv preprint arXiv:2203.08101. Deng, C.; Wu, Q.; Wu, Q.; Hu, F.; Lyu, F.; and Tan, M. 2018. Visual grounding via accumulated attention. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7746–7755. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In North American Chapter of the Association for Computational Linguistics. Dodds, E.; Culpepper, J.; Herdade, S.; Zhang, Y.; and Boakye, K. 2020. Modality-agnostic attention fusion for visual search with text feedback. arXiv preprint arXiv:2007.00145. Goenka, S.; Zheng, Z.; Jaiswal, A.; CHADA, R.; Wu, Y.; Hedau, V.; and Natarajan, P. 2022. FashionVLP: Vision language transformer for fashion retrieval with feedback. In CVPR 2022. Hosseinzadeh, M.; and Wang, Y. 2020. Composed query image retrieval using locally bounded features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3596–3605. Jandial, S.; Badjatiya, P.; Chawla, P.; Chopra, A.; Sarkar, M.; and Krishnamurthy, B. 2022. SAC: Semantic Attention Composition for Text-Conditioned Image Retrieval. In IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, 597–606. IEEE. Kim, J.; Yu, Y.; Kim, H.; and Kim, G. 2021. Dual compositional learning in interactive image retrieval. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 1771–1779. Kim, W.; Son, B.; and Kim, I. 2021. Vilt: Vision-andlanguage transformer without convolution or region supervision. In International Conference on Machine Learning, 5583–5594. PMLR. Lee, S.; Kim, D.; and Han, B. 2021. Cosmo: Content-style modulation for image retrieval with text feedback. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 802–812. Liu, B.; Wang, D.; Yang, X.; Zhou, Y.; Yao, R.; Shao, Z.; and Zhao, J. 2022a. Show, deconfound and tell: Image captioning with causal inference. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18041–18050. Liu, Y.; Xiong, P.; Xu, L.; Cao, S.; and Jin, Q. 2022b. Ts2net: Token shift and selection transformer for text-video retrieval. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XIV, 319–335. Springer. Liu, Z.; Opazo, C. R.; Teney, D.; and Gould, S. 2021. 
Image Retrieval on Real-life Images with Pre-trained Vision-andLanguage Models. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV, 2105–2114. IEEE. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, 8748–8763. PMLR. Saito, K.; Sohn, K.; Zhang, X.; Li, C.-L.; Lee, C.-Y.; Saenko, K.; and Pfister, T. 2023. Pic2word: Mapping pictures to words for zero-shot composed image retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19305–19314. Suhr, A.; Zhou, S.; Zhang, A.; Zhang, I.; Bai, H.; and Artzi, Y. 2018. A corpus for reasoning about natural language grounded in photographs. arXiv preprint arXiv:1811.00491. Sun, J.; Shen, Z.; Wang, Y.; Bao, H.; and Zhou, X. 2021. LoFTR: Detector-free local feature matching with transformers. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8922–8931. Teichmann, M.; Araujo, A.; Zhu, M.; and Sim, J. 2019. Detect-To-Retrieve: Efficient Regional Aggregation for Image Search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vo, N.; Jiang, L.; Sun, C.; Murphy, K.; Li, L.-J.; Fei-Fei, L.; and Hays, J. 2019. Composing text and image for image retrieval-an empirical odyssey. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 6439–6448. Wang, C.; Nezhadarya, E.; Sadhu, T.; and Zhang, S. 2022. Exploring Compositional Image Retrieval with Hybrid Compositional Learning and Heuristic Negative Mining. In Findings of the Association for Computational Linguistics: EMNLP 2022, 1273–1285. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6583 Wang, Z.; Liu, X.; Li, H.; Sheng, L.; Yan, J.; Wang, X.; and Shao, J. 2019. Camp: Cross-modal adaptive message passing for text-image retrieval. In Proceedings of the IEEE/CVF international conference on computer vision, 5764–5773. Weinzaepfel, P.; Lucas, T.; Larlus, D.; and Kalantidis, Y. 2022. Learning super-features for image retrieval. arXiv preprint arXiv:2201.13182. Wen, H.; Song, X.; Yang, X.; Zhan, Y.; and Nie, L. 2021. Comprehensive linguistic-visual composition network for image retrieval. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1369–1378. Wu, H.; Gao, Y.; Guo, X.; Al-Halah, Z.; Rennie, S.; Grauman, K.; and Feris, R. 2021. Fashion iq: A new dataset towards retrieving images by natural language feedback. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11307–11317. Yang, Y.; Wang, M.; Zhou, W.; and Li, H. 2021. Crossmodal Joint Prediction and Alignment for Composed Query Image Retrieval. In Shen, H. T.; Zhuang, Y.; Smith, J. R.; Yang, Y.; C´esar, P.; Metze, F.; and Prabhakaran, B., eds., MM ’21: ACM Multimedia Conference, 3303–3311. ACM. Zhang, F.; Yan, M.; Zhang, J.; and Xu, C. 2022. Comprehensive Relationship Reasoning for Composed Query Based Image Retrieval. In Magalh˜aes, J.; Bimbo, A. D.; Satoh, S.; Sebe, N.; Alameda-Pineda, X.; Jin, Q.; Oria, V.; and Toni, L., eds., MM ’22: The 30th ACM International Conference on Multimedia, 4655–4664. ACM. Zhang, G.; Wei, S.; Pang, H.; Qiu, S.; and Zhao, Y. 2023. Enhance Composed Image Retrieval via Multi-level Collaborative Localization and Semantic Activeness Perception. 
IEEE Transactions on Multimedia. Zhang, J.; and Tao, D. 2020. Empowering things with intelligence: a survey of the progress, challenges, and opportunities in artificial intelligence of things. IEEE Internet of Things Journal, 8(10): 7789–7817. Zhao, Y.; Song, Y.; and Jin, Q. 2022. Progressive Learning for Image Retrieval with Hybrid-Modality Queries. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1012–1021. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6584 | 2024 | 731 |
18,553 | Gaze Target Detection by Merging Human Attention and Activity Cues Yaokun Yang, Yihan Yin, Feng Lu* State Key Laboratory of VR Technology and Systems, School of CSE, Beihang University {yangyaokun, yyhppx, lufeng}@buaa.edu.cn *Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Abstract Despite achieving impressive performance, current methods for detecting gaze targets, which depend on visual saliency and spatial scene geometry, still struggle to detect gaze targets within intricate image backgrounds. One of the primary reasons is that they overlook the intricate connection between human attention and activity cues. In this study, we introduce an approach that merges visual saliency detection with body-part & object interaction, both guided by soft gaze attention. This fusion enables precise and dependable detection of gaze targets amidst intricate image backgrounds. Our approach attains state-of-the-art performance on both the GazeFollow benchmark and the GazeVideoAttn benchmark. In comparison to recent methods that rely on intricate 3D reconstruction of a single input image, our approach, which leverages only 2D image information, still exhibits a substantial lead across all evaluation metrics, positioning it closer to human-level performance. These outcomes underscore the effectiveness of our proposed method in the gaze target detection task. Introduction Eye gaze plays a pivotal role in elucidating human activities. Although traditional studies (Lu et al. 2014a,b; Cheng et al. 2020; Zhang et al. 2015, 2017) have predominantly centered on estimating the gaze direction, discerning the precise location that a person fixates upon, termed the gaze target, offers a more intuitive avenue for studying human attention. Consequently, detecting human gaze targets in real-world contexts has emerged as a challenging problem in computer vision. Furthermore, this task has found extensive applications across diverse domains such as human-computer interaction (Fathi, Li, and Rehg 2012; Schauerte and Stiefelhagen 2014), analysis of social awareness (Marin-Jimenez et al. 2019, 2014; Fan et al. 2018), and medical research. Figure 1: Comparison between existing methods and ours. Traditionally, the task of gaze target detection has predominantly revolved around visual saliency detection along the gaze direction (Recasens et al. 2015; Lian, Yu, and Gao 2018; Chong et al. 2020). Furthermore, recent advancements (Fang et al. 2021; Bao, Liu, and Yu 2022) have integrated monocular depth estimation as an auxiliary information source to enhance the computation of the scene's three-dimensional geometry. Despite these performance gains, prevailing methods continue to grapple with the precise and dependable detection of gaze targets amidst intricate image backgrounds. This challenge can be attributed to the lack of consideration given to the intricate connection between human attention and activity cues. The gaze target detection task serves as a means to elucidate the connection between human attention and activity cues.
Specifically, by observing an individual’s gaze attention, we can glean insights into their activities. Moreover, comprehending an individual’s activity cues helps us to anticipate their gaze target. Based on above analysis, as Illustrated in Fig. 1, we consider merging human attention and activity cues in the gaze target detection task. In this study, we introduce an innovative approach that amalgamates the visual saliency detection with the body-part & object interaction both guided by the soft gaze attention. This fusion enables precise and dependable detection of gaze targets amidst intricate image backgrounds. Based on our observations, when individuals are engrossed in specific activities, their gaze attention tends to be fixated on objects they are actively interacting with (see Fig. 2 (a, b, c)). However, scenarios exist where the gaze target might involve non-interactive objects (see Fig. 2 (d)) or be directed towards the conduct of another individual. Thus, it The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6585 (b) (c) (d) head – tube right foot – football right hand – knife body – bench left hand – frisbee (e) Gaze target Interaction: body part – object (a) Figure 2: Visualizing the intricate connection between human attention and activity cues. The gaze target of an individual can either be directed at an object that interacts with specific body parts (see images (a, b, c)), or it might involve a non-interactive object (see image (d)). Moreover, objects that interact with the entire body might not necessarily align with their gaze attention (see image (e)). becomes imperative to establish a mechanism that unearths the intricate connection between human gaze attention and activity cues. Recognizing that a significant portion of interactive objects might not align with the individual’s gaze attention (e.g., Fig. 2 (e)), we introduce a pioneering body-part & object interaction attention mechanism specially designed for gaze target detection. Our approach centers on identifying interactions between five specific body parts—namely, the head, hands and feet—and each object within the scene. This process is guided by the individual’s gaze attention and aims to effectively discern the potential gaze target among all interactive objects. Moreover, a significant portion of samples present challenges of low facial visibility in the wild due to factors like blurriness, orientation, or obstructions, among others. Gaze estimation methods (Cheng et al. 2020; Zhang et al. 2015) that solely rely on facial characteristics are susceptible to failure under such circumstances. To address this limitation, we introduce a resilient soft gaze attention mechanism. This technique extracts gaze-consistent features from both the human face and five specific head keypoints—namely, the nose, eyes, and ears. The resultant gaze attention is harnessed to assess the probability of a salient region or an interaction hotspot housing potential gaze targets. We stand as pioneers in merging human attention and activity cues into the gaze target detection task. In this study, we introduce an innovative approach that amalgamates the visual saliency detection with the body-part & object interaction both guided by the soft gaze attention. This fusion enables precise and dependable detection of gaze targets amidst intricate image backgrounds. Notably, our approach attains state-of-the-art performance on both the Gazefollow benchmark (Recasens et al. 
2015) and the GazeVideoAttn benchmark (Chong et al. 2020). In comparison to recent methods that rely on intricate 3D reconstruction of a single input image, our approach, which solely leverages 2D image information, still exhibits a substantial lead across all evaluation metrics, positioning it closer to human-level performance. These outcomes underscore the potent effectiveness of our proposed method in gaze target detection. This paper makes the following primary contributions: • We propose a novel approach which utilizes gaze and activity cues to solve the gaze target detection task. Our strategy to integrate gaze direction and human-object interaction reflects the natural idea of combining human attention and activity. • We design a robust gaze attention mechanism which extracts the gaze features from both the human face and specific head keypoints. • We introduce a specialized body-part & object interaction module which is able to uncover the connection between human attention and activity cues. Related Work Gaze Target Detection The gaze target detection task offers a more intuitive approach to delve into profound human attention. Recasens (Recasens et al. 2015) pioneered the exploration of this general problem and presented the expansive GazeFollow image dataset, featuring annotations of head positions and corresponding gaze targets. Lian (Lian, Yu, and Gao 2018) harnessed multi-scale FOV attention to enhance view supervision. Chong (Chong et al. 2020) extended the task to out-of-frame scenarios through a video dataset. Fang (Fang et al. 2021) introduced monocular depth estimation as additional prior information. Bao (Bao, Liu, and Yu 2022) utilized intricate analytical calculations for 3D geometry. Despite these achievements in performance, prevailing methods still encounter challenges in accurately detecting gaze targets amid complex image backgrounds. Gaze Estimation The problem of appearance-based gaze estimation has long been a focal point in computer vision (Lu et al. 2014a,b; Cheng et al. 2020; Fischer, Chang, and Demiris 2018; Zhang et al. 2015, 2017). Nevertheless, the majority of available gaze estimation datasets (Kellnhofer et al. 2019; Sugano, Matsushita, and Sato 2014; Zhang et al. 2020) are obtained within controlled laboratory environments, encompassing meticulous configurations of multiview cameras, 3D positions of human subjects, and designated gaze targets. Consequently, these datasets consist solely of single face images from a limited range of scenes. Human-Object Interaction The task of recognizing human-object interactions (Yao and Fei-Fei 2010, 2012; Gupta and Malik 2015; Gkioxari et al. 2018; Gao, Zou, and Huang 2018; Chao et al. 2018; Qi et al. 2018) can be represented as detecting hhuman, verb, objecti triplets. Gupta and Malik (Gupta and Malik 2015) first tackle the HOI detection problem — detecting people doing actions and the object instances they are interacting with. Gkioxari (Gkioxari et al. 2018) introduces an action-specific density map over target object locations based on the appearance of a detected person. In addition to using object instance appearances, Chao (Chao et al. 2018) also encode the relative spatial relationship between a person and the object with a CNN. 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6586 Gaze attention Soft Gaze Attention (SGA) Head location Input image Face FC Gaze Feature Extracter Head keypoints Body keypoints Body-part & Object Interaction Attention Object location Object proposal Interaction attention Location Encoder Object Detector Body-part location Scene Feature Extracter Input image Gaze target output Encoder Decoder Target Detection Backbone Interaction branch Saliency branch Body Pose Estimator (a) Gaze Target Detection Approach Encoder (b) SGA-Module Gaze attention Face image 5 head keypoints Head location FC FC FC 49 512 10 1024 768 1024 7×7 28 × 28 MLP CNN Class Figure 3: Overview. Our gaze target detection approach consists of three main modules: Soft Gaze Attention, Body-part & Object Interaction Attention, and Target Detection Backbone. The target detection backbone comprises two branches: the saliency branch and the interaction branch. Finally, we combine the target heatmaps generated by these two branches, utilize a CNN network to predict the ultimate gaze target heatmap, and employ an MLP to determine if the gaze target falls out of the frame. SGA-Module is the architecture of our soft gaze attention module. This module is designed to generate a soft gaze attention map by leveraging information from both the human face and specific head keypoints. Approach Illustrated in Figure 3, our gaze target detection approach consists of three main modules: Soft Gaze Attention, Bodypart & Object Interaction Attention, and Target Detection Backbone. The target detection backbone encompasses two distinctive branches: the saliency branch and the interaction branch. Our soft gaze attention module is designed to predict gaze attention by leveraging information from both the human face and five specific head keypoints (the nose, eyes and ears). The resulting gaze attention map Ag plays a pivotal role in guiding the body-part & object interaction module and the target detection backbone. Our body-part & object interaction attention module initiates by employing a pre-trained body pose estimator to calculate the body keypoints of the individual, denoted as vbk, and a pre-trained object detector to derive object proposals within the scene. Guided by the soft gaze attention Ag, this module discerns interactions between five distinct body parts (i.e., the head, hands and feet) and all objects present within the scene. Subsequently, the body-part & object interaction attention Ahoi is generated and employed to guide the interaction branch within our target detection backbone. Our target detection backbone initiates by extracting scene features from the entire scene input. Guided by soft gaze attention Ag, our saliency branch determines whether the extracted saliency regions encompass potential gaze targets. In parallel, guided by body-part & object interaction attention Ahoi, our interaction branch gauges the likelihood that the detected interaction hotspots constitute potential gaze targets. Finally, we combine the target heatmaps generated by these two branches, utilize a CNN network to predict the ultimate gaze target heatmap, and employ an MLP to determine if the gaze target falls out of the frame. Soft Gaze Attention The architecture of our soft gaze attention module is illustrated in Figure 3. We employ the lightweight MobileNet (Howard et al. 2019) to extract features from the provided face image Iface, which has been pre-resized to 64 × 64 pixels. 
Then, the extracted feature maps undergo an average pooling operation, resulting in a 1024-dimensional feature vector vf. Simultaneously, as depicted in Figure 3, we derive five specific head keypoints (i.e., the nose, eyes and ears) from the computed body keypoints of the individual, which is accomplished by a pre-trained body pose estimator. Subsequently, these five head keypoints are encoded and transformed into a 512-dimensional feature vector vhk through a fully connected (FC) layer. The vectors vf and vhk are then concatenated and further projected into a 1024-dimensional feature vector vg via an additional FC layer. Following this, the head location map Mh is resized to dimensions of 28 × 28 pixels and encoded into a 768dimensional vector vh. These vectors, vg and vh, are concatenated and projected into a 49-dimensional vector vatn through a subsequent FC layer. Finally, the vector vatn is resized to yield the 7 × 7 pixel gaze attention map Ag. In situations where faces have limited visibility, our soft gaze attention module showcases heightened resilience. This enhanced resilience stems from the module’s ability to leverage the spatial correlation between head keypoints and facial orientation, setting it apart from traditional gaze estimation methods (Cheng et al. 2020) that exclusively emphasize the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6587 Head box input Ground-truth gaze target Predicted gaze target Figure 4: Comparison between the Baseline and our method. First row: showcases a selection of samples extracted from the Gazefollow test set along with their corresponding true annotations. Second row: presents the gaze target heatmap forecasted by the Baseline. Third row: displays the prediction generated by our method. Our method distinctly outperforms the Baseline when the actual gaze target is a diminutive interactive object concealed within intricate image backgrounds. extraction of gaze-consistent features from the human face. Body-part & Object Interaction Attention As depicted in Fig. 3, our approach initially utilizes a pretrained body pose estimator to calculate the body keypoints of the individual. Subsequently, we determine the location map Mbp for five specific body parts (i.e., the head, hands and feet) using the keypoint coordinates. Simultaneously, employing a pre-trained object detector, we acquire object proposals from the scene and generate an object location map Mo for all detected objects. Subsequently, we concatenate the object location map Mo with the body-part location map Mbp in channel dimension, resulting in the formation of the body-part & object location pair. Through Eq. 1, these paired location maps, along with the gaze attention map Ag, are concatenated and passed through a location encoder denoted as Floc(·), leading to the generation of the interaction attention map Ahoi that pertains to the body-parts of the individual and the detected objects. Ahoi = Floc((Mbp ⊕Mo) ⊕Ag). (1) Our body-part & object interaction attention mechanism enhances the precision of identifying potential gaze targets among a range of interactive objects. Target Detection Backbone Illustrated in Fig.3, we commence by concatenating the head location map Mh of the given individual with the complete scene image Irgb. Subsequently, we utilize the feature extractor Fscn(·) to extract the convolutional scene feature maps denoted as mscn, mscn = Fscn(Irgb ⊕Mh). 
The saliency branch Fsal(·) is composed of two 1 × 1 CNN layers and three transposed CNN layers. Guided by the soft gaze attention Ag, this branch encodes and decodes a target heatmap Hsal through Eq. 3, to ascertain whether the extracted saliency regions contain potential gaze targets.
Hsal = Fsal(mscn ⊗ Ag). (3)
The interaction branch Fhoi(·) shares the same architecture as the saliency branch. Guided by the body-part & object interaction attention Ahoi, this branch encodes and decodes another target heatmap Hhoi through Eq. 4, to determine the probability that the identified interaction hotspots represent potential gaze targets.
Hhoi = Fhoi(mscn ⊗ Ahoi). (4)
Finally, through Eq. 5, we combine these two predicted heatmaps and input them into a fusion network Ffus(·) comprising two 1 × 1 CNNs, to generate the ultimate prediction Hfus for the gaze target.
Hfus = Ffus(Hsal ⊕ Hhoi). (5)
Meanwhile, we also input Hsal ⊕ Hhoi into an MLP classifier to determine if the gaze target falls out of the frame.
Overall Loss Function To provide supervision for the saliency branch, we employ a regression loss function Lsal that computes the mean square error between the gaze-guided scene saliency map Hsal and the ground truth gaze target heatmap H∗,
Lsal = MSE(Hsal, H∗). (6)
Since there are no annotations pertaining to human activities in gaze target detection datasets, we do not provide distinct supervision for our interaction branch. Instead, we supervise the fusion of the interaction branch and the saliency branch through an additional loss function, denoted as Lfus, on our fusion prediction. The loss function Lfus computes the mean square error between the fusion target heatmap Hfus and the ground truth H∗,
Lfus = MSE(Hfus, H∗). (7)
We define the classification loss function of the gaze target as Lcls. The overall loss function is formulated as follows,
L = λ1Lcls + λ2Lsal + λ3Lfus, (8)
where λ1, λ2 and λ3 are hyper-parameters. We empirically set λ1 = λ2 = λ3 = 1.0.
Methods | Supervision (Activity/Depth/3D/Eye) | GazeFollow Min. Dist. ↓ | GazeFollow Avg. Dist. ↓ | VideoAttentionTarget Dist. ↓ | VideoAttentionTarget AP ↑
Random | | 0.391 | 0.484 | 0.458 | 0.621
Fixed bias | | 0.219 | 0.306 | 0.326 | 0.624
Baseline | | 0.077 | 0.137 | 0.147 | 0.848
Chen | √ | 0.074 | 0.136 | |
Fang | √ √ | 0.067 | 0.124 | 0.108 | 0.896
Tu | | 0.069 | 0.133 | 0.126 | 0.854
Bao | √ √ | | 0.122 | 0.120 | 0.869
Miao | √ | 0.065 | 0.123 | 0.109 | 0.908
Ours* | | 0.068 | 0.126 | 0.106 (20.9% ↓) | 0.910 (7.3% ↑)
Ours | √ | 0.061 (20.8% ↓) | 0.118 (13.9% ↓) | |
Table 1: Evaluation on the GazeFollow dataset and the VideoAttentionTarget dataset. Ours*: our method without the body-part & object interaction attention and the interaction branch. Ours: our complete method. The data in parentheses represents the proportion of improvement in the performance of our method compared to the Baseline. Activity: the individual's activity cues. Depth: depth prior information of the scene. 3D: 3D reconstruction of the scene. Eye: additional eye annotations.
Experimental Results
Preparation
Datasets This paper employs two well-established datasets for gaze target detection, namely GazeFollow (Recasens et al. 2015) and VideoAttentionTarget (Chong et al. 2020). GazeFollow constitutes a large-scale gaze-tracking dataset that comprises 130,339 individuals within 122,143 images. These images are sourced from a diverse range of existing datasets, e.g., ImageNet (Deng et al. 2009), COCO (Lin et al. 2014), PASCAL (Everingham et al. 2010), SUN (Xiao et al. 2010), etc.
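For reference, the two-branch backbone and the loss in Eqs. 3–8 described above can be sketched as follows. The channel widths, the transposed-convolution strides, the heatmap resolution, and the use of a binary cross-entropy for the out-of-frame classifier Lcls are our assumptions; the two 1 × 1 convolutions per branch, the attention-weighted scene features, the concatenation-based fusion, and the equal loss weights come from the text.

import torch
import torch.nn as nn
import torch.nn.functional as F

def make_branch(in_ch):
    # Two 1x1 convolutions followed by three transposed convolutions, as described for both branches.
    return nn.Sequential(
        nn.Conv2d(in_ch, 256, 1), nn.ReLU(),
        nn.Conv2d(256, 256, 1), nn.ReLU(),
        nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
    )

class TargetDetectionBackbone(nn.Module):
    def __init__(self, scene_ch=2048):                        # 2048 = ResNet-50 scene feature channels
        super().__init__()
        self.saliency_branch = make_branch(scene_ch)          # F_sal
        self.interaction_branch = make_branch(scene_ch)       # F_hoi
        self.fusion = nn.Sequential(nn.Conv2d(2, 16, 1), nn.ReLU(), nn.Conv2d(16, 1, 1))  # F_fus: two 1x1 CNNs
        self.out_of_frame_mlp = nn.Sequential(                # MLP on the concatenated heatmaps
            nn.AdaptiveAvgPool2d(7), nn.Flatten(),
            nn.Linear(2 * 7 * 7, 256), nn.ReLU(), nn.Linear(256, 1),
        )

    def forward(self, m_scn, a_g, a_hoi):
        a_g = F.interpolate(a_g, size=m_scn.shape[-2:], mode="bilinear", align_corners=False)
        a_hoi = F.interpolate(a_hoi, size=m_scn.shape[-2:], mode="bilinear", align_corners=False)
        h_sal = self.saliency_branch(m_scn * a_g)             # Eq. 3
        h_hoi = self.interaction_branch(m_scn * a_hoi)        # Eq. 4
        pair = torch.cat([h_sal, h_hoi], dim=1)
        h_fus = self.fusion(pair)                             # Eq. 5
        return h_sal, h_fus, self.out_of_frame_mlp(pair)

def overall_loss(h_sal, h_fus, out_logit, h_star, in_frame_label, lams=(1.0, 1.0, 1.0)):
    l_cls = F.binary_cross_entropy_with_logits(out_logit.squeeze(1), in_frame_label)  # assumed form of L_cls
    l_sal = F.mse_loss(h_sal, h_star)                         # Eq. 6
    l_fus = F.mse_loss(h_fus, h_star)                         # Eq. 7
    return lams[0] * l_cls + lams[1] * l_sal + lams[2] * l_fus  # Eq. 8 with lambda1 = lambda2 = lambda3 = 1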
After partitioning, 4,782 annotated individuals are designated for testing, with the remainder allocated for training. Furthermore, ten human annotations are solicited per individual in the test images to facilitate an evaluation of human performance. VideoAttentionTarget extends the task to out-of-frame scenarios. This dataset encompasses 1,331 video clips procured from various sources on YouTube, accompanied by 164,541 frame-level head bounding box annotations. Evaluation Metrics The evaluation of our proposed model’s performance is conducted using the following metrics. Dist.: This metric quantifies the performance by evaluating the L2 distance between the predicted gaze target point and the corresponding ground truth annotation. Out of frame AP: The accuracy of identifying out-of-frame instances is assessed through the utilization of average precision (AP). These metrics provide a comprehensive assessment of our model’s performance across various aspects. Implementation Details Our implementation is carried out using the PyTorch framework. We utilize ResNet-50 (He et al. 2016) as our scene feature extractor. All input scene images are resized to dimensions of 224 × 224, while our input face image is resized to 64 × 64. During training, we employ a mini-batch size of 32 on a single NVIDIA Titan Xp GPU, initializing with a learning rate of 0.0001. Our training regimen spans 90 epochs on the GazeFollow dataset, with learning rate adjustments at the 80th and 90th epochs, involving a multiplication by 0.1. Our entire training process takes approximately 18 hours. As our optimizer, we rely on the Adam algorithm (Kingma and Ba 2014), with an Adam weight decay set at 0.0001 and an Adam momentum of 0.9. During inference, our complete model achieved an image processing time of less than 75ms on a single NVIDIA GPU. Comparison Methods Baseline We adopt the method introduced in Video (Chong et al. 2020) as our Baseline. The Baseline approach generates gaze attention solely from the human face and predicts the gaze target exclusively by extracting the gazeguided salience feature of the scene. It is evident that the disparity in performance between our comprehensive model and the Baseline stems from the integration of the interaction branch guided by our proposed body-part & object interaction attention, along with the incorporation of five specific head keypoints into our soft gaze attention module. Gaze Target Detection Methods Furthermore, we conduct comparisons with five recent methods: Chen (Chen et al. 2021), Fang (Fang et al. 2021), Tu (Tu et al. 2022), Bao (Bao, Liu, and Yu 2022), and Miao (Miao, Hoai, and Samaras 2023). These methods have all demonstrated notable performance within the confines of within-dataset evaluations. Performance Comparison with SOTA Methods Evaluation on GazeFollow Dataset As demonstrated in Table 1, our method exhibits a substantial lead over the second-best competitor across all evaluation metrics, positioning it closer to human-level performance. Compared to the Baseline approach in Video (Chong et al. 2020), our method achieves a relative enhancement of 20.8% for the minimum L2 distance and 13.9% for the average L2 distance. Even compared with the state-of-the-art method Bao (Bao, Liu, and Yu 2022), which relies on intricate 3D reconstruction of a single input image, our approach,which solely leverages 2D image information, still attains a relative advancement of 3.3% for the average L2 distance. 
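The distance and AP numbers reported above can be computed in a few lines. The sketch below assumes the predicted gaze point is taken as the argmax of the output heatmap in normalized coordinates, which is the usual convention for these benchmarks (our assumption, not spelled out in the text).

import numpy as np
from sklearn.metrics import average_precision_score

def l2_distances(pred_heatmap, gt_points):
    """pred_heatmap: (H, W) predicted gaze heatmap; gt_points: (K, 2) ground-truth
    gaze points in normalized [0, 1] (x, y) coordinates (K annotators on GazeFollow)."""
    h, w = pred_heatmap.shape
    row, col = np.unravel_index(np.argmax(pred_heatmap), pred_heatmap.shape)
    pred = np.array([col / w, row / h])               # predicted point in normalized coordinates
    d = np.linalg.norm(gt_points - pred, axis=1)
    return d.min(), d.mean()                           # Min. Dist. and Avg. Dist.

def out_of_frame_ap(scores, labels):
    """scores: predicted probability that the target is out of frame; labels: 0/1 ground truth."""
    return average_precision_score(labels, scores)     # the 'AP' column in Table 1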
Evaluation on VideoAttentionTarget Dataset The VideoAttentionTarget dataset (Chong et al. 2020) exhibits a deficiency in terms of diverse human activities, thereby placing limitations on the efficacy of our proposed body-part & object interaction module. As depicted in Table 1, the performance of our model without the body-part & object interaction module is represented by "Ours*". In comparison to the Baseline approach in Video (Chong et al. 2020), "Ours*" still attains a relative enhancement of 20.9% for the L2 distance and 7.3% for the average precision concerning out-of-frame identification.
Figure 5: Visualizing the comparison between our soft gaze attention method (fourth column), the conventional gaze estimation approach (second column) and the Baseline (third column), in scenarios where faces have limited visibility.
Methods | Min. Dist. ↓ | Avg. Dist. ↓
Baseline | 0.077 | 0.137
Ours | 0.068 | 0.126
Table 2: Ablation study of the soft gaze attention module on the GazeFollow dataset. Baseline: soft gaze attention in the Baseline method. Ours: our proposed soft gaze attention.
Qualitative Experimental Results A qualitative comparison between the Baseline and our method is presented in Figure 4. The initial row showcases a selection of samples extracted from the GazeFollow test set, along with their corresponding true annotations. The second and third rows respectively depict the gaze target heatmaps forecasted by the Baseline and our model. Our method notably outperforms the Baseline when the actual gaze target is a diminutive interactive object enshrouded in intricate image backgrounds. This substantial improvement is attributed to the fusion of human attention and activity cues within our approach.
Ablation Study
Soft Gaze Attention As shown in Fig. 5, to evaluate the precision and resilience of our novel soft gaze attention approach (fourth column), which synergizes facial features and head keypoints, we conduct a comparative analysis with the conventional gaze estimation method (Zhang et al. 2020) (second column), as well as the soft gaze attention module within the Baseline method (third column). Both of these alternatives focus solely on extracting gaze-consistent features from the human face. In scenarios involving faces with reduced visibility, our proposed method demonstrates enhanced resilience attributed to its utilization of the spatial correlation between head keypoints and facial orientation. Besides, the quantitative comparison is shown in Tab. 2. To ensure fairness, we exclude the body-part & object interaction attention and the interaction branch from our approach. This adjustment aligns our resulting model's framework with that of the Baseline method, namely scene saliency detection guided by soft gaze attention. These outcomes underscore the exceptional accuracy and robustness of our soft gaze attention method, even when confronting challenging instances of limited facial visibility in real-world conditions.
Figure 6: Visualizing the comparison between the interaction branch guided by our proposed body-part & object interaction attention (fourth column), the variant employing full-body object interaction (third column) and another variant without the entire interaction module (second column).
Methods | Min. Dist. ↓ | Avg. Dist. ↓
W/O HOI | 0.068 | 0.126
Full-body HOI* | 0.066 | 0.124
Full-body HOI | 0.063 | 0.121
Ours* | 0.064 | 0.122
Ours | 0.061 | 0.118
Table 3: Ablation study of our interaction branch on the GazeFollow dataset. Ours: our complete model with the interaction branch guided by our proposed body-part & object interaction attention.
Full-body HOI: the variant of our interaction branch employing the full-body object interaction attention. *: the variant of our interaction attention lacking the guidance of gaze attention. W/O HOI: the variant of our method without the entire interaction module.
Interaction Branch As depicted in Fig. 6, in order to validate the effectiveness of the interaction branch, which is guided by the body-part & object interaction attention, we juxtapose our approach (fourth column) with two variants: one lacking the entire interaction module (second column), and the other utilizing the full-body object interaction attention (third column). The focal point of our body-part & object interaction attention lies in the discernment of interactions between five specific body components (the head, hands and feet) and all detected objects. This attention mechanism enables a heightened precision in identifying potential gaze targets within all interactive objects. Furthermore, we scrutinize the performance of our proposed body-part & object interaction module in comparison to a variant operating without the guidance of gaze attention. The quantitative results are shown in Tab. 3. This analysis underscores the effectiveness of infusing gaze attention into the body-part & object interaction module.
Module Visualization The visualization of various stages within our network is presented in Figure 7, encompassing elements such as the soft gaze attention map, the gaze target heatmaps derived from both the saliency branch and the interaction branch, the fusion heatmap, and the predicted gaze target. The initial two rows offer a demonstration of the adeptness of our proposed interaction branch in accurately discerning gaze targets amidst intricate image backgrounds. Conversely, the third row presents a scenario wherein the gaze target is a non-interacting object. Recognizing that instances where the true gaze target lacks interaction with the given individual are not uncommon in natural settings, the fusion of predictions from both the saliency branch and the interaction branch emerges as a strategy to yield enhanced robustness.
Figure 7: Visualization of our soft gaze attention, saliency branch prediction, interaction branch prediction, fusion heatmap, and the predicted gaze target. The third row presents a scenario wherein the gaze target is a non-interacting object.
Computational Complexity In order to analyze the computation complexity, we examine the inference speed of each module separately. For our method, we use the pre-trained lightweight body pose estimator RTMPose (Jiang et al. 2023) and object detector YOLOv3 (Redmon et al. 2016). On the other hand, competing methods introduce some other modules, e.g., face detection and depth estimation from the scene (Fang et al. 2021), body pose estimation and 3D reconstruction from the scene (Bao, Liu, and Yu 2022), and a ViT backbone (Tu et al. 2022).
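Per-module latencies of the kind reported below in Table 4 can be collected with a simple GPU timing loop. The harness below is our illustration, not the authors' script; the warm-up and explicit synchronization are the standard way to obtain stable per-forward-pass numbers on a single GPU.

import time
import torch

@torch.no_grad()
def time_module(module, example_inputs, n_warmup=10, n_runs=100):
    """Rough per-module latency measurement (milliseconds per forward pass) on a single GPU."""
    module.eval().cuda()
    inputs = [x.cuda() for x in example_inputs]
    for _ in range(n_warmup):                 # warm up kernels / cuDNN autotuning
        module(*inputs)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_runs):
        module(*inputs)
    torch.cuda.synchronize()                  # wait for all kernels before stopping the clock
    return (time.perf_counter() - start) / n_runs * 1000.0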
In order to measure their computation complexity, we also select recent high-speed implementations for them, and compared their inference speed on a single NVIDIA Titan XP GPU. The results are shown in Table 4, where our method shows its advantage in terms of inference speed. Discussion Incorporating human gaze target annotations into tasks that encompass human activities (e.g., human-object interaction, action recognition/prediction, scene understanding, etc.) proves more advantageous for investigating the connection between human attention and activity cues, compared to datasets containing solely gaze target annotations. This augmentation is anticipated to evolve into a promising and innovative research domain within the realms of computer vision and human-computer interaction. Method Input Size Time/image Tu 224 × 224 ViT(63ms) Fang 224 × 224 F(∼10ms) + D(∼13ms) + G(∼8ms) Bao 224 × 224 P(∼10ms) + 3D(∼30ms) + G(∼8ms) Ours 224 × 224 P(∼10ms) + O(∼11ms) + G(∼8ms) Table 4: Evaluation of inference speed w.r.t different modules. ViT: ViT backbone (Tu et al. 2022). F: face detection module (Deng et al. 2020). D: depth estimation module (Godard et al. 2019). G: gaze target detection backbone (our implementation). 3D: 3D reconstruction module (Sun et al. 2021). P: human pose estimation module (Jiang et al. 2023). O: object detection module (Redmon et al. 2016). Conclusion In this study, we propose a novel approach which utilizes gaze and activity cues to solve the gaze target detection task. Our strategy to integrate gaze direction and human-object interaction reflects the natural idea of combining human attention and activity. Our method achieves state-of-the-art performance on both the GazeFollow benchmark and the GazeVideoAttn benchmark. In comparison to recent methods which rely on intricate 3D reconstruction of a single input image, our approach which only leverages 2D image information still exhibits a substantial lead across all evaluation metrics, positioning it closer to human-level performance. These outcomes prove the effectiveness of our method in the gaze target detection task. Acknowledgments This work was supported by National Natural Science Foundation of China (NSFC) under Grant 62372019. References Bao, J.; Liu, B.; and Yu, J. 2022. ESCNet: Gaze Target Detection With the Understanding of 3D Scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14126–14135. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6591 Chao, Y.-W.; Liu, Y.; Liu, X.; Zeng, H.; and Deng, J. 2018. Learning to detect human-object interactions. In 2018 ieee winter conference on applications of computer vision (wacv), 381–389. IEEE. Chen, W.; Xu, H.; Zhu, C.; Liu, X.; Lu, Y.; Zheng, C.; and Kong, J. 2021. Gaze estimation via the joint modeling of multiple cues. IEEE Transactions on Circuits and Systems for Video Technology, 32(3): 1390–1402. Cheng, Y.; Zhang, X.; Lu, F.; and Sato, Y. 2020. Gaze estimation by exploring two-eye asymmetry. IEEE Transactions on Image Processing, 29: 5259–5272. Chong, E.; Wang, Y.; Ruiz, N.; and Rehg, J. M. 2020. Detecting attended visual targets in video. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5396–5406. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and FeiFei, L. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, 248–255. Ieee. Deng, J.; Guo, J.; Ververas, E.; Kotsia, I.; and Zafeiriou, S. 
2020. Retinaface: Single-shot multi-level face localisation in the wild. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5203–5212. Everingham, M.; Van Gool, L.; Williams, C. K.; Winn, J.; and Zisserman, A. 2010. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2): 303–338. Fan, L.; Chen, Y.; Wei, P.; Wang, W.; and Zhu, S.-C. 2018. Inferring shared attention in social scene videos. In Proceedings of the IEEE conference on computer vision and pattern recognition, 6460–6468. Fang, Y.; Tang, J.; Shen, W.; Shen, W.; Gu, X.; Song, L.; and Zhai, G. 2021. Dual attention guided gaze target detection in the wild. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 11390–11399. Fathi, A.; Li, Y.; and Rehg, J. M. 2012. Learning to recognize daily actions using gaze. In European Conference on Computer Vision, 314–327. Springer. Fischer, T.; Chang, H. J.; and Demiris, Y. 2018. Rt-gene: Real-time eye gaze estimation in natural environments. In Proceedings of the European conference on computer vision (ECCV), 334–352. Gao, C.; Zou, Y.; and Huang, J.-B. 2018. ican: Instancecentric attention network for human-object interaction detection. arXiv preprint arXiv:1808.10437. Gkioxari, G.; Girshick, R.; Doll´ar, P.; and He, K. 2018. Detecting and recognizing human-object interactions. In Proceedings of the IEEE conference on computer vision and pattern recognition, 8359–8367. Godard, C.; Mac Aodha, O.; Firman, M.; and Brostow, G. J. 2019. Digging into self-supervised monocular depth estimation. In Proceedings of the IEEE/CVF international conference on computer vision, 3828–3838. Gupta, S.; and Malik, J. 2015. Visual semantic role labeling. arXiv preprint arXiv:1505.04474. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. Howard, A.; Sandler, M.; Chu, G.; Chen, L.-C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. 2019. Searching for mobilenetv3. In Proceedings of the IEEE/CVF international conference on computer vision, 1314–1324. Jiang, T.; Lu, P.; Zhang, L.; Ma, N.; Han, R.; Lyu, C.; Li, Y.; and Chen, K. 2023. RTMPose: Real-Time MultiPerson Pose Estimation based on MMPose. arXiv preprint arXiv:2303.07399. Kellnhofer, P.; Recasens, A.; Stent, S.; Matusik, W.; and Torralba, A. 2019. Gaze360: Physically unconstrained gaze estimation in the wild. In Proceedings of the IEEE/CVF international conference on computer vision, 6912–6921. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Lian, D.; Yu, Z.; and Gao, S. 2018. Believe it or not, we know what you are looking at! In Asian Conference on Computer Vision, 35–50. Springer. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Doll´ar, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, 740–755. Springer. Lu, F.; Okabe, T.; Sugano, Y.; and Sato, Y. 2014a. Learning gaze biases with head motion for head pose-free gaze estimation. Image and Vision Computing, 32(3): 169–179. Lu, F.; Sugano, Y.; Okabe, T.; and Sato, Y. 2014b. Adaptive linear regression for appearance-based gaze estimation. IEEE transactions on pattern analysis and machine intelligence, 36(10): 2033–2046. Marin-Jimenez, M. J.; Kalogeiton, V.; Medina-Suarez, P.; and Zisserman, A. 2019. 
Laeo-net: revisiting people looking at each other in videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3477–3485. Marin-Jimenez, M. J.; Zisserman, A.; Eichner, M.; and Ferrari, V. 2014. Detecting people looking at each other in videos. International Journal of Computer Vision, 106(3): 282–296. Miao, Q.; Hoai, M.; and Samaras, D. 2023. Patch-level Gaze Distribution Prediction for Gaze Following. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 880–889. Qi, S.; Wang, W.; Jia, B.; Shen, J.; and Zhu, S.-C. 2018. Learning human-object interactions by graph parsing neural networks. In Proceedings of the European conference on computer vision (ECCV), 401–417. Recasens, A.; Khosla, A.; Vondrick, C.; and Torralba, A. 2015. Where are they looking? Advances in neural information processing systems, 28. Redmon, J.; Divvala, S.; Girshick, R.; and Farhadi, A. 2016. You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 779–788. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6592 Schauerte, B.; and Stiefelhagen, R. 2014. “Look at this!” learning to guide visual saliency in human-robot interaction. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, 995–1002. IEEE. Sugano, Y.; Matsushita, Y.; and Sato, Y. 2014. Learningby-synthesis for appearance-based 3d gaze estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1821–1828. Sun, J.; Xie, Y.; Chen, L.; Zhou, X.; and Bao, H. 2021. NeuralRecon: Real-time coherent 3D reconstruction from monocular video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15598– 15607. Tu, D.; Min, X.; Duan, H.; Guo, G.; Zhai, G.; and Shen, W. 2022. End-to-end human-gaze-target detection with transformers. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2192–2200. IEEE. Xiao, J.; Hays, J.; Ehinger, K. A.; Oliva, A.; and Torralba, A. 2010. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE computer society conference on computer vision and pattern recognition, 3485–3492. IEEE. Yao, B.; and Fei-Fei, L. 2010. Modeling mutual context of object and human pose in human-object interaction activities. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 17–24. IEEE. Yao, B.; and Fei-Fei, L. 2012. Recognizing human-object interactions in still images by modeling the mutual context of objects and human poses. IEEE transactions on pattern analysis and machine intelligence, 34(9): 1691–1703. Zhang, X.; Park, S.; Beeler, T.; Bradley, D.; Tang, S.; and Hilliges, O. 2020. Eth-xgaze: A large scale dataset for gaze estimation under extreme head pose and gaze variation. In European Conference on Computer Vision, 365– 381. Springer. Zhang, X.; Sugano, Y.; Fritz, M.; and Bulling, A. 2015. Appearance-based gaze estimation in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4511–4520. Zhang, X.; Sugano, Y.; Fritz, M.; and Bulling, A. 2017. It’s written all over your face: Full-face appearance-based gaze estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 51–60. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6593 | 2024 | 732 |
18,554 | PM-INR: Prior-Rich Multi-Modal Implicit Large-Scale Scene Neural Representation
Yiying Yang1, Fukun Yin2, Wen Liu3, Jiayuan Fan1*, Xin Chen3, Gang Yu3, Tao Chen2
1 Academy for Engineering and Technology, Fudan University 2 School of Information Science and Technology, Fudan University 3 Tencent PCG
{yiyingyang23, fkyin21}@m.fudan.edu.cn, [email protected], [email protected], [email protected], [email protected], [email protected]
*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Abstract
Recent advancements in implicit neural representations have contributed to high-fidelity surface reconstruction and photorealistic novel view synthesis. However, with the expansion of the scene scale, such as block or city level, existing methods will encounter challenges because traditional sampling cannot cope with the cubically growing sampling space. To alleviate the dependence on filling the sampling space, we explore using multi-modal priors to assist individual points to obtain more global semantic information and propose a prior-rich multi-modal implicit neural representation network, PM-INR, for outdoor unbounded large-scale scenes. The core of our method is the multi-modal prior extraction and cross-modal prior fusion modules. The former encodes codebooks from different modality inputs and extracts valuable priors, while the latter fuses priors to maintain view consistency and preserve unique features among multi-modal priors. Finally, feature-rich cross-modal priors are injected into the sampling regions to allow each region to perceive global information without filling the sampling space. Extensive experiments have demonstrated the effectiveness and robustness of our method for outdoor unbounded large-scale scene novel view synthesis, which outperforms state-of-the-art methods in terms of PSNR, SSIM, and LPIPS.
Figure 1: PM-INR is capable of handling outdoor unbounded large-scale scenes, and we have demonstrated this capability through experiments with scenes from the OMMO dataset (Lu et al. 2023). Impressively, PM-INR shows superior performance in various large scenes such as stone hills, memorials, buildings, etc.
1 Introduction
Implicit neural representations have shown promising performance in surface reconstruction and novel view synthesis for single objects or object-centric small-scale scenes under sparse or limited posed camera images (Barron et al. 2021; Martin-Brualla et al. 2021; Park et al. 2021) and have been widely applied in the field of virtual reality and augmented reality. However, the difficulty exacerbates at the cubic level when the sampling space increases from a small-scale scenario or object to an outdoor unbounded large-scale scene. The core of the problem is that existing implicit neural representation networks only model the scene by sampling points according to the ray direction from the entire scene space. Hence, Neural Radiance Fields (NeRF) methods designed for small-scale scenes (Mildenhall et al. 2021; Verbin et al. 2022) struggle to fill the sampling space for large-scale scenes and will synthesize rough geometry and blurry images.
Fortunately, some methods have also noticed this problem and sample small regions rather than individual sample points to alleviate the exploding sampling space to some extent (Barron et al. 2021, 2022; Ding et al. 2023). Moreover, compared to sampling individual points, sampling a small region allows a region of space to be compactly featured, which helps improve NeRF's (Mildenhall et al. 2021) ability to represent fine details. However, for block- or city-level scenes, view synthesis quality still degrades as the camera moves far from the center of the scene. Meanwhile, inspired by codebook-assisted vision tasks, which learn a representative codebook to denote valuable prototypes and have been applied to image segmentation (You et al. 2022; Rahebi 2022; Zhou et al. 2022; Yin and Zhou 2020; Ye et al. 2022a; Yin et al. 2023b; Wu et al. 2023), image synthesis (Esser, Rombach, and Ommer 2021; Zhang et al. 2021; Esser et al. 2021; Yin et al. 2023a), and small-scale scene implicit representation (Yin et al. 2022; Shen, Ma, and Wang 2022; Yang et al. 2023; Ye et al. 2022b), rich global prior knowledge extracted from cross-modal codebooks coupled with local sampling regions seems able to cope with outdoor large-scale scene implicit representations. For locally sampled NeRF, prior knowledge can offer valuable global insights, which are lacking in ray-sampling based networks and are necessary for scene understanding and reconstruction. Therefore, equipping each region with rich priors is an unexplored and promising approach for implicit large-scale scene neural representation.
Figure 2: Multi-modal prior extraction and fusion module. Multi-modal priors are extracted from three parallel modules benefiting from valuable codebooks obtained from pre-trained models, as shown on the left. Feature-rich priors are fused to ensure cross-modal scene consistency and preserve each modality-specific feature, as shown on the right.
In this paper, we propose PM-INR: a Prior-rich Multi-modal Implicit Neural Representation that aims to extract and fuse prior knowledge across multiple modalities to facilitate the implicit neural representation of large-scale scenes. To achieve this, we first extract various priors from codebooks obtained from different modal inputs, including: an Image Prior, extracted from the image codebook encoded by a Vector Quantised-Variational AutoEncoder (VQVAE) (Van Den Oord, Vinyals et al. 2017), which contains valuable global semantic and appearance information for each scene and unique surface texture patterns for each training view; a Prompt Prior, where, with the help of the pre-trained Contrastive Language-Image Pre-Training (CLIP) (Radford et al.
2021) model, the text prompts of each training view are converted into a more accessible format to form a text codebook, and then extract prototypes rich in scene layout and positional relationships, which are high-level properties that are not easy to see just from visualizing data; Geometry (3D) Prior, benefiting from Multi-view Stereo (MVS) (Seitz et al. 2006) methods and pre-trained MinkowskiEngine (Choy, Gwak, and Savarese 2019) convolutions, we encode the geometric codebook from reconstructed sparse point clouds and then filter out geometric priors with scene structure and topology properties. To reduce the distance between different modalThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6595 ities while maintaining cross-modal scene consistency, we propose a multi-modal prior fusion module, which fuses priors from different modalities into a feature-rich cross-modal prior. Some of the fused prior prototypes are shared by all modalities while others are unique, where the former can maintain scene consistency across modalities, and the latter provides additional information to enhance feature representation from different modalities. Finally, cross-modal priors are injected into sampling regions to perceive global semantic information and cope with the exploding sampling space. Extensive experiments show the effectiveness of multimodal prior in large-scale implicit neural representation, which outperforms state-of-the-art method Mip-NeRF 360 (Barron et al. 2022) by more than 17% on each evaluation metric. We summarize the contributions as follows: • We propose an effective implicit neural representation pipeline to cope with the cubically growing sampling space of outdoor unbounded large-scale scenes by extracting rich priors from multi-modal inputs and equipping sampling regions; • A multi-modal prior fusion module is proposed to ensure scene cross-modal consistency while enriching regional feature representations; • Extensive experiments demonstrate that our PM-INR outperforms state-of-the-art methods, including robustness to large-scale outdoor scene representation and the capability to synthesize more photo-realistic novel views. Our code and models will be available. 2 Related Work Large-scale Neural Scene Representation. Large-scale scene representation is a crucial aspect of Implicit Neural Representation (INR) research, involving capturing and modeling complex scenes that encompass extensive spatial extents, such as urban environments, landscapes, or virtual worlds. NeRF++ (Zhang et al. 2020) handles the unbounded scenes by separately modeling foreground and background, and Mip-NeRF 360 (Barron et al. 2022) uses a non-linear scene parameterization to model large-scale unbounded scenes. Block-NeRF (Tancik et al. 2022) and Mega-NeRF (Turki, Ramanan, and Satyanarayanan 2022) decompose a scene into several partitions spatially and train model for each partition in parallel. BungeeNeRF (Xiangli et al. 2022) introduces an approach that progressively adds residual blocks to the network representation. MegaNeRF (Turki, Ramanan, and Satyanarayanan 2022) and BungeeNeRF (Xiangli et al. 2022) address the challenges of modeling and rendering large-scale scenes, spanning from buildings to multiple city blocks and utilizing thousands of images captured from drones. However, when the camera is posed far away from the scene’s center, such as in the urban environment, the visual synthesis quality will dramatically degrade. 
Consequently, existing methods face constraints in applying neural implicit reconstruction to expansive, outdoor, and unbounded scenes. To overcome these challenges, a robust and scalable solution is urgently in demand. Prior Information in Implicit Neural Representation. Recently, the application of prior information in implicit neural representation has attracted significant attention as researchers aim to improve the performance in 3D scene reconstructions. Prior information helps the implicit neural representation to leverage domain knowledge, such as geometry, materials, lighting, and semantics. Pixel-NeRF (Yu et al. 2021) addresses the challenge of learning neural radiance fields from limited input images by incorporating prior information from a pre-trained 2D convolutional neural network(CNN). Mixture of volumetric primitives (MVP) (Lombardi et al. 2021) designs an unsupervised method for learning implicit shape representations using a MVP as prior information, enabling high-fidelity 3D reconstructions without explicit 3D supervision. DeRF (Rebain et al. 2021) designs a method to decompose a scene into multiple depth layers, each represented by its own neural radiance field. FastNeRF (Garbin et al. 2021) addresses the issue of sampling artifacts in the original NeRF model by incorporating prior information about the scene’s density distribution. Point-NeRF (Xu et al. 2022) introduces surface point clouds as priors to guide point sampling and achieve scene generalization. Recently, CoCo-INR (Yin et al. 2022) learns a representative codebook from the large-scale 2D dataset ImageNet (Deng et al. 2009) with limited global features of the scene, which leads to inconsistencies between different views and rendering artifacts. These works have demonstrated the potential of incorporating prior information into implicit neural representation. However, the utility of prior knowledge in the large-scale implicit neural representation remains unexplored, especially multi-modal prior knowledge. Inspired by the above work, our method attempts to explore the effect of multi-modal prior knowledge in the largescale implicit neural representation for the first time, and proposes to enhance the understanding of scenes by extracting and fusing multiple modal priors. 3 Methodology In this paper, we aim to develop a prior-rich multi-modal implicit neural representation to improve the capacity to represent the outdoor unbounded large-scale scenes. We first design a multi-modal codebook and prior extraction module to establish a codebook and extract prior knowledge from multiple modalities (c.f. Sec.3.1 and Fig.2). Then we propose a multi-modal prior fusion module to incorporate heterogeneous prior knowledge and develop cross-modal prior features with rich scene-level semantics, contextual knowledge as well as the geometric properties of the scene (c.f. Sec.3.2 and Fig.2). Next, we inject the feature-rich crossmodal prior into each sampling region, which will help better model the outdoor unbounded large-scale scenes (c.f. Sec.3.3 and Fig.2). Finally, the implementation will be introduced (c.f. Sec.3.4). 3.1 Multi-modal Codebook and Prior Extraction Image Codebook. 
To make full use of the rich texture features provided by images, which are difficult to obtain in point-wise sampled implicit neural representations but necessary for synthesizing photo-realistic images, especially The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6596 for large-scale scenes, we derive the image codebook from training views by Vector-Quantized Variational AutoEncoder (VQVAE) (Van Den Oord, Vinyals et al. 2017). We denote our image codebook as Bg = {p0, p1, ..., pNg} ∈ RNg×cg , where Ng is the number of prototype vectors, cg is the dimension of each vector, and pi is each embedding vector. Given an input image I ∈RH×W ×3, VQVAE leverages an encoder Eg to obtain a set of continuous feature ˆz = Eg(I) ∈Rh×w×cg, where h and w are the height and width of the feature map. Then a quantization Qg is performed onto its closest codebook entry pi for the continuous feature map ˆz to obtain the discrete representation zq : zq = Qg(ˆz) := argmin pk∈E ∥ˆzij −pk∥2 , (1) where ˆzij ∈Rcg. Next, the reconstructed image ˆI is given by the decoder Dg: ˆI = Dg(zq) = Dg(Qg(Eg(I))) (2) Since our VQVAE can be optimized by reducing the loss between the original image I and the reconstructed image ˆI: L = ∥I −ˆI∥2 + ∥sg(zq) −ˆz∥2 2 + ∥sg(ˆz) −zq∥2 2 (3) where sg(.) denotes the stop-gradient operation. Text Codebook. Text prompts contain rich humanannotated global descriptions consistent with human perceptual and visual systems, and connecting prompts and visual domains enables implicit neural representation models to capture a more comprehensive understanding of scene context. Contrastive Language-Image Pre-training (CLIP) (Radford et al. 2021) is a neural network that efficiently learns visual concepts from natural language supervision. CLIP is designed to leverage large datasets of images and text pairs to train a model in a self-supervised manner, learning to break the gap between visual and textual modalities. We leverage the pre-trained CLIP encoder model Et to produce text embeddings as our text codebook Bt given the input text prompts L. Bt = Et(L) ∈RNc×ct (4) where Nc is the number of text prompts and ct is is the dimension of each text embedding. 3D Codebook. Point clouds are data structures that encapsulate multiple geometric information, including the exact coordinates of points within a three-dimensional space, which help inform and enhance the process of implicit neural representation. The Minkowski Engine (Choy, Gwak, and Savarese 2019) is an auto-differentiation library for sparse tensors, which is proposed to provide an efficient and flexible framework to represent and process point clouds, which enables the model to obtain the geometric information of the scene. We leverage the Minkowski sparse tensor to build our 3D codebook by aggregating the geometric patterns and relationships present in the point clouds. Given the original point cloud P = {d0, d1, ..., dn} where di represents the i-th point. We utilize the Minkowski Engine to transform this cloud into a sparse tensor representation S = (s1, f1), (s1, f1), ..., (sn, fn), composed of tuples (si, fi), where si and fi denote spatial position and feature vectors, respectively. To capture and quantize unique geometric patterns within this data, we construct a 3D codebook Bd = {m1, m2, ..., mND} ∈RND×cd, constructed by encoding S, where mi represents each item in the codebook Bd. 
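Equations (1)–(3) amount to a standard vector-quantization step followed by the usual VQ-VAE losses, and the same nearest-entry lookup is what maps point features to the 3D codebook. A minimal sketch follows; the tensor layout and the straight-through note are our additions, while the three loss terms mirror Eq. 3 with equal weights.

import torch
import torch.nn.functional as F

def quantize(z_e, codebook):
    """Map each continuous feature to its nearest codebook prototype (Eq. 1).
    z_e: (B, h, w, c) encoder output; codebook: (N, c) prototype vectors."""
    flat = z_e.reshape(-1, z_e.shape[-1])             # (B*h*w, c)
    d = torch.cdist(flat, codebook)                   # pairwise L2 distances to all prototypes
    idx = d.argmin(dim=1)                             # index of the closest codebook entry
    z_q = codebook[idx].view_as(z_e)
    return z_q, idx

def vqvae_loss(x, x_rec, z_e, z_q):
    """Reconstruction + codebook + commitment terms, matching the equal-weight sum in Eq. 3."""
    rec = F.mse_loss(x_rec, x)                        # ||I - I_hat||^2
    codebook_term = F.mse_loss(z_q, z_e.detach())     # ||sg(z_e_hat) - z_q||^2
    commit_term = F.mse_loss(z_e, z_q.detach())       # ||sg(z_q) - z_e_hat||^2
    return rec + codebook_term + commit_term

# Straight-through estimator so gradients flow back through the quantization to the encoder:
# z_q_st = z_e + (z_q - z_e).detach()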
Each feature vector fi is then mapped to its closest entry in the codebook, ensuring a compact and efficient representation of the original geometric information. Prior Extraction. Since the multi-modal codebook might contain a large number of redundant or unrelated prototypes for implicit neural representations, we designed a prior extraction module to query valuable prototypes for scene representation and novel view synthesis from each modality codebook. Given a pre-trained codebook B (could be Bg, Bt or Bd) and learnable query embedding vectors q = {q1, q2, ..., qM}, each embedding vector qi queries the valuable prior Z0 information from the given codebook via a cross-attention mechanism: Q ←fQ(q), K ←fK(B), V ←fV(B) Z0 ←Cross-Attention(Q, K, V) = Softmax(QKT √dk ) (5) where fQ, fK, and fV are the query, key, and value linear projections, respectively. Then we apply a self-attention module on the initial prior Z0 to improve prior feature representations further and obtain the final prior Z: Z ←Self-Attention(fq(Z0), fk(Z0), fv(Z0)) (6) where fq, fk, and fv are the query, key, and value linear projections, respectively. We apply the above method to the multi-modal codebooks, image codebook BG, text codebook Bt, and 3D codebook Bd, and obtain the corresponding priors, image prior ZG, text prior ZT , and 3D prior ZD, which contain respective modality-specific scenerelevant prior information. 3.2 Multi-modal Prior Fusion The extracted priors of each modality contain rich features, some of which are shared, describing the same object from different modalities, and others are unique, providing additional supplements from their respective modalities. However, this will also bring certain hidden dangers, such as scene inconsistency that may be caused by different modal prior features when describing the same object. Therefore, finding common ground while reserving differences and combining multi-modal priors to create a single, unified representation is crucial, which can ensure scene consistency across modals and help leverage the complementary and supplementary information present in each modality to improve the overall performance of scene reconstruction. Considering the feature distance between image prior ZG, text prior ZT , and 3D prior ZD, we first employ linear layers to obtain embedding priors ˆ ZG, ˆ ZT , and ˆ ZD respectively, and concatenate the elements along the zeroth dimension to construct a cohesive and unified prior ˆU: ˆU ←concat( ˆ ZG, ˆ ZT , ˆ ZD)) (7) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6597 We then define a learnable query embedding Uq = {q1, q2, ..., qM} to query scene-consistent cross-modal priors U by effectively capturing multimodal relations and dependencies: U ←Cross-Attention(fQ(Uq), fK( ˆU), fV( ˆU)) (8) where fQ, fK, and fV are the query, key, and value linear projections, respectively. Through the aforementioned multi-modal prior fusion module, we will arrive at a series of representative cross-modal scene priors U, which preserve consistent features from multi-modal priors to synthesize view-consistent images while retaining the unique features of each modality to enrich image details and ensure realism. 3.3 Cross-modal Feature Injection and Implicit Neural Representations In this subsection, we inject feature-rich cross-modal priors into sampled regions for outdoor unbounded large-scale scene implicit neural representations. We follow the sampling strategy of Mip-NeRF 360 (Barron et al. 
2022), which uses a non-linear scene parameterization to model largescale unbounded scenes and samples a region with more local features in the sampling space instead of sampling a single individual point. Although sampling regions can obtain more local features, global features are still lacking, especially for outdoor large-scale scenes. Our unified prior can overcome this challenge with rich global features from different modalities. As shown in Fig.2, we inject the feature-rich multi-modal prior U derived in Sec.3.2 into the sampling domain via a cross-attention module. Gradually propagating cross-modal representative prototypes to each sampled region results in rich global scene features and representative representations for each sampling region. With the rich global and representative features, our method can better understand and represent outdoor unbounded large-scale scenes and synthesize high-fidelity and more detailed novel views. Our backbone follows Mip-NeRF 360. We use the same sampling method and loss function for a fair comparison but inject our cross-modal prior information into each sampling region and apply it to outdoor large-scale scene datasets. 3.4 Implementation Details We train the VQ-VAE network for 20k iterations with a batch size of 16 accumulated over 21 batches, which needs about 1 day on two A100 GPUs. The dimensions of image codebook Bg, text codebook Bt, and 3D codebook Bd are 256, 512, and 16, respectively. In our multi-modal codebook and prior extraction module, the number of learnable query embeddings is 128, and each embedding has a dimension qi ∈R64. Hence, the size of all priors is 128×64. We apply one cross-attention mechanism in the prior extraction module, followed by one self-attention block. In the multi-modal prior feature fusion module, we apply linear layers to three modalities prior to initially reduce the feature distribution gap and then concatenate the processed prior embedding into a unified multi-modal prior embedding. We apply one cross-attention to the multi-modal prior embedding to derive the feature-rich cross-modal prior embedding. GT Ours Ref-NeRF Mega-NeRF Mip-NeRF NeRF++ NeRF Mip-NeRF 360 Figure 3: Qualitative results sampled from the OMMO dataset. For each scene, we present a visualization of a synthetic novel view and zoom in on two regions. In the multi-modal feature injection module, we perform one cross-attention operation in the Mip-NeRF 360’s module of predicting density. All attention modules are transformerbased with a multi-head attention mechanism, Layer Normalization, Feed-Forward Network, and GELU activation. For a fair comparison, we adopt the optimizing strategies of Mip-NeRF 360, 250k iterations of optimization with a batch size of 211, using Adam (Kingma and Ba 2014) optimizer with a learning rate that is annealed log-linearly from 2 × 10−3 to 2 × 10−5 with a warm-up phase of 512 iterations, and gradient clipping to a norm of 10−3. Our method is built with Pytorch framework. Each scene is trained on four Nvidia A100 GPU devices for around one day. 4 Experiments 4.1 Experimental Setup Dataset. We evaluate our PM-INR on two datasets, namely the OMMO (Lu et al. 2023) and BlendedMVS (Yao et al. 2020) dataset. The OMMO dataset contains a total of 33 unbounded large-scale scenes with prompt annotations, tags, and 14k calibrated images. 
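As a compact reference for the method described in Secs. 3.1–3.3, the query-based prior extraction (Eqs. 5–6), the fusion into a cross-modal prior (Eqs. 7–8), and the injection into sampling regions can be sketched as follows. This is our reading of those equations, not the released code: the use of nn.MultiheadAttention, the per-modality projection into the shared 64-d query space, the head count, the region feature dimension, and the final concatenation of original and prior-enriched region features are assumptions; the 128 learnable queries of dimension 64 follow Sec. 3.4.

import torch
import torch.nn as nn

class PriorExtractor(nn.Module):
    """Learnable queries attend to a (frozen) modality codebook to pull out scene priors (Eqs. 5-6)."""
    def __init__(self, code_dim, n_queries=128, d=64, heads=4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, d))
        self.proj = nn.Linear(code_dim, d)                       # bring codebook entries to dimension d
        self.cross = nn.MultiheadAttention(d, heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(d, heads, batch_first=True)

    def forward(self, codebook):                                 # codebook: (N, code_dim)
        kv = self.proj(codebook).unsqueeze(0)                    # (1, N, d)
        q = self.queries.unsqueeze(0)                            # (1, M, d)
        z0, _ = self.cross(q, kv, kv)                            # Eq. 5: queries gather prior prototypes
        z, _ = self.self_attn(z0, z0, z0)                        # Eq. 6: refine the extracted prior
        return z                                                 # (1, M, d) modality-specific prior

class PriorFusionAndInjection(nn.Module):
    """Fuse image/text/3D priors into one cross-modal prior (Eqs. 7-8) and let sampled regions attend to it."""
    def __init__(self, d=64, heads=4, region_dim=252):
        super().__init__()
        self.fuse_queries = nn.Parameter(torch.randn(128, d))
        self.fuse = nn.MultiheadAttention(d, heads, batch_first=True)
        self.region_to_d = nn.Linear(region_dim, d)
        self.inject = nn.MultiheadAttention(d, heads, batch_first=True)
        self.back = nn.Linear(d, region_dim)

    def forward(self, z_img, z_txt, z_3d, region_feats):         # region_feats: (1, R, region_dim)
        u_hat = torch.cat([z_img, z_txt, z_3d], dim=1)           # Eq. 7: concatenate priors along the token axis
        u, _ = self.fuse(self.fuse_queries.unsqueeze(0), u_hat, u_hat)   # Eq. 8: cross-modal prior U
        q = self.region_to_d(region_feats)
        injected, _ = self.inject(q, u, u)                       # sampling regions query the cross-modal prior
        return torch.cat([region_feats, self.back(injected)], dim=-1)   # feature-rich region representation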
The BlendedMVS contains 17k pose images, covering 113 scenes, which are divided into The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6598 the general scene part and the large scene part according to the scene scale. We conduct experiments on all scenes of the OMMO dataset and the five outdoor unbounded large-scale scenes of the BlendedMVS dataset due to the time cost constraints caused by per-scene optimization. Baselines. We choose the recent state-of-the-art implicit large-scale scene neural representation methods, including NeRF (Mildenhall et al. 2021), NeRF++ (Zhang et al. 2020), Mip-NeRF (Barron et al. 2021), Mip-NeRF 360 (Barron et al. 2022), Mega-NeRF (Turki, Ramanan, and Satyanarayanan 2022), Ref-NeRF (Verbin et al. 2022) as the baselines. NeRF (Mildenhall et al. 2021) designs the first continuous MLP-based neural network to represent the scene, NeRF++ (Zhang et al. 2020) separately models the foreground and background neural representations to handle the unbounded scenes, Mip-NeRF (Barron et al. 2021) extends NeRF to represent the scene at a continuously-valued scale and improves NeRF’s ability to represent fine details, Mip-NeRF 360 (Barron et al. 2022) uses a non-linear scene parameterization to model large-scale unbounded scenes, Mega-NeRF (Turki, Ramanan, and Satyanarayanan 2022) decomposes a scene into several spatially to train the model in parallel, Ref-NeRF (Verbin et al. 2022) improves the quality of appearance and normal in synthesized views of the scene by reparameterizing NeRF’s directional MLP. Evaluation Metrics To evaluate the performance of each method in large-scale implicit neural representation, we use three standard metrics: Peak Signal Noise Ratio (PSNR), Structural Similarity (SSIM) (Wang et al. 2004) and the VGG implementation of Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al. 2018) on novel view synthesis. Higher PSNR and SSIM mean better performance, while a lower LPIPS means better. 4.2 Performance Comparison OMMO dataset. Quantitative results on the OMMO dataset are reported in Tab. 1, which demonstrates that our method outperforms others on the average and most scenes in terms of PSNR, SSIM, and LPIPS. Among them, MipNeRF 360 and Mega NeRF are both aimed at unbounded scenes, and our average gain of the three evaluation metrics is 17% and 24% higher than the two of them, respectively, implying that our method is more effective for large-scale scenes. At the same time, our LPIPS, a metric that correlates more strongly with human-perceived distance, is over 43% higher than all baselines, demonstrating that our method can generate more photo-realistic novel views. The qualitative results on the OMMO dataset are drawn in Fig.1. Our method can reconstruct finer texture in unbounded large-scale scenes, and some representative details are selected and zoomed in in Fig.1. It is worth noting that our method expresses better robustness for outdoor unbounded large-scale scene representation. BlendedMVS dataset. To further demonstrate the performance of our method for large scale scenes, we conduct the experiments and make comparisons with Mip-NeRF 360 (Barron et al. 2022), which is aimed at unbounded scenes and overperforms other methods on the OMMO dataset. 
Quantitative results between Mip-NeRF 360 (BarMethod PSNR↑ SSIM↑ LPIPS↓ NeRF 18.72 0.48 0.600 NeRF++ 21.45 0.58 0.538 Mip-NeRF 18.39 0.50 0.623 Mip-NeRF 360 23.10 0.67 0.419 Mega-NeRF 21.63 0.62 0.508 Ref-NeRF 360 21.28 0.55 0.574 PM-INR (Ours) 27.10 0.81 0.239 Table 1: Quantitative comparison results of our model PMINR with baselines on the OMMO dataset. ↑means the higher, the better, ↓means the lower, the better. ron et al. 2022) and our method on the BlendedMVS dataset are reported in Tab. 2. We conduct experiments on five outdoor unbounded large scale scenes of the BlendedMVS dataset. Tab. 2 demonstrates that our method also outperforms others on the average and most scenes in terms of PSNR, SSIM, and LPIPS. We compare our method on the BlendedMVS dataset with the baseline Mip-NeRF 360, and our average gain of the three evaluation metrics is 30 percent higher than Mip-NeRF 360 on the outdoor unbounded large scale scenes of the BlendedMVS dataset. Method PSNR↑ SSIM↑ LPIPS↓ Mip-NeRF 360 23.10 0.67 0.419 PM-INR (Ours) 27.10 0.81 0.239 Table 2: Quantitative comparison results of our model PMINR with baseline Mip-NeRF 360 on the five outdoor unbounded large-scale scenes of the BlendedMVS dataset. ↑ means the higher, the better, ↓means the lower, the better. 4.3 Ablation Studies and Analysis Effectiveness of each modal prior. To verify the effectiveness of each modal prior, we conduct controlled experiments on different modalities and their pairwise combinations, including the image prior (G), text prior (T), 3D prior (D), image prior plus text prior (G+T), image prior plus 3D prior (G+D), and text prior plus 3D prior (T+D). The comparison result in Fig.5 shows that every modal prior helps contribute to scene reconstruction while removing any single modal prior degrades the performance across all metrics. It’s also observed that without any modal prior, the performance degrades considerably further compared to the model equipped with any modal prior. Effectiveness of multi-modal prior fusion module. To demonstrate the effectiveness of the multi-modal prior fusion module, we conduct ablation studies on different fusion strategies: removing the module of cross-attention mechanism and directly injecting the initial multi-modal prior embedding, denoted as w/o cross; injecting the multiple modalities into the sampling region serially, denoted as serial; directly adding the multiple modalities to develop the crossmodal prior, denoted as plus; concatenating multi-model The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6599 Pm INR (Ours) GT Figure 4: Qualitative results sampled from the large-scale part of the BlendedMVS dataset. For each scene, we present a visualization of a synthetic novel view and zoom in on two regions. prior in the one dimension rather than the the zeroth dimension, denoted as hstack. The comparison results are shown in Tab. 3, which implies that our fusion strategy is the most advanced among them. Method PSNR↑ SSIM↑ LPIPS↓ w/o cross 31.12 0.88 0.221 serial 29.36 0.89 0.212 plus 28.34 0.87 0.237 hstack 28.64 0.87 0.230 PM-INR (Ours) 31.338 0.917 0.157 Table 3: Experiment results about the effectiveness of multimodal prior fusion module.“w/o” cross represents removing the cross-attention mechanism of our method, “serial” represents serially injecting the multiple modalities into our network, “plus” represents directly plusing the multiple modalities prior, and “hstack” represents concatenating the multiple modalities prior in the dimension. 
↑means the higher, the better, ↓means the lower, the better. 5 Conclusions and Limitations In this paper, we propose PM-INR, a priori-rich multi-modal implicit neural representation network for outdoor unbounded large-scale scenes. Benefiting from our advanced multi-modal prior extraction and fusion modules, representative feature-rich priors are propagated to each sampling region. Therefore, without relying entirely on exploring sampling regions through individual sampling points, our PMINR network can obtain global-level cross-modal semantics, which is lacking in current methods to cope with the exploding sampling space. Expensive experiments have demonstrated that our method surpasses the state-of-the-art method Mip-NeRF 360 by over 17% in various evaluation metrics. Meanwhile, abundant ablation experiments prove each multi-modal prior knowledge, and our fusion method can help the network generate more robust scene representations and synthesize more photo-realistic novel views. With the help of multi-modal priors, our method can synthesize realistic novel views for outdoor unbounded large3D prior w/o prior image prior text prior Our multi-modal prior image + text prior image + 3D prior text + 3D prior GT 28.03 / 0.87 / 0.234 (PSNR↑/ SSIM↑/ LPIPS↓) 27.75 / 0.83 / 0.305 28.52 / 0.88 / 0.256 29.79 / 0.89 / 0.267 30.65 / 0.90 / 0.169 29.34 / 0.85 / 0.259 26.82 / 0.83 / 0.294 31.34 / 0.92 / 0.157 - / - / Figure 5: Qualitative visualization results for the effectiveness of each modal prior (zoom-in for the best of views) on the OMMO dataset. Obviously, using no prior or just a single modality prior will produce blurry images, while extracting cross-modal priors from both modalities can produce relatively realistic images. scale scenes. However, our method still does not have any scene editing capabilities. We will explore the capability of scene editing via editing priors, which is considered a fascinating, valuable and promising endeavor. Acknowledgements This work is supported by National Natural Science Foundation of China (No. 62101137, 62071127, and U1909207), Shanghai Natural Science Foundation (No. 23ZR1402900), and Zhejiang Lab Project (No. 2021KH0AB05). References Barron, J. T.; Mildenhall, B.; Tancik, M.; Hedman, P.; Martin-Brualla, R.; and Srinivasan, P. P. 2021. Mip-nerf: A multiscale representation for anti-aliasing neural radiance The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6600 fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5855–5864. Barron, J. T.; Mildenhall, B.; Verbin, D.; Srinivasan, P. P.; and Hedman, P. 2022. Mip-nerf 360: Unbounded antialiased neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5470–5479. Choy, C.; Gwak, J.; and Savarese, S. 2019. 4d spatiotemporal convnets: Minkowski convolutional neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 3075–3084. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and FeiFei, L. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, 248–255. Ieee. Ding, Y.; Yin, F.; Fan, J.; Li, H.; Chen, X.; Liu, W.; Lu, C.; YU, G.; and Chen, T. 2023. PDF: Point Diffusion Implicit Function for Large-scale Scene Neural Representation. arXiv:2311.01773. Esser, P.; Rombach, R.; Blattmann, A.; and Ommer, B. 2021. 
Imagebart: Bidirectional context with multinomial diffusion for autoregressive image synthesis. Advances in Neural Information Processing Systems, 34: 3518–3532. Esser, P.; Rombach, R.; and Ommer, B. 2021. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 12873–12883. Garbin, S. J.; Kowalski, M.; Johnson, M.; Shotton, J.; and Valentin, J. 2021. Fastnerf: High-fidelity neural rendering at 200fps. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 14346–14355. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Lombardi, S.; Simon, T.; Schwartz, G.; Zollhoefer, M.; Sheikh, Y.; and Saragih, J. 2021. Mixture of volumetric primitives for efficient neural rendering. ACM Transactions on Graphics (ToG), 40(4): 1–13. Lu, C.; Yin, F.; Chen, X.; Chen, T.; Yu, G.; and Fan, J. 2023. A Large-Scale Outdoor Multi-modal Dataset and Benchmark for Novel View Synthesis and Implicit Scene Reconstruction. arXiv preprint arXiv:2301.06782. Martin-Brualla, R.; Radwan, N.; Sajjadi, M. S.; Barron, J. T.; Dosovitskiy, A.; and Duckworth, D. 2021. Nerf in the wild: Neural radiance fields for unconstrained photo collections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7210–7219. Mildenhall, B.; Srinivasan, P. P.; Tancik, M.; Barron, J. T.; Ramamoorthi, R.; and Ng, R. 2021. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1): 99–106. Park, K.; Sinha, U.; Barron, J. T.; Bouaziz, S.; Goldman, D. B.; Seitz, S. M.; and Martin-Brualla, R. 2021. Nerfies: Deformable neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5865–5874. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, 8748–8763. PMLR. Rahebi, J. 2022. Vector quantization using whale optimization algorithm for digital image compression. Multimedia Tools and Applications, 81(14): 20077–20103. Rebain, D.; Jiang, W.; Yazdani, S.; Li, K.; Yi, K. M.; and Tagliasacchi, A. 2021. Derf: Decomposed radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14153–14161. Seitz, S. M.; Curless, B.; Diebel, J.; Scharstein, D.; and Szeliski, R. 2006. A comparison and evaluation of multiview stereo reconstruction algorithms. In 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR’06), volume 1, 519–528. IEEE. Shen, Y.; Ma, W.-C.; and Wang, S. 2022. SGAM: Building a Virtual 3D World through Simultaneous Generation and Mapping. Advances in Neural Information Processing Systems, 35: 22090–22102. Tancik, M.; Casser, V.; Yan, X.; Pradhan, S.; Mildenhall, B.; Srinivasan, P. P.; Barron, J. T.; and Kretzschmar, H. 2022. Block-nerf: Scalable large scene neural view synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8248–8258. Turki, H.; Ramanan, D.; and Satyanarayanan, M. 2022. Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs. arXiv:2112.10703. Van Den Oord, A.; Vinyals, O.; et al. 2017. Neural discrete representation learning. Advances in neural information processing systems, 30. 
Verbin, D.; Hedman, P.; Mildenhall, B.; Zickler, T.; Barron, J. T.; and Srinivasan, P. P. 2022. Ref-nerf: Structured view-dependent appearance for neural radiance fields. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5481–5490. IEEE. Wang, Z.; Bovik, A. C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4): 600–612. Wu, K.; Fan, J.; Ye, P.; and Zhu, M. 2023. Hyperspectral Image Classification Using Spectral–Spatial Token Enhanced Transformer With Hash-Based Positional Embedding. IEEE Transactions on Geoscience and Remote Sensing, 61: 1–16. Xiangli, Y.; Xu, L.; Pan, X.; Zhao, N.; Rao, A.; Theobalt, C.; Dai, B.; and Lin, D. 2022. Bungeenerf: Progressive neural radiance field for extreme multi-scale scene rendering. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXII, 106–122. Springer. Xu, Q.; Xu, Z.; Philip, J.; Bi, S.; Shu, Z.; Sunkavalli, K.; and Neumann, U. 2022. Point-nerf: Point-based neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5438–5448. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6601 Yang, Y.; Liu, W.; Yin, F.; Chen, X.; Yu, G.; Fan, J.; and Chen, T. 2023. VQ-NeRF: Vector Quantization Enhances Implicit Neural Representations. arXiv preprint arXiv:2310.14487. Yao, Y.; Luo, Z.; Li, S.; Zhang, J.; Ren, Y.; Zhou, L.; Fang, T.; and Quan, L. 2020. Blendedmvs: A large-scale dataset for generalized multi-view stereo networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1790–1799. Ye, P.; Li, B.; Chen, T.; Fan, J.; Mei, Z.; Lin, C.; Zuo, C.; Chi, Q.; and Ouyang, W. 2022a. Efficient joint-dimensional search with solution space regularization for real-time semantic segmentation. International Journal of Computer Vision, 130(11): 2674–2694. Ye, P.; Li, B.; Li, Y.; Chen, T.; Fan, J.; and Ouyang, W. 2022b. b-darts: Beta-decay regularization for differentiable architecture search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10874–10883. Yin, F.; Chen, X.; Zhang, C.; Jiang, B.; Zhao, Z.; Fan, J.; Yu, G.; Li, T.; and Chen, T. 2023a. ShapeGPT: 3D Shape Generation with A Unified Multi-modal Language Model. arXiv preprint arXiv:2311.17618. Yin, F.; Huang, Z.; Chen, T.; Luo, G.; Yu, G.; and Fu, B. 2023b. Dcnet: Large-scale point cloud semantic segmentation with discriminative and efficient feature aggregation. IEEE Transactions on Circuits and Systems for Video Technology. Yin, F.; Liu, W.; Huang, Z.; Cheng, P.; Chen, T.; and YU, G. 2022. Coordinates Are NOT Lonely–Codebook Prior Helps Implicit Neural 3D Representations. arXiv preprint arXiv:2210.11170. Yin, F.; and Zhou, S. 2020. Accurate estimation of body height from a single depth image via a four-stage developing network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8267–8276. You, C.; Zhao, R.; Liu, F.; Dong, S.; Chinchali, S.; Topcu, U.; Staib, L.; and Duncan, J. 2022. Class-aware adversarial transformers for medical image segmentation. Advances in Neural Information Processing Systems, 35: 29582–29596. Yu, A.; Ye, V.; Tancik, M.; and Kanazawa, A. 2021. pixelnerf: Neural radiance fields from one or few images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4578–4587. 
Zhang, K.; Riegler, G.; Snavely, N.; and Koltun, V. 2020. Nerf++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492. Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. arXiv:1801.03924. Zhang, Z.; Ma, J.; Zhou, C.; Men, R.; Li, Z.; Ding, M.; Tang, J.; Zhou, J.; and Yang, H. 2021. UFC-BERT: Unifying multi-modal controls for conditional image synthesis. Advances in Neural Information Processing Systems, 34: 27196–27208. Zhou, S.; Chan, K.; Li, C.; and Loy, C. C. 2022. Towards robust blind face restoration with codebook lookup transformer. Advances in Neural Information Processing Systems, 35: 30599–30611. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6602 | 2024 | 733 |
18,555 | FontDiffuser: One-Shot Font Generation via Denoising Diffusion with Multi-Scale Content Aggregation and Style Contrastive Learning Zhenhua Yang1, Dezhi Peng1, Yuxin Kong1, Yuyi Zhang1, Cong Yao3, Lianwen Jin12* 1South China University of Technology 2SCUT-Zhuhai Institute of Modern Industrial Innovation 3Alibaba Group {eezhyang, pengdezhi000, kongyxscut, yuyizhang.scut, yaocong2010}@gmail.com, [email protected] Abstract Automatic font generation is an imitation task, which aims to create a font library that mimics the style of reference images while preserving the content from source images. Although existing font generation methods have achieved satisfactory performance, they still struggle with complex characters and large style variations. To address these issues, we propose FontDiffuser, a diffusion-based image-to-image one-shot font generation method, which innovatively models the font imitation task as a noise-to-denoise paradigm. In our method, we introduce a Multi-scale Content Aggregation (MCA) block, which effectively combines global and local content cues across different scales, leading to enhanced preservation of intricate strokes of complex characters. Moreover, to better manage the large variations in style transfer, we propose a Style Contrastive Refinement (SCR) module, which is a novel structure for style representation learning. It utilizes a style extractor to disentangle styles from images, subsequently supervising the diffusion model via a meticulously designed style contrastive loss. Extensive experiments demonstrate FontDiffuser’s state-of-the-art performance in generating diverse characters and styles. It consistently excels on complex characters and large style changes compared to previous methods. The code is available at https://github.com/yeungchenwa/FontDiffuser. Introduction Automatic font generation aims to create a new font library in the required style given the reference images, which is referred to as an imitation task. Font generation has significant applications, including new font creation, ancient character restoration, and data augmentation for optical character recognition. Therefore, it has significant commercial and cultural values. However, this imitation process is both costly and labor-intensive, particularly for languages with a large number of glyphs, such as Chinese (> 90,000), Japanese (> 50,000), and Korean (> 11000). Existing automatic methods primarily disentangle the representations of style and content, then integrate them to output the results. Although these methods have achieved remarkable success in font generation, they still suffer from complex character generation and large style variation transfer, leading *Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. (a) Characters generated by our method source ref (1) (2) (3) (4) Ours (b) Complex characters source ref (1) (2) (3) (4) Ours (c) Large style variations Figure 1: (a) Characters of different complexity generated by our method. (b)(c) Results of different methods on complex characters and large style variations. ‘ref’ represents the reference image. (1)-(4) represent the results of DG-Font (Xie et al. 2021), MX-Font (Park et al. 2021b), CG-GAN (Kong et al. 2022), and CF-Font (Wang et al. 2023) respectively. Red boxes highlight the failures of other methods. to severe stroke missing, artifacts, blurriness, layout errors, and style inconsistency as shown in Figure 1(b)(c). 
Retrospectively, most font generation approaches (Park et al. 2021a,b; Xie et al. 2021; Tang et al. 2022; Liu et al. 2022; Kong et al. 2022; Wang et al. 2023) adopt a GANbased (Goodfellow et al. 2014) framework which potentially suffers from unstable training due to their adversarial training nature. Moreover, most of these methods perceive content information through only single-scale highlevel features, omitting the fine-grained details that are crucial to preserving the source content, especially for complex characters. There are also a number of methods (Cha et al. 2020; Park et al. 2021a,b; Liu et al. 2022; Kong et al. 2022; He et al. 2022) that employ prior knowledge to facilitate font generation, such as stroke or component composition of characters; however, this information is costly to annotate for complex characters. Furthermore, the target style is commonly represented by a simple classifier or a discriminator in previous literature, which struggles to learn the appropriThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6603 ate style and hinders the style transfer with large variations. In this paper, we propose FontDiffuser, a diffusion-based image-to-image one-shot font generation method, which models the font generation learning as a noise-to-denoise paradigm and is capable to generate unseen characters and styles. In our method, we innovatively introduce a Multiscale Content Aggregation (MCA) block, which leverages global and local content features across various scales. This block effectively preserves intricate details from the source image of complex characters, by capitalizing on the fact that large-scale features contain lots of fine-grained information (strokes or components), whereas small-scale features primarily encapsulate global information (layout). Moreover, we introduce a novel style representation learning strategy, by applying a Style Contrastive Refinement (SCR) module to enhance the generator’s capability in mimicking styles, especially for large variations between the source image and the reference image. This module utilizes a style extractor to disentangle style from a font and then uses a style contrastive loss to provide feedback to the diffusion model. SCR acts as a supervisor and encourages our diffusion model to identify the differences among various samples, which are with different styles but the same character. Additionally, we design a Reference-Structure Interaction (RSI) block to explicitly learn structural deformations (e.g., font size) by utilizing a cross-attention interaction with the reference features. To verify the effectiveness of generating characters of diverse complexity, we categorize the characters into three levels of complexity (easy, medium, and hard) according to their number of strokes, and test our method on each level separately. Extensive experiments demonstrate that our proposed FontDiffuser outperforms state-of-the-art font generation methods on characters of three levels of complexity. Notably, as shown in Figure 1(a), FontDiffuser consistently excels both in the generation of complex characters and large style variations. Furthermore, our method can be applied to the cross-lingual generation tasks, showcasing the crossdomain generalization ability of FontDiffuser. We summarize our main contributions as follows. 
• We propose FontDiffuser, a new diffusion-based imageto-image one-shot font generation framework that achieves state-of-the-art performance in generating complex characters and handling large style variations. • To enhance the preservation of intricate strokes of complex characters, we propose a Multi-scale Content Aggregation (MCA) block, leveraging the global and local features across different scales from the content encoder. • We propose a novel style representation learning strategy and elaborate a Style Contrastive Refinement (SCR) module that supervises the diffusion model using a style contrastive loss, enabling effective handling of large style variations. • FontDiffuser demonstrates superior performance over existing methods in generating characters across easy, medium, and hard complexity levels, showcasing strong generalization capability across unseen characters and styles. Furthermore, our method can be extended to the cross-lingual generation, such as Chinese to Korean. Related Work Few-Shot Font Generation Early font generation methods (Chang et al. 2018; Lyu et al. 2017; Tian 2017; Jiang et al. 2017; Sun, Zhang, and Yang 2018) consider the font generation task as an image-toimage translation problem, but they cannot generate unseen style fonts. To address this, SA-VAE (Sun et al. 2017) and EMD (Zhang, Zhang, and Cai 2018) generate unseen fonts by disentangling style and content representations. To enable the generator to capture local style characteristics, some methods (Wu, Yang, and Hsu 2020; Huang et al. 2020; Cha et al. 2020; Park et al. 2021a,b; Liu et al. 2022; Kong et al. 2022) utilize prior knowledge, such as stroke and component. For instance, LF-Font (Park et al. 2021a), MX-Font (Park et al. 2021b) and CG-GAN (Kong et al. 2022) employ a component-based learning strategy to enhance the capability of local style representation learning. XMP-Font (Liu et al. 2022) utilizes a pre-training strategy to facilitate the disentanglement of style and content. Diff-Font (He et al. 2022) adopts stroke information to support the sampling but fails to generate unseen characters. However, the annotation of strokes and components is costly for complex characters. Some prior-free methods (Xie et al. 2021; Tang et al. 2022; Wang et al. 2023) have been proposed. DG-Font (Xie et al. 2021) achieves promising performance in an unsupervised manner. Fs-Font (Tang et al. 2022) aims to discover the spatial correspondence between content images and style images to learn the local style details, but its reference selection strategy is sensitive to the quality of results. CF-Font (Wang et al. 2023) fuses various content features of different fonts and introduces an iterative style-vector refinement strategy. However, these methods still struggle with generating complex characters and handling large style variations. Diffusion Model Recently, diffusion models have achieved rapid development in vision generation tasks. Several prominent conditional diffusion models have been developed (Nichol et al. 2021; Ramesh et al. 2022; Saharia et al. 2022; Rombach et al. 2022; Zhang and Agrawala 2023; Ruiz et al. 2023). LDM (Rombach et al. 2022) proposes a cross-attention mechanism to incorporate the condition into the UNet and treats the diffusion process in the latent space. In text image generation, (Luhman and Luhman 2020; Gui et al. 2023; Nikolaidou et al. 2023) apply diffusion models to generate handwritten characters and demonstrate their promising effects. CTIG-DM (Zhu et al. 
2023) devises image, text, and style as conditions and introduces four text image generation modes in a diffusion model. In contrast to general image generation, font generation requires distinct stroke details and intricate structural features at a fine-grained level. This motivates us to harness multi-scale content features and propose an innovative style contrastive learning strategy.

Methodology
As shown in Figure 2, our proposed method consists of a Conditional Diffusion model and a Style Contrastive Refinement module. In the Conditional Diffusion model, given a source image x_c and a reference image x_s, our goal is to train a conditional diffusion model where the final output image should not only have the same content as x_c, but should also be consistent with the reference style. The Style Contrastive Refinement module aims to disentangle different styles from a group of images and offer guidance to the diffusion model via a style contrastive loss.

Figure 2: Overview of our proposed method. (a) The Conditional Diffusion model is a UNet-based network composed of a content encoder E_c and a style encoder E_s. The reference image x_s is passed through a style encoder E_s and a content encoder E_c respectively, obtaining a style embedding e_s and structure maps F_s. The source image is encoded by a content encoder E_c. To obtain multi-scale features F_c, we derive output from the different layers of E_c and inject each of them through our proposed MCA block. The RSI block is employed to conduct spatial deformation from the reference structural features F_s. (b) The Style Contrastive Refinement module disentangles different styles from images and provides guidance to the diffusion model.

Conditional Diffusion for Font Generation
Based on DDPM (Ho, Jain, and Abbeel 2020), the general idea of our diffusion-based image-to-image font generation method is to design a forward process that incrementally adds noise to the target distribution x_0 ~ q(x_0), while the denoising process involves learning the reverse mapping. The denoising process aims to transform a noise x_T ~ N(0, I) to the target distribution in T steps. Specifically, the forward process of FontDiffuser is a Markov chain and the noise adding process can be summarized as follows:

x_t = \sqrt{\bar{\alpha}_t} x_0 + \sqrt{1 - \bar{\alpha}_t} \, \epsilon,    (1)

where t ~ [0, T] and \epsilon is the added Gaussian noise. \alpha_t = 1 - \beta_t, \bar{\alpha}_t = \prod_{i=0}^{t} (1 - \beta_i), and \beta_i \in (0, 1) is a fixed variance hyper-parameter. During the reverse process, the reverse mapping can be approximated by a model that predicts the noise \epsilon_\theta(x_t, t, x_c, x_s) and then obtains x_{t-1} as follows:

x_{t-1} = \frac{1}{\sqrt{\alpha_t}} \left( x_t - \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}} \epsilon_\theta(x_t, t, x_c, x_s) \right) + \sigma_t z,    (2)

where \sigma_t is a hyper-parameter and the noise z ~ N(0, I). We predict the noise \epsilon_\theta(x_t, t, x_c, x_s) using our conditional diffusion model. Specifically, to enhance the preservation of complex characters, we employ a Multi-scale Content Aggregation (MCA) block to inject global and local content cues into the UNet of our model. Moreover, a Reference-Structure Interaction (RSI) block is employed to facilitate structural deformation from the reference features.
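A minimal PyTorch sketch of the noising of Eq. (1) and one reverse step of Eq. (2) is given below; epsilon_model stands for the conditional UNet ε_θ(x_t, t, x_c, x_s) and is a placeholder, while the linear β schedule and the choice σ_t = √β_t are illustrative assumptions rather than the exact training settings.

import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # assumed linear beta schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def q_sample(x0, t, noise):
    # Forward noising, Eq. (1): x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

@torch.no_grad()
def p_sample_step(epsilon_model, x_t, t, x_c, x_s):
    # One reverse step, Eq. (2), for an integer timestep t.
    eps = epsilon_model(x_t, t, x_c, x_s)     # conditional noise prediction
    a_t, a_bar = alphas[t], alpha_bars[t]
    mean = (x_t - (1.0 - a_t) / (1.0 - a_bar).sqrt() * eps) / a_t.sqrt()
    sigma_t = betas[t].sqrt()                 # a common (assumed) choice of sigma_t
    z = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + sigma_t * z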
Multi-Scale Content Aggregation (MCA)
Generating complex characters has always been challenging, and many existing methods only rely on a single-scale content feature, disregarding intricate details such as strokes and components. As shown in Figure 3, large-scale features retain lots of detailed information while small-scale features do not.

Figure 3: Content features in various blocks (source and blocks 1–3, at resolutions H×W, H/2×W/2, H/4×W/4, and H/8×W/8).

Therefore, we employ a Multi-scale Content Aggregation (MCA) block, injecting global and local content features across different scales into the UNet of our diffusion model. Specifically, the source image x_c is first embedded by the content encoder E_c, obtaining multi-scale content features F_c = {f_c^1, f_c^2, f_c^3} from different layers. Together with the style embedding e_s encoded by the style encoder E_s, each content feature f_c^i is injected into the UNet through three MCA modules respectively. As illustrated in Figure 4, the content feature f_c^i is concatenated with the previous UNet block feature r_i, resulting in a channel-informative feature I_c. To enhance the capability of adaptive selective channel fusion, we apply channel attention (Hu, Shen, and Sun 2018) on I_c, in which an average pooling, two 1×1 convolutions and an activation function are employed. The attention results in a global channel-aware vector W_c, which is used to weight the channel-informative feature I_c via channel-wise multiplication. Then, after a residual connection, we employ a 1×1 convolution to reduce the channel number of I'_c, obtaining the output I_co. Lastly, we apply a cross-attention module to insert the style embedding e_s, in which e_s is employed as Key and Value, while I_co is employed as Query.

Figure 4: Multi-scale Content Aggregation.

Reference-Structure Interaction (RSI)
There exist structural differences (e.g., font size) between the source image and the target image. To address this issue, we propose a Reference-Structure Interaction (RSI) block that employs deformable convolutional networks (DCN) (Dai et al. 2017) to conduct structural deformation on the skip connection of the UNet. In contrast to (Xie et al. 2021), our conditional model directly extracts structural information from the reference features to obtain the deformation offset δ_offset for DCN. Specifically, the reference image x_s is first passed through the content encoder E_c to obtain the structure maps F_s = {f_s^1, f_s^2}, and each f_s^i is used as the input to the two RSI modules, respectively. Due to the misalignment in spatial position between the UNet feature and the reference feature, we introduce cross-attention, instead of a CNN, to enable long-distance interactions for obtaining the offset δ_offset. The interaction process is summarized in Equation 3, where r_i is the UNet feature; the essential element of this process is leveraging the UNet feature r_i and the structure map f_s^i in a softmax operation, which primarily calculates the region of interest relative to each query position:

S_s \in R^{C_f^i \times H_i W_i} = flatten(f_s^i),  S_r \in R^{C_r^i \times H_i W_i} = flatten(r_i),
Q = \Phi_q(S_s),  K = \Phi_k(S_r),  V = \Phi_v(S_r),
F_{attn} = softmax(Q K^T / \sqrt{d_k}) V,
\delta_{offset} = FFN(F_{attn}),
I_R = DCN(r_i, \delta_{offset}),    (3)

where \Phi_q, \Phi_k, \Phi_v are linear projections, and FFN denotes the feed-forward network. I_R is the output of RSI.
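A rough PyTorch sketch of the MCA block of Figure 4 follows: the UNet feature r_i and the content feature f_c^i are concatenated, reweighted by squeeze-and-excitation-style channel attention, reduced by a 1×1 convolution, and finally queried against the style embedding e_s via cross-attention. The channel sizes, reduction ratio and residual placement are assumptions, not the released implementation.

import torch
import torch.nn as nn

class MCABlock(nn.Module):
    # Inputs: r (B, unet_ch, H, W) UNet feature, f_c (B, content_ch, H, W)
    # content feature of the matching scale, e_s (B, style_dim) style embedding.
    def __init__(self, unet_ch, content_ch, style_dim, heads=4):
        super().__init__()
        cat_ch = unet_ch + content_ch
        self.channel_attn = nn.Sequential(          # SE-style channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(cat_ch, cat_ch // 4, 1), nn.SiLU(),
            nn.Conv2d(cat_ch // 4, cat_ch, 1), nn.Sigmoid())
        self.reduce = nn.Conv2d(cat_ch, unet_ch, 1)  # 1x1 conv channel reduction
        self.to_kv = nn.Linear(style_dim, unet_ch)   # project style embedding
        self.cross_attn = nn.MultiheadAttention(unet_ch, heads, batch_first=True)

    def forward(self, r, f_c, e_s):
        ic = torch.cat([r, f_c], dim=1)              # channel-informative feature I_c
        ic = ic + ic * self.channel_attn(ic)         # channel-wise reweighting + residual
        ico = self.reduce(ic)                        # output I_co
        b, c, h, w = ico.shape
        q = ico.flatten(2).transpose(1, 2)           # (B, HW, C), used as Query
        kv = self.to_kv(e_s).unsqueeze(1)            # (B, 1, C), used as Key and Value
        out, _ = self.cross_attn(q, kv, kv)
        return (q + out).transpose(1, 2).reshape(b, c, h, w)

# e.g. MCABlock(unet_ch=128, content_ch=64, style_dim=256)(r, f_c, e_s)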
Style Contrastive Refinement
One purpose of font generation is to achieve the intended style-imitating effect, regardless of the style variation between the source and the reference. A novel strategy is to find a suitable style representation and further provide feedback to our model. Therefore, we propose a Style Contrastive Refinement (SCR) module, a font style representation learning module that disentangles style from a group of sample images and incorporates a style contrastive loss to supervise our diffusion model, ensuring the generated style aligns with the target at the global and local level. The architecture of SCR is shown on the right of Figure 2 and consists of a style extractor. Inspired by (Zhang et al. 2022), a VGG network is employed to embed the font image in the extractor. To capture both global and local style characteristics effectively, we select N layers of feature maps F_v = {f_v^0, f_v^1, ..., f_v^N} from the VGG network and use them as input to a style projector. The projector applies an average pooling and a maximum pooling to extract different global channel features separately, and then concatenates both of them channel-wise, resulting in the features F_g = {f_g^0, f_g^1, ..., f_g^N}. Finally, after several linear projections, style vectors V = {v^0, v^1, ..., v^N} are obtained. The style vectors V can provide supervising signals to the diffusion model and guide it to imitate style. Therefore, we adopt a contrastive learning strategy, in which we leverage a pre-trained SCR and incorporate a style contrastive loss L_sc to supervise whether the style of the generated sample x_0 is consistent with the target style and distinguishable from negative styles. To ensure content-irrelevance and style-relevance, we choose the target image as the positive sample and select K negative samples that have different styles but the same content, rather than directly treating the remaining samples other than the chosen target as negatives. The supervision of SCR can be summarized as follows:

V_0 = Extrac(x_0),  V_p = Extrac(x_p),  V_n = Extrac(x_n),
L_{sc} = - \sum_{l=0}^{N-1} \log \frac{\exp(v_0^l \cdot v_p^l / \tau)}{\exp(v_0^l \cdot v_p^l / \tau) + \sum_{i=1}^{K} \exp(v_0^l \cdot v_{n_i}^l / \tau)},    (4)

where Extrac represents the style extractor, K is the number of negative samples, V_0, V_p and V_n denote the style vectors of the generated, positive and negative samples respectively, and v_0^l, v_p^l, v_{n_i}^l denote the l-th layer vectors of the generated, positive and negative samples respectively. τ is a temperature hyper-parameter set to 0.07. The pre-training details of SCR are listed in the Appendix. To enhance the robustness of style imitation, we apply an augmentation strategy on the positive target sample, which includes random cropping and random resizing.

Training Objective
Our training adopts a coarse-to-fine two-phase strategy.
Phase 1 During phase 1, we optimize FontDiffuser mainly with the standard MSE diffusion loss, excluding the SCR module. This ensures that our generator acquires the fundamental capability for font reconstruction:

L^1_{total} = L_{MSE} + \lambda^1_{cp} L_{cp} + \lambda^1_{off} L_{offset},    (5)

in which

L_{MSE} = \| \epsilon - \epsilon_\theta(x_t, t, x_c, x_s) \|^2,    (6)
L_{cp} = \sum_{l=1}^{L} \| VGG_l(x_0) - VGG_l(x_{target}) \|,    (7)
L_{offset} = mean(\| \delta_{offset} \|),    (8)

where L^1_{total} represents the total loss in phase 1.
VGGl(·) is the layer feature encoded by VGG and L is the number of the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6606 Model Easy Medium Hard Average User (%) FID SSIM LPIPS L1 FID SSIM LPIPS L1 FID SSIM LPIPS L1 FID SSIM LPIPS L1 SFUC FUNIT 11.34 .4342 .1985 .3888 11.02 .3516 .2144 .4474 17.91 .3271 .2374 .4648 9.67 .3755 .2146 .4305 1.30 LF-Font 18.01 .4914 .1770 .3325 25.72 .3833 .2048 .4184 39.68 .3444 .2343 .4511 22.74 .4127 .2024 .3955 0.37 DG-Font 20.48 .4613 .2111 .3610 24.44 .3831 .2354 .4146 29.60 .3444 .2614 .4430 21.16 .4016 .2333 .4024 5.28 MX-Font 12.43 .4693 .1688 .3511 11.29 .3790 .1784 .4184 14.11 .3338 .1964 .4546 10.22 .4002 .1796 .4033 11.39 Fs-Font 27.20 .4282 .2258 .3869 27.94 .3425 .2394 .4536 36.90 .3091 .2621 .4800 25.99 .3651 .2404 .4361 0.83 CG-GAN 8.23 .4692 .1816 .3582 9.11 .3755 .1952 .4280 14.19 .3396 .2173 .4570 7.79 .4004 .1961 .4100 32.87 CF-Font 14.08 .4924 .2015 .3224 13.96 .4006 .2347 .3929 16.84 .3662 .2634 .4197 12.13 .4253 .2301 .3741 1.11 Ours 8.51 .5370 .1316 .2901 9.46 .4462 .1411 .3623 11.15 .4033 .1562 .3986 7.70 .4682 .1416 .3454 46.85 UFUC FUNIT 14.55 .4507 .1839 .3720 16.09 .3495 .2045 .4484 25.97 .2963 .2403 .4918 13.14 .3655 .2095 .4374 2.03 LF-Font 23.92 .4949 .1687 .3301 38.61 .3746 .1997 .4257 55.44 .3071 .2370 .4833 32.89 .3922 .2018 .4130 .10 DG-Font 25.61 .4788 .1957 .3450 27.08 .3803 .2172 .4165 32.73 .3254 .2421 .4561 22.71 .3948 .2183 .4059 8.99 MX-Font 14.92 .4808 .1552 .3408 14.09 .3786 .1625 .4195 16.40 .3189 .1783 .4689 10.77 .3928 .1653 .4098 14.11 Fs-Font 42.78 .4524 .2100 .3646 43.69 .3495 .2282 .4448 49.33 .2973 .2565 .4869 38.77 .3664 .2315 .4321 1.26 CG-GAN 14.14 .4887 .1677 .3369 14.41 .3793 .1831 .4173 26.89 .3114 .2120 .4710 12.93 .3931 .1876 .4084 20.97 CF-Font 22.09 .4841 .1901 .3322 24.58 .3897 .2180 .4046 25.83 .3434 .2461 .4420 19.69 .4057 .2180 .3929 5.41 Ours 12.90 .5080 .1418 .3175 11.63 .4117 .1468 .3926 13.12 .3420 .1600 .4508 8.54 .4206 .1496 .3870 47.15 Table 1: Quantitative Results on SFUC and UFUC. ‘User’ denotes the user study. ‘Average’ and the user study is evaluated on all characters of three levels of complexity. The bold indicates the state-of-the-art and the underline indicates the second best. chosen layers. Lcp is used to penalize the content misalignment between generated VGG features of x0 and the corresponding xtarget target features. The offset loss Loffset is used to constrain the offset in our RSI module and mean is the averaging process. λ1 cp = 0.01 and λ1 off = 0.5. Phase 2 In phase 2, we implement the SCR module, incorporating the style contrastive loss, to provide style imitation guidance to the diffusion model at the global and local levels. Thus our conditional diffusion model in phase 2 is optimized by: L2 total = LMSE + λ2 cpLcp + λ2 offLoffset + λ2 scLsc, (9) where L2 total represents the total loss in phase 2. The hyperparameters λ2 cp = 0.01, λ2 off = 0.5 and λ2 sc = 0.01. Experiment Datasets and Evaluation Metrics We collect a Chinese font dataset of 424 fonts. We randomly select 400 fonts (referred to as “seen fonts”) with 800 Chinese characters (referred to as “seen characters”) as training set. We evaluate methods on two test sets: one includes 100 randomly selected seen fonts, which contains 272 characters that were not seen during training (referred to as “SFUC”), and the other test set consists of 24 unseen fonts and 300 unseen characters (referred to as “UFUC”). The categorization details of three levels of complexity are in Appendix. 
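For reference, the style contrastive term of Eq. (4) that supervises phase 2 can be sketched as follows; the per-level vector shapes are illustrative assumptions, and τ follows the value of 0.07 given above.

import torch
import torch.nn.functional as F

def style_contrastive_loss(v_gen, v_pos, v_neg, tau=0.07):
    # v_gen, v_pos: lists of N tensors, each (B, D), one per selected VGG level
    # v_neg: list of N tensors, each (B, K, D) holding K negative styles
    loss = 0.0
    for vg, vp, vn in zip(v_gen, v_pos, v_neg):
        pos = (vg * vp).sum(dim=-1, keepdim=True) / tau         # (B, 1) positive logits
        neg = torch.einsum('bd,bkd->bk', vg, vn) / tau          # (B, K) negative logits
        logits = torch.cat([pos, neg], dim=1)
        target = torch.zeros(vg.size(0), dtype=torch.long, device=vg.device)
        loss = loss + F.cross_entropy(logits, target)           # -log softmax of the positive
    return loss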
Moreover, we additionally conduct a comparison on 24 unseen fonts and 800 seen characters (referred to as “UFSC”). For quantitative evaluation, we adopt FID, SSIM, LPIPS, and L1 loss metrics. Pixel-level metrics SSIM and L1 loss are employed to measure the per-pixel consistency. LPIPS (Zhang et al. 2018) and FID (Heusel et al. 2017) are perceptual metrics, which are closer to human visual perception. Furthermore, we conduct a user study to assess the subjective quality of images. In total, 25 participants are asked for the evaluation and the results are presented in Table 1. Implementation Details We train FontDiffuser using AdamW optimizer with β1 = 0.9 and β2 = 0.999. The image size is set as 96. Moreover, following (Ho and Salimans 2022), we simply drop out the source image and the reference image with the probability of 0.1. In phase 1, we train the model with a batch size of 16 and a total step of 440000. The learning rate is set as 1e −4 with linear schedule. In phase 2, the learning rate is set as 1e −5 and is fixed as constant. We train with a batch size of 16, a total step of 30000, and negative samples of 16. The experiments are conducted on a single RTX 3090 GPU. During sampling, we adopt a classifier-free guidance strategy (Ho and Salimans 2022) to amplify the effect of the conditions xc and xs. We set the unconditional content image and unconditional style image to pixel 255 as ∅, and our sampling strategy can be formulated as: ϵθ(xt, t, xc, xs) = (1 −s)ϵθ(xt, t, ∅, ∅) + sϵθ(xt, t, xc, xs), (10) where s is the guidance scale and is set as 7.5 in the experiments. To speed up sampling, we use the DPM-Solver++ sampler (Lu et al. 2022) with only 20 inference steps. Comparison with State-of-the-Art Method We compare our method with seven state-of-the-art methods: one image-to-image translation method (FUNIT (Liu et al. 2019)) and six Chinese font generation methods (LFFont (Park et al. 2021a), MX-Font (Park et al. 2021b), DGFont (Xie et al. 2021), CG-GAN (Kong et al. 2022), Fs-Font (Tang et al. 2022), and CF-Font (Wang et al. 2023)). Additionally, we compare with Diff-Font (He et al. 2022) on Unseen Font Seen Character (UFSC). For a fair comparison, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6607 Model FID SSIM LPIPS L1 LF-Font 18.67 .4823 .1688 .3400 DG-Font 19.81 .4532 .2047 .3646 MX-Font 9.32 .4605 .1603 .3571 Fs-Font 31.40 .4270 .2160 .3855 CG-GAN 7.72 .4655 .1721 .3544 CF-Font 14.20 .4396 .2139 .3713 Diff-Font 12.08 .4192 .2022 .3877 Ours 7.67 .4942 .1426 .3279 Table 2: Quantitative Results on UFSC. source ref Figure 5: Cross-lingual generation (Chinese to Korean). we use the font of Song as the source, and all methods are trained based on their official codes. Quantitative Comparison The quantitative results are presented in Table 1. FontDiffuser achieves the best performance across all matrices at average level, showing a significant gap compared to other methods on both SFUC and UFUC. It indicates that FontDiffuser can generate fonts that are visually closer to human perception. At easy and medium levels, though FID in SFUC ranks second, FontDiffuser outperforms other methods in the remaining metrics, particularly the perceptual matrix LPIPS. At hard level, our method performs the best in SFUC and achieves the best FID and LPIPS scores in UFUC. It should be noted that SSIM and L1 loss are pixel-level metrics, which may not directly reflect the overall performance. 
For instance, an impressive visual result may not perfectly match the target pixel to pixel. The hard-level results demonstrate the advantage of FontDiffuser in generating complex characters. Furthermore, as shown in Table 2, FontDiffuser achieves state-ofthe-art performance on UFSC. Notably, Diff-Font (He et al. 2022) is only capable of generating seen characters, and our method also outperforms it by a significant margin. Qualitative Comparison In Figure 7, we provide visualizations of the results on SFUC and UFUC, which intuitively reflect the visual effects of different methods. FontDiffuser consistently generates high-quality results and performs better in terms of content preservation, style consistency, and structural correctness compared with other stateof-the-art methods. Particularly, our method demonstrates significant superiority in generating complex characters and handling large variations in style transfer, while other methods still exhibit issues such as missing strokes, artifacts, Module FID SSIM LPIPS L1 M R S 8.12 .4112 .1526 .3955 7.84 .4114 .1511 .3954 8.44 .4137 .1506 .3925 8.54 .4206 .1496 .3870 Table 3: Effectiveness of different modules. M, R, and S represent MCA, RSI, and SCR respectively. The first row represents the baseline. source ref. baseline +M +MR +MRS target Figure 6: Visualization of different modules. M, R, and S represent MCA, RSI, and SCR respectively. Red boxes represent the missing strokes while green represents the corresponding improvements. Blue denotes structural promotion. blurriness, layout errors, and style inconsistency. We also present some cross-lingual generation samples (Chinese to Korean) in Figure 5, which are generated by our method. It demonstrates that FontDiffuser is flexible in generating for other languages and exhibits cross-domain capability though our model is trained by Chinese dataset. Ablation Studies In this section, we conduct several ablation studies to analyze the performance of our proposed modules and strategies. The experiments are tested on the unseen font unseen characters (UFUC) at average level. Effectiveness of Different Modules We separate the proposed MCA, RSI, and SCR, and progressively add them to the baseline. The baseline concatenates the content image with xt as the input of UNet. Table 3 shows that the quantitative results of these three modules are improved in terms of SSIM, LPIPS, and L1 loss, except for FID. Additionally, these modules also contribute to visual enhancements, as shown in Figure 6. For example, in the first row of Figure 6, the issue of missing strokes in the baseline is mitigated by the incorporation of the MCA module. Effectiveness of Augmentation Strategy in SCR We investigate the advantage of the proposed augmentation strategy in SCR, in which FontDiffuser is trained with and without augmentation strategy during the training phase 2. As shown in Table 4, it clearly demonstrates that the augmentation strategy boosts the generation performance in terms of SSIM, LPIPS, and L1 loss. Comparison between Cross-attention Interaction and CNN in RSI We conduct a comparative analysis between The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6608 SFUC UFUC Source FUNIT DG-Font CG-GAN Fs-Font CF-Font Ours Target LF-Font MX-Font Source FUNIT DG-Font CG-GAN Fs-Font CF-Font Ours Target LF-Font MX-Font Figure 7: Qualitative comparison on SFUC and UFUC. Red boxes highlight the failures of other methods. 
Method FID SSIM LPIPS L1 w/o augmentation 8.18 .4172 .1504 .3900 augmentation 8.54 .4206 .1496 .3870 Table 4: Effectiveness of augmentation strategy in SCR. Method FID SSIM LPIPS L1 CNN 9.17 .4130 .1537 .3932 cross-attention 8.54 .4206 .1496 .3870 Table 5: Comparison between cross-attention and CNN. cross-attention interaction and CNN interaction in RSI. The results in Table 5 show that the cross-attention interaction in RSI outperforms the CNN-based in all matrices, showcasing the superiority of our proposed method. Visualization of SCR Contrastive Score We provide visualization of the SCR contrastive score in Figure 8, which demonstrates that SCR can effectively distinguish the target from a group of samples, even though some of them exhibit similar styles. By combining SCR with style contrastive loss, we observe that SCR can refine the generated style through a learning-by-contrast manner. Figure 8: Visualization of SCR contrastive score. The left column represents the generated samples. Each row corresponds to the chosen samples. Red boxes highlight the target while blues highlight samples similar to the generated style. Conclusion In this paper, we propose a diffusion-based image-to-image font generation method, called FontDiffuser, which excels in generating complex characters and handling large variations in style transfer. Specifically, we propose the MCA block to inject multi-scale content features into our diffusion model, enhancing the preservation of complex characters. Moreover, we propose a novel style representation learning strategy, which implements the SCR module and uses a style contrastive loss to supervise our diffusion model. Additionally, an RSI block is employed to facilitate structural deformation using reference features. Extensive experiments demonstrate that FontDiffuser outperforms the state-of-theart method on characters of three levels of complexity. Furthermore, FontDiffuser demonstrates its applicability to the cross-lingual font generation task (e.g., Chinese to Korean), highlighting its promising cross-domain capability. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6609 Acknowledgements This research is supported in part by National Key Research and Development Program of China (2022YFC3301703) and Alibaba Innovative Research Foundation (no. 20210975). We thank the support from the AlibabaSouth China University of Technology Joint Graduate Education Program. References Cha, J.; Chun, S.; Lee, G.; Lee, B.; Kim, S.; and Lee, H. 2020. Few-shot compositional font generation with dual memory. In Proc. ECCV, 735–751. Springer. Chang, J.; Gu, Y.; Zhang, Y.; Wang, Y.-F.; and Innovation, C. 2018. Chinese Handwriting Imitation with Hierarchical Generative Adversarial Network. In Proc. BMVC, 290. Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; and Wei, Y. 2017. Deformable convolutional networks. In Proc. ICCV, 764–773. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. Proc. NeurIPS, 27. Gui, D.; Chen, K.; Ding, H.; and Huo, Q. 2023. Zero-shot Generation of Training Data with Denoising Diffusion Probabilistic Model for Handwritten Chinese Character Recognition. arXiv preprint arXiv:2305.15660. He, H.; Chen, X.; Wang, C.; Liu, J.; Du, B.; Tao, D.; and Qiao, Y. 2022. Diff-Font: Diffusion Model for Robust OneShot Font Generation. arXiv preprint arXiv:2212.05895. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; and Hochreiter, S. 2017. 
GANs trained by a two time-scale update rule converge to a local nash equilibrium. Proc. NeurIPS, 30. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Proc. NeurIPS, 33: 6840–6851. Ho, J.; and Salimans, T. 2022. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598. Hu, J.; Shen, L.; and Sun, G. 2018. Squeeze-and-excitation networks. In Proc. CVPR, 7132–7141. Huang, Y.; He, M.; Jin, L.; and Wang, Y. 2020. RD-GAN: Few/zero-shot chinese character style transfer via radical decomposition and rendering. In Proc. ECCV, 156–172. Springer. Jiang, Y.; Lian, Z.; Tang, Y.; and Xiao, J. 2017. DCFont: an end-to-end deep Chinese font generation system. In SIGGRAPH Asia Technical Briefs, 1–4. Kong, Y.; Luo, C.; Ma, W.; Zhu, Q.; Zhu, S.; Yuan, N.; and Jin, L. 2022. Look closer to supervise better: one-shot font generation via component-based discriminator. In Proc. CVPR, 13482–13491. Liu, M.-Y.; Huang, X.; Mallya, A.; Karras, T.; Aila, T.; Lehtinen, J.; and Kautz, J. 2019. Few-shot unsupervised image-to-image translation. In Proc. ICCV, 10551–10560. Liu, W.; Liu, F.; Ding, F.; He, Q.; and Yi, Z. 2022. XMPFont: self-supervised cross-modality pre-training for fewshot font generation. In Proc. CVPR, 7905–7914. Lu, C.; Zhou, Y.; Bao, F.; Chen, J.; Li, C.; and Zhu, J. 2022. DPM-Solver++: Fast solver for guided sampling of diffusion probabilistic models. arXiv preprint arXiv:2211.01095. Luhman, T.; and Luhman, E. 2020. Diffusion models for handwriting generation. arXiv preprint arXiv:2011.06704. Lyu, P.; Bai, X.; Yao, C.; Zhu, Z.; Huang, T.; and Liu, W. 2017. Auto-encoder guided GAN for Chinese calligraphy synthesis. In Proc. ICDAR, volume 1, 1095–1100. IEEE. Nichol, A.; Dhariwal, P.; Ramesh, A.; Shyam, P.; Mishkin, P.; McGrew, B.; Sutskever, I.; and Chen, M. 2021. GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741. Nikolaidou, K.; Retsinas, G.; Christlein, V.; Seuret, M.; Sfikas, G.; Smith, E. B.; Mokayed, H.; and Liwicki, M. 2023. WordStylist: Styled Verbatim Handwritten Text Generation with Latent Diffusion Models. arXiv preprint arXiv:2303.16576. Park, S.; Chun, S.; Cha, J.; Lee, B.; and Shim, H. 2021a. Few-shot font generation with localized style representations and factorization. In Proc. AAAI, volume 35, 2393– 2402. Park, S.; Chun, S.; Cha, J.; Lee, B.; and Shim, H. 2021b. Multiple heads are better than one: Few-shot font generation with multiple localized experts. In Proc. ICCV, 13900– 13909. Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proc. CVPR, 10684–10695. Ruiz, N.; Li, Y.; Jampani, V.; Pritch, Y.; Rubinstein, M.; and Aberman, K. 2023. DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation. In Proc. CVPR, 22500–22510. Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E. L.; Ghasemipour, K.; Gontijo Lopes, R.; Karagol Ayan, B.; Salimans, T.; et al. 2022. Photorealistic text-to-image diffusion models with deep language understanding. Proc. NeurIPS, 35: 36479–36494. Sun, D.; Ren, T.; Li, C.; Su, H.; and Zhu, J. 2017. Learning to write stylized chinese characters by reading a handful of examples. arXiv preprint arXiv:1712.06424. Sun, D.; Zhang, Q.; and Yang, J. 2018. 
Pyramid embedded generative adversarial network for automated font generation. In Proc. ICPR, 976–981. IEEE. Tang, L.; Cai, Y.; Liu, J.; Hong, Z.; Gong, M.; Fan, M.; Han, J.; Liu, J.; Ding, E.; and Wang, J. 2022. Few-shot font generation by learning fine-grained local styles. In Proc. CVPR, 7895–7904. Tian, Y. 2017. zi2zi: Master Chinese Calligraphy with Conditional Adversarial Networks. http://github.com/kaonashityc/zi2zi. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6610 Wang, C.; Zhou, M.; Ge, T.; Jiang, Y.; Bao, H.; and Xu, W. 2023. CF-Font: Content Fusion for Few-shot Font Generation. In Proc. CVPR, 1858–1867. Wu, S.-J.; Yang, C.-Y.; and Hsu, J. Y.-j. 2020. CalliGAN: Style and structure-aware Chinese calligraphy character generator. arXiv preprint arXiv:2005.12500. Xie, Y.; Chen, X.; Sun, L.; and Lu, Y. 2021. DG-Font: Deformable generative networks for unsupervised font generation. In Proc. CVPR, 5130–5140. Zhang, L.; and Agrawala, M. 2023. Adding conditional control to text-to-image diffusion models. arXiv preprint arXiv:2302.05543. Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In Proc. CVPR, 586–595. Zhang, Y.; Tang, F.; Dong, W.; Huang, H.; Ma, C.; Lee, T.Y.; and Xu, C. 2022. Domain enhanced arbitrary image style transfer via contrastive learning. In Proc. SIGGRAPH, 1–8. Zhang, Y.; Zhang, Y.; and Cai, W. 2018. Separating style and content for generalized style transfer. In Proc. CVPR, 8447–8455. Zhu, Y.; Li, Z.; Wang, T.; He, M.; and Yao, C. 2023. Conditional Text Image Generation with Diffusion Models. In Proc. CVPR, 14235–14245. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6611 | 2024 | 734 |
18,556 | Full-Body Motion Reconstruction with Sparse Sensing from Graph Perspective Feiyu Yao1, Zongkai Wu2*, Li Yi3,4,5* 12012 Lab, Huawei Technologies Co., Ltd 2Fancy Technology 3Tsinghua University 4Shanghai Artificial Intelligence Laboratory 5Shanghai Qi Zhi Institute [email protected], [email protected], [email protected] Abstract Estimating 3D full-body pose from sparse sensor data is a pivotal technique employed for the reconstruction of realistic human motions in Augmented Reality and Virtual Reality. However, translating sparse sensor signals into comprehensive human motion remains a challenge since the sparsely distributed sensors in common VR systems fail to capture the motion of full human body. In this paper, we use welldesigned Body Pose Graph (BPG) to represent the human body and translate the challenge into a prediction problem of graph missing nodes. Then, we propose a novel full-body motion reconstruction framework based on BPG. To establish BPG, nodes are initially endowed with features extracted from sparse sensor signals. Features from identifiable joint nodes across diverse sensors are amalgamated and processed from both temporal and spatial perspectives. Temporal dynamics are captured using the Temporal Pyramid Structure, while spatial relations in joint movements inform the spatial attributes. The resultant features serve as the foundational elements of the BPG nodes. To further refine the BPG, node features are updated through a graph neural network that incorporates edge reflecting varying joint relations. Our method’s effectiveness is evidenced by the attained state-of-the-art performance, particularly in lower body motion, outperforming other baseline methods. Additionally, an ablation study validates the efficacy of each module in our proposed framework. Introduction Continuously full-body motion reconstruction from sparse motion sensing is crucial for applications in Augmented Reality and Virtual Reality (AR/VR), which demands highly accurate human motion poses to render vivid avatars in the digital world and do interactions. Common VR systems are composed by head-mounted displays and handheld controllers. These devices can provides resourceful abundant upper body motion information, yet they are unable to provide corresponding lower body motion data. The significant sparsity inherent in known data distribution makes the generation of realistic full-body motion a particularly challenging endeavor for conventional methods based on human kinematics (Company 2018) and matching motions (Ahuja et al. 2021). *Corresponding authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Various learning-based methods have been made to generate full-body avatars from sparse inputs in AR/VR (Dittadi et al. 2021) (Du et al. 2023) (Jiang et al. 2022) (Jiang et al. 2022). These methods in these diverse studies essentially entail the extraction of features from sparse sensor data, devoid of considerations for human body joint relationships. Subsequently, these extracted features are integrated into various network architectures that similarly also lack a profound consideration of the interdependence among human body joints. The homogenization of these methodologies confines the development of reconstructing human motion from sparse inputs to the realm of network structure updates. 
Also, the absence of sufficient human body information contributes to a notable disparity between the reconstructed outcomes of the lower human body and actual motion dynamics. To solve the problems mentioned above, we consider the human body from a graph perspective and propose BPG to represent the full body. The task is then transformed into predicting missing nodes in the established BPG. Considering the limited information available on missing nodes, the BPG is initialized and updated referring to node properties. The first stage is processing node features. Position features and angle features are fused since they follow different transformation laws and distributions. A Temporal Pyramid Structure is proposed to fuse frame-level and clip-level features to build temporal properties for the feature representation. To model spatial properties, features of limb joints and trunk joints are generated separately referring to the human skeleton dynamics. The generated motion features are assigned to be the initial features in BPG. In the Node Feature Updating stage, the nodes in BPG are updated referring to joint relations. We split the node relations into static skeleton relations, dynamic skeleton relations and latent relations. Then the node features in BPG are updated in a Graph Convolution Network with expressive edges generated from the node relations.

Figure 1: Illustration of our proposed structure. Inputs are sparse sensor position and rotational signals from a VR system. The Feature Integration module integrates the position feature and rotation feature, which have different physical properties, with interactive learning. In the Node Property Generation module, the motion temporal property is achieved through the Temporal Pyramid module. To gain the motion spatial property, the limb motion feature is composed of trunk motion features and limb local motion features. The trunk and limb features then serve as initial node features in the Body Pose Graph. In Node Feature Updating, a graph convolution network with different edges modeling different joint relations is applied to update the nodes.

Our main contributions are summarized as follows:
• We are the first to conduct research on full body pose reconstruction with sparse sensing from a graph perspective. The task is viewed as predicting missing nodes in an established graph.
• We propose a framework to reconstruct full body motions via Body Pose Graph (BPG). Motion features with temporal and spatial properties are generated and assigned
Related Work Full-Body Motion Reconstruction from Sparse Inputs Primary researches on this area make attempts on fully-body motion reconstruction from 6 IMU sensors on human body (head, arms, pelvis and legs). (von Marcard et al. 2017) proposes a joint optimization framework based on statistic body models. (Huang et al. 2018) applies learning method BiRNN with body models to do the estimation. (Yi, Zhou, and Xu 2021) proposes a multi-stage learning based method where multiple subtasks and losses are designed to restrain pose generation. (Yi et al. 2022) relies on physical models to refine the poses generated from learning methods. However, requiring six IMUs is still excessive, as they are costly and logistically inconvenient to deploy. Reconstructing full body motion from common VR system will be more advantageous and flexible. (Ahuja et al. 2021) first utilizes sparser inputs from current consumer-grade VR systems (with headset and hand controllers) to estimate fullbody motion. It makes full body motion reconstruction much easier. However, it estimates poses based on matching from a dataset with only 5 types of activities. It can hardly handle diverse activities out of dataset. (Yang, Kim, and Lee 2021) proposes a Gated Recurrent Unit - based method to estimate lower-body pose while achieving upper-body with IK solver. It queries the confidence of lower-body motion reconstruction especially when upper-body and lower-body have weak correlations. Thus apart from sensor data from VR devices, it also requires a sensor on human waist. (Dittadi et al. 2021) proposes a VAE(Variance AutoEncoders)based method to generate poses from VR devices. However, it assumes the directions of pelvis in each frame should be the same. (Jiang et al. 2022) proposes a Transformer-based structure to generate global orientation and local joint orientations. Orientations will then be input to body models to generate joint poses. In the domain of human body reconstruction utilizing only head-mounted devices, several methods have also been developed and explored. (Li, Liu, and Wu 2023) estimates full-body human motions from only egocentric video for diverse scenarios. (Winkler, Won, and Ye 2022) proposes a reinforcement learning based framework and, together with physical simulator, can generate vivid leg motions even when the input is only the 6D transformations of the HMD. Despite these methods’ promising performance, leveraging joint motion relations in the human body can likely yield better results. Hence, our work introduces this prior knowledge through a graph-based approach. Graph Neural Networks Graph Neural Networks (Kipf and Welling 2017)(Zhang and Chen 2018) process data that can be represented as graph. Nodes representations will be iteratively updated by messages passed from their neighbors. Typical message passing methods include convolution-based methods (Kipf and Welling 2017) and attention-based (Veliˇckovi´c et al. 2018). One research field related to sparsity is handling graphs with missing nodes. (Chen et al. 2020) develops a distribution match based GNN Transformer-like method for attributemissing graph. (Taguchi, Liu, and Murata 2018) introduces Gaussian mixture model to represent missing data in Graph The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6613 Convolutional Network. (Jiang and Zhang 2020) utilizes a partial message-passing method to transmit observed features observed features in GCN-based model. (Rossi et al. 
(Rossi et al. 2022) handles missing features in graphs by minimizing the Dirichlet energy, which leads to a diffusion-type differential equation on the graph. Although these methods handle graphs with missing data, the graphs they focus on share three characteristics: the node features are highly similar, the node relations tend to be qualitative and simple, and the downstream tasks (e.g., node prediction) do not depend much on the precise numerical values of the node features. A typical example is a bibliographic graph. Full-body motion reconstruction from sparse inputs, in contrast, requires numerically accurate node features; the joints have their own characteristics, and the relations between them carry motion meaning. The methods above are therefore unsuitable for a human joint graph with missing joints.

Graph Neural Networks in Human Pose Estimation. While there are currently no methods that solve full-body motion estimation from sparse inputs with Graph Neural Networks (GNNs), GNNs, renowned for their interpretability, have found widespread application in tasks associated with human pose, for example action recognition (Sofianos et al. 2021; Liu et al. 2020c; Cheng et al. 2020b,a; Chen et al. 2021b; Si et al. 2019; Zhang, Xu, and Tao 2020; Li et al. 2019; Shi et al. 2019b; Chen et al. 2021a; Shi et al. 2019a; Duan et al. 2022; Shi et al. 2020; Ye et al. 2019) and 3D human pose estimation from 2D (Azizi et al. 2022; Liu et al. 2020b,a; Kundu et al. 2021; Zhao et al. 2019; Zou and Tang 2021; Li et al. 2020; Liu, Zou, and Tang 2020). (Sofianos et al. 2021) is the first work that applies a GNN to action recognition, and various GNN-based action-recognition methods have followed. Some focus on improving the graph structure itself: (Cheng et al. 2020b) proposes shift operations and lightweight point-wise convolutions to provide a flexible receptive field for the graph, and (Si et al. 2019) integrates a GNN with an attention mechanism and an LSTM to increase representation ability. Others focus on making the node relations more expressive: (Shi et al. 2019b) generates directed graph edges based on kinematics, while (Shi et al. 2020; Ye et al. 2019) learn edges that carry temporal or spatial information. Different from action recognition, which focuses on pose classification, 3D human pose estimation aims to reduce the estimation error on every joint. To solve this harder task, various methods pursue more powerful graph structures and more contextual edges, including graphs in non-Euclidean space (Azizi et al. 2022) and hypergraph neural networks (Liu et al. 2020b). More advanced graph-updating methods have also been proposed (Liu et al. 2020a; Kundu et al. 2021; Zhao et al. 2019; Zou and Tang 2021; Yan, Xiong, and Lin 2018), and more human priors are utilized: (Li et al. 2020) builds a dynamic GNN based on human motion prediction, (Liu, Zou, and Tang 2020) reveals the importance of decoupling global information from the joints, and (Zeng et al. 2021; Lee and KIM 2022) analyze multi-hop relations between human graph nodes and model them in the updating scheme. These methods motivate us to introduce graphs into full-body pose estimation from sparse inputs. However, they mainly target tasks where sparsity hardly exists: features for almost all joints are supplied as input.
Our task, in contrast, provides sensor data for only 3 joints as input and expects accurate estimation of 22 joints, so the human-pose methods mentioned above can hardly be applied directly.

Methods

In this section, we first formalize the process of full-body motion reconstruction from sparse sensing and interpret the task from a graph perspective. We then introduce the construction of the BPG: after node initialization, the BPG is updated according to several joint relations, and all joint motions are generated.

Problem Formulation

This work focuses on full-body motion reconstruction from the measurements of one headset and two hand controllers, a common configuration of commercial VR devices. The inputs are the Cartesian coordinates p ∈ R^{1×3} and the orientations in axis-angle representation Φ ∈ R^{1×3} of the headset and hand controllers. The outputs are the local rotation angles θ between each joint and its parent joint. Considering the real-time requirements of application scenarios, the problem is formalized as an online one:

θ^{1:F}_N = f({p_w, Φ_w}^{1:S}_{(N−K):N}),   (1)

where S = 3 is the number of joints tracked by the VR system and F = 22 is the number of joints used to represent the full-body motion. The joint movements of the current frame N are generated from the sensor data of the previous K frames ((N−K):N). The final full human body can be rendered from the outputs θ with a human body model.

From a graph perspective, we view the full body as a graph with 22 nodes. For the reconstruction of frame N, the motions of 3 nodes of the graph are known and used, namely the positional and angular motions of the head and two hands over the previous K frames. The task is thus transformed into completing the 19 missing nodes of the graph. The Feature Integration and Node Property Generation modules extract various features from the movement sequences of the three known nodes, assign features to the different nodes, and endow these node features with characteristics corresponding to the joint properties: Feature Integration normalizes and fuses the different sensor signals, and Node Property Generation produces features with temporal and spatial properties as initial node values. In the Node Feature Updating module, the node features are updated by a GCN with expressive edges.
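The online formulation of Eq. (1) can be illustrated with the following minimal sketch. K = 41 matches the input length used in the experiments; the recording length T and the placeholder model f are assumptions for illustration only.

```python
import torch

# S = 3 tracked joints, F = 22 output joints, K previous frames per prediction.
S, F, K, T = 3, 22, 41, 200

positions = torch.randn(T, S, 3)      # p_w: Cartesian coordinates per frame
orientations = torch.randn(T, S, 3)   # Phi_w: axis-angle orientations per frame

def f(window):                        # placeholder for the full BPG model
    return torch.zeros(F, 3)          # theta: local rotations of all 22 joints

predictions = []
for n in range(K, T):
    window = torch.cat([positions[n - K:n], orientations[n - K:n]], dim=-1)
    predictions.append(f(window))     # pose of frame n from frames (n-K):n
predictions = torch.stack(predictions)    # (T - K, 22, 3)
```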
Node Feature Initialization

Given the limited number of known nodes and the valuable information they carry, the sparse sensor data undergoes an abstraction process that extracts features related to whole-body motion. These extracted features are then assigned to all nodes, serving as the initialization of the graph structure.

Feature Integration. The angular and positional measurements have very different distributions and follow different transformation laws, so we propose the Feature Integration module to fuse them. The measurements from each VR device are the joint position p ∈ R^{1×3} and the joint angular representation θ ∈ R^{1×6} (the first two rows of the rotation matrix R ∈ R^{3×3}). We augment the features by differentiating the measurements, obtaining the joint velocity v ∈ R^{1×3} and the joint angular velocity representation ω_t ∈ R^{1×6} (the first two rows of the rotation velocity matrix R^{−1}_{t−1} R_t). Joint position and joint velocity compose the translation feature F_{P_sensor}, while the joint angular representation and angular velocity compose the rotation feature F_{A_sensor}:

F_{P_sensor} ∈ R^{S×6} = [(p^1_t, v^1_t); … ; (p^S_t, v^S_t)],   (2)

F_{A_sensor} ∈ R^{S×12} = [(θ^1_t, ω^1_t); … ; (θ^S_t, ω^S_t)].   (3)

F_{P_sensor} and F_{A_sensor} describe the joint motion from different perspectives and have entirely different geometric properties. To enhance the representation ability and eliminate these geometric differences, we use dual interactive learning to generate new features, following (Zhu et al. 2020):

F′_P = F_P ⊙ exp(ϕ(F_A)) − ρ(F_A ⊙ exp(ψ(F_P))),
F′_A = F_A ⊙ exp(ψ(F_P)) + η(F_P ⊙ exp(ϕ(F_A))),   (4)

where ϕ, ψ, ρ, and η are 1D convolutional layers and exp maps the two features onto similar distribution spaces. The Feature Integration module thus integrates the motion information of the nodes and outputs fused node features that incorporate both positional and angular information.

Node Temporal Motion Property Generation. As stated in the Problem Formulation, the framework takes the motion information of the K preceding frames as input and outputs the motion of the current frame, so this module is designed to mitigate the temporal disparities among features originating from different frames. Moreover, as highlighted in (Martinez, Black, and Romero 2017), motion continuity is a distinct characteristic of human motion: the motion details of each frame can be inferred from the surrounding contextual frames (a clip). We therefore use the modeling of motion continuity as guidance to mitigate these temporal disparities. To better capture the temporal properties of joint motion, we design the Temporal Pyramid Structure shown in Figure 2. Its inputs are the features of the K previous frames, and its outputs are high-dimensional temporal features of the current frame. The feature extractor is based on the SCI-Block (Liu et al. 2022), a CNN-based time-series model with an adjustable output dimension. Three feature extractors are applied in the Temporal Pyramid Structure: the first and second extract frame-level and clip-level features, the two levels are concatenated in an interleaved manner, and the third extractor generates the motion features of the current frame.

Figure 2: Temporal Pyramid Structure (previous K-frame sequence → frame-level and clip-level feature extractors → interleaved concatenation → current-frame feature).

Node Spatial Motion Property Generation. Human motions at different joints have different spatial properties. (Leteneur et al. 2013) shows the important impact of the trunk on human body motion and reveals the different properties of trunk and limb joints. As human skeleton kinematics (Shi et al. 2019b; Hu et al. 2021) reveal, joints farther from the center of the human body are always physically controlled by an adjacent joint closer to the center. In this context, limb joints act as child joints of trunk joints, so limb joint motions are composed of both local limb movements and trunk motions. To address the difficulty of predicting joints located far from the body center and to capture this directed control relationship, we propose a unidirectional interactive learning approach that guides the extraction of limb motion features with the trunk motion features:

F_L = F_L ⊙ exp(ϕ(F_T)) + ρ(F_T ⊙ exp(ψ(F_L))),   (5)

where F_T and F_L are the trunk and limb motion features generated separately by the Temporal Pyramid, and ϕ and ψ are convolutional networks that generate sub-structure-level features. The interactive learning mechanism here is similar to that of the Feature Integration module. The generated trunk and limb motion features are then assigned to the corresponding trunk nodes (joints 0, 1, 2, 3, 4, 5, 6, 9, 12, 13, 14 in Figure 3) and limb nodes (joints 7, 8, 10, 11, 15, 16, 17, 18, 19, 20, 21 in Figure 3) as initial features.
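A sketch of the interactive feature fusion of Eqs. (4) and (5) is given below. We assume the two streams are laid out as (batch, channels, time) tensors and that ϕ, ψ, ρ, and η are 1D convolutions that also match channel widths so the element-wise products are valid; kernel size and channel counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class InteractiveFusion(nn.Module):
    """Dual interactive fusion between position and rotation feature streams."""
    def __init__(self, c_p=6 * 3, c_a=12 * 3, k=3):
        super().__init__()
        pad = k // 2
        self.phi = nn.Conv1d(c_a, c_p, k, padding=pad)   # maps F_A to F_P's width
        self.psi = nn.Conv1d(c_p, c_a, k, padding=pad)   # maps F_P to F_A's width
        self.rho = nn.Conv1d(c_a, c_p, k, padding=pad)
        self.eta = nn.Conv1d(c_p, c_a, k, padding=pad)

    def forward(self, f_p, f_a):
        # exp() maps the two streams onto comparable value ranges (Eq. 4).
        f_p_new = f_p * torch.exp(self.phi(f_a)) - self.rho(f_a * torch.exp(self.psi(f_p)))
        f_a_new = f_a * torch.exp(self.psi(f_p)) + self.eta(f_p * torch.exp(self.phi(f_a)))
        return f_p_new, f_a_new

fp, fa = torch.randn(2, 18, 41), torch.randn(2, 36, 41)   # 3 joints x (6 / 12) dims
out_p, out_a = InteractiveFusion()(fp, fa)
print(out_p.shape, out_a.shape)    # (2, 18, 41) (2, 36, 41)
```

The one-way variant of Eq. (5) follows the same pattern with the trunk stream kept fixed.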
Node Feature Updating

Node Feature Updating aims to capture diverse joint relationships through a graph convolutional network with expressive edges. We first revisit the vanilla Graph Convolutional Network, which updates joint features using the static human skeleton as edges, and then introduce node updating with a graph convolutional network with expressive edges.

Figure 3: Index of the human body joints.

Vanilla GCN. We first review the vanilla graph convolutional network. A graph G = (V, E) consists of nodes V and edges E. A generic GCN layer is defined as

X′ = σ(AXW),   (6)

where A ∈ R^{N×N} is the adjacency matrix of the N nodes, indicating the connections between them. In 2D-to-3D human pose estimation, a task that predicts 3D coordinates from images, the adjacency matrix is usually established from the human skeleton: if there is a bone connection between joint i and joint j, then a_ij = 1; otherwise a_ij = 0. We denote the input node features by X ∈ R^{N×C_in}, with each node corresponding to a C_in-dimensional feature vector; the learnable weight matrix W ∈ R^{C_in×C_out} adjusts the feature dimension as desired, and σ(·) is a common activation function. For the i-th node with feature X_i, the corresponding slice of the adjacency matrix is A_i, whose j-th element is a_ij; S_i denotes the set of joints with bone connections to joint i, so a_ij = 1 if j ∈ S_i and a_ij = 0 otherwise. The update of the i-th node in the vanilla GCN is

X′_i = σ( Σ_{j∈S_i} a_ij X_j W ).   (7)

Although many improvements have been made in the past, this graph updating scheme updates every node synchronously, which implicitly assumes that useful information is evenly distributed and that the confidence of all joint features is at the same level.
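A minimal sketch of the static skeleton adjacency and the vanilla GCN layer of Eqs. (6)-(7) follows. The 22-joint parent list is the standard SMPL kinematic tree (consistent with the trunk/limb split and Figure 3) and is stated here as an assumption, not copied from the paper.

```python
import torch
import torch.nn as nn

# Parent index of each of the 22 SMPL body joints (-1 = root); assumed SMPL tree.
PARENTS = [-1, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 9, 9, 12, 13, 14, 16, 17, 18, 19]

def skeleton_adjacency(parents=PARENTS, self_loops=True):
    n = len(parents)
    a = torch.zeros(n, n)
    for j, p in enumerate(parents):
        if p >= 0:
            a[j, p] = a[p, j] = 1.0          # bone connection => a_ij = 1
    if self_loops:
        a += torch.eye(n)
    return a

class VanillaGCNLayer(nn.Module):
    """X' = sigma(A X W), applied per batch over the 22 joint nodes."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.w = nn.Linear(c_in, c_out, bias=False)
        self.register_buffer("a", skeleton_adjacency())

    def forward(self, x):                    # x: (batch, 22, c_in)
        return torch.relu(self.a @ self.w(x))

x = torch.randn(4, 22, 64)
print(VanillaGCNLayer(64, 64)(x).shape)      # torch.Size([4, 22, 64])
```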
Node Updating with Expressive Edges. The vanilla graph convolutional network has limited representation ability: it assumes that the feature of every node is reliable and that the nodes can be represented well by updates over constant graph edges built from the human skeleton. Moreover, other strong hidden relationships exist among the joint nodes and change with the action, and such relations can hardly be modeled by the vanilla GCN. For example, when a person is running there is a strong relation between the hand and foot joints, whereas when sitting there is no such relation. Considering these two limitations, we propose a GCN with multiple kinds of learned edges. Specifically, the edges of the graph are dynamic and correspond to the current joint state instead of being constant: when the human action changes, the edges change accordingly to better represent the node relations. In our task, the edges are represented by an adjacency matrix A_h ∈ R^{22×22}:

A_h = A_s + A_l,   (8)
A_s = A_ss + A_ds.   (9)

A_s ∈ R^{22×22} is the skeleton relation adjacency matrix, describing the relations that exist in the human skeleton (specifically, all the edges drawn in Figure 3). A_l ∈ R^{22×22} is the latent relation adjacency matrix, describing potential links between nodes that do not exist in Figure 3. The static skeleton relation adjacency matrix A_ss ∈ R^{22×22} is built from the human skeleton of the SMPL model: joints connected in the skeleton receive edges with a non-zero constant value at the corresponding positions. The dynamic skeleton relation adjacency matrix A_ds ∈ R^{22×22} also follows the SMPL skeleton, but the values of its edges are determined by the features of the graph nodes. The values in A_ds and A_l are learned separately by an MLP:

A = W_1 ϕ(W_0 X + B_0) + B_1,   (10)

where X ∈ R^{b×nf} is the joint feature, W_0 ∈ R^{h×nf}, B_0 ∈ R^h, W_1 ∈ R^{o×h}, and B_1 ∈ R^o; b is the batch size, n the number of nodes, f the feature dimension, h the hidden-layer dimension, o the output dimension, and ϕ the ReLU activation. The nodes are updated with the above adjacency matrix. The final outputs of the BPG are the axis-angle of each joint, which, together with the SMPL human model (Pavlakos et al. 2019), is used to generate the position of each joint.

Training and Loss

The loss function is composed of a rotational loss, a positional loss, and a bone-symmetry loss:

L_final = L_rot + L_pos + L_bone,   (11)

where L_rot is an absolute-error loss on all joint axis-angles and L_pos is an absolute-error loss on all joint positions; the accuracy of both the axis-angle and the position of each joint is crucial for full-body reconstruction (Jiang et al. 2022). L_bone is the human skeleton symmetry loss, which emphasizes the relative position relations among joints and introduces human-body priors into the optimization:

L_bone = Σ_{(i_l,j_l),(i_r,j_r)} ( ‖Ŷ^{pos}_{i_l} − Ŷ^{pos}_{j_l}‖ − ‖Ŷ^{pos}_{i_r} − Ŷ^{pos}_{j_r}‖ ),   (12)

where Ŷ^{pos}_i is the predicted position of joint i, (i_l, j_l) ∈ set_l are the left-side skeleton bones, and (i_r, j_r) ∈ set_r are the corresponding right-side skeleton bones (shown as the two colored sets of lines in Figure 3).
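The following sketch illustrates the training objective of Eqs. (11)-(12). The left/right bone pairs are read off the SMPL kinematic tree and Figure 3, and the absolute value around the length difference in L_bone is an assumption; both are illustrative rather than the authors' exact code.

```python
import torch

LEFT_BONES  = [(0, 1), (1, 4), (4, 7), (7, 10), (9, 13), (13, 16), (16, 18), (18, 20)]
RIGHT_BONES = [(0, 2), (2, 5), (5, 8), (8, 11), (9, 14), (14, 17), (17, 19), (19, 21)]

def bone_symmetry_loss(pred_pos):                      # pred_pos: (batch, 22, 3)
    loss = 0.0
    for (il, jl), (ir, jr) in zip(LEFT_BONES, RIGHT_BONES):
        left_len  = (pred_pos[:, il] - pred_pos[:, jl]).norm(dim=-1)
        right_len = (pred_pos[:, ir] - pred_pos[:, jr]).norm(dim=-1)
        loss = loss + (left_len - right_len).abs().mean()
    return loss

def total_loss(pred_rot, gt_rot, pred_pos, gt_pos):
    l_rot  = (pred_rot - gt_rot).abs().mean()          # L1 on axis-angles
    l_pos  = (pred_pos - gt_pos).abs().mean()          # L1 on joint positions
    l_bone = bone_symmetry_loss(pred_pos)              # Eq. (12)
    return l_rot + l_pos + l_bone                      # Eq. (11)

loss = total_loss(torch.randn(2, 22, 3), torch.randn(2, 22, 3),
                  torch.randn(2, 22, 3), torch.randn(2, 22, 3))
```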
Experiment

Data Preparation and Evaluation Metrics. The CMU (Lab 2000), BMLrub (Troje 2002), and HDM05 (Müller et al. 2007) subsets of the AMASS dataset (Mahmood et al. 2019) are employed. The datasets are randomly partitioned into training and testing subsets comprising 90% and 10% of the data, respectively, following the same setting as (Jiang et al. 2022). The metrics used for the overall performance comparison are MPJRE (Mean Per Joint Rotation Error, in degrees), MPJPE (Mean Per Joint Position Error, in cm), and MPJVE (Mean Per Joint Velocity Error, in cm/s). In the ablation study, to reveal the effect of each component on motion reconstruction, we also report the estimated position error of each lower-body joint.

Performance Comparison with Baseline Methods. We compare our method with the baselines in Table 1, namely Final IK, CoolMoves (Ahuja et al. 2021), LoBSTr (Yang, Kim, and Lee 2021), VAE-HMD (Dittadi et al. 2021), and AvatarPoser (Jiang et al. 2022). The reported results follow (Jiang et al. 2022); for a fair comparison, we retrained AvatarPoser on our platform. Our method attains the best results on all three metrics, outperforming all other methods.

Table 1: Performance comparison between our method and the baselines on the AMASS dataset (* = trained on our machine).
Methods        MPJRE   MPJPE   MPJVE
Final IK       16.77   18.09    59.24
CoolMoves       5.20    7.83   100.54
LoBSTr         10.69    9.02    44.97
AvatarPoser     3.21    4.18    29.40
AvatarPoser*    3.01    4.11    27.79
Our method*     2.49    3.34    22.84

By representing the human body as a graph and modeling the spatial-temporal relations among joints, our method surpasses the baseline methods, notably in predicting the unobserved lower-body joints, as illustrated in Table 3.

Performance Comparison with Offline Methods. Offline methods output a pose sequence of length n instead of a single frame per inference. AGRoL (Du et al. 2023) is the state-of-the-art offline method. Our method uses a 41-frame sensor sequence as input and outputs one frame per inference. In Table 2, AGRoL 41 denotes that the lengths of the input and output sequences are both 41. Thanks to the feature generation method and the graph-based architecture, both specially designed for the human body, our method outperforms AGRoL on all criteria under the same condition. When the output sequence length is extended to 196, AGRoL demonstrates commendable performance on the MPJVE metric; however, this improvement does not translate into better MPJRE and MPJPE, which are more important for the reconstruction task. In contrast, our method performs better on both MPJRE and MPJPE, further substantiating its efficacy.

Table 2: Performance comparison with offline methods.
Methods      MPJRE   MPJPE   MPJVE
VAE-HMD       4.11    6.83    37.99
AGRoL 41      2.59    3.64    23.24
AGRoL 196     2.66    3.71    18.59
Our method    2.49    3.34    22.84

Table 3: Joint position error of our method and AvatarPoser on the lower-body joints [cm].
Index        AvatarPoser   Our method   Improvement
Joint 1          3.8           3.1        18.84%
Joint 2          3.8           3.2        15.79%
Joint 4          6.9           5.5        20.29%
Joint 5          6.9           5.5        20.29%
Joint 7         10.1           7.9        21.78%
Joint 8         10.1           8.0        20.80%
Joint 10        10.8           8.4        22.22%
Joint 11        11.0           8.7        20.91%
All Joints       4.11          3.34       18.73%
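For reference, the three metrics reported in Tables 1-3 can be computed as in the following sketch. The units assume joint positions in metres and a 60 fps frame rate, and MPJRE is simplified here to a mean absolute per-axis angle difference; all of these are assumptions rather than the exact evaluation code.

```python
import torch

def mpjpe(pred_pos, gt_pos):                       # (T, 22, 3), metres
    return (pred_pos - gt_pos).norm(dim=-1).mean() * 100.0          # cm

def mpjre(pred_rot_deg, gt_rot_deg):               # (T, 22, 3), degrees
    return (pred_rot_deg - gt_rot_deg).abs().mean()                 # degrees

def mpjve(pred_pos, gt_pos, fps=60):
    pred_v = (pred_pos[1:] - pred_pos[:-1]) * fps  # finite-difference velocity
    gt_v   = (gt_pos[1:]  - gt_pos[:-1])  * fps
    return (pred_v - gt_v).norm(dim=-1).mean() * 100.0              # cm/s

T = 120
print(mpjpe(torch.randn(T, 22, 3), torch.randn(T, 22, 3)))
```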
Ablation Study. To dissect the function of each component, we conduct ablation studies over several configurations; the findings are outlined in Table 4. Since our method specifically targets the large estimation errors of the lower-body joints, we additionally report MPJPE-lower-body (MPJPE over joints 1, 2, 4, 5, 7, 8, 10, 11).
• No Bone Symmetric Loss: the bone-symmetry loss is not used in the framework.
• No Spatial Property: node features are not generated separately for trunk and limb joints, and the relation between trunk and limb is not considered.
• No Temporal Property: the Temporal Pyramid Structure is replaced by a plain temporal feature extractor.
• No Feature Initialization: the Feature Integration process is replaced by a simple MLP.
• Vanilla GCN: the nodes of the BPG are updated by a vanilla GCN instead of the GCN with expressive edges.

Table 4: Ablation study.
Configuration             MPJPE   MPJPE-lower-body
No Bone Symmetric Loss     3.53        6.75
No Spatial Property        3.60        6.88
No Temporal Property       3.71        7.10
No Feature Initialization  3.53        6.80
Vanilla GCN                3.58        6.88
Default                    3.34        6.29

As evident from Table 4, removing any module induces a notable performance decline, particularly in the lower-body region, which confirms the efficacy of each component.

Figure 4: Visualization of estimated poses on an avatar over a series of frames portraying a human front-kick action. The top row shows avatars with ground-truth (GT) poses, while the following two rows show avatars generated by our approach and by AvatarPoser; the avatars are color-coded to denote the error of each mesh vertex.

Visualization of Estimated Poses on the Avatar. To better analyze the estimation performance, we visualize the estimated poses on the whole avatar in Figure 4. Each mesh triangle of the avatar is rendered according to the error of its estimated mesh vertices, with red representing a large vertex estimation error. The avatars in the first row show the ground-truth poses, and the avatars in the second and third rows are generated by our method and the baseline, respectively. The avatar generated by our method accomplishes the whole process of lifting and lowering the leg with little mesh error, while the one generated by AvatarPoser performs the action with errors and stiffness; in frames (5)-(9) in particular, the avatars generated by AvatarPoser can hardly raise the left leg as high as the ground truth.

Figure 5: The left diagram depicts the 0-1 adjacency matrix of the skeletal connectivity of the human body; the right diagram shows an adjacency matrix generated by the GCN with expressive edges. The deeper the color, the stronger the relationship between the nodes; red indicates positive correlation and blue indicates negative correlation.

Analysis of Expressive Edges. As shown in Figure 5, the adjacency matrix generated by the GCN with expressive edges (right) exhibits more joint relations than the static 0-1 adjacency matrix derived from the human skeletal structure (left). This indicates that, benefiting from the expressive capability of the GCN with expressive edges, our approach captures more comprehensive joint relationships than the human skeleton alone.

Conclusion

In this study, we approach full-body motion reconstruction from sparse sensor input from a graph-based perspective, introducing the Body Pose Graph to represent the human body. In the Node Feature Initialization step, the different kinds of VR device features are first integrated; the resulting features are then processed to obtain the spatial and temporal properties of joint motion before serving as initial node features of the Body Pose Graph, with the temporal property generated by the Temporal Pyramid Structure and the spatial property generated from the spatial relations of joint motion. In the Node Feature Updating stage, we employ a GNN with expressive edges to update the node features within the Body Pose Graph. Our approach demonstrates strong estimation performance in comprehensive evaluations, ablation studies validate the effectiveness of the individual components, and visualizations of the learned edges and of the estimated poses on avatars provide insight into the learned motion relationships and into our method's mesh-scale accuracy.

References

Ahuja, K.; Ofek, E.; Gonzalez-Franco, M.; Holz, C.; and Wilson, A. D. 2021. CoolMoves: User Motion Accentuation in Virtual Reality. In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, volume 5, 1–23. Azizi, N.; Possegger, H.; Rodolà, E.; and Bischof, H. 2022. 3D Human Pose Estimation Using Möbius Graph Convolutional Networks. In Proceedings of the European Conference on Computer Vision, 160–178. Chen, X.; Chen, S.; Yao, J.; Zheng, H.; Zhang, Y.; and Tsang, I. W. 2020. Learning on Attribute-Missing Graph. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(2): 740–757. Chen, Y.; Zhang, Z.; Yuan, C.; Li, B.; Deng, Y.; and Hu, W. 2021a.
Channel-Wise Topology Refinement Graph Convolution for Skeleton-Based Action Recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 13359–13368. Chen, Z.; Li, S.; Yang, B.; Li, Q.; and Liu, H. 2021b. Multi-scale spatial temporal graph convolutional network for skeleton-based action recognition. In Proceedings of the AAAI conference on artificial intelligence, volume 35, 1113– 1122. Cheng, K.; Zhang, Y.; Cao, C.; Shi, L.; Cheng, J.; and Lu, H. 2020a. Decoupling GCN with DropGraph Module for Skeleton-Based Action Recognition. In Proceedings of the European Conference on Computer Vision, 536–553. Cheng, K.; Zhang, Y.; He, X.; Chen, W.; Cheng, J.; and Lu, H. 2020b. Skeleton-based action recognition with shift graph convolutional network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 183–192. Company, R. 2018. Final IK. https://assetstore.unity.com/ packages/tools/animation/final-ik-14290. Dittadi, A.; Dziadzio, S.; Cosker, D.; Lundell, B.; Cashman, T.; and Shotton, J. 2021. Full-Body Motion from a Single Head-Mounted Device: Generating SMPL Poses from Partial Observations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 11687–11697. Du, Y.; Kips, R.; Pumarola, A.; Starke, S.; Thabet, A.; and Sanakoyeu, A. 2023. Avatars Grow Legs: Generating Smooth Human Motion From Sparse Tracking Inputs With Diffusion Model. In Proceedings of the IEEE conference on computer vision and pattern recognition, 481–490. Duan, H.; Zhao, Y.; Chen, K.; Lin, D.; and Dai, B. 2022. Revisiting Skeleton-Based Action Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2969–2978. Hu, W.; Zhang, C.; Zhan, F.; Zhang, L.; and Wong, T.-T. 2021. onditional Directed Graph Convolution for 3D Human Pose Estimation. In Proceedings of the 29th ACM International Conference on Multimedia, 602–611. Huang, Y.; Kaufmann, M.; Aksan, E.; Black, M. J.; Hilliges, O.; and Pons-Moll, G. 2018. Deep inertial poser: Learning to reconstruct human pose from sparse inertial measurements in real time. ACM Transactions on Graphics (TOG), 37(6): 1–15. Jiang, B.; and Zhang, Z. 2020. Incomplete graph representation and learning via partial graph neural networks. arXiv:2003.10130. Jiang, J.; Streli, P.; Qiu, H.; Fender, A.; Laich, L.; Snape, P.; and Holz, C. 2022. AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing. In Proceedings of the European Conference on Computer Vision, 443–460. Kipf, T. N.; and Welling, M. 2017. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations. Kundu, J. N.; Seth, S.; Jamkhandi, A.; YM, P.; Jampani, V.; Chakraborty, A.; and R, V. B. 2021. Non-local Latent Relation Distillation for Self-Adaptive 3D Human Pose Estimation. In Advances in Neural Information Processing Systems, volume 34, 158–171. Lab, C. G. 2000. CMU Graphics Lab Motion Capture Database. http://mocap.cs.cmu.edu/. Lee, J. Y.; and KIM, I. 2022. Multi-hop Modulated Graph Convolutional Networks for 3D Human Pose Estimation. In British Machine Vision Conference. Leteneur, S.; Simoneau, E.; Gillet, C.; Dessery, Y.; and Barbier, F. 2013. Trunk’s natural inclination influences stance limb kinetics, but not body kinematics, during gait initiation in able men. PloS one, (1): e55256. Li, J.; Liu, K.; and Wu, J. 2023. Ego-Body Pose Estimation via Ego-Head Pose Estimation. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17142–17151. Li, M.; Chen, S.; Chen, X.; Zhang, Y.; Wang, Y.; and Tian, Q. 2019. Actional-structural graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3595–3603. Li, M.; Chen, S.; Zhao, Y.; Zhang, Y.; Wang, Y.; and Tian, Q. 2020. Dynamic Multiscale Graph Neural Networks for 3D Skeleton Based Human Motion Prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 214–223. Liu, K.; Ding, R.; Zou, Z.; Wang, L.; and Tang, W. 2020a. A Comprehensive Study of Weight Sharing in Graph Networks for 3D Human Pose Estimation. In Proceedings of the European Conference on Computer Vision, 318–334. Liu, K.; Zou, Z.; and Tang, W. 2020. Learning Global Pose Features in Graph Convolutional Networks for 3D Human Pose Estimation. In Proceedings of the Asian Conference on Computer Vision, 1429–1442. Liu, M.; Zeng, A.; Chen, M.; Xu, Z.; Lai, Q.; Ma, L.; and Xu, Q. 2022. SCINet: Time Series Modeling and Forecasting with Sample Convolution and Interaction. In Advances in Neural Information Processing Systems, 5816–5828. Liu, S.; Lv, P.; Zhang, Y.; Fu, J.; Cheng, J.; Li, W.; Zhou, B.; and Xu, M. 2020b. Semi-Dynamic Hypergraph Neural Network for 3D Pose Estimation. In Proceedings of the International Joint Conference on Artificial Intelligence, 782–788. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6619 Liu, Z.; Zhang, H.; Chen, Z.; Wang, Z.; and Ouyang, W. 2020c. Disentangling and unifying graph convolutions for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 143–152. Mahmood, N.; Ghorbani, N.; Troje, N. F.; Pons-Moll, G.; and Black, M. J. 2019. AMASS: Archive of motion capture as surface shapes. Proceedings of the IEEE/CVF international conference on computer vision, 5442–5451. Martinez, J.; Black, M. J.; and Romero, J. 2017. On human motion prediction using recurrent neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2891–2900. M¨uller, M.; R¨oder, T.; Clausen, M.; Eberhardt, B.; Kr¨uger, B.; and Weber, A. 2007. Documentation Mocap Database HDM05. Technical Report CG-2007-2, Institut f¨ur Informatik II, Universit¨at Bonn. Pavlakos, G.; Choutas, V.; Ghorbani, N.; Bolkart, T.; Osman, A. A. A.; Tzionas, D.; and Black, M. J. 2019. Expressive body capture: 3d hands, face, and body from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, 10975–10985. Rossi, E.; Kenlay, H.; Gorinova, M. I.; Chamberlain, B. P.; Dong, X.; and Bronstein, M. M. 2022. On the Unreasonable Effectiveness of Feature Propagation in Learning on Graphs With Missing Node Features. In Learning on Graphs Conference, volume 198, 11:1–11:16. Shi, L.; Zhang, Y.; Cheng, J.; and Lu, H. 2019a. Two-Stream Adaptive Graph Convolutional Networks for SkeletonBased Action Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12026–12035. Shi, L.; Zhang, Y.; Cheng, J.; and Lu, H. 2019b. Skeletonbased action recognition with directed graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7912–7921. Shi, L.; Zhang, Y.; Cheng, J.; and Lu, H. 2020. SkeletonBased Action Recognition With Multi-Stream Adaptive Graph Convolutional Networks. 
IEEE Transactions on Image Processing, 29: 9532–9545. Si, C.; Chen, W.; Wang, W.; Wang, L.; and Tan, T. 2019. An attention enhanced graph convolutional lstm network for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1227–1236. Sofianos, T.; Sampieri, A.; Franco, L.; and Galasso, F. 2021. Space-time-separable graph convolutional network for pose forecasting. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 11209–11218. Taguchi, H.; Liu, X.; and Murata, T. 2018. Graph convolutional networks for graphs containing missing features. Future Generation Computer Systems, 117: 155–168. Troje, N. F. 2002. Decomposing biological motion: A framework for analysis and synthesis of human gait patterns. Journal of vision, 2(5): 2–2. Veliˇckovi´c, P.; Cucurull, G.; Casanova, A.; Romero, A.; Li`o, P.; and Bengio, Y. 2018. Graph Attention Networks. In International Conference on Learning Representations. von Marcard, T.; Rosenhahn, B.; Black, M. J.; and PonsMoll, G. 2017. Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs. Computer graphics forum, 36(2): 349–360. Winkler, A.; Won, J.; and Ye, Y. 2022. QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars. In SIGGRAPH Asia 2022 Conference Papers, 1–8. Yan, S.; Xiong, Y.; and Lin, D. 2018. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proceedings of the AAAI conference on artificial intelligence, volume 32. Yang, D.; Kim, D.; and Lee, S.-H. 2021. Lobstr: Real-time lower-body pose prediction from sparse upper-body tracking signals. Computer Graphics Forum, 40(2): 265–275. Ye, F.; Pu, S.; Zhong, Q.; Li, C.; Xie, D.; and Tang, H. 2019. Dynamic gcn: Context-enriched topology learning for skeleton-based action recognition. In Proceedings of the 28th ACM international conference on multimedia, 55–63. Yi, X.; Zhou, Y.; Habermann, M.; Shimada, S.; Golyanik, V.; Theobalt, C.; and Xu, F. 2022. Physical Inertial Poser (PIP): Physics-aware Real-time Human Motion Tracking from Sparse Inertial Sensors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13167–13178. Yi, X.; Zhou, Y.; and Xu, F. 2021. TransPose: Real-time 3D Human Translation and Pose Estimation with Six Inertial Sensors. ACM Transactions on Graphics (TOG), 40(4): 1– 13. Zeng, A.; Sun, X.; Yang, L.; Zhao, N.; Liu, M.; and Xu, Q. 2021. Learning skeletal graph neural networks for hard 3d pose estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 11436–11445. Zhang, M.; and Chen, Y. 2018. Link Prediction Based on Graph Neural Networks. In Advances in Neural Information Processing Systems, 5165–5175. Zhang, X.; Xu, C.; and Tao, D. 2020. Context aware graph convolution for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14333–14342. Zhao, L.; Peng, X.; Tian, Y.; Kapadia, M.; and Metaxas, D. N. 2019. Semantic graph convolutional networks for 3d human pose regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3425–3435. Zhu, S.; Pan, S.; Zhou, C.; Wu, J.; Cao, Y.; and Wang, B. 2020. Graph geometry interaction learning. In Advances in Neural Information Processing Systems, volume 33, 7548– 7558. Zou, Z.; and Tang, W. 2021. Modulated Graph Convolutional Network for 3D Human Pose Estimation. 
In Proceedings of the International Joint Conference on Artificial Intelligence, 11477–11487.
18,557 | FoSp: Focus and Separation Network for Early Smoke Segmentation Lujian Yao, Haitao Zhao*, Jingchao Peng, Zhongze Wang, Kaijie Zhao East China University of Science and Technology {lujianyao, zzwang, kjzhao}@mail.ecust.edu.cn, [email protected], [email protected] Abstract Early smoke segmentation (ESS) enables the accurate identification of smoke sources, facilitating the prompt extinguishing of fires and preventing large-scale gas leaks. But ESS poses greater challenges than the conventional object and regular smoke segmentation due to its small scale and transparent appearance, which can result in high miss detection rate and low precision. To address these issues, a Focus and Separation Network (FoSp) is proposed. We first introduce a focus module employing bidirectional cascade which guides low-resolution and high-resolution features towards mid-resolution to locate and determine the scope of smoke, reducing the miss detection rate. Next, we propose a separation module that separates smoke images into a pure smoke foreground and a smoke-free background, enhancing the contrast between smoke and background fundamentally, and improving segmentation precision. Finally, a domain fusion module is developed to integrate the distinctive features of the two modules which can balance recall and precision to achieve high Fβ. Furthermore, to promote the development of ESS, we introduce a highquality real-world dataset called SmokeSeg, which contains more small and transparent smoke images than the existing datasets. Experimental results show that our model achieves the best performance on three available smoke segmentation datasets: SYN70K (mIoU: 83.00%), SMOKE5K (Fβ: 81.6%) and SmokeSeg (Fβ: 72.05%). The code can be found at https://github.com/LujianYao/FoSp. Introduction In wildlife, smoke is an important indicator of fire. Early smoke segmentation (ESS) enables rapid identification of the location of the fire (Muhammad, Ahmad, and Baik 2018; Robinson 1979), facilitating the timely extinguishing of the flames by rescue personnel and preventing the occurrence of large fires. In industrial production, ESS can also aid in promptly detecting the location of gas leaks and prevent the spread of toxic and harmful gases (Hsu et al. 2021). Smoke segmentation is usually defined as a binary segmentation task (e.g., salient object detection (Borji et al. 2015; Qin et al. 2019), small object segmentation (Wang, Zhou, and Wang 2019; Liu et al. 2021)), but the task of *Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. attention Region (a) Smoke (b) Atten-based ✓ (d) (e) clear gap 55 60 65 70 75 10 60 110 160 210 FoSp-s Segformer OCRNet OCRNet-s PSPNet-s DeepLabV3+-s PSPNet DeepLabV3+ FoSp FLops 𝐹! (g) Flops & Params 𝐹𝛽 Flops (d) Thick vs BG (e) Trans vs BG (f) Sepa vs BG 70 65 60 55 10 60 110 (c) Sepa-based ambiguous narrow ambiguous Figure 1: Motivation and Comparison. In this figure, ”Sepa” means ”Separation” and ”BG” means ”Background”. ESS is particularly challenging due to three main reasons: ①As a small non-rigid object, early smoke exhibits strong variability, resulting in diverse patterns due to differences in scenes, lighting conditions, and wind intensities. ②As a transparent object, the low contrast will make it difficult to distinguish between the smoke and the background. ③Unlike other transparent object detection (e.g., glass segmentation (Mei et al. 2022; Yu et al. 
2022)), the transparency of early smoke is variable. The conventional smoke image only has a transparent part at the edge, but the early smoke also has transparency in the main body. The thick part of the smoke behaves like common objects, with distinct edges and noticeable color. However, transparent smoke has low contrast with the background, making it difficult to distinguish. From the perspective of pixel distribution, Fig. 1d illustrates a gap between the thick parts of smoke and the background, making it relatively easy to segment. Conversely, Fig. 1e shows that the distribution of transparent smoke has a large ambiguous overlapping region with the surrounding background. The larger the overlapping region is, the more difficult to distinguish them. Previous regular smoke segmentation methods (Li et al. 2018; Yuan et al. 2019a,b, 2022) primarily emphasize larger receptive fields to cope with the variability and blurred edges of smoke. However, directly applying binary segmentation can result in high miss detection (incomplete segmentation) and low precision (rough-edge segmentation) as the early smoke is small and transparent. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6621 Therefore, we propose a Focus and Separation Network (FoSp) to deal with these two problems separately. Firstly, we design a focus module utilizing bidirctional cascade which directs low-resolution and high-resolution features towards mid-resolution to locate and confirm the scope of smoke, effectively reducing the miss detection rate (for ①). Then, a separation module is introduced to enhance the contrast between the smoke foreground and the background, thereby improving precision (for ②). Previous methods (Cao, Tang, and Lu 2022; Jing, Meng, and Hou 2023) have used attention-based local enhancement, but this approach may fail to increase contrast (compare the origin smoke region in Fig. 1a and the attention region in Fig. 1b). Therefore, we refer to the atmospheric scattering model, estimate the original image directly, and use it to calculate a discrepancy to obtain the foreground image. Fig. 1c shows that the separation-based method can precisely enhance the contrast between the smoke and the background. Finally, to prevent overpowering one of the modules (the model tends to over-optimize recall or precision), we further design the fusion module to balance the performance of the two modules to obtain the general Fβ (for ③). Furthermore, we provide a large and real-world dataset called SmokeSeg, which includes 6,144 real smoke images, with a considerable number of them featuring small and transparent smoke. SmokeSeg currently boasts the largest number of real images among publicly available smoke segmentation datasets. Notably, the SmokeSeg contains 4.5 times more real images than SMOKE5K (Yan, Zhang, and Barnes 2022), which only comprises 1,360 real images. Our main contributions can be summarized as follows: • We propose a Focus and Separation Network (FoSp), where the focus module and separation module are introduced to reduce the miss detection rate and improve the precision of early smoke segmentation, respectively. • We have created a large-scale dataset named SmokeSeg for early smoke segmentation, which includes 6144 real images with pixel-wise annotations. • Our FoSp achieve the best performance on three smoke segmentation datasets: SYN70K (mIoU: 83.00%), SMOKE5K (Fβ: 81.6%) and SmokeSeg (Fβ: 72.05%). Related Work Early Smoke Segmentation. 
Although current semantic segmentation methods (Long, Shelhamer, and Darrell 2015; Lin et al. 2017; Chen et al. 2018; Strudel et al. 2021; Xie et al. 2021; Cheng et al. 2022) are effective at segmenting regular objects (i.e., those with clear outlines and roughly the same shape), these generic algorithms are not suitable for early smoke segmentation due to its varying transparency and small scale. Traditional smoke segmentation methods have mainly focused on extracting high-quality color and texture features (Mahmoud and Ren 2019; Xing et al. 2015; Yuan, Liu, and Zhang 2019), and deep-learning-based methods (Yuan et al. 2022, 2019a,b; Jing, Meng, and Hou 2023) mainly focus on extracting features with larger receptive fields to handle the variability and blurred edges of smoke. However, these methods suffer from high miss detection rate and low precision in segmenting small and transparent early smoke. And many of them (Yuan et al. 2019a,b, 2021) have only been quantitatively evaluated on synthetic datasets. Despite some methods (Jia et al. 2019; Wang et al. 2014) claiming to address early smoke, they still rely on images of regular smoke. Therefore, the existing smoke segmentation methods are not capable of effectively addressing the issue of early smoke. Prior Attention. Prior attention is a mechanism that adds a branch to the network before the backbone (Uzkent, Yeh, and Ermon 2020; Wang et al. 2020; Xie et al. 2020) or before the final predictions (Najibi, Singh, and Davis 2019; Yang, Huang, and Wang 2022), making the network focus on a specific area in early to obtain finer predictions. However, current methods are mostly used for object detection and are designed to improve precision. How to obtain prior attention for segmentation with high recall is still to be resolved. Smoke Segmentation Datasets. Currently, there are two main smoke segmentation datasets: SYN70K (Yuan et al. 2019b) and SMOKE5K (Yan, Zhang, and Barnes 2022). SYN70K is a synthetic dataset comprising 70K images, while SMOKE5K contains 1K real images and 4K synthetic images, with the latter being selected from SYN70K. However, neither of these datasets is specifically designed for early smoke segmentation. The SYN70K comprises entirely of synthetic smoke, which is larger in scale and significantly different from real images. The SMOKE5K has only a small number of real smoke images and a low proportion of early small and transparent smoke. Therefore, there is currently no dataset particularly designed for early smoke segmentation. Method Introduction of Focus and Separation (FoSp) Intuition. We assume that the image pixel i(x) ∈R3 is composed of a smoke-free background component b(x) ∈ R3 and a smoke foreground component s(x) ∈R3: i(x) = b(x)t(x) + s(x)(1 −t(x)), (1) where x represents the position of the pixel and t(x) ∈R3 is used to control the density of the smoke foreground in RGB channels. Eq. 1 can be transformed to: s(x) = i(x) −b(x)t(x) 1 −t(x) = i(x) −b(x) + b(x) −b(x)t(x) 1 −t(x) = 1 1 −t(x)(i(x) −b(x)) + b(x) = 1 α(x)(i(x) −b(x)) + b(x) (2) where α(x) = 1 −t(x). Supposing we can obtain a smokefree background component b(x), we are capable of adjusting the smoke foreground component s(x) by controlling the value of α to enhance the contrast between foreground and background. Following such intuition, we propose a Focus and Separation Network (FoSp) to estimate the background pixel among the smoke region. 
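As a concrete illustration of this intuition, the pixel-level separation implied by Eqs. (1)-(2) can be sketched as follows; the background and transparency map used here are random placeholders rather than the network's estimates, and the paper ultimately performs this separation at the feature level.

```python
import torch

def separate_foreground(image, background, alpha, eps=1e-6):
    """Recover the smoke foreground s = (i - b) / alpha + b, with alpha = 1 - t."""
    # image, background: (3, H, W) in [0, 1]; alpha: (1, H, W) or (3, H, W)
    return (image - background) / alpha.clamp(min=eps) + background

i = torch.rand(3, 64, 64)          # observed smoke image (placeholder)
b = torch.rand(3, 64, 64)          # estimated smoke-free background (placeholder)
alpha = torch.rand(1, 64, 64)      # per-pixel opacity control
s = separate_foreground(i, b, alpha)
```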
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6622 Bidirectional Cascade Generator Fusion Transformer Block 1 Embedding Transformer Block 2 Transformer Block 3 Transformer Block 4 Decoder Block 1 Decoder Block 2 Decoder Block 3 Decoder Block 1 Decoder Block 2 Decoder Block 3 Blank Separation Focus Embedding Embedding Conv Block Conv Block 𝐹! " 𝐹! # 𝐹$ " 𝐹% " 𝐹& " 𝐹$ # 𝐹% # 𝐹& # 𝐹& ' 𝐹% ' 𝐹! ' 𝐹$ ' Cls Head Focus Map (Re.) Focus Map 𝐹! ( 𝐹$ ( 𝐹% ( 𝐹& ( Reverse & Binary Figure 2: Structure illustration of our Focus and Separation Network (FoSp). The focus module extracts image features and generates a focus map (FM) using a bidirectional cascade generator (BCG). The separation module consists of two inpainters: one to complete the smoke areas with smoke-free backgrounds using the original image and FM, and the other using the original image and a blank image. The smoke foreground features are obtained by subtracting the two features, and then the origin features, FM, and foreground features are fused in the fusion module to obtain the final prediction. Overview. As shown in Fig. 2 , we first propose a focus module to locate and confirm the scope of smoke. This enables the network to pay closer attention to the smoke and reduce the miss detection rate. Next, a separation module is introduced to split the smoke image into pure smoke foreground and smoke-free background, which enhances the contrast between the foreground and background, thereby improving precision. Finally, a domain fusion module is developed to narrow the domain gap between the features generated by the first two modules, ultimately achieving a balance between recall and precision. Focus Module Intuition. Most segmentation methods use pyramid-based unidirectional feature integration to obtain a more refined prediction. However, our focus module is designed to achieve a complete smoke area. In this regard, we introduce a novel bidirectional feature cascade approach. We consider the smoke as two parts: the low-opacity smoke body and the high-opacity smoke edge. As shown in Fig. 3, low-resolution features can provide an approximate outline of the smoke body, However, the effectiveness of capturing transparent regions of smoke is compromised. Conversely, high-resolution features offer more precise attention and can identify transparent parts of the smoke edge, but cannot fully capture the entire smoke. Therefore, we propose a bidirectional fusion of features that can integrate features from both low and high resolution, resulting in a complete scope of the smoke. Overview. In the focus module, a bidirectional cascade generator (BCG) is proposed to integrate the original multiscale image features for locating and confirming the range of smoke, which we refer to as the focus map (FM). As shown in Fig. 3, BCG bidirectionally guides low-resolution features and high-resolution features towards mid-resolution to obtain a complete scope of smoke. Detail. As shown in Fig 2, in the feature extraction section, MiT-B3 (Xie et al. 2021) is adopted as our backbone, which is a transformer-based backbone with the large receptive field that can handle the variability of smoke. Specifically, we preprocess the image using an embedding layer and extract features at four different scales using four Transformer blocks. These scales range from small to large and are respectively Fi ∈R H 26−i × W 26−i ×Ci(i = 1, 2, 3, 4). The BCG is illustrated in Fig. 
3, which consists of two parts: Low-Mid Cascade (LMC) and High-Mid Cascade (HMC). The two modules are similar, so we will use the LMC module as an example to explain. In LMC, The lowest-resolution feature F1 ∈R H 32 × W 32 ×1 is fed into a layer of Conv to obtain the logits G1 ∈R H 32 × W 32 ×1 and then applied the Sigmoid function to obtain the prediction Q1 ∈R H 32 × W 32 ×1 of the F1. A queryguide approach is adopted to enhance the feature maps by taking the dot product of Q1 and F1, and adding the result to the F1. The cascade process of obtaining the final output logits G2 ∈R H 16 × W 16 ×1 of LMC can be represented using the following equation: G2 = Conv(Cat(U(Q1 ∗F1 + F1), F2), θ2), (3) where the θ2 is the learnable parameters of the Conv and the U refers to the upsampling operation. The HMC follows the same procedure to obtain the final logits G3 ∈R H 8 × W 8 ×1. At last, the two intermediate resolution logits G2 (LMC) and G3 (HMC) are fused to obtain the final focus map (FM) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6623 (FM∈R H 16 × W 16 ×1): FM = Sigmoid(Conv(Cat(G2, G3), θm)), (4) where the θm is the learnable parameters of the Conv. Separation Module Intuition. The region of FM shares similarities with the adjacent pixels, while the region of smoke can be perceived as an external element that is not compatible with the background. We leverage the contextual features surrounding FM to complement the features within the FM area, thereby aligning them with the neighboring features (i.e., smoke-free background). Consequently, by subtracting the original image, the foreground smoke can be extracted. Overview. In the separation module, we utilize the FM to obtain the smoke region and apply the inpainting technique (a.k.a Image completion) to fill the smoke area with a smoke-free background using the surrounding scene (particularly the area around the smoke) as a reference. We then subtract the smoke-free background from the original smoke region to derive the pure smoke foreground. Furthermore, to preserve information and maintain consistency with highlevel segmentation tasks, we separate the foreground and background at the feature level. Detail. As shown in Fig. 2, In order to separate the foreground and background of smoke images at the feature level, the entire separation module is composed of two weightshared inpainters. 1) In the top branch, the original image I and focus map FM is first input into the embedding layer EB(·) for feature integration and then encoded into a latent feature F N 1 ∈ R H 32 × W 32 ×C1 through the Conv block. F N 1 = Conv(EB(I, FM), θI), (5) where θI represents the pre-trained Conv parameters of the inpainting network. 2) Different from the top branch, in the bottom branch, the input is the original image I and a Blank image, without any mask. F A 1 = Conv(EB(I, Blank), θI), (6) where F A 1 ∈R H 32 × W 32 ×C1. 3) Then the latent features of the two branches are sent to three decoder blocks and three different scale inpaintingnetwork-domain feature maps are obtained. F A/N i = Deci−1(F A/N i−1 ), i = 2, 3, 4, (7) where F A/N i ∈R H 26−i × W 26−i ×Ci. 4) Finally, we calculate the L1-norm of the four scale feature maps in both the top branch and the bottom branch to get the foreground features F F i , forming a feature list and feeding it into the domain fusion module. F F i = β ∗∥F N i −F A i ∥1, i = 1, 2, 3, 4 (8) where β is the gain factor and we set β = 10. 
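The feature-level separation of Eqs. (5)-(8) can be sketched as below: the same weight-shared inpainting network is run once with the focus map and once with a blank mask, and the scaled L1 difference of the two feature pyramids serves as the smoke-foreground feature. The tiny convolutional stack merely stands in for the pre-trained inpainter, whose real architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class TinyInpainterFeatures(nn.Module):
    """Stand-in for the weight-shared inpainter; returns a 4-level feature list."""
    def __init__(self, c=16):
        super().__init__()
        self.stem = nn.Conv2d(4, c, 3, stride=2, padding=1)      # image + mask
        self.decs = nn.ModuleList(nn.Conv2d(c, c, 3, padding=1) for _ in range(3))

    def forward(self, image, mask):
        feats = [torch.relu(self.stem(torch.cat([image, mask], dim=1)))]
        for dec in self.decs:
            feats.append(torch.relu(dec(feats[-1])))
        return feats

def foreground_features(inpainter, image, focus_map, beta=10.0):
    blank = torch.zeros_like(focus_map)
    f_masked = inpainter(image, focus_map)                       # Eq. (5)
    f_blank  = inpainter(image, blank)                           # Eq. (6)
    return [beta * (a - b).abs() for a, b in zip(f_masked, f_blank)]   # Eq. (8)

net = TinyInpainterFeatures()
img, fm = torch.rand(1, 3, 128, 128), torch.rand(1, 1, 128, 128)
fg = foreground_features(net, img, fm)
```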
Conv Sigmoid Conv Sigmoid Conv Conv Conv High-Mid Cascade Low-Mid Cascade Sigmoid 𝑄! Focus Map 𝑄" 𝐺! 𝐺# 𝐺$ 𝐺" 𝐹! % 𝐹# % 𝐹$ % 𝐹" % Query-guide Query-guide Low-resolution High-resolution C C C Figure 3: Structure of bidirectional cascade generator. Domain Fusion Intuition. Due to the foreground features generated by the inpainters, the origin features and foreground features actually belong to different feature domains. Simply adding features does not effectively merge the high-quality features of both, and this operation may even damage the original feature domain due to different information represented by different channels. Therefore, we design a domain fusion module to achieve the fusion of both domain features. Detail. As shown in Fig. 4, the domain fusion module is divided into three domains: origin domain, foreground domain, and fusion domain. The origin domain is composed of features at different scales generated by the focus module, and we utilize the focus map to enhance the features of the smoke region. The foreground domain is composed of four distinct scale foreground features generated by the separation module. In each stage, the features of the two domains and the feature of previous fusion stage are concatenated and then fused with an MLP. The features of the two domains are hierarchically merged in this way. Loss Function The loss function consists of two parts: basic loss and Focus loss. In conjunction with our proposed focus module, we introduce a Focus loss, which provides supervisory information to the prediction logits Gi(i = 1, 2, 3, 4) of multiple resolution feature maps. Lfocus = λfLBCE(FM, Y) + 4 X i=1 λi(LBCE(Gi, Y)), (9) where LBCE is the binary cross-entropy, Y is the ground truth, λf and λi are the weighting factors and we set λf = λi = 0.1. For the basic segmentation loss, we use the standard binary cross-entropy. The final objective function is the sum of the two losses: L = Lfocus + λbLBCE(P , Y), (10) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6624 C Origin Domain Foreground Domain Fusion Domain UP UP UP MLP MLP MLP MLP C C C 𝐹! " 𝐹# " 𝐹$ " 𝐹% " 𝐹$ & 𝐹% & 𝐹# & 𝐹! & 𝐹$ ' 𝐹% ' 𝐹# ' 𝐹! ' Figure 4: Structure of domain fusion module. where P is the final prediction of model, λb is the weighting factor and we set λb = 0.5. Experiments SmokeSeg Dataset There are two main smoke segmentation datasets available: SYN70K (Yuan et al. 2019b) and SMOKE5K (Yan, Zhang, and Barnes 2022). SYN70K comprises 70k synthetic images, while SMOKE5K is a dataset containing 1,360 real images and 4k synthetic images, with the latter being selected from SYN70K. However, neither of these datasets is tailored specifically for early smoke segmentation. SYN70K is a synthetic dataset with lots of images of smoke, but the smokes in it are large and prominent, which is quite different from real smoke. SMOKE5K has a limited number of real smoke images, and a low proportion of these are early smoke images, making it difficult to conduct experiments specifically targeting early smoke segmentation. To address these issues, we contribute SmokeSeg dataset. SmokeSeg consists of 6,144 real images (the raw smoke images are sourced from FigLib (Dewangan et al. 2022).), which has the largest number of real images with pixelwise annotations in any publicly available smoke segmentation dataset. The number of real images in SmokeSeg is 4.5 times larger than that of SMOKE5K, which only comprises 1,360 real images. 
The majority of these images in SmokeSeg are early smoke, characterized by small scale and transparent appearance. Fig. 5 shows the distribution of smoke pixel ratios in three datasets, with smaller ratios indicating smaller smoke. It can be seen that the SYN70K dataset hardly contains small smoke images, while SMOKE5K only has a small fraction of them. In contrast, the majority of our SmokeSeg dataset is composed of small smoke images. Implementation Details Datasets. We conduct experiments on three large smoke datasets: SYN70K (Yuan et al. 2019b), SMOKE5K (Yan, Zhang, and Barnes 2022) and our SmokeSeg. To make a fair comparison, we train our models on SYN70K and SMOKE5K and test them on their respective test sets. Then we provide a new benchmark on our SmokeSeg dataset. Evaluation Metrics. Following previous smoke segmentation literature (Yan, Zhang, and Barnes 2022; Yuan et al. 2019b), we evaluate our model utilizing the mMse (M) and Frequency (×10!) Proportion of Smoke Pixels 0 0.2 0.4 0.6 0.8 1.0 0 1.0 2.0 3.0 SmokeSeg SMOKE5K SYN70K Figure 5: Distribution of smoke pixel ratio in an image of three datasets. This indicates that SmokeSeg contains a much higher proportion of small smoke images than the other two datasets. mIoU metric on SYN70K and adopt mMse (M) and Fmeasure (Fβ) on SMOKE5K. For our SmokeSeg, we employ the three metrics mentioned above (Fβ, mIoU, M) to comprehensively evaluate the performance of the model. Training Details. We implement our FoSp on MMSegmentation with a single NVIDIA RTX 3090Ti GPU. Each image is resized to 512 × 512. Random crop and random flip are adopted during the training. We use the AdamW (Loshchilov and Hutter 2017) optimizer and set the learning rate to 6e-5 with 0.01 weight decay. We train 40k iterations on SMOKE5K and SmokeSeg, and 80k iterations on SYN70K, with all batch sizes set to 6. Comparison with State-of-the-Art Methods SmokeSeg. As there is no complete open-source smoke segmentation code, we have chosen several strong baselines (Chen et al. 2018; Zhao et al. 2017; Yuan, Chen, and Wang 2020; Zhang et al. 2018; Xie et al. 2021) that perform well in semantic segmentation for comparison. In order to explore the abilities of various methods for the smoke of different scales, especially for early smoke, we divide the test set into three parts based on the proportion of smoke pixels in an image, namely small, medium, and large, and test them separately. We use the evaluation metric of ”small” to measure the early smoke segmentation capability of our model. Small : δ < 0.5% Medium : 0.5% < δ < 2.5% Large : δ > 2.5% , (11) where δ is the smoke pixel ratio in an image. As shown in Table 1, the comparison is divided into two parts, with the top part of the table showing the comparison of methods with lightweight backbones, and the bottom part showing the comparison of methods with medium-sized backbones. From the comparison of the lightweight methods, our FoSp using the least number of parameters surpasses other methods with higher parameter counts in the vast majority of cases. 
In the comparison of the mediumsized methods, our FoSp significantly outperforms all other methods, especially in the comparison of small smoke, where our method is 7.71% higher than the second-best The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6625 Total Small Medium Large Method Fβ ↑ mIoU ↑ M ↓ Fβ ↑ mIoU ↑ M ↓ Fβ ↑ mIoU ↑ M ↓ Fβ ↑ mIoU ↑ M ↓ Params FCN-s (2015) 54.12 42.06 0.0105 33.93 25.00 0.0017 64.08 50.65 0.0051 67.24 52.96 0.0271 9.71 DeepLabv3+-s (2018) 59.56 46.71 0.0095 44.06 32.77 0.0014 68.35 54.76 0.0046 68.37 54.46 0.0246 15.23 PSPNet-s (2017) 57.28 44.76 0.0096 40.86 30.70 0.0015 63.77 50.09 0.0051 69.72 55.68 0.0245 13.61 OCRNet-s (2020) 62.64 50.11 0.0087 50.99 39.47 0.0013 71.47 55.09 0.0043 70.45 57.33 0.0225 12.07 SegFormer-s (2021) 60.73 48.19 0.0097 48.21 35.18 0.0015 70.22 56.81 0.0042 67.52 54.18 0.0256 6.38 FoSp-s (ours) 64.26 51.44 0.0089 51.35 39.57 0.0012 72.47 58.95 0.0041 70.60 57.31 0.0234 5.79 FCN (2015) 65.41 51.89 0.0089 55.30 41.58 0.0013 70.72 57.31 0.0043 71.64 58.22 0.0233 49.48 PSPNet (2017) 65.80 52.42 0.0086 55.27 42.01 0.0012 71.44 58.13 0.0042 72.15 58.54 0.0224 48.96 EncNet (2018) 66.54 53.15 0.0088 58.09 44.86 0.0012 71.16 57.74 0.0042 71.54 57.96 0.0232 35.87 DeepLabv3+ (2018) 67.26 53.85 0.0084 57.00 43.70 0.0013 73.34 59.80 0.0039 72.79 59.39 0.0219 43.58 CCNet (2019) 64.45 51.42 0.0090 52.94 40.59 0.0015 69.79 56.26 0.0046 72.31 59.01 0.0231 49.81 OCRNet (2020) 65.13 52.66 0.0085 52.60 41.04 0.0012 72.24 59.34 0.0039 72.25 59.16 0.0224 70.37 SegFormer (2021) 67.99 54.98 0.0088 58.32 45.72 0.0017 74.34 61.37 0.0038 72.50 58.95 0.0235 44.60 FoSp (ours) 72.05 59.03 0.0079 66.03 52.68 0.0010 75.90 62.99 0.0037 74.98 62.24 0.0208 47.52 Table 1: Performance on the test set of SmokeSeg. The best results: bold. The second-best results: underline. DS01 DS02 DS03 Method M mIoU M mIoU M mIoU SMD 0.32 62.88 0.34 61.50 0.33 62.09 TBFCN 0.30 66.67 0.32 65.85 0.31 66.20 LRN 0.31 66.43 0.31 67.71 0.30 67.46 ESPNet 61.85 61.90 62.77 LKM 0.27 75.82 0.28 74.93 0.27 75.39 RefineNet 0.25 77.16 0.26 76.75 0.25 77.52 PSPNet 0.24 78.71 0.25 78.01 0.24 78.39 CCL 0.23 78.87 0.25 77.95 0.24 78.55 DFN 0.23 80.87 0.24 79.90 0.23 80.60 DSS 0.27 71.04 0.29 70.01 0.29 69.81 W-Net 0.27 73.06 0.25 73.97 0.26 73.36 CCENet 74.67 75.24 76.01 Trans-BVM 0.12 0.14 0.13 FoSp (ours) 0.10 83.00 0.11 81.81 0.10 82.80 Table 2: Performance on three test sets of SYN70K. SegFormer (Xie et al. 2021) on Fβ, demonstrating that our method can better recognize early smoke. Fig. 1g shows the parameters (the size of the bubble) and computational complexity of different models. SYN70K. We evaluate our model on their three synthetic test sets (DS01, DS02, and DS03). As shown in Table 2, our model outperforms the previous state-of-the-art method Trans-BVM on all three test sets. SMOKE5K. We evaluate our model on their test set of 400 real smoke images. As demonstrated in Table 3, our approach surpasses previous state-of-the-art Trans-BVM and achieves 2.5% improvement on Fβ. Ablation Study To further investigate the role of each module in our proposed FoSp, we conduct ablation experiments on the SmokeSeg dataset. As shown in Table 4, we utilize SegFormer (Xie et al. 2021) as the baseline for our study. In order to investigate the influence of individual modules on the miss detection rate and precision of early smoke detection, we augment the evaluation metrics by including recall (for miss detection rate) and precision. 
Fβ is adopted as the Method Fβ M F3Net (Wei, Wang, and Huang 2020) 67.0 0.004 BASNet (Qin et al. 2019) 73.3 0.005 SCRN (Wu, Su, and Huang 2019) 76.9 0.003 ITSD (Zhou et al. 2020) 77.4 0.003 UCNet (Zhang et al. 2020) 78.7 0.003 Trans-BVM (Yan, Zhang, and Barnes 2022) 79.1 0.002 FoSp (ours) 81.6 0.002 Table 3: Performance on the test set of SMOKE5K. Smoke FM & GT Low Res. High Res. Mid Res. Figure 6: Qualitative results of focus module. The blue region represents the focus map (FM), while the red denotes the ground truth (GT). comprehensive evaluation metric. Effect of Focus Module. Methods (b) and (c) in Table 4 demonstrate the role of the focus loss and focus module. It can be observed that after incorporating the focus loss and focus module, a substantial enhancement on recall is observed, particularly in the segmentation of small smoke particles. As shown in Fig. 6, high-resolution features can capture fine details of the smoke edges, whereas low-resolution features can capture the overall characteristics of the smoke body. By establishing bidirectional cascade between these two types of features, a focus map can be generated to fully The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6626 Module Metrics Metrics (small) Method Baseline Lfocus Focus Separation Fusion Recall Precision Fβ Recall Precision Fβ (a) ✓ 71.77 71.34 67.99 60.82 63.29 58.32 (b) ✓ ✓ 73.53 71.55 68.90 65.06 63.06 60.99 (c) ✓ ✓ ✓ 75.60 71.50 70.00 68.52 62.86 62.23 (d) ✓ ✓ ✓ ✓ 73.82 75.50 71.50 66.24 71.16 65.77 (e) ✓ ✓ ✓ ✓ ✓ 74.60 75.20 72.05 68.24 68.56 66.03 Table 4: Ablation study of our FoSp. Method Fβ mIoU M SegFormer 67.99 54.98 0.0088 Foreground only 68.22 54.94 0.0087 FoSp (image-level) 71.04 ↑3.05 57.81 ↑2.83 0.0082 FoSp (feature-level) 72.05 ↑4.06 59.03 ↑4.05 0.0079 Table 5: Experiments on different separation levels. cover the smoke area and suppress background interference. Effect of Separation Module. Method (d) in Table 4 demonstrates the role of the separation module. This method of separating the smoke foreground from a smoke-free background results in a notable increase in precision, particularly in the segmentation of small smoke particles, where precision improved by 7.87% compared to the baseline and by 8.30% compared to the focus module only. The foreground feature in Fig. 7 demonstrates that the extracted smoke foreground feature closely resembles the shape of the smoke itself, particularly the fine edges of the smoke. To further explore the role of the foreground features, we conduct an experiment in which only the foreground features are used for prediction in the decoder head. As shown in Table 5, the ”Foreground only” method even outperforms the SegFormer (Xie et al. 2021) which uses the full original features on Fβ metric. This demonstrates the immense potential of the foreground feature in our FoSp. Effect of Domain Fusion Module. Method (e) in Table 4 has demonstrated the effectiveness of domain fusion. After incorporating the focus module, the model is inclined towards improving recall, while adding the separation module tends to improve precision. Furthermore, after including the domain fusion module, a better balance between these two metrics is achieved, resulting in the best performance on Fβ metric. Compared to the baseline, the model shows a significant improvement of 4.09% (Fβ) on the entire test set, and an even more pronounced improvement of 7.71% on the test set of small smoke particles. Feature-level Separation vs Image-level Separation. Fig. 
8 has shown the performance of the image-level separation. Although image-level separation methods can split the foreground of smoke more precisely, it is computationally expensive to restore the complete image, and downsampling is required to match the feature dimension of the original image. We have conducted experiments on both feature-level and image-level separation methods, as shown in Table 5. It can be observed that the FoSp using image-level separation outperformed SegFormer, which first proves the efSmoke FM & GT Separated BG Separated FG Prediction Figure 7: Qualitative results of separation module. Smoke Separated FG Image Smoke Separated FG Image Figure 8: Qualitative results of image-level separation. fectiveness of the FoSp concept. However, compared to the feature-level separation FoSp, the image-level FoSp results in a lower Fβ and mIoU. Therefore, we think feature-level separation is the optimal choice for our FoSp. Conclusion In this paper, we propose a FoSp for early smoke segmentation (ESS). Concentrating on the formulation of smoke, we first determine the scope of the smoke, and then separate the smoke foreground from the smoke-free background, increasing the contrast between the background and foreground to obtain the complete and refined segmentation map. Furthermore, we provide a SmokeSeg dataset for ESS to promote the development of this field. Surprisingly, we find that sometimes using image-level separation can provide a certain degree of transparency information at the edges of smoke, as shown in Fig. 8. This has surpassed the scope of binary segmentation and we believe this method can be applied to more sophisticated segmentation tasks such as image matting. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6627 Acknowledgments This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 62173143 and Grant 61973122. References Borji, A.; Cheng, M.-M.; Jiang, H.; and Li, J. 2015. Salient object detection: A benchmark. IEEE transactions on image processing, 24(12): 5706–5722. Cao, Y.; Tang, Q.; and Lu, X. 2022. STCNet: spatiotemporal cross network for industrial smoke detection. Multimedia Tools and Applications, 81(7): 10261–10277. Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; and Adam, H. 2018. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV), 801–818. Cheng, B.; Misra, I.; Schwing, A. G.; Kirillov, A.; and Girdhar, R. 2022. Masked-attention mask transformer for universal image segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 1290–1299. Dewangan, A.; Pande, Y.; Braun, H.-W.; Vernon, F.; Perez, I.; Altintas, I.; Cottrell, G. W.; and Nguyen, M. H. 2022. FIgLib & SmokeyNet: Dataset and Deep Learning Model for Real-Time Wildland Fire Smoke Detection. Remote Sensing, 14(4): 1007. Hsu, Y.-C.; Huang, T.-H. K.; Hu, T.-Y.; Dille, P.; Prendi, S.; Hoffman, R.; Tsuhlares, A.; Pachuta, J.; Sargent, R.; and Nourbakhsh, I. 2021. Project RISE: recognizing industrial smoke emissions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 14813–14821. Jia, Y.; Du, H.; Wang, H.; Yu, R.; Fan, L.; Xu, G.; and Zhang, Q. 2019. Automatic early smoke segmentation based on conditional generative adversarial networks. Optik, 193: 162879. Jing, T.; Meng, Q.-H.; and Hou, H.-R. 2023. 
SmokeSeger: A Transformer-CNN coupled model for urban scene smoke segmentation. IEEE Transactions on Industrial Informatics, 1–12. Li, X.; Chen, Z.; Wu, Q. J.; and Liu, C. 2018. 3D parallel fully convolutional networks for real-time video wildfire smoke detection. IEEE Transactions on Circuits and Systems for Video Technology, 30(1): 89–103. Lin, G.; Milan, A.; Shen, C.; and Reid, I. 2017. Refinenet: Multi-path refinement networks for high-resolution semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1925–1934. Liu, Y.; Sun, P.; Wergeles, N.; and Shang, Y. 2021. A survey and performance evaluation of deep learning methods for small object detection. Expert Systems with Applications, 172: 114602. Long, J.; Shelhamer, E.; and Darrell, T. 2015. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3431–3440. Loshchilov, I.; and Hutter, F. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. Mahmoud, M. A. I.; and Ren, H. 2019. Forest fire detection and identification using image processing and SVM. Journal of Information Processing Systems, 15(1): 159–168. Mei, H.; Dong, B.; Dong, W.; Yang, J.; Baek, S.-H.; Heide, F.; Peers, P.; Wei, X.; and Yang, X. 2022. Glass segmentation using intensity and spectral polarization cues. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12622–12631. Muhammad, K.; Ahmad, J.; and Baik, S. W. 2018. Early fire detection using convolutional neural networks during surveillance for effective disaster management. Neurocomputing, 288: 30–42. Najibi, M.; Singh, B.; and Davis, L. S. 2019. Autofocus: Efficient multi-scale inference. In Proceedings of the IEEE/CVF international conference on computer vision, 9745–9755. Qin, X.; Zhang, Z.; Huang, C.; Gao, C.; Dehghan, M.; and Jagersand, M. 2019. Basnet: Boundary-aware salient object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 7479–7489. Robinson, D. A. 1979. Smoke Detection: Critical Element of a University Residential Fire Safety Program. Journal of the American College Health Association, 27(5): 265–66. Strudel, R.; Garcia, R.; Laptev, I.; and Schmid, C. 2021. Segmenter: Transformer for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 7262–7272. Uzkent, B.; Yeh, C.; and Ermon, S. 2020. Efficient object detection in large images using deep reinforcement learning. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, 1824–1833. Wang, H.; Zhou, L.; and Wang, L. 2019. Miss detection vs. false alarm: Adversarial learning for small object segmentation in infrared images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8509–8518. Wang, S.; He, Y.; Zou, J. J.; Zhou, D.; and Wang, J. 2014. Early smoke detection in video using swaying and diffusion feature. Journal of Intelligent & Fuzzy Systems, 26(1): 267– 275. Wang, Y.; Lv, K.; Huang, R.; Song, S.; Yang, L.; and Huang, G. 2020. Glance and focus: a dynamic approach to reducing spatial redundancy in image classification. Advances in Neural Information Processing Systems, 33: 2432–2444. Wei, J.; Wang, S.; and Huang, Q. 2020. F3Net: fusion, feedback and focus for salient object detection. In Proceedings of the AAAI conference on artificial intelligence, volume 34, 12321–12328. Wu, Z.; Su, L.; and Huang, Q. 2019. 
Stacked cross refinement network for edge-aware salient object detection. In Proceedings of the IEEE/CVF international conference on computer vision, 7264–7273. Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J. M.; and Luo, P. 2021. SegFormer: Simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems, 34: 12077–12090. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6628 Xie, Z.; Zhang, Z.; Zhu, X.; Huang, G.; and Lin, S. 2020. Spatially adaptive inference with stochastic feature sampling and interpolation. In European conference on computer vision, 531–548. Springer. Xing, D.; Zhongming, Y.; Lin, W.; and Jinlan, L. 2015. Smoke image segmentation based on color model. Journal on Innovation and Sustainability RISUS, 6(2): 130–138. Yan, S.; Zhang, J.; and Barnes, N. 2022. TransmissionGuided Bayesian Generative Model for Smoke Segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 3009–3017. Yang, C.; Huang, Z.; and Wang, N. 2022. QueryDet: Cascaded sparse query for accelerating high-resolution small object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13668– 13677. Yu, L.; Mei, H.; Dong, W.; Wei, Z.; Zhu, L.; Wang, Y.; and Yang, X. 2022. Progressive glass segmentation. IEEE Transactions on Image Processing, 31: 2920–2933. Yuan, C.; Liu, Z.; and Zhang, Y. 2019. Learning-based smoke detection for unmanned aerial vehicles applied to forest fire surveillance. Journal of Intelligent & Robotic Systems, 93(1): 337–349. Yuan, F.; Dong, Z.; Zhang, L.; Xia, X.; and Shi, J. 2022. Cubic-cross convolutional attention and count prior embedding for smoke segmentation. Pattern Recognition, 131: 108902. Yuan, F.; Zhang, L.; Xia, X.; Huang, Q.; and Li, X. 2019a. A wave-shaped deep neural network for smoke density estimation. IEEE transactions on image processing, 29: 2301– 2313. Yuan, F.; Zhang, L.; Xia, X.; Huang, Q.; and Li, X. 2021. A gated recurrent network with dual classification assistance for smoke semantic segmentation. IEEE Transactions on Image Processing, 30: 4409–4422. Yuan, F.; Zhang, L.; Xia, X.; Wan, B.; Huang, Q.; and Li, X. 2019b. Deep smoke segmentation. Neurocomputing, 357: 248–260. Yuan, Y.; Chen, X.; and Wang, J. 2020. Object-contextual representations for semantic segmentation. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part VI 16, 173–190. Springer. Zhang, H.; Dana, K.; Shi, J.; Zhang, Z.; Wang, X.; Tyagi, A.; and Agrawal, A. 2018. Context encoding for semantic segmentation. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 7151–7160. Zhang, J.; Fan, D.-P.; Dai, Y.; Anwar, S.; Saleh, F. S.; Zhang, T.; and Barnes, N. 2020. UC-Net: Uncertainty inspired RGB-D saliency detection via conditional variational autoencoders. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8582–8591. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; and Jia, J. 2017. Pyramid scene parsing network. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2881– 2890. Zhou, H.; Xie, X.; Lai, J.-H.; Chen, Z.; and Yang, L. 2020. Interactive two-stream decoder for accurate and fast saliency detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 9141–9150. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6629 | 2024 | 736 |
18,558 | How to Evaluate the Generalization of Detection? A Benchmark for Comprehensive Open-Vocabulary Detection Yiyang Yao1, Peng Liu2, Tiancheng Zhao3, Qianqian Zhang2, Jiajia Liao3, Chunxin Fang3, Kyusong Lee3, Qing Wang1 1 Northwestern Polytechnical University 2 Linker Technology Research Co. Ltd 3 Binjiang Institute of Zhejiang University [email protected], {liu peng,zhang qianqian}@hzlh.com, {tianchez,liaojiajia,fangchunxin,kyusongl}@zju-bj.com, [email protected] Abstract Object detection (OD) in computer vision has made significant progress in recent years, transitioning from closed-set labels to open-vocabulary detection (OVD) based on largescale vision-language pre-training (VLP). However, current evaluation methods and datasets are limited to testing generalization over object types and referral expressions, which do not provide a systematic, fine-grained, and accurate benchmark of OVD models’ abilities. In this paper, we propose a new benchmark named OVDEval, which includes 9 subtasks and introduces evaluations on commonsense knowledge, attribute understanding, position understanding, object relation comprehension, and more. The dataset is meticulously created to provide hard negatives that challenge models’ true understanding of visual and linguistic input. Additionally, we identify a problem with the popular Average Precision (AP) metric when benchmarking models on these fine-grained label datasets and propose a new metric called Non-Maximum Suppression Average Precision (NMS-AP) to address this issue. Extensive experimental results show that existing top OVD models all fail on the new tasks except for simple object types, demonstrating the value of the proposed dataset in pinpointing the weakness of current OVD models and guiding future research. Furthermore, the proposed NMS-AP metric is verified by experiments to provide a much more truthful evaluation of OVD models, whereas traditional AP metrics yield deceptive results. Data is available at https://github.com/om-ai-lab/OVDEval Introduction Open vocabulary detection (OVD) models have experienced rapid development in recent years, with numerous innovative techniques being introduced to the field. Novel models such as GLIP (Li et al. 2022b), Grounding DINO (Liu et al. 2023) and OmDet (Zhao et al. 2022a) have introduced new vision-language learning methods such as modeling detection as visual grounding (Kamath et al. 2021; Li et al. 2022b), pre-training with coarse image-text pairs (Dou et al. 2022), and multi-task learning with a variety of detection tasks (Zhao et al. 2022a). As a result, for the first time, we can achieve strong zero-shot object detection (OD) on popular datasets such as Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. COCO (Lin et al. 2014), even surpassing the performance of some of the supervised methods (Liu et al. 2023). Users can simply use natural language to specify the desired targets and OVD models can detect the described targets on the fly, which opens doors for many new applications such as interactive image-editing (Shen et al. 2023), Augmented Reality (Li et al. 2023) and robotics (Shah et al. 2023). Meanwhile, current common approaches to evaluate OVD models include zero-shot/few-shot testing on OD dataset with common objects like COCO (Lin et al. 2014), OD dataset with long-tail objects like LVIS (Gupta, Dollar, and Girshick 2019), grounding such as Flickr30K (Plummer et al. 
2015) and referral expression comprehension (REC) such as RefCOCO (Yu et al. 2016). These datasets were challenging for traditional OD research, but they no longer serve as a sufficiently challenging benchmark for future OVD methods, for the following reasons:
• Lack of systematic probing of a model's generalization ability: An ideal OVD model should understand the fine-grained semantics in the language prompt and align the language with visual features. It is therefore necessary to probe the OVD model along various linguistic aspects, such as object type, visual attributes, and object relationships, to quantify its generalization to various degrees of prompt complexity.
• Lack of hard negatives for real-world usage: Existing grounding and REC data assume the text prompt is paired with the image. The OVD model is only required to localize the entities mentioned in the caption, without the need to discriminate against hard negatives. However, real-world usage requires an OVD model to detect the described object without knowing whether the caption is related to the image at all.
To address the above issues, this paper introduces OVDEval to provide a comprehensive evaluation of OVD models and test their robustness against hard negatives. OVDEval is inspired by behavioral testing (Ribeiro et al. 2020; Zhao et al. 2022b) and consists of 9 large datasets that cover 6 linguistic aspects: object, proper noun, attribute, position, relationship, and negation. All data annotations are carefully produced by human experts to guarantee data quality. Additionally, the sub-datasets are meticulously crafted to ensure that all negative labels are hard. As a result, OVDEval is able to rigorously test a model's true understanding of a given aspect, preventing models from achieving high scores on a particular dimension by exploiting data bias.
Besides the proposed dataset, this work also proposes a new evaluation metric named Non-Maximum Suppression Average Precision (NMS-AP). We identify the Inflated AP Problem, where, even with high-quality hard negatives, a poor OVD model can still achieve a deceptively high AP score due to limitations in the AP calculation process. The proposed NMS-AP effectively resolves the Inflated AP Problem and offers a truthful evaluation of OVD models.
We compared six strong baseline models on the proposed OVDEval dataset. Experimental results show that current state-of-the-art (SOTA) OVD models achieve strong results only on simple object detection, and their performance drops significantly on visual attribute understanding, commonsense knowledge, and other aspects. This demonstrates the importance of a comprehensive and truthful benchmark that reveals the weaknesses of SOTA systems and guides the direction of future improvement. The analysis also confirms the effectiveness of the proposed NMS-AP metric, whereas the conventional AP score is 30% higher than the models' actual performance. Further analysis indicates that current OVD models can only detect object types reliably and shows how OVD models can deceive conventional AP metrics by predicting multiple bounding boxes for each potential target object.
The contributions of our work are summarized as follows:
• We introduce the first OVD evaluation benchmark that comprehensively tests model abilities across six linguistic aspects with complex language prompts and well-designed hard negatives.
• We identify the inflated AP problem that applies to any OVD model with traditional AP metric. • We propose NMS-AP, a novel evaluation metric that addresses the inflated AP score problem associated with traditional AP and we show NMS-AP provides a more accurate evaluation of OVD models’ performance when dealing with fine-grained described detection. • We show extensive experiment results that reveal the limitations of current SOTA OVD models and verify the effectiveness of the proposed metric. Related Work Progression from Fixed Labels to Open Vocabulary Expressions: Traditional object detectors, such as Faster RCNN (Ren et al. 2015) and YOLO (Redmon et al. 2016), rely on a closed-set vocabulary and are trained on datasets like COCO (Lin et al. 2014) and Pascal VOC (Hoiem, Divvala, and Hays 2009) with predefined categories. Over time, the number of labels increased, with Object365 (Shao et al. 2019) introducing 365 labels and LVIS (Gupta, Dollar, and Girshick 2019) surpassing a thousand. Also, datasets like ODinW (Li et al. 2022a) focus on wilderness objects with 35 different domains. V3Det (Wang et al. 2023) further broadened object detection capabilities across an extensive range of categories, paving the way for OVD. In addition to object detection, a growing body of research is dedicated to referral expression comprehension (REC) and visual grounding. REC focuses on identifying objects based on textual descriptions provided. Notable datasets in this area include RefCOCO (Yu et al. 2016), PhraseCut (Wu et al. 2020), Flickr30K (Plummer et al. 2015) and Visual Genome (Krishna et al. 2017). The Described Object Detection (DOD) introduced recently, combines the principles of object visual detection and REC, with the goal of detecting objects across various described categories. However, the above-mentioned datasets often lack hard negatives, which can lead to models detecting objects based on general terms rather than recognizing fine-grained details. Moreover, existing datasets have not investigated the model’s ability to utilize common sense knowledge for detecting objects such as landmarks, logos, and celebrities. Endeavor for Systematic Model Evaluations: Benchmark scores often do not provide a comprehensive understanding of a model’s capabilities, as they tend to present a superficial evaluation that can be difficult to interpret. Consequently, researchers have sought to scrutinize machine learning (ML) models with greater precision and granularity. In the realm of natural language processing (NLP), CheckList (Ribeiro et al. 2020) evaluates a wide range of linguistic competencies, revealing the limitations of numerous leading NLP models. For computer vision, the Vision CheckList (Du et al. 2022) assists system developers in understanding a model’s potential by introducing various transformation techniques to generate an extensive array of test samples. In the vision-language multimodal domain, VLChecklist (Zhao et al. 2022b) serves as a framework for examining the proficiency of vision-language processing (VLP) models. In the field of OD, studies often report conventional Average Precision (AP) scores. However, without an in-depth analysis, these scores can be challenging to understand. To address this limitation, we propose a novel evaluation approach that investigates a model’s proficiency across clearly defined dimensions. Additionally, we introduce an evaluation metric designed to tackle the problem of deceptively high AP scores. 
OVDEval Benchmark The utilization of commonly employed OD datasets is associated with certain limitations. Firstly, evaluating OD performance solely based on AP across all labels in these datasets provides only a basic assessment. The specific capabilities of the model, such as accurately identifying object positions, have not been thoroughly evaluated. Moreover, in order to maintain linguistic label diversity and comprehensiveness, the distinctions between labels within the same dataset are typically coarse-grained and easily distinguishable. However, the OD task in the real world is much more challenging than merely detecting obvious objects or expressions. It is crucial to include hard negative samples that possess similar linguistic meanings but refer to different objects. Considering these concerns, we propose a new comprehensive benchmark dataset called OVDEval. OVDEval is divided into 9 sub-datasets, each focusing on evaluating the OD capabilities across 6 aspects: object, proper noun, attribute, position, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6631 relationship, and negation. The utilization of this benchmark dataset offers 3 significant benefits: • Detailed understanding of OD models: By evaluating OD models across different linguistic aspects, we can gain a more detailed understanding of their performance. This allows us to gain insights into the strengths and weaknesses of OD models, thereby facilitating the identification of areas for improvement. • Commonsense understanding performance: OVDEval is specifically designed with linguistic queries, including commonsense knowledge-based labels, which enable us to assess the model’s commonsense capabilities in the context of multimodal OVD. This evaluation sheds light on how well the model interprets knowledge. • Fine-grained hard negative labels: we have carefully selected hard negative samples that conflict with the ground truth labels for each object, which provide a straightforward assessment of the model’s performance in specific aspects. Dataset Description We comprise the benchmark dataset from three viewpoints. Firstly, in line with existing datasets that primarily focus on evaluating the detection of common objects, we employ the COCO dataset to assess the models’ general ability in this domain. Additionally, we aim to investigate the models’ capacity to leverage external knowledge and common sense. Therefore, landmark, logo, and celebrity, which require knowledge in both vision and language are implemented. Samples for the three aspects are shown in Figure 1. Furthermore, to delve into the models’ proficiency in localizing fine-grained details, we divide the dataset into attributes (color and material), relationship, position, and negation aspects. Figure 2 shows the detail-oriented dataset samples with corresponding fine-grained hard negative samples, which essentially raise the detection difficulty. Finally, 9 sub-dataset across 6 aspects are collected and described as follows: Figure 1: Samples of Proper noun datasets. • Object is utilized to evaluate the general capability in identifying common objects on the COCO Val 2017 (Lin et al. 2014), which covers 80 common object categories. • Proper noun can unveil a model’s comprehension of commonsense knowledge including famous landmarks, renowned logos, and celebrities. • Attribute is used to assess OVD model’s proficiency of distinguishing object characteristics. Specifically, color and material are employed as representing attribute aspects. 
Figure 2: Detail-oriented dataset samples. Ground-truth labels are annotated with red color, and fine-grained hard negative samples are shown in black. • Position aims to evaluate identifying specific objects among multiple visually similar items within a given image. The evaluation entails determining the target object based on the spatial relationships with other described object expressions. • Relationship involves the examination of interactions between humans and other objects to comprehend both active and passive relationships among multiple objects. • Negation focuses on identifying objects expressed negatively, like spotting kitchen staff not wearing gloves. This checks the model’s skill in detecting objects expressed in a negated context. Dataset Collection Process Image Collection We collected varied images from three main sources. We used popular datasets, notably COCO and HICO (Chao et al. 2018). For evaluation, the COCO Val 2017 was directly used. For relationship, the HICO dataset was the key source. After selecting the top-most frequent interaction label and excluding it, this selection process was repeated 10 times. We also changed active expressions to passive ones, ensuring two distinct labels for each sample image. For the color sub-dataset, we identified the top 50 objects from the visual genome (VG) dataset (Krishna et al. 2017), labeling them using Oscar (Li et al. 2020). This enabled labeling objects from VG with colors. We concentrated on six object categories and six distinct colors, leading to 36 object-color combinations. Images were then randomly chosen from VG based on Oscar’s labels. For other datasets (landmark, logo, etc.), images came from the Laion-400m dataset (Schuhmann et al. 2021). We began by identifying key terms for each subset. Using CLIP (Radford et al. 2021), a top-tier image-text match model, images were sourced based on these keywords. To ensure variety, we crafted specific search prompts, considering context The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6632 and diversity. For position and negation, we added terms like ”multiple” to get images with several similar items. Hard Negative Labels We have implemented a novel approach that incorporates fine-grained hard negative labels for each linguistic aspect. These carefully selected hard negative labels are specifically designed to challenge the models and prevent them from achieving high scores on particular aspects without a genuine understanding. • For color, variations in colors with the same object category are used to serve as negative labels. This approach exposes the OVD models to different color representations of the same object, thereby testing their ability to accurately distinguish and classify objects based on color. • For material, we maintain consistency in the object category while introducing variations in materials. • For relationship, we maintain the same subject and object entities but alter the verbs used to describe their relationship. • In position, we introduce changes in position words to serve as negative labels. For example, the words left can be replaced with right, above, under, front, back, and in. • For negation, we remove the word ”not” from the positive labels as negative labels. The datasets with detail-oriented negatives essentially challenge the OVD models toward the advancement of object understanding in natural language processing. 
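To make the hard-negative construction above concrete, the position and negation rules can be approximated with simple string transformations. The sketch below is illustrative only and is not the authors' annotation tooling; the fixed position-word list, the single-swap simplification, and the example phrasings are assumptions.

```python
POSITION_WORDS = {"left", "right", "above", "under", "front", "back", "in"}

def position_negatives(label: str) -> list:
    """Swap the position word in a positive label for each alternative, e.g.
    'dog to the left of the sofa' -> 'dog to the right of the sofa', etc."""
    tokens = label.split()
    negatives = []
    for i, tok in enumerate(tokens):
        if tok.lower() in POSITION_WORDS:
            for alt in POSITION_WORDS - {tok.lower()}:
                negatives.append(" ".join(tokens[:i] + [alt] + tokens[i + 1:]))
            break  # only the first position word is swapped in this sketch
    return negatives

def negation_negative(label: str) -> str:
    """Remove 'not' from a negated label, e.g.
    'kitchen staff not wearing gloves' -> 'kitchen staff wearing gloves'."""
    return " ".join(t for t in label.split() if t.lower() != "not")
```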
Manual Annotation
To ensure the accuracy and reliability of the dataset, we engaged a team of OD annotation experts to manually annotate the collected images, following a rigorous annotation process and a thorough quality inspection process. During annotation, any images with ambiguous labels were carefully identified and filtered out, guaranteeing the integrity of the final dataset. All bounding boxes for the corresponding objects were annotated.
Statistics
As shown in Table 1, the full OVDEval dataset comprises 9 distinct sub-datasets, collectively offering a total of 20K high-quality images accompanied by 3K meticulously annotated labels. The statistics of each sub-dataset are provided in Table 1. Notably, each sub-dataset contains between 1K and 5K images, ensuring the diversity and representativeness of the samples. While some sub-datasets feature proper nouns with a limited number of labels, all other sub-datasets can be considered open-set labels. Moreover, these sub-datasets incorporate extremely hard negative labels, further pushing the boundaries of model performance and evaluation. The inclusion of open-set labels and hard negative labels within the majority of the sub-datasets enhances the dataset's realism and reflects the complexities encountered in real-world scenarios.
The Proposed Evaluation Metric
The Inflated AP Problem
AP is defined as the area under the precision-recall curve. This metric evaluates a model's performance by considering the trade-off between precision and recall. Recent OD research has predominantly used COCO AP as the major benchmark metric (Zhang et al. 2022; Zong, Song, and Liu 2023). The COCO mean Average Precision (mAP) calculation uses a 101-point interpolated AP definition. Specifically, for COCO, AP is determined as the average across multiple Intersection-over-Union (IoU) thresholds that determine a positive match. AP@[.5:.95] represents the average AP for IoU values ranging from 0.5 to 0.95, with a step size of 0.05.
Consider a scenario in which an OVD model demonstrates good zero-shot performance in detecting objects but does not understand contextual descriptions at all. Such a model can deceive traditional AP metrics and obtain a high score by generating multiple predicted bounding boxes for each target object with all candidate labels. Assume an image with two annotated ground-truth instances, labeled red car and blue car, respectively. The aforementioned model predicts 4 bounding boxes, generating 2 for each target object and assigning both candidate labels to each box. The IoUs between the predictions and the corresponding ground-truth instances are assumed to be greater than 0.95. As a result, the precision and recall for each category can be derived as follows:

Precision = TP / (TP + FP) = 1 / (1 + 1) = 0.50,   (1)

Recall = TP / GT_num = 1 / 1 = 1.0,   (2)

where TP is the number of correctly predicted instances for a specific category, FP is the number of instances incorrectly predicted as belonging to that category, and GT_num is the total number of ground-truth instances of that category in the image. In the given scenario, since the IoU of every prediction is assumed to be greater than 0.95, the matching is identical across all IoU thresholds from 0.5 to 0.95, so the averaging over IoU thresholds can be ignored. Therefore, the AP of each category is 0.50, and consequently the mAP is also 0.50.
In this case, the model deceives the traditional AP metric into giving an mAP score of 0.50, even though it only detects the target objects without comprehending their descriptions. Here, the conventional COCO AP metric exhibits a vulnerability that we refer to as the Inflated AP Problem. During the stage of matching predictions with the ground truth to count TP and FP, only predictions that have the same label as the ground truth are considered. As a result, OVD models can obtain inflated AP scores by simply predicting multiple bounding boxes on a single object with all possible labels. The Inflated AP Problem can lead to misleading evaluations of OVD models, as it fails to capture the accuracy of the descriptive labels assigned to the objects. It is therefore essential to develop alternative evaluation metrics that consider both object detection and the understanding of linguistic descriptions to provide a more robust assessment of OVD models.

| | COCO (Object) | Color (Attribute) | Material (Attribute) | Landmark (Proper noun) | Logo (Proper noun) | Celebrity (Proper noun) | Relationship | Position | Negation |
| Images | 5,000 | 1,170 | 2,124 | 1,533 | 1,935 | 2,244 | 2,169 | 2,109 | 1,858 |
| Bboxes | 36,781 | 3,421 | 5,358 | 1,709 | 2,329 | 2,244 | 8,190 | 2,150 | 3,785 |
| Labels | 80 | 36 | 90 | 9 | 9 | 10 | 319 | 7,301 | 2,414 |
| Avg. negative labels | — | 5.01 | 8.73 | 8.00 | 8.00 | 9.00 | 7.65 | 3.06 | 1.00 |
| Avg. label tokens | 6.03 | 11.56 | 11.01 | 14.37 | 11.24 | 12.15 | 24.14 | 47.08 | 27.34 |
| Avg. label words | 1.10 | 2.00 | 2.03 | 2.66 | 2.00 | 2.12 | 4.48 | 9.67 | 5.35 |
Table 1: Statistics of OVDEval for the 9 sub-datasets. OVDEval provides fine-grained annotations with hard negatives.

Figure 3: Examples of predictions from GLIP (a) before and (b) after class-ignored NMS, showing the limitation of current OVD models.

A Simple Fix: NMS-AP
To address the aforementioned issue, we propose a simple fix for the COCO AP metric, which we refer to as NMS-AP. It extends the traditional COCO AP metric by incorporating NMS (Girshick 2015), a technique used in OD to eliminate redundant bounding box predictions by selecting the most relevant ones based on their confidence scores and suppressing overlapping bounding boxes based on IoU. The specifics of NMS-AP are outlined in Algorithm 1 below.

Algorithm 1: NMS-AP Metric
Input: preds: predictions; GT: ground truth
1: pickedPreds = keepPreds = []
2: for k in GT do
3:   for p in preds do
4:     if IoU(p, k) > 0.5 then
5:       pickedPreds = pickedPreds ∪ p
6:     else
7:       keepPreds = keepPreds ∪ p
8:     end if
9:   end for
10:  keepPreds = keepPreds ∪ C-NMS(pickedPreds)
11: end for
12: mAP = AP(keepPreds, GT)
13: return mAP

In NMS-AP, instead of considering only the prediction with the highest confidence score for each object, we apply a class-ignored NMS (C-NMS) to remove redundant predictions that match the ground truth. To be specific, class-ignored NMS is applied to the predictions that exhibit an IoU > 0.5 with a ground-truth instance. This ensures that multiple bounding boxes predicted for the same object are appropriately handled and only the prediction with the highest confidence is used. In an ideal scenario with a flawless OVD model, the model should predict a bounding box with the correct label and the highest confidence score for each ground-truth instance. Consequently, the application of class-ignored NMS removes only false positives, ensuring that such a model achieves a perfect score of 1.0.
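To clarify the mechanism of Algorithm 1, a minimal sketch of the class-ignored NMS filtering step is given below. It is not the official evaluation code: boxes are assumed to be [x1, y1, x2, y2] coordinates with a confidence score and label, matched predictions are pooled across ground-truth boxes for brevity (whereas Algorithm 1 applies C-NMS per ground-truth box), and the final COCO-style AP computation is left abstract.

```python
def iou(a, b):
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def class_ignored_nms(preds, thr=0.5):
    """NMS that ignores class labels: among overlapping boxes, keep only the
    most confident prediction regardless of its label.
    preds: list of dicts with keys 'box', 'score', 'label'."""
    preds = sorted(preds, key=lambda p: p["score"], reverse=True)
    kept = []
    for p in preds:
        if all(iou(p["box"], k["box"]) <= thr for k in kept):
            kept.append(p)
    return kept

def nms_ap_filter(preds, gt_boxes, iou_thr=0.5):
    """Filtering step of Algorithm 1 for one image: predictions overlapping a
    ground-truth box are reduced to the single most confident one; the result
    would then be scored with a standard COCO-style AP routine (not shown)."""
    kept, picked = [], []
    for p in preds:
        if any(iou(p["box"], g) > iou_thr for g in gt_boxes):
            picked.append(p)   # candidates that match some ground truth
        else:
            kept.append(p)     # unmatched predictions pass through unchanged
    return kept + class_ignored_nms(picked)
```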
However, in the case of a subpar model that struggles to comprehend complex linguistic descriptions, the application of class-ignored NMS may lead to a decrease in true positives and, consequently, in the NMS-AP score (Figure 3). This is because such a model fails to accurately predict the bounding boxes that correspond to the ground-truth instances, due to its limited understanding of the linguistic context. Note that NMS-AP is model-agnostic and can be applied to any OVD model: it simply takes a set of predictions and ground-truth bounding boxes and removes overlapping predictions adjacent to the ground truths.

Results and Analysis
We conducted experiments on 9 datasets across 6 aspects using several leading publicly available models: Detic (Zhou et al. 2022), MDETR (Kamath et al. 2021), GLIP (Li et al. 2022b), FIBER (Dou et al. 2022), OmDet (Zhao et al. 2022a), and Grounding DINO (Liu et al. 2023). Detailed model information, such as pre-training data, backbone, and the number of parameters, is provided in Table 2.

Main Results on NMS-AP on OVDEval
The experimental results, presented in Table 3, show that current models generally perform satisfactorily on the object task, with the exception of MDETR. This observation is consistent with earlier work that reported MDETR's low performance on the COCO dataset (Cai et al. 2022), and it indicates that most existing models possess strong object detection capabilities. However, all current models exhibit poor performance on the logo, landmark, and celebrity tasks of the proper noun aspect; in particular, the NMS-AP values are close to 0% on the celebrity task. Notably, Detic demonstrates impressive results on the logo and landmark tasks, even without employing a complex fusion strategy, while its performance is relatively weak on tasks involving longer descriptions.

| Model | Pre-train Data | Backbone | Params |
| Detic | ImageNet-21K, COCO, LVIS | Swin-B | 141.6M |
| MDETR | VG, Flickr30k, COCO image-text pairs | ResNet-101 | 185M |
| GLIP | FourODs, GoldG, CC3M+12M, SBU | Swin-L | 430.42M |
| FIBER | Flickr30k, MixedNoCOCO, O365 | Swin-B | 252.06M |
| OmDet | O365, GoldG, PhraseCut, HOI-A, VAW, RefCOCO | ConvNext-B | 241.5M |
| Grounding DINO | COCO, O365, GoldG, Cap4M, OpenImage, ODinW-35, RefCOCO | Swin-B | 232.9M |
Table 2: Relevant information for the different models, including pre-training data, backbone, and number of parameters.
Aspects Sub-datasets GLIP FIBER Grounding DINO Detic MDETR OmDet NMS-AP/AP NMS-AP/AP NMS-AP/AP NMS-AP/AP NMS-AP/AP NMS-AP/AP Object COCO 48.90 / 51.30 46.80 / 49.30 52.50∗/ 55.30∗ 45.30∗/ 45.80∗ 1.60 / 3.20 54.68 / 57.50 Logo 10.20 / 17.61 6.30 / 9.05 10.30 / 14.60 9.60 / 9.60 0.90 / 4.60 6.10 / 11.00 Landmark 20.30 / 36.36 11.00 / 16.99 15.10 / 23.40 30.00 / 30.08 1.80 / 7.80 26.30 / 32.38 Celebrity 4.60 / 8.24 0.80 / 3.31 0.70 / 2.00 0.00 / 0.00 1.10 / 4.80 1.80 / 6.36 Proper Noun Avg 11.70 / 20.74 6.03 / 9.78 8.70 / 13.33 13.20 / 13.23 1.27 / 5.73 11.40 / 16.58 Color 3.70 / 6.70 6.80 / 9.40 9.40 / 12.41 3.90 / 4.14 3.10 / 7.30 22.90 / 24.56 Material 7.40 / 15.87 12.40 / 17.72 9.00 / 15.50 9.20 / 9.75 2.50 / 10.70 16.30 / 22.59 Attribute Avg 5.55 / 11.28 9.60 / 13.56 9.20 / 13.96 6.55 / 6.94 2.80 / 9.00 19.60 / 23.58 Position 30.90 / 48.10 34.30 / 48.20 67.50 / 77.40 12.20 / 14.40 34.00 / 48.80 21.20 / 47.75 Relationship 10.00 / 33.20 14.50 / 31.40 10.70 / 35.30 6.10 / 7.20 8.20 / 29.40 41.98 / 51.98 Negation 29.30 / 51.80 28.70 / 57.20 52.50 / 67.30 27.90 / 29.70 28.30 / 41.10 35.10 / 55.86 Total Average 18.37 / 29.91 17.96 / 26.95 25.30 / 33.69 16.02 / 16.74 9.06 / 17.52 25.86 / 39.15 Table 3: The NMS-AP and traditional AP evaluation results (%), * represents supervised score, otherwise it’s zero-shot. Total average is averaged over the 9 subtasks. Figure 4: Radar chart of NMS-AP results on 6 aspects. Most models successfully worked on object but failed on others. For datasets with hard negatives, the labels often involve some descriptions and require a more fine-grained linguistic understanding for models. We found that all models exhibit poor performance on color and material tasks. In contrast, OmDet performs more favorably overall on these tasks, largely due to its use of the VAW (Pham et al. 2021) dataset with attributes during pre-training. Meanwhile, the overall performance of existing models on the position, relationship, and negation tasks is similar, with generally low NMSAP values. This indicates that the current models have limited capability in handling tasks with fine-grained descriptions. However, we note that Grounding DINO significantly outperforms the other models in position task. This can be attributed to its utilization of the RefCOCO dataset with orientation data during pre-training, which provides the model with specific knowledge related to the position and improves its performance on this task. Moreover, OmDet performs better than other models on the relationship task, which can be attributed to its use of the HOI-A (Liao et al. 2020) dataset with relation attributes during pre-training, providing the model with specific knowledge related to the relationship and improving its performance on this task. While minor differences exist, all models display a similar trend when we represent the 6 aspects on a radar chart (Figure 4). All models successfully worked on the common object task (object). However, they all failed on the hard tasks from the proposed datasets, which require the use of external/commonsense knowledge and fine-grained localization ability. Therefore, it is evident that a dataset with fine-grained labels is necessary to establish a better benchmark to provide a clear optimization direction for improving the model’s performance on challenging tasks. 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6635 Comparing NMS-AP with Traditional AP To validate the Inflated AP problem, we performed the evaluation of traditional AP on our OVDEval dataset and compared it with the NMS-AP results. Table 3 shows that the difference between NMS-AP and AP on classical OD datasets such as COCO is small, e.g., 52.50 vs. 55.30 for Grounding DINO because the probability of mutual exclusion of the predicted labels in this task is small and its impact on the AP calculation is negligible. On the other hand, the difference between NMS-AP and AP becomes much more significant for more difficult aspects including attribute, position and etc. For example, the relationship AP of Grounding DINO decreased from 35.30 to 10.70. The above results confirmed that our hypothesis about the Inflated AP Problem exists for the compared OVD models. To visually illustrate our hypothesis and investigate the cause of the large NMS-AP and AP difference, we have plotted several bounding boxes obtained from the GLIP predictions, as illustrated in Figure 3. From the examples depicted in Figure 3, it is evident that GLIP tends to generate multiple bounding boxes on the same object. Notably, the labels assigned to these bounding boxes are mutually exclusive. For instance, in the case of a cat, the predicted bounding boxes include both ”cat that is sitting” and ”cat that is not sitting”. This inconsistency matches our hypothesis about the inflated score problem by deceiving traditional AP. That is although Grounding DINO has a poor performance in understanding negation it can still obtain a high AP score. On the other hand, by employing our NMS-AP algorithm, we effectively retain only one bounding box with the highest confidence for each ground-truth instance while disregarding other false bounding boxes during the AP calculation. This approach helps mitigate the inflated AP problem caused by multiple bounding boxes. The decrease in scores that we observed earlier can be primarily attributed to models predicting the highest confidence on false labels, indicating a failure in comprehending fine-grained descriptions. Note that among all the models, Detic suffers the least from NMS-AP and AP difference because its model architecture already applies NMS internally to the region proposal network (RPN) that remove the duplicated boxes over the same region (Zhou et al. 2022). Therefore, utilizing NMS-AP to evaluate OVD models on our benchmark provides a more suitable approach for assessing their performance on intricate linguistic descriptions. This method helps address the limitations of the models and provides a more accurate evaluation metric. Limitations of Current OVD Models We have also noticed a recurring issue among all the OVD models, where they tend to generate multiple bounding boxes for the same object but assign inconsistent labels to them. Moreover, these predicted labels are often mutually exclusive, and it is worth mentioning that the predictions with the highest confidence scores are frequently incorrect. This issue is particularly pronounced in models with a large number of output bounding boxes, such as Grounding Dino. This observation further strengthens our previous hypothesis that the current models demonstrate exceptional performance in learning straightforward object tasks such as COCO. However, they encounter difficulties in comprehending the intricacies of detailed descriptions. 
To further support our hypothesis, we plot the distribution of predicted confidence score for the object and negation aspects. Figure 5 from GLIP illustrates the distribution of confidence scores. The distribution of object is obtained from the model predictions on a subset of images in the COCO validation dataset. To calculate these distributions, we tally the number of positive and negative labels from the predictions that have an IoU greater than 0.9 with the ground truth. (a) object aspect (b) negation aspect Figure 5: Distribution analysis of predicted confidence for object and negation aspects in GLIP. X-axis is prediction confidence and Y-axis is the number of predictions. Based on the results in Figure 5, it is clear that in the object task, positive predictions tend to be spread out across the high confidence range, while negative predictions are mostly concentrated in the low confidence range. This indicates that most models have successfully learned to accurately identify objects. However, in the negation task, the confidence distribution of positive and negative samples exhibits a similar trend. Meanwhile, the predictions predominantly appear in the low confidence region. These findings further support our hypothesis that existing models struggle to comprehend certain nuanced semantic information in fine-grained tasks. Conclusion This paper presents a novel benchmark OVDEval, testing the generalization of open-vocabulary detectors. We carefully create the dataset with challenging hard negatives and annotate 20K images with human experts. We also identified the Inflated AP problem for conventional AP calculation and introduce a new metric NMS-AP to deal with it. Our assessment validates the OVDEval’s effectiveness in revealing the pros and cons of current SOTA open-vocabulary models. Lastly, OVDEval provides promising future research questions. How can we incorporate better training objectives so OVD models can acquire better discriminate abilities against hard negatives in both visual and linguistic input? What are the better pre-training data to inject more common sense knowledge in vision-language alignment? In summary, solving OVDEval is an important step for future general-purpose object detectors. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6636 Acknowledgements This research is supported by National Key R&D Program of China under grant (2022YFF0902600) and Key R&D Program of Zhejiang under grant (2023C01048). Y.Y. Yao and Q. Wang are supported by NSFC under grant 62031023. References Cai, Z.; Kwon, G.; Ravichandran, A.; Bas, E.; Tu, Z.; Bhotika, R.; and Soatto, S. 2022. X-detr: A versatile architecture for instance-wise vision-language tasks. In European Conference on Computer Vision, 290–308. Springer. Chao, Y.-W.; Liu, Y.; Liu, X.; Zeng, H.; and Deng, J. 2018. Learning to detect human-object interactions. In 2018 ieee winter conference on applications of computer vision (wacv), 381–389. IEEE. Dou, Z.-Y.; Kamath, A.; Gan, Z.; Zhang, P.; Wang, J.; Li, L.; Liu, Z.; Liu, C.; LeCun, Y.; Peng, N.; et al. 2022. Coarseto-fine vision-language pre-training with fusion in the backbone. Advances in neural information processing systems, 35: 32942–32956. Du, X.; Legastelois, B.; Ganesh, B.; Rajan, A.; Chockler, H.; Belle, V.; Anderson, S.; and Ramamoorthy, S. 2022. Vision checklist: Towards testable error analysis of image models to help system designers interrogate model capabilities. arXiv preprint arXiv:2201.11674. Girshick, R. 2015. Fast r-cnn. 
In Proceedings of the IEEE international conference on computer vision, 1440–1448. Gupta, A.; Dollar, P.; and Girshick, R. 2019. Lvis: A dataset for large vocabulary instance segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5356–5364. Hoiem, D.; Divvala, S. K.; and Hays, J. H. 2009. Pascal VOC 2008 challenge. World Literature Today, 24(1). Kamath, A.; Singh, M.; LeCun, Y.; Synnaeve, G.; Misra, I.; and Carion, N. 2021. Mdetr-modulated detection for endto-end multi-modal understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1780–1790. Krishna, R.; Zhu, Y.; Groth, O.; Johnson, J.; Hata, K.; Kravitz, J.; Chen, S.; Kalantidis, Y.; Li, L.-J.; Shamma, D. A.; et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123: 32–73. Li, B.; Zhang, Y.; Chen, L.; Wang, J.; Yang, J.; and Liu, Z. 2023. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726. Li, C.; Liu, H.; Li, L. H.; Zhang, P.; Aneja, J.; Yang, J.; Jin, P.; Hu, H.; Liu, Z.; Lee, Y. J.; and Gao, J. 2022a. ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models. Neural Information Processing Systems. Li, L. H.; Zhang, P.; Zhang, H.; Yang, J.; Li, C.; Zhong, Y.; Wang, L.; Yuan, L.; Zhang, L.; Hwang, J.-N.; et al. 2022b. Grounded language-image pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10965–10975. Li, X.; Yin, X.; Li, C.; Zhang, P.; Hu, X.; Zhang, L.; Wang, L.; Hu, H.; Dong, L.; Wei, F.; et al. 2020. Oscar: Objectsemantics aligned pre-training for vision-language tasks. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXX 16, 121–137. Springer. Liao, Y.; Liu, S.; Wang, F.; Chen, Y.; Qian, C.; and Feng, J. 2020. Ppdm: Parallel point detection and matching for realtime human-object interaction detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 482–490. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Doll´ar, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In Computer Vision– ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, 740– 755. Springer. Liu, S.; Zeng, Z.; Ren, T.; Li, F.; Zhang, H.; Yang, J.; Li, C.; Yang, J.; Su, H.; Zhu, J.; et al. 2023. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499. Pham, K.; Kafle, K.; Lin, Z.; Ding, Z.; Cohen, S.; Tran, Q.; and Shrivastava, A. 2021. Learning To Predict Visual Attributes in the Wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 13018–13028. Plummer, B. A.; Wang, L.; Cervantes, C. M.; Caicedo, J. C.; Hockenmaier, J.; and Lazebnik, S. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the IEEE international conference on computer vision, 2641–2649. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, 8748–8763. PMLR. Redmon, J.; Divvala, S.; Girshick, R.; and Farhadi, A. 2016. 
You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 779–788. Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28. Ribeiro, M. T.; Wu, T.; Guestrin, C.; and Singh, S. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. arXiv preprint arXiv:2005.04118. Schuhmann, C.; Vencu, R.; Beaumont, R.; Kaczmarczyk, R.; Mullis, C.; Katta, A.; Coombes, T.; Jitsev, J.; and Komatsuzaki, A. 2021. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114. Shah, D.; Osi´nski, B.; Levine, S.; et al. 2023. Lm-nav: Robotic navigation with large pre-trained models of language, vision, and action. In Conference on Robot Learning, 492–504. PMLR. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6637 Shao, S.; Li, Z.; Zhang, T.; Peng, C.; Yu, G.; Zhang, X.; Li, J.; and Sun, J. 2019. Objects365: A large-scale, highquality dataset for object detection. In Proceedings of the IEEE/CVF international conference on computer vision, 8430–8439. Shen, Y.; Song, K.; Tan, X.; Li, D.; Lu, W.; and Zhuang, Y. 2023. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580. Wang, J.; Zhang, P.; Chu, T.; Cao, Y.; Zhou, Y.; Wu, T.; Wang, B.; He, C.; and Lin, D. 2023. V3det: Vast vocabulary visual detection dataset. arXiv preprint arXiv:2304.03752. Wu, C.; Lin, Z.; Cohen, S.; Bui, T.; and Maji, S. 2020. Phrasecut: Language-based image segmentation in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10216–10225. Yu, L.; Poirson, P.; Yang, S.; Berg, A. C.; and Berg, T. L. 2016. Modeling context in referring expressions. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14, 69–85. Springer. Zhang, H.; Li, F.; Liu, S.; Zhang, L.; Su, H.; Zhu, J.; Ni, L. M.; and Shum, H.-Y. 2022. Dino: Detr with improved denoising anchor boxes for end-to-end object detection. arXiv preprint arXiv:2203.03605. Zhao, T.; Liu, P.; Lu, X.; and Lee, K. 2022a. Omdet: Language-aware object detection with large-scale visionlanguage multi-dataset pre-training. arXiv preprint arXiv:2209.05946. Zhao, T.; Zhang, T.; Zhu, M.; Shen, H.; Lee, K.; Lu, X.; and Yin, J. 2022b. Vl-checklist: Evaluating pre-trained visionlanguage models with objects, attributes and relations. arXiv preprint arXiv:2207.00221. Zhou, X.; Girdhar, R.; Joulin, A.; Kr¨ahenb¨uhl, P.; and Misra, I. 2022. Detecting twenty-thousand classes using imagelevel supervision. In European Conference on Computer Vision, 350–368. Springer. Zong, Z.; Song, G.; and Liu, Y. 2023. Detrs with collaborative hybrid assignments training. In Proceedings of the IEEE/CVF international conference on computer vision, 6748–6758. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6638 | 2024 | 737 |
18,559 | Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model Adaptation Guy Yariv1,2, Itai Gat3, Sagie Benaim1, Lior Wolf4, Idan Schwartz4,2,∗, Yossi Adi1* 1The Hebrew University of Jerusalem, 2NetApp, 3Technion, 4Tel-Aviv University Abstract We consider the task of generating diverse and realistic videos guided by natural audio samples from a wide variety of semantic classes. For this task, the videos are required to be aligned both globally and temporally with the input audio: globally, the input audio is semantically associated with the entire output video, and temporally, each segment of the input audio is associated with a corresponding segment of that video. We utilize an existing text-conditioned video generation model and a pre-trained audio encoder model. The proposed method is based on a lightweight adaptor network, which learns to map the audio-based representation to the input representation expected by the text-to-video generation model. As such, it also enables video generation conditioned on text, audio, and, for the first time as far as we can ascertain, on both text and audio. We validate our method extensively on three datasets demonstrating significant semantic diversity of audio-video samples and further propose a novel evaluation metric (AV-Align) to assess the alignment of generated videos with input audio samples. AV-Align is based on the detection and comparison of energy peaks in both modalities. In comparison to recent state-of-the-art approaches, our method generates videos that are better aligned with the input sound, both with respect to content and temporal axis. We also show that videos produced by our method present higher visual quality and are more diverse. Code and samples are available at: https://pages.cs.huji.ac.il/adiyoss-lab/TempoTokens/. Introduction Neural generative models have changed the way we create and consume digital content. From generating high-quality images and videos (Ho, Jain, and Abbeel 2020; Rombach et al. 2022), speech and audio (Wang et al. 2023a; Sheffer and Adi 2023; Copet et al. 2023; Kreuk et al. 2022; Hassid et al. 2023), through generating long textual spans (Touvron et al. 2023a,b; Brown et al. 2020), these models have shown impressive results. In the context of video generation, progress has been more elusive, with recent work making progress in generating short videos conditioned on text (Singer et al. 2022; Ho et al. 2022). Although audio is tightly connected to videos (e.g., providing important cues for motion in a scene), most of the *Equal Contribution. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Generated video frames (above) and input audio signal (below the frames) employing our technique. The input to our model is an audio recording from which a representation is extracted. This representation maintains crucial temporal attributes and is then mapped into a textbased latent space representation incorporating both local and global audio context. Subsequently, this latent representation is fed into a pre-trained text-to-video diffusion generative model, ensuring the synchronized generation of video which is closely aligned with the input audio. prior work did not consider audio in the generation process. For instance, the action of ‘playing drums’ or the ‘motion of waves’ can be distinctively associated with a naturally occurring sound. 
Moreover, audio is comprised of structural components such as pitch and envelope that provide important cues for the type of scene and motion depicted. We tackle the problem of generating diverse and realistic videos guided by natural audio samples. Our generated videos capture diverse and real-life settings from a wide variety of semantic classes and are aligned both globally and temporally with the input audio. Globally, the input audio is semantically associated with the entire output video, and temporally, each segment of the input audio is associated with a corresponding segment of that video. An example generation video can be seen in Figure 1. Prior work on audio-guided video generation was mainly focused on either global information in the videos (i.e., capturing the semantic class) or specific scenes (e.g., speech). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6639 (Mama et al. 2021; Park et al. 2022; Kumar et al. 2020) generate talking heads conditioned on speech, but these are limited to videos of human faces and are conditioned on speech and not natural audio. More closely related to our setting, given an input video and an audio sample, Chatterjee and Cherian (2020) generate a continuation of the video that is aligned with the audio. Our method, however, generates videos from audio-only. Ge et al. (2022) proposed a method for generating aligned videos conditioned on audio. While impressive, generated videos are highly limited in diversity. Other works such as Chen et al. (2017); Hao, Guan, and Zhang (2022); Ruan et al. (2023) generate videos that are globally aligned to the semantic class of the input audio sample (e.g., dancing, drums, etc.) but are unable to generate videos in which every segment is temporally aligned to each segment in the input audio sample. In contrast to the above methods, our approach enables the generation of diverse and realistic videos associated and aligned with the input audio from a wide variety of semantic classes. Our work utilizes a pre-trained text-conditioned video generation engine and converts the input audio to a sequence of pseudo tokens. Given an input audio sample, we first encode it using an audio encoder, producing a latent representation of the audio signal. To capture local-to-global information, we construct the representation considering the i-th segment as well as neighboring segments. In particular, we use windows of varying sizes and average the embeddings corresponding to audio segments in these windows. Next, to produce the N-th video frame, we divide the audio embedding into N consecutive segments. We then train an adapter network to map each of these segments to a set of pseudo-tokens. Lastly, to produce the corresponding video, we feed the output of the audio mapping module into the pretrained text-to-video generation model. Intuitively, we learn a mapping between the audio representation obtained by the pre-trained audio encoder, to the textual tokens’ representation used for conditioning the pretrained text-to-video model. By that, extending the possible video conditioning to audio tokens. To validate our approach, we consider a number of datasets that exhibit a diverse set of videos and input audio samples. We consider the Landscape dataset (Lee et al. 2022), which captures landscape videos. The AudioSet-Drums dataset (Gemmeke et al. 2017) which captures drums videos, and the VGGSound dataset (Chen et al. 
2020) which consists of a diverse set of real-world videos from 309 different semantic classes. We compare our method to state-of-the-art approaches, both in terms of objective evaluation and human study. We evaluate the audio-video alignment as well as video quality and diversity. To capture temporal alignment, we devise a new metric based on detecting energy peaks in both modalities separately and measuring their alignment. Further, we provide an ablation study where we consider alternative approaches to condition the video model. Our contributions: (i) A state-of-the-art audio-to-video generation model which captures diverse and naturally occurring real-life settings from a wide variety of input videos of different semantic classes; (ii) We present a method that is based on a lightweight adapter, which learns to map audiobased tokens to pseudo-text tokens. As such, it also allows video generation conditioned on text, audio, or both text and audio. As far as we are aware, our method is the first to enable video generation conditioned both on audio and text; and (iii) Our method can generate natural videos aligned with the input sound, both globally and temporally. To validate this, we present a novel evaluation function to measure audio-video alignment. Since, as far as we can ascertain, we are the first to generate diverse and natural videos guided by audio inputs, such an evaluation function is critical to making progress in the field. Related Work Audio-to-image generation. Text-to-image generation has seen great advances recently, using either autoregressive methods (Ramesh et al. 2021; Gafni et al. 2022; Yu et al. 2022) or diffusion based models (Nichol et al. 2022; Ramesh et al. 2022; Saharia et al. 2022; Rombach et al. 2022; Ramesh et al. 2022; Rombach et al. 2022). This inspired a new line of work concerning audio-to-image generation. ˙Zelaszczyk and Ma´ndziuk (2022); Wan, Chuang, and Lee (2019) proposed to generate images based on audio recordings using a GAN˙Zelaszczyk and Ma´ndziuk (2022). ˙Zelaszczyk and Ma´ndziuk (2022) present results for generating MNIST digits only and did not generalize to general audio sounds, while Wan, Chuang, and Lee (2019) generate images from general audio. In Wav2Clip Wu et al. (2022b), the authors learn a Contrastive Language-Image Pre-Training (CLIP) (Radford et al. 2021) like a model for learning joint representation for audio-image pairs. Later on, such representation can be used to generate images using VQ-GAN (Esser, Rombach, and Ommer 2021) under the VQ-GAN CLIP (Crowson et al. 2022) framework. The most relevant related work to ours is AudioToken (Yariv et al. 2023), in which the authors learn an audio token while adapting a diffusion-based text-to-image model to generate images using audio inputs. Text-to-video generation. Early attempts to establish a connection between text and video relied on conditioned retrieval methods (Ali et al. 2022). Later, Wu et al. (2021) introduces the novel integration of 2D VQVAE and sparse attention in text-to-video generation, facilitating the generation of highly realistic scenes. Wu et al. (2022a) extends GODIVA and presents a unified representation for various generation tasks in a multitask learning scheme. Later on, CogVideo (Hong et al. 2022) is built on top of a frozen text-to-image model by adding additional temporal attention modules. Singer et al. (2022) further improves generation quality following a similar modeling paradigm. Video Diffusion Models (He et al. 
2022) uses a space-time factorized U-Net with joint image and video data training. Other approaches, such as Villegas et al. (2022) and Yu et al. (2023), proposed transformer-based methods for generating long videos or for multi-task learning. The most relevant prior work to ours is Wang et al. (2023b), which proposed ModelScope. ModelScope is a latent diffusion-based text-to-video generation model with spatiotemporal blocks; these enable consistent frame generation and smooth motion transitions.

Figure 2: An illustration of the proposed model architecture and method. The input audio is first passed through a pre-trained audio encoder model (BEATs). Then, the resulting representations are fed into a trainable MLP layer, establishing a mapping between audio and text tokens. These text-based representations are then used to condition each frame via a temporal audio-conditioned sequence. This sequence effectively takes into account both local and global audio segments. Furthermore, an attentive token ($\tilde{a}_{\text{atten}}$) is included to learn the identification of significant audio signals using a pooling attention layer. Lastly, the conditioned components are utilized to generate frames through a pre-trained video generator. Notably, optimization is only applied to the MLP within the AudioMapper model and the pooling attention module.

Audio-to-video generation models can be roughly divided into two groups: (i) speech-to-video generation (talking heads) and (ii) general audio-to-video generation. In the speech-to-video line, Mama et al. (2021) propose learning a discrete latent representation of the video signal using a VQ-VAE, which is later modeled via an auto-encoder conditioned on a speech spectrogram. Park et al. (2022) generate talking faces focusing on phonetic information via an Audio-Lip Memory module, while Kumar et al. (2020) propose a one-shot approach for fast speaker adaptation. For general audio-to-video generation, Chatterjee and Cherian (2020) first proposed a method for generating aligned videos conditioned on both audio and video prompts. Ge et al. (2022) introduced a transformer-based approach for generating videos conditioned on either audio or textual features; although their generations are impressive, the videos are not diverse and were demonstrated on drum generation only. Chen et al. (2017) suggest using separate frameworks for audio-to-image and image-to-audio generation. Hao, Guan, and Zhang (2022) also model both audio-to-image and image-to-audio, but within a unified framework, and show that it outperforms two separate ones. Lastly, Ruan et al. (2023) follow the same modeling paradigm using latent diffusion models.

Method
The proposed method is composed of three main components: (i) an AudioMapper; (ii) multiple audio-conditioned temporal sequences; and (iii) a text-to-video generation module. As our goal in this study is to enrich video generation models using audio inputs, we leverage a pre-trained diffusion-based text-to-video model and augment it with audio conditioning capabilities. A visual description of the proposed method can be seen in Figure 2, and a code sketch of the pipeline is given below.
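For readers who prefer code to prose, the following is a minimal, illustrative sketch (PyTorch-style Python) of how these three components fit together at inference time. The class and function names (AudioMapper, build_frame_conditions, t2v_model.sample) and all sizes are placeholders introduced for illustration, not the authors' released implementation; the pre-trained BEATs encoder and the ModelScope text-to-video model are treated as black boxes.

```python
import torch
import torch.nn as nn

class AudioMapper(nn.Module):
    """Maps pooled audio-encoder features to pseudo text tokens (TempoTokens).
    Four linear layers with GELU non-linearities, as described in the paper;
    the hidden width is an assumed value."""
    def __init__(self, in_dim, token_dim, hidden=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, token_dim),
        )

    def forward(self, beats_features):        # (B, L, D_in)
        return self.net(beats_features)       # (B, L, token_dim)

def generate(audio_wave, beats, mapper, build_frame_conditions, t2v_model, num_frames=24):
    """Audio -> TempoTokens -> per-frame conditions -> frozen text-to-video model."""
    with torch.no_grad():                      # the audio encoder stays frozen
        feats = beats(audio_wave)              # assumed to be pooled to L = num_frames segments
    tempo_tokens = mapper(feats)               # the trainable adapter
    # One conditioning sequence per output frame (local-to-global context windows).
    conditions = [build_frame_conditions(tempo_tokens, i) for i in range(num_frames)]
    with torch.no_grad():                      # the text-to-video diffusion model stays frozen
        video = t2v_model.sample(conditions)   # (B, num_frames, 3, H, W)
    return video
```

During training, only the mapper (and the attentive pooling introduced later) would be updated, with gradients flowing through the frozen video model.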
In contrast to converting audio to image, transforming audio to video presents two additional challenges: (i) ensuring the creation of coherent frames and (ii) synchronization between the audio and video components. For example, consider an audio recording of a dog barking. In the resulting video, it is crucial not only for the dog's appearance to remain consistent across all frames, but also for the timing of the barking sound to match the dog's motion. In this work, we focus on item (ii) by temporally conditioning the generation of each video frame on a contextualized representation of the input audio.

Formally, we are interested in the generation of a video $v = (v^{(1)}, \ldots, v^{(L)})$, where $v^{(i)} \in \mathbb{R}^{3 \times H \times W}$ is an output frame, driven by a corresponding audio condition $a = (a_1, \ldots, a_R)$, where $a_i \in [-1, 1]$ is an audio sample at a given sampling rate in the time domain. We seek to establish a conditional probabilistic model $p_\theta(v \mid a)$ encompassing the entire frame set, where each frame $v^{(i)}$ is conditioned on the audio condition $a$. Note that the conditioning of each frame considers the entire audio input but is built differently for each frame; more details are given in the paragraph on the audio-conditioned temporal sequence.

AudioMapper maps the audio representation obtained from a pre-trained audio encoder to pseudo-tokens compatible with the pre-trained text-to-video model. We denote the output of the AudioMapper as TempoTokens. Formally, the model gets as input embedded audio, which originates from a pre-trained audio encoder $h : [-1, 1]^R \rightarrow \mathbb{R}^{R' \times H \times d}$, where $H$ is the number of layers the representation is collected from, $d$ is the inner dimension of the encoder, and $R'$ is the segment length that $h$ operates on. To force both audio and video latent representations to have the same dimension, we fix $R' = L$ by employing a pooling layer. Specifically, we use the BEATs model (Chen et al. 2022) as the audio encoder $h$. Different layers encapsulate a range of specificity levels: representations derived from BEATs' final layers are strongly tied to class-related attributes, whereas earlier layers encompass low-level audio features (Gat et al. 2022; Adi et al. 2019). We embed an audio segment into a token representation using a non-linear neural network $g : \mathbb{R}^{L \times H \times d} \rightarrow \mathbb{R}^{L \times H \times d_t}$:
$$\tilde{a}^{(i)} = g\big(h(a)^{(i)}\big), \quad (1)$$
where $\tilde{a}^{(i)} \in \mathbb{R}^{L \times H \times d_t}$ and $d_t$ is the embedding dimension of the text-conditioned tokens of the video generation process. The network $g$ consists of four sequential linear layers with GELU non-linearities between them. We denote $\tilde{a}^{(i)}$ as TempoTokens. Subsequently, we generate a temporal conditioning sequence for each video frame using TempoTokens; we describe this process in the following paragraph.

Audio-conditioned temporal sequence. Next, to better capture the local context around each video frame, we apply an expanding context window technique over the obtained TempoTokens. This approach captures the sound signals surrounding the $i$-th frame as follows:
$$c^{(i)} = \Big\{\, \tilde{a}_{\max(1,\, i-j),\; \min(i+j,\, K)} \;\Big|\; j = 2^k \,\Big\}_{k=0}^{\log K}, \quad (2)$$
where
$$\tilde{a}_{l,r} = \frac{1}{r - l} \sum_{s=l}^{r} \tilde{a}^{(s)}. \quad (3)$$
This context window expands exponentially with increasing temporal distance from the target position, facilitating consideration of a wider local-to-global audio context range. The exponential expansion effectively balances local and global contexts, encompassing important distant audio components that provide valuable insight into the audio class, as well as the close temporal changes needed for audio-video alignment. Figure 3 visually describes the audio-conditioned temporal sequence.

Finally, we consider a context window that encompasses all audio signals, substituting the average operation with a trainable attentive pooling layer (Schwartz et al. 2019). Thus,
$$\tilde{a}_{\text{atten}} = \sum_{u=1}^{L} p(u)\, \tilde{a}^{(u)}, \quad (4)$$
where $p(u) \geq 0\ \forall u$ is a probability distribution (i.e., $\sum_{u=1}^{L} p(u) = 1$) over the audio components. The probability distribution takes the form
$$p(u) \propto \exp\big(\alpha_l\, \theta_l(u) + \alpha_c\, \theta_c(u)\big). \quad (5)$$
The local potential is $\theta_l(u) = v_l^{\top} \mathrm{relu}(V_l a_u)$, and the cross potential between the audio components is
$$\theta_c(u) = \sum_{i=1}^{L} \left( \frac{W_1 \tilde{a}^{(u)}}{\lVert W_1 \tilde{a}^{(u)} \rVert} \right)^{\!\top} \left( \frac{W_2 \tilde{a}^{(i)}}{\lVert W_2 \tilde{a}^{(i)} \rVert} \right). \quad (6)$$
The trainable parameters are (i) $V_l$, $W_1$, $W_2$, which re-embed the data to tune the attention, (ii) $v_l \in \mathbb{R}^{(L \cdot H \cdot d_t) \times 1}$, which scores the sound components, and (iii) $\alpha_l$, $\alpha_c$, which calibrate the local and cross potentials. The attention mechanism enables learning the significance of the audio components.

Figure 3: Illustration of the audio-conditioned temporal sequence for the case of 24 audio components. For the $i$-th frame, the window sizes grow exponentially, considering local audio details to aid in aligning audio and video, as well as the broader global information that enhances the differentiation of video classes. Additionally, we introduce a token that encompasses all audio components and identifies the significant ones through an attention pooling layer ($\tilde{a}_{\text{atten}}$).

Text-to-video. Lastly, we leverage a pre-trained latent diffusion text-to-video model to learn the aforementioned temporal audio tokens $c = \{c^{(i)}\}_{i=1}^{L}$. Diffusion models are a family of generative models designed to learn the data distribution $p(x)$ by learning the reverse Markov process of length $T$. Given a timestamp $t \in [0, 1]$, the denoising function $\epsilon_\theta : \mathbb{R}^d \rightarrow \mathbb{R}^d$ learns to predict a clean version of the perturbed $x_t$ from the training distribution. The generative process can be conditioned on a given input, i.e., modeling $p(x \mid y)$ where $y$ is a condition vector. In that case, the objective function is
$$\mathcal{L}_{\mathrm{CLDM}} \triangleq \mathbb{E}_{(v,a) \sim S,\; t \sim \mathcal{U}(0,1),\; \epsilon \sim \mathcal{N}(0, I)} \Big[ \big\lVert \epsilon - \epsilon_\theta\big(f(v_t, c), t\big) \big\rVert_2^2 \Big], \quad (7)$$
where each video frame $v^{(i)}$ is conditioned on a dedicated condition vector $c^{(i)}$. Specifically, in this work we set $\epsilon_\theta$ to be a state-of-the-art text-to-video model, ModelScope, which is comprised of a 3D-UNet integrated with a temporal attention layer, as outlined in Wang et al. (2023b). ModelScope was trained on ∼10M text-video pairs and ∼2B text-image pairs (Wang et al. 2023b). Notice that the proposed framework is not limited to ModelScope and can be used with any differentiable text-to-video model.

Model optimization. We optimize only the AudioMapper and the attentive pooling layer, and backpropagate gradients through $\epsilon_\theta$ while keeping its parameters unchanged. Optimization minimizes the loss $\mathcal{L}_{\mathrm{CLDM}}$ for reconstructing a frame $v^{(i)}$ conditioned on $c^{(i)}$ (see Equation (7)), with an added weight decay regularization for the encoded TempoTokens. Overall, we optimize the following loss function:
$$\mathcal{L} = \mathcal{L}_{\mathrm{CLDM}} + \frac{\lambda_{l_1}}{L} \log \sum_{u=1}^{L} \tilde{a}^{(u)}, \quad (8)$$
where $\lambda_{l_1}$ is a trade-off hyper-parameter between the loss term and the regularization.
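As a concrete reading of Equations (2)-(4), the snippet below sketches how the per-frame conditioning could be assembled from the TempoTokens. It reflects our own reading of the notation rather than the authors' code: the function names, the 1-based indexing convention, and the simplified attentive pooling (a learned per-token score with a softmax, standing in for Equations (5)-(6)) are all assumptions.

```python
import math
import torch
import torch.nn as nn

def window_mean(tokens, l, r):
    """Eq. (3): average of TempoTokens over indices l..r (1-based, inclusive)."""
    return tokens[:, l - 1:r].mean(dim=1)           # tokens: (B, L, D)

def frame_condition(tokens, i, K):
    """Eq. (2): exponentially expanding windows around frame i, half-widths j = 2^k."""
    ctx = []
    for k in range(int(math.log2(K)) + 1):          # K = 24 gives 5 resolutions
        j = 2 ** k
        l, r = max(1, i - j), min(i + j, K)
        ctx.append(window_mean(tokens, l, r))
    return torch.stack(ctx, dim=1)                  # (B, num_windows, D)

class AttentivePool(nn.Module):
    """Simplified stand-in for Eqs. (4)-(6): a learned score per token, softmax-normalized."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, tokens):                      # (B, L, D)
        p = torch.softmax(self.score(tokens), dim=1)   # p(u) >= 0 and sums to 1 over u
        return (p * tokens).sum(dim=1)              # Eq. (4): weighted sum of TempoTokens

# Example: L = K = 24 tokens of (assumed) dimension 1024, condition for frame i = 7.
tokens = torch.randn(2, 24, 1024)
cond = torch.cat([frame_condition(tokens, 7, 24),
                  AttentivePool(1024)(tokens).unsqueeze(1)], dim=1)
```

The resulting sequence (five window averages plus the attentive token) would then play the role of the pseudo text tokens conditioning the corresponding frame.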
Evaluation Metrics
We evaluate our method on three main axes: video quality and diversity, audio-video alignment, and a human study.

Video quality and diversity. We report standard evaluation metrics in the domain of video generation for assessing quality and diversity. We utilize the following metrics: (i) the Fréchet Video Distance (FVD), which quantifies the visual disparity between feature embeddings extracted from generated and reference videos (Unterthiner et al. 2019) and is used to assess quality and diversity; and (ii) the Inception Score (IS), which is computed with a trained C3D model (Tran et al. 2015) on UCF-101 (Soomro, Zamir, and Shah 2012) and assesses video quality.

Audio-video alignment. We distinguish between two types of audio-video alignment: (i) semantic (or global) alignment, in which the semantic class (e.g., playing drums) of the input audio is depicted by the output video (e.g., a video of people playing drums); here we consider the CLIP Similarity (CLIPSIM) metric (Wu et al. 2021), which gauges the alignment between generated video content and its corresponding audio label; and (ii) temporal alignment, in which we consider whether each input audio segment is synchronized with its corresponding generated video segment. To measure this type of alignment, we introduce a novel evaluation metric based on detecting energy peaks in both modalities separately and measuring their alignment. The premise behind this metric is that fast temporal energy changes in the audio signal often correspond to the movement of the object producing the sound. For instance, consider an audio waveform of fireworks: a successful audio-video temporal alignment would ensure that the video frames portraying the fireworks exhibit a noticeable change synchronously. Conversely, when the video exhibits a significant change, a corresponding peak should be observed in the audio waveform at that precise moment.

Our audio-video alignment metric operates as follows. We first detect candidate alignment points by considering each modality separately. We detect audio peaks using an onset detection algorithm (Böck and Widmer 2013), pinpointing instances of heightened auditory intensity. To detect changes within the video, we calculate the mean optical flow (Horn and Schunck 1981) magnitude for each frame and identify rapid changes over time. Then, for each peak in one modality, we validate whether a peak was also detected in the other modality within a three-frame temporal window, and vice versa. Finally, we normalize by the number of peaks to derive an alignment score ranging between zero and one. Such a metric reflects the model's proficiency in synchronizing audio and video. More formally, given the sets $A$ and $V$ of audio and video peaks obtained from the onset detection algorithm and the optical flow, respectively, the alignment score is defined as
$$\text{AV-Align} = \frac{1}{2\,\lvert A \cup V \rvert} \left( \sum_{a \in A} \mathbb{1}[a \in V] + \sum_{v \in V} \mathbb{1}[v \in A] \right), \quad (9)$$
where a peak is considered valid if it is placed within a window of three frames in the other modality. The metric can be interpreted as an Intersection-over-Union score. To facilitate comprehension, Figure 4 illustrates the alignment process visually, depicting audio peaks and corresponding video changes and emphasizing the interplay between the auditory and visual domains.

Figure 4: Audio-video alignment metric illustration. The first row presents four frames from a generated video featuring a dog. The second row depicts the mean magnitude of optical flow for each frame, capturing video changes. The bottom row shows the amplitude of the audio waveform. The vertical line in the middle and bottom graphs marks the onset of the waveform, while the peak of video change is also indicated.

Human study. We perform Mean Opinion Score (MOS) experiments considering both quality and audio-video alignment. In this setup, human raters are presented with several short video samples and are instructed to evaluate their quality and alignment on a scale of 1–5 with increments of 1.0. Specifically, we ask raters to evaluate the videos considering overall quality, global alignment to the audio file, and local alignment between the visuals and the sound of the video files. We evaluate 20 videos per method and enforce ten raters per sample. The full questionnaire given to the raters can be found in the supplemental material.

Experimental Setup
Implementation details. The proposed method contains ∼35M trainable parameters. We optimized the model on two A6000 GPUs for 10K iterations, using the AdamW optimizer with a learning rate of 1e-05 and a constant learning rate schedule. Each batch comprises 8 videos with 24 frames per video, sampled randomly at one-second granularity. To enhance training efficiency and mitigate memory consumption, we integrated gradient checkpointing into the training process of the 3D U-Net architecture. Code and pretrained models will be publicly available upon acceptance.

Datasets. We utilize the VGGSound dataset (Chen et al. 2020), derived from YouTube videos, containing ∼180K clips of 10 seconds duration annotated across 309 classes. To enhance data quality, we filtered out ∼60K videos in which audio-video alignment is weak. During this filtering procedure, we utilized a pre-trained audio classifier to categorize sound events present in each clip; simultaneously, a pre-trained image classifier was employed to classify the middle frame of every video clip. We then computed the CLIP (Radford et al. 2021) score by comparing the predicted labels from both classifiers, and removed videos that did not pass a pre-defined threshold. Our exploration of alternative filtering criteria, focusing on frames with maximum similarity to text labels rather than uniformly choosing the middle frame, revealed minimal differences (approximately 0.01) in CLIPSIM (Wu et al. 2021), IS (Tran et al. 2015), and AV-Align scores, leading us to use the middle frame. Additionally, to allow a fair comparison with prior work, we experimented with two additional datasets: (i) the Landscape dataset (Lee et al. 2022), which contains 928 nature videos divided into 10-second clips covering nine distinct scenes; and (ii) the AudioSet-Drum dataset (Gemmeke et al. 2017), which contains ∼7k videos of drumming. We used the same split as proposed by Ge et al. (2022), where ∼6k videos are used as the training set while the rest serve as a test set.

| Dataset | Model | FVD (↓) | CLIPSIM (↑) | IS (↑) | AV-Align (↑) |
|---|---|---|---|---|---|
| VGGSound | ModelScope Text2Vid | 801 | 0.69 | 15.55 | 0.27 |
| VGGSound | ModelScope Random | 1023 | 0.47 | 6.32 | 0.26 |
| VGGSound | Ours | 923 | 0.57 | 11.04 | 0.35 |
| AudioSet-Drums | TATS | 303 | 0.69 | 2.10 | 0.28 |
| AudioSet-Drums | Ours | 299 | 0.70 | 2.78 | 0.61 |
| Landscape | MM-Diffusion | 922 | 0.53 | 2.85 | 0.41 |
| Landscape | Ours | 784 | 0.57 | 4.49 | 0.54 |

Table 1: Automatic video generation results. We report FVD, CLIPSIM, IS, and alignment (AV-Align) scores for the proposed method (Ours) and the baselines. For a fair comparison, we compare our method to TATS (Ge et al. 2022) and to MM-Diffusion (Ruan et al. 2023) using the benchmarks reported by the authors in the original papers.
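For reference, the following is a small Python sketch of how the AV-Align metric defined in Equation (9) could be computed in practice. The librosa onset detector and the OpenCV Farneback optical flow are our choice of off-the-shelf components matching the description above; the peak-picking on the flow magnitude and its thresholds are assumptions that may differ from the authors' implementation.

```python
import numpy as np
import cv2
import librosa
from scipy.signal import find_peaks

def audio_peaks(wav, sr, fps, num_frames):
    """Onset peaks on the audio side, converted to video-frame indices."""
    onset_times = librosa.onset.onset_detect(y=wav, sr=sr, units="time")
    frame_ids = (int(round(t * fps)) for t in onset_times)
    return {f for f in frame_ids if f < num_frames}

def video_peaks(frames):
    """Peaks of the mean optical-flow magnitude between consecutive RGB frames."""
    grays = [cv2.cvtColor(f, cv2.COLOR_RGB2GRAY) for f in frames]
    mags = []
    for prev, nxt in zip(grays[:-1], grays[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=-1).mean())
    peaks, _ = find_peaks(np.array(mags))            # thresholds left at library defaults
    return {int(p) + 1 for p in peaks}

def av_align(audio_set, video_set, window=3):
    """Eq. (9): fraction of peaks in either modality matched within `window` frames."""
    if not audio_set and not video_set:
        return 1.0
    def hit(p, other):
        return any(abs(p - q) <= window for q in other)
    matched = sum(hit(a, video_set) for a in audio_set) + \
              sum(hit(v, audio_set) for v in video_set)
    return matched / (2 * len(audio_set | video_set))
```

A peak is counted as matched when a peak in the other modality falls within the three-frame window, mirroring the validation rule described above.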
Baselines. We compare the proposed method to previous state-of-the-art models that generate videos conditioned on audio inputs. Ge et al. (2022) proposed the Time-Sensitive Transformer (TATS) model, which projects audio latent embeddings onto video embeddings, enabling cross-modal alignment. Ruan et al. (2023) recently proposed MM-Diffusion, which employs coupled denoising auto-encoders to generate joint audio and video content. These baselines, i.e., TATS and MM-Diffusion, were originally evaluated on different benchmarks, i.e., AudioSet-Drums and Landscape, respectively. For a fair comparison, we evaluate the proposed method on each of the datasets suggested in the original papers. Moreover, we consider two naive baselines based on text-to-video models. In the first, denoted ModelScope Text-To-Video, we generate videos from a text description and retrieve a random audio clip from the training set that corresponds to the same class as the generated video. In the second, denoted ModelScope Random, we generate videos unconditionally (i.e., without any specific textual conditions) and match them with a random audio segment. For both baselines, we use the pre-trained, publicly available zeroscope-v2 model (specifically the zeroscope-v2 576w checkpoint, available at https://huggingface.co/cerspense/zeroscope_v2_576w).

Figure 5: Human study. We consider the MOS score for three metrics: (i) semantic alignment, where we ask users to rate how well the video matches the input audio semantic label; (ii) temporal alignment, where we ask users to rate how well each input audio segment is aligned with the generated video segments; and (iii) video quality, where we ask users to rate the generated video quality. The left panel considers video models trained on AudioSet-Drum, the right panel video models trained on Landscape.

Results
We start by presenting results for audio-to-video generation, considering both the objective metrics presented above and a human study. Next, we empirically demonstrate how the proposed method can be used to generate videos conditioned on both text and audio modalities, thus enhancing text-to-video generation. Lastly, we conduct an ablation study to better understand the effect of our audio conditioning technique on generation quality and alignment. Visual results are provided in the supplementary material.

Audio-to-Video Generation
Objective evaluation. As can be seen in Table 1, our method outperforms the baselines on all metrics for the AudioSet-Drums and Landscape datasets. Specifically, our method improves both the quality of the generated videos (FVD and IS scores) and the audio-video alignment (AV-Align and CLIPSIM scores). As expected, the gap between the methods is larger when considering the alignment scores. Notice that the alignment scores change significantly across benchmarks; sound events can also be produced by objects not seen in the video, which is especially noticeable in the VGGSound benchmark, where the AV-Align score of the original videos is 0.51. Next, we compare our method to the original ModelScope model, both text-conditioned (ModelScope Text2Vid) and unconditional (ModelScope Random). As we do not modify the model, we consider the text-conditioned setup as a top-line in terms of video quality metrics. Recall that the audio in both baselines is retrieved from our training set, either by video class (ModelScope Text2Vid) or randomly (ModelScope Random). As expected, our model outperforms ModelScope Random on all metrics. ModelScope Text2Vid is superior to our model in video quality; however, when considering audio-video alignment, our method is significantly better.

Human study. We present results of a human study considering both video quality and alignment (semantic and temporal). Results are depicted in Figure 5. For both the AudioSet-Drum and Landscape datasets, users found our videos significantly more temporally aligned. For semantic alignment, our method improves on both TATS and MM-Diffusion, with a significant gap to MM-Diffusion on the Landscape dataset. Finally, on video quality, users found our videos significantly superior.

Joint Audio-Text to Video Generation
Utilizing text and audio together to guide generation involves adding text tokens for conditioning. In Table 2, we show results using "A video of <class>" for text conditioning and "A video of <TemporalAudioTokens> <class>" for Text+Audio. Combining text and audio conditioning outperforms audio-only conditioning on all metrics, especially FVD. Text-only conditioning provides the highest video quality but lacks alignment. In Figure 6, we show how we merge text tokens with temporal audio tokens, which enables style manipulation. For example, for the sound of a river, we can depict it flowing over the moon by using the prompt "on the moon".

| Cond. | FVD (↓) | CLIPSIM (↑) | IS (↑) | AV-Align (↑) |
|---|---|---|---|---|
| Text | 801 | 0.69 | 15.55 | 0.27 |
| Audio | 923 | 0.57 | 11.04 | 0.35 |
| Text+Audio | 859 | 0.58 | 11.66 | 0.36 |

Table 2: Results of the proposed method using different modalities (Text, Audio, and Text+Audio) as conditioning.

Figure 6: Examples of added text tokens for altering the output video. We show results for fire and flowing water audio, with prompts such as "A video of <TemporalAudioTokens> in Abstract Colors", "A video of <TemporalAudioTokens>, on the moon", and "A video of <TemporalAudioTokens>, with vibrant red and orange foliage".

Ablation Study
Recall that our method uses context windows of varying sizes to capture a local-to-global context of the input audio. In Table 3, we assess the effect of using different windows of size K ∈ {1, 2, 3, 4}, denoted win. (K-res.). Note that, in practice, the window size is determined by log K; we use K for readability.

| Cond. | FVD (↓) | CLIPSIM (↑) | IS (↑) | AV-Align (↑) |
|---|---|---|---|---|
| vec. | 948 | 0.57 | 10.12 | 0.29 |
| win. (1-res.) | 998 | 0.56 | 9.22 | 0.36 |
| win. (2-res.) | 965 | 0.56 | 9.87 | 0.35 |
| win. (3-res.) | 972 | 0.56 | 10.01 | 0.34 |
| win. (4-res.) | 950 | 0.56 | 10.13 | 0.35 |
| win. (5-res.) | 923 | 0.57 | 11.04 | 0.35 |

Table 3: An ablation study exploring different audio conditionings. We report FVD, CLIPSIM, IS, and alignment scores on VGGSound (Chen et al. 2020), considering single-vector conditioning (vec.), a time-dependent condition using one window size (win. (1-res.)), and expanding windows with k resolutions (win. (k-res.)).

Using only the local context window (K = 1) results in good alignment. As we increase the global context (i.e., increase K), the video quality improves while the alignment scores remain comparable. We additionally consider a single audio conditioning vector (vec.) obtained by averaging all the audio components.
Despite high video quality scores, the absence of local temporal information results in a notably worse AV-Align score. Limitations Our method, using a pre-trained text-to-video model, involves adapting between text and audio tokens, posing challenges in mapping between their latent representations. Due to hardware limitations, our method generates relatively short video segments with temporal conditioning limited to 24 frames. Additionally, discrepancies can arise between visual and audio modalities, such as a video showing a dog in a car while the audio only features a radio playing. This limitation is not specific to our method but rather a general challenge in the domain. Conclusion We introduced a state-of-the-art audio-to-video generation model that generates diverse and realistic videos aligned to input audio samples. Leveraging a lightweight adapter for mapping between audio and text representations enables conditioning video generation on both audio and text for the first time. Our expanding context window technique captures local and global context, and we propose the AV-Align metric for assessing temporal alignment. Future work aims to explore incorporating additional modalities like depth, images, or IMU alongside audio and text for video generation. Acknowledgements Work supported in part by ISF grant 2049/22 and the Israeli Ministry of Science and Technology, grant 2022-16196. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6645 References Adi, Y.; Zeghidour, N.; Collobert, R.; Usunier, N.; Liptchinsky, V.; and Synnaeve, G. 2019. To reverse the gradient or not: An empirical comparison of adversarial and multi-task learning in speech recognition. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3742–3746. IEEE. Ali, A.; Schwartz, I.; Hazan, T.; and Wolf, L. 2022. Video and text matching with conditioned embeddings. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, 1565–1574. B¨ock, S.; and Widmer, G. 2013. Maximum Filter Vibrato Suppression for Onset Detection. In DAFx-13. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. NeurIPS. Chatterjee, M.; and Cherian, A. 2020. Sound2sight: Generating visual dynamics from sound and context. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVII 16, 701– 719. Springer. Chen, H.; Xie, W.; Vedaldi, A.; and Zisserman, A. 2020. Vggsound: A large-scale audio-visual dataset. In ICASSP. Chen, L.; Srivastava, S.; Duan, Z.; and Xu, C. 2017. Deep cross-modal audio-visual generation. In Proceedings of the on Thematic Workshops of ACM Multimedia 2017, 349–357. Chen, S.; Wu, Y.; Wang, C.; Liu, S.; Tompkins, D.; Chen, Z.; and Wei, F. 2022. Beats: Audio pre-training with acoustic tokenizers. arXiv preprint arXiv:2212.09058. Copet, J.; Kreuk, F.; Gat, I.; Remez, T.; Kant, D.; Synnaeve, G.; Adi, Y.; and D´efossez, A. 2023. Simple and Controllable Music Generation. arXiv preprint arXiv:2306.05284. Crowson, K.; Biderman, S.; Kornis, D.; Stander, D.; Hallahan, E.; Castricato, L.; and Raff, E. 2022. Vqgan-clip: Open domain image generation and editing with natural language guidance. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXVII, 88–105. Springer. Esser, P.; Rombach, R.; and Ommer, B. 2021. 
Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 12873–12883. Gafni, O.; Polyak, A.; Ashual, O.; Sheynin, S.; Parikh, D.; and Taigman, Y. 2022. Make-a-scene: Scene-based text-toimage generation with human priors. In European Conference on Computer Vision, 89–106. Springer. Gat, I.; Lorberbom, G.; Schwartz, I.; and Hazan, T. 2022. Latent space explanation by intervention. In AAAI. Ge, S.; Hayes, T.; Yang, H.; Yin, X.; Pang, G.; Jacobs, D.; Huang, J.-B.; and Parikh, D. 2022. Long video generation with time-agnostic vqgan and time-sensitive transformer. In European Conference on Computer Vision, 102– 118. Springer. Gemmeke, J. F.; Ellis, D. P.; Freedman, D.; Jansen, A.; Lawrence, W.; Moore, R. C.; Plakal, M.; and Ritter, M. 2017. Audio set: An ontology and human-labeled dataset for audio events. In 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP), 776–780. IEEE. Hao, W.; Guan, H.; and Zhang, Z. 2022. Vag: A uniform model for cross-modal visual-audio mutual generation. IEEE Transactions on Neural Networks and Learning Systems. Hassid, M.; Remez, T.; Nguyen, T. A.; Gat, I.; Conneau, A.; Kreuk, F.; Copet, J.; Defossez, A.; Synnaeve, G.; Dupoux, E.; et al. 2023. Textually Pretrained Speech Language Models. arXiv preprint arXiv:2305.13009. He, Y.; Yang, T.; Zhang, Y.; Shan, Y.; and Chen, Q. 2022. Latent video diffusion models for high-fidelity video generation with arbitrary lengths. arXiv preprint arXiv:2211.13221. Ho, J.; Chan, W.; Saharia, C.; Whang, J.; Gao, R.; Gritsenko, A.; Kingma, D. P.; Poole, B.; Norouzi, M.; Fleet, D. J.; et al. 2022. Imagen video: High definition video generation with diffusion models. arXiv preprint arXiv:2210.02303. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. NeurIPS. Hong, W.; Ding, M.; Zheng, W.; Liu, X.; and Tang, J. 2022. Cogvideo: Large-scale pretraining for text-to-video generation via transformers. arXiv preprint arXiv:2205.15868. Horn, B. K.; and Schunck, B. G. 1981. Determining optical flow. Artificial Intelligence, 17(1): 185–203. Kreuk, F.; Synnaeve, G.; Polyak, A.; Singer, U.; D´efossez, A.; Copet, J.; Parikh, D.; Taigman, Y.; and Adi, Y. 2022. Audiogen: Textually guided audio generation. arXiv preprint arXiv:2209.15352. Kumar, N.; Goel, S.; Narang, A.; and Hasan, M. 2020. Robust one shot audio to video generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 770–771. Lee, S. H.; Oh, G.; Byeon, W.; Kim, C.; Ryoo, W. J.; Yoon, S. H.; Cho, H.; Bae, J.; Kim, J.; and Kim, S. 2022. Soundguided semantic video generation. In European Conference on Computer Vision, 34–50. Springer. Mama, R.; Tyndel, M. S.; Kadhim, H.; Clifford, C.; and Thurairatnam, R. 2021. NWT: towards natural audio-to-video generation with representation learning. arXiv preprint arXiv:2106.04283. Nichol, A. Q.; Dhariwal, P.; Ramesh, A.; Shyam, P.; Mishkin, P.; Mcgrew, B.; Sutskever, I.; and Chen, M. 2022. GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models. In International Conference on Machine Learning, 16784–16804. PMLR. Park, S. J.; Kim, M.; Hong, J.; Choi, J.; and Ro, Y. M. 2022. Synctalkface: Talking face generation with precise lip-syncing via audio-lip memory. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 2062–2070. 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6646 Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In ICML. Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125. Ramesh, A.; Pavlov, M.; Goh, G.; Gray, S.; Voss, C.; Radford, A.; Chen, M.; and Sutskever, I. 2021. Zero-shot text-toimage generation. In International Conference on Machine Learning, 8821–8831. PMLR. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 10684– 10695. Ruan, L.; Ma, Y.; Yang, H.; He, H.; Liu, B.; Fu, J.; Yuan, N. J.; Jin, Q.; and Guo, B. 2023. Mm-diffusion: Learning multi-modal diffusion models for joint audio and video generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10219–10228. Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E.; Ghasemipour, S. K. S.; Gontijo-Lopes, R.; Ayan, B. K.; Salimans, T.; et al. 2022. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. In NeurIPS. Schwartz, I.; Yu, S.; Hazan, T.; and Schwing, A. G. 2019. Factor graph attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2039–2048. Sheffer, R.; and Adi, Y. 2023. I hear your true colors: Image guided audio generation. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. IEEE. Singer, U.; Polyak, A.; Hayes, T.; Yin, X.; An, J.; Zhang, S.; Hu, Q.; Yang, H.; Ashual, O.; Gafni, O.; et al. 2022. Make-a-video: Text-to-video generation without text-video data. arXiv preprint arXiv:2209.14792. Soomro, K.; Zamir, A. R.; and Shah, M. 2012. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402. Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozi`ere, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Tran, D.; Bourdev, L.; Fergus, R.; Torresani, L.; and Paluri, M. 2015. Learning Spatiotemporal Features with 3D Convolutional Networks. arXiv:1412.0767. Unterthiner, T.; van Steenkiste, S.; Kurach, K.; Marinier, R.; Michalski, M.; and Gelly, S. 2019. Towards Accurate Generative Models of Video: A New Metric Challenges. arXiv:1812.01717. Villegas, R.; Babaeizadeh, M.; Kindermans, P.-J.; Moraldo, H.; Zhang, H.; Saffar, M. T.; Castro, S.; Kunze, J.; and Erhan, D. 2022. Phenaki: Variable length video generation from open domain textual description. arXiv preprint arXiv:2210.02399. Wan, C.-H.; Chuang, S.-P.; and Lee, H.-Y. 2019. Towards audio to scene image synthesis using generative adversarial network. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 496–500. IEEE. Wang, C.; Chen, S.; Wu, Y.; Zhang, Z.; Zhou, L.; Liu, S.; Chen, Z.; Liu, Y.; Wang, H.; Li, J.; et al. 2023a. 
Neural codec language models are zero-shot text to speech synthesizers. arXiv preprint arXiv:2301.02111. Wang, J.; Yuan, H.; Chen, D.; Zhang, Y.; Wang, X.; and Zhang, S. 2023b. ModelScope Text-to-Video Technical Report. arXiv:2308.06571. Wu, C.; Huang, L.; Zhang, Q.; Li, B.; Ji, L.; Yang, F.; Sapiro, G.; and Duan, N. 2021. Godiva: Generating opendomain videos from natural descriptions. arXiv preprint arXiv:2104.14806. Wu, C.; Liang, J.; Ji, L.; Yang, F.; Fang, Y.; Jiang, D.; and Duan, N. 2022a. N¨uwa: Visual synthesis pre-training for neural visual world creation. In European conference on computer vision, 720–736. Springer. Wu, H.-H.; Seetharaman, P.; Kumar, K.; and Bello, J. P. 2022b. Wav2clip: Learning robust audio representations from clip. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4563–4567. IEEE. Yariv, G.; Gat, I.; Wolf, L.; Adi, Y.; and Schwartz, I. 2023. AudioToken: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image Generation. arXiv preprint arXiv:2305.13050. Yu, J.; Xu, Y.; Koh, J. Y.; Luong, T.; Baid, G.; Wang, Z.; Vasudevan, V.; Ku, A.; Yang, Y.; Ayan, B. K.; et al. 2022. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2(3): 5. Yu, L.; Cheng, Y.; Sohn, K.; Lezama, J.; Zhang, H.; Chang, H.; Hauptmann, A. G.; Yang, M.-H.; Hao, Y.; Essa, I.; et al. 2023. Magvit: Masked generative video transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10459–10469. ˙Zelaszczyk, M.; and Ma´ndziuk, J. 2022. Audio-to-image cross-modal generation. In IJCNN. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6647 | 2024 | 738 |
18,560 | AltDiffusion: A Multilingual Text-to-Image Diffusion Model Fulong Ye1,2*†, Guang liu2*‡, Xinya Wu2, Ledell Wu2 1 Beijing University of Posts and Telecommunications, Beijing, China 2 Beijing Academy of Artificial Intelligence fulong [email protected] {liuguang, yxwu, wuyu}@baai.ac.cn Abstract Large Text-to-Image(T2I) diffusion models have shown a remarkable capability to produce photorealistic and diverse images based on text inputs. However, existing works only support limited language input, e.g., English, Chinese, and Japanese, leaving users beyond these languages underserved and blocking the global expansion of T2I models. Therefore, this paper presents AltDiffusion, a novel multilingual T2I diffusion model that supports eighteen different languages. Specifically, we first train a multilingual text encoder based on the knowledge distillation. Then we plug it into a pretrained English-only diffusion model and train the model with a two-stage schema to enhance the multilingual capability, including concept alignment and quality improvement stage on a large-scale multilingual dataset. Furthermore, we introduce a new benchmark, which includes Multilingual-General-18(MG-18) and Multilingual-Cultural18(MC-18) datasets, to evaluate the capabilities of T2I diffusion models for generating high-quality images and capturing culture-specific concepts in different languages. Experimental results on both MG-18 and MC-18 demonstrate that AltDiffusion outperforms current state-of-the-art T2I models, e.g., Stable Diffusion in multilingual understanding, especially with respect to culture-specific concepts, while still having comparable capability for generating high-quality images. All source code and checkpoints could be found in https://github.com/superhero-7/AltDiffuson. Introduction In recent years, there has been an emerging interest in large Text-to-Image(T2I) diffusion models, such as Stable Diffusion(SD)(Rombach et al. 2022), Imagen(Saharia et al. 2022) and DALLE2(Ramesh et al. 2022), due to their remarkable ability to produce photorealistic and diverse images based on text input. A limitation of these large T2I diffusion models is that they only support prompts in English, which is inconvenient for non-English users, e.g., Spanish or French. NonEnglish users usually utilize T2I diffusion models with the Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. *Equal contribution. †Work done during internship with Beijing Academy of Artificial Intelligence. ‡Corresponding Author. help of translation tools, which may lead to translation error and information loss, especially in some culture-specific concepts. For example, the name of the famous Chinese painter Baishi Qi may be translated as ”white stone” in English. And the translation process is getting more complex when it comes to mixed language prompts. Intuitively, a T2I generative model uses native languages directly without additional translation steps have no such problems. Recently, some scholars have begun to develop multilingual T2I diffusion models. Taiyi-Bilingual(Wang et al. 2022b), ERNIEViLG 2.0(Feng et al. 2023) are T2I bilingual models that support both Chinese and English. However, these T2I diffusion models are still limited by the scarcity of language varieties. 
To address the problem, we propose a novel multilingual T2I diffusion model, which is capable of processing eighteen languages1 that cover 46.94% of the world’s first-language speakers and 27.64% of the world’s secondlanguage speakers, named AltDiffusion(AD), along with an efficient training approach. We first train a multilingual text encoder based on the knowledge distillation(Chen et al. 2022) to enhance the language capability to support eighteen languages. Then, the parameters of the text encoder are frozen and plugged into a pretrained English-only diffusion model. Next, we propose a two-stage training schema to enhance the language capability of the diffusion model. In the first stage, we train the K, V matrix in cross-attention of the UNet on a multilingual dataset LAION 5B(Schuhmann et al. 2022) to align the embedding space between the UNet and the text encoder. In the second stage, all parameters of the UNet are unfrozen to improve the quality of the generated images using LAION Aesthetics(Shing and Sawada 2022a). In addition, a classifier-free guidance training technique is employed to further improve the generation quality. It is worth noting that our training approach possesses strong generality and can support any pretrained T2I diffusion models. To evaluate our AD model, we introduce a benchmark including two evaluation datasets that focus on two aspects of multilingual generation, respectively. For general 1Eighteen languages: English, Chinese, Japanese, Thai, Korean, Hindi, Ukrainian, Arabic, Turkish, Vietnamese, Polish, Dutch, Portuguese, Italian, Spanish, German, French, and Russian. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6648 Chinsse 小桥流水人家 Portuguese Bela vista da floresta de Sintra, pintura em aquarela Ukrainian Obraz Pałacu Kultury i Nauki, olej na płótnie French pintura de la torre eiffel Italian Pittura ad acquerello di spaghetti Spanish Retrato de mujer, estilo Picasso English The Hay Wain, John Constable Vietnamese Tranh sơn dầu sông Mekong Dutch Van Goghs sterrenhemel German ein Bild von Schloss Neuschwanstein Russian Від на Байкал, карціна алеем Ukrainian Майдан Незалежності, Київ, олійний живопис Arabic لوحة زيتية لرجل عربي برداء Italian पहाड़ी महहलाएँ, अमृता शेरगिलल Thai ชีวิตชนบทไทย อรรคพล เมฆ พจน์ Japanese 宮崎駿のスタイル、精巧な ディテ ールで描かれた飛行機の絵 Turkish Catedral de Sevilla, pintura al óleo Korean 서울타워,유화 Figure 1: Images generated by AltDiffusion with prompts in various languages. We select prompts with culture-specific concepts in different languages to demenstrate the strong capability of multilingual T2I generation of AltDiffusion. quality evaluation, we expand the data of XM-3600 by filtering high-quality image-text pairs from WIT and construct a high-quality dataset Multilingual-General-18(MG18) that includes 7,000 images per language to evaluate FID(Heusel et al. 2017), IS(Salimans et al. 2016), and CLIP Sim(Hessel et al. 2021). For culture-specific concepts evaluation, we introduce Multilinguale-Cultural-18(MC-18), a culture-specific dataset with 50 text prompts per language about culture-specific concepts of different countries. The MC-18 is the first dataset about culture-specific concepts. This benchmark provides robust and comprehensive evaluation for multilingual T2I generation. Our experimental results on MG-18 demonstrate that AD is the first multilingual T2I diffusion model that supports eighteen languages and outperforms other multilingual diffusion models, e.g. Taiyi(Wang et al. 
2022a) and Japanese SD models(Shing and Sawada 2022b), in FID, IS, and CLIP Sim. In addition, AD surpasses translation-based SD in CLIP Sim of all languages and achieves comparable results in FID and IS, proving that AD is better than SD in multilingual understanding and can generate almost the same high-quality images as SD on general prompts. Experimental results on MC-18 show that AD beats translation-based SD in the culture-specific concepts in all languages. Our contributions are as follows: • We introduce AltDiffusion, a novel multilingual diffusion model that supports eighteen languages, which covers 46.94% of the world’s first-language speakers and 27.64% of the world’s second-language speakers. • We introduce a benchmark that includes two datasets for evaluating T2I generative model: a general quality evaluation dataset MG-18 and a culture-specific dataset MC18. This benchmark provides robust and comprehensive evaluation for multilingual T2I generation. • AltDiffusion outperforms other multilingual diffusion models and performs better than a translation-based Stable Diffusion in multilingual understanding capability, especially in culture-specific concepts. Related Work Multilingual Text-to-image Generation Recently, T2I diffusion models ((Rombach et al. 2022), (Saharia et al. 2022), (Ramesh et al. 2022), (Zhang et al. 2021), (Feng et al. 2022), (Ding et al. 2021), (Ding et al. 2022)) achieve remarkable success in generating photorealistic and diverse images. Stable Diffusion(SD) is a prominent open-source framework with a considerable community. SD model consists of three parts: autoencoder is responsible for encoding images and decoding the pictures into and from the latent space; text encoder is accountable for encoding text prompts; Unet(Ronneberger, Fischer, and Brox 2015) is responsible for predicting noise based on the language embedding in latent space. Despite its strong generative capability, the fact that the SD model can only support English input still leads to limitations. Some works, such as CogView((Ding et al. 2021), (Ding et al. 2022)) and ERNIEViLG((Zhang et al. 2021), (Feng et al. 2022)), start to explore the T2I diffusion model that can support multilingual text prompt. Only some studies try to extend the applicability of the SD beyond English to other languages, e.g., Taiyi(Wang et al. 2022a) and Japanese SD (Shing and Sawada 2022b). However, these T2I generative models are still limited by the scarcity of language varieties. In this work, We are committed to constructing a multilingual T2I diffusion model which can serve most of the world’s population. Multilingual CLIP CLIP shows a solid capability to provide a robust text-image representation in English. Recently, several works have tried to expand the language capabilities of CLIP to other languages. For instance, previous studies (Aggarwal and Kale 2020), (Carlsson et al. 2022) attempt to create multilingual CLIP models. AltCLIP (Chen et al. 2022) applies knowledge distillation techniques to develop a state-of-the-art multilingual CLIP model by leveraging the XLM-R multilingual model (Conneau et al. 2019). Following AltCLIP, we retrain the text encoder to align the penultimate hidden layer of OpenCLIP(Cherti et al. 2022) with the text encoder used in SD v2 to create a multilingual CLIP model. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6649 A cute corgi dog. 
牡丹花,中国风,国画 富士山と桜と水彩画 noise noise noise Reconstruction Loss Multilinguale Encoder (a) Text Encoder Training (b) Concept Alignment (c) Quality Improvement Second Stage of Training Diffusion First Stage of Training Diffusion Frozen Tuned K V K V UNet K V K V K V K V [ CLS ] A Koki dog Peony, Chinese style, Chinese painting Mt. Fuji, cherry blossoms and watercolors Teacher Model OpenCLIP text encoder FC Un perro Corgi 富士山と桜と水彩画 牡丹花,中国风,国画 [ EOS ] MSE Loss Multilinguale Encoder XLM-R Figure 2: Illustration of the training approach. First, we train a multilingual text encoder. Then in the concept alignment stage, we only unfreeze the k and v parameters in cross-attention. In the quality improvement stage, all parameters of the UNet are unfrozen. Both stages are trained in 18 languages(Here only illustrate English, Chinese, and Japanese). Multilingual Image Caption Datasets Image caption datasets are widely used for multimodal tasks, which are mainly accessible in English, e.g., Flickr30k(Plummer et al. 2017), MS COCO(Lin et al. 2014). These monolingual datasets are limited by language-linguistic diversity. Thus some works have focused on image caption datasets in different languages. Multi30K(Elliott et al. 2016) is a dataset that supports German, while Wikimedia Commons(Schamoni, Hitschler, and Riezler 2018) can support German, French, and Russian. Recently, datasets that support multiple languages such as WIT(Srinivasan et al. 2021), XM 3600(Thapliyal et al. 2022) are proposed. WIT is collected by gathering diverse textual content linked to an image from Wikipedia, which has 37.6 million image-text pairs in 100+ languages. XM 3600 is a manually annotated dataset with 3,600 image-text in 36 languages. Based on XM3600 and WIT datasets, we build a dataset MG-18 to evaluate AD. Method The current Large T2I diffusion models usually consist of a text encoder and a UNet model. To train a multilingual T2I diffusion model, we first enhance the language capability of the text encoder and then align it with the UNet to enhance the language capability of UNet. Enhance Language Capability of the Text Encoder Following AltCLIP(Chen et al. 2022), we retrain the text encoder to support 18 languages based on the knowledge distillation. As shown in Figure 2(a), the text encoder from OpenCLIP(Cherti et al. 2022) is the teacher model, and XLMR(Conneau et al. 2019) is the student model. Given parallel prompts (textenglish, textotherlanguage), the textenglish input is fed into the teacher model, and the textotherlanguage input is fed into the student model. We minimize the Mean Squared Error(MSE) between [TOS] embedding of the teacher model and [CLS] embedding of the student model. A fully connected network maps the outputs of XLM-R and the OpenCLIP text encoder penultimate layer to the same dimensionality. After training, we obtain a multilingual text encoder whose embedding space is close to the original OpenCLIP text encoder. Enhance Language Capability of the UNet After training the text encoder, the parameters of the text encoder are frozen and plugged into an off-the-shelf pretrained English-only diffusion model(here, we use SD, but our method can be extended to other diffusion models with a text encoder). Then we use two-stage training schema, including concept alignment and quality improvement stage, to transform the English-only diffusion model into a multilingual one. 
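To make the distillation step concrete, the following is a minimal PyTorch-style sketch of the objective described above (our own illustration, not the released code): the frozen teacher's sentence-level embedding of the English prompt is matched, through a fully connected projection, by the student's [CLS] embedding of the parallel prompt. Dimensions and tensor inputs are placeholders.

import torch
import torch.nn as nn

class DistillationHead(nn.Module):
    """Projects the student's [CLS] state into the teacher's text-embedding space."""
    def __init__(self, student_dim, teacher_dim):
        super().__init__()
        self.proj = nn.Linear(student_dim, teacher_dim)

    def forward(self, cls_state):
        return self.proj(cls_state)

def distillation_loss(teacher_embedding, student_cls, head):
    # teacher_embedding: sentence embeddings from the frozen OpenCLIP text encoder (English prompt)
    # student_cls: [CLS] embeddings from XLM-R (parallel prompt in another language)
    # Only the student and the projection head receive gradients.
    return nn.functional.mse_loss(head(student_cls), teacher_embedding.detach())

# Illustrative shapes only (hypothetical dimensions):
head = DistillationHead(student_dim=768, teacher_dim=1024)
loss = distillation_loss(torch.randn(8, 1024), torch.randn(8, 768), head)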
Concept Alignment Stage
This stage aims to re-establish the relationship between the text and the images by aligning the embedding space between the text encoder and the UNet. The training dataset is LAION (Schuhmann et al. 2022), a large-scale dataset with a multilingual corpus (a detailed introduction is given in the Dataset section). Preliminary analysis (Gandikota et al. 2023) reveals that the cross-attention mechanism of the diffusion model plays a crucial role in matching the text and images. Therefore, as illustrated in Figure 2(b), we freeze the multilingual text encoder, the autoencoder, and most of the parameters of the UNet, then train the K, V matrices of the cross-attention module using the denoising diffusion objective (Ho, Jain, and Abbeel 2020):
\mathcal{L} = \mathbb{E}_{\mathcal{E}(x),\, t,\, c,\, \epsilon \sim \mathcal{N}(0,1)} \left[ \lVert \epsilon - \epsilon_\theta(z_t, c, t) \rVert_2^2 \right] \quad (1)
where t ∼ Uniform[1, T], z_t is a noisy version of the latent embedding z of the input image x (i.e., z = E(x)) obtained using ϵ ∼ N(0, I), and θ denotes the parameters of the UNet, which predicts the noise ϵ_θ(z_t, c, t) conditioned on z_t, the multilingual text condition embedding c, and t.

Table 1: Data samples of MC-18 in Chinese (ch), Japanese (ja), and Spanish (es). For convenience of reading, the examples shown here are translated into English. For more complete examples, please visit our GitHub homepage.
ch | Painting: A landscape painting by Baishi Qi | Literature: Jade-like adornment forms a tall tree, with ten thousand branches hanging down like green silk threads. | Festival: On the eve of Chinese New Year, every household hangs bright red lanterns. | Clothe: The girl wearing a qipao performed on stage
ja | Painting: Miyazaki Hayao's style, the fantastic scene of the night where the spirits shining in the woods dance | Literature: In the cauliflower field, the moon faces west and the sun faces east | Festival: At the night of Japan's Star Festival, the city lights up and decorates special decorations. | Clothe: The embroidery of the kimono is delicate embroidery.
es | Painting: "The Mother of Picasso": a touching portrait of his mother | Literature: In a deep canyon, the rocky walls rise impressively, creating a spectacle of natural shapes and colors. | Festival: Carnival in Spain is an explosion of joy and color that fills the streets with fun and festivity. | Clothe: The "traje de baturro" from Aragón consists of a vest, sash, and trousers.

Previous observation (Rombach et al. 2022) indicates that reducing the dimensions of images from 512×512 to 256×256 results in minimal damage to semantic information, only eliminating some imperceptible details. In line with our objective of aligning semantic information between modalities, we use the lower image resolution of 256×256 during this stage to facilitate faster training while minimizing computational costs.

Quality Improvement Stage
In the second stage, as illustrated in Figure 2(c), we apply a continuous learning strategy by loading the first-stage checkpoint and subsequently fine-tuning all the parameters of the UNet using the same objective function as defined in Equation 1. We train our model on high-quality multilingual aesthetic datasets with a resolution of 512×512 to improve the generative quality. Furthermore, we also drop 10% of text inputs, following the same setting as SD (Rombach et al. 2022), for classifier-free guidance (Ho and Salimans 2022), which is helpful when calculating the final noisy score during inference.
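Before the guided-score formula below, a rough sketch of how the two stages might be set up in code (our illustration under assumed module names, not the released implementation): the concept-alignment stage optimizes only the cross-attention key/value projections, and the text condition is randomly dropped for classifier-free guidance. The "attn2" / "to_k" / "to_v" naming follows common latent-diffusion codebases and is an assumption.

import random
import torch.nn as nn

def select_concept_alignment_params(unet: nn.Module):
    # Freeze everything except the K/V projections of the cross-attention layers.
    # The name filter assumes the conventional "attn2.to_k" / "attn2.to_v" layout.
    trainable = []
    for name, param in unet.named_parameters():
        is_cross_attn_kv = "attn2" in name and (".to_k." in name or ".to_v." in name)
        param.requires_grad = is_cross_attn_kv
        if is_cross_attn_kv:
            trainable.append(param)
    return trainable

def maybe_drop_condition(text_embedding, null_embedding, drop_prob=0.1):
    # Replace the text condition with an unconditional embedding 10% of the time,
    # which enables classifier-free guidance at inference.
    return null_embedding if random.random() < drop_prob else text_embedding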
˜ϵθ (zt, c, t) is obtained by a combination of conditioned score ϵθ (zt, c, t) and unconditioned score ϵθ (zt, t), which is formalized in the following equation: ˜ϵθ (zt, c, t) = ϵθ (zt, t) + α (ϵθ (zt, c, t) −ϵθ (zt, t)) (2) where α > 1 is a scale weight of condition. With the completion of the second stage, we finally obtain a multilingual T2I diffusion model that meet the needs of users across different linguistic backgrounds. Dataset Training Data All the image-text pairs we use to train AD come from LAION (Schuhmann et al. 2022). LAION 5B LAION 5B includes three sub-datasets: LAION2B-en, LAION2B-multi and LAION1B-nolang. LAION2B-en contains 2.32 billion image-text pairs in English. LAION2B-multi contains 2.26 billions image-text pairs and the text comes from 100+ languages beyond English. In the first training stage, we filter 1.8 billions data in eighteen languages from LAION2B-multi and combine it with LAION2B-en. LAION Aesthetics LAION Aesthetics contains several collections of subsets from LAION 5B with high quality. An Aesthetics Predictor is trained using LAION to predict the aesthetics score of images on a scale of 1 to 10, with higher aesthetics score being better. Then the Aesthetics Predictor is used for filtering the data. To conduct the second training stage, we filter eighteen languages from the LAION Aesthetics and the LAION Aesthetics V1-multi dataset with the predicted aesthetics score higher than seven. Evaluation Benchmark To evaluate the capability of AD to generate images and capture culture-specific concepts of different languages, we introduce two datasets: Multilingual-General-18(MG-18) for generation quality evaluation and Multilingual-Cultural18(MC-18) for culture-specific concepts evaluation. Multilinguale-General-18(MG-18) We construct a large and high-quality dataset MG-18 which contains 7,000 image-text pairs in 18 languages, by expanding XM 3600 with high-quality images from WIT in two steps. In the first step, we use an Optical Character Recognition(Du et al. 2020) system to filter out images with more than five words, considering images with excessive text tend to be document images, which are unsuitable for evaluating the generation capabilities of the T2I model. Next, we use AltCLIP to calculate the similarity score between the image and the caption, and then keep those with a score higher than 0.2. Multilinguale-Cultural-18(MC-18) One of the important capabilities of multilingual T2I models is to understand culture-specific concepts of different languages. To evaluate this, we construct MC-18, a dataset that contains culture-specific concepts in painting, literature, festival, food, clothe, and landmark. First, we select the representatives of each language in the above six aspects. Then we use ChatGPT to generate prompts and ask the crowdsourcing personnels to select suitable prompts. We create 50 prompts for each of the 18 languages. Some samples of MC18 are shown in Table 1 The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6651 A stuffed bear is sitting outside an eatery. A black and white photo shows a boat sitting on a beach. AD Original SD Taiyi Bilingual EN ⼀间有⿊⾊沙发、⽊地板和书柜的客厅。 (A living room with a black couch, wood floor and bookcase.) AD Taiyi Bilingual CN Taiyi Chinese クマの群れが雑⽊林の中を歩いています。 (A herd of bears walks through a wooded forest.) AD Japanese SD (c) Japanese (b) Chinese (a) English 海滩上的⼀把空椅⼦在遮阳伞下。 (A empty chair on the beach is under a parasol.) 草むらに佇む⻩⾊いバス。 (A yellow bus that is sitting in the grass.) 
Figure 3: Comparison of generated results with original English SD and other multilingual diffusion models on MG-18. Results of AD are framed with yellow dashed boxes. Obvious differences in generation are highlighted in orange in the prompts. Experiments Implement Details The optimizer is AdamW(Loshchilov and Hutter 2017). The learning rate is 1e-4, with 10,000 warmup steps on 64 NVIDIA A100-SXM4-40GB GPUs. Follow AltCLIP(Chen et al. 2022), we use knowledge distillation to retrain the multilingual text encoder. Through the training process, the text encoder remains frozen. In addition, we adopt a continuous learning strategy for model training. In the concept align stage, we use the SD v2.1 512-base-ema checkpoint to initialize all parameters except the text encoder, with a batch size of 3,072 and a resolution of 256x256. The training process on LAION2Ben and LAION2B-multi for 330,000 steps takes approximately eight days. In the quality improvement stage, the training starts at the 330,000-step checkpoint, with a batch size of 3,840 on LAION Aesthetics V1-en and V1-multi and 270,000-steps with a resolution of 512x512, which takes around seven days. After that, a new round of training continues from the 270,000-step checkpoint for another 150,000 steps, with 10% of the text randomly discarded for classifierfree guidance learning, taking approximately four days. The teacher model using in knowledge distillation is OpenCLIP ViT-H-14. We also use Xformer and Efficient Attention to save memory use and speed up training. The decay of EMA is 0.9999. Results on MG-18 We evaluate the general multilingual T2I generative capability of AD on MG-18. We compare AD with two kinds of models. The first is translation-based SD, which requires translating prompts in other languages into English before generation. The second is multilingual baseline diffusion models that beyond English. The inference resolution of all models is 512×512, using 50 DDIM steps and 9.0 classifierfree guidance scale. Metrics We use FID and IS for evaluating the generation quality, and use Multilingual CLIP(Radford et al. 2021) to calculate the cosine similarity score to evaluate the consistency of generated images with multilingual text. AltDiffusion(AD) Stable Diffusion(SD) FID IS C-Sim FID IS C-Sim English 19.02 26.97 0.324 18.02 27.45 0.322 Chinese 20.32 29.46 0.350 18.51 28.30 0.317 Japanese 18.90 28.13 0.356 18.09 29.39 0.328 Thai 19.94 27.63 0.353 19.82 25.61 0.240 Korean 20.54 27.63 0.338 18.69 28.79 0.284 Hindi 20.92 25.90 0.338 18.52 26.30 0.311 Ukrainian 19.27 28.18 0.346 17.36 28.54 0.314 Arabic 20.32 28.71 0.346 18.34 28.90 0.298 Turkey 19.54 28.53 0.347 17.40 28.54 0.315 Vietnamese 19.02 29.22 0.346 17.02 30.86 0.312 Polish 19.67 29.11 0.347 18.36 30.41 0.327 Dutch 20.14 27.64 0.350 17.91 29.48 0.329 Portuguese 20.78 28.59 0.352 18.82 28.56 0.302 Italian 19.77 27.19 0.352 17.38 28.53 0.317 Spanish 20.15 27.64 0.357 20.41 25.31 0.260 German 18.74 27.58 0.359 17.06 28.66 0.347 French 18.99 28.34 0.357 17.09 29.71 0.341 Russian 19.18 28.26 0.347 17.49 29.42 0.322 Table 2: Comparison of evaluation results with translationbased SD on MG-18. C-Sim means clip similarity. Compare with Translation-based SD v2.1 We use the original multilingual prompts directly as the input of AltDiffuison. Considering that SD only supports English inputs, we first use the state-of-the-art opensource translation model NLLB-3B2 to translate other languages into English and then feed them into SD. 
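For reference, the CLIP Sim metric reported in the tables is the cosine similarity between image and text embeddings from a multilingual CLIP model; a minimal sketch of that computation follows (placeholder tensors stand in for actual encoder outputs):

import torch
import torch.nn.functional as F

def clip_similarity(image_features, text_features):
    # image_features, text_features: (N, D) embeddings of paired images and prompts
    # produced by a multilingual CLIP image/text encoder (placeholders here).
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)
    return (image_features * text_features).sum(dim=-1).mean()

# Example with random placeholder embeddings:
score = clip_similarity(torch.randn(4, 512), torch.randn(4, 512))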
As shown in Table 2, AD surpasses translation-based SD in CLIP Sim for all languages and achieves comparable results in FID and IS, proving that AD is better than SD in multilingual understanding and can generate images of almost the same high quality as SD on general prompts.

Compare with Other Baselines
We compare AD with other multilingual baseline diffusion models, including Taiyi Chinese, Taiyi Bilingual, and Japanese SD. As shown in Table 3, AD outperforms the other multilingual baseline diffusion models on all metrics; in CLIP Sim in particular, AD achieves improvements of 24.1%, 26.8%, and 28.1% on English, Chinese, and Japanese, respectively. This shows that AD has stronger image generation capability and multilingual understanding capability than the other multilingual baseline models.

Table 3: Comparison of zero-shot evaluation results with other multilingual baselines on MG-18 (FID ↓, IS ↑, CLIP Sim ↑).
English | Taiyi-Bilingual: FID 25.76, IS 26.54, CLIP Sim 0.261 | AD: FID 19.02, IS 27.45, CLIP Sim 0.324
Chinese | Taiyi-CN: FID 20.72, IS 28.91, CLIP Sim 0.276 | Taiyi-Bilingual: FID 23.87, IS 26.96, CLIP Sim 0.259 | AD: FID 20.32, IS 29.46, CLIP Sim 0.350
Japanese | Japanese SD: FID 22.78, IS 30.57, CLIP Sim 0.278 | AD: FID 18.90, IS 28.13, CLIP Sim 0.356

To demonstrate the strong capability of AD in multilingual T2I generation, we provide generated images in various languages in Figure 1. As shown in Figure 3(b) and (c), AD can generate images that are more consistent with multilingual text, while other models often ignore or mistake concepts, for example "black couch", "wood floor", and "parasol" in Chinese, and "bears" and "yellow bus" in Japanese. AD generates results comparable to SD and better than Taiyi Bilingual in English, as shown in Figure 3(a).

2 https://huggingface.co/facebook/nllb-200-3.3B

Results on MC-18
Figure 4: Comparison of generated results with translation-based SD on MC-18. The prompts are (a) "海风吹不断,江月照还空" (a line from a Chinese poem), (b) "一幅张大千的山水画" (a landscape painting by Daqian Zhang), (c) "春の海" (the spring sea, in Japanese), and (d) "新鮮な食材がテーブルにたくさん置いています" (many fresh ingredients placed on the table, in Japanese).

Annotators familiar with the cultures of different countries are asked to conduct a human evaluation of the model's understanding of culture-specific concepts.

Evaluation Setting
We assign three annotators to each language. Annotators see two images generated from the same prompt by AD and SD, respectively. They are then asked to score the images along two dimensions, Culture Consistency and Image-Text Consistency, on a scale of 1-5. After scoring, the annotators select the final result from ["Alt is better", "SD is better", "Same"]. We then calculate the total scores according to the following formula:
(\mathrm{Total}_{Alt}, \mathrm{Total}_{SD}) = \left( \frac{|A| + 0.5\,|C|}{N},\; \frac{|B| + 0.5\,|C|}{N} \right) \quad (3)
where |A|, |B|, and |C| are the counts of "Alt is better", "SD is better", and "Same", respectively, and N = |A| + |B| + |C|. The results for each language are the average of the three annotators' scores.

Evaluation Results
As shown in Table 4, AD beats SD in the final total scores for all languages and stands out in Culture and Image-Text Consistency, showing that AD performs better in multilingual understanding. The evaluation results indicate that, through training with large-scale multilingual data, culture-specific concepts of different languages can be injected into the model.
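A small helper reproducing the score aggregation of Equation 3 above (a sketch; the function and variable names are ours):

from typing import List, Tuple

def total_scores(votes: List[str]) -> Tuple[float, float]:
    # Each vote is one of "Alt is better", "SD is better", or "Same";
    # ties contribute half a point to both sides, as in Equation 3.
    a = votes.count("Alt is better")
    b = votes.count("SD is better")
    c = votes.count("Same")
    n = a + b + c
    return (a + 0.5 * c) / n, (b + 0.5 * c) / n

# Example: 6 wins for AD, 3 for SD, 1 tie -> (0.65, 0.35)
print(total_scores(["Alt is better"] * 6 + ["SD is better"] * 3 + ["Same"]))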
AltDiffusion(AD) Stable Diffusion(SD) Culture T-I Total Culture T-I Total Cons Cons Cons Cons Chinese 4.125 3.775 0.769 3.356 3.350 0.231 Japanese 4.507 4.487 0.757 3.301 3.253 0.243 Thai 4.027 4.044 0.648 3.383 3.579 0.352 Korean 3.236 3.371 0.607 3.135 3.287 0.393 Hindi 4.824 4.301 0.523 4.784 4.261 0.477 Ukrainian 4.422 3.955 0.641 4.322 3.905 0.359 Arabic 4.824 4.401 0.609 4.647 4.314 0.391 Turkey 3.376 3.293 0.510 3.328 3.241 0.490 Vietnamese 3.637 3.511 0.533 3.467 3.396 0.467 Polish 4.243 3.735 0.546 4.130 3.676 0.454 Dutch 4.495 4.465 0.527 4.195 4.215 0.473 Portuguese 3.757 3.627 0.639 3.639 3.509 0.361 Italian 3.586 3.449 0.565 3.512 3.375 0.435 Spanish 4.202 4.113 0.554 4.138 4.064 0.446 German 3.981 3.916 0.698 3.825 3.688 0.302 French 4.655 4.582 0.527 4.652 4.579 0.473 Russian 3.258 3.086 0.556 2.868 3.002 0.444 Table 4: Comparison of human evaluation results with translation-based SD on MC-18. T-I cons means Text-Image consistency. We illustrate the performance difference between AD and SD on MC-18 in Figure 4. AD has a better understanding capability in culture-specific concepts. For example, in Figure 4(a), the prompt is actually a Chinese poem, but SD generates a realistic image. In Figure 4(b), models are asked to generate a landscape painting by Chinese traditional painter Daqian Zhang. The image of AD shows the characteristics of Chinese traditional painting, while the image of SD is an oil painting. As for “the spring sea” in Japanese in In Figure 4(c), AD generates cherry blossoms more suitable for artistic concept. Application Genernal Application Figure 6 shows the general capacities of AD, such as Image to Image(Img2Img) and Inpainting. AD supports users to to use Image to Image or Inpaint function directly use languages beyond English, e.g., Chinese. Compatibility with Downstream Tools Although large T2I models achieve impressive performance, they still lack controllability in specific commercial applications. Recently, some methods, such as ControlNet(Zhang and Agrawala The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6653 ⼀只带眼镜的sks狗 (A SKS dog with glasses) ⼀只穿旗袍的sks狗 (A SKS dog with glasses) ⼭峦起伏,苍茫的⼭川壮丽⽆垠, <sks> (The undulating mountains and vast rivers are magnificent and boundless<sks>) 仕⼥红妆,锦绣华服, <sks> (Ladies adorned in red makeup, wearing splendid and exquisite clothing<sks>) ⼀只可爱的⼩⻦(A lovely bird.) ⼀个神奇的房间(A magical room.) 穿⻄装的男⼈(The man in the suit.) Canny edge condition Normal map condition Humanpose condition Original picture Original picture Original picture Training data: Training data Training data (b) LoRA (a) ControlNet Figure 5: AD has strong compatibility with downstream T2I tools such as ControlNet and LoRA. (b) Inpainting 满脸是胡子的大叔(The old man with the beard) (a) Img2Img: 年轻女孩肖像(Portrait of young girl) Figure 6: The images generated by Img2Img and Inpainting function using AD . 一个穿着disney dress 的小女孩在微笑 सारी is a female characteristic clothing in India Ein Gemälde, das die Sterne am Himmel darstellt, Van Gogh's Style Spaghetti是一种美味的食 物 Une dame à la mode in front of the Statue of Liberty A beautiful วัดวาอารามin Thailand There is 한국치킨on the plate 宮崎駿風の絵画,there are many elves in the beautiful forest Figure 7: Images generated by mixed languages using AD. 2023) and LoRA(Hu et al. 2021), have garnered widespread attention for enhancing model controllability. Compatibility with these downstream tools is essential for a Large T2I. As shown in Figure 5, AD is totally compatible with ControlNet and LoRA. 
Thus users can use their imagination to create images easily. Mixed Language Generation As shown in Figure 7, AD supports mixed language input. It will be very troublesome if the model can only support English because users need to translate various languages into English and then concatenate them as input. AD can freely combine different languages, such as Thai and English, Japanese and English, Chinese and Korean, etc. Conclusion This paper introduces AltDiffusion(AD), a multilingual T2I diffusion model that supports eighteen languages. We train a multilingual text encoder and plug it into pretrained diffusion model, and then train the diffusion model using a twostage training schema. In addition, we introduce a benchmark to evaluate AD, including two datasets focusing on general and culture-specific evaluation: MG-18 and MC-18. Experimental results show AltDiffusion outperforms current state-of-the-art T2I models, e.g., Stable Diffusion in multilingual understanding, especially with respect to culturespecific concepts, while still having comparable capability for generating high-quality images. Meanwhile, as a large multilingual T2I diffusion model, AD is compatible with all downstream T2I tools, e.g., ControlNet and LoRA, which may promote research and application in multilingual T2I. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6654 Acknowledgements The work was supported by the National Key R&D Program of China (2022ZD0116312). We extend our gratitude to LAION for open-sourcing the dataset, to StabilityAI for open-sourcing the base models, and to team of stable diffusion researchers for providing helpful suggestions. Lastly, we acknowledge the data and infrastructure team at BAAI for their support throughout the project. References Aggarwal, P.; and Kale, A. 2020. Towards zero-shot Crosslingual Image retrieval. arXiv preprint arXiv:2012.05107. Carlsson, F.; Eisen, P.; Rekathati, F.; and Sahlgren, M. 2022. Cross-lingual and multilingual clip. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, 6848–6854. Chen, Z.; Liu, G.; Zhang, B.-W.; Ye, F.; Yang, Q.; and Wu, L. 2022. AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities. arXiv preprint arXiv:2211.06679. Cherti, M.; Beaumont, R.; Wightman, R.; Wortsman, M.; Ilharco, G.; Gordon, C.; Schuhmann, C.; Schmidt, L.; and Jitsev, J. 2022. Reproducible scaling laws for contrastive language-image learning. arXiv preprint arXiv:2212.07143. Conneau, A.; Khandelwal, K.; Goyal, N.; Chaudhary, V.; Wenzek, G.; Guzm´an, F.; Grave, E.; Ott, M.; Zettlemoyer, L.; and Stoyanov, V. 2019. Unsupervised crosslingual representation learning at scale. arXiv preprint arXiv:1911.02116. Ding, M.; Yang, Z.; Hong, W.; Zheng, W.; Zhou, C.; Yin, D.; Lin, J.; Zou, X.; Shao, Z.; Yang, H.; and Tang, J. 2021. CogView: Mastering Text-to-Image Generation via Transformers. arXiv preprint arXiv:2105.13290. Ding, M.; Zheng, W.; Hong, W.; and Tang, J. 2022. CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers. arXiv preprint arXiv:2204.14217. Du, Y.; Li, C.; Guo, R.; Yin, X.; Liu, W.; Zhou, J.; Bai, Y.; Yu, Z.; Yang, Y.; Dang, Q.; et al. 2020. Pp-ocr: A practical ultra lightweight ocr system. arXiv preprint arXiv:2009.09941. Elliott, D.; Frank, S.; Sima’an, K.; and Specia, L. 2016. Multi30K: Multilingual English-German Image Descriptions. Cornell University - arXiv,Cornell University - arXiv. 
Feng, Z.; Zhang, Z.; Yu, X.; Fang, Y.; Li, L.; Chen, X.; Lu, Y.; Liu, J.; Yin, W.; Feng, S.; et al. 2022. ERNIEViLG 2.0: Improving Text-to-Image Diffusion Model with Knowledge-Enhanced Mixture-of-Denoising-Experts. arXiv preprint arXiv:2210.15257. Feng, Z.; Zhang, Z.; Yu, X.; Fang, Y.; Li, L.; Chen, X.; Lu, Y.; Liu, J.; Yin, W.; Feng, S.; et al. 2023. ERNIE-ViLG 2.0: Improving text-to-image diffusion model with knowledgeenhanced mixture-of-denoising-experts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10135–10145. Gandikota, R.; Materzynska, J.; Fiotto-Kaufman, J.; and Bau, D. 2023. Erasing Concepts from Diffusion Models. arXiv preprint arXiv:2303.07345. Hessel, J.; Holtzman, A.; Forbes, M.; Bras, R. L.; and Choi, Y. 2021. Clipscore: A reference-free evaluation metric for image captioning. arXiv preprint arXiv:2104.08718. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; and Hochreiter, S. 2017. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33: 6840–6851. Ho, J.; and Salimans, T. 2022. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598. Hu, E. J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; and Chen, W. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Doll´ar, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In Computer Vision– ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, 740– 755. Springer. Loshchilov, I.; and Hutter, F. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. Plummer, B. A.; Wang, L.; Cervantes, C. M.; Caicedo, J. C.; Hockenmaier, J.; and Lazebnik, S. 2017. Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models. International Journal of Computer Vision, 74–93. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; Krueger, G.; and Sutskever, I. 2021. Learning Transferable Visual Models From Natural Language Supervision. In ICML. Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10684– 10695. Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, 234–241. Springer. Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E. L.; Ghasemipour, K.; Gontijo Lopes, R.; Karagol Ayan, B.; Salimans, T.; et al. 2022. Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neural Information Processing Systems, 35: 36479–36494. 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6655 Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; and Chen, X. 2016. Improved techniques for training gans. Advances in neural information processing systems, 29. Schamoni, S.; Hitschler, J.; and Riezler, S. 2018. A Dataset and Reranking Method for Multimodal MT of UserGenerated Image Captions. Conference of the Association for Machine Translation in the Americas,Conference of the Association for Machine Translation in the Americas. Schuhmann, C.; Beaumont, R.; Vencu, R.; Gordon, C.; Wightman, R.; Cherti, M.; Coombes, T.; Katta, A.; Mullis, C.; Wortsman, M.; et al. 2022. Laion-5b: An open largescale dataset for training next generation image-text models. arXiv preprint arXiv:2210.08402. Shing, M.; and Sawada, K. 2022a. Japanese Stable Diffusion. https://github.com/rinnakk/japanese-stable-diffusion. Shing, M.; and Sawada, K. 2022b. Japanese Stable Diffusion. https://github.com/rinnakk/japanese-stable-diffusion. Srinivasan, K.; Raman, K.; Chen, J.; Bendersky, M.; and Najork, M. 2021. Wit: Wikipedia-based image text dataset for multimodal multilingual machine learning. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2443– 2449. Thapliyal, A. V.; Pont-Tuset, J.; Chen, X.; and Soricut, R. 2022. Crossmodal-3600: A massively multilingual multimodal evaluation dataset. arXiv preprint arXiv:2205.12522. Wang, J.; Zhang, Y.; Zhang, L.; Yang, P.; Gao, X.; Wu, Z.; Dong, X.; He, J.; Zhuo, J.; Yang, Q.; Huang, Y.; Li, X.; Wu, Y.; Lu, J.; Zhu, X.; Chen, W.; Han, T.; Pan, K.; Wang, R.; Wang, H.; Wu, X.; Zeng, Z.; Chen, C.; Gan, R.; and Zhang, J. 2022a. Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence. CoRR, abs/2209.02970. Wang, J.; Zhang, Y.; Zhang, L.; Yang, P.; Gao, X.; Wu, Z.; Dong, X.; He, J.; Zhuo, J.; Yang, Q.; et al. 2022b. Fengshenbang 1.0: Being the foundation of chinese cognitive intelligence. arXiv preprint arXiv:2209.02970. Zhang, H.; Yin, W.; Fang, Y.; Li, L.; Duan, B.; Wu, Z.; Sun, Y.; Tian, H.; Wu, H.; and Wang, H. 2021. ERNIE-ViLG: Unified generative pre-training for bidirectional visionlanguage generation. arXiv preprint arXiv:2112.15283. Zhang, L.; and Agrawala, M. 2023. Adding Conditional Control to Text-to-Image Diffusion Models. arXiv:2302.05543. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6656 | 2024 | 739 |
18,561 | Bootstrapping Cognitive Agents with a Large Language Model Feiyu Zhu, Reid Simmons Carnegie Mellon University [email protected], [email protected] Abstract Large language models contain noisy general knowledge of the world, yet are hard to train or fine-tune. In contrast cognitive architectures have excellent interpretability and are flexible to update but require a lot of manual work to instantiate. In this work, we combine the best of both worlds: bootstrapping a cognitive-based model with the noisy knowledge encoded in large language models. Through an embodied agent doing kitchen tasks, we show that our proposed framework yields better efficiency compared to an agent entirely based on large language models. Our experiments also indicate that the cognitive agent bootstrapped using this framework can generalize to novel environments and be scaled to complex tasks. Introduction Large language models (LLM) such as GPT-4 (OpenAI 2023), have shown emerging capabilities after training on internet-scale text data with human feedback, and have been employed in robot planning (Huang et al. 2022), animal behavior analysis (Ye et al. 2023), human proxies (Zhang and Soh 2023), and many more. However, they have also been criticized for being susceptible to adversarial attacks (Zou et al. 2023), hallucination (Casper et al. 2023), and having diminishing returns for scaling (OpenAI 2023). Cognitive architectures are another approach in the pursuit of AI that attempts to model human cognition computationally (Newell 1994). Despite the variety of architectures developed, most of them share the same central components, consisting of declarative memory reflecting knowledge of the world, procedural memory dictating the agent’s behavior, and short-term working memory that assists reasoning and planning (Laird, Lebiere, and Rosenbloom 2017). The procedural memory is represented by a set of production rules, each with a precondition and an effect. Agents operate in perceive-plan-act cycles, dynamically matching relevant features of the environment to the production rules and applying their effects. Unlike operators in symbolic planning, production rules do not represent alternative actions but instead reflect different contextual knowledge (Laird 2022). These rules can be reinforced and modified throughout the agent’s learning process. Despite some pioneering Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. : AND → ⇒ LLM World Knowledge: (tomatoes belong in the fridge) Envt Knowledge: (gripper empty & table empty) Productions: (goal) (subtask: find a tomato) Task Stack: slice a tomato find a tomato push query generate observe act Figure 1: Overview of agent framework. It shows the agent executing the production of attending to a new subtask of finding a tomato when the original task is to slice a tomato and the tomato is not in the gripper nor on the table. Dotted lines represent the information a production rule may condition on. Solid lines represent information flow. work on data-driven cognitive model creation (Hake, Sibert, and Stocco 2022), almost all previous work generate their initial set of production rules manually, limiting their application to simple environments such as blocks world or psychology experiments (Park et al. 2023). In this work, we combine the two approaches in a complementary fashion (Figure 1). LLMs encode the common sense knowledge of the world (Madaan et al. 
2022) that can be used in place of human labor for constructing agents in the cognitive architecture. The reasoning and learning capabilities in the cognitive architecture can identify and filter the noise in LLMs while converting the knowledge in language to actionable productions of an embodied agent. This combined framework separates knowledge generation and knowledge application, and this modularity is the key to generalization. The LLM is responsible only for generating general knowledge, such as “if the task is to find an The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 655 object, the agent should explore the places where that object is commonly stored”. Since such knowledge can be applied to almost all objects and environments, the LLM needs to generate these only once, and it is the role of the cognitive architecture to dynamically match the environment to the generated knowledge. This is significantly different from using LLMs to generate plans directly, as the plans are grounded to the specific instance of the task (e.g., finding a specific object in the specific environment), and are non-trivial to generalize to novel environments without re-generation. The contribution of this paper is threefold: 1) we propose an agent framework that combines LLMs with customized cognitive architecture, 2) we demonstrate how it can learn to perform various kitchen tasks from bootstrapping, and 3) we show that, when applied to new environments, it requires significantly fewer tokens than querying LLM for actions. 1 Related Work Learning Through Program Synthesis Interactive Task Learning (ITL) (Laird et al. 2017) aims at teaching robots new skills in a one-shot fashion. Previous work implements this in the SOAR cognitive architecture and has shown effective task and environment transferability in domains such as board games (Kirk and Laird 2019) and embodied agents (Mininger and Laird 2022). To reduce the need for extensive human input, recent research explores using LLM as the knowledge source (Lindes and Peter 2023; Kirk et al. 2023), shifting human labor from specifying the goal conditions to answering yes/no questions. In contrast, our approach uses strategic prompting and self-reflection mechanisms to eliminate the need for human supervision. Our work shares some high-level ideas with DreamCoder (Ellis et al. 2021), which learns to solve new problems by program generation and reflection. Instead of formulating it as an informed search problem, we accelerate this process by querying LLMs for their existing knowledge. Madaan et al. (2022) extract common-sense knowledge from LLMs into code form similar to how we extract productions. But they only address the general task decomposition, not applying the information to an embodied agent. Large Language Model for Embodied Agents Many studies have explored using LLMs to generate code that performs robotics tasks (Liang et al. 2023; Singh et al. 2023; Vemprala et al. 2023) and game environments (Wang et al. 2023), which is similar to the procedural memory in the cognitive architectures. Other works explored generating PDDL specifications (Liu et al. 2023a; Xie et al. 2023). Unlike the situation-grounded code produced by these methods, our approach generates abstract productions with learnable weights. This allows more generalization capabilities and choosing the best plan among multiple applicable plans. Others let LLMs select the action directly (Di Palo et al. 2023; Vemprala et al. 
2023) with the help of other auxiliary components such as affordance evaluation (Ahn et al. 1Code at github.com/zfy0314/cognitive-agents 2022), memory stream (Park et al. 2023), visual summarization (Qiu et al. 2023), and knowledge base (Zhu et al. 2023). Some others explored multi-modal foundation models tailored for embodied agents (Driess et al. 2023; Xiang et al. 2023). As LLMs are non-trivial to update from a single instance, using more explicit production systems in our approach enables persistent one-shot updates and more interpretability. As we will show in our experiments, relying on LLMs for every action is also not very cost-effective. Method Architecture Overview Figure 1 illustrates the architecture and workflow of the agent. The agent has four main components. A world knowledge base that contains general knowledge, such as “Tomatoes are commonly stored in the Fridge”. Environment knowledge that reflects what the agent knows about the environment from past observations, including both information about the agent itself (e.g., the gripper is empty) and about the external world (e.g., the table is clear). These two components form the declarative memories of the agent. Another essential component is the procedural memory that contains all the production rules. In our work, however, we integrate the working memory into each production by exploiting the Python class structure, so there is no centralized working memory. And, finally, inspired by the goal module of ACT-R (Anderson 2009) and the impasse mechanism of SOAR (Laird 2022), the agent manages a task stack. At each time step, the agent searches in its procedural memory for any applicable production rule, considering the current task and environment knowledge. If there is no production applicable, the agent will summarize the current knowledge and query the LLM for both an action suggestion and a corresponding production rule, such that the agent knows what to do in similar scenarios in the future. When at least one production is applicable, it will sample an applicable production rule, based on its utility, and execute the proposed action, which can be either in the environment or internally, such as adding a subtask to its task stack. Bootstrapping Procedures The bootstrapping process starts with a curriculum. We took inspiration from (Wang et al. 2023), which uses an LLM to automatically construct the curriculum for Minecraft. As the simulator we use is not as popular as Minecraft and has some specific constraints (e.g., can only hold one object at a time), we find it better to specify the curriculum manually. Unlike previous work that requires human input on the next steps and/or goal condition for the tasks (Mininger and Laird 2022), we require only the names of the task families, so designing the curriculum is not very labor intensive. Another difference is that our curriculum consists of families of tasks (e.g., find a/an <object>) instead of specific instances (e.g., find a/an egg). We follow the SOAR syntax and keep all variables in angle brackets. With a given curriculum, the following steps are used to bootstrap a single task in the curriculum (using find a/an <object> as an example). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 656 1. Fill in the variables randomly from the environment to instantiate a concrete task (e.g., find a/an Egg); 2. Attempt the task with the existing production rules; 3. 
(Action Selection) If there is no production rule for a state, or there is a cycle detected through the production application, query an LLM for an action; 4. (Production Generation) Generate the corresponding production rule to the action, and load it into the agent; 5. Repeat steps 1-4 sufficient times until the robot can perform the task with only production rules; 6. (Production Improvement) Use a critic to summarize the end condition of the task for future use and improve the generated productions. The above procedures are repeated for all task families in the curriculum. While the agent might not fully learn every scenario of a task before moving on to the next one, it can still query the LLM later on to generate a production rule for a previously learned task. The training of a task is considered complete as long as the agent has sufficient experience with the task to generate a reasonable end condition such that future tasks can reuse the previously learned tasks. Action Selection The LLM is prompted with the current task, a summary of the current state, and a list of options available to the robot, which include both motor actions on the environment (e.g., move to a specific location) and internal actions (e.g., attend to a new subtask). For each previously trained subtask, we provide the end condition generated by the critic for the LLM to evaluate its relevance. Like the task names, the actions can also be parameterized (e.g., move to <receptacle>), and the LLM can replace <receptacle> with anything as it sees fit. We use chain-of-thought prompting (Wei et al. 2022), which explicitly instructs the LLM to respond to the prompt in a step-by-step manner, probing it to make the most informed decision. The LLM is instructed to reflect on common strategies for approaching the task, analyze the current situation, and evaluate the usefulness of each action before suggesting one option for the robot to take. The LLM is also prompted to state the purpose of the chosen action, which will inform the production rule generation later. Production Generation Although the production rules are generated based on the current state, we represent them not as plans for the current task, but instead as underlying decision-making principles for all similar scenarios. For example, if the current task is to find a/an egg, instead of suggesting the action sequence of exploring every cabinet in the current environment, a desirable production rule would suggest “whenever you need to find something, you should first explore the unexplored places where that object is commonly stored”. This is a systematic generalization that can be applied to finding any objects, not just eggs, and also can be applied to novel environments with different layouts and receptacle types. Listing 1: Production interface 1 class GeneratedProduction(Production): 2 def precondition(self, agent) -> bool: 3 # Returns whether the production is applicable given the agent 4 # Set variables as side-effects 5 def apply(self) -> str: 6 # Returns the effect 7 # Based on the variable bindings To generate desired production rules, we use a two-step process. The first step summarizes the action selection process and generates the English description of the production rule; the second step then converts it into executable Python code (Listing 1). 
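For concreteness, a production of the kind the second step might emit could look like the sketch below. It is purely illustrative: the base class is stubbed in, and the agent query methods (current_task, known_receptacles, is_explored, world_knowledge) are hypothetical names, not the exact interface provided to the LLM.

class Production:  # minimal stand-in for the base class in Listing 1
    pass

class ExploreCommonStorageProduction(Production):
    """IF the task is to find <object> AND there is an unexplored receptacle where
    <object> is commonly stored, THEN explore that receptacle."""

    def precondition(self, agent) -> bool:
        task = agent.current_task()              # e.g. "find a/an Egg"
        if not task.startswith("find"):
            return False
        obj = task.split()[-1]
        for receptacle in agent.known_receptacles():
            if (not agent.is_explored(receptacle)
                    and agent.world_knowledge(f"{obj} is commonly stored in {receptacle}")):
                self.receptacle = receptacle     # variable binding set as a side effect
                return True
        return False

    def apply(self) -> str:
        return f"attend to subtask: explore {self.receptacle}"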
This separation is inspired by how human beginners are instructed to build cognitive models (Laird 2017), and has two benefits: 1) it allows each query to the LLM to be of reasonable length (∼5k tokens), preventing LLMs from losing focus on lengthy prompts (Liu et al. 2023b); and 2) it facilitates a modular design, which enables generating code from English descriptions generated from other sources, including human feedback and postgeneration self-reflection. For each step, we also use the chain-of-thought prompting technique. For English description generation, the LLM is given the entire history of the action selection process, and is instructed to take four steps: 1) identify relevant information that leads to choosing the action; 2) generate a specific production rule that describes the current situation; 3) identify the potentially generalizable components in the specific rule and how they can be generalized; and 4) replace the components to form the generalized production description. For code generation, the LLM is given the Python interface of querying declarative memory and the current task, and is instructed to take another four steps: 1) plan what variable bindings are needed; and how their values should be assigned, 2) analyze the predicates in the precondition and associate them with relevant variables; 3) plan how each predicate should be tested using the provided function interfaces; and 4) fill in the production template. The code snippet is parsed from the response and imported into the agent. Production Improvement We use three mechanisms to monitor and improve the common interface mismatch, over-constraining, and overgeneralization problems of the LLM-generated productions. Similar to the iterative prompting design in Voyager (Wang et al. 2023), the agent replays the generated production rule on the state from which it was generated, and ensures that its precondition check passes the current conditions. This fixes most function interface mismatches, as the generated production has to comply with a specific naming scheme and the interface of the declarative knowledge. However, passing the precondition test for a single instance does not guarantee that production is ideal. As the LLM has access to accumulated observations from the past during the action selection process, it might include unnecessary conditions that happen to be true in the production’s The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 657 precondition, over-constraining it. This is handled by a critic LLM that summarizes the end condition of the task and provides suggestions on the existing productions. The critic LLM is given the name of the task family (e.g., find a/an <object>), and the English descriptions, generated by the production LLM, of the existing production rules for that task. The critic LLM is instructed to first analyze all the production rules whose effect is the done action, and summarize the end condition of the given task (e.g., the robot is holding the desired object in its gripper). These end conditions summarize the behavior of the previously learned tasks to inform the action selection process for future tasks. This summary will be added to the prompt when querying for tasks later in the curriculum to incentivize reusing previously learned tasks. Next, for each production rule, the LLM either keeps it as is, removes it entirely, or modifies it. 
The modifications are made in the English-description space for the critic, and we make use of the two-step modularity of production generation to update the production rules.

Over-generalization happens when important features are left out of the production's precondition. For example, for the pick and place task, the LLM might generate a production rule that says:
IF task is pick and place <object>
AND <object> in field of view
AND gripper is empty
THEN pick <object>
This will make the robot pick up the object even when the object is already in the target receptacle. To prevent the agent from being stuck in an infinite loop, it keeps a state transition graph during the execution process and queries the LLM for an alternative action once a cycle is detected using a depth-first search on the transition graph. Coupled with the production reinforcement (described below), the agent will prioritize loop-breaking productions.

Production Reinforcement
Following previous work in visual navigation (Anderson et al. 2018), the agent has to explicitly choose the special done action to indicate that it has completed the current task. We further extend this and give the agent a quit option to indicate that it believes the given task is impossible in the given environment. This is important because we allow the architecture to attend to any subtask it wants, so it should be able to realize when a task is impossible. As we do not pre-define the goal condition during the bootstrapping process, we give a unit reward whenever the agent decides it is done with the current task. The reward propagates back through the shortest path to the starting state. For example, if the state transition is
S_0 \xrightarrow{P_1} S_1 \xrightarrow{P_2} S_2 \xrightarrow{P_3} S_0 \xrightarrow{P_4} S_4 \xrightarrow{P_5} S_5 \xrightarrow{P_{\mathrm{done}}}
where S_0 is the start state and P_done is the production that yields the done action, then the shortest path is
S_0 \xrightarrow{P_4} S_4 \xrightarrow{P_5} S_5 \xrightarrow{P_{\mathrm{done}}}
Therefore only P_4, P_5, and P_done receive a utility update, using a Bellman backup (Sutton and Barto 2018):
U_{\mathrm{after}}(P) \leftarrow \frac{1}{N(P) + 1} \left( N(P) \cdot U_{\mathrm{before}}(P) + \gamma^{\Delta t} \right) \quad (1)
where U(P) is the utility of production P, N(P) is the number of times P has been applied, Δt is the time difference from the production's application to the done action, and γ is the discount factor (set to 0.95 in our experiments).

When a subtask is involved, the utility is updated with respect to each task. For example, the state transition
A_0 \xrightarrow{P_1} A_1 \xrightarrow{P_2} \underbrace{B_3 \xrightarrow{Q_3} B_4 \xrightarrow{Q_4} B_5 \xrightarrow{Q_{\mathrm{done}}}}_{\text{a subtask initiated by } P_2} A_6 \xrightarrow{P_{\mathrm{done}}}
where A and P denote the states and productions of the original task, and B and Q denote those of the subtask, is treated as two separate utility update pathways:
A_0 \xrightarrow{P_1} A_1 \xrightarrow{P_2} A_6 \xrightarrow{P_{\mathrm{done}}} and B_3 \xrightarrow{Q_3} B_4 \xrightarrow{Q_4} B_5 \xrightarrow{Q_{\mathrm{done}}}
If a subtask ends with quit, there is no utility update, not even a negative one, because the task might be impossible due to environmental constraints, which has nothing to do with the production rules. Intuitively, the closer a production brings the agent to choosing done for its current task, the higher its utility. This process is not provided to the LLM, so it has no incentive to "cheat" by proposing the done action all the time. We also explicitly tell the LLM to avoid selecting the done or quit actions unless it is "absolutely certain" about them. This worked empirically in our experiments. This utility update process helps reduce the impact of hallucination in LLMs, as the knowledge is aggregated.
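A minimal sketch of the incremental update in Equation 1 (our own restatement; the bookkeeping dictionaries and the exact indexing of Δt are assumptions):

GAMMA = 0.95  # discount factor used in the paper

def reinforce_shortest_path(path, utilities, counts, gamma=GAMMA):
    # `path` lists the productions on the shortest path in firing order,
    # ending with the production that yields the done action.
    horizon = len(path)
    for step, production in enumerate(path):
        dt = horizon - 1 - step                  # time difference to the done action
        n = counts.get(production, 0)
        u = utilities.get(production, 0.0)
        utilities[production] = (n * u + gamma ** dt) / (n + 1)
        counts[production] = n + 1

# Example: one successful episode whose shortest path is P4 -> P5 -> Pdone
utilities, counts = {}, {}
reinforce_shortest_path(["P4", "P5", "Pdone"], utilities, counts)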
For example, when tasked with “explore the countertops”, the LLM may hallucinate and propose a production Pbad that keeps the agent exploring the cabinets after all countertops have been explored, instead of proposing the done action, as it should. However, when tasked with “explore the sink“ in the same bootstrapping section, the LLM may generate a production Pgood that correctly identifies the termination condition and proposes done when all receptacles of the desired type have been explored. Then later, when the agent needs to explore all the countertops (potentially as a subtask of another task) and all of the countertops have been explored, both Pbad and Pgood will be applicable. The agent will prioritize Pgood because it is guaranteed to have a higher utility value than Pbad. On the other hand, if we use LLM to generate plans for each task, we may get a correct plan for the sink but an incorrect one for the countertops. When multiple productions are applicable given the same environment knowledge, we resolve the conflict using the definition of noisy-optimal in previous works (Tian et al. 2023), where the probability of production Pi being selected and applied, given the current knowledge K, is P(Pi | K) ∝IK(Pi) · exp(U(Pi)) (2) where IK(p) indicates that the preconditions of production p hold, given knowledge K. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 658 World Knowledge Base For the sake of simplicity, we implemented the world knowledge of the agent as a dictionary that maps natural language statements to either true or false. Unlike many existing cognitive architectures that assume an absence of knowledge implies the negation, we explicitly differentiate between not knowing and knowing to be false. In the future, it can also be replaced with a real-valued vector database. When a production rule is conditioned on a statement not previously known to the agent, the LLM is used to evaluate whether the statement is true, and the result will be saved to the knowledge base to be reused later. For instance, when bootstrapping the task of finding an egg, the agent will learn the production rule that says “If there is an unexplored receptacle where the object is commonly stored, explore that receptacle”. But the agent does not know whether “egg is commonly stored in the fridge” is true or not initially, so it will query the LLM and memorize the positive response in its world knowledge base. Later when the agent is tasked to put things in their common storage place, the agent can reuse the knowledge and place eggs into the fridge. In addition to transferring to new tasks, the knowledge can be applied to new environments as well (e.g., eggs are commonly stored in fridges in most American households). This knowledge base could be easily replaced by connecting it to an existing knowledge graph or ontology. But for the purpose of this paper, we are bootstrapping it from scratch. Experiments Setup Following previous works in the embodied agents domain (Sarch et al. 2022; Trabucco et al. 2023), we evaluate our method in kitchen environments (see Figure 2) in the AI2THOR simulator (Kolve et al. 2017). As shown in Figure 2d, the agent has access to classification labels and attributes (e.g., “is opened”) for objects that are close enough (within 1.6m) or large enough (more than 5% of the frame). We also assume the agent already knows the names and locations of the large receptacles (e.g., cabinets, fridges, etc.) 
but does not know what objects are in the receptacles until it actively explores them.
Figure 2: Screenshots of the AI2THOR simulator: (a) training floor plan, (b) testing floor plan, (c) ego-centric view, (d) instance segmentation.
We use three different tasks for evaluation:
• find a/an <object>: the goal is to have the specified object in the robot's field of view. This is a fundamental skill that is often overlooked or directly assumed in many previous works (Singh et al. 2023). We want to show that our framework can bootstrap very basic skills, in addition to composite actions.
• slice a/an <object>: the goal is to use a knife to slice an object. Because the robot can hold at most one item at a time, slicing involves a sequence of actions including finding the target object and the knife, putting them in the same place, and the final slice action. We want to show that our framework can handle tasks that involve multiple steps and tool use.
• clear the countertops: the goal is to have all the objects on the countertops moved to suitable storage places. This is a common household task that has also been investigated in previous work (Andrew et al. 2022; Sarch et al. 2022). We want to show that our framework can handle tasks that involve repeating similar subtasks.
The goal conditions listed above are used only for evaluation purposes and are not provided to the LLM during training or testing. The LLM has to infer the goal condition from the task description alone. For find and slice, 5 target objects are chosen for each task, and we run 3 trials for each object where the initial locations of the objects are shuffled. For clear the countertops, we run 3 trials, each with 5 objects on the countertops that need to be put away. The specific objects and locations vary between trials, and the success of the agent is evaluated based on how many objects originally on the countertops have been relocated to other places. This results in 15 specific goal instances for each task family. We use GPT4-0613 (OpenAI 2023) for our experiments, as previous works have shown that GPT3.5 is insufficient for code generation (Olausson et al. 2023; Wang et al. 2023). We set the temperature to 0 for the most deterministic response.
Conditions
For the experimental condition, we bootstrapped our agent with the following curriculum in the training floor plan:
1. explore <receptacle>
2. find a/an <object>
3. pick and place a/an <object> in/on a/an <receptacle>
4. slice a/an <object>
5. put things on the countertop away
This process generated 27 production rules in total. During test time, the agent can query the LLM for an immediate action if it does not have an applicable production rule for the current situation, but it cannot learn new production rules. For the baseline condition of using LLMs to query only the actions, we omit the production generation steps and only use the action selection process within our framework. This ensures the prompts used by both conditions are the same, so the LLM should suggest actions of similar quality. If the action proposed by the LLM leads to an affordance error, we query the LLM another two times, and if none of the actions are viable for the agent, it raises a failure. Although many works address the rearrangement task (Sarch et al. 2022; Wu et al.
2023a), they are not appropriate baselines as their architectures already encode the general strategies (e.g., first determine the target receptacle for each object, then navigate to the target area, etc.) while our approach bootstraps everything from scratch. Similarly, a cognitive agent hand-coded by human experts may perform even better, but that defeats the purpose of eliminating the need for manual coding of knowledge. Other code generation works cannot handle multiple instances of the same kind (Singh et al. 2023) or understand the slicing preconditions (Song et al. 2022) without non-trivial modifications.
Results
Table 1 shows the quantitative results of different types of agents performing each kitchen task. The action-only baseline successfully completes all tasks but one, where it assumes find a/an mug is equivalent to find a/an cup, and ends the search prematurely without exploring the sink where the mug is actually located. On the other hand, our bootstrapped agent is able to finish most tasks completely using its learned production rules. The only exceptions are when it is tasked to find an object that was not part of its training environment. But with very limited additional queries, the bootstrapped agent is able to successfully complete those tasks as well. This shows that the knowledge in the bootstrapped agent can be easily transferred to new objects in new environments. The success rate and number of query tokens show two advantages of our framework. First, it is verifiable, such that it does not make false assumptions (e.g., confusing mugs with cups). Second, it is much more efficient to deploy into new environments, as the production rules it learns can be easily transferred and require minimal further assistance from the LLM, saving computation and costs.
We use a paired sample t-test to compare the number of steps taken by both agents. No significant evidence suggests that the two agents perform differently in find or slice tasks (p-values 0.446 and 0.347, respectively). This is not surprising as the knowledge source of both agents is the same LLM. However, the bootstrapped agent takes significantly longer in the clearing task (p-value 0.001), which results from a stylistic difference between the two agents.
Figure 3: Examples of task execution. The first row shows the bootstrapped agent putting each object in its own cabinet while the baseline agent puts multiple objects in the same cabinet. The second row shows the bootstrapped agent slicing the apple on the countertop while the baseline agent slices the apple at its current location. Panels: (a) bootstrapped clearing, (b) action-only clearing, (c) bootstrapped slicing, (d) action-only slicing.
As shown in Figures 3a and 3b, the bootstrapped agent places everything into an individual cabinet while the baseline places multiple objects in the same cabinets. This is because one of the productions generated is "if there is an object on the countertop and there is an empty receptacle, attend to the subtask pick up the object, and place it into the empty receptacle". This production gets reused repeatedly, requiring the agent to seek a unique empty receptacle before placing each object instead of putting every object in the same cabinet. By contrast, the baseline agent is making decisions on a case-by-case basis, so it does not enforce that the target receptacle has to be empty.
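To ground the discussion, the sketch below shows one possible way (ours, not the authors' code) to represent such a production rule and to resolve conflicts among applicable rules with the noisy-optimal rule of Eq. (2); the statement strings, class names, and helper functions are illustrative assumptions, and in the actual system the LLM is queried when no production applies.

```python
import math
import random
from dataclasses import dataclass
from typing import Callable, Dict, List

Knowledge = Dict[str, bool]  # natural-language statement -> known truth value


@dataclass
class Production:
    name: str
    preconditions: List[Callable[[Knowledge], bool]]
    action: str               # motor action, subtask, or special action (done / quit)
    utility: float = 0.0

    def applicable(self, knowledge: Knowledge) -> bool:
        return all(pre(knowledge) for pre in self.preconditions)


def select_production(productions: List[Production], knowledge: Knowledge) -> Production:
    """Noisy-optimal conflict resolution (cf. Eq. 2):
    P(p | K) proportional to 1[preconditions hold under K] * exp(U(p))."""
    applicable = [p for p in productions if p.applicable(knowledge)]
    if not applicable:
        raise LookupError("no applicable production; the agent would query the LLM here")
    weights = [math.exp(p.utility) for p in applicable]
    return random.choices(applicable, weights=weights, k=1)[0]


# A hypothetical rendering of the countertop-clearing production quoted above.
clear_step = Production(
    name="place-countertop-object-into-empty-receptacle",
    preconditions=[
        lambda k: k.get("there is an object on the countertop", False),
        lambda k: k.get("there is an empty receptacle", False),
    ],
    action="attend to subtask: pick up the object and place it into the empty receptacle",
)
```

Because exp(U) weights the choice, the higher-utility Pgood from the earlier countertop example would dominate the over-general Pbad whenever both are applicable.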
A similar difference is also found in the slice task, where the bootstrapped agent always moves the objects to the countertops before slicing while the baseline agent slices objects at their current location (Figures 3c and 3d).
Task | Agent | Success ↑ | Success w/o LLM ↑ | Steps ↓ | Tokens ↓
find a/an <object> | action-only | 14/15 | – | 15.67 | 54754.20
find a/an <object> | bootstrapped (ours) | 15/15 | 12/15 | 15.80 | 916.87
slice a/an <object> | action-only | 15/15 | – | 28.20 | 102806.60
slice a/an <object> | bootstrapped (ours) | 15/15 | 15/15 | 29.13 | 0.00
clear the countertops | action-only | 15/15 | – | 5.13 | 18924.87
clear the countertops | bootstrapped (ours) | 15/15 | 15/15 | 7.47 | 0.00
Table 1: Result of experiments on household tasks. Completion steps and tokens are averaged over all task instances.
Production Analysis
The following are some learned productions:
• IF the current task is to find a/an <object> AND the <object> is located on <location> AND the robot is not at <location> THEN choose motor action: move to <location>.
• IF the current task is to slice a/an <sliceable> AND the robot is holding a/an <sliceable> AND there is no <tool> in the spatial knowledge or object knowledge THEN choose 'attend to subtask: find a/an <tool>'.
• IF the current task is to clear objects from a/an <receptacle type> AND all the <receptacle type> are empty THEN choose special action: 'done'.
These show that the agent is able to represent different aspects of the given tasks using production rules. The first represents a common strategy for finding things, namely how to find things with a known location. The second represents decomposing complex tasks and reusing previously learned tasks. The third is a correct termination condition for the exploration task that is generated directly by the LLM.
Figure 4: The hierarchy of tasks learned. Gray nodes denote the built-in functions of the robot, and white nodes represent the tasks learned from the curriculum. For built-in actions that involve an object (e.g., close), the object has to be within the field of view for the action to be taken. Special actions (i.e., done and quit) are omitted due to space constraints.
Figure 4 shows the task hierarchy learned by the agent after training on the given curriculum. It shows how previously learned tasks are used to perform new tasks. This reduces the number of queries needed for the LLM, fosters generality, and ensures the scalability of our approach.
Discussion
Explainability Our framework touches upon all three aspects of explainability as defined by Milani et al. (2022). The preconditions of the productions directly specify the features that are being used (feature importance). Each production rule corresponds to a specific scenario during the bootstrapping process when it is created, which helps determine the training points that influence the learned policy (learning process). Lastly, the production application process can be easily converted to a verifiable decision tree by merging the precondition checks of productions (policy-level explainability). As the production rules can be formally verified, they are preferable to black-box LLM models in safety-critical situations.
Limitations In this work, we explore only the high-level decision-making process of the agent and rely heavily on having a well-defined interface for low-level actions, such as navigation and object manipulation.
There will likely be a considerable sim-to-real gap when applying this to physical agents. Additionally, the English description generation step requires the decision-making process to be articulable to be converted to production rules. This is hard for skills that cannot be fully expressed using language (e.g., sculpting). Future Work There are more learning opportunities in cognitive architectures such as updating the preconditions of productions or using separate productions for conflict resolutions. Also, large vision models can be used to generate production rules without separate perception modules (Wu et al. 2023b). Additionally, it is well-acknowledged that human values and preferences are hard to represent with reward functions (Casper et al. 2023). However, the production rules are interpretable and can be modified to suit each individual without extensive computation. As they are also modular, updating one specific production rule does not affect the others. It would interesting to examine whether this framework will facilitate personalization in human-AI collaboration tasks. Specifically, the user can iteratively update the production rules to fit their preference without having to worry about the agent forgetting about how to perform the task. Conclusion This paper presents a framework for bootstrapping a cognitive architecture from the existing noisy knowledge in LLMs, with minimal human inputs. We demonstrated how such an agent could efficiently learn to perform kitchen tasks and be applied to new environments. This work generalizes using LLMs to generate plans and provides an alternative to purely data-driven foundation models. And finally, we shed light on how it will benefit personalized agents in the future. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 661 Acknowledgments The authors would like to thank Michelle Zhao and Daphne Chen for their feedback on the manuscript, and the artists from the Noun Project for distributing the icons in Figure 1 under the Creative Commons License. This work was partially supported by a CMU SURF grant and by the AICARING Institute (NSF IIS-2112633). References Ahn, M.; Brohan, A.; Brown, N.; Chebotar, Y.; Cortes, O.; David, B.; Finn, C.; Fu, C.; Gopalakrishnan, K.; Hausman, K.; et al. 2022. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691. Anderson, J. R. 2009. How can the human mind occur in the physical universe? Oxford University Press. Anderson, P.; Chang, A.; Chaplot, D. S.; Dosovitskiy, A.; Gupta, S.; Koltun, V.; Kosecka, J.; Malik, J.; Mottaghi, R.; Savva, M.; et al. 2018. On evaluation of embodied navigation agents. arXiv preprint arXiv:1807.06757. Andrew, S.; Karmesh, Y.; Alex, C.; Vincent-Pierre, B.; Aaron, G.; Angel, C.; Manolis, S.; Zsolt, K.; and Dhruv, B. 2022. Habitat Rearrangement Challenge 2022. https: //aihabitat.org/challenge/rearrange 2022. Accessed: 202301-02. Casper, S.; Davies, X.; Shi, C.; Gilbert, T. K.; Scheurer, J.; Rando, J.; Freedman, R.; Korbak, T.; Lindner, D.; Freire, P.; et al. 2023. Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback. arXiv preprint arXiv:2307.15217. Di Palo, N.; Byravan, A.; Hasenclever, L.; Wulfmeier, M.; Heess, N.; and Riedmiller, M. 2023. Towards a unified agent with foundation models. In Workshop on Reincarnating Reinforcement Learning at ICLR 2023. Driess, D.; Xia, F.; Sajjadi, M. 
S.; Lynch, C.; Chowdhery, A.; Ichter, B.; Wahid, A.; Tompson, J.; Vuong, Q.; Yu, T.; et al. 2023. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378. Ellis, K.; Wong, C.; Nye, M.; Sabl´e-Meyer, M.; Morales, L.; Hewitt, L.; Cary, L.; Solar-Lezama, A.; and Tenenbaum, J. B. 2021. Dreamcoder: Bootstrapping inductive program synthesis with wake-sleep library learning. In Proceedings of the 42nd acm sigplan international conference on programming language design and implementation, 835–850. Hake, H. S.; Sibert, C.; and Stocco, A. 2022. Inferring a Cognitive Architecture from Multitask Neuroimaging Data: A Data-Driven Test of the Common Model of Cognition Using Granger Causality. Topics in Cognitive Science, 14(4): 845–859. Huang, W.; Abbeel, P.; Pathak, D.; and Mordatch, I. 2022. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, 9118–9147. PMLR. Kirk, J. R.; and Laird, J. E. 2019. Learning Hierarchical Symbolic Representations to Support Interactive Task Learning and Knowledge Transfer. In IJCAI, 6095–6102. Kirk, J. R.; Wray, R. E.; Lindes, P.; and Laird, J. E. 2023. Integrating Diverse Knowledge Sources for Online One-shot Learning of Novel Tasks. arXiv:2208.09554. Kolve, E.; Mottaghi, R.; Han, W.; VanderBilt, E.; Weihs, L.; Herrasti, A.; Deitke, M.; Ehsani, K.; Gordon, D.; Zhu, Y.; et al. 2017. Ai2-thor: An interactive 3d environment for visual ai. arXiv preprint arXiv:1712.05474. Laird, J. E. 2017. SOAR 9.6.0 Tutorial. https: //soar.eecs.umich.edu/articles/downloads/soar-suite/228soar-tutorial-9-6-0. Accessed: 2023-01-02. Laird, J. E. 2022. Introduction to Soar. arXiv preprint arXiv:2205.03854. Laird, J. E.; Gluck, K.; Anderson, J.; Forbus, K. D.; Jenkins, O. C.; Lebiere, C.; Salvucci, D.; Scheutz, M.; Thomaz, A.; Trafton, G.; et al. 2017. Interactive task learning. IEEE Intelligent Systems, 32(4): 6–21. Laird, J. E.; Lebiere, C.; and Rosenbloom, P. S. 2017. A standard model of the mind: Toward a common computational framework across artificial intelligence, cognitive science, neuroscience, and robotics. Ai Magazine, 38(4): 13– 26. Liang, J.; Huang, W.; Xia, F.; Xu, P.; Hausman, K.; Ichter, B.; Florence, P.; and Zeng, A. 2023. Code as policies: Language model programs for embodied control. In 2023 IEEE International Conference on Robotics and Automation (ICRA), 9493–9500. IEEE. Lindes, J. R.; and Peter, W. 2023. Improving Knowledge Extraction from LLMs for Robotic Task Learning through Agent Analysis. arXiv preprint arXiv:2306.06770. Liu, B.; Jiang, Y.; Zhang, X.; Liu, Q.; Zhang, S.; Biswas, J.; and Stone, P. 2023a. LLM+ P: Empowering Large Language Models with Optimal Planning Proficiency. arXiv preprint arXiv:2304.11477. Liu, N. F.; Lin, K.; Hewitt, J.; Paranjape, A.; Bevilacqua, M.; Petroni, F.; and Liang, P. 2023b. Lost in the middle: How language models use long contexts. arXiv preprint arXiv:2307.03172. Madaan, A.; Zhou, S.; Alon, U.; Yang, Y.; and Neubig, G. 2022. Language models of code are few-shot commonsense learners. arXiv preprint arXiv:2210.07128. Milani, S.; Topin, N.; Veloso, M.; and Fang, F. 2022. A survey of explainable reinforcement learning. arXiv preprint arXiv:2202.08434. Mininger, A.; and Laird, J. E. 2022. A Demonstration of Compositional, Hierarchical Interactive Task Learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 13203–13205. Newell, A. 1994. Unified theories of cognition. Harvard University Press. Olausson, T. 
X.; Inala, J. P.; Wang, C.; Gao, J.; and SolarLezama, A. 2023. Demystifying GPT Self-Repair for Code Generation. arXiv:2306.09896. OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774. Park, J. S.; O’Brien, J. C.; Cai, C. J.; Morris, M. R.; Liang, P.; and Bernstein, M. S. 2023. Generative agents: The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 662 Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442. Qiu, J.; Xu, M.; Han, W.; Moon, S.; and Zhao, D. 2023. Embodied Executable Policy Learning with Language-based Scene Summarization. arXiv preprint arXiv:2306.05696. Sarch, G.; Fang, Z.; Harley, A. W.; Schydlo, P.; Tarr, M. J.; Gupta, S.; and Fragkiadaki, K. 2022. Tidee: Tidying up novel rooms using visuo-semantic commonsense priors. In European Conference on Computer Vision, 480–496. Springer. Singh, I.; Blukis, V.; Mousavian, A.; Goyal, A.; Xu, D.; Tremblay, J.; Fox, D.; Thomason, J.; and Garg, A. 2023. ProgPrompt: Generating Situated Robot Task Plans using Large Language Models. In International Conference on Robotics and Automation (ICRA). Song, C. H.; Wu, J.; Washington, C.; Sadler, B. M.; Chao, W.-L.; and Su, Y. 2022. Llm-planner: Few-shot grounded planning for embodied agents with large language models. arXiv preprint arXiv:2212.04088. Sutton, R. S.; and Barto, A. G. 2018. Reinforcement learning: An introduction. MIT press. Tian, R.; Tomizuka, M.; Dragan, A. D.; and Bajcsy, A. 2023. Towards Modeling and Influencing the Dynamics of Human Learning. In Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 350–358. Trabucco, B.; Sigurdsson, G. A.; Piramuthu, R.; Sukhatme, G. S.; and Salakhutdinov, R. 2023. A Simple Approach for Visual Room Rearrangement: 3D Mapping and Semantic Search. In The Eleventh International Conference on Learning Representations. Vemprala, S.; Bonatti, R.; Bucker, A.; and Kapoor, A. 2023. ChatGPT for Robotics: Design Principles and Model Abilities. Technical Report MSR-TR-2023-8, Microsoft. Wang, G.; Xie, Y.; Jiang, Y.; Mandlekar, A.; Xiao, C.; Zhu, Y.; Fan, L.; and Anandkumar, A. 2023. Voyager: An Open-Ended Embodied Agent with Large Language Models. arXiv:2305.16291. Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Xia, F.; Chi, E.; Le, Q. V.; Zhou, D.; et al. 2022. Chain-ofthought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35: 24824–24837. Wu, J.; Antonova, R.; Kan, A.; Lepert, M.; Zeng, A.; Song, S.; Bohg, J.; Rusinkiewicz, S.; and Funkhouser, T. 2023a. Tidybot: Personalized robot assistance with large language models. arXiv preprint arXiv:2305.05658. Wu, W.; Yao, H.; Zhang, M.; Song, Y.; Ouyang, W.; and Wang, J. 2023b. GPT4Vis: What Can GPT-4 Do for Zeroshot Visual Recognition? arXiv preprint arXiv:2311.15732. Xiang, J.; Tao, T.; Gu, Y.; Shu, T.; Wang, Z.; Yang, Z.; and Hu, Z. 2023. Language Models Meet World Models: Embodied Experiences Enhance Language Models. arXiv preprint arXiv:2305.10626. Xie, Y.; Yu, C.; Zhu, T.; Bai, J.; Gong, Z.; and Soh, H. 2023. Translating natural language to planning goals with largelanguage models. arXiv preprint arXiv:2302.05128. Ye, S.; Lauer, J.; Zhou, M.; Mathis, A.; and Mathis, M. W. 2023. AmadeusGPT: a natural language interface for interactive animal behavioral analysis. arXiv preprint arXiv:2307.04858. Zhang, B.; and Soh, H. 2023. Large Language Models as Zero-Shot Human Models for Human-Robot Interaction. arXiv:2303.03548. 
Zhu, X.; Chen, Y.; Tian, H.; Tao, C.; Su, W.; Yang, C.; Huang, G.; Li, B.; Lu, L.; Wang, X.; et al. 2023. Ghost in the Minecraft: Generally Capable Agents for Open-World Enviroments via Large Language Models with Text-based Knowledge and Memory. arXiv preprint arXiv:2305.17144. Zou, A.; Wang, Z.; Kolter, J. Z.; and Fredrikson, M. 2023. Universal and Transferable Adversarial Attacks on Aligned Language Models. arXiv:2307.15043. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 663 | 2024 | 74 |
18,562 | Mutual-Modality Adversarial Attack with Semantic Perturbation Jingwen Ye, Ruonan Yu, Songhua Liu, Xinchao Wang † National University of Singapore [email protected], {ruonan,songhua.liu}@u.nus.edu, [email protected] Abstract Adversarial attacks constitute a notable threat to machine learning systems, given their potential to induce erroneous predictions and classifications. However, within real-world contexts, the essential specifics of the deployed model are frequently treated as a black box, consequently mitigating the vulnerability to such attacks. Thus, enhancing the transferability of the adversarial samples has become a crucial area of research, which heavily relies on selecting appropriate surrogate models. To address this challenge, we propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme. Our approach is accomplished by leveraging the pre-trained CLIP model. Firstly, we conduct a visual attack on the clean image that causes semantic perturbations on the aligned embedding space with the other textual modality. Then, we apply the corresponding defense on the textual modality by updating the prompts, which forces the re-matching on the perturbed embedding space. Finally, to enhance the attack transferability, we utilize the iterative training strategy on the visual attack and the textual defense, where the two processes optimize from each other. We evaluate our approach on several benchmark datasets and demonstrate that our mutual-modal attack strategy can effectively produce high-transferable attacks, which are stable regardless of the target networks. Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution. Introductions With the milestone performances of Deep Neural Networks (DNNs) in numerous computer vision tasks, the efficiency (Ma, Fang, and Wang 2023a; Fang et al. 2023; Liu et al. 2022; Yang et al. 2022) and reliability (Ye, Liu, and Wang 2023; Ye et al. 2022a,b) of these techniques become equally important when deployed in the real world. However, recent researches (Goodfellow, Shlens, and Szegedy 2014) have found that such DNNs are vulnerable to adversarial examples. This is, through only a small norm of perturbation applied on the original input, the maliciously crafted adversarial samples could cause misclassification or unexpected behavior to machine learning models. † Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Main Idea Textual Encoder Visual Encoder ‘A bad photo of a horse’ ‘A photo of a horse’ Defend Attack Feature Space Transfers Better Semantic Alignment Figure 1: Joint attack and defense framework in the visual and textual modalities. The visual features are attacked to push away from the original one, while the textual features defend to pull back this similarity gap. As a crucial assessment of the strength and security of DNNs, various attack algorithms (Hayes and Danezis 2017; Liu et al. 2019) have been proposed, achieving relatively high fooling rates. However, the effectiveness of these attacks is largely affected by different conditions, with the black-box setting being the most challenging yet realistic scenario. In the black-box setting, attackers cannot access the model’s parameters and structure, leading to the need for improving the transferability of attacks to arbitrary target networks (Cheng et al. 2019; Dong et al. 2018). 
The corresponding methods include ensemble-model attacks (Liu et al. 2016), momentum-based attacks (Dong et al. 2018), input transformation-based attacks (Xie et al. 2019), and model-specific attacks (Wu et al. 2019). Such methods aim to enhance the transferability of attacks by either exploiting the inherent weaknesses of the target model or exploring the common vulnerabilities of a group of models. The majority of current techniques striving to amplify the transferability of adversarial attacks predominantly hinge on the selection of surrogate models. However, these surrogate models often prove to be unstable and profoundly influenced by the architectural similarities between the surrogate model The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6657 itself and the target networks. Hence, the careful selection of an optimal surrogate model characterized by a robust feature extractor and exceptional generalizability emerges as a critical factor. Previous studies have used multiple ImageNetpretrained networks as surrogate models, given the large number of images and object categories in the dataset. In this paper, we leverage the recent progress of the CLIP model in computer vision and natural language processing. Having been trained on over 400 million image pairs, CLIP can now serve as our surrogate model, enabling the generation of powerful and broadly effective perturbations. In addition to its large-scale training data, we utilize the CLIP model as the surrogate model due to its ability to align visual and textual modalities in an aligned feature space. This is accomplished through a visual encoder and textual encoder pairing, allowing us to generate adversarial samples with semantic perturbations. Semantic perturbations differ from previous methods, which simply maximize feature differences. Instead, our approach maximizes semantic differences to ensure that the features after the attack retain explicit semantic information and do not fall into areas without clear semantic meaning, ensuring the effectiveness of the generated attacks. In this paper, we propose integrating attack and defense into one framework, building upon the semantic perturbations obtained from the pre-trained CLIP model’s aligned visual and textual embedding space. As shown in Fig. 1, we apply visual perturbations to clean images, increasing the semantic difference in the feature space and causing a contradiction with the textual embedding when given the input “A bad photo of a horse.” We then defend against the attack by updating the text prompt template, eliminating this semantic gap and restoring entailment. This iterative attack and defense optimization strategy enhances the attack’s transferability to target black-box networks. To summarize, we make the following contributions: • Firstly, we propose a method to generate reliable adversarial attacks by using the semantic consistency of pre-trained CLIP model to learn perturbations in the semantic feature embedding space. We ensure fidelity by constraining the perturbations with semantic consistency from the text input; • Secondly, we propose an iterative optimization strategy to improve the transferability of the generated attack across different architectures and datasets, where we attack the visual input and defend in the textual one; • Thirdly, we conduct extensive experiments to evaluate the transferability of our proposed approach in crossdataset, cross-architecture, and cross-task settings. 
Our results demonstrate that our approach is efficient, effective, and can be used as a plug-and-play solution. Related Work Adversarial Attack Adversarial attacks (Madry et al. 2017; Dong et al. 2017; Guo et al. 2019; Akhtar and Mian 2018; Zhang et al. 2021) are designed to deceive machine learning models by adding small, imperceptible perturbations to input data, causing the model to generate incorrect outputs or misclassify inputs. One of the traditional attack methods (Goodfellow, Shlens, and Szegedy 2014) is to use gradient information to update the adversarial example in a single step along the direction of maximum classification loss. Building on this work, GAMA (Yuan et al. 2021) is proposed as a plug-and-play method that can be integrated with any existing gradientbased attack method to improve cross-model transferability. Besides, many works (Xie et al. 2019; Wang and He 2021) have been proposed to improve the attack transferability. To ensure the efficiency of generating attacks, generative model-based attack methods (Hayes and Danezis 2017; Poursaeed et al. 2017; Liu et al. 2019; Xiang et al. 2022; Qiu et al. 2019; Salzmann et al. 2021; Aich et al. 2022) have been extensively studied. For example, Aishan et al. (Hayes and Danezis 2017) train a generative network capable of generating universal perturbations to fool a target classifier. To generate patch-based attacks, PS-GAN (Liu et al. 2019) is utilized to simultaneously enhance the visual fidelity and attacking ability of the adversarial patch. Other than designing attacks in the image domain, attacking NLP models (Morris et al. 2020; Boucher et al. 2022; Chen et al. 2021; Zhang et al. 2020; Perez and Ribeiro 2022) has become a popular research direction. Specifically, prompt learning attacks have attracted many researchers as a lighter method to tune large-scale language models, which can be easily attacked by illegally constructed prompts. Shi et al. (Shi et al. 2022) propose a malicious prompt template construction method to probe the security performance of PLMs. Du et al. (Du et al. 2022) propose obtaining poisoned prompts for PLMs and corresponding downstream tasks by prompt tuning. Different from the above attack methods, we propose the plug and play dynamic updating method, which boosts the transferability. Vision-and-Language Models A vision-and-language model (Jia et al. 2021; Radford et al. 2021; Mu et al. 2022; Yao et al. 2021; Fang, Ma, and Wang 2023; Ma, Fang, and Wang 2023b) is a powerful learning model processing both images and text in a joint manner to align the vision and texture embeddings. As the most popular VL model, CLIP (Radford et al. 2021) learn SOTA image representations from scratch on a dataset of 400 million image-text pairs collected from the internet, which enables various tasks like zero-shot classification. To further boost the VL models’ performance, Zhou et al. (Zhou et al. 2022b) propose CoOp to models the prompt’s context words with learnable vectors for adapting CLIP-like vision-language models for downstream image recognition. And CoCoOp (Zhou et al. 2022a) extends CoOp by further learning a lightweight neural network to generate for each image an input-conditional token. Target at such large-scale pre-trained VL models, there are a bulk of adversarial attack approaches (Noever and Noever 2021; Zhang, Yi, and Sang 2022) try to fool it. Noever et al. 
(Noever and Noever 2021) demonstrate adversarial attacks spanning basic typographical, conceptual, and iconographic inputs generated to fool the model into making false or absurd classifications. Zhang et al. (Zhang, Yi, and Sang 2022) propose a multimodal attack method that collectively carries out the attacks on the image modality and the text modality. Hintersdorf et al. (Hintersdorf, Struppek, and Kersting 2022) introduce a method to assess privacy for multi-modal models and reveal whether an individual was included in the training data by querying the model with images of the same person. Jia et al. (Jia, Liu, and Gong 2022) inject backdoors into a pre-trained image encoder. In this work, we utilize VL models to establish the attack framework. Previous works, however, either focus on one modality or treat the two modalities separately, which constrains transfer to target networks or other datasets.
Proposed Method
Framework Overview
In the proposed framework, we utilize a generator-oriented adversarial attack method, which integrates CLIP to enable transferability. That is, from a training distribution of images, we train the generator G to generate universal perturbations applied to the input clean image $x_i$. The corresponding adversarial sample $x'_i$ can thus be obtained as:
$$x'_i = \min\big(x_i + \epsilon, \; \max(G(x_i), \; x_i - \epsilon)\big), \quad (1)$$
where $\epsilon$ is the predetermined maximum perturbation on the input. Each applied perturbation is bounded within $[-\epsilon, +\epsilon]$; in the rest of the paper, we simplify this process and use $G(x_i)$ to denote the bounded adversarial sample. Denote the ground-truth label of the clean image $x_i$ as $y_{true}$; the attack goal is then to obtain universal perturbations enabling cross-architecture and cross-dataset transfer. The goal of attacking a group of black-box models $\mathcal{M} = \{M_0, M_1, ..., M_{P-1}\}$ pre-trained on various datasets $\mathcal{D} = \{D_0, D_1, ..., D_{P-1}\}$ is to maximize:
$$\sum \mathbb{1}_{y' \neq y_{true}}, \quad y' \leftarrow M_p(G(x_i)), \; x_i \in D_p, \; M_p \in \mathcal{M}, \quad (2)$$
where $\mathbb{1}$ is the indicator function. To achieve this, we utilize CLIP as the surrogate model to train the powerful generator G, as shown in Fig. 2.
Figure 2: The framework of the joint Mutual-modality attack-defense method. We use the pre-trained CLIP as the surrogate model. The generator is optimized by maximizing the difference with the clean image as input, and the textual input is updated to re-match the features from the textual encoder and the visual encoder.
Considering that CLIP is a powerful model capable of learning the relationship between different modalities, we utilize CLIP as the surrogate model to generate the perturbations. Denote the visual encoder as $E_i$ and the textual encoder as $E_t$; then CLIP is capable of zero-shot prediction by matching the most similar textual embedding as:
$$p(y|x_i) = \mathrm{Clip}(x_i, X_t) = \frac{\exp[\mathrm{sim}(E_i(x_i), E_t(x^y_t))/\tau]}{\sum_{c=0}^{C-1} \exp[\mathrm{sim}(E_i(x_i), E_t(x^c_t))/\tau]}, \quad (3)$$
where $\mathrm{sim}(\cdot)$ is the cosine similarity and $\tau$ is the temperature parameter. $X_t = \{x^0_t, x^1_t, ..., x^{C-1}_t\}$ is the text-modality input generated for each classification label $c$ ($c \in \{0, 1, ..., C-1\}$).
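As a concrete reading of Eqs. (1) and (3), here is a short PyTorch sketch (ours, not the authors' released code) of the ε-ball projection and the zero-shot matching over per-class text embeddings; the ε value follows the paper's setting, while the temperature value, the [0, 1] image clamp, and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def bounded_adversarial(x, generator, eps=0.04):
    """Eq. (1): project the generator output back into the L-infinity ball around x."""
    x_adv = torch.clamp(generator(x), min=x - eps, max=x + eps)
    return torch.clamp(x_adv, 0.0, 1.0)  # keep a valid image range (assumption)


def clip_zero_shot_probs(image_feat, text_feats, tau=0.01):
    """Eq. (3): softmax over cosine similarities between one image and C class prompts.

    image_feat: (d,) embedding from the visual encoder E_i
    text_feats: (C, d) embeddings of the per-class prompts from the textual encoder E_t
    """
    image_feat = F.normalize(image_feat, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    sims = text_feats @ image_feat        # cosine similarities, shape (C,)
    return F.softmax(sims / tau, dim=-1)
```

In practice each per-class text input is the label word inserted into a prompt template, as described in the next subsection.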
As CLIP takes both modalities as input to make its prediction $y' = \arg\max_y p(y|x_i)$, we construct the attack and the defense into one framework over these two modalities. Specifically, producing the attack with CLIP can be treated as a kind of adversarial training process:
$$\max_{P} \min_{G} V(P, G) = \mathbb{E}_{x_i \sim p_{X_i}}\Big[\mathrm{sim}\big(\mathrm{Clip}(G(x_i), P(X_t)),\; \mathrm{Clip}(x_i, P(X_t))\big)\Big] + \mathbb{E}_{x_t \sim p_{X_t}}\Big[\mathrm{sim}\big(\mathrm{Clip}(G(X_i), x_t),\; \mathrm{Clip}(G(X_i), P(x_t))\big)\Big], \quad (4)$$
where $P(\cdot)$ is the prompt tuning function for the textual input, detailed in 'Textual Defense with Prompt Updating'. Similar to GAN training, the optimization of Eq. 4 iteratively updates G and P. The whole process is:
• for optimizing G to generate the perturbations on the clean image $x_i$, we minimize the output similarity between the clean input $x_i$ and the adversarial input $x'_i$, as described in 'Visual Attack with Semantic Perturbation';
• for optimizing P to defend against the attack on the image-modality input, we tune the prompt as $P(x_t)$ to match the image-text embedding again, as described in 'Textual Defense with Prompt Updating';
• with the iterative training of G and P, we obtain the final generative perturbation network G; the whole algorithm is given in the supplementary material.
Visual Attack with Semantic Perturbation
Recall that the visual attack $x'_i$ generated by G is bounded by $\epsilon$ (Eq. 1); it is assumed to be assigned a wrong prediction by the target network M. Considering that M remains a black box, we instead attack the feature embedding space of the CLIP model. The CLIP model's pre-trained image encoder $E_i$ is a powerful feature extractor with high transferability. To ensure the transferability of adversarial samples, we aim to maximize the distance between the feature representation of the adversarial input $x'_i$, denoted as $E_i(x'_i)$, and the feature representation of the clean input $x_i$, denoted as $E_i(x_i)$. This is achieved by minimizing the loss function $\ell_{feat}$, which is calculated as:
$$\ell_{feat} = -\|F_i - F'_i\|^2, \quad F_i = \frac{E_i(x_i)}{\|E_i(x_i)\|}, \quad F'_i = \frac{E_i(x'_i)}{\|E_i(x'_i)\|}, \quad (5)$$
where $\ell_{feat}$ is calculated based on the MSE loss. Beyond maximizing the feature distance between the perturbed and the clean input, an extra triplet loss is applied to ensure that the perturbed features $E_i(x'_i)$ fool the downstream networks into a wrong prediction. To calculate the triplet loss, the original textual embeddings are precomputed as $F^c_t = E_t(x^c_t)/\|E_t(x^c_t)\|$ ($c \in \{0, 1, ..., C-1\}$). Each $x^c_t$ is composed of a prompt template and a label word, e.g., 'dog' + 'A clean photo of {}'. Thus, for each label $c \in \{0, 1, ..., C-1\}$, we pre-compute the corresponding textual embeddings as $F_t = \{F^0_t, F^1_t, ..., F^{C-1}_t\}$. In this way, for each clean image $x_i$ with ground-truth label $y_{true}$, we use the triplet loss (Aich et al. 2022) $\ell_{tri}$ to mislead the matching of the features from the two modalities. Specifically, $\ell_{tri}$ is calculated as:
$$\ell_{tri} = \|F'_i - F^{y'}_t\|^2 + \max\big(0, \; \alpha - \|F'_i - F^{y_{true}}_t\|^2\big), \quad \text{where } y' = \arg\min_c \mathrm{sim}(F_i, F^c_t), \quad (6)$$
where $\alpha$ is the margin of the triplet loss and $F^{y'}_t$ is the textual embedding that is least similar to that of the clean input. This triplet loss forces the perturbed features away from the ground-truth textual embedding $F^{y_{true}}_t$ while minimizing the distance to the textual embedding $F^{y'}_t$ that is originally least related to the clean image features $F_i$.
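The two losses just defined can be written compactly; the snippet below is our own sketch rather than the paper's code, and the margin value, the normalization details, and the use of a plain sum of squared differences in place of the exact MSE averaging are assumptions.

```python
import torch
import torch.nn.functional as F


def visual_attack_losses(feat_clean, feat_adv, text_feats, y_true, alpha=1.0):
    """Sketch of the feature loss (Eq. 5) and triplet loss (Eq. 6) for a single image.

    feat_clean / feat_adv: (d,) visual embeddings of the clean and adversarial images
    text_feats: (C, d) pre-computed per-class textual embeddings
    y_true: ground-truth class index; alpha: triplet margin (value assumed)
    """
    f_i = F.normalize(feat_clean, dim=-1)      # F_i
    f_adv = F.normalize(feat_adv, dim=-1)      # F'_i
    f_t = F.normalize(text_feats, dim=-1)      # F_t

    # Eq. (5): minimizing l_feat pushes the adversarial embedding away from the clean one.
    l_feat = -((f_i - f_adv) ** 2).sum()

    # y': the class whose text embedding is least similar to the *clean* image embedding.
    y_far = torch.argmin(f_t @ f_i)

    # Eq. (6): pull towards the least-related class, push away from the true class.
    d_far = ((f_adv - f_t[y_far]) ** 2).sum()
    d_true = ((f_adv - f_t[y_true]) ** 2).sum()
    l_tri = d_far + torch.clamp(alpha - d_true, min=0.0)
    return l_feat, l_tri
```

The classification term ℓ_cls defined next would simply be added to these two before backpropagating through the generator.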
Finally, following previous adversarial attack methods, we utilize an extra classification loss:
$$\ell_{cls} = \frac{1}{\sigma + H_{CE}(\mathrm{Clip}(x'_i, X_t), y)}, \quad (7)$$
where we set $\sigma = 0.1$ to prevent gradient explosion, $H_{CE}(\cdot)$ is the standard cross-entropy loss, and $\mathrm{Clip}(\cdot)$ is the output probabilities after softmax. Thus, the final learning objective for the visual attack is:
$$\arg\min_G \; \ell_{feat} + \ell_{tri} + \ell_{cls}, \quad (8)$$
with which the optimized G is capable of generating perturbations that fool CLIP under textual input in the form of $X_t$ and have a certain degree of transferability.
Textual Defense with Prompt Updating
For a clean image $x_i$, its feature embedding is denoted as $F_i$. For each label $c \in \{0, 1, ..., C-1\}$, the text input can be organized into $m+1$ text tokens as $x^c_t = [\langle CLASS(c) \rangle, v_1, v_2, ..., v_m]$. The textual embedding for each label $c$ is then calculated as $F^c_t \leftarrow E_t(x^c_t)$, and the CLIP model outputs the probabilities for each label as:
$$p(y|x_i, X_t) = \mathrm{Clip}(x_i, X_t) \leftarrow \mathrm{softmax}(F_i * F_t), \quad \text{where } X_t = X_l + X_p. \quad (9)$$
Here, we separate the text input $X_t$ into the fixed label token $X_l = \langle CLASS(c) \rangle$ and the dynamic prompt tokens $X_p = \{v_1, v_2, ..., v_m\}$. Previous work on prompt learning (Zhou et al. 2022b) has indicated that the position of the label token does not make a big difference to the final results, so we simply put the label token at the beginning.
Suppose that, based on the current text input $X_t$, the CLIP model makes the right prediction on the clean input $x_i$ with ground-truth label $y_{true}$, and the attack generator has successfully attacked it so that it is predicted as a wrong label $y'$. The attack (A) and defense (D) processes can be formulated as:
$$[A]: \arg\max_y p(y|G(x_i), X_t) \neq \arg\max_y p(y|x_i, X_t), \qquad [D]: \arg\max_y p(y|G(x_i), P(X_t)) = \arg\max_y p(y|x_i, X_t), \quad (10)$$
where in [A] we learn the generator G for generating adversarial perturbations, and in [D] we update the text input $X_t$ to $X'_t$ with the prompt tuning function P to guide CLIP to the right prediction again. During the prompt tuning, we fix the label tokens $X_l$ and only update the prompt template $X'_p$ by maximizing the semantic similarity with the perturbed visual embeddings:
$$X'_t = \big\{ x^c_t \,\big|\, c \in \{0, 1, ..., C-1\}, \; {x^c_t}' \leftarrow \arg\min_{x^{y_{true}}_t} \mathrm{Sim}[E_i(x'_i), E_t(x^{y_{true}}_t)] \big\}, \quad (11)$$
where $X^c_l$ is the $c$-th label token. However, because it is impractical to directly learn each optimal text token, we instead modify the Probability Weighted Word Saliency (Ren et al. 2019) method, scoring each word token $v_n \subset X_p$ by masking it:
$$S(v_n) = \max\big(p(y'|x', X_p) - p(y'|x', X^n_p), \; 0\big), \quad \text{where } X^n_p = \{v_1, ..., v_{n-1}, \langle MASK \rangle, ..., v_m\}, \quad (12)$$
where we mask each word token to calculate the saliency score with which it leads the model to the wrong prediction $y'$ on the adversarial sample $x'$. We set a threshold value $\rho$, meaning that only the word tokens $X_{update} = \{v_n \mid S(v_n) > \rho, 1 \leq n \leq m\}$ are set to be updated. Thus, we update each word token in $X_{update}$ from a set of candidates; the updating process is formulated as:
$$v^*_n = \arg\max_{v'_n} \big( p(y_{true}|x', X_p(v'_n)) - p(y_{true}|x', X^n_p) \big), \quad X_p(v'_n) = \{v_1, ..., v_{n-1}, v'_n, ..., v_m\}, \quad v_n \in X_{update}, \; v'_n \in \Gamma(v_n), \quad (13)$$
where $\Gamma(v_n)$ is the candidate word set generated by GPT-2 (Radford et al. 2019). Each candidate word token is updated to restore semantic consistency by ensuring that most of the perturbed samples are re-matched with their ground-truth-related word embeddings. As a whole, the prompt tuning function can be denoted as $P(X_t) = X_l + X_p(\cup_n v^*_n)$. The full algorithm can be found in the supplementary material.
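Before moving to the experiments, the following Python sketch illustrates one way the saliency scoring and candidate replacement of Eqs. (12)-(13) could be wired together; the callable names, the threshold value, and the handling of candidates are our assumptions, and the paper defers its full algorithm to the supplementary material.

```python
def update_prompt(prompt_tokens, y_true, y_wrong, prob_fn, candidates_fn,
                  rho=0.05, mask="<MASK>"):
    """Sketch of the saliency-guided prompt update (Eqs. 12-13); not the authors' code.

    prompt_tokens: prompt words [v_1, ..., v_m] (the class token is kept fixed)
    prob_fn(tokens, label): p(label | adversarial image, prompt tokens) from CLIP
    candidates_fn(word): candidate replacements, e.g. proposed by a language model
    rho: saliency threshold (value assumed here)
    """
    def masked(n):
        return prompt_tokens[:n] + [mask] + prompt_tokens[n + 1:]

    # Eq. (12): saliency of each word with respect to the wrong prediction y'.
    p_wrong = prob_fn(prompt_tokens, y_wrong)
    saliency = [max(p_wrong - prob_fn(masked(n), y_wrong), 0.0)
                for n in range(len(prompt_tokens))]

    # Eq. (13): replace salient words with the candidate that best restores y_true.
    updated = list(prompt_tokens)
    for n, s in enumerate(saliency):
        if s <= rho:
            continue
        p_true_masked = prob_fn(masked(n), y_true)
        updated[n] = max(
            candidates_fn(prompt_tokens[n]),
            key=lambda w: prob_fn(prompt_tokens[:n] + [w] + prompt_tokens[n + 1:], y_true)
                          - p_true_masked,
        )
    return updated
```

Note that the subtracted baseline term in Eq. (13) is constant with respect to the candidate word, so it does not change the arg max; it is kept here only to mirror the equation.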
Experiments
In our experiments, we evaluated the attack performance of our proposed framework on several publicly available benchmark datasets. As our framework aims to generate highly transferable attacks, we focused on evaluating the transferability of the attacks in cross-dataset and cross-architecture settings.
Settings
Datasets. Following previous work (Hayes and Danezis 2017), we evaluate attacks using two popular datasets in adversarial examples research: the CIFAR-10 dataset (Krizhevsky 2009) and the ImageNet dataset (Russakovsky et al. 2014). For testing cross-dataset transferability, we follow previous works (Naseer et al. 2019; Salzmann et al. 2021) and use the Comics and Paintings (Kaggle 2017) or ChestX datasets as the source domain, and evaluate on 5000 randomly selected images from ImageNet.
Implementation Details. We use the PyTorch framework for the implementation. In the normal setting of using the pre-trained CLIP as the surrogate model, we choose 'ViT/32' as the backbone. As for the generator, we choose a ResNet backbone and set the learning rate to 0.0001 with the Adam optimizer. All images are scaled to 224 × 224 to train the generator. For the ℓ∞ bound, we set ϵ = 0.04. A total of 10 iterations (we set NUMG to 2) are used to train the whole network, which costs about 8 hours on one NVIDIA GeForce RTX 3090 GPU.
Inference Metrics. We evaluate the performance of the generated attacks by the mean accuracy of the classifier, which is better with lower values. In addition, as we aim at generating highly transferable attacks, the attacks are evaluated with various downstream networks and datasets. We compute the overall accuracy as a group-wise average, where similar architectures are counted once, i.e., we average the accuracies over the ResNet-based architectures.
Experimental Results
Ablation study on the proposed framework. We conduct an ablation study on the proposed framework in Table 1, where the methods listed for comparison are:
• Clean: clean input without any attack;
• White-box Attack: we generate the attack directly on the target network ResNet-50, which serves as the upper bound;
• w/o ℓfeat/ℓtri/ℓcls: the visual attack optimized without the loss item ℓfeat/ℓtri/ℓcls;
• Visual Attack w/o Iter: the visual attack with semantic consistency, optimized with the loss ℓfeat + ℓtri + ℓcls;
• Random Prompt: we use GPT-2 to randomly generate Xp in each iteration.
Method | Surrogate | CIFAR-10 CLIP (Train/Val) | CIFAR-10 ResNet-50 (Train/Val) | CIFAR-10 Overall (Train/Val) | ImageNet CLIP (Train/Val) | ImageNet ResNet-50 (Train/Val) | ImageNet Overall (Train/Val)
Clean | – | 88.3 / 88.5 | 100.0 / 94.6 | 94.2 / 91.6 | 59.1 / 59.0 | 75.9 / 76.5 | 67.5 / 67.8
White-box Attack | ResNet-50 | 64.2 / 64.2 | 13.7 / 13.4 | 39.0 / 38.8 | 42.0 / 41.9 | 5.7 / 6.0 | 23.9 / 24.0
w/o ℓfeat | CLIP | 12.1 / 12.4 | 53.2 / 54.7 | 32.7 / 33.6 | 9.6 / 9.7 | 55.4 / 54.7 | 32.5 / 32.2
w/o ℓtri | CLIP | 11.9 / 11.9 | 51.0 / 52.0 | 31.5 / 32.0 | 9.7 / 8.9 | 54.2 / 55.7 | 32.0 / 32.3
w/o ℓcls | CLIP | 10.1 / 10.6 | 52.8 / 53.1 | 31.5 / 31.9 | 10.6 / 12.2 | 59.2 / 59.8 | 34.9 / 36.0
Visual Attack w/o Iter | CLIP | 8.1 / 8.9 | 54.5 / 55.2 | 31.3 / 32.1 | 8.8 / 9.7 | 53.9 / 54.3 | 31.4 / 32.0
Random Prompt | CLIP | 8.4 / 8.6 | 55.2 / 56.4 | 31.8 / 32.5 | 8.3 / 9.1 | 41.3 / 39.7 | 24.8 / 24.4
Ours (Full) | CLIP | 7.9 / 7.2 | 41.3 / 41.8 | 24.6 / 24.5 | 7.5 / 7.8 | 25.0 / 25.3 | 16.3 / 16.6
Table 1: Ablation study on attacking CLIP. The experiments are conducted on both the CIFAR-10 and ImageNet datasets. The attacks are obtained by the CLIP model and are tested with the CLIP model and a target pre-trained ResNet-50.
As can be observed from the table: (1) The proposed method achieves the best attack performance in ‘Overall’, which fools the classifier by decreasing the accuracies more than 60% on CIFAR10 and more than 50% on ImageNet. (2) Only applying the visual attack on CLIP (‘Visual Attack w/o Iter’) can successfully attack the CLIP Model, but transfers bad on the down stream network. (3) Randomly update the prompt templates (‘Random Prompt’) can’t improve the attack’s transferablity a lot, indicating the effectiveness of the proposed prompt tuning P(·). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6661 Method Dataset Transfer to Target Networks Overall CLIP Res18 Res34 Res50 VGG11 VGG19 Shufflev2 Mobilev2 SimpleViT Clean ImageNet 59.0 70.3 73.6 76.5 69.2 72.5 69.8 72.1 80.9 71.0 CIFAR-10 88.5 94.6 94.7 94.9 92.2 93.1 90.2 93.9 81.7 89.9 UAN-Res ImageNet 41.9 18.5 20.1 6.0 16.4 15.6 32.4 11.3 60.5 31.0 CIFAR-10 64.2 19.6 23.9 13.4 71.7 38.7 58.2 15.3 31.7 41.4 UAN-Clip ImageNet 17.3 29.8 35.3 37.3 21.8 22.5 34.1 28.5 53.9 31.8 CIFAR-10 10.6 50.2 52.9 54.5 70.8 40.5 55.1 30.9 25.5 37.5 BIA-VGG ImageNet 45.9 23.6 23.7 25.4 9.0 3.6 30.4 20.6 62.2 32.8 CIFAR-10 Ours-Clip ImageNet 7.8 17.3 22.8 25.3 15.6 19.9 23.4 25.7 44.8 23.3 CIFAR-10 7.2 38.5 40.2 41.8 64.5 38.9 45.2 20.0 20.6 30.5 Table 2: Comparative results on the cross-architectures transferability. We evaluate the classification accuracy (lower is better) and show the results on both CIFAR-10 and ImageNet datasets. Our proposed method achieves the best in the ‘overall’ metric. Clean Input Inter 0 Inter 2 Inter 10 Perturbation Adv Sample Perturbation Adv Sample Figure 3: Visualizations on CIFAR-10 dataset on the 2-nd and the 10-th iterations. Visualization the generated adversarial samples. We show the generated adversarial samples in Fig. 3. In the figure, we sample the perturbation generator at the 2-nd iteration and the 10-th, respectively. It can be observed from the figure that the perturbations generated from the pre-trained CLIP model tend like some regular patterns. And this perturbation patterns are strengthened with the iterative training. It is worth noting that when generating the targeted perturbations, we find more interesting patterns, which could be found in the supplementary. The performance during the iterative training. The main idea of the proposed framework is to utilize the iterative training strategy. Here, we depict the generated attack’s performance while the iterative training in Fig. 4, where we compare the performance on normal training the generator G (a) and the proposed iteratively training the generator (b). As can be seen in the figure, we evaluate the attack capacity on both the surrogate network to train the generator (CLIP) (a) Normal Training (b) Iterative Training Figure 4: The attack’s performance from the generative perturbation network in each iteration training on CIFAR10. and the target network (ResNet50). Thus, the observations are: (1) In the normal training scheme, the generator convergences faster, but finally the attack success rate is lower than the proposed iterative training. (2) In the normal training scheme, the attack’s transferablity performance on the ResNet improves at first, but fails in the afterward iterations. While in our proposed framework, the transferablity improves stably, which is mainly due to the updating on the other modality. 
(3) About 10-iteration training would optimize an optimal perturbation generator, thus, we set the total iteration number to be 10 in the rest of experiments. Analysis on the embedding visualization. We visualize the embedding features in Fig. 5. The visualization includes the feature space before and after attack and is conducted on the CIFAR-10 dataset, where the following observations can be made: (1) The features belonging to the same category are grouped together, making it easy for the classifier to recognize them. However, the feature space after attack is mixed together, which fools the classifier. (2) Comparing the visualized features after attack of CLIP (d) and ResNet-50 (c), the adversarial features generated by our work are mixed together more evenly. This makes it much more difficult to The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6662 (a) Before Attack (ResNet-50) (b) After Attack (ResNet-50) (c) Before Attack (CLIP) (d) After Attack (CLIP) airplane automobile bird cat deer dog frog horse ship truck airplane automobile bird cat deer dog frog horse ship truck airplane automobile bird cat deer dog frog horse ship truck airplane automobile bird cat deer dog frog horse ship truck Figure 5: TSNE visualization on the features with the clean/adversarial images as the input. defend against and indicates a higher attack capability. Transferability Evaluation We have compared with other methods on the cross-architecture, cross-dataset (domain) and cross-task transferability. Evaluate the cross-architecture transferability We form a set of pre-trained networks in various architectures, which are divided into 5 groups: (1) CLIP, (2) ResNet18, ResNet34, ResNet50, (3) VGG11, VGG19, (4) ShuffleNetv2, MobileNetv2 and (5) SimpleViT. Based on these grouping strategy, we calculate the overall accuracy by group-average accuracy. The experimental results are compared in Table 2. We have compared the proposed method with the other generator-oriented methods, which are UAN (Hayes and Danezis 2017) (‘UAN-Res’), modified UAN that uses CLIP as the surrogate model (‘UAN-CLIP’) and BIA (Zhang, Yi, and Sang 2022).From the table, we observe that: (1) When transferring the generated attack to the target networks, the attacks perform better between the networks in similar architecture as the surrogate model; (2) We propose to generate the attacks with high transferability, which decrease the overall accuracies most on both ImageNet and CIFAR-10 datasets; (3) The other attack methods (‘UAN’ and ‘BIA’) perform unevenly according to the target networks, which could be easily defended by the ensemble attack; while the proposed mutual-modality attack is stable whatever the target networks, making it more difficult to defend. The corresponding experiment against the ensemble defense is included in the supplementary. Evaluate the Cross-dataset Transferability. We evaluate the cross-data transferability by training the generator on the source dataset and test on the target dataset. Following the previous setting, we train the generator on Comics/Paintings/ChestX datasets with ChestXNet as the discriminator and evaluate the attack performance on ImageNet. As we propose a plug-and-play method, we test the effectiveness of our method by integrating it into the existing methods, including: GAP (Poursaeed et al. 2018), Mtd Datasets Res152 CLIP SimpleViT Curr. / Curr. 
+ Ours GAP Cosmics 50.3/51.2 20.3/30.8 27.6/35.7 Paintings 52.9/53.0 36.7/37.7 37.8/46.9 ChestX 29.2/32.8 19.8/36.4 19.4/38.7 CDA Cosmics 38.8/39.6 36.8/46.6 37.6/42.6 Paintings 41.7/41.8 38.6/43.5 39.0/46.2 ChestX 23.7/30.3 16.7/29.3 19.2/27.9 LTAP Cosmics 55.2/56.5 43.6/45.5 48.4/54.0 Paintings 59.9/60.7 44.8/53.5 48.6/48.7 ChestX 49.5/50.3 28.9/34.2 21.9/24.6 Table 3: Extreme cross-domain (dataset) transferability analysis evaluated by attack success rate. Method VGG16 Res50 SimpleViT Overall Clean 51.3 69.9 54.7 58.6 GAMA 3.1 22.3 38.3 21.2 GAMA+Ours 3.4 20.4 35.9 19.9 Table 4: The cross-task transferability evaluation. CDA (Naseer et al. 2019) and LTAP (Salzmann et al. 2021). As can be observed from the table: (1) Enabling the crossdataset transferability of the attacks is much more difficult than the cross-architecture one, and our method also shows satisfying results; (2) Our method (‘ Curr. + Ours’) improves the cross-dataset attack success rates when integrated into the current methods especially on the cases with CLIP and SimpleViT as target networks. Evaluate the Cross-task Transferability. Following previous work (Zhang et al. 2022) we conduct the crosstask tranferability evaluation in Table 4. We integrate the proposed framework into GAMA (Aich et al. 2022) (‘GAMA+Ours’). We train the generator with Pascal-VOC dataset (Everingham et al. 2010), and then test on the ImageNet classification task. As can be observed from the figure, our proposed framework could be integrated into any adversarial attack framework, which could further enhance the attack transferability. Conclusion Overall, our proposed approach demonstrates promising results in improving the transferability and stability of adversarial attacks by generating perturbations in the semantic feature embedding space using the pre-trained CLIP model. By optimizing the attack iteratively from both image and text modalities, our method achieves improved transferability across different architectures and datasets, as demonstrated in our experiments on several benchmark datasets. We believe that our work provides a valuable contribution to the field of adversarial attacks and could have important implications for improving the security and reliability of machine learning systems in real-world applications. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6663 Acknowledgements This research is supported by the Ministry of Education, Singapore, under its Academic Research Fund Tier 2 (Award Number: MOE-T2EP20122-0006). References Aich, A.; Ta, C.-K.; Gupta, A. A.; Song, C.; Krishnamurthy, S.; Asif, M. S.; and Roy-Chowdhury, A. 2022. GAMA: Generative Adversarial Multi-Object Scene Attacks. In Advances in Neural Information Processing Systems, volume 35, 36914–36930. Akhtar, N.; and Mian, A. S. 2018. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey. IEEE Access, 6: 14410–14430. Boucher, N.; Shumailov, I.; Anderson, R.; and Papernot, N. 2022. Bad characters: Imperceptible nlp attacks. In 2022 IEEE Symposium on Security and Privacy (SP), 1987–2004. IEEE. Chen, X.; Salem, A.; Chen, D.; Backes, M.; Ma, S.; Shen, Q.; Wu, Z.; and Zhang, Y. 2021. Badnl: Backdoor attacks against nlp models with semantic-preserving improvements. In Annual Computer Security Applications Conference, 554–569. Cheng, S.; Dong, Y.; Pang, T.; Su, H.; and Zhu, J. 2019. Improving black-box adversarial attacks with a transfer-based prior. Advances in neural information processing systems, 32. 
Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; and Li, J. 2017. Boosting Adversarial Attacks with Momentum. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9185–9193. Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; and Li, J. 2018. Boosting adversarial attacks with momentum. In Proceedings of the IEEE conference on computer vision and pattern recognition, 9185–9193. Du, W.; Zhao, Y.; Li, B.; Liu, G.; and Wang, S. 2022. PPT: Backdoor Attacks on Pre-trained Models via Poisoned Prompt Tuning. In International Joint Conference on Artificial Intelligence. Everingham, M.; Gool, L. V.; Williams, C. K. I.; Winn, J. M.; and Zisserman, A. 2010. The Pascal Visual Object Classes (VOC) Challenge. International Journal of Computer Vision, 88: 303–338. Fang, G.; Ma, X.; Song, M.; Mi, M. B.; and Wang, X. 2023. DepGraph: Towards Any Structural Pruning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition. Fang, G.; Ma, X.; and Wang, X. 2023. Structural Pruning for Diffusion Models. In Advances in neural information processing systems. Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and Harnessing Adversarial Examples. International Conference on Learning and Representations. Guo, C.; Gardner, J.; You, Y.; Wilson, A. G.; and Weinberger, K. 2019. Simple black-box adversarial attacks. In International Conference on Machine Learning, 2484–2493. PMLR. Hayes, J.; and Danezis, G. 2017. Learning Universal Adversarial Perturbations with Generative Models. 2018 IEEE Security and Privacy Workshops (SPW), 43–49. Hintersdorf, D.; Struppek, L.; and Kersting, K. 2022. CLIPping Privacy: Identity Inference Attacks on MultiModal Machine Learning Models. arXiv preprint arXiv:2209.07341. Jia, C.; Yang, Y.; Xia, Y.; Chen, Y.-T.; Parekh, Z.; Pham, H.; Le, Q.; Sung, Y.-H.; Li, Z.; and Duerig, T. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, 4904–4916. PMLR. Jia, J.; Liu, Y.; and Gong, N. Z. 2022. Badencoder: Backdoor attacks to pre-trained encoders in self-supervised learning. In 2022 IEEE Symposium on Security and Privacy (SP), 2043–2059. IEEE. Kaggle. 2017. Painter by Number. https://www.kaggle.com/ c/painter-by-numbers/data. Krizhevsky, A. 2009. Learning Multiple Layers of Features from Tiny Images. Liu, A.; Liu, X.; Fan, J.; Ma, Y.; Zhang, A.; Xie, H.; and Tao, D. 2019. Perceptual-Sensitive GAN for Generating Adversarial Patches. In AAAI Conference on Artificial Intelligence. Liu, S.; Wang, K.; Yang, X.; Ye, J.; and Wang, X. 2022. Dataset Distillation via Factorization. In Advances in neural information processing systems. Liu, Y.; Chen, X.; Liu, C.; and Song, D. 2016. Delving into Transferable Adversarial Examples and Black-box Attacks. In International Conference on Learning Representations. Ma, X.; Fang, G.; and Wang, X. 2023a. DeepCache: Accelerating Diffusion Models for Free. arXiv preprint arXiv:2312.00858. Ma, X.; Fang, G.; and Wang, X. 2023b. LLM-Pruner: On the Structural Pruning of Large Language Models. In Advances in neural information processing systems. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards Deep Learning Models Resistant to Adversarial Attacks. International Conference on Learning Representations. Morris, J. X.; Lifland, E.; Yoo, J. Y.; Grigsby, J.; Jin, D.; and Qi, Y. 2020. TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP. EMNLP 2020, 119. 
Mu, N.; Kirillov, A.; Wagner, D.; and Xie, S. 2022. Slip: Self-supervision meets language-image pre-training. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVI, 529–544. Springer. Naseer, M. M.; Khan, S. H.; Khan, M. H.; Shahbaz Khan, F.; and Porikli, F. 2019. Cross-domain transferability of adversarial perturbations. Advances in Neural Information Processing Systems, 32. Noever, D. A.; and Noever, S. E. M. 2021. Reading Isn’t Believing: Adversarial Attacks On Multi-Modal Neurons. arXiv preprint arXiv:2103.10480. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6664 Perez, F.; and Ribeiro, I. 2022. Ignore Previous Prompt: Attack Techniques For Language Models. ArXiv, abs/2211.09527. Poursaeed, O.; Katsman, I.; Gao, B.; and Belongie, S. 2018. Generative adversarial perturbations. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4422–4431. Poursaeed, O.; Katsman, I.; Gao, B.; and Belongie, S. J. 2017. Generative Adversarial Perturbations. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4422–4431. Qiu, H.; Xiao, C.; Yang, L.; Yan, X.; Lee, H.; and Li, B. 2019. SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing. European Conference on Computer Vision. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, 8748–8763. PMLR. Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; and Sutskever, I. 2019. Language Models are Unsupervised Multitask Learners. Ren, S.; Deng, Y.; He, K.; and Che, W. 2019. Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency. In Annual Meeting of the Association for Computational Linguistics. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M. S.; Berg, A. C.; and Fei-Fei, L. 2014. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115: 211–252. Salzmann, M.; et al. 2021. Learning transferable adversarial perturbations. Advances in Neural Information Processing Systems, 34: 13950–13962. Shi, Y.; Li, P.; Yin, C.; Han, Z.; Zhou, L.; and Liu, Z. 2022. PromptAttack: Prompt-Based Attack for Language Models via Gradient Search. In Natural Language Processing and Chinese Computing: 11th CCF International Conference, NLPCC 2022, Guilin, China, September 24–25, 2022, Proceedings, Part I, 682–693. Springer. Wang, X.; and He, K. 2021. Enhancing the transferability of adversarial attacks through variance tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1924–1933. Wu, D.; Wang, Y.; Xia, S.-T.; Bailey, J.; and Ma, X. 2019. Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets. In International Conference on Learning Representations. Xiang, T.; Liu, H.; Guo, S.; Gan, Y.; and Liao, X. 2022. EGM: An Efficient Generative Model for Unrestricted Adversarial Examples. ACM Transactions on Sensor Networks (TOSN). Xie, C.; Zhang, Z.; Zhou, Y.; Bai, S.; Wang, J.; Ren, Z.; and Yuille, A. L. 2019. Improving transferability of adversarial examples with input diversity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2730–2739. 
Yang, X.; Zhou, D.; Liu, S.; Ye, J.; and Wang, X. 2022. Deep Model Reassembly. In Advances in neural information processing systems. Yao, L.; Huang, R.; Hou, L.; Lu, G.; Niu, M.; Xu, H.; Liang, X.; Li, Z.; Jiang, X.; and Xu, C. 2021. FILIP: Fine-grained Interactive Language-Image Pre-Training. In International Conference on Learning Representations. Ye, J.; Fu, Y.; Song, J.; Yang, X.; Liu, S.; Jin, X.; Song, M.; and Wang, X. 2022a. Learning with recoverable forgetting. In European Conference on Computer Vision, 87–103. Springer. Ye, J.; Liu, S.; and Wang, X. 2023. Partial network cloning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 20137–20146. Ye, J.; Mao, Y.; Song, J.; Wang, X.; Jin, C.; and Song, M. 2022b. Safe distillation box. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 3117– 3124. Yuan, Z.; Zhang, J.; Jia, Y.; Tan, C.; Xue, T.; and Shan, S. 2021. Meta Gradient Adversarial Attack. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 7728–7737. Zhang, C.; Benz, P.; Lin, C.; Karjauv, A.; Wu, J.; and Kweon, I. S. 2021. A Survey On Universal Adversarial Attack. International Joint Conference on Artificial Intelligence. Zhang, J.; Yi, Q.; and Sang, J. 2022. Towards Adversarial Attack on Vision-Language Pre-training Models. In Proceedings of the 30th ACM International Conference on Multimedia, 5005–5013. Zhang, Q.; Li, X.; Chen, Y.; Song, J.; Gao, L.; He, Y.; and Xue, H. 2022. Beyond imagenet attack: Towards crafting adversarial examples for black-box domains. arXiv preprint arXiv:2201.11528. Zhang, W. E.; Sheng, Q. Z.; Alhazmi, A.; and Li, C. 2020. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Transactions on Intelligent Systems and Technology (TIST), 11(3): 1–41. Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022a. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16816–16825. Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022b. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9): 2337–2348. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6665 | 2024 | 740 |
18,563 | STDiff: Spatio-Temporal Diffusion for Continuous Stochastic Video Prediction Xi Ye*, Guillaume-Alexandre Bilodeau LITIV, Polytechnique Montr´eal [email protected], [email protected] Abstract Predicting future frames of a video is challenging because it is difficult to learn the uncertainty of the underlying factors influencing their contents. In this paper, we propose a novel video prediction model, which has infinite-dimensional latent variables over the spatio-temporal domain. Specifically, we first decompose the video motion and content information, then take a neural stochastic differential equation to predict the temporal motion information, and finally, an image diffusion model autoregressively generates the video frame by conditioning on the predicted motion feature and the previous frame. The better expressiveness and stronger stochasticity learning capability of our model lead to state-of-theart video prediction performances. As well, our model is able to achieve temporal continuous prediction, i.e., predicting in an unsupervised way the future video frames with an arbitrarily high frame rate. Our code is available at https: //github.com/XiYe20/STDiffProject. Introduction Given some observed past frames as input, a video prediction model forecasts plausible future frames, with the aim of mimicking the vision-based precognition ability of humans. Anticipating the future is critical for developing intelligent agents, thus video prediction models have applications in the field of autonomous driving, route planning, and anomaly detection (Lu et al. 2019). Video frame prediction (VFP) is challenging due to the inherent unpredictability of the future. Theoretically, there are an infinite number of potential future outcomes corresponding to the same past observation. Moreover, the stochasticity increases exponentially as the model predicts towards a more distant future. Early deterministic video prediction models (Finn, Goodfellow, and Levine 2016; Villegas et al. 2017) are incapable of dealing with a multimodal future and thus the prediction tends to be blurry. Subsequently, techniques such as VAE and GANs were introduced for stochastic video prediction (Babaeizadeh et al. 2018; Lee et al. 2018; Denton and Fergus 2018; Castrejon et al. 2019). However, they still fall short in achieving satisfactory results. *Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Previous works (Castrejon et al. 2019; Wu et al. 2021) empirically observed that the main issue which limits the performance is that the stochastic video prediction model is not expressive enough, specifically, shallow levels of latent variables are incapable of capturing the complex groundtruth latent distribution. Castrejon et al. (2019) introduced hierarchical latent variables into variational RNNs (VRNNs) to mitigate the problem. Recent progress of diffusion-based video prediction models (Voleti et al. 2022; Harvey et al. 2022; H¨oppe et al. 2022) can also be attributed to the increase of model expressiveness, because diffusion models can be considered as infinitely deep hierarchical VAEs (Huang, Lim, and Courville 2021; Tzen and Raginsky 2019) with a fixed encoder and latent variables across different levels having the same dimensionality as the data. However, none of the previous models explored extending the stochastic expressiveness along the temporal dimension. 
Hierarchical VRNNs incorporate randomness at each fixed time step, which is limited by the discrete nature of RNNs. Meanwhile, almost all the previous diffusion-based video prediction models (Voleti et al. 2022; Harvey et al. 2022; H¨oppe et al. 2022) concatenate a few frames together and learn the distribution of those short video clips as a whole, which ignores an explicit temporal stochasticity estimation between frames. Therefore, there is a need to increase the expressiveness of the stochastic video prediction model over both the spatial and temporal dimensions. One more issue is that almost all video prediction models can only predict future frames at fixed time steps, ignoring the continuous nature of real-world dynamic scenes. Thus, additional video interpolation models are required to generate videos with a different frame rate. Only a few recent preliminary works have explored this topic (Park et al. 2021; Ye and Bilodeau 2023b), but they are either limited to deterministic predictions (Park et al. 2021) or exhibit low diversity in stochastic predictions (Ye and Bilodeau 2023b). To improve video prediction, a method should be able to do continuous stochastic prediction with a good level of diversity. In this paper, we propose a novel diffusion-based stochastic video prediction model that addresses the limited expressiveness and the temporal continuous prediction problems for the video prediction task. We propose to increase the expressiveness of the video prediction model by separately learning the temporal and spatial stochasticity. Specifically, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6666 we take the difference images of adjacent past frames as motion information, and those images are fed into a ConvGRU to extract the initial motion feature of future frames. Given the initial motion feature, a neural stochastic differential equation (SDE) solver predicts the motion feature at an arbitrary future time, which enables continuous temporal prediction. Finally, an image diffusion model conditions on the previous frame and on the motion feature to generate the current frame. Because the diffusion process can also be described by SDE (Song et al. 2021), our model explicitly uses SDEs to describe both the spatial and temporal latent variables, and thus our model is more flexible and expressive than previous ones. Our contributions can be summarized as follows: • We propose a novel video prediction model with better expressiveness by describing both the spatial and temporal generative process with SDEs; • Our model attains state-of-the-art (SOTA) FVD and LPIPS score across multiple video prediction datasets; • Compared to prior approaches, our model significantly enhances the efficiency of generating diverse stochastic predictions; • To the best of our knowledge, this is the first diffusionbased video prediction model with temporal continuous prediction and with motion content decomposition. Related Works Video prediction models can be classified into two types: deterministic or stochastic. Since we aim at addressing uncertainty, we focus on the stochastic models in this section. Most stochastic video prediction models (Villegas et al. 2017; Babaeizadeh et al. 2018; Denton and Fergus 2018; Lee et al. 2018) utilize a variational RNN (VRNN) as the backbone. The performance of VRNN-based models is constrained as there is only one level of latent variables. Hierarchical VAEs are utilized by some works (Castrejon et al. 
2019; Chatterjee, Ahuja, and Cherian 2021; Wu et al. 2021) to deal with the stochasticity underfitting problem. FitVid (Babaeizadeh et al. 2021) proposed a carefully designed VRNN architecture to address the underfitting problem. Despite the improvements, the stochasticity of all these models is still characterized by the variance of the spatial latent variables of each frame (Chatterjee, Ahuja, and Cherian 2021). Some methods, such as MCNet (Villegas et al. 2017), decompose the motion and content of videos with the assumption that different video frames share similar content but with different motion, thus the decomposition of motion and content facilitates the learning. It was applied only spatially. Given its benefit, this strategy inspired us to learn the stochasticity over the temporal and spatial dimension separately. NUQ (Chatterjee, Ahuja, and Cherian 2021) is the only VRNN-based model that considers the temporal predictive uncertainty. It enforces an uncertainty regularization to the vanilla loss function for a better convergence without modification of the VRNN architecture. In contrast, our proposed model employ an SDE to explicitly account for the randomness of the temporal motion. Given the success of image diffusion models, some works have adapted them for video prediction (H¨oppe et al. 2022; Voleti et al. 2022; Nikankin, Haim, and Irani 2022; Harvey et al. 2022; Yang, Srivastava, and Mandt 2022). Almost all diffusion-based models are non-autoregressive models that learn to estimate the distribution of a short future video clip. Therefore, they ignore the explicit learning of motion stochasticity and generate multiple future frames with a fixed frame rate all at once. Yang, Srivastava, and Mandt (2022) combined a deterministic autoregressive video prediction model with a diffusion model, where the latter is used to generate stochastic residual change between frames to correct the deterministic prediction. TimeGrad (Rasul et al. 2021) combines an autoregressive model with a diffusion model, but it is only for low dimensional time series data prediction. In contrast, our method combines an autoregressive model with a diffusion model for videos and can make a continuous temporal prediction. Besides, few recent works investigated continuous video prediction, including Vid-ODE (Park et al. 2021) and NPVP (Ye and Bilodeau 2023b). Vid-ODE (Park et al. 2021) combines neural ODE and Conv-GRU to predict the features of future frames and a CNN decoder is taken to decode the frames in a compositional manner. It achieves temporal continuous predictions, but it is deterministic. NPVP (Ye and Bilodeau 2023b) is a non-autoregressive model that tackles the continuous problem by formulating the video prediction as an attentive neural process. However, the diversity of the stochastic predictions is low because NPVP only takes a single global latent variable for the whole sequence to account for the stochasticity. Our use of SDEs allows us to obtain better diversity. Background Neural SDEs Stochastic differential equations are widely used to describe the dynamics in engineering. Compared with an ordinary differential equation (ODE), a SDE takes into account stochasticity by incorporating randomness into the differential equation. Given an observation Xt at time t, an Itˆo SDE is formulated as dXt = µ(Xt, t)dt + σ(Xt, t)dWt, (1) where W denotes a Wiener process (Brownian motion). The first term and second term of the right hand side are the drift term and diffusion term respectively. 
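Eq. 1 can be simulated numerically; a common choice is the Euler–Maruyama scheme, which advances the state with the drift times dt plus a Gaussian increment of variance dt. The sketch below (PyTorch) illustrates this for a generic Itô SDE whose drift and diffusion are parameterized by small neural networks; the placeholder networks, their sizes, and the time-independent form are illustrative assumptions of this sketch, not the architecture used in the paper. Running the same integration twice from the same initial state yields different trajectories, which is exactly the kind of stochasticity the temporal motion model exploits.

```python
import torch
import torch.nn as nn

class DriftDiffusionNet(nn.Module):
    """Tiny placeholder networks for the drift mu(X) and diffusion sigma(X).
    They ignore t, matching the simplified time-independent SDE used later in the paper."""
    def __init__(self, dim):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))
        self.sigma = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim), nn.Softplus())

    def forward(self, x):
        return self.mu(x), self.sigma(x)

@torch.no_grad()
def euler_maruyama(f, x0, t0, t1, n_steps=20):
    """Integrate dX = mu(X) dt + sigma(X) dW from t0 to t1 with the Euler-Maruyama scheme."""
    x = x0
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        mu, sigma = f(x)
        dW = torch.randn_like(x) * dt ** 0.5   # Wiener increment: N(0, dt)
        x = x + mu * dt + sigma * dW
    return x

# Two integrations of the same SDE from the same initial state give different samples.
f = DriftDiffusionNet(dim=8)
x0 = torch.zeros(1, 8)
print(euler_maruyama(f, x0, 0.0, 1.0))
print(euler_maruyama(f, x0, 0.0, 1.0))
```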
We can integrate Eq. 1 by Itˆo’s calculus. In this paper, we use a simpler version of SDE, where µ and σ are independent of t. Given observations Xt, we can fit an SDE by parameterizing µ and σ with neural networks. This is referred to as neural SDEs (Li et al. 2020; Kidger et al. 2021). Li et al. (2020) generalized the adjoint sensitivity method for ODEs into SDEs for an efficient gradient computation to enable the learning of neural SDEs. Diffusion Models There are two types of equivalent diffusion models. The first one is denoising diffusion probabilistic models (DDPM) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6667 (Ho, Jain, and Abbeel 2020; Sohl-Dickstein et al. 2015). DDPM consists of a fixed forward diffusion and a learned reversed denoising process. The forward diffusion process gradually adds noise to the data until it converges to a standard Gaussian. During the reverse process (generative process), a random Gaussian noise is sampled, and a trained neural network is repeatedly applied to denoise and finally converts the random Gaussian noise to a clean data sample. The second type is the score matching-based method (Song et al. 2021), which describes both the forward and reversed diffusion processes as SDEs. Song et al. (2021) studied the connection between the two methods and proved that a DDPM is equivalent to a discretization of a score matching model based on continuous variance preserving SDEs (VPSDEs). As we also model the motion feature of videos by a SDE in this paper, we present the image diffusion model under the framework of SDEs, specifically, the VP-SDE score matching model. The forward and reversed processes of the VP-SDE score matching model are given in Eq. 2 and Eq. 3 respectively: dxt = −1 2β(t)xtdt + p β(t)dWt (2) dxt = [−1 2β(t)xt −β(t)∇xtlogqt(xt)]dt + p β(t)d ¯Wt, (3) where t ∼U[0, T] denotes a random diffusion timestep with a maximum value of T, xT ∼N(0, I), xt ∼qt(xt) is the perturbed image at diffusion step t, β(t) denotes the forward noise schedule, and ¯Wt denotes a backward Wiener process. The VP-SDE score matching model is trained by minimizing the following score matching loss: EtEx0Ext∼qt(xt|x0) ∥sθ(xt, t) −∇xtlogqt(xt|x0)∥2 , (4) where sθ denotes the score estimation neural network, and qt(xt|x0) = N(xt; γtx0, σ2 t I) denotes the distribution of perturbed image xt with γt = e−1 2 R t 0 β(s)ds and σ2 t = 1−e− R t 0 β(s)ds. Therefore, the forward diffusion process described by Eq. 2 can be achieved by direct re-parameterized sampling, i.e., xt = γtx0 + σtϵ, where ϵ ∼N(0, I). In this case, we can parameterize sθ(xt, t) by −ϵθ(xt,t) σt , where ϵθ denotes a noise predictor neural network. Then the score matching loss in Eq. 4 can be reduced to an equivalent noise prediction loss as DDPM: EtEx0Eϵ∼N (0,1) 1 σ2 t ∥ϵ −ϵθ(xt, t)∥2 , (5) except that here, the diffusion timestep t is continuous instead of discrete. Finally, we can apply loss weighting λ(t) = σ2 t to Eq. 5 for a good perceptual quality, which also further simplifies the loss function to be: EtEx0Eϵ∼N (0,1) ∥ϵ −ϵθ(xt, t)∥2 . (6) Under the variational framework, Huang, Lim, and Courville (2021) proved that continuous-time diffusion models, like the VP-SDE score matching model, can be viewed as “the continuous limit of hierarchical VAEs” as purposed by Tzen and Raginsky (2019). In other words, a continuous diffusion model is equivalent to an infinitely deep hierarchical VAE. Methodology 𝑥!" # 𝑥!# # 𝑥!$ # 𝑥!" % 𝑥!# % 𝑥!$ % 𝑥!" & 𝑥!# & 𝑥!$ & … … … … … … 𝑥!" 
' 𝑥!# ' 𝑥!$ ' 𝑚!# 𝑚!" 𝑚!$ … 𝑥!' ' 𝑚!' Figure 1: Graphical model for the generation process of STDiff. Green arrows denote the temporal motion connections, and blue arrows denote the connections between latent variables of each frame at timestep t, i.e., the reverse image diffusion process. Red arrows denote the recurrent connection from the previous frame to each level of latent variable in the next time step. mt0 denotes the initial motion feature extracted from observed frames. x0 t0 denotes the most recent observed frame. Note that in this section, t is used to denote video temporal coordinates. We address the video prediction problem that predicts P future frames x = {x0 t1, x0 t2, ..., x0 tP } given N past frames c = {x0 1, x0 2, ..., x0 N}. During training, ti is a discrete future frame integer index. However, ti can be an arbitrary positive real number timestep during inference. The training objective is to maximize p(x|c). In order to simplify the learning problem, we decompose the motion and content features and we also factorize the joint distribution along the temporal dimension, i.e., autoregressive prediction. Considering the close relationship between diffusion models and hierarchical VAEs (Huang, Lim, and Courville 2021), we formulate our generative probabilistic model under the framework of VRNNs (Figure 1) as an extended hierarchical recurrent RNN, but with infinite-dimensional latent variables over the spatio-temporal domain. To avoid confusion, we use l instead of t to denote the spatial diffusion timesteps. l = 0 denotes the pixel space. Given the initial motion feature mt0 extracted from c and the most recent observed frame x0 t0, i.e., x0 N, we can first factorize the probability as follows with the assumption that the temporal process is Markovian: p(x|c) = p(x|x0 t0, mt0) = P Y i=1 p(x0 ti|x0 ti−1, mti−1), (7) where the motion feature mti follows a transitional distribution p(mti|mti−1) defined by the temporal neural SDE in Eq.14. Following the same technique of conditional DDPM (Ho, Jain, and Abbeel 2020), we can derive the following learning objective: The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6668 CNN GRU CNN GRU … … Past Future … … … … … 𝑚! 𝑚" 𝑚"#$! 𝑚"#$" 𝑥"#$" % 𝑥"#$" %#! 𝑥"#$! & 𝑥" & 𝑥"#$! %#! 𝑥"#$! % 𝑓 SDE Solver 𝜇 𝜎 𝑑𝑚 𝑑𝑡 𝑚'! ∫ 𝑚'( 𝑡1 𝑡2 𝑑( 𝑑" 𝜖 𝜖 𝑑𝑊! Spatial diffusion Temporal diffusion Figure 2: Neural network architecture of STDiff. Difference images of past frames are encoded as motion feature mN by a ConvGRU for future motion prediction. The curved arrow denotes one step of random future motion prediction (SDE integration), i.e., the temporal motion diffusion process. Detail computation flow of the SDE solver is shown in the top left green box. The conditional image diffusion model recurrently predicts each future frame given motion feature and the previous frame. Lθ = P X i=1 Eq[LL + X l>1 Ll−1 + L0], (8) where each term is defined as follows: LL = DKL(q(xL ti|x0 ti)||pθ(xL ti)), (9) Ll−1 = DKL(q(xl−1 ti |xl ti, x0 ti)||pθ(xl−1 ti |xl ti, x0 ti−1, mti)), (10) L0 = −log pθ(x0 ti|x1 ti, x0 ti−1, mti) (11) By applying the techniques described in (Ho, Jain, and Abbeel 2020), we can simplify the objective in Eq. 8 to be Lθ = P X i=1 ElEx0 ti Eϵ∼N (0,1)
ϵ −ϵθ(xl ti, x0 ti−1, mti, l)
2 . (12) Please refer to the appendix in the arXiv version of this paper (Ye and Bilodeau 2023a) for a detailed derivation of the loss function. Spatio-Temporal Diffusion (STDiff) Architecture We call our proposed method STDiff (Spatio-Temporal Diffusion) and its architecture is shown in Figure 2. It consists of two parts, a motion predictor and a recurrent diffusion frame predictor. The motion predictor encodes all the past motion features and predicts future motion features. Given the predicted motion feature at different future time steps and the previous frame, the recurrent diffusion frame predictor generates the desired future frame. The detailed architecture of the two modules is described as follows. Motion Predictor. We decompose the motion and content feature because the SDEs are naturally designed for dynamic information modeling, and also it eases the learning burden of the neural SDE. The motion predictor is divided into two parts: 1) a Conv-GRU for past motion encoding and 2) a neural SDE for future motion prediction. Assuming a regular temporal step in the observed past frames, we utilize a Conv-GRU for past motion encoding due to its flexibility and efficiency. In order to achieve the motion and appearance decomposition, we calculate the difference images di of adjacent past frames as the input of the Conv-GRU. Given the zero-initialized motion hidden state m1 and N −1 difference images, the Conv-GRU outputs the motion feature mN for a future prediction. Given mN as the initial value of future motion features, we fit the future motion dynamic by a neural SDE, which is equivalent to a learned diffusion process. Specifically, given the motion feature mti−1 at time step ti−1, a small neural network fθ is taken to parameterize the drift coefficient and diffusion coefficient respectively. Then, the motion feature at ti is integrated with µ, σ = fθ(mti−1) (13) mti = mti−1 + Z ti ti−1 µdt + Z ti ti−1 σdWt. (14) In Figure 2, each curved arrow denotes one future motion prediction step, i.e., one integration step for the neural SDE. For better learning of the temporal dynamics, we randomly sample ti from the future time steps during training (see Algorithm 1 for details about the training). Recurrent diffusion predictor. At a future time step ti, given a noisy frame xl ti derived from the forward diffusion process, a UNet is trained to predict the noise condition on the previous clean frame x0 ti−1 and the motion feature mti. In detail, x0 ti−1 is concatenated with xl ti as the input of the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6669 Models SMMNIST, 5 →10 FVD↓SSIM↑LPIPS↓ SVG-LP (Denton and Fergus 2018) 90.81 0.688 153.0 Hier-VRNN (Castrejon et al. 2019) 57.17 0.760 103.0 MCVD-concat (Voleti et al. 2022) 25.63 0.786 MCVD-spatin (Voleti et al. 2022) 23.86 0.780 NPVP (Ye and Bilodeau 2023b) 95.69 0.817 188.7 STDiff (ours) 14.93 0.748 146.2 Models BAIR, 2 →28 FVD↓SSIM↑LPIPS↓ SAVP (Lee et al. 2018) 116.4 0.789 63.4 Hier-VRNN (Castrejon et al. 2019) 143.4 0.829 55.0 STMFANet (Jin et al. 2020) 159.6 0.844 93.6 VPTR-NAR (Ye and Bilodeau 2022) 0.813 70.0 NPVP (Ye and Bilodeau 2023b) 923.62 0.842 57.43 MCVD-concat (Voleti et al. 2022) 120.6 0.785 70.74 MCVD-spatin (Voleti et al. 2022) 132.1 0.779 75.27 FitVid (Babaeizadeh et al. 2021) 93.6 STDiff (ours) 88.1 0.818 69.40 Table 1: VFP results on SMMNIST (left) and BAIR (right). Boldface: best results. Underlined: second best results. 
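Equations (13)–(14) amount to a single stochastic integration applied to the motion feature map: a small network predicts the drift and diffusion, and the feature is advanced to an arbitrary (possibly fractional) future time. The sketch below shows how such a step might look; the module name, channel count, kernel size, and the few-sub-step Euler–Maruyama solver are illustrative assumptions of this sketch and may be organized differently in the released code.

```python
import torch
import torch.nn as nn

class MotionSDE(nn.Module):
    """Small conv net producing the drift mu and diffusion sigma of the motion feature (Eq. 13).
    The channel count and kernel size here are illustrative choices."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Conv2d(channels, 2 * channels, kernel_size=3, padding=1)

    def forward(self, m):
        mu, log_sigma = self.body(m).chunk(2, dim=1)
        return mu, log_sigma.exp()

def motion_step(f_theta, m_prev, t_prev, t_next, n_steps=4):
    """Integrate the motion feature from t_prev to t_next (Eq. 14) with Euler-Maruyama sub-steps."""
    m, dt = m_prev, (t_next - t_prev) / n_steps
    for _ in range(n_steps):
        mu, sigma = f_theta(m)
        m = m + mu * dt + sigma * torch.randn_like(m) * dt ** 0.5
    return m

# Usage: m_N comes from the ConvGRU over difference images; here the feature is advanced
# by a fractional offset of 1.5 time steps, illustrating continuous-time prediction.
f_theta = MotionSDE(channels=64)
m_N = torch.randn(2, 64, 16, 16)
m_future = motion_step(f_theta, m_N, t_prev=0.0, t_next=1.5)
```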
The previous clean frame x^0_{t_{i-1}} is concatenated with the noisy frame x^l_{t_i} as the input of the UNet, and m_{t_i} is fed into each ResNet block of the UNet by means of spatially-adaptive denormalization (Park et al. 2019). Please see Algorithm 1 for training details.

Algorithm 1: Training
Input: observed frames V_o = {x_1, x_2, ..., x_N}; frames to predict V_p = {x_{N+1}, x_{N+2}, ..., x_{N+M}}
Initialize ConvGRU_θ, f_θ, ϵ_θ
1: repeat
2: Initialize motion state: m_1 = 0
3: for i = 2, ..., N do
4: d_i = x_i − x_{i−1}
5: m_i = ConvGRU_θ(d_i, m_{i−1})
6: end for
7: {k_1, ..., k_P} = Sort(RandomChoice({1, ..., M}))
8: {t_0, t_1, ..., t_P} = {N, N + k_1, ..., N + k_P}
9: for i = 1, ..., P do
10: m_{t_i} = SDESolver(f_θ, m_{t_{i−1}}, (t_{i−1}, t_i))
11: l ∼ Uniform(0, L), ϵ ∼ N(0, I)
12: x^l_{t_i} = γ_l x^0_{t_i} + σ_l ϵ
13: Take a gradient descent step on
14: ∇_θ ∥ϵ − ϵ_θ(x^l_{t_i}, x^0_{t_{i−1}}, m_{t_i}, l)∥²
15: end for
16: until convergence

The inference process of the recurrent diffusion predictor is almost the same as the training process, with the distinction that the model generates the previous clean frame x^0_{t_{i−1}} at the previous time step and subsequently feeds it back into the UNet input for forecasting the next frame.

Experiments
We evaluated the performance of the proposed STDiff model on the KITTI (Geiger et al. 2013), Cityscapes (Cordts et al. 2016), KTH (Schuldt, Laptev, and Caputo 2004), BAIR (Ebert et al. 2017) and stochastic moving MNIST (SMMNIST) (Denton and Fergus 2018) datasets. KITTI and Cityscapes are high-resolution traffic video datasets used to evaluate our model's real-world application capabilities. The KTH dataset consists of different human motion videos, while BAIR includes videos of a randomly moving robot arm that pushes different objects. BAIR and SMMNIST pose greater challenges due to their higher levels of stochasticity compared to the three others. The number of past frames and future frames to predict is determined according to the experimental protocols of previous works (see the appendix in the arXiv version of this paper for more details (Ye and Bilodeau 2023a)). All models are trained with 4 NVIDIA V100 Volta GPUs (32 GB memory).

Figure 3: Prediction examples on the Cityscapes dataset for MCVD (Voleti et al. 2022), NPVP (Ye and Bilodeau 2023b), and our model. Gray frames indicate non-existent or unpredictable frames. MCVD exhibits issues with brightness changes and lacks the ability of continuous predictions. NPVP predictions are noticeably more blurry than the other two models.

Frame Prediction Performance
We first tested STDiff under the same protocol as previous work, i.e., reporting the best SSIM and LPIPS out of multiple random predictions, together with the FVD score. Note that we only sample 10 different random predictions for each test example, as is done for MCVD, instead of the 100 predictions used by all other previous stochastic models. The results are presented in Table 1 and Table 2. In Table 1, we observe that STDiff achieves a new SOTA FVD score on SMMNIST, and also the second best LPIPS score. For BAIR, STDiff also outperforms all previous work in terms of FVD score. The results show that the random predictions of STDiff have better temporal coherence and match the ground-truth distribution much better, i.e., the predictions are more natural and plausible. The datasets presented in Table 2 contain less stochas-
Models KITTI, 4 →5 FVD ↓SSIM↑LPIPS↓ PredNet (Lotter, Kreiman, and Cox 2017) 0.48 629.5 Voxel Flow (Liu et al. 2017) 0.43 415.9 SADM (Bei, Yang, and Soatto 2021) 0.65 311.6 Wu et al.
(Wu, Wen, and Chen 2022) 0.61 263.5 DMVFN (Hu et al. 2023) 0.71 260.5 NPVP (Ye and Bilodeau 2023b) 134.69 0.66 279.0 STDiff (ours) 51.39 0.54 114.6 Models Cityscapes, 2 →28 FVD↓SSIM↑LPIPS↓ SVG-LP (Denton and Fergus 2018) 1300.26 0.574 549.0 VRNN 1L(Castrejon et al. 2019) 682.08 0.609 304.0 Hier-VRNN (Castrejon et al. 2019) 567.51 0.628 264.0 GHVAEs (Wu et al. 2021) 418.00 0.740 194.0 MCVD-concat (Voleti et al. 2022) 141.31 0.690 112.0 NPVP (Ye and Bilodeau 2023b) 768.04 0.744 183.2 STDiff (ours) 107.31 0.658 136.26 Table 2: VFP results on KITTI (left) and Cityscapes (right). Boldface: best results. Underlined: second best results. ticity compared with SMMNIST and BAIR. For the KITTI dataset, STDiff outperforms previous work with a significant margin in terms of LPIPS and FVD. STDiff also achieves the best FVD score and second-best LPIPS score on the Cityscapes dataset. Some prediction examples of Cityscapes dataset are shown in Figure 3. Our model avoids the brightness changing issue seen in MCVD, and our predictions are sharper and more realistic compared to the NPVP model. In general, the evaluation results show that STDiff has a better performance than previous deterministic and stochastic models in terms of either FVD or LPIPS. It is desirable because it is well known that the quality assessments produced by FVD and LPIPS are more aligned with human assessment than SSIM or PSNR (Unterthiner et al. 2019; Zhang et al. 2018). Stochastic Video Prediction Diversity In Wang et al. (2022), generated images diversity is evaluated as LPIPS distance between different generated images. A greater LPIPS value between two generated images indicates increased dissimilarity in terms of content and structure. Likewise, we quantify the video diversity as average frame-by-frame LPIPS between different predicted video clips given the same past frames. To distinguish it from the standard LPIPS score between predictions and ground-truth, this diversity measure is termed as inter-LPIPS (iLPIPS). The iLPIPS results are listed in Table 3. All the iLPIPS values are reported at a 10−3 scale. Since iLPIPS does not incorporate the ground-truth, we use the Frechet Video Distance (FVD) as a supplementary metric. FVD ensures that randomly generated predictions exhibit satisfactory visual quality and that their distribution closely approximates the ground-truth distribution, i.e., the generated random predictions are plausible. For all methods, we sampled 10 different random predictions for each test example to calculate the iLPIPS score. For KTH, we evaluated on the whole test set. For SMMNIST, we evaluated the results on 256 different test examples, similarly to MCVD (Voleti et al. 2022). Besides, we take the same number of reverse diffusion sampling steps as MCVD, which is 100. Increasing the diffusion sampling steps would further improve the performance. NPVP (Ye and Bilodeau 2023b) and Hier-VRNN (Castrejon et al. 2019) are trained based on their official code and MCVD (Voleti et al. 2022) is tested with their officially released trained models. Comparing with the neural process-based NPVP method, GT NPVP Sample 1 0 1 2 6 10 14 18 22 26 NPVP Sample 2 Ours Sample 1 Ours Sample 2 Figure 4: Random predictions for BAIR dataset by our model and NPVP (Ye and Bilodeau 2023b). The iLPIPS score of our STDiff model is more than 50 times bigger on KTH and twice bigger on SMMNIST. NPVP has a SOTA frame-by-frame SSIM or LPIPS performance, but the generated future frames lack diversity as assessed by iLPIPS. 
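As a concrete reference for the diversity measure above: iLPIPS is the frame-by-frame LPIPS averaged over all pairs of stochastic predictions generated from the same past frames. The sketch below assumes the off-the-shelf lpips package; the exact pairing and averaging conventions of the evaluation script are not specified here, so treat it as an illustration rather than the official metric code.

```python
import itertools
import torch
import lpips  # pip install lpips

loss_fn = lpips.LPIPS(net='alex')  # perceptual distance with an AlexNet backbone

@torch.no_grad()
def ilpips(predictions):
    """predictions: tensor (S, T, 3, H, W) holding S stochastic predictions of the same clip,
    with pixel values scaled to [-1, 1] as expected by lpips. Returns the mean pairwise LPIPS."""
    S = predictions.shape[0]
    scores = []
    for a, b in itertools.combinations(range(S), 2):   # every unordered pair of samples
        d = loss_fn(predictions[a], predictions[b])    # frame-by-frame distances, shape (T, 1, 1, 1)
        scores.append(d.mean())
    return torch.stack(scores).mean()

# Example: 10 random predictions of a 20-frame clip.
preds = torch.rand(10, 20, 3, 64, 64) * 2 - 1
print(ilpips(preds))
```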
A visual examination of the results of NPVP validates this. We believe this is explained by the fact that NPVP only uses a single layer of latent variable for the VAE and uses a single global latent variable to account for the randomness of the whole sequence. Thus, NPVP has a very limited expressiveness for stochastic modeling. Figure 4 presents several uncurated predicted examples, demonstrating the better plausibility and greater diversity of our predictions compared to NPVP. Notably, NPVP tends to move the robot arm outside the field of view to minimize MSE. We also compare our model with Hier-VRNN that has 10 layers of latent variables. STDiff outperforms Hier-VRNN by a large margin on both datasets, despite Hier-VRNN having a larger model size. Compared with the SOTA diffusionbased model MCVD, STDiff outperforms it on SMMNIST and obtains a comparable iLPIPS for KTH. STDiff accomplishes this with a smaller model that has 100M less parameters. We believe that MCVD is not as efficient as STDiff for stochastic prediction because it learns the randomness of a whole video clip at once, similarly to NPVP, which requires a bigger model and also ignores an explicit temporal stochasticity learning. These results indicate that the good performance of our proposed method is not only attributed The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6671 Models #Parameters KTH SMMNIST 10 →20 5 →10 iLPIPS ↑FVD ↓iLPIPS ↑FVD ↓ NPVP (Ye and Bilodeau 2023b) 122.68M 0.46 93.49 67.83 51.66 Hier-VRNN (Castrejon et al. 2019) 260.68M 1.22 278.83 6.72 22.65 MCVD(Voleti et al. 2022) 328.60M 26.04 93.38 123.93 21.53 STDiff-ODE 201.91M 15.60 90.68 91.07 46.62 STDiff (ours) 204.28M 25.08 89.67 136.27 14.93 Table 3: Stochastic video prediction diversity on KTH and SMMNIST datasets. A bigger iLPIPS denotes larger diversity, any deterministic prediction model has a iLPIPS of 0. Smaller FVD indicated better visual quality and more plausible predictions. Boldface: best results. Underlined: second best results. Models BAIR KITTI 2 →28, 2 × fps 4 →5, 2 × fps FVD↓SSIM↑LPIPS↓iLPIPS ↑FVD↓SSIM↑LPIPS↓iLPIPS ↑ Vid-ODE (Park et al. 2021) 2948.82 0.310 322.58 0 615.98 0.23 591.54 0 NPVP (Ye and Bilodeau 2023b) 1159.14 0.795 58.71 1.06 248.82 0.56 313.94 6.08 STDiff (ours) 122.85 0.808 75.37 109.33 78.39 0.47 160.92 136.94 Table 4: Continuous VFP results on BAIR and KITTI. Boldface: best results. Underlined: second best results. to the powerful image diffusion model, but also from our novel SDE-based recurrent motion predictor. In order to further investigate the effectiveness of our stochastic motion predictor, we implemented an ODE version of STDiff, which has the same architecture as STDiff, except that the motion predictor is ODE-based instead of SDE-based. We observe that STDiff has almost 1.5 times more diversity than STDiff-ODE on both datasets in terms of iLPIPS. Notably, our STDiff also achieves the best FVD on both datasets, highlighting that the prediction of STDiff, thanks to the use of an SDE, has large diversity and good visual quality simultaneously. We can draw the conclusion that our STDiff has a better stochasticity modeling performance than previous stochastic models. The comparison with NPVP and Hier-VRNN validates the motivation that increasing the layers of latent variable for Hierarchical VAE is beneficial. 
And the comparison with STDiff-ODE and MCVD experimentally proves our claim that explicitly temporal stochasticity learning is also critical for a better diversity in future predictions. Continuous Prediction We summarize the continuous prediction performance of three continuous prediction models in Table 4. MCVD is not included because it cannot conduct temporal continuous prediction. For this evaluation, we downsampled two datasets to 0.5 frame rate for training, then make the models predict with 2× frame rate during test. This way, we get access to ground-truth high-frame rate test videos for metric calculations. The stochastic NPVP and STDiff predict 10 different random trajectories for each test example. In addition to iLPIPS and FVD, we also use the traditional evaluation protocol to report the best SSIM and LPIPS scores out of 10 different random predictions. STDiff outperforms Vid-ODE by a large margin in terms of all metrics. In addition to be deterministic and not decomposing the motion and content as we do, the capacity of VidODE is too small. Indeed, performance of Vid-ODE on the original KTH dataset is not bad (Park et al. 2021), but downsampling largely increases the difficulty as the model needs to predict motion with a larger temporal gap. For the big, realistic and high resolution KITTI dataset, Vid-ODE fails to achieve reasonable performance mainly due to its limited model size. Examination of visual examples on BAIR shows that Vid-ODE cannot predict the stochastic motion of the robot arm and the image quality quickly degrades because of a large accumulated error. As shown in Table 4, for both datasets, STDiff achieves comparable or better performance in terms of SSIM and LPIPS compared to NPVP. However, we observe a significant performance gap in terms of FVD and iLPIPS with STDiff performing much better, especially for the dataset with more stochasticity, i.e., BAIR. This indicates that the capacity of randomness modeling also influences the performance of continuous prediction. Continuous prediction example on Cityscapes are shown in Figure 3, STDiff is able to predict frames at non-existent training dataset coordinates (e.g., 3.5, 6.5, and 9.5). Remarkably, STDiff holds the theoretical potential to predict videos at arbitrary frame rates. Conclusion In this paper, we propose a novel temporal continuous stochastic video prediction model. Specifically, we model both the spatial and temporal generative process as SDEs by integrating a SDE-based temporal motion predictor with a recurrent diffusion predictor, which greatly increases the stochastic expressiveness and also enables temporal continuous prediction. In this way, our model is able to predict future frames with an arbitrary frame rate and greater diversity. Our model reaches the SOTA in terms of FVD, LPIPS, and iLPIPS on multiple datasets. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6672 Acknowledgements We thank FRQ-NT and REPARTI for the support of this research via the strategic cluster program. References Babaeizadeh, M.; Finn, C.; Erhan, D.; Campbell, R.; and Levine, S. 2018. Stochastic variational video prediction. In ICLR. Babaeizadeh, M.; Saffar, M. T.; Nair, S.; Levine, S.; Finn, C.; and Erhan, D. 2021. FitVid: Overfitting in Pixel-Level Video Prediction. ArXiv:2106.13195 [cs]. Bei, X.; Yang, Y.; and Soatto, S. 2021. Learning SemanticAware Dynamics for Video Prediction. In CVPR. Castrejon, L.; Ballas, N.; Courville; and A. 2019. Improved Conditional VRNNs for Video Prediction. 
In ICCV. Chatterjee, M.; Ahuja, N.; and Cherian, A. 2021. A Hierarchical Variational Neural Uncertainty Model for Stochastic Video Prediction. 9751–9761. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; and Schiele, B. 2016. The Cityscapes Dataset for Semantic Urban Scene Understanding. 3213–3223. Denton, E.; and Fergus, R. 2018. Stochastic Video Generation with a Learned Prior. In ICML. Ebert, F.; Finn, C.; Lee, A. X.; and Levine, S. 2017. Selfsupervised visual planning with temporal skip connections. In CoRL. Finn, C.; Goodfellow, I.; and Levine, S. 2016. Unsupervised learning for physical interaction through video prediction. In NIPS, 64–72. Geiger, A.; Lenz, P.; Stiller, C.; and Urtasun, R. 2013. Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research, 32(11): 1231–1237. Harvey, W.; Naderiparizi, S.; Masrani, V.; Weilbach, C.; and Wood, F. 2022. Flexible Diffusion Modeling of Long Videos. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising Diffusion Probabilistic Models. In NeurIPS, volume 33, 6840–6851. Hu, X.; Huang, Z.; Huang, A.; Xu, J.; and Zhou, S. 2023. A Dynamic Multi-Scale Voxel Flow Network for Video Prediction. 6121–6131. Huang, C.-W.; Lim, J. H.; and Courville, A. C. 2021. A Variational Perspective on Diffusion-Based Generative Models and Score Matching. In Advances in Neural Information Processing Systems, volume 34, 22863–22876. Curran Associates, Inc. H¨oppe, T.; Mehrjou, A.; Bauer, S.; Nielsen, D.; and Dittadi, A. 2022. Diffusion Models for Video Prediction and Infilling. Jin, B.; Hu, Y.; Tang, Q.; Niu, J.; Shi, Z.; Han, Y.; and Li, X. 2020. Exploring Spatial-Temporal Multi-Frequency Analysis for High-Fidelity and Temporal-Consistency Video Prediction. In CVPR. Kidger, P.; Foster, J.; Li, X.; and Lyons, T. J. 2021. Neural SDEs as Infinite-Dimensional GANs. In Proceedings of the 38th International Conference on Machine Learning, 5453– 5463. PMLR. ISSN: 2640-3498. Lee, A. X.; Zhang, R.; Ebert, F.; Abbeel, P.; Finn, C.; and Levine, S. 2018. Stochastic Adversarial Video Prediction. Li, X.; Wong, T.-K. L.; Chen, R. T. Q.; and Duvenaud, D. 2020. Scalable Gradients for Stochastic Differential Equations. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, 3870–3882. PMLR. ISSN: 2640-3498. Liu, Z.; Yeh, R. A.; Tang, X.; Liu, Y.; and Agarwala, A. 2017. Video Frame Synthesis Using Deep Voxel Flow. In International Conference on Computer Vision (ICCV), 4473– 4481. Lotter, W.; Kreiman, G.; and Cox, D. 2017. Deep predictive coding networks for video prediction and unsupervised learning. In ICLR, 1–18. Lu, Y.; Kumar, K. M.; Nabavi, S. s.; and Wang, Y. 2019. Future Frame Prediction Using Convolutional VRNN for Anomaly Detection. In AVSS. Nikankin, Y.; Haim, N.; and Irani, M. 2022. SinFusion: Training Diffusion Models on a Single Image or Video. ArXiv:2211.11743 [cs]. Park, S.; Kim, K.; Lee, J.; Choo, J.; Lee, J.; Kim, S.; and Choi, E. 2021. Vid-ODE: Continuous-Time Video Generation with Neural Ordinary Differential Equation. In AAAI. Park, T.; Liu, M.-Y.; Wang, T.-C.; and Zhu, J.-Y. 2019. Semantic Image Synthesis With Spatially-Adaptive Normalization. 2337–2346. Rasul, K.; Seward, C.; Schuster, I.; and Vollgraf, R. 2021. Autoregressive Denoising Diffusion Models for Multivariate Probabilistic Time Series Forecasting. In Proceedings of the 38th International Conference on Machine Learning, 8857– 8868. PMLR. ISSN: 2640-3498. 
Schuldt, C.; Laptev, I.; and Caputo, B. 2004. Recognizing human actions: a local SVM approach. In ICPR. Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; and Ganguli, S. 2015. Deep Unsupervised Learning using Nonequilibrium Thermodynamics. In Proceedings of the 32nd International Conference on Machine Learning, 2256– 2265. PMLR. ISSN: 1938-7228. Song, Y.; Sohl-Dickstein, J.; Kingma, D. P.; Kumar, A.; Ermon, S.; and Poole, B. 2021. Score-Based Generative Modeling through Stochastic Differential Equations. Tzen, B.; and Raginsky, M. 2019. Neural Stochastic Differential Equations: Deep Latent Gaussian Models in the Diffusion Limit. ArXiv:1905.09883 [cs, stat]. Unterthiner, T.; Steenkiste, S. v.; Kurach, K.; Marinier, R.; Michalski, M.; and Gelly, S. 2019. FVD: A new Metric for Video Generation. In ICLR Workshop. Villegas, R.; Yang, J.; Hong, S.; Lin, X.; and Lee, H. 2017. Decomposing motion and content for natural video sequence prediction. In ICLR. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6673 Voleti, V.; Jolicoeur-Martineau, A.; Pal; and Christopher. 2022. Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation. In Advances in Neural Information Processing Systems. Wang, W.; Bao, J.; Zhou, W.; Chen, D.; Chen, D.; Yuan, L.; and Li, H. 2022. SinDiffusion: Learning a Diffusion Model from a Single Natural Image. ArXiv:2211.12445 [cs]. Wu, B.; Nair, S.; Martin-Martin, R.; Fei-Fei, L.; and Finn, C. 2021. Greedy Hierarchical Variational Autoencoders for Large-Scale Video Prediction. In CVPR, 2318–2328. Wu, Y.; Wen, Q.; and Chen, Q. 2022. Optimizing Video Prediction via Video Frame Interpolation. 17814–17823. Yang, R.; Srivastava, P.; and Mandt, S. 2022. Diffusion Probabilistic Modeling for Video Generation. ArXiv:2203.09481 [cs, stat]. Ye, X.; and Bilodeau, G.-A. 2022. VPTR: Efficient Transformers for Video Prediction. In ICPR. Ye, X.; and Bilodeau, G.-A. 2023a. STDiff: Spatiotemporal Diffusion for Continuous Stochastic Video Prediction. arXiv:2312.06486. Ye, X.; and Bilodeau, G.-A. 2023b. A Unified Model for Continuous Conditional Video Prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 3603–3612. Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 586–595. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6674 | 2024 | 741 |
18,564 | DiffusionEdge: Diffusion Probabilistic Model for Crisp Edge Detection Yunfan Ye1,2*, Kai Xu2*, Yuhang Huang2†, Renjiao Yi2, Zhiping Cai2 1Hunan University 2National University of Defense Technology Abstract Limited by the encoder-decoder architecture, learning-based edge detectors usually have difficulty predicting edge maps that satisfy both correctness and crispness. With the recent success of the diffusion probabilistic model (DPM), we found it is especially suitable for accurate and crisp edge detection since the denoising process is directly applied to the original image size. Therefore, we propose the first diffusion model for the task of general edge detection, which we call DiffusionEdge. To avoid expensive computational resources while retaining the final performance, we apply DPM in the latent space and enable the classic cross-entropy loss which is uncertainty-aware in pixel level to directly optimize the parameters in latent space in a distillation manner. We also adopt a decoupled architecture to speed up the denoising process and propose a corresponding adaptive Fourier filter to adjust the latent features of specific frequencies. With all the technical designs, DiffusionEdge can be stably trained with limited resources, predicting crisp and accurate edge maps with much fewer augmentation strategies. Extensive experiments on four edge detection benchmarks demonstrate the superiority of DiffusionEdge both in correctness and crispness. On the NYUDv2 dataset, compared to the second best, we increase the ODS, OIS (without post-processing) and AC by 30.2%, 28.1% and 65.1%, respectively. Code: https://github.com/GuHuangAI/DiffusionEdge. Introduction Edge detection is a longstanding vision task for detecting object boundaries and visually salient edges from images. As a fundamental problem, it benefits various downstream tasks ranging from 2D perception (Zitnick and Doll´ar 2014; Revaud et al. 2015; Cheng et al. 2020), generation (Nazeri et al. 2019; Xiong et al. 2019), and 3D curve reconstruction (Ye et al. 2023b). There are three main challenges in general edge detection, correctness (identifying edge and non-edge pixels on noisy scenes), crispness (the width of edge lines, precisely localizing edges without confusing pixels) and efficiency (the inference speed). Traditional methods extract edges based on local features such as gradient (Kittler 1983; Canny 1986), *These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Encoder Condition Decoder GT Image UAED DiffusionEdge Gaussian Noise Figure 1: CNN-based methods, even the most recent and state-of-the-art one (UAED (Zhou et al. 2023)), generally have an encoder-decoder architecture with limitations of thick edges and more noise. We propose the diffusion-based edge detector which is superior in both correctness and crispness without any post-processing. which can be crisp but not correct enough. Deep learningbased methods (Xie and Tu 2015; Liu et al. 2017; He et al. 2019; Poma, Riba, and Sappa 2020; Pu et al. 2022; Zhou et al. 2023) achieve significant progress by capturing local and global features with multi-layers, which is correct but not crisp enough. Recently, efforts have also been made to design lightweight architectures (Su et al. 2021) for efficiency, or loss functions (Deng et al. 2018; Huan et al. 2021) and refinement strategies (Ye et al. 2023a) for crisp edge detection. 
However, none of each single edge detector can directly predict edge maps that simultaneously satisfy both correctness and crispness, without a post-processing of morphological non-maximal suppression (NMS) scheme. We ask this question: Can we learn an edge detector that can directly generate both accurate and crisp edge maps without heavily relying on post-processing? In this work, we try to answer the question through learning a diffusion model for edge detection. As demonstrated in Figure 1, DPMs have two main differences compared with methods based on the Convolutional Neural Network (CNN): (a) CNN-based models generally learn and infer the targets in a single round, while DPMs are trained to predict a denoised variant of the noisy input by several steps, which makes it easier for DPMs to learn the target distribution; (b) CNN-based edge detectors generally extract features from multi-layers and therefore are limited by the existence of downsampling (for high-level global features) and upsampling (for pixel-wise alignment) operators, which leads to The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6675 thick edge predictions in nature (Huan et al. 2021), while DPMs directly perform the denoising process on the level of original image size. With the two characteristics, we found diffusion model is especially suitable for accurate and crisp edge detection. However, there are still several challenges for DiffusionEdge to be accurate and crisp enough with limited computational resources and inference time. We apply a decoupled diffusion architecture similar to DDM (Huang et al. 2023) to speed up the inference, and propose an adaptive Fourier filter before decoupling, which enables the network weights to adjust the components of the specific frequencies adaptively. Following (Rombach et al. 2022), we also train the diffusion model in latent space to reduce computations. However, most CNN-based edge detectors are trained by the annotator-robust cross entropy loss (Liu et al. 2017) in image pixel level, which provides uncertainty information when training edge datasets labeled by several annotators like BSDS (Arbelaez et al. 2010). To keep that free and valuable uncertainty prior, we apply an uncertainty distillation strategy by directly passing the optimized gradients from pixel level to latent space level based on the chain rule. With the above efforts, extensive experiments on four edge detection benchmarks show that DiffusionEdge can directly generate accurate and crisp edge maps without any post-processing, and achieve superior qualitative and quantitative performance with much less augmentation strategies. On the NYUDv2 dataset (Silberman et al. 2012), compared to the second best, we increase the ODS, OIS (without postprocessing) and AC by 30.2%, 28.1% and 65.1%, respectively. Our contributions include: • A novel diffusion-based edge detector, named DiffusionEdge, which can predict accurate and crisp edge maps without post-processing. To our best knowledge, it is the first diffusion model toward edge detection. • Several technical designs to ensure learning a satisfactory diffusion model in latent space, while keeping the uncertainty prior and adaptively filtering latent features in Fourier space. • Superior performance on four edge detection benchmarks for both correctness and crispness. Related Work Edge detection. Edge detection aims to extract object boundaries and visually salient edges from natural images. 
Traditional edge detectors such as Sobel (Kittler 1983) and Canny (Canny 1986) generate edges from local gradients and therefore often suffer from noisy responses, since they lack global context. CNN-based methods began integrating features from multiple layers and improved the correctness of edge pixels by a large margin. HED (Xie and Tu 2015) proposed the first end-to-end edge detection architecture, and RCF (Liu et al. 2017) improved it by integrating more hierarchical features. BDCN (He et al. 2019) trains the edge detector with layer-specific supervision in a bi-directional cascade architecture. PiDiNet (Su et al. 2021) introduced pixel difference convolution into lightweight architectures designed for efficient edge detection. UAED (Zhou et al. 2023) measures the degree of ambiguity among the labels provided by multiple annotators to focus more on hard samples. Also, EDTER (Pu et al. 2022) proposed to detect global context and local cues with vision transformers in two stages. These learning-based methods achieve remarkable progress in correctness by integrating multi-layer features and uncertainty information. However, the generated edge maps are too thick for downstream tasks and rely heavily on post-processing. Although efforts toward crisp edge detection have been made on loss functions (Deng et al. 2018; Huan et al. 2021) and label refinement strategies (Ye et al. 2023a), we argue that the community still needs an edge detector that can directly satisfy both correctness and crispness without any post-processing.

Diffusion probabilistic model. Diffusion models (Sohl-Dickstein et al. 2015; Ho, Jain, and Abbeel 2020; Huang et al. 2023) are a class of generative models based on a Markov chain, which gradually recover data samples by learning the denoising process. Diffusion models demonstrate remarkable performance in the fields of computer vision (Nichol et al. 2021; Avrahami, Lischinski, and Fried 2022; Gu et al. 2022), natural language processing (Austin et al. 2021) and audio generation (Popov et al. 2021). Beyond these achievements in generative tasks, diffusion models also show great potential for perception tasks, such as image segmentation (Brempong et al. 2022; Wu et al. 2023) and object detection (Chen et al. 2022).

Inspired by the above pioneers (Xie and Tu 2015; Chen et al. 2022; Huang et al. 2023), our method differs in two main ways in order to directly generate accurate and crisp edge maps within acceptable inference time. First, we impose a learnable Fourier convolution module in the decoupled diffusion architecture to adaptively filter latent features in Fourier space depending on the target distribution. Second, to keep the pixel-level uncertainty prior from edge datasets with multiple annotators, we distill the gradients directly into the latent space for improved results and stabilized training. The proposed DiffusionEdge is, to the best of our knowledge, the first application of diffusion models to generic edge detection, and it is superior in both correctness and crispness.

Method
The overall framework of the proposed DiffusionEdge is illustrated in Figure 2. Inspired by previous works (Rombach et al. 2022; Wu et al. 2023; Huang et al. 2023), we train the diffusion model with a decoupled structure in latent space and take the input image as the extra condition. Based on the diffusion process introduced in the preliminaries, we introduce the adaptive FFT-filter for frequency parsing.
To keep pixel-level uncertainty from multiple annotators and reduce computational resources, we propose to directly optimize the latent space with the cross-entropy loss in a distillation manner.

Preliminaries

Current studies (Chen et al. 2022; Wu et al. 2023) have shown the great potential of DPMs in perception tasks; however, they suffer from prolonged sampling time. Inspired by (Huang et al. 2023), we adopt a decoupled diffusion model (DDM) to speed up the sampling process.

[Figure 2: The overall framework of the proposed DiffusionEdge. A fixed autoencoder maps edge ground truths between pixel space and latent space; the denoising U-Net, equipped with adaptive FFT-filters and conditioned on the input image through concatenation and cross-attention, predicts the latent code, which is further supervised via uncertainty distillation.]

The decoupled forward diffusion process is governed by the combination of the explicit transition probability and the standard Wiener process:

$$q(e_t \mid e_0) = \mathcal{N}\Big(e_0 + \int_0^t f_t\,dt,\; t\mathbf{I}\Big), \tag{1}$$

where $e_0$ and $e_t$ are the initial and noisy edges, and $f_t$ is the explicit transition function representing the opposite direction of the gradient of the edge. Following (Huang et al. 2023), we use the constant function as the default $f_t$. The corresponding reversed process is represented by:

$$q(e_{t-\Delta t} \mid e_t, e_0) = \mathcal{N}\Big(e_t + \int_t^{t-\Delta t} f_t\,dt - \frac{\Delta t}{\sqrt{t}}\mathbf{n},\; \frac{\Delta t(t-\Delta t)}{t}\mathbf{I}\Big), \tag{2}$$

where $\mathbf{n} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. To train the decoupled diffusion model, we need to supervise the data and noise components simultaneously; therefore, the training objective is parameterized by:

$$\min_{\theta}\; \mathbb{E}_{q(e_0)}\mathbb{E}_{q(\mathbf{n})}\big[\|f_\theta - f\|^2 + \|\mathbf{n}_\theta - \mathbf{n}\|^2\big], \tag{3}$$

where $\theta$ is the parameter of the denoising network. Since diffusion models take up too much computational cost in the original image space, we follow (Rombach et al. 2022) and transfer the training process into a latent space with 4× downsampled spatial size. As shown in Fig. 2, we first train an autoencoder that consists of an encoder for compressing the edge ground truth to a latent code and a decoder for recovering it from the latent code. Then, in the stage of training the denoising U-Net, we fix the weights of the autoencoder and train the denoising process in latent space. The process can be represented as:

$$f_\theta, \mathbf{n}_\theta = \mathrm{Net}_\theta(z_t, t), \qquad z_t = z_0 + \int_0^t f_t\,dt + \sqrt{t}\,\mathbf{n}, \tag{4}$$

where $\mathrm{Net}_\theta$ denotes the denoising U-Net, $z_0 = E(e_0)$ is the latent code compressed by the encoder of the autoencoder, and $t$ is the time step. We also incorporate several technical designs for edge detection, making it possible to obtain accurate and crisp predictions within acceptable inference time.

Adaptive FFT-filter

The denoising U-Net aims to decouple the noisy input $e_t$ into the denoised data $e_0$ and the noise component $\mathbf{n}$. Vanilla convolution layers are adopted as the decoupling operator to separate the denoised edge maps and the noise component from the noisy variable. However, convolution operators focus more on feature aggregation and do not adjust the components of specific frequencies. Therefore, we introduce a decoupling operator that can filter out different components adaptively. As shown in the left-top of Figure 2, we integrate the adaptive Fast Fourier Transform filter (Adaptive FFT-filter) into the denoising U-Net to filter out edge maps and noise components in the frequency domain.
Specifically, given the encoder feature $\mathbf{F} \in \mathbb{R}^{H\times W\times C}$, we first perform a 2D FFT along the spatial dimensions and represent the transformed feature as $\mathbf{F}_c = \mathcal{F}[\mathbf{F}]$, $\mathbf{F}_c \in \mathbb{C}^{H\times W\times C}$. Then, to learn an adaptive spectrum filter, we construct a learnable weight map $\mathbf{W} \in \mathbb{C}^{H\times W\times C}$ and multiply $\mathbf{W}$ with $\mathbf{F}_c$. The spectrum filter benefits the training since it can globally adjust specific frequencies, and the learned weights adapt to the different frequencies of the target distributions. With the useless components filtered out adaptively, we project the feature from the frequency domain back to the spatial domain by the Inverse Fast Fourier Transform (IFFT). Finally, we adopt a residual connection from $\mathbf{F}$ to avoid filtering useful information out. We can describe the above process by the following equation:

$$\mathbf{F}_o = \mathbf{F} + \mathcal{F}^{-1}[\mathbf{W} \circ \mathbf{F}_c], \tag{5}$$

where $\mathbf{F}_o$ is the output feature and $\circ$ represents the Hadamard product.

Uncertainty Distillation

Since the numbers of edge and non-edge pixels are highly imbalanced (the majority of pixels are non-edges), HED (Xie and Tu 2015) proposes to apply a weighted binary cross-entropy (WCE) loss for optimization, which is further improved by RCF (Liu et al. 2017) with an uncertainty prior from multiple annotators. With $E_i$ being the ground-truth edge probability of the $i$-th pixel, for the $i$-th pixel in the $j$-th edge map with value $p_i^j$, the uncertainty-aware WCE loss is calculated as:

$$l_i^j = \begin{cases} \alpha \cdot \log\big(1 - p_i^j\big), & \text{if } E_i = 0,\\ 0, & \text{if } 0 < E_i < \eta,\\ \beta \cdot \log p_i^j, & \text{otherwise,} \end{cases} \tag{6}$$

in which

$$\alpha = \lambda \cdot \frac{|E_+|}{|E_+| + |E_-|}, \qquad \beta = \frac{|E_-|}{|E_+| + |E_-|}, \tag{7}$$

where $\eta$ is the threshold used to decide uncertain edge pixels in the ground truths, and such ambiguous samples are ignored during subsequent optimization. $E_+$ and $E_-$ denote the numbers of edge and non-edge pixels in the ground-truth edge maps, and $\lambda$ is the weight for balancing $E_+$ and $E_-$. The final loss for each edge map is $L_{wce} = \sum_{i,j} l_i^j$. Ignoring ambiguous pixels during optimization avoids confusing the network and stabilizes the training process with improved performance.

However, it is almost impossible to apply the WCE loss to the latent space because of the misalignment in both numerical range and spatial size. In particular, the threshold $\eta$ (which generally ranges from 0 to 1) of the WCE loss is defined in image space, but the latent code follows a normal distribution and has a varying range. Moreover, the pixel-level uncertainty is hard to align with the encoded and down-sampled latent features of different sizes. Therefore, applying the cross-entropy loss directly to the latent code inevitably leads to incorrect uncertainty. On the other hand, one may choose to decode the latent code back to the image level and thus use the uncertainty-aware cross-entropy to directly supervise the predicted edge maps. Unfortunately, this implementation lets the backward gradient go through the redundant autoencoder, making it hard to feed back effective gradients. Besides, the additional gradient computation in the autoencoder leads to a huge GPU memory cost. As shown in Figure 3, we conduct two experiments to show the negative impact of feeding back the gradient through the autoencoder. We name the setting with the gradient passing through the autoencoder Baseline-A. As a comparison, we remove the WCE loss and just use Eq. 3 to supervise the latent code, which is named Baseline-B. The performance of Baseline-B is not satisfactory, and Baseline-A even performs worse with 1.5× more GPU memory.
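For reference, a minimal PyTorch-style sketch of the uncertainty-aware WCE loss in Eqs. (6)–(7) is given below, written as a positive penalty (negative log-likelihood); the tensor names and the exact reduction are our own reading of the formulation, not the released implementation.

```python
import torch

def wce_loss(pred, gt, eta=0.3, lam=1.1, eps=1e-6):
    """Uncertainty-aware weighted cross-entropy (Eqs. 6-7).

    pred: predicted edge probabilities in [0, 1]
    gt:   averaged annotator labels in [0, 1], same shape as pred
    Pixels with 0 < gt < eta are treated as ambiguous and ignored.
    """
    pos = gt >= eta                         # confident edge pixels
    neg = gt == 0                           # confident non-edge pixels
    n_pos, n_neg = pos.sum().float(), neg.sum().float()
    alpha = lam * n_pos / (n_pos + n_neg)   # weight on the non-edge branch, Eq. (7)
    beta = n_neg / (n_pos + n_neg)          # weight on the edge branch, Eq. (7)
    loss = torch.zeros_like(pred)
    loss[neg] = -alpha * torch.log(1.0 - pred[neg] + eps)
    loss[pos] = -beta * torch.log(pred[pos] + eps)
    return loss.sum()
```

With the default settings reported later (λ = 1.1, η = 0.3), this is the pixel-level objective that the uncertainty distillation below transfers to the latent space.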
To address this problem, we propose the uncertainty distillation loss, which can directly optimize the gradient on the latent space. The results of Baseline-A illustrate that feeding back the gradient through the redundant autoencoder leads to a huge GPU memory cost and hurts the performance, which inspires us to eliminate the gradient of the autoencoder, building on Baseline-B.

[Figure 3: Examples of the two baselines with accuracy and memory cost (Baseline-A: ODS 0.775, 15G GPU memory; Baseline-B: ODS 0.816, 10G GPU memory).]

Specifically, assuming the reconstructed latent code is $\hat{z}_0$, the decoder of the autoencoder is $D$, and the decoded edge is $e_D$, we consider the gradient of the WCE loss $L_{wce}$ by the chain rule:

$$\nabla_\theta L_{wce} = \frac{\partial L_{wce}}{\partial e_D} \cdot \frac{\partial e_D}{\partial \hat{z}_0} \cdot \frac{\partial \hat{z}_0}{\partial \theta}. \tag{8}$$

To remove the negative influence of the autoencoder, we skip the gradient through the autoencoder $\partial e_D / \partial \hat{z}_0$ and modify the gradient $\nabla_\theta L_{wce}$ as:

$$\nabla_\theta L_{wce} = \frac{\partial L_{wce}}{\partial e_D} \cdot \frac{\partial \hat{z}_0}{\partial \theta}. \tag{9}$$

This implementation reduces the computational cost greatly and allows the WCE loss to be applied to the latent code directly. In this way, with the time-variant loss weight $\sigma_t = (1-t)^2$, our final training objective is represented by:

$$\mathcal{L} = \|f_\theta - f\|^2 + \|\mathbf{n}_\theta - \mathbf{n}\|^2 + \sigma_t L_{wce}(e_D, e_0). \tag{10}$$

Experiments

Datasets

We conduct experiments on four popular edge detection datasets: BSDS (Arbelaez et al. 2010), NYUDv2 (Silberman et al. 2012), Multicue (Mély et al. 2016) and BIPED (Poma, Riba, and Sappa 2020). BSDS consists of 200, 100, and 200 images in the training, validation, and test sets, respectively. Each image has 4 to 9 annotators, and the final edge ground truth is computed by taking their average. NYUDv2 is built for indoor scene parsing and is also applied to edge detection evaluation. It contains 1449 densely annotated RGB-D images and is divided into 381 training, 414 validation and 654 testing images. Multicue consists of images from 100 challenging natural scenes, and each image is annotated by several people as well. We randomly split the 100 images into training and evaluation sets of 80 and 20 images, respectively. We repeat the process on Multicue-edge three times and average the scores as the final results. BIPED contains 250 annotated images of outdoor scenes and is split into a training set of 200 images and a testing set of 50 images. All images are carefully annotated at single-pixel width by experts in the computer vision field.

Previous methods generally augment the datasets with various strategies. For example, images in BSDS are augmented with flipping (2×), scaling (3×), and rotation (16×), leading to a training set that is 96× larger than the original version. The others are summarized in Table 1. However, our method trains on all datasets with only randomly cropped patches of 320×320. In BSDS, we apply random flipping and scaling. On the NYUDv2, Multicue and BIPED datasets, only random flipping is adopted.

Datasets | Augmentation strategies
BSDS | F (2×), S (3×), R (16×) = 96×
NYUD | F (2×), S (3×), R (4×) = 24×
Multicue | F (2×), C (3×), R (16×) = 96×
BIPED | F (2×), C (3×), R (16×), G (3×) = 288×
Table 1: Augmentation strategies adopted on the four edge detection benchmarks by previous methods. F: flipping, S: scaling, R: rotation, C: cropping, G: gamma correction.

Implementation Details

We implement our DiffusionEdge using PyTorch (Paszke et al. 2019). To train the autoencoder, we collect the edge labels from the training sets of all the datasets.
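For the denoising stage, one schematic training iteration consistent with Eqs. (4) and (10) could look like the sketch below. The module signatures, the constant transition function $f_t = -z_0$, and the way $\hat{z}_0$ is recovered are our own illustrative assumptions (the uncertainty-distilled gradient of Eq. (9) is only indicated in a comment); `wce_loss` refers to the earlier sketch.

```python
import torch
import torch.nn.functional as F

def train_step(unet, encoder, decoder, image, edge_gt, optimizer):
    """One schematic iteration of latent decoupled diffusion training (Eqs. (4), (10))."""
    z0 = encoder(edge_gt).detach()            # fixed autoencoder: latent code of the edge GT
    t = torch.rand(()).clamp(1e-4, 1.0)       # a single time step shared by the batch, for brevity
    n = torch.randn_like(z0)
    f = -z0                                   # constant transition (assumed), so int_0^t f dt = -t * z0
    zt = z0 + t * f + t.sqrt() * n            # forward process, Eq. (4)

    f_pred, n_pred = unet(zt, t, cond=image)  # decoupled data / noise predictions
    loss = F.mse_loss(f_pred, f) + F.mse_loss(n_pred, n)

    # Pixel-level uncertainty-aware WCE, weighted by sigma_t = (1 - t)^2 (Eq. (10)).
    # In the paper this gradient is additionally routed straight to the latent code (Eq. (9)).
    z0_hat = zt - t * f_pred - t.sqrt() * n_pred
    e_D = decoder(z0_hat)
    loss = loss + (1 - t) ** 2 * wce_loss(e_D.squeeze(1), edge_gt.squeeze(1))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

This is only meant to make the flow of Eq. (10) concrete; the exact schedules, conditioning, and optimizer follow the settings below.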
For training the denoising U-Net, we set the smallest time step to 0.0001. We train the models using AdamW optimizer with an attenuated learning rate (from 5e−5 to 5e−6) for 25k iterations, and each training takes up about 15 GPU hours. We employ the exponential moving average (EMA) to prevent unstable model performances during the training process. The balancing weight λ and the threshold η to identify uncertain edge pixels are set to 1.1 and 0.3, respectively, for all experiments. We train all datasets with randomly cropped patches of size 320×320 with batch size 16. We conduct inferences with slide 240×240 and take the average value under overlap areas. All the training is conducted on a single RTX 3090 GPU. When inferencing each single image on BSDS dataset, with the sampling Equation 2, it takes about 3.5GB GPU memory, 1.2 seconds for one-step sampling and 3.2 seconds for five steps on a 3080Ti GPU. Evaluation Metrics To evaluate the precision, recall, and F-score for general edge detection, the predicted edge map should be binarized by an optimal threshold. Following prior works, we compute the F-scores of Optimal Dataset Scale (ODS) and Optimal Image Scale (OIS). ODS employs a fixed threshold throughout the dataset, while OIS chooses an optimal threshold for each image. F-scores are computed by F = 2·P ·R P +R , where P denotes precision and R denotes recall. For ODS and OIS, the maximum allowed distances between corresponding pixels from predicted edges and ground truths are set to 0.011 for NYUD and 0.0075 for other datasets. To comprehensively evaluate the crispness of edge maps, following previous works (Huan et al. 2021; Ye et al. 2023a), we also report the Standard evaluation protocol (SEval), Crispness-emphasized evaluation protocol (CEval), and the Average Crispness (AC). SEval is calculated after applying a standard post-processing scheme containing an NMS step and a mathematical morphology operation to obtain thinner edge maps. CEval is calculated without any post-processing so that thick edge maps generally get lower precision with more false positive samples. The AC for each edge map is calculated as the ratio of the sum of pixel values after NMS, to the sum of pixel values before NMS, which ranges from 0 to 1. Larger AC means crisper edge maps. Ablation Study The effect of key components. We first conduct experiments to verify the impact of the Adaptive FFT-filter (AF) and Uncertainty Distillation (UD) strategy. The quantitative results are summarized in Table 2. We can observe that each single AF or UD can promote the performance, while UD is more critical since it plays an important role of optimizing the latent space with valuable uncertainty information. Considering that the AC varies very slightly, the combination of AF and UD achieves the best performance. AF UD ODS OIS AC × × 0.816 0.829 0.521 × √ 0.831 0.845 0.528 √ × 0.825 0.837 0.461 √ √ 0.834 0.848 0.476 Table 2: Ablation study of the effectiveness of the proposed Adaptive FFT-filter (AF) and Uncertainty Distillation (UD) in DiffusionEdge on BSDS dataset. All results are computed with a single scale input, and the same for others. The effect of backbones and diffusion steps. We study the impact of different backbones for the image (condition) encoder with ResNet101 (He et al. 2016), Effecientnetb7 (Tan and Le 2019) and Swin-B (Liu et al. 2021). Also, the number of iterating steps could be another key parameter in diffusion models. All the results are reported in Table 3. 
We can observe that the crispness varies slightly in all settings, revealing the superiority of DiffusionEdge for crisp edge detection. Swin performs better than other backbones, and we find the number of sampling steps (ranging from 1 to 50) brings litter difference (<0.4% in ODS and OIS) to the final results. Moreover, only one sample step can already achieve state-of-the-art performance. Since more steps mean more inference time, considering all the correctness, crispness and efficiency, we adopt step 5 as the standard setting for all experiments. Backbone ODS OIS AC ResNet 0.823 0.837 0.514 EffecientNet 0.829 0.840 0.508 Swin 0.834 0.848 0.476 Steps (with Swin) ODS OIS AC Step 1 0.833 0.844 0.453 Step 3 0.835 0.847 0.476 Step 5 0.834 0.848 0.476 Step 10 0.834 0.848 0.476 Step 20 0.833 0.847 0.475 Step 50 0.833 0.846 0.478 Table 3: The ablations about different backbones and the number of iterating steps for DiffusionEdge. Comparison with State-of-the-arts On BSDS. We compare our model with traditional detectors including Canny (Canny 1986), SE (Doll´ar and ZitThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6679 Image EDTER GT RCF Ours BDCN UAED HED Figure 4: Qualitative comparisons on BSDS dataset with previous state-of-the-arts. Edge maps generated by our DiffusionEdge are both accurate and crisp with less noise. Zoom-in is highly recommended to observe the details. nick 2014) and OEF (Hallman and Fowlkes 2015), CNNbased detectors including N4-Fields (Ganin and Lempitsky 2014), DeepContour (Shen et al. 2015), HFL (Bertasius, Shi, and Torresani 2015), CEDN (Yang et al. 2016), Deep Boundary (Kokkinos 2015), COB (Maninis et al. 2017), CED (Wang et al. 2018), AMH-Net (Xu et al. 2017), DCD (Liao et al. 2017), LPCB (Deng et al. 2018), HED (Xie and Tu 2015), RCF (Liu et al. 2017), BDCN (He et al. 2019), PiDiNet (Su et al. 2021), UAED (Zhou et al. 2023) and the transformer-based detector EDTER (Pu et al. 2022). The best results of all methods are taken from their publications. Methods SEval CEval AC ODS OIS ODS OIS Canny 0.611 0.676 SE 0.743 0.764 OEF 0.746 0.770 N4-Fields 0.753 0.769 DeepContour 0.757 0.776 HFL 0.767 0.788 CEDN 0.788 0.804 DeepBoundary 0.789 0.811 COB 0.793 0.820 CED 0.794 0.811 0.642 0.656 0.207 AMH-Net 0.798 0.829 DCD 0.799 0.817 LPCB 0.800 0.816 0.693 0.700 HED 0.788 0.808 0.588 0.608 0.215 RCF 0.798 0.815 0.585 0.604 0.189 BDCN 0.806 0.826 0.636 0.650 0.233 PiDiNet 0.789 0.803 0.578 0.587 0.202 EDTER 0.824 0.841 0.698 0.706 0.288 UAED 0.829 0.847 0.722 0.731 0.227 Ours 0.834 0.848 0.749 0.754 0.476 Table 4: Quantitative results on the BSDS dataset. For fair comparison, we only list the single-scale results generated by models trained with only BSDS data. Note that other methods are trained with augmented dataset (96×), while we train DiffusionEdge with only random flipping and scaling. Image EDTER GT Ours BDCN Figure 5: Qualitative comparisons on NYUDv2 dataset with two state-of-the-art CNN-based and transformer-based methods. Edge maps generated by DiffusionEdge are much crisper and cleaner with competitive performance. 
By observing the quantitative and qualitative results in Table 4 and Figure 4, several conclusions can be drawn: (a) The proposed method achieves the best results in all settings, especially the AC, which means edge maps generated by DiffusionEdge are much more crisper than other methods; (b) Generally, the performance drop between SEval and CEval is smaller with crisper edge maps (larger AC), it is reasonable that thick edge maps contain many ambiguous false positive edges around true positive ones, evaluating without any post-processing lead to very low precision and thus low F-scores of ODS and OIS; (c) Thanks to the adaptive FFT-filter and uncertainty distillation strategy, our qualitative results perform even better with much less noise and more semantically meaningful contours, especially in challenging scenarios with complicated background and texture. On NYUDv2. We conduct experiments on RGB images and compare DiffusionEdge with state-of-the-art methods including AMH-Net, LPCB, HED, RCF, BDCN, PiDiNet and EDTER. Quantitative and qualitative results are shown in Table 5 and Figure 5, respectively. Our method achieves comparable performance under SEval. However, edge maps generated by other methods are extremely thick with all ACs smaller than 0.2, leading to a significant performance The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6680 drop under CEval. Such thick edge maps may come from training with the possibly existing label offsets for CNNbased methods (Ye et al. 2023a). However, DiffusionEdge can directly learn to recover the single-width label and maintain the crispness with slight performance change without post-processing. Consequently, compared to the second best (EDTER), we increase the ODS, OIS of CEval and AC by a large margin of 30.2%, 28.1% and 65.1%, respectively. Methods SEval CEval AC ODS OIS ODS OIS AMH-Net 0.744 0.758 LPCB 0.739 0.754 HED 0.722 0.737 0.387 0.404 RCF 0.745 0.759 0.398 0.413 BDCN 0.748 0.762 0.426 0.450 0.162 PiDiNet 0.733 0.747 0.399 0.424 0.173 EDTER 0.774 0.789 0.430 0.457 0.195 Ours 0.761 0.766 0.732 0.738 0.846 Table 5: Quantitative comparisons on NYUDv2. All results are computed with a single scale input. Note that other methods are trained with augmented dataset (24×), while we train DiffusionEdge with only random flipping. On Multicue and BIPED. We further compare DiffusionEdge with HED, RCF, BDCN, DexiNed, PiDiNet, EDTER and UAED, on the datasets of Multicue-edge and BIPED, via the standard evaluation procedure. As shown in Table 6, our method is superior in both correctness and crispness. It is worth noting that our method achieves a high AC of 0.849 on the BIPED dataset, which means the edges are almost all single-width with no ambiguity, as demonstrated in Figure 6. Such a success reveals the great potential to directly adopt the predicted results of DiffusionEdge without any post-processing for downstream tasks. Methods Multicue dataset BIPED dataset ODS OIS AC ODS OIS AC Human 0.750 Multicue 0.830 HED 0.851 0.864 0.829 0.847 RCF 0.851 0.862 0.843 0.859 BDCN 0.891 0.898 0.839 0.854 DexiNed 0.872 0.881 0.274 0.859 0.867 0.295 PiDiNet 0.874 0.878 0.204 0.868 0.876 0.232 EDTER 0.894 0.900 0.196 0.893 0.898 0.26 UAED 0.895 0.902 0.211 Ours 0.904 0.909 0.462 0.899 0.901 0.849 Table 6: Quantitative comparisons on Multicue and BIPED. All results are computed with a single scale input. On Crispness. 
To further verify the superiority of DiffusionEdge for crisp edge detection, we compare the AC of our method and other strategies proposed for generating crisp edge maps. Here we apply the Dice loss (Deng et al. 2018) (“-D” in table), the tracing loss (Huan et al. 2021) (“-T” in table) and the Guided Label Refinement (Ye et al. 2023a) (“-R” in table) based on PiDiNet (Su et al. 2021). As shown Image GT Ours Figure 6: Qualitative examples on BIPED dataset. in Table 7, our DiffusionEdge achieves the best crispness in all cases compared with other methods. Although much efforts have been made for improving the crispness of CNNbased networks (PiDiNet here as an example), the crispness is still limited by the encoder-decoder architecture in nature. However, the diffusion-based edge detection scheme recovers edge maps directly on the original size and the predictions can be almost as crisp as the ground truths. Methods AC BSDS Multicue BIPED PiDiNet-D 0.306 0.208 0.34 PiDiNet-T 0.333 0.217 0.296 PiDiNet-R 0.424 0.424 0.512 Ours 0.476 0.462 0.849 Table 7: Comparisons of the average crispness (AC) on BSDS, Multicue and BIPED dataset with the backbone of PiDiNet. “-D”, “-T” and “-R” means training with dice loss, tracing loss and training with refined labels, respectively. Conclusions and Limitations In this paper, we introduce the first diffusion-based network for crisp edge detection. With several technical designs including the adaptive FFT-filter and uncertainty distillation strategy, our DiffusionEdge is able to directly generate accurate and crisp edge maps without any post-processing. Extensive experiments demonstrate the superiority of DiffusionEdge both quantitatively and qualitatively. The crispness is even satisfactory enough and shows the potential for benefiting subsequent tasks in an end-to-end manner. Limitations. The correctness and crispness of edge maps extracted by DiffusionEdge can be simultaneously qualified for downstream tasks. However, another one of the three challenges, the efficiency, remains an open problem. Improving the diffusion model for faster inference speed is still a promising future direction to explore. Acknowledgments This work is supported in part by the NSFC (62172155, 62072465, 62325221, 62132021, 62002375, 62002376), the National Key Research and Development Program of China (2018AAA0102200), the Natural Science Foundation of Hunan Province of China(2021RC3071, 2022RC1104, 2021JJ40696) and the NUDT Research Grants (ZK22-52). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6681 References Arbelaez, P.; Maire, M.; Fowlkes, C.; and Malik, J. 2010. Contour detection and hierarchical image segmentation. IEEE transactions on pattern analysis and machine intelligence, 33(5): 898–916. Austin, J.; Johnson, D. D.; Ho, J.; Tarlow, D.; and Van Den Berg, R. 2021. Structured denoising diffusion models in discrete state-spaces. Advances in Neural Information Processing Systems, 34: 17981–17993. Avrahami, O.; Lischinski, D.; and Fried, O. 2022. Blended diffusion for text-driven editing of natural images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18208–18218. Bertasius, G.; Shi, J.; and Torresani, L. 2015. High-for-low and low-for-high: Efficient boundary detection from deep object features and its applications to high-level vision. In Proceedings of the IEEE international conference on computer vision, 504–512. Brempong, E. A.; Kornblith, S.; Chen, T.; Parmar, N.; Minderer, M.; and Norouzi, M. 2022. 
Denoising pretraining for semantic segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4175–4186. Canny, J. 1986. A computational approach to edge detection. IEEE Transactions on pattern analysis and machine intelligence, (6): 679–698. Chen, S.; Sun, P.; Song, Y.; and Luo, P. 2022. Diffusiondet: Diffusion model for object detection. arXiv preprint arXiv:2211.09788. Cheng, T.; Wang, X.; Huang, L.; and Liu, W. 2020. Boundary-preserving mask r-cnn. In Computer Vision– ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIV 16, 660–676. Springer. Deng, R.; Shen, C.; Liu, S.; Wang, H.; and Liu, X. 2018. Learning to predict crisp boundaries. In Proceedings of the European conference on computer vision (ECCV), 562–578. Doll´ar, P.; and Zitnick, C. L. 2014. Fast edge detection using structured forests. IEEE transactions on pattern analysis and machine intelligence, 37(8): 1558–1570. Ganin, Y.; and Lempitsky, V. 2014. -fields: neural network nearest neighbor fields for image transforms. In Asian conference on computer vision, 536–551. Springer. Gu, S.; Chen, D.; Bao, J.; Wen, F.; Zhang, B.; Chen, D.; Yuan, L.; and Guo, B. 2022. Vector quantized diffusion model for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10696–10706. Hallman, S.; and Fowlkes, C. C. 2015. Oriented edge forests for boundary detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1732– 1740. He, J.; Zhang, S.; Yang, M.; Shan, Y.; and Huang, T. 2019. Bi-directional cascade network for perceptual edge detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 3828–3837. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33: 6840–6851. Huan, L.; Xue, N.; Zheng, X.; He, W.; Gong, J.; and Xia, G.-S. 2021. Unmixing convolutional features for crisp edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10): 6602–6609. Huang, Y.; Qin, Z.; Liu, X.; and Xu, K. 2023. Decoupled Diffusion Models with Explicit Transition Probability. arXiv preprint arXiv:2306.13720. Kittler, J. 1983. On the accuracy of the Sobel edge detector. Image and Vision Computing, 1(1): 37–42. Kokkinos, I. 2015. Pushing the boundaries of boundary detection using deep learning. arXiv preprint arXiv:1511.07386. Liao, Y.; Fu, S.; Lu, X.; Zhang, C.; and Tang, Z. 2017. Deeplearning-based object-level contour detection with CCG and CRF optimization. In 2017 IEEE International Conference on Multimedia and Expo (ICME), 859–864. IEEE. Liu, Y.; Cheng, M.-M.; Hu, X.; Wang, K.; and Bai, X. 2017. Richer convolutional features for edge detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3000–3009. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision, 10012–10022. Maninis, K.-K.; Pont-Tuset, J.; Arbel´aez, P.; and Van Gool, L. 2017. Convolutional oriented boundaries: From image segmentation to high-level tasks. 
IEEE transactions on pattern analysis and machine intelligence, 40(4): 819–833. M´ely, D. A.; Kim, J.; McGill, M.; Guo, Y.; and Serre, T. 2016. A systematic comparison between visual cues for boundary detection. Vision research, 120: 93–107. Nazeri, K.; Ng, E.; Joseph, T.; Qureshi, F. Z.; and Ebrahimi, M. 2019. Edgeconnect: Generative image inpainting with adversarial edge learning. arXiv preprint arXiv:1901.00212. Nichol, A.; Dhariwal, P.; Ramesh, A.; Shyam, P.; Mishkin, P.; McGrew, B.; Sutskever, I.; and Chen, M. 2021. Glide: Towards photorealistic image generation and editing with textguided diffusion models. arXiv preprint arXiv:2112.10741. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32. Poma, X. S.; Riba, E.; and Sappa, A. 2020. Dense extreme inception network: Towards a robust cnn model for edge detection. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, 1923–1932. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6682 Popov, V.; Vovk, I.; Gogoryan, V.; Sadekova, T.; and Kudinov, M. 2021. Grad-tts: A diffusion probabilistic model for text-to-speech. In International Conference on Machine Learning, 8599–8608. PMLR. Pu, M.; Huang, Y.; Liu, Y.; Guan, Q.; and Ling, H. 2022. Edter: Edge detection with transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 1402–1412. Revaud, J.; Weinzaepfel, P.; Harchaoui, Z.; and Schmid, C. 2015. Epicflow: Edge-preserving interpolation of correspondences for optical flow. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1164– 1172. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 10684– 10695. Shen, W.; Wang, X.; Wang, Y.; Bai, X.; and Zhang, Z. 2015. Deepcontour: A deep convolutional feature learned by positive-sharing loss for contour detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3982–3991. Silberman, N.; Hoiem, D.; Kohli, P.; and Fergus, R. 2012. Indoor segmentation and support inference from rgbd images. In Computer Vision–ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part V 12, 746–760. Springer. Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; and Ganguli, S. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning, 2256–2265. PMLR. Su, Z.; Liu, W.; Yu, Z.; Hu, D.; Liao, Q.; Tian, Q.; Pietik¨ainen, M.; and Liu, L. 2021. Pixel difference networks for efficient edge detection. In Proceedings of the IEEE/CVF international conference on computer vision, 5117–5127. Tan, M.; and Le, Q. 2019. Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning, 6105–6114. PMLR. Wang, Y.; Zhao, X.; Li, Y.; and Huang, K. 2018. Deep crisp boundaries: From boundaries to higher-level tasks. IEEE Transactions on Image Processing, 28(3): 1285–1298. Wu, J.; FU, R.; Fang, H.; Zhang, Y.; Yang, Y.; Xiong, H.; Liu, H.; and Xu, Y. 2023. MedSegDiff: Medical Image Segmentation with Diffusion Probabilistic Model. 
In Medical Imaging with Deep Learning. Xie, S.; and Tu, Z. 2015. Holistically-nested edge detection. In Proceedings of the IEEE international conference on computer vision, 1395–1403. Xiong, W.; Yu, J.; Lin, Z.; Yang, J.; Lu, X.; Barnes, C.; and Luo, J. 2019. Foreground-aware image inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5840–5848. Xu, D.; Ouyang, W.; Alameda-Pineda, X.; Ricci, E.; Wang, X.; and Sebe, N. 2017. Learning deep structured multi-scale features using attention-gated crfs for contour prediction. Advances in neural information processing systems, 30. Yang, J.; Price, B.; Cohen, S.; Lee, H.; and Yang, M.-H. 2016. Object contour detection with a fully convolutional encoder-decoder network. In Proceedings of the IEEE conference on computer vision and pattern recognition, 193– 202. Ye, Y.; Yi, R.; Gao, Z.; Cai, Z.; and Xu, K. 2023a. Delving into Crispness: Guided Label Refinement for Crisp Edge Detection. IEEE Transactions on Image Processing. Ye, Y.; Yi, R.; Gao, Z.; Zhu, C.; Cai, Z.; and Xu, K. 2023b. NEF: Neural Edge Fields for 3D Parametric Curve Reconstruction from Multi-view Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8486–8495. Zhou, C.; Huang, Y.; Pu, M.; Guan, Q.; Huang, L.; and Ling, H. 2023. The Treasure Beneath Multiple Annotations: An Uncertainty-aware Edge Detector. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15507–15517. Zitnick, C. L.; and Doll´ar, P. 2014. Edge boxes: Locating object proposals from edges. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, 391–405. Springer. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6683 | 2024 | 742 |
18,565 | Dynamic Feature Pruning and Consolidation for Occluded Person Re-identification YuTeng Ye1, Hang Zhou1, Jiale Cai1, Chenxing Gao1, Youjia Zhang1, Junle Wang2, Qiang Hu3, Junqing Yu1, Wei Yang1* 1Huazhong University of Science and Technology, Wuhan, China 2Tencent 3Shanghai Jiao Tong University, Shanghai, China {yuteng ye, henrryzh, jaile cai, cxg, youjiazhang, yjqing, weiyangcs}@hust.edu.cn; [email protected]; [email protected] Abstract Occluded person re-identification (ReID) is a challenging problem due to contamination from occluders. Existing approaches address the issue with prior knowledge cues, such as human body key points and semantic segmentations, which easily fail in the presence of heavy occlusion and other humans as occluders. In this paper, we propose a feature pruning and consolidation (FPC) framework to circumvent explicit human structure parsing. The framework mainly consists of a sparse encoder, a multi-view feature mathcing module, and a feature consolidation decoder. Specifically, the sparse encoder drops less important image tokens, mostly related to background noise and occluders, solely based on correlation within the class token attention. Subsequently, the matching stage relies on the preserved tokens produced by the sparse encoder to identify k-nearest neighbors in the gallery by measuring the image and patch-level combined similarity. Finally, we use the feature consolidation module to compensate pruned features using identified neighbors for recovering essential information while disregarding disturbance from noise and occlusion. Experimental results demonstrate the effectiveness of our proposed framework on occluded, partial, and holistic Re-ID datasets. In particular, our method outperforms state-of-the-art results by at least 8.6% mAP and 6.0% Rank-1 accuracy on the challenging Occluded-Duke dataset. Introduction Person Re-Identification (ReID) refers to the process of retrieving the same person from a gallery set under nonoverlapping surveillance cameras (Chen et al. 2017; Ye et al. 2021), and has been making remarkable progress in tackling appearance change in deep learning era (Wu et al. 2019; Lavi, Serj, and Ullah 2018). However, the re-identification of occluded persons remains a challenging problem because of two reasons: 1. the inference from wrongly included occluder features and 2. the partial or full absence of essential target features. To tackle occlusions, many existing approaches explicitly recover human semantics, via human pose estimation (Miao et al. 2019; Gao et al. 2020; Wang et al. 2022a) or body segmentation (Huang, Chen, and Huang 2020), as extra supervision to guide the network *indicates corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. to focus on non-occluded features. Others (Yu et al. 2021; Xu et al. 2022) first partition input image into horizontal or vertical parts, and then identify the occlusion status of each part with off-the-shelf models (Mask-RCNN (He et al. 2017), HR-Net (Sun et al. 2019)), and finally recover occluded features from K-nearest neighbors in a gallery according to non-occluded features. Both strategies rely on extra modules for detecting occlusions and will easily fail in the presence of heavy occlusion and persons as occluders, and further background noise persists as the partition usually is very coarse. Inspired by the recent advances in sparse encoders (Rao et al. 2021; Liang et al. 
2022), we propose a feature pruning, matching, and consolidation (FPC) framework for Occluded Person Re-Identification which adaptively removes interference from occluders and background and consolidates the contaminated features. Firstly, we send the query image into a modified transformer encoder with token sparsification to drop interference tokens (usually related to occluders and background) while preserving attentive tokens. Different from extra cue-based approaches that rely on prior information about human semantics, the sparse encoder exploits correlation properties on attention maps and generalizes better to various occlusion situations. In addition, our sparse encoder removes interference from the background as an extra benefit. Then, we rank the full tokens in the gallery memory according to their similarity with the query image. We obtain the gallery memory containing [cls] token and patch tokens via pre-training a vision transformer encoder. The similarity metric for matching is defined as the linear combination of image-level cosine distance and patchlevel earth mover’s distance (Rubner, Tomasi, and Guibas 2000) for bridging the domain gap between the sparse query feature and holistic features in the gallery memory. At last, we select the k-nearest neighbors for each query and construct multi-view features by concatenating averaged [cls] tokens and respective patch tokens of both the query and its selected neighbors. We send the multi-view feature into a transformer decoder for feature consolidation. We compensate the occluded query feature with gallery neighbors and achieve better performance. During training, our method utilizes the entire training set as the gallery, a common practice in ReID literature (Xu et al. 2022; Yu et al. 2021). While for The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6684 inference, only the test set query and gallery images are used without any pre-filtering steps. Our FPC framework achieves state-of-the-art performance on both occluded person, partial person, and holistic person ReID datasets. In particular, FPC outperforms the state-of-the-art by 8.6% mAP and 6.0% Rank-1 accuracy on the challenging Occluded-Duke dataset. Our main contributions are as follows: • We introduce the token sparsification mechanism to the occluded person ReID problem, the first to the best of our knowledge, which avoids explicit use of human semantics and better prunes unrelated features. • We propose feature matching and consolidation modules to recover occluded query features from multiview gallery neighbors. Compared to feature division approaches (Yu et al. 2021; Xu et al. 2022), our module uses transformer tokens that naturally preserve connectivity and richness of the features. • We design a novel token based metric to measure the similarity of images by linearly combining the patch-level distance and the image-level distance. Related Work Our method is closely related to occluded person ReID and feature pruning in vision transformers. Occluded Person ReID. The task of Occluded Person ReID is to find the same person under different cameras while the target pedestrian is obscured. The current methods are mainly divided into two categories, i.e., extra-clues based methods and feature reconstruction based methods. The extra-clues based methods locate non-occluded areas of the human body by prior knowledge cues, e.g., human pose (Miao et al. 2019; Gao et al. 2020; Wang et al. 
2020a, 2022a; Miao, Wu, and Yang 2021) and human segments (Huang, Chen, and Huang 2020; Cai, Wang, and Cheng 2019). Another approach is based on feature reconstruction. Hou et al. (Hou et al. 2021) locate occluded human parts by key-points and propose Region Feature Completion (RFC) to recover the semantics of occluded regions in feature space. Xu et al. (Xu et al. 2022) extract occluded features with pose information and propose Feature Recovery Transformer (FRT) to recover the occluded features using the visible semantic features in the k-nearest gallery neighbors. In contrast, our approach adaptively removes interference from occluders and background according to correlation within the class token attention, which generalizes better to various occlusion situations. Transformer Sparsification. The techniques of accelerating vision transformer models are necessary with limited computing resources. Most methods mainly aim to simplify the structure of the model with efficient attention mechanisms (Wang et al. 2020b; Kitaev, Kaiser, and Levskaya 2020; Zhang et al. 2022; Zhu et al. 2021) and compact structures (Liu et al. 2021; Touvron et al. 2021; Wu et al. 2021; Graham et al. 2021). There are also many researches (Kong et al. 2022; Rao et al. 2021; Liang et al. 2022) focus on effective token learning to reduce the redundancy. However, the above approaches have not explored the possibility of applying the characteristics of model acceleration to the occluded person ReID problem. Actually, since there are various occlusion and background noise in occluded person ReID tasks, we observe that it is sub-optimal to apply the transformer model directly. Following (Liang et al. 2022), the purpose of our approach is to prune out ineffective tokens by exploring the sparsity of informative image patches. Method We illustrate our feature pruning and consolidation framework as illustrated in fig. 1. The framework consists of (1) sparse encoder S conducts token sparsification to prune interference tokens and preserve attentive tokens; (2) multiview feature matching module M generates a rank list between the sparse query feature and pre-trained gallery memory by the image and patch-level combined similarity; (3) feature consolidation framework C utilizes complete information of identified neighbors to compensate pruned query features. Sparse Encoder Inspired by the advances in feature pruning (Liang et al. 2022) and person ReID (He et al. 2021), we use a transformer with token sparsification to prune interference from occlusion and background. As shown in fig. 2, given an input image x ∈RH×W ×C, where W, H, C denote the width, height, and channel of the image respectively, we split x into N overlapping patches {p1, p2, · · · , pN} and embed each patch with linear projection denoted as f(·). Then we combined a learnable [cls] token with patch embedding and apply positional encoding and camera index encoding following (He et al. 2021). The final input can be described as: Z = {xcls, f (p1) , · · · , f (pN)} + P + Cid (1) where xcls ∈R1×D is learnable [cls] token. f(pi) ∈R1×D is i-th patch embeddings. P ∈R(N+1)×D is position embeddings and Cid ∈R(N+1)×D is camera index embeddings. Token Sparsification. We adopt the token sparsification strategy proposed in (Liang et al. 2022). Specifically, through the attention correlation (Vaswani et al. 
2017) between the [cls] token and the other tokens in the vision transformer, we can express the value of the [cls] token as:

$$x_{cls} = A_{cls}V = \mathrm{softmax}\Big(\frac{Q_{cls}K^{\top}}{\sqrt{d}}\Big)V, \tag{2}$$

where $A_{cls}$ denotes the attention matrix of the [cls] token, i.e., the first row of the attention matrix, and $\sqrt{d}$ is the scale factor. $Q_{cls}$, $K$, $V$ represent the query matrix of the [cls] token, the key matrix, and the value matrix, respectively. For multiple heads in the self-attention layer, we average the attention matrix as $\bar{A}_{cls} = \sum_{i=1}^{n} \frac{1}{n} A^{(i)}_{cls}$, where $n$ is the total number of heads. Since the [cls] token corresponds to larger attention values in significant patch regions (Caron et al. 2021), we can evaluate the importance of a token according to its relevance to the [cls] token.

[Figure 1: Overview of the proposed framework, which consists of a sparse encoder S, a multi-view feature matching module M, and a feature consolidation module C. The sparse encoder S removes interfering tokens while preserving attentive tokens. In the matching module M, we generate a rank list between the sparse query feature and holistic features in a gallery memory pre-trained with a vision transformer, using the summation of image-level cosine distance and patch-level earth mover's distance as the ranking metric. In the consolidation module C, we select the k-nearest neighbors for the query and construct multi-view features by concatenating averaged [cls] tokens and the respective patch tokens of both the query and its selected neighbors; the multi-view features are sent to the transformer decoder for feature consolidation.]

As $\bar{A}_{cls}$ represents the correlation between the [cls] token and all other tokens, we hence preserve the tokens with the K largest values in $\bar{A}_{cls}$ and drop the others, as shown in fig. 2. We define $K = \lceil \gamma \cdot N_c \rceil$, where $\gamma$ is the keep rate, $N_c$ is the total number of tokens in the current layer, and $\lceil\cdot\rceil$ is the ceiling operation. With token sparsification, the preserved tokens are mostly related to the region of the target pedestrian, and the dropped tokens are related to occluders or backgrounds.

Sparse Encoder Supervision Loss. We use the cross-entropy ID loss and the triplet loss to supervise the [cls] token $x^S_{cls}$ obtained by the sparse encoder:

$$\mathcal{L}_S = \mathcal{L}_{ID}(x^S_{cls}) + \mathcal{L}_T(x^S_{cls}), \tag{3}$$

where $\mathcal{L}_{ID}$ denotes the cross-entropy ID loss and $\mathcal{L}_T$ denotes the triplet loss. Compared with existing approaches, our feature pruning with token sparsification is adaptive, does not rely on prior knowledge of human semantics, and can better handle challenging scenarios such as heavy occlusion and other persons as occluders, as shown in fig. 6.

Multi-view Feature Matching Module

After the feature pruning, we would like to find the most related patches from other views that are not occluded for consolidation, a strategy that has proved effective (Xu et al. 2022; Yu et al. 2021). Specifically, we first learn a gallery memory with the pre-trained encoder. With the pruned query image features, we rank patches in the gallery memory according to their similarity with the query.
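Before defining the distances, a minimal PyTorch-style sketch of the token sparsification step described above is given for reference; the function signature and the way the [cls] attention maps are exposed are illustrative assumptions, not the released code.

```python
import math
import torch

def sparsify_tokens(tokens, cls_attn, keep_rate=0.8):
    """Keep the ceil(keep_rate * Nc) patch tokens with the largest head-averaged
    [cls]-attention and drop the rest (the [cls] token itself is always kept).

    tokens:   (B, 1 + Nc, D)  [cls] token followed by Nc patch tokens
    cls_attn: (B, heads, Nc)  attention from [cls] to each patch token
    """
    avg_attn = cls_attn.mean(dim=1)                   # head-averaged \bar{A}_cls, shape (B, Nc)
    num_keep = math.ceil(keep_rate * avg_attn.shape[1])
    idx = avg_attn.topk(num_keep, dim=1).indices      # indices of the attentive patches
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])
    kept_patches = torch.gather(tokens[:, 1:], 1, idx)
    return torch.cat([tokens[:, :1], kept_patches], dim=1)
```

The kept indices come from the head-averaged [cls] attention of the same layer, so no extra supervision or human parsing is required.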
Considering that there exists an appearance gap between the pruned query feature and the holistic features in the gallery memory, we measure the similarity at both the image level and the patch level. As shown in fig. 1, we linearly combine the image-level and patch-level distances to match the query image with the gallery memory images.

Image-level distance. We define the image-level distance via the cosine similarity between the [cls] tokens of the query and the gallery memory as follows:

$$D_{COS} = 1 - \frac{\langle x^{(i)}_{cls}, x^{(j)}_{cls} \rangle}{|x^{(i)}_{cls}| \cdot |x^{(j)}_{cls}|}, \tag{4}$$

where $x^{(i)}_{cls}$ and $x^{(j)}_{cls}$ are the [cls] tokens of the $i$-th query image and the $j$-th image in the gallery memory, and $\langle\cdot,\cdot\rangle$ is the dot product.

[Figure 2: The structure of our sparse encoder. We divide and embed the image tokens using linear projection, positional encoding, and camera ID encoding. We perform token sparsification in layers 3, 6, and 9.]

Patch-level distance. We leverage the Earth Mover's Distance (EMD) (Rubner, Tomasi, and Guibas 2000) to measure patch-level similarity. EMD is usually employed to measure the similarity between two multidimensional distributions and is formulated as the following linear programming problem. Let $Q = \{(q_1, w_{q_1}), \ldots, (q_m, w_{q_m})\}$ be the set of patch tokens from the query image, where $q_i$ is the $i$-th patch token and $w_{q_i}$ is the weight of $q_i$. Similarly, $G = \{(g_1, w_{g_1}), \ldots, (g_n, w_{g_n})\}$ represents the set of patch tokens of a gallery image. $w_{q_i}$ is further defined as the proportional correlation weight (Phan and Nguyen 2022) of $q_i$ to the set $\{g_1, g_2, \ldots, g_n\}$, denoted as $\max\big(0, \langle q_i, \frac{1}{n}\sum_{j=1}^{n} g_j \rangle\big)$, and $w_{g_j}$ equals $\max\big(0, \langle g_j, \frac{1}{m}\sum_{i=1}^{m} q_i \rangle\big)$. The ground distance $d_{ij}$ is the cosine distance between $q_i$ and $g_j$. The objective is to find the flow $F$, of which $f_{ij}$ indicates the flow between $q_i$ and $g_j$, that minimizes the following cost:

$$F^* = \arg\min_{F} \mathrm{COST}(Q, G, F) = \arg\min_{F} \sum_{i=1}^{m}\sum_{j=1}^{n} f_{ij} d_{ij}, \tag{5}$$

subject to the following constraints:

$$f_{ij} \ge 0, \quad 1 \le i \le m,\; 1 \le j \le n, \tag{6}$$

$$\sum_{i=1}^{m} f_{ij} \le w_{g_j}, \quad 1 \le j \le n, \tag{7}$$

$$\sum_{j=1}^{n} f_{ij} \le w_{q_i}, \quad 1 \le i \le m, \tag{8}$$

$$\sum_{i=1}^{m}\sum_{j=1}^{n} f_{ij} = \min\Big(\sum_{i=1}^{m} w_{q_i}, \sum_{j=1}^{n} w_{g_j}\Big). \tag{9}$$

Eq. (5) can be solved by the iterative Sinkhorn algorithm (Cuturi 2013) and produces the earth mover's distance $D_{EMD}$. Intuitively, $D_{COS}$ represents a global distance, while $D_{EMD}$ measures from the perspective of the set of local features. Naturally, we take the linear combination of the image-level cosine distance and the patch-level EMD as our final distance for feature matching, expressed as follows:

$$D = (1-\alpha) D_{COS} + \alpha D_{EMD}. \tag{10}$$

With $D$, we rank the sparse query feature against the holistic features in the gallery memory and generate a ranking list. In order to expedite the feature matching process, we use an efficient two-stage selection strategy that reduces the time cost by a factor of 138.

Feature Consolidation Decoder

After we retrieve the K nearest gallery neighbors, we consolidate the pruned query feature from the multi-view observations in the gallery memory. Specifically, we average the [cls] tokens of the query and the gallery neighbors as our initial global feature. Then we combine the averaged [cls] token with the patch tokens of both the query and the gallery neighbors to aggregate information from multi-view pedestrians. The consolidation of the multi-view features can be formulated as follows:

$$f_m = \big\{ \bar{f}_c,\; f_{p_q},\; f^{(1)}_{p_g},\; f^{(2)}_{p_g},\; \ldots,\; f^{(K)}_{p_g} \big\}, \tag{11}$$
where $\bar{f}_c \in \mathbb{R}^{1\times C}$ is the average of the [cls] tokens of the query and its neighbors, $f_{p_q} \in \mathbb{R}^{M\times C}$ denotes the patch tokens of the query with $M$ the number of patch tokens, and $f^{(i)}_{p_g} \in \mathbb{R}^{N\times C}$ denotes the patch tokens of the $i$-th gallery neighbor with $N$ the number of patch tokens for each gallery image.

Transformer Decoder. Although the [cls] token is the class prediction in a transformer (Dosovitskiy et al. 2020) and is conventionally treated as the global feature for image representation, the query [cls] token is usually contaminated in the occluded person ReID problem. We therefore send the consolidated multi-view feature to a transformer decoder to compensate the incomplete [cls] token with the gallery neighbors. In the decoder, we first transform the multi-view feature $f_m$ into $Q$, $K$, $V$ vectors using linear projections:

$$Q = W_q \cdot f_m, \qquad K = W_k \cdot f_m, \qquad V = W_v \cdot f_m, \tag{12}$$

where $W_q$, $W_k$, $W_v$ are the weights of the linear projections. As the multi-view feature contains three parts, i.e., the [cls] token, the patch tokens of the query, and the patch tokens of the gallery neighbors, the computation of the attention with respect to the [cls] token can be decomposed into these three parts, which is expressed as:

$$x_{cls} = \mathrm{Cat}(A'_c, A'_q, A'_g) \cdot \mathrm{Cat}(V_c, V_q, V_g) = A'_c \cdot V_c + A'_q \cdot V_q + A'_g \cdot V_g, \tag{13}$$

where $\mathrm{Cat}(\cdot)$ is the vector concatenation operation, $A'$ denotes the attention matrix of the [cls] token, and $V$ is the value vector. The subscripts $c$, $q$, $g$ correspond to the [cls] token, the query, and the gallery neighbors, respectively. $A'_c \cdot V_c + A'_q \cdot V_q$ acts as the feature learning process with the sparse query, and $A'_g \cdot V_g$ integrates completion information from the gallery neighbors into the [cls] token. The final consolidated [cls] token generated by the transformer decoder is denoted as $x^C_{cls} = \tau(f_m)$. We find that one transformer layer is enough, which also avoids the high memory consumption incurred by neighborhoods (Yu et al. 2021).

Consolidation Loss. The [cls] token $x^C_{cls}$ obtained by the feature consolidation decoder is supervised with the cross-entropy ID loss and the triplet loss, as expressed below:

$$\mathcal{L}_C = \mathcal{L}_{ID}(x^C_{cls}) + \mathcal{L}_T(x^C_{cls}). \tag{14}$$

Therefore, the final loss function can be expressed as:

$$\mathcal{L} = \mathcal{L}_S + \mathcal{L}_C. \tag{15}$$

Implementation Details

We choose ViT-B/16 as the backbone for both the sparse encoder and the gallery encoder. Our sparse encoder incorporates several modifications into the backbone, including camera encoding, patch overlapping with a stride of 11, batch normalization, and token sparsification at layers 3, 6, 9. Our gallery encoder has the same structure as the sparse encoder but does not conduct token sparsification. We construct the gallery memory with gallery image tokens as in eq. (1). In the multi-view feature matching module, we first identify 100 globally similar gallery neighbors using the cosine distance in eq. (4), and then search the final K nearest gallery neighbors using the proposed distance in eq. (10). We set K to 10 on the Occluded-Duke dataset and 5 for the others. In the training process, we resize all input images to 256 × 128. The training images are augmented with random horizontal flipping, padding, random cropping and random erasing (Zhong et al. 2020). The batch size is 64 with 4 images per ID, and the learning rate is initialized as 0.008 with cosine learning rate decay. The distance weight α in eq. (10) is 0.4 and the keep rate γ is 0.8. We take the consolidated [cls] token in eq. (13) for model inference.
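A sketch of the combined matching distance in eq. (10) under these settings (α = 0.4) is shown below. The entropy-regularized Sinkhorn iteration is a simplified stand-in for the solver, and normalizing the proportional correlation weights into distributions is our own simplification of the patch-level formulation.

```python
import torch
import torch.nn.functional as F

def combined_distance(q_cls, q_patch, g_cls, g_patch, alpha=0.4, iters=50, eps=0.05):
    """D = (1 - alpha) * D_cos + alpha * D_emd between one query and one gallery image.

    q_cls, g_cls:     (C,)   [cls] tokens
    q_patch, g_patch: (m, C) and (n, C) patch tokens (the query side is already pruned)
    """
    d_cos = 1.0 - F.cosine_similarity(q_cls, g_cls, dim=0)

    # proportional correlation weights and pairwise cosine ground distance
    w_q = torch.clamp(q_patch @ g_patch.mean(0), min=0) + 1e-6
    w_g = torch.clamp(g_patch @ q_patch.mean(0), min=0) + 1e-6
    qn = F.normalize(q_patch, dim=1)
    gn = F.normalize(g_patch, dim=1)
    cost = 1.0 - qn @ gn.t()                                   # (m, n) ground distances d_ij

    # entropy-regularized Sinkhorn as an approximate EMD solver
    mu, nu = w_q / w_q.sum(), w_g / w_g.sum()
    u = torch.ones_like(mu)
    Kmat = torch.exp(-cost / eps)
    for _ in range(iters):
        u = mu / (Kmat @ (nu / (Kmat.t() @ u)))
    v = nu / (Kmat.t() @ u)
    flow = u.unsqueeze(1) * Kmat * v.unsqueeze(0)              # approximate optimal flow f_ij
    d_emd = (flow * cost).sum()

    return (1 - alpha) * d_cos + alpha * d_emd
```

In the two-stage procedure, this full distance is only evaluated on the gallery candidates that survive the initial cosine filtering.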
For large-scale problems, we can further replace the full image tokens with [cls] tokens to save computation and memory. Experiment We conduct extensive experiments to validate our framework. We first introduce the dataset we use: Datasets and Evaluation Metric Occluded-Duke (Miao et al. 2019) includes 15,618 training images of 702 persons, 2,210 occluded query images of 519 persons, and 17,661 gallery images of 1,110 persons. Occluded-ReID (Zhuo et al. 2018) consists of 1,000 occluded query images and 1,000 full-body gallery images both belonging to 200 identities. Partial-ReID (Zheng et al. 2015b) involves 600 images from 60 persons, and each person consists 5 partial and 5 full-body images. Market1501 (Zheng et al. 2015a) contains 12,936 training images of 751 persons, 19,732 query images and 3,368 gallery images of 750 persons captured by six cameras. Evaluation Metirc. All methods are evaluated under the Cumulative Matching Characteristic (CMC) and mean Average Precision (mAP). Floating Point Operations (FLOPs) represents the amount of model computation. Occluded-Duke Occluded-ReID Method Rank-1 mAP Rank-1 mAP DSR 40.8 30.4 72.8 62.8 PGFA 51.4 37.3 HOReID 55.1 43.8 55.1 43.8 OAMN 62.6 46.1 PAT 64.5 53.6 81.6 72.1 TransReID 67.4 59.5 PFD 69.5 61.8 81.5 83.0 FED 67.9 56.3 87.0 79.4 RFCnet 63.9 54.5 Yu et al. 67.6 64.2 68.8 67.3 FRT 70.7 61.3 80.4 71.0 FPC (ours) 76.7 72.8 86.3 84.6 Table 1: Comparison with state-of-the-art methods on Occluded-Duke and Occluded-ReID datasets. Market-1501 Partial-ReID Method Rank-1 mAP Rank-1 mAP PCB 92.3 77.4 PGFA 91.2 76.8 69.0 61.5 HOReID 94.2 84.9 85.3 OAMN 92.3 79.8 86.0 PAT 95.4 88.0 88.0 TransReID 95.0 88.2 83.0 77.5 PFD 95.5 89.7 FED 95.0 86.3 84.6 82.3 RFCnet 95.2 89.2 Yu et al. 94.5 86.5 FRT 95.5 88.1 88.2 FPC (ours) 95.1 91.4 86.3 86.5 Table 2: Comparison with state-of-the-art methods on Market-1501 and Partial-ReID datasets. Comparison with State-of-the-art Methods Experimental results on Occluded ReID Datasets. In table 1, we compare FPC with state-of-the-art methods on two occluded ReID datasets (i.e., Occluded-Duke and OccludedReID). Methods in different categories are compared, including the partial ReID methods (He et al. 2018), keypoints based methods (Miao et al. 2019; Wang et al. 2020a, 2022a; Hou et al. 2021), data augmentation methods (Chen et al. 2021; Wang et al. 2020a, 2022b), transformer-based methods (Li et al. 2021; He et al. 2021), gallery-based reconstruction methods (Yu et al. 2021; Xu et al. 2022). In addition, TransReID (He et al. 2021) and PFD (Wang et al. 2022a) use camera information. Since no training set is provided for Occluded-ReID, we adopt Market-1501 as the training set the same as other methods to ensure the fairness of comparison. We can see that our FPC outperforms existing approaches and demonstrates the effectiveness for the occluded ReID tasks. Specifically, our FPC achieves the best performance on the challenging Occluded-Duke dataset, outperforming other methods by at least 6.0% Rank1 accuracy and 8.6% mAP. On the Occluded-ReID dataset, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6688 Index S M C Rank-1 mAP 1 ✘ ✘ ✘ 65.2 56.6 2 ✔ ✘ ✘ 68.6 60.1 3 ✔ ✘ ✔ 75.8 71.6 4 ✔ ✔ ✔ 76.7 72.8 Table 3: Ablation study on each component. FPC produces the highest mAP, outperforming the other methods by at least 1.6%. FPC achieves comparable results in Rank-1 accuracy with FED (Wang et al. 2022b), and much better than others. Experimental Results on Holistic and Partial ReID Datasets. 
Many existing occluded ReID methods can not be effectively applicable to holistic and partial ReID datasets. On the contrary, FPC achieves great performance improvement on holistic and partial datasets, i.e., Market-1501 and Partial-ReID. The results are shown in table 2. We compare FPC with three categories of methods, including the holistic ReID methods (Sun et al. 2018), the current leading methods in occluded ReID (Miao et al. 2019; Wang et al. 2020a; Chen et al. 2021; Li et al. 2021; He et al. 2021; Wang et al. 2022a,b), the feature reconstruction based methods in occluded ReID (Hou et al. 2021; Yu et al. 2021; Xu et al. 2022). On the Partial-ReID dataset, FPC achieves the best mAP result, lower than the PAT and FRT in Rank-1 accuracy. We consider that we adopt the ViT models and are more prone to overfit on small datasets which leads poor cross-domain generalization. We also observe that FPC achieves competitive Rank-1 accuracy and the highest mAP on the Market1501 dataset, which is at least 1.7% mAP ahead of other methods. The excellent performance on holistic and partial ReID datasets illustrates the robustness of our FPC. Ablation Study Analysis of proposed components. The results are shown in table 3. Index-1 is our baseline architecture. To assess the effectiveness of S, we conduct a comparison between index-1 and index-2. Our findings demonstrate that S leads to a 3.4% improvement in Rank-1 accuracy and a 3.5% increase in mAP over the baseline. This suggests that S has the potential to mitigate the interference of inattentive features (mainly related to occlusion and background noise). Furthermore, S can also expedite model inference, as elaborated in table 4. By comparing index-2 and index-3, C improves performance by 7.2% Rank-1 accuracy and 11.5% mAP, which indicates that C can effectively compensate the occluded query feature and bring huge performance improvements. By comparing index-3 and index-4, M can increase performance by 0.9% Rank-1 accuracy and 1.2% mAP, indicating that the proposed distance metric contributes to the find the accurate gallery neighbors. Analysis of Distance Weight α. The distance weight α defined in eq. (10) balances the importance between imagelevel and patch-level distance. From fig. 3a, as α goes from 0 to 1, the patch-level distance gradually takes effect. When α is 0.4, the combination of both achieves the best performance with 76.7% Rank-1 accuracy and 72.8% mAP. 0.0 0.2 0.4 0.6 0.8 1.0 Parameter 70 71 72 73 74 75 mAP 73 74 75 76 77 Rank-1 (a) 2 4 6 8 10 12 14 Parameter K 68 70 72 74 76 78 80 mAP 72 74 76 78 Rank-1 (b) Figure 3: Analysis of distance weight α and the number of nearest neighbors K on Occluded-Duke dataset. mAP and Rank-1 are respectively depicted in blue and orange colors. Input Random Drop Salient Drop Non-salient Drop Figure 4: Illustration of three different patch drop strategies. White patches are the dropped image parts. Analysis of Number K of Nearest Gallery Neighbors. We conduct quantitative experiments to find the most appropriate number K of nearest gallery neighbors. We conclude from fig. 3b that too small a choice of K lacks sufficient information for feature consolidation and too large a choice increases the risk of incorrectly selecting neighbors. When the value of K is 10, we achieve the optimal performance. Analysis of Keep Rate γ in Sparse Encoder. Here, keep rate γ reflects the number of preserved tokens in each layer of the sparse encoder. 
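Before turning to the quantitative effect, the keep-rate rule itself can be summarized in a short sketch: at a sparsification layer, retain the patches with the largest [cls]-attention values (the Non-salient Drop strategy analysed below). The head-averaging of the attention and the ceiling-based rounding in this sketch are assumptions, not details taken from the paper.

```python
import math
import torch

def nonsalient_drop(patch_tokens, cls_attn, keep_rate=0.8):
    """One sparsification step of the sparse encoder (Non-salient Drop sketch).

    patch_tokens: (N, C) patch tokens at the current layer ([cls] is kept aside)
    cls_attn:     (N,)   [cls]-to-patch attention, assumed averaged over heads
    Keeps the ceil(keep_rate * N) most attended patches; the rest are dropped.
    """
    n_keep = max(1, math.ceil(keep_rate * patch_tokens.shape[0]))
    idx = torch.topk(cls_attn, k=n_keep).indices        # most attended patches
    return patch_tokens[idx], idx

# toy usage: 128 patches, keep rate 0.8 -> 103 patches survive this layer.
# If each of layers 3, 6 and 9 keeps 80% of the remaining patches, roughly
# half of the original patches survive the whole encoder (0.8^3 is about 0.51).
tokens, attn = torch.randn(128, 768), torch.rand(128)
kept, idx = nonsalient_drop(tokens, attn, keep_rate=0.8)
print(kept.shape)   # torch.Size([103, 768])
```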
We find instructive observations from table 4: as the γ goes from 1.0 to 0.8, the FLOPs drops while the model performance improves. This suggests that a reasonable choice of γ can effectively filter out inattentive features, thus reducing the computational complexity and enhancing the model inference capability. When γ is 0.8, we achieve the best balance between computational complexity and experimental performance, which leads to an improvement of 0.5% mAP, 0.7% Rank-1 accuracy and 25% reduction in FLOPs. Analysis of Patch Drop Strategy in Sparse Encoder. We experiment with three variants of patch drop approaches to demonstrate the effectiveness of the proposed method. As shown in fig. 4, we preserve the same number of patches and choose different preservation methods: Random Drop means that K patches are randomly selected to be retained. Nonsalient Drop, as the proposed method, preserves the patches corresponding to the K largest values in the [cls] token attention. Conversely, Salient Drop preserves the K smallest ones. The performance of the three patch drop approaches with different keep rate is shown in fig. 5. We observe that the proposed Non-salient Drop method achieves best performance, indicating the importance of attentive features for The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6689 Metrics Keep Rate γ 1.0 0.9 0.8 0.7 0.6 0.5 Rank-1 (%) 76.0 76.6 (+0.6) 76.7 (+0.7) 75.9 (-0.1) 74.5 (-1.5) 72.1 (-3.9) mAP (%) 72.3 72.7 (+0.4) 72.8 (+0.5) 72.4 (+0.1) 71.3 (-1.0) 69.3 (-3.0) FLOPs (G) 20.8 18.1 (-13%) 15.7 (-25%) 13.6 (-33%) 11.9 (-43%) 10.4 (-50%) Table 4: Analysis on effectiveness of the keep rate γ. We perform FLOPs comparison on the sparse encoder. 1.0 0.9 0.8 0.7 0.6 0.5 Keep rate 20 40 60 80 mAP 1.0 0.9 0.8 0.7 0.6 0.5 Keep rate 20 40 60 80 Rank-1 Figure 5: The analysis of three different patch drop strategies. Blue, red, and orange colors represent the Non-salient Drop, Random Drop, and Salient Drop, respectively. feature matching and consolidation. In addition, the result of random drop test achieves relatively high performance, which reflects our proposed sparse encoder structure is robust against information loss. Analysis of speed and storage optimization. Our method relies on gallery information for feature consolidation, to address the practical concerns regarding speed and storage for real-world problems, we conduct analysis on our optimization strategies on the Occluded-Duke dataset. In S, token sparsification reduce 25% Flops (in table 4) with γ equals 0.8 and thus accelerates model inference. In M, we use a two-step matching procedure, i.e., use cosine similarity for initial filtering and compute EMD in the finer step, achieving a remarkable 138-fold reduction in computational costs, compared to the exhaustive search based on EMD. Our approach takes 0.43ms per image, while the full EMD calculation takes 60.0ms per image. Furthermore, if we replace full image tokens with [cls] token, our approach saves a lot memory (i.e., only takes 0.05G for the entire gallery set) and achieves 17x faster in nearest neighbor search time (i.e., from 0.43ms to 0.026ms per image), while still preserving an acceptable degree of precision (i.e., mAP: 72.8% to 70.2%, Rank-1: 76.7% to 74.3%, which still outperforms others). Comparisons with Re-ranking. An alternative approach is the re-ranking technique (Zhong et al. 2017), which also uses k-nearest gallery neighbors information. We compare C with re-ranking (Zhong et al. 
2017), the second group results in table 5 indicate that re-ranking fails to attain comparable performance of C, lead to a reduction of 3.2% Rank-1 and 0.3% mAP. Further, our M + C is approximately 17 times faster than re-ranking, with respective times of 0.47ms and 7.9ms per image. Moreover, the last group shows C and reranking can be jointly employed and outperforms others. Methods Rank-1 mAP FRT(Xu et al. 2022) 70.7 61.3 FRT(Xu et al. 2022) + re-ranking 70.8 65.0 Yu et al. (Yu et al. 2021) 67.6 64.2 Yu et al. (Yu et al. 2021) + re-ranking 68.9 67.3 S 68.6 60.1 S + re-ranking 72.6 71.3 S + C 75.8 71.6 FPC (ours) 76.7 72.8 FPC (ours) + re-ranking 78.6 78.3 Table 5: Performance comparsions with re-ranking on Occluded-Duke dataset. S is our sparse encoder and C is feature consolidation. Input layer 3 layer 6 layer 9 Input layer 3 layer 6 layer 9 Figure 6: Visualization of patch drop process in different layers of the sparse encoder. We show various occlusion scenarios such as object occlusion, pedestrian occlusion and heavy occlusions. Visualization Patch pruning. The patch-drop process in different layers of sparse encode is shown in fig. 6. We observe that with the increasing number of sparse encoder layers, more object occlusion, pedestrian occlusion and background noise are filtered out, while the essential classification information of target pedestrians is preserved. Conclusion In this paper, we propose the feature pruning and consolidation (FPC) framework for the occluded person ReID task. Specifically, our framework prunes interfering image tokens (mostly related to background noise and occluders) without relying on prior human shape information. We propose an effective way to consolidate the pruned feature and achieve the SOTA performance on many occluded, partial, and holistic ReID datasets. We introduce the token sparsification technique and demonstrate its effectiveness. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6690 Acknowledgments This work is supported by the National Natural Science Foundation of China (NSFC No. 62272184). The computation is completed in the HPC Platform of Huazhong University of Science and Technology. References Cai, H.; Wang, Z.; and Cheng, J. 2019. Multi-scale bodypart mask guided attention for person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 0–0. Caron, M.; Touvron, H.; Misra, I.; J´egou, H.; Mairal, J.; Bojanowski, P.; and Joulin, A. 2021. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9650–9660. Chen, P.; Liu, W.; Dai, P.; Liu, J.; Ye, Q.; Xu, M.; Chen, Q.; and Ji, R. 2021. Occlude them all: Occlusion-aware attention network for occluded person re-id. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 11833–11842. Chen, Y.-C.; Zhu, X.; Zheng, W.-S.; and Lai, J.-H. 2017. Person re-identification by camera correlation aware feature augmentation. IEEE transactions on pattern analysis and machine intelligence, 40(2): 392–408. Cuturi, M. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in neural information processing systems, 26. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. 
Gao, S.; Wang, J.; Lu, H.; and Liu, Z. 2020. Pose-guided visible part matching for occluded person reid. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11744–11752. Graham, B.; El-Nouby, A.; Touvron, H.; Stock, P.; Joulin, A.; J´egou, H.; and Douze, M. 2021. LeViT: a Vision Transformer in ConvNet’s Clothing for Faster Inference. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 12239–12249. He, K.; Gkioxari, G.; Doll´ar, P.; and Girshick, R. 2017. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, 2961–2969. He, L.; Liang, J.; Li, H.; and Sun, Z. 2018. Deep spatial feature reconstruction for partial person re-identification: Alignment-free approach. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7073– 7082. He, S.; Luo, H.; Wang, P.; Wang, F.; Li, H.; and Jiang, W. 2021. Transreid: Transformer-based object re-identification. In Proceedings of the IEEE/CVF international conference on computer vision, 15013–15022. Hou, R.; Ma, B.; Chang, H.; Gu, X.; Shan, S.; and Chen, X. 2021. Feature completion for occluded person reidentification. IEEE Transactions on Pattern Analysis and Machine Intelligence. Huang, H.; Chen, X.; and Huang, K. 2020. Human parsing based alignment with multi-task learning for occluded person re-identification. In 2020 IEEE International Conference on Multimedia and Expo (ICME), 1–6. IEEE. Kitaev, N.; Kaiser, L.; and Levskaya, A. 2020. Reformer: The Efficient Transformer. In 8th International Conference on Learning Representations (ICLR). Kong, Z.; Dong, P.; Ma, X.; Meng, X.; Niu, W.; Sun, M.; Shen, X.; Yuan, G.; Ren, B.; Tang, H.; Qin, M.; and Wang, Y. 2022. SPViT: Enabling Faster Vision Transformers via Latency-Aware Soft Token Pruning. In Proceedings of the European conference on computer vision (ECCV). Lavi, B.; Serj, M. F.; and Ullah, I. 2018. Survey on deep learning techniques for person re-identification task. arXiv preprint arXiv:1807.05284. Li, Y.; He, J.; Zhang, T.; Liu, X.; Zhang, Y.; and Wu, F. 2021. Diverse part discovery: Occluded person reidentification with part-aware transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2898–2907. Liang, Y.; Ge, C.; Tong, Z.; Song, Y.; Wang, J.; and Xie, P. 2022. Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations. In arXiv preprint arXiv:2202.07800. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 10012–10022. Miao, J.; Wu, Y.; Liu, P.; Ding, Y.; and Yang, Y. 2019. Pose-guided feature alignment for occluded person reidentification. In Proceedings of the IEEE/CVF international conference on computer vision, 542–551. Miao, J.; Wu, Y.; and Yang, Y. 2021. Identifying visible parts via pose estimation for occluded person re-identification. IEEE Transactions on Neural Networks and Learning Systems. Phan, H.; and Nguyen, A. 2022. DeepFace-EMD: ReRanking Using Patch-Wise Earth Mover’s Distance Improves Out-of-Distribution Face Identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 20259–20269. Rao, Y.; Zhao, Y.; Liu, B.; Lu, J.; Zhou, J.; and Hsieh, C.-J. 2021. Dynamicvit: Efficient vision transformers with dynamic token sparsification. 
In Advances in Neural Information Processing Systems (NeurIPS), 13937–13949. Rubner, Y.; Tomasi, C.; and Guibas, L. J. 2000. The earth mover’s distance as a metric for image retrieval. International journal of computer vision, 40(2): 99–121. Sun, K.; Xiao, B.; Liu, D.; and Wang, J. 2019. Deep highresolution representation learning for human pose estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5693–5703. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6691 Sun, Y.; Zheng, L.; Yang, Y.; Tian, Q.; and Wang, S. 2018. Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline). In Proceedings of the European conference on computer vision (ECCV), 480– 496. Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; and Jegou, J. 2021. Training data-efficient image transformers & distillation through attention. In Proceedings of the 38th International Conference on Machine Learning (ICML), 10347–10357. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30. Wang, G.; Yang, S.; Liu, H.; Wang, Z.; Yang, Y.; Wang, S.; Yu, G.; Zhou, E.; and Sun, J. 2020a. High-order information matters: Learning relation and topology for occluded person re-identification. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 6449– 6458. Wang, S.; Li, B.; Khabsa, M.; Fang, H.; and Ma, H. 2020b. Linformer: Self-Attention with Linear Complexity. In arXiv preprint arXiv:2006.04768. Wang, T.; Liu, H.; Song, P.; Guo, T.; and Shi, W. 2022a. Pose-guided feature disentangling for occluded person reidentification based on transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 2540–2549. Wang, Z.; Zhu, F.; Tang, S.; Zhao, R.; He, L.; and Song, J. 2022b. Feature Erasing and Diffusion Network for Occluded Person Re-Identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4754–4763. Wu, D.; Zheng, S.-J.; Zhang, X.-P.; Yuan, C.-A.; Cheng, F.; Zhao, Y.; Lin, Y.-J.; Zhao, Z.-Q.; Jiang, Y.-L.; and Huang, D.-S. 2019. Deep learning-based methods for person reidentification: A comprehensive review. Neurocomputing, 337: 354–371. Wu, H.; Xiao, B.; Codella, N.; Liu, M.; Xiyang, D.; Yuan, L.; and Zhang, L. 2021. CvT: Introducing Convolutions to Vision Transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 22–31. Xu, B.; He, L.; Liang, J.; and Sun, Z. 2022. Learning Feature Recovery Transformer for Occluded Person ReIdentification. IEEE Transactions on Image Processing, 31: 4651–4662. Ye, M.; Shen, J.; Lin, G.; Xiang, T.; Shao, L.; and Hoi, S. C. 2021. Deep learning for person re-identification: A survey and outlook. IEEE transactions on pattern analysis and machine intelligence, 44(6): 2872–2893. Yu, S.; Chen, D.; Zhao, R.; Chen, H.; and Qiao, Y. 2021. Neighbourhood-guided feature reconstruction for occluded person re-identification. arXiv preprint arXiv:2105.07345. Zhang, J.; Peng, H.; Wu, K.; Liu, M.; Xiao, B.; Fu, J.; and Yuan, L. 2022. MiniViT: Compressing Vision Transformers with Weight Multiplexing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 12145–12154. Zheng, L.; Shen, L.; Tian, L.; Wang, S.; Wang, J.; and Tian, Q. 2015a. Scalable person re-identification: A benchmark. 
In Proceedings of the IEEE international conference on computer vision, 1116–1124. Zheng, W.-S.; Li, X.; Xiang, T.; Liao, S.; Lai, J.; and Gong, S. 2015b. Partial person re-identification. In Proceedings of the IEEE International Conference on Computer Vision, 4678–4686. Zhong, Z.; Zheng, L.; Cao, D.; and Li, S. 2017. Re-ranking person re-identification with k-reciprocal encoding. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1318–1327. Zhong, Z.; Zheng, L.; Kang, G.; Li, S.; and Yang, Y. 2020. Random erasing data augmentation. In Proceedings of the AAAI conference on artificial intelligence, volume 34, 13001–13008. Zhu, M.; Han, K.; Tang, Y.; and Wang, Y. 2021. Visual Transformer Pruning. In arXiv preprint arXiv:1204.08500. Zhuo, J.; Chen, Z.; Lai, J.; and Wang, G. 2018. Occluded person re-identification. In 2018 IEEE International Conference on Multimedia and Expo (ICME), 1–6. IEEE. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6692 | 2024 | 743 |
18,566 | Progressive Text-to-Image Diffusion with Soft Latent Direction Yuteng Ye, Jiale Cai, Hang Zhou, Guanwen Li, Youjia Zhang, Zikai Song, Chenxing Gao, Junqing Yu, Wei Yang* Huazhong University of Science and Technology, Wuhan, China {yuteng ye, jaile cai, henrryzh, juggle lee, youjiazhang, skyesong, cxg, yjqing, weiyangcs}@hust.edu.cn Abstract In spite of the rapidly evolving landscape of text-to-image generation, the synthesis and manipulation of multiple entities while adhering to specific relational constraints pose enduring challenges. This paper introduces an innovative progressive synthesis and editing operation that systematically incorporates entities into the target image, ensuring their adherence to spatial and relational constraints at each sequential step. Our key insight stems from the observation that while a pre-trained text-to-image diffusion model adeptly handles one or two entities, it often falters when dealing with a greater number. To address this limitation, we propose harnessing the capabilities of a Large Language Model (LLM) to decompose intricate and protracted text descriptions into coherent directives adhering to stringent formats. To facilitate the execution of directives involving distinct semantic operations—namely insertion, editing, and erasing—we formulate the Stimulus, Response, and Fusion (SRF) framework. Within this framework, latent regions are gently stimulated in alignment with each operation, followed by the fusion of the responsive latent components to achieve cohesive entity manipulation. Our proposed framework yields notable advancements in object synthesis, particularly when confronted with intricate and lengthy textual inputs. Consequently, it establishes a new benchmark for text-to-image generation tasks, further elevating the field’s performance standards. Introduction Text-to-image generation is a vital and rapidly evolving field in computer vision that has attracted unprecedented attention from both researchers and the general public. The remarkable advances in this area are driven by the application of state-of-the-art image-generative models, such as auto-regressive (Ramesh et al. 2021; Wang et al. 2022) and diffusion models (Ramesh et al. 2022; Saharia et al. 2022; Rombach et al. 2022), as well as the availability of large-scale language-image datasets (Sharma et al. 2018; Schuhmann et al. 2022). However, existing methods face challenges in synthesizing or editing multiple subjects with specific relational and attributive constraints from textual prompts (Chefer et al. 2023). The typical defects that oc*indicates corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. cur in the synthesis results are missing entities, and inaccurate inter-object relations, as shown in ??. Existing work improves the compositional skills of text-to-image synthesis models by incorporating linguistic structures (Feng et al. 2022), and attention controls (Hertz et al. 2022; Chefer et al. 2023) within the diffusion guidance process. Notably, Structured Diffusion (Feng et al. 2022) parse a text to extract numerous noun phrases, Attend-and-Excite (Chefer et al. 2023) strength attention activations associated with the most marginalized subject token. Yet, these remedies still face difficulties when the text description is long and complex, especially when it involves two and more subjects. 
Furthermore, users may find it necessary to perform subtle modifications to the unsatisfactory regions of the generated image, while preserving the remaining areas. In this paper, we propose a novel progressive synthesizing/editing operation that successively incorporates entities, that conform to the spatial and relational constraint defined in the text prompt, while preserving the structure and aesthetics in each step. Our intuition is based on the observation that text-to-image models tend to better handle shortsentence prompts with a limited number of entities (1 or 2) than long descriptions with more entities. Therefore, we can parse the long descriptions into short-text prompts and craft the image progressively via a diffusion model to prevent the leakage and missing of semantics. However, applying such a progressive operation to diffusion models faces two major challenges: • The absence of a unified method for converting the integrated text-to-image process into a progressive procedure that can handle both synthesis and editing simultaneously. Current strategies can either synthesize (Chefer et al. 2023; Ma et al. 2023) or edit (Kawar et al. 2023; Goel et al. 2023; Xie et al. 2022; Avrahami, Fried, and Lischinski 2022; Yang et al. 2023), leaving a gap in the collective integration of these functions. • The need for precise positioning and relational entity placement. Existing solutions either rely on user-supplied masks for entity insertion, necessitating manual intervention (Avrahami, Fried, and Lischinski 2022; Nichol et al. 2021), or introduce supplementary phrases to determine the entity editing direction (Hertz et al. 2022; Brooks, Holynski, and Efros 2023), which inadequately addressing spatial and relational dynamics. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6693 ...a gentle river flows down the forest... ...bear wearing a shirt and a hat on the left... ...a mushroom on the right... Step 1 Step 2 Step 3 Our Progressive Text-to-Image Stable Diffusion Attend-and-excite LLM “change shirt to blue shirt” “Delete the hat” User Inference “...There is a gentle river flowing down the forest. On the left side of the river stands a bear wearing a shirt and a hat, while on the right is a mushroom...” × Figure 1: Existing text-to-image synthesis approaches struggle with textual prompts involving multiple entities and specified relational directions. We propose to decompose the protracted prompt into a set of short commands, including synthesis, editing and erasing operations, using a Large Language Model (LLM) and progressively generate the image. To overcome these hurdles, we present the Stimulus, Response, and Fusion (SRF) framework, assimilating a stimulus-response generation mechanism along with a latent fusion module into the diffusion process. Our methodology involves employing a fine-tuned GPT model to deconstruct complex texts into structured prompts, including synthesis, editing, and erasing operations governed by a unified SRF framework. Our progressive process begins with a real image or synthesized background, accompanied by the text prompt, and applies the SRF method in a step-by-step approach. Unlike previous strategies that aggressively manipulate the cross-attention map (Wu et al. 2023; Ma et al. 2023), our operation guides the attention map via a soft direction, avoiding brusque modifications that may lead to discordant synthesis. 
Additionally, when addressing relationships like “wearing” and “playing with”, we begin by parsing the relative positions of the objects with the help of GPT, after which we incorporate the relational description and relative positions into the diffusion model to enable object interactions. In summary, we unveil a novel, progressive text-to-image diffusion framework that leverages the capabilities of a Language Model (LLM) to simplify language description, offering a unified solution for handling synthesis and editing patterns concurrently. This represents an advancement in textto-image generation and provides a new platform for future research. Related Work Our method is closely related to image manipulation and cross-attention control within diffusion models. Image manipulating refers to the process of digitally manipulating images to modify or enhance their visual appearance. Various techniques can be employed to achieve this end, such as the use of spatial masks or natural language descriptions to guide the editing process towards specific goals. One promising line of inquiry involves the application of generative adversarial networks (GANs) for image domain transfer (Isola et al. 2017; Sangkloy et al. 2017; Zhu et al. 2017; Choi et al. 2018; Wang et al. 2018; Huang et al. 2018; Park et al. 2019; Liu, Breuel, and Kautz 2017; Baek et al. 2021) or the manipulation of latent space (Zhu et al. 2016; Huh et al. 2020; Richardson et al. 2021; Zhu et al. 2020; Wulff and Torralba 2020; Bau et al. 2021). Recently, diffusion models have emerged as the mainstream. GLIDE (Nichol et al. 2021), Blended diffusion (Avrahami, Fried, and Lischinski 2022) and SmartBrush (Xie et al. 2022) replace masked image regions with predefined objects while preserving the inherent image structure. Additionally, techniques such as prompt-to-prompt (Hertz et al. 2022) and instructpix2pix (Brooks, Holynski, and Efros 2023) enable the modification of image-level objects through text alterations. Contrasting previous methods that solely cater to either synthesis or editing, we construct a unified framework that accommodates both. Objects and positional relationships are manifested within The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6694 Tune-A-GPT “...There is a dog and a cat playing together on the right side of the yard...These apples are then replaced with oranges, which are subsequently removed...” “change [apples] to [oranges]." Step i+1 Text Decomposition “delete [oranges]." Step i+2 “[a dog] [plays with] [a cat] [on the right side of] [the yard]." Step i Figure 2: We employ a fine-tuned GPT model to deconstruct a comprehensive text into structured prompts, each classified under synthesis, editing, and erasing operations. GPT-4 “a dog and a cat play together on the right side of the yard” yard:(0, 128, 512, 512) dog: (384, 256, 512, 512) cat: (256, 256, 384, 512) Step i Step i+1 Figure 3: For the synthesis operation, we generate the layout indicated in the prompt from a frozen GPT-4 model, which subsequently yields the new bounding box coordinates for object insertion. the cross attention map of the diffusion model. Inspired by this observation (Feng et al. 2022), techniques have been devised to manipulate the cross attention map for image synthesis or editing. Prompt-to-Prompt approach (Hertz et al. 2022) aims at regulating spatial arrangement and geometry through the manipulation of attention maps derived from textual prompts. Structured Diffusion (Feng et al. 
2022) utilizes a text parsing mechanism to isolate numerous noun phrases, enhancing the corresponding attention space channels. The Attend-and-Excite approach (Chefer et al. 2023) amplifies attention activations linked to the most marginalized subject tokens. Directed Diffusion (Ma et al. 2023) proposes an attention refinement strategy through the utilization of a weak and strong activation approach. The main difference between our layout generation and the layout prediction approaches is that our method enables precise increment generation and intermediate modifications, i.e., we gradually change the layout instead of generating one layout at once. As for background fusion, we use a soft mask to ensure the object’s integrity. Method Problem Formulation we elaborate upon our innovative progressive text-to-image framework. Given a multifaceted text description P and a real or generated background I, our primary goal is to synthesize an image that meticulously adheres to the modifications delineated by P in alignment with I. The principal challenge emerges from the necessity to decode the intricacy of P, manifesting across three complex dimensions: • The presence of multiple entities and attributes escalates the complexity of the scene, imposing stringent demands on the model to generate representations that are not only accurate but also internally coherent and contextually aligned. • The integration of diverse positional and relational descriptions calls for the model to exhibit an advanced level of understanding and to employ sophisticated techniques to ascertain precise spatial configuration, reflecting both explicit commands and implied semantic relations. • The concurrent introduction of synthesis, editing, and erasing operations introduces additional layers of complexity to the task. Managing these intricate operations within a unified model presents a formidable challenge, requiring a robust and carefully designed approach to ensure seamless integration and execution. We address these challenges through a unified progressive text-to-image framework that: (1) employs a fine-tuned GPT model to distill complex texts into short prompts, categorizing each as synthesis, editing, or erasing mode, and accordingly generating the object mask; (2) sequentially processes these prompts within the same framework, utilizing attention-guided generation to capture position-aware features with soft latent direction, and subsequently integrates them with the previous stage’s outcomes in a subtle manner. This approach synthesizes the intricacies of text-to-image transformation into a coherent, positionally aware procedure. Text Decomposition P may involve multiple objects and relations, we decompose P into a set of short prompts, which produces an image accurately representing P when executed sequentially. As illustrated in fig. 2, we fine-tune a GPT with OpenAI API (OpenAI 2023) to decompose P into multiple structured prompts, denoted as {P1, P2, ..., Pn}. Each Pi falls into one of the three distinct modes: Synthesis mode: “[object 1] [relation] [object 2] [position] [object 3]”, Editing mode: “change [object 1] to [object 2]”, and Erasing mode: “delete [object]”. In pursuit of this aim, we start by collecting full texts using ChatGPT (Brown et al. 2020) and then manually deconstruct them into atomic prompts. Each prompt has a minimal number of relations and is labeled with synthesis/editing/erasing mode. 
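To make the three prompt modes concrete, the sketch below shows one way the decomposed directives could be represented and dispatched downstream. The parsing regexes and field names are hypothetical; in the paper the decomposition itself is produced by the fine-tuned GPT rather than by pattern matching.

```python
import re
from dataclasses import dataclass

@dataclass
class Directive:
    mode: str       # "synthesis" | "editing" | "erasing"
    payload: dict   # fields of the structured prompt

def parse_directive(prompt: str) -> Directive:
    """Illustrative dispatcher for the three structured prompt modes."""
    m = re.match(r"change \[(.+)\] to \[(.+)\]", prompt)
    if m:
        return Directive("editing", {"source": m.group(1), "target": m.group(2)})
    m = re.match(r"delete \[(.+)\]", prompt)
    if m:
        return Directive("erasing", {"object": m.group(1)})
    # everything else is treated as a synthesis prompt
    phrases = re.findall(r"\[(.+?)\]", prompt)
    return Directive("synthesis", {"phrases": phrases, "raw": prompt})

for p in ["[a dog] [plays with] [a cat] [on the right side of] [the yard].",
          "change [apples] to [oranges].",
          "delete [oranges]."]:
    print(parse_directive(p))
```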
Using these prompts and their corresponding modes for model supervision, we fine-tune the GPT model to enhance its decomposition and generalization ability. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6695 Q KV “a dog and a cat play together” cat dog Stimulus Cross-Attention map of dog and cat DDIM Inversion Latent Fusion Step i Step i+1 Reverse Step “A dog and a cat play together on the right side of the yard.” Stimulus, Response & Fusion Layout Step i+1 Step i Figure 4: Overview of our unified framework emphasizes progressive synthesis, editing, and erasing. In each progressive step, A random latent zt is directed through the cross-attention map in inverse diffusion. Specifically, we design a soft stimulus loss that evaluates the positional difference between entity attention and the target mask region, leading to a gradient for updating the latent z∗ t−1 as a latent response. Subsequentially, another forward diffusion pass is applied to denoise z∗ t , yielding deriving z∗ t−1. In the latent fusion phase, we transform the previous i-th image into a latent code zbg t−1 using DDIM inversion. The blending of z∗ t−1 with zbg t−1 incorporates a dynamic evolving mask, which starts with a layout box and gradually shifts to cross-attention. Finally, z∗ t−1 undergoes multiple diffusion reverse steps and results in the (i + 1)-th image. Operational Layouts. For the synthesis operation, as shown in fig. 3, we feed both the prompt and a reference bounding box into a frozen GPT-4 API. This procedure produces bounding boxes for the target entity that will be used in the subsequent phase. We exploit GPT-4’s ability to extract information from positional and relational text descriptors. For example, the phrase “cat and dog play together” indicates a close spatial relationship between the “cat” and “dog”. Meanwhile, “on the right side” suggests that both animals are positioned to the right of the “yard”. For the editing and erasing operations, we employ Diffusion Inversion (Mokady et al. 2023) to obtain the cross-attention map of the target object, which serves as the layout mask. For example, when changing “apples” to “oranges”, we draw upon the attention corresponding to “apples”. On the other hand, to “delete the oranges”, we focus on the attention related to “oranges”. Notably, this approach avoids the need to retrain the diffusion model and is proficient in managing open vocabularies. we denote generated layout mask as M for all operations in following sections for convention. In the following section, we provide a complete introduction to the synthesis operation. At last, we exhibit that the editing and erasing operations only differ from the synthesis operation in parameter settings. Stimulus & Response With the synthesis prompt Pi to be executed and its mask configuration Mi. The goal of Latent Stimulus & Response is to enhance the positional feature representation on M. As illustrated in fig. 4, this is achieved by guided cross-attention generation. Differing from the approaches (Ma et al. 2023; Wu et al. 2023), which manipulate attention through numerical replacement, we modulate the attention within mask regions associated with the entity in Pi via a soft manner. Rather than directly altering the attention, we introduce a stimulus to ensure that the object attention converges to the desired scores. 
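Concretely, one progressive step can be summarized by the sketch below, which previews the stimulus loss and latent response formalized in eqs. (1)-(4) just after, together with the mask-blended latent fusion. It is a schematic sketch only: the denoiser and cross-attention extractor are stand-ins, the loss is read here as a squared difference between the softmaxed attention and the weighted mask, and tensor shapes, the gradient step and the 16 x 16 attention resolution are simplified.

```python
import torch
import torch.nn.functional as F

def srf_step(z_t, text_emb, mask, z_bg, attn_fn, eps_fn,
             delta=0.8, alpha=40.0, use_layout_mask=True):
    """One schematic Stimulus-Response-Fusion update at a single timestep.

    z_t, z_bg : (C, H, W) current latent and DDIM-inverted background latent
    mask      : (h, w)    layout box / reference-attention mask M of the entity
    attn_fn   : stand-in returning the entity's cross-attention map, (h, w)
    eps_fn    : stand-in denoiser (in the paper, Stable Diffusion's UNet)
    """
    # Stimulus: penalty between the softmaxed attention and delta * M
    z = z_t.clone().requires_grad_(True)
    attn = attn_fn(z, text_emb)
    probs = torch.softmax(attn.flatten(), dim=0).view_as(attn)
    loss = ((probs - delta * mask) ** 2).mean()

    # Response: back-propagate the stimulus into the latent (a soft direction)
    grad = torch.autograd.grad(loss, z)[0]
    z_star = (z - alpha * grad).detach()
    z_star = z_star - eps_fn(z_star, text_emb)          # stand-in denoising step

    # Fusion: blend the responded latent with the background latent
    m = mask if use_layout_mask else (attn.detach() > attn.detach().mean()).float()
    m = F.interpolate(m[None, None], size=z_star.shape[-2:], mode="nearest")[0, 0]
    return m * z_star + (1.0 - m) * z_bg

# toy usage with stand-in modules (purely illustrative)
C, H, W, h, w = 4, 64, 64, 16, 16
conv = torch.nn.Conv2d(C, 1, 3, padding=1)
attn_fn = lambda z, e: F.adaptive_avg_pool2d(conv(z[None]), (h, w))[0, 0]
eps_fn = lambda z, e: 0.1 * torch.randn_like(z)
layout = torch.zeros(h, w)
layout[:, w // 2:] = 1.0                                # "on the right side"
out = srf_step(torch.randn(C, H, W), None, layout, torch.randn(C, H, W),
               attn_fn, eps_fn)
print(out.shape)    # torch.Size([4, 64, 64])
```

The default values delta = 0.8 and alpha = 40 follow the implementation details reported in the Experiment section.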
Specifically, we formulate a stimulus loss function between the object mask M and the corresponding attention A as: Ls = n X i=1 (softmax(Ai t) −δ · Mi) (1) where Ai t signifies the cross-attention map of the i-th object at the t-th timestep. Mi denotes the mask of the i-th object. δ represents the stimulus weights. The intent of stimulus attention leans towards a spatial-wise generation process. This is achieved by backpropagating the gradient of the stimulus loss function, as defined in Eq. 1, to update the latent code. This process serves as a latent response to the stimulated attention, which can be formally expressed as: z∗ t ←zt −αt · ∇ztLs (2) In the above equation, z∗ t represents the updated latent code and αt denotes the learning rate. Finally, we execute another forward pass of the stable diffusion model using the updated latent code z∗ t to compute z∗ t−1 for the subsequent denoising step. Based on eq. (1) and eq. (2), we observe consistent spatial behavior in both the cross-attention and latent spaces. For a more detailed analysis, we refer to fig. 5 and find this property contributes to producing faithful and position-aware image representations. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6696 Stable Diffusion Stimulus & Response (ours) “a kitten chasing a butterfly” “a black cat left a white dog” “a photo of a turtle” Figure 5: Visual results generated by Stable Diffusion and Stimulus & Response. Stable Diffusion shows noticeable problems in positional generation (top), semantic and attribute coupling (middle), and object omission (bottom), while ours delivers precise outcomes. Latent Fusion Recalling that z∗ t−1 denotes the latent feature of the target object, our next task is to integrate them seamlessly with the image from the preceding stage. For this purpose, we first convert the previous image into latent code by DDIM inversion, denoted as zbg. Then for timestep t, we take a latent fusion strategy (Avrahami, Lischinski, and Fried 2022) between zbg t and z∗ t , which is formulated as: zt−1 = c M · z∗ t−1 + (1 −c M) · zbg t−1 (3) where c M acts as a latent mask to blend the features of target objects with the background. In the synthesis operation, employing a uniform mask across all steps can be too restrictive, potentially destroying the object’s semantic continuity. To mitigate this, we introduce a more soft mask, ensuring both object integrity and spatial consistency. Specifically, during the initial steps of diffusion denoising, we use layout mask M to provide spatial guidance. Later, we shift to an attention mask Mattn, generated by averaging and setting a threshold on the cross-attention map, to maintain object cohesion. This process is denoted as: c M(Mattn, M, t) = M if t ≤τ Mattn if t > τ (4) Here, τ serves as a tuning parameter balancing object integrity with spatial coherence. The above response and fusion process is repeated for a subset of the diffusion timesteps, and the final output serves as the image for the next round generation. Editing and Erasing Specifications. Our editing and erasing operation differs in parameter setting: we set M in eq. (1) as editing/erasing reference attention. we set c M in eq. (3) as the editing/erasing mask in all diffusion steps for detailed, shape-specific modifications. Experiment Baselines and Evaluation. Our experimental comparison primarily concentrates on Single-Stage Generation and Progressive Generation baselines. 
(1) We refer to Single-Stage Generation methods as those that directly generate images from input text in a single step. Current methods include Stable Diffusion (Rombach et al. 2022), Attend-andexcite (Chefer et al. 2023), and Structured Diffusion (Feng et al. 2022). We compare these methods to analyze the efficacy of our progressive synthesis operation. We employ GPT to construct 500 text prompts that contain diverse objects and relationship types. For evaluation, we follow (Wu et al. 2023) to compute Object Recall, which quantifies the percentage of objects successfully synthesized. Moreover, we measure Relation Accuracy as the percentage of spatial or relational text descriptions that are correctly identified, based on 8 human evaluations. (2) We define Progressive Generation as a multi-turn synthesis and editing process that builds on images from preceding rounds. Our comparison encompasses our comprehensive progressive framework against other progressive methods, which includes Instructbased Diffusion models (Brooks, Holynski, and Efros 2023) and mask-based diffusion models (Rombach et al. 2022; Avrahami, Fried, and Lischinski 2022). To maintain a balanced comparison, we source the same input images from SUN (Xiao et al. 2016) and text descriptions via the GPT API (OpenAI 2023). Specifically, we collate five scenarios totaling 25 images from SUN, a dataset that showcases realworld landscapes. Each image is paired with the text description, which ensures: 1. Integration of synthesis, editing, and easing paradigms; 2. Incorporation of a diverse assortment of synthesized objects; 3. Representation of spatial relations (e.g., top, bottom, left, right) and interactional relations (e.g., “playing with”, “wearing”). For evaluation, we utilize Amazon Mechanical Turk (AMT) to assess image fidelity. Each image is evaluated based on the fidelity of the generated objects, their relationships, the execution of editing instructions, and the alignment of erasures with the text descriptions. Images are rated on a fidelity scale from 0 to 2, where 0 represents the lowest quality and 2 signifies the highest. With two evaluators assessing each generated image, the cumulative score for each aspect can reach a maximum of 100. Implementation Details. Our framework builds upon Stable Diffusion (SD) V-1.4. During the Stimulus & Response stage, we assign a weight of δ equals 0.8 in eq. (1), and set t equals 25 and αt equals 40 in eq. (2). We implement the stimulus procedure over the 16 × 16 attention units and integrate the Iterative Latent Refinement design (Chefer et al. 2023). In the latent fusion stage, the parameter τ is set to a value of 40. Qualitative and Quantitative Results Qualitative and Quantitative Comparisons with SingleGeneration Baselines. fig. 6 reveals that traditional baseline methods often struggle with object omissions and maintaining spatial and interactional relations. In contrast, our progressive generation process offers enhanced image fidelity and controllability. Additionally, we maintain finer details in the generated images, such as the shadows of the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6697 On the left side of the beach, a sea turtle can be spotted. To its right, there stands a red beach chair, and beneath these chairs, some fruits are neatly placed. There is a gentle stream flowing through the forest. On the right side of the river stands a wolf wearing a hat, while on the left is a brown bear wearing a shirt. 
Stable Diffusion Attend-and-Excite Structured Diffusion Ours A child shakes hands with a rabbit in the yard, while stones are on the right. Some apples are above them. There is a desert cave situated over the desert. To the left, a desert fox can be seen, and to the right, there's a desert cactus. Figure 6: Qualitative comparison with Single-Stage baselines. Common errors in the baselines include missing objects and mismatched relations. Our method demonstrates the progressive generation process. “beach chair”. Result in table 1 indicates that our method outperforms the baselines in both object recall and relation accuracy. Qualitative and Quantitative Comparisons with Progressive Generation Baselines. In fig. 8, baseline methods often fail to synthesize full objects and may not represent relationships as described in the provided text. Moreover, during editing and erasing operations, these methods tend to produce outputs with compromised quality, showcasing unnatural characteristics. It’s worth noting that any missteps or inaccuracies in the initial stages, such as those seen in InstructPix2Pix, can cascade into subsequent stages, exacerbating the degradation of results. In contrast, our proposed method consistently yields superior results through every phase. The results in table 2 further cement our method’s dominant performance in synthesis, editing, and erasing operations, as underscored by the impressive rating scores. Ablation Study Ablation study of method components is shown in table 3. Without latent fusion, we lose continuity from prior generation stages, leading to inconsistencies in object synthesis and placement. On the other hand, omitting the Stimulus & Response process results in a lack of positional Method Object Recall ↑ Relation Accuracy ↑ Stable Diffusion 40.7 19.8 Structured Diffusion 43.5 21.6 Attend-and-excite 50.3 23.4 Ours 64.4 50.8 Table 1: Quantitative comparison with Single-Stage Generation baselines. awareness, making the synthesis less precise. Both omissions manifest as significant drops in relation and entity accuracies, emphasizing the synergistic importance of these components in our approach. The analysis of Stimulus & Response in the editing operation is highlighted in fig. 7. Compared to Stable Diffusion, Stimulus & Response not only enhances object completeness and fidelity but also demonstrates a broader diversity in editing capabilities. The loss curve indicates that Stimulus & Response aligns more closely with the reference cross-attention, emphasizing its adeptness in preserving the original structure. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6698 “fox” → “...” “cat” “monkey” “jaguar” Loss Step t SD S&R Figure 7: The analysis of Stimulus & Response in the editing operation. The left side shows a visual comparison between SD (Stable Diffusion) and S&R (Stimulus & Response). The right side presents the convergence curve of cross-attention loss during diffusion sampling steps. The loss is computed as the difference between reference attention and model-generated attention. In the right figure, red, blue, and green colors represent the objects “jaguar”, “cat”, and “monkey” respectively. Solid lines indicate SD loss, while dashed lines represent S&D loss. In the yard, a white cat is playing with a red ball of yarn on the right side, while a statue of Einstein stands on the far left. Initially, there was a cat, but it was changed to a rabbit. Finally the rabbit was deleted. 
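The alignment curve in fig. 7 can be reproduced schematically as a per-step difference between the reference cross-attention (obtained by inverting the original image) and the attention produced while editing. The mean-absolute reduction below is an assumption; the paper only states that the loss is the difference between reference and model-generated attention.

```python
import torch

def attention_alignment_curve(ref_attns, gen_attns):
    """Per-timestep difference between reference and generated cross-attention.

    ref_attns, gen_attns: lists of (h, w) attention maps of the edited object,
    one per diffusion sampling step. The mean-absolute reduction is assumed.
    """
    return [(r - g).abs().mean().item() for r, g in zip(ref_attns, gen_attns)]

# toy usage: 50 sampling steps of 16 x 16 attention maps
ref = [torch.rand(16, 16) for _ in range(50)]
gen = [torch.rand(16, 16) for _ in range(50)]
curve = attention_alignment_curve(ref, gen)
print(len(curve), round(curve[0], 3))
```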
input image text description synthesis laytout Ours Blended Latent Stable-inpainting InstructPix2Pix Step 1 Step 2 Step 3 Step 4 Figure 8: Qualitative comparison with Progressive Generation baselines. The first two phases illustrate object synthesis operation, where target objects are color-coded in both the text and layout. Subsequent phases depict object editing and erasing processes, wherein a cat is first transformed into a rabbit and then the rabbit is removed. Method Synthesis Editing Erasing Object Relation InstructPix2Pix 19 24 32 29 Stable-inpainting 64 54 65 45 Blended Latent 67 52 67 46 Ours 74 60 72 50 Table 2: Quantitative comparison of our method against Progressive Generation baselines, using rating scores. Method Variant Object Recall ↑ Relation Accuracy ↑ w/o LF 38.8 21.8 w/o S&R 58.3 45.2 Ours 64.4 50.8 Table 3: Ablation study. LF and S&R represent Latent Fusion and Stimulus & Response respectively. Conclusion In this study, we addressed the prevailing challenges in the rapidly advancing field of text-to-image generation, particularly the synthesis and manipulation of multiple entities under specific constraints. Our innovative progressive synthesis and editing methodology ensures precise spatial and relational representations. Recognizing the limitations of existing diffusion models with increasing entities, we integrated the capabilities of a Large Language Model (LLM) to dissect complex text into structured directives. Our Stimulus, Response, and Fusion (SRF) framework, which enables seamless entity manipulation, represents a major stride in object synthesis from intricate text inputs. One major limitation of our approach is that not all text can be decomposed into a sequence of short prompts. For instance, our approach finds it challenging to sequentially parse text such as “a horse under a car and between a cat and a dog.” We plan to gather more training data and labels of this nature to improve the parsing capabilities of GPT. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6699 Acknowledgments This work is supported by the National Natural Science Foundation of China (NSFC No. 62272184). The computation is completed in the HPC Platform of Huazhong University of Science and Technology. References Avrahami, O.; Fried, O.; and Lischinski, D. 2022. Blended latent diffusion. arXiv preprint arXiv:2206.02779. Avrahami, O.; Lischinski, D.; and Fried, O. 2022. Blended diffusion for text-driven editing of natural images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18208–18218. Baek, K.; Choi, Y.; Uh, Y.; Yoo, J.; and Shim, H. 2021. Rethinking the truly unsupervised image-to-image translation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 14154–14163. Bau, D.; Andonian, A.; Cui, A.; Park, Y.; Jahanian, A.; Oliva, A.; and Torralba, A. 2021. Paint by word. arXiv preprint arXiv:2103.10951. Brooks, T.; Holynski, A.; and Efros, A. A. 2023. Instructpix2pix: Learning to follow image editing instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18392–18402. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33: 1877– 1901. Chefer, H.; Alaluf, Y.; Vinker, Y.; Wolf, L.; and CohenOr, D. 2023. 
| 2024 | 744 |
18,567 | UCMCTrack: Multi-Object Tracking with Uniform Camera Motion Compensation Kefu Yi1*, Kai Luo2, Xiaolei Luo2, Jiangui Huang2, Hao Wu2, Rongdong Hu3, Wei Hao1* 1School of Traffic and Transportation, Changsha University of Science and Technology 2College of Automotive and Mechanical Engineering, Changsha University of Science and Technology 3Changsha Intelligent Driving Institute {corfyi, haowei}@csust.edu.cn *Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract Multi-object tracking (MOT) in video sequences remains a challenging task, especially in scenarios with significant camera movements. This is because targets can drift considerably on the image plane, leading to erroneous tracking outcomes. Addressing such challenges typically requires supplementary appearance cues or Camera Motion Compensation (CMC). While these strategies are effective, they also introduce a considerable computational burden, posing challenges for real-time MOT. In response to this, we introduce UCMCTrack, a novel motion model-based tracker robust to camera movements. Unlike conventional CMC that computes compensation parameters frame-by-frame, UCMCTrack consistently applies the same compensation parameters throughout a video sequence. It employs a Kalman filter on the ground plane and introduces the Mapped Mahalanobis Distance (MMD) as an alternative to the traditional Intersection over Union (IoU) distance measure. By leveraging projected probability distributions on the ground plane, our approach efficiently captures motion patterns and adeptly manages uncertainties introduced by homography projections. Remarkably, UCMCTrack, relying solely on motion cues, achieves state-of-the-art performance across a variety of challenging datasets, including MOT17, MOT20, DanceTrack and KITTI. More details and code are available at https://github.com/corfyi/UCMCTrack.

Figure 1: IDF1-HOTA-AssA comparisons of different trackers on the test set of MOT17. The horizontal axis is IDF1, the vertical axis is HOTA, and the radius of the circle is AssA. Our UCMCTrack+ achieves 65.8 HOTA, 81.1 IDF1 on MOT17 test, possessing significant competitiveness compared to SOTA trackers. Details are given in Table 1.

Introduction At the core of the tracking-by-detection paradigm of multi-object tracking (MOT) is the accurate association of detections with tracked objects. Motion cues are widely used due to their effectiveness and simplicity. However, the application of motion models in scenarios with frequent camera movement is highly challenging. This issue is usually addressed by applying additional appearance cues or performing frame-by-frame Camera Motion Compensation (CMC) on video captured by the moving camera. While effective, these additional measures introduce a non-negligible computational burden, posing an obstacle for real-time MOT. Thus, a pertinent question arises: Is it possible to employ motion cues in MOT that are robust to camera movement without resorting to the cumbersome frame-by-frame CMC? Our answer is YES. We have developed a pure motion model-based multi-object tracker that is robust to camera movement.
For the same video sequence, it suffices to use the same camera motion compensation parameters, rather than computing the camera motion compensation parameters for every frame as traditional CMC does. We choose to model the target’s motion using a simple Kalman filter on the ground plane, instead of on the imaging plane as most MOT algorithms do, and effectively compensate the motion estimation errors caused by camera movement through the process noise parameters of the Kalman filter. We abandon the commonly used Intersection over Union (IoU) , and instead propose the Mapped Mahalanobis Distance (MMD). It computes the projected probability distribution on the ground plane, and utilizes the Mahalanobis distance to calculate the matching costs between targets. It not only effectively leverages the underlying motion patterns of the targets on the ground plane but also efficiently handles the uncertainties The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6702 caused by homography projection. A deep dive into motion-based MOT highlights a significant challenge when employing motion cues in highly dynamic scenes. Historically, IoU has been the favored metric for data association. On the surface, employing IoU on the image plane appears to be a more direct approach. However, its application often leads to inaccurate tracking outcomes, particularly in complex scenes marked by frequent camera movements. Notably, in these settings, detection and tracking boxes might completely fail to overlap, as shown in Figure 3. This observation underscores an imminent necessity: a transition from exclusive reliance on the image plane to harnessing the more robust motion patterns inherent to the ground plane. Embracing such a paradigm shift stands to effectively address challenges spawned by camera movements, setting the stage for superior tracking accuracy. Distinct from the vast majority of trackers relying IoU on the image plane, ground plane-based association can effectively considers camera movement as noise within the motion model. It minimizes the problems induced by camera movements. This methodology is notably more direct, convenient, and efficient than compensating for camera motion frame-by-frame via traditional CMC. In light of these challenges, we introduce the Uniform Camera Motion Compensation (UCMC) tracker. It is a pure motion-based multi-object tracker that offers a holistic solution robust to camera jitter and motion, without any dependency on IoU-based methodologies. The main contributions of this paper are threefold: • In the realm of multi-object tracking where IoU is conventionally employed to capitalize on motion cues, our work introduces an innovative non-IoU distance measure, singularly driven by motion cues, and manifests state-of-the-art performance across multiple established datasets, marking a significant departure from traditional tracking techniques. • In addressing the challenge of camera movements, we propose a method that diverges from conventional camera motion compensation techniques. Instead of computing camera compensation parameters frame-by-frame for video sequences, our approach uniformly applies the same compensation parameters across the entire sequence, substantially reducing the computational burden typically associated with camera motion adjustments. • We introduce UCMCTrack, a simplistic yet efficacious multi-object tracker that employs a novel, standalone motion-based distance measure. 
This new measure has the potential to complement commonly-used distance metrics such as IoU and ReID. Remarkably, when provided with detections, UCMCTrack operates at a very fast speed, exceeding 1000 FPS using just a single CPU. Related Work Distance Measures Distance measures play key roles in MOT to associate targets in the current frame with those in previous frames. Currently, most algorithms employ the pixel-based Intersection over Union (IoU) technique (Du et al. 2023; Liu et al. 2020), which calculates the intersecting area between the detection box and the tracking box for target matching. However, in cases of camera jitter or low sampling rates, the two boxes may not intersect, rendering IoU ineffective. In contrast, Generalized Intersection over Union (GIoU) (Rezatofighi et al. 2019) not only focuses on the overlapping region but also considers the non-overlapping area, thus improving the computation of image overlap. Distance-IoU (DIoU) builds upon GIoU by further incorporating geometric distance into the calculation (Zheng et al. 2020). However, both these methods do not adequately represent the similarity in aspect ratios of the objects. Bidirectional Intersection over Union (BIoU) and the Cascade-BIoU (C-BIoU) proposed to augment the IoU-based approach by introducing a linear average motion estimation model and expanding the search region (Yang et al. 2023). Nevertheless, all these methods operate in the image plane and cannot fully capture the actual motion patterns, leading to faulty tracking during camera motion. Recently, some methods have considered distance measures based on the ground plane. SparseTrack (Liu et al. 2023) goes beyond IoU and incorporates additional estimated pseudo-depth for supplementary metrics. Quo Vadis (Dendorfer et al. 2022) employs homography transformation to calculate the Euclidean distance in the bird’s-eye view and combines it with IoU for target matching. Although these approaches utilize additional depth information, they still rely on IoU and fail to account for the uncertainty in the projection of targets onto the ground plane. Motion Models Tracking-by-detection MOT algorithms (Wojke, Bewley, and Paulus 2017; Cao et al. 2023; Maggiolino et al. 2023) often favor motion models for their simplicity and effectiveness. Among these, the Constant Velocity (CV) model, which assumes unvarying target motion between frames, is the most favored approach (Bewley et al. 2016; Zhang et al. 2022). Numerous studies have been dedicated to improving motion estimation accuracy, employing methods such as Kalman filtering (Bewley et al. 2016; Zhang et al. 2022; Zhou, Koltun, and Kr¨ahenb¨uhl 2020), optical flow (Xiao, Wu, and Wei 2018), and displacement regression (Feichtenhofer, Pinz, and Zisserman 2017; Held, Thrun, and Savarese 2016). However, current MOT algorithms (Du et al. 2021, 2023; Aharon, Orfaig, and Bobrovsky 2022) model the motion of tracking targets directly upon the image plane using detected bounding boxes. This approach fails to reflect the actual motion patterns of the targets on the ground plane, leading to erroneous tracking results during camera motion. To further leverage the inherent motion patterns of the tracking targets, researchers (Liu, Wang, and Xu 2020; Marinello, Proesmans, and Van Gool 2022) have employed LSTM networks to predict target motion, while others (Babaee, Li, and Rigoll 2019) have used RNN networks for similar purposes. Additionally, transformer networks (Yang et al. 
2022) have also been utilized to capture object motion patterns. In contrast to employing neural networks to explicitly predict target motion, the tracking-by-query propagation (Zhang, Wang, and Zhang 2023) forces each query The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6703 Image Plane Ground Plane Correlated Measure Distribution CV Motion Model Process Noise Compensation mapped measurement track prediction Hungarian Algorithm tracklets Kalman Filter Mapped Mahalanobis Distance Homography Transformation Figure 2: The pipeline of the proposed UCMCTrack. to recall the same instance across different frames. Alternatively, the approach based on a Graphs framework (Cetintas, Bras´o, and Leal-Taix´e 2023) is used to model data association. These methods use a learned network to implicitly grasp the dynamics of target motion. While they achieve promising results, their training process can be challenging, requiring a substantial amount of annotated data and computational resources. Moreover, complex network designs may not meet the real-time requirements on end devices. Camera Motion Compensation Camera Motion Compensation (CMC) is a prevalent method to address dynamic scenes in the field of MOT (Bergmann, Meinhardt, and Leal-Taixe 2019; Han et al. 2022; Khurana, Dave, and Ramanan 2021). This is often achieved by aligning frames through image registration, leveraging techniques such as Enhanced Correlation Coefficient (ECC) maximization (Evangelidis and Psarakis 2008a), or employing feature matching methods like ORB (Rublee et al. 2011). In BOT-SORT (Aharon, Orfaig, and Bobrovsky 2022), image key-points were extracted frame-by-frame, with sparse optical flow subsequently applied. The affine transformation matrix of background motion is calculated and obtained via RANSAC (Fischler and Bolles 1981), and the affine matrix is used to transform the prediction box from the (k1)-th frame coordinate system to the k-th frame coordinate system. In (Yu, Kurnianggoro, and Jo 2019), the pyramidal Lucas-Kanade optical flow is implemented to trace gridbased feature points. The affine transformation matrix between two consecutive frames is calculated through matching feature points, and the initial two frames and the background model are aligned with the current frame. In (Yeh et al. 2017), a camera motion compensation framework is proposed with utilization of temporal and spatial structure, which depends on pre-provided background model for background elimination, thereby posing challenges for its adaptation to new scenarios. However, when confronted with highresolution videos, current CMC techniques impose substantial computational overhead and hinders the implementation of real-time target tracking. Method UCMCTrack follows the tracking-by-detection paradigm, with its pipeline detailed in Figure 2. We introduce significant advancements across crucial dimensions: motion model, distance measure, and process noise compensation. Together, these improvements bolster UCMCTrack, endowing it with adaptability and efficiency across diverse tracking challenges. For the pseudocode please refer to Appendix A. Motion Modeling on Ground Plane We model objects’ motion on the ground plane to better capture the fundamental essence of their motion patterns. Selecting the appropriate state vector x, observation vector z, and determining the process noise Qk and measurement noise Rk are crucial in establishing the Kalman constant velocity motion model. 
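To make the ground-plane motion model just described concrete, the following is a minimal NumPy sketch (not the authors' released implementation) of a constant-velocity Kalman filter with state $[x, \dot{x}, y, \dot{y}]$; the frame interval `dt` and the noise matrices `Q` and `R` are placeholders that Eqs. 2-5 and 9-11 below make precise.

```python
import numpy as np

class GroundPlaneKalmanCV:
    """Constant-velocity Kalman filter on the ground plane, state = [x, vx, y, vy]."""

    def __init__(self, x0, y0, dt=1.0 / 30, P0=None):
        self.x = np.array([x0, 0.0, y0, 0.0], dtype=float)   # state vector
        self.P = np.eye(4) if P0 is None else P0              # state covariance
        # Constant-velocity transition: position += velocity * dt
        self.F = np.array([[1, dt, 0, 0],
                           [0, 1,  0, 0],
                           [0, 0,  1, dt],
                           [0, 0,  0, 1]], dtype=float)
        # Only the ground-plane position (x, y) is observed
        self.H = np.array([[1, 0, 0, 0],
                           [0, 0, 1, 0]], dtype=float)

    def predict(self, Q):
        """Time update; camera-motion effects are absorbed into the process noise Q."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + Q

    def update(self, z, R):
        """Measurement update with a mapped ground-plane measurement z and its covariance R."""
        eps = z - self.H @ self.x                     # residual, cf. Eq. 6
        S = self.H @ self.P @ self.H.T + R            # residual covariance, cf. Eq. 7
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ eps
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

In the pipeline of Figure 2, the residual and residual covariance computed in `update` are the same quantities that the Mapped Mahalanobis Distance later reuses for association.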
In order to make observation and calculation more convenient, the choice was made to use the midpoint coordinates of the bottom edge of the bounding box in the image plane, projected onto the ground plane coordinates x and y, as the observation vector. The state vector is defined as $\mathbf{x} = [x, \dot{x}, y, \dot{y}]$. According to the linear camera model (Yu and McMillan 2004), the mapping relationship between the ground plane coordinates x and y, and the image plane coordinates u and v, can be expressed as:

$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A \frac{1}{\gamma} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$ (1)

Where $\gamma$ is the scale factor, and $A$ represents the projection matrix which is the product of camera intrinsic and extrinsic parameters. Please refer to Appendix B for more details.

Correlated Measurement Distribution In general, the measurement errors of detectors on the image plane follow an independent normal distribution, and their covariance matrix $R^{uv}_k$ can be represented as:

$R^{uv}_k = \begin{bmatrix} (\sigma_m w_k)^2 & 0 \\ 0 & (\sigma_m h_k)^2 \end{bmatrix}$ (2)

Where: $\sigma_m$ represents the detection noise factor as a hyperparameter, and $w_k$ and $h_k$ denote the detected width and height from the detector. If we express the inverse matrix of $A$ in Eq. 1 as:

$A^{-1} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$ (3)

This leads to the covariance matrix of measurement errors in the ground plane as (please refer to Appendix B for a detailed derivation):

$R_k = C R^{uv}_k C^{T}$ (4)

where:

$C = \begin{bmatrix} \gamma a_{11} - a_{31}\gamma x & \gamma a_{12} - a_{32}\gamma x \\ \gamma a_{21} - a_{31}\gamma y & \gamma a_{22} - a_{32}\gamma y \end{bmatrix}$ (5)

Figure 3: Visualization of distance measures. (a) Visualization of IoU on the image plane. IoU fails as there is no intersection between bounding boxes. (b) Visualization of Mapped Mahalanobis Distance (MMD) without Correlated Measurement Distribution (CMD). Incorrect associations occur due to insufficient utilization of distribution information. (c) Visualization of MMD with CMD. Correct associations after using the correlated probability distribution, undergoing a rotation on the ground plane.

Thus, we obtain the mapped measurement noise matrix $R_k$ in the ground plane. It's important to highlight that the mapped distribution exhibits a strong correlation since $R_k$ is non-diagonal. This allows for more accurate association of targets on the ground plane.

Mapped Mahalanobis Distance In image plane motion modeling, IoU is the most commonly used distance measure for data association. However, when objects are in high-speed motion or captured at low FPS or in scenes with moving camera, the lack of overlap between detection boxes and tracklets renders IoU ineffective. Conversely, by employing normalized Mahalanobis distance in ground plane modeling, the issue of IoU inefficiency is effectively addressed, as depicted in Figure 3. The calculation of mapped Mahalanobis distance between track state and measurement involves three steps:

1. Calculate Residual: $\epsilon = z - H\hat{x}$ (6). Here, $z$ is the mapped measurement on ground plane, $\hat{x}$ is the predicted track state, and $H$ is the observation matrix.
2. Compute Residual Covariance Matrix: $S = HPH^{T} + R_k$ (7). Here, $P$ is the predicted covariance matrix, and $R_k$ is the mapped measurement noise covariance matrix.
3. Calculate Normalized Mahalanobis Distance: $D = \epsilon^{T} S^{-1} \epsilon + \ln|S|$ (8). Here, $|S|$ represents the determinant of the matrix $S$, and $\ln$ is the natural logarithm.

As seen in Eq.
8, we employed the normalized Mahalanobis distance, incorporating the logarithm of the determinant of the measurement covariance matrix. This ensures that data association decisions are not solely based on the discrepancies between measurements and predictions, but also holistically consider the accuracy and uncertainty of measurements. Consequently, this will yield more robust and reliable association decisions in object tracking. Process Noise Compensation In the context of MOT tasks, many previous works (Bewley et al. 2016; Wojke, Bewley, and Paulus 2017; Zhang et al. 2022; Cao et al. 2023) have treated the target’s motion model as a Constant Velocity (CV) model without considering the noise impact caused by camera motion. However, camera motion is quite common in MOT tasks and can introduce significant noise that affects the tracking performance. Assuming that the camera’s motion-induced acceleration is the source of noise, we can represent it through the system motion model as follows: ∆x = 1 2 · σ · (∆t)2 ∆v = σ · ∆t (9) where ∆x and ∆v represent the changes in position and velocity under the influence of noise, respectively. σ denotes the acceleration change due to camera motion, and ∆t represents the time interval between two image frames. Expressing Eq. 9 in matrix form yields the matrix: G = ∆t2 2 0 ∆t 0 0 ∆t2 2 0 ∆t (10) It captures the relationship between the changes in position and velocity caused by noise in each direction. For a twodimensional CV Model with a Kalman filter, the covariance matrix of the process noise can be represented as follows: Qk = G · diag(σx, σy) · GT (11) where σx and σy denote the process compensation factors along the x and y axes, handling the motion noise cause by camera movements of tilt and rotation respectively. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6705 Tracker MOT17 MOT20 appearance & motion: HOTA↑IDF1↑ MOTA↑AssA↑ HOTA↑IDF1↑ MOTA↑AssA↑ FCG (Girbau, Marqu´es, and Satoh 2022) 62.6 77.7 76.7 63.4 57.3 69.7 68.0 58.1 Quo Vadis (Dendorfer et al. 2022) 63.1 77.7 80.3 62.1 61.5 75.7 77.8 59.9 GHOST (Seidenschwarz et al. 2023) 62.8 77.1 78.7 61.2 75.2 73.7 Bot-SORT (Aharon, Orfaig, and Bobrovsky 2022) 65.0 80.2 80.5 65.5 63.3 77.5 77.8 62.9 StrongSORT (Du et al. 2023) 64.4 79.5 79.6 64.4 62.6 77.0 73.8 64.0 Deep OCSORT (Maggiolino et al. 2023) 64.9 80.6 79.4 65.9 63.9 79.2 75.6 65.7 motion only: HOTA↑IDF1↑ MOTA↑AssA↑ HOTA↑IDF1↑ MOTA↑AssA↑ ByteTrack (Zhang et al. 2022) 63.1 77.3 80.3 62.0 61.3 75.2 77.8 59.6 C-BIoU (Yang et al. 2023) 64.1 79.7 81.1 63.7 MotionTrack (Xiao et al. 2023) 65.1 80.1 81.1 65.1 62.8 76.5 78.0 61.8 SparseTrack (Liu et al. 2023) 65.1 80.1 81.0 65.1 63.4 77.3 78.2 62.8 OCSORT (Cao et al. 2023) 63.2 77.5 78.0 63.4 62.4 76.3 75.7 62.5 UCMCTrack (Ours) 64.3 79.0 79.0 64.6 62.8 77.4 75.5 63.5 UCMCTrack+ (Ours) 65.8 81.1 80.5 66.6 62.8 77.4 75.7 63.4 Table 1: Results on MOT17 & MOT20 test. The detection results were obtained from ByteTrack (Zhang et al. 2022). Tracker HOTA↑IDF1↑MOTA↑AssA↑ appearance & motion: FCG 48.7 46.5 89.9 29.9 GHOST 56.7 57.7 91.3 39.8 StrongSORT 55.6 55.2 91.1 38.6 Deep OCSORT 61.3 61.5 92.3 45.8 motion only: ByteTrack 47.3 52.5 89.5 31.4 C-BIoU 60.6 61.6 91.6 45.4 MotionTrack 58.2 58.6 91.3 41.7 SparseTrack 55.5 58.3 91.3 39.1 OCSORT 55.1 54.9 92.2 40.4 UCMCTrack (Ours) 63.4 65.0 88.8 51.1 UCMCTrack+ (Ours) 63.6 65.0 88.9 51.3 Table 2: Results on DanceTrack-test. The detection results were obtained from ByteTrack (Zhang et al. 2022). 
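Returning to the distance measure and noise compensation defined in Eqs. 2-11, the sketch below is an unofficial NumPy re-implementation of those formulas; `A_inv` and `gamma` are assumed to come from the estimated camera parameters of Eq. 1, and the state ordering matches the filter sketch above.

```python
import numpy as np

def mapped_measurement_cov(A_inv, gamma, x, y, w, h, sigma_m=0.05):
    """Map image-plane detection noise to the ground plane (Eqs. 2-5)."""
    R_uv = np.diag([(sigma_m * w) ** 2, (sigma_m * h) ** 2])                     # Eq. 2
    a = A_inv
    C = np.array([[gamma * a[0, 0] - a[2, 0] * gamma * x, gamma * a[0, 1] - a[2, 1] * gamma * x],
                  [gamma * a[1, 0] - a[2, 0] * gamma * y, gamma * a[1, 1] - a[2, 1] * gamma * y]])  # Eq. 5
    return C @ R_uv @ C.T                                                        # Eq. 4

def normalized_mahalanobis(z, x_pred, P_pred, H, R_k):
    """Matching cost between a mapped measurement and a track prediction (Eqs. 6-8)."""
    eps = z - H @ x_pred                         # Eq. 6
    S = H @ P_pred @ H.T + R_k                   # Eq. 7
    sign, logdet = np.linalg.slogdet(S)          # ln|S|, computed in a numerically stable way
    return float(eps @ np.linalg.solve(S, eps) + logdet)   # Eq. 8

def process_noise(dt, sigma_x, sigma_y):
    """Camera-motion-compensated process noise for the 2-D CV model (Eqs. 9-11)."""
    G = np.array([[0.5 * dt ** 2, 0.0],
                  [dt,            0.0],
                  [0.0, 0.5 * dt ** 2],
                  [0.0,            dt]])         # Eq. 10, rows ordered as [x, vx, y, vy]
    return G @ np.diag([sigma_x, sigma_y]) @ G.T # Eq. 11
```

The cost matrix obtained by evaluating `normalized_mahalanobis` for every track-detection pair is what the Hungarian algorithm in Figure 2 consumes.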
Experiments Setting Datasets We conducted a fair evaluation of UCMCTrack on multiple publicly available datasets, including MOT17 (Milan et al. 2016), MOT20 (Dendorfer et al. 2020), DanceTrack (Sun et al. 2022), and KITTI (Geiger et al. 2013). Both MOT17 and MOT20 are pedestrian tracking datasets, and their motion is mostly linear. It is worth noting that MOT20 has a significantly higher density of pedestrians, making it a challenging dataset for tracking. The primary task of the DanceTrack (Sun et al. 2022) is to track dancers, who not only have similar appearances but also perform a large number of irregular movements.The KITTI (Geiger et al. 2013) is an autonomous driving dataset, and we only utilized the left color camera images for the visual vehicle and pedestrian tracking task. Compared to other datasets, KITTI has a lower frame rate, only 10 FPS, and the camera’s motion is more intense. Metrics We employ the CLEAR metrics (Bernardin and Stiefelhagen 2008) which include MOTA, FP, FN, and others, along with IDF1 (Ristani et al. 2016) and TA (Luiten et al. 2021), to evaluate the tracking performance comprehensively in various aspects. MOTA emphasizes the detector’s performance, while IDF1 measures the tracker’s ability to maintain consistent IDs. We also emphasize the use of AssA to evaluate the association performance. On the other hand, HOTA achieves a balance between detection accuracy, association accuracy, and localization accuracy, making it an increasingly important metric for evaluating trackers. Implementation Details For fair comparison, we directly used the existing baseline object detection method YOLOX (Ge et al. 2021). The weight files for MOT17, MOT20 and DanceTrack were obtained from ByteTrack (Zhang et al. 2022). For KITTI, we used the detection results from PermaTrack (Tokmakov et al. 2021). We applied the Enhanced Correlation Coefficient maximization (ECC) (Evangelidis and Psarakis 2008b) model for camera motion compensation, which is consistent with strongSORT (Du et al. 2023). For MOT17, MOT20, and DanceTrack datasets, we manually estimated the camera parameters since they are not publicly accessible. In contrast, KITTI dataset readily furnishes the requisite camera parameters. Benchmark Evaluation Here, we present the benchmark results for multiple datasets. ↑/↓indicate that higher/lower is better, respectively. The highest scores for each group are highlighted in bold, and the highest score for the motion group is marked in underline. ”UCMCTrack+” denotes the enhancement of UCMCTrack with the additional incorporation of CMC. MOT17 and MOT20 Our UCMCTrack results on MOT17 and MOT20 datasets are presented in Table 1, respectively. We used a private detector to generate the detection results and ensured fairness by aligning the detections with OC-SORT (Cao et al. 2023) and ByteTrack The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6706 Tracker Car Pedestrain appearance & motion: HOTA↑ MOTA↑ AssA↑ HOTA↑ MOTA↑ AssA↑ QD-3DT (Hu et al. 2022) 72.8 85.9 72.2 41.1 51.8 38.8 TuSimple (Choi 2015; He et al. 2016) 71.6 86.3 71.1 45.9 57.6 47.6 StrongSORT (Du et al. 2023) 77.8 90.4 78.2 54.5 67.4 57.3 motion only: HOTA↑ MOTA↑ AssA↑ HOTA↑ MOTA↑ AssA↑ CenterTrack (Zhou, Koltun, and Kr¨ahenb¨uhl 2020) 73.0 88.8 71.2 40.4 53.8 36.9 TrackMPNN (Rangesh et al. 2021) 72.3 87.3 70.6 39.4 52.1 35.5 OCSORT (Cao et al. 
2023) 76.5 90.3 76.4 54.7 65.1 59.1 UCMCTrack (Ours) 77.1 90.4 77.2 55.2 67.4 58.0 UCMCTrack+ (Ours) 74.2 90.2 71.7 54.3 67.2 56.3 Table 3: Results on KITTI-test.The detection results were obtained from PermaTrack (Tokmakov et al. 2021). (Zhang et al. 2022). UCMCTrack+ has attained state-of-theart (SOTA) performance, notably on the MOT17 dataset, surpassing the SOTA methods by 0.9 in HOTA, 0.5 in IDF1, and 0.7 in AssA. It even surpasses leading algorithms that leverage both motion and appearance features at a considerable margin, highlighting its effective use of motion information to enhance the robustness and efficiency of the tracking. DanceTrack To demonstrate UCMCTrack’s performance under irregular motion scenarios, we present the test set results on DanceTrack in Table 2. UCMCTrack+ outperforms the SOTA methods with an improvement of 2.3 in HOTA, 3.4 in IDF1, and 5.5 in AssA. This highlights the effectiveness of our tracker in handling targets with irregular motions and further validates its SOTA performance. KITTI In Table 3, we present the results of UCMCTrack on the KITTI dataset. It’s noteworthy that the addition of CMC on the KITTI dataset did not yield favorable results. We believe that this might be due to the inaccuracies present in the CMC parameters. This observation indicates that UCMC demonstrates stronger generalization capabilities than CMC, particularly in complex scenarios.The performance of UCMCTrack in the KITTI dataset demonstrates its effectiveness in addressing challenges posed by high-speed motion and low frame-rate detections. Ablation Studies on UCMC The ablation experiments of UCMCTrack were conducted on the validation sets of MOT17 and DanceTrack. For MOT17, the validation set was split following the prevailing conventions (Zhou, Koltun, and Kr¨ahenb¨uhl 2020). The baseline is chosen as ByteTrack (Zhang et al. 2022). Component Ablation We undertook a comprehensive validation of UCMCTrack’s key components: Mapped Mahalanobis Distance (MMD), Correlated Measurement Distribution (CMD), and Process Noise Compensation (PNC), testing on both MOT17 and DanceTrack datasets. As demonstrated in Table 4, each component of UCMCTrack plays a crucial role in enhancing its overall efficacy. A noteworthy observation is the decline in performance when transitioning from IoU to MMD. This can be attributed to the IoU’s consideration of both position and height informaMethod IoU MMD CMD PNC IDF1↑HOTA↑ MOT17 Validation Set baseline ✓ 77.10 68.43 UCMCTrack-v1 ✓ 75.88 68.09 UCMCTrack-v2 ✓ ✓ 79.68 70.44 UCMCTrack-v3 ✓ ✓ ✓ 82.20 71.96 DanceTrack Validation Set baseline ✓ 47.27 47.93 UCMCTrack-v1 ✓ 43.76 46.32 UCMCTrack-v2 ✓ ✓ 53.93 55.06 UCMCTrack-v3 ✓ ✓ ✓ 62.64 60.42 Table 4: Ablation of UCMCTrack components. Method CMC UCMC IDF1↑ HOTA↑ MOT17 Validation Set baseline 77.10 68.43 baseline+CMC ✓ 81.07 70.97 UCMCTrack ✓ 82.20 71.96 UCMCTrack+ ✓ ✓ 84.05 72.97 DanceTrack Validation Set baseline 47.27 47.93 baseline+CMC ✓ 46.74 47.55 UCMCTrack ✓ 62.64 60.42 UCMCTrack+ ✓ ✓ 62.52 59.18 Table 5: CMC ablation. tion, whereas MMD solely focuses on position, leading to the observed decline in outcomes. However, when CMD is applied post MMD, it better utilizes the distribution information, thus surpassing the performance of the IoU-based baseline. This observation also underscores the potential benefits of integrating height information into MMD for future research. 
Lastly, the incorporation of PNC effectively mitigates the motion noise introduced by camera movements, elevating the tracking performance to state-of-the-art levels. CMC Ablation To explore the impact of CMC on UCMCTrack, we conducted ablation experiments on MOT17 and DanceTrack validation datasets, as shown in Table 5. In MOT17, employing UCMC results in a greater performance enhancement compared to using the baseline combined with The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6707 -1 1 3 5 -8 -6 -4 -2 0 2 4 6 8 ΔHOTA_static ΔHOTA_dynamic ln(s ) (a) Process Compensation Factor 0.04 0.06 0.08 0.10 55 60 65 70 75 80 85 HOTA IDF1 sm HOTA score 60 65 70 75 80 85 90 IDF1 score (b) Detector Noise Factor 0 2 4 6 40 45 50 55 60 65 70 75 80 Error degree IDF1(qx) IDF1(qy) IDF1(qz) HOTA score HOTA(qx) HOTA(qy) HOTA(qz) 60 65 70 75 80 85 90 95 100 IDF1 score (c) Camera Parameters Error Figure 4: In-depth analysis of key parameters and robustness in UCMCTrack. CMC, underscoring the effective role of UCMC in compensating for camera motion. Furthermore, a subsequent application of CMC to UCMC yields an additional performance boost, consistent with the results observed on the MOT17 test set. Interestingly, upon employing CMC on DanceTrack, the performance of the baseline and UCMC algorithm actually deteriorated. This can be attributed to the fact that most video sequences on DanceTrack don’t exhibit pronounced viewpoint shifts. After employing CMC, minor detection offsets for overlapping targets might result in matching with an incorrect trajectory, culminating in a mild performance setback. In contrast, the use of UCMC resulted in a significant performance boost, suggesting its effectiveness in scenes with only minor camera jitters. Influence of Process Compensation Factor In order to further investigate the impact of the process compensation factors on the performance of UCMCTrack, we divided the MOT17 validation sequences into dynamic and static scenes. The horizontal axis represents the natural logarithm of the compensation factor, with both σx and σy set to σ as per Eq. 11, and the vertical axis represents the difference in HOTA compared to the baseline. As illustrated in Figure 4a, the influence of the compensation factor on dynamic and static scenes exhibits two distinct patterns. This divergence underscores the necessity of tailoring the compensation parameters separately for each scene type. Adapting these parameters specific to the scene’s nature ensures that the tracker operates at its optimum performance. This distinction also highlights the role of the compensation factor in mitigating the effects of camera movements. For static scenes, a smaller compensation factor is recommended, while for dynamic scenes, a larger compensation factor is essential to counteract the impact of camera motion, thus enhancing the tracking performance. Influence of Detector Noise Factor We conducted ablation studies on the hyperparameter σm in Eq. 2 to explore its impact on UCMCTrack. As shown in Figure 4b, the tracker’s performance varies with different σm. When the σm is set to 0.05, the HOTA and IDF1 metrics reach their highest values. It is evident that the influence of the σm on HOTA and IDF1 remains relatively minor within the range of 0.04 to 0.1. This indicates that our UCMCTrack is not sensitive to σm. Robustness to Camera Parameters Error Our method, UCMCTrack, relies on the camera parameters to project targets from the image plane to the ground plane. 
However, in scenarios where camera parameters are not provided, we manually estimate them. This means that the estimated camera parameters may not be accurate. We conducted separate ablation experiments for the camera extrinsics errors along the X, Y, and Z axes, and the results are shown in Figure 4c. We observed that errors in the Y axis have a minor impact on performance. On the other hand, errors in the X and Z axes have more significant effects. This is due to the Yaxis corresponding to the yaw direction, variations in this direction have a less impact on the deformation of the estimated ground plane. However, When there are substantial errors in the estimated camera extrinsics along the X and Y axes, the performance of UCMCTrack notably degrades. This can be attributed to significant deformations in the estimated ground plane. Under such circumstances, adjustments to the camera parameters are required to ensure that the estimated ground plane aligns closely with the actual terrain. Conclusion In this work, we presented UCMCTrack, a state-of-theart multi-object tracker demonstrating superior performance across a variety of datasets. UCMCTrack employs a novel distance measure based on normalized mahalanobis distance on mapped ground plane, marking a significant departure from the conventional reliance on IoU. This innovation enables the tracker to adeptly handle the challenges introduced by camera movements, using only a consistent set of compensation parameters across a single video sequence. However, an inherent limitation of UCMCTrack is its assumption that targets reside on a singular ground plane. Looking forward, there is considerable potential to enhance UCMCTrack’s effectiveness by integrating it with well-established distance measures such as IoU and ReID. We believe that this will lay the groundwork for subsequent advancements in the future research of motion-based multi-object tracking. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6708 Acknowledgments This work was supported by the National Key Research and Development Program of China under Grant 2022YFC3803700, in part by the Natural Science Foundation of China under Grants 52002036, the Changsha Science and Technology Major Project under Grant kh2202002 and kh2301004, the Hunan Provincial Natural Science Foundation of China under Grant 2022JJ30611, the Scientific Research Fund of Hunan Provincial Education Department under Grant 21B0342. References Aharon, N.; Orfaig, R.; and Bobrovsky, B.-Z. 2022. BoTSORT: Robust associations multi-pedestrian tracking. arXiv preprint arXiv:2206.14651. Babaee, M.; Li, Z.; and Rigoll, G. 2019. A dual CNN–RNN for multiple people tracking. Neurocomputing, 368: 69–83. Bergmann, P.; Meinhardt, T.; and Leal-Taixe, L. 2019. Tracking Without Bells and Whistles. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Bernardin, K.; and Stiefelhagen, R. 2008. Evaluating multiple object tracking performance: the clear mot metrics. EURASIP Journal on Image and Video Processing, 2008: 1–10. Bewley, A.; Ge, Z.; Ott, L.; Ramos, F.; and Upcroft, B. 2016. Simple Online and Realtime Tracking. In 2016 IEEE International Conference on Image Processing (ICIP), 3464– 3468. Cao, J.; Pang, J.; Weng, X.; Khirodkar, R.; and Kitani, K. 2023. Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 9686–9696. Cetintas, O.; Bras´o, G.; and Leal-Taix´e, L. 
2023. Unifying Short and Long-Term Tracking With Graph Hierarchies. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 22877–22887. Choi, W. 2015. Near-online multi-target tracking with aggregated local flow descriptor. In Proceedings of the IEEE international conference on computer vision, 3029–3037. Dendorfer, P.; Rezatofighi, H.; Milan, A.; Shi, J.; Cremers, D.; Reid, I.; Roth, S.; Schindler, K.; and Leal-Taix´e, L. 2020. Mot20: A benchmark for multi object tracking in crowded scenes. arXiv preprint arXiv:2003.09003. Dendorfer, P.; Yugay, V.; Osep, A.; and Leal-Taix´e, L. 2022. Quo Vadis: Is Trajectory Forecasting the Key Towards LongTerm Multi-Object Tracking? Advances in Neural Information Processing Systems, 35: 15657–15671. Du, Y.; Wan, J.; Zhao, Y.; Zhang, B.; Tong, Z.; and Dong, J. 2021. GIAOTracker: A Comprehensive Framework for MCMOT With Global Information and Optimizing Strategies in VisDrone 2021. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2809–2819. Du, Y.; Zhao, Z.; Song, Y.; Zhao, Y.; Su, F.; Gong, T.; and Meng, H. 2023. Strongsort: Make deepsort great again. IEEE Transactions on Multimedia. Evangelidis, G. D.; and Psarakis, E. Z. 2008a. Parametric image alignment using enhanced correlation coefficient maximization. IEEE transactions on pattern analysis and machine intelligence, 30(10): 1858–1865. Evangelidis, G. D.; and Psarakis, E. Z. 2008b. Parametric Image Alignment Using Enhanced Correlation Coefficient Maximization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(10): 1858–1865. Feichtenhofer, C.; Pinz, A.; and Zisserman, A. 2017. Detect to track and track to detect. In Proceedings of the IEEE international conference on computer vision, 3038–3046. Fischler, M. A.; and Bolles, R. C. 1981. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6): 381–395. Ge, Z.; Liu, S.; Wang, F.; Li, Z.; and Sun, J. 2021. YOLOX: Exceeding YOLO Series in 2021. CoRR, abs/2107.08430. Geiger, A.; Lenz, P.; Stiller, C.; and Urtasun, R. 2013. Vision meets robotics: The kitti dataset. The International Journal of Robotics Research, 32(11): 1231–1237. Girbau, A.; Marqu´es, F.; and Satoh, S. 2022. Multiple Object Tracking from appearance by hierarchically clustering tracklets. arXiv preprint arXiv:2210.03355. Han, S.; Huang, P.; Wang, H.; Yu, E.; Liu, D.; and Pan, X. 2022. MAT: Motion-aware multi-object tracking. Neurocomputing, 476: 75–86. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Held, D.; Thrun, S.; and Savarese, S. 2016. Learning to track at 100 fps with deep regression networks. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I 14, 749–765. Springer. Hu, H.-N.; Yang, Y.-H.; Fischer, T.; Darrell, T.; Yu, F.; and Sun, M. 2022. Monocular quasi-dense 3d object tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(2): 1992–2008. Khurana, T.; Dave, A.; and Ramanan, D. 2021. Detecting Invisible People. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 3174–3184. Liu, J.; Wang, Z.; and Xu, M. 2020. DeepMTT: A deep learning maneuvering target-tracking algorithm based on bidirectional LSTM network. Information Fusion, 53: 289– 304. 
Liu, Q.; Chu, Q.; Liu, B.; and Yu, N. 2020. GSM: Graph Similarity Model for Multi-Object Tracking. In IJCAI, 530– 536. Liu, Z.; Wang, X.; Wang, C.; Liu, W.; and Bai, X. 2023. SparseTrack: Multi-Object Tracking by Performing Scene Decomposition based on Pseudo-Depth. arXiv:2306.05238. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6709 Luiten, J.; Osep, A.; Dendorfer, P.; Torr, P.; Geiger, A.; LealTaix´e, L.; and Leibe, B. 2021. Hota: A higher order metric for evaluating multi-object tracking. International journal of computer vision, 129: 548–578. Maggiolino, G.; Ahmad, A.; Cao, J.; and Kitani, K. 2023. Deep oc-sort: Multi-pedestrian tracking by adaptive reidentification. arXiv preprint arXiv:2302.11813. Marinello, N.; Proesmans, M.; and Van Gool, L. 2022. TripletTrack: 3D Object Tracking Using Triplet Embeddings and LSTM. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 4500–4510. Milan, A.; Leal-Taix´e, L.; Reid, I.; Roth, S.; and Schindler, K. 2016. MOT16: A benchmark for multi-object tracking. arXiv preprint arXiv:1603.00831. Rangesh, A.; Maheshwari, P.; Gebre, M.; Mhatre, S.; Ramezani, V.; and Trivedi, M. M. 2021. TrackMPNN: A message passing graph neural architecture for multi-object tracking. arXiv preprint arXiv:2101.04206. Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; and Savarese, S. 2019. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 658–666. Ristani, E.; Solera, F.; Zou, R.; Cucchiara, R.; and Tomasi, C. 2016. Performance measures and a data set for multitarget, multi-camera tracking. In Computer Vision–ECCV 2016 Workshops: Amsterdam, The Netherlands, October 810 and 15-16, 2016, Proceedings, Part II, 17–35. Springer. Rublee, E.; Rabaud, V.; Konolige, K.; and Bradski, G. 2011. ORB: An efficient alternative to SIFT or SURF. In 2011 International conference on computer vision, 2564–2571. Ieee. Seidenschwarz, J.; Bras´o, G.; Serrano, V. C.; Elezi, I.; and Leal-Taix´e, L. 2023. Simple Cues Lead to a Strong MultiObject Tracker. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13813– 13823. Sun, P.; Cao, J.; Jiang, Y.; Yuan, Z.; Bai, S.; Kitani, K.; and Luo, P. 2022. DanceTrack: Multi-Object Tracking in Uniform Appearance and Diverse Motion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 20993–21002. Tokmakov, P.; Li, J.; Burgard, W.; and Gaidon, A. 2021. Learning To Track With Object Permanence. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 10860–10869. Wojke, N.; Bewley, A.; and Paulus, D. 2017. Simple Online and Realtime Tracking with a Deep Association Metric. In 2017 IEEE International Conference on Image Processing (ICIP), 3645–3649. Xiao, B.; Wu, H.; and Wei, Y. 2018. Simple baselines for human pose estimation and tracking. In Proceedings of the European conference on computer vision (ECCV), 466–481. Xiao, C.; Cao, Q.; Zhong, Y.; Lan, L.; Zhang, X.; Cai, H.; Luo, Z.; and Tao, D. 2023. MotionTrack: Learning Motion Predictor for Multiple Object Tracking. arXiv:2306.02585. Yang, F.; Odashima, S.; Masui, S.; and Jiang, S. 2023. Hard to track objects with irregular motions and similar appearances? make it easier by buffering the matching space. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 4799–4808. 
Yang, J.; Ge, H.; Su, S.; and Liu, G. 2022. Transformerbased two-source motion model for multi-object tracking. Applied Intelligence, 1–13. Yeh, C.-H.; Lin, C.-Y.; Muchtar, K.; Lai, H.-E.; and Sun, M.-T. 2017. Three-pronged compensation and hysteresis thresholding for moving object detection in real-time video surveillance. IEEE Transactions on Industrial Electronics, 64(6): 4945–4955. Yu, J.; and McMillan, L. 2004. General linear cameras. In Computer Vision-ECCV 2004: 8th European Conference on Computer Vision, Prague, Czech Republic, May 11-14, 2004. Proceedings, Part II 8, 14–27. Springer. Yu, Y.; Kurnianggoro, L.; and Jo, K.-H. 2019. Moving object detection for a moving camera based on global motion compensation and adaptive background model. International Journal of Control, Automation and Systems, 17: 1866–1874. Zhang, Y.; Sun, P.; Jiang, Y.; Yu, D.; Weng, F.; Yuan, Z.; Luo, P.; Liu, W.; and Wang, X. 2022. Bytetrack: Multiobject tracking by associating every detection box. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXII, 1–21. Springer. Zhang, Y.; Wang, T.; and Zhang, X. 2023. MOTRv2: Bootstrapping End-to-End Multi-Object Tracking by Pretrained Object Detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 22056–22065. Zheng, Z.; Wang, P.; Liu, W.; Li, J.; Ye, R.; and Ren, D. 2020. Distance-IoU loss: Faster and better learning for bounding box regression. In Proceedings of the AAAI conference on artificial intelligence, volume 34, 12993–13000. Zhou, X.; Koltun, V.; and Kr¨ahenb¨uhl, P. 2020. Tracking objects as points. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IV, 474–490. Springer. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6710 | 2024 | 745 |
18,568 | DiffRAW: Leveraging Diffusion Model to Generate DSLR-Comparable Perceptual Quality sRGB from Smartphone RAW Images Mingxin Yi1, Kai Zhang1,3∗, Pei Liu2, Tanli Zuo2, Jingduo Tian2* 1Tsinghua Shenzhen International Graduate School, Tsinghua University, China 2Media Technology Lab, Huawei, China 3Research Institute of Tsinghua, Pearl River Delta [email protected], [email protected], {liupei55,zuotanli,tianjingduo}@huawei.com Abstract Deriving DSLR-quality sRGB images from smartphone RAW images has become a compelling challenge due to discernible detail disparity, color mapping instability, and spatial misalignment in RAW-sRGB data pairs. We present DiffRAW, a novel method that incorporates the diffusion model for the first time in learning RAW-to-sRGB mappings. By leveraging the diffusion model, our approach effectively learns the high-quality detail distribution of DSLR images, thereby enhancing the details of output images. Simultaneously, we use the RAW image as a diffusion condition to maintain image structure information such as contours and textures. To mitigate the interference caused by the color and spatial misalignment in training data pairs, we embed a colorposition preserving condition within DiffRAW, ensuring that the output images do not exhibit color biases and pixel shift issues. To accelerate the inference process of DiffRAW, we designed the Domain Transform Diffusion Method, an efficient diffusion process with its corresponding reverse process. The Domain Transform Diffusion Method can reduce the required inference steps for diffusion model-based image restoration/enhancement algorithms while enhancing the quality of the generated images. Through evaluations on the ZRR dataset, DiffRAW consistently demonstrates stateof-the-art performance across all perceptual quality metrics (e.g., LPIPS, FID, MUSIQ), while achieving comparable results in PSNR and SSIM. Introduction To extract natural sRGB images from RAW sensor images, a meticulously engineered image signal processing (ISP) pipeline is usually needed. This encompasses a range of manually crafted low-level vision operations such as demosaicking, white balance, color correction, denoising, gamma correction, among others (Ramanath et al. 2005). With the rapid advancement of mobile photography, smartphones have become the primary devices for photo capture, owing to their portability. However, due to hardware constraints of smartphone cameras, such as the size of the aperture and sensor, images captured by smartphones exhibit a significant quality gap compared to those taken with *Kai Zhang and Jingduo Tian are the corresponding authors of this paper. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. (a) RAW image (visualized) (b) LiteISPNet (c) DSLR sRGB image (d) DiffRAW(ours) Figure 1: Comparison of results on ZRR dataset. The images generated by our method exhibit richer detail and higher clarity, rivaling the visual quality of DSLR images. professional DSLR cameras. To address this issue, the academic community has begun to explore end-to-end ISP algorithm research based on smartphone RAW to DSLR sRGB data pairs (Ignatov, Van Gool, and Timofte 2020; Liang et al. 2021; Schwartz, Giryes, and Bronstein 2018). 
To convert smartphone RAW images into DSLR-quality sRGB images, there are three challenges: First, the inherent hardware constraints of smartphones induce a loss of detail in RAW images relative to DSLR sRGB counterparts, making the task of fully reconstructing DSLR sRGB imagery from smartphone RAW an ill-posed problem. Second, the collection of smartphone RAW images and DSLR sRGB images from different devices inevitably leads to a non-precise alignment problem within the data pairs. Third, as the data pairs are collected under varying environmental conditions The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6711 and camera parameters, the RAW and sRGB images manifest not only color disparities but also an unstable color mapping relationship. In response to these challenges, we propose the DiffRAW model, which incorporates the diffusion model for the first time in learning RAW-to-sRGB mappings. To address the significant detail disparity between RAW-sRGB data pairs, we leverage the diffusion model to learn the high-quality detail distribution of DSLR images, while using the RAW image as a diffusion condition to retain the structural information (such as contours and textures) of the generated images without relying on the RAW image for details. This combined strategy allows DiffRAW’s generated results to maintain the overall image structure of the smartphone RAW image while possessing DSLR-comparable details. For spatial misalignment and unstable color mapping relationship in the data pairs, we embed a color-position preserving condition in DiffRAW to ensure that the output images do not exhibit color biases and pixel shift issues. This condition also allows for flexible color style transfer. Moreover, to address the high iteration step issue in the diffusion model’s inference process, DiffRAW designs an efficient forward and reverse diffusion process, named the Domain Transform Diffusion Method, which reduces the required iteration steps during the inference phase while enhancing the quality of the generated images. In essence, the primary contributions of our research are as follows: • We propose a novel and efficient forward and reverse process, named the Domain Transform Diffusion Method, which reduces the iteration steps required during the inference stage while enhancing the quality of the generated images. The Domain Transform Diffusion Method is a universal acceleration approach specifically designed for diffusion model-based image restoration/enhancement algorithms, and can be flexibly transferred to other Diffusion-based image enhancement/restoration algorithms for inference acceleration. • We introduce the diffusion model into the task of learning RAW-to-sRGB mapping for the first time, proposing the DiffRAW model, achieving state-of-the-art results in perceptual quality metrics. • We use RAW images as the diffusion condition for the first time, retaining structural information like texture and contours in the generated images. • Through the specially designed color-position preserving condition, we alleviate the training interference caused by color and spatial misalignment in the training data pairs, ensuring that the model’s generated results do not produce color biases and pixel shifts. • DiffRAW possesses a color pluggable feature. Using different colors of color-position preserving condition for color information injection allows for flexible adjustment of the generated images’ color style. 
Preliminary As our approach belongs to the diffusion-based model, we will provide a brief introduction to the background of the diffusion model in this section.

Diffusion Model The diffusion model includes the forward process and the reverse process. The forward process refers to the procedure of adding noise to the image. Given a real image $y_0 \sim q(y)$, the forward process of the diffusion model accumulates noise through $T$ steps, resulting in $y_1, y_2, y_3, \cdots, y_T$. Given the variance hyperparameters of the Gaussian noise distribution in the $T$ steps of the noise process $\{\beta_t \in (0, 1)\}_{t=1}^{T}$, the definition of the noisy image sequence $y_1, y_2, y_3, \cdots, y_T$ can be given by the following formula:

$q(y_t \mid y_{t-1}) = \mathcal{N}(y_t; \sqrt{1-\beta_t}\, y_{t-1}, \beta_t I)$ (1)

Letting $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i$, and through derivation, the forward process can be expressed as:

$q(y_t \mid y_0) = \mathcal{N}(y_t; \sqrt{\bar{\alpha}_t}\, y_0, (1-\bar{\alpha}_t) I)$ (2)

In the forward process, $\bar{\alpha}_t$ is a monotonically decreasing sequence, usually pinned to $\bar{\alpha}_0 \approx 1$ and $\bar{\alpha}_T \approx 0$. Thus, as $t$ increases, $y_t$ approaches pure noise. When $T \to \infty$, $y_T$ is complete Gaussian noise. Next, we will briefly introduce the training process of the diffusion model: first obtain the input image $y_0 \sim q(y)$, randomly select $t \sim \mathrm{Uniform}(\{1, 2, 3, \cdots, T\})$, sample a random noise $\epsilon \sim \mathcal{N}(0, I)$, and from Equation 2 it is known that $y_t = \sqrt{\bar{\alpha}_t}\, y_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$. Using a U-Net (Ronneberger, Fischer, and Brox 2015) network $f_\theta(y_t, t)$ to predict noise $\epsilon$, thereby restoring the noisy image $y_t$ to the original image $y_0$. Ho et al. (Ho, Jain, and Abbeel 2020) showed that a loss function that works well in practice is a reweighted evidence lower bound (Kingma and Welling 2013):

$L(\theta) = \mathbb{E}_{y_0, t, \epsilon} \left\| f_\theta(y_t, t) - \epsilon \right\|^2$ (3)

Here, $\theta$ represents the learnable parameters of the U-Net network $f_\theta(y_t, t)$ (Ho, Jain, and Abbeel 2020). The reverse process is the denoising inference process of the diffusion model. The model progressively generates images by reversing the forward process. After the training stage is over, we take the moment of maximum noise strength $T$ as the starting point for the reverse process, sampling $y_T \sim \mathcal{N}(0, I)$ from the standard Gaussian distribution, and use $y_T$ as the generation starting point, iteratively inferring $y_{T-1}, y_{T-2}, y_{T-3}, \cdots$. Specifically, for any moment $t$ and the current moment's noisy image $y_t$, the noisy image $y_{t-1}$ at the moment $t-1$ can be inferred using the Bayesian formula (Ho, Jain, and Abbeel 2020):

$p_\theta(y_{t-1} \mid y_t) = \mathcal{N}(y_{t-1}; \mu_\theta(y_t, t), \sigma_t^2 I)$ (4)

Here, $\sigma_t$ is usually a pre-defined constant related to the variance schedule $\{\beta_t \in (0, 1)\}_{t=1}^{T}$, and $\mu_\theta(y_t, t)$ can be estimated using the trained denoising network $f_\theta(y_t, t)$ through the following formula:

$\mu_\theta(y_t, t) = \frac{1}{\sqrt{\alpha_t}} \left( y_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}} f_\theta(y_t, t) \right)$ (5)

Therefore, using the noisy image $y_t$ and the trained denoising network $f_\theta(y_t, t)$, we can estimate the distribution of $y_{t-1}$. In this way, starting from moment $T$, using $y_T \sim \mathcal{N}(0, I)$, we iteratively infer $y_{T-1}, y_{T-2}, y_{T-3}, \cdots$. In the final inference step, we directly use the predicted value from Equation 5. Thus, after $T$ iterations, we obtain $\hat{y}_0$.

Image Restoration/Enhancement Algorithm Based on the Diffusion Model For convenience, we denote $x$ and $y$ as the LQ (Low Quality) image and HQ (High Quality) image, respectively, within the context of image restoration/enhancement algorithms based on the diffusion model. Such algorithms typically construct a noisy image sequence of the HQ image in the forward process as follows: $\{y_t = \sqrt{\bar{\alpha}_t}\, y + \sqrt{1-\bar{\alpha}_t}\, \epsilon\}_{t=1}^{T}$.
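Before moving to the conditional setting, here is a compact NumPy illustration of the unconditional building blocks just reviewed (Eqs. 2, 3 and 5). The linear schedule values and the callable `f_theta` are illustrative stand-ins rather than DiffRAW's actual configuration.

```python
import numpy as np

def linear_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Variance schedule; returns per-step alphas and their cumulative product alpha_bar."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    return alphas, np.cumprod(alphas)

def q_sample(y0, t, alpha_bar, eps=None):
    """Closed-form forward sample (Eq. 2): y_t = sqrt(ab_t) * y0 + sqrt(1 - ab_t) * eps."""
    eps = np.random.randn(*y0.shape) if eps is None else eps
    return np.sqrt(alpha_bar[t]) * y0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

def denoising_loss(f_theta, y0, alpha_bar):
    """Monte-Carlo estimate of the reweighted noise-prediction objective (Eq. 3)."""
    t = np.random.randint(0, len(alpha_bar))
    yt, eps = q_sample(y0, t, alpha_bar)
    return np.mean((f_theta(yt, t) - eps) ** 2)

def posterior_mean(yt, t, f_theta, alphas, alpha_bar):
    """Reverse-step mean mu_theta (Eq. 5)."""
    return (yt - (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bar[t]) * f_theta(yt, t)) / np.sqrt(alphas[t])
```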
During training, information about the LQ image x is injected as a condition into the U-Net network fθ(yt, x, t), and the network is utilized to predict the noise ϵ, thereby facilitating learning of the unknown conditional distribution p(y|x). pθ(yt−1|yt) = N(yt−1; µθ(yt, x, t), σ2 t I) (6) µθ(yt, x, t) = 1 √αt (yt −1 −αt √1 −αt fθ(yt, x, t)) (7) In the inference process, yT ∼N(0, I) is typically used as the starting point for generation. By applying Equations 6 and 7, the target image y is inferred after T iterative steps. Methodology In this section, we begin by outlining how DiffRAW utilizes the information of smartphone RAW images. Subsequently, we elucidate our solution for addressing the misalignment of color and position in training data pairs. Lastly, we detail our novel and efficient diffusion process, along with its corresponding training methods and reverse process. RAW Condition We utilize smartphone RAW images as a diffusion condition exclusively for preserving image structural information, such as contours and textures, without dependence on the RAW image for intricate details. This approach facilitates the full exploitation of mobile phone RAW image information without allowing the detail loss in the RAW image to interfere with the model’s generated output. By constructing generated images using the structural information from RAW images and high-quality DSLR details learned through the diffusion model, this combined strategy ensures the model’s generation results are comparable with DSLRs in image details while preserving the overall content of the smartphone’s RAW images. Color-Position Preserving Condition For convenience, we denote w as the RAW image captured by the mobile phone, and y as the target sRGB image captured by a DSLR camera. Since w and y have an unstable color mapping relationship and are spatially misaligned, direct usage of the diffusion model to learn the conditional distribution p(y|w) could lead to color biases in the model’s output and result in image blurring and pixel shifting. To mitigate the interference caused by the color and spatial misalignment in training data pairs, we embed a color-position preserving condition c within DiffRAW, ensuring that the output images do not exhibit color biases and pixel shift issues. During training, ctrain is an sRGB image obtained by degrading y using a high-order degradation model (Wang et al. 2021). During testing, we use a color extraction network G(w; ΘG) to extract a naturally colored sRGB image from w as ctest, in order to inject color information into the model: ctrain = D2(y), ctest = G(w; ΘG) (8) Regarding the color extraction network G(w; ΘG), this paper adopts a pre-trained lightweight ISPNet (Zhang et al. 2021). Other pre-trained ISP networks, such as PyNet(Ignatov, Van Gool, and Timofte 2020), MWISPNet(Ignatov et al. 2020), etc., are also feasible. In fact, any network capable of extracting color information from w can serve as G(w; ΘG). ctest only functions to inject color information into the model. The generated results will maintain color consistency with ctest. We make fine-tuned adjustments to the high-order degradation model D2 in terms of parameters and degradation methods to ensure strict color consistency between ctrain and y. 
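The high-order degradation model D2 follows (Wang et al. 2021) and is not reproduced here; the sketch below is a deliberately simplified stand-in (naive downsample-upsample plus Gaussian noise) meant only to show how the color-position preserving condition c is assembled in training versus testing, with `color_net` standing for a pretrained ISP network such as LiteISPNet.

```python
import numpy as np

def simple_degrade(y, scale=2, noise_sigma=0.02, seed=None):
    """Toy stand-in for the high-order degradation D2: downsample, upsample back, add noise.
    The real pipeline additionally applies blur/compression-style stages (Wang et al. 2021)."""
    rng = np.random.default_rng(seed)
    h, w, _ = y.shape                                   # y is an HxWx3 sRGB image in [0, 1]
    low = y[::scale, ::scale]                           # naive downsampling
    up = np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)[:h, :w]  # nearest upsampling
    noisy = up + rng.normal(0.0, noise_sigma, size=up.shape)
    return np.clip(noisy, 0.0, 1.0)

def build_condition(y=None, w=None, color_net=None, training=True):
    """Color-position preserving condition c (Eq. 8): degraded DSLR image during training,
    ISP-network output G(w) at test time."""
    if training:
        return simple_degrade(y)        # c_train = D2(y): color-consistent and aligned with y
    return color_net(w)                 # c_test = G(w; Theta_G), e.g., a pretrained LiteISPNet
```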
Since ctrain and y are color-consistent and spatially aligned during training, DiffRAW can effectively learn the consistency in color and space between c and y in the conditional distribution p(y|c, w), thus ensuring that the model’s generated results maintain color consistency with ctest, without producing pixel shifts and blurs. Domain Transform Diffusion Method For ease of representation, we introduce an LQ image x, where during the training phase x is the DSLR-degraded image, and during the testing phase x is the output of the color extraction network G(w; ΘG) : xtrain = D2(y), xtest = G(w; ΘG) (9) In this way, during the inference process, xs can be utilized as an approximate estimation of ys, serving as the starting point for generation. By employing equation 6 and equation 7, the target image y can be inferred through s iterative steps, thus reducing the number of iterations. Here, the definitions of xs and ys are given as follows, where s ∈{1, 2, 3, · · · , T}: xs = √αsx + √ 1 −αsϵ (10) ys = √αsy + √ 1 −αsϵ (11) However, when using too small an iteration number s, the domain gap between xs and ys could lead to inconsistency between training and testing, consequently diminishing the enhancement in detail. To address this, we construct a new image diffusion sequence mt with x and y, denoted as the Domain Transform Diffusion Method (DTDM), where m0 = y and ms = xs. In the forward process, each diffusion step involves not only a slight addition of noise but also a minor degradation in the direction from y to x. In the reverse process, we add noise to x for s steps to obtain xs as the starting point for generation, and then iterate s steps to generate the target image. Since each iteration in the reverse process achieves not only a single denoising but also a detail enhancement in the direction from x to y, we are able to significantly reduce the number of inference steps while enhancing the details more effectively. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6713 Figure 2: Overall DiffRAW Framework. The comprehensive structure of the proposed DiffRAW is composed of a forward process and an inverse process. In the forward process, we degrade y to x stochastically and construct a sequence mt with a starting point of y and an endpoint of xs. In the inverse process, we first extract x from w, add s steps of noise to x to attain the starting point of the inverse process xs, and then use equation 23 for step-by-step iterative inference until ˆy is generated. Forward Process Suppose we aim to utilize xs as the starting point for generation in the reverse process, iterating s steps to obtain the target image y. To ensure complete training-test consistency, we accordingly construct an image sequence {mt}s t=0 that starts from y and ends at xs. In the forward process, a diffusion step from mt−1 to mt is divided into two stages: a minor degradation in the direction from y to x, followed by a slight noise addition. For ease of expression, we let mt−1 = mt−1 t−1, mt = mt t, and the intermediate image after the first minor step of degradation from mt−1 is denoted as mt t−1. 
The process from mt−1 to mt can be represented as follows: mt t−1 = mt−1 t−1 + p αt−1(mt 0 −mt−1 0 ) (12) mt t = √αtmt t−1 + √ 1 −αtϵ (13) Here, t ∈ {1, 2, 3, · · · , s}, and the image sequence {mt 0}s t=0 and constant γs are determined by the training hyperparameter s ∈{1, 2, 3, · · · , T}: mt 0 = y + √1 −αt √αt [γs(x −y)], γs = √αs √1 −αs (14) We combine equation 12 and equation 13, performing both the degradation in the direction from y to x and the noise addition to mt−1 to obtain mt. Therefore, for a given s, where s ∈{1, 2, 3, · · · , T}, the diffusion process of the image sequence {mt}s t=0 can be represented as: q(mt|mt−1, x, y) = N(mt; µdiff t , (1 −αt)I) (15) µdiff t = √αtmt−1 + √αt(mt 0 −mt−1 0 ) (16) After recursively applying Equation 15, mt’s distribution can be directly computed from x and y: q(mt|x, y) = N(mt; √αtmt 0, (1 −αt)I) (17) The above can be understood as: applying noise t times to mt 0 results in mt t = mt. Substituting equation 14 into equation 17 gives: mt = √αty + √ 1 −αt[γs(x −y) + ϵ] (18) Thus, starting from m0 = y, and after s times of diffusion, we obtain ms = √αsx + √1 −αsϵ = xs. Training Process We employed a U-Net network fθ(mt, w, c, t) for training, with the learning target being: mt −√αty √1 −αt = γs(x −y) + ϵ (19) Here, γs(x −y) characterizes the high-frequency details between x and y, and ϵ represents the random noise of mt. The loss function of the network can be expressed as: L(θ) = Ex,y,t,ϵ∥fθ(mt, w, c, t) −[γs(x −y) + ϵ]∥2 (20) Upon completion of the training, for any moment t and the current image mt, the estimates for the target image y can be obtained through equation 18 and equation 20, as: ˆy(mt, x, t) = mt −√1 −αtfθ(mt, w, c, t) √αt (21) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6714 Algorithm 1: DiffRAW Training 1: repeat 2: (w, y) ∼q(w, y) 3: x = D2(y) 4: c = x 5: t ∼Uniform({1, 2, 3, · · · , s}) 6: ϵ ∼N(0, I) 7: mt = √αty + √1 −αt[γs(x −y) + ϵ] 8: Take gradient descent step on ∇θ∥fθ(mt, w, c, t) −[γs(x −y) + ϵ]∥2 9: until converged Algorithm 2: DiffRAW Inference 1: x = G(w; ΘG) 2: c = x 3: ms ∼N(ms; √αsx, (1 −αs)I) 4: for t = s, · · · , 1 do 5: z ∼N(0, I) if t > 1,else z = 0 6: mt−1 = [ √1−αt √αt λt − 1−αt √αt √1−αt ]fθ(mt, w, c, t) +[ 1 √αt − 1 √αt λt]mt +λtx + σtz 7: end for 8: return m0 Reverse Process In the reverse process, we utilize ms = xs = √αsx+√1 −αsϵ as the starting point for generation, progressively iterating to infer ms−1, ms−2, ms−3, · · · . During each iteration, a denoising operation is performed, followed by a domain transform from the x to y direction. After s iterations, we arrive at m0 = y. Specifically, for any time t and the current image mt, we can use the Bayes’ theorem to simultaneously achieve the denoising of mt and the domain transform from x to y direction, and directly infer mt−1 from mt: q(mt−1|mt, x, y) = q(mt|mt−1, x, y)q(mt−1|x, y) q(mt|x, y) (22) By substituting equation 15, equation 17 and equation 21 into equation 22, we can obtain: pθ(mt−1|mt, x) = N(mt−1; ˆµbayes θ (mt, x), σ2 t I) (23) ˆµbayes θ (mt, x) = [ √1 −αt √αt λt − 1 −αt √αt √1 −αt ]fθ(mt, w, c, t) + [ 1 √αt − 1 √αt λt]mt + λtx (24) λt = [ p 1 −αt−1(1 −√αt √1 −αt−1 √1 −αt )]γs (25) The training and sample process of DTDM are shown in algorithm1 and algorithm2. In previous diffusion-based image restoration/enhancement algorithms, if we use xs as the starting point instead of ys during inference for generation, the model merely denoises xs at each step of the generation process. 
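Before contrasting the two inference behaviours, the following is a minimal PyTorch sketch of one DTDM training step (Algorithm 1), built from the closed-form construction of m_t in Equation 18 and the loss target gamma_s(x - y) + epsilon of Equation 20. The linear beta schedule and the call signature of `f_theta` are assumptions for illustration; s = 100 and T = 2000 follow the settings reported later.

```python
# Sketch of one DTDM training step (Algorithm 1); schedule and f_theta are placeholders.
import torch

T, s = 2000, 100
betas = torch.linspace(1e-4, 2e-2, T)                  # assumed linear schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)         # \bar{alpha}_t, stored 0-based
gamma_s = alpha_bars[s - 1].sqrt() / (1.0 - alpha_bars[s - 1]).sqrt()   # Eq. 14

def dtdm_training_loss(f_theta, y, x, w, c):
    """Evaluate the DTDM loss of Eq. 20 for one mini-batch (y = HQ, x = degraded LQ)."""
    b = y.shape[0]
    t = torch.randint(1, s + 1, (b,))                  # t ~ Uniform({1, ..., s})
    ab = alpha_bars[t - 1].view(-1, 1, 1, 1)
    eps = torch.randn_like(y)
    target = gamma_s * (x - y) + eps                   # detail residual plus noise
    m_t = ab.sqrt() * y + (1.0 - ab).sqrt() * target   # closed form of Eq. 18
    return ((f_theta(m_t, w, c, t) - target) ** 2).mean()
```

Note that at t = s the constructed m_s reduces to sqrt(abar_s)*x + sqrt(1 - abar_s)*eps = x_s, which is exactly the generation starting point used in Algorithm 2.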
However, DTDM not only denoises xs at each step of the generation process but also performs a domain transfer from x to y at each step, allowing DTDM to transform xs into y with fewer iterations while also enhancing the quality of the generated images. Experiments Implementation Details Datasets We conduct experiments on Zurich RAW to RGB (ZRR) dataset(Ignatov, Van Gool, and Timofte 2020). In the ZRR dataset, 20 thousand image pairs are collected and roughly aligned via SIFT keypoints (Lowe 2004) and the RANSAC algorithm (Vedaldi and Fulkerson 2010), and the cropped patches with cross-correlation < 0.9 are discarded, resulting in 48,043 RAW-sRGB pairs of size 448 ×448. We follow the official division to train our DiffRAW with 46.8k pairs, and report the quantitative results on the remaining 1.2k pairs. Training Details We train our DiffRAW model for 1M training steps with a batch size of 32. Consistent with (Ho, Jain, and Abbeel 2020), we use the Adam optimizer with a linear warmup schedule over 10k training steps, followed by a fixed learning rate of 1e-4. The training hyperparameters T and s, which determine the noise scheduling and the distribution of the DTDM image sequence, are respectively set to 2000 and 100. We did not conduct more engineering attempts on the training hyperparameters T and s, and only set s = 100, T = 2000 to verify the effects of inference acceleration and improved image quality by DTDM. If more training hyperparameter trials are conducted on s and T, better experimental metric results might be achieved. Evaluation Metrics For benchmarks with paired data, we employ various perceptual metrics including LPIPS (Zhang et al. 2018), FID (Heusel et al. 2017), MUSIQ (Ke et al. 2021) and CLIPIQA+ (Wang, Chan, and Loy 2023) to evaluate the perceptual quality of generated images. PSNR (Hore and Ziou 2010), SSIM (Zhou 2004), NIQE (Mittal, Soundararajan, and Bovik 2012) and ILNIQE (Zhang, Zhang, and Bovik 2015) scores are also reported for reference. It should be noted specifically, in table 1, MUSIQ-K refers to ’musiq-koniq’, MUSIQ-S refers to ’musiq-spaq’, and CLIPIQA+RN50 refers to ’clipiqa+ rn50 512’. Testing Details Reducing the number of iterations during the inference process appropriately will lower the performance of the generated results on no-reference metrics, but enhance their performance on full-reference metrics. Therefore, we balance these two types of metrics and set the number of denoising steps and iteration steps during the inference process to 93, achieving the metric results shown in tables 1 and 2. If the number of denoising steps and iteration steps during the inference process is set to s = 100, the performance of the generated results on no-reference metrics will be better, which is also consistent with the human eye’s observation of image details and image quality. 
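As a concrete reference for the full-reference numbers reported below, PSNR can be computed as in the small helper here; SSIM, LPIPS, FID and the no-reference scores come from their respective reference implementations, so this snippet is purely illustrative.

```python
# Illustrative PSNR helper: 10 * log10(MAX^2 / MSE) between a result and its reference.
import numpy as np

def psnr(img, ref, max_val=255.0):
    img = img.astype(np.float64)
    ref = ref.astype(np.float64)
    mse = np.mean((img - ref) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```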
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6715 Method MUSIQ-K↑ MUSIQ-S↑ CLIPIQA+↑ CLIPIQA+RN50↑ NIQE↓ ILNIQE↓ PyNet 43.56 46.4990 0.5353 0.3196 7.6856 50.55 MW-ISPNet 43.34 45.5973 0.5230 0.3097 7.9001 55.19 LiteISPNet 48.52 50.4763 0.5377 0.3063 7.4839 53.50 DiffRAW (ours) 56.67 57.3660 0.5596 0.3739 7.0072 42.65 DSLR(Reference) 56.62 57.4589 0.5622 0.3895 7.0181 44.13 Table 1: No Reference Metric Experimental Results on ZRR Dataset Method Original GT Align GT with result LPIPS↓ FID↓ PSNR↑ SSIM↑ LPIPS↓ FID↓ PSNR↑ SSIM↑ PyNet 0.193 18.69 21.19 0.7471 0.152 17.11 22.96 0.8510 MW-ISPNet 0.213 20.41 21.42 0.7544 0.164 18.48 23.31 0.8578 LiteISPNet 0.187 17.04 21.55 0.7487 0.133 15.30 23.87 0.8737 DiffRAW (ours) 0.145 15.10 21.31 0.7433 0.118 14.61 23.54 0.8682 Table 2: Full Reference Metric Experimental Results on ZRR Dataset (a) RAW image (visualized) (b) Without condition (c) w condition (d) w + c condition (e) image x (f) DSLR sRGB Figure 3: Fig3(a) is the result of visualizing the RAW image using a simple ISP pipeline. Fig3(b) represents the generated result without condition. Fig3(c) represents the generated result using condition w. Fig3(d) represents the result using both w and c as conditions. Fig3(e) illustrates the image x utilized in these experiments. Fig3(f) represents the DSLR sRGB image. (a) DDPM 1500 steps (b) DDPM 500 steps (c) DDPM 100 steps (d) DTDM 100 steps Figure 4: A comparative analysis of the experimental results between DDPM and DTDM. Please zoom in for better observation. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6716 Experimental Results on ZRR Dataset To evaluate the effectiveness of the DiffRAW, we compare our model with three state-of-the-art methods, i.e., PyNet(Ignatov, Van Gool, and Timofte 2020), MW-ISPNet (Ignatov et al. 2020) and LiteISPNet(Zhang et al. 2021). As shown in table 1 and table 2, DiffRAW exceeds the competing methods on all perceptual quality metrics, while achieving comparable results in PSNR and SSIM. Ablation Study Diffusion Condition We introduced two diffusion conditions w and c to achieve exact control over the generated results. The w condition can stably generate image structure information, ensuring that the generated results maintain the original image’s contours and textures. And the c condition can control over the color of the generated images while ensuring that there are no pixel shifts and blurring. As illustrated in figures 3, we initiate the generation process by subjecting x to an eight-fold downsampling degradation followed by the addition of noise over 1500 steps, which serves as our starting point. It can be observed that with the incorporation of w, the contours and textures of the image are preserved. Furthermore, after the introduction of c, the image no longer exhibits any color bias or blurry shifts. For a more detailed demonstration of the individual functionalities of the w and c conditions, such as the flexible manipulation of the generated results’ color through the infusion of the c condition with various color representations, please refer to the supplementary material. Diffusion Process and Inference Process We experimented with two types of diffusion processes for network training as described in equations 1 and 15. We use DDPM to represent the existing method, with its diffusion and the reverse process described by equations 1 and 4. 
And the DTDM, our improved method, is characterized by its diffusion and the reverse process through Equations 15 and 23. As shown in figure 4, an increase in the noise addition and iterative steps during the inference process leads to a corresponding enhancement in the detail of the generated results. Notably, our enhanced DTDM diffusion and inference process is capable of using only 100 steps of iteration to achieve detail enhancement surpassing that of 1500 iterative steps in DDPM. Related Works Deep Learning-based ISP Networks To overcome the hardware limitations of mobile cameras, a significant number of attempts have been made in recent years towards the deep learning-based ISP methods. Ignatov et al. (Ignatov, Van Gool, and Timofte 2020) harnessed a RAW-sRGB dataset drawn from Huawei P20 smartphone and Canon 5D Mark IV DSLR, devising an end-to-end ISP network to supplant the conventional built-in ISP pathway of the smartphone. AWNet (Dai et al. 2020) incorporated the global context block (Cao et al. 2019) to mitigate the impact of image misalignment. Zhang et al. (Zhang et al. 2019) conceived a contextual bilateral (CoBi) loss, facilitating the discovery of the best matching patch for supervision and partly ameliorating data misalignment. However, this approach did not fully resolve the spatial displacement stemming from depth variations between objects. LiteISPNet (Zhang et al. 2021) engineered a color-shift-resistant GCM module to contend with inconsistencies in color and pixel position shifts within data pairs, introducing a lightflow alignment module to synchronize the DSLR sRGB image with the mobile coordinate system. This alignment effectively attenuated the blurring and shifting complications in the output image, resulting from the misalignment in training data pairs. Further, Tripathi et al. (Shekhar Tripathi et al. 2022) tackled the pronounced color disparity between mobile RAW images and DSLR images through the utilization of a color prediction network grounded in the Perceiver architecture (Jaegle et al. 2021). Diffusion Model Over recent years, the diffusion model, distinguished by its superior ability for intricate detail generation, has outperformed Generative Adversarial Networks (GANs), positioning itself as the state-of-the-art methodology within the realm of image generation and editing. Deriving inspiration from non-equilibrium statistical physics, Sohl-Dickstein et al. (Sohl-Dickstein et al. 2015) were the pioneers in propounding the diffusion model as a tool to fit intricate distributions. Subsequently, Ho et al. (Ho, Jain, and Abbeel 2020) established a novel nexus between the diffusion model and denoising score matching. In a subsequent development, Song et al. (Song et al. 2020) advanced a unified framework to articulate the diffusion model through the lens of stochastic differential equations (SDEs). Several concurrent works have also leveraged analogous diffusion processes to DiffRAW. Although motivated by similar objectives, these efforts have embraced distinct mathematical formulations to realize this ambition. For instance, Delbracio and Milanfar (Delbracio and Milanfar 2023) employed Inversion by Direct Iteration (InDI) to model the process, while Luo et al. (Luo et al. 2023) and Liu et al. (Liu et al. 2023) sought to express it as an SDE. Conclusion In this work, we introduced DiffRAW, a novel method that adeptly addresses the challenges of converting smartphone RAW images to DSLR-quality sRGB images. 
DiffRAW’s design incorporates the use of RAW images to maintain structural details and color-position preserving conditions to control color, coupled with an efficient diffusion process to enhance output quality and reduce inference steps. Evaluated on the ZRR dataset, DiffRAW consistently outperforms existing solutions in perceptual quality metrics, while achieving comparable results in PSNR and SSIM. Notably, DiffRAW marks the first achievement in reaching a level comparable to DSLR images on no-reference IQA metrics. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6717 Acknowledgments This work was supported by the Project from Science and Technology Innovation Committee of Shenzhen (KCXST20221021111201002) and the Key-Area Research and Development Program of Guangdong Province (2020B0909050003) References Cao, Y.; Xu, J.; Lin, S.; Wei, F.; and Hu, H. 2019. Gcnet: Non-local networks meet squeeze-excitation networks and beyond. In Proceedings of the IEEE/CVF international conference on computer vision workshops, 0–0. Dai, L.; Liu, X.; Li, C.; and Chen, J. 2020. Awnet: Attentive wavelet network for image isp. In Computer Vision–ECCV 2020 Workshops: Glasgow, UK, August 23–28, 2020, Proceedings, Part III 16, 185–201. Springer. Delbracio, M.; and Milanfar, P. 2023. Inversion by direct iteration: An alternative to denoising diffusion for image restoration. arXiv preprint arXiv:2303.11435. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; and Hochreiter, S. 2017. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33: 6840–6851. Hore, A.; and Ziou, D. 2010. Image quality metrics: PSNR vs. SSIM. In 2010 20th international conference on pattern recognition, 2366–2369. IEEE. Ignatov, A.; Timofte, R.; Zhang, Z.; Liu, M.; Wang, H.; Zuo, W.; Zhang, J.; Zhang, R.; Peng, Z.; Ren, S.; et al. 2020. Aim 2020 challenge on learned image signal processing pipeline. In Computer Vision–ECCV 2020 Workshops: Glasgow, UK, August 23–28, 2020, Proceedings, Part III 16, 152–170. Springer. Ignatov, A.; Van Gool, L.; and Timofte, R. 2020. Replacing mobile camera isp with a single deep learning model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 536–537. Jaegle, A.; Gimeno, F.; Brock, A.; Vinyals, O.; Zisserman, A.; and Carreira, J. 2021. Perceiver: General perception with iterative attention. In International conference on machine learning, 4651–4664. PMLR. Ke, J.; Wang, Q.; Wang, Y.; Milanfar, P.; and Yang, F. 2021. Musiq: Multi-scale image quality transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5148–5157. Kingma, D. P.; and Welling, M. 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Liang, Z.; Cai, J.; Cao, Z.; and Zhang, L. 2021. Cameranet: A two-stage framework for effective camera isp learning. IEEE Transactions on Image Processing, 30: 2248–2262. Liu, G.-H.; Vahdat, A.; Huang, D.-A.; Theodorou, E. A.; Nie, W.; and Anandkumar, A. 2023. I2SB: Image-to-Image Schr¨odinger Bridge. arXiv. Lowe, D. G. 2004. Distinctive image features from scaleinvariant keypoints. International journal of computer vision, 60: 91–110. Luo, Z.; Gustafsson, F. K.; Zhao, Z.; Sj¨olund, J.; and Sch¨on, T. B. 2023. Image restoration with mean-reverting stochastic differential equations. 
arXiv preprint arXiv:2301.11699. Mittal, A.; Soundararajan, R.; and Bovik, A. C. 2012. Making a “completely blind” image quality analyzer. IEEE Signal processing letters, 20(3): 209–212. Ramanath, R.; Snyder, W. E.; Yoo, Y.; and Drew, M. S. 2005. Color image processing pipeline. IEEE Signal processing magazine, 22(1): 34–43. Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, 234–241. Springer. Schwartz, E.; Giryes, R.; and Bronstein, A. M. 2018. Deepisp: Toward learning an end-to-end image processing pipeline. IEEE Transactions on Image Processing, 28(2): 912–923. Shekhar Tripathi, A.; Danelljan, M.; Shukla, S.; Timofte, R.; and Van Gool, L. 2022. Transform your smartphone into a DSLR camera: Learning the ISP in the wild. In European Conference on Computer Vision, 625–641. Springer. Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; and Ganguli, S. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning, 2256–2265. PMLR. Song, Y.; Sohl-Dickstein, J.; Kingma, D. P.; Kumar, A.; Ermon, S.; and Poole, B. 2020. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456. Vedaldi, A.; and Fulkerson, B. 2010. VLFeat: An open and portable library of computer vision algorithms. In Proceedings of the 18th ACM international conference on Multimedia, 1469–1472. Wang, J.; Chan, K. C.; and Loy, C. C. 2023. Exploring clip for assessing the look and feel of images. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 2555–2563. Wang, X.; Xie, L.; Dong, C.; and Shan, Y. 2021. Realesrgan: Training real-world blind super-resolution with pure synthetic data. In Proceedings of the IEEE/CVF international conference on computer vision, 1905–1914. Zhang, L.; Zhang, L.; and Bovik, A. C. 2015. A featureenriched completely blind image quality evaluator. IEEE Transactions on Image Processing, 24(8): 2579–2591. Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, 586–595. Zhang, X.; Chen, Q.; Ng, R.; and Koltun, V. 2019. Zoom to learn, learn to zoom. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3762–3770. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6718 Zhang, Z.; Wang, H.; Liu, M.; Wang, R.; Zhang, J.; and Zuo, W. 2021. Learning raw-to-srgb mappings with inaccurately aligned supervision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 4348–4358. Zhou, W. 2004. Image quality assessment: from error measurement to structural similarity. IEEE transactions on image processing, 13: 600–613. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6719 | 2024 | 746 |
18,569 | Efficient Look-Up Table from Expanded Convolutional Network for Accelerating Image Super-resolution Kai Yin, Jie Shen* University of Electronic Science and Technology of China [email protected], [email protected] Abstract The look-up table (LUT) has recently shown its practicability and effectiveness in super-resolution (SR) tasks due to its low computational cost and hardware independence. However, most existing methods focus on improving the performance of SR, neglecting the demand for high-speed SR on low-computational edge devices. In this paper, we propose an efficient expanded convolution (EC) layer, which expands the output size of regular convolution to enlarge the receptive field (RF) indirectly. It can increase the size of the LUT corresponding to the network linearly with the increase of RF. Additionally, after introducing the EC, multiple LUTs are merged into one LUT, achieving faster running speed while maintaining SR performance. More specifically, we expand the coverage of the convolutional output so that the output at the current position covers the target position and its surroundings, forming an overlapping sliding window at the output end. We sum up the overlapping parts of the sliding window as the output, thereby achieving the effect of enlarging the RF size. Moreover, by expanding the numerical range of the accumulated results and rescaling them to [0, 255], the method can mitigate the error caused by quantization output. Experiments indicate that the proposed method performs better than the baseline method and is faster than other LUTbased SR methods. Introduction Single image super-resolution (SISR) aims to recover highresolution (HR) images with high-frequency image details from a single low-resolution (LR) image, which has extensive applications in medical imaging, video surveillance, satellite and aerial imaging. In the early days, interpolationbased methods were common, such as nearest neighbor interpolation, bilinear interpolation, and bicubic (Keys 1981) interpolation. These methods output the missing pixels based on the weighted average of nearby pixels at the target position. Interpolation methods have the advantages of simplicity and fast processing speed. However, interpolation methods only consider the positional information without the arrangement patterns of different pixel values, so they cannot effectively restore high-frequency signals. *Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 101 102 103 104 Runtime(ms) 25 26 27 28 29 PSNR(dB) Nearest Bilinear Bicubic SR-LUT Ours SP-LUT Mu-LUT FSRCNN CARN-M RRDB Figure 1: Comparison of PSNR on Set14 benchmark dataset for ×4 SR and runtime is measured by generating 1280×720 image. We compare our method with common interpolation based methods (square), prior LUT-based methods (circle) and deep learning based methods (diamond). Compared to previous LUT-based methods, our method show better or comparable PSNR quality while achieving faster runtime. Deep neural networks (DNNs) have strong fitting and nonlinear mapping capabilities. With the development of deep learning, research on using deep neural networks to solve SR problems is gradually increasing (Dong et al. 2014; Dong, Loy, and Tang 2016; Shi et al. 2016). Through its powerful nonlinear mapping ability, depth models can learn different mapping relationships for different pixel arrangements in LR images and even predict the lost high-frequency information. 
Therefore, deep learning methods are significantly superior to interpolation methods in restoring image details. Despite the great success of deep models, as the depth and width of the model increase, although the super-resolution effect is improved, the computational load and storage space occupied by the model are also sharply increasing. Moreover, some attention-based methods also require a large amount of memory consumption. Without high-performance hardware support, DNN-based methods are challenging to apply in practice. A look-up table (LUT) is commonly used in low-level vision tasks by storing the outputs of complex computations in The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6720 the LUT and retrieving these values directly without recomputing them when needed. Recently, some studies have applied LUT-based methods to SR tasks (Jo and Kim 2021; Ma et al. 2022; Li et al. 2022). These methods store the mapping relationship between input and output pixel values in LUTs and retrieve them during inference, trading the time cost of accessing memory for the time cost of complex computations, thereby accelerating the SR inference process. Since the inference time cost for these methods mainly comes from memory access, they could achieve real-time SR without specific hardware acceleration, significantly improving the practicality of SR on edge devices. However, existing LUT-based SR methods still have some drawbacks. When creating LUTs, all inputs need to be traversed, which makes the size of a single LUT grow exponentially with the increase of input pixels. Considering the limitation of memory size, the number of input pixels must be controlled to reduce the LUT size, further limiting the receptive field size. Therefore, the performance of LUT-based SR methods is usually limited. Although recent studies have enlarged the receptive field by using multiple LUTs (Ma et al. 2022; Li et al. 2022), the total size of the corresponding LUTs grows linearly with their indexing capability, thus improving SR performance while ensuring practicality. Nevertheless, using multiple LUTs for indexing increases the memory access time and makes these methods lose their advantage in inference speed. To address these issues, in this paper, we propose an efficient expanded convolution (EC) method that indirectly enlarges the RF by expanding the output size of the convolution. This operation helps to reduce the LUT size comparing to increasing the size from the input end and is more friendly for the LUT query operation. Therefore, the proposed method has a great advantage in inference speed. Specifically, we follow the SR-LUT approach overall, but with two differences. On the one hand, we add an EC layer at the end of the network. The output of the EC contains the output at the target position and the outputs around the target position. Compared with expanding the input, expanding the output also achieves the effect of enlarging the RF, and makes the LUT grow linearly. Moreover, since only a single query is required in the inference phase, our method is faster than using multiple LUTs. On the other hand, since discretizing the output with 8bit storage introduces errors, we expand the numerical range of the sum of index results and rescale the result to [0, 255] to obtain the final result, which effectively improves the error introduced by quantization. 
Since there is no floating-point operation in the whole inference process and only a single LUT is used, our method has very fast inference speed. More importantly, our method has great flexibility and can be easily combined with other methods. For example, using multiple LUTs to sacrifice more inference time for better inference quality. In summary, our contributions can be summarized as follows: • We propose a novel expanded convolution (EC) method that enlarges the RF by expanding the output size of the convolution. The EC method is very friendly for the LUT, as it can merge multiple look-ups into one and give the LUT more advantage in speed of inference. • We propose a method that is simple and effective to mitigate the error caused by quantizing the result to 8 bits during the LUT recording process by using a scaling factor to multiply the accumulated result.The computational cost of this method is negligible. • Experimental results show that our method has a significant improvement in inference speed compared with other LUT-based SR methods, while having comparable performance, demonstrating its practicality. Related Work Single Image Super-Resolution Deep neural networks have powerful fitting capabilities. With the development of deep learning, models based on deep learning have made significant progress in SISR tasks. Dong et al. (Dong et al. 2014) first successfully attempted to use a three-layer convolutional neural network for superresolution, SRCNN, which is the groundbreaking work of SR based on deep learning. SRCNN requires performing bicubic interpolation on the image to up-sampling to the target size before using CNNs to refine the results. In contrast, FSRCNN (Dong, Loy, and Tang 2016) eliminates upsampling at the input end and up-scales at the output end, improving computational efficiency. Shi et al. (Shi et al. 2016) proposed the ESPCNN, which introduced a sub-pixel convolutional layer and achieved real-time super-resolution by zooming in on images at the end of the network. Ledig et al. (Ledig et al. 2017) introduced adversarial learning, which can add more details to images, making them more natural and realistic. With the further development of deep learning, it has become a consensus that deeper networks have stronger feature representation and fitting capabilities. By increasing depth and using residual connections, VDSR (Kim, Lee, and Lee 2016a) improves performance. Lim et al. (Lim et al. 2017) further increased the depth and width of the network and proposed a model called EDSR. Zhang et al. (Zhang et al. 2018a) further improved SR performance by introducing channel attention. Kim et al. (Kim, Lee, and Lee 2016b) first used recursive methods to increase network depth, known as DRCN. Tai et al. (Tai, Yang, and Liu 2017) further proposed DRRN by incorporating local residual learning into DRCN. Ahn et al. (Ahn, Kang, and Sohn 2018) proposed a cascaded residual network, which integrates features from different layers using a cascaded framework at both local and global levels, achieving high SR performance with fewer parameters. Later, Ledig et al. (Ledig et al. 2017) proposed SRResNet, which uses dense connections for improvement. These methods using residual learning have achieved significant improvements compared to traditional SR methods, demonstrating the effectiveness of residual networks. In addition to improving super-resolution performance, reducing model parameters and accelerating running speed is another worthwhile direction. 
Using a pyramid framework, the LapSRN proposed by Lai et al. (Lai et al. 2018) can effectively perform SR at extremely low resolutions. Later, Muqeet et al. (Muqeet et al. 2020) proposed stacked multi-attention blocks, effectively compensating for the reduced parameter count. However, these methods use very deep networks and consume a large amount of computing resources during execution.

[Figure 2: The overview of EC-LUT, depicted for ×2 SR (r = 2). (a) Training: a small deep SR network with an expanded convolution layer; the blue dashed part indicates the rotation trick, the green box indicates the output of the network, and the green block indicates the output of the SR network without expanded convolution. (b) Transferring: all inputs of the trained network are traversed and the corresponding outputs are stored, resulting in a LUT that is equivalent to the network. (c) Testing: in the testing phase, the LUT replaces the network.]

Look-up Table
Space-time tradeoff is a common strategy, and the look-up table is one such method: it records in advance the output corresponding to every possible input and replaces complex, lengthy computation with simple, fast memory accesses, thus significantly improving running efficiency. LUTs have many applications, such as color space conversion (Monga et al. 2016), numerical computation (Rizvi et al. 1995; Chin-Chen et al. 2000), and video coding (Lee, Lee, and Park 2010; Tsang, Chan, and Siu 2013). In addition, LUTs are commonly used in camera imaging pipelines (Karaimer and Brown 2016). After the rise of deep learning, many studies have combined LUT methods with deep learning for low-level vision tasks (Wang et al. 2021a,b; Zeng et al. 2022). Zeng et al. (Zeng et al. 2022) first proposed an adaptive 3D look-up table for image enhancement, and Wang et al. (Wang et al. 2021b) further proposed a learnable spatially aware 3D look-up table. Jo et al. (Jo and Kim 2021) first proposed a LUT-based SR method by training an SR network and then transferring the mapping relationship of the network into a LUT; combined with the rotational ensemble and interpolation techniques, they developed SR-LUT. Li et al. (Li et al. 2022) designed a new indexing scheme based on SR-LUT that uses multiple LUTs to enlarge the receptive field, improving SR performance. Ma et al. (Ma et al. 2022) proposed SP-LUT, which also uses multiple cascaded LUTs to enlarge the receptive field and employs two parallel branches, operating on the MSBs and LSBs, to compensate for the accuracy loss caused by discretization, thereby avoiding interpolation operations.

Method
Preliminary Given an LR image ILR, the goal of the SR task is to generate an HR image ISR that is close to the ground-truth image IHR. For each pixel (i, j) on the SR image, we assume that it is mapped from the pixels around (i′, j′) on the input image.
Let Φ be the mapping function for an SR network. The input of Φ is all the pixels covered by the RF of the SR network. LUT-based SR methods first find an Φ, then traverse all possible inputs of Φ to generate corresponding outputs. These input data and corresponding output data form keyvalue pairs. We record these data in LUT, where the key is implicit in the index, and the output can be obtained by querying LUT with the input as the index. The inference process is to read the pixels covered by RF at position (i′, j′) on LR in turn, combine them as an index, retrieve the result from LUT, and write it to the corresponding position (i, j) on SR. Generally speaking, the larger the RF, the stronger the mapping ability of Φ, and the better the SR performance. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6722 Input Feature maps k channels Space Shift Output Hidden layers Sub-pixel convolution layer Expanded convolution layer Figure 3: The proposed expanded convolutional network. The penultimate layer uses sub-pixel convolution to upscale the features, and then indirectly implements expanded convolution by using space shift and summation operations on the obtained k feature maps. However, as mentioned before, LUT records all input-output pairs. As RF expands, the domain of Φ expands exponentially, which means that the size of LUT will grow dramatically. As discussed by (Jo and Kim 2021), suppose LUT stores 8bit input-output, magnification factor r = 4, when RF covers two pixels, the size of LUT is 1MB. When RF covers 3, 4, and 5 pixels, the size of LUT is 256MB, 64GB, and 16TB, respectively. The large size of LUT not only consumes more storage space but also may increase the time of retrieving LUT. Therefore, in order to control the size of LUT, some tricks are needed, such as rotational ensemble, processing each channel separately, sampling LUT, etc. Overview Our method is outlined in Figure 2. Overall, we follow the practice of SR-LUT (Jo and Kim 2021), using the rotational ensemble trick to train an SR network and then save all the output values of the network in LUT. According to the previous analysis, SR performance is limited by RF size, and RF size affects LUT size. Therefore, we suggest expanding the output, indirectly increasing the RF size, and making the LUT size grow linearly. Compared with SR-LUT, we additionally output the intermediate results of the target position (i, j) and its surrounding eight regions. These nine regions form a huge sliding window at the output end and have overlapping areas when sliding. This process is similar to the input end window sliding of convolutional networks, except that the former is writing data, and the latter is reading data. For the overlapping areas, we sum up the results as the final output. Eventually, our LUT size is nine times that of SR-LUT (for the V model of SR-LUT). These output values are stored adjacently in LUT and can be obtained by one indexing for all regions, which can speed up the indexing compared with using multiple LUTs. By such operation, EC-LUT expands RF from 5 pixels to 21 pixels, thus achieving better performance. Expand LUT Convolution operation can map an input of size ks × ks to an output of size 1 × 1. As pointed out by Dong et al. (Dong et al. 2014), there is no efficient implementation of convolutional layers with an output size larger than the input size. Subsequently, Shi et al. (Shi et al. 
2016) proposed sub-pixel convolution, which achieves the magnification operation of SR by rearranging the elements in the tensor. Inspired by this, we propose expanded convolution, which also achieves the magnification operation at the output end of the convolution by rearranging the elements in the tensor. Compared with the original sub-pixel convolution, we go further and enlarge the output size of a single convolution operation to r × k × r × k. As the convolution kernel slides on the input, a sliding window with overlap is formed at the output end. The output of the convolution is obtained by summing up the overlapping parts. This process can be expressed as follows: X(i, j) = Φθ(F(i, j)), ISR(i, j) = X x∈χ x(i, j) (1) where X(i, j) denotes the sliding window obtained by the convolution operation at the position (i′, j′). Φθ denotes the feature mapping function, i.e. the expanded convolution. F(i, j) is the input feature at the position (i′, j′). I(i, j) represents the output of the EC, and χ denotes the sliding window sets that cover the position (i, j). In practice, we do not actually redesign the operator but achieve the goal by padding the tensor. Figure 3 shows the specific implementation process. We output r×r×9 channels at the last layer of the network and then rearrange the tensor to obtain a tensor of shape 9×rH ×rW. Next, we pad the separated nine tensors in different directions so that their RFs will shift in different directions. After adding up the nine feature maps, the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6723 270 90 0 Look-up Look-up Look-up Look-up 180 Addition Feature Figure 4: Visualization of the RF relative to the red-marked feature after each operation. The rotation operation maps different directions using the same LUT. Expanded convolution enlarges the RF by one cycle using a single operation. resulting feature map has a larger RF. Note that this process is similar to the working principle of the aggregation module of SP-LUT. The only difference is that SP-LUT uses horizontal aggregation module and vertical aggregation module separately, while our method generalizes to any case. Figure 4 visualizes the change process of the RF. During inference, two pixels are read from the input image, then the LUT is queried, and the retrieved data is placed on the output matrix according to the position and added in place. After one query, the RF changes from 1×2 to 3×4. Further, using the rotation trick, the RF can be enlarged to 21 pixels. Since the above operations can be completed at the position (i, j) simultaneously, our method has a faster inference speed. Rescale Output When training an SR network, 32-bit or 64-bit floating-point numbers are usually used, which have high precision. During inference, the LUT replaces the SR network to perform the mapping. When we transfer the mapping relationship of SR network to LUT, in order to further reduce the LUT size, we adjust the output to 8bit, with a numerical range of [−128, 127]. If LUT stores the final output result, then 8bit just matches the single-channel bit width of a general image. However, since we use the rotational ensemble and expanded output methods, LUT actually stores only an intermediate result, and the final result still needs to go through multiple addition operations. The inference process can be expressed as follows: X(i, j) = LUT ILR(i′, j′) (2) where ILR(i′, j′) denotes pixels on the input image used for querying. 
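For concreteness, the NumPy sketch below shows how a single query with expanded outputs can be accumulated into the SR canvas (the 0° pass only; the rotational ensemble would repeat this for the rotated views through the same table). The dense storage layout, a 256 x 256 table of r*r*9 int8 values per pixel pair, and the edge padding are assumptions made for illustration; for r = 4 such a layout occupies 256*256*(4*4*9) bytes, roughly 9 MB, which is consistent with the size reported later in Table 1.

```python
# Sketch of single-LUT inference with expanded outputs and overlap-add accumulation.
import numpy as np

def eclut_infer_accum(lr, lut, r=2):
    """Accumulate expanded LUT outputs for one 0-degree pass (no rotation trick).

    lr  : (H, W) uint8 low-resolution image
    lut : (256, 256, 3, 3, r, r) int8 table; lut[i0, i1, dy, dx] is the r x r cell that
          the pixel pair (i0, i1) contributes to the neighbor at offset (dy-1, dx-1).
    """
    h, w = lr.shape
    acc = np.zeros(((h + 2) * r, (w + 2) * r), dtype=np.int32)   # padded HR canvas
    right = np.pad(lr, ((0, 0), (0, 1)), mode="edge")[:, 1:]     # right neighbor of each pixel
    vals = lut[lr, right]                                        # (H, W, 3, 3, r, r)
    for dy in range(3):
        for dx in range(3):
            cell = vals[:, :, dy, dx]                            # (H, W, r, r)
            block = cell.transpose(0, 2, 1, 3).reshape(h * r, w * r)
            acc[dy * r:dy * r + h * r, dx * r:dx * r + w * r] += block   # overlap-add
    return acc[r:r + h * r, r:r + w * r]                         # crop the padding
```

The accumulator is kept in int32 because each HR position receives contributions from several overlapping windows before the rescale step discussed next.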
Quantizing the floating-point number in [0, 1] to the integer in [−128, 127] itself will cause some errors, and these errors will accumulate after multiple addition operations, eventually affecting the SR performance. Therefore, we propose to expand the numerical range of the accumulation results, and readjust them back to the original numerical range after the accumulation. This process can be expressed as: ISR(i, j) = clamp(round(α · X x∈χ x(i, j))) (3) where clamp denotes clipping the result to [0, 255], round denotes the rounding function, and α is the scaling factor. By expanding the numerical range and multiplying by a scaling factor (in practice, we take α as 0.25), the numerical range is pulled back to [0, 255]. This operation can be seen as adjusting the gradation value of x(i, j) from 1 to 0.25. Experimental results show that this operation effectively improves the error. At the same time, the operation of multiplying by 0.25 and then rounding can be implemented by adding 2 and then shifting right by two bits, which has a very small computational cost. Experiment Implementation Details Datasets and Metrics. Following previous studies, we use the DIV2K dataset (Timofte et al. 2017) for training. This dataset has 800 images for training, 100 images for validation, and 100 images for testing. In addition, there are five commonly used benchmark test datasets, namely Set5 (Bevilacqua et al. 2012), Set14 (Zeyde, Elad, and Protter 2012), B100 (Arbelaez et al. 2010), Urban100 (Huang, Singh, and Ahuja 2015) and Mang109 (Matsui et al. 2017). We report our results on these five datasets and compare them with previous studies. The quantitative evaluation metrics are PSNR (peak signal-to-noise ratio) on the Y channel of YCbCr space and structural similarity index (SSIM) (Wang et al. 2004). In addition, we evaluate the computation efficiency by recording and presenting the rumtime of generating 1280 × 720 output images on mobile devices. To be consistent with previous studies, according to (Lim et al. 2017; Zhang et al. 2018b), we use Matlab’s imresize function to perform bicubic interpolation on HR images to obtain LR images. Training Details. Our convolutional network structure is basically consistent with SR-LUT (Jo and Kim 2021), with the first convolutional layer having a kernel size of 1 × 2 and the rest having a kernel size of 1 × 1. The difference is that we use 128 channels of convolution and add expanded convolution, and also scale the output. We use Adam optimizer (Kingma and Ba 2015) with an initial learning rate of 1 × 10−4 for a total of 20000 epochs, halving the learning rate every 4000 epochs. The loss function is mean-squared error (MSE). We randomly crop the LR image into patches of size 48 × 48 with a mini-batch size of 16 and augment the data by random rotation and flipping. We train the ECLUT model with Pytorch (Chaudhary et al. 2020) on Nvidia 2080Ti GPU. 
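For reference, a PyTorch sketch of such an expanded-convolution network is given below. The number of hidden 1 x 1 layers, the replicate padding that keeps the width after the 1 x 2 kernel, and the grouping of the r*r*9 output channels are illustrative assumptions rather than the exact EC-LUT implementation; the shift-and-sum over nine maps mirrors the procedure described for Figure 3.

```python
# Sketch of an expanded-convolution SR network: 1x2 conv, 1x1 layers, r*r*9 outputs,
# pixel shuffle, and summation over nine spatially shifted maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ECNet(nn.Module):
    def __init__(self, r=4, channels=128, hidden_layers=4):
        super().__init__()
        self.r = r
        layers = [nn.Conv2d(1, channels, kernel_size=(1, 2)), nn.ReLU(inplace=True)]
        for _ in range(hidden_layers):
            layers += [nn.Conv2d(channels, channels, 1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, r * r * 9, 1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):                                    # x: (B, 1, H, W), one channel
        b, _, h, w = x.shape
        x = F.pad(x, (0, 1), mode="replicate")               # keep width under the 1x2 kernel
        feat = self.body(x)                                  # (B, r*r*9, H, W)
        feat = feat.view(b, 9, self.r * self.r, h, w)        # assumed channel grouping
        maps = F.pixel_shuffle(feat.flatten(0, 1), self.r)   # (B*9, 1, rH, rW)
        maps = maps.view(b, 9, 1, h * self.r, w * self.r)
        out = torch.zeros(b, 1, (h + 2) * self.r, (w + 2) * self.r, device=x.device)
        k = 0
        for dy in range(3):                                  # shift each map by -r/0/+r pixels
            for dx in range(3):
                out[:, :, dy * self.r:dy * self.r + h * self.r,
                          dx * self.r:dx * self.r + w * self.r] += maps[:, k]
                k += 1
        rr = self.r
        return out[:, :, rr:rr + h * rr, rr:rr + w * rr]     # crop back to (rH, rW)
```

During the LUT transfer stage, such a network would be evaluated on all 256 x 256 input pairs and the r*r*9 outputs of each pair stored adjacently, so that a single query retrieves all nine regions at once.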
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6724 Method Runtime Size Set5 Set14 BSDS100 Urban100 Manga109 Nearest 9ms 26.25/0.7372 24.65/0.6529 25.03/0.6293 22.17/0.6154 23.45/0.7414 Bilinear 20ms 27.55/0.7884 25.42/0.6792 25.54/0.6460 22.69/0.6346 24.21/0.7666 Bicubic 97ms 28.42/0.8101 26.00/0.7023 25.96/0.6672 23.14/0.6574 24.91/0.7871 SR-LUT 94ms 1.274MB 29.82/0.8478 27.01/0.7355 26.53/0.6953 24.02/0.6990 26.80/0.8380 SP-LUT 365ms 5.5MB 30.01/0.8516 27.21/0.7427 26.67/0.7019 24.12/0.7058 27.00/0.8430 Mu-LUT 242ms 4.062MB 30.60/0.8653 27.60/0.7541 26.86/0.7110 24.46/0.7194 27.90/0.8633 Ours 41ms 9MB 29.91/0.8461 27.14/0.7419 26.61/0.7019 23.98/0.6977 26.96/0.8362 FSRCNN 371ms 12K 30.71/0.8656 27.60/0.7543 26.96/0.7129 24.61/0.7263 27.91/0.8587 CARN-M 4955ms 412K 31.82/0.8898 2829/0.7747 27.42/0.7305 25.62/0.7694 29.85/0.8993 RRDB 31717ms 16698K 32.68/0.8999 28.88/0.7891 27.82/0.7444 27.02/0.8146 31.57/0.9185 Table 1: Quantitative comparisons with other SR methods on 5 benchmark datasets for r = 4. The best values of LUT-based methods are shown in bold. Runtime is measured on a MEIZU 16s smartphone for generating 1280 × 720 output images. Size denotes the storage space or the parameter number of each model. Bicubic SR-LUT SP-LUT MuLUT Ours GT Figure 5: Visual comparison for ×4 SR on benchmark datasets. The results show our method can generate comparable results compared to other LUT-based methods. Comparation with Others Quantitative Comparison. In this section, we compare our proposed method with three common interpolation methods, including nearest neighbor, bilinear, bicubic (Keys 1981) interpolation, LUT-based SR methods including SRLUT (Jo and Kim 2021), MuLUT (Li et al. 2022), SP-LUT (Ma et al. 2022), and DNN-based methods including FSRCNN (Dong, Loy, and Tang 2016), CARN-M (Ahn, Kang, and Sohn 2018), RRDB (Wang et al. 2018). Since MuLUT did not provide source code, we used the numbers reported in their paper. To be consistent with previous studies, we perform four times SR on input images of size 320x180, and measure the running time. A quantitative comparison is shown in Table 1. As we can see, EC-LUT has better inference time than any other methods, except for nearest neighbor and bilinear interpolation. Compared with the three interpolation methods, EC-LUT has better PSNR and SSIM (on Set5 test set, +1.49dB and +0.036, respectively, compared with bicubic interpolation). Although DNN-based methods generally have better PSNR and SSIM than LUT-based methods, their inference time is tens to hundreds of times longer than EC-LUT. In compariThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6725 Right Shift Se5 Set14 BSDS100 Urban100 Manga109 PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM 0 bit 29.83 0.8402 27.10 0.7366 26.57 0.6970 23.95 0.6928 26.90 0.8306 1 bit 29.91 0.8454 27.14 0.7404 26.59 0.7003 23.98 0.6967 26.95 0.8338 2 bit 29.91 0.8461 27.14 0.7419 26.61 0.7019 23.98 0.6977 26.96 0.8362 3 bit 29.82 0.8459 27.08 0.7408 26.57 0.7006 23.93 0.6966 26.78 0.8341 no quant 29.91 0.8470 27.13 0.7414 26.60 0.7015 23.96 0.6975 26.88 0.8344 Table 2: Ablation study of rescale operation, where no quant means that the output of the SR network is not quantized and 32bit floating-point numbers are used directly. We use bit shift operations to replace multiplication. The results show that the SR performance degrades after quantization, but it recovers by shifting the accumulated result two bits to the right (α = 0.25). 
Model Se5 Set14 BSDS100 Urban100 Manga109 PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM EC-LUT-V 29.91 0.8461 27.14 0.7419 26.61 0.7019 23.98 0.6977 26.96 0.8362 EC-LUT-F 30.24 0.8555 27.39 0.7488 26.76 0.7078 24.18 0.7082 27.41 0.8487 EC-LUT-S 30.35 0.8592 27.45 0.7484 26.77 0.7062 24.28 0.7101 27.39 0.8466 Table 3: Experiments on different kernel shapes. The kernel shape of V, F and S models are 1 × 2, 1 × 3 and 2 × 2 respectively. son with other LUT-based methods, EC-LUT achieves superior SR performance over SR-LUT while being slightly inferior to SP-LUT and MuLUT. However, EC-LUT has an advantage in inference time. In fact, EC-LUT can also improve the super-resolution performance by increasing the number of pixels used for direct indexing (similar to the three models of SR-LUT), which will be shown in the ablation experiment section. Qualitative Comparison. Figure 5 shows the visual comparison of bicubic interpolation, SR-LUT, EC-LUT, SPLUT, MuLUT, and GT. For the first row, we can see that SRLUT and SP-LUT produce blocking artifacts near the pupil, while our method well controls the artifacts. For the second row, SR-LUT and SP-LUT fail to produce a smooth transition at the arc edge area and instead show jagged edges. Our method and MuLUT both produce smooth arc edges. For the third and fourth rows, SP-LUT shows blocking artifacts near some vertical or horizontal lines. Overall, compared with bicubic interpolation, our method can generate clearer images. Compared with SR-LUT, our method reduces some artifacts. Compared with SP-LUT, our method does not produce blocking artifacts near some vertical or horizontal lines. Of course, our method still has less indexing capability than MuLUT, as only two pixels are used for single indexing. Analysis Rescale Operation. To verify the effectiveness of the operation of rescaling the numerical range of the intermediate results, we conducted experiments with different degrees of rescaling. To avoid floating-point multiplication and division operations, we used shift operations to replace multiplication and division, and each right shift by x bits means dividing the result by 2x. As shown in Table 2, it can be seen that after quantization, the SR performance has a significant decline, and after scaling the result by 0.5 or 0.25 times, the error introduced by quantization is basically offset. And after scaling by 0.125 times, the SR performance starts to decline significantly again. Kernal Shape. Like SR-LUT, our method can also obtain different models by changing the RF size of the SR network. However, when the number of pixels used for single indexing exceeds 2, it is necessary to sample the LUT to reduce its size, and interpolation is required to obtain the missing indexes during inference. To achieve faster inference speed, we only use two pixels as single indexing in this paper. Table 3 shows the results of increasing the number of pixels for single indexing. It can be seen that as the number of pixels for single indexing increases, the performance of EC-LUT also gradually improves. Conclusion In this paper, we propose a novel LUT-based single image SR method, which enlarges the RF effect by expanding the coverage range at the output end and improves the SR performance by using only one LUT. At the same time, as only a single LUT is used, our method has an absolute advantage in inference speed. 
On the other hand, we propose a method to mitigate quantization error by rescaling the accumulated results at the output end, which greatly reduces the SR performance degradation caused by quantization. Compared with previous studies, our method has surpassed or comparable performance while greatly improving the inference speed, further enhancing the practicality of the method. In the future, we will explore the application of the EC method in multiple LUTs to further improve SR performance. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6726 References Ahn, N.; Kang, B.; and Sohn, K.-A. 2018. Fast, accurate, and lightweight super-resolution with cascading residual network. In Proceedings of the European conference on computer vision (ECCV), 252–268. Arbelaez, P.; Maire, M.; Fowlkes, C.; and Malik, J. 2010. Contour detection and hierarchical image segmentation. IEEE transactions on pattern analysis and machine intelligence, 33(5): 898–916. Bevilacqua, M.; Roumy, A.; Guillemot, C.; and AlberiMorel, M. L. 2012. Low-complexity single-image superresolution based on nonnegative neighbor embedding. Chaudhary, A.; Chouhan, K. S.; Gajrani, J.; and Sharma, B. 2020. Deep Learning With PyTorch. Machine Learning and Deep Learning in Real-Time Applications. Chin-Chen; Chang; Jer-Sheng; Chou; Tung-Shou; and Chen. 2000. An efficient computation of Euclidean distances using approximated look-up table. IEEE Trans. Circuits Syst. Video Technol., 10(4): 594–599. Dong, C.; Loy, C. C.; He, K.; and Tang, X. 2014. Learning a deep convolutional network for image super-resolution. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part IV 13, 184–199. Springer. Dong, C.; Loy, C. C.; and Tang, X. 2016. Accelerating the super-resolution convolutional neural network. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14, 391–407. Springer. Huang, J.-B.; Singh, A.; and Ahuja, N. 2015. Single image super-resolution from transformed self-exemplars. In Proceedings of the IEEE conference on computer vision and pattern recognition, 5197–5206. Jo, Y.; and Kim, S. J. 2021. Practical Single-Image SuperResolution Using Look-Up Table. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Karaimer, H. C.; and Brown, M. S. 2016. A Software Platform for Manipulating the Camera Imaging Pipeline. In Leibe, B.; Matas, J.; Sebe, N.; and Welling, M., eds., Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part I, volume 9905 of Lecture Notes in Computer Science, 429–444. Springer. Keys, R. 1981. Cubic convolution interpolation for digital image processing. IEEE transactions on acoustics, speech, and signal processing, 29(6): 1153–1160. Kim, J.; Lee, J. K.; and Lee, K. M. 2016a. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1646–1654. Kim, J.; Lee, J. K.; and Lee, K. M. 2016b. Deeply-recursive convolutional network for image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1637–1645. Kingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization. In Bengio, Y.; and LeCun, Y., eds., 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. 
Lai, W.-S.; Huang, J.-B.; Ahuja, N.; and Yang, M.-H. 2018. Fast and accurate image super-resolution with deep laplacian pyramid networks. IEEE transactions on pattern analysis and machine intelligence, 41(11): 2599–2613. Ledig, C.; Theis, L.; Husz´ar, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. 2017. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4681–4690. Lee, J. Y.; Lee, J. J.; and Park, S. M. 2010. New Lookup Tables and Searching Algorithms for Fast H.264/AVC CAVLC Decoding. IEEE Transactions on Circuits Systems for Video Technology, 20(7): 1007–1017. Li, J.; Chen, C.; Cheng, Z.; and Xiong, Z. 2022. Mulut: Cooperating multiple look-up tables for efficient image superresolution. In European Conference on Computer Vision, 238–256. Springer. Lim, B.; Son, S.; Kim, H.; Nah, S.; and Mu Lee, K. 2017. Enhanced deep residual networks for single image superresolution. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 136–144. Ma, C.; Zhang, J.; Zhou, J.; and Lu, J. 2022. Learning series-parallel lookup tables for efficient image superresolution. In European Conference on Computer Vision, 305–321. Springer. Matsui, Y.; Ito, K.; Aramaki, Y.; Fujimoto, A.; Ogawa, T.; Yamasaki, T.; and Aizawa, K. 2017. Sketch-based manga retrieval using manga109 dataset. Multim. Tools Appl., 76(20): 21811–21838. Monga; Vishal; Lee; and Chul. 2016. Power-Constrained RGB-to-RGBW Conversion for Emissive Displays: Optimization-Based Approaches. IEEE Transactions on Circuits and Systems for Video Technology. Muqeet, A.; Hwang, J.; Yang, S.; Kang, J. H.; Kim, Y.; and Bae, S.-H. 2020. Ultra lightweight image superresolution with multi-attention layers. arXiv preprint arXiv:2008.12912, 2(5). Rizvi; S., A.; Nasrabadi; and N., M. 1995. An efficient Euclidean distance computation for vector quantization using a truncated look-up table. Circuits and Systems for Video Technology, IEEE Transactions on. Shi, W.; Caballero, J.; Husz´ar, F.; Totz, J.; Aitken, A. P.; Bishop, R.; Rueckert, D.; and Wang, Z. 2016. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1874–1883. Tai, Y.; Yang, J.; and Liu, X. 2017. Image super-resolution via deep recursive residual network. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3147–3155. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6727 Timofte, R.; Agustsson, E.; Van Gool, L.; Yang, M.-H.; and Zhang, L. 2017. Ntire 2017 challenge on single image super-resolution: Methods and results. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 114–125. Tsang, S.; Chan, Y.; and Siu, W. 2013. Region-Based Weighted Prediction for Coding Video With Local Brightness Variations. IEEE Trans. Circuits Syst. Video Technol., 23(3): 549–561. Wang, B.; Lu, C.; Yan, D.; and Zhao, Y. 2021a. Learning Pixel-Adaptive Weights for Portrait Photo Retouching. Wang, T.; Li, Y.; Peng, J.; Ma, Y.; Wang, X.; Song, F.; and Yan, Y. 2021b. Real-time Image Enhancer via Learnable Spatial-aware 3D Lookup Tables. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, 2451–2460. IEEE. 
Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; and Loy, C. C. 2018. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. In Leal-Taix´e, L.; and Roth, S., eds., Computer Vision - ECCV 2018 Workshops Munich, Germany, September 8-14, 2018, Proceedings, Part V, volume 11133 of Lecture Notes in Computer Science, 63– 79. Springer. Wang, Z.; Bovik, A. C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process., 13(4): 600–612. Zeng, H.; Cai, J.; Li, L.; Cao, Z.; and Zhang, L. 2022. Learning Image-Adaptive 3D Lookup Tables for High Performance Photo Enhancement in Real-Time. IEEE Trans. Pattern Anal. Mach. Intell., 44(4): 2058–2073. Zeyde, R.; Elad, M.; and Protter, M. 2012. On single image scale-up using sparse-representations. In Curves and Surfaces: 7th International Conference, Avignon, France, June 24-30, 2010, Revised Selected Papers 7, 711–730. Springer. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; and Fu, Y. 2018a. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European conference on computer vision (ECCV), 286–301. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; and Fu, Y. 2018b. Residual dense network for image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2472–2481. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6728 | 2024 | 747 |
18,570 | CLIP-Gaze: Towards General Gaze Estimation via Visual-Linguistic Model Pengwei Yin1,2*, Guanzhong Zeng1,2*, Jingjing Wang1,2†, Di Xie1,2 1Hikvision Research Institute, Hangzhou, China 2Zhejiang Key Laboratory of Social Security Big Data, China {yinpengwei,zengguanzhong,wangjingjing9,xiedi}@hikvision.com Abstract Gaze estimation methods often experience significant performance degradation when evaluated across different domains, due to the domain gap between the testing and training data. Existing methods try to address this issue using various domain generalization approaches, but with little success because of the limited diversity of gaze datasets, such as appearance, wearable, and image quality. To overcome these limitations, we propose a novel framework called CLIP-Gaze that utilizes a pre-trained vision-language model to leverage its transferable knowledge. Our framework is the first to leverage the vision-and-language cross-modality approach for gaze estimation task. Specifically, we extract gaze-relevant feature by pushing it away from gaze-irrelevant features which can be flexibly constructed via language descriptions. To learn more suitable prompts, we propose a personalized context optimization method for text prompt tuning. Furthermore, we utilize the relationship among gaze samples to refine the distribution of gaze-relevant features, thereby improving the generalization capability of the gaze estimation model. Extensive experiments demonstrate the excellent performance of CLIP-Gaze over existing methods on four cross-domain evaluations. Introduction Gaze estimation has been widely applied for human behavior and mental analysis. High-accuracy gaze estimation can provide strong support for many applications, such as human-computer interaction(Andrist et al. 2014; Moon et al. 2014), saliency prediction(Xu, Sugano, and Bulling 2016), augmented reality(Padmanaban et al. 2017) and driver monitoring systems(Mavely et al. 2017). Recently, appearancebased gaze estimation methods(Zhang et al. 2015; Krafka et al. 2016; Cheng, Lu, and Zhang 2018; Cheng et al. 2020) achieved significant results with the development of deep learning. These methods achieve promising performance in the within-domain evaluation, but suffer from dramatic degradation in the cross-domain evaluations due to the domain gap. Gaze images contain rich information, but gaze labels are mainly determined by the eye direction. The eye area only *These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: (1) Top subgraph: The conventional gaze generalization approach enhances the model’s robustness by adversarial training, but can only mitigate a few gaze-irrelevant factors. (2) Bottom subgraph: Our method, CLIP-Gaze, constructs a text prompt from diverse language descriptions to obtain gaze-irrelevant features, and then push away the gazerelevant feature from gaze-irrelevant features in the feature space to handle various gaze disturbing factors and achieve a robust model. occupies a small proportion of pixels in the face image(Xu, Wang, and Lu 2023), so the model prediction results are easily affected by various disturbation factors, such as hair, beard, expressions, makeup, hat, environment illumination, sensor noise and motion blur. Thus, these gaze-irrelevant factors lead to the domain gap, and pose a great challenge for the generalization of gaze model. 
To enhance the generalization of the model, some works purify the gaze feature with self-adversarial framework(Cheng and Bao 2022) or learn stable representation with the contrastive regression loss(Wang et al. 2022). However, their performance is limited due to the simplicity of the source domain, which fails to cover the diverse data types of the unseen target domain. To address such a problem, common domain generalization approaches(Xu, Wang, and Lu 2023; Yin et al. 2024) attempt to improve the diversity of the source dataset by data augmentation or data manipulation. However, these methThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6729 ods have not achieved substantial success because it is expensive to exhaustively enumerate all possible combinations of various gaze-irrelevant factors. Moreover, existing works have increased the data diversity in few factors such as identity, expression, and illumination, but they still struggle with many other important factors, which indicates that relying on a single visual modality is insufficient to handle all gazeirrelevant factors. On top of this, we propose a novel method named CLIP-Gaze that leverages a pretrained vision-language model(VLM) CLIP (Radford et al. 2021) to impart general transferable knowledge to the gaze estimation model and enhance the generalization ability of extracted gaze feature. CLIP learns rich visual-linguistic correlations through large-scale and aligned image-text datasets, which endows it with general, powerful and flexible representation capabilities. Consequently, CLIP-Gaze can exploit CLIP to flexibly handle various gaze-irrelevant factors, rather than relying on expensive models or uncontrollable adversarial methods that can only deal with limited gaze-irrelevant factors. As shown in Fig. 1, we first employ the CLIP text encoder to generate a set of gaze-irrelevant features as gaze distractors from flexible and diverse language descriptions, which contains all gaze-irrelevant factors mentioned above. Subsequently, we maximize the distance between gaze-relevant feature and gaze-irrelevant features in the feature space via a feature separation loss function, so as to enhance the robustness of the gaze estimation model against various gaze disturbing factors. Furthermore, we develop a strategy for prompt optimizing and a loss function for feature refining, i.e., Personalized Context Optimization and Feature Rank Loss to further improve the domain generality of CLIP-Gaze for gaze estimation task. Personalized Context Optimization is a text prompt tuning (TPT) method that aims to avoid prompt engineering problems (a slight change in wording could make a huge difference in performance) and provide personalized text prompts for each individual. Feature Rank Loss refines the distribution of gaze-relevant features in gaze feature space by exploiting the relationship among samples, rather than only relying on the supervised loss of a single sample. The main contributions can be summarized as follows: • We design an efficient domain-generalization framework for gaze estimation, which is the first that introduces visual-linguistic modeling into gaze estimation task, and it deals with the diverse data types not seen in the source domain through a flexible manner. • We develop a personalized text prompt tuning method to overcome prompt engineering issues and improve adaptation to the gaze estimation task. 
Furthermore, we propose a novel loss function based on the relationship among gaze samples to promote a more reasonable feature distribution and learn robust gaze-relevant features. • Experimental results demonstrate that our CLIP-Gaze achieves remarkable performance improvement compared with the baseline model and also outperforms the state-of-the-art domain generalization approaches on gaze estimation tasks.

Related Works

Gaze Estimation Domain Generalization. Appearance-based gaze estimation has become a hotspot (Zhang et al. 2015; Krafka et al. 2016; Cheng, Lu, and Zhang 2018; Cheng et al. 2020), but it still faces many challenges in cross-domain evaluation due to the domain gap caused by various gaze-irrelevant factors. Common approaches typically collect diverse and extensive gaze datasets (Zhang et al. 2020; Kellnhofer et al. 2019) to train a model with robust generalization capabilities, but collecting gaze data is often costly, and the diversity of the available data remains limited. This implies that we need to enhance models with domain generalization (DG) methods, which can generalize to unseen distributions and improve cross-domain performance. However, most domain generalization methods are designed for classification tasks rather than regression tasks, and there are only a few studies on the generalization of gaze estimation. PureGaze (Cheng and Bao 2022) proposes a self-adversarial framework to alleviate gaze disturbance by eliminating gaze-irrelevant features and purifying gaze-relevant features. Xu et al. (Xu, Wang, and Lu 2023) disturb training data against gaze-irrelevant factors by using adversarial attack and data augmentation, but they neglect many other gaze-irrelevant factors that still challenge gaze estimation.

Vision-Language Model. Recently, many works have taken advantage of CLIP's flexible text manipulation and visual alignment capabilities to enhance the open detection or generalization performance of specific tasks, such as DetCLIP (Yao et al. 2022), DenseCLIP (Rao et al. 2022), CLIP-Gap (Vidit, Engilberge, and Salzmann 2023), OrdinalCLIP (Li et al. 2022), CLIP-Cluster (Shen et al. 2023), and so on. Furthermore, to improve the performance of vision-language models on downstream tasks, a more effective approach is to learn continuous text prompts by text prompt tuning (Zhou et al. 2022b,a). In this paper, we extend the usage of CLIP to gaze estimation, utilizing its rich domain information to enhance the generalization ability of gaze estimation models, since it is trained on large-scale data.

Method

Preliminaries. A generalized gaze representation learning can be formulated as:

\min_{G} \; \mathbb{E}_{x,y}\, \ell(G(x), y) + \gamma \ell_{reg}

where (x, y) is the input image and label, G is the gaze estimator, \gamma is the trade-off parameter, and \ell_{reg} denotes some regularization term. Many methods attempt to learn a generalizable gaze feature by proposing various \ell_{reg}, and their goal is to eliminate the influence of gaze-irrelevant factors on gaze estimation by explicit adversarial attack or data perturbation. However, it is difficult to cover all gaze-irrelevant factors. Although existing works have improved the robustness of models for some factors, e.g., identity, expression, and illumination, they still ignore many other important factors that challenge gaze estimation.

Figure 2: Overview of our CLIP-Gaze framework.
We promote gaze domain generalization by introducing abundant knowledge outside the source domain to explicitly eliminate gaze-irrelevant features. Therefore, we design a flexible and scalable method for gaze-irrelevant feature construction to cover a variety of target domains. We comprehensively define the gaze-irrelevant factors from three dimensions:
• Appearance. Face images contain rich but disturbing information for gaze estimation, such as identity features (e.g., face shape, eyebrows, eyes, mouth, and nose), expression variations, texture attributes (e.g., hair and beard color and style), and other characteristics (e.g., gender and age).
• Wearable. Besides the intrinsic factors of the face, gaze estimation is also influenced by external factors, such as wearing glasses, hats, helmets and makeup.
• Image Quality. Apart from the above image content factors, sensor noise, motion blur and environment also play a role. Therefore, we define multiple types of image clarity and illumination (Schlett et al. 2022).
By enumerating the gaze-irrelevant factors exhaustively, we get a comprehensive set, which contains K gaze-irrelevant factors \{c_k\}_{k=1}^{K} in total and significantly exceeds the number of factors considered by previous methods. See the supplementary material for more details.

CLIP-Gaze Framework

In this section, we elaborate on the proposed simple and flexible gaze generalization framework named CLIP-Gaze, which leverages an available pre-trained vision-language model to introduce rich general knowledge for the gaze estimation task. Fig. 2 illustrates the whole pipeline of CLIP-Gaze, which consists of two models. The first model is CLIP, which is fixed and used to generate image features f^v and to construct multiple textual gaze-irrelevant features \{f^{ir}_k\}_{k=1}^{K} from defined prompt templates. The second model is the gaze model, which extracts an image feature f containing both gaze-relevant and gaze-irrelevant information through a convolutional neural network (CNN). We then use a multi-layer perceptron (MLP) to separate the gaze-relevant feature f^{re} and push it away from \{f^{ir}_k\}_{k=1}^{K} to learn a robust gaze representation. In this way, the gaze estimator can generalize to multiple target domains. Next, we describe more details.

Construct Gaze-irrelevant Features: Based on the flexibility of the VLM, we use the prompt template "An image of a face with {ck}." to construct gaze-irrelevant features \{f^{ir}_k\}_{k=1}^{K} about the content of input images via the CLIP text encoder C_t, where c_k is a gaze-irrelevant factor from the set \{c_k\}_{k=1}^{K}.

Distill to CLIP Feature Space: For a given input image x, the whole gaze estimation process can be formulated as \hat{g} = F(M(E(x))). Specifically, we adopt ResNet-18 (He et al. 2016) as our backbone E to extract the image feature f, then use an MLP as the feature filter M to separate the gaze-relevant feature f^{re}, and lastly predict the gaze direction through a fully connected (FC) layer F. The gaze-irrelevant features \{f^{ir}_k\}_{k=1}^{K} constructed above are fixed and located in the CLIP feature space. To remove gaze-irrelevant information from the image feature f while retaining the gaze-relevant feature f^{re}, we first align the gaze image feature to the CLIP feature space by the following loss function:

\mathcal{L}_d(f, f^v) = 1 - \left(\frac{f \cdot f^v}{\|f\|\,\|f^v\|} + 1\right) \ast 0.5 \tag{1}

where f^v is the feature extracted by the CLIP vision encoder C_v. The loss value is normalized to the range of 0 to 1.
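To make Eq. (1) concrete, the following is a minimal PyTorch sketch of this distillation term, assuming batched (B, D) feature tensors and a frozen CLIP vision branch; the function name and shapes are our own illustration, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def clip_distillation_loss(f: torch.Tensor, f_v: torch.Tensor) -> torch.Tensor:
    """Eq. (1): align the gaze backbone feature f with the fixed CLIP image
    feature f_v via a cosine distance rescaled to [0, 1].

    f, f_v: (B, D) feature batches.
    """
    cos = F.cosine_similarity(f, f_v.detach(), dim=-1)   # in [-1, 1]
    return (1.0 - (cos + 1.0) * 0.5).mean()              # in [0, 1]
```

Detaching f_v keeps gradients from flowing into the CLIP vision encoder, which matches its "fixed" role described above.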
Separate Gaze-relevant Feature: To force the feature filter M to extract the gaze-relevant feature f^{re}, we minimize the similarity between f^{re} and f^{ir}_k for k from 1 to K. Since the gaze-irrelevant factors present in each sample may differ, we define \tilde{w}_k to summarize the correlation between a sample and the k-th irrelevant factor; \tilde{w}_k is used to alleviate the influence of gaze-irrelevant factors that are not involved in this image. We use \tilde{W} to represent the set of \tilde{w}_k, with \tilde{W} = \mathrm{softmax}(w_1, \ldots, w_K), where w_k is the degree of correlation between the current sample and the k-th irrelevant factor and can be described as:

w_k = \frac{f \cdot f^{ir}_k}{\|f\|\,\|f^{ir}_k\|}
Each sample has K gaze-irrelevant feature elimination loss values, which are re-weighted by \tilde{w}_k, and the irrelevant loss is formally expressed as:

\mathcal{L}_{ir}\left(f^{re}, \{f^{ir}_k\}_{k=1}^{K}\right) = \sum_{k=1}^{K} \tilde{w}_k \frac{f^{re} \cdot f^{ir}_k}{\|f^{re}\|\,\|f^{ir}_k\|} \tag{2}
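A rough sketch of how the per-sample weights \tilde{w}_k and the separation loss of Eq. (2) could be computed in PyTorch is given below; the tensor shapes and names are assumptions for illustration only, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def irrelevant_separation_loss(f, f_re, f_ir):
    """Per-sample weights w_k and Eq. (2): push the filtered gaze-relevant
    feature away from the K textual gaze-irrelevant features.

    f    : (B, D) backbone image features
    f_re : (B, D) gaze-relevant features from the feature filter M
    f_ir : (K, D) fixed gaze-irrelevant text features from the CLIP text encoder
    """
    # w_k: cosine correlation between each sample and the k-th irrelevant factor
    w = F.normalize(f, dim=-1) @ F.normalize(f_ir, dim=-1).t()        # (B, K)
    w_tilde = torch.softmax(w, dim=-1)                                # \tilde{W}
    # cosine similarity between f_re and every gaze-irrelevant feature
    sim = F.normalize(f_re, dim=-1) @ F.normalize(f_ir, dim=-1).t()   # (B, K)
    return (w_tilde * sim).sum(dim=-1).mean()                         # Eq. (2)
```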
Gaze Estimation: Lastly, we map the gaze-relevant feature f^{re} to the gaze direction \hat{g}; the gaze loss function is defined as:

\mathcal{L}_g(\hat{g}, g) = \arccos \frac{\hat{g} \cdot g}{\|\hat{g}\|\,\|g\|} \tag{3}

where g is the gaze label.

Potential Issues and Improvement Schemes: Employing a suitable prompt template for CLIP-Gaze is non-trivial, as it requires prior knowledge about the gaze estimation task and proficiency in the language model's underlying mechanism (Radford et al. 2021). Hence, we propose a personalized text prompt tuning method to generate \{f^{ir}_k\}_{k=1}^{K} for each person. Moreover, the framework so far only learns robust features from individual samples without further exploring the relationships between samples, which is suboptimal for the regression task. According to the widely accepted understanding (Wang et al. 2022), the label and feature relationships among samples should exhibit a strong correlation. To tackle this limitation, we propose a novel loss function that explores the relationship among gaze samples and constructs a reasonable gaze-relevant feature distribution.

Personalized Context Optimization

To avoid the prompt engineering issues, a common approach is to learn prompts by prompt tuning. However, existing text prompt tuning methods may not be suitable for gaze estimation. CoOp (Zhou et al. 2022b) learns only one prompt for each class, which may fail to fully capture the gaze-irrelevant features, since each individual has personalized facial properties. CoCoOp (Zhou et al. 2022a) learns a prompt conditioned on each input image by learning a lightweight neural network to impose the image content into the prompt. However, directly using the whole image content to learn the prompt may introduce some detrimental information into the prompt, since the image content also contains gaze-related information, and we want to use the prompt to extract only gaze-irrelevant features. To this end, we propose a novel personalized context optimization (PCO) method. Fig. 3 illustrates our approach.

Figure 3: Our method, Personalized Context Optimization (PCO), has two learnable components: a context vector set and a lightweight neural network (Meta-Net) that produces a facial token for each identity, while the vision encoder, text encoder and 3DMM model are frozen during training.

Learn Personalizing Text Prompt: First, our PCO utilizes face attribute classification as the proxy task to optimize the prompts. Specifically, we perform text prompt tuning on CelebA (Liu et al. 2015), a large-scale and diverse face attribute dataset, to obtain more robust gaze-irrelevant features. We extend its attribute number and leverage the CLIP model to produce pseudo labels as the attribute labels for prompt tuning, inspired by previous works (Abdelfattah et al. 2023; Schlett et al. 2022). See the supplementary material for more details about how the pseudo labels are generated. We follow CoOp (Zhou et al. 2022b) and introduce L learnable context vectors \{v_1, v_2, \ldots, v_L\}, each with the same dimension as the word embedding. The prompt for the k-th class, denoted by t_k, is defined as \{v_1, v_2, \ldots, v_L, c_k\}, where c_k is the word embedding corresponding to one class in the class name set.
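For illustration, here is a hedged sketch of the prompt construction described above: a CoOp-style set of shared context vectors combined with the identity-conditional facial token produced by the Meta-Net (detailed in the next paragraph). Embedding sizes, the 3DMM coefficient dimension, and the initialization are our assumptions, and the frozen CLIP text encoder that consumes the resulting sequence is omitted.

```python
import torch
import torch.nn as nn

class LearnableContext(nn.Module):
    """Sketch of PCO-style prompt construction: L shared learnable context
    vectors, shifted by a per-identity facial token, prepended to each class
    (factor) word embedding. The output sequence would be fed to the frozen
    CLIP text encoder (not shown)."""
    def __init__(self, n_ctx: int = 8, dim: int = 512, meta_in: int = 80):
        super().__init__()
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)      # {v_1..v_L}
        # lightweight Meta-Net producing the facial token pi from 3DMM identity coefficients
        self.meta_net = nn.Sequential(nn.Linear(meta_in, dim // 4),
                                      nn.ReLU(inplace=True),
                                      nn.Linear(dim // 4, dim))

    def forward(self, class_emb: torch.Tensor, f_m: torch.Tensor) -> torch.Tensor:
        # class_emb: (K, dim) word embeddings of the K factor names c_k
        # f_m:       (meta_in,) 3DMM identity coefficients of one subject
        pi = self.meta_net(f_m)                                      # facial token
        ctx = self.ctx + pi                                          # v_i(f_m) = v_i + pi
        ctx = ctx.unsqueeze(0).expand(class_emb.size(0), -1, -1)     # (K, L, dim)
        return torch.cat([ctx, class_emb.unsqueeze(1)], dim=1)       # t_k(f_m)
```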
Next, to generate personalized text prompts for each individual and reduce gaze-related information, we use a pre-trained 3D Morphable Model (3DMM) (Tran and Liu 2018) to extract the personalization parameter f^m from the most frontal face image of each individual. Specifically, we use the 3DMM coefficients corresponding to identity as f^m to capture only the identity features of the face. Then, we feed f^m to the Meta-Net to obtain an input-conditional token \pi. Now, each personalizing context token is obtained by v_i(f^m) = v_i + \pi. The prompt for the k-th class is thus conditioned on the input, i.e., t_k(f^m) = \{v_1(f^m), v_2(f^m), \ldots, v_L(f^m), c_k\}. Furthermore, we feed t_k(f^m) into the text encoder C_t of CLIP to obtain the text feature f^t_k, and obtain the face image feature f^v via the vision encoder C_v of CLIP. During training, we specify the negative words for each category, such as "not happy" for the factor "happy", and obtain the negative text features \bar{f}^t_k via the aforementioned operations. Then, we maximize the prediction probability p_k for each gaze-irrelevant language description via a classification loss function, which is formulated as:

p_k = \frac{\exp(\mathrm{sim}(f^v, f^t_k)/\tau)}{\exp(\mathrm{sim}(f^v, f^t_k)/\tau) + \exp(\mathrm{sim}(f^v, \bar{f}^t_k)/\tau)}

where \tau is a temperature parameter learned by CLIP and \mathrm{sim}(\cdot,\cdot) denotes cosine similarity.

Construct Personalizing Gaze-irrelevant Features: After finishing text prompt tuning, we first leverage the identity labels provided by ETH-XGaze and Gaze360 to select a frontal face image for each subject, and input this image and the gaze-irrelevant factors \{c_k\}_{k=1}^{K} to our PCO module to construct the K gaze-irrelevant features \{f^{ir}_k\}_{k=1}^{K} for each identity in the gaze training dataset.

Rank Gaze-Relevant Features

In this part, we refine the extracted gaze-relevant features by re-thinking the relationship among samples. Intuitively, gaze-relevant features with similar gaze directions should be close, which holds across multiple gaze domains. Derived from this idea, a contrastive regression loss (Wang et al. 2022) called CRLoss was proposed. CRLoss sets a threshold to push features with a gaze angular difference larger than the threshold apart, and to pull features with a gaze angular difference smaller than the threshold together. However, it is hard to determine the threshold, and this discards the fine-grained relationships among gaze features, since gaze estimation is a continuous regression task. To deal with this deficiency, we explore the relationship among gaze-relevant features from a novel perspective. Multiple gaze samples can be grouped into pairs, and each pair of samples can be used to calculate a label similarity s^g and a feature similarity s^f. As shown in the upper right box of Fig. 2, for multiple sample pairs, the color intensity represents the similarity level, with a darker color indicating higher similarity. We then rank all pairs from high to low according to their label similarities, and impose penalties to force the sequence of feature similarities to maintain the same order as the label similarities. As a result, the distribution of gaze-relevant features becomes more reasonable, and the gaze feature also becomes more robust. Specifically, based on the gaze label g and the gaze-relevant feature f^{re}, for a pair composed of sample i and sample j, we calculate a label similarity s^g_{ij} and a feature similarity s^f_{ij} as:

s^g_{ij} = \frac{g_i \cdot g_j}{\|g_i\|\,\|g_j\|}, \qquad s^f_{ij} = \frac{f^{re}_i \cdot f^{re}_j}{\|f^{re}_i\|\,\|f^{re}_j\|}
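As a concrete reading of the pair construction above and of the ranking penalty given in Eq. (4) just below, a possible PyTorch sketch is the following; the batch shapes and the random pair-of-pairs sampling scheme are our assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def pairwise_rank_loss(g, f_re, n_draws=None):
    """Feature rank loss sketch: compute s^g_ij and s^f_ij for all pairs in a
    mini-batch, then penalize feature-similarity orderings that disagree with
    the label-similarity orderings (cf. Eq. (4)).

    g    : (B, 3) gaze direction labels
    f_re : (B, D) gaze-relevant features
    """
    B = g.size(0)
    s_g = F.normalize(g, dim=-1) @ F.normalize(g, dim=-1).t()          # s^g_ij
    s_f = F.normalize(f_re, dim=-1) @ F.normalize(f_re, dim=-1).t()    # s^f_ij
    iu, ju = torch.triu_indices(B, B, offset=1)                        # O = B(B-1)/2 pairs
    s_g, s_f = s_g[iu, ju], s_f[iu, ju]
    O = s_g.numel()
    n_draws = O if n_draws is None else n_draws
    a = torch.randint(0, O, (n_draws,))
    b = torch.randint(0, O, (n_draws,))
    sign = torch.sign(s_g[a] - s_g[b])                                  # S_12
    return torch.clamp(-sign * (s_f[a] - s_f[b]), min=0).mean()         # Eq. (4)
```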
For a pair p_1 and a pair p_2, we use \mathcal{L}_{re} to re-rank the gaze-relevant features:

\mathcal{L}_{re} = \max\left(0, -S_{12} \ast (s^f_1 - s^f_2)\right) \tag{4}

where S_{12} evaluates to 1 if s^g_1 > s^g_2 and to -1 if s^g_1 < s^g_2. For all samples in a mini-batch of size B, we construct O = B(B-1)/2 pairs of samples, then randomly choose two pairs O times to calculate the total rank loss.

Total Loss Function

In summary, the total loss function applied in our method is:

\mathcal{L} = \mathcal{L}_g + \lambda_1 \mathcal{L}_d + \lambda_2 \mathcal{L}_{ir} + \lambda_3 \mathcal{L}_{re}

where \lambda_1, \lambda_2, \lambda_3 are hyper-parameters, and we empirically set \lambda_1 = \lambda_2 = \lambda_3 = 1.0.

Experiments

Experiment Details

Gaze Data Details. To verify the performance of our method on the gaze estimation task, we use ETH-XGaze (Zhang et al. 2020) and Gaze360 (Kellnhofer et al. 2019) as training sets, and test the gaze model on MPIIFaceGaze (Zhang et al. 2017) and Eye-Diap (Funes Mora, Monay, and Odobez 2014). Thus, we evaluate on four cross-domain tasks in total, denoted as DE (ETH-XGaze)→DM (MPIIFaceGaze), DE→DD (Eye-Diap), DG (Gaze360)→DM, and DG→DD. See the Supplementary Material for more details on the data pre-processing.

Comparison Methods. For the Baseline, we only use \mathcal{L}_g as the training loss. For DG methods, we choose CDG (Wang et al. 2022), PureGaze and Xu et al.'s method (Xu, Wang, and Lu 2023) for comparison and use the results reported by the authors. Additionally, we include the results of SOTA UDA methods, including PnP-GA (Liu et al. 2021), RUDA (Bao et al. 2022), CRGA (Wang et al. 2022), LatentGaze (Lee et al. 2022), Liu et al.'s work (Liu et al. 2022) and UnReGA (Cai et al. 2023), as a reference.

Implementation Details. We conduct the experiments on a single Tesla V100 GPU. We resize all images to 224×224 and normalize them to [0, 1]. We set the batch size to 128 and train the model for 30 epochs on ETH-XGaze and Gaze360. See the Supplementary Materials for more details on the network, training setting, CLIP setting and others.

Performance Comparison with SOTA Methods

Quantitative results of the four cross-domain gaze estimation tasks are shown in Tab. 1. The second row shows the comparison between our method and SOTA domain generalization (DG) methods. CLIP-Gaze− is the plain CLIP-Gaze framework without the PCO module and the feature rank loss. It improves the generalization capability of the Baseline and achieves comparable performance to Xu et al.'s method using ResNet-18 as the backbone. The complete CLIP-Gaze achieves the best overall performance. It shows state-of-the-art performance on three cross-domain evaluation tasks and achieves performance similar to the best method for DE→DD, which proves the effectiveness of our proposed PCO module and feature rank loss. Besides, we provide the comparison results with SOTA unsupervised domain adaptation (UDA) methods in the third row of Tab. 1. Note that UDA methods require a small number of unlabeled target domain samples. It can be observed that our method demonstrates advanced performance with no access to target domain data. Specifically, we surpass LatentGaze on task DE→DD, achieve better performance than Liu et al.'s work on task DG→DM and outperform PnP-GA Task Methods | Dt | DE DE DG DG Avg →DM →DD →DM →DD DG Baseline 0 8.35 9.66 7.58 9.01 8.65 PureGaze 0 7.08 7.48 9.28 9.32 8.29 CDG ‡ 0 6.73 7.95 7.03 7.27 7.25 Xu et al.
0 6.50 7.44 7.55 9.03 7.63 CLIP-Gaze− 0 7.04 8.51 7.55 7.73 7.71 CLIP-Gaze 0 6.41 7.51 6.89 7.06 6.97 UDA PnP-GA ∗ 10 5.53 5.87 6.18 7.92 6.38 RUDA 100 5.70 6.29 6.20 5.86 6.01 CRGA > 0 5.48 5.66 5.89 6.49 5.88 LatentGaze 100 5.21 7.81 6.51 Liu et al. 100 5.35 6.62 7.18 8.61 6.94 UnReGA 100 5.11 5.70 5.42 5.80 5.51 SDA Baseline ⋆ 100 4.63 5.86 5.67 6.26 5.61 CLIP-Gaze⋆ 100 4.45 5.27 4.94 5.60 5.07 Table 1: Comparison with SOTA methods. Results are reported by angular error in degrees, bold and underline denotes the best and the second best result among each column on one specific task. ‡ expresses the model employs ResNet50 as backbone, ∗indicates that experimental settings are different, ⋆denotes model is fine-tuned on target-domain. and Liu et al..’s method on task DG→DD. It demonstrates the strength of our proposed method since it does not need any target domain information. To further demonstrate the improvement of the proposed method, we randomly choose 100 target samples with labels for fine-tuning on our baseline and CLIP-Gaze model. The evaluation results in the last row of Tab. 1 show that our fine-tuned model consistently outperforms the baseline after fine-tuning. The details of the fine-tuning experiments can be found in the supplementary materials. Ablation Study Ablation Study of Text Prompt Tuning methods In this section, we compare different TPT methods in Tab. 2, where “Baseline” is the same as in Tab. 1. We denote CLIP-Gaze without text prompt tuning as w/o TPT, which achieved a significant improvement over the Baseline. In our TPT experiments, CoOp(Zhou et al. 2022b) learns only one prompt for each class, which only improves the performance slightly. CoCoOp(Zhou et al. 2022a) introduces instanceconditional text embedding for prompt tuning, but the instance containing gaze-relevant feature is detrimental to CLIP-Gaze, thus resulting in worse performance. CoCoOp∗ only uses one user face image embedding as a conditional embedding for prompt tuning, but this image embedding Methods DE DE DG DG Avg →DM →DD →DM →DD Baseline 8.35 9.66 7.58 9.01 8.65 w/o TPT 6.72 8.16 7.07 7.64 7.40 CoOp 7.44 7.42 7.41 7.15 7.36 CoCoOp 7.56 8.03 7.52 8.12 7.81 CoCoOp ∗ 7.36 7.77 7.31 8.46 7.73 PCO 6.41 7.51 6.89 7.06 6.97 Table 2: Comparison of different text prompt tuning methods for gaze model cross-domain evaluation in four tasks. Bold indicates the best results in each column, and underline denote the second best result results in each column. Methods DE DE DG DG Avg →DM →DD →DM →DD Baseline 8.35 9.66 7.58 9.01 8.65 Appearance(A) 7.32 7.09 7.49 7.54 7.36 Wearable(W) 7.33 7.95 7.57 7.45 7.58 Quality(Q) 7.30 7.36 7.24 7.49 7.35 W + Q 6.75 7.87 6.99 7.95 7.39 A + Q 6.91 7.23 6.85 7.42 7.10 A + W 7.18 7.32 7.23 7.66 7.35 A + W + Q 6.41 7.51 6.89 7.06 6.97 Table 3: Comparison of different gaze-irrelevant combinations for gaze model generalization in four cross-domain tasks. Bold indicates the best results in each column. may also contain impure identity features that may affect prompt tuning. Our PCO method achieves the best performance, which demonstrates using the 3DMM model to extract personalizing features and introducing the identityconditional text embedding can learn more suitable prompts. Ablation Study of Attributes and Conditions To investigate the effects of different gaze-irrelevant factors on the gaze model, we conduct different factor combinations experiments, as shown in Tab. 3. 
We can draw the following three conclusions: (1) The combination of multiple group of gaze-irrelevant factors outperforms the single group factors individually, possibly because filtering out more gazeirrelevant features can make the model more generalizable; (2) Appearance(A) and image quality(Q) have larger impacts on the generalization ability of the gaze model than other factors, possibly because they account for the major domain gap; (3) The full combination of all factors(A+W+ Q) achieves the best performance. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6734 Methods DE DE DG DG Avg →DM →DD →DM →DD Baseline 8.35 9.66 7.58 9.01 8.65 +Ld 7.82 9.32 7.24 9.14 8.38 +Ld + Lir 6.54 7.94 7.44 7.57 7.37 +Ld + Lir + LCR 6.96 7.89 7.24 7.28 7.34 +Ld + Lir + LL1 re 6.81 7.91 7.07 6.84 7.16 +Ld + Lir + LL2 re 6.79 7.59 6.92 7.21 7.13 +Ld + Lir + LKL re 6.96 7.89 6.97 7.08 7.23 +Ld + Lir + Lre 6.41 7.51 6.89 7.06 6.97 Table 4: Ablation study on loss functions. Results are reported by angular error in degrees. Ablation Study of Loss Functions Compared with our baseline, we propose three loss functions to achieve the goal of extracting robust gaze-relevant features for gaze estimation. To investigate the effectiveness of Ld, Lir and Lre, we conduct experiments to prove their effects by gradually increasing the loss term in baseline model training. For Lre, we compare the CRLoss LCR and our proposed rank loss and its variants, such asLL1 re , LL2 re , and LKL re , which compute the L1, L2, and KL losses between the feature similarity and label similarity sequences of sample pairs, respectively. Based on the results shown in in the third row of Tab. 4, we can see distilling gaze features to CLIP feature space enhances the average cross-domain performance, and there is a significant performance improvement after eliminating gaze-irrelevant features, this proves our proposed framework is efficient and superior. On this basis, we compare different loss forms of the relationship among sample features, it can be observed in the last row, the improvement brought by LCR is slight and even worse than these variants of Lre that directly align the feature similarity to label similarity. Instead of learning the absolute values of feature similarities about sample pairs, our Lre constrains their relative magnitudes and achieve the best overall performance, these results above demonstrate the effectiveness of our framework and proposed loss functions. Visualization of Extracted Features To compare and analyze the extracted features f re of different models, we follow the manner in (Wang et al. 2022) to visualize the distribution of feature on the task DG→DD with t-SNE(Van der Maaten and Hinton 2008). Fig. 4 displays the visualization results from four different models in Tab. 4, where feature points with similar gaze directions share similar colors. For the Baseline model, the features with different gaze directions are mixed together and the feature cluster is quite dispersed which is not sensible for regression task. After eliminating the gaze-irrelevant parts from extracted feature, the model of Fig. 4 (b) which omit the Lre in CLIP-Gaze Figure 4: Visualization of the feature distribution. Different colors denotes different gaze directions and close gaze directions share similar colors. (Best viewed in color). shows the overall feature distribution becoming ordered on gaze directions, and the similar colors are close in the feature space. 
However, there is an unreasonably purple features cluster appears in the green area in the upper right of the box. Similarly, as shown in Fig. 4 (c), the model with additional LCR does not improve the feature distribution compared with the model of Fig. 4 (b). This confirms that simply pushing features away or pulling features together by a fixed threshold is not optimal. In general, CLIP-Gaze has the most reasonable feature distribution and the visualization is shown in Fig. 4 (d), the gaze direction similarities and feature similarities have a strong correlation, this means our proposed feature rank loss Lre is effective. More visualizations provided in the supplementary materials. Conclusion In this paper, we propose a domain-generalization framework for gaze estimation models, which leverages visuallinguistic models to handle diverse target domains. Specifically, we define the gaze-irrelevant factors, such as face appearance, wearable, and image quality, and construct gazeirrelevant features using language descriptions. Then, we decouple the gaze features from the CLIP feature space to enhance the model’s generalization ability. To enchance the performance, a personalized context optimization method is proposed for text prompt tuning, and a rank loss is designed to learn a more reasonable gaze feature distribution. Our proposed framework achieves state-of-the-art performance on domain generalization for gaze estimation tasks. Acknowledgements This work was sponsored by National Key R&D Program of China (2023YFE0204200). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6735 References Abdelfattah, R.; Guo, Q.; Li, X.; Wang, X.; and Wang, S. 2023. Cdul: Clip-driven unsupervised learning for multilabel image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1348–1357. Andrist, S.; Tan, X. Z.; Gleicher, M.; and Mutlu, B. 2014. Conversational Gaze Aversion for Humanlike Robots. In 2014 9th ACM/IEEE International Conference on HumanRobot Interaction (HRI), 25–32. Bao, Y.; Liu, Y.; Wang, H.; and Lu, F. 2022. Generalizing Gaze Estimation With Rotation Consistency. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4207–4216. Cai, X.; Zeng, J.; Shan, S.; and Chen, X. 2023. SourceFree Adaptive Gaze Estimation by Uncertainty Reduction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 22035–22045. Cheng, Y.; and Bao, Y. 2022. Puregaze: Purifying gaze feature for generalizable gaze estimation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 436–443. Cheng, Y.; Lu, F.; and Zhang, X. 2018. Appearance-Based Gaze Estimation via Evaluation-Guided Asymmetric Regression. In Proceedings of the European Conference on Computer Vision (ECCV). Cheng, Y.; Zhang, X.; Lu, F.; and Sato, Y. 2020. Gaze Estimation by Exploring Two-Eye Asymmetry. IEEE Transactions on Image Processing, 29: 5259–5272. Funes Mora, K. A.; Monay, F.; and Odobez, J.-M. 2014. Eyediap: A database for the development and evaluation of gaze estimation algorithms from rgb and rgb-d cameras. In Proceedings of the symposium on eye tracking research and applications, 255–258. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. Kellnhofer, P.; Recasens, A.; Stent, S.; Matusik, W.; and Torralba, A. 2019. 
Gaze360: Physically unconstrained gaze estimation in the wild. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 6912–6921. Krafka, K.; Khosla, A.; Kellnhofer, P.; Kannan, H.; Bhandarkar, S. M.; Matusik, W.; and Torralba, A. 2016. Eye Tracking for Everyone. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2176–2184. Lee, I.; Yun, J.-S.; Kim, H. H.; Na, Y.; and Yoo, S. B. 2022. LatentGaze: Cross-Domain Gaze Estimation through GazeAware Analytic Latent Code Manipulation. In Proceedings of the Asian Conference on Computer Vision, 3379–3395. Li, W.; Huang, X.; Zhu, Z.; Tang, Y.; Li, X.; Zhou, J.; and Lu, J. 2022. Ordinalclip: Learning rank prompts for language-guided ordinal regression. Advances in Neural Information Processing Systems, 35: 35313–35325. Liu, R.; Bao, Y.; Xu, M.; Wang, H.; Liu, Y.; and Lu, F. 2022. Jitter Does Matter: Adapting Gaze Estimation to New Domains. arXiv preprint arXiv:2210.02082. Liu, Y.; Liu, R.; Wang, H.; and Lu, F. 2021. Generalizing Gaze Estimation with Outlier-guided Collaborative Adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision. Liu, Z.; Luo, P.; Wang, X.; and Tang, X. 2015. Deep Learning Face Attributes in the Wild. In Proceedings of International Conference on Computer Vision (ICCV). Mavely, A. G.; Judith, J. E.; Sahal, P. A.; and Kuruvilla, S. A. 2017. Eye gaze tracking based driver monitoring system. In 2017 IEEE International Conference on Circuits and Systems (ICCS), 364–367. Moon, A. J.; Troniak, D. M.; Gleeson, B.; Pan, M. K.; Zheng, M.; Blumer, B. A.; MacLean, K.; and Crof, E. A. 2014. Meet Me where I’m Gazing: How Shared Attention Gaze Affects Human-Robot Handover Timing. In 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 334–341. Padmanaban, N.; Konrad, R.; Cooper, E. A.; and Wetzstein, G. 2017. Optimizing VR for All Users through Adaptive Focus Displays. In ACM SIGGRAPH 2017 Talks, SIGGRAPH ’17. New York, NY, USA: Association for Computing Machinery. ISBN 9781450350082. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, 8748–8763. PMLR. Rao, Y.; Zhao, W.; Chen, G.; Tang, Y.; Zhu, Z.; Huang, G.; Zhou, J.; and Lu, J. 2022. Denseclip: Language-guided dense prediction with context-aware prompting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18082–18091. Schlett, T.; Rathgeb, C.; Henniger, O.; Galbally, J.; Fierrez, J.; and Busch, C. 2022. Face image quality assessment: A literature survey. ACM Computing Surveys (CSUR), 54(10s): 1–49. Shen, S.; Li, W.; Wang, X.; Zhang, D.; Jin, Z.; Zhou, J.; and Lu, J. 2023. CLIP-Cluster: CLIP-Guided Attribute Hallucination for Face Clustering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 20786–20795. Tran, L.; and Liu, X. 2018. Nonlinear 3D Face Morphable Model. In IEEE Computer Vision and Pattern Recognition (CVPR). Salt Lake City, UT. Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of machine learning research, 9(11). Vidit, V.; Engilberge, M.; and Salzmann, M. 2023. CLIP the Gap: A Single Domain Generalization Approach for Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3219–3229. 
Wang, Y.; Jiang, Y.; Li, J.; Ni, B.; Dai, W.; Li, C.; Xiong, H.; and Li, T. 2022. Contrastive Regression for Domain Adaptation on Gaze Estimation. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 19354– 19363. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6736 Xu, M.; Wang, H.; and Lu, F. 2023. Learning a Generalized Gaze Estimator from Gaze-Consistent Feature. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3): 3027–3035. Xu, P.; Sugano, Y.; and Bulling, A. 2016. Spatio-temporal modeling and prediction of visual attention in graphical user interfaces. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 3299–3310. Yao, L.; Han, J.; Wen, Y.; Liang, X.; Xu, D.; Zhang, W.; Li, Z.; XU, C.; and Xu, H. 2022. DetCLIP: Dictionary-Enriched Visual-Concept Paralleled Pre-training for Open-world Detection. In Koyejo, S.; Mohamed, S.; Agarwal, A.; Belgrave, D.; Cho, K.; and Oh, A., eds., Advances in Neural Information Processing Systems, volume 35, 9125–9138. Curran Associates, Inc. Yin, P.; Wang, J.; Dai, J.; and Wu, X. 2024. NeRF-Gaze: A Head-Eye Redirection Parametric Model for Gaze Estimation. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Zhang, X.; Park, S.; Beeler, T.; Bradley, D.; Tang, S.; and Hilliges, O. 2020. ETH-XGaze: A large scale dataset for gaze estimation under extreme head pose and gaze variation. In European Conference on Computer Vision, 365– 381. Springer. Zhang, X.; Sugano, Y.; Fritz, M.; and Bulling, A. 2015. Appearance-based gaze estimation in the wild. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4511–4520. Zhang, X.; Sugano, Y.; Fritz, M.; and Bulling, A. 2017. Mpiigaze: Real-world dataset and deep appearance-based gaze estimation. IEEE transactions on pattern analysis and machine intelligence, 41(1): 162–175. Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022a. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16816–16825. Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022b. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9): 2337–2348. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6737 | 2024 | 748 |
18,571 | Point Deformable Network with Enhanced Normal Embedding for Point Cloud Analysis Xingyilang Yin1, Xi Yang1*, Liangchen Liu1, Nannan Wang1, Xinbo Gao2 1Xidian University 2 Chongqing University of Posts and Telecommunications [email protected], {yangx, nnwang}@xidian.edu.cn, [email protected], [email protected] Abstract Recently MLP-based methods have shown strong performance in point cloud analysis. Simple MLP architectures are able to learn geometric features in local point groups yet fail to model long-range dependencies directly. In this paper, we propose Point Deformable Network (PDNet), a concise MLP-based network that can capture long-range relations with strong representation ability. Specifically, we put forward Point Deformable Aggregation Module (PDAM) to improve representation capability in both long-range dependency and adaptive aggregation among points. For each query point, PDAM aggregates information from deformable reference points rather than points in limited local areas. The deformable reference points are generated data-dependent, and we initialize them according to the input point positions. Additional offsets and modulation scalars are learned on the whole point features, which shift the deformable reference points to the regions of interest. We also suggest estimating the normal vector for point clouds and applying Enhanced Normal Embedding (ENE) to the geometric extractors to improve the representation ability of single-point. Extensive experiments and ablation studies on various benchmarks demonstrate the effectiveness and superiority of our PDNet. Introduction Point cloud analysis receives great interest due to numerous 3D data acquisition devices applied in various areas, such as autonomous driving and robotics. Unlike images that have regular 2D grids, point clouds are inherently sparse, unordered, and unstructured data. Thus directly processing point clouds is challenging. Recently, MLP-based methods have obtained significant performance with simple components. PointNeXt (Qian et al. 2022) revisits the classical PointNet++ (Qi et al. 2017b) and improves it with modern training and scaling strategies. PointNeXt makes enormous improvements compared to the PointNet++ and even outperforms dedicated designed convolution-based (Xu et al. 2021a), graph-based (Zhou et al. 2021), and powerful point transformers methods (Zhao et al. 2021; Lai et al. 2022; Wu et al. 2022). This reveals that concise MLP modules can already describe the local geometric properties of point *Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. (a) kNN (b) ball query (c) deformable Figure 1: Comparison of grouping methods. (a) kNN: query point (orange point) and its k-nearest (k=3) neighbors (blue points). (b) ball query: query point and its neighbors in local region (red circle). (c) query point and its deformable reference points (blue points) located in non-local regions. The green points and purple arrows represent the initial reference points generated on all point positions and their learnable offsets conditioned on the whole point features, respectively. clouds. The following PointMetaBase (Lin et al. 2023) further modifies PointNeXt with explicit position encoding and MLP before grouping operation. Although the existing MLP-based approaches show strong generalization ability in various tasks, they ignore modeling long-range dependencies. 
As illustrated in Figure 1(a) and 1(b), the previous MLP-based methods (Qi et al. 2017b; Ma et al. 2022; Qian et al. 2022; Lin et al. 2023) only focus on aggregating information in local point groups constructed by kNN or ball query, which fails to learn features from a long distance. However, capturing long-range relations has been demonstrated to be crucial in understanding global shape context (Wang et al. 2018; Lai et al. 2022). To solve the aforementioned problem, we need to explore aggregating information from distant regions for each query point. In the literature on processing images, learning deformable convolution filters has been shown effective in various challenging vision tasks due to adaptive spatial aggregation in long-range and more informative regions (Dai et al. 2017; Zhu et al. 2019, 2020; Wang et al. 2023). This motivates us to design the deformable mechanism for point clouds. In contrast to the images that have structured 2D grids, point clouds are sparse and unstructured data. Thus naive implementation of deformable mechanisms suited for images can not directly apply to point clouds. To alleviate it, KPConv (Thomas et al. 2019) first introduces a deformable mechanism to point clouds. It adopts pseudo-grid convoluThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6738 tion through predefined kernel points based on local point positions with weight matrices learned by local point features. The deformable mechanism is further applied to learn offsets constrained in local areas, which helps refine the kernel points. However, KPConv still focuses on developing sophisticated modules to extract local structures and long-range dependency is not considered. To this end, we propose a simple and effective MLP-based network named Point Deformable Network (PDNet). Specifically, we first put forward Point Deformable Aggregation Module (PDAM), which achieves both long-range dependency and adaptive spatial aggregation that suits point clouds at the same time. As shown in Figure 1(c), different from extractors that aggregate information in fixed local regions, our PDAM aggregates information from deformable reference points for each query point. The initial reference points are generated on the positions of all points, and the additional offsets and modulation scalars are learned on the whole point features. Thus the reference points are shifted to the relevant regions and bring more informative geometric features for aggregation, which strengthens the representation ability among points. Further, we suggest applying the least square fitting to estimate the normal vector of a point cloud and using Enhanced Normal Embedding (ENE) to improve the representation ability of single-point. Extensive experiments on various challenging benchmarks demonstrate the effectiveness of our methods. Our PDNet outperforms other competitive MLP-based models and achieves state-of-the-art. Related Work Point-based networks on point clouds. In contrast to the project methods that project point clouds to multi-view images (Su et al. 2015; Goyal et al. 2021) or structured 3D voxel (Wu et al. 2015; Maturana and Scherer 2015), pointbased methods process unstructured point clouds directly. PointNet (Qi et al. 2017a), the pioneering point-based network, proposes to model the permutation invariance of point clouds by using shared MLPs to encode pointwise features and aggregating them by symmetric functions like maxpooling. To better capture local geometric structures, PointNet++ (Qi et al. 
2017b) proposes a hierarchical structure by gradually downsampling with farthest point sampling and aggregating features from neighbor points with kNN or ball query method. Currently, most point-based methods focus on the design of local geometric extractors. Convolutionbased approaches (Li et al. 2018; Liu et al. 2019; Thomas et al. 2019; Xu et al. 2021a) propose several invariant and dynamic convolution kernels to aggregate point features. Graph convolution-based methods (Wang, Samari, and Siddiqi 2018; Wang et al. 2019; Zhou et al. 2021) treat points and their relations as vertices and edges of a graph, respectively. Point features can then be extracted by applying graph convolution on the graph. Point Transformers (Zhao et al. 2021; Guo et al. 2021; Lai et al. 2022; Wu et al. 2022) capture local and global information through self-attention. Recently, MLP-based approaches (Ma et al. 2022; Ran, Liu, and Wang 2022; Tang et al. 2022b; Qian et al. 2022; Zhang et al. 2023; Lin et al. 2023) obtain competitive results with simple network architectures. PointMLP proposes a geometric affine module to enhance the residual MLPs network. PointNeXt follows the design philosophy of PointNet++ and integrates with improved training and scaling strategies. PointMetaBase revisits the existing methods and proposes a meta-architecture for point cloud analysis. Although these MLP-based networks show high performance in learning local geometry, the exploration of long-range dependency is omitted. Our PDNet is an MLP-based network that enjoys both long-range dependency and adaptive position aggregation inspired by deformable mechanisms. Deformable networks on images. The deformable mechanism is first presented by Deformable convolutional network (DCN) (Dai et al. 2017) to enhance the capability of convolution with additional offsets and adaptive spatial aggregation conditioned on input data. DCNv2 (Zhu et al. 2019) improves its ability by introducing a modulation mechanism. The deformable mechanism has also been applied to ViTs (Zhu et al. 2020; Yue et al. 2021; Xia et al. 2022), which shows powerful capability in refining visual tokens. Recently, InternImage (Wang et al. 2023) proposes large-scale ViT architecture with DCNv3, which gains both benefits in long-range dependency and adaptive spatial aggregation and outperforms related work. However, deformable mechanisms designed for images do not fit unstructured point clouds. This work aims to develop a deformable mechanism for point clouds to aggregate point features from relevant areas through learned initial positions and offsets conditioned on the input points. Methods In this section, we first shortly describe the background of MLP-based approaches. Second, we propose Point Deformable Aggregation Module to achieve both long-range dependency and adaptive spatial aggregation in a datadependent way. Third, we introduce the least square fitting to estimate the point normal vector and suggest applying additional normal embedding to strengthen the representation ability of the network. Finally, we present the overall architectures of PDNet for classification and segmentation tasks. Preliminary In this subsection, we briefly revisit some point MLP-based approaches such as PointNet++ (Qi et al. 2017b), PointNeXt (Qian et al. 2022), and PointMetaBase (Lin et al. 2023). PointNet++ captures local geometric features through the set abstraction (SA) module. 
The SA module consists of a subsample layer that selects the input points and a neighborhood aggregation module that extracts local patterns. The neighborhood aggregation is formulated as:

$$f_i^{l+1} = \mathcal{A}\big(\{\mathcal{M}([f_j^l,\; p_j^l - p_i^l]),\ \forall j \in \mathcal{N}_i\}\big), \quad (1)$$

where $\mathcal{N}_i$ is the index set of neighbors of point $i$; $p_i^l$, $p_j^l$, and $f_j^l$ are the coordinates of point $i$ selected through farthest point sampling, and the coordinates and features of neighbor $j$ at stage $l$ of the network, respectively. $\mathcal{M}$ represents the shared MLPs that encode the concatenation of the features of neighbor $j$ and the relative coordinates $p_j^l - p_i^l$. $\mathcal{A}$ is a symmetric aggregation function such as max-pooling.

PointNeXt further appends an Inverted Residual MLP (InvResMLP) block after the SA module to enhance point features:

$$f_i^{l+1} = \mathcal{M}_2\big(\mathcal{A}(\{\mathcal{M}_1([f_j^l,\; p_j^l - p_i^l]),\ \forall j \in \mathcal{N}_i\})\big) + f_i^l, \quad (2)$$

where PointNeXt uses a one-layer MLP $\mathcal{M}_1$ for neighbor feature aggregation and a two-layer MLP $\mathcal{M}_2$ for point feature update, and $f_i^l$ is the input point feature at stage $l$. PointMetaBase slightly modifies InvResMLP and applies a position encoding $\delta$ to the relative coordinates $p_j^l - p_i^l$:

$$f_i^{l\prime} = \mathcal{M}_3(f_i^l), \qquad f_j^{l\prime} = \mathrm{Group}(f_i^{l\prime},\, p_i^l), \quad (3)$$
$$f_i^{l+1} = \mathcal{M}_2\big(\mathcal{A}(\{f_j^{l\prime} + \delta(p_j^l - p_i^l),\ \forall j \in \mathcal{N}_i\})\big) + f_i^l. \quad (4)$$

Notice that PointNeXt applies the mapping function $\mathcal{M}_1$ (e.g., an MLP) after the grouping layer, while PointMetaBase adopts $\mathcal{M}_3$ before the grouping operation to reduce computation. $f_i^{l\prime}$ and $f_j^{l\prime}$ are the updated point features of point $i$ and its neighbor $j$, respectively.

Figure 2: Illustration of the Point Deformable Aggregation Module. Given $N$ input points with coordinates $\{p_i^l\}_{i=1}^N$ and features $\{f_i^l\}_{i=1}^N$, FPS generates the initial reference points based on $\{p_i^l\}_{i=1}^N$, and the OGN determines the $3R$ offsets learned on $\{f_i^l\}_{i=1}^N$. Features at the adapted positions $f(p_r^{l\prime})$ are then computed through grouping and interpolation. Finally, for point $i$, the point position $p_i^l$, point feature $f_i^l$, and the positions $\{p_r^{l\prime}\}_{r=1}^R$ and features $\{f(p_r^{l\prime})\}_{r=1}^R$ of the deformable reference points together form the updated point feature $f_i^{l+1}$.

Point Deformable Aggregation Module
As discussed in Section 3.1, previous MLP-based approaches capture geometric features through local point groups. They aggregate features in local areas, such as a fixed number of neighbor points or the points within a small radius, and therefore fail to directly model long-range dependency. To solve this, we propose the Point Deformable Aggregation Module (PDAM), which captures long-range relations and achieves adaptive spatial aggregation at the same time in a data-dependent way. Given an input image $x \in \mathbb{R}^{C\times H\times W}$, the deformable mechanism proposed in DCNv2 (Zhu et al. 2019) can be described as:

$$y(p) = \sum_{k=1}^{K} w_k\, m_k\, x(p + p_k + \Delta p_k), \quad (5)$$

where $y(p)$ and $x(p)$ denote the output and input feature maps at location $p$; $K$ is the number of sampling locations; $w_k$ and $p_k$ are the weight projection and the predefined offset for the $k$-th location, respectively. For example, for a convolution with a $3\times 3$ kernel and dilation 1, $K = 9$ and $p_k \in \{(-1,-1), (-1,0), \ldots, (1,1)\}$. $\Delta p_k$ is the learnable offset of $2K$ channels conditioned on the input feature $x$, and $m_k$ is the modulation scalar of $K$ channels obtained through a convolutional layer and a sigmoid activation over the same input.
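To make Eqs. (1)–(4) concrete, the sketch below implements the grouping-and-max-pooling core of the SA module in plain NumPy; the neighbor count, feature width, and random weights are assumptions made only for this example.

```python
import numpy as np

def knn_indices(query_xyz, all_xyz, k):
    """Indices of the k nearest neighbors of each query point (brute force)."""
    d2 = ((query_xyz[:, None, :] - all_xyz[None, :, :]) ** 2).sum(-1)   # (M, N)
    return np.argsort(d2, axis=1)[:, :k]                                # (M, k)

def sa_aggregate(xyz, feats, centroid_idx, W, k=32):
    """Eq. (1): f_i^{l+1} = max_j MLP([f_j, p_j - p_i]) over the neighborhood of i."""
    centers = xyz[centroid_idx]                          # (M, 3) subsampled points
    nbr = knn_indices(centers, xyz, k)                   # (M, k)
    rel = xyz[nbr] - centers[:, None, :]                 # (M, k, 3) relative coords
    grouped = np.concatenate([feats[nbr], rel], axis=-1) # (M, k, C+3)
    encoded = np.maximum(grouped @ W, 0.0)               # shared one-layer MLP M
    return encoded.max(axis=1)                           # (M, C_out) symmetric A

# Toy usage: 1024 points with 16-dim features, 256 centroids, illustrative weights.
rng = np.random.default_rng(0)
xyz, feats = rng.normal(size=(1024, 3)), rng.normal(size=(1024, 16))
centroid_idx = rng.choice(1024, size=256, replace=False)   # stand-in for FPS
W = rng.normal(scale=0.1, size=(16 + 3, 64))
out = sa_aggregate(xyz, feats, centroid_idx, W)
print(out.shape)  # (256, 64)
```

The PointNeXt and PointMetaBase variants in Eqs. (2)–(4) mainly change where the MLP is applied (before or after grouping) and add a residual connection and position encoding; the grouping-and-pooling core stays the same.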
However, applying the deformable mechanism designed for images to points is a non-trivial problem. In contrast to images, which lie on structured 2D grids, point clouds are unstructured data that are unevenly distributed in space, so directly using a predefined grid sampling such as $p_k$ does not suit point clouds. Inspired by the deformable mechanism presented in (Dai et al. 2017; Zhu et al. 2019), we propose the PDAM to offer point clouds adaptive position aggregation and long-range relations via deformable reference points, as illustrated in Figure 2.

Specifically, the input point cloud with $N$ points at the $l$-th stage is given as $\{p_i^l, f_i^l\}_{i=1}^N$, where $p_i^l \in \mathbb{R}^{1\times 3}$ and $f_i^l \in \mathbb{R}^{1\times C}$ are the coordinate and feature of point $i$, respectively. For every point $i$, we first initialize $R$ points as references through farthest point sampling (FPS) over all point positions, which compensates for the irregularity of the point cloud and leads the initial reference points $\{p_r^l\}_{r=1}^R$ to be uniformly distributed. This process can be described as:

$$\{p_r^l\}_{r=1}^R = \mathrm{FPS}(\{p_i^l\}_{i=1}^N). \quad (6)$$

Different from KPConv (Thomas et al. 2019), which computes and refines kernel points within a local sphere, our initial reference points are based on the positions of all points and thus enjoy a larger receptive field. Then, to obtain the offset for each reference point, we feed the point features $\{f_i^l\}_{i=1}^N$, $f_i^l \in \mathbb{R}^{1\times C}$, of all points to the offset generation network (OGN) to output the offsets $\{\Delta p_r^l\}_{r=1}^R$, $\Delta p_r^l \in \mathbb{R}^{1\times 3}$, as follows:

$$\{\Delta p_r^l\}_{r=1}^R = \mathrm{OGN}(\{f_i^l\}_{i=1}^N), \quad (7)$$

where the OGN is implemented as two linear layers with learnable weight matrices $W_1 \in \mathbb{R}^{N\times N}$ and $W_2 \in \mathbb{R}^{N\times 3R}$ that produce the $3R$ offsets; hence the offsets can shift the reference points to any reasonable position. This brings the query point more relevant information in global contexts learned on the whole point features.

Figure 3: Illustration of the Point Deformable Network (PDNet) and the macro-design of PDNet-L. For classification (bottom left), we consecutively use the PDSA block, which incorporates the Set Abstraction module (Qi et al. 2017b) with position encoding and normal embedding. For segmentation (top), we adopt a U-Net-style architecture with Feature Propagation (Qi et al. 2017b) as the decoder and PDSA, PLAM, and PDAM as the encoder.

However, since point clouds are discrete data points, there may not exist points exactly at the positions $\{p_r^{l\prime}\}_{r=1}^R$, $p_r^{l\prime} = p_r^l + \Delta p_r^l$. To alleviate this, we adopt an inverse-distance weighted average over the $K$ nearest neighbors (KNN) and interpolate the features near position $p_r^{l\prime}$ to obtain the feature of the deformable reference point in the local region of $p_r^{l\prime}$ as follows:

$$\{p_k^l\}_{k=1}^K = \mathrm{KNN}(p_r^{l\prime}), \quad (8)$$
$$f^{(j)}(p_r^{l\prime}) = \frac{\sum_{k=1}^{K} w_k(p_r^{l\prime})\, f_k^{(j)}}{\sum_{k=1}^{K} w_k(p_r^{l\prime})}, \qquad j = 1, \ldots, C, \quad (9)$$

where $\{p_k^l\}_{k=1}^K$ are the $K$ nearest neighbor points of position $p_r^{l\prime}$, $w_k(p_r^{l\prime}) = \frac{1}{d(p_r^{l\prime},\, p_k^l)}$ is the inverse-distance weight, and $d(\cdot, \cdot)$ computes the distance between two points.
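The following sketch walks through Eqs. (6)–(9) for one stage: farthest point sampling for the initial references, a stand-in offset generator, and inverse-distance interpolation at the shifted positions. The offset generator here is a fixed random map over pooled features purely for illustration, whereas the paper's OGN is learned from all $N$ point features; the dimensions are likewise our assumptions.

```python
import numpy as np

def farthest_point_sampling(xyz, r):
    """Pick r well-spread reference positions from xyz (Eq. 6)."""
    chosen = [0]
    dist = np.full(len(xyz), np.inf)
    for _ in range(r - 1):
        dist = np.minimum(dist, ((xyz - xyz[chosen[-1]]) ** 2).sum(-1))
        chosen.append(int(dist.argmax()))
    return xyz[chosen]                                     # (r, 3)

def interpolate_at(pos, xyz, feats, k=3, eps=1e-8):
    """Eqs. (8)-(9): inverse-distance weighted feature at an arbitrary position."""
    d = np.sqrt(((xyz - pos) ** 2).sum(-1))                # (N,)
    nbr = np.argsort(d)[:k]
    w = 1.0 / (d[nbr] + eps)
    return (w[:, None] * feats[nbr]).sum(0) / w.sum()      # (C,)

def pdam_references(xyz, feats, r=32, seed=0):
    """Initial references + offsets + interpolated features (Eqs. 6-9)."""
    p_ref = farthest_point_sampling(xyz, r)                # (r, 3)
    rng = np.random.default_rng(seed)                      # stand-in for the learned OGN
    offsets = np.tanh(feats.mean(0) @ rng.normal(scale=0.01, size=(feats.shape[1], r * 3)))
    p_shift = p_ref + offsets.reshape(r, 3)                # deformable reference points
    f_shift = np.stack([interpolate_at(p, xyz, feats) for p in p_shift])
    return p_shift, f_shift                                # (r, 3), (r, C)

xyz = np.random.default_rng(1).normal(size=(1024, 3))
feats = np.random.default_rng(2).normal(size=(1024, 16))
p_shift, f_shift = pdam_references(xyz, feats)
print(p_shift.shape, f_shift.shape)  # (32, 3) (32, 16)
```

Because the references are initialized from all point positions and shifted by offsets conditioned on the point features, each query point can draw information from distant, data-dependent regions rather than a fixed local ball.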
Thus each deformable reference point carries the feature $f(p_r^{l\prime})$ of a relevant region, which is then involved in the aggregation with the query point $i$. Further, we use two linear layers followed by a sigmoid layer to obtain $R$ channels of modulation scalars $\{\Delta m_r^l\}_{r=1}^R$ for the deformable reference point features. Finally, the aggregation procedure for the query point can be defined as:

$$f_i^{l+1} = \mathcal{M}\big(\mathcal{A}(\{\Delta m_r^l\, f(p_r^{l\prime}) + \delta(p_r^{l\prime} - p_i^l)\}_{r=1}^R)\big) + f_i^l, \quad (10)$$

where $\delta(p_r^{l\prime} - p_i^l)$ is the relative position embedding between point $i$ and its deformable reference points $p_r^{l\prime}$, $\mathcal{A}$ is the symmetric aggregation function (max-pooling), and $\mathcal{M}$ is a mapping function such as an MLP.

Enhanced Normal Embedding
PDAM aggregates long-range contexts from regions of interest for each query point, strengthening the representation capability among points. In this subsection, we further propose Enhanced Normal Embedding to improve the representation ability of each point itself. Normal features provide geometric information about point clouds, and using additional point normals rather than only consuming point coordinates has been proven effective in various works (Qi et al. 2017a,b; Li, Chen, and Lee 2018; Wu, Qi, and Fuxin 2019). However, this does not work if no point normals exist in the dataset. Inspired by (Mitra and Nguyen 2003), we adopt least square fitting to estimate the normal vector of a point cloud. Considering point $i$ and its $k-1$ nearest neighbor points $\mathcal{N}_i$, the covariance matrix $M$ can be computed as:

$$M = \frac{1}{k}\sum_{i=1}^{k}(p_i - \bar{p})(p_i - \bar{p})^{T}, \quad (11)$$

where $p_i \in \mathbb{R}^{1\times 3}$ is the coordinate of point $i$ at the $l$-th stage and $\bar{p} = \frac{1}{k}\sum_{i=1}^{k} p_i$ denotes the centroid of point $i$ and its neighbors. Thus $M$ is a $3\times 3$ symmetric positive semidefinite matrix. The normal to the local least-squares plane of point $i$ can be estimated as the eigenvector corresponding to the minimum eigenvalue of $M$ (Mitra and Nguyen 2003). We utilize singular value decomposition to obtain the normal feature $n_i$ for each point $i$. Analogous to the position encoding widely used in attention-based and MLP-based methods (Lai et al. 2022; Wu et al. 2022; Lin et al. 2023) to learn complex positional relations among point groups or all points, we propose to strengthen the point normal via Enhanced Normal Embedding (ENE). According to (Yang et al. 2020; Ran, Liu, and Wang 2022), point position and point normal are features with different distributions, which can be decoupled along the channel dimension and fused through summation after embedding. In this paper, we implement ENE with a 2-layer MLP.

Point Deformable Network Architectures
As illustrated in Figure 3, the proposed Point Deformable Network (PDNet) shares a similar hierarchical structure with (Lin et al. 2023; Qian et al. 2022) and incorporates the Point Deformable Aggregation Module (PDAM) and Enhanced Normal Embedding (ENE). For the segmentation task, we use a U-Net architecture that contains an encoder and a decoder; for the classification task, we only use an encoder. The decoder comprises the widely used Feature Propagation layers (Qi et al. 2017b; Qian et al. 2022; Lin et al. 2023) to gradually upsample features via interpolation. Incorporating position embedding and ENE, we tweak the Set Abstraction module (Qi et al. 2017b; Qian et al. 2022) into the reduction block, termed Point Deformable Set Abstraction (PDSA).
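Before continuing with the encoder layout, here is a small sketch of the least-squares normal estimation behind ENE (Eq. 11): form the local covariance matrix and take the eigenvector of its smallest eigenvalue; the neighborhood size is an illustrative choice.

```python
import numpy as np

def estimate_normals(xyz, k=16):
    """Per-point normal as the minimum-eigenvalue eigenvector of the local covariance (Eq. 11)."""
    normals = np.empty_like(xyz)
    for i, p in enumerate(xyz):
        d2 = ((xyz - p) ** 2).sum(-1)
        nbr = xyz[np.argsort(d2)[:k]]            # point i and its k-1 nearest neighbors
        centered = nbr - nbr.mean(axis=0)        # subtract the centroid p_bar
        cov = centered.T @ centered / k          # 3x3 symmetric PSD matrix M
        eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
        normals[i] = eigvecs[:, 0]               # normal of the local least-squares plane
    return normals                               # (N, 3), sign is arbitrary

# Sanity check on a noisy plane: estimated normals should align with the z-axis.
rng = np.random.default_rng(0)
plane = np.c_[rng.uniform(-1, 1, size=(500, 2)), 0.01 * rng.normal(size=500)]
n = estimate_normals(plane)
print(np.abs(n[:, 2]).mean())  # close to 1.0
```

The paper obtains the same direction via singular value decomposition; eigendecomposition of the 3x3 covariance is an equivalent route. The estimated normal $n_i$ is then mapped by the 2-layer ENE MLP and fused with the position encoding by summation.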
The encoder is composed of PDSA, Point Local Aggregation Module (PLAM), and PDAM. In the first and second stages of PDNet, we implement PLAM by modifying the PointMetaBase block (defined in equation 4) with additional ENE as follows: f l+1 i = M2(A({f l j ′ + δ(pl j −pl i) +γ(nl j), ∀j ∈Ni})) + f l i, (12) where nl j is the point normals of point neighbor group of point i at l-th stage, and γ is the ENE implemented with MLP that map the input three dimensions of point normal vector to the dimension of high-level features. Incorporating with ENE, the PDAM defined in equation 10 can be further modified as: f l+1 i = M(A({∆ml rf(pl r ′ ) + δ(pl r ′ −pl i) +γ(nl r)}R r=1)) + f l i. (13) We introduce parallel PLAM and PDAM in the third and fourth stages of PDNet. The point features are fed into PLAM to aggregate information locally (see equation 12) and passed through PDAM (shown in equation 13) to aggregate information globally at the same time. This design of MLP-based blocks with local and long-range dependencies helps our network learn strong generalization ability. For a fair comparison, we adopt the same scaling strategies as (Lin et al. 2023; Qian et al. 2022) to construct our PDNet. We define the number of deformable reference points R = 32 to be consistent with the number of points in their local point groups. The configuration of three variants of PDNet is shown as follows: Method (time order) mAcc (%) OA (%) PointNet (Qi et al. 2017a) 63.4 68.2 PointNet++ (Qi et al. 2017b) 75.4 77.9 PointCNN (Li et al. 2018) 75.1 78.5 DGCNN (Wang et al. 2019) 73.6 78.1 PRA-Net (Cheng et al. 2021) 77.9 81.0 PointMLP (Ma et al. 2022) 84.4 85.7 PointNeXt (Qian et al. 2022) 86.8 88.2 GAM (Hu et al. 2023) 86.5 88.4 Point-PN (Zhang et al. 2023) 87.1 PointMetaBase (Lin et al. 2023) 86.8 88.2 PDNet (ours) 86.8 88.5 Table 1: Shape classification results on PB T50 RS of ScanObjectNN. mAcc is the mean of class accuracy (%) and OA is the overall accuracy (%). • PDNet-S: C = 32, B = 0 • PDNet-L: C = 32, B = (2, 4, 2, 2) • PDNet-XXL: C = 64, B = (4, 8, 4, 4) We denote C as the channel size of the stem MLP and B as the number of blocks in a stage. Notice that B = 0 means only one PDSA block but no PLAM or PDAM blocks are used at each stage. Experiments In this section, we evaluate our PDNet on ScanObjectNN (Uy et al. 2019) for shape classification, S3DIS (Armeni et al. 2016) for semantic segmentation, and ShapeNetPart (Yi et al. 2016) for part segmentation. We also provide various ablation studies to better understand the PDNet. Classification and Segmentation Experimental setups. We train our models by using CrossEntropy loss with label smoothing (Szegedy et al. 2016), AdamW optimizer (Loshchilov and Hutter 2018), an initial learning rate lr = 0.001, and weight decay 10−4 with Cosine Decay for all tasks. For S3DIS semantic segmentation task, point clouds are downsampled with a voxel size of 0.4 m following the previous methods (Zhao et al. 2021; Qian et al. 2021, 2022; Lin et al. 2023). For S3DIS, our PDNet is trained using a fixed number of 24000 points per batch with batch size set to 8 with an initial lr=0.01 for 100 epochs on a NVIDIA 3090 GPU and a 12-core Intel Xeon @ 2.50GHz CPU. For ScanObjectNN shape classification task, following (Qian et al. 2021; Lin et al. 2023), our PDNet is trained by 1024 points with a weight decay of 0.05 for 250 epochs on a NVIDIA 3090 GPU. The points are randomly sampled during training and uniformly sampled during testing. 
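As a compact illustration of the optimization recipe described above (label-smoothed cross-entropy, AdamW, cosine decay), the PyTorch snippet below wires these pieces together around a placeholder classifier; the smoothing factor, batch size, and dummy data are assumptions for the example, not the actual training script.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data and classifier standing in for the point-cloud pipeline (15 ScanObjectNN classes).
data = TensorDataset(torch.randn(256, 1024), torch.randint(0, 15, (256,)))
loader = DataLoader(data, batch_size=32, shuffle=True)
model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 15))

criterion = nn.CrossEntropyLoss(label_smoothing=0.1)              # label-smoothed CE
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=250)

for _ in range(250):
    for feats, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(feats), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()                                              # cosine learning-rate decay
```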
For ShapeNetPart, we train our model using 2048 randomly sampled points with normals for 300 epochs on 4 NVIDIA 3090 GPUs. ShapeNetPart has the normal vectors of point clouds, so we do not apply the least square fitting to estimate the point normals. The original point normals are used for normal embedding. All the details of data augmentation are the same as those in PointNeXt (Qian et al. 2022) and PointMetaBase (Lin et al. 2023). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6742 S3DIS 6-Fold S3DIS Area-5 Method (time order) OA (%) mAcc (%) mIoU (%) OA (%) mAcc (%) mIoU (%) PointNet (Qi et al. 2017a) 78.5 66.2 47.6 49.0 41.1 PointCNN (Li et al. 2018) 88.1 75.6 65.4 85.9 63.9 57.3 DGCNN (Wang et al. 2019) 84.1 56.1 83.6 47.9 KPConv (Thomas et al. 2019) 79.1 70.6 72.8 67.1 PCT (Guo et al. 2021) 67.7 61.3 PAConv (Xu et al. 2021a) 73.0 66.6 AdaptConv (Zhou et al. 2021) 90.0 73.2 67.9 Point Transformer (Zhao et al. 2021) 90.2 81.9 73.5 90.8 76.5 70.4 ASSANet (Qian et al. 2021) 66.8 CBL (Tang et al. 2022a) 89.6 79.4 73.1 90.6 75.2 69.4 StratifiedFormer (Lai et al. 2022) 91.5 78.1 72.0 Point TransformerV2 (Wu et al. 2022) 91.1 77.9 71.6 GAM (Hu et al. 2023) 90.6 83.2 74.4 PointNet++ (Qi et al. 2017b) 81.0 67.1 54.5 83.0 53.5 PointNeXt-L (Qian et al. 2022) 89.8 82.2 73.9 90.0±0.1 69.0±0.5 PointNeXt-XL (Qian et al. 2022) 90.3 83.0 74.9 90.6±0.1 70.5±0.3 PointMetaBase-L (Lin et al. 2023) 90.6 75.6 90.5±0.1 69.5±0.3 PointMetaBase-XXL (Lin et al. 2023) 91.3 77.0 90.8±0.6 71.3±0.7 PDNet-L (ours) 91.4 85.5 76.7 90.7 77.1 70.8 PDNet-XXL (ours) 91.9 86.2 78.3 91.3 78.1 72.3 Table 2: Semantic segmentation results on S3DIS (6-Fold and Area 5). OA is the overall accuracy (%), mAcc is the mean of class accuracy (%), and mIoU is the mean of instance IoU (%). Input Ground truth PDNet PointMetaBase Figure 4: Visual comparison between MLP-based networks, PointMetaBase and our PDNet. Shape Classification. We first conduct experiments on a real-world shape classification dataset ScanobjectNN (Uy et al. 2019). ScanObjectNN contains approximately 15,000 objects, which have 2902 unique instances that are categorized into 15 classes. We choose the hardest perturbed variant (PB T50 RS) and report the overall accuracy (OA) and the mean of class accuracy (mAcc) results. As shown in Table 1, our PDNet outperforms all baselines with the mAcc of 86.8% and OA of 88.5%. It shows point normals provide geometric information, and applying ENE helps improve the representation ability of the model. Semantic Segmentation. We also validate our PDNet on widely used Stanford Large-Scale 3D Indoor Spaces (S3DIS) (Armeni et al. 2016) dataset for semantic segmentation task. S3DIS is a challenging benchmark that conFigure 5: Visualization results on ShapeNetPart. tains 271 rooms with 13 semantic categories in 6 areas. We report the OA, the mAcc, and the mean of instance IoU (mIoU) results of standard 6-fold cross-validation and Area-5 on S3DIS. As illustrated in Table 2, Our PDNetXXL outperforms all baselines with the OA of 91.9%, mAcc of 86.2%, and mIoU of 78.3% on S3DIS 6-Fold and OA of 91.3%, mAcc of 78.1%, and mIoU of 72.3% on S3DIS Area-5. Notably, the superior performance over powerful point Transformer architectures (StratifiedFormer and Point TransformerV2) shows the potential of MLP-based methods in point cloud analysis. Compared with recent MLP-based networks (PointNeXt and PointMetaBase), our PDNet-L gains +2.8% and +1.1% improvement in mIoU on S3DIS 6-Fold, respectively. 
Consistent progress is obtained when scaling up the models. It demonstrates the importance of long-range dependency in the point semantic segmentation task and the effectiveness of our method in aggregating information from deformable reference points conditioned on the input points. We also provide visualization of semantic segmentation results in Figure 4, which clearly shows the superiority of our approach. Due to the direct modeling of long-range dependency, our method can recognize the objects in red circles while others fail. Part Segmentation. ShapeNetPart (Yi et al. 2016) is an object-level dataset for part segmentation. It contains 16,880 models with 16 different shape categories. Each category The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6743 Method (time order) c. mIoU i. mIoU PointNet (Qi et al. 2017a) 80.4 83.7 PointNet++ (Qi et al. 2017b) 81.9 85.1 SO-Net (Li, Chen, and Lee 2018) 84.6 PointCNN (Li et al. 2018) 84.6 86.1 DGCNN (Wang et al. 2019) 82.3 85.1 KPConv (Thomas et al. 2019) 85.1 86.4 PCT (Guo et al. 2021) 86.4 PAConv (Xu et al. 2021a) 84.6 86.1 AdaptConv (Zhou et al. 2021) 83.4 86.4 GDANet (Xu et al. 2021b) 85.0 86.5 Point Trans. (Zhao et al. 2021) 83.7 86.6 PointMLP (Ma et al. 2022) 84.6 86.1 PointNeXt (Qian et al. 2022) 85.4 87.1 GAM (Hu et al. 2023) 87.0 Point-PN (Zhang et al. 2023) 86.6 PointMetaBase (Lin et al. 2023) 85.4 87.0 PDNet (ours) 85.4 87.2 Table 3: Part segmentation results on ShapeNetPart. PLAM PDAM NE mIoU Params FLOPs ✓ 69.5±0.3 2.7 2.0 ✓ 70.1±0.2 3.6 2.1 ✓ ✓ 69.7±0.3 2.7 2.0 ✓ ✓ 70.4±0.3 4.9 2.4 ✓ ✓ ✓ 70.6±0.2 4.9 2.4 Table 4: Evaluation of proposed components on S3DIS Area-5. mIoU is the mean of IoU (%). has 2-6 parts and up to 50 part labels in total. We evaluate the performance with the mean of class IoU (c. mIoU) and the mean of instance IoU (i. mIoU) in Table 3. PDNet also achieves the best performance of 85.4% in cls. mIoU and 87.2% in mIoU. Visualization of part segmentation results are presented in Figure 5. Ablation Studies Effectiveness of Proposed Components. We evaluate the performance of proposed components of PDNet-L in Table 4 with mean±std in three random runs. PLAM and PDAM in the first and second column of Table 4 represents whether to use them in the third and fourth stages of PDNet-L. The results show that using PDAM to aggregate information from the deformable reference regions is better than adopting PLAM to aggregate information within local point groups. It demonstrates that the network prefers to learn global relations in the deep stages. In the case of using both PLAM and PDAM, it obtains better result than only use PLAM or PDAM. Our PDNet-L with all the proposed components achieves the best performance of 70.6±0.2% in mIoU. PDAM. We first explore adopting Point Deformable Aggregation Module (PDAM) at different stages. As shown in Table 5, only adopting PDAM in the last stage improves by 0.4% and applying it in the last two stages leads to the best performance of 70.8% in mIoU. However, using PDAM at Stage2 Stage3 Stage4 mIoU (%) 69.7 ✓ 70.1 ✓ ✓ 70.8 ✓ ✓ ✓ 69.9 Table 5: Ablation study on applying PDAM in different stages on S3DIS Area-5. method mIoU (%) random 70.1 center 70.6 FPS 70.8 successive 70.4 parallel 70.8 Table 6: Ablation study on different types of initial reference points and ablation study on combining strategy of PDAM and PLAM on S3DIS Area-5. 
the early stage obtains decreasement in mIoU, which reveals the cues that our network performs better when adopting Point Local Aggregation Module (PLAM) in the early stages to capture local geometries and PDAM in the deeper stages to model long-range dependencies. We also investigate several types of initializing methods for the reference points on S3DIS Area-5. The results are presented in Table 6, which suggests that using FPS to acquire the initial reference point based on the input point positions is superior to random initialization. Moreover, using the center of the input points as the initial status for all the initial reference points performs worse than considering each reference point independently to be uniformly distributed in space. We further conduct an ablation study on combining strategy of PDAM and PLAM in Table 6. For 3D point clouds, applying PLAM and PDAM to aggregate information from local and distant regions at the same time is better than widely used successively designed (Chu et al. 2021; Yang et al. 2022; Xia et al. 2022) in 2D images that adopt PDAM to model long-range relations after capturing local features by PLAM. Conclusion In this paper, we propose PDNet, a concise MLP-based network for point cloud processing. Equipped with Point Deformable Aggregation Module (PDAM), our model achieves both long-range dependency and adaptive spatial aggregation in a data-dependent way. For each query point, PDAM aggregates information from deformable reference points, which are initialized according to the point positions and then shifted via additional offsets and modulation scalars conditioned on the input point features. Enhanced Normal Embedding further helps improve the representation ability of point itself. Extensive experiments and ablation studies illustrate the effectiveness of PDNet over various tasks. We hope our work can inspire insights toward exploring suitable deformable mechanisms for point clouds. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6744 Acknowledgments This work was supported in part by the National Natural Science Foundation of China under Grant 62372348, Grant 62036007, Grant U22A2096, and Grant U21A20514, in part by the Shaanxi Outstanding Youth Science Fund Project under Grant 2023-JC-JQ-53, in part by the Technology Innovation Leading Program of Shaanxi under Grant 2022QFY0115, in part by the Fundamental Research Funds for the Central Universities under Grant QTZX23042, in part by Open Research Projects of Zhejiang Laboratory under Grant 2021KG0AB01. References Armeni, I.; Sener, O.; Zamir, A. R.; Jiang, H.; Brilakis, I.; Fischer, M.; and Savarese, S. 2016. 3d semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1534–1543. Cheng, S.; Chen, X.; He, X.; Liu, Z.; and Bai, X. 2021. Pranet: Point relation-aware network for 3d point cloud analysis. IEEE Transactions on Image Processing, 30: 4436– 4448. Chu, X.; Tian, Z.; Wang, Y.; Zhang, B.; Ren, H.; Wei, X.; Xia, H.; and Shen, C. 2021. Twins: Revisiting the design of spatial attention in vision transformers. Advances in Neural Information Processing Systems, 34: 9355–9366. Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; and Wei, Y. 2017. Deformable convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, 764–773. Goyal, A.; Law, H.; Liu, B.; Newell, A.; and Deng, J. 2021. 
Revisiting point cloud shape classification with a simple and effective baseline. In International Conference on Machine Learning, 3809–3820. PMLR. Guo, M.-H.; Cai, J.-X.; Liu, Z.-N.; Mu, T.-J.; Martin, R. R.; and Hu, S.-M. 2021. Pct: Point cloud transformer. Computational Visual Media, 7: 187–199. Hu, H.; Fanyi, W.; Jingwen, S.; Hongtao, Z.; Yaonong, W.; Laifeng, H.; Yanhao, Z.; and Zhiwang, Z. 2023. GAM : Gradient Attention Module of Optimization for Point Clouds Analysis. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 835–843. Lai, X.; Liu, J.; Jiang, L.; Wang, L.; Zhao, H.; Liu, S.; Qi, X.; and Jia, J. 2022. Stratified transformer for 3d point cloud segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8500–8509. Li, J.; Chen, B. M.; and Lee, G. H. 2018. So-net: Selforganizing network for point cloud analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 9397–9406. Li, Y.; Bu, R.; Sun, M.; Wu, W.; Di, X.; and Chen, B. 2018. Pointcnn: Convolution on x-transformed points. Advances in Neural Information Processing Systems, 31. Lin, H.; Zheng, X.; Li, L.; Chao, F.; Wang, S.; Wang, Y.; Tian, Y.; and Ji, R. 2023. Meta Architecture for Point Cloud Analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17682–17691. Liu, Y.; Fan, B.; Xiang, S.; and Pan, C. 2019. Relationshape convolutional neural network for point cloud analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8895–8904. Loshchilov, I.; and Hutter, F. 2018. Decoupled Weight Decay Regularization. In International Conference on Learning Representations. Ma, X.; Qin, C.; You, H.; Ran, H.; and Fu, Y. 2022. Rethinking network design and local geometry in point cloud: A simple residual MLP framework. arXiv preprint arXiv:2202.07123. Maturana, D.; and Scherer, S. 2015. Voxnet: A 3d convolutional neural network for real-time object recognition. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 922–928. IEEE. Mitra, N. J.; and Nguyen, A. 2003. Estimating surface normals in noisy point cloud data. In Proceedings of the Nineteenth Annual Symposium on Computational Geometry, 322–328. Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017a. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 652–660. Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017b. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30. Qian, G.; Hammoud, H.; Li, G.; Thabet, A.; and Ghanem, B. 2021. Assanet: An anisotropic separable set abstraction for efficient point cloud representation learning. Advances in Neural Information Processing Systems, 34: 28119–28130. Qian, G.; Li, Y.; Peng, H.; Mai, J.; Hammoud, H.; Elhoseiny, M.; and Ghanem, B. 2022. Pointnext: Revisiting pointnet++ with improved training and scaling strategies. Advances in Neural Information Processing Systems, 35: 23192–23204. Ran, H.; Liu, J.; and Wang, C. 2022. Surface representation for point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18942– 18952. Su, H.; Maji, S.; Kalogerakis, E.; and Learned-Miller, E. 2015. Multi-view convolutional neural networks for 3d shape recognition. 
In Proceedings of the IEEE International Conference on Computer Vision, 945–953. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; and Wojna, Z. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2818–2826. Tang, L.; Zhan, Y.; Chen, Z.; Yu, B.; and Tao, D. 2022a. Contrastive boundary learning for point cloud segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8489–8499. Tang, Y.; Qian, Y.; Zhang, Q.; Zeng, Y.; Hou, J.; and Zhe, X. 2022b. WarpingGAN: Warping multiple uniform priors for adversarial 3D point cloud generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6397–6405. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6745 Thomas, H.; Qi, C. R.; Deschaud, J.-E.; Marcotegui, B.; Goulette, F.; and Guibas, L. J. 2019. Kpconv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 6411–6420. Uy, M. A.; Pham, Q.-H.; Hua, B.-S.; Nguyen, T.; and Yeung, S.-K. 2019. Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1588–1597. Wang, C.; Samari, B.; and Siddiqi, K. 2018. Local spectral graph convolution for point set feature learning. In Proceedings of the European Conference on Computer Vision (ECCV), 52–66. Wang, W.; Dai, J.; Chen, Z.; Huang, Z.; Li, Z.; Zhu, X.; Hu, X.; Lu, T.; Lu, L.; Li, H.; et al. 2023. Internimage: Exploring large-scale vision foundation models with deformable convolutions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14408–14419. Wang, X.; Girshick, R.; Gupta, A.; and He, K. 2018. Nonlocal neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7794– 7803. Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S. E.; Bronstein, M. M.; and Solomon, J. M. 2019. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (ToG), 38(5): 1–12. Wu, W.; Qi, Z.; and Fuxin, L. 2019. Pointconv: Deep convolutional networks on 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9621–9630. Wu, X.; Lao, Y.; Jiang, L.; Liu, X.; and Zhao, H. 2022. Point transformer v2: Grouped vector attention and partitionbased pooling. Advances in Neural Information Processing Systems, 35: 33330–33342. Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; and Xiao, J. 2015. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1912–1920. Xia, Z.; Pan, X.; Song, S.; Li, L. E.; and Huang, G. 2022. Vision transformer with deformable attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4794–4803. Xu, M.; Ding, R.; Zhao, H.; and Qi, X. 2021a. Paconv: Position adaptive convolution with dynamic kernel assembling on point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3173– 3182. Xu, M.; Zhang, J.; Zhou, Z.; Xu, M.; Qi, X.; and Qiao, Y. 2021b. Learning geometry-disentangled representation for complementary understanding of 3d object point cloud. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 3056–3064. 
Yang, C.; Qiao, S.; Yu, Q.; Yuan, X.; Zhu, Y.; Yuille, A.; Adam, H.; and Chen, L.-C. 2022. MOAT: Alternating Mobile Convolution and Attention Brings Strong Vision Models. In International Conference on Learning Representations. Yang, Z.; Sun, Y.; Liu, S.; Qi, X.; and Jia, J. 2020. Cn: Channel normalization for point cloud recognition. In Proceedings of the European Conference on Computer Vision (ECCV), 600–616. Yi, L.; Kim, V. G.; Ceylan, D.; Shen, I.-C.; Yan, M.; Su, H.; Lu, C.; Huang, Q.; Sheffer, A.; and Guibas, L. 2016. A scalable active framework for region annotation in 3d shape collections. ACM Transactions on Graphics (ToG), 35(6): 1–12. Yue, X.; Sun, S.; Kuang, Z.; Wei, M.; Torr, P. H.; Zhang, W.; and Lin, D. 2021. Vision transformer with progressive sampling. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 387–396. Zhang, R.; Wang, L.; Wang, Y.; Gao, P.; Li, H.; and Shi, J. 2023. Parameter is not all you need: Starting from non-parametric networks for 3d point cloud analysis. arXiv:2303.08134. Zhao, H.; Jiang, L.; Jia, J.; Torr, P. H.; and Koltun, V. 2021. Point transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 16259–16268. Zhou, H.; Feng, Y.; Fang, M.; Wei, M.; Qin, J.; and Lu, T. 2021. Adaptive graph convolution for point cloud analysis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 4965–4974. Zhu, X.; Hu, H.; Lin, S.; and Dai, J. 2019. Deformable convnets v2: More deformable, better results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9308–9316. Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2020. Deformable DETR: Deformable Transformers for End-to-End Object Detection. In International Conference on Learning Representations. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6746 | 2024 | 749 |
18,572 | Data Augmented Graph Neural Networks for Personality Detection Yangfu Zhu, Yue Xia, Meiling Li, Tingting Zhang, Bin Wu∗ Beijing University of Posts and Telecommunications, Beijing, China zhuyangfu,meilinglee,zhangtingting,[email protected], [email protected] Abstract Personality detection is a fundamental task for user psychology research. One of the biggest challenges in personality detection lies in the quantitative limitation of labeled data collected by completing the personality questionnaire, which is very time-consuming and labor-intensive. Most of the existing works are mainly devoted to learning the rich representations of posts based on labeled data. However, they still suffer from the inherent weakness of the amount limitation of labels, which potentially restricts the capability of the model to deal with unseen data. In this paper, we construct a heterogeneous personality graph for each labeled and unlabeled user and develop a novel psycholinguistic augmented graph neural network to detect personality in a semi-supervised manner, namely Semi-PerGCN. Specifically, our model first explores a supervised Personality Graph Neural Network (PGNN) to refine labeled user representation on the heterogeneous graph. For the remaining massive unlabeled users, we utilize the empirical psychological knowledge of the Linguistic Inquiry and Word Count (LIWC) lexicon for multi-view graph augmentation and perform unsupervised graph consistent constraints on the parameters shared PGNN. During the learning process of finite labeled users, noise-invariant learning on a large scale of unlabeled users is combined to enhance the generalization ability. Extensive experiments on three real-world datasets, Youtube, PAN2015, and MyPersonality demonstrate the effectiveness of our Semi-PerGCN in personality detection, especially in scenarios with limited labeled users. Introduction Personality is the overall characteristics and manifestations of an individual in terms of their psychology and behavior (Fang et al. 2023). Personality detection aims to identify the personality traits implied in social media posts that offering a deeper insight into human behavior (Nutescu and Mocanu 2023), emotional processes (Lian, Liu, and Tao 2022), and mental health (Zanwar et al. 2023). Besides, it can provide timely and objective support for downstream applications, such as human–computer interaction systems (Chien, Chen, and Chan 2022), virtual dialogue systems (Yang, Chen, and Narasimhan 2021), and recommendation systems (Yang et al. 2022; Shen et al. 2020). ∗Corresponding author Copyright c⃝2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Early researchers primarily combined two sources of psychological lexicon such as Linguistic Inquiry Word Count (LIWC) (Tausczik and Pennebaker 2010) and Medical Research Council (MRC) (Coltheart 1981) to manually find statistical word usage patterns for identifying personality traits from the text. With the blossoming of social media, users are posting massive content daily that reveals their psychological activities, providing new opportunity for automatically inferring traits. Subsequently, Deep Neural Networks (DNNs) were employed to learn meaningful representations of posts to detect personality (Kampman et al. 2018; Sun et al. 2018). However, understanding the traits behind the posts is non-trivial. 
Recently, a line of efforts focused on the structure of posts to dig deeper into the relationship between language and personality traits, including using hierarchical attention network (Lynn, Balasubramanian, and Schwartz 2020), constructing a heterogeneous tripartite graph with psycholinguistic information (Yang et al. 2021b), learning dynamic graph neural networks for post set (Yang et al. 2023a). Despite considerable progress in personality detection, existing models still suffer from the inherent weakness of the amount limitation of labels. Modern trait theory (John, Robins, and Pervin 2010) tries to model the personality by several dimensions and construct a questionnaire to measure their ground-truth traits. As a famous personality indicator, the Big Five personality inventory typically includes 50 or more items to answer, which is very time-consuming and requires a lot of human resources. Furthermore, due to privacy concerns, people are less willing to share personal trait information on the Internet. Hence, insufficient trained DNNs may limit the inference of personality from posts. The question of how to accurately recognize personality with limited labeled data remains unresolved. To address the above issues, we similarly start from the structure of user-generated documents and propose psycholinguistic data augmented graph neural network to detect personality in a semi-supervised manner, called SemiPerGCN. Specifically, we construct a heterogeneous personality graph for each labeled and unlabeled user, which includes three kinds of nodes, i.e., user nodes, word nodes, and LIWC category nodes. Then a personality graph convolution network with path-specific attention is employed to refine labeled user representation on a heterogeneous graph. For unThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 664 labeled users, we utilize the empirical psychological knowledge of the LIWC lexicon for multi-view graph augmentation, and perform unsupervised graph consistent learning on the parameters shared personality graph neural network. By incorporating noise invariant learning on large-scale unlabeled data during the learning process for finite labeled data, our Semi-PerGCN is more generalized for unseen data. In summary, the contributions of this work can be summarized as follows: • To the best of our knowledge, this is the first effort to utilize the inherent signals of massive unlabeled data to promote finite labeled personality detection, which provides a new perspective to alleviate the dilemma of training data for such data-hungry tasks. • We propose a novel psycholinguistic augmented graph neural network, for which based on heterogeneous graphs constructed by labeled and unlabeled users, supervised personality detection and unsupervised graph consistency learning are co-trained to avoid noise and improve generalizability. • We conduct extensive experiments to demonstrate the effectiveness of Semi-PerGCN on three representative datasets (i.e., Youtube, PAN2015, and MyPersonality) for personality detection. Related Work Personality Detection In the field of psychology, most personality research focused on the Big Five, which encompasses five broad dimensions that describe human personality and psychology in a common language (Digman 1990). Manual personality scale measures are the standard methodology today, but they are difficult to meet the demands of large-scale investigations. 
Recently, personality computing has attracted the attention of psychologists and computer scientists due to its wide range of application scenarios. The scope of related research extends from early linguistic feature analysis to current deep learning methods. Early researchers utilized psycholinguistic statistics features such as LIWC (Tausczik and Pennebaker 2010), Mairesse (Mehta et al. 2020a), and MRC (Tausczik and Pennebaker 2010) to assess personality, as they believe that personality traits affect language use patterns. With the rapid development of deep learning, DNNs are applied to personality detection task and achieve great success, such as Convolutional Neural Networks (CNNs) (Kampman et al. 2018), Recurrent Neural Networks (RNNs) (Sun et al. 2018), and Transformer (Yang et al. 2021a). Benefiting from largescale pre-trained language models, A line of pretraining finetuning paradigm are also explored on this tasks, such as fine-tuning the BERT (Jiang, Zhang, and Choi 2020) and personality-specific prompt-tuning (Wen et al. 2023). Furthermore, another line of approaches focuses on the structure of user-generated documents. Hierarchical structure model incrementally aggregates documents from the post level and the user level (Lynn, Balasubramanian, and Schwartz 2020). Subsequently, TrigNet (Yang et al. 2021b) considers that there is a psycholinguistic structure between posts and integrates the information in the LIWC dictionary and posts by message-passing based graph neural network. D-DGCN (Yang et al. 2023a) holds a different view that structure between user-generated posts is agnostic and utilizes dynamic graph neural network to automatically learns the structure between posts. The above methods mainly focus on how to obtain a meaningful post representation of users and rarely pay attention to the limitation of labeled data, which may restrict the generalization ability of the model when meet unseen data. Graph Structure Learning on Text Although the text is usually modeled as serialized tokens in Natural Language Processing (NLP) field, there are a large number of tasks that can be better modeled using graphs. In recent years, different GNN models have been applied to NLP tasks with great success. In these studies, how to build graphs to better capture textual information had attracted extensive attention. TextGCN (Yao, Mao, and Luo 2019) constructs a global heterogeneous graph containing word and document nodes for text classification. Then, an independent graph is constructed in (Huang et al. 2019) to represent each document, which greatly reduces the model’s memory requirements and dependence on the corpus. TextING (Zhang et al. 2020) further points out that in a document-level graph, words in different texts should not share the same representation but should be trained separately. This method is obviously more suitable for inductive learning. Returning to our task, the above subsection mentioned TrigNet and D-DGCN exactly proves the feasibility of GNNs in modeling at the document structure level for personality detection. Unlike TrigNet and D-DGCN, we represent user information by building co-occurring connections of words in different posts, which enhances the semantic learning of words. More importantly, we leverage massive unlabeled data for unsupervised training to stretch the upper bound of supervised detection models. 
Consistency Regularization Our work is also closely related to consistency regularization, which is used to smooth the output distribution, which is beneficial to model performance, even if the input data changes slightly, the output of the model can basically remain unchanged (Yang et al. 2023b). The consistency constraints of existing methods include the results of two forward operations for each sample participating in training: current model prediction results and historical predictions results (Tarvainen and Valpola 2017), model prediction results and prediction results after adding adversarial noise (Araslanov and Roth 2021), and model prediction results and prediction results after data augmentation (Zhao and Yao 2022). These methods are widely used in semisupervised classification tasks, proving the effectiveness of using consistent regularization terms to extract information from unlabeled data. Based on the above successful enlightenment, we propose a personality graph neural network with consistent regularization for personality detection and make The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 665 full use of the psychological knowledge to construct the disturbance agreement item, as well as extract the implicit personality information from the unlabeled data. Preliminaries Personality detection task can be regarded as a multidocument multi-dimensional regression problem. Mathematically, given a set of posts P = {p1, p2...pn} of a user x, where pi = {w1 i , w2 i ...wk i } is the i-th post with k words, our goal is to learn a representation mapping function F : P = {p1, p2...pn} →Y to score t-dimensional personality traits intensity Y = {y1,y2...yt} for this user based on the posts. In this paper, we model user-generated documents as heterogeneous graphs over three different types of nodes, including user nodes, word nodes, and LIWC nodes. For each user, the heterogeneous graph can be represented as G = (V, E), where V = Vu ∪Vw ∪Vl is a set of nodes and E represents the edges between nodes, Vu represents the user nodes, Vw = {w1, w2, ..., wp} denotes p words that appear in the posts and Vl = {l1, l2, ..., lq} is the q categories selected from LIWC 2015 dictionary. For the edges between word-user and word-LIWC are constructed undirected, indicating that words belong to the user or LIWC category, while for word-word, we simply use sliding windows to find their co-occurrence relationship and connect them without direction. And then based on the heterogeneous graph, we propose a semi-supervised graph neural network (SemiPerGCN) for personality detection. Methodology The architecture of our proposed Semi-PerGCN is shown in Figure 1, which consists of two components, i.e., supervised graph neural networks for personality detection and unsupervised augmented graph consistency learning. The personality graph neural network aims to learn rich user representations on heterogeneous user graphs via precious traits scores while the consistency learning component is to utilize unlabeled data for learning a more generalized model on such data-hungry task. Personality Graph Neural Networks Upon each personality graph, we employ Graph Convolutional Networks (GCNs) (Kipf and Welling 2016) to refine user presentation from both the global structure and the specific psycholinguistic structure of the heterogeneous user graph respectively. 
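Before detailing the convolution and attention layers, the sketch below shows one way to assemble the heterogeneous personality graph defined in the Preliminaries: user, word, and LIWC-category nodes connected by membership edges and sliding-window word co-occurrence edges. The toy posts, the two-category mini-lexicon, and the window size are illustrative assumptions and not the LIWC 2015 resource.

```python
from itertools import combinations

def build_personality_graph(posts, liwc_lexicon, window=3):
    """Return node lists and undirected edges for one user's heterogeneous graph."""
    words = sorted({w for post in posts for w in post.split()})
    nodes = ["USER"] + words + [f"LIWC:{c}" for c in liwc_lexicon]
    edges = set()
    for w in words:                                   # word-user membership edges
        edges.add(("USER", w))
    for cat, vocab in liwc_lexicon.items():           # word-LIWC category edges
        edges.update((w, f"LIWC:{cat}") for w in words if w in vocab)
    for post in posts:                                # word-word co-occurrence edges
        tokens = post.split()
        for start in range(len(tokens)):
            for a, b in combinations(tokens[start:start + window], 2):
                if a != b:
                    edges.add(tuple(sorted((a, b))))
    return nodes, sorted(edges)

# Toy example with a two-category mini-lexicon (not the real LIWC dictionary).
posts = ["i feel happy today", "my friend makes me happy"]
lexicon = {"posemo": {"happy"}, "social": {"friend"}}
nodes, edges = build_personality_graph(posts, lexicon)
print(len(nodes), len(edges))
```

The adjacency matrix of this graph, with added self-loops, is what the GCN layers described next operate on.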
Specifically, each convolutional layer in a GCN processes first-order neighborhood information, and multi-hop neighborhood information can be propagated by stacking several convolutional layers. The propagation and transformation are as follows:

$$X^{k+1} = \sigma(\hat{A}X^{k}W^{k}), \quad (1)$$

where $\hat{A} = D^{-\frac{1}{2}}(A + I)D^{-\frac{1}{2}}$ is the normalized symmetric adjacency matrix, $W^{k} \in \mathbb{R}^{n\times n}$ is a weight matrix, and $\sigma$ is the activation function.

Formally, the constructed personality graph is $G = (V, E, A, X)$, where $X$ represents the initial embeddings of the three types of nodes. We use BERT to initialize the word nodes $V_w$ and randomly initialize the user nodes $V_u$ and LIWC nodes $V_l$. $A$ is the adjacency matrix of the heterogeneous personality graph $G$. We first update all node representations with a two-layer GCN as follows:

$$X^{1} = \sigma(\hat{A}X^{0}W^{0}), \quad (2)$$
$$H = \sigma(\hat{A}X^{1}W^{1}), \quad (3)$$

After two layers of iteration, we obtain the user representation $H_u$ and the LIWC node representations $H_l$ under the global structure. To further highlight the psychological structure of user-generated documents, an attention mechanism on the specific user-LIWC path is leveraged to aggregate summarized psycholinguistic information. As shown in Figure 2, $H_u$, the representation of a user, is used as the attention query and $[H_l^1, H_l^2, \ldots, H_l^q]$ as the keys; the attention mechanism can be described as follows:

$$\beta_i = \delta(W_z[W_uH_u \,\|\, W_lH_l^i]), \quad (4)$$
$$\alpha_i = \frac{\exp(\beta_i)}{\sum_{i=1}^{p}\exp(\beta_i)}, \quad (5)$$
$$\hat{H}_u = \tanh\Big(\sum_{i=1}^{p}\alpha_i W_v H_l^i\Big) + H_u, \quad (6)$$

where $W_z$, $W_u$, and $W_l$ are learnable linear transformation matrices, $\delta$ is the LeakyReLU activation function, and $\alpha_i$ is the attention weight for $H_l^i$. $\hat{H}_u$ is the weighted combination of the LIWC nodes and the user node $V_u$ itself. Finally, the refined representation $\hat{H}_u$ of each labeled user $x$ is fed into a linear layer and a sigmoid layer to train the supervised personality assessment model:

$$y_d = p_\theta(y \mid x) = \mathrm{Sigmoid}\big(W_d\hat{H}_u + b_d\big), \quad (7)$$

where $W_d$ is a trainable weight matrix of the detection component with an output dimension of 5, in accordance with the Big Five personality traits.

Figure 1: The overall architecture of our Semi-PerGCN, which consists of two components: (A) the supervised personality graph neural network component learns rich user representations on the constructed heterogeneous user graphs via the precious trait scores, and (B) the unsupervised augmented graph consistency learning component utilizes unlabeled data to help learn a more generalized model on such a data-hungry task.

Figure 2: Learning user representation with LIWC information by the attention mechanism.

Unsupervised Augmented Graph Consistency Learning
We assume that a good detection model should be robust to any small change in the input examples and generalize well to unseen data. Hence, we make full use of the large-scale unlabeled users through heterogeneous graph augmentations and design an unsupervised data consistency regularization for model training. In a nutshell, consistency training methods simply regularize model predictions to be invariant to small noise applied to input examples. Specifically, for any user $x_u$ in the unlabeled data, we compute the output distribution $p_\theta(y \mid x_u)$ with the supervised personality graph neural network described in the above subsection, and then compute the prediction for the noised version, $p_\theta(y \mid (x_u, \varepsilon))$, obtained by injecting a small noise $\varepsilon$.
Subsequently, the divergence metric between the two distributions D (pθ(y | xu)||pθ(y | xu, (xu, ε))) is minimized by Cross Entrop (CE) loss: Lc = CE (pθ(y | xu)||pθ(y | x, (xu, ε))) , (8) where θ is a copy of the current parameters Wk, Wz, Wu, Wl, Wd of supervised detection modal indicating that the gradient is not propagated through θ. This approach ensures that the model is less affected by noise, resulting in a smoother response when there are variations in the input. Alternatively, by minimizing the consistency loss, the model gradually transfers label information from labeled examples to unlabeled ones. However, how to construct a suitable sample pair (xu, (xu, ε)) is the key to the effect of consistency learning. As described in the classic unsupervised data augmentation method UDA (Zhang et al. 2021), more diverse and naturally more advanced data enhancements can lead to significant performance gains in supervised settings. Following this idea, we explored ways to augment data on this specific personality detection task. We are inspired by the lexical hypothesis of personality (Galton 1884), which posits that traits of personality are revealed by the descriptive vocabulary of human language. For a given user sample xu, we randomly choose one of the following two operations to get (xu, ε), as illustrated in Figure 3: • LIWC Synonym Replacement (SR): Replace each psychology word with synonyms randomly. The psychology words belong to the same category in the LIWC dictionary as synonyms. • Randomly Deletion (RD): Randomly removes some nonpsychological words from user-generated documents. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 667 Figure 3: Two data enhancement methods on the personality graph, i.e., word node replacement (left side) and word node deletion (right side). In our personality graph neural network, words belonging to the same category are simultaneously connected to the same LIWC node. It can be regarded that performing synonym replacement in the text as adding noise to the attributes of word nodes in the graph. Similarly, randomly deleting words can be viewed as a slight disturbance to the structure of the graph. Therefore, we can regard the conversion method of xu to (xu, ε) as the data augmentation of the text, or as the addition of weak noise to the graph. Objective Function Following previous work, we minimize the objective function for the supervised detection process with the traditional MSE loss. The distance between the predicted personality score yd and the ground-truth personality traits for the given data (x, y) minimize as: Ld = MSE(yd, y), (9) We jointly trained supervised detection and unsupervised detection under consistency constraints with labeled examples and large-scale unlabeled data. This auxiliary unsupervised detection task is included to help learn the invariant for the input noise, and the two tasks share the same parameters. The final optimization is to minimize the supervised MSE loss and the unsupervised consistency training loss, respectively. L = Ld + λLc, (10) where λ is a balance hyperparameter. In the actual training process, x represents the training data in the current mini-batch, and xu is randomly selected from the unlabeled dataset. Experiments Experimental Settings Datasets. Following previous studies (Mehta et al. 
2020b), we conduct experiments on the Youtube Personality, Datasets User numbers Post numbers Youtube 404 3,205 PAN2015 294 27,344 MyPersonality 10,000 1,136,153 Table 1: Statistics of datasets. PAN2015, and Mypersonality datasets with Big Five taxonomy. The number of users and posts of different datasets are shown in Table 1. • Youtube (Biel et al. 2013): consists of a collection of speech transcriptions with their Big Five personality scores which range from 1 to 7 1. The labels of this dataset are collected from the crowd-sourced annotation task. Annotators watch each video blog and then rated Big Five personality scores with a questionnaire. • PAN2015 (Rangel Pardo et al. 2015): collected from the data science competition PAN2015 2 and includes four languages datasets. We choose English data and their Big Five personality scores from -0.5 to 0.5. • MyPersonality (Celli et al. 2013): collected from a Facebook application 3. The data we use here is mainly collected from this work (Xue et al. 2018) and their Big Five personality scores from 0 to 5. Evaluation Metrics. Following previous works, we choose MAE as our evaluation metric in each personality trait and use the average of each personality dimension MAE to measure their overall performance, formulated as: MAE = 1 N N X i=1 |yi −ˆyi|, (11) where N denotes the number of samples, ˆyi and yi are the ground truth and the predicted personality trait scores of users. Baselines. We compare our Semi-PerGCN with three groups of baseline methods: 1) Psychological vocabularybased models; 2) Deep neural networks-based models 3) Document structure-based models, which can be categorized as follows: Psychological vocabulary-based models: LIWC: LIWC features contain 89 features extracted by LIWC API. They are fed into the SVM model to predict the Big-Five personality traits. The reason why we choose this comparison method is that it mainly focuses on the lexical features of the text. Deep neural networks-based models: TextCNN (Kampman et al. 2018): is a tri-modal architecture CNN to predict Big Five personality trait scores from video clips with different channels for audio, text, and video data. We use the CNN structure designed for text in their research. TextCNN+LIWC (Wei et al. 2017): concatenates 1https://www.idiap.ch/en/dataset/youtube-personality/ index html 2https://pan.webis.de/clef15/pan15-web/author-profiling.html 3http://mypersonality.org. 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 668 Datasets Traits Methods LIWC CNN CNN+LIWC 2CLSTM BERT TrigNet Attn D-DGCN Ours Youtube OPE 61.55 60.54 61.61 61.21 65.17 65.22 66.77 64.23 65.81 CON 57.25 62.65 57.34 62.63 59.98 56.87 57.98 55.87 55.01 EXT 90.84 86.83 88.68 86.62 86.28 85.02 84.91 84.30 85.24 AGR 61.09 65.66 62.61 66.08 64.56 66.94 62.11 60.15 56.34 NEU 64.18 62.39 64.71 66.94 61.81 61.88 61.77 61.46 58.64 AVE 66.98 67.61 66.99 68.70 67.56 66.79 66.71 65.20 64.21 PAN2015 OPE 13.88 13.91 13.62 14.13 13.14 13.44 12.78 12.81 12.84 CON 12.19 12.35 12.22 12.48 11.67 12.37 12.43 12.55 12.17 EXT 13.12 13.22 13.25 13.34 13.21 13.33 12.83 12.67 12.71 AGR 11.12 11.13 11.64 11.39 11.29 11.59 11.12 11.56 11.49 NEU 18.62 18.59 18.51 18.24 19.93 16.72 18.12 18.34 16.13 AVE 13.76 13.81 13.49 13.97 13.81 13.59 13.45 13.50 12.98 MyPersonality OPE 54.49 55.91 55.90 60.37 55.75 55.35 53.68 52.23 51.05 CON 54.97 58.49 57.33 61.41 57.24 56.48 55.78 55.57 54.38 EXT 62.48 63.70 63.46 68.88 63.93 64.18 61.21 60.96 60.42 AGR 56.08 56.80 56.84 60.65 56.67 56.86 55.80 55.37 55.01 NEU 65.73 66.91 66.57 67.55 66.22 66.34 65.97 65.34 64.80 AVE 58.75 60.36 60.02 63.77 59.96 59.84 58.49 57.90 57.13 Table 2: Performance comparison MAE (%) of our proposed Semi-PerGCN with baselines. textual semantic features with the LIWC features for personality prediction. It combines deep neural networks and personality lexicon features for detection. 2CLSTM (Sun et al. 2018): uses Bi-directional LSTM (Hochreiter and Schmidhuber 1997) and CNN to encode texts for detecting personality traits. BERT (Devlin et al. 2018): the fine-tuned BERT is firstly used to encode each post, and then mean pooling is performed over all posts to generate the user representation. Document structure-based models: Attn (Lynn, Balasubramanian, and Schwartz 2020): is a hierarchical structure model which uses word-level attention to encode each post and another post-level attention to generate user representation. TrigNet (Yang et al. 2021b): is a psycholinguistic knowledge-based tripartite graph network that transmitting messages between neighboring parties in the posts graph by the specific flow. D-DGCN (Yang et al. 2023a): is a dynamic deep graph convolutional network that automatically learns the structure between user-generated posts. Implementation Details. We use Pytorch to implement all the deep learning models on our three 2080Ti GPU cards. Empirically, we use a batch size of 16,16, and 64 for the labeled data and a batch size of 32, 32, and 112 for the unlabeled data in Youtube, PAN2015, and MyPersonality datasets respectively. Adam is utilized as the optimizer and the learning rate of our model is set to 0.0001, 0.0003, and 0.0003 in PAN2015, Youtube, and MyPersonality datasets respectively. The pre-trained language models BERT are employed to initialize the word node embeddings by the bert-base-cased (Devlin et al. 2018), and the dimensions of word nodes, LIWC nodes, and user nodes are set to 200. All the hyperparameters are tuned over the validation set to obtain the optimized results. Overall Results We compare our Semi-PerGCN to all baselines on the Youtube, PAN2015, and MyPersonality datasets. The MAE scores of different models are shown in Table 2. The major findings can be summarized as follows: • The proposed Semi-PerGCN performs the best on all datasets. 
In particular, compared with the state-of-the-art baseline method GRU+Attn, Semi-PerGCN achieves 1.51%, 3.49%, and 1.32% improvements in average MAE on the Youtube, PAN2015, and MyPersonality datasets, respectively. The results verify the effectiveness of our model in personality detection. We believe the reasons are two-fold: (1) Semi-PerGCN uses a large amount of unlabeled data for consistency learning, which helps learn better user representations and reduces the risk of overfitting on a small training set. (2) Rich user representations are well captured by supervised graph learning on the constructed heterogeneous user graphs.
• TrigNet performs slightly worse than the GRU+Attn model, especially on the PAN2015 dataset, where the redundancy of posts makes it difficult for TrigNet to obtain effective representations of the posts. Our model avoids this problem by directly modeling the user node. D-DGCN is essentially the second-best model on the three datasets, which suggests that the upper bound of the supervised models may be harder to approximate. Additionally, the results on the PAN2015 dataset are better than on the other datasets due to the category imbalance in PAN2015.
• We find that the deep neural network model 2CLSTM performs the worst on all datasets. We think that the bidirectional LSTM may not be suitable for capturing long dependencies in text. TextCNN performs slightly better than 2CLSTM due to its capability to aggregate the associated textual information.
• TextCNN+LIWC performs better than TextCNN, which shows the effectiveness of psycholinguistic domain knowledge. Surprisingly, the model that only contains LIWC information performs better than CNN+LIWC. LIWC can directly capture psycholinguistic features and provide the most useful information for personality prediction. The poor performance of the combined CNN+LIWC model may be due to a conflict between the features learned by the CNN and the LIWC features.

Traits | Semi-PerGCN | -Regularization | -Attention
OPE | 65.81 | 64.85 | 66.13
CON | 55.01 | 57.62 | 56.66
EXT | 85.24 | 85.32 | 81.98
AGR | 56.34 | 58.73 | 60.46
NEU | 58.64 | 62.50 | 61.24
AVE | 64.21 | 65.04 (↓1.27%) | 65.29 (↓1.65%)
Table 3: Results (Mean-MAE, %) of the ablation study of Semi-PerGCN on Youtube, where “–” denotes the removal of a component, i.e., “-Regularization” removes the data augmentation component and “-Attention” removes the knowledge from LIWC.

Experimental Analysis
In this section, we analyze the usefulness of each component used in Semi-PerGCN and the effect of different data augmentation strategies as well as of the amount of unlabeled data.
Ablation Study. We conduct an ablation study for our Semi-PerGCN model on the Youtube dataset to investigate the effects of unlabeled data and psycholinguistic knowledge. As shown in Table 3, the performance of the model drops noticeably after removing data augmentation or psycholinguistic knowledge. The performance drops by 1.27% when we remove the consistency loss of Eq. 10, which demonstrates the necessity of introducing external unlabeled data for data augmentation in our task. Besides, the performance drops by 1.65% when we eliminate the impact of the LIWC nodes and only use the user node representation after the interaction of the GCN layers. This shows the importance of psycholinguistic structure for personality detection.
Effect of Different Data Augmentation Methods.
We investigate the effect of our model under different data augmentation strategies, including LIWC Synonym Replacement(SR), Randomly Deletion (RD), and both. The performances of our model under different data augmentation methods and different hyperparameters λ values settings in Eq 10 are shown in Figure 4. Compared to RD, SR is a more effective data enhancement method. This demonstrates that the LIWC lexicon, collected empirically by psychologists, is a suitable source of data enhancement on personality detection. Meanwhile, combining the two enhancements can play a positive role and achieve the best effect in our graph Figure 4: Performance curves f of different data augmentation methods and different hyperparameter λ values. (a) Youtube (b) PAN2015 Figure 5: Results of adding different amounts of unlabeled data in Youtube and PAN2015 datasets. model. On the other hand, as the value of λ increases, the performance improves initially but eventually starts to decline beyond a certain point. This indicates that excessive unsupervised learning likewise weaken the model’s generalization ability. Effect of Different Amounts of Unlabeled Data. We also explore the effect of different amounts of unlabeled data on the prediction results of the model. We extracted 100, 400, and 1000 unlabeled data for consistency training on the Youtube and PAN2015 datasets. As shown in Figure 5, performance improves as the amount of unlabeled data increases. It is worth noting that the performance improvement from 400 to 1000 is slower than that from 100 to 400, suggesting that there is an upper limit for the model to extract information from unlabeled data. Conclusion In this paper, we propose a semi-supervised graph neural network (Semi-PerGCN) for personality prediction. SemiPerGCN expects to leverage the large amount of unlabeled data to help the model be insensitive to input noise. To this end, we construct a personality graph neural network that enhances generalization to unknown data by incorporating noise-invariant learning of large-scale unlabeled data during the learning process of limited labeled data. While the idea of data augmentation in a personality detection is validated, our implementation can be further improved by trying more metric learning methods such as contrastive learning to explore user representations. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 670 Acknowledgments This work is supported by the NSFC-General Technology Basic Research Joint Funds under Grant (U1936220), the National Natural Science Foundation of China under Grant (61972047, 62372060), and the BUPT Excellent Ph.D. Students Foundation (CX2022219). References Araslanov, N.; and Roth, S. 2021. Self-supervised augmentation consistency for adapting semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15384–15394. Biel, J.-I.; Tsiminaki, V.; Dines, J.; and Gatica-Perez, D. 2013. Hi YouTube! Personality impressions and verbal content in social video. In Proceedings of the 15th ACM on International conference on multimodal interaction, 119–126. Celli, F.; Pianesi, F.; Stillwell, D.; and Kosinski, M. 2013. Workshop on computational personality recognition: Shared task. In Proceedings of the International AAAI Conference on Web and Social Media, volume 7. Chien, S.-Y.; Chen, C.-L.; and Chan, Y.-C. 2022. The Influence of Personality Traits in Human-Humanoid Robot Interaction. 
Proceedings of the Association for Information Science and Technology, 59(1): 415–419. Coltheart, M. 1981. The MRC psycholinguistic database. The Quarterly Journal of Experimental Psychology Section A, 33(4): 497–505. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Digman, J. M. 1990. Personality structure: Emergence of the five-factor model. Annual review of psychology, 41(1): 417–440. Fang, Q.; Giachanou, A.; Bagheri, A.; Boeschoten, L.; van Kesteren, E.-J.; Shafiee Kamalabad, M.; and Oberski, D. 2023. On Text-based Personality Computing: Challenges and Future Directions. In Findings of the Association for Computational Linguistics: ACL 2023, 10861–10879. Galton, F. 1884. Measurement of character. Fortnightly Review, 36, 179-185. Galton17936Fortnightly Review1884. Hochreiter, S.; and Schmidhuber, J. 1997. Long short-term memory. Neural computation, 9(8): 1735–1780. Huang, L.; Ma, D.; Li, S.; Zhang, X.; and Wang, H. 2019. Text Level Graph Neural Network for Text Classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 3444–3450. Jiang, H.; Zhang, X.; and Choi, J. D. 2020. Automatic TextBased Personality Recognition on Monologues and Multiparty Dialogues Using Attentive Networks and Contextual Embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 13821–13822. John, O. P.; Robins, R. W.; and Pervin, L. A. 2010. Handbook of personality: Theory and research. Guilford Press. Kampman, O.; Barezi, E. J.; Bertero, D.; and Fung, P. 2018. Investigating Audio, Video, and Text Fusion Methods for End-to-End Automatic Personality Prediction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, 606–611. Kipf, T. N.; and Welling, M. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Lian, Z.; Liu, B.; and Tao, J. 2022. Pirnet: Personalityenhanced iterative refinement network for emotion recognition in conversation. IEEE Transactions on Neural Networks and Learning Systems, 1–12. Lynn, V.; Balasubramanian, N.; and Schwartz, H. A. 2020. Hierarchical modeling for user personality prediction: The role of message-level attention. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 5306–5316. Mehta, Y.; Fatehi, S.; Kazameini, A.; Stachl, C.; Cambria, E.; and Eetemadi, S. 2020a. Bottom-up and top-down: Predicting personality with psycholinguistic and language model features. In 2020 IEEE International Conference on Data Mining (ICDM), 1184–1189. IEEE. Mehta, Y.; Majumder, N.; Gelbukh, A.; and Cambria, E. 2020b. Recent trends in deep learning based personality detection. Artificial Intelligence Review. Nutescu, C.-I.; and Mocanu, M. 2023. Creating personality model using genetic algorithms and behavioral psychology. In 15th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), 01–04. Rangel Pardo, F. M.; Celli, F.; Rosso, P.; Potthast, M.; Stein, B.; and Daelemans, W. 2015. Overview of the 3rd Author Profiling Task at PAN 2015. In CLEF 2015 evaluation labs and workshop working notes papers, 1–8. Shen, T.; Jia, J.; Li, Y.; Ma, Y.; Bu, Y.; Wang, H.; Chen, B.; Chua, T.-S.; and Hall, W. 2020. 
Peia: Personality and emotion integrated attentive model for music recommendation on social media platforms. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 206–213. Sun, X.; Liu, B.; Cao, J.; Luo, J.; and Shen, X. 2018. Who am I? Personality detection based on deep learning for texts. In 2018 IEEE International Conference on Communications (ICC), 1–6. IEEE. Tarvainen, A.; and Valpola, H. 2017. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Advances in neural information processing systems, 30. Tausczik, Y. R.; and Pennebaker, J. W. 2010. The psychological meaning of words: LIWC and computerized text analysis methods. Journal of language and social psychology, 29(1): 24–54. Wei, H.; Zhang, F.; Yuan, N. J.; Cao, C.; Fu, H.; Xie, X.; Rui, Y.; and Ma, W.-Y. 2017. Beyond the words: Predicting user personality from heterogeneous information. In Proceedings of the tenth ACM international conference on web search and data mining, 305–314. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 671 Wen, Z.; Cao, J.; Yang, Y.; Wang, H.; Yang, R.; and Liu, S. 2023. DesPrompt: Personality-descriptive prompt tuning for few-shot personality recognition. Information Processing & Management, 60(5): 103422. Xue, D.; Wu, L.; Hong, Z.; Guo, S.; Gao, L.; Wu, Z.; Zhong, X.; and Sun, J. 2018. Deep learning-based personality recognition from text posts of online social networks. Applied Intelligence, 48(11): 4232–4246. Yang, F.; Quan, X.; Yang, Y.; and Yu, J. 2021a. Multidocument transformer for personality detection. In Proceedings of the AAAI conference on artificial intelligence, volume 35, 14221–14229. Yang, Q.; Nikolenko, S.; Huang, A.; and Farseev, A. 2022. Personality-Driven Social Multimedia Content Recommendation. In Proceedings of the 30th ACM International Conference on Multimedia, 7290–7299. Yang, R.; Chen, J.; and Narasimhan, K. 2021. Improving Dialog Systems for Negotiation with Personality Modeling. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, 681–693. Yang, T.; Deng, J.; Quan, X.; and Wang, Q. 2023a. Orders are unwanted: dynamic deep graph convolutional network for personality detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 13896–13904. Yang, T.; Yang, F.; Ouyang, H.; and Quan, X. 2021b. Psycholinguistic Tripartite Graph Network for Personality Detection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 4229–4239. Yang, X.; Song, Z.; King, I.; and Xu, Z. 2023b. A Survey on Deep Semi-Supervised Learning. IEEE Transactions on Knowledge and Data Engineering, 35(9): 8934–8954. Yao, L.; Mao, C.; and Luo, Y. 2019. Graph convolutional networks for text classification. In Proceedings of the AAAI conference on artificial intelligence, volume 33, 7370–7377. Zanwar, S.; Li, X.; Wiechmann, D.; Qiao, Y.; and Kerz, E. 2023. What to Fuse and How to Fuse: Exploring Emotion and Personality Fusion Strategies for Explainable Mental Disorder Detection. In Findings of the Association for Computational Linguistics, 8926–8940. Zhang, Y.; Yu, X.; Cui, Z.; Wu, S.; Wen, Z.; and Wang, L. 2020. Every Document Owns Its Structure: Inductive Text Classification via Graph Neural Networks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 334–339. 
Zhang, Y.; Zhang, H.; Deng, B.; Li, S.; Jia, K.; and Zhang, L. 2021. Semi-supervised models are strong unsupervised domain adaptation learners. arXiv preprint arXiv:2106.00417. Zhao, L.; and Yao, C. 2022. EICO: Improving Few-Shot Text Classification via Explicit and Implicit Consistency Regularization. In Findings of the Association for Computational Linguistics: ACL 2022, 3582–3587. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 672 | 2024 | 75 |
18,573 | Revisiting Open-Set Panoptic Segmentation Yufei Yin1, Hao Chen2, Wengang Zhou1,3,*, Jiajun Deng4, Haiming Xu4, Houqiang Li1,3,* 1 CAS Key Laboratory of Technology in GIPAS, EEIS Department, University of Science and Technology of China 2 Zhejiang University 3 Institute of Artificial Intelligence, Hefei Comprehensive National Science Center 4 Australian Institute for Machine Learning, University of Adelaide [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract In this paper, we focus on the open-set panoptic segmentation (OPS) task to circumvent the data explosion problem. Different from the close-set setting, OPS targets to detect both known and unknown categories, where the latter is not annotated during training. Different from existing work that only selects a few common categories (≤16) as unknown ones, we move forward to the real-world scenario by considering the various tail categories (∼1k). To this end, we first build a new dataset with long-tail distribution for the OPS task. Based on this dataset, we additionally add a new class type for unknown classes and re-define the training annotations to make the OPS definition more complete and reasonable. Moreover, we analyze the influence of several significant factors in the OPS task and explore the upper bound of performance on unknown classes with different settings. Furthermore, based on the analyses, we design an effective twophase framework for the OPS task, including thing-agnostic map generation and unknown segment mining. We further adopt semi-supervised learning to improve the OPS performance. Experimental results on different datasets validate the effectiveness of our method. 1 Introduction Recent decades have witnessed a surge of high-quality datasets (Deng et al. 2009; Everingham et al. 2010; Lin et al. 2014; Cordts et al. 2016; Gupta, Dollar, and Girshick 2019), which lead to tremendous advances in visual perception algorithms (He et al. 2016; Ren et al. 2015; Redmon et al. 2016; He et al. 2017). However, the thirst for data is far from being satisfied since the models cannot perform robustly in complex real-world scenarios. Datasets with a large variety are crucial to the generalization performance for neural networks, but simply adding more labeled samples is not a viable solution. As the size and complexity of the datasets increase, the problem of long-tail distribution and label ambiguity becomes more significant. We refer to this problem as data explosion. Since it is unrealistic to extensively generate categorical labels for thousands of classes, we look for a more feasible approach by revisiting the relation between the categorical labels and the perception tasks. *Corresponding authors: Wengang Zhou and Houqiang Li Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Comparison between COCO and LVIS-PS. The first row presents images, and the subsequent two rows present the annotations of COCO and LVIS-PS, respectively. In this paper, we propose to circumvent the data explosion problem by studying a more realistic setting, termed openset panoptic segmentation task (OPS). As an extension of panoptic segmentation (Kirillov et al. 2019), OPS requires to detect instances that are not annotated in the training set, a.k.a. the unknown category. In this setting, the annotation complexity does not increase as the dataset grows. 
On one hand, the tail categories1 can be regarded as unknown categories, and no annotations for them are needed for training. On the other hand, the annotations of panoptic segmentation do not have overlapping ambiguity compared to the box-level ones, and each pixel is one-to-one mapped to a target. Therefore, OPS is a proper setting for robust perception network training where the dataset complexity exceeds manual label capability. Only a few works have been explored on the OPS task. The pioneering work EOPSN (Hwang et al. 2021) first extends panoptic segmentation to the open-set setting and proposes an exemplar-based approach to discover unlabeled objects in the training set. Nevertheless, the existing OPS benchmark is in small scale and suffers some limitations in setting: (1) The COCO dataset (Lin et al. 2014) utilized in the benchmark only includes 80 common categories, omit1The tail parts of categories in long-tail distribution The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6747 ting a significant portion of rare classes. The incomplete annotations for rare classes in COCO may result in some correct open-set predictions being overlooked or incorrectly identified as “false positive” during inference. (2) Only a few common categories (≤16) in COCO are selected as unknown ones, which is a significant deviation from the realworld scenario where un-annotated categories can be rare and diverse. Moreover, the instances of the unknown classes all appear in the training images, which may leak some information to implicitly help the model to identify them. (3) Pixels with unknown classes are re-annotated as “void” (“ignore”) type during training, which provides too much extra prior information that unknown classes only exist in the small parts of “void” areas in the image. In addition, certain important factors that have substantial impacts on the OPS task remain undiscussed in previous works, such as class information, which may affect the generalization capability from known categories to the unknown ones; annotation propotion, which affects the information of novel categories. To address the above issues, in this paper, we first revisit the OPS task and re-formulate its benchmark settings. To involve more diverse categories and complete annotations, we construct a new LVIS-PS dataset for the OPS task based on the LVIS dataset (Gupta, Dollar, and Girshick 2019) and COCO. As shown in Fig. 1, LVIS-PS adds more segments with various tail categories to the void or stuff areas of COCO. We treat all these tail categories (∼1k) in LVISPS as unknown ones. We also introduce a new class type for unknown classes (i.e., unseen), which is absent from the training images, and propose a new metric to evaluate it. Furthermore, we re-define the available training annotations to make the OPS settings more reasonable yet challenging. Subsequently, we conduct a thorough analysis of several crucial factors that impact the performance of OPS, including different usage of class information, different annotation and category numbers. Finally, based on these analyses, we propose an effective two-phase framework for the OPS task, which consists of thing-agnostic map generation and unknown segment mining. We also build a Semi-PanoFCN-2s model with semi-supervised training to further improve the OPS performance. The proposed framework can be regarded as a simple yet effective baseline for the new challenging OPS benchmark. Our framework outperforms (Hwang et al. 
2021) by a considerable margin on the unknown classes on LVIS-PS. Moreover, compared with the pure class-agnostic model (Qi et al. 2021), our framework not only has classspecific segmentation capability, but also shows better generalization capability to the other dataset (i.e., ADE20K (Zhou et al. 2017)). 2 Related Work 2.1 Open-set Detection and Segmentation Recently, the open-set problem has been explored in various computer vision tasks (Bendale and Boult 2015; Dhamija et al. 2020; Joseph et al. 2021; Gupta et al. 2022; Zhao et al. 2022; Vaze et al. 2021; Qi et al. 2021; Saito et al. 2021; Wang et al. 2022b; Hwang et al. 2021; Wang et al. 2022a, 2021). Dhamija et al. (Dhamija et al. 2020) first formalize the open-set object detection problem and propose the open-set object detection protocol to better estimate the performance under real-world conditions. Joseph et al. (Joseph et al. 2021) propose the ORE model to achieve the openworld detection task based on the energy-based identifier and contrastive clustering. For the segmentation task, Lu et al. (Qi et al. 2021) propose a class-agnostic entity segmentation task and construct a Global Kernel Bank with both dynamic and static kernels to generate entity masks. LDET (Saito et al. 2021) introduces a new data augmentation and uses decoupled training for open-world instance segmentation. Hwang et al. (Hwang et al. 2021) extend panoptic segmentation to the open-set setting and propose an EOPSN model which uses RPN to obtain proposals for unknown classes and applies clustering to mine reliable exemplars. Our work focuses on the open-set panoptic segmentation task following (Hwang et al. 2021). However, different from (Hwang et al. 2021), we re-formulate the open-set panoptic segmentation task from several aspects and introduce various tail categories to make it closer to the real-world condition but more challenging. 3 Rethinking Open-Set Panoptic Segmentation In this section, we first formalize the open-set panoptic segmentation (OPS) task (Sec. 3.1). To address the drawbacks of the original OPS settings, we construct a new OPS benchmark to make it closer to the real-world scenario yet more challenging (Sec. 3.2). After that, we introduce the applied evaluation metrics for the new OPS task (Sec. 3.3). 3.1 Problem Formulation Panoptic segmentation is a combination of instance segmentation and semantic segmentation. It aims to classify each pixel to its corresponding thing or stuff class and segment each individual instance for thing classes. The main difference between open-set panoptic segmentation (OPS) and the common panoptic segmentation setting (close-set) is that the former involves a special unknown class, which is not available for training. Concretely, suppose a set of known classes C = {0, · · · , C −1} and a set of unknown categories is predefined. All these unknown categories are selected from the thing categories and regarded as a special “unknown class” U. In the training stage, only the data of the known classes are available, and their annotations are the same as the closeset settings. In the inference stage, all segments with the known classes or the special unknown class are supposed to be found in a given image. 3.2 Towards a New OPS Benchmark As discussed in Sec. 1, the current OPS setting (Hwang et al. 2021) remains drawbacks and suffers large gaps from the real-world scenario. Therefore, we construct a new benchmark for the OPS task according to the following steps: Annotation aggregation. 
We aim to adopt the more complicated LVIS dataset (Gupta, Dollar, and Girshick 2019) for the OPS task, which shares the same images with the COCO dataset (Lin et al. 2014) while re-annotating them with more The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6748 Method Train Test Unknown Classes Anno. Source Unk. Annotated Anno. Source Number Type Source EOPSN COCO Void COCO 4/8/16 Seen COCO Ours COCO Void & Stuff LVIS-PS 1020 Seen & Unseen Long Tail in LVIS-PS Table 1: Comparison of the benchmark settings of EOPSN (Hwang et al. 2021) and ours. “Anno. Source” denotes the source of annotations during training or testing. “Unk. Annotated” denotes the classes that unknown segments may be annotated in training annotations. diverse categories and complete annotations. However, LVIS cannot be used for OPS directly since it is constructed primarily for the instance segmentation task with overlapped instance annotations and no stuff categories. To address the problem, we build a new panoptic segmentation dataset, named “LVIS-PS”, based on (Gupta, Dollar, and Girshick 2019) and (Lin et al. 2014). Concretely, we follow a “Thing First, COCO First” principle to generate the panoptic segmentation level annotations of the LVIS-PS dataset. For each image, we place the annotations from different sources on the remaining blank areas of a panoptic map in the following order: COCO-THING, LVIS-THING, COCO-STUFF 2. Specially, if a newly added instance has high overlaps with existing ones on the map, it will be discarded to avoid ambiguity. Detailed information of LVIS-PS and the procedure to construct it are presented in the Supplementary Material. Consequently, all categories in COCO are retained in LVISPS while more tail categories are added with no overlap. As shown in Fig.1, LVIS-PS additionally labels more instances that are originally regarded as stuff or ignored in COCO. Category split. Considering the un-annotated categories can be rare and diverse in the real-world scenario, we select various tail categories (∼1k) as unknown classes, i.e., the newly added ones in LVIS-PS compared to COCO. Correspondingly, categories in COCO are regarded as known classes. Moreover, though the annotations of the unknown classes are not available at the training stage, their corresponding objects still exist in the training images in the previous OPS setting, which will help some class-agnostic classifiers (i.e., region proposal network in (Hwang et al. 2021)) to identify them implicitly. In contrast, the OPS method also needs to possess the capability to find segments with classes that never appear in training, which are denoted as unseen classes. These classes are genuinely “open-set” to some degree. To this end, we select a portion of unknown classes with few samples as unseen classes, and remove all images that contain these classes during training (about 10% training images). Accordingly, the remained unknown classes are denoted as seen classes. Unknown region removal. According to the definition of the OPS task, annotations of unknown classes need to be removed before training. A problem naturally arises as to what their corresponding pixels should be re-annotated as. In the previous OPS setting, these pixels are re-annotated as “void” (or “ignore”) type, which is ignored at the training stage. However, this operation will introduce unreasonable prior information that the unknown classes are solely present 2We use COCO-Thing to represent “thing classes in COCO dataset” for simplicity. 
LVIS-Thing and COCO-Stuff share the same representation. in the limited regions of “void” areas in the image. Instead, there is another reasonable situation where segments of those unknown classes are annotated as stuff classes under a loose criterion. Based on this assumption, the original COCO annotations become an optimal training source for LVIS-PS, since the annotations of LVIS-PS are extended from the COCO annotations, and these newly added segments are naturally annotated as the “void” type or as stuff classes in the original COCO annotations.
In summary, we adopt the LVIS-PS dataset for the OPS task. We use the corresponding original COCO annotations during training, while using the generated LVIS-PS annotations for inference. Four class types (i.e., known-thing, known-stuff, seen, unseen) are considered during evaluation. A comparison of the benchmark settings proposed in (Hwang et al. 2021) and ours is shown in Tab. 1.

3.3 Evaluation Metrics
Following (Hwang et al. 2021), we use three standard panoptic segmentation metrics (Kirillov et al. 2019): panoptic quality (PQ), segmentation quality (SQ), and recognition quality (RQ). However, for the unseen class, it is not appropriate to use the original RQ and PQ, which involve “false positives” (FPs), to measure performance, for the following reasons. First, an FP may correspond to a prediction that is actually an object but has no ground truth assigned to it; such predictions are not real “false positives” for the unseen class in the open-set setting. Second, for the unseen class, we tend to find as many potential objects as possible, hence the “false positives” are not a major concern. Third, compared with other tasks (e.g., object detection), recall in panoptic segmentation better reflects real-world performance, since panoptic segmentation requires each pixel to be one-to-one mapped to a class. More FPs for the unseen class are reflected in the performance of the other class types (e.g., stuff). Moreover, the unseen class shares the same FPs with the seen class, since both are supposed to be predicted as the special unknown class. It is reasonable to treat all these FPs as FPs of the seen class, since there are significantly more ground truths for the seen class than for the unseen class. Hence, we propose a modified PQ (denoted as PQ*), which replaces the RQ with Recall in its computation, to measure the performance of the unseen class. The detailed modification is presented in Eq. 1:

PQ = \underbrace{\frac{\sum_{(p,g)\in \mathrm{TP}} \mathrm{IoU}(p,g)}{|\mathrm{TP}|}}_{\text{segmentation quality (SQ)}} \cdot \underbrace{\frac{|\mathrm{TP}|}{|\mathrm{TP}| + \frac{1}{2}|\mathrm{FP}| + \frac{1}{2}|\mathrm{FN}|}}_{\text{recognition quality (RQ)}},
PQ^{*} = \underbrace{\frac{\sum_{(p,g)\in \mathrm{TP}} \mathrm{IoU}(p,g)}{|\mathrm{TP}|}}_{\text{segmentation quality (SQ)}} \cdot \underbrace{\frac{|\mathrm{TP}|}{|\mathrm{TP}| + |\mathrm{FN}|}}_{\text{Recall}}. (1)

Settings | Seen: PQ | Seen: PQ-thing | Unseen: PQ* | Unseen: PQ*-thing
Class-Specific | 18.53 | 23.21 | 8.59 | 20.32
Comb-Seen | 17.91 | 21.21 | 17.67 | 26.88
Comb-All | – | 25.18 | – | 29.77
Table 2: Performance on unknown classes with different class information.

4 Analysis of Influencing Factors in OPS
In this section, we study the influence of several significant factors on the OPS task and explore the upper bound of performance on unknown classes under different settings. In these experiments, we assume that all annotations of seen categories are provided. We use a two-stage Panoptic FCN model (denoted as PanoFCN-2s) for all the experiments, which is modified from the original Panoptic FCN (Li et al. 2021); the details will be discussed in the next section.

4.1 Influence of Class Information
Recent studies (Li et al.
2020; Kim et al. 2022) indicate that a class-agnostic detector will help detect more open-world instances. This inspires us to investigate the impact of class information on the OPS task. We consider three types of class information for training: (1) Class-Specific. All segments are annotated with their specific classes. (2) CombSeen. All seen classes are combined as a single class (referred as “unknown-comb”). In other words, we re-annotate all segments of seen classes with “unknown-comb” class. (3) Comb-All. We combine all thing (i.e., known-thing, seen) classes as a single “thing-comb” class, while leaving the stuff classes unchanged. Unseen classes are not considered here, as they only occur in the test set. Considering that they are trained with different category numbers, we need to unify their evaluation methods on unknown classes. Following the OPS settings, if segments of unknown classes are classified as any one of the unknown classes (for (1)) or “unknown-comb” class (for (2)), they will be regarded as “true positives (TPs)”. To compare (1), (2) with (3), we follow another principle that segments of unknown classes are TPs if they are classified as any one of the thing classes when calculating PQ, which is denoted as “PQ-thing” (“PQ*-thing”). It’s worth noting that when calculating the “PQ-thing” of seen class for (3), we use the expectation of FPs since its true value cannot be obtained. The results are shown in Tab. 2. We find that the performance of Comb-All performs the best among the three settings on both two unknown classes. These results verify that if we follow a class-agnostic setting to reduce or eliminate the class-variation information, models will have better segmentation and generalization capabilities on unknown classes. We attribute this to the fact that this setting will drive model to ignore the differences between each thing class, thus forcing it to learn stronger objectness cues. 4.2 Influence of Annotation Propotion It’s widely acknowledged that a dataset with more annotations is likely to enhance model performance, as the model Ratios 20% 40% 60% 80% 100% seen Seen 12.31 16.64 17.66 18.03 17.91 anno. Unseen 5.78 11.17 14.69 16.82 17.67 seen Seen 6.48 14.03 15.32 17.39 17.91 category Unseen 4.00 7.80 12.18 15.20 17.67 Table 3: Performance with different ratios of seen annotation numbers or seen category numbers. The experiments are based on the PanoFCN-2s model and Comb-Seen setting. 20 40 60 80 100 Selection Ratio 4 6 8 10 12 14 16 18 PQ* on Unseen Class more categories selected categories Figure 2: The red points denote the performance with different ratios of seen classes. For each selection ratio, the blue point denotes the performance with similar annotation amounts but more category numbers. can be exposed to a greater variety of samples in the training stage. It motivates us to quantitatively study the influence of annotation numbers in this task. Specifically, we construct four different splits with different selection ratios (20%, 40%, 60%, 80%). We randomly select the seen annotations with corresponding ratios, while the known annotations remain unchanged. We use PanoFCN-2s model with Comb-Seen setting in these experiments. The results are shown in the top parts of Tab. 3. On one hand, the performance of seen classes is notably improved when the ratio increases initially, but this improvement gradually diminishes and may even become negative. On the other hand, the increment of ratio brings continuous performance improvement of unseen classes. 
We attribute it to the fact that the increment of annotations will guide the network to mine more potential instances, thereby aiding the discovery of unseen classes. However, this may also lead to more false positives, thus hindering the performance improvement of seen classes. Furthermore, we adopt another strategy to select annotations, that is, we randomly choose different ratios of seen categories with their corresponding annotations. As shown in the bottom parts of Tab. 3, the performance trends are similar to the above results. Moreover, these results drive us to think of a question: does the increment of category numbers play an important role in the performance improvement? To this end, for each setting of category numbers, we conduct another experiment with similar annotation amounts but containing more categories. The results are shown in The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6750 COCO Annotation Input Image Comb-All Thing-agnostic Map SemiPanoFCN-2s PanoFCN-2s Result Unknown Segment Mining Algorithm Thing-Agnostic Map Generation Unknown Segment Mining Thing-agnostic Annotation Input Image Pseudo Label Input + known-thing + unknown + stuff Map-O Map-T Training Path Inference Path Label Path USM Path Figure 3: Overview of the proposed two-phase framework, including thing-agnostic map generation (first phase) and unknown segment mining (second phase). In the first phase, we apply Semi-PanoFCN-2s to mine more potential unknown instances. Fig. 2, from which we can draw two conclusions: (1) With similar annotation numbers, more category numbers perform better. (2) The increment of category numbers has a more significant impact than only increasing corresponding amounts of annotations when the annotation number reaches a certain value (40%). The great influence of category variety reminds us that introducing more categories is more important than generating more annotations for the OPS task. However, the diversity of categories is far from sufficient under the COCO annotations. Labels with more various categories are needed to improve the model’s generalization capability. Therefore, we propose to design a framework that generates pseudo labels of instances with novel categories automatically. 5 Method Based on our analyses in Sec. 4, we can conclude that the class-agnostic setting leads to better performance on unknown classes, and annotations containing more categories will significantly help the OPS task. However, the number of known classes is very limited in the OPS setting. To this end, we first modify (Li et al. 2021) into a two-stage structure (Sec. 5.1) and then design a two-phase semi-supervised framework (Sec. 5.2 - 5.4) to enrich the category variety in the annotations, thus better completing the OPS task. The whole framework is shown in Fig. 3. 5.1 PanoFCN-2s Due to its one-stage structure, Panoptic FCN (Li et al. 2021) will suffer from the foreground-background class imbalance problem, hence not excel at detecting more potential unknown segments. Therefore, we modify it into a two-stage structure (denoted as PanoFCN-2s) to better fit the OPS task. We construct an RoI Kernel Head to generate kernels for thing classes following the structure in (He et al. 2017), and use it to replace the Kernel Generator module in (Li et al. 2021). Details please refer to the Supplementary Material. 
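Before moving to the two-phase framework, the metrics used throughout this analysis can be made concrete. The sketch below follows Eq. (1): PQ multiplies SQ by RQ, while the modified PQ* multiplies SQ by recall. It is a minimal illustration that assumes matched (prediction, ground-truth) pairs have already been found with the standard panoptic-quality rule (IoU > 0.5); it is not the evaluation code used in the experiments.

```python
def panoptic_scores(matches, num_fp, num_fn):
    """Compute PQ, SQ, RQ, and the recall-based PQ* of Eq. (1).

    matches: list of IoU values, one per matched (prediction, ground-truth)
             pair; in standard panoptic quality a pair is a TP when IoU > 0.5.
    num_fp:  number of unmatched predictions (false positives).
    num_fn:  number of unmatched ground-truth segments (false negatives).
    """
    tp = len(matches)
    if tp == 0:
        return {"SQ": 0.0, "RQ": 0.0, "PQ": 0.0, "Recall": 0.0, "PQ*": 0.0}
    sq = sum(matches) / tp
    rq = tp / (tp + 0.5 * num_fp + 0.5 * num_fn)
    recall = tp / (tp + num_fn)
    return {"SQ": sq, "RQ": rq, "PQ": sq * rq, "Recall": recall, "PQ*": sq * recall}

# Toy example: 3 matched segments, 4 spurious predictions, 1 missed segment.
print(panoptic_scores([0.9, 0.8, 0.7], num_fp=4, num_fn=1))
# PQ is dragged down by the 4 false positives, while PQ* ignores them.
```

As the toy example shows, spurious predictions lower PQ through RQ but leave PQ* untouched, which is exactly the motivation given in Sec. 3.3 for evaluating the unseen class with PQ*.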
5.2 First Phase: Thing-Agnostic Map Generation To find potential unknown segments sufficiently and with high quality, we need to choose a reasonable training strategy. As discussed in Sec. 4, we find that the Comb-All setting performs the best on unknown classes. Therefore, we first combine all thing (i.e., known-thing, seen) classes into a single “thing-comb” class and re-annotate thing segments with it. Stuff classes remain unchanged. Next, we use these re-annotated training samples to train a PanoFCN-2s model. Particularly, the output dimension of the classification branch in PanoFCN-2s is set to S +1, where S is the number of stuff classes. After training, we pass training images through the model to obtain the prediction maps. Benefited from the Comb-All setting, these maps contain many potential thing segments but are thing-agnostic that all these segments belong to one “thing-comb” class. 5.3 Second Phase: Unknown Segment Mining We now have two kinds of panoptic segmentation maps of training images, one is the accurate original annotations but without unknown classes (denoted as Map-O), the other is the generated thing-agnostic maps (denoted as Map-T). To mine potential unknown segments and generate complete segmentation maps, we design an Unknown Segment Mining (USM) algorithm to take advantage of both two maps. First of all, we need to clarify the areas where the unknown segments may be found. As shown in Fig. 1, the original COCO annotations tend to place the unknown segments into void (ignore) or stuff areas. Hence, we choose to mine unknown segments from these two areas. Concretely, we first fetch the segments with thing class from the generated MapT, denoted as TH = [th1, · · · , thn]. Next, we calculate the intersection areas of each segment in TH with void and stuff areas in Map-O separately. Segments with high intersections will be chosen as potential unknown segments. After obtaining unknown segments, we then need to combine them with the Map-O. We follow a “Thing First, Known First” principle to construct the complete annotations. Specifically, for a training image, we first take knownthing segments and known-stuff segments from Map-O, and The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6751 Model Known Classes Unknown Classes Known-Thing Known-Stuff Seen Unseen PQ SQ RQ PQ SQ RQ PQ SQ RQ PQ* SQ Recall Supervised model PanoFCN-2s 42.41 78.15 52.17 27.44 70.31 33.87 17.91 78.35 22.87 17.67 79.61 22.20 Open-set Panoptic segmentation methods Void-Supp (Hwang et al. 2021) 44.81 79.90 53.97 26.67 72.36 33.94 4.57 73.80 6.19 9.70 71.96 13.48 EOPSN (Hwang et al. 2021) 44.74 80.74 54.05 26.45 72.62 33.64 0.51 74.64 0.68 0.26 74.41 0.35 Two-Phase (PanoFCN) 44.00 80.88 53.74 29.59 75.41 36.70 3.49 81.42 4.29 4.20 79.97 5.26 Two-Phase (PanoFCN-2s) 43.80 78.37 53.73 28.07 75.43 34.62 7.89 78.55 10.04 16.85 79.02 21.33 Semi-Two-Phase 43.23 78.36 53.03 27.53 74.22 33.94 9.98 80.17 12.45 19.80 80.19 22.20 Table 4: Open-set Panoptic segmentation results on LVIS-PS val set under the proposed OPS setting, which needs to predict known classes and unknown classes with only annotations of known classes are used. “Supervised model” represents that a PanoFCN-2s model is trained with annotations where seen classes are available. It is worth mentioning that our performance on unseen classes even outperforms that of the supervised model with some margins. initialize a blank panoptic segmentation map. 
Then, we place these segments on the blank areas of the map following the order: (1) known-thing segments, (2) unknown segments, (3) known-stuff segments. Finally, these complete panoptic segmentation maps are used as pseudo labels to train another PanoFCN-2s model. In this way, many potential unknown segments are added in the annotations, enriching their category varieties, hence benefiting the OPS training. Particularly, the output dimension of its classification branch is set to T + S + 1, where T, S are the number of known-thing, known-stuff classes, respectively. Only this PanoFCN-2s model is applied during inference. 5.4 Semi-PanoFCN-2s Though we have built a simple yet effective baseline to achieve the OPS task, we further improve the first PanoFCN2s model to make it more suitable for this task, thereby boosting the performance on the unknown classes. In the first phase, the PanoFCN-2s model is able to find potential thing segments from the images, benefiting from the proper model structure and the Comb-All setting. However, it relies much on the model’s generalization capability while lacking task-specific guidance. Hence, we adopt the semisupervised learning strategy into the training procedure and modify the PanoFCN-2s model to achieve it. Specifically, we add a new classification branch CLS2 in the RoI Kernel Head of PanoFCN-2s model, paralleling with the original one (CLS1). Different from CLS1, CLS2 aims to mine more potential thing segments following the online semisupervised training strategy. Hence, we denote the modified PanoFCN-2s model as Semi-PanoFCN-2s. During training, we first select top-k proposals according to their classification scores on the thing class, generate their corresponding masks, and filter out low-scoring ones. The kept masks are considered as proposals for unknown segments. As mentioned in Sec. 5.3, unknown segments are likely to hide in the void or stuff areas. Hence, we calculate the intersection areas of each of those unknown proposals with the two areas separately and remove the proposals with low intersections. Besides, we additionally set a scoring threshold on the proposals which have high intersections with the stuff areas to guarantee the quality of stuff classes. Finally, we relabel the remained proposals as the thing class, and use these pseudo labels to train the CLS2. During inference, we only use CLS2 to obtain classification scores. The method to mine potential unknown segments is similar to the USM algorithm, but the most significant difference is that it participates in the training procedure following a semi-supervised training strategy, hence is able to enhance the ability of the network to find more unknown segments. It is worth noting that we only replace the PanoFCN-2s model with Semi-PanoFCN-2s in the first phase of the framework. 6 Experiments We evaluate our method on the proposed LVIS-PS dataset. During training, as discussed in Sec. 3.2, we use the corresponding original COCO annotations of LVIS-PS train set, which contain 80 thing classes and 53 stuff classes. LVISPS val set is utilized for evaluation, which has 994 classes in total. Three kinds of standard panoptic segmentation metrics (Kirillov et al. 2019), including panoptic quality (PQ), segmentation quality (SQ) and recognition quality (RQ) are applied for three class types, i.e., known-thing, known-stuff and seen. PQ* and Recall are used for unseen classes. 6.1 Experiment Setup As described in Sec. 5, we follow a two-phase paradigm to achieve the OPS task. 
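As a concrete reference for the second phase evaluated below, the following is a rough sketch of the unknown segment mining step of Sec. 5.3, assuming segments are given as boolean masks. The overlap test against the void and stuff areas of Map-O and the "Thing First, Known First" placement order follow the description above; the thresholds default to the 0.8/0.9 values reported in the experiment setup, but the helper names and the choice of measuring overlap as a fraction of each segment's own area are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def mine_unknown_segments(thing_masks, void_area, stuff_area,
                          thr_void=0.8, thr_stuff=0.9):
    """Unknown Segment Mining (USM), roughly as described in Sec. 5.3.

    thing_masks: list of boolean HxW arrays, the class-agnostic "thing"
                 segments taken from the generated Map-T.
    void_area, stuff_area: boolean HxW arrays derived from the original Map-O.
    A segment is kept as a potential unknown segment when a large enough
    fraction of it falls inside the void or stuff area of Map-O.
    """
    unknown = []
    for m in thing_masks:
        area = m.sum()
        if area == 0:
            continue
        in_void = np.logical_and(m, void_area).sum() / area
        in_stuff = np.logical_and(m, stuff_area).sum() / area
        if in_void >= thr_void or in_stuff >= thr_stuff:
            unknown.append(m)
    return unknown

def compose_pseudo_label(known_thing, unknown, known_stuff, shape):
    """'Thing First, Known First': paint segments onto a blank map in order,
    each segment only occupying still-unassigned pixels. Returns an HxW map
    of segment ids (0 = unassigned)."""
    canvas = np.zeros(shape, dtype=np.int32)
    next_id = 1
    for group in (known_thing, unknown, known_stuff):
        for m in group:
            free = np.logical_and(m, canvas == 0)
            if free.any():
                canvas[free] = next_id
                next_id += 1
    return canvas
```

In the full pipeline each painted segment would also carry a class label (its known-thing class, the special unknown class, or a stuff class); the sketch only shows the overlap test and the placement order.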
In the first phase, we train the proposed Semi-PanoFCN-2s with the Comb-All setting to produce thing-agnostic maps. In the second phase, we use USM algorithm to generate pseudo labels and train the proposed PanoFCN-2s with the Comb-Seen setting using these generated labels. During inference, only the PanoFCN-2s model is applied. In both two phases, we follow the original settings of (Li et al. 2021) with 1× and multi-scale strategies. For hyperparameters, the overlap thresholds are set to 0.8 and 0.9 for void and stuff areas, respectively. The score threshold for stuff areas is set to 0.3 in Semi-PanoFCN-2s. k is set to 50, which is the same with (Li et al. 2021). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6752 Models Epoch AP m e AP m e 50 AP m e 75 AP m e s AP m e m AP m e l Trained on COCO Panoptic FCN (Li et al. 2021) 12 11.88 24.19 10.61 3.49 7.75 22.73 ES (Qi et al. 2021) 12 13.78 26.57 12.55 1.73 11.27 26.98 ES (Qi et al. 2021) 36 14.66 27.96 13.43 2.06 13.03 28.08 Trained on LVIS-PS (a part of COCO) with COCO annotations Two-Phase 12 16.02 30.35 14.48 4.49 9.55 29.34 Semi-Two-Phase 12 16.28 30.79 14.63 4.23 9.67 29.91 Table 5: Cross-dataset results on ADE20K val set. The models of (Li et al. 2021) and (Qi et al. 2021) are trained with COCO, while our model is trained with LVIS-PS with COCO annotations, which has fewer training images. 6.2 Evaluation on LVIS-PS Dataset Tab. 4 shows the performances on the LVIS-PS val set of different models. The supervised one (1st row) is a PanoFCN2s model, which is directly trained following the Comb-Seen setting and with complete annotations, in which annotations of seen classes are available. Compared with the supervised model, the proposed two-phase framework (5th row) can achieve comparable performance on unseen classes, and even performs slightly better on both two types of known classes. For seen classes, the performance of the supervised model can be seen as an upper bound. With lacking annotations of over 700 kinds of seen classes, the performance gap of the two-phase framework with the supervised one is reasonable. Overall, these results demonstrate that the proposed framework can achieve the OPS task with a relatively good performance. When the proposed Semi-PanoFCN-2s is employed in the first phase (6th row), the performance on the unknown classes improves in all aspects, while still achieving competitive performance on known classes. It is worth mentioning that the performance on unseen classes even outperforms that of the supervised model with some margins (+2.13% on PQ*, +0.58% on SQ). In addition, when we replace PanoFCN-2s with Panoptic FCN (Li et al. 2021) (4th row), the performance on unknown classes drops to a great degree, which demonstrates the importance of PanoFCN-2s on the OPS task. These results verify the effectiveness of our proposed contributions in the OPS setting for constructing a two-phase framework with a two-stage model and adopting semi-supervised learning to enable the network to find more potential accurate unknown segments. We also compare our framework with the previous OPS method EOPSN (Hwang et al. 2021). Their results are shown in the second and third rows, where “Void-Supp” represents the baseline model of EOPSN. We re-train and infer these models on the LVIS-PS datasets, strictly following the original settings. As shown in Tab. 4, our method is superior to EOPSN on unknown classes by a large margin, and has comparable performance with it on known classes. 
We attribute its poor performance on LVIS-PS datasets to the fact that the samples of tail unknown classes are rare and diverse, and thus are hard to be clustered as in EOPSN. The visualization results of different models are shown in Fig. 3 in Supplementary Material. Segments with unknown classes are in deep blue. Compared with Panoptic FCN (Li et al. 2021) (3rd row) and Void-Supp (Hwang et al. 2021) (4th row), our method is able to find more unknown instances (in deep blue) with comparable or better stuff quality (1st-3rd column). Moreover, in many cases, we observe that our method also has better segmentation capability on known classes (4th-5th column). 6.3 Cross-Dataset Evaluation To validate the generalization capability of our method, we evaluate our trained model on another dataset ADE20K (Zhou et al. 2017). Considering that the classes of ADE20K and COCO (LVIS-PS) are different, we apply the entity segmentation metric AP m e (Qi et al. 2021) for evaluation. AP m e is similar to AP m used in instance segmentation, while it regards all segments as one class, including those in thing or stuff classes, and gives no tolerance to the overlaps of different segments. Tab. 5 shows the generalization results on the ADE20k dataset with different models. For Panoptic FCN (Li et al. 2021) and ES (Qi et al. 2021), we use their released models trained on COCO and evaluate them on the whole ADE20K val set. It’s worth mentioning that we actually use fewer training samples than them, since the training set of LVISPS is a part of that of COCO. Despite this, our proposed method (Line 4-5) outperforms them (Line 1-2) by at least 2.24% AP m e , and even surpasses (Qi et al. 2021) (Line 3) trained with more epochs. Especially, compared with the class-agnostic model (Qi et al. 2021), our method not only shows better generalization performance, but also possesses class-specific segmentation capability. 7 Conclusion In this paper, we first build a new dataset LVIS-PS for the OPS task and redefine the OPS settings in a more reasonable and practical way. We regard tail categories in LVISPS as unknown classes and redefine the training annotations to avoid unreasonable prior information. Subsequently, we analyze the influence of several significant factors for the OPS task, such as class information and annotation propotion. Based on these analyses, we design an effective twophase semi-supervised framework to accomplish the OPS task, which comprises of thing-agnostic map generation and unknown segment mining. Experimental results on different datasets demonstrate the effectiveness of our method. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6753 Acknowledgements This work was supported in part by the National Natural Science Foundation of China under Contract U20A20183 and 62021001. It was also supported by the GPU cluster built by MCC Lab of Information Science and Technology Institution and the Supercomputing Center of the USTC. This work was also supported in part by the National Key R&D Program of China (NO.2022ZD0160101). References Bendale, A.; and Boult, T. 2015. Towards open world recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1893–1902. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; and Schiele, B. 2016. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3213–3223. 
Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and FeiFei, L. 2009. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 248–255. Dhamija, A.; Gunther, M.; Ventura, J.; and Boult, T. 2020. The overlooked elephant of object detection: Open set. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 1021–1030. Everingham, M.; Van Gool, L.; Williams, C. K.; Winn, J.; and Zisserman, A. 2010. The pascal visual object classes (voc) challenge. International Journal of Computer Vision, 88(2): 303–338. Gupta, A.; Dollar, P.; and Girshick, R. 2019. LVIS: A dataset for large vocabulary instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5356–5364. Gupta, A.; Narayan, S.; Joseph, K.; Khan, S.; Khan, F. S.; and Shah, M. 2022. OW-DETR: Open-world detection transformer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 9235–9244. He, K.; Gkioxari, G.; Doll´ar, P.; and Girshick, R. 2017. Mask R-CNN. In Proceedings of the IEEE Conference on International Conference on Computer Vision, 2961–2969. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778. Hwang, J.; Oh, S. W.; Lee, J.-Y.; and Han, B. 2021. Exemplar-based open-set panoptic segmentation network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1175–1184. Joseph, K.; Khan, S.; Khan, F. S.; and Balasubramanian, V. N. 2021. Towards open world object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5830–5840. Kim, D.; Lin, T.-Y.; Angelova, A.; Kweon, I. S.; and Kuo, W. 2022. Learning Open-World Object Proposals without Learning to Classify. IEEE Robotics and Automation Letters. Kirillov, A.; He, K.; Girshick, R.; Rother, C.; and Doll´ar, P. 2019. Panoptic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 9404–9413. Li, S.; Zhou, J.; Jia, Z.; Yeung, D.-Y.; and Mason, M. T. 2020. Learning accurate objectness instance segmentation from photorealistic rendering for robotic manipulation. In Proceedings of the 2018 International Symposium on Experimental Robotics, 245–255. Springer. Li, Y.; Zhao, H.; Qi, X.; Wang, L.; Li, Z.; Sun, J.; and Jia, J. 2021. Fully convolutional networks for panoptic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 214–223. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Doll´ar, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common objects in context. In Proceedings of the European Conference on Computer Vision, 740–755. Qi, L.; Kuen, J.; Wang, Y.; Gu, J.; Zhao, H.; Lin, Z.; Torr, P.; and Jia, J. 2021. Open-world entity segmentation. arXiv preprint arXiv:2107.14228. Redmon, J.; Divvala, S.; Girshick, R.; and Farhadi, A. 2016. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 779–788. Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster RCNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems, 28. Saito, K.; Hu, P.; Darrell, T.; and Saenko, K. 2021. Learning to detect every thing in an open world. arXiv preprint arXiv:2112.01698. 
Vaze, S.; Han, K.; Vedaldi, A.; and Zisserman, A. 2021. Open-set recognition: A good closed-set classifier is all you need. arXiv preprint arXiv:2110.06207. Wang, W.; Feiszli, M.; Wang, H.; Malik, J.; and Tran, D. 2022a. Open-world instance segmentation: Exploiting pseudo ground truth from learned pairwise affinity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4422–4432. Wang, W.; Feiszli, M.; Wang, H.; and Tran, D. 2021. Unidentified video objects: A benchmark for dense, openworld segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10776–10785. Wang, X.; Zhao, K.; Zhang, R.; Ding, S.; Wang, Y.; and Shen, W. 2022b. ContrastMask: Contrastive Learning to Segment Every Thing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 11604– 11613. Zhao, X.; Liu, X.; Shen, Y.; Ma, Y.; Qiao, Y.; and Wang, D. 2022. Revisiting open world object detection. arXiv preprint arXiv:2201.00471. Zhou, B.; Zhao, H.; Puig, X.; Fidler, S.; Barriuso, A.; and Torralba, A. 2017. Scene parsing through ade20k dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 633–641. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6754 | 2024 | 750 |
18,574 | VQAttack: Transferable Adversarial Attacks on Visual Question Answering via Pre-trained Models Ziyi Yin1, Muchao Ye1, Tianrong Zhang1, Jiaqi Wang1, Han Liu2 Jinghui Chen1, Ting Wang3, Fenglong Ma1* 1The Pennsylvania State University 2Dalian University of Technology 3Stony Brook University {ziyiyin, muchao, tbz5156, jcz5917, fenglong}@psu.edu [email protected], [email protected] Abstract Visual Question Answering (VQA) is a fundamental task in computer vision and natural language process fields. Although the “pre-training & finetuning” learning paradigm significantly improves the VQA performance, the adversarial robustness of such a learning paradigm has not been explored. In this paper, we delve into a new problem: using a pre-trained multimodal source model to create adversarial image-text pairs and then transferring them to attack the target VQA models. Correspondingly, we propose a novel VQATTACK model, which can iteratively generate both image and text perturbations with the designed modules: the large language model (LLM)-enhanced image attack and the cross-modal joint attack module. At each iteration, the LLM-enhanced image attack module first optimizes the latent representation-based loss to generate feature-level image perturbations. Then it incorporates an LLM to further enhance the image perturbations by optimizing the designed masked answer anti-recovery loss. The cross-modal joint attack module will be triggered at a specific iteration, which updates the image and text perturbations sequentially. Notably, the text perturbation updates are based on both the learned gradients in the word embedding space and word synonymbased substitution. Experimental results on two VQA datasets with five validated models demonstrate the effectiveness of the proposed VQATTACK in the transferable attack setting, compared with state-of-the-art baselines. This work reveals a significant blind spot in the “pre-training & fine-tuning” paradigm on VQA tasks. The source code can be found in the link https://github.com/ericyinyzy/VQAttack. Introduction Visual Question Answering (VQA) is dedicated to extracting essential information from images to formulate responses to textual queries. While this application has proven to be highly versatile across various domains, including recommendation systems (Yu, Shen, and Jin 2019), medicine (Zhan et al. 2020), and robotics (Kenfack et al. 2020), the exploration of VQA system robustness remains a challenging endeavor. Current research primarily revolves around investigating the robustness of end-to-end *Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Pre-trained Source Model Victim VQA Model Image Text Attack Generate image-text Perturbations What color is the fork? What colour is the fork? silver + Figure 1: An example of Transferable adversarial attacks on VQA via pre-trained models. trained VQA models through the development of effective attack methodologies, exemplified by Fool-VQA (Xu et al. 2018) and TrojVQA (Walmer et al. 2022). However, models trained end-to-end often exhibit inferior performance compared to the prevalent “pre-training & finetuning” paradigm. Within this paradigm, models are initially pre-trained on extensive collections of image-text pairs from the public domain, facilitating the acquisition of intermodal relationships. Subsequently, the models undergo finetuning using specific VQA datasets to enhance their performance on downstream tasks. 
This instructional framework has yielded commendable predictive accuracy (Bao et al. 2022; Kim, Son, and Kim 2021; Li et al. 2021b). Nevertheless, the aspect of adversarial robustness within the context of the VQA task, as governed by this paradigm, remains insufficiently explored. This attack scenario presents notable complexities, which arise from the following two fundamental aspects: • C1 – Transferability across models. The challenge here involves the transferability of adversarial attacks across distinct models. An example is shown in Figure 1. Pretrained source models and victim target VQA models are usually trained for dissimilar tasks and trained on separate datasets. Furthermore, their structural disparities may result from variations introduced during fine-tuning. While the concept of transferability has been widely validated in the context of image models (Madry et al. 2018; Xie et al. 2019), such property within the domain of pretrained models has yet to be comprehensively explored. • C2 – Joint attacks across different modalities. Our task is centered around a multi-modal problem, necessitatThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6755 Figure 2: Overview of the proposed VQATTACK. ing the introduction of perturbations to both images and textual questions to achieve improved performance. Although previous methodologies have effectively devised attack strategies for each individual modality (Li et al. 2020; Madry et al. 2018), the intricate challenge lies in the simultaneous optimization of perturbations on images with continuous values and textual content characterized by discrete tokens. This joint attack task continues to pose a significant hurdle that requires innovative solutions. To address these challenges, we propose a novel method named VQATTACK to explore the adversarial transferability between pre-trained source and victim target VQA models. As shown in Figure 2, the proposed VQATTACK generates image and text perturbations solely based on the pre-trained source model F with a novel multi-step attacking framework. After initializing the input image-text pair (I, T), VQATTACK will iteratively generate both image and text perturbations at each iteration m via two key modules: large language model (LLM)-enhanced image attack and crossmodal joint attack. In the LLM-enhanced image attack module, VQATTACK first follows existing work (Naseer et al. 2020; Zhang, Yi, and Sang 2022) to minimize the similarity of latent features between the clean and perturbed input and then uses the clipping technique to obtain the image perturbation ˆI′ m. To further enhance the transferability of attacks, VQATTACK introduces a new masked answer anti-recovery loss with the help of ChatGPT (OpenAI 2023), which differs from the existing latent feature-level attack by involving the correct answer label Y during the perturbation generation. The LLM-enhanced image attack module will be executed at each iteration, and the output from this module is denoted as ˆI∗ m. Due to the discrete nature of text data and the limited number of informative words in each text input, attacking the text at every iteration might not be necessary or beneficial for perturbation generation. Consequently, the crossmodal joint attack module will be triggered when m satisfies a specific condition. During this stage, VQATTACK first updates the perturbation of the image (i.e., ˆI∗ m) via the cross-modal feature perturbation and the clipping technique. 
It then updates the text perturbation ˆTm using the learned gradients and word synonym-based substitution in the word embedding space. VQATTACK will return the output after iterating for M steps as the final adversarial image-text pair, i.e., (ˆIM, ˆTM), which will be used to attack the victim VQA model S. Our contributions can be summarized as follows: • To the best of our knowledge, this is the first study on the adversarial robustness of the VQA task under the “pretraining & fine-tuning” paradigm. It does not only discuss the robustness of this paradigm but, more importantly, probes the potential security concern under a realistic scenario. • We propose VQATTACK, which is a novel method to generate adversarial image-text pairs on the pre-trained vision language models. It consists of two novel modules, utilizes an LLM to generate masked text, and enables the iterative joint attack between image and text modalities. • Five pre-trained models and two VQA datasets are involved in our experiment. Experimental results verify the effectiveness of the proposed VQATTACK under the transferable attack setting. The Proposed VQATTACK Problem Formulation We use F to denote the publicly available pre-trained VL source model and S to represent the victim VQA target model. The goal of the transferable VQA attack is to generate an adversarial image-text pair (ˆI, ˆT) on the pre-trained source model F using the clean input (I, T), which will make the target victim model S have a wrong prediction, i.e., S(ˆI, ˆT) /∈Y, where Y is the set of correct answers. However, the victim model S in our setting is a black-box, arbitrary, and unknown model, and the only model that we can access is the pre-trained source model F. Let G denote the proposed transferable attack strategy VQATTACK. We The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6756 Algorithm 1: The proposed VQATTACK Input: A pre-trained source model F, a clean image-text pair (I, T) and the ground-truth label Y, step-size ϵ, prompt P, LLM; Input: Perturbation budget σi on image, σs on text, and the number of total iterations M. 1: Initialization ˆI0 = I + δ, δ ∈U(0, 1), ; ˆT0 = T, and use BERT model to generate candidate token set C. 2: for m = 1 to M do 3: // LLM-enhanced Image Attack 4: // Perturbation Generation with Latent features 5: Calculate ∇iLm f via Eq. (3) using (ˆIm−1, ˆTm−1); 6: ˆI′ m = clipσi(ˆIm−1 + ϵsign(∇iLT )); 7: // LLM-based Perturbation Enhancement 8: Masked text generation with LLM using ˆTm−1, label Y, and prompt P; 9: Calculate gradiants ∇iLm a via Eq. (4); 10: ˆI∗ m = clipσi(ˆI′ m + ϵsign(∇iLm a )); 11: // Cross-modal Joint Attack 12: if m mod ⌊ M |W|+1⌋= 0 then 13: // Image Perturbation Update 14: Calculate ∇iLm c via Eq. (5) using (ˆI∗ m ˆTm−1); 15: ˆIm = clipσi(ˆI∗ m + ϵsign(∇iLm c )); 16: // Text Perturbation Update 17: Latent word embedding estimation via Eq. (6); 18: Obtain the synonym ranks R(C) according to Eq. (7); 19: Conduct synonym substitution to obtain ˆTm; 20: else ˆIm = ˆI∗ m, ˆTm = ˆTm−1; 21: end if 22: end for 23: return (ˆIM, ˆTM) use the following function to generate an adversarial imagetext pair (ˆI, ˆT): (ˆI, ˆT) = G(F, (I, T), M, σi, σs), (1) where G is an iterative attacking function, and M is the number of iterations. σi and σs are two hyperparameters to control the quality of adversarial images and text, which are defined as follows: Image: ∥ˆI −I∥∞< σi, Text: Cos(U( ˆT), U(T)) > σs. (2) For an adversarial image ˆI, we add pixel-level perturbations under the L∞-norm distance. 
The distance threshold is set to σi. For an adversarial sentence ˆT, we replace words with their synonyms and enforce a semantic similarity constraint σs, which is implemented through the cosine similarity Cos(·, ·) between the sentence embeddings U( ˆT) and U(T). Here, U(·) represents the universal sentence encoder (Cer et al. 2018), which has been widely adopted in text attack methods (Jin et al. 2020; Li et al. 2021a, 2019). Overview As shown in Figure 2, the proposed VQATTACK G first initializes the input pair (I, T) as (ˆI0, ˆT0), and then updates (ˆIm, ˆTm) at each iteration m through the proposed large language model (LLM)-enhanced image attack and crossmodal joint attack until the maximum iteration M. The final output (ˆIM, ˆTM) is then used to attack the victim model S. Algorithm 1 shows the algorithm flow of the proposed VQATTACK. Next, we provide the details of our model design step by step. Initialization As shown in Algorithm 1 line 1, for the input image I, we follow the Projected Gradient Decent (PGD) (Madry et al. 2018) method to initialize ˆI by adding noise δ sampled from the Gaussian distribution U, i.e., ˆI0 = I + δ, where δ ∈ U(0, 1). For the text modality, we directly use the original input as the initialization, i.e., ˆT0 = T. Intuitively, the initialized pair (ˆI0, ˆT0) can serve as the initial input for the cross-modal joint attack module, where iterative updates are performed on (ˆIm, ˆTm) at each iteration. However, it is worth noting that this seemingly straightforward approach may not yield adversarial examples of high quality for effectively attacking the targeted model S. One aspect to consider is the intrinsic disparity between the numerical pixel representation of the input image I and the sequence-based nature of the input text T. Frequent perturbations to the discrete T can often result in significant gradient fluctuations, which could subsequently adversely impact the perturbation of the numerical I. As such, strictly coupling the updates of these two modalities throughout the entire attack process may not be the most optimal strategy. Besides, the input text T is typically characterized by a relatively short average length1, containing only a limited number of informative words. This leads us to recognize that attacking the text at every iteration might not be necessary or beneficial. It is due to these considerations that we put forth a novel module, namely the LLM-enhanced image attack. This module is designed to first learn an effective image perturbation independently, subsequently followed by a collaborative update of both image and text perturbations iteratively. LLM-enhanced Image Attack Perturbation Generation with Latent Features Several approaches have been proposed to generate the image perturbations using pre-trained models, such as CoAttack (Zhang, Yi, and Sang 2022) and BadEncoder (Jia, Liu, and Gong 2022). The goal of these approaches is to minimize the similarity between the latent features learned by the pre-trained model F using the clean I and the perturbed ˆIm−1 at each iteration m, respectively. Most multimodal VL pre-trained models such as ViLT (Kim, Son, and Kim 2021) and VLMO (Bao et al. 1According to our investigation on the VQAv2 validation set, each sentence is only composed of an average of 6.21 words. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6757 2022) usually consist of three encoders to learn latent features, including an image encoder, a text encoder, and a multimodal encoder. 
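To make the image-side of Algorithm 1 concrete, the following is a minimal PyTorch-style sketch of the initialization (line 1) and a single signed-gradient step with clipping into the σi budget of Eq. (2) (lines 5–6). The attack loss is left as a placeholder for Lf, La, or Lc, and the uniform noise range, function names, and pixel-range clamp are our assumptions rather than the authors' released implementation.

```python
# Minimal sketch of the image-side update in Algorithm 1: random initialization,
# one epsilon * sign(gradient) step, then clipping back into the sigma_i
# L-infinity ball of Eq. (2). The loss is a placeholder for L_f / L_a / L_c.
import torch

def init_adv_image(image, sigma_i):
    """Algorithm 1, line 1: start from the clean image plus random noise
    (uniform noise in [-sigma_i, sigma_i] is assumed here)."""
    noise = torch.empty_like(image).uniform_(-sigma_i, sigma_i)
    return (image + noise).clamp(0.0, 1.0).detach().requires_grad_(True)

def signed_grad_step(adv_image, clean_image, attack_loss, eps, sigma_i):
    """Algorithm 1, lines 5-6 style update: attack_loss must have been computed
    from adv_image so its gradient is available."""
    grad = torch.autograd.grad(attack_loss, adv_image)[0]
    updated = adv_image.detach() + eps * grad.sign()
    # Project back into the L-infinity ball around the clean image (Eq. (2)).
    updated = torch.max(torch.min(updated, clean_image + sigma_i),
                        clean_image - sigma_i)
    return updated.clamp(0.0, 1.0).detach().requires_grad_(True)
```

The same step-and-clip pattern is reused in Algorithm 1 after each of the three losses (lines 6, 10, and 15).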
To generate the perturbation of ˆIm, we first follow existing work to update the image perturbation by minimizing the following loss function: Lm f = Lp X i=1 Dp X j=1 Cos(f p i,j, ˆf p i,j) | {z } image encoder + Lq X i=1 Dq X j=1 Cos(f q i,j, ˆf q i,j) | {z } multimodal encoder , (3) where Lp and Lq denote the number of layers in the image encoder and multimodal encoder, respectively. Dp and Dq represent the number of input tokens of the image encoder and multimodal encoder. For the image encoder, the input tokens are image patches; and the multimodal encoder takes the representations from both image patches and text words as the input tokens. f p i,j and f q i,j are the output feature representation vectors of the j-th token in the i-th layer with the clean input pair (I, T). ˆf p i,j and ˆf q i,j denote the output feature representation vectors of the j-th neuron in the i-th layer with the perturbed input pair (ˆIm−1, ˆTm−1). Let ˆI′ m denote the output by optimizing Eq. (3) with the clipping technique, which is further used to generate an enhanced image perturbation in the following section. This step is shown in Algorithm 1 lines 4-6. LLM-based Perturbation Enhancement In the context of transferable attacks, it is common for the pre-trained source model F to exhibit notable dissimilarities when compared to the victim target model S. Consequently, relying solely on perturbing the latent representations using Eq. (3) may prove insufficient in ensuring the creation of highquality adversarial samples capable of effectively attacking S. To tackle this challenge, we present a solution that leverages the capabilities of Large Language Models (LLMs), such as ChatGPT (OpenAI 2023), and the corresponding answers Y to bolster the process of perturbation generation. • Masked Text Generation with LLM. In a given visualquestion pairing, multiple correct answers can exist, represented as Y = [y1, · · · , yN], where N corresponds to the count of correct answers. The primary objective of the transferable attack is to create adversarial instances in such a manner that the output of S(ˆI, ˆT) does not belong to the set Y. To maximize the effectiveness of this transferable attack, a straightforward approach could involve compelling the pre-trained model F to produce incorrect predictions at each iteration. More specifically, this would entail ensuring that F(ˆI′ m, ˆTm−1) /∈Y. However, it is important to note that this approach is impractical for the current state of pre-trained models F, as they are not explicitly designed for predicting VQA answers during their pre-training phase. Fortunately, a viable alternative stems from the fact that many of these models incorporate the masked language modeling (MLM) task as part of their pre-training. In this context, we can transform the answer prediction task into a masked answer recovery task using the MLM framework. Towards this end, we need to combine the perturbed question ˆTm−1 and each correct answer yi ∈ Y with a predefined prompt P using LLMs. Let ˆZm,i = LLM( ˆTm−1, yi, P) denote the combined sentence for the i-th correct answer. The next step is to mask the answer yi from the generated sentence ˆZm,i. Note that each answer yi may contain multiple words. Let Mi denote the set of masked indices, and we can use ˆZm,i\Mi to represent the masked sentence. • Masked Answer Anti-Recovery. 
To achieve the transferable attack, we will prevent the model from recovering the correct answer tokens for each masked text ˆZm,i\Mi, by minimizing the following anti-recovery loss: Lm a = N X i=1 X j∈Mi log(pc(zm,i,j|ˆZm,i\Mi,ˆI′ m)), (4) where zm,i,j is the j-th token in ˆZm,i, and pc is the conditional probability score generated from the MLM head the pre-trained model F that is composed of a fully-connected layer and a softmax layer. After optimizing Eq. (4), we clip the learned image perturbation again, and the output is denoted as ˆI∗ m. This step is shown in Algorithm 1 lines 7-10. Cross-modal Joint Attack Due to the differences of input image ˆI∗ m and text ˆTm−1, we cannot use a unified approach to update their perturbations. For the numerical image, we can still use gradients and the clipping technique to update the perturbation, but for the discrete text, we propose to use the word substitution technique to replace words in the text with the help of continuous word embeddings. Joint Attack Trigger As discussed before, updating the text perturbation at each iteration is unnecessary. We design a heuristic function to determine when to trigger the joint attack by taking the number of informative words in the text (denoted as |W|) and the maximum iterations M into consideration. When m mod ⌊ M |W|+1⌋= 0, then VQATTACK triggers the joint attack. Here, the “+1” operation is to prevent attacking ˆTm−1 only in the last iteration step. The trigger is shown in Algorithm 1 line 12. Otherwise, VQATTACK will output (ˆI∗ m, ˆTm−1) as (ˆIm, ˆTm) for the m-the iteration (Algorithm 1 line 20). Next, we introduce how to identify informative words and extract their synonyms. Given a clean text T, we first tokenize it and filter out all stop words using the Natural Language Toolkit (NLTK)2, which results in a set {ti|i ∈W}, where W represents the indices of the unfiltered tokens. For each token ti, we follow BERT-Attack (Li et al. 2020) and employ the BERT model (Devlin et al. 2019) to predict the top-K candidate words that share similar contexts, which results in a set of candidate words {ci,1, · · · , ci,K}. We then obtain the candidate set for all tokens i ∈W, and obtain a set C = {ci,j|1 ≤j ≤K}i∈W. The motivation for using BERT is that it can better capture the context of a word, 2https://www.nltk.org/ The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6758 compared to other methods like Glove (Pennington, Socher, and Manning 2014) and Word2Vec (Mikolov et al. 2013). These candidates can retain more accurate syntactic and semantic information, making them more likely to satisfy semantic constraints. Note that this step can be done during “Initialization”, which is fixed during word substitutions. Cross-modal Perturbation Generation After triggering the cross-modal attack, VQATTACK will update the gradients with regard to both perturbed image and text via minimizing the following latent feature-level loss function: Lm c = Lm f + Lt X i=1 Dt X j=1 Cos(f t i,j, ˆf t i,j) | {z } text encoder , (5) where Lm f is the loss function from image and multimodal encoders with Eq. (3) using (ˆI∗ m, ˆTm−1) and their corresponding token representations as the inputs, respectively. The second loss term is used to measure the feature similarity from the text encoder. Lt denotes the mumbler of layers in the text encoder, and Dt represents the number of input word tokens. 
f t i,j and ˆf t i,j denote the output feature representation vectors of from the clean input (I, T) and the perturbed input pair (ˆI∗ m, ˆTm−1), respectively. • Image Perturbation Update. Since the image perturbations are numerical values, we can directly calculate the gradients using Eq. (5) and then apply the clipping technique to generate the output ˆIm for the m-th iteration, as shown in Algorithm 1 lines 13-15. • Text Perturbation Update. Due to the discrete nature of text words, we need to unitize the learned gradients with Eq. (5) in the latent word embedding space to generate text perturbations motivated by (Ye et al. 2022b). Toward this end, we propose to use word substitution attacks to generate text perturbations. Latent Word Embedding Estimation. The word substitution attack aims to replace the original, informative words in text ˆTm−1 with their synonyms, i.e., the words in set C. To this end, we need to estimate the word representations after the attack first using the original informative word embeddings E(ti) (i ∈W) and its gradient ∇Lm c (ti) learned by Eq. (5) as follows: E(ˆti) = E(ti) + ∇Lm c (ti). (6) Synonym Ranking. The goal of synonym substitution is to find a synonym of ti from {ci,1, · · · , ci,K} to replace the original informative word ti and make the embedding of the synonym close to E(ˆti). Since there may be several informative words in W, we need to decide the order of replacement. Intuitively, the larger similarity between E(ˆti) and the embedding of a synonym ci,j, the higher chance of ci,j being a perturbation. To this end, we replace the original word with each synonym ci,j to generate each synonym’s contextaware word embedding E(ci,j). We then calculate the pairwise cosine similarity between the estimated latent representation and the synonym context-aware word embedding as follows: γi,j = Cos(E(ˆti), E(ci,j)). (7) According to the similarity score values, we rank all the synonyms in C in descending order, denoted as R(C). Synonym Substitution. We replace the original word in ˆTm−1 with its synonym that has the largest similarity in R(C). Let ˆT′ m−1 denote the new text sample. Then we check whether the new sample ˆT′ m−1 satisfies the constraint listed in Eq. (2). If Cos(U( ˆT′ m−1), U(T)) > σs, then we keep the replacement in ˆTm−1, remove all the other synonyms of this word in R(C), and move to the next informative word. If Cos(U( ˆT′ m−1), U(T)) ≤σs, we do not conduct the replacement and use the synonym with the second largest value in R(C). We will repeat this procedure until all informative words are replaced or all synonyms in R(C) are checked. The output from this step is the perturbed text ˆTm as shown in Algorithm 1 lines 16-19. After executing all the above steps for M iterations, we generate the final perturbed image-text pair (ˆIM, ˆTM), which will be fed into different unknown victim models to conduct the transferable adversarial attack. Experiments Experimental Setup Datasets & Models We evaluate the proposed VQATTACK on the VQAv2 (Antol et al. 2015) and TextVQA (Singh et al. 2019) datasets. We randomly select 6,000 and 1,000 correctly predicted samples from the VQAv2 and TextVQA validation datasets, respectively. Because an image-question pair may have multiple candidate answers provided by crowd workers, we define a correct prediction only if the predicted result is the same as the label with the highest VQA score3. Each selected sample is correctly classified by all target models. 
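As a rough illustration of the synonym ranking and substitution described in Eqs. (6)–(7) above, consider the sketch below. The embedding and gradient sources, the candidate set, and the sentence-level similarity check are placeholders introduced for illustration; they are not the authors' released code.

```python
# Rough sketch of the word-substitution update: estimate the attacked word
# embedding from its gradient (Eq. (6)), rank candidate synonyms by cosine
# similarity (Eq. (7)), then keep the first candidate passing the sentence-level
# constraint of Eq. (2). All interfaces here are assumed placeholders.
import torch
import torch.nn.functional as F

def rank_synonyms(word_emb, word_grad, candidate_embs):
    """word_emb, word_grad: (d,); candidate_embs: (K, d).
    Returns candidate indices sorted from most to least promising."""
    estimated = word_emb + word_grad                                  # Eq. (6)
    scores = F.cosine_similarity(estimated.unsqueeze(0),
                                 candidate_embs, dim=-1)              # Eq. (7)
    return torch.argsort(scores, descending=True)

def substitute(tokens, idx, candidates, order, semantic_ok):
    """Try candidates in ranked order; keep the first replacement that passes
    the sentence-similarity check (e.g., USE cosine similarity > sigma_s)."""
    for k in order.tolist():
        trial = list(tokens)
        trial[idx] = candidates[k]
        if semantic_ok(trial):
            return trial
    return list(tokens)   # no valid synonym found: leave the word unchanged
```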
We also development experiments on five models, including ViLT (Kim, Son, and Kim 2021), TCL (Yang et al. 2022), ALBEF (Li et al. 2021b), VLMO-Base (VLMO-B) (Bao et al. 2022), and VLMOLarge (VLMO-L) (Bao et al. 2022). Note that VLMO-B and VLMO-L share the same structure but have different model sizes. These models are first pre-trained on public image-text pairs and then fine-tuned on VQA datasets. Baselines We comprehensively compare VQATTACK with text, image, and multi-modal adversarial attack methods. Specifically, we first adopt BERT-Attack (B&A) (Li et al. 2020) and Rewrite-Rollback (R&R) (Xu et al. 2022) as text-attack baselines. For image attack methods, we adopt DR (Lu et al. 2020), SSP (Naseer et al. 2020), and FDA (Ganeshan, S., and Radhakrishnan 2019) as baselines. These methods generate adversarial images by only perturbing intermediate features and can thus be directly utilized in our problem. VQATTACK is also compared with the multimodal attack approach Co-Attack (CoA) (Zhang, Yi, and Sang 2022). To the best of our knowledge, it is the only scheme that attempts to simultaneously add image and text 3VQA score calculates the percentage of the predicted answer that appears in 10 reference ground truth answers. More details can be found via https://visualqa.org/evaluation.html The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6759 Source Model Target Model VQAv2 TextVQA Text Only Image Only Multi-modality Text Only Image Only Multi-modality B&A R&R DR FDA SSP CoA VQATTACK B&A R&R DR FDA SSP CoA VQATTACK ViLT ALBEF 10.28 5.20 8.78 9.84 24.90 16.70 30.36 13.00 5.80 8.20 9.40 17.00 15.40 22.20 TCL 11.86 6.08 8.74 9.62 22.54 17.84 27.96 12.20 4.80 7.10 8.20 13.60 14.90 19.80 VLMO-B 6.34 1.82 5.08 5.70 21.48 13.64 25.72 7.30 3.20 7.40 5.70 13.90 12.90 19.50 VLMO-L 5.02 2.18 5.58 5.72 13.08 10.64 25.98 6.60 0.30 2.80 2.40 7.20 7.60 8.40 TCL ViLT 6.68 2.52 5.74 5.78 11.04 11.22 21.80 9.30 2.60 4.60 5.20 7.10 10.80 16.30 ALBEF 5.58 2.92 11.10 12.52 38.26 33.24 58.42 10.80 8.70 9.10 10.50 31.80 26.10 46.80 VLMO-B 7.52 3.84 15.82 9.00 23.88 18.32 47.48 7.82 2.54 6.50 7.60 16.70 15.50 34.00 VLMO-L 5.64 2.22 8.04 6.14 15.26 12.64 30.46 2.40 5.96 3.80 4.70 9.50 10.00 18.60 ALBEF ViLT 6.72 2.42 6.90 7.02 11.42 11.36 21.60 8.70 2.60 4.60 5.80 8.20 11.70 15.60 TCL 6.96 1.80 12.64 11.78 35.46 27.24 61.32 9.90 2.90 9.60 8.80 13.10 20.50 43.70 VLMO-B 5.68 2.04 8.14 9.04 21.48 16.16 42.32 8.50 3.30 7.70 8.10 15.20 14.50 28.30 VLMO-L 5.02 2.18 5.58 5.72 21.56 10.64 25.98 5.70 2.20 4.10 4.50 8.20 7.40 16.20 VLMO-B ViLT 7.72 2.04 4.36 5.34 10.20 10.90 18.70 10.90 0.80 3.20 3.40 7.80 11.70 15.20 TCL 12.20 6.26 10.98 13.64 20.24 21.52 43.62 13.50 4.50 8.20 9.30 14.30 18.00 28.30 ALBEF 10.74 6.30 11.22 14.52 22.66 22.46 48.06 13.50 6.10 9.50 12.70 16.80 19.60 32.60 VLMO-L 5.98 3.96 4.58 5.48 10.66 12.52 30.82 6.70 0.60 2.70 4.20 6.80 9.60 17.40 VLMO-L ViLT 7.50 1.62 7.48 3.52 7.94 8.78 13.08 10.30 1.30 3.00 2.90 5.80 9.20 13.10 TCL 12.20 6.14 12.10 10.92 21.18 15.48 32.96 12.90 4.40 6.90 6.80 15.60 13.70 21.70 ALBEF 10.84 5.98 24.84 10.90 24.50 15.14 37.48 13.00 6.40 9.30 9.40 17.00 12.30 26.80 VLMO-B 8.22 1.86 20.96 7.58 19.60 12.70 33.78 8.70 1.90 6.00 4.50 14.20 11.60 25.20 Table 1: Comparison between VQATTACK and baselines on different models using the VQAv2 and TextVQA datasets (%). (a) VQAv2 (b) Text-VQA Figure 3: Ablation study results on the source model TCL and the victim model VLMO-B. perturbations. 
We adopt the Attack Success Rate (ASR) as the evaluation metric, which measures the ratio of samples whose predicted labels are not in the correct answers. Result Analysis We alternatively select a pre-trained model as the source model to generate adversarial samples, which are then used to attack the remaining models treated as victims. Experimental results are listed in Table 1. We can observe that the proposed VQATTACK significantly outperforms all baselines on each dataset for the five transferable attack experiments. Specifically, VQATTACK achieves an average ASR of 22.49% using ViLT as the source model, 34.23% for TCL, 31.88% for ALBEF, 29.33% for VLMO-B and 25.51% for VLMO-L. The ASR value is comparatively lower when using ViLT as the source model because its model structure and pre-training strategies are greatly different from others. Also, the ASR value obtained by using VLMO-L as the source model is slightly lower than that of using VLMO-B as the source model. This observation demonstrates that the model owns larger parameters can present better adversarial robustness. Finally, all of these results have demonstrated the effectiveness of our proposed approach and also comprehensively reveal the huge threat of adversarial attacks in the “pre-training & fine-tuning” learning paradigm. Ablation Study This ablation study aims to validate the effectiveness of the two designed modules. Figure 3 shows the ablation study results using the adversarial samples generated by TCL to attack VLMO-B. “IE” means only using the latent presentations learned by the image encoder to generate adversarial samples in Eq. (3). “LRP” means the latent representation perturbation used in the LLM-enhanced image attack module, where we only use Eq. (3) to generate the adversarial samples. We can observe that using the multimodal encoder can make significant ASR improvements. “LLM-E” means using both Eqs. (3) and (4) to generate perturbations. Compared with “LRP”, the performance can increase, which indicates the efficacy of introducing LLM to help generate masked text and the effectiveness of the designed masked answer anti-recovery loss in Eq. (4). The proposed VQATTACK achieves the best performance. The performance gap between LLM-E and VQATTACK demonstrates the effectiveness of the proposed cross-modal joint attack module. Case Study We conduct a case study on the VQAv2 dataset using the source model TCL, as shown in Figure 4. We can observe that the generated adversarial samples largely change the original correct prediction to a wrong answer. For instance, recognizing a kitchen as a bedroom (column 3). Furthermore, the generated adversarial samples still keep the natThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6760 Modality Method VQAv2 TextVQA ViLT ALBEF TCL VLMO-B VLMO-L ViLT ALBEF TCL VLMO-B VLMO-L Text Only B&A 15.16 8.24 8.96 10.16 11.72 20.20 11.50 14.90 13.00 10.10 R&R 7.30 4.68 5.64 6.86 4.62 7.40 8.30 5.90 2.70 3.80 Image Only DR 16.90 20.42 15.82 17.12 11.02 14.40 14.50 11.60 14.50 7.90 FDA 20.08 17.72 16.74 22.16 9.92 13.90 12.80 11.70 19.50 7.50 SSP 61.36 49.68 51.46 46.32 41.94 49.80 36.70 40.70 34.60 28.40 Multimodality Co-Attack 50.12 46.50 52.74 43.56 18.48 42.30 35.80 45.80 35.80 16.80 VQATTACK 79.00 75.16 76.46 75.04 61.60 65.00 61.90 65.70 66.20 48.70 Table 2: Results of transferable attacks between F and S with the same pre-trained structures. Figure 4: Qualitative results of VQATTACK on the VQAv2 dataset generated by the TCL model. 
The original answer and perturbed words are displayed in blue and red, respectively. The wrong prediction is shown with an underline. ural appearance as the benign samples, which demonstrates a serious security threat in the present VQA systems. Transferable Attacks with Shared Information In this experiment, we use the pre-trained model F as the source model and its downstream VQA task as the target S. F and S share most of the structures, and only the final prediction layers are different. Table 2 shows the experimental results. We can observe that the proposed VQATTACK still outperforms all the baselines on the two VQA datasets. Compared with the results listed in Table 1, we can observe that the performance of all approaches improves significantly under this setting. This experiment concludes that the shared information is sensitive, which may make the target models vulnerable. Related Work Robustness of VQA The robustness of VQA is moderately explored. Recently, Fool-VQA (Xu et al. 2018) explores the adversarial vulnerability of a VQA system by adding image noise constrained by l∞distance. TrojVQA (Walmer et al. 2022) performs a backdoor attack by injecting deliberate image patches and word tokens. These studies concentrate on the robustness of end-to-end trained VQA models and design algorithms based on the final predictions. Because the outputs of pre-trained and fine-tuned VQA models are different, they cannot be extended to our problem. Adversarial Attacks Adversarial image attacks are initially explored in Fast Gradient Sign Method (Goodfellow, Shlens, and Szegedy 2015) and Projected Gradient Decent (Madry et al. 2018). An intriguing property of these adversarial images is their “transferability”, which can be utilized to attack different image models with unknown parameters and structures. To enhance the transferability, the recently proposed methods exploit features from intermediate layers for adversarial attacks. They either combine the feature distortion loss with the classification cross-entropy term (Huang et al. 2019; Inkawhich et al. 2020a,b) or fully rely on the intermediate feature disruption (Ganeshan, S., and Radhakrishnan 2019; Naseer et al. 2020). Text attack methods are primarily divided into searching-based and gradient-based algorithms. Searching-based attacks include a set of heuristic ranking algorithms (Li et al. 2021a, 2020; Xu et al. 2022) with sub-optimal performance. Recently, gradient-based attacking approaches (Guo et al. 2021; Wang et al. 2022; Ye et al. 2022a,b) are proposed. Unlike image attacks, the gradient cannot be directly projected onto discrete text inputs. Accordingly, gradient change is instantiated either through distance matching on candidate word embeddings (Wang et al. 2022; Ye et al. 2022a,b), or by using Gumbel-softmax sampling (Jang, Gu, and Poole 2017) on a learnable distribution of all candidate words. For multi-modal attacks, the recently proposed Co-attack (Zhang, Yi, and Sang 2022) method firstly combines both image and text attacks, which utilizes word substitution to guide image adversarial attacks. It has demonstrated to some extent that perturbations across both modalities can be more effective than a single source. However, it does not take into account the dynamic connections between perturbations on different modalities, indicating potential space for significant improvements under more challenging scenarios. 

Conclusion In this paper, we investigate a novel transferable adversarial attack scenario, aiming to generate adversarial samples only using pre-trained models, which are used to attack different black-box victim models. Correspondingly, we propose a new model named VQATTACK, which can jointly update both image and text perturbations. Besides, we propose to incorporate the large language model to enhance the transferability of the source model. Experimental results on two VQA datasets with five models show the effectiveness of the proposed VQATTACK for the transferable attacks. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6761 Acknowledgements This work is partially supported by the National Science Foundation under Grant No. 1951729, 1953813, 2119331, 2212323, and 2238275. References Antol, S.; Agrawal, A.; Lu, J.; Mitchell, M.; Batra, D.; Zitnick, C. L.; and Parikh, D. 2015. VQA: Visual Question Answering. In ICCV, 2425–2433. IEEE Computer Society. Bao, H.; Wang, W.; Dong, L.; Liu, Q.; Mohammed, O. K.; Aggarwal, K.; Som, S.; Piao, S.; and Wei, F. 2022. VLMo: Unified Vision-Language Pre-Training with Mixture-ofModality-Experts. In NeurIPS. Cer, D.; Yang, Y.; Kong, S.; Hua, N.; Limtiaco, N.; John, R. S.; Constant, N.; Guajardo-Cespedes, M.; Yuan, S.; Tar, C.; Strope, B.; and Kurzweil, R. 2018. Universal Sentence Encoder for English. In EMNLP (Demonstration), 169–174. Association for Computational Linguistics. Devlin, J.; Chang, M.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT (1), 4171–4186. Association for Computational Linguistics. Ganeshan, A.; S., V. B.; and Radhakrishnan, V. B. 2019. FDA: Feature Disruptive Attack. In ICCV, 8068–8078. IEEE. Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2015. Explaining and Harnessing Adversarial Examples. In ICLR (Poster). Guo, C.; Sablayrolles, A.; J´egou, H.; and Kiela, D. 2021. Gradient-based Adversarial Attacks against Text Transformers. In EMNLP (1), 5747–5757. Association for Computational Linguistics. Huang, Q.; Katsman, I.; Gu, Z.; He, H.; Belongie, S. J.; and Lim, S. 2019. Enhancing Adversarial Example Transferability With an Intermediate Level Attack. In ICCV, 4732–4741. IEEE. Inkawhich, N.; Liang, K. J.; Carin, L.; and Chen, Y. 2020a. Transferable Perturbations of Deep Feature Distributions. In ICLR. OpenReview.net. Inkawhich, N.; Liang, K. J.; Wang, B.; Inkawhich, M.; Carin, L.; and Chen, Y. 2020b. Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability. In NeurIPS. Jang, E.; Gu, S.; and Poole, B. 2017. Categorical Reparameterization with Gumbel-Softmax. In ICLR (Poster). OpenReview.net. Jia, J.; Liu, Y.; and Gong, N. Z. 2022. BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning. In IEEE Symposium on Security and Privacy, 2043–2059. IEEE. Jin, D.; Jin, Z.; Zhou, J. T.; and Szolovits, P. 2020. Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment. In AAAI, 8018– 8025. AAAI Press. Kenfack, F. K.; Siddiky, F. A.; Balint-Benczedi, F.; and Beetz, M. 2020. RobotVQA - A Scene-Graph- and DeepLearning-based Visual Question Answering System for Robot Manipulation. In IROS, 9667–9674. IEEE. Kim, W.; Son, B.; and Kim, I. 2021. ViLT: Vision-andLanguage Transformer Without Convolution or Region Supervision. In ICML, volume 139 of Proceedings of Machine Learning Research, 5583–5594. PMLR. 
Li, D.; Zhang, Y.; Peng, H.; Chen, L.; Brockett, C.; Sun, M.; and Dolan, B. 2021a. Contextualized Perturbation for Textual Adversarial Attack. In NAACL-HLT, 5053–5069. Association for Computational Linguistics. Li, J.; Ji, S.; Du, T.; Li, B.; and Wang, T. 2019. TextBugger: Generating Adversarial Text Against Real-world Applications. In NDSS. The Internet Society. Li, J.; Selvaraju, R. R.; Gotmare, A.; Joty, S. R.; Xiong, C.; and Hoi, S. C. 2021b. Align before Fuse: Vision and Language Representation Learning with Momentum Distillation. In NeurIPS, 9694–9705. Li, L.; Ma, R.; Guo, Q.; Xue, X.; and Qiu, X. 2020. BERTATTACK: Adversarial Attack Against BERT Using BERT. In EMNLP (1), 6193–6202. Association for Computational Linguistics. Lu, Y.; Jia, Y.; Wang, J.; Li, B.; Chai, W.; Carin, L.; and Velipasalar, S. 2020. Enhancing Cross-Task Black-Box Transferability of Adversarial Examples With Dispersion Reduction. In CVPR, 937–946. Computer Vision Foundation / IEEE. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2018. Towards Deep Learning Models Resistant to Adversarial Attacks. In ICLR (Poster). OpenReview.net. Mikolov, T.; Chen, K.; Corrado, G.; and Dean, J. 2013. Efficient Estimation of Word Representations in Vector Space. In ICLR (Workshop Poster). Naseer, M.; Khan, S. H.; Hayat, M.; Khan, F. S.; and Porikli, F. 2020. A Self-supervised Approach for Adversarial Robustness. In CVPR, 259–268. Computer Vision Foundation / IEEE. OpenAI. 2023. GPT-4 Technical Report. CoRR, abs/2303.08774. Pennington, J.; Socher, R.; and Manning, C. D. 2014. Glove: Global Vectors for Word Representation. In EMNLP, 1532– 1543. ACL. Singh, A.; Natarajan, V.; Shah, M.; Jiang, Y.; Chen, X.; Batra, D.; Parikh, D.; and Rohrbach, M. 2019. Towards VQA Models That Can Read. In CVPR, 8317–8326. Computer Vision Foundation / IEEE. Walmer, M.; Sikka, K.; Sur, I.; Shrivastava, A.; and Jha, S. 2022. Dual-Key Multimodal Backdoors for Visual Question Answering. In CVPR, 15354–15364. IEEE. Wang, B.; Xu, C.; Liu, X.; Cheng, Y.; and Li, B. 2022. SemAttack: Natural Textual Attacks via Different Semantic Spaces. In NAACL-HLT (Findings), 176–205. Association for Computational Linguistics. Xie, C.; Zhang, Z.; Zhou, Y.; Bai, S.; Wang, J.; Ren, Z.; and Yuille, A. L. 2019. Improving Transferability of Adversarial Examples With Input Diversity. In CVPR, 2730–2739. Computer Vision Foundation / IEEE. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6762 Xu, L.; Cuesta-Infante, A.; Berti-´Equille, L.; and Veeramachaneni, K. 2022. R&R: Metric-guided Adversarial Sentence Generation. In AACL/IJCNLP (Findings), 438–452. Association for Computational Linguistics. Xu, X.; Chen, X.; Liu, C.; Rohrbach, A.; Darrell, T.; and Song, D. 2018. Fooling Vision and Language Models Despite Localization and Attention Mechanism. In CVPR, 4951–4961. Computer Vision Foundation / IEEE Computer Society. Yang, J.; Duan, J.; Tran, S.; Xu, Y.; Chanda, S.; Chen, L.; Zeng, B.; Chilimbi, T.; and Huang, J. 2022. VisionLanguage Pre-Training with Triple Contrastive Learning. In CVPR, 15650–15659. IEEE. Ye, M.; Chen, J.; Miao, C.; Wang, T.; and Ma, F. 2022a. LeapAttack: Hard-Label Adversarial Attack on Text via Gradient-Based Optimization. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2307–2315. Ye, M.; Miao, C.; Wang, T.; and Ma, F. 2022b. TextHoaxer: budgeted hard-label adversarial attacks on text. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 3877–3884. 
Yu, T.; Shen, Y.; and Jin, H. 2019. A Visual Dialog Augmented Interactive Recommender System. In KDD, 157–165. ACM. Zhan, L.; Liu, B.; Fan, L.; Chen, J.; and Wu, X. 2020. Medical Visual Question Answering via Conditional Reasoning. In ACM Multimedia, 2345–2354. ACM. Zhang, J.; Yi, Q.; and Sang, J. 2022. Towards Adversarial Attack on Vision-Language Pre-training Models. In ACM Multimedia, 5005–5013. ACM. | 2024 | 751 |
18,575 | TF-CLIP: Learning Text-Free CLIP for Video-Based Person Re-identification Chenyang Yu1, Xuehu Liu2, Yingquan Wang1, Pingping Zhang3*, Huchuan Lu1,3,4 1 School of Information and Communication Engineering, Dalian University of Technology, Dalian, China 2 School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan, China 3 School of Future Technology, School of Artificial Intelligence, Dalian University of Technology, Dalian, China 4 Ningbo Institute, Dalian University of Technology, Ningbo, China [email protected];[email protected];yingquan [email protected];{zhpp, lhchuan}@dlut.edu.cn Abstract Large-scale language-image pre-trained models (e.g., CLIP) have shown superior performances on many cross-modal retrieval tasks. However, the problem of transferring the knowledge learned from such models to video-based person reidentification (ReID) has barely been explored. In addition, there is a lack of decent text descriptions in current ReID benchmarks. To address these issues, in this work, we propose a novel one-stage text-free CLIP-based learning framework named TF-CLIP for video-based person ReID. More specifically, we extract the identity-specific sequence feature as the CLIP-Memory to replace the text feature. Meanwhile, we design a Sequence-Specific Prompt (SSP) module to update the CLIP-Memory online. To capture temporal information, we further propose a Temporal Memory Diffusion (TMD) module, which consists of two key components: Temporal Memory Construction (TMC) and Memory Diffusion (MD). Technically, TMC allows the frame-level memories in a sequence to communicate with each other, and to extract temporal information based on the relations within the sequence. MD further diffuses the temporal memories to each token in the original features to obtain more robust sequence features. Extensive experiments demonstrate that our proposed method shows much better results than other state-of-the-art methods on MARS, LS-VID and iLIDS-VID. Introduction Video-based person Re-Identification (ReID) (Liu et al. 2021c; Xu et al. 2017; Liu et al. 2015) aims at re-identifying specific persons from videos across non-overlapping cameras. Over the past decade, various approaches have been proposed to solve this challenging task, including CNNbased methods (Zhang et al. 2020; Fu et al. 2019; Subramaniam, Nambiar, and Mittal 2019; Chen et al. 2018; Dai et al. 2019) and Transformer-based methods (Liu et al. 2021b; Zhang et al. 2021a; Wu et al. 2022; Zhu et al. 2023). However, existing works employ a unimodal framework, where the video encoders are trained to predict a fixed set of predefined labels. Considering that the ReID task is in an open-set setting, such closed-set training setting limits the robustness of these methods. *Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. A video of a person. Visual Encoder Text Encoder (a) The CLIP embedding space Wellaligned Visual Encoder Text Encoder Visual Encoder Visual Encoder … Text Encoder Stage 1 A video of a person. 1 Num X X Learnable Stage 2 Align (b) Two-stage CLIP-ReID training Training Set CLIP-Memory Align (c) Our one-stage TF-CLIP training Training Examples Figure 1: Comparison of CLIP-ReID and our one-stage textfree learning framework. (a) An illustration of the CLIP joint embedding space. (b) Two-stage CLIP-ReID training. (c) Our one-stage TF-CLIP training. 
Recently, the outstanding Contrastive Language-Image Pre-training (CLIP) (Radford et al. 2021) has shown a great capability of learning robust representations. Different from traditional unimodal frameworks, CLIP is trained on largescale language-image pairs in a contrastive way. Apparently, CLIP-based methods have achieved great success in video domains (Ju et al. 2022; Xu et al. 2021; Wang, Xing, and Liu 2021; Ni et al. 2022). However, the transfer and adaptation to video-based person ReID is not well explored. A major challenge is that compared with other video tasks, video sequences in video-based person ReID only have integer index labels, and there is no text labels. A straightforward solution is to annotate the natural language descriptions for existing video-based person ReID datasets. Unfortunately, it is both time consuming and labour cost to annotate suitable text descriptions for fine-grained person videos. To overcome these issues, inspired by recent prompt engineering researches, CLIP-ReID (Li, Sun, and Li 2022) is proposed for image-based person ReID. As shown in Fig. 1 (b), CLIP-ReID fixes the text encoder and visual encoder in the first training stage, and optimizes a set of learnable text tokens to generate the label-specific text features. Then, it uses the learned text features to update the visual encoder in the second training stage. Although CLIP-ReID has achieved excellent performances, it does not consider the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6764 temporal information in video sequences. For video-based person ReID, extracting robust temporal features is a critical step (Gao and Nevatia 2018). Thus, directly applying CLIP-ReID in video-based person ReID is sub-optimal. On the other hand, the two-stage training strategy is not elegant enough and needs hyper-parameter tuning. To address the aforementioned issues, we propose an onestage text-free CLIP-based person ReID framework named TF-CLIP. Specifically, as shown in Fig. 1 (c), we first use a pre-trained CLIP visual encoder for all video sequences of each identity to extract sequence features, and then take the average to obtain the identity-level feature denoted as CLIP-Memory. In this form, we replace the text encoder and achieve text-free CLIP-based person ReID. As shown in Fig. 1 (a), this is reasonable because the pre-trained CLIP text encoder and visual encoder are constrained by contrastive learning losses, where the outputs of the two encoders are aligned in a common feature space. What’s more, the essence of the first-stage training of CLIP-ReID is to align text prompts with the outputs of the pre-trained visual encoder. Based on the above facts, we propose CLIPMemory to address the lack of appropriate text descriptions when using pre-trained visual-language models. Considering that the CLIP-Memory is fixed during the training process, the resulting method will be only optimized for a specific set of training identities, while ignoring the appearance diversity of the same identity. Inspired by CoCoOp (Zhou et al. 2022a), we further design a Sequence-Specific Prompt (SSP) module, which updates this CLIP-Memory according to each video sequence when training. On the other hand, we further propose a Temporal Memory Diffusion (TMD) module to capture temporal information. TMD is composed of Temporal Memory Construction (TMC) and Memory Diffusion (MD). Technically, it first takes the feature of each frame as input, and constructs a memory token. 
Then, based on memory tokens, TMC is employed to capture temporal information within the sequence. The updated memory token will store the context information of the video sequence. Next, MD combines the obtained temporal memory tokens with the original features, and diffuses the temporal memory to each token. Finally, we perform a Temporal Average Pooling (TAP) on the updated frame-level features to obtain robust sequence-level feature representations. Extensive experiments on three public ReID benchmarks demonstrate that our approach achieves promising results over most of previous methods. In summary, our contributions are three folds: • We propose a novel one-stage text-free CLIP-based learning framework named TF-CLIP for video-based person ReID. To our best knowledge, we are the first to extract identity-specific sequence features to replace the text features of CLIP. Meanwhile, we further design a Sequence-Specific Prompt (SSP) module to update the CLIP-Memory online. • We propose a Temporal Memory Diffusion (TMD) module to capture temporal information. The frame-level memories in a sequence first communicate with each other to extract temporal information. The temporal information is then further diffused to each token, and finally aggregated to obtain more robust temporal features. • Extensive experiments demonstrate that our proposed method shows superior performance over existing methods on three video-based person ReID datasets, i.e., MARS, LS-VID and iLIDS-VID. Related Works Video-based Person ReID Most of video-based person ReID methods exploit spatial and temporal cues for video representation learning. Early studies use RNNs/LSTMs (McLaughlin, Martinez del Rincon, and Miller 2016; Dai et al. 2018), 3D convolutions (Li, Zhang, and Huang 2019; Gu et al. 2020), temporal pooling (Zheng et al. 2016; Wu et al. 2018) and attention mechanisms (Liu et al. 2021c; Liu, Zhang, and Lu 2023), to extract temporal features. For example, Hou et al. (Hou et al. 2020) propose a temporal complementary learning network to extract complementary features of consecutive video frames. Different from these methods, we propose a Temporal Memory Diffusion (TMD) module to mine temporal memory and diffuse it to each frame. In recent years, the vision Transformer has shown impressive results in person ReID compared with CNN-based methods. Liu et al. (Liu et al. 2021b) propose a trigeminal Transformer in spatial, temporal and spatial-temporal views to obtain more cues. He et al. (He et al. 2021) propose a hybrid framework based on dense interaction learning, which utilizes both CNNs and attention mechanisms to enhance multi-grained spatio-temporal modeling. Despite the impressive success of above approaches, they all employ a unimodal framework, where video encoders are trained to predict a fixed set of predefined labels. In contrast, we propose a new paradigm based on a visual-language multimodal learning framework for video-based person ReID. Visual-language Learning Recently, visual-language joint learning methods (Jia et al. 2021; Yuan et al. 2021) (e.g., CLIP) have demonstrated great potential in extracting generic visual representations. For example, Rao et al. (Rao et al. 2022) propose DenseCLIP for dense prediction by implicitly and explicitly leveraging the pre-trained knowledge from CLIP. Zhou et al. propose CoOp (Zhou et al. 2022b) and CoCoOp (Zhou et al. 2022a) for zero-shot image classification by prompt learning. Khattak et al. (Khattak et al. 
2022) propose multi-modal prompt learning for both vision and language branches to improve alignments between vision and language representations. Li et al. (Li, Sun, and Li 2022) utilize CLIP for image-based person ReID with a two-stage training strategy. Different from the above methods, we design an one-stage training strategy to extend CLIP to the video-based person ReID task. On the other hand, several studies also try to extend the existing CLIP model to the video domain. Rasheed et al. (Rasheed et al. 2022) propose ViFi-CLIP to bridge the domain gap from images to videos. Wang et al. (Wang, Xing, and Liu 2021) propose ActionCLIP for action recognition by designing appropriate prompts. Ni et al. (Ni et al. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6765 ReID L B V … TMC … MD T A P … … Visual Encoder Visual Encoder 1 M … 1 V … V2M L … Training Set … Y M … SSP + 1 M… Y M … B V MD … :Frame-level features :[CLS] tokens :Init temporal memory tokens :Mined temporal memories :Updated frame-level features :Sum operation :Sequence-specific prompt SSP 1 M Y M … :Identity-specific CLIP-Memory 1 M Y M :Updated CLIP-Memory … + TMC :Temporal memory construction MD :Memory diffusion TAP :Temporal average pooling :Updated [CLS] tokens (1) CLIP-Memory (2) Temporal Memory Diffusion PK Sampling Figure 2: Illustration of the proposed TF-CLIP framework. It consists of two key modules, i.e., a CLIP-Memory module and a Temporal Memory Diffusion (TMD) module. (1) In the CLIP-Memory module, we first extract the identity-specific sequence feature as CLIP-Memory to replace the text feature. Meanwhile, a Sequence-Specific Prompt (SSP) module is proposed to update the CLIP-Memory online. (2) In TMD, Temporal Memory Construction (TMC) is first proposed to capture temporal information based on the relations within the sequence. Then, Memory Diffusion (MD) further diffuses the temporal information to each token in the original features to obtain more robust sequence features. 2022) propose XCLIP and use a text prompt generation for better generalization. Inspired by the successful applications of CLIP, we further design a Temporal Memory Diffusion (TMD) module for video-based person ReID. Brief Reviews of CLIP and CLIP-ReID CLIP consists of a visual encoder V(·) and a text encoder T (·), jointly trained with contrastive learning to respectively map the input image and text into a unified representation space. Specifically, let {imgi, texti}B i=1 be a set of B training visual-language pairs within a batch, where imgi is an image and texti is a corresponding text description. CLIP uses the above two encoders and combines two linear projecting layers to encode images and texts into corresponding image features and text features. Then, the similarity of the two features is calculated as: S(imgi, texti) = Jv(V(imgi)) · Jt(T (texti)), (1) where Jv and Jt are linear layers to project features into a joint feature space. Finally, two contrastive losses denoted as Lv2t and Lt2v are employed to train models (Radford et al. 2021). Thanks to Lv2t and Lt2v, the cross-modal features will be aligned. In the application of downstream tasks, the text usually takes the form of prompt, such as “A photo/video of a {class}.” where “{}” is filled with a specific class name (e.g., cat). Unfortunately, the labels in person ReID are integer indexes instead of specific texts. To solve this problem, CLIP-ReID (Li, Sun, and Li 2022) introduces some identity-specific tokens to learn text descriptions. 
Specifically, the text prompt becomes “A photo of a [X]1, [X]2, · · · , [X]Num person”. Then, in the additional training stage, CLIP-ReID fixes the parameters of V(·) and T (·), and uses the Lv2t and Lt2v to optimize the learnable identity-specific tokens. In this way, CLIP-ReID learns a corresponding ambiguous text description for each identity. Although CLIP-ReID achieves impressive results, it requires an additional training stage to learn text descriptions and does not consider the temporal information in videos. Our Method To deal with the aforementioned problems, we propose an one-stage text-free framework named TF-CLIP. The proposed pipeline is depicted in Fig. 2, which consists of two main modules, a CLIP-Memory module to replace the original text branch and a Temporal Memory Diffusion (TMD) module to extend CLIP to video-based person ReID. CLIP-Memory Module Once the computation of the visual branch is completed, the next step of previous CLIP-based methods (Li, Sun, and Li 2022; Zhou et al. 2022b; Rao et al. 2022) is to generate the corresponding text features. Unlike them, we argue that learning text descriptions is not necessary for CLIP-based person ReID method. A CLIP-Memory can be used to replace the original text branch. Specifically, given a videobased person ReID training set D = {(xi, yi)}Ns i=1 with labels yi ∈{1, · · · , Y }, the total number of the person tracklets is Ns. Taking identity yi as an example, we first employ the pre-trained CLIP visual encoder for all video sequences with the identity yi to extract identity-specific sequence features denoted as CLIP-Memory. More specifically, given one of image sequences {It}T t=1 containing T images belongings to the identity yi, we split each frame It ∈RH×W ×3 into Np non-overlapping patches {It,i}Np i=1 ∈RP 2×3, in which H and W represent the number of height and width, respectively. Np = H P × W P and P is the patch size. Then the patches are projected to token embeddings using a linear projection layer Eemb ∈R3P 2×D, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6766 V2M L … SSP FFN MHSA + M V M … : Tuned :CLIP-Memory :Updated CLIP Memory :Sequence-level feature SSP :Sequence-specific prompt … Video Encoding I query key value Figure 3: Illustration of our CLIP-Memory module. where D represents the dimension of each token. Meanwhile, an extra learnable [CLS] token denoted as CLSt ∈ RD is added to the embedded tokens for each frame. Thus, the input of the video encoder at the frame t is given by: zP atch t = [CLSt, ET embIt,1, · · · , ET embIt,Np] + esp, (2) where esp is the spatial position embedding. Then the above embeddings are sequentially processed through CLIP. At last, the frame-level representation is defined as, zt = V∗(zP atch t ), (3) where the superscript ∗indicates that the parameters of this visual encoder V are frozen during training. What’s more, the obtained class token CLSt is projected to a visuallanguage unified space via a Visual Project (VP) layer, ft = V P(zt,0), where ft is the final representation of frame t. In practice, V P is implemented by a fully connected layer, represented as V P = WV P ∈RD×d. Finally, a Temporal Average Pooling (TAP) is employed to obtain the sequence-level feature va. Once all the video sequence features belonging to the identity yi are obtained, the average of them can represent the identity-specific feature Myi defined as CLIP-Memory, Myi = 1 Ni P va∈yi va, where Ni is the number of sequences belonging to the identity yi. 
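A minimal sketch of how the CLIP-Memory described above might be assembled from the frozen visual encoder is given below; the encoder and VP call signatures, the assumption that identities are indexed 0, …, Y−1, and all function names are ours, not the released implementation.

```python
# Minimal sketch of CLIP-Memory construction: encode each sequence of an
# identity with the frozen CLIP visual encoder, project the per-frame [CLS]
# tokens with the VP layer, temporally average-pool (TAP), then average over
# all sequences of that identity. Interfaces are assumptions for illustration.
import torch

@torch.no_grad()
def sequence_feature(frames, visual_encoder, vp):
    """frames: (T, 3, H, W). Returns the d-dim sequence-level feature v_a."""
    tokens = visual_encoder(frames)     # assumed to return (T, Np+1, D)
    cls = tokens[:, 0]                  # per-frame [CLS] tokens z_{t,0}
    f = vp(cls)                         # VP layer: D -> d
    return f.mean(dim=0)                # TAP over the T frames

@torch.no_grad()
def build_clip_memory(sequences_by_id, visual_encoder, vp, d):
    """sequences_by_id: {identity (0..Y-1): [frames, ...]}. Returns M: (Y, d)."""
    memory = torch.zeros(len(sequences_by_id), d)
    for y, seqs in sequences_by_id.items():
        feats = torch.stack([sequence_feature(s, visual_encoder, vp)
                             for s in seqs])
        memory[y] = feats.mean(dim=0)   # M_y = (1 / N_y) * sum of v_a
    return memory
```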
When traversing the entire training set, we can get an identityspecific CLIP-memory M ∈RY ×d. Superior to previous text-associated CLIP-based methods, CLIP-Memory can exploit the image-text alignment potential of CLIP without using text information. Sequence-Specific Prompt. In fact, if the CLIP-Memory is fixed during the training process, the framework will be only optimized for a specific set of training labels, while ignoring the appearance diversity of the same identity. To address the above problem, as shown in Fig. 3, we introduce a novel module: Sequence-Specific Prompt (SSP). The key idea is to generate a prompt conditioned on each input sequence to update the CLIP-Memory. When training SSP, the PK sampling strategy (Hermans, Beyer, and Leibe 2017) is employed to form a mini-batch {Ib}B=P ×K b=1 , which contains P different identities and K video sequences for each identity. After video encoding, the sequence-level features in the mini-batch can be expressed as V = [v1 y=1, · · · , vK y=1, · · · , vP ×K y=P ]. Then, SSP takes the sequence-level feature V and CLIP-Memory M as inputs. As shown in Fig. 3, the CLIP-Memory M is used as query, and the sequence-level feature V is used as key and value. Then, SSP employs the cross-attention mechanism to model the interactions between query, key and value, which can be achieved by: Py=1,··· ,y=P = Transdecoder(query, key, value), (4) where P represents the generated sequence-specific prompt. There are N blocks in Transdecoder, and each block consists of a Multi-Head Cross-Attention (MHCA) and a FeedForward Network (FFN). Subscripts y = 1, · · · , y = P represent the identities, which correspond to the index in the CLIP-Memory. We then update the identity-specific CLIPMemory through a residual connection: M ′ y=1,··· ,y=P ←Py=1,··· ,y=P + My=1,··· ,y=P . (5) In this way, CLIP-Memory can be updated online. Finally, the updated CLIP-Memory M ′ ∈RY ×d and the sequencelevel feature V ∈RP K×d are employed for contrastive learning. Compared with a fixed CLIP-Memory, SSP shifts the network’s focus from a specific set of identities to each input sequence, thereby reducing overfitting and ultimately resulting in more discriminative features. Temporal Memory Diffusion To capture temporal information, we further propose a Temporal Memory Diffusion (TMD) module. As shown in Fig. 4, TMD is mainly composed of Temporal Memory Construction (TMC) and Memory Diffusion (MD). Temporal Memory Construction. Taking the last video sequence in the training batch as an example, which is defined as IB = {IB 1 , IB 2 , · · · , IB T }, where B = P × K. According to Eq. 2 and Eq. 3, the representation at the frame level denoted as zt can be obtained through the video encoder. During the standard training process of CLIP, the [CLS] token zt,0 is used as the output of the visual encoder while the other tokens zt,others = [zt,1, · · · , zt,Np] are usually neglected. However, recent works (Zeng et al. 2022; Chen, Fan, and Panda 2021; He et al. 2022) have shown that zt,others still retains enough semantic information and can be used as a feature map. Thus, Temporal Memory Construction (TMC) takes zt ∈R(Np+1)×D as input. As shown in Fig. 4, we first construct a memory token st = θ( 1 Np+1 PNp i=0 zt,i) for each frame t based on frame-level features, where we get the initial memory token based on the average of all tokens in each frame, and θ represents a liner projection. 
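A minimal sketch of the Sequence-Specific Prompt described above (Eqs. 4-5) is given next: the CLIP-Memory rows of the identities in the batch act as queries, the batch's sequence-level features act as keys and values, and the resulting prompt is added back to the memory through a residual connection. The layer sizes, pre-norm layout, and feed-forward expansion ratio are assumptions.

```python
import torch
import torch.nn as nn

class SSPBlock(nn.Module):
    """One decoder-style block: Multi-Head Cross-Attention followed by a Feed-Forward Network."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 4 * dim),
                                 nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, query, kv):
        q, k = self.norm_q(query), self.norm_kv(kv)
        query = query + self.attn(q, k, k)[0]    # memory queries attend to the batch features
        return query + self.ffn(query)

class SequenceSpecificPrompt(nn.Module):
    """Generate a prompt P from the batch features and update the memory: M' = P + M (Eq. 5)."""
    def __init__(self, dim=512, num_blocks=2):
        super().__init__()
        self.blocks = nn.ModuleList([SSPBlock(dim) for _ in range(num_blocks)])

    def forward(self, memory_rows, seq_feats):
        # memory_rows: (P_ids, d) CLIP-Memory rows of the identities in the batch (query)
        # seq_feats:   (P_ids * K, d) sequence-level features of the batch (key/value)
        prompt, kv = memory_rows.unsqueeze(0), seq_feats.unsqueeze(0)
        for blk in self.blocks:
            prompt = blk(prompt, kv)
        return memory_rows + prompt.squeeze(0)   # residual update of the CLIP-Memory
```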
Then, to capture the temporal cues in the sequence and transfer the information of each frame into the memory tokens, a Multi-Head Self-Attention (MHSA) is computed over the initial memory tokens $S = \{s_1, \cdots, s_T\}$:

$S' = \mathrm{MHSA}(\mathrm{LN}(S)) + S$, (6)

where LN denotes Layer Normalization, and $S' \in \mathbb{R}^{T \times D}$ are the temporal-aware memory tokens. TMC allows the frame-level memories to communicate with each other, and extracts temporal information based on the relations obtained from neighbors.

Figure 4: Illustration of the proposed Temporal Memory Diffusion (TMD) module.

Memory Diffusion. In order to make better use of the temporal memory tokens, we design a diffusion-aggregation strategy named Memory Diffusion (MD). As shown in Fig. 4, the temporal memory token $s'_t \in \mathbb{R}^{1 \times D}$ is first concatenated with each frame feature $z_t \in \mathbb{R}^{(N_p+1) \times D}$ to become $[z_t, s'_t] \in \mathbb{R}^{(N_p+2) \times D}$, where $[\cdot, \cdot]$ represents a concatenation of two features. Then, an MHSA followed by an FFN layer is employed to diffuse the temporal information to each token in the frame, resulting in $[\hat{z}_t, \hat{s}'_t] \in \mathbb{R}^{(N_p+2) \times D}$, which can be expressed as:

$[\hat{z}_t, \hat{s}'_t] = \mathrm{FFN}(\mathrm{MHSA}([z_t, s'_t])) + [z_t, s'_t]$. (7)

Finally, all tokens in each frame are aggregated to obtain frame-level features $\tilde{z}_t$, and a TAP is employed to further aggregate the frame-level features into a sequence-level feature $\hat{v}$. The above processes can be formulated as:

$\tilde{z}_t = \frac{1}{N_p + 2}\left(\sum_{i=0}^{N_p} \hat{z}_i + \hat{s}'_t\right)$, (8)

$\hat{v} = \mathrm{TAP}([\tilde{z}_1, \cdots, \tilde{z}_T])$. (9)

Superior to previous aggregation strategies, MD can diffuse the temporal memories to each token in the original features and obtain more robust sequence features.

Optimization

In this work, we utilize the video-to-memory contrastive loss denoted by $L_{V2M}$ to train the SSP module and the visual encoder in the CLIP-Memory. It is worth noting that the visual encoder used to compute the CLIP-Memory is not updated during training. Given the sequence-level features $V = [v_1, \cdots, v_B]$ and the updated CLIP-Memory $M'$, the loss can be expressed as:

$L^{(y_i)}_{V2M} = -\frac{1}{|P(y_i)|} \sum_{p \in P(y_i)} \log \frac{\exp(\langle v_p, M'_{y_i} \rangle)}{\sum_{j=1}^{B} \exp(\langle v_j, M'_{y_i} \rangle)}$, (10)

where $P(y_i)$ represents all the positives for $M'_{y_i}$ in the training batch and $|\cdot|$ is its cardinality, $\langle \cdot, \cdot \rangle$ represents the cosine similarity function, and $B$ represents the number of video sequences in a batch. Moreover, the triplet loss (Hermans, Beyer, and Leibe 2017) denoted by $L_{tri}$ and the label-smoothed cross-entropy loss denoted by $L_{ce}$ are employed to optimize the visual encoder and the TMD module. Finally, the losses used in our method are summarized as follows:

$L_{total} = L_{V2M} + L_{tri} + L_{ce}$. (11)

Experiments

Datasets and Evaluation Protocols

We evaluate our proposed approach on three video-based person ReID benchmarks, including MARS (Zheng et al. 2016), LS-VID (Li et al. 2019) and iLIDS-VID (Wang et al. 2014). MARS is a large-scale benchmark which contains 20,715 videos of 1,261 identities. LS-VID is a new benchmark which collects 3,772 identities and includes 14,943 video tracklets captured by 15 cameras. iLIDS-VID is a small-scale dataset with 600 videos of 300 identities. In addition, we follow common practices and adopt the Cumulative Matching Characteristic (CMC) at Rank-k and mean Average Precision (mAP) to measure the performance.
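The TMD module described above (Eqs. 6-9) can be summarized in a few lines of PyTorch. This is a simplified sketch rather than the reference implementation: the token dimension, head count, and FFN design are assumptions.

```python
import torch
import torch.nn as nn

class TemporalMemoryDiffusion(nn.Module):
    """TMC: build one memory token per frame and let them interact over time (Eq. 6).
    MD: append each frame's memory token back to that frame's tokens and diffuse it (Eqs. 7-9)."""

    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.to_memory = nn.Linear(dim, dim)    # theta(.) used to form the initial memory tokens
        self.tmc_norm = nn.LayerNorm(dim)
        self.tmc_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.md_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.md_ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, z):                                      # z: (T, Np+1, D) frame tokens
        # Temporal Memory Construction: initial memory = projected mean of each frame's tokens
        s = self.to_memory(z.mean(dim=1)).unsqueeze(0)         # (1, T, D)
        s_norm = self.tmc_norm(s)
        s = s + self.tmc_attn(s_norm, s_norm, s_norm)[0]       # Eq. (6): S' = MHSA(LN(S)) + S
        s = s.squeeze(0)                                       # (T, D) temporal-aware memories

        # Memory Diffusion: concatenate memory token to its frame, then attend + FFN (Eq. 7)
        tokens = torch.cat([z, s.unsqueeze(1)], dim=1)         # (T, Np+2, D)
        tokens = self.md_ffn(self.md_attn(tokens, tokens, tokens)[0]) + tokens
        frame_feats = tokens.mean(dim=1)                       # Eq. (8): average all tokens per frame
        return frame_feats.mean(dim=0)                         # Eq. (9): TAP over frames -> v_hat
```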
Experiment Settings Our model is implemented on the PyTorch platform and trained with one NVIDIA Tesla A30 GPU (24G memory). The ViT-B/16 from CLIP (Radford et al. 2021) is used as the visual encoder’s backbone. The number of blocks in SPP is set to N=2. During training, we sample 8 frames from each video sequence and each frame is resized to 256×128. In each mini-batch, we sample 4 identities, each with 4 tracklets. Thus, the number of images in a batch is 4×4×8=128. We also adopt random flipping and random erasing (Zhong et al. 2020) for data augmentation. We train our framework for 60 epochs in total by the Adam optimizer (Kingma and Ba 2014). Following CLIP-ReID (Li, Sun, and Li 2022), we first warm up the model for 10 epochs with a linearly growing learning rate from 5 ×10−7 to 5 ×10−6. Then, the learning rate is divided by 10 at the 30th and 50th epochs. The original sequence-level feature v ∈R1×512 and the aggregated feature ˆv ∈R1×768 are concatenated to obtain the final video representation during testing. The Euclidean distance is employed as the distance metric for ranking. The code is available at https://github.com/AsuradaYuci/TF-CLIP. Comparison with State-of-the-arts In this section, we compare our methods with other methods on three video-based person ReID benchmarks. The results are shown in Tab. 1. These experimental results clearly The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6768 Methods Source Backbone MARS LS-VID iLIDS-VID mAP Rank-1 mAP Rank-1 Rank-1 Rank-5 STMP (Liu et al. 2019) AAAI19 C 72.7 84.4 39.1 56.8 84.3 96.8 M3D (Li, Zhang, and Huang 2019) AAAI19 C 74.1 84.4 40.1 57.7 74.0 94.3 GLTR (Li et al. 2019) ICCV19 C 78.5 87.0 44.3 63.1 86.0 98.0 TCLNet (Hou et al. 2020) ECCV20 C 85.1 89.8 70.3 81.5 86.6 MGH (Yan et al. 2020) CVPR20 C 85.8 90.0 61.8 79.6 85.6 97.1 GRL (Liu et al. 2021c) CVPR21 C 84.8 91.0 90.4 98.3 BiCnet-TKS (Hou et al. 2021) CVPR21 C 86.0 90.2 75.1 84.6 CTL (Liu et al. 2021a) CVPR21 C 86.7 91.4 89.7 97.0 STMN (Eom et al. 2021) ICCV21 C 84.5 90.5 69.2 82.1 PSTA (Wang et al. 2021) ICCV21 C 85.8 91.5 91.5 98.1 DIL (He et al. 2021) ICCV21 T 87.0 90.8 92.0 98.0 STT (Zhang et al. 2021b) Arxiv21 T 86.3 88.7 78.0 87.5 87.5 95.0 TMT (Liu et al. 2021b) Arxiv21 T 85.8 91.2 91.3 98.6 CAVIT (Wu et al. 2022) ECCV22 T 87.2 90.8 79.2 89.2 93.3 98.0 SINet (Bai et al. 2022) CVPR22 C 86.2 91.0 79.6 87.4 92.5 MFA (Gu et al. 2022) TIP22 C 85.0 90.4 78.9 88.2 93.3 98.7 DCCT (Liu et al. 2023) TNNLS23 T 87.5 92.3 91.7 98.6 LSTRL (Liu, Zhang, and Lu 2023) ICIG23 C 86.8 91.6 82.4 89.8 92.2 98.6 TF-CLIP(Ours) T 89.4 93.0 83.8 90.4 94.5 99.1 Table 1: Comparison with typical CNN-based (C) and Transformer-based (T) methods on MARS, LS-VID and iLIDS-VID. Model Components Params(M) FLOPs(G) MARS LS-VID CLIP-M SSP TMD mAP Rank-1 Rank-5 mAP Rank-1 Rank-5 1 × × × 126.78 14.24 88.1 91.7 97.4 80.6 88.8 96.3 2 ✓ × × 86.94 11.26 88.4 91.6 97.9 80.3 87.7 95.7 3 ✓ ✓ × 94.21 12.11 88.8 92.1 97.6 81.3 89.9 96.2 4 ✓ × ✓ 97.02 11.72 89.2 92.3 97.7 81.6 88.9 96.4 5 × × ✓ 136.86 14.70 89.3 92.7 97.8 82.3 90.3 96.7 6 ✓ ✓ ✓ 104.26 12.53 89.4 93.0 97.9 83.8 90.4 97.1 Table 2: Comparison of different components and the computational cost on MARS and LS-VID. confirm the superiority and effectiveness of the proposed method on large-scale ReID datasets. Results on MARS. Tab. 1 shows that our proposed method achieves the best mAP of 89.4% and the best Rank1 of 93.0% on MARS. We can observe that our method shows much better results than other methods. 
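For the test-time protocol described in the settings above (concatenating the 512-d sequence feature with the 768-d TMD feature and ranking the gallery by Euclidean distance), a minimal sketch could look as follows; the variable names and the rank-1 computation are illustrative.

```python
import torch

def rank_gallery(query_feats, gallery_feats):
    """Rank gallery tracklets for every query by Euclidean (L2) distance."""
    dist = torch.cdist(query_feats, gallery_feats, p=2)   # (Nq, Ng) pairwise distances
    return dist.argsort(dim=1)                            # ascending: best match first

def rank1_accuracy(ranking, query_ids, gallery_ids):
    """Fraction of queries whose top-ranked gallery tracklet shares the identity."""
    top1 = gallery_ids[ranking[:, 0]]
    return (top1 == query_ids).float().mean().item()

# assumed usage: the final video representation concatenates the two features
# final = torch.cat([v_512, v_hat_768], dim=1)            # (N, 1280)
```

A full CMC/mAP evaluation additionally excludes same-camera, same-identity gallery entries for each query; that bookkeeping is omitted here.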
We note that most recent works (e.g., DIL (He et al. 2021), CAVIT (Wu et al. 2022), etc.) are developed with Transformers. However, they are all based on unimodal pre-training. Different from them, we propose TF-CLIP based on multi-modal pretraining. Thus, our method surpasses CAVIT by 2.2% and 2.2% in terms of mAP and Rank-1 accuracy on MARS. Results on LS-VID. It is observed that our method outperforms other state-of-the-arts on LS-VID. Among them, STMN (Eom et al. 2021) also uses a temporal memory, which attains 82.1% Rank-1 accuracy. Instead, we propose the TMD module to generate the temporal memory in the sequence. As a result, our method achieves 90.4% Rank-1 accuracy, which surpasses STMN by 8.3%. Results on iLIDS-VID. As can be observed, Tab. 1 also demonstrates the significant superiority of the proposed model over existing methods on iLIDS-VID. Specially, our method achieves the best Rank-1 and Rank-5 accuracy of 94.2% and 99.1%, respectively. It surpasses the previous best approaches MFA (Gu et al. 2022) and CAVIT (Wu et al. 2022) by 1.2% in terms of Rank-1 accuracy. Ablation Study To verify the impact of each component in our model, we conduct several experiments on MARS and LS-VID, and show compared results in Tab. 2. “CLIP-M” stands for CLIP-Memory. Model1 means that CLIP-ReID is directly applied to video-based person ReID as the baseline, in which a TAP is employed to obtain sequence-level features. Effectiveness of CLIP-Memory. As can be seen from the first two rows in Tab. 2, Model2 obtains a higher Rank5 and mAP (by 0.5% and 0.3%) than Model1 on MARS. Meanwhile, competitive results are achieved on LS-VID. The above results clearly demonstrate that it is feasible to use the CLIP-Memory to replace the text branch, which can inspire the subsequent extension of CLIP to other downstream tasks where text information is missing. Effectiveness of SSP. As shown in Tab. 2, compared with Model2, Model3 with SPP brings 0.5% mAP and 0.4% Rank-1 accuracy gains on MARS, respectively. What’s more, compared with Model4, Model6 also improves the Rank-1 accuracy by 1.5% on LS-VID. These results clearly demonstrate that our proposed SSP can indeed improve the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6769 Conv1D Transf_cls Transf TMD Model2 Model4 wo CLIP-M − wo CLIP-M − (a) Temporal fusion methods (b) Layers in SSP (c) w/wo CLIP-Memory TAP 1 2 3 4 Figure 5: Illustration of (a) the impact of different temporal fusion methods, (b) the impact of different layers in SSP, and (c) comparison w/wo CLIP-Memory. (c) Persons in the test set of MARS. (a) Baseline (b) Ours Figure 6: t-SNE visualization of the baseline and our method on the MARS test set. Different colored dots represent different identities. Best viewed in color. performance. A reasonable explanation for this improvement is that updating the CLIP-Memory online helps the network to learn a more discriminative representation. Effectiveness of TMD. As shown in Tab. 2, the proposed TMD improves the performance remarkably. Compared with Model2, Model4 using TMD brings 0.8% mAP and 0.7% Rank-1 accuracy gains on MARS, respectively. What’s more, compared with Model3, Model6 also brings 0.6% and 2.5% Rank-1 accuracy gains on MARS and LSVID, respectively. The main reason is that our proposed TMD can capture temporal information in the sequence and obtain more robust sequence-level features. Comparison of different temporal fusion methods. 
Following ActionCLIP (Wang, Xing, and Liu 2021), we further investigate the impact of different temporal fusion methods with Model2 on MARS. The experimental results are shown in Fig. 5 (a). It can be observed that the proposed TMD is more suitable for video-based person ReID. For example, our method achieves 89.2% Rank-1 accuracy on MARS, which surpasses one-dimensional convolution (Conv1D) by 1.1%. The reason is that our proposed TMD not only extracts temporal information in the sequence, but also passes it to each token in the frame. The effect of different layers in SSP. As shown in Fig. 5 (b), we also carry out experiments to investigate the effect of different layers in SSP with Model3 on MARS. We can see that our method is not sensitive to this hyper-parameter. To balance the computation and performance, we finally choose N = 2, which achieves the best mAP accuracy of 88.8%. The necessity of CLIP-Memory. In fact, we can train the proposed framework without using CLIP-Memory. The experimental results based on Model2 and Model4 are shown in Fig. 5 (c). It can be observed that not using CLIPMemory will lead to a significant decrease in performance. For example, the unimodal baseline method achieves only 84.9% mAP accuracy on MARS, which is 3.5% lower than Model2. A plausible explanation for this performance drop is that the person ReID dataset is too small to adequately fine-tune the CLIP visual encoder when trained with unimodality. This reflects the superiority of multimodal training and the necessity of our CLIP-Memory. Visualization To better understand the effect of the proposed TF-CLIP, some persons with similar appearances from MARS are chosen to visualize the feature distribution by t-SNE (Van der Maaten and Hinton 2008). As shown in Fig. 6 (c), these persons are wearing green clothes with small inter-person variations. Comparing Fig. 6 (a) and (b), we can find that the proposed TF-CLIP indeed helps to learn more discriminative embeddings, with which the intra-person variance is minimized and the intra-person variance is maximized. Conclusion In this paper, we explore the application of vision-language pre-trained model to video-based person ReID without suitable textual descriptions. Specifically, we propose a novel one-stage text-free CLIP-based framework named TF-CLIP. To achieve text-free purpose, we propose CLIP-Memory to extract identity-specific sequence features to replace the text features. Meanwhile, we design a Sequence-Specific Prompt (SSP) to update the CLIP-Memory online. To capture temporal information, we further propose a Temporal Memory Diffusion (TMD) module. It first constructs frame-level memories and lets them communicate with each other in the sequence to extract the temporal information. Then the temporal memory is diffused into each token of the original frame, and aggregated to obtain more robust sequence features. Extensive experiments on three public ReID datasets demonstrate the effectiveness and superiority of our method. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6770 Acknowledgments This work was supported in part by the National Key Research and Development Program of China (No. 2018AAA0102001), National Natural Science Foundation of China (No. 62101092) and Fundamental Research Funds for the Central Universities (No. DUT22QN228). References Bai, S.; Ma, B.; Chang, H.; Huang, R.; and Chen, X. 2022. Salient-to-broad transition for video person re-identification. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7339–7348. Chen, C.-F. R.; Fan, Q.; and Panda, R. 2021. Crossvit: Cross-attention multi-scale vision transformer for image classification. In Proceedings of the IEEE International Conference on Computer Vision, 357–366. Chen, D.; Li, H.; Xiao, T.; Yi, S.; and Wang, X. 2018. Video person re-identification with competitive snippet-similarity aggregation and co-attentive snippet embedding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1169–1178. Dai, J.; Zhang, P.; Wang, D.; Lu, H.; and Wang, H. 2018. Video person re-identification by temporal residual learning. IEEE Transactions on Image Processing, 28(3): 1366–1377. Dai, J.; Zhang, P.; Wang, D.; Lu, H.; and Wang, H. 2019. Video Person Re-Identification by Temporal Residual Learning. IEEE Transactions on Image Processing, 28: 1366–1377. Eom, C.; Lee, G.; Lee, J.; and Ham, B. 2021. Video-based person re-identification with spatial and temporal memory networks. In Proceedings of the IEEE International Conference on Computer Vision, 12036–12045. Fu, Y.; Wang, X.; Wei, Y.; and Huang, T. 2019. Sta: Spatialtemporal attention for large-scale video-based person reidentification. In Proceedings of the AAAI Conference on Artificial Intelligence, 8287–8294. Gao, J.; and Nevatia, R. 2018. Revisiting temporal modeling for video-based person reid. arXiv preprint arXiv:1805.02104. Gu, X.; Chang, H.; Ma, B.; and Shan, S. 2022. Motion feature aggregation for video-based person re-identification. IEEE Transactions on Image Processing, 31: 3908–3919. Gu, X.; Chang, H.; Ma, B.; Zhang, H.; and Chen, X. 2020. Appearance-preserving 3d convolution for video-based person re-identification. In Proceedings of the European Conference on Computer Vision, 228–243. He, J.; Chen, J.-N.; Liu, S.; Kortylewski, A.; Yang, C.; Bai, Y.; and Wang, C. 2022. Transfg: A transformer architecture for fine-grained recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 852–860. He, T.; Jin, X.; Shen, X.; Huang, J.; Chen, Z.; and Hua, X.S. 2021. Dense interaction learning for video-based person re-identification. In Proceedings of the IEEE International Conference on Computer Vision, 1490–1501. Hermans, A.; Beyer, L.; and Leibe, B. 2017. In defense of the triplet loss for person re-identification. arXiv:1703.07737. Hou, R.; Chang, H.; Ma, B.; Huang, R.; and Shan, S. 2021. BiCnet-TKS: Learning efficient spatial-temporal representation for video person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014–2023. Hou, R.; Chang, H.; Ma, B.; Shan, S.; and Chen, X. 2020. Temporal complementary learning for video person reidentification. In Proceedings of the European Conference on Computer Vision, 388–405. Jia, C.; Yang, Y.; Xia, Y.; Chen, Y.-T.; Parekh, Z.; Pham, H.; Le, Q.; Sung, Y.-H.; Li, Z.; and Duerig, T. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, 4904–4916. PMLR. Ju, C.; Han, T.; Zheng, K.; Zhang, Y.; and Xie, W. 2022. Prompting visual-language models for efficient video understanding. In Proceedings of the European Conference on Computer Vision, 105–124. Springer. Khattak, M. U.; Rasheed, H.; Maaz, M.; Khan, S.; and Khan, F. S. 2022. Maple: Multi-modal prompt learning. arXiv preprint arXiv:2210.03117. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. 
arXiv:1412.6980. Li, J.; Wang, J.; Tian, Q.; Gao, W.; and Zhang, S. 2019. Global-local temporal representations for video person reidentification. In Proceedings of the IEEE International Conference on Computer Vision, 3958–3967. Li, J.; Zhang, S.; and Huang, T. 2019. Multi-scale 3d convolution network for video based person re-identification. In Proceedings of the AAAI Conference on Artificial Intelligence, 8618–8625. Li, S.; Sun, L.; and Li, Q. 2022. CLIP-ReID: Exploiting vision-language model for image re-identification without concrete text labels. arXiv preprint arXiv:2211.13977. Liu, J.; Zha, Z.-J.; Wu, W.; Zheng, K.; and Sun, Q. 2021a. Spatial-temporal correlation and topology learning for person re-identification in videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4370–4379. Liu, K.; Ma, B.; Zhang, W.; and Huang, R. 2015. A spatiotemporal appearance representation for viceo-based pedestrian re-identification. In Proceedings of the IEEE International Conference on Computer Vision, 3810–3818. Liu, X.; Yu, C.; Zhang, P.; and Lu, H. 2023. Deeply-coupled convolution-transformer with spatial-temporal complementary learning for video-based person re-identification. arXiv preprint arXiv:2304.14122. Liu, X.; Zhang, P.; and Lu, H. 2023. Video-based Person Reidentification with Long Short-Term Representation Learning. arXiv preprint arXiv:2308.03703. Liu, X.; Zhang, P.; Yu, C.; Lu, H.; Qian, X.; and Yang, X. 2021b. A video is worth three views: Trigeminal transformers for video-based person re-identification. arXiv:2104.01745. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6771 Liu, X.; Zhang, P.; Yu, C.; Lu, H.; and Yang, X. 2021c. Watching you: Global-guided reciprocal learning for videobased person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 13334–13343. Liu, Y.; Yuan, Z.; Zhou, W.; and Li, H. 2019. Spatial and temporal mutual promotion for video-based person reidentification. In Proceedings of the AAAI Conference on Artificial Intelligence, 8786–8793. McLaughlin, N.; Martinez del Rincon, J.; and Miller, P. 2016. Recurrent convolutional network for video-based person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1325–1334. Ni, B.; Peng, H.; Chen, M.; Zhang, S.; Meng, G.; Fu, J.; Xiang, S.; and Ling, H. 2022. Expanding language-image pretrained models for general video recognition. In Proceedings of the European Conference on Computer Vision, 1–18. Springer. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 8748–8763. PMLR. Rao, Y.; Zhao, W.; Chen, G.; Tang, Y.; Zhu, Z.; Huang, G.; Zhou, J.; and Lu, J. 2022. Denseclip: Language-guided dense prediction with context-aware prompting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 18082–18091. Rasheed, H.; Khattak, M. U.; Maaz, M.; Khan, S.; and Khan, F. S. 2022. Fine-tuned CLIP models are efficient video learners. arXiv preprint arXiv:2212.03640. Subramaniam, A.; Nambiar, A.; and Mittal, A. 2019. Cosegmentation inspired attention networks for video-based person re-identification. In Proceedings of the IEEE International Conference on Computer Vision, 562–572. Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. 
Journal of Machine Learning Research, 9: 2579–2605. Wang, M.; Xing, J.; and Liu, Y. 2021. Actionclip: A new paradigm for video action recognition. arXiv preprint arXiv:2109.08472. Wang, T.; Gong, S.; Zhu, X.; and Wang, S. 2014. Person Re-identification by video ranking. In Proceedings of the European Conference on Computer Vision, 688–703. Wang, Y.; Zhang, P.; Gao, S.; Geng, X.; Lu, H.; and Wang, D. 2021. Pyramid spatial-temporal aggregation for videobased person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 12026–12035. Wu, J.; He, L.; Liu, W.; Yang, Y.; Lei, Z.; Mei, T.; and Li, S. Z. 2022. CAViT: Contextual alignment vision transformer for video object re-identification. In Proceedings of the European Conference on Computer Vision, 549–566. Springer. Wu, Y.; Lin, Y.; Dong, X.; Yan, Y.; Ouyang, W.; and Yang, Y. 2018. Exploit the unknown gradually: One-shot videobased person re-identification by stepwise learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5177–5186. Xu, H.; Ghosh, G.; Huang, P.-Y.; Okhonko, D.; Aghajanyan, A.; Metze, F.; Zettlemoyer, L.; and Feichtenhofer, C. 2021. Videoclip: Contrastive pre-training for zero-shot video-text understanding. arXiv preprint arXiv:2109.14084. Xu, S.; Cheng, Y.; Gu, K.; Yang, Y.; Chang, S.; and Zhou, P. 2017. Jointly attentive spatial-temporal pooling networks for video-based person re-identification. In Proceedings of the IEEE International Conference on Computer Vision, 4733– 4742. Yan, Y.; Qin, J.; Chen, J.; Liu, L.; Zhu, F.; Tai, Y.; and Shao, L. 2020. Learning multi-granular hypergraphs for videobased person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2899–2908. Yuan, L.; Chen, D.; Chen, Y.-L.; Codella, N.; Dai, X.; Gao, J.; Hu, H.; Huang, X.; Li, B.; Li, C.; et al. 2021. Florence: A new foundation model for computer vision. arXiv preprint arXiv:2111.11432. Zeng, W.; Jin, S.; Liu, W.; Qian, C.; Luo, P.; Ouyang, W.; and Wang, X. 2022. Not all tokens are equal: Human-centric visual analysis via token clustering transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11101–11111. Zhang, G.; Zhang, P.; Qi, J.; and Lu, H. 2021a. Hat: Hierarchical aggregation transformers for person re-identification. In Proceedings of the 29th ACM International Conference on Multimedia, 516–525. Zhang, T.; Wei, L.; Xie, L.; Zhuang, Z.; Zhang, Y.; Li, B.; and Tian, Q. 2021b. Spatiotemporal transformer for videobased person re-identification. arXiv:2103.16469. Zhang, Z.; Lan, C.; Zeng, W.; and Chen, Z. 2020. Multigranularity reference-aided attentive feature aggregation for video-based person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 10407–10416. Zheng, L.; Bie, Z.; Sun, Y.; Wang, J.; Su, C.; Wang, S.; and Tian, Q. 2016. Mars: A video benchmark for large-scale person re-identification. In Proceedings of the European Conference on Computer Vision, 868–884. Zhong, Z.; Zheng, L.; Kang, G.; Li, S.; and Yang, Y. 2020. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, 13001–13008. Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022a. Conditional prompt learning for vision-language models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 16816–16825. Zhou, K.; Yang, J.; Loy, C. C.; and Liu, Z. 2022b. 
Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9): 2337–2348. Zhu, J.; Lai, S.; Chen, X.; Wang, D.; and Lu, H. 2023. Visual prompt multi-modal tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9516–9526. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6772 | 2024 | 752 |
18,576 | MM-Point: Multi-View Information-Enhanced Multi-Modal Self-Supervised 3D Point Cloud Understanding Hai-Tao Yu1,3, Mofei Song2,3 * 1 School of Cyber Science and Engineering, Southeast University, Nanjing, China 2 School of Computer Science and Engineering, Southeast University, Nanjing, China 3 Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China {yuht, songmf}@seu.edu.cn Abstract In perception, multiple sensory information is integrated to map visual information from 2D views onto 3D objects, which is beneficial for understanding in 3D environments. But in terms of a single 2D view rendered from different angles, only limited partial information can be provided. The richness and value of Multi-view 2D information can provide superior self-supervised signals for 3D objects. In this paper, we propose a novel self-supervised point cloud representation learning method, MM-Point, which is driven by intra-modal and inter-modal similarity objectives. The core of MM-Point lies in the Multi-modal interaction and transmission between 3D objects and multiple 2D views at the same time. In order to more effectively simultaneously perform the consistent cross-modal objective of 2D multi-view information based on contrastive learning, we further propose Multi-MLP and Multi-level Augmentation strategies. Through carefully designed transformation strategies, we further learn Multi-level invariance in 2D Multi-views. MM-Point demonstrates stateof-the-art (SOTA) performance in various downstream tasks. For instance, it achieves a peak accuracy of 92.4% on the synthetic dataset ModelNet40, and a top accuracy of 87.8% on the real-world dataset ScanObjectNN, comparable to fully supervised methods. Additionally, we demonstrate its effectiveness in tasks such as few-shot classification, 3D part segmentation and 3D semantic segmentation. Introduction In recent years, the demand for 3D perception technologies has surged in the real world. As a fundamental 3D representation, point cloud learning plays a crucial role in numerous tasks, including 3D object classification, detection, and segmentation. However, the cost of point cloud annotation is high, and 3D scans with labels are usually scarce in reality. To address these challenges, many studies have turned their attention to self-supervised methods. Interestingly, while 3D point clouds can be obtained by sampling, 2D multi-view images can be generated by rendering. Some recently proposed methods (Yang et al. 2020; Huang et al. 2020) have focused on generating high-quality point cloud representations using multi-modal data. *Corresponding Author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. p|fƟ{dWz fnWpfƮ)WpĬ )ppfƮWpĬƮ 𝒛ଵ ଶ āāā 澝 𝑬ଶሺ∙ሻ 𝑬ଶሺ∙ሻ 𝑬ଶሺ∙ሻ 𝑬ଶሺ∙ሻ 𝒛ଶ ଶ 𝒛ଷ ଶ 𝒛 ଶ 𝑬ଷሺ∙ሻ ÏƮzpƟpfƮ{Wnf ƮÐƮ)p|Ʈ zd ÏƮzpƟpfƮ{WnfƮ{afddp|nƮ-Wbf ÏƟÐƮp|Ʈ{afddp|nƮ-Wbf p|W p|f p|f p|f 𝒛ଷ p|WƟ{dWz p|fƟ{dWz |bdf ,fŁ {afdŁ āāā Figure 1: Schematic of MM-Point multi-modal contrastive learning. Given 2D multi-views rendered from a 3D object, the model distinguishes the views of the same object from those of different objects in the embedding space, enabling self-supervised learning of deep representation. This naturally raises a question: Can we better facilitate the understanding of 3D point cloud representations by leveraging the abundant information hidden in 2D Multiviews? 
We reconsider the contrast paradigm in 2D-3D, hypothesizing that 3D objects and rendered 2D Multi-views share mutual information. Our primary motivation is to view each 2D view as a unique pattern that is useful for guiding 3D objects and improving their representations, as each 2D view observes different aspects of 3D objects and representations from different angles are distinctive. From another perspective, we encourage the 3D-2D relationship to be consistent across different-angle views, i.e. the similarity correspondence between the 2D images and 3D point cloud in one view should also exist in another view. Due to the distinct nature of 3D representations from 2D visual information, a simple alignment between these two types of representations may lead to limited gain or even negative transfer in multi-modal learning. Therefore, we propose a novel 3D point cloud representation learning framework and a multi-modal pre-training strategy between 3D and 2D, namely MM-Point (as shown in Fig.1), which The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6773 transfers the features extracted from 2D multi-views into 3D representations. Specifically, intra-modal training is used to capture the intrinsic patterns of 3D augmented data samples. The inter-modal training scheme aims to learn point cloud representations by accepting 2D-3D interactions. Meanwhile, considering the differences in multi-modal data properties, We further ponder: how to effectively simultaneously transfer multiple 2D view information from different angles to 3D point cloud objects in a self-supervised manner? Extending this, we propose a Multi-MLP strategy to construct multi-level feature learning between each 2D view and 3D object, thus the consistency goal is set to contrast between 2D multi-views and 3D objects across different feature spaces. Such architectural design not only extracts shared information in 2D-3D multi-modal contrast, but also preserve specific information in multiple 2D views at the same time, thereby the overall consistency goal could better extract semantic information between 3D point clouds and as many different 2D views as possible, resulting in improved 3D representations. Additionally, treating each angle of 2D view equally during training with the same type of augmentation transformation may lead to a suboptimal representation for downstream tasks in multimodal contrast. We suggest a Multi-level Augmentation strategy based on multi-view, integrating rendered 2D multi-view information with augmentation information. Moreover, we control the strength of all augmentation modules, ensuring the mutual information between 3D point cloud and 2D image augmentation pairs remains low and within a certain range, thus enabling the model to gradually accumulate more complex information during learning. To evaluate our proposed 2D-3D multi-modal selfsupervised learning framework, we assess the performance of MM-Point across several downstream tasks. The learned 3D point cloud representation can be directly transferred to these tasks. For 3D object shape classification, we performed on both synthetic dataset ModelNet40 and realworld object dataset ScanObjectNN, achieving state-ofthe-art performance, surpassing all existing self-supervised methods. Furthermore, part segmentation and semantic segmentation experiments validate MM-Point’s capability to capture fine-grained features of 3D point clouds. 
In summary, our research contributions are as follows: • We introduce MM-Point, a novel 3D representation learning scheme based on 2D-3D multi-modal training. • Our research applies multi-modal contrastive learning to multi-view settings, maximizing shared mutual information between different 2D view and the same 3D object. • We propose Multi-MLP and Multi-level Augmentation strategies, thereby ensuring more effective learning of 3D representation in multi-modal contrast under a selfsupervised setting, achieving effective pre-training from 2D multi-views to 3D objects. • MM-Point demonstrates remarkable transferability. The pre-trained 3D representations can be directly transferred to numerous downstream tasks, achieving state-of-the-art performance in extensive experiments. Related Work Self-supervised Point Cloud Learning Unsupervised learning for point cloud understanding can be broadly divided into generative or discriminative tasks based on the proxy tasks. Generative models learn features by selfreconstruction, as exemplified by methods like JigSaw (Qi et al. 2018) . Furthermore, Point-BERT (Yu et al. 2022) predicts discrete labels, while Point-MAE (Pang et al. 2022) randomly masks patches of the input point cloud and reconstructs the missing parts. However, these methods are computationally expensive. On the other hand, discriminative methods predict or discriminate the enhanced versions of the inputs. Our work learns point cloud representations based on this approach. Recent works (You et al. 2021; Sun et al. 2021) have explored contrastive learning for point clouds following the success of image contrastive learning. PointContrast (Xie et al. 2020) is the first unified framework investigating 3D representation learning with a contrastive paradigm. Compared to these works, we investigate contrastive pre-training of point clouds from a new perspective, leveraging the semantic information hidden in 2D multiviews to design a multi-modal learning network, and thus enhancing the representational capacity of 3D point clouds. Multi-modal Representation Learning Recent studies on self-supervised learning have leveraged the multi-modal attributes of data (Caron et al. 2020a; Wang et al. 2020) , with a common strategy being the exploration of natural correspondences across differing modals, emphasizing the extraction of cross-modal shared information. For instance, in the field of visual-textual multi-modalities, large-scale image-text pairs (Li et al. 2020) have been pretrained, enabling these models to be applicable for numerous downstream tasks. A few works have integrated point cloud representations with other modalities, such as voxels (Shi, Wang, and Li 2020; Shi, Zhou, and Li 2020) or multi-view images (Shi, Wang, and Li 2019; Qian et al. 2020) . Prior3D (Liu and Li 2020) proposed a geometric prior contrastive loss to enhance the representation learning of RGB-D data. Tran et al. (Tran et al. 2022) employ self-supervision through local correspondence losses and global losses based on knowledge distillation. Of note, CrossPoint (Afham 2022), most related to our work, learns 2D-3D cross-modal representations via contrastive learning, concentrating on shared features across different modes. In comparison, our approach diverges significantly. MM-Point is able to utilize information from multiple views in 2D space simultaneously for self-supervision of 3D point cloud features. 
By contrasting with multi-views simultaneously, it better facilitates the maximization of mutual information shared across multi-modalities, leading to more robust and enhanced performance in various downstream tasks.

The Proposed Method

The proposed MM-Point multi-modal pre-training architecture is depicted in Fig. 2. It comprises two contrastive training strategies, intra-modal point cloud contrastive learning and inter-modal 2D-3D contrast, each accompanied by different types of loss functions. The inter-modal training scheme enhances 3D point cloud features by interacting with rendered 2D multi-views, while the point cloud representation focuses on the shared mutual information among multiple 2D images at the same time, inherently boosting the diversity of multi-modal contrastive learning. Furthermore, we employ a Multi-MLP strategy to project multi-view and multi-modal features. Finally, we put forward a Multi-level augmentation invariance strategy based on 2D multi-view information rendered from the same 3D object.

Figure 2: Schematic of the MM-Point architecture. MM-Point carries out intra-modal self-supervised learning in the 3D point cloud (blue path) and cross-modal learning between 2D and 3D (other color paths), aligning 2D multi-view features with 3D point cloud features. To better utilize the information from the 2D multi-views, MM-Point introduces two strategies: 1) Multi-MLP: constructing a multi-level feature space; 2) Multi-level augmentation: establishing multi-level invariance.

Intra-modal and Inter-modal Alignment

We propose to learn an encoder for aligning 3D point cloud features with visual features of 2D views. The pre-training process jointly handles the alignment of multiple modalities. For each point cloud $P_i$, we obtain two variants, $P_i^1$ and $P_i^2$, through augmentation operations. We then encode the augmented point clouds separately into the feature space $F_i^1, F_i^2 \in \mathbb{R}^{n \times d}$. The projected features are subsequently mapped into the latent space, producing the representations $z_i^1$ and $z_i^2$. By applying a contrastive loss, we enforce that the distance between feature representations of the same object is smaller than the distance between different objects. MM-Point aims to learn two objectives through cross-modal alignment: $f_P(\cdot)$ and $f_I(\cdot)$. Cross-modal contrast seeks to minimize the distance between point clouds and the corresponding rendered 2D images while maximizing the distance from other images. Consider a sample pair $(P_i, I_i)$, where $P_i$ and $I_i$ represent the embeddings of the 3D point cloud and its rendered 2D image description, respectively. For each sample $P_i$ in the mini-batch $M$, the negative sample set $N_i$ is defined as $N_i = \{I_j \mid \forall I_j \in M, j \neq i\}$.
The corresponding crossmodal contrastive loss on M is as follows: Loss inter = Ei∈M −log exp(fP (Pi)T fI(Ii)) exp(fP (Pi)T fI(Ii))+P Ij ∈Ni exp(fP (Pi)T fI(Ij)) (1) In addition, we map cross-modal features to different spaces and decouple training within and across modalities. We design projection heads with larger dimensions for cross-modal contrastive learning. Multi-Modal Learning Based on Multi-View In this section, we describe the contrast between 3D point clouds and 2D multi-view images. By pairing multiple 2D images from different angles with a 3D object, this collaboration scheme benefits the model. Rethinking the Contrast between 3D Object and 2D Multi-view For 3D point clouds and their corresponding rendered 2D multi-view images, we simply fix the 3D point clouds and enumerate positive and negative samples from the rendered 2D view images. Suppose the loss LPiVi contrast treats the 3D point cloud Pi as an anchor and enumerates the 2D rendered views Vi. Symmetrically, we can obtain the loss by anchoring on the 2D Vi and enumerating the 3D Pi. Then, we sum them up as the overall contrastive loss: L (Pi, Vi) = LPi,Vi contrast + LVi,P1 contrast . The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6775 Figure 3: Mutual Information plot for the modal comparison between 3D and 2D. The yellow area signifies mutual information. The figure illustrates the impact of different strategies (a-c) on the contribution to mutual information. Combining the theoretical proof in CMC (Tian, Krishnan, and Isola 2019), minimizing the loss should be equivalent to maximizing the lower bound of I (zi; zj) , that is: I (zi; zj) ≥log(k) −Lcontrast . In this case, zi and zj represent the latent representations of the point cloud and image, respectively, while k denotes the number of negative sample pairs. Besides, research (Hjelm et al. 2019; Chen et al. 2021) has shown that the boundaries of I (zi; zj) may not be clear, and finding a better mutual information estimator is more important. We consider the 3D point cloud and rendered multi-angle 2D views to construct all possible relationships between different 2D views and 3D point clouds. By involving all pairs, our optimized objective function is: LF = X 1≤j≤M L (P1, Vj) + L (P2, Vj) (2) When learning 2D-3D contrast, the mutual information will change proportionally with the number of 2D views. When the view count reaches a certain level, multi-modal mutual information will reach a stable level. In Fig.3, the visualization demonstrates that contrasting 2D Multi-views with 3D point clouds enables the point cloud features to capture more mutual information between different 2D views. Multi-modal Contrastive based on 2D Multi-view MMPoint aims to maximize the mutual information between the differently augmented 3D point clouds and 2D views from various angles in the same scene. Naturally, too few or too many rendered 2D views exhibit poor mutual information performance, while an optimal position exists in between. We randomly sample m 2D rendered views from different angles. The value of m will be introduced in experiment. Let D represent the pretraining dataset D = n Pi, {Iij, Mij}m j=1 on i=1, where n denotes the number of 3D objects and m represents the number of 2D views. Let Pi be the 3D point cloud of the i-th object, and Iij denotes the 2D image of the j-th view of the i-th 3D object. 
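A compact way to read Eqs. (1)-(2) is as a sum of symmetric InfoNCE terms, one per rendered view, with the other objects in the batch acting as negatives. The sketch below is illustrative only; the temperature, normalization, and averaging choices are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    """InfoNCE: the i-th anchor's positive is row i of `positive`; other rows act as negatives."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature                      # (B, B) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

def multi_view_cross_modal_loss(z3d, z2d_views, temperature=0.1):
    """Sum of symmetric 3D-2D contrastive terms over all M rendered views (spirit of Eq. 2).

    z3d:       (B, d) point-cloud embeddings for a batch of objects
    z2d_views: list of M tensors, each (B, d), the embeddings of one rendered view per object
    """
    loss = 0.0
    for z2d in z2d_views:
        loss = loss + info_nce(z3d, z2d, temperature) + info_nce(z2d, z3d, temperature)
    return loss / (2 * len(z2d_views))

# Eq. (2)-style usage with two augmented point-cloud embeddings z3d_1 and z3d_2:
# total = multi_view_cross_modal_loss(z3d_1, views) + multi_view_cross_modal_loss(z3d_2, views)
```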
For the multi-view 2D images $I_{ij}$, we extract features $H^I_{ij} \in \mathbb{R}^{1 \times C}$, where $j \in \{1, \cdots, m\}$. These 2D image features correspond to the feature $Z^P_i$ of the $i$-th 3D object. Building on the 2D-3D cross-modal contrastive scheme, we further extend it and design a cumulative loss $\mathrm{Loss}_{\text{inter-plus}}$:

$J_i^{\text{inter}}(P, I) = \sum_{k=1}^{n}\left[\exp\left(\mathrm{sim}(Z_i^P, Z_k^P)/\tau\right) + \exp\left(\mathrm{sim}(Z_i^P, H_k^I)/\tau\right)\right]$,

$L_i^{\text{inter}}(P, I) = -\log \frac{\exp\left(\mathrm{sim}(Z_i^P, H_i^I)/\tau\right)}{J_i^{\text{inter}}(P, I) - \exp\left(\mathrm{sim}(Z_i^P, Z_i^P)/\tau\right)}$,

$\mathrm{Loss}_{\text{inter-plus}} = \frac{1}{2n}\sum_{i=1}^{n}\sum_{j=1}^{m}\left[L_i^{\text{inter}}(P, I_j) + L_i^{\text{inter}}(I_j, P)\right]$. (3)

Figure 4: Schematic of the Multi-MLP strategy. Multi-modal learning enhances point cloud representation through 2D-3D interaction. Multiple 2D features and 3D features are contrasted through a multi-level feature space and adjusted via multi-path backpropagation.

Multi-modal Contrastive based on Multi-MLP

To alleviate the consistency conflicts between different 2D views and the 3D object, we construct a multi-level feature learning strategy based on Multi-MLP. The multi-modal objective is then achieved by contrasting 2D multi-views and 3D objects in feature spaces of different dimensions, as depicted in Fig. 4. Building upon the modality-specific and cross-modal training paths designed above, for multiple 2D views we further stack different MLPs on the encoders $\{f^P, f^I\}$, mapping them to distinct feature spaces. The mappings for the 3D encoder and the 2D encoder are as follows:

$\mathcal{F}\left(\{G_H\}_{H=1}^{m}; f^P\right), \quad \mathcal{F}\left(\{G_H\}_{H=1}^{m}; f^I\right)$, (4)

where $\{G_H\}_{H=1}^{m}$ denotes the additional projection heads of the MLP layers for the $m$ 2D multi-views, each with a different output dimension. The output feature dimension of the cross-modal projection heads exceeds that of the intra-modal outputs. The feature extraction process for the $j$-th 2D view corresponding to the $i$-th 3D object is as follows:

$F_H\big|_{H=j} = \{G_H\}_{H=j}\left(H_{ij}^I\right) = \{G_H\}_{H=j}\left(f_I(I_{ij})\right)$. (5)

Multi-level Augmentation Invariance

An overview of our method is shown in Fig. 5. We propose an improvement to the contrastive framework for 3D point clouds and 2D multi-views through a multi-level augmentation module. Crucially, we employ an incremental strategy to generate multi-level augmentation, which in turn applies distinct transformations to the 2D views. 2D multi-view data contains two types of information: the shared semantic information across all multi-views and the private information specific to each individual 2D view. If each view is treated equally with the same type or intensity of enhancement during training, the model will learn a non-optimal representation. Meanwhile, in alignment with the InfoMax (Bell 1995) principle, the goal of contrastive learning is to capture as much information as possible about stimuli. The InfoMin (Noroozi and Favaro 2016) principle suggests that a further reduction of mutual information toward the intermediate optimum can be achieved by utilizing more robust augmentation. Furthermore, we explore enhancing representational performance by mining hard samples through RandomCrop. Given a 2D view image I, we first determine the cropping ratio s and aspect ratio r from a predefined range.
This can be described as follows: (x, y, h, w) = Rcrop (s, r, I), (6) where Rcrop (·, ·, ·) is a random sampling function that returns a quaternion (x, y, h, w). where (x, y) represents the coordinates of the cropping center, and (h, w) represents the height and width of the cropping. Assuming the number of 2D views is m, the complete augmentation pipeline is specified as T = Combine {t0, t1, t2, t3, · · · , tm} , where t0 contains the basic augmentation and t1 ∼tm represents a specific type of augmentation. Then, the incremental strategy can be represented as: T1 = Combine {t0, t1} T2 = Combine {t0, t1, t2} T3 = Combine {t0, t1, t2, t3} Tm = Combine {t0, t1, t2, t3, · · · , tm} (7) Using these modules, we transform the 2D multi-view samples {xi}1≤i≤m of the same 3D object into m images with different augmentation intensities: vi = Ti (xi) , i = 1, 2, 3, · · · , m (8) The augmentation transformations from T1 to Tm gradually increase in intensity and number, reducing the shared mutual information between the transformed 2D view and the 3D object. Further, we use a projection head gi based on Multi-MLP to map the features into the loss space: zi = gi (fi (Ti (vi))) , i = 1, 2, 3, · · · , m (9) Here, fi represents the encoder and zi represents the features in the latent space. Note that the number of projection heads, rendered multi-view 2D images and augmentation 𝑭𝟐𝑫 𝒊 𝑭𝟑𝑫 mfWf zpƟ5pfƮ ,f|dfp|n 𝒁𝟑𝑫 𝒁𝟑𝑫 𝑻௨ ଵ ~ 𝑻௨ āāā āāāāāā 𝒇௨ zpƟzffz n{f|Wp| zpƟpfƮÏƮ{Wnf Ð )p|Ʈ zd ÏƮ |bdf 𝒇ଶௗሺ∙ሻ ÏƮ fWf 𝑻௨ ଵ 𝑻௨ ଶ 𝑻௨ ଷ 𝑻௨ ÏƟÐƮzpƟdWzƮ |W 𝒁𝟐𝑫 𝒎 𝒁𝟐𝑫 𝟑 𝒁𝟐𝑫 𝟐 𝒁𝟐𝑫 𝟏 𝒇ଷௗሺ∙ሻ ÐƮ fWf ÐƮ |bdf 𝒁𝟐𝑫 𝒊 )Ʈ )Ʈ mfWf WWƮz f6y )xfbp|Ʈ fWd dzf 𝒇ଶௗሺ∙ሻ 𝒇ଶௗሺ∙ሻ 𝒇ଶௗሺ∙ሻ Figure 5: Schematic of Multi-level Augmentation. Multilevel augmentation are generated in an incremental manner. 2D multi-views correspond to a certain level of augmentation indicated by different colors. Type Method Accuracy (%) ModelNet40 ModelNet10 Sup. PointNet (Qi et al. 2017a) 89.2 GIFT (Dovrat et al. 2019) 89.5 91.5 MVCNN (Su et al. 2015) 89.7 Self. Point-BERT (Yu et al. 2022) 87.4 Point-MAE (2022) 91.0 Jigsaw3D (Sauder 2019) 90.6 94.5 Vconv-DAE (Dovrat 2021) 75.5 80.5 SwAV (Caron et al. 2020b) 90.3 93.5 OcCo (Wang 2020) 89.2 92.7 STRL (Liu et al. 2021) 90.9 CrossPoint (Afham 2022) 91.2 MM-Point (ours) 92.4 95.4 Improvement +1.2 +0.9 Table 1: Linear SVM classification results on ModelNet40 and ModelNet10. Self. and Sup. represent pre-training with self-supervised and supervised methods. modules are consistent. Also, the strength of the augmentation module and the variation trend of feature projection dimensions in Multi-MLP are consistent. Therefore, the overall loss function formula is updated as follows, where Lcontrast refers to the cross-modal loss between the 3D feature zj and a specific 2D view feature zi: Li = Lcontrast (zi, zj) , Loverall = m X i=1 Li (10) This way, the distribution of the augmentation invariance for t0 and t1 is the broadest, and the invariance of tm is limited to the corresponding features of the largest dimension. Experiments In this section, we first introduce the pre-training details of MM-Point. As our focus is on 3D representation learning, we only evaluate the pre-trained 3D point cloud encoder backbones. We sample different downstream tasks and assess the 3D feature representations learned by MM-Point. 
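One way to realize the incremental pipelines T_1 ... T_m of Eq. (7) with torchvision is sketched below; the specific transforms, their order, and the pairing of each level with its own projection head g_i are illustrative assumptions rather than the paper's exact recipe.

```python
from torchvision import transforms as T

def build_multilevel_augmentations(num_views=4, img_size=224):
    """Build incrementally stronger 2D pipelines T_1 ... T_m (spirit of Eq. 7): the base t0 is
    shared, and each level appends one extra transform, so view i is augmented more
    aggressively than view i-1."""
    base = [T.RandomResizedCrop(img_size, scale=(0.2, 1.0)),   # RandomCrop-style hard-sample mining
            T.RandomHorizontalFlip()]
    extras = [T.ColorJitter(0.4, 0.4, 0.4, 0.1),
              T.RandomGrayscale(p=0.2),
              T.GaussianBlur(kernel_size=9)]
    pipelines = []
    for level in range(num_views):
        pipelines.append(T.Compose(base + extras[:level] + [T.ToTensor()]))
    return pipelines

# Each level i would pair with its own projection head g_i (Multi-MLP), i.e. z_i = g_i(f_i(T_i(v_i))):
# pipes = build_multilevel_augmentations()
# views = [pipe(img) for pipe, img in zip(pipes, pil_views)]   # v_i = T_i(x_i), one level per view
```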
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6777 - -20 -15 -10 -5 0 5 10 15 20 - -10 -5 0 5 10 bathtub bed chair desk dresser monitor night_ stand sofa table toilet (a) Epoch 0 Visualization Of Features On ModelNet10 Dataset (b) Epoch 10 (c) Epoch 50 (d) Epoch 100 15 10 5 0 -5 -10 -15 -20 15 10 5 0 -5 -10 15 10 5 0 -5 -10 10 5 0 -5 -10 -25 -20 -15 -10 -5 0 5 10 - -30 -20 -10 0 10 20 - Figure 6: t-SNE visualization of point-level features extracted by MM-Point on ModelNet10 (Qi et al. 2016), with each feature point colored according to its class label. Points with the same color represent semantic similarity. Method Accuracy (%) GBNe (Touvron et al. 2020) 80.5 PRANet (Zhang et al. 2020) 81.0 PointMLP (Ma et al. 2022) 85.2 Point-BERT (Yu et al. 2022) 83.1 MaskPoint (Liu, Cai, and Lee 2022) 84.3 Point-MAE (Pang et al. 2022) 85.2 OcCo (Wang 2020) 78.3 STRL (Liu et al. 2021) 77.9 CrossPoint (Afham 2022) 81.7 MM-Point (ours) 87.8 Improvement +6.1 Table 2: Evaluation of 3D point cloud linear SVM classification on ScanObjectNN. Pre-training Setup Datasets ShapeNet (Chang et al. 2015) is a large-scale 3D shape dataset containing 51162 synthetic 3D point cloud objects. For each object in the dataset, we render the 3D object into 2D multi-views, obtaining 24 images per object. Implementation Details For the point cloud modality, we employ DGCNN (Wang et al. 2019) as the 3D backbone. For the image modality, we use ResNet-50 as the 2D backbone. For all encoders, we append a 2-layer non-linear MLP projection head to generate the final representation. Note that we add different projection heads to obtain features. Pretraining employs AdamW as the optimizer. 3D Object Classification The point cloud classification experiments were conducted on three datasets. Notably, we utilized a pre-trained encoder with frozen weights for evaluation using a linear SVM. ModelNet40 (Wu et al. 2015) is a synthetic point cloud dataset obtained by sampling 3D CAD models, consisting of 12311 3D objects. ModelNet10 (Qi et al. 2016) includes 4899 CAD models with orientations from 10 categories. ScanObjectNN (Uy et al. 2019) is a real-world 3D object dataset comprising 2880 unique point cloud objects. This dataset offers a more realistic and challenging setting. To evaluate the effectiveness of the point cloud representation learned by MM-Point, we first performed random sampling of 1024 points for each object. Method 5-way 10-way 10-shot 20-shot 10-shot 20-shot Results on ModelNet40 3D-GAN (Wu 2016) 55.8±3.4 65.8±3.1 40.3±2.1 48.4±1.8 PointCNN (Li 2018) 65.4±2.8 68.6±2.2 46.6±1.5 50.0±2.3 RSCNN (Li et al. 2018) 65.4±8.9 68.6±7.0 46.6±4.8 50.0±7.2 Jigsaw (Sauder 2019) 34.3±1.3 42.2±3.5 26.0±2.4 29.9±2.6 OcCo (Wang 2020) 90.6±2.8 92.5±1.9 82.9±1.3 86.5±2.2 CrossPoint (2022) 92.5±3.0 94.9±2.1 83.6±5.3 87.9±4.2 MM-Point (ours) 96.5±2.8 97.2±1.4 90.3±2.1 94.1±1.9 Results on ScanObjectNN Jigsaw (Sauder 2019) 65.2±3.8 72.2±2.7 45.6±3.1 48.2±2.8 OcCo (Wang 2020) 72.4±1.4 77.2±1.4 57.0±1.3 61.6±1.2 CrossPoint (2022) 74.8±1.5 79.0±1.2 62.9±1.7 73.9±2.2 MM-Point (ours) 88.0±6.5 90.7±5.1 76.7±1.9 83.9±4.2 Table 3: Few-shot classification: SVM classification accuracy comparison on ModelNet40 and ScanObjectNN. We report the average accuracy (%) and standard deviation (%). The classification accuracy results for ModelNet40 and ModelNet10 are shown in Tab.1. As illustrated, MM-Point outperforms all existing self-supervised methods, achieving classification accuracies of 92.4% and 95.4%, respectively. 
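The linear-SVM evaluation protocol used above (frozen encoder, global point-cloud features, linear classifier on top) can be sketched as follows; the regularization constant, loader interface, and feature extraction call are assumptions.

```python
import numpy as np
import torch
from sklearn.svm import LinearSVC

@torch.no_grad()
def linear_svm_eval(encoder, train_loader, test_loader, device="cuda", C=0.01):
    """Freeze the pre-trained point-cloud encoder, extract global features,
    and fit a linear SVM on top (the standard linear-evaluation protocol)."""
    encoder.eval()

    def extract(loader):
        feats, labels = [], []
        for points, y in loader:                 # points: (B, N, 3) sampled point clouds
            f = encoder(points.to(device))       # (B, d) global feature per object
            feats.append(f.cpu().numpy())
            labels.append(y.numpy())
        return np.concatenate(feats), np.concatenate(labels)

    x_tr, y_tr = extract(train_loader)
    x_te, y_te = extract(test_loader)
    clf = LinearSVC(C=C).fit(x_tr, y_tr)
    return (clf.predict(x_te) == y_te).mean()    # overall classification accuracy
```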
To validate the effectiveness in the real world, we conducted experiments on ScanObjectNN, and the evaluation results are presented in Tab.2. In comparison with the stateof-the-art methods, the accuracy has been significantly improved by 6.1%, highlighting the advantages of MM-Point in challenging scenarios in real-world environments. To visualize the learned 3D representations, we employ t-SNE to reduce the dimensionality of the latent representations and map them onto a 2D plane. Fig.6 presents the visualization of 3D point cloud features learned by MM-Point. 3D Object Few-shot Classification The conventional setting for FSL is N-way K-shot. We conduct experiments using the ModelNet40 and ScanObjectNN datasets. Specifically, we report the results of 10 runs and calculate their mean and standard deviation. We report the results in Tab.3. On both ModelNet40 and ScanObjectNN, MM-Point achieves SOTA accuracy, outperforming other few-shot classification methods, and demonstrating substantial improvements across all settings. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6778 Category Method Metrics OA (%) mIoU (%) Sup. PointNet (Qi et al. 2017a) 83.7 PointNet++ (Qi et al. 2017b) 85.1 DGCNN (Wang et al. 2019) 85.1 Self-sup. OcCo (Wang 2020) 94.4 85.0 CrossPoint (Afham 2022) 94.4 85.3 MM-Point (ours) 94.5 85.7 Table 4: Overall accuracy and mean IoU results for 3D part segmentation. The metrics are OA(%) and mIoU(%). Method Metrics OA(%) mIoU(%) Jigsaw3D (Sauder 2019) 84.1 55.6 OcCo (Wang 2020) 84.6 58 CrossPoint (Afham 2022) 86.7 58.4 MM-Point (ours) 88.7 59.1 Table 5: Semantic segmentation results on S3DIS (Armeni et al. 2016). We report the mIoU and OA for all 13 classes. 3D Object Part Segmentation We also extend MM-Point to the task of 3D object part segmentation, a challenging fine-grained 3D recognition task. The ShapeNetPart (Yi et al. 2016) dataset contains 16881 point clouds, with 3D objects divided into 16 categories and 50 annotated parts. We sample 2048 points from each input instance. For a fair comparison, we follow previous works and add a simple part segmentation head on top of the DGCNN encoder. We evaluate the performance using Overall Accuracy (OA) and mean Intersection over Union (mIoU) metrics. Tab.4 summarizes the evaluation results. 3D Object Semantic Segmentation The Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS) (Armeni et al. 2016) consists of 3D scan data from 271 rooms across six different indoor spaces. For evaluation, we train our model from scratch on Areas 1 ∼4 and Area 6, using Area 5 for validation. Tab.5 demonstrates the performance of MM-Point. Compared to the randomly-initialized baseline without pretraining, our proposed MM-Point pre-training method yields a significant improvement (+4.2% mIoU). Ablation Study In order to investigate the contributions of each main component in MM-Point, we conduct an extensive ablation study. Multi-view Contrastive: Number of Views We evaluate the performance of MM-Point by performing multi-modal contrastive with different numbers of 2D views. As shown in Tab.6, we observe that the performance of MM-Point is the lowest when using only one view image. As more 2D image views are added, the classification performance steadily improves. The performance is best when the number of multiview 2D images M is 4. Number of 2D images 1 3 4 5 6 Accuracy (%) ModelNet40 91.3 92.2 92.4 92.4 92.1 ScanObjectNN 83.3 86.7 87.8 87.6 86.9 Table 6: Ablation test using different numbers of 2D views. 
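The N-way K-shot evaluation reported in Tab. 3 above can be emulated on frozen features by repeatedly sampling episodes and fitting a linear classifier per episode, as in the sketch below; the query-set size and SVM settings are assumptions, and the paper's mean and standard deviation come from repeating this over multiple runs.

```python
import numpy as np
from sklearn.svm import LinearSVC

def few_shot_episode(features, labels, n_way=5, k_shot=10, n_query=20, rng=None):
    """Sample one N-way K-shot episode from frozen features and score it with a linear SVM."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    xs, ys, xq, yq = [], [], [], []
    for new_label, c in enumerate(classes):
        idx = rng.permutation(np.flatnonzero(labels == c))
        xs.append(features[idx[:k_shot]])                      # support set
        ys += [new_label] * k_shot
        query_idx = idx[k_shot:k_shot + n_query]               # held-out queries of this class
        xq.append(features[query_idx])
        yq += [new_label] * len(query_idx)
    clf = LinearSVC(C=0.01).fit(np.concatenate(xs), ys)
    return (clf.predict(np.concatenate(xq)) == np.array(yq)).mean()

# report mean and std over repeated episodes, e.g. 10 runs as in the paper:
# accs = [few_shot_episode(feats, labels) for _ in range(10)]
```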
Multi-MLP Accuracy (%) Intra-modal Inter-modal ModelNet40 ScanObjectNN ✗ ✗ 91.4 83.4 ✗ ✔ 91.9 86.7 ✔ ✗ 91.6 84.9 ✔ ✔ 92.4 87.8 Table 7: A comparison using different Multi-MLP strategies. Multi-MLP Strategy Employing different MLPs for intra-modal and inter-modal feature embedding, as well as for different dimensional embedding of multi-views, is a distinctive design in MM-Point. We evaluate our method by: (1) using a unified MLP, (2) employing different dimensions for intra-modal and inter-modal features, ensuring that intermodal are larger than intra-modal dimensions, and (3) using unified output dimensions or multiple different dimensions for 2D multi-views. The results are reported in Tab.7 . These results indicate that our unified framework benefits from using multiple different spaces for multi-modal modeling. Multi-level Augmentation Strategy To validate the effectiveness of the multi-level augmentation strategy, we experiment with: (1) all 2D views based solely on unified augmentations, (2) all 2D views based on different augmentations, but without increasing the difficulty in a hierarchical manner, and (3) all 2D views based on different multi-level augmentations. The results are reported in Tab.8. The absence of any specific augmentation (indicated by ✗) negatively impacts the performance. This suggests that applying multi-level augmentations to 2D multi-views allows the model to benefit from contrast in the augmentation space. Multi-level Augmentation Acc. (%) Multi Aug. Multi-level ModelNet40 ScanObjectNN ✗ ✗ 91.7 86.1 ✔ ✗ 92.1 86.9 ✔ ✔ 92.4 87.8 Table 8: The impact of Multi-level augmentation strategy. Conclusion In this paper, we explore a novel pre-training method for 3D representation learning. We introduce MM-Point, a framework that encourages 3D point clouds to learn excellent features from 2D multi-views. Concurrently, MM-Point employs Multi-MLP and Multi-level augmentation strategies to effectively learn more robust 3D modality knowledge from 2D multi-views. MM-Point consistently exhibits state-ofthe-art performance on various downstream tasks. Codes are available at https://github.com/HaydenYu/MM-Point. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6779 Acknowledgments This work was supported by National Natural Science Foundation of China 61906036 and the Fundamental Research Funds for the Central Universities (2242023k30051). This research work was also supported by the Big Data Computing Center of Southeast University References Afham, M. 2022. Crosspoint: Self-supervised cross-modal contrastive learning for 3d point cloud understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9902–9912. Armeni, I.; Sax, A.; Zamir, A.; and Savarese, S. 2016. 3D semantic parsing of large-scale indoor spaces. arXiv preprint arXiv:1604.04545. Bell, T. 1995. Information theory and the central limit theorem. IEEE Transactions on Information Theory, 41(3): 726– 735. Caron, M.; Bojanowski, P.; Joulin, A.; and Douze, M. 2020a. SwAV: Unsupervised Learning of Features by Swapping Across Views. In European Conference on Computer Vision, 100–116. Springer. Caron, M.; Misra, I.; Mairal, J.; Goyal, P.; Bojanowski, P.; and Joulin, A. 2020b. Unsupervised learning of visual features by contrasting cluster assignments. In NeurIPS. Chang, A. X.; Funkhouser, T.; Guibas, L.; Hanrahan, P.; Huang, Q.; Li, Z.; Savarese, S.; Savva, M.; Song, S.; Su, H.; et al. 2015. Shapenet: An information-rich 3d model repository. 
arXiv preprint arXiv:1512.03012. Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2021. Normalized Information Distance for Contrastive Representation Learning. In ICML. Dovrat, K. 2021. Vconv-DAE: 3D shape segmentation via volumetric convolutional autoencoder. IEEE Transactions on Pattern Analysis and Machine Intelligence. Dovrat, K.; Litany, O.; Bronstein, M.; and Averbuch-Elor, H. 2019. GIFT: A Real-Time and Scalable 3D Shape Search Engine. In ICCV. Hjelm, R. D.; Fedorov, A.; Lavoie-Marchildon, S.; Grewal, K.; Bachman, P.; Trischler, A.; and Bengio, Y. 2019. Mutual information neural estimation. NeurIPS. Huang, J.; Hao, Y.; Zhang, W.; and Liu, Y. 2020. S3DIS: Self-Supervised 3D Scene Representation via Inverse Dynamics. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Li, L. H.; Yatskar, M.; Yin, D.; Hsieh, C.-J.; and Chang, K.W. 2020. VisualBERT: A Simple and Performant Baseline for Vision and Language. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1087–1101. Li, Y. 2018. PointCNN. In CVPR. Li, Y.; Bu, R.; Sun, M.; Wu, W.; and Di, X. 2018. PointCNN: Convolution On X-Transformed Points. In Advances in Neural Information Processing Systems, 820–830. Liu, H.; Cai, M.; and Lee, Y. J. 2022. Masked discrimination for self-supervised learning on point clouds. In European Conference on Computer Vision, 657–675. Springer. Liu, S.; Wang, Z.; Qi, X.; and so on. 2021. Stochastic Temporal Repulsion Learning for Self-supervised Representation of 3D Point Clouds. In CVPR. Liu, X.; and Li, B. 2020. Prior3D: Point Cloud Prior for 3D Object Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3615–3624. Ma, X.; Qin, C.; You, H.; Ran, H.; and Fu, Y. 2022. Rethinking network design and local geometry in point cloud: A simple residual MLP framework. arXiv preprint arXiv:2202.07123. Noroozi, M.; and Favaro, P. 2016. Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles. In European Conference on Computer Vision, 69–84. Springer. Pang, Y.; Wang, W.; Tay, F. E.; Liu, W.; Tian, Y.; and Yuan, L. 2022. Masked autoencoders for point cloud selfsupervised learning. In European conference on computer vision, 604–621. Springer. Qi, C. R.; Liu, W.; Wu, C.; Su, H.; and Guibas, L. J. 2018. JigSaw: A large-scale, low-cost, and high-accuracy annotation framework for 3D object instance detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017a. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In CVPR. Qi, C. R.; Su, H.; Nießner, M.; Dai, A.; Yan, M.; and Guibas, L. J. 2016. Volumetric and multi-view cnns for object classification on 3d data. In Proceedings of the IEEE conference on computer vision and pattern recognition, 5648–5656. Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017b. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In NeurIPS. Qian, R.; Fracastoro, G.; Paudel, D. P.; Pinhanez, C. S.; and Favaro, P. 2020. PointGMM: A Neural GMM Network for Point Clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7178–7187. Sauder, J. 2019. Self-supervised deep learning on point clouds by reconstructing space. Advances in Neural Information Processing Systems, 32. Shi, S.; Wang, X.; and Li, H. 2019. 
PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 770–779. Shi, S.; Wang, X.; and Li, H. 2020. PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10529–10538. Shi, S.; Zhou, X.; and Li, H. 2020. PV-RCNN++: PointVoxel Feature Set Abstraction for 3D Object Detection. In Advances in Neural Information Processing Systems, 3505– 3516. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6780 Su, H.; Maji, S.; Kalogerakis, E.; and Learned-Miller, E. 2015. Multi-view convolutional neural networks for 3d shape recognition. In ICCV. Sun, J.; Wang, Y.; Zhang, W.; Zhou, Z.; Kong, T.; and Li, L. 2021. SeedCon: Unsupervised Point Cloud Object Segmentation with Data-Efficient Seed Consensus. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Tian, Y.; Krishnan, D.; and Isola, P. 2019. Contrastive multiview coding. In Advances in Neural Information Processing Systems, 159–169. Touvron, H.; Caron, M.; Joulin, A.; Alayrac, J.-B.; Bojanowski, P.; Laptev, I.; and Neverova, N. 2020. Training data-efficient image transformers & distillation through attention. In ICML. Tran, B.; Hua, B.-S.; Tran, A. T.; and Hoai, M. 2022. Selfsupervised learning with multi-view rendering for 3d point cloud analysis. In Proceedings of the Asian Conference on Computer Vision, 3086–3103. Uy, M. A.; Pham, Q.-H.; Hua, B.-S.; Nguyen, T.; and Yeung, S.-K. 2019. Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data. In Proceedings of the IEEE/CVF international conference on computer vision, 1588–1597. Wang, X. E.; Wu, Q.; Wang, X.; and Xiao, J. 2020. VisionLanguage Navigation with Self-Supervised Auxiliary Reasoning Tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10811–10820. Wang, Y. 2020. OcCo: Occupancy-aware 3D convolutional networks for indoor/outdoor semantic segmentation. In CVPR. Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S. E.; Bronstein, M. M.; and Solomon, J. M. 2019. Dynamic graph cnn for learning on point clouds. In Proceedings of the ACM SIGGRAPH Asia 2019 Technical Papers, 9. Wu, J. 2016. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In Advances in neural information processing systems, 82–90. Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; and Xiao, J. 2015. 3d shapenets: A deep representation for volumetric shapes. In CVPR. Xie, S.; Gu, J.; Guo, D.; Qi, C. R.; Guibas, L.; and Litany, O. 2020. Pointcontrast: Unsupervised pre-training for 3d point cloud understanding. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part III 16, 574–591. Springer. Yang, Y.; Feng, C.; Shen, Y.; and Tian, D. 2020. PointFlowNet: Learning Representations for Rigid Motion Estimation from Point Clouds. In Proceedings of the European Conference on Computer Vision (ECCV). Yi, L.; Su, H.; Guo, X.; and Guibas, L. J. 2016. Scalable shape retrieval with a family of path-based convolutional neural networks. In CVPR. You, A.; Chen, X.; Li, S.; Yan, Q.; and Yang, X. 2021. H3DNet: 3D Object Detection Using Hybrid Geometric Primitives. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Yu, X.; Tang, L.; Rao, Y.; Huang, T.; Zhou, J.; and Lu, J. 
2022. Point-bert: Pre-training 3d point cloud transformers with masked point modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19313–19322. Zhang, Y.; Liu, Z.; Zhou, Y.; and Qi, H. 2020. Simple view for point cloud recognition. In NeurIPS. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6781 | 2024 | 753 |
18,577 | Spatial Transform Decoupling for Oriented Object Detection Hongtian Yu*, Yunjie Tian*, Qixiang Ye, Yunfan Liu† University of Chinese Academy of Sciences {yuhongtian17, tianyunjie19}@mails.ucas.ac.cn, {qxye, liuyunfan}@ucas.ac.cn Abstract Vision Transformers (ViTs) have achieved remarkable success in computer vision tasks. However, their potential in rotation-sensitive scenarios has not been fully explored, and this limitation may be inherently attributed to the lack of spatial invariance in the data-forwarding process. In this study, we present a novel approach, termed Spatial Transform Decoupling (STD), providing a simple-yeteffective solution for oriented object detection with ViTs. Built upon stacked ViT blocks, STD utilizes separate network branches to predict the position, size, and angle of bounding boxes, effectively harnessing the spatial transform potential of ViTs in a divide-and-conquer fashion. Moreover, by aggregating cascaded activation masks (CAMs) computed upon the regressed parameters, STD gradually enhances features within regions of interest (RoIs), which complements the self-attention mechanism. Without bells and whistles, STD achieves state-of-the-art performance on the benchmark datasets including DOTA-v1.0 (82.24% mAP) and HRSC2016 (98.55% mAP), which demonstrates the effectiveness of the proposed method. Source code is available at https://github.com/yuhongtian17/Spatial-TransformDecoupling. Introduction Recent years have witnessed substantial progress and notable breakthroughs in computer vision, which can be primarily attributed to the advent of Vision Transformer (ViT) models. Benefiting from the powerful self-attention mechanism, ViTs consistently achieve new state-of-the-art performance across vision tasks including classification (Dosovitskiy et al. 2020; Liu et al. 2021; Zhang et al. 2022c; Fang et al. 2023), object detection (Li et al. 2022b; Fang et al. 2022; Tian et al. 2023), and semantic segmentation (Xie et al. 2021a; Yuan et al. 2021). Despite the progress made, the capability of ViTs in spatial transform invariance has not been fully explored and understood. In many scenarios, ViTs are treated as a universal approximator, expected to automatically handle various vision data irrespective of their orientations and appearances. *Equal contribution. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Decoupled Prediction Spatial Transform Coupled Prediction 𝑋 𝑌 α!! α!" α"! α"" 𝑊 𝐻 𝑋 𝑌 − − − − − − − − α!! α!" α"! α"" − − − − − − − − 𝑊 𝐻 Initial BBox Predicted BBox Groundtruth BBox 𝑋𝑌: Position Para. 𝑊𝐻: Size Para. 𝛼!! 𝛼!" 𝛼"! 𝛼"":Angle Para. Figure 1: Conventional approaches (upper) estimate the position, size, and angle using a single RoI feature. In contrast, STD (lower) predicts and refines the parameters of bounding boxes in a divide-and-conquer (decoupled) manner. In this study, we aim to tap into the potential of ViTs in tackling the challenging spatial transform issue of vision tasks, e.g., detecting objects in remote sensing scenarios, where images are captured from a bird’s-eye view and target objects may appear in arbitrary orientations. To determine an oriented bounding box, initial research efforts (Ren et al. 2015; Lin et al. 2017b) suggested a direct regression approach for spatial transform parameters, including the spatial coordinates (x and y), object size (w and h), and the angle (α). 
However, such a straightforward regression strategy often results in discontinuous boundaries due to the inconsistency in angle representation and periodicity, as well as the suboptimal design of loss functions (Yang and Yan 2020; Yang et al. 2021b,c, 2022). Rather than solely concentrating on developing more sophisticated angle representations or refining training objectives, it is essential to tackle the foundational issue of effectively extracting rotation-related features. In particular, we enhance the conventional structure of the bounding box prediction head by allocating distinct feature maps to predict The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6782 parameters associated with diverse semantic interpretations, such as the object’s location, shape, and orientation. This approach guides the feature extraction process in a controlled and effective manner. Furthermore, by estimating the parameters associated with a particular spatial transform at each stage, this step-wise strategy facilitates the progressive refinement of estimation results, which in turn can contribute to improving the overall accuracy of the model. Building upon the insights and discussions presented earlier, we propose Spatial Transform Decoupling (STD), a straightforward yet effective solution for oriented object detection, which decouples the estimation of transformation parameters related to object positions, sizes, and angles, Fig. 1. Concretely, a multi-branch network design is utilized, where each individual branch is designated to predict parameters that correspond to distinct spatial transforms. From another perspective, STD supplements the self-attention mechanism by allocating distinct responsibilities to self-attention modules at different stages of parameter prediction, which effectively utilizes the spatial transform capabilities of ViTs in a divide-and-conquer fashion. Furthermore, STD integrates cascaded activation masks (CAMs) to enhance the features extracted by stacked Transformer blocks, effectively suppressing background information while highlighting foreground objects. By refining features within regions of interest (RoIs) using CAMs, the feature representation for oriented objects is both decoupled and progressively enhanced. As a simple-yet-effective design, STD can be integrated with various ViT-based detectors and achieve significant performance improvements over the state-of-theart methods. For instance, STD achieves 82.24% mAP on DOTA-v1.0 and 98.55% mAP on HRSC2016, surpassing the accuracy of all existing detectors. The contributions of this work are summarized as: • The Spatial Transform Decoupling (STD) approach is introduced to address the challenge of oriented object detection by estimating parameters for spatial transforms through separate network branches. STD demonstrates remarkable generalizability and can seamlessly integrate with a variety of ViT detectors. • Cascade activation masks (CAMs) are integrated into the self-attention module at each layer of ViT to progressively enhance the features. CAMs offer spatially dense guidance, directing the attention maps to focus more on foreground objects rather than the background. • Experimental results demonstrate that STD surpasses state-of-the-art methods by a significant margin across a variety of oriented object detection benchmarks. 
Related Work Oriented Object Detection Existing methods have investigated oriented object detection from the perspectives of feature robustness, region proposal refinement, and target regression enhancement. Feature Invariance/Equivalence. Invariance or equivalence is an essential problem when designing/learning visual feature representations. During the era of hand-crafted features, SIFT (Lowe 1999) utilizes dominant orientation-based feature alignment to achieve invariance to rotation and robustness to moderate perspective transforms. With the rise of CNNs, STN (Jaderberg et al. 2015) achieves rotation invariance by manipulating the feature maps according to the transformation matrix estimated using a sub-CNN. Group equivariant CNN (Cohen and Welling 2016) proposes a natural generalization of CNNs, enabling them to group objects from the same categories regardless of orientations. ORN (Zhou et al. 2017) introduces Active Rotating Filters (ARFs), which dynamically rotate during the convolution process and thereby produce feature maps with location and orientation explicitly encoded. ReDet (Han et al. 2021) achieves rotation-equivariant convolution (e2cnn (Weiler and Cesa 2019)) by incorporating a rotation-invariant backbone, which normalizes the spatial and orientational information of features. Region Proposal Refinement. RoI Transformer (Ding et al. 2019) enhances two-stage detectors by iteratively repeating the RPN-RoI head structure (Ren et al. 2015; He et al. 2017). Oriented RCNN (Xie et al. 2021b) streamlines the process of oriented proposal generation and directly predicts oriented proposals based on the features extracted by the backbone and FPN (Feature Pyramid Network) (Lin et al. 2017a) module. Drawing inspiration from a similar concept, R3Det (Yang et al. 2021a) introduces a feature refinement stage to the orientation regression head. Target Regression Enhancement. Gliding Vertex (Xu et al. 2020) converts the task of rotated box prediction into regressing the offset for horizontal boxes along the four edges. CSL (Yang and Yan 2020) addresses the potential abrupt change in loss computation by proposing a labelbased solution for angle prediction. CFA (Guo et al. 2021) and Oriented RepPoints (Li et al. 2022a) make improvements to the nine-point prediction methods (Yang et al. 2019). GWD (Yang et al. 2021b), KLD (Yang et al. 2021c), and KFIoU (Yang et al. 2022) use two-dimensional Gaussian distributions to solve the angle prediction problem. Despite the progress of various approaches proposed, few of them explore the impact of decoupling spatial transform, e.g., position (x, y), size (w, h), and angle (α), on the hierarchical feature representation. Vision Transformer Drawing inspiration from the NLP field (Vaswani et al. 2017; Devlin et al. 2018), ViTs divide the image into multiple patch tokens for feature extraction and processing (Dosovitskiy et al. 2020; Liu et al. 2021; Zhang et al. 2022c; Tian et al. 2023). It has attracted significant attention in recent years owing to its remarkable success in computer vision tasks. DETR (Carion et al. 2020) is a representative work that extends ViTs towards object detection, establishing the fundamental paradigm for applying ViT to this task. MAE (He et al. 2022) proposes a novel pre-training mode that deviates from the classic fully supervised pre-training era of CNNs (He, Girshick, and Doll´ar 2019). Building upon MAE, ViTDet (Li et al. 2022b) and MIMDet (Fang et al. 
2022), etc, have made significant advancements in the development of ViT for object detection. While Vision Transformers have demonstrated promisThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6783 Transformer Backbone FPN & RPN & RoI Align Input RoI region Transformer Block with Activation Mask X Q K V Activation Mask (Q×KT)×(V AM) (TBAM) Residual, Norm, MLP Reshape & Convs R&C Mask Generation Activation Masks TB Transformer Block Prediction Branches Output Vector Tokens Tokens Tokens α w, h Classes Score BBox Pred α w, h w, h TB R&C R&C R&C Activation Mask Activation Mask Activation Mask TBAM TBAM TBAM x, y Figure 2: The framework of the proposed Spatial Transform Decoupling (STD) method. The detailed structure of Transformer blocks integrated with activation masks (TBAM) is shown on the left. ing results in various visual tasks, they still encounter challenges in leveraging their advantages in handling object spatial transform, e.g., oriented object detection. Recently, RVSA (Zhang et al. 2022a; Wang et al. 2022) made an initial attempt to improve the structure of ViT for oriented object detection tasks, which was achieved by updating Window Attention (Liu et al. 2021; Li et al. 2022b; Fang et al. 2022) to Rotated Varied Size Attention. Nevertheless, these methods solely rely on the self-attention mechanism to handle various spatial transformations, without explicitly introducing dense guiding information. The Proposed Method This section starts with an elucidation of the motivation behind Spatial Transform Decoupling (STD). Subsequently, a detailed explanation of the overall structure of STD is provided, offering an in-depth understanding of its architectural design and how it functions. Next, we delve into a detailed decoupling structure and introduce the cascaded activation masks (CAMs) for progressive feature refinement. Special emphasis is placed on their significant contribution to the overall performance enhancement of STD. Overview The proposed STD can be readily seen as an extension of existing oriented object detectors, and an overview of the architecture is depicted in Figure 2. The primary innovation of STD resides within the detection head module, while for other components, such as the backbone, Region Proposal Network (RPN), and loss functions, we maintain consistency with mainstream detection frameworks (Ren et al. 2FCBBoxHead MAEBBoxHead MAEBBoxHead (Not Pre-trained) (Pre-trained) mAP 69.67 69.16 71.07 Table 1: Performance comparison of Faster RCNN with the same backbone (ViT-small) but different heads. The training is carried out on the DOTA-v1.0 dataset (Xia et al. 2018) for 12 epochs. 2015; Xie et al. 2021b). As a result, STD demonstrates significant generalizability, enabling its compatibility with a variety of detectors. Specifically, for the purpose of a clear explanation, we adopt STD within the Faster RCNN framework (Ren et al. 2015) as the default configuration. Throughout the experiments, we will also showcase the performance of STD in combination with other detectors, such as Oriented RCNN (Xie et al. 2021b). ViTs have demonstrated impressive performance across a broad spectrum of visual tasks. However, their utilization in the context of oriented object detection remains relatively unexplored. Nevertheless, existing pre-trained Transformer models are capable of extracting meaningful features, which contributes to establishing a strong foundation for achieving impressive performance in oriented object detection tasks. 
Therefore, we adopt a design inspired by the imTED (Zhang et al. 2022b) detector and substitute the backbone as well as head modules of the two-stage detector with Vision Transformer blocks pre-trained using the MAE method. Specifically, we employ the ViT-small model as the backbone instead of ResNet-50, and use a 4-layer Transformer block to replace the conventional detection head in Faster The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6784 RCNN built with fully connected (FC) layers. Please note that the ViT-small backbone is obtained from the MAE pretrained encoder, and the 4-layer Transformer block is derived from the pre-trained decoder, which forms the MAEBBoxHead module. Once the regions of interest (RoIs) are obtained, the feature maps are uniformly divided into 7×7 tokens, which are subsequently fed into the parameter regression head, as depicted in Figure 2. Experiments are conducted to validate the effectiveness of this framework in addressing the oriented object detection problem, and the results are presented in Table 1. In subsequent experiments, the pre-trained MAEBBoxHead is used as the baseline method by default. Afterward, the proposed Spatial Transform Decoupling (STD) module is built upon the aforementioned backbone network. To enhance the performance of decoupling, we employ a hierarchical structure to predict the bounding box parameters in a layer-wise manner, and further enhance it by leveraging the guidance provided by the cascaded activation masks (CAMs). Detailed explanations of these contributions will be provided in the following two subsections. Decoupled Parameter Prediction As highlighted in the Introduction Section, different parameters of an oriented bounding box are expected to possess distinct properties (e.g., rotation-variance or rotationinvariance), and therefore, they should be computed based on different feature maps. However, most conventional methods (Ren et al. 2015; Lin et al. 2017b) depend on a single feature map to predict all bounding box parameters, potentially resulting in the issue of coupled features. To solve this problem, we introduce a multi-branch network to achieve hierarchical and disentangled parameter prediction. As shown in Figure 2, we compute different components of the oriented bounding box based on the feature map at various stages of the Transformer decoder in a cascaded manner. Specifically, {x, y}, α, {w, h} and the class score are obtained based on the feature maps of the 1st, 2nd, 3rd, and 4th layer of the Transformer block, respectively (the rationale behind this design will be detailed in ablation study). After obtaining the discrete output from each Transformer block, we first reshape them into 7×7 feature maps and then apply convolutional layers to further enhance the features. Next, after globally averaging the resultant feature maps, FC layers are adopted to make the final predictions, which are then used to produce the bounding box and CAMs (details explained in the next subsection). Please note that the proposed mechanism is highly generalizable, as one can easily adjust the number of estimated parameters by simply adding or removing predicting branches. Cascaded Activation Masks To further regulate the decoupling process and improve the accuracy of prediction results, we intend to provide dense guidance for bounding box prediction at each stage. To achieve this goal, cascaded activation masks (CAMs) are introduced to enhance the features generated by the multiple branches. 
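A minimal sketch of the decoupled head described above is given below, without the CAM modulation that the next subsection adds. It is not the released STD code: the four `blocks` are stand-ins for the pre-trained decoder layers, and only the routing stated in the text is reproduced — {x, y} from the 1st block, α from the 2nd, {w, h} from the 3rd, and the class score from the 4th, each via reshape to 7×7, convolution, global average pooling, and an FC layer.

```python
import torch
import torch.nn as nn

class ParamBranch(nn.Module):
    """Reshape 7x7 tokens into a feature map, refine with a conv, pool, then predict."""
    def __init__(self, dim, out_dim):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True))
        self.fc = nn.Linear(dim, out_dim)

    def forward(self, tokens):                        # tokens: (B, 49, C)
        b, n, c = tokens.shape
        x = tokens.transpose(1, 2).reshape(b, c, 7, 7)
        x = self.conv(x).mean(dim=(2, 3))             # global average pooling
        return self.fc(x)

class DecoupledHead(nn.Module):
    """Predict (x, y), angle, (w, h) and class score from successive decoder blocks."""
    def __init__(self, blocks, dim, num_classes):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)           # 4 stacked Transformer blocks
        self.xy = ParamBranch(dim, 2)
        self.angle = ParamBranch(dim, 1)
        self.wh = ParamBranch(dim, 2)
        self.cls = ParamBranch(dim, num_classes)

    def forward(self, roi_tokens):                    # roi_tokens: (B, 49, C)
        t1 = self.blocks[0](roi_tokens)
        t2 = self.blocks[1](t1)
        t3 = self.blocks[2](t2)
        t4 = self.blocks[3](t3)
        dxy, dalpha, dwh = self.xy(t1), self.angle(t2), self.wh(t3)
        return torch.cat([dxy, dwh, dalpha], dim=1), self.cls(t4)
```

In the full method, the cascaded activation masks described next additionally modulate the value tokens inside each block, so that earlier predictions guide the attention of later blocks.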
Figure 3: The translation between the predicted bounding box and the activation mask after affine transformation. The blue box represents the proposal region and the red box represents the activation mask.

An ideal activation mask with binary values should have the regions corresponding to foreground objects assigned with a value of 1, and all background locations set to 0. To align the activated regions with the foreground area as much as possible, we propose to generate activation masks by incorporating information from both the proposal and the predicted bounding box. To be specific, the center point, size, and orientation of the estimated bounding box, i.e., $(x_b, y_b, w_b, h_b, \alpha_b)$, could be expressed as
$$x_b = x_p + w_p \cdot d_x,\quad y_b = y_p + h_p \cdot d_y,\quad w_b = w_p \cdot e^{d_w},\quad h_b = h_p \cdot e^{d_h},\quad \alpha_b = d_\alpha \quad (1)$$
where $(x_p, y_p)$ and $(w_p, h_p)$ respectively denote the center coordinates and shape of the proposal, and $(d_x, d_y, d_w, d_h, d_\alpha)$ are the predicted values related to the oriented bounding box obtained from STD. Then, with the proposal placed in a rectangular coordinate system $(x, y)$ and its four vertices located at $(-1, -1)$, $(1, -1)$, $(1, 1)$, and $(-1, 1)$, the affine transformation against the bounding box $(x', y')$ could be formulated as:
$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos d_\alpha \cdot e^{d_w} & -\sin d_\alpha \cdot e^{d_h} \cdot \frac{h_p}{w_p} \\ \sin d_\alpha \cdot e^{d_w} \cdot \frac{w_p}{h_p} & \cos d_\alpha \cdot e^{d_h} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 2 \cdot d_x \\ 2 \cdot d_y \end{pmatrix} \quad (2)$$
As illustrated in Figure 3, the activation mask could be produced by applying the affine transformation in Eq.(2) to a matrix AM with all elements set to 1. After integrating AM into the self-attention module in STD, the mapping function implemented by Transformer Block with Activation Mask (TBAM, see Figure 2) could be written as
$$\mathrm{TBAM}(Q, K, V, AM) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d}}\right)V' \quad (3)$$
where $V'$ is obtained by performing an element-wise multiplication between $V$ and AM ($V' = V \odot AM$). By multiplying with $V$, AM could direct the model's attention by highlighting the foreground while suppressing the background. In the forward propagation process, the utilization of activation maps guides the decoupled predicted values in earlier stages to direct the self-attention mechanism of the subsequent Transformer blocks; while during the backward propagation process, the discrepancies in the decoupled predicted values from later stages are propagated through the activation maps, affecting the feature extraction process of the previously decoupled predicted values. This cascaded architecture enhances the interconnection between decoupled predicted values at various levels.

Experiment
Experimental Setting
Datasets Experiments are conducted on two commonly-used datasets for oriented object detection, namely DOTA-v1.0 (Xia et al. 2018) and HRSC2016 (Liu et al. 2017). DOTA-v1.0 is a large-scale object detection dataset for optical remote sensing images, which comprises 2,806 images with diverse dimensions, spanning from 800 to 4,000 pixels in width and height. The dataset consists of a total of 188,282 individual instances distributed across 15 different classes, and it is partitioned into training, validation, and test sets containing 1,411, 458, and 937 images, respectively. HRSC2016 is an optical remote sensing image dataset designed for ship detection. It comprises 1,680 images with diverse widths and heights, ranging from 339 pixels to 1333 pixels. The commonly used training, validation, and test sets consist of 436, 181, and 444 images, respectively.
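As a concrete companion to Eq. (1)–(3) above, the sketch below shows one way the affine activation mask and the masked attention could be realized in PyTorch. It is an illustration rather than the official implementation: the inversion of the affine matrix reflects `F.affine_grid`'s output-to-input convention (our assumption about how the all-ones map is warped onto the proposal grid), and the attention is simplified to a single head.

```python
import torch
import torch.nn.functional as F

def box_affine(dx, dy, dw, dh, dalpha, wp, hp):
    # Eq. (2): linear part A and translation b mapping the unit proposal square onto
    # the predicted oriented box (all arguments are (B,) tensors).
    cos, sin = torch.cos(dalpha), torch.sin(dalpha)
    A = torch.stack([
        torch.stack([cos * torch.exp(dw), -sin * torch.exp(dh) * hp / wp], dim=-1),
        torch.stack([sin * torch.exp(dw) * wp / hp, cos * torch.exp(dh)], dim=-1),
    ], dim=1)                                    # (B, 2, 2)
    b = torch.stack([2 * dx, 2 * dy], dim=-1)    # (B, 2)
    return A, b

def activation_mask(dx, dy, dw, dh, dalpha, wp, hp, size=7):
    # Warp an all-ones map so that locations inside the predicted box are ~1 and the
    # background is 0. F.affine_grid expects the output->input mapping, i.e. the
    # inverse of Eq. (2).
    A, b = box_affine(dx, dy, dw, dh, dalpha, wp, hp)
    A_inv = torch.inverse(A)
    theta = torch.cat([A_inv, -A_inv @ b.unsqueeze(-1)], dim=-1)    # (B, 2, 3)
    ones = torch.ones(dx.shape[0], 1, size, size)
    grid = F.affine_grid(theta, ones.shape, align_corners=False)
    return F.grid_sample(ones, grid, align_corners=False, padding_mode="zeros")

def tbam_attention(q, k, v, am):
    # Eq. (3): softmax(Q K^T / sqrt(d)) (V ⊙ AM); AM is flattened to one weight per token.
    d = q.shape[-1]
    attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
    return attn @ (v * am.flatten(1).unsqueeze(-1))
```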
Implementation Details The experimental results are obtained on the MMRotate platform (Zhou et al. 2022). We employ the checkpoints of ViT-small/-base (Dosovitskiy et al. 2020) and HiViT-base (Zhang et al. 2022c), which are all pre-trained using the MAE (He et al. 2022) selfsupervised strategy. We pre-train the ViT-small model and directly utilize the open-sourced checkpoints for the other models, wherein all the Transformer blocks of both the encoder and decoder are fully inherited. For a fair comparison, we adopt a similar experimental configuration as used in the benchmark methods (Wang et al. 2022; Yang et al. 2022; Xie et al. 2021b). The performance evaluation on DOTA-v1.0 follows a multi-scale setting, where the model is trained on the trainval-set and tested on the test-set. In contrast, for the ablation study on DOTAv1.0, a single-scale setting is adopted, where the model is trained on the train-set and tested on the val-set. All images would be cropped into patches of size 1024×1024 with an overlap of 500/200 pixels in multi-scale/single-scale setting. In the multi-scale setting, images are resized by 0.5×, 1.0×, and 1.5× before undergoing the cropping process, and no scale adjustment is adopted in the single-scale setting. In the HRSC2016 dataset, images are resized in such a way that the larger dimension of width and height becomes 800, while maintaining their original aspect ratios. During training, horizontal/vertical flipping and random rotation operations are conducted to increase the scale and diversity of training data. The model is trained for 12 epochs on DOTA-v1.0 and 36 epochs on HRSC2016. We adopt the AdamW optimizer (Kingma and Ba 2014) with an initial learning rate of 1e−4/2.5e−4 for DOTA-v1.0/HRSC2016, a weight decay of 0.05, and a layer decay of 0.75/0.90 for ViT/HiViT. All experiments are conducted on 8×A100 GPUs with a batch size of 8. (b) Attention Maps Comparison (a) Prediction Results Comparison Baseline STD Baseline STD Block 1 Block 2 Block 3 Block 4 Figure 4: Visualization of attention maps. Compare to the baseline Transformer, the attention maps in STD (bk1 to bk4) exhibit a stronger alignment with the semantic interpretation of the parameter estimated at the respective stage. Ablation Study Feasibility of Decoupled Parameter Prediction Prior to assessing the feasibility of the decoupling approach, we first investigate the performance of bounding box prediction relying solely on a single feature map in various levels. As shown in Table 2a, a consistent enhancement in performance is evident when employing feature maps from deeper layers for bounding box prediction. This observation suggests that while deep feature maps contribute to improved feature representations for object detection, shallower layers still contain valuable information for bounding box prediction, as their performance is only slightly lower. We also compare the performance with the decoupled parameter estimation approach. As shown in Fig. 2, we used the feature maps from the first block to predict x and y, the second block for α, the third block for w and h, and the final stage for the class score cls. Without the aid of CAMs, the performance of this decoupled configuration is slightly lower than predicting bounding boxes with the feature maps from the third/fourth block by a margin of 0.09%/0.28% mAP. These results provide evidence that suggests the feasibility and potential of designing a decoupled structure. 
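Returning briefly to the training configuration above, the layer-wise learning-rate decay (0.75 for ViT, 0.90 for HiViT) can be realized with per-parameter-group scaling. The sketch below is a generic illustration of that recipe, not the exact MMRotate configuration; the `blocks.<i>` naming scheme is an assumption about the backbone's parameter names.

```python
import torch

def layerwise_param_groups(model, base_lr=1e-4, layer_decay=0.75,
                           weight_decay=0.05, num_layers=12):
    # Scale the learning rate of shallower blocks down by layer_decay per layer,
    # following the common ViT fine-tuning recipe; 1-D params get no weight decay.
    groups = {}
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        if "blocks." in name:                        # assumed naming: "blocks.<i>. ..."
            layer_id = int(name.split("blocks.")[1].split(".")[0]) + 1
        elif "patch_embed" in name or "pos_embed" in name:
            layer_id = 0
        else:
            layer_id = num_layers + 1                # head / final norm parameters
        scale = layer_decay ** (num_layers + 1 - layer_id)
        key = (layer_id, param.ndim == 1)
        g = groups.setdefault(key, {"params": [], "lr": base_lr * scale,
                                    "weight_decay": 0.0 if param.ndim == 1 else weight_decay})
        g["params"].append(param)
    return list(groups.values())

# Example usage: optimizer = torch.optim.AdamW(layerwise_param_groups(backbone), lr=1e-4)
```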
As previously mentioned, we introduce CAMs to enhance the decoupling process, further reducing the performance gap between decoupled and non-decoupled approaches. Rationality of Model Design To showcase the rationale behind the detailed architecture of STD, we investigate the impact of both the order of parameter decoupling and the loThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6786 (a) (b) (c) (d) (e) Baseline STD missing detection redundant detection wrong detection inaccurate alignment Figure 5: Comparison of detection results. STD demonstrates superior performance in reducing false detections ((a), (b), and (c)), better discerning clustered objects ((c) and (e)), and improving the alignment with oriented objects ((c), (d), and (e)). Feature Levels mAP bk1 bk2 bk3 bk4 +CAMs 2 ♢ 69.65 71.85 2 ♢ 70.37 72.39 2 ♢ 70.88 72.27 2♢71.07† 72.21 (xy) α (wh) ♢ 70.79 72.78‡ (a) Feature Levels Decoupling Order mAP (xy) (wh) α 72.30 (wh) (xy) α 72.26 (xy) α (wh) 72.78 (wh) α (xy) 72.13 α (xy) (wh) 72.03 α (wh) (xy) 71.93 (b) Decoupling Order Table 2: Results of diagnostic studies. (a) Comparison of detection accuracy achieved using feature maps from various levels of Transformer block (bk1 to bk4). 2 denotes coupled bounding box prediction and ♢refers to class score estimation. † indicates the performance of original MAEBBoxHead while ‡ indicates our STD’s. (b) The influence of decoupling order on the overall performance of STD. Detector Model BBoxHead mAP Faster RCNN ViT-S MAEBBoxHead 71.07 Faster RCNN ViT-S STD 72.78 Oriented RCNN ViT-S MAEBBoxHead 72.41 Oriented RCNN ViT-S STD 73.43 Table 3: Comparison of object detection accuracy achieved by different RoI extraction networks. cation of activation mask integration. As shown in Table 2b, the order of decoupling has a significant influence on the performance of STD, and the optimal result is achieved under the configuration {x, y} →α →{w, h}. This phenomenon can be explained by the fact that alternative prediction orders fail to ensure that the RoI could consistently cover the entire foreground object. Adaptability to Different Backbones As discussed in the preceding section, as long as the RoI fully covers the foreground object, the activation masks can effectively activate the entire foreground region. Hence, our approach is expected to be adaptable to other RoI extraction methodologies. As indicated in Table 3, the decoupling module of STD also demonstrates strong performance when incorporated into the Oriented RCNN object detector (Xie et al. 2021b), which showcases the remarkable generalizability of our method. Visualization of Attention Maps In Figure 4, we present visualizations of the attention maps from different decoder layers of STD and a baseline Transformer model (Rotated Faster RCNN+ViT-S). In comparison to the baseline Transformer, the attention maps generated by the STD model at each stage exhibit a closer alignment with the semantic meaning of the corresponding predicted parameter. Specifically, when obtaining the positional information x, y, the attention tends to concentrate around the center of the object. Following this, the attention becomes more widespread, targeting one end and one edge of the object to capture information about its orientation α. Finally, the attention predominantly focuses on both ends of the object, aiming to capture details related to its scale. 
This phenomenon is likely a result of the decoupled bounding box prediction mechanism and the step-wise guidance provided by the activation masks, further confirming the effectiveness of the proposed architectural approach. Qualitative Comparison We also present a qualitative comparison between the results of STD and the baseline Transformer in Figure 5. STD is capable of mitigating the occurrence of false negatives/positives (as depicted in Figure 5(a), (b)), while also achieving notably improved alignment with oriented foreground objects across different scales (as shown in Figure 5(c), (d), (e)). This observation The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6787 Method Model Pre. PL BD BR GTF SV LV SH TC BC ST SBF RA HA SP HC mAP CSL R152 IN 90.25 85.53 54.64 75.31 70.44 73.51 77.62 90.84 86.15 86.69 69.60 68.04 73.83 71.10 68.93 76.17 R3Det R152 IN 89.80 83.77 48.11 66.77 78.76 83.27 87.84 90.82 85.38 85.51 65.67 62.68 67.53 78.56 72.62 76.47 SASM RX101 IN 89.54 85.94 57.73 78.41 79.78 84.19 89.25 90.87 58.80 87.27 63.82 67.81 78.67 79.35 69.37 79.17 ReDet ReR50 IN 88.81 82.48 60.83 80.82 78.34 86.06 88.31 90.87 88.77 87.03 68.65 66.90 79.26 79.71 74.67 80.10 GWD R152 IN 89.66 84.99 59.26 82.19 78.97 84.83 87.70 90.21 86.54 86.85 73.47 67.77 76.92 79.22 74.92 80.23 DEA ReR50 IN 89.92 83.84 59.65 79.88 80.11 87.96 88.17 90.31 88.93 88.46 68.93 65.94 78.04 79.69 75.78 80.37 KLD R152 IN 89.92 85.13 59.19 81.33 78.82 84.38 87.50 89.80 87.33 87.00 72.57 71.35 77.12 79.34 78.68 80.63 O-RCNN R50 IN 89.84 85.43 61.09 79.82 79.71 85.35 88.82 90.88 86.68 87.73 72.21 70.80 82.42 78.18 74.11 80.87 KFIoU Swin-T IN 89.44 84.41 62.22 82.51 80.10 86.07 88.68 90.90 87.32 88.38 72.80 71.95 78.96 74.95 75.27 80.93 RVSA-O ViTAE-B M† 88.97 85.76 61.46 81.27 79.98 85.31 88.30 90.84 85.06 87.50 66.77 73.11 84.75 81.88 77.58 81.24 RTMDet-R RTM-L CO 88.01 86.17 58.54 82.44 81.30 84.82 88.71 90.89 88.77 87.37 71.96 71.18 81.23 81.40 77.13 81.33 KFIoU Swin-B IN 89.26 86.48 62.09 82.86 79.97 85.64 88.47 90.70 86.69 87.54 71.84 68.74 79.62 81.11 76.64 81.18 ViT-B M 89.32 84.52 62.53 81.86 81.55 86.53 89.00 90.76 87.67 88.29 67.31 71.75 79.63 80.25 72.25 80.88 HiViT-B M 88.71 86.02 62.45 80.24 80.84 85.19 88.47 90.70 86.64 86.37 67.41 74.63 79.04 80.80 82.28 81.32 RVSA-O ViT-B M† 87.63 85.23 61.73 81.11 80.68 85.37 88.26 90.80 86.38 87.21 67.93 69.81 84.06 81.25 77.76 81.01 STD-O (Ours) ViT-B M 88.56 84.53 62.08 81.80 81.06 85.06 88.43 90.59 86.84 86.95 72.13 71.54 84.30 82.05 78.94 81.66 STD-O (Ours) HiViT-B M 89.15 85.03 60.79 82.06 80.90 85.76 88.45 90.83 87.71 87.29 73.99 71.25 85.18 82.17 82.95 82.24 Table 4: Performance comparison on DOTA-v1.0. Classes: PL-plane; BD-baseball diamond; BR-bridge; GTF-ground track field; SV-small vehicle; LV-large vehicle; SH-ship; TC-tennis court; BC-baseball court; ST-storage tank; SBF-soccer ball field; RA-roundabout; HA-harbor; SP-swimming pool; HC-helicopter. Pretraining: IN-supervised pretraining on the ImageNet; COsupervised pretraining on the MS COCO; M-MAE self-supervised pretraining on the ImageNet; M†-MAE self-supervised pretraining on the MillionAID (Long et al. 2021), a large remote sensing dataset including about 1 million images. highlights the capability of STD in effectively modeling the object orientations without compromising the precision in capturing spatial location and shape information. Performance Comparison In this section, we present comprehensive experimental results obtained on the DOTA-v1.0 and HRSC2016 datasets. 
DOTA-v1.0 Table 4 provides a comprehensive comparison of our method with state-of-the-art approaches on DOTA-v1.0, including CSL (Yang and Yan 2020), R3Det (Yang et al. 2021a), SASM (Hou et al. 2022), ReDet (Han et al. 2021), GWD (Yang et al. 2021b), DEA (Liang et al. 2022), KLD (Yang et al. 2021c), Oriented RCNN (O-RCNN) (Xie et al. 2021b), KFIoU (Yang et al. 2022), RVSA (Wang et al. 2022), and RTMDet (Lyu et al. 2022). We evaluate STD within Oriented RCNN frameworks (STD-O) and both ViT and HiViT models are used for evaluations. Remarkably, STD achieves new state-of-the-art performance in both frameworks. When coupled with ViT-B and HiViT-B backbones, STD achieves 81.66% and 82.24% mAP, respectively, surpassing the previous best results. HRSC2016 In Table 5, the experimental results demonstrate that the STD method outperforms all other methods, achieving an impressive mAP of 90.67% and 98.55% under the PASCAL VOC 2007 (Everingham et al. 2010) and VOC 2012 metrics, respectively. Computational Cost. STD has a small computational cost overhead. While the baseline detector with HiViT-B takes around 288ms on average to process an image using A100 GPUs, the overall STD detector requires an average processing time of 315ms. Method Model Pre. mAP(07) mAP(12) CenterMap (Wang et al. 2021) R101 IN 92.70 RoI Trans. (Ding et al. 2019) R101 IN 86.20 Gilding Vertex (Xu et al. 2020) R101 IN 88.20 R3Det (Yang et al. 2021a) R101 IN 89.26 96.01 DAL (Ming et al. 2021) R101 IN 89.77 GWD (Yang et al. 2021b) R101 IN 89.85 97.37 S2ANet (Han et al. 2022) R101 IN 90.17 95.01 AOPG (Cheng et al. 2022) R101 IN 90.34 96.22 ReDet (Han et al. 2021) ReR50 IN 90.46 97.63 O-RCNN (Xie et al. 2021b) R101 IN 90.50 97.60 RTMDet-R (Lyu et al. 2022) RTM-L CO 90.60 97.10 STD-O (ours) ViT-B M 90.67 98.55 STD-O (ours) HiViT-B M 90.63 98.20 Table 5: Comparison of performance on HRSC2016. Conclusion This paper introduces Spatial Transform Decoupling (STD), an oriented object detection method that separates the parameter prediction process into multiple disentangled stages. Such a decoupled process is further enhanced by incorporating cascaded activation masks, which introduce dense guidance into the self-attention mechanism. Extensive experiments have demonstrated the effectiveness of STD on multiple popular benchmarks. To the best of our knowledge, STD is a pioneering method that tackles oriented object detection in remote sensing with a structural perspective. Notably, the Transformer-based nature of STD enables seamless integration with various advanced pre-trained models, providing significant benefits to the research community. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6788 Acknowledgments This work was supported by National Natural Science Foundation of China (NSFC) under Grant 62225208 and 62171431, and by China Postdoctoral Science Foundation under Grant 2023M743442. References Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In ECCV, 213–229. Cheng, G.; Wang, J.; Li, K.; Xie, X.; Lang, C.; Yao, Y.; and Han, J. 2022. Anchor-free oriented proposal generator for object detection. TGRS, 60: 1–11. Cohen, T.; and Welling, M. 2016. Group equivariant convolutional networks. In ICML, 2990–2999. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Ding, J.; Xue, N.; Long, Y.; Xia, G.-S.; and Lu, Q. 
2019. Learning RoI transformer for oriented object detection in aerial images. In CVPR, 2849–2858. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Everingham, M.; Van Gool, L.; Williams, C. K.; Winn, J.; and Zisserman, A. 2010. The pascal visual object classes (voc) challenge. IJCV, 88(2): 303–338. Fang, Y.; Wang, W.; Xie, B.; Sun, Q.; Wu, L.; Wang, X.; Huang, T.; Wang, X.; and Cao, Y. 2023. Eva: Exploring the limits of masked visual representation learning at scale. In CVPR, 19358–19369. Fang, Y.; Yang, S.; Wang, S.; Ge, Y.; Shan, Y.; and Wang, X. 2022. Unleashing vanilla vision transformer with masked image modeling for object detection. arXiv preprint arXiv:2204.02964. Guo, Z.; Liu, C.; Zhang, X.; Jiao, J.; Ji, X.; and Ye, Q. 2021. Beyond bounding-box: Convex-hull feature adaptation for oriented and densely packed object detection. In CVPR, 8792–8801. Han, J.; Ding, J.; Li, J.; and Xia, G.-S. 2022. Align deep features for oriented object detection. TGRS, 60: 1–11. Han, J.; Ding, J.; Xue, N.; and Xia, G.-S. 2021. Redet: A rotation-equivariant detector for aerial object detection. In CVPR, 2786–2795. He, K.; Chen, X.; Xie, S.; Li, Y.; Doll´ar, P.; and Girshick, R. 2022. Masked autoencoders are scalable vision learners. In CVPR, 16000–16009. He, K.; Girshick, R.; and Doll´ar, P. 2019. Rethinking imagenet pre-training. In ICCV, 4918–4927. He, K.; Gkioxari, G.; Doll´ar, P.; and Girshick, R. 2017. Mask r-cnn. In ICCV, 2961–2969. Hou, L.; Lu, K.; Xue, J.; and Li, Y. 2022. Shape-adaptive selection and measurement for oriented object detection. In AAAI, volume 36, 923–932. Jaderberg, M.; Simonyan, K.; Zisserman, A.; et al. 2015. Spatial transformer networks. NeurIPS, 28. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Li, W.; Chen, Y.; Hu, K.; and Zhu, J. 2022a. Oriented reppoints for aerial object detection. In CVPR, 1829–1838. Li, Y.; Mao, H.; Girshick, R.; and He, K. 2022b. Exploring plain vision transformer backbones for object detection. In ECCV, 280–296. Liang, D.; Geng, Q.; Wei, Z.; Vorontsov, D. A.; Kim, E. L.; Wei, M.; and Zhou, H. 2022. Anchor retouching via model interaction for robust object detection in aerial images. TGRS, 60: 1–13. Lin, T.-Y.; Doll´ar, P.; Girshick, R.; He, K.; Hariharan, B.; and Belongie, S. 2017a. Feature pyramid networks for object detection. In CVPR, 2117–2125. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; and Doll´ar, P. 2017b. Focal loss for dense object detection. In ICCV, 2980–2988. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 10012–10022. Liu, Z.; Yuan, L.; Weng, L.; and Yang, Y. 2017. A high resolution optical satellite image dataset for ship recognition and some new baselines. In ICPRAM, volume 2, 324–331. Long, Y.; Xia, G.-S.; Li, S.; Yang, W.; Yang, M. Y.; Zhu, X. X.; Zhang, L.; and Li, D. 2021. On creating benchmark dataset for aerial image interpretation: Reviews, guidances, and million-aid. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens., 14: 4205–4230. Lowe, D. G. 1999. Object recognition from local scaleinvariant features. In ICCV, 1150–1157. Lyu, C.; Zhang, W.; Huang, H.; Zhou, Y.; Wang, Y.; Liu, Y.; Zhang, S.; and Chen, K. 2022. 
Rtmdet: An empirical study of designing real-time object detectors. arXiv preprint arXiv:2212.07784. Ming, Q.; Zhou, Z.; Miao, L.; Zhang, H.; and Li, L. 2021. Dynamic anchor learning for arbitrary-oriented object detection. In AAAI, volume 35, 2355–2363. Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster rcnn: Towards real-time object detection with region proposal networks. NeurIPS, 28. Tian, Y.; Xie, L.; Wang, Z.; Wei, L.; Zhang, X.; Jiao, J.; Wang, Y.; Tian, Q.; and Ye, Q. 2023. Integrally Pre-Trained Transformer Pyramid Networks. In CVPR, 18610–18620. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. NeurIPS, 30. Wang, D.; Zhang, Q.; Xu, Y.; Zhang, J.; Du, B.; Tao, D.; and Zhang, L. 2022. Advancing plain vision transformer toward remote sensing foundation model. TGRS, 61: 1–15. Wang, J.; Yang, W.; Li, H.-C.; Zhang, H.; and Xia, G.-S. 2021. Learning center probability map for detecting objects in aerial images. TGRS, 59(5): 4307–4323. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6789 Weiler, M.; and Cesa, G. 2019. General e (2)-equivariant steerable cnns. NeurIPS, 32. Xia, G.-S.; Bai, X.; Ding, J.; Zhu, Z.; Belongie, S.; Luo, J.; Datcu, M.; Pelillo, M.; and Zhang, L. 2018. DOTA: A largescale dataset for object detection in aerial images. In CVPR, 3974–3983. Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J. M.; and Luo, P. 2021a. SegFormer: Simple and efficient design for semantic segmentation with transformers. NeurIPS, 34: 12077–12090. Xie, X.; Cheng, G.; Wang, J.; Yao, X.; and Han, J. 2021b. Oriented R-CNN for object detection. In ICCV, 3520–3529. Xu, Y.; Fu, M.; Wang, Q.; Wang, Y.; Chen, K.; Xia, G.-S.; and Bai, X. 2020. Gliding vertex on the horizontal bounding box for multi-oriented object detection. TPAMI, 43(4): 1452–1459. Yang, X.; and Yan, J. 2020. Arbitrary-oriented object detection with circular smooth label. In ECCV, 677–694. Yang, X.; Yan, J.; Feng, Z.; and He, T. 2021a. R3det: Refined single-stage detector with feature refinement for rotating object. In AAAI, volume 35, 3163–3171. Yang, X.; Yan, J.; Ming, Q.; Wang, W.; Zhang, X.; and Tian, Q. 2021b. Rethinking rotated object detection with gaussian wasserstein distance loss. In ICML, 11830–11841. Yang, X.; Yang, X.; Yang, J.; Ming, Q.; Wang, W.; Tian, Q.; and Yan, J. 2021c. Learning high-precision bounding box for rotated object detection via kullback-leibler divergence. NeurIPS, 34: 18381–18394. Yang, X.; Zhou, Y.; Zhang, G.; Yang, J.; Wang, W.; Yan, J.; ZHANG, X.; and Tian, Q. 2022. The KFIoU Loss for Rotated Object Detection. In ICLR. Yang, Z.; Liu, S.; Hu, H.; Wang, L.; and Lin, S. 2019. Reppoints: Point set representation for object detection. In ICCV, 9657–9666. Yuan, Y.; Fu, R.; Huang, L.; Lin, W.; Zhang, C.; Chen, X.; and Wang, J. 2021. Hrformer: High-resolution transformer for dense prediction. arXiv preprint arXiv:2110.09408. Zhang, Q.; Xu, Y.; Zhang, J.; and Tao, D. 2022a. Vsa: Learning varied-size window attention in vision transformers. In ECCV, 466–483. Zhang, X.; Liu, F.; Peng, Z.; Guo, Z.; Wan, F.; Ji, X.; and Ye, Q. 2022b. Integral migrating pre-trained transformer encoder-decoders for visual object detection. arXiv preprint arXiv:2205.09613. Zhang, X.; Tian, Y.; Huang, W.; Ye, Q.; Dai, Q.; Xie, L.; and Tian, Q. 2022c. Hivit: Hierarchical vision transformer meets masked image modeling. arXiv preprint arXiv:2205.14949. 
Zhou, Y.; Yang, X.; Zhang, G.; Wang, J.; Liu, Y.; Hou, L.; Jiang, X.; Liu, X.; Yan, J.; Lyu, C.; et al. 2022. Mmrotate: A rotated object detection benchmark using pytorch. In ACM MM, 7331–7334. Zhou, Y.; Ye, Q.; Qiu, Q.; and Jiao, J. 2017. Oriented response networks. In CVPR, 519–528. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6790 | 2024 | 754 |
18,578 | Step Vulnerability Guided Mean Fluctuation Adversarial Attack against Conditional Diffusion Models Hongwei Yu1, Jiansheng Chen1*, Xinlong Ding1, Yudong Zhang2, Ting Tang1, Huimin Ma1 1School of Computer and Communication Engineering, University of Science and Technology Beijing, China 2Department of Electronic Engineering, Tsinghua University, China [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract The high-quality generation results of conditional diffusion models have brought about concerns regarding privacy and copyright issues. As a possible technique for preventing the abuse of diffusion models, the adversarial attack against diffusion models has attracted academic attention recently. In this work, utilizing the phenomenon that diffusion models are highly sensitive to the mean value of the input noise, we propose the Mean Fluctuation Attack (MFA) to introduce mean fluctuations by shifting the mean values of the estimated noises during the reverse process. In addition, we reveal that the vulnerability of different reverse steps against adversarial attacks actually varies significantly. By modeling the step vulnerability and using it as guidance to sample the target steps for generating adversarial examples, the effectiveness of adversarial attacks can be substantially enhanced. Extensive experiments show that our algorithm can steadily cause the mean shift of the predicted noises so as to disrupt the entire reverse generation process and degrade the generation results significantly. We also demonstrate that the step vulnerability is intrinsic to the reverse process by verifying its effectiveness in an attack method other than MFA. Code and Supplementary is available at https://github.com/yuhongwei22/MFA Introduction Due to the high generation quality and training stability, the diffusion model has become a competitive deep generation model recently (Ho, Jain, and Abbeel 2020; Song, Meng, and Ermon 2020; Croitoru et al. 2023; Rombach et al. 2022). A Diffusion model consists of two essential processes. The forward process is a Markov chain that gradually incorporates noises into the input data to diffuse it to a standard Gaussian noise. Conversely, the reverse process functions as a parametric Markov chain that runs in the opposite direction and is designed to learn how to reverse the diffusion process by estimating the added noises. To date, diffusion models have demonstrated outstanding performances by achieving many state-of-the-art results in various generation tasks. To achieve better control over the generation during the reverse process, various prompts are used in diffusion models, such as images (Rombach et al. 2022; Batzolis et al. *Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 2021; Gal et al. 2022), sketches (Voynov, Aberman, and Cohen-Or 2022; Peng et al. 2023), and text (Nichol et al. 2021; Poole et al. 2022; Ramesh et al. 2022; Saharia et al. 2022a). These prompts are encoded by a prompt encoder and serve as conditional inputs to each step of the reverse process, enabling effective control over the generation. Due to its stable theoretical foundation (Song et al. 2020; Bao et al. 2022) and highly applicable techniques (Gal et al. 2022; Lu et al. 
2022), conditional diffusion models have been successfully used in diverse fields, including image synthesis (Ho, Jain, and Abbeel 2020; Song, Meng, and Ermon 2020; Song and Ermon 2019; Ruiz et al. 2023), image editing (Kawar et al. 2023; Batzolis et al. 2021; Esser et al. 2021), and video synthesis (Yang, Srivastava, and Mandt 2022). However, with the successful application of the conditional diffusion model, there has been a concern that its high-quality generation results may bring about privacy and copyright issues. Therefore, researchers (Salman et al. 2023; Liang et al. 2023; Zhuang, Zhang, and Liu 2023) are beginning to study the adversarial attack against diffusion models as a possible technique for preventing the abuse of diffusion models. Existing research (Liang et al. 2023; Zhang et al. 2023) has revealed that conditional input is probably a weak point of the conditional diffusion model. This is because conditional diffusion models usually feed conditions to each step of the reverse process so that attackers can effectively influence the reverse process by adding adversarial perturbations to prompts. Most previous adversarial attacks against conditional diffusion models mainly focus on attacking the prompt encoder and the internal structure of Unet (Zhuang, Zhang, and Liu 2023; Zhang et al. 2023), known as the embedding attack. The core of such an attack is to increase the distance between the clean condition input and the corresponding adversarial example in the embedding space. As such, the embedding attack is more likely to attack the prompt encoder rather than the whole diffusion model since the reverse denoising process is basically not involved. Recently, there have been works (Liang et al. 2023; Liu et al. 2023) that consider the adversarial attack against the reverse process. A typical attacking strategy is to increase the error of the estimated noise in the reverse process. For example, AdvDM (Liang et al. 2023) increases the estimation error of the noise by directly maximizing the training loss of the diffusion model. Nevertheless, such an approach treats the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6791 Figure 1: Overview of MFA-MVS for generating adversarial examples against conditional diffusion models. The algorithm consists of two main parts. First, the adversarial vulnerabilities of different reverse steps are estimated. Second, the most vulnerable step is specified as the target for generating adversarial examples. The generated examples are sent to every step of the reverse process to generate fluctuation of the noise means. reverse process as a black box which does not help to understand the differences and correlations between steps in the reverse process and limits the effectiveness of the attack. In this paper, we focus on studying how conditional input adversarial samples influence the reverse process. We reveal that the reverse process steps are extremely sensitive to the mean value of the input noise. For example, if there is a 10% shift in the mean of the initial randomly sampled Gaussian noise input, the reverse process of a diffusion model will experience a collapse by generating a blank image without textures. Utilizing this phenomenon, we propose the Mean Fluctuation Attack (MFA) to introduce mean fluctuations during the reverse process. Adversarial examples generated by MFA can effectively influence the reverse process by shifting the mean values of the estimated noises. 
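One way to read this idea in code is a PGD-style loop that perturbs the conditioning image so that the mean of the predicted noise drifts away from zero. The sketch below is only a schematic rendering of that intuition: `eps_model` and `q_sample` are placeholders for the conditional noise predictor and the forward diffusion, the target step is sampled uniformly here, and the actual MFA objective and step-selection strategy are specified later in the Methodology.

```python
import torch

def mfa_style_attack(eps_model, cond_img, x0, timesteps, q_sample,
                     eps_budget=8 / 255, alpha=2 / 255, iters=40):
    # Schematic l_inf PGD on the conditioning image: push the mean of the predicted
    # noise away from zero at sampled reverse steps (a stand-in for the MFA loss).
    delta = torch.zeros_like(cond_img, requires_grad=True)
    for _ in range(iters):
        t = timesteps[int(torch.randint(len(timesteps), (1,)))]   # target step (uniform here)
        noise = torch.randn_like(x0)
        x_t = q_sample(x0, t, noise)                              # diffuse x0 to step t
        eps_pred = eps_model(x_t, t, cond_img + delta)
        loss = eps_pred.mean().abs()                              # mean shift of the prediction
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()                    # gradient ascent on the shift
            delta.clamp_(-eps_budget, eps_budget)
            delta.grad = None
    return (cond_img + delta).clamp(0, 1).detach()
```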
Since the reverse process consists of multiple steps, it is usually necessary to specify a target step to attack in each iteration of the optimization process for generating the adversarial example against diffusion models. In existing works (Zhang et al. 2023; Xue et al. 2023; Liang et al. 2023), the target reverse steps are often uniformly randomly sampled. However, we argue that this may not be the best choice by revealing that the vulnerability against adversarial attacks of different reverse steps actually varies significantly. By appropriately modeling the adversarial vulnerability of reverse steps and increasing the probability of sampling the steps with higher vulnerability, the effectiveness of adversarial attacks can be substantially enhanced. Even more interestingly, we find that under certain conditions, the most effective adversarial attack can be achieved by attacking the most vulnerable reverse step only. We refer to such an attack method as MFA-MVS (Most Vulnerable Step), of which the algorithm flow is shown in Figure 1. Generally, the algorithm consists of two main parts. First, the adversarial vulnerabilities of different reverse steps are estimated. Second, the most vulnerable step is specified as the target for generating adversarial examples. We further propose a mathematical explanation for the vulnerability of different steps to reveal how adversarial samples generate mean value shifts of estimated noises for different steps under MFA attacks. We focus on attacking images as prompts in this work. However, our proposal can also be extended to other types of prompts in conditional generations using DMs. Extensive experiments are performed to verify that our proposal successfully steers diffusion models to generate mean shifts in the estimated noises, which ultimately degrades the generation quality to a significant extent. The main contributions of this work are as follows. • We propose the Mean Fluctuation Attack (MFA) against conditional diffusion models based on the finding that the reverse process of the diffusion model is extremely sensitive to the shift of the mean noise value. • We reveal that reverse steps differ a lot in terms of the vulnerability against adversarial attacks, based on which the effectiveness of MFA can be further enhanced. • We provide a mathematical explanation on the adversarial vulnerability of the reverse steps against MFA. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6792 Preliminary Diffusion Models The Diffusion Model (DM) is a latent variable model of the form pθ(x0) := R pθ(x0 : T)dx1:T , where x1, ..., xT are latent variables of the same dimensionality as the data x0 ∼q(x0). The joint distribution pθ(x0:T ) is called the reverse process or the generation process, and it is defined as a Markov chain with learned Gaussian transitions starting at p(xT ) = N(xT ; 0, I). A DM usually contains two processes, namely the forward process and the reverse process. Forward Process: What distinguishes diffusion models from other types of latent variable models is that the approximate posterior q(x1:T |x0), called the forward process or diffusion process, is fixed to a Markov chain that gradually adds Gaussian noise to the data according to a variance schedule α1, ..., αT as Eq. 1 , where αt, βt > 0, α2 t + β2 t = 1, ¯αt = αtαt−1...α1, and ¯βt = p 1 −¯αt2. 
q(x1:T ) := T Y t=1 q(xt|xt−1), q(xt|xt−1) = N(xt; αt−1xt−1, β2 t−1I), q(xt|x0) = N(xt; ¯αtx0, ¯β2 t I) (1) Reverse Process: The p(xt−1|xt, x0) is usually used to approximate p(xt−1|xt), which can be expressed as Eq. 2. pθ(x0:T ) := p(xT ) T Y t=1 pθ(xt−1|xt), p(xt−1|xt) ≈p(xt−1|xt, x0), p(xt−1|xt, x0) = N(xt−1; 1 αt (xt −β2 t ¯βt ϵθ(xt, t)), ¯β2 t−1β2 t ¯β2 t I) (2) Conditional diffusion models add the condition c to each step to control the reverse process. The training loss function is shown in Eq. 3, and the aim is to predict the added noise ε ∼N(0, I) using the model ϵθ(xt, t, c). L(θ) := Et,x0,ϵ,c||ε −ϵθ(xt, t, c)||2 (3) Adversarial Examples Adversarial examples for classifiers: Given an image x and a classifier f(·) (Madry et al. 2017; Carlini and Wagner 2017), an adversarial example x′ satisfies two properties: D(x, x′) is small for some distance metric D, and f(x) ̸= f(x′). That is, images x and x′ appear visually similar but x′ is classified incorrectly. Following previous work (Dong et al. 2018; Zhang et al. 2022; Goodfellow, Shlens, and Szegedy 2014; Yu et al. 2023b), we use l∞as a distance matrix to measure the similarity between two images. Adversarial examples for Diffusion models: Recently, many studies (Nie et al. 2022; Sun et al. 2022; Lee and Kim 2023) have employed Diffusion models as a supplementary technique to enhance the robustness of classification models. Nevertheless, there has been limited research conducted on the robustness of diffusion models or on adversarial attacks against conditional diffusion models. Early works mainly focus on attacking the prompt encoder (Maus et al. 2023; Milli`ere 2022; Daras and Dimakis 2022). Zhang et al. (Zhuang, Zhang, and Liu 2023) manipulate the text encoder by including redundant characters in the input prompt to deceive it into attacking diffusion models. Zhang et al. (Zhang et al. 2023) aim to attack the the internal structure of Unet and distort the resulting image to disrupt the function of latent diffusion models. However, none of these studies delve into the adversarial robustness of the reverse denoising process, which is essential to DMs. Recently, there have been works that start to consider the adversarial attack against the reverse process, typically by increasing the error of the estimated noise. For example, AdvDM (Liang et al. 2023) attacks the reverse process by directly maximizing the training loss shown in Eq. 3, which represents the estimation error between the predicted noise and the added noise. This work mainly focuses on the textual inversion task in which several concept images provided by the user are used to learn pseudo-words in the space of text embedding to represent these concepts. Then these pseudo-words are combined into natural language sentences to guide the personalized generation. It is actually not clear whether AdvDM effectively attacks the word generation model’s initial stage or the subsequent phase of the diffusion model. The differences and correlations between reverse steps are not explicitly studied in AdvDM which limits the effectiveness of the attack. Methodology In this section, we first introduce the Mean Fluctuation Attack (MFA). Then by analyzing the unique properties of DM, we model the vulnerability of different steps in the reverse process against adversarial attacks. We further demonstrate that more effective attack can be achieved by considering step vulnerability in MFA. 
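Before the attack is formalized, it helps to see the two competing objectives side by side in code. The sketch below first writes the conditional training loss of Eq. 3 (the quantity AdvDM maximizes), using the closed-form forward sampling from Eq. 1, and then a projected-ascent loop for the basic MFA, which instead maximizes the mean of the predicted noise (formalized as Eq. 4 in the next subsection). It reuses the schedule tensors and the assumed noise predictor eps_model from the earlier snippet; the 1/255 step size, the 8/255 l_inf budget and the 70 iterations follow the experimental settings reported later, and reading mu(.) as a scalar mean is a simplification.

import torch

def q_sample(x0, t, eps):
    # closed-form forward sampling (Eq. 1): x_t = alpha_bar_t * x_0 + beta_bar_t * eps
    return alpha_bar[t] * x0 + beta_bar[t] * eps

def diffusion_loss(eps_model, x0, cond):
    # conditional training objective (Eq. 3), which AdvDM maximizes w.r.t. the condition
    t   = torch.randint(0, T, (1,)).item()        # t ~ U(1, T), 0-indexed here
    eps = torch.randn_like(x0)                    # eps ~ N(0, I)
    return ((eps - eps_model(q_sample(x0, t, eps), t, cond)) ** 2).mean()

def mfa_attack(eps_model, x0, cond, step_size=1/255, budget=8/255, iters=70, sample_t=None):
    # basic MFA: find delta, ||delta||_inf <= budget, maximizing |mu(eps_theta(x_t, t, c + delta))|
    delta = torch.zeros_like(cond, requires_grad=True)
    for _ in range(iters):
        t   = sample_t() if sample_t is not None else torch.randint(0, T, (1,)).item()
        eps = torch.randn_like(x0)
        x_t = q_sample(x0, t, eps)                               # x_t ~ q(x_t | x_0)
        loss = eps_model(x_t, t, cond + delta).mean().abs()      # mean-fluctuation objective
        loss.backward()
        with torch.no_grad():                                    # projected l_inf ascent step
            delta += step_size * delta.grad.sign()
            delta.clamp_(-budget, budget)
            delta.grad.zero_()
    return (cond + delta).detach()

The vulnerability-guided variants introduced next (MFA-VT and MFA-MVS) reuse the same loop and only change how sample_t draws the target step.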
Mean Fluctuation Attack We have observed that DMs are highly sensitive to mean values of the initial randomly sampled Gaussian noise input XT . Based on such an observation, we propose the Mean Fluctuation Attack (MFA) that aims at generating mean fluctuations by increasing (or decreasing) the mean value of predicted noise in the reverse process. Suppose the clean prompt image used as the condition to be c, we define the adversarial example as c′ = c + δ, where δ is the adversarial perturbation. The objective of MFA is to maximize the mean fluctuation generated during the reverse process, which can be expressed as Eq. 4. More specifically, we aim to find the optimal perturbation δ that maximizes the expectation of the mean value of the predicted noise ϵθ(xt, t, c+δ). Here, xt is sampled from the distribution q(xt|x0), t is sampled from the uniform distribution U(1, T), and η is the norm constraint of perturbation δ. Detail flowchart of the algorithm is shown in the Supplementary. δ := arg max δ E||µ(ϵθ(xt, t, c + δ))||, where xt ∼q(xt|x0), t ∼U(1, T), ||δ||∞≤η (4) MFA can effectively generate mean fluctuations, which affect the reverse process and result in mean shifting pheThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6793 nomena similar to that of direct modification of xT . Intuitively, the MFA adversarial example will result in a mean shift of the predicted noise with consistent direction for each step in the reverse process. The accumulation of such mean shifts can ultimately invalidate the generation results. In the basic version of MFA, the reverse steps t are sampled with equal probability, assuming that the effectiveness for attacking each step is the same. However, we have discovered that when optimizing adversarial examples, the benefits of attacking different steps vary a lot. To further enhance the effectiveness of MFA, we combine theoretical analysis with empirical observations and introduce the concept of step-wise vulnerability, denoted as v(t), which quantifies the benefit obtained by attacking step t. As such, we can increase the sampling probabilities of steps with higher vulnerability. Therefore, the objective of MFA can be modified as Eq. 5. We name such a modified version as MFA-VT. δ := arg max δ E||µ(ϵθ(xt, t, c + δ))||, where xt ∼q(xt|x0), t ∼P(t), ||δ||∞≤η (5) In Eq. 5, P(t) represents the sampling distribution determined by vulnerabilities. The probability of sampling step t is defined as Eq. 6. P(t) = v(t) PT i=1 v(i) (6) We have further observed that when the number of attacking iterations is sufficiently large, MFA-VT produces excellent results. However, when the number of attacking iterations is limited, fixing t to the step with the highest vulnerability can effectively increase the performance, as is shown in Eq. 7. We name such a modified version as MFA-MVS. δ := arg max δ E||µ(ϵθ(xt, t, c + δ))||, where xt ∼q(xt|x0), t = arg max t v(t), ||δ||∞≤η (7) In the next subsection, we will present a detailed analysis as well as a mathematical modeling of v(t). Step Vulnerability We believe that there are three main factors that contribute to the varying benefits of attacking different steps t in the reverse process. (1) Chain-like structure. In the reverse process, the input to a step depends on the output of the previous step. Hence, the mean fluctuations will be transferred and amplified step by step. (2) Stability of different steps. It can be observed that attacking different steps result in different magnitudes of mean fluctuations. 
Such a variability in stability also contributes to the varying benefits of attacking different steps. (3) Transferability between different steps. When an adversarial example generated by attacking a specific step is applied to other steps as condition , their effectiveness tends to diminish as the steps become more distant from the attacked step. The diminishing transferability between steps further adds to the variation in benefits obtained from attacking different steps. Coupled together, these three factors contribute to the significant difference in the vulnerability of reverse steps against adversarial attacks. In the following, we first analyze the three factors separately by fixing the target step t to attack. Then we combine the three factors to present a unified mathematical model of v(t). All the analyses in this section are conducted based on the inpainting task using the Latent diffusion model (LDM) (Rombach et al. 2022). (1) Chain-like structure. A typical forward process of a denoising DM is a stable Markov chain that gradually adds Gaussian noise to the data based on pre-designed noise until the distribution of the data converges to the standard Gaussian distribution. According to Eq. 1, we can model the forward process as Eq. 8. Repeated iterations lead to the formula for diffusion process sampling as Eq. 9. xt = αtxt−1 + βtεt, εt ∼N(0, I) (8) xt = ¯αtx0 + ¯βt¯εt, ¯εt ∼N(0, I) (9) For an x0, assume that the mean fluctuation occurs at the nth step in the reverse process, denoted as x′ n = xn + ξ, 1 ≤n ≤T and ξ is a constant. From Eq. 9, the correct x0 should be x0 = 1 ¯αn (xn −¯βn¯εn), and after the mean fluctuation occurs at the nth step, the incorrectly estimated x′ 0 can be expressed as x′ 0 = 1 ¯αn (xn + ξ −¯βn¯εn). As such, the impact of the mean fluctuation occurs at the nth step on the final generation result can be expressed as Eq. 10. x′ 0 −x0 = ξ ¯αn (10) Since ¯αn = Qn i=1 αi, 0 < αi < 1, it is obvious that mean fluctuations are amplified as the reverse process unfolds, and as n increases, this effect becomes more significant. (2) Stability of different steps. If attacking different steps produce the fluctuations of the same magnitude, it is clear that attacking steps at larger t will yield greater benefits according to Eq. 10. However, we find that this is not true in practice. One reason is that in actual attacks, the magnitudes of mean fluctuation generated by attacking different steps differs. The conditional diffusion models estimates the added noise ε ∼N(0, I) using ϵ(zt, t, c), which can be rephrased as ϵ(¯αtz0 + ¯βt¯εt, t, c). It can be observe that as t increases, the first term of the input approaches noise, making it easier for the network to estimate the noise. Therefore, steps at larger t are substantially more stable when facing attacks. To confirm this, we calculated the training loss which measures the different between estimated noise and added noise at different steps and found that the loss decreases as t increases. Due to the complexity of neural networks, it is difficult to theoretically derive and model the stability of different steps. Therefore, we performed an empirical modeling by calculating the mean difference in predicted noise before and after the attack for different steps and normalized the results. Detail results are shown in the Supplementary, which reveals an approximately linear relationship between the stability and t. As t increases, the shift of the noise mean decreases, indicating stronger network stability. 
The stability can be approximated by a simple linear relationship defined in Eq. 11, of which the Goodness of Fit R2 ≈0.972, indicating a high degree of fit between S(t) and the actual data. S(t) = 1 −0.8 ∗t T (11) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6794 Figure 2: Statistical results on the impact of adversarial examples generated by attacking different steps on step T. The left figure shows that the attack becomes less effective when the attack step is farther from T. For the data in the left figure, we normalized it and fitted it linearly. (3) Transferability between different steps. When attacking the condition of a DM, the generated adversarial example are actually passed to all the steps of the reverse process. Considering that different steps are using the same network with different inputs for noise prediction, the transferability of the adversarial example should also be considered. Specifically, an adversarial example generated by attacking a fixed target step should also have certain attacking effect on other steps. We further find that such transferability is more prominent between adjacent steps. This is understandable considering that the inputs of adjacent steps are highly similar. For example, since xn are highly similar to xn−1 and xn+1, the adversarial example c′ n generated through attacking the step t = n will also have significant attack effects on steps t = n−1 and t = n+1. Intuitively, the further apart two steps are, the weaker the transferability will be. To verify the conjecture, we applied the adversarial example c′ t generated by attacking step t and the clean sample c to step T. We calculate the difference y = ||x′ 0−x0||1 between x′ 0 and x0 predicted using c′ t and c. To better fit the data, we first normalized the data using a nonlinear transformations y′ = log((1/(y + e−5)) −1). Subsequently, we visualized the relationship between y′ and t and fitted it using a straight line. From the left of Figure 2, the experimental results verify our conjecture that the generated adversarial examples are more effective when being applied to steps closer to the target step. The relationship between y′ and t can be represented as: y′ = −0.0092t+7.08. Therefore, the transferability of the adversarial examples generated by attacking step t applying on step i can be estimated as Eq. 12. τ i t = 1 1 + e(7.08−9.2(T −abs(t−i)) T ) = 1 1 + e(−2.08+ 9.2abs(t−i) T ) (12) From the visualization results in Figure 2 left, we find that as the target step gets closer to T, the adversarial examples generated by attacking target step have better attack effects on step T. Figure 2 right shows the relationship between y′ and t. We calculated the correlation coefficient between the fitted curve and the data points, obtaining a R2 ≈0.968, indicating a high degree of fit. Step Vulnerability. Taking into account the three above factors, along with the coefficients in Eq. 2, we define the vulnerability of step t as the total magnitude of fluctuations that can be generated by attacking step t as Eq. 13, in which δi t signifies the effect of adversarial examples generated at t step on the i step, 1/¯αi represents the amplification effect brought by the chain-like structure, β2 i /¯βi is the coefficient in front of the reverse process shown in Eq. 2, and the last two terms represent transferability and stability respectively. 
v(t) = T X i=1 1 ¯αi β2 i ¯βi ∗τ i t ∗S(t)ξ = T X i=1 1 ¯αi β2 i ¯βi ∗ 1 1 + e(−2.08+ 9.2abs(t−i) T ) ∗(1 −0.8t T )ξ (13) The fitting curve in the Supplementary verifies our hypothesis that there exists a trade-off when selecting the target step. We conducted experiments on multiple models and tasks. The results in Figure 4 aligns well with Eq. 13 that the highest vulnerability is typically observed at the latter stage of the reverse process, e.g. at step t ≈0.8T for LDM. Experiments Dataset and Experimental Settings In this section, we evaluate our methods on multiple tasks. For the inpainting task, we utiliz the Places dataset (Zhou et al. 2017). For the super-resolution task, we employ the ImageNet dataset (Deng et al. 2009). Following existing research (Yu et al. 2023a; Shang et al. 2023) in adversarial examples, we use l∞norm as the constraint for generating the adversarial examples. We set per-step perturbation budget as 1/255, the total budget as 8/255, and attacking iterations as The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6795 Random Mask Thick Random Mask Medium Random Mask Thin Method FID↑ Delta E↑ PSNR↓ SSIM↓ FID↑ Delta E↑ PSNR↓ SSIM↓ FID↑ Delta E↑ PSNR↓ SSIM↓ No Attack 10.6 6.01 22.0 0.84 15.2 5.15 23.8 0.87 13.1 3.40 19.8 0.63 Embedding Attack 17.1 6.65 21.3 0.76 20.8 5.91 23.1 0.78 19.0 4.30 19.3 0.55 AdvDM 18.4 7.46 14.3 0.42 20.2 6.19 22.4 0.79 15.6 4.26 19.1 0.56 AdvDM-MVS 22.9 8.31 13.1 0.39 22.8 7.39 21.0 0.77 19.8 4.44 18.8 0.55 MFA 33.4 10.07 12.9 0.41 32.2 8.37 17.1 0.61 26.3 7.51 16.8 0.49 MFA-MVS 52.5 12.77 11.8 0.40 46.8 10.79 14.8 0.59 29.1 8.70 15.9 0.47 MFA-VT 44.9 11.89 12.1 0.41 41.8 9.96 15.5 0.60 28.2 8.64 16.4 0.48 Table 1: The attacking performance against conditional diffusion models on the inpainting task. The best attacking performances are marked as bold, while the second-best results are marked underline. Figure 3: Inpainting results obtained by using adversarial examples generated by different attacks as condition inputs. 70. We conduct our experiments on inpainting and superresolution tasks. We use 8 NVIDIA RTX 3090 GPUs for all experiments. More visualizations of experimental results are shown in the Supplementary. Superparametric experiments on the size of the total budget are also in the Supplementary. Evaluation on Inpainting Task We first evaluate the performance of the MFA algorithm on the inpainting task. Following the setup of the Latent Diffusion Model (LDM), the condition in the inpainting task consists of a mask m and an image x. To evaluate MFA quantitatively, we random select 2,000 images from Places365 (Zhou et al. 2017). The dataset preprocessing is the same as LaMa (Suvorov et al. 2022) and the detail of mask generation are shown in the Supplementary. By generating masks of different sizes on the dataset, we categorized them into three types, thick, medium, and thin. The implementation details for MFA on LDM are shown in the Supplementary. We evaluate the inpainting quality by four metrics. Fr´echet Inception Distance (FID) is a metric for quantifying the realism and diversity of images. Delta E (Sharma and Bala 2017) is a calculation of the change in color as measured in the Hunter Lab color space. Peak Signal-toNoise Ratio (PSNR) is a widely used metric that measures the quality of a processed image by comparing it to the original. The structural similarity index measure (SSIM) is used as a metric to measure the similarity between given images. 
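Equation 13, together with Eqs. 11 and 12, is straightforward to evaluate numerically. The sketch below (0-indexed steps, schedule tensors as defined in the earlier snippets, xi kept as a free scale) also derives the MFA-VT sampling distribution of Eq. 6; the paper reports that the resulting curve peaks around t ≈ 0.8T for LDM.

import torch

def step_vulnerability(xi=1.0):
    # v(t) of Eq. 13: sum over affected steps i of (1/alpha_bar_i)(beta_i^2/beta_bar_i)
    # * transferability tau_t^i (Eq. 12) * stability S(t) (Eq. 11), scaled by xi
    t = torch.arange(T, dtype=torch.float32).view(-1, 1)              # target step (rows)
    i = torch.arange(T, dtype=torch.float32).view(1, -1)              # affected step (cols)
    tau = 1.0 / (1.0 + torch.exp(-2.08 + 9.2 * (t - i).abs() / T))    # Eq. 12
    stability = 1.0 - 0.8 * t.squeeze(1) / T                          # Eq. 11
    amplification = beta ** 2 / (alpha_bar * beta_bar)                # (1/alpha_bar_i) * beta_i^2 / beta_bar_i
    v = (amplification.view(1, -1) * tau).sum(dim=1) * stability * xi
    return v, v / v.sum()                                             # v(t), and P(t) of Eq. 6

v, p = step_vulnerability()
t_mvs = int(v.argmax())    # fixed target step used by the MVS variants

Plugging sample_t=lambda: t_mvs into the earlier attack loop gives MFA-MVS, while drawing t from p (e.g. via torch.multinomial) gives MFA-VT.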
Table 1 presents the quantitative results of our method on the inpainting task. From Table 1, it can be observed that our method can effectively attack conditional diffusion models. Also, attacking the most vulnerable step can not only significantly improve the performance of MFA, but also AdvDM. This indicates that as a guide, the step vulnerability can effectively enhance the performance of different attack methods. This verifies that step vulnerability is an intrinsic property of the reverse process. From Figure 3, it can be observed that the images generated from clean samples are very similar to the surrounding scenery with no obvious differences. AdvDM can produce textures, with slight differences from the surroundings but no significant color variations. MFA can effectively influence the generation resulting in producing purple color for the inpainted regions, which is similar to directly modifying the mean of xT . We will show the comparison between MFA and directly modifying xT in the Supplementary material. Moreover, MFA-MVS further enhances the attack effect and effectively induces mean shift, generating anomalous color blocks different from other areas. MFA-VT also enhances the attack effect. However, when the number of attack iterations is small, the effect of MFA-VT is not as good as MFA-MVS. In the Section 4.4, it can be observed that when the number of iterations is sufficient, MFA-VT performs better than MFA-MVS, which also demonstrates the rationality and effectiveness of step vulnerabilities. We also evaluated our generated adversarial examples in the Supplementary using a basic JPEG compression defense method. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6796 Method PSNR↓ SSIM↓ No Attack 20.4 0.56 Embedding Attack 20.2 0.54 AdvDM 19.5 0.49 AdvDM-MVS 18.9 0.46 MFA 17.4 0.44 MFA-MVS 16.3 0.37 MFA-VT 16.9 0.41 Table 2: The attacking performance against conditional diffusion models on the super-resolution task. The best attacking performances of methods are marked as bold, while the second-best results are marked underline. Evaluation on Super-resolution Task We also validate our performance on the super-resolution task. Following the settings of LDM (Rombach et al. 2022), we evaluate the performance on ImageNet (Deng et al. 2009). The dataset preprocessing is same with SR3 (Saharia et al. 2022b). We randomly select 1,000 images to attack. The condition of super-resolution task is a low resolution image. We evaluate the super-resolution results using PSNR and SSIM, which are mentioned in section 4.2. Table 2 presents the quantitative results of our method on the super-resolution task. The best result is highlighted in bold, and the second-best result is underlined. From the evaluation metrics, we can observe that our attack effectively reduces the generation quality of images, successfully attacking DMs in the super-resolution task, and also demonstrating the benefits of using the step vulnerability as guidance. The visualization results in the Supplementary show that our method has a more effective influence on the reverse process, resulting in more obvious noise and unreasonable textures in the super-resolution results. Validation of Step Vulnerability To further verify the effectiveness of step vulnerability, we conduct attacks on multiple models. We select three models, namely LDM-Inpainting, LDM-SR, and SR3, which all use images as conditions. 
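For reference, the PSNR and SSIM entries of Tables 1 and 2, with PSNR also being the quantity that is negatively normalized for the step-vulnerability curves below, can be computed with standard tooling. The sketch uses scikit-image; ref_images and attacked_images are assumed lists of matching uint8 RGB arrays, and FID and Delta E (which respectively need a pretrained Inception network and a Lab-space color difference) are left to dedicated implementations.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_metrics(reference, generated):
    # lower PSNR/SSIM against the clean-condition output indicates a stronger attack
    psnr = peak_signal_noise_ratio(reference, generated, data_range=255)
    ssim = structural_similarity(reference, generated, channel_axis=-1, data_range=255)
    return psnr, ssim

psnrs, ssims = zip(*(quality_metrics(r, g) for r, g in zip(ref_images, attacked_images)))
print(f"PSNR: {np.mean(psnrs):.1f}  SSIM: {np.mean(ssims):.2f}")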
To validate the effectiveness and universality of step vulnerability, we chose models that have different total number of reverse steps T. For LDMInpainting and LDM-SR we set T to 1000, while for SR3 we set T to 2000. We launch attacks on different steps of each model, generating adversarial samples for each step. For each model, we use 1000 images to calculate the PSNR metric which is then normalized negatively considering that the worse PSNR represents the better attack performance. Figure 4 shows the normalized data collected from three different diffusion models represented in different colors. It can be observed that all models achieve maximum attack effectiveness at around t = 0.8T. This indicates that there is a strong consistency in terms of step vulnerability across different diffusion models. Moreover, the theoretic curve of the step vulnerability in Eq. 13 is shown as the solid line in Figure 4, indicating that our theoretical analysis is highly consistent with the actual situation. Figure 4: The statistical results on effects of attacking different steps on different models and tasks. Attacking Iterations The number of attacking iterations determines whether the generated adversarial examples can fit well with the step vulnerability curve we modeled, thus having a significant impact on the adversarial examples generated by MFA-VT. To investigate the impact of this hyperparameter, we conducte experiments on the Places365 dataset, following the experimental settings decribed in Section 4.1. We calculated the metrics for both MFA-MVS and MFA-VT under different attack step values ranging from 10 to 1000. The results, as shown in Table 3, indicate that as the number of attack iterations increases, both MFA-MVS and MFAVT show improved performance. However, with further increases in attacking iterations, MFA-VT surpasses MFAMVS in terms of effectiveness. This suggests that choosing MFA-MVS has an advantage when the step value is small. Moreover, the experimental results further validate the effectiveness of the step vulnerability curve we modeled. MFA-VT MFA-MVS Attacking iterations FID↑Delta E↑FID↑Delta E↑ 10 17.9 7.58 19.1 8.06 70 44.9 11.89 52.5 12.77 1000 63.4 17.43 59.2 16.82 Table 3: Ablation on the number of attacking iterations Conclusions In this paper, we propose the Mean Fluctuation Attack (MFA) against conditional diffusion models based on the finding that the reverse process of the diffusion model is extremely sensitive to the shift of the mean noise value. We present that the attacking performance can be further enhanced under the guidance of the step vulnerability. We provide a mathematical explanation of the adversarial vulnerability of the reverse step against MFA. The experiments demonstrate that MFA can effectively influence the reverse process and choosing vulnerable steps to attack can further improve the attacking performance. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6797 Acknowledgements This work was supported by the National Natural Science Foundation of China (62376024) and by the National Key R&D Program of China (2022ZD0117902). References Bao, F.; Li, C.; Zhu, J.; and Zhang, B. 2022. Analytic-dpm: an analytic estimate of the optimal reverse variance in diffusion probabilistic models. arXiv preprint arXiv:2201.06503. Batzolis, G.; Stanczuk, J.; Sch¨onlieb, C.-B.; and Etmann, C. 2021. Conditional image generation with score-based diffusion models. arXiv preprint arXiv:2111.13606. Carlini, N.; and Wagner, D. 2017. 
Towards evaluating the robustness of neural networks. In 2017 ieee symposium on security and privacy (sp), 39–57. Ieee. Croitoru, F.-A.; Hondru, V.; Ionescu, R. T.; and Shah, M. 2023. Diffusion models in vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence. Daras, G.; and Dimakis, A. G. 2022. Discovering the hidden vocabulary of dalle-2. arXiv preprint arXiv:2206.00169. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and FeiFei, L. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, 248–255. Ieee. Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; and Li, J. 2018. Boosting adversarial attacks with momentum. In Proceedings of the IEEE conference on computer vision and pattern recognition, 9185–9193. Esser, P.; Rombach, R.; Blattmann, A.; and Ommer, B. 2021. Imagebart: Bidirectional context with multinomial diffusion for autoregressive image synthesis. Advances in neural information processing systems, 34: 3518–3532. Gal, R.; Alaluf, Y.; Atzmon, Y.; Patashnik, O.; Bermano, A. H.; Chechik, G.; and Cohen-Or, D. 2022. An image is worth one word: Personalizing text-to-image generation using textual inversion. arXiv preprint arXiv:2208.01618. Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33: 6840–6851. Kawar, B.; Zada, S.; Lang, O.; Tov, O.; Chang, H.; Dekel, T.; Mosseri, I.; and Irani, M. 2023. Imagic: Text-based real image editing with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6007–6017. Lee, M.; and Kim, D. 2023. Robust evaluation of diffusion-based adversarial purification. arXiv preprint arXiv:2303.09051. Liang, C.; Wu, X.; Hua, Y.; Zhang, J.; Xue, Y.; Song, T.; Zhengui, X.; Ma, R.; and Guan, H. 2023. Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples. Liu, Q.; Kortylewski, A.; Bai, Y.; Bai, S.; and Yuille, A. 2023. Intriguing Properties of Text-guided Diffusion Models. arXiv preprint arXiv:2306.00974. Lu, C.; Zhou, Y.; Bao, F.; Chen, J.; Li, C.; and Zhu, J. 2022. Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. Advances in Neural Information Processing Systems, 35: 5775–5787. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. Maus, N.; Chao, P.; Wong, E.; and Gardner, J. 2023. Adversarial prompting for black box foundation models. arXiv preprint arXiv:2302.04237. Milli`ere, R. 2022. Adversarial attacks on image generation with made-up words. arXiv preprint arXiv:2208.04135. Nichol, A.; Dhariwal, P.; Ramesh, A.; Shyam, P.; Mishkin, P.; McGrew, B.; Sutskever, I.; and Chen, M. 2021. Glide: Towards photorealistic image generation and editing with textguided diffusion models. arXiv preprint arXiv:2112.10741. Nie, W.; Guo, B.; Huang, Y.; Xiao, C.; Vahdat, A.; and Anandkumar, A. 2022. Diffusion models for adversarial purification. arXiv preprint arXiv:2205.07460. Peng, Y.; Zhao, C.; Xie, H.; Fukusato, T.; and Miyata, K. 2023. DiffFaceSketch: High-Fidelity Face Image Synthesis with Sketch-Guided Latent Diffusion Model. arXiv preprint arXiv:2302.06908. Poole, B.; Jain, A.; Barron, J. 
T.; and Mildenhall, B. 2022. Dreamfusion: Text-to-3d using 2d diffusion. arXiv preprint arXiv:2209.14988. Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 10684– 10695. Ruiz, N.; Li, Y.; Jampani, V.; Pritch, Y.; Rubinstein, M.; and Aberman, K. 2023. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 22500–22510. Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E. L.; Ghasemipour, K.; Gontijo Lopes, R.; Karagol Ayan, B.; Salimans, T.; et al. 2022a. Photorealistic text-toimage diffusion models with deep language understanding. Advances in Neural Information Processing Systems, 35: 36479–36494. Saharia, C.; Ho, J.; Chan, W.; Salimans, T.; Fleet, D. J.; and Norouzi, M. 2022b. Image super-resolution via iterative refinement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4): 4713–4726. Salman, H.; Khaddaj, A.; Leclerc, G.; Ilyas, A.; and Madry, A. 2023. Raising the cost of malicious ai-powered image editing. arXiv preprint arXiv:2302.06588. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6798 Shang, Y.; Gao, C.; Chen, J.; Jin, D.; Ma, H.; and Li, Y. 2023. Enhancing Adversarial Robustness of Multi-modal Recommendation via Modality Balancing. In Proceedings of the 31st ACM International Conference on Multimedia, 6274–6282. Sharma, G.; and Bala, R. 2017. Digital color imaging handbook. CRC press. Song, J.; Meng, C.; and Ermon, S. 2020. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502. Song, Y.; and Ermon, S. 2019. Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems, 32. Song, Y.; Sohl-Dickstein, J.; Kingma, D. P.; Kumar, A.; Ermon, S.; and Poole, B. 2020. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456. Sun, J.; Nie, W.; Yu, Z.; Mao, Z. M.; and Xiao, C. 2022. Pointdp: Diffusion-driven purification against adversarial attacks on 3d point cloud recognition. arXiv preprint arXiv:2208.09801. Suvorov, R.; Logacheva, E.; Mashikhin, A.; Remizova, A.; Ashukha, A.; Silvestrov, A.; Kong, N.; Goka, H.; Park, K.; and Lempitsky, V. 2022. Resolution-robust large mask inpainting with fourier convolutions. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, 2149–2159. Voynov, A.; Aberman, K.; and Cohen-Or, D. 2022. Sketchguided text-to-image diffusion models. arXiv preprint arXiv:2211.13752. Xue, H.; Araujo, A.; Hu, B.; and Chen, Y. 2023. DiffusionBased Adversarial Sample Generation for Improved Stealthiness and Controllability. arXiv preprint arXiv:2305.16494. Yang, R.; Srivastava, P.; and Mandt, S. 2022. Diffusion probabilistic modeling for video generation. arXiv preprint arXiv:2203.09481. Yu, C.; Chen, J.; Wang, Y.; Xue, Y.; and Ma, H. 2023a. Improving Adversarial Robustness Against Universal Patch Attacks Through Feature Norm Suppressing. IEEE Transactions on Neural Networks and Learning Systems. Yu, H.; Chen, J.; Ma, H.; Yu, C.; and Ding, X. 2023b. 
Defending Against Universal Patch Attacks by Restricting Token Attention in Vision Transformers. In ICASSP 20232023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. IEEE. Zhang, J.; Wu, W.; Huang, J.-t.; Huang, Y.; Wang, W.; Su, Y.; and Lyu, M. R. 2022. Improving adversarial transferability via neuron attribution-based attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14993–15002. Zhang, J.; Xu, Z.; Cui, S.; Meng, C.; Wu, W.; and Lyu, M. R. 2023. On the Robustness of Latent Diffusion Models. arXiv preprint arXiv:2306.08257. Zhou, B.; Lapedriza, A.; Khosla, A.; Oliva, A.; and Torralba, A. 2017. Places: A 10 million image database for scene recognition. IEEE transactions on pattern analysis and machine intelligence, 40(6): 1452–1464. Zhuang, H.; Zhang, Y.; and Liu, S. 2023. A pilot study of query-free adversarial attack against stable diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2384–2391. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6799 | 2024 | 755 |
18,579 | PaintHuman: Towards High-Fidelity Text-to-3D Human Texturing via Denoised Score Distillation Jianhui Yu1, Hao Zhu2, Liming Jiang3, Chen Change Loy3, Weidong Cai1, Wayne Wu2 1University of Sydney 2Shanghai AI Laboratory 3S-Lab, Nanyang Technological University [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Recent advances in zero-shot text-to-3D human generation, which employ the human model prior (e.g., SMPL) or Score Distillation Sampling (SDS) with pre-trained text-to-image diffusion models, have been groundbreaking. However, SDS may provide inaccurate gradient directions under the weak diffusion guidance, as it tends to produce over-smoothed results and generate body textures that are inconsistent with the detailed mesh geometry. Therefore, directly leveraging existing strategies for high-fidelity text-to-3D human texturing is challenging. In this work, we propose a model called PaintHuman to addresses the challenges from two perspectives. We first propose a novel score function, Denoised Score Distillation (DSD), which directly modifies the SDS by introducing negative gradient components to iteratively correct the gradient direction and generate high-quality textures. In addition, we use the depth map as a geometric guide to ensure that the texture is semantically aligned to human mesh surfaces. To guarantee the quality of rendered results, we employ geometry-aware networks to predict surface materials and render realistic human textures. Extensive experiments, benchmarked against state-of-the-art (SoTA) methods, validate the efficacy of our approach. Project page: https://painthuman.github.io/ Introduction Significant progress has been made in text-to-3D content generation. Some methods are proposed for general objects (Poole et al. 2023; Lin et al. 2023), and some are specifically for 3D human avatars (Cao et al. 2023; Hong et al. 2022; Jiang et al. 2023). 3D human avatars are increasingly important in various applications, including games, films, and metaverse. In this work, we focus on texturing a predefined human mesh with text prompts. The success of recent methods rely on CLIP model (Hong et al. 2022) or text-to-image generation, which leverages the diffusion model (Ho, Jain, and Abbeel 2020; Rombach et al. 2022), and Score Distillation Sampling (SDS) (Poole et al. 2023) combined with differentiable 3D representations (Mildenhall et al. 2020; Barron et al. 2021). However, directly leveraging existing strategies for detailed human avatar texturing in a zero-shot manner is challenging Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. A man in a hoodie A kid wearing long sleeves A man in a suit with a tie and belt A woman wearing T-shirt A young girl wearing boots An old man in a polo shirt Text A man in a hoodie A kid wearing long sleeves A man in a suit with a tie and belt A woman wearing T-shirt Text A man in a hoodie Text Text A kid wearing long sleeves A man in a suit with a tie and belt A woman wearing a Tshirt A young girl wearing boots An old man in a polo shirt Figure 1: Generated results of PaintHuman. Given textureless human meshes and textual descriptions as input, our model can generate high-quality and detailed textures that aligned to input geometry and texts. for two reasons. 
First, we find that SDS is a general-purpose optimization, which guides the loss gradient in a direction due to its weak supervision and unable to well handle unclear signal from the diffusion model. This issue results in generated human textures of low quality, including oversmoothed body parts and blurry garment details. Second, textures guided by text-to-image models are usually not semantically unaligned to either input texts or human mesh surfaces, resulting in missing textures or unaligned texture mapping for the geometry. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6800 Recent work (Hong et al. 2022; Jiang et al. 2023) for human avatar texturing entangles shape and texture generation, which leverage human-specific priors (Loper et al. 2015) for human body texturing. To ensure the generated textures aligned to the given geometry, TEXTure (Richardson et al. 2023) and Text2Tex (Chen et al. 2023a) utilize a depthaware diffusion model (Rombach et al. 2022) to directly inpaint and update textures from different viewpoints, which could cause inconsistency when the input mesh has complex geometry. Other methods such as Latent-Paint (Metzer et al. 2023) or Fantasia3D (Chen et al. 2023b) apply SDS to update the loss gradient for consistent texture generation. However, SDS fails to semantically align textures to input texts, i.e., the synthesized textures are non-detailed and oversmoothed. Therefore, we propose PaintHuman to address a primary issue associated with SDS. Our main idea is to denoise the unclear gradient direction provided by the SDS loss. We handle this from two aspects. Firstly, we propose Denoising Score Distillation (DSD), which introduces a negative gradient component to directly modify the SDS, which could iteratively correct the gradient direction for detailed and high-quality texture generation. Then, to enable geometryaware texture generation, we utilize geometric guidance which provides rich details of the mesh surface to guide the DSD precisely, and use spatially-aware texture shading models (Karis and Games 2013) to guarantee the quality of rendered visual results. Specifically, DSD utilizes an additional negative pair of image and text. The key idea is that by using a negative image, i.e., an image with noise rendered from the last training iteration, we could reinforce the learning of the complex surface geometry to produce clear boundaries between different garments. In addition, with the help of negative text prompts, the synthesized textures could be more semantically aligned to the input text. Overall, the negative pair contributes a negative part to SDS, which controls the gradient direction by a weighted subtraction of the two input pairs, producing an effective gradient to address oversmoothed texture generation. To further ensure textures semantically aligned to the complex avatar surface, we first use the depth map as guidance during the diffusion process for texturing, which provides fine-grained surface details. In addition, we follow (Munkberg et al. 2022) to apply the Spatially-Varying Bidirectional Reflectance Distribution Function (SV-BRDF) (Karis and Games 2013) and coordinate-based networks (M¨uller et al. 2022) for geometry-aware material prediction. With the help of differentiable rendering (Hasselgren et al. 2021), we could update the rendered human avatar and synthesized textures in an end-to-end fashion. 
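The weighted subtraction described above can be sketched as follows; it anticipates the DSD gradient that is formalized in the Method section. The frozen U-Net handle, the latent codes of the current and previous renders, the alpha schedule, the weighting function w and the choice lambda = 0.5 are placeholders, and classifier-free guidance is omitted for brevity.

import torch

def dsd_residual(unet, z, z_neg, y_emb, y_neg_emb, alphas, w, lam=0.5):
    # DSD update direction: the SDS residual of the (current latent, prompt) pair minus a
    # weighted residual of the negative pair (noisy render from the previous iteration,
    # negative prompt), sharing the same timestep t and noise eps
    t   = torch.randint(0, len(alphas), (1,)).item()
    eps = torch.randn_like(z)
    a, s = alphas[t].sqrt(), (1 - alphas[t]).sqrt()
    z_t, z_neg_t = a * z + s * eps, a * z_neg + s * eps
    with torch.no_grad():                         # the diffusion model acts as a frozen critic
        eps_pos = unet(z_t, t, y_emb)
        eps_neg = unet(z_neg_t, t, y_neg_emb)
    return w(t) * (eps_pos - lam * eps_neg - (1 - lam) * eps)

The returned residual plays the same role as the SDS residual: it is backpropagated through the differentiable renderer to the texture parameters via the Jacobian of z_t, with the U-Net Jacobian omitted as usual.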
The contributions of our work are summarized as follows: • We introduce Denoising Score Distillation (DSD), a diffusion-based denoising score using negative imagetext pairs for high-fidelity texture generation aligned to textual descriptions. • We employ semantically aligned 2D depth signals and spatially-aware rendering functions for geometry-aware texture generation and realistic avatar rendering. • Through comprehensive experiments, we prove the efficacy of our method over existing texture generation techniques. Related Work Diffusion Models. With the development of denoising score-matching generative models (Sohl-Dickstein et al. 2015), diffusion models present great success in a variety of domains such as image editing, text-to-image synthesis, text-to-video synthesis, and text-to-3D synthesis. In the field of text-to-image synthesis, diffusion models have demonstrated impressive performance, especially the Stable Diffusion model (Rombach et al. 2022), which is trained on a large number of paired text-image data samples with CLIP (Radford et al. 2021) to encode text prompts and VQVAE (Van Den Oord, Vinyals et al. 2017) to encode images into latent space. In our work, we use a pre-trained Stable Diffusion model to incorporate intrinsic image prior to guide the training of our texture generation network. 3D Shape and Texture Generation. There has been a recent surge of interest in the field of generating 3D shapes and textures. One line of methods, such as Text2Mesh (Michel et al. 2022), Tango (Lei et al. 2022), and CLIP-Mesh (Mohammad Khalid et al. 2022), utilize CLIP-space similarities as an optimization objective to create novel 3D shapes and textures. Gao et al. (2022) trains a model to generate shape and texture via a DMTet (Shen et al. 2021) mesh extractor and 2D adversarial losses. A recent approach called DreamFusion (Poole et al. 2023) introduces the use of pretrained diffusion models to generate 3D NeRF (Mildenhall et al. 2020) models based on a given text prompt. The key component in DreamFusion is the score distillation sampling (SDS), which uses a pre-trained 2D diffusion model as a critique to minimize the distribution of the predicted and ground-truth Gaussian noise, thus the 3D scene can be optimized for desired shape and texture generation. In the context of texture generation, Latent-NeRF (Metzer et al. 2023) demonstrated how to employ SDS loss in the latent space of the diffusion model to generate textures for 3D meshes and then decoded to RGB for the final colorization output. Besides, both TEXTure (Richardson et al. 2023) and Text2Tex (Chen et al. 2023a) proposed a non-optimization method with progressive updates from multiple viewpoints to in-paint the texture over the 3D mesh models. Human-specific shape and texture generation methods also follow the same ideas that use CLIP similarity between the generated human image and the textural descriptions (Hong et al. 2022) or directly leverage SDS for iterative shape and texture generation (Kolotouros et al. 2023; Zeng et al. 2023; Jiang et al. 2023). Besides, they also employ human body model prior, i.e., SMPL (Loper et al. 2015), for effective human avatar generation. However, most generated human textures are over-smooth and of low quality, which we argue is caused by the unstable guidance provided by SDS. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6801 𝐱! 
depth Diffuse Metalness Roughness 𝝐 ~ 𝒩(0, 𝐈) 𝑡 ~ 𝒰(𝑡!"#, 𝑡$%%) 𝐳" # 𝑧̂" #$% “A man in a suit with a tie and belt” “A man in a suit with a tie and belt, {NegPrompt}” 𝝐 𝝐& ∇&ℒ'(' SV-BRDF Network Denoising render Denoising PBR Rendering Texture Generation 𝐸 𝒙! 𝒙!"# Figure 2: Overview of our proposed model. Our goal is to texture the human mesh given an input text and a mesh model. To achieve this, we propose Denoised Score Distillation with a negative pair of image and text prompts to guide the gradient direction for detail texture generation that is semantically aligned to the input text. We introduce depth signals to the diffusion process for complex garment texturing, and a learnable network to estimate SV-BRDFs for albedo and material parameter learning. Finally, the camera position is adjusted for refined details of the face region. Method In this section, we start with an overview of SDS. We then introduce Denoised Score Distillation (DSD), which uses an extra negative pair of image-text to guide gradient direction, thereby generating detailed textures that align with the input text. Finally, we employ depth signals in the diffusion process for complex surface texturing and employ a geometryaware rendering function for photorealistic human texture generation. The overall pipeline is shown in Figure 2. SDS Overview Given an input image x with a latent code z, a conditioning text embedding y, a denoising U-Net ϵϕ with model parameters ϕ, a uniformly sampled timestep t ∼U(0, I), and a Gaussian noise ϵ ∼N(0, I), the diffusion loss is: LDiff(z, y, t) = w(t)∥ϵϕ(zt, y, t) −ϵ∥2 2, (1) where w(t) is a weighting function depending on t, and zt refers to the noisy version of z via an iterative forward diffusion process given by zt = √αtz+√1 −αtϵ, with αt being the noise scheduler. For high-quality generation, classifierfree guidance (CFG) (Ho and Salimans 2022) is used, which jointly learns text-conditioned and unconditioned models via a scale parameter ω. During inference, the two models are used to denoise the image as follows: ˆϵϕ (zt, y, t) = (1 + ω)ϵϕ (zt, y, t) −ωϵϕ (zt, t) . (2) Given a differentiable rendering function gθ, the gradient of diffusion loss with respect to model parameters θ is: ∇θLSDS = w(t) (ˆϵϕ (zt, y, t) −ϵ) ∂zt ∂θ , (3) where we have omitted the U-Net Jacobian term as shown in (Poole et al. 2023). The purpose of SDS is to generate samples via optimization from a text-guided diffusion model. However, we argue that SDS only presents poor guidance on input text prompt and the generated 2D image, hence, in the following, we propose a new loss design to increase the generation quality. Denoised Score Distillation Given a textureless human avatar, our task is to generate surface textures conditioned on input texts. Due to SDS and neural representation of 3D avatar (Mildenhall et al. 2020), zero-shot human texture generation is made possible. We observe that using SDS only for human texturing can cause over-smoothed body parts and cannot be fully semantically aligned to the input text. We address the issue brought by SDS by proposing a new method, Denoised Score Distillation (DSD), for detailed human avatar texturing of high quality. Specifically, when presented with input text embedding y and the corresponding image x with the latent code z, our objective is to refine the gradient ∇θLSDS in Eq. 3 to a direction, so that the rendered avatar contains a detailed texture mapping that is semantically aligned to the input text. 
Mathematically, our DSD score function is formulated as: LDSD = w(t) ∥ϵϕ(zi t, y, t)−ϵ∥2 2 −λ∥ϵϕ(ˆzi−1 t , ˆy, t)−ϵ∥2 2 , (4) where we introduce a negative pair of image with latent code ˆz and text with embedding ˆy. λ is a weighting parameter. Both zi t and ˆzi−1 t have a superscript i indicating the training iteration and share the same timestep t and noise ϵ, allowThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6802 ing us to use the same U-Net for noise prediction. Then the gradient of LDSD over the model parameter θ is: ∇θLDSD = w(t) ˆϵϕ (zt, y, t) −ϵ −λ(ˆϵϕ (ˆzt, ˆy, t) −ϵ) ∂zt ∂θ = w(t) ˆϵϕ (zt, y, t) −λˆϵϕ (ˆzt, ˆy, t) −(1 −λ)ϵ ∂zt ∂θ , (5) where we have omitted the U-Net Jacobian matrix following Poole et al. (2023). As shown in Figure 2, we employ the negative image ˆxi−1 derived from the previous training iteration, where we consider ˆxi−1 a negative version of xi as it contains more noise signals. The inclusion of the negative image within the computation process of ∇θLDSD yields two significant advantages. Firstly, ˆzi−1 t can reinforce the memory of the rendered human image during training, so that the final output can still be semantically aligned to the input text. Secondly, the incorporation of the negative image improves the model’s capacity to learn complex geometries, thus facilitating the generation of clear boundaries between varying garment types. For negative prompts, we use common prompts such as disfigured, ugly, etc. However, we would adapt existing prompts based on a test run, infusing refined negative prompts based on the observed output. For instance, if artifacts emerge within rendered hand regions, we append “bad hands” to the prompt set. In contrast to the indirect application of negative prompts in Stable Diffusion, we inject the negative prompt embedding directly into ∇θLDSD. This strategy effectively minimizes the artifact in rendered human images, thereby enhancing the quality of the generated output. Through the integration of both negative image and prompts, we successfully manipulate the existing SDS gradient in Eq. 3 to guide the model convergence towards a mode that yields highly detailed and qualitative textures, which also remain semantically aligned to the input text. Further analyses and insights into this approach are provided in our ablation study. Geometry-aware Texture Generation Geometry Guidance in DSD. To accurately texture complex garment details, we compute and leverage the corresponding depth map as a fine-grained guidance. Therefore, we employ a pre-trained depth-to-image diffusion model (Rombach et al. 2022) rather than the general version, so that the generated avatar could follow the same depth values of the given surface mesh. As shown in Figure 5 (b), although the rendered human image presents textures that are not semantically aligned to the input text as the belt region is not clearly textured, utilizing the depth-aware diffusion model ensures the generated texture reserve more geometric details and semantically aligned to the given geometry. Shading Model for Rendering. Following the idea of physically based rendering (PBR), which models and renders real-world light conditions and material properties, we estimate surface materials by leveraging SV-BRDFs for human image rendering: R(xp, l) = Z H Li(l)(fd + fs) (l · n) dl, (6) where Li(l) is the incident radiance, and H = {l : l · n ≥ 0} denotes a hemisphere with the incident light and surface normal n. 
fs and fd are diffuse and specular SV-BRDFs, respectively. In particular, we follow (Karis and Games 2013) to employ a simple diffuse model at a low cost. The diffuse SVBRDF is mathematically expressed as follows: fd(xp) = kd π , (7) where kd is the diffuse term which can be learned based on 3D vertex positions. For specular SV-BRDF estimation, we use a microfacet specular shading model as in (Karis and Games 2013) to characterize the physical properties of the mesh surface: fs(l, v) = DFG 4(n · l)(n · v), (8) where v is the view direction. D, F and G represent the normal distribution function, the Fresnel term and geometric attenuation, respectively. We also choose the Disney BRDF Basecolor-Metallic parametrization (Burley and Studios 2012) for a physically accurate rendering. Specifically, the specular reflectance term ks = m · kd + (1 −m) · 0.04, where the diffuse term kd, the roughness term r and the metallic term m can be estimated via our proposed SV-BRDF network give surface points xp: σ(γ(xp)) = [kd, r, m]. γ is a coordinate-based network (M¨uller et al. 2022) and σ is a parameterized SV-BRDF estimation model. We also utilize a differentiable split-sum approximation for Eq. 6 to maintain the differentiability in the rendering process. Moreover, we follow (Zhang et al. 2021) to regularize the material learning, which results in a smooth albedo map. Semantic Zoom. Human perception is particularly sensitive to distortions and artifacts in facial features. However, texturing human avatars in a full-body context often results in degraded facial details. To address this issue, we enhance the human prior during the optimization process by semantically augmenting the prompt (Hong et al. 2022). For example, we prepend “the face of” to the beginning of the prompt to pay more attention to this region. Simultaneously, every four iterations, we shift the look-at point of the camera to the face center and semantically zoom into the facial region, which refines facial features and improves the overall perception of the rendered avatar. Experiments Baseline Methods. We compare our model to recent SoTA baseline models, including Latent-Paint (Metzer et al. 2023), TEXTure (Richardson et al. 2023), and Fantasia3D (Chen et al. 2023b) with the appearance modeling part only. We modify Fantasia3D to ensure the vertex positions remain fixed. We also compare our model to a recent method for realistic human avatar generation, DreamHuman (Kolotouros et al. 2023), to further validate the effectiveness of The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6803 (a) An Asian man wearing a navy suit (c) A woman wearing a short jean skirt and a cropped top (b) A man wearing a hoodie (d) A young man wearing a turtleneck DreamHuman Ours DreamHuman Ours Figure 3: Qualitative comparisons with DreamHuman (Kolotouros et al. 2023). As DreamHuman is not publicly available, we pick similar mesh models from Renderpeople (Renderpeople 2021) and download the results from the published paper. our design. Although its human mesh model is not publicly available, we use the same text prompts as DreamHuman to evaluate the texture quality with similar mesh models. Qualitative Analysis. As shown in Figure 4, we compare our results against baseline models. Latent-Paint cannot capture the object semantics, which results in failed or blurry textured avatars. TEXTure generates relatively better results than Latent-Paint but suffers from inconsistent textures. 
Fantasia3D performs well given certain input texts as in Figure 4 (a) and (c), but it outputs unrealistic samples with noisy textures in most cases due to the use of SDS. In contrast, our model produces textured avatars with high-quality and detailed textures, which are aligned to input texts and consistent with the geometry. We compare our model with DreamHuman in Figure 3. We observe that using the same text input, our model generates textured avatars with more high-frequency details, such as the cloth wrinkles, which is different from DreamHuman where the textures are oversmoothed. Moreover, in both experiments, our model can consistently generate high-quality human faces. Quantitative Analysis. To investigate the alignment between the rendered human avatars and the input texts, we use the CLIP score (Radford et al. 2021). As shown in Table 1, we compare our method with the baseline models and report the mean CLIP score. Specifically, we generate 6 frontal images from all textured avatars, each separated by a 30-degree interval. We use 20 different meshes with 4 prompts for each mesh, with a total of 80 prompts. We observe that our model outperforms all baseline models, where our result is higher than Latent-Paint by the largest margin of around 19.99%. Method Mean CLIP Score ∆(%) Latent-Paint 24.11 19.99% TEXTure 25.34 14.17% Fantasia3D 27.10 6.75% PaintHuman (Ours) 28.93 DreamHuman 25.79 12.25% PaintHuman (Ours) 28.95 Table 1: Quantitative comparisons between baseline models and ours. ∆denotes the percentage by which our model outperforms the indicated method. Such improvements demonstrate that our proposed DSD is capable of generating more realistic textures on complex human meshes, and is better aligned to the input texts. User Study. We conduct user study to analyze the quality of the generated textures and the fidelity to the input text. Specifically, 4 meshes are selected with 4 text prompts generated for each mesh, resulting in 16 visual results for each baseline method. We ask users to rate the overall quality, including texture quality and alignment between text and rendered results. More details are shown in the supplementary material. Collected results are reported in Table 2 including mean scores and standard deviation values, indicating that our method outperforms the baselines. Ablation Study. To validate the effectiveness of our design, we use two prompts as examples: “a man in a suit with a belt and tie” and “a young man wearing a turtleneck” for the ablation study. Results are shown in Figures 5 and 6. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6804 Latent-Paint TEXTure Fantasia3D Ours (a) A man wearing a shirt (b) A young man wearing a turtleneck (c) A woman in a jogging suit (d) A young woman in a dress (e) A full-body shot of a boy with afro hair Figure 4: Qualitative comparisons on RenderPeople (Renderpeople 2021) for textured human avatars. Compared with LatentPaint (Metzer et al. 2023), TEXTure (Richardson et al. 2023), and Fantasia3D (Chen et al. 2023b), our generations contain the best texture quality with high-frequency details and consistent with input textual descriptions. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6805 (a) SDS (b) SDS + Depth (c) SDS + Depth + NegPrompt (f) Full: DSD + Depth + BRDF (d) DSD + SH Figure 5: Visualization of ablation study. We provide results of textured human avatars based on different settings that are added gradually from vanilla SDS baseline. 
Method Score ∆(%) Latent-Paint 1.21±0.70 148.76% TEXTure 1.28±0.60 135.16% Fantasia3D 1.76±0.70 71.02% DreamHuman 2.83±0.82 6.36% PaintHuman (Ours) 3.01±0.95 Table 2: User study results of baseline models and ours. ∆ denotes the percentage by which our model outperforms the indicated method. Firstly, the efficacy of our DSD is verified through several comparisons. As shown in Figure 5(a), we note that employing SDS for human texturing often results in over-smoothed body parts and fails to fully align with the input text semantically, where the belt region is neglected. The addition of depth map guidance in Figure 5(b) also struggles to address this issue. Furthermore, by adding negative prompts, Figure 5(c) shows that the rendered image contains more highfrequency details but is not aligned with the input text, and some parts are devoid of texturing. We further examine the effectiveness of the BRDF shading model. As shown in Figure 5(d), we render the result with the Spherical Harmonic model (SH) (Boss et al. 2021), resulting in less realistic textures with noticeably noisy color distributions at the borders between different garments. However, using BRDF can give us smooth and clear textures. In contrast, as shown in Figure 5(e), an image rendered using our DSD effectively mitigates the oversmoothing issue and results in a detailed human avatar. Finally, as shown in Figure 6, our application of semantic zoom on the face region enhances the overall texture quality. Notably, the method enables the presence of intricate facial features resulting in a more realistic representation. Figure 6: Importance of semantic zoom. The left image shows the generated avatar with semantic zoom, while the right image employs no semantic zoom. Conclusion In this work, we introduce PaintHuman, a zero-shot text-tohuman texture generation model. We present a novel score function, Denoised Score Distillation (DSD), which refines the gradient direction to generate high-quality, detailed human textures aligned to the input text. We also leverage geometry signals in DSD for accurate texturing of complex garment details. To maintain semantic alignment between the mesh and the synthesized texture, we employ a differentiable network to parameterize SV-BRDFs for surface material prediction, which is complemented by physically based rendering for realistic avatar renderings, with facial details refined through semantic zooming. Our extensive experiments reveal significant improvements in texture generation, validating the effectiveness of our module designs. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6806 References Barron, J. T.; Mildenhall, B.; Tancik, M.; Hedman, P.; Martin-Brualla, R.; and Srinivasan, P. P. 2021. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5855–5864. Boss, M.; Braun, R.; Jampani, V.; Barron, J. T.; Liu, C.; and Lensch, H. 2021. Nerd: Neural reflectance decomposition from image collections. In ICCV. Burley, B.; and Studios, W. D. A. 2012. Physically-based shading at disney. Siggraph. Cao, Y.; Cao, Y.-P.; Han, K.; Shan, Y.; and Wong, K.Y. K. 2023. Dreamavatar: Text-and-shape guided 3d human avatar generation via diffusion models. arXiv preprint arXiv:2304.00916. Chen, D. Z.; Siddiqui, Y.; Lee, H.-Y.; Tulyakov, S.; and Nießner, M. 2023a. Text2Tex: Text-driven texture synthesis via diffusion models. ICCV. Chen, R.; Chen, Y.; Jiao, N.; and Jia, K. 2023b. 
Fantasia3d: Disentangling geometry and appearance for highquality text-to-3d content creation. In ICCV. Gao, J.; Shen, T.; Wang, Z.; Chen, W.; Yin, K.; Li, D.; Litany, O.; Gojcic, Z.; and Fidler, S. 2022. Get3d: A generative model of high quality 3d textured shapes learned from images. Advances In Neural Information Processing Systems, 35: 31841–31854. Hasselgren, J.; Munkberg, J.; Lehtinen, J.; Aittala, M.; and Laine, S. 2021. Appearance-Driven Automatic 3D Model Simplification. In EGSR. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33: 6840–6851. Ho, J.; and Salimans, T. 2022. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598. Hong, F.; Zhang, M.; Pan, L.; Cai, Z.; Yang, L.; and Liu, Z. 2022. AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars. ACM ToG. Jiang, R.; Wang, C.; Zhang, J.; Chai, M.; He, M.; Chen, D.; and Liao, J. 2023. AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control. arXiv preprint arXiv:2303.17606. Karis, B.; and Games, E. 2013. Real shading in unreal engine 4. Proc. Physically Based Shading Theory Practice. Kolotouros, N.; Alldieck, T.; Zanfir, A.; Bazavan, E. G.; Fieraru, M.; and Sminchisescu, C. 2023. DreamHuman: Animatable 3D Avatars from Text. arXiv preprint arXiv:2306.09329. Lei, J.; Zhang, Y.; Jia, K.; et al. 2022. Tango: Text-driven photorealistic and robust 3d stylization via lighting decomposition. Advances in Neural Information Processing Systems, 30923–30936. Lin, C.-H.; Gao, J.; Tang, L.; Takikawa, T.; Zeng, X.; Huang, X.; Kreis, K.; Fidler, S.; Liu, M.-Y.; and Lin, T.-Y. 2023. Magic3d: High-resolution text-to-3d content creation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 300–309. Loper, M.; Mahmood, N.; Romero, J.; Pons-Moll, G.; and Black, M. J. 2015. SMPL: A skinned multi-person linear model. ACM ToG. Metzer, G.; Richardson, E.; Patashnik, O.; Giryes, R.; and Cohen-Or, D. 2023. Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures. In CVPR. Michel, O.; Bar-On, R.; Liu, R.; Benaim, S.; and Hanocka, R. 2022. Text2mesh: Text-driven neural stylization for meshes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13492–13502. Mildenhall, B.; Srinivasan, P. P.; Tancik, M.; Barron, J. T.; Ramamoorthi, R.; and Ng, R. 2020. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In ECCV. Mohammad Khalid, N.; Xie, T.; Belilovsky, E.; and Popa, T. 2022. Clip-mesh: Generating textured meshes from text using pretrained image-text models. In SIGGRAPH Asia 2022 conference papers, 1–8. M¨uller, T.; Evans, A.; Schied, C.; and Keller, A. 2022. Instant neural graphics primitives with a multiresolution hash encoding. ACM ToG. Munkberg, J.; Hasselgren, J.; Shen, T.; Gao, J.; Chen, W.; Evans, A.; M¨uller, T.; and Fidler, S. 2022. Extracting triangular 3d models, materials, and lighting from images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8280–8290. Poole, B.; Jain, A.; Barron, J. T.; and Mildenhall, B. 2023. DreamFusion: Text-to-3D using 2D Diffusion. In ICLR. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In ICML. Renderpeople. 2021. https://renderpeople.com/. Accessed: 2023-07-21. 
Richardson, E.; Metzer, G.; Alaluf, Y.; Giryes, R.; and Cohen-Or, D. 2023. TEXTure: Text-Guided Texturing of 3D Shapes. In SIGGRAPH. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In CVPR. Shen, T.; Gao, J.; Yin, K.; Liu, M.-Y.; and Fidler, S. 2021. Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis. In NeurIPS. Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; and Ganguli, S. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning, 2256–2265. PMLR. Van Den Oord, A.; Vinyals, O.; et al. 2017. Neural discrete representation learning. Advances in neural information processing systems. Zeng, Y.; Lu, Y.; Ji, X.; Yao, Y.; Zhu, H.; and Cao, X. 2023. AvatarBooth: High-Quality and Customizable 3D Human Avatar Generation. arXiv preprint arXiv:2306.09864. Zhang, X.; Srinivasan, P. P.; Deng, B.; Debevec, P.; Freeman, W. T.; and Barron, J. T. 2021. Nerfactor: Neural factorization of shape and reflectance under an unknown illumination. ACM ToG. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6807 | 2024 | 756 |
18,580 | CatFormer: Category-Level 6D Object Pose Estimation with Transformer Sheng Yu1, Di-Hua Zhai1,2∗, Yuanqing Xia1 1School of Automation, Beijing Institute of Technology, Beijing, China 2Yangtze Delta Region Academy of Beijing Institute of Technology, Jiaxing, China [email protected], [email protected], xia [email protected]. Abstract Although there has been significant progress in category-level object pose estimation in recent years, there is still considerable room for improvement. In this paper, we propose a novel transformer-based category-level 6D pose estimation method called CatFormer to enhance the accuracy pose estimation. CatFormer comprises three main parts: a coarse deformation part, a fine deformation part, and a recurrent refinement part. In the coarse and fine deformation sections, we introduce a transformer-based deformation module that performs point cloud deformation and completion in the feature space. Additionally, after each deformation, we incorporate a transformer-based graph module to adjust fused features and establish geometric and topological relationships between points based on these features. Furthermore, we present an end-to-end recurrent refinement module that enables the prior point cloud to deform multiple times according to real scene features. We evaluate CatFormer’s performance by training and testing it on CAMERA25 and REAL275 datasets. Experimental results demonstrate that CatFormer surpasses state-of-the-art methods. Moreover, we extend the usage of CatFormer to instance-level object pose estimation on the LINEMOD dataset, as well as object pose estimation in real-world scenarios. The experimental results validate the effectiveness and generalization capabilities of CatFormer. Our code and the supplemental materials are avaliable at https://github.com/BIT-robot-group/CatFormer. 1 Introduction 6D object pose estimation is a crucial task in computer vision, with applications ranging from robotic grasping (Tremblay et al. 2018; Wang et al. 2019a) to 3D scene understanding (Chen et al. 2019) and augmented reality (Su et al. 2019). While previous methods have primarily focused on instancelevel pose estimation, such as (Xiang et al. 2018; Kehl et al. 2017; Peng et al. 2019; He et al. 2020; Lin et al. 2022b; Rad and Lepetit 2017), these approaches heavily rely on the availability of a 3D model for accurate estimation. Consequently, when faced with an unknown object, it becomes challenging to accurately estimate its 6D pose, which significantly affects pose estimation in real-world scenes. ∗Corresponding author. Copyright c⃝2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Coarse Deformation Graph Module Fine Deformation Graph Module Recurrent Refinement Repeat Point Cloud Prior Point Cloud Coarse Point Cloud Fine Point Cloud Final Point Cloud 6D Pose Coarse Deformation Part Fine Deformation Part Recurrent Refinement Part RGB Patch NOCS Figure 1: CateFormer mainly consists of three parts: coarse deformation, fine deformation, and recurrent refinement. The coarse deformation part is used to coarse deform and complement the point cloud. The fine deformation part is used to fine deform the prior point cloud. The recurrent refinement is used to recurrent refine the point cloud from the fine deformation part. To solve this problem, researchers propose several modelindependent methods for category-level object 6D pose estimation (Wang et al. 2019b; Tian, Ang, and Lee 2020; Chen et al. 2021, 2020a; Di et al. 
2022; Lin et al. 2022a). Estimating the pose at the category level is more challenging than instance-level methods due to the lack of 3D models for objects. Some approaches, like (Wang et al. 2019b; Chen and Dou 2021), address this issue by introducing a “Normalized Object Coordinate Space” (NOCS) where they predict the 3D model of the object. Additionally, most methods rely on point clouds to capture the object’s geometric structure. Recognizing the similarity in geometry among objects in the same category, certain methods, such as (Tian, Ang, and Lee 2020; Chen and Dou 2021; Lin et al. 2022a), utilize an average point cloud as prior knowledge, enabling rough estimation of the geometric information in the scene. However, investigating how to handle intra-class object variety and accurately model objects based on prior point clouds is an important problem. Firstly, in real scenes, the camera’s view can be disrupted, resulting in fragmented point cloud information. The key challenge is how to comThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6808 plete the point cloud with limited data. Additionally, deforming the prior point cloud appropriately to accurately fit the object in the scene is another crucial consideration. Furthermore, a single deformation may not suffice to accurately represent the object structure, necessitating multiple deformations. However, certain methods, like (Tian, Ang, and Lee 2020; Chen and Dou 2021; Chen et al. 2020a), have overlooked these issues. They directly input RGB images with point cloud information and combine features for pose estimation, leading to inaccurate predictions. Although some methods try to use feature fusion for improved pose estimation, a gap remains compared to the actual poses. We introduce CatFormer, a transformer-based method for category-level 6D object pose estimation, aiming to address these challenges. Currently, there is a scarcity of transformer-based methods for category-level pose estimation, such as (Zou et al. 2022; Liu et al. 2023). Our proposed method leverages transformers and achieves SOTA performance on benchmark datasets. As depicted in Figure 1, CatFormer comprises three main components: the coarse deformation part, the fine deformation part, and the recurrent refinement part. In the coarse and fine deformation parts, we introduce a transformer-based deformation module to deform and complement the point cloud, enabling a better fit with the target object in the scene. Additionally, we propose a transformer-based graph module to refine and adjust fused features, learning geometric relationships and topological information within the point cloud for improved understanding of the object’s 3D structure. Furthermore, we propose an end-to-end multi-stage refinement method that utilizes RGB image and scene point cloud fusion features to guide multiple deformations of the fine deformed prior point cloud, resulting in a significantly better fit with the target object in the scene. Our proposed method demonstrates notable improvements over SOTA methods on the dataset, surpassing some existing SOTA methods by more than 10% in certain evaluation metrics. We have also successfully applied CatFormer to instance-level pose estimation and real object pose estimation. In summary, the main contributions of this paper are summarized as follows: • We propose a novel transformer-based deformation module to perform coarse deformation on the scene point cloud and fine deformation on the prior point cloud. 
• A transformer-based graph module is proposed to help networks adjust fused features and construct geometric and topological relationships between points in point cloud features. • We propose an end-to-end recurrent refinement module that guides the prior point cloud to perform multiple iterations of refinement based on the guide of the fusion features of RGB images and point cloud, so that the prior point cloud can largely fit the target object. 2 Related Works 2.1 Instance-Level 6D Object Pose Estimation Recently, there has been extensive research on deep learning-based methods for instance-level 6D object pose estimation. These methods can be categorized into two groups: direct regression and keypoints correspondence. Direct regression methods, such as (Xiang et al. 2018; Kehl et al. 2017; Labb´e et al. 2020; Li et al. 2018; Manhardt et al. 2019, 2018; Wang et al. 2019a), take RGB or RGBD images as input and directly predict the 6D pose based on extracted features. While these methods are generally time-efficient, they may lack accuracy in certain cases. To address this, researchers propose keypoints correspondence methods, including (Li, Wang, and Ji 2019; Zakharov, Shugurov, and Ilic 2019; Park, Patten, and Vincze 2019; Peng et al. 2019). These methods predict predefined object coordinates or 2D keypoints and calculate the 6D pose using the Perspective-n-Point (PnP) algorithm (Lepetit, MorenoNoguer, and Fua 2009) based on the correspondence between 2D and 3D points. Some approaches also utilize keypoint voting for 6D pose prediction, such as (He et al. 2020, 2021). However, the non-differentiable nature of PnP makes it challenging to apply this two-stage pipeline in tasks that require differentiable poses. Consequently, alternative techniques have been explored to learn the PnP step, such as (Di et al. 2021; Hu et al. 2020; Wang et al. 2021). 2.2 Category-Level 6D Object Pose Estimation To address the limitations of instance-level methods, researchers have explored novel approaches that reduce reliance on 3D models. Category-level 6D object pose estimation methods have emerged in recent years, including (Wang et al. 2019b; Tian, Ang, and Lee 2020; Chen et al. 2021; Chen and Dou 2021; Lin et al. 2021; Chen et al. 2020a; Di et al. 2022; Lin et al. 2022a; You et al. 2022). In (Wang et al. 2019b), Wang et al. propose NOCS, a method that projects all objects into the same coordinate space and employs Umeyama’s algorithm (Umeyama 1991) to calculate the 6D object pose. However, since objects within the same category can have distinct shapes, it is challenging to capture the shape variations between each object. To overcome this challenge, some methods introduce shape priors to mitigate the influence of shape variations (Tian, Ang, and Lee 2020; Chen and Dou 2021; Wang, Chen, and Dou 2021; Zou et al. 2022). For example, in (Tian, Ang, and Lee 2020), Tian et al. incorporate a prior point cloud during training and connect the features from the scene point cloud and RGB images. They then predict the NOCS coordinates of the target object and calculate its pose. In (Chen and Dou 2021), Chen et al. propose SGPA, which introduces a feature fusion module based on vision transformers (Vaswani et al. 2017) to merge the features from RGB images and point clouds. Some methods aim to directly predict the rotation, translation, and size of the object based on extracted features, as demonstrated in (Lin et al. 2021; Chen et al. 2021; Di et al. 2022). 
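Several of the methods above, and CatFormer itself, recover the metric pose by aligning predicted NOCS coordinates with the observed points via Umeyama's algorithm (Umeyama 1991). A small reference sketch of that similarity alignment is given below; this is the textbook formulation, not code from any of the cited works.

```python
# Umeyama (1991) least-squares similarity transform: dst ~ s * R @ src + t.
import numpy as np

def umeyama(src, dst):
    """src, dst: (N, 3) corresponding points (e.g., NOCS coords vs. observed points)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / src.shape[0]            # cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                        # keep a proper rotation (det = +1)
    R = U @ S @ Vt
    var_s = (xs ** 2).sum() / src.shape[0]
    s = np.trace(np.diag(D) @ S) / var_s      # isotropic scale (object size)
    t = mu_d - s * R @ mu_s
    return s, R, t
```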
Many methods try to deform the shape prior to fit the target object, such as (Tian, Ang, and Lee 2020; Lin et al. 2022a; Chen and Dou 2021; Wang, Chen, and Dou 2021; Zhang et al. 2022). Some of these methods leverage fused features to deform the shape prior, such as (Tian, Ang, and Lee 2020; Chen and Dou 2021; Wang, Chen, and Dou 2021). Others attempt to share the weights of the network during the feature extraction process for both the point cloud and the prior point cloud, such as (Lin et al. 2022a; Zhang et al. 2022). However, in this paper, we propose a novel approach that differs from these methods. Instead of adopting the aforementioned ideas, our method utilizes a constructed feature graph and repetitive refinement to deform the shape prior.

Figure 2: An overview of CatFormer for category-level 6D object pose estimation. We initially employ Mask-RCNN to predict the mask and category of the target object. CatFormer takes the object point cloud Po, RGB image I, and prior point cloud Pr as inputs. Firstly, we utilize the coarse deformation module with the graph module to deform and complement Po. Subsequently, employing the fine deformation module with the graph module and Pr generates relatively accurate point cloud features for the object. Ultimately, a recurrent refinement module is used to enhance the point cloud features, resulting in the final NOCS model of the object. Based on the predicted NOCS model, we generate the 6D pose of the object.

3 Method
3.1 Pipeline Overview
The pipeline of CatFormer is provided in Figure 1. Initially, the input consists of the RGB image $I_o \in \mathbb{R}^{H \times W \times 3}$ and the point cloud $P_o \in \mathbb{R}^{N_o \times 3}$ of the target object. PSPNet (Zhao et al. 2017) is employed to extract features from the RGB image, resulting in $F_{rgb} \in \mathbb{R}^{N_o \times d}$. Simultaneously, PointNet++ (Qi et al. 2017) is used to extract features from the point cloud, yielding $F_o \in \mathbb{R}^{N_o \times d}$. Next, the deformation module is utilized to complement and deform the point cloud. Additionally, PointNet++ is applied to extract features from the prior point cloud $P_r \in \mathbb{R}^{N_r \times 3}$, generating $F_r \in \mathbb{R}^{N_r \times d}$. The fused feature from $F_{rgb}$ and $F_o$ is then employed to deform $P_r$ within the fine deformation module. To capture geometrical information about the object and adjust the fused feature, a graph module establishes the graph feature after each point cloud deformation process. Finally, the recurrent refinement module is leveraged to further refine the prior point cloud.

3.2 Deformation Module
We propose a transformer-based deformation module consisting of self-attention (SA) and cross-attention (CA) modules to deform and complete the shape of the point cloud. The coarse deformation process applies coarse deformation to the scene point cloud, while the fine deformation process deforms the prior point cloud using the fused feature. This deformation process completes and deforms the point cloud shape in the feature space, resulting in feature maps of the deformed point cloud.
We begin by applying three MLP layers to Frgb and Fo, generating the query, key, and value inputs for the SA module. which can be calculated by F∗= SA(F∗) (1) where ∗∈{rgb, o} indicates the parameters of RGB and point cloud, respectively. Next, we utilize another SA module based on Frgb and Fo to perform further feature extraction using a similar operation. The resulting feature maps for the RGB images and point cloud are denoted as Frgb ∈RNo×C and Fo ∈RNo×C, respectively. Then, we compute the query, key, and value inputs for the cross-attention (CA) module based on the extracted features Frgb and Fo. The objective of the CA model is to combine Fo and Frgb, resulting in the deformation offsets features Oo and Orgb. The fusion is computed as Oi = softmax qc i × (kc j)T √dk ! vc j (2) where i, j ∈{rgb, o} and i ̸= j. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6810 Finally, we adjust the feature maps by adding the deformation offsets, which can be calculated as F∗= F∗+ O∗, where ∗∈{rgb, o}. Once we have adjusted Frgb and Fo, we concatenate them along the channel dimension, resulting in instance localwise features denoted as Fins local ∈RNo×C. Subsequently, we employ an MLP layer to generate the instance global-wise features represented by Fins global ∈RNo×C. Both the coarse deformation and fine deformation modules function in a similar manner, as they deform objects by predicting the deformation offset in the feature space. For more detailed information about the fine deformation module, please refer to the supplementary material. 3.3 Graph Module In order to establish a graph relationship between the fused features of RGB images and point clouds, we introduce a graph module inspired by ideas from graph convolution (Kipf and Welling 2017). This graph module incorporates a transformer structure, which is illustrated in Figure 3. q k v Input Softmax q Output A Attention map Add Multiply Figure 3: The structure of the graph module. Using the fused feature F ∈RB×N×D as input, we generate the query, key, and value, which indicate by q ∈ RB×N×Dq, k ∈RB×N×Dk, and v ∈RB×N×Dv respectively, where B represents the batch size, and D∗, ∗∈ {q, k, v} indicates the feature dimension. Next, we normalize the q and v tensors, and apply the softmax function to the k tensor. To generate multiple graph features, we first divide q into several heads, resulting in q ∈RB×N×H×(Dq/H), where H denotes the number of heads. We set Dq/H = Dk = Dv and utilize the Einstein summation convention (einsum) for performing matrix multiplication in the transformer. Then, we follow the calculation process of the transformer and calculate the attention map, which can be got by attn = k ⊗v, where ⊗indicates the einsum, attn ∈RB×Dk×Dv indicates the attention map. Next, we generate the adjusted features with the attention map: Fattn = q ⊗attn (3) where Fattn ∈RB×N×H×Dv indicates the adjusted feature map. Then, we generate multi-graph features on the value branch. First, we create a random graph adjacency matrix A to establish a graph G, which can be computed by G = v ⊗A (4) where A ∈RDv×Dv×Dk, G ∈RB×N×Dk×Dv. To establish a graph on itself, we add a self-loop to the v tensor by v = v + I, where I represents the identity matrix. We reshape v into RB×N×Dv×Dv and add it to the graph G. Using G, we establish the relationship between v and q by FG = q ⊗G (5) where FG ∈RB×N×H×Dv indicates the graph features. Finally, we add the FG and Fattn tensors together to obtain Ff. 
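Before the final projection step given in the next sentence, here is a minimal PyTorch sketch of the attention and graph branches above (Eqs. 3-5). The layer sizes, adjacency initialization, softmax dimension for k, diagonal form of the self-loop, and head merging by averaging are assumptions for illustration rather than details from the paper, and the closing MLP-plus-residual line anticipates the sentence that follows.

```python
# Simplified sketch of the graph-module computation (Eqs. 3-5); several details
# (normalization dims, adjacency init, head merging) are assumed for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphModule(nn.Module):
    def __init__(self, dim=256, heads=4, dk=64):
        super().__init__()
        assert dk == dim // heads                      # paper sets Dq/H = Dk = Dv
        self.h, self.dk = heads, dk
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dk)
        self.to_v = nn.Linear(dim, dk)
        self.adj = nn.Parameter(torch.randn(dk, dk, dk) * 0.02)   # random adjacency A
        self.mlp = nn.Sequential(nn.Linear(dk, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):                              # x: (B, N, D) fused feature
        B, N, _ = x.shape
        q = F.normalize(self.to_q(x), dim=-1).view(B, N, self.h, self.dk)
        k = torch.softmax(self.to_k(x), dim=1)         # softmax over points (assumed dim)
        v = F.normalize(self.to_v(x), dim=-1)
        attn = torch.einsum('bnk,bnv->bkv', k, v)          # attention map, (B, Dk, Dv)
        f_attn = torch.einsum('bnhk,bkv->bnhv', q, attn)   # Eq. (3)
        g = torch.einsum('bnd,dvk->bnkv', v, self.adj)     # Eq. (4): graph from A
        g = g + torch.diag_embed(v + 1.0)                  # self-loop on v (assumed form)
        f_g = torch.einsum('bnhk,bnkv->bnhv', q, g)        # Eq. (5)
        f_f = (f_attn + f_g).mean(dim=2)                   # merge heads (assumption)
        return self.mlp(f_f) + x                           # final residual, F_fin
```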
Then, we can get the final graph feature Ffin by Ffin = MLP(Ff) + F, where MLP indicates MLP layers. Due to space limits, we have added more details and motivation of graph module into the supplementary material. 3.4 Recurrent Refinement Module In this part, we deform the prior point cloud Pr by Fins global and Fcat global in the feature space. We predict the deformation offset O and the point cloud transformation matrix T. We use O to adjust and deform the shape of Pr, while T generates the NOCS model from the deformed point cloud. The initial transform matrix and deformation offset for objects are T0 ∈RNo×(Nr×c) and O0 ∈RNr×(c×3). Here, c represents the number of object categories. For a specific object k ∈c, the corresponding deformed point cloud is Pr. The initial point cloud transformation matrix and deformation offset are T (k) 0 ∈RNo×Nr and O(k) 0 ∈RNr×3. We perform the first deformation on Pr with the initial deformation offset, P (1) r = Pr+O(k) 0 , where P (1) r represents the deformed prior point cloud. Then, we update the category local-wise and global-wise features based on the P (1) r that is Fcat local(1) = MLP(P (1) r ), Fcat global(1) = AV E(Fcat local) (6) where AV E indicates average global pooling layer. After that, we use Fcat global(1) with previous features to further update the offset and transformation matrix: O1 = MLP( c⃝(Fcat global(1), Fcat global(0))) (7) T1 = MLP( c⃝(Fcat global(1), Fins global(0))) (8) where c⃝(∗, ∗) indicates concatenation, Fcat global(0) = Fcat global and Fins global(0) = Fins global. Then we update the deformation offset and transformation matrix: O1 ←O1 + O0, T1 ←T1 × T0. By repeatedly performing such a process, we can deform the prior point cloud several times, which can be expressed as Oi+1 = MLP( c⃝(Fcat global(i+1), Fcat global(0))) (9) Oi+1 ←Oi+1 + Oi (10) Ti+1 = MLP( c⃝(Fcat global(i+1), Fins global(i))) (11) Ti+1 ←Ti+1 × Ti (12) Due to space limits, we also provide the pseudocode of this algorithm in the supplementary material. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6811 3.5 Loss Function We can transform the instance object into the NOCS space using the final prediction of deformation offset O and point cloud transformation matrix T. The resulting equation is expressed as PNOCS = softmax(T) × (Pr + O). Correspondence Loss Based on PNOCS, we can transform the object into NOCS space, we use the smooth L1 loss to calculate the correspondence loss Lcorr = S(PNOCS, Pgt) where S indicates smooth L1 loss, Pgt is the ground truth. Reconstruction Loss To evaluate the performance of deformation of the network, following the ideas in (Tian, Ang, and Lee 2020), we use the Chamfer distance to penalize the deformation Lr(M, Mgt) = X i∈M i min j∈M j gt ||i −j||2 2 + X j∈M j gt min i∈M i ||i −j||2 2 where M = Pr+O is the prediction of the object 3D model, Mgt is the ground truth 3D model. Distribution Loss Following (Tian, Ang, and Lee 2020), We also try to encourage T to be a peaked distribution by minimizing the average cross-entropy loss, which can be calculated by Ldis = 1 No X i X j (−softmax(Ti,j) log(softmax(Ti,j))) Deformation Loss To avoid overfitting and large deformations, we also use the O with L2 regularization to realize it, Ldef = 1 No P i∈O ||i||2 Total Loss The total loss of the CatFormer is the sum of the four losses L = n X k=0 λcorrLcorr + λrLr + λdisLdis + λdefLdef where λ∗is the weight of the loss function, n is the number of the object in the scene. 
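Since the paper defers the pseudocode of this loop to the supplementary material, the following is a hedged, simplified sketch of Eqs. (9)-(12). Here point_mlp, offset_head, and matrix_head stand in for the paper's MLPs, the per-category slicing of O and T and the per-iteration update of the instance-global feature are omitted, the "x" in Eq. (12) is taken as element-wise, and the softmax dimension follows the NOCS construction of Sec. 3.5; n_iters=5 matches the best-performing setting reported in the ablation.

```python
# Simplified sketch of the recurrent refinement loop (Eqs. 9-12); networks and
# several shape/update details are placeholder assumptions, not the released code.
import torch

def recurrent_refinement(P_r, O, T, F_cat_global0, F_ins_global,
                         point_mlp, offset_head, matrix_head, n_iters=5):
    # P_r: (B, Nr, 3) prior point cloud; O: (B, Nr, 3) offset; T: (B, No, Nr) transform
    for _ in range(n_iters):
        P_def = P_r + O                                  # deform the prior point cloud
        F_cat_local = point_mlp(P_def)                   # per-point features, (B, Nr, C)
        F_cat_global = F_cat_local.mean(dim=1)           # average pooling (AVE), (B, C)
        dO = offset_head(torch.cat([F_cat_global, F_cat_global0], dim=-1))  # Eq. (9)
        dT = matrix_head(torch.cat([F_cat_global, F_ins_global], dim=-1))   # Eq. (11)
        O = O + dO.view_as(O)                            # Eq. (10)
        T = dT.view_as(T) * T                            # Eq. (12), element-wise (assumed)
    # NOCS prediction used later in Sec. 3.5 (softmax over the prior points, assumed dim)
    P_nocs = torch.softmax(T, dim=-1) @ (P_r + O)
    return P_nocs, O, T
```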
4 Experiments 4.1 Datasets The benchmark datasets for category-level object pose estimation are the REAL275 dataset and CAMERA25 dataset, proposed in (Wang et al. 2019b). The CAMERA25 dataset consists of 300K images, with 25K images used for evaluation. It is generated by rendering synthetic objects into real scenes. On the other hand, the REAL275 dataset contains 4300 real-world training images from 7 scenes, and 2750 real-world evaluation images from 6 scenes. Both datasets include 6 different categories of objects: bottle, bowl, camera, can, laptop, and mug. For instance-level 6D pose estimation, the LINEMOD dataset (Hinterstoisser et al. 2011) serves as the benchmark. It comprises 13 different objects randomly placed in real scenes. 4.2 Training Details All experiments are conducted on a single NVIDIA GeForce RTX 3090 GPU with 24 GB memory, running Ubuntu 18.04 as the operating system. PyTorch 1.8.1 is utilized as the deep learning framework, and CUDA 11.1 is employed for accelerated training. The network is trained with a batch size of 16 for 60 epochs. The initial learning rate is set to 1 × 10−4, gradually decreasing to 1 × 10−6. We also set λcorr = λr = λdef = 1 in this paper. 4.3 Preprocessing To process the dataset, we follow the procedures outlined in (Tian, Ang, and Lee 2020). We employ an instance segmentation network, such as Mask-RCNN (He et al. 2017), for object detection and segmentation. For each segmented instance, we crop the object and resize the image to 192×192 pixels. Using the RGB-D images and the camera’s intrinsic matrix, we generate a point cloud of the scene. From this point cloud, we randomly select 1024 points for each object. The cropped images and selected points are then used as inputs for CatFormer. 4.4 Evaluation Metrics We evaluate the performance of CatFormer using widely used evaluation metrics (Wang et al. 2019b; Tian, Ang, and Lee 2020; Lin et al. 2021; Di et al. 2022; Lin et al. 2022a). For rotation and translation evaluation, we utilize 3D Intersection-Over-Union (IoU) with thresholds of 0.25, 0.5, and 0.75. Additionally, we employ 5◦2cm, 5◦5cm, 10◦2cm, and 10◦5cm to directly assess rotation and translation accuracy. If the errors fall within the thresholds, the predictions are deemed correct. Based on these evaluation metrics, we will use overall mAP to assess the performance of CatFormer compared to other SOTA methods. 4.5 Comparison with State-of-the-Art Methods We present the quantitative results of CatFormer compared to recent state-of-the-art methods on the CAMERA25 and REAL275 datasets in Table 1. CatFormer exhibits slightly superior performance to SOTA methods on the CAMERA25 dataset. This can be attributed to the accurate depth images generated in a simulated real scene, unaffected by environmental factors. Consequently, point cloud completion has not provided significant enhancements in pose estimation. However, on the REAL275 dataset, where all images are captured in the real world, the depth images can be influenced by object reflections or lighting conditions, resulting in incomplete information and imperfect point clouds. In such cases, transformerbased point cloud completion and deformation have demonstrated their effectiveness. Notably, for the IoU50 metric, CatFormer outperforms SOTA HS-Pose (Zheng et al. 2023) with a score of 83.1 compared to 82.1. Specifically, in the most challenging 5◦2cm metric, CatFormer achieves a score of 47.7, surpassing the previous SOTA method HS-Pose at 46.5, indicating higher accuracy. 
Additionally, for the 10◦2cm metric, CatFormer also demonstrates improved performance with a score of 69.0 compared to HS-Pose’s 68.6. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6812 Method CAMERA25 REAL275 IoU50 IoU75 5◦2cm5◦5cm10◦2cm10◦5cmIoU25 IoU50 IoU75 5◦2cm5◦5cm10◦2cm10◦5cm NOCS(Wang et al. 2019b)(R) 83.9 69.5 32.3 40.9 48.2 64.6 84.9 78.0 30.1 7.2 9.5 13.8 25.2 SPD(Tian, Ang, and Lee 2020)(R) 93.2 83.1 54.3 59.0 73.3 81.5 83.0 77.3 53.2 19.2 21.4 43.2 54.1 SGPA(Chen and Dou 2021)(R) 93.2 88.1 70.7 74.5 82.7 88.4 80.1 61.9 35.9 39.6 61.3 70.7 CR-Net(Wang, Chen, and Dou 2021)(R) 93.8 88.0 72.0 76.4 81.0 87.7 79.3 55.9 27.8 34.3 47.2 60.8 FS-Net∗(Chen et al. 2021)(D) 84.0 81.1 63.5 19.9 33.9 69.1 DualPoseNet(Lin et al. 2021)(R) 92.4 86.4 64.7 70.7 77.2 84.7 79.8 62.2 29.3 35.9 50.0 66.8 SAR-Net(Lin et al. 2022a)(D) 86.8 79.0 66.7 70.9 75.3 80.3 79.3 62.4 31.6 42.3 50.3 68.3 GPV-Pose(Di et al. 2022)(D) 93.4 88.3 72.1 79.1 89.0 84.2 83.0 64.4 32.0 42.9 73.3 HS-Pose(Zheng et al. 2023)(D) 93.3 89.4 73.3 80.5 80.4 89.4 84.2 82.1 74.7 46.5 55.2 68.6 82.7 SPD+GM 93.6 87.9 61.4 66.3 78.0 85.4 83.6 80.5 59.5 20.4 24.4 47.7 58.7 CR-Net+GM 93.9 88.8 72.8 76.0 82.4 88.3 83.2 79.8 64.5 32.9 37.5 53.1 63.4 SGPA+GM 93.5 86.1 73.8 77.7 83.9 89.0 84.0 81.3 65.3 36.5 40.6 62.9 71.1 Ours(R) 93.5 89.9 74.9 79.8 85.3 90.2 84.3 83.1 73.8 47.7 53.7 69.0 79.5 Table 1: Comparison with state-of-the-art methods on CAMERA25 dataset and REAL275 dataset. GM indicates the graph module. R and D indicates RGB-D-based and depth-based methods respectively. ∗We use the results provided by the GPV-Pose (Di et al. 2022) and HS-Pose (Zheng et al. 2023). Since CatFormer is a RGB-D-based method, we comapre it to the other RGB-D-based methods. Figure 4 shows the qualitative analysis of CatFormer and the state-of-the-art RGB-D based method SGPA on the REAL275 dataset. In comparison to SGPA, CatFormer exhibits higher accuracy in pose estimation. We provide more comprehensive details of the comparison experiments and object pose estimation in the the supplementary material. SGPA CatFormer Figure 4: Qualitative results of SGPA and CatFormer. The bounding box in green line is the ground truth, and the red line is the prediction. In this paper, we propose an innovative transformer-based graph module. By incorporating the graph module into CatFormer, we observe a significant improvement in performance, highlighting its effectiveness. To further validate the efficacy of the graph module, we also apply it to other RGBD-based methods such as SPD, CR-Net, SGPA. The experimental results are presented in Table 1. The experimental results demonstrate that incorporating the graph module into the network leads to improved performance. For instance, when we apply the graph module after RGB-D future fusion in SPD, the 5◦2cm metric shows a significant enhancement from 54.3 to 61.4, indicating a substantial improvement. 4.6 Ablation Studies The REAL275 dataset, being more challenging than the CAMERA25 dataset, provides a better evaluation of the network’s performance. Therefore, we primarily utilize the REAL275 dataset for conducting ablation studies. These studies focus on evaluating the proposed module, the number of refinement iterations, and the loss terms. Module: We begin by conducting ablation studies on the proposed module, with the corresponding experimental results presented in Table 2(A). 
Firstly, when the graph module is removed, CatFormer’s performance decreases, indicating the effectiveness of the graph module in establishing connections between different features. However, removing either the coarse deformation or fine deformation module causes a more significant drop in CatFormer’s performance, highlighting the effectiveness of point cloud deformation and completion. Repeat Times: In addition, we assess the efficacy of the recurrent refinement module by refining the initial point cloud multiple times. Specifically, we conduct refinements 1, 3, 4, 5, 6, and 7 times in this study, with the corresponding experimental results presented in Table 2(B). As the number of refinement iterations increases, there is a gradual improvement in the network’s performance, reaching its peak at five iterations. However, further increasing the number of iterations leads to a decline in the network’s performance. Loss Terms: Given the necessity of predicting the NOCS model of the object, the Lcorr term plays a crucial role. The experimental results are summarized in Table 2(C). It is evident from the results that relying solely on Lcorr yields poor network performance. However, when Lcorr is combined with Ldis, CatFormer demonstrates relatively improved performance. Additionally, removing Ldis results in a significant decrease in network performance. Each row in matrix T can be considered as a relaxed one-hot vector, allowing a point in the NOCS space to be transformed by up to three points in the point cloud. We aim for T to have a peaked distribution, focusing on high-confidence transformations. This concentration of confidence enhances the accuracy of the predicted NOCS model. Lastly, we utilize Ldef to mitigate excessive deformation The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6813 Group Module Repeat Times Loss Terms REAL275 CD FD GM Lcorr Lr Ldis Ldef IoU50 IoU75 5◦2cm 5◦5cm 10◦2cm 10◦5cm (A) ✓ 3 ✓ ✓ ✓ ✓ 79.0 54.1 32.5 35.9 60.1 69.8 ✓ 3 ✓ ✓ ✓ ✓ 81.2 65.9 38.9 42.7 62.9 72.9 ✓ 3 ✓ ✓ ✓ ✓ 80.0 51.7 29.8 33.4 59.6 69.7 ✓ ✓ 3 ✓ ✓ ✓ ✓ 79.8 67.7 37.7 42.5 63.9 74.0 ✓ ✓ 3 ✓ ✓ ✓ ✓ 81.7 66.2 40.3 44.3 64.5 74.6 ✓ ✓ 3 ✓ ✓ ✓ ✓ 80.9 56.0 34.9 38.5 61.1 71.9 (B) ✓ ✓ ✓ 1 ✓ ✓ ✓ ✓ 82.5 68.0 43.0 47.6 65.7 76.0 ✓ ✓ ✓ 3 ✓ ✓ ✓ ✓ 82.6 69.5 44.5 47.6 65.3 76.7 ✓ ✓ ✓ 4 ✓ ✓ ✓ ✓ 82.8 71.6 46.8 50.1 67.2 78.4 ✓ ✓ ✓ 5 ✓ ✓ ✓ ✓ 83.1 73.8 47.7 53.7 69.0 79.5 ✓ ✓ ✓ 6 ✓ ✓ ✓ ✓ 82.4 70.6 46.2 49.5 68.4 78.3 ✓ ✓ ✓ 7 ✓ ✓ ✓ ✓ 82.1 68.1 43.3 48.0 66.8 77.0 (C) ✓ ✓ ✓ 3 ✓ ✓ 74.9 34.9 41.6 46.0 66.0 75.4 ✓ ✓ ✓ 3 ✓ ✓ ✓ 78.9 44.9 41.9 46.8 64.8 74.8 ✓ ✓ ✓ 3 ✓ ✓ ✓ 82.2 68.0 42.0 47.1 64.3 75.4 ✓ ✓ ✓ 3 ✓ 1.5 1.0 30.5 34.7 50.8 56.3 ✓ ✓ ✓ 3 ✓ ✓ 0.0 0.0 3.3 3.3 14.2 14.6 ✓ ✓ ✓ 3 ✓ ✓ 81.9 67.3 38.5 43.3 61.5 71.6 ✓ ✓ ✓ 3 ✓ ✓ ✓ 81.8 68.1 42.0 47.1 63.8 75.4 Table 2: Ablation studies on different configurations of network on REAL275. CD, FD, and GM refer to coarse deformation module, fine deformation module, and graph module respectively. in CatFormer’s predictions. Applying Ldef leads to an improvement in CatFormer’s performance. Due to space limits, we add more analysis of ablation studies in supplementary materials. 4.7 Category-level Object Pose Estimation in The Real World To assess CatFormer’s effectiveness and generalization, we apply it to real-world object pose estimation, specifically on objects that were not used during training. RGB-D images are obtained using an Intel RealSense D435i camera, while segmentation is performed using a pretrained Mask-RCNN model. 
The pose estimation results, shown in Figure 5, indicate that CatFormer exhibits good generalization and performs well in real-world object pose estimation tasks. Figure 5: Object pose estimation results of CatFormer in the real world. 4.8 Instance-Level Object Pose Estimation CatFormer is also applied to instance-level object pose estimation in this study. The LINEMOD dataset (Hinterstoisser et al. 2011) is used for conducting the instancelevel object pose estimation experiment. By comparing CatFormer’s performance with other state-of-the-art methods on the LINEMOD dataset, the experimental results, displayed in Table 3, indicate that CatFormer outperforms related category-level methods and achieves competitive performance compared to some state-of-the-art instance-level methods. For symmetric objects, we utilize ADD-S as the metric (Xiang et al. 2018), while for non-symmetric objects, we employ ADD as the metric (Hinterstoisser et al. 2012). We also provide more details of instance-level object pose estimation in the supplementary material. Method C.L. ADD-(S) Speed(FPS) PVNet(Peng et al. 2019) 86.3 27 G2L-Net(Chen et al. 2020b) 98.7 24 DenseFusion(Wang et al. 2019a) 94.3 15 PVN3D(He et al. 2020) 99.4 5 DualPose(Lin et al. 2021) ✓ 98.2 3 FS-Net (Chen et al. 2021) ✓ 97.6 22 GPV-Pose(Di et al. 2022) ✓ 98.2 20 Ours ✓ 99.3 8 Table 3: Instance-level object pose estimation on LINEMOD dataset. C.L. indicates category-level method. 5 Conclusion In this paper, we introduce CatFormer, a novel categorylevel 6D object estimation network. CatFormer leverages transformer-based deformation modules for both coarse and fine deformations. Additionally, we propose a graph module to establish and extract graph features from fused features. To further refine the prior point’s proximity to the target object, we introduce a recurrent refinement module. Compared to previous state-of-the-art methods, our approach demonstrates superior performance on both the CAMERA25 dataset and REAL275 dataset. Furthermore, CatFormer achieves successful results in instance-level object pose estimation and real-world object pose estimation tasks. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6814 Acknowledgments The work was supported by the National Natural Science Foundation of China under Grant 62173035, Grant 61803033 and Grant 61836001, and in part by the Xiaomi Young Scholars from Xiaomi Foundation, and in part by the BIT Research and Innovation Promoting Project under Grant 2023YCXY035. References Chen, D.; Li, J.; Wang, Z.; and Xu, K. 2020a. Learning canonical shape space for category-level 6D object pose and size estimation. In IEEE Conference on Computer Vision and Pattern Recognition, 11973–11982. Chen, K.; and Dou, Q. 2021. SGPA: Structure-guided prior adaptation for category-level 6D object pose estimation. In IEEE Conference on Computer Vision and Pattern Recognition, 2773–2782. Chen, W.; Jia, X.; Chang, H. J.; Duan, J.; and Leonardis, A. 2020b. G2l-net: Global to local network for real-time 6d pose estimation with embedding vector features. In IEEE Conference on Computer Vision and Pattern Recognition, 4233–4242. Chen, W.; Jia, X.; Chang, H. J.; Duan, J.; Shen, L.; and Leonardis, A. 2021. FS-net: Fast shape-based network for category-level 6D object pose estimation with decoupled rotation mechanism. In IEEE Conference on Computer Vision and Pattern Recognition, 1581–1590. Chen, Y.; Huang, S.; Yuan, T.; Qi, S.; Zhu, Y.; and Zhu, S.-C. 2019. 
Holistic++ scene understanding: Single-view 3d holistic scene parsing and human pose estimation with human-object interaction and physical commonsense. In IEEE International Conference on Computer Vision, 8648– 8657. Di, Y.; Manhardt, F.; Wang, G.; Ji, X.; Navab, N.; and Tombari, F. 2021. So-pose: Exploiting self-occlusion for direct 6d pose estimation. In IEEE International Conference on Computer Vision, 12396–12405. Di, Y.; Zhang, R.; Lou, Z.; Manhardt, F.; Ji, X.; Navab, N.; and Tombari, F. 2022. GPV-Pose: Category-level Object Pose Estimation via Geometry-guided Point-wise Voting. In IEEE Conference on Computer Vision and Pattern Recognition, 6781–6791. He, K.; Gkioxari, G.; Doll´ar, P.; and Girshick, R. 2017. Mask R-CNN. In IEEE International Conference on Computer Vision, 2961–2969. He, Y.; Huang, H.; Fan, H.; Chen, Q.; and Sun, J. 2021. Ffb6d: A full flow bidirectional fusion network for 6d pose estimation. In IEEE Conference on Computer Vision and Pattern Recognition, 3003–3013. He, Y.; Sun, W.; Huang, H.; Liu, J.; Fan, H.; and Sun, J. 2020. PVN3D: A deep point-wise 3d keypoints voting network for 6dof pose estimation. In IEEE Conference on Computer Vision and Pattern Recognition, 11632–11641. Hinterstoisser, S.; Holzer, S.; Cagniart, C.; Ilic, S.; Konolige, K.; Navab, N.; and Lepetit, V. 2011. Multimodal templates for real-time detection of texture-less objects in heavily cluttered scenes. In IEEE International Conference on Computer Vision, 858–865. Hinterstoisser, S.; Lepetit, V.; Ilic, S.; Holzer, S.; Bradski, G.; Konolige, K.; and Navab, N. 2012. Model based training, detection and pose estimation of texture-less 3d objects in heavily cluttered scenes. In Asian Conference on Computer Vision, 548–562. Hu, Y.; Fua, P.; Wang, W.; and Salzmann, M. 2020. Singlestage 6d object pose estimation. In IEEE Conference on Computer Vision and Pattern Recognition, 2930–2939. Kehl, W.; Manhardt, F.; Tombari, F.; Ilic, S.; and Navab, N. 2017. SSD-6D: Making rgb-based 3d detection and 6d pose estimation great again. In IEEE International Conference on Computer Vision, 1521–1529. Kipf, T. N.; and Welling, M. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations. Labb´e, Y.; Carpentier, J.; Aubry, M.; and Sivic, J. 2020. Cosypose: Consistent multi-view multi-object 6d pose estimation. In European Conference on Computer Vision, 574– 591. Lepetit, V.; Moreno-Noguer, F.; and Fua, P. 2009. Epnp: An accurate o (n) solution to the pnp problem. International journal of computer vision, 81(2): 155–166. Li, Y.; Wang, G.; Ji, X.; Xiang, Y.; and Fox, D. 2018. Deepim: Deep iterative matching for 6d pose estimation. In European Conference on Computer Vision, 683–698. Li, Z.; Wang, G.; and Ji, X. 2019. Cdpn: Coordinates-based disentangled pose network for real-time rgb-based 6-dof object pose estimation. In IEEE International Conference on Computer Vision, 7678–7687. Lin, H.; Liu, Z.; Cheang, C.; Fu, Y.; Guo, G.; and Xue, X. 2022a. SAR-Net: Shape Alignment and Recovery Network for Category-Level 6D Object Pose and Size Estimation. In IEEE Conference on Computer Vision and Pattern Recognition, 6707–6717. Lin, J.; Wei, Z.; Li, Z.; Xu, S.; Jia, K.; and Li, Y. 2021. Dualposenet: Category-level 6D object pose and size estimation using dual pose network with refined learning of pose consistency. In IEEE International Conference on Computer Vision, 3560–3569. Lin, S.; Wang, Z.; Ling, Y.; Tao, Y.; and Yang, C. 2022b. 
E2EK: End-to-End Regression Network Based on Keypoint for 6D Pose Estimation. IEEE Robotics and Automation Letters, 7(3): 6526–6533. Liu, J.; Sun, W.; Liu, C.; Zhang, X.; and Fu, Q. 2023. Robotic Continuous Grasping System by Shape Transformer-Guided Multiobject Category-Level 6-D Pose Estimation. IEEE Transactions on Industrial Informatics, 19(11): 11171–11181. Manhardt, F.; Arroyo, D. M.; Rupprecht, C.; Busam, B.; Birdal, T.; Navab, N.; and Tombari, F. 2019. Explaining the ambiguity of object detection and 6d pose from visual data. In IEEE International Conference on Computer Vision, 6841–6850. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6815 Manhardt, F.; Kehl, W.; Navab, N.; and Tombari, F. 2018. Deep model-based 6d pose refinement in rgb. In European Conference on Computer Vision, 800–815. Park, K.; Patten, T.; and Vincze, M. 2019. Pix2pose: Pixelwise coordinate regression of objects for 6d pose estimation. In IEEE International Conference on Computer Vision, 7668–7677. Peng, S.; Liu, Y.; Huang, Q.; Zhou, X.; and Bao, H. 2019. PVNet: Pixel-wise voting network for 6dof pose estimation. In IEEE Conference on Computer Vision and Pattern Recognition, 4561–4570. Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30. Rad, M.; and Lepetit, V. 2017. BB8: A scalable, accurate, robust to partial occlusion method for predicting the 3D poses of challenging objects without using depth. In IEEE International Conference on Computer Vision, 3828–3836. Su, Y.; Rambach, J.; Minaskan, N.; Lesur, P.; Pagani, A.; and Stricker, D. 2019. Deep multi-state object pose estimation for augmented reality assembly. In IEEE International Symposium on Mixed and Augmented Reality Adjunct, 222–227. Tian, M.; Ang, M. H.; and Lee, G. H. 2020. Shape prior deformation for categorical 6d object pose and size estimation. In European Conference on Computer Vision, 530–546. Tremblay, J.; To, T.; Sundaralingam, B.; Xiang, Y.; Fox, D.; and Birchfield, S. 2018. Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects. In Conference on Robot Learning. Umeyama, S. 1991. Least-squares estimation of transformation parameters between two point patterns. IEEE Transactions on Pattern Analysis & Machine Intelligence, 13(04): 376–380. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30. Wang, C.; Xu, D.; Zhu, Y.; Mart´ın-Mart´ın, R.; Lu, C.; FeiFei, L.; and Savarese, S. 2019a. Densefusion: 6d object pose estimation by iterative dense fusion. In IEEE Conference on Computer Vision and Pattern Recognition, 3343–3352. Wang, G.; Manhardt, F.; Tombari, F.; and Ji, X. 2021. Gdrnet: Geometry-guided direct regression network for monocular 6d object pose estimation. In IEEE Conference on Computer Vision and Pattern Recognition, 16611–16621. Wang, H.; Sridhar, S.; Huang, J.; Valentin, J.; Song, S.; and Guibas, L. J. 2019b. Normalized object coordinate space for category-level 6d object pose and size estimation. In IEEE Conference on Computer Vision and Pattern Recognition, 2642–2651. Wang, J.; Chen, K.; and Dou, Q. 2021. Category-level 6d object pose estimation via cascaded relation and recurrent reconstruction networks. In IEEE International Conference on Intelligent Robots and Systems, 4807–4814. 
Xiang, Y.; Schmidt, T.; Narayanan, V.; and Fox, D. 2018. PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes. In Robotics: Science and Systems. You, Y.; Shi, R.; Wang, W.; and Lu, C. 2022. CPPF: Towards Robust Category-Level 9D Pose Estimation in the Wild. In IEEE Conference on Computer Vision and Pattern Recognition, 6866–6875. Zakharov, S.; Shugurov, I.; and Ilic, S. 2019. Dpod: 6d pose object detector and refiner. In IEEE International Conference on Computer Vision, 1941–1950. Zhang, R.; Di, Y.; Lou, Z.; Manhardt, F.; Tombari, F.; and Ji, X. 2022. Rbp-pose: Residual bounding box projection for category-level pose estimation. In European Conference on Computer Vision, 655–672. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; and Jia, J. 2017. Pyramid scene parsing network. In IEEE Conference on Computer Vision and Pattern Recognition, 2881–2890. Zheng, L.; Wang, C.; Sun, Y.; Dasgupta, E.; Chen, H.; Leonardis, A.; Zhang, W.; and Chang, H. J. 2023. HS-Pose: Hybrid Scope Feature Extraction for Category-level Object Pose Estimation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17163–17173. Zou, L.; Huang, Z.; Gu, N.; and Wang, G. 2022. 6d-vit: Category-level 6d object pose estimation via transformerbased instance representation learning. IEEE Transactions on Image Processing, 31: 6907–6921. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6816 | 2024 | 757 |
18,581 | DME: Unveiling the Bias for Better Generalized Monocular Depth Estimation Songsong Yu, Yifan Wang, Yunzhi Zhuge, Lijun Wang*, Huchuan Lu Dalian University of Technology [email protected], {wyfan, ljwang, lhchuan}@dlut.edu.cn, [email protected] Abstract This paper aims to design monocular depth estimation models with better generalization abilities. To this end, we have conducted quantitative analysis and discovered two important insights. First, the Simulation Correlation phenomenon, commonly seen in long-tailed classification problems, also exists in monocular depth estimation, indicating that the imbalanced depth distribution in training data may be the cause of limited generalization ability. Second, the imbalanced and long-tail distribution of depth values extends beyond the dataset scale, and also manifests within each individual image, further exacerbating the challenge of monocular depth estimation. Motivated by the above findings, we propose the Distance-aware Multi-Expert (DME) depth estimation model. Unlike prior methods that handle different depth range indiscriminately, DME adopts a divide-and-conquer philosophy where each expert is responsible for depth estimation of regions within a specific depth range. As such, the depth distribution seen by each expert is more uniform and can be more easily predicted. A pixel-level routing module is further designed and learned to stitch the prediction of all experts into the final depth map. Experiments show that DME achieves state-ofthe-art performance on both NYU-Depth v2 and KITTI, and also delivers favorable zero-shot generalization capability on unseen datasets. Introduction Despite remarkable progress being achieved in recent years (Bhat, Alhashim, and Wonka 2021; Ranftl, Bochkovskiy, and Koltun 2021; Wang et al. 2021; Ren et al. 2022), monocular depth estimation still suffers from unsatisfactory generalization ability, which hinders their applicability in complex real-world scenarios. The lack of generalization issue is mostly attributed to either significant scene diversities or scale variations, and addressed by methods of two main kinds. First, scene-aware models incorporate multi-branch architectures to estimate depth tailored to different scenes (Ren, El-Khamy, and Lee 2019; Bhat et al. 2023). However, they require scene priors. Second, relative depth models train in a scale-invariant manner on diverse, large-scale data to enable metric depth prediction (Ranftl et al. 2020; Ranftl, Bochkovskiy, and Koltun 2021). *Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Simulation Correlation. The bar chart displays the frequency distribution of depth values in the NYUD v2 dataset. The line graph represents the model’s d1 accuracy, with the values corresponding to the primary axis. It can be observed that the depth values exhibit a unimodal distribution, while the model’s performance shows a positive correlation with the frequency variation. Though achieving state-of-the-art performance, they cannot restore absolute depth values. While these approaches have advanced the field, their limitations highlight the need for methods that improve generalization without sacrificing metric accuracy or requiring scene priors. In the literature of image classification, the generalization issue has been intensively explored and largely attributed to the Simulation Correlation effect (Hong et al. 
2021) found in long-tail data distributions, i.e., the classification model tends to overly focus on high-frequent categories with degraded performance on low frequent ones. In contrast, although the imbalanced distribution of monocular depth has been identified by prior works (Jiao et al. 2018), its connection to the generalization issue has been rarely explored. We thus make one of the first attempts by asking the questions of whether monocular depth also exhibits long-tailed distribution and whether the generalization issue can be explained by the Simulation Correlation effect. To answer the aforementioned questions, we have conducted quantitative studies on depth distribution and its impact on depth estimation models. Two important findings has The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6817 been made. First, we discover that monocular depth values of both indoor and outdoor scenes follow long-tail distributions, and identify the prevalence of the Simulation Correlation phenomenon in most of state-of-the-art depth estimation methods (c.f. Figure 1). Second, unlike the image classification task, the long-tail depth distribution is not only present at the dataset level, but can also be found in individual images. Motivated by the above findings, we design a Distanceaware Multi-Expert model to enhance the generalization ability of depth estimation. We propose to divide the depth range into multi-intervals. Each expert is responsible for estimating depth within its corresponding interval. As the depth distribution in each interval is more balanced, the long-tailed issue is effectively alleviated, and the specifically learned experts are more focused, giving rise to improved depth precision. To achieve the final depth estimation results, a pixel-level routing module is designed which can automatically aggregate the outputs of the multi-experts without requiring any prior information, resulting in a comprehensive and accurate depth map. In summary, the contribution of this paper is threefold: • We conduct an in-depth analysis of widely-adopted depth estimation datasets, which indicates that the generalization issue may be caused by long-tailed depth distribution as well as the Simulation Correlation effect. • We propose the Distance-aware Multi-Expert paradigm to enhance depth estimation, which can effectively address the long-tailed depth distribution issue, providing better generalization ability. • Our method sets new state-of-the-art performance on NYUD v2 and KITTI datasets, and has shown superior zero-shot generalization performance on five unseen datasets of diverse scenarios. Our work provides a new perspective to address the generalization issue in depth estimation. The code can be obtained at https://github.com/YUsong360/DME-Unveilingthe-bias. Related Works Scene-Aware Depth Estimation Model Researchers (Ranftl et al. 2020) find that models face challenges in optimization when simultaneously training on indoor and outdoor datasets. It is currently understood that these challenges arise due to differences in camera parameters and dataset scales, making it difficult for models to learn effectively. Consequently, previous depth estimation models (Xie et al. 2023; Wang et al. 2020b; Bhat, Alhashim, and Wonka 2021; Lee et al. 2019a; He et al. 2023) are being trained and tested on a single dataset, resulting in limited generalization capabilities. 
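To make the Distance-aware Multi-Expert idea from the introduction concrete before the related scene-aware routing methods are discussed, here is a rough sketch of depth-range experts stitched by a pixel-level router; every architectural choice (convolutional heads, feature width, number of experts) is an illustrative assumption rather than the DME implementation.

```python
# Rough sketch: distance-aware experts combined by a pixel-level routing module.
# Layer choices here are illustrative assumptions, not the DME architecture.
import torch
import torch.nn as nn

class MultiExpertDepth(nn.Module):
    def __init__(self, feat_ch=64, n_experts=4):
        super().__init__()
        # each expert specializes in one depth interval (near ... far)
        self.experts = nn.ModuleList(
            [nn.Conv2d(feat_ch, 1, kernel_size=3, padding=1) for _ in range(n_experts)]
        )
        # pixel-level routing: per-pixel weights over the experts
        self.router = nn.Conv2d(feat_ch, n_experts, kernel_size=3, padding=1)

    def forward(self, feat):                                        # feat: (B, C, H, W)
        preds = torch.cat([e(feat) for e in self.experts], dim=1)   # (B, E, H, W)
        weights = torch.softmax(self.router(feat), dim=1)           # (B, E, H, W)
        return (weights * preds).sum(dim=1, keepdim=True)           # stitched depth map
```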
ZoeDepth and DS-SIDENet (Ren, El-Khamy, and Lee 2019) introduce scene understanding modules to enable models to learn scene differentiation, achieving a divide-and-conquer effect (Wang et al. 2020a). Specifically, ZoeDepth employs indoor and outdoor heads to predict the NYUD v2 (Silberman et al. 2012) and KITTI (Geiger et al. 2013) datasets respectively, and additionally designs a scene discriminator to distinguish between indoor and outdoor images. DS-SIDENet proposes a twostage model, where the first stage incorporates two different scene understanding modules based on scene classification and coarse depth estimation, while the second stage utilizes the DS-SIDENet trained on specific depth range images to obtain accurate depth maps. Our model takes ZoeDepth as the baseline, requiring similar routing mechanisms, but with a key distinction of employing pixel-level routing rather than simple image-based scene classification. Long-Tail Phenomenon in Classification Extensive and in-depth research is being conducted to address the issue of long-tail distribution in classification. When confronted with data imbalance, the most direct solution is resampling. SimCal (Wang et al. 2020c) proposes a two-level sampling strategy that combines image-level and instance-level resampling to alleviate class imbalance in instance segmentation. DCL (Wang et al. 2019) develops a novel curriculum strategy where the probability of subsequent sampling from a class decreases as the number of instances sampled from that class increases, dynamically rebalancing the class distribution. meta-softmax (Ren et al. 2020) introduces a meta-learning-based sampling approach that optimizes the model’s classification performance on a balanced meta-validation set to learn the optimal sampling rates for different categories in long-tail learning. Recently, numerous ensemble learning and decoupling methods have achieved significant progress in long-tailed recognition. MiSLAS (Zhong et al. 2021) enhances representation learning through data mixing and proposes a label-aware smoothing strategy to improve model generalization. BBN (Zhou et al. 2020) proposes the use of two network branches, namely the conventional learning branch and the re-balancing branch, for handling long-tail recognition tasks. RIDE (Wang et al. 2020d) trains independent softmax losses for each expert and introduces a diversity-promoting loss based on KL divergence to increase the diversity among different experts. TADE (Zhang et al. 2021) develops a new multi-expert framework and innovates expert training schemes by introducing domain-specific knowledge-guided losses to promote diversity in handling different class distributions. Our approach aligns more with ensemble learning methods such as RIDE and TADE, but we focus on reasonable grouping based on continuous depth space, distinguishing our work from these discrete classification problems. Long-Tail Distribution in Depth Estimation Depth estimation requires obtaining specific depth values for each pixel. Since it is a task with continuous values and no concept of categories, there is limited discussion and research on the phenomenon of data imbalance in depth estimation tasks over the years. Attention-Driven Loss (Jiao et al. 2018) conducts a statistical analysis of the frequency distribution of depth in the NYUD v2 and KITTI datasets. 
They design a reweighted loss function targeting the depth range of distant regions to enhance the model’s focus on The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6818 Figure 2: (a) Inverse frequency training. Both initialization methods were trained using inverse frequency as the supervision signal. (b) Depth distribution. The rarity scores were calculated and the samples were sorted in descending order. ’Total’ represents the depth frequency distribution of the entire dataset. The top 10, top 50, and top 100 samples are selected for statistical analysis, revealing a unimodal frequency distribution. depth values of underrepresented classes and improve performance during training. However, their research primarily focuses on distant depth values and does not provide a detailed and comprehensive analysis of the frequency distribution in depth estimation tasks. Taking the frequency distribution into account, we conduct a thorough investigation, demonstrating that the longtail distribution of depth values is a pixel-level problem. We also validate the specific impact of this imbalanced distribution on the model’s performance. Additionally, we conduct experiments to test common approaches for addressing data imbalance and propose a more effective solution. Method Analysis on Monocular Depth Distribution The Simulation Correlation effect (Hong et al. 2021) in long-tail classification tasks implies that data skewness can introduce undesirable bias to the model, thereby affecting its generalization capability. To investigate whether a similar phenomenon also exists in monocular depth estimation, we conduct quantitative analysis on both indoor (NYUD v2 (Silberman et al. 2012)) and outdoor (KITTI (Geiger et al. 2013)) datasets to study the depth distribution and its impact on depth estimation performance. Since the major findings are mostly consistent, we report the detailed results on NYUD v2 in the following. Interval-Wise Evaluation: Unlike classification, depth estimation as a regression problem has continuous output space. To leverage a similar analysis approach, we evenly divide the depth values of the NYUD v2 dataset into 10 intervals and calculate their frequencies. Interval-wise evaluations are then conducted on a series of recent state-of-the-art methods, including GLP (Kim et al. 2022), MIM (Xie et al. 2023), ZoeDepth (Bhat et al. 2023), and PixelFormer (Agarwal and Arora 2023). Among them, GLP and MIM are single-stage regression models, while ZoeDepth and PixelFormer are two-stage classification-regression models. The depth distribution and evaluation results are shown in Figure 1. It confirms that the depth distribution of entire dataset indeed follows a unimodal long-tail distribution, with most of the depth values in the range of 2-5 meters. Meanwhile, we can also observe a strong positive correlation between the depth frequency and the performance of all the compared methods. This may be caused by the Simulation Correlation effect, but may also be simply attributed to the fact that depth estimation at long distances is inherently more challenging (Jiao et al. 2018). Inverse-Frequency Training: To further confirm the Simulation Correlation effect, we analyze the impact of inversefrequency training on ZoeDepth. To this end, we first invert the frequency of training depth values through resampling, transforming the dominant depth intervals into minority ones and vice versa. The inverse as well as the original training depth frequency are shown in Figure 2 (a). 
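For readers who wish to reproduce this interval-wise analysis, the sketch below illustrates the binning and per-interval accuracy computation described above. It is an illustrative NumPy snippet rather than the evaluation code used in the paper; the depth range, validity thresholds, and function names are assumptions.

```python
import numpy as np

def interval_frequencies(depth_maps, d_min=1e-3, d_max=10.0, n_bins=10):
    """Frequency of valid depth values falling into each of n_bins equal-width intervals."""
    edges = np.linspace(d_min, d_max, n_bins + 1)
    counts = np.zeros(n_bins, dtype=np.int64)
    for d in depth_maps:                              # d: (H, W) metric depth in meters
        valid = d[(d > d_min) & (d < d_max)]
        hist, _ = np.histogram(valid, bins=edges)
        counts += hist
    return counts / counts.sum(), edges

def interval_wise_d1(pred, gt, edges):
    """delta_1 accuracy (max(pred/gt, gt/pred) < 1.25) computed per ground-truth depth interval."""
    scores = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (gt >= lo) & (gt < hi)
        if mask.sum() == 0:
            scores.append(np.nan)
            continue
        ratio = np.maximum(pred[mask] / gt[mask], gt[mask] / pred[mask])
        scores.append(float((ratio < 1.25).mean()))
    return scores
```

The inverse-frequency supervision discussed next can then be obtained by sampling or weighting pixels in proportion to the reciprocal of these bin frequencies.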
We then train two variants of ZoeDepth on the inverse-frequency data: 'Pretrain' denotes the one initialized from large-scale pre-trained parameters (Bhat et al. 2023), and 'w/o Pretrain' is trained from scratch. By comparing Figure 1 and Figure 2 (a), we can conclude that the depth estimation performance is mostly determined by the training data distribution. The performance of the 'Pretrain' variant further suggests that large-scale pretraining on relative depth can improve generalization across depth intervals to a certain extent. However, the improvement is still limited in remote regions due to the long-tailed distribution of the pretraining data. The above results and analysis confirm that the weak generalization issue of depth estimation can also be explained by the Simulation Correlation effect. Figure 3: DME architecture. The RGB image is processed through an encoder-decoder framework, yielding depth estimation results for four distance ranges. Each expert is responsible for performing depth estimation within a different distance range: near, middle, far, and ultra distances. The decoder's features are connected to a routing module, where confidence scores for each distance range are computed. Finally, the depth estimation results at these distances are linearly combined to obtain the ultimate depth estimation. Analysis on Image-level Depth Distribution: As depth estimation involves dense prediction, an intriguing question arises: does the long-tailed depth distribution persist at the image level as well? To investigate this, our basic idea is to identify the most uncommon image samples in the dataset, whose depth distributions significantly diverge from the overall depth distribution of the entire dataset. If these rare samples also display a similar long-tailed and unimodal pattern in their depth distribution, it suggests that the long-tailed property is a phenomenon observed at the image level. To this end, we define the rarity score S of an image as its KL divergence to the dataset: S = \sum_i p_i \log \frac{p_i}{q_i}, (1) where p_i and q_i denote the frequency of the i-th depth interval calculated from the entire dataset and from the image, respectively. We then sort all the images by their rarity scores in descending order and select the top-K images to recalculate their depth distributions. As shown in Figure 2 (b), the depth distributions for K = 10, 50, 100 exhibit striking similarities and successively converge towards that of the entire dataset. This observation suggests that the long-tailed depth distribution is a prevalent characteristic at the image level. This is easy to comprehend when considering the nature of perspective projection, where closer regions tend to dominate the resulting images, leading to an unbalanced depth distribution.
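A minimal sketch of this rarity-score analysis (Eq. (1)) is given below, assuming per-interval frequencies computed as in the previous snippet; the function names and the top-K aggregation are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def rarity_score(image_freq, dataset_freq, eps=1e-8):
    """KL divergence of Eq. (1) between dataset-level (p) and image-level (q) interval frequencies."""
    p = np.asarray(dataset_freq) + eps
    q = np.asarray(image_freq) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def topk_rare_distribution(per_image_freqs, dataset_freq, k=50):
    """Average depth distribution of the K images that diverge most from the dataset distribution."""
    scores = np.array([rarity_score(f, dataset_freq) for f in per_image_freqs])
    top = np.argsort(scores)[::-1][:k]
    return np.mean([per_image_freqs[i] for i in top], axis=0)
```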
Distance-Aware Multi-Expert Model In light of the above findings, we conjecture that the generalization issue in depth estimation may be caused by multiple factors, including the imbalanced depth distribution present in both training datasets and individual monocular images, as well as the influence of the Simulation Correlation effect. To verify this, we propose the Distance-aware Multi-Expert (DME) depth estimation paradigm. Our major insight is to divide the overall depth range into multiple intervals handled by specific experts. The long-tailed pattern of the depth distribution in each interval will significantly diminish. As such, the difficulty of depth estimation for each depth interval will be effectively alleviated. Figure 3 presents an overview of the architecture. Given an input image, we adopt an encoder-decoder structure (Touvron et al. 2021) to extract multi-scale features. The experts are convolutional sub-networks built upon the extracted features and responsible for predicting depth within specific intervals. A pixel-wise routing module is further designed, which aggregates the output of all experts into the final depth estimation results. Though conceptually simple, our method, motivated by the quantitative analysis, shows superior performance in our experiments. As opposed to a uniform depth partition, we empirically divide the depth range based on observations. We consider the NYUD v2 and KITTI datasets as they contain both indoor and outdoor scenes and are thus more representative. To ensure that the depth distribution within each interval is roughly balanced, we divide the depth range into 4 intervals: 1-3 meters, 3-10 meters, 10-25 meters, and 25-80 meters. As shown in our experiments, this empirical partition generalizes well even to unseen datasets. Pixel-Level Routing Module We further design a lightweight routing module to automatically combine the output of all the experts into the final depth estimation results. Different from the image-level routing method in (Bhat et al. 2023), our proposed routing module operates at the pixel level. As shown in Figure 3, the input features from the multi-scale hierarchy are first projected into C channels via 1 × 1 convolutions and then upsampled to the original image resolution H × W through consecutive bilinear interpolations. The concatenation of the upsampled features is passed through a series of router layers, which consist of two 1 × 1 convolution layers interleaved with batch normalization and ReLU non-linearities. The router finally generates a weight map of size H × W × N, with N denoting the number of experts. The weight map is normalized using a Softmax layer along the channel dimension. The final depth map is obtained through a summation of the depth maps predicted by all the experts, weighted by the weight map. Network Training Based on our preliminary experiments, we develop a two-stage training strategy, which facilitates better model learning and allows the experts to acquire more diverse skills (Zhang et al. 2021). In the first stage, we train the encoder-decoder and the four experts. Specifically, we utilize the ground truth as routing information, i.e., which expert should predict each pixel. For a particular expert, its loss solely stems from the depth range it is responsible for, without considering the estimation performance of other ranges. Consequently, each expert can achieve favorable estimation performance within its designated range, endowing the entire model with the capability to handle multiple depth ranges. In the second stage, we freeze all the network parameters obtained from the first stage and only train the router. In both stages, we train the model using the scale-invariant loss (Lee et al. 2019b) as follows: L = \alpha \sqrt{\frac{1}{T}\sum_i g_i^2 - \left(\frac{1}{T}\sum_i g_i\right)^2 + (1-\lambda)\left(\frac{1}{T}\sum_i g_i\right)^2}, (2) where g_i = \log(d_i) - \log(\hat{d}_i), with d_i and \hat{d}_i being the ground-truth and predicted depth, respectively; T denotes the number of valid pixels; λ and α are set to 0.85 and 10, respectively.
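To make the routing and the training objective concrete, the following PyTorch sketch shows one possible implementation of the pixel-level router, the expert combination, and the loss of Eq. (2) in its algebraically equivalent form. Channel widths, module names, and the decoder interface are assumptions and not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelLevelRouter(nn.Module):
    """Pixel-wise routing over N experts; channel widths are illustrative assumptions."""
    def __init__(self, in_channels, c=64, num_experts=4):
        super().__init__()
        # project each decoder scale to C channels, then fuse with two 1x1 conv router layers
        self.proj = nn.ModuleList([nn.Conv2d(ch, c, 1) for ch in in_channels])
        self.router = nn.Sequential(
            nn.Conv2d(c * len(in_channels), c, 1), nn.BatchNorm2d(c), nn.ReLU(inplace=True),
            nn.Conv2d(c, num_experts, 1),
        )

    def forward(self, feats, out_hw):
        # feats: list of decoder feature maps at different scales
        ups = [F.interpolate(p(f), size=out_hw, mode="bilinear", align_corners=False)
               for p, f in zip(self.proj, feats)]
        weights = self.router(torch.cat(ups, dim=1))          # (B, N, H, W)
        return weights.softmax(dim=1)                         # normalize over experts

def combine_experts(expert_depths, weights):
    """Weighted sum of per-expert depth maps (B, N, H, W) -> (B, 1, H, W)."""
    return (expert_depths * weights).sum(dim=1, keepdim=True)

def silog_loss(pred, target, valid, lam=0.85, alpha=10.0):
    """Scale-invariant loss, algebraically equivalent to Eq. (2): alpha*sqrt(mean(g^2) - lam*mean(g)^2)."""
    g = torch.log(pred[valid]) - torch.log(target[valid])
    return alpha * torch.sqrt((g ** 2).mean() - lam * g.mean() ** 2)
```

In the first training stage described above, the softmax weights would be bypassed and replaced by the ground-truth interval assignment of each pixel, so that each expert is supervised only on its own depth range.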
Experiments In this section, we conduct comprehensive comparisons and ablations to verify the motivation and effectiveness of the proposed method. Setup Implementation Details Our training data consists of NYUD v2 (Silberman et al. 2012) and KITTI (Geiger et al. 2013) training datasets. Data augmentation including random horizontal flipping, random changes in brightness, contrast, and random rotation is adopted following ZoeDepth (Bhat et al. 2023). For parameter initialization, the encoderdecoder is initialized using the pre-trained weights of MiDas (Ranftl et al. 2020). During training, the Adam optimizer (Kingma and Ba 2014) is employed with a batch size of 2 and a weight decay of 1e-2. The initial learning rate is set to 3e-4. The training process is performed on one NVIDIA GeForce RTX 3090Ti GPU, taking about 20 hours in total. Our method is evaluated on two standard benchmark datasets: the NYUD v2 test set and the KITTI test set. In addition, to validate the generalization ability of our approach, we conduct further evaluation on four additional datasets that have never been seen during training: DIODE (Vasiljevic et al. 2019), iBims benchmark (Koch et al. 2018), DIML (Kim et al. 2018) and Virtual KITTI 2 (Cabon, Murray, and Humenberger 2020). Comparisons on NYUD v2 and KITTI Table 1 reports the quantitative results on NYUD v2 and KITTI test sets. Note that the NYUD v2 and KITTI datasets are collected from indoor and outdoor scenes, respectively, exhibiting significant differences in terms of camera parameters and depth distributions. Consequently, simply merging the two training datasets for network learning without employing specific designs would lead to a performance degradation. To this end, the compared methods BTS (Lee et al. 2019a), Adabins (Bhat, Alhashim, and Wonka 2021), LocalBins (Bhat, Alhashim, and Wonka 2022), PixelFormer (Agarwal and Arora 2023), and NeWCRFs (Yuan et al. 2022) require specific model design and respective network training for each dataset. Neverthless, they still show inferior performance on both datasets with limited generalization capabilities. The recent leading method ZoeDepth (Bhat et al. 2023) employs a scene discriminator and two prediction heads (i.e., indoor and outdoor heads) and does well for the two datsets with one set of model parameters. In contrast, our method DME achieves the best performance with single prediction head upon our distance-aware mechanism. In addition, we also compare with one of our oracle methods DME-GT, which utilizes the ground truth depth instead of the expert predictions for the final Router. It shows that DME obtains comparable and even slightly better performance compared with DME-GT. Generalization to Unseen Datasets To validate the generalization performance of our design, we further conduct zero-shot testing on four unseen datasets, including two indoor and three outdoor scenarios. Apart from ZoeDepth (Bhat et al. 2023) and our DME, other compared methods use different sets of trained parameters that are respectively trained using NYUD v2 and KITTI datasets for indoor and outdoor evaluation. The results of outdoor and indoor scenarios are presented in Table 2 and 3, respectively. For the DIODE Indoor dataset (Vasiljevic et al. 2019), our DME delivers significantly better performance than previous state-of-the-art models. Compared to the prior best method ZoeDepth on the iBims-1 benchmark (Koch et al. 2018), we achieve a reduction of 0.148 in terms of RMSE. 
For the unseen outdoor datasets Virtual KITTI 2 (Cabon, Murray, and Humenberger 2020), DIML Outdoor (Kim et al. 2018), and DIODE Outdoor (Vasiljevic et al. 2019), DME with the same trained network parameters also show remarkable performance. Ablation Study To investigate the main contributions and key designs of our method, a series of ablation experiments are conducted. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6821 Dataset Method d1↑ d2↑ d3↑ RMSE↓ log10↓ Abs.Rel↓ RMSE log↓ BTS 0.885 0.978 0.995 0.392 0.047 0.110 ∼ AdaBins 0.903 0.984 0.997 0.364 0.044 0.103 ∼ LocalBins 0.907 0.987 0.998 0.357 0.042 0.099 ∼ NYUD v2 NeWCRFs 0.922 0.992 0.998 0.344 0.041 0.095 ∼ PixelFormer 0.929 0.991 0.998 0.322 0.039 0.090 ∼ ZoeDepth 0.953 0.995 0.999 0.277 0.033 0.077 ∼ DME 0.956 0.995 0.999 0.268 0.032 0.074 0.094 DME-GT 0.964 0.997 1.0 0.244 0.03 0.069 0.087 AdaBins 0.964 0.995 0.999 2.360 ∼ 0.058 0.088 NeWCRFs 0.974 0.997 0.999 2.129 ∼ 0.052 0.079 PixelFormer 0.976 0.997 0.999 2.081 0.077 0.051 ∼ KITTI MIM 0.977 0.998 1.0 1.966 ∼ 0.05 0.075 ZoeDepth 0.967 0.995 0.999 2.290 ∼ 0.057 0.091 DME 0.980 0.999 1.0 1.905 0.023 0.05 0.075 DME-GT 0.982 0.998 1.0 1.907 0.021 0.048 0.073 Table 1: Quantitative comparison on NYUD v2 and KITTI. The best results are in bold, and the second best is underlined. Virtual KITTI 2 DIML Outdoor DIODE Outdoor Method d1↑ REL↓ RMSE↓ d1↑ REL↓ RMSE↓ d1↑ REL↓ RMSE↓ BTS 0.831 0.115 5.368 0.016 1.785 5.908 0.171 0.837 10.48 AdaBins 0.826 0.122 5.420 0.013 1.941 6.272 0.161 0.863 10.35 LocalBins 0.810 0.127 5.981 0.016 1.820 6.706 0.170 0.821 10.27 NeWCRFs 0.829 0.117 5.691 0.010 1.918 6.283 0.176 0.854 9.228 ZoeDepth 0.850 0.105 5.095 0.292 0.641 3.610 0.208 0.757 7.569 DME 0.840 0.118 4.433 0.199 0.835 3.793 0.251 0.777 9.570 DME-GT 0.881 0.097 3.943 0.296 0.472 2.12 0.508 0.360 5.713 Table 2: Results of zero-shot transfer on three outdoor datasets not seen during training. The results of the prior works are from the original paper of ZoeDepth (Bhat et al. 2023). The best results are in bold, and the second best is underlined. DIODE Indoor iBims-1 Benchmark Method d1 REL RMSE d1 REL RMSE BTS 0.210 0.418 1.905 0.538 0.231 0.919 AdaBins 0.174 0.443 1.963 0.555 0.212 0.901 LocalBins 0.229 0.412 1.853 0.558 0.211 0.880 NeWCRFs 0.187 0.404 1.867 0.548 0.206 0.861 ZoeDepth 0.386 0.331 1.598 0.615 0.186 0.777 DME 0.479 0.744 0.862 0.585 0.316 0.635 DME-GT 0.654 0.219 0.822 0.589 0.315 0.629 Table 3: Results of zero-shot transfer on two indoor datasets not seen during training. The results of the prior works are from the original paper of ZoeDepth (Bhat et al. 2023). The best results are in bold, and the second best is underlined. Ablation on Distance Grouping To verify our distanceaware strategy, two basic variants are designed. Considering that ZoeDepth (Bhat et al. 2023) has the comparable parameters as ours but treats all the distance indiscriminately, we retrain it as our ‘Baseline’ method under the same experimental settings as ours. Besides, we make our four experts learn to be responsible for equal interval distances by dividing the 0-80 meter range into four distance ranges: 0-20 meters, 20-40 meters, 40-60 meters, and 60-80 meters. We term this variant as ‘Equally spaced’, which does not take into account the frequency distribution of depth values. As shown in Table 4, the results indicate that the equal interval grouping slightly improves the performance compared to Figure 4: Impact of the Number of Experts on Performance. 
The two images on the left and right display the test results of the NYUD v2 and KITTI datasets, respectively. the baseline model. On the other hand, our Distance-aware grouping approach outperforms the equal interval grouping. These results indicate that grouping based on the similarity of frequency distributions helps alleviate the issue of data imbalance within each interval, thereby contributing to an overall performance boost. Furthermore, we compare the number of experts and find that as the number of experts increases, the error decreases further. We still perform depth value grouping based on frequency, and when we use 8 experts, the RMSE decreases to 0.225 for NYUD v2, as shown in Figure 4. Evaluating Class Imbalance Techniques To evaluate the effectiveness of our method, we compare two commonly used approaches that address the long-tail distribution issue: The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6822 Dataset Method d1↑ d2↑ d3↑ RMSE↓ log10↓ Abs.Rel ↓ Baseline 0.953 0.995 0.999 0.277 0.033 0.077 NYUD v2 Equally spaced 0.955 0.995 0.999 0.270 0.032 0.075 Distance-aware 0.964 0.997 1.0 0.244 0.030 0.069 baseline 0.977 0.998 1.0 2.103 0.024 0.051 KITTI Equally spaced 0.978 0.998 1.0 2.049 0.022 0.050 Distance-aware 0.982 0.998 1.0 1.907 0.021 0.048 Table 4: The Impact of Different Grouping Methods. The baseline refers to the results obtained by training ZoeDepth under the same training and testing conditions as ours. ’Equally spaced’ corresponds to the method of equally dividing depth values into intervals, while ’Distance-aware’ corresponds to the method of grouping depth values based on frequency similarity. Method d1↑ d2↑ d3↑ RMSE↓ log10↓ Abs.Rel↓ Baseline 0.953 0.995 0.999 0.277 0.033 0.077 Reweighting-freq 0.945 0.993 0.999 0.334 0.034 0.080 Reweighting-depth 0.923 0.989 0.997 0.337 0.039 0.093 Resampling 0.955 0.995 0.999 0.269 0.032 0.075 MDE-GT-8 0.966 0.998 1.0 0.225 0.030 0.069 Table 5: Comparison of Different Methods for Addressing Imbalanced Data. The resampling approach involves discarding supervision signals from the majority class based on a specified ratio, while the reweighting approach adjusts the weights in the loss function based on the frequency or depth values. All methods are trained on NYUD v2 dataset. resampling and reweighting. Resampling is a simple and effective technique for handling data imbalance, where we categorize depth values into dominant, common, and minority classes based on frequencies. Specifically, we employ a posterior analysis approach and set the dropout frequencies for the three categories to 0.95, 0.75, and 0.00, respectively. By reducing the sampling frequency of the dominant and common classes, we aim to achieve roughly equal frequencies for the three categories of supervised signals. Reweighting is also considered as a method to address data imbalance, where the loss is adjusted based on the class to increase attention on the minority class. We explore two weighting approaches. The first approach, as described in (Jiao et al. 2018), assigns higher weights to depth values that are farther away from the camera based on their magnitude. The second approach leverages our observations on simulated correlations, assigning larger weights to depth values with longer periods to guide the model’s focus toward infrequently occurring depth values. The experimental results in Table 5 demonstrate that DME-GT-8 which employs 8 experts is the most effective. 
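For reference, the resampling and reweighting baselines discussed above can be sketched as follows; the interval edges, the dominant/common/minority assignment, and all names are illustrative assumptions rather than the exact protocol used in our experiments.

```python
import torch

def resample_mask(gt_depth, edges, bin_class, drop_p=None):
    """Randomly drop supervision for pixels whose depth falls in over-represented bins."""
    if drop_p is None:
        drop_p = {"dominant": 0.95, "common": 0.75, "minority": 0.0}
    bin_idx = torch.bucketize(gt_depth, edges) - 1            # interval index of each pixel
    keep = torch.ones_like(gt_depth, dtype=torch.bool)
    for b, cls in enumerate(bin_class):                       # e.g. ["minority", "common", "dominant", ...]
        drop = (bin_idx == b) & (torch.rand_like(gt_depth) < drop_p[cls])
        keep &= ~drop
    return keep                                                # use as the valid-pixel mask in the loss

def inverse_frequency_weights(gt_depth, edges, bin_freq, eps=1e-6):
    """Per-pixel loss weights inversely proportional to the frequency of the pixel's depth bin."""
    bin_idx = (torch.bucketize(gt_depth, edges) - 1).clamp(min=0)
    freq = torch.as_tensor(bin_freq, dtype=gt_depth.dtype, device=gt_depth.device)
    w = 1.0 / (freq[bin_idx] + eps)
    return w / w.mean()
```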
Our approach can not only address the issue of data skewness but also improves the overall performance. This may be attributed to suboptimal sampling strategies or non-optimal hyperparameters for both methods. However, we are well aware that finding appropriate hyperparameters is a timeconsuming process that can render the model fragile. In comparison, our design offers stability and effectiveness. Limitation Our work showcases outstanding generalization capabilities on diverse datasets, providing novel insights and methodologies for the research and application of depth estimation. However, a limitation of the current approach lies in the insufficient accuracy of the routing mechanism, which restricts the overall model performance. Therefore, one future research direction is to design a more accurate and elegant routing approach to further enhance the model’s performance and streamline the operational workflow. Conclusion In this study, we conduct a comprehensive investigation into the phenomenon of distribution skewness in the task of depth estimation and empirically demonstrate the adverse bias it introduces to models. Based on this observation, we propose a Distance-aware Multi-Expert regression model to enhance the performance of depth estimation. The model is designed with a two-stage architecture, where the first stage accurately estimates depth values in different depth ranges, and the second stage utilizes a routing mechanism to achieve a reasonable combination of depth values, yielding the final depth estimation results. Through this study, we aim to not only contribute to a better understanding of long-tail learning in continuous space for researchers and the academic community but also drive advancements in this field. We hope that this work serves as a reference for improving depth estimation tasks and inspires further research and exploration in the realm of longtail learning in continuous space. Acknowledgments This work is supported by the National Natural Science Foundation of China (U23A20386, 62276045, 62293540, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6823 62293542, 62006036), Dalian Science and Technology Talent Innovation Support Plan (2022RY17), OPPO Research Fund, and Fundamental Research Funds for Central Universities (DUT22LAB124, DUT22QN228) References Agarwal, A.; and Arora, C. 2023. Attention Attention Everywhere: Monocular Depth Prediction with Skip Attention. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 5861–5870. Bhat, S. F.; Alhashim, I.; and Wonka, P. 2021. Adabins: Depth estimation using adaptive bins. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4009–4018. Bhat, S. F.; Alhashim, I.; and Wonka, P. 2022. Localbins: Improving depth estimation by learning local distributions. In European Conference on Computer Vision, 480–496. Bhat, S. F.; Birkl, R.; Wofk, D.; Wonka, P.; and M¨uller, M. 2023. Zoedepth: Zero-shot transfer by combining relative and metric depth. arXiv preprint arXiv:2302.12288. Cabon, Y.; Murray, N.; and Humenberger, M. 2020. Virtual kitti 2. arXiv preprint arXiv:2001.10773. Geiger, A.; Lenz, P.; Stiller, C.; and Urtasun, R. 2013. Vision meets robotics: The kitti dataset. The International Journal of Robotics Research, 32(11): 1231–1237. He, J.; Wang, Y.; Wang, L.; Lu, H.; Luo, B.; He, J.-Y.; Lan, J.-P.; Geng, Y.; and Xie, X. 2023. Towards Deeply Unified Depth-aware Panoptic Segmentation with Bi-directional Guidance Learning. 
In Proceedings of the IEEE International Conference on Computer Vision, 4111–4121. Hong, Y.; Han, S.; Choi, K.; Seo, S.; Kim, B.; and Chang, B. 2021. Disentangling label distribution for long-tailed visual recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 6626–6636. Jiao, J.; Cao, Y.; Song, Y.; and Lau, R. 2018. Look deeper into depth: Monocular depth estimation with semantic booster and attention-driven loss. In Proceedings of the European Conference on Computer vision, 53–69. Kim, D.; Ka, W.; Ahn, P.; Joo, D.; Chun, S.; and Kim, J. 2022. Global-local path networks for monocular depth estimation with vertical cutdepth. arXiv preprint arXiv:2201.07436. Kim, Y.; Jung, H.; Min, D.; and Sohn, K. 2018. Deep monocular depth estimation via integration of global and local predictions. IEEE Transactions on Image Processing, 27(8): 4131–4144. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Koch, T.; Liebel, L.; Fraundorfer, F.; and Korner, M. 2018. Evaluation of cnn-based single-image depth estimation methods. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 0–0. Lee, J. H.; Han, M.-K.; Ko, D. W.; and Suh, I. H. 2019a. From big to small: Multi-scale local planar guidance for monocular depth estimation. arXiv preprint arXiv:1907.10326. Lee, J. H.; Han, M.-K.; Ko, D. W.; and Suh, I. H. 2019b. From big to small: Multi-scale local planar guidance for monocular depth estimation. arXiv preprint arXiv:1907.10326. Ranftl, R.; Bochkovskiy, A.; and Koltun, V. 2021. Vision transformers for dense prediction. In Proceedings of the IEEE International Conference on Computer Vision, 12179– 12188. Ranftl, R.; Lasinger, K.; Hafner, D.; Schindler, K.; and Koltun, V. 2020. Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(3): 1623–1637. Ren, H.; El-Khamy, M.; and Lee, J. 2019. Deep Robust Single Image Depth Estimation Neural Network Using Scene Understanding. In CVPR Workshops, volume 2. Ren, J.; Yu, C.; Ma, X.; Zhao, H.; Yi, S.; et al. 2020. Balanced meta-softmax for long-tailed visual recognition. Advances in Neural Information Processing Systems, 33: 4175–4186. Ren, W.; Wang, L.; Piao, Y.; Zhang, M.; Lu, H.; and Liu, T. 2022. Adaptive co-teaching for unsupervised monocular depth estimation. In European Conference on Computer Vision, 89–105. Silberman, N.; Hoiem, D.; Kohli, P.; and Fergus, R. 2012. Indoor segmentation and support inference from rgbd images. Proceedings of the European Conference on Computer vision, 7576: 746–760. Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; and J´egou, H. 2021. Training data-efficient image transformers & distillation through attention. In International conference on machine learning, 10347–10357. Vasiljevic, I.; Kolkin, N.; Zhang, S.; Luo, R.; Wang, H.; Dai, F. Z.; Daniele, A. F.; Mostajabi, M.; Basart, S.; Walter, M. R.; et al. 2019. Diode: A dense indoor and outdoor depth dataset. arXiv preprint arXiv:1908.00463. Wang, L.; Wang, Y.; Wang, L.; Zhan, Y.; Wang, Y.; and Lu, H. 2021. Can scale-consistent monocular depth be learned in a self-supervised scale-invariant manner? In Proceedings of the IEEE International Conference on Computer Vision, 12727–12736. Wang, L.; Zhang, J.; Wang, O.; Lin, Z.; and Lu, H. 2020a. SDC-Depth: Semantic divide-and-conquer network for monocular depth estimation. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 541–550. Wang, L.; Zhang, J.; Wang, Y.; Lu, H.; and Ruan, X. 2020b. Cliffnet for monocular depth estimation with hierarchical embedding loss. In European Conference on Computer Vision, 316–331. Springer. Wang, T.; Li, Y.; Kang, B.; Li, J.; Liew, J.; Tang, S.; Hoi, S.; and Feng, J. 2020c. The devil is in classification: A simple framework for long-tail instance segmentation. In Proceedings of the European Conference on Computer vision, 728–744. Springer. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6824 Wang, X.; Lian, L.; Miao, Z.; Liu, Z.; and Yu, S. X. 2020d. Long-tailed recognition by routing diverse distributionaware experts. arXiv preprint arXiv:2010.01809. Wang, Y.; Gan, W.; Yang, J.; Wu, W.; and Yan, J. 2019. Dynamic curriculum learning for imbalanced data classification. In Proceedings of the IEEE International Conference on Computer Vision, 5017–5026. Xie, Z.; Geng, Z.; Hu, J.; Zhang, Z.; Hu, H.; and Cao, Y. 2023. Revealing the dark secrets of masked image modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 14475–14485. Yuan, W.; Gu, X.; Dai, Z.; Zhu, S.; and Tan, P. 2022. Neural window fully-connected crfs for monocular depth estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3916–3925. Zhang, Y.; Hooi, B.; Hong, L.; and Feng, J. 2021. Testagnostic long-tailed recognition by test-time aggregating diverse experts with self-supervision. arXiv e-prints, arXiv– 2107. Zhong, Z.; Cui, J.; Liu, S.; and Jia, J. 2021. Improving calibration for long-tailed recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 16489–16498. Zhou, B.; Cui, Q.; Wei, X.-S.; and Chen, Z.-M. 2020. Bbn: Bilateral-branch network with cumulative learning for longtailed visual recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 9719– 9728. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6825 | 2024 | 758 |
18,582 | DOCTR: Disentangled Object-Centric Transformer for Point Scene Understanding Xiaoxuan Yu1*, Hao Wang1, Weiming Li1, Qiang Wang1, Soonyong Cho2, Younghun Sung2 1Samsung Research China – Beijing 2Samsung Advanced Institute of Technology [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Point scene understanding is a challenging task to process real-world scene point cloud, which aims at segmenting each object, estimating its pose, and reconstructing its mesh simultaneously. Recent state-of-the-art method first segments each object and then processes them independently with multiple stages for the different sub-tasks. This leads to a complex pipeline to optimize and makes it hard to leverage the relationship constraints between multiple objects. In this work, we propose a novel Disentangled Object-Centric TRansformer (DOCTR) that explores object-centric representation to facilitate learning with multiple objects for the multiple sub-tasks in a unified manner. Each object is represented as a query, and a Transformer decoder is adapted to iteratively optimize all the queries involving their relationship. In particular, we introduce a semantic-geometry disentangled query (SGDQ) design that enables the query features to attend separately to semantic information and geometric information relevant to the corresponding sub-tasks. A hybrid bipartite matching module is employed to well use the supervisions from all the sub-tasks during training. Qualitative and quantitative experimental results demonstrate that our method achieves state-of-the-art performance on the challenging ScanNet dataset. Code is available at https://github.com/SAITPublic/DOCTR. Introduction Understanding 3D scene is important for various spatial modeling and interaction applications such as augmented reality (AR), autonomous driving, and robotics. With the progress of 3D deep learning, recent real-world applications seek more comprehensive understanding of rich and detailed attributes for all objects of interests in a scene. This paper addresses a recent task of point scene understanding (Nie et al. 2021; Tang et al. 2022) that simultaneously involves several sub-tasks including object classification, instance segmentation, pose estimation, and object mesh reconstruction. The input is point cloud of real-world scene that is obtained by 3D scanning or reconstruction. Such data often contains noisy and missing portions due to occlusions or sensor limitations, which make the task even more challenging. *Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Point Scene Understanding. With an incomplete point cloud scene as input, our method learns to segment each object instance and reconstruct its complete mesh. Compared to the recent DIMR (Tang et al. 2022), our proposed DOCTR has a compact pipeline and achieves more accurate results in challenging cases such as when multiple objects are in close proximity. For the point scene understanding task, an early method RfD-Net(Nie et al. 2021) is among the first to learn object meshes at semantic-instance level directly from points. RfD-Net proposes a reconstruction-from-detection framework that enables identification and reconstruction of object meshes at a high resolution. It confirms that object recognition and reconstruction are mutually reinforcing tasks. Recently, DIMR (Tang et al. 
2022) founds that the reconstruction-from-detection pipeline tends to fail when reconstructing high fidelity objects. DIMR shows that replacing the detection backbone in RfD-Net by a segmentation backbone and disentangling instance completion and mesh generation contribute to improve object reconstruction quality. These designs empower DIMR to achieve state-of-theart (SOTA) performance. However, DIMR still relies on geometric clustering for object segmentation that requires tuning of hyper-parameters such as the radius for clustering. Meanwhile, after object segmentation, DIMR treats each object instance independently to estimate its pose, shape latent code, and reconstruct its mesh. DIMR’s pipeline is complex to jointly optimize the segmentation and other sub-tasks. Further, the separate processing for each object makes it hard to leverage the relationship between multiple objects. In test with real-world scenes, DIMR is prone to artifacts for The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6826 multiple nearby objects as shown in Figure 1. To solve the above issues, our work is inspired by objectcentric learning (Carion et al. 2020; Locatello et al. 2020), which proposes to learn efficient representations of a scene by decomposing it into objects, thus is natural to explore the object relationship. Recently, object-centric learning methods are introduced into 3D point cloud segmentation task (Schult et al. 2023; Sun et al. 2023) and show promising performance. In this paper, we attempt to introduce an objectcentric learning based method for the point scene understanding task. Specifically, each object is represented as a query, and a Transformer decoder is adapted to iteratively optimize all the queries involving their relationship. However, applying an object-centric framework to the point scene understanding task is not straight forward. In existing methods such as RfD-Net and DIMR, after obtaining object proposals, each object’s point cloud is transformed to object’s canonical coordinate frame using the initial pose estimation result. This 3D alignment is important to improve subsequent geometric sub-tasks such as object pose refinement, shape completion, and mesh reconstruction. However, object-centric Transformer uses a unified representation for each object, which is not meaningful to perform any 3D coordinate frame alignment. This makes it difficult to learn object geometric information such as pose and shape. To address this issue, we propose a semantic-geometry disentangled query (SGDQ) design. Different from the origin query, SGDQ disentangles each query to a semantic part and a geometry part. The semantic part is supervised by semantic sub-tasks such as object classification and semantic instance segmentation. The geometric part is supervised by the geometric sub-tasks such as pose estimation and mesh reconstruction. With the distinct focuses on semantic and geometric sub-tasks, our SGDQ empowers disentangled learning for the task-specific features. Following the above, we propose our method as Disentangled Object-Centric TRansformer (DOCTR). Our DOCTR consists of a backbone, a disentangled Transformer decoder with the SGDQ design, a prediction head for the sub-tasks of point scene understanding, and a shape decoder. It facilitates learning with multiple objects for the multiple subtasks in a unified manner. Compared to DIMR, our DOCTR has a much more compact pipeline as shown in Figure 2. Meanwhile, different from Mask3D (Schult et al. 
2023), our SGDQ design allows learning for multiple different subtasks that relates to either semantic or geometric attributes of the scene objects. For training our DOCTR model, we employ a hybrid bipartite matching strategy during the matching process between ground truths and SGDQs. We also propose a mask-enhanced box refinement module that leverages segmentation to improve pose estimation. Extensive experiments are performed with the real-world large-scale ScanNet dataset in comparison to the SOTA methods. The contributions of this paper are as follows: • As far as we know, our DOCTR is the first to introduce an object-centric Transformer-based network for the point scene understanding task that allows learning with multiple objects and multiple sub-tasks in a unified manner. Figure 2: Comparison of the pipelines between DIMR (Tang et al. 2022) and our DOCTR. DIMR first segments each object instance by clustering and then processes them independently with multiple stages for the different sub-tasks. Our DOCTR pipeline is much more compact that represents each object instance as a query and optimize for multiple objects and multiple sub-tasks in a unified manner. • We propose semantic-geometry disentangled query (SGDQ) that enables the DOCTR network to extract semantic and geometric features for the different sub-tasks to effectively use disentangled representations. • Qualitative and quantitative experimental results on the challenging ScanNet dataset show that our proposed method achieves superior performance than previous SOTA methods, especially for the challenging cases such as cluttered scenes with nearby objects. Related Work Point scene understanding. Many real-world applications require understanding both semantic and geometric attributes of 3D scenes. Among different types of inputs, point scene understanding is related to 3D deep learning using only scene’s point cloud as input. While point cloud is commonly used for object-level tasks such as object completion (Huang, Chen, and Li 2019; Wang, Ang Jr, and Lee 2020; Xie et al. 2020; Yuan et al. 2018), less work (Zhong and Zeng 2020; Rist et al. 2020, 2021) use point cloud with neural networks for scene understanding. Early works (Garbade et al. 2019; Roldao, de Charette, and Verroust-Blondet 2020; Wu et al. 2020; Yan et al. 2021) often use voxelized input and then predict the semantic label of each voxel in both visible and occluded regions. They aim to jointly estimate the complete geometry and semantic labels from partial input. Working with voxels, computational cost and resolution need to be balanced which limits to reconstruct high fidelity object meshes. To solve such issues, recent point scene understanding methods explore the relationship of semantic and geometric information, containing object localization and reconstruction. RfD-Net (Nie et al. 2021) is among the first to propose a reconstruction-from-detection framework to detect The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6827 Figure 3: Our proposed DOCTR pipeline consists of a backbone, a disentangled Transformer decoder (DTD), a prediction head, and a shape decoder. We propose semantic-geometry disentangled queries (SGDQs) to represent scene objects. The DTD is trained to attend the SGDQs to multi-scale point-wise features extracted from the sparse 3D U-Net backbone. In inference, each object SGDQ is passed to the prediction head to predict the object’s mask, class, box (pose), and shape code. 
The shape code is input to the shape decoder to reconstruct the object’s complete mesh, which is then aligned by the estimated pose to the scene. objects and generate complete object meshes. It shows that the tasks of object detection and object completion are complementary. Current state-of-the-art is achieved by DIMR (Tang et al. 2022) that proposes to use segmentation network instead of detection network to improve the accuracy of object recognition. DIMR also disentangles the tasks of object completion and mesh generation, mitigating the ambiguity of learning complete shapes from incomplete point cloud observations. In contrast to existing methods, our proposed approach introduces an object-centric Transformerbased network for the point scene understanding task that enables simultaneous learning with multiple objects and multiple sub-tasks in a unified manner. Object-centric learning. Learning with object-centric representation is a promising approach to understanding realworld complex scenes with multiple objects. With the help of set prediction formulation, DETR(Carion et al. 2020) proposes to learn object-centric representations with the Transformer architecture for 2D object detection. In (Locatello et al. 2020), a fully-unsupervised approach based on slot attention for object-centric learning is proposed. Objectcentric learning has achieved great success in 2D tasks(Zhu et al. 2020; Liu et al. 2022; Zhang et al. 2022). Recently, there have been notable advancements in applying objectcentric learning to 3D point cloud segmentation or detection tasks(Schult et al. 2023; Sun et al. 2023; Zhu et al. 2023). These methods have demonstrated performance that rivals or even surpasses the current SOTA approaches in these fields. Different from the recent object-centric models for 3D tasks such as Mask3D (Schult et al. 2023), our work proposes a semantic-geometry disentangled query design that allows object-centric Transformer to deal with multiple different semantic and geometric sub-tasks simultaneously. Methods An overview of our posed DOCTR pipeline is illustrated in Figure 3. Our DOCTR consists of a sparse 3D U-Net backbone, a disentangled Transformer decoder (DTD), a prediction head, and a shape decoder. All objects are represented by our designed SGDQs and each SGDQ corresponds to an object instance. During training, the Transformer decoder is trained to attend the SGDQs to the multi-scale point-wise features extracted by the backbone. To supervise the training, our proposed hybrid bipartite matching is used to assign either a ‘no object’ class or an object’s ground-truth attributes to the corresponding object SGDQ. In inference, for each object’s SGDQ, the prediction head predicts its mask, class, pose (box), and a shape code. The predicted shape code is decoded by a shape decoder to reconstruct the object’s complete mesh. Then the mesh is transformed by the estimated pose to align to the scene’s coordinate frame as the object’s reconstruction. All the reconstructed objects comprise the final reconstructed scene. Next we’ll describe the details for each component of the pipeline. Sparse 3D Backbone From the input point cloud P ∈RN×3, we employ a sparse 3D U-Net (Choy, Gwak, and Savarese 2019) to extract multi-scale point-wise features {Fl} ∈RNl×Df l , where l = 0, . . . , L is scale level, Nl and Df l are spatial dimension and feature dimension of Fl respectively. Especially, the spatial dimension of F0 and P are the same. 
Different from DIMR, our method does not add any additional semantic branch or offset branch for instance segmentation. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6828 Figure 4: Disentangled Transformer Decoder Layer Disentangled Transformer Decoder Point scene understanding comprises sub-tasks involving different types of information. Specifically, the sub-tasks of instance segmentation and classification are more related to semantic information. Meanwhile, the sub-tasks of pose estimation and shape reconstruction are more related to geometry information. To ensure optimal learning for all the subtasks, we propose disentangled Transformer decoder (DTD), which aims to decouple the learning of semantic features and geometric features. This allows our model to learn the most relevant information for each sub-task. Semantic-geometry disentangled query. As a key design of the DTD, we pose semantic-geometry disentangled query (SGDQ). All scene objects are represented by a set of SGDQs and each SGDQ corresponds to one object instance. Unlike the query used in the vanilla object-centric framework, the feature of each SGDQ is divided into a semantic part and a geometric part. In training, the disentanglement of semantic and geometric parts is achieved through separate supervision with the different sub-task prediction heads. The semantic part is supervised by the sub-tasks of object classification and semantic instance segmentation. The geometric part is supervised by the sub-tasks of pose estimation and mesh reconstruction. We choose the same fixed number of SGDQs for all scenes, whose value is often much larger than the number of objects in the scene. SGDQs are randomly initialized and iteratively learned to be representations of objects in the scene. Disentangled transformer decoder layer. Our DTD consists of multiple DTD layers, and each DTD layer contains multiple query refinement modules. Each query refinement module corresponds to a different level of the multi-scale features {Fl} for l = 1, . . . , L as shown in Figure 4. The semantic parts and geometric parts of SGDQs are separately refined by cross-attending to the scene’s point-wise features and fused at the object level through self-attention. Starting from M randomly initialized SGDQs, through the series of query refinement modules, the SGDQs are optimized in a coarse-to-fine manner progressively. In our model, we use three DTD layers and four levels (L = 4) in each DTD layer. Different from the original Transformer decoder layer, our DTD layer is designed to learn disentangled semantic and geometric representations of SGDQs. At each level l in a decoder layer, each SGDQ competes for attending to the features Fl from one object instance through attention mechanism. We denote Ql as an input SGDQ of level l, and Ql+1 as its output SGDQ. Each SGDQ is composed of a semantic part and a geometric part, denoted as Ql = (Ql s, Ql g). In each query refinement module of DTD layer, we firstly utilize two cross-attention modules to learn the semantic and geometric information separately from the multi-scale pointwise features at level l. In the semantic aware cross-attention module, we first apply learnable linear transformations, denoted as qs, ks, vs respectively, to features Fp and semantic part queries Ql s to obtaining query ¯Ql s ∈RM×D, key Kl s ∈RN×D and value Vl s ∈RN×D. Then matrix multiplication between ˆQl s and Kl s gives the correlation matrix between M queries and Nl input features. 
Here we use masked cross-attention, where each query only attends to features within the binary instance mask m predicted by the mask prediction head from Q^{l-1}_s, which can be formulated as: \hat{Q}^{l+1}_s = \mathrm{softmax}(\bar{Q}^l_s \cdot (K^l_s)^T) V^l_s, (1) with \bar{Q}^l_s = q_s(Q^l_s), K^l_s = k_s(\tilde{m}(F, m)), V^l_s = v_s(F), and \tilde{m}(F, m)_{j,k} = F_{j,k} if m_{j,k} = 0, and 0 if m_{j,k} = 1. (2) The same attention method is used for the geometric part Q^l_g: \hat{Q}^{l+1}_g = \mathrm{softmax}(q_g(Q^l_g) \cdot k_g(\tilde{m}(F, m))^T) v_g(F). (3) These two cross-attentions empower the queries to extract semantic and geometric information from the point-wise features. Then, the semantic and geometric features are concatenated and fed into a self-attention module and a Feed-Forward Network (FFN) to gather context information: \hat{Q}^{l+1} = \mathrm{SelfAttention}((\hat{Q}^{l+1}_s, \hat{Q}^{l+1}_g)), \quad Q^{l+1} = \mathrm{FFN}(\hat{Q}^{l+1}). (4) Prediction head. After the DTD, each SGDQ [Q]_i for i = 0, ..., M-1 attends to the features of a specific object instance. These SGDQs are then fed into a prediction head whose weights are shared across queries. The prediction head consists of four MLP-based task heads: a mask head, a class head, a box head, and a shape head. The mask head takes the semantic part [Q_s]_i and the point-wise features from the sparse 3D U-Net as input. It maps [Q_s]_i through an MLP f_{mask}(·), and outputs the object binary mask [m]_i by computing dot products between the mapped query features and the point-wise features F_0 with a threshold at 0.5: [m]_i = \{\sigma(F_0 \cdot f_{mask}([Q_s]_i)^T) > 0.5\} \in \{0, 1\}^N. (5) The class head also takes the semantic part [Q_s]_i as input, and outputs classification logits l_i \in \mathbb{R}^C for each instance, where C is the total number of classes. The box head takes the geometric part [Q_g]_i as input, and outputs a box vector b_i, including the rotation angle r_i \in [-\pi, \pi] along the z-axis following (Nie et al. 2021), the bounding box center c_i = (x_i, y_i, z_i), the bounding box size s_i = (s^x_i, s^y_i, s^z_i), and an IoU score score^{IoU}_i, denoted as b_i = (r_i, c_i, s_i, score^{IoU}_i). For shape completion, we assume that the complete shape is sampled from a latent Gaussian distribution and learn it through the reparameterization trick (Kingma and Welling 2013). The shape head takes the geometric part [Q_g]_i as input and regresses the mean and standard deviation \mu_i, \sigma_i \in \mathbb{R}^{D_{shape}} of the latent shape Gaussian distribution, where D_{shape} is the latent shape code dimension. We decode the latent code z_i \sim \mathcal{N}(\mu_i, \sigma_i) to a mesh only at inference time through a pretrained shape decoder of BSP-Net (Chen, Tagliasacchi, and Zhang 2021, 2020). Details of the shape decoder can be found in (Tang et al. 2022). Training Design Hybrid bipartite matching. During training, we utilize ground-truth annotations for each object instance, including the mask, class, box, and mesh. For determining the correspondence between the queries and the ground truths, we use bipartite matching following DETR (Carion et al. 2020). For each SGDQ, the pair of mask and box predicted by the two task heads may be inconsistent with each other. To obtain accurate and consistent matching, we employ a mixed cost of predicted mask, box, and class when performing bipartite matching. We construct a cost matrix C defined as follows: C = \lambda_1 L^{mask}_{dice} + \lambda_2 L^{box}_{GIoU} + \lambda_3 L^{class} = \{C_{jl}\}, (6) in which C_{jl} is the similarity of the j-th query and the l-th ground truth. We set the weights to \lambda_1 = 5 and \lambda_2 = \lambda_3 = 2. The meaning of each component will be explained in the upcoming discussion of training losses.
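To illustrate how the mixed cost of Eq. (6) is turned into one-to-one query-to-instance assignments, a minimal Hungarian-matching sketch in the DETR style is shown below; the tensor layout and argument names are assumptions, and the individual cost terms are taken as precomputed.

```python
import torch
from scipy.optimize import linear_sum_assignment

@torch.no_grad()
def hybrid_bipartite_matching(cost_mask_dice, cost_box_giou, cost_class,
                              w_mask=5.0, w_box=2.0, w_class=2.0):
    """Hungarian matching on the mixed cost of Eq. (6).

    Each cost_* tensor has shape (M, G): M queries vs. G ground-truth instances.
    This sketch only shows how the precomputed cost terms are combined and matched.
    """
    C = w_mask * cost_mask_dice + w_box * cost_box_giou + w_class * cost_class
    query_idx, gt_idx = linear_sum_assignment(C.cpu().numpy())
    return list(zip(query_idx.tolist(), gt_idx.tolist()))     # (query, ground truth) index pairs
```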
After matching, the queries that are not assigned to any ground truth are given the 'no object' category and only participate in the classification task. Benefiting from the one-to-one matching mechanism, each object instance in the scene is represented by only one query, which allows our network to directly perform sparse prediction. Training losses. The loss L consists of a semantic loss L_{sem} and a geometric loss L_{geo}. The semantic tasks are supervised by: L_{sem} = L^{mask}_{BCE} + L^{mask}_{dice} + L^{class}, (7) in which L^{mask}_{BCE} is the binary cross-entropy loss for the mask, L^{mask}_{dice} is the Dice loss for the mask, and L^{class} is the cross-entropy loss for the object-wise semantic label. The geometric tasks are supervised by: L_{geo} = L^{box}_{center} + L^{box}_{size} + L^{box}_{angle} + L^{box}_{GIoU} + L^{shape}, (8) where L^{box}_{center} is the Huber loss for the bounding box center, L^{box}_{size} is the Huber loss for the bounding box size, L^{box}_{angle} contains the cross-entropy loss for the angle label and the Huber loss for the residual angle, L^{box}_{GIoU} is the 3D GIoU loss for box regression, and L^{shape} is the latent shape distribution loss for the predicted shape latent distribution. The ground-truth latent shape distributions are obtained from a shape encoder of BSP-Net pretrained with ground-truth meshes. Mask Enhanced Box Refinement For an object instance i, it is observed that when the network predicts its mask m_i accurately, the box B^m_i = (c^m_i, s^m_i) calculated from the object's point cloud tends to be more accurate than the network's predicted box B^p_i = (c^p_i, s^p_i). We consider the mask m_i "sufficiently accurate" when the distance d(B^p_i, B^m_i) = |s^p_i - s^m_i|_\infty between B^m_i and B^p_i is less than d_0 = 0.1 m. For each object point cloud o_i, we apply the estimated rotation angle r_i to transform it to a canonical coordinate system for mask enhanced box refinement (MEBR). For MEBR, denote the box calculated from the transformed object point cloud o^c_i in the canonical coordinate system as B^m_i and the box predicted by the network as B^p_i; then the refined box B^f_i is given by: B^f_i = B^p_i if d(B^p_i, B^m_i) \leq d_0, and B^m_i otherwise. (9) Experiment Experimental Settings Datasets. As in previous works on point scene understanding, our experiments are conducted with the ScanNet V2 dataset (Dai et al. 2017) (abbreviated as ScanNet). The ScanNet dataset is a richly-annotated dataset that contains point clouds of 1513 real-world indoor scenes labeled at the instance level. The Scan2CAD dataset (Avetisyan et al. 2019) provides additional geometric instance annotations by aligning CAD models from ShapeNet (Chang et al. 2015) to instances in ScanNet. To deal with data inconsistency issues, DIMR (Tang et al. 2022) relabels the datasets based on a compatible label system comprising 8 object categories. We follow exactly the same data split and pre-processing method as DIMR. A minor exception is that, when searching for the best matches between CAD models in Scan2CAD and instances in ScanNet based on box IoU, we further require that the matched pairs belong to the same category. This excludes some cases of mismatching caused by overlapping object boxes. The annotation variations will be made publicly available. For fair comparison, we re-evaluate the previous RfD-Net and DIMR models on the refined annotations. Metrics. Following previous works (Nie et al. 2021; Tang et al. 2022), we evaluate performance from two aspects: completion quality and mapping quality. (i) Completion quality.
It is measured by the distance between the reconstructed meshes and the aligned models in the ground truth. We employ three metrics to evaluate completion quality following (Tang et al. 2022): the voxel-based 3D Intersection over Union (IoU) metric, the point-based Chamfer Distance (CD) metric, and the mesh-based Light The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6830 [email protected] [email protected] [email protected] [email protected] LFD@5000 LFD@2500 [email protected] RfD-Net(Nie et al. 2021) 42.52 14.35 46.37 19.09 28.59 7.80 43.49 RfD-Net(refined data) 45.84 15.60 46.97 18.41 29.90 6.83 43.49 DIMR(Tang et al. 2022) 46.34 12.54 52.39 25.71 29.47 8.55 56.76 DIMR(refined data) 53.18 13.61 54.20 26.99 33.14 8.47 56.76 DOCTR w/o MEBR (ours) 55.18 17.88 56.82 29.17 33.17 12.00 55.70 DOCTR w/ MEBR (ours) 58.25 19.60 59.61 31.46 33.61 10.9 59.89 Table 1: Comparisons on mesh completion quality and mapping quality. We report mean average precision for different metric@threshold. For IoU and PCR, higher thresholds are more difficult. For CD and LFD, smaller thresholds are more difficult. [email protected] [email protected] RfD-Net(refined data) 22.99 7.92 DIMR(refined data) 36.23 11.91 DOCTR w/o MEBR (ours) 47.75 21.08 DOCTR w/ MEBR (ours) 49.14 21.63 Table 2: Object Recognition Precision. We report the precision of object recognition at different IoU thresholds. Field Distance (LFD) metric. We adopt these metrics with the same thresholds as these in DIMR to determine whether a predicted mesh can match a ground-truth mesh. We also report the mean precision of IoU over all classes. (ii) Mapping quality. We adopt the Point Coverage Ratio (PCR) proposed in (Tang et al. 2022) to measure the distance between the reconstructed mesh and the original point cloud, which computes the nearest distance from each observed instance point to the corresponding mesh surface. Implementation details. The Minkowski Res16UNet34C (Choy, Gwak, and Savarese 2019) is used as the 3D U-Net backbone. The SGDQ in our DTD is represented by 256D features, of which the first 128 dimensions represent semantic features, and the latter 128 dimensions represent geometric features. Following (Sun et al. 2023), we also reduce memory consumption by computing the dot product between instance queries and aggregated point features within segments which are obtained from a graph-based segmentation (Felzenszwalb and Huttenlocher 2004). During training, we use the AdamW (Loshchilov and Hutter 2017) optimizer for 600 epochs with a batch size of 5 on a single Nvidia RTX A6000 GPU for all the experiments. One-cycle learning rate schedule (Smith and Topin 2019) is utilized with a maximum learning rate of 10−4 and a minimum learning rate of 10−6. Standard data augmentation are performed on point cloud including horizontal flipping, random rotations around the z-axis, elastic distortion, and random scaling. Comparisons to the State-of-the-Arts Table 1 shows the quantitative comparisons between our method with RfD-Net (Nie et al. 2021) and DIMR (Tang et al. 2022) in metrics of completion quality and mapping quality. As described in previous section, some wrong annotations are removed, so we use the officially released models of RfD-Net and DIMR to re-evaluate the metrics for fair comparison. For reader’s convenience, Table 1 includes both the metrics from the original paper report and our re-evaluation results. 
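Before turning to the results reported with and without MEBR, the refinement rule of Eq. (9) can be sketched as follows; `box_from_points` and all other names are hypothetical stand-ins, the boxes are assumed axis-aligned in the canonical frame, and the branch order follows Eq. (9) as printed above.

```python
import numpy as np

D0 = 0.1  # agreement threshold d_0 (metres), as stated above


def box_from_points(points: np.ndarray):
    """Axis-aligned (center, size) box of an object's points after they have
    been rotated into the canonical frame with the predicted angle r_i."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    return (lo + hi) / 2.0, hi - lo


def refine_box(pred_center: np.ndarray, pred_size: np.ndarray,
               obj_points_canonical: np.ndarray):
    """Mask-enhanced box refinement: keep the network box B^p when the two
    boxes agree within d_0 on every size dimension, otherwise fall back to
    the box B^m derived from the masked point cloud (case order of Eq. (9))."""
    mask_center, mask_size = box_from_points(obj_points_canonical)
    d = np.abs(pred_size - mask_size).max()      # L-infinity gap between the sizes
    if d <= D0:
        return pred_center, pred_size            # B^p_i
    return mask_center, mask_size                # B^m_i


# Toy usage: 200 object points and a slightly oversized predicted box.
pts = np.random.rand(200, 3)
center, size = refine_box(np.array([0.5, 0.5, 0.5]), np.array([1.3, 1.0, 1.0]), pts)
```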
We provide results for our DOCTR method in two versions, one is with mask enhanced box refinement (MEBR) and the other is without MEBR. As shown in Table 1, our DOCTR (w/o MEBR) achieves improved performance on most metrics. Especially in the [email protected], our DOCTR (w/o MEBR) surpasses RfD-Net by a large margin, while DIMR shows slightly worse performance than RfD-Net. Also, our DOCTR (w/o MEBR) obtains consistent improvements on all the CD and LFD metrics. A reason for the lower [email protected] is that the PCR metric is highly related to point cloud and a slight box deviation can lead to a significant decrease, even if our reconstructed shape is of good quality. With adding the MEBR to enhance box quality, our DOCTR (with MEBR) achieves further improvement and surpasses DIMR in all the metrics including the PCR. Table 2 reports the precision of object recognition at different IoU thresholds. The [email protected] and [email protected] are both improved more than 10 points by our DOCTR, indicating that our model greatly reduces false positive proposals. This shows that our method enables direct sparse predictions, and even without using NMS, it yields fewer false positives compared to existing methods with NMS. Ablation Study We conduct ablation studies to analyze the effectiveness of our method on ScanNet. As shown in Table 3, the first row shows the performance of a baseline model that is modified from Mask3D (Schult et al. 2023) by directly adding box and shape prediction heads. Both the [email protected] and [email protected] metrics of this baseline are inferior to those of DIMR, which shows that straight forward applying an object-centric Transformer-based model is not sufficient. Adding hybrid bipartite matching module to the baseline model brings an improvement in the [email protected] metric, but the [email protected] metric shows a slight decrease. As shown in the third row, our proposed SGDQ for DTD layers achieves apparent improvement in both metrics compared to baseline model, and both metrics surpass those of DIMR. By combining the SGDQ design with hybrid bipartite matching, both metrics are further improved and reach the best results. Qualitative Comparisons We present a qualitative comparison in Figure 5. The version of our model to yield the results is the DOCTR (with MEBR). The first two rows in Figure 5 demonstrate that The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6831 (a) DIMR (b) Ours (c) Ground truth Figure 5: Qualitative comparison on the ScanNet dataset SGDQ Hybrid Matching MEBR [email protected] [email protected] % % % 52.59 52.51 % ! % 52.51 53.31 ! % % 53.93 54.79 ! ! % 55.18 56.82 ! ! ! 58.25 59.61 Table 3: Ablation Study. DIMR tends to yield erroneous segmentation when there are multiple objects in close proximity, resulting in multiple repetitive false positives. In comparison, our method achieves accurate segmentation and correct object reconstruction even for the closely placed objects. The third row shows that DIMR predicts incorrect orientations for the monitors and the fifth row shows that DIMR predicts some noticeable angular errors of the tables. In contrast, our method predicts correct results. Our method also outperforms DIMR in object shapes, such as for distinguishing between square and round tables in the fifth row. Moreover, DIMR is less accurate in predicting objects of limited presence in training set such as L-shaped sofas in the fourth row. 
As the visualization shows, our DOCTR demonstrates significant and consistent improvements than DIMR in the reconstructed scene in terms of object instance segmentation, pose accuracy, and shape quality. Conclusion This paper introduces a novel Disentangled Object-Centric TRansformer (DOCTR) that allows learning with multiple objects for multiple sub-tasks in a unified manner. In particular, our design of semantic-geometry disentangled query is proved to be effective to improve performance on the different semantic and geometric sub-tasks. Extensive experiments demonstrate our superior performance than previous SOTA methods. At present, our method relies on a pretrained shape decoder to generate object meshes, which can be integrated within the main network for collaborative optimization in future work. We hope our work inspires more future works to explore unified learning models to understand rich attributes of multiple objects in complex scenes. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6832 Acknowledgments We would like to acknowledge contributions through helpful discussions from Hui Zhang and Yi Zhou. References Avetisyan, A.; Dahnert, M.; Dai, A.; Savva, M.; Chang, A. X.; and Nießner, M. 2019. Scan2CAD: Learning CAD model alignment in RGB-D scans. In Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition, 2614–2623. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In European conference on computer vision, 213–229. Springer. Chang, A. X.; Funkhouser, T.; Guibas, L.; Hanrahan, P.; Huang, Q.; Li, Z.; Savarese, S.; Savva, M.; Song, S.; Su, H.; et al. 2015. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012. Chen, Z.; Tagliasacchi, A.; and Zhang, H. 2020. BSP-Net: Generating compact meshes via binary space partitioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 45–54. Chen, Z.; Tagliasacchi, A.; and Zhang, H. 2021. Learning mesh representations via binary space partitioning tree networks. IEEE Transactions on Pattern Analysis and Machine Intelligence. Choy, C.; Gwak, J.; and Savarese, S. 2019. 4D SpatioTemporal ConvNets: Minkowski convolutional neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 3075–3084. Dai, A.; Chang, A. X.; Savva, M.; Halber, M.; Funkhouser, T.; and Nießner, M. 2017. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, 5828–5839. Felzenszwalb, P. F.; and Huttenlocher, D. P. 2004. Efficient graph-based image segmentation. International journal of computer vision, 59: 167–181. Garbade, M.; Chen, Y.-T.; Sawatzky, J.; and Gall, J. 2019. Two stream 3D semantic scene completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 0–0. Huang, H.; Chen, H.; and Li, J. 2019. Deep neural network for 3D point cloud completion with multistage loss function. In 2019 Chinese Control And Decision Conference (CCDC), 4604–4609. IEEE. Kingma, D. P.; and Welling, M. 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Liu, S.; Li, F.; Zhang, H.; Yang, X.; Qi, X.; Su, H.; Zhu, J.; and Zhang, L. 2022. DAB-DETR: Dynamic anchor boxes are better queries for DETR. arXiv preprint arXiv:2201.12329. 
Locatello, F.; Weissenborn, D.; Unterthiner, T.; Mahendran, A.; Heigold, G.; Uszkoreit, J.; Dosovitskiy, A.; and Kipf, T. 2020. Object-centric learning with slot attention. Advances in Neural Information Processing Systems, 33: 11525–11538. Loshchilov, I.; and Hutter, F. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. Nie, Y.; Hou, J.; Han, X.; and Nießner, M. 2021. RfD-Net: Point scene understanding by semantic instance reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4608–4618. Rist, C. B.; Emmerichs, D.; Enzweiler, M.; and Gavrila, D. M. 2021. Semantic scene completion using local deep implicit functions on LiDAR data. IEEE transactions on pattern analysis and machine intelligence, 44(10): 7205– 7218. Rist, C. B.; Schmidt, D.; Enzweiler, M.; and Gavrila, D. M. 2020. SCSSnet: Learning spatially-conditioned scene segmentation on LiDAR point clouds. In 2020 IEEE Intelligent Vehicles Symposium (IV), 1086–1093. IEEE. Roldao, L.; de Charette, R.; and Verroust-Blondet, A. 2020. LMSCNet: Lightweight multiscale 3D semantic completion. In 2020 International Conference on 3D Vision (3DV), 111– 119. IEEE. Schult, J.; Engelmann, F.; Hermans, A.; Litany, O.; Tang, S.; and Leibe, B. 2023. Mask3D: Mask Transformer for 3D Semantic Instance Segmentation. Smith, L. N.; and Topin, N. 2019. Super-convergence: Very fast training of neural networks using large learning rates. In Artificial intelligence and machine learning for multidomain operations applications, volume 11006, 369–386. SPIE. Sun, J.; Qing, C.; Tan, J.; and Xu, X. 2023. Superpoint transformer for 3D scene instance segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 2393–2401. Tang, J.; Chen, X.; Wang, J.; and Zeng, G. 2022. Point scene understanding via disentangled instance mesh reconstruction. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXII, 684–701. Springer. Wang, X.; Ang Jr, M. H.; and Lee, G. H. 2020. Cascaded refinement network for point cloud completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 790–799. Wu, S.-C.; Tateno, K.; Navab, N.; and Tombari, F. 2020. SCFusion: Real-time incremental scene reconstruction with semantic completion. In 2020 International Conference on 3D Vision (3DV), 801–810. IEEE. Xie, H.; Yao, H.; Zhou, S.; Mao, J.; Zhang, S.; and Sun, W. 2020. GRNet: Gridding residual network for dense point cloud completion. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IX, 365–381. Springer. Yan, X.; Gao, J.; Li, J.; Zhang, R.; Li, Z.; Huang, R.; and Cui, S. 2021. Sparse single sweep lidar point cloud segmentation via learning contextual shape priors from scene completion. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 3101–3109. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6833 Yuan, W.; Khot, T.; Held, D.; Mertz, C.; and Hebert, M. 2018. PCN: Point completion network. In 2018 international conference on 3D vision (3DV), 728–737. IEEE. Zhang, H.; Li, F.; Liu, S.; Zhang, L.; Su, H.; Zhu, J.; Ni, L. M.; and Shum, H.-Y. 2022. DINO: DETR with improved denoising anchor boxes for end-to-end object detection. arXiv preprint arXiv:2203.03605. Zhong, M.; and Zeng, G. 2020. Semantic point completion network for 3D semantic scene completion. In ECAI 2020, 2824–2831. IOS Press. 
Zhu, B.; Wang, Z.; Shi, S.; Xu, H.; Hong, L.; and Li, H. 2023. ConqueR: Query contrast voxel-DETR for 3D object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9296–9305. Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2020. Deformable DETR: Deformable transformers for end-toend object detection. arXiv preprint arXiv:2010.04159. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6834 | 2024 | 759 |
18,583 | DreamStyler: Paint by Style Inversion with Text-to-Image Diffusion Models Namhyuk Ahn1, Junsoo Lee1, Chunggi Lee1,2, Kunhee Kim3, Daesik Kim1, Seung-Hun Nam1, Kibeom Hong4 1 NAVER WEBTOON AI 2 Harvard University 3 KAIST 4 SwatchOn Abstract Recent progresses in large-scale text-to-image models have yielded remarkable accomplishments, finding various applications in art domain. However, expressing unique characteristics of an artwork (e.g. brushwork, colortone, or composition) with text prompts alone may encounter limitations due to the inherent constraints of verbal description. To this end, we introduce DreamStyler, a novel framework designed for artistic image synthesis, proficient in both text-to-image synthesis and style transfer. DreamStyler optimizes a multi-stage textual embedding with a context-aware text prompt, resulting in prominent image quality. In addition, with content and style guidance, DreamStyler exhibits flexibility to accommodate a range of style references. Experimental results demonstrate its superior performance across multiple scenarios, suggesting its promising potential in artistic product creation. Project page: https://nmhkahn.github.io/dreamstyler/. Introduction “Painting is silent poetry.” — Simonides, Greek poet Recent text-to-image models have shown unprecedented proficiency in translating natural language into compelling visual imagery (Saharia et al. 2022; Ramesh et al. 2022; Rombach et al. 2022). These have emerged in the realm of art, providing inspiration and even assisting in crafting tangible art pieces. In the AI-assisted art production workflow, artists typically utilize various descriptive prompts that depict the style and context to generate their desired image. However, the unique styles of a painting, its intricate brushwork, light, colortone, or composition, cannot be easily described in a single word. For instance, dare we simplify the entirety of Vincent Van Gogh’s lifelong artworks as just one word, ‘Gogh style’? Text descriptions cannot fully evoke his unique style in our imagination — his vibrant color, dramatic light, and rough yet vigorous brushwork. Beyond text description, recent studies (Gal et al. 2022; Ruiz et al. 2023) embed specific attributes of input images into latent space. While they effectively encapsulate a novel object, we observed that they struggle to personalize style of a painting. For instance, model optimization-based methods (Ruiz et al. 2023; Kumari et al. 2023) are highly susceptible to overfitting and often neglect inference prompts, Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. “a market place” “a waterfall in the forest” “a dog” Figure 1: DreamStyler synthesizes outputs based on a given context along with a style reference. Note that each model is trained on a single style image shown in this figure. which is not ideal for real-world production (please refer to the Suppl. for more details). Textual inversion-based methods (Gal et al. 2022; Voynov et al. 2023), in contrast, effectively reflect the inference prompt but fail to replicate style, possibly due to the limited capacity of the learned embeddings. This is because capturing style, from global elements (e.g. colortone) to local details (e.g. detailed texture), is challenging when relying solely on a single embedding token. 
In this work, we present DreamStyler, a novel single (one-shot) reference-guided artistic image synthesis framework designed for the text-to-image generation and style transfer tasks (Figure 1). We encapsulate the intricate styles of artworks into CLIP text space. DreamStyler is grounded in textual inversion (TI), chosen for the inherent flexibility that stems from its prompt-based configuration. To overcome the limitations of TI, we introduce an extended textual embedding space, S by expanding textual embedding into the denoising timestep domain (Figure 2). Based on this space, we propose a multi-stage TI, which maps the textual The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 674 information into the S space. It accomplishes by segmenting the entire diffusion process into multiple stages (a chunk of timesteps) and allocating each textual embedding vector to the corresponding stage. The exploitation of the timestep domain in textual inversion significantly improves the overall efficacy of artistic image synthesis. This enhancement stems from the increased capacity of the personalized module, as well as the utilization of prior knowledge suggesting that different denoising diffusion steps contribute differently to image synthesis (Balaji et al. 2022; Choi et al. 2022). We further propose a context-aware prompt augmentation that simply yet proficiently decouples the style and context information from the reference image. With our approach, the personalization module can embed style features solely into its textual embeddings, ensuring a more faithful reflection of the reference’s style. To further refine the artistic image synthesis, we introduce a style and context guidance, inspired by classifier-free guidance (Ho and Salimans 2022). Our guidance bisects the guidance term into style and context components, enabling individual control. Such a guidance design allows users to tailor the outputs based on their preferences or intricacy of the reference image’s style. We validate the effectiveness of DreamStyler through a broad range of experiments. DreamStyler not only demonstrates advanced artistic image synthesis but also paves the new way of applying text-to-image diffusion models to the realms of artistic image synthesis and style transfer tasks. Related Work Personalized text-to-image synthesis. Since the latentbased text conditional generation has been explored (Rombach et al. 2022), following studies (Saharia et al. 2022; Ramesh et al. 2022; Li et al. 2022) have further contributed to enhancing text-to-image synthesis with CLIP (Radford et al. 2021) guidance. Furthermore, Textual inversion (Gal et al. 2022), DreamBooth (Ruiz et al. 2023) and CustomDiffusion (Kumari et al. 2023) introduced approaches that leverage 3-5 images of the subject to personalize semantic features. Recently, Voynov et al. (2023) proposed P+ space, which consists of multiple textual conditions, derived from per-layer prompts. Although they showed promising results in penalization of diffusion models, there are still limitations to fully capturing precise artistic style representations. In contrast, DreamStyler considers the denoising timestep to accommodate temporal dynamics in the diffusion process, achieving high-quality artistic image generation. Paint by style. Neural style transfer renders the context of a source with a style image. Since Gatys, Ecker, and Bethge (2016), studies have been devoted to enhancing the transfer networks for more accurate and convincing style transfer. 
Notably, AdaIN (Huang and Belongie 2017) and AdaAttN (Liu et al. 2021) investigated matching the second-order statistics of content and style images. AesPANet (Hong et al. 2023) and StyTr2 (Deng et al. 2022) adopted recent architectures such as attention or transformer for high-fidelity neural style transfer. Recently, InST (Zhang et al. 2023) utilized the diffusion models by introducing the image encoder to inverse style images into CLIP spaces. BLIP-2 + Feedback “painting” “<S*> style” U-Net t stage Style U-Net t-1 stage U-Net t+1 stage Text Encoder … (a) Training (b) Sampling Encoder U-Net t stage U-Net t-1 stage U-Net t+1 stage … “painting of a bear in <S*> style” Text Encoder Content … … … … … … Figure 2: Model overview. (a) DreamStyler constructs training prompt with an opening text Co, multi-stage style tokens S∗, and a context description Cc, which is captioned with BLIP-2 and human feedback. DreamStyler projects the training prompt into multi-stage textual embeddings v∗= {v∗ 1, . . . , v∗ T }, where T is # stages (a chunk of the denoising timestep). As a result, the denoising U-Net provides distinct textual information at each stage. (b) DreamStyler prepares the textual embedding using a provided inference prompt. For style transfer, DreamStyler employs ControlNet to comprehend the context information from a content image. Method Preliminary: Stable Diffusion (SD). DreamStyler is built upon SD (Rombach et al. 2022). SD projects an input image x into a latent code, z = E(x) using an encoder E, while decoder D transforms the latent code back into pixel space, i.e. x′ = D(z′). The diffusion model creates a new latent code z′ by conditioning on additional inputs such as a text prompt y. The training objective of SD is defined as: L = Ez∼E(x),y,ϵ∼N(0,1),t[||ϵ −ϵθ(zt, t, c(y))||2 2]. (1) At each timestep t, the denoising network ϵθ reconstructs the noised latent code zt, given the timestep t and a conditioning vector c(y). To generate c(y), each token from a prompt is converted into an embedding vector, which is then passed to the CLIP text encoder (Radford et al. 2021). Preliminary: Textual Inversion (TI). Gal et al. (2022) proposed a method to personalize a pre-trained text-to-image model by incorporating a novel embedding representing the intended concept. To personalize the concept, they initialize a word token S∗and its corresponding vector v∗, situated in the textual conditioning space P, which is the output of the CLIP text encoder. Instead of altering any weights in SD models, they optimize v∗alone using Eq. (1). To create images of personalized concepts, the inclusion of S∗in the prompts (e.g. a photo of S∗dog) is the only required step. Multi-Stage Textual Inversion In some cases, TI fails to sufficiently represent the concept due to the inherent capacity limitations associated with usThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 675 Style image “knight with armor” “rock island on ocean” (b) green box description + (a) (c) red box description + (a,b) (d) blue box description + (a,b,c) “knight with armor” “rock island on ocean” “knight with armor” “rock island on ocean” “knight with armor” “rock island on ocean” Inference prompt: (a) w/o contextual description “a painting in S* style” + “of a woman in a blue dress playing a violin” + “with a woman in red dress playing piano behind” + “with women sitting in chairs” Figure 3: How does training prompt affect? Given a style image, we construct training prompts with contextual descriptions (b∼d). 
(a) Training without contextual description in the prompt; i.e. trains the model with “a painting in S∗style”. The model tends to generate the images that contains objects and compositions from the style image (e.g. standing and sitting audiences), instead of attributes depicted in the inference prompt. (b, c) Training with partial contextual descriptions (the green and red boxes displayed in the style image, respectively). Such a tendency is significantly reduced, yet the model still synthesizes some objects from the style image (e.g. sitting people in the blue box). (d) Training with full contextual descriptions. The model produces outputs that fully reflect the inference prompt without introducing any non-style attributes from the style image. ing a single embedding token. Moreover, this single embedding strategy is inappropriate for accommodating the changing process of diffusion models. As explored in Balaji et al. (2022); Choi et al. (2022), diffusion models display intriguing temporal dynamics throughout the process, necessitating different capacities at various diffusion steps. In light of this, managing all denoising timesteps with a single embedding potentially has limitations due to the spectrum of local to global expressions embodied in paintings. Thus, articulating paintings is intricately related to the denoising timesteps, which operate in a coarse-to-fine synthesis manner (Balaji et al. 2022). To address these challenges, we introduce a multi-stage TI that employs multiple embeddings, each corresponding to specific diffusion stages (Figure 2). We first propose an extended textual embedding space S. The premise of the S space is to decompose the entire diffusion process into multiple distinct stages. To implement this, we split the denoising timesteps into T chunks and denote each chunk as a stage. Based on the S space, the multistage TI prepares the copies of the initial style token (S∗) as a multi-stage token set S∗= {S∗ 1, . . . , S∗ T }. In this way, the multi-stage TI projects a style image into T style tokens, contrasting the TI that embeds it into a single token. The token set is then encoded by a CLIP text encoder to form stagewise embedding vectors, denoted as v∗= {v∗ 1, . . . , v∗ T }. Lastly, the multi-stage TI optimizes these embeddings following the subsequent equation. v∗= arg min v Ez,v,ϵ,t[||ϵ −ϵθ(zt, t, c(vt))||2 2]. (2) The application of multi-stage TI significantly enhances the representation capacity beyond that of vanilla TI, which we will illustrate in a series of experiments. Furthermore, this method enables the fusion of multiple tokens, each originating from different styles, at a specific stage t. Consequently, it facilitates the creation of unique and novel styles tailored to the user’s individual preferences. Context-Aware Text Prompt While the multi-stage TI enhances representational capacity, it still faces fundamental problems when training with a style reference; the style and context of the image may become entangled during the optimization of the embeddings. This problem mainly arises from attempts to encapsulate all features of the image into S∗, not just the style aspect. As depicted in Figure 3, without contextual information in the training prompt, the model overlooks the context of inference prompt. However, when we inject contextual descriptions into the training prompt, the model better disentangles the style from the context. 
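A minimal sketch of the stage-wise lookup behind Eq. (2) is given below: the denoising schedule is split into T chunks and each chunk owns one learnable copy of the style embedding. The timestep count, the embedding width, and all names are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn as nn

# Assumed setup: 1000 DDPM training timesteps, T = 6 stages, and a 768-D
# text-embedding width (the CLIP width used by Stable Diffusion v1).
NUM_TRAIN_TIMESTEPS, T_STAGES, EMBED_DIM = 1000, 6, 768

# One learnable copy of the style token per stage: v* = {v*_1, ..., v*_T}.
stage_embeddings = nn.Parameter(torch.randn(T_STAGES, EMBED_DIM) * 0.02)


def style_embedding_for(t: torch.Tensor) -> torch.Tensor:
    """Pick the stage-wise embedding v*_t for a batch of sampled timesteps t,
    i.e. the chunk of the denoising schedule that each t falls into."""
    stage = torch.div(t * T_STAGES, NUM_TRAIN_TIMESTEPS, rounding_mode="floor")
    return stage_embeddings[stage]                 # (batch, EMBED_DIM)


# Only these T vectors receive gradients; the U-Net and the text encoder stay
# frozen, as in vanilla textual inversion.
t = torch.randint(0, NUM_TRAIN_TIMESTEPS, (4,))
v_star = style_embedding_for(t)        # substituted for the S* placeholder token
loss = v_star.pow(2).mean()            # placeholder for the denoising objective of Eq. (2)
loss.backward()
```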
In our observations, such a phenomenon occurs more frequently as the representational capacity increases, likely due to the model’s increased efforts to accommodate all information within its capacity. Hence, we construct training prompts to include contextual information about the style image. Let C = [Co, S∗] be the vanilla prompt used in multi-stage TI training, where Co is the opening text (e.g. “a painting”), and S∗is multi-stage style token set, described above. In the proposed strategy, we incorporate a contextual descriptor Cc (e.g. “of a woman in a blue dress”) into the middle of the prompt (Figure 2), i.e. C = [Co, Cc, S∗]. We annotate all the non-style attributes (e.g. objects, composition, and background) from the style image to form the contextual descriptor. When we caption non-style attributes, BLIP-2 (Li et al. 2023) is employed to aid in the automatic prompt generation. Although a context-aware prompt significantly reinforces style-context decoupling, for some style images with complicated contexts (Figure 3), BLIP-2 might not capture all details, which could limit the model’s disentanglement capability. In such cases, we further refine caption Cc based on human feedback (e.g., caption by humans). This human-inthe-loop strategy is straightforward yet markedly improves the model’s ability to disentangle styles. Since our goal is one-shot model training, the time spent refining the caption is minimal; typically less than a minute. With the contextaware prompt, the text-to-image models can now distinguish style elements from contextual ones and specifically embed these into the (multi-stage) style embeddings v∗. The motivation for augmenting the training prompt is also suggested in StyleDrop (Sohn et al. 2023), a current personalization approach in the text-to-image diffusion model. Style and Context Guidance Classifier-free guidance (Ho and Salimans 2022) improves conditional image synthesis. It samples adjusted noise preThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 676 “a cat” “a bridge” “a sailboat on the sea” Style image Prompt (a) DreamStyler (b) InST (c) XTI (d) Textual Inversion (e) CustomDiffusion (f) DreamBooth Figure 4: Qualitative comparison on the style-guided text-to-image synthesis task. Content & Style (a) DreamStyler (b) InST (c) AesPA-Net (d) StyTr2 (f) AesUST (e) IEContraAST (g) AdaAttN Figure 5: Qualitative comparison on the style transfer task. diction ˆϵ(.), by leveraging unconditional output under null token ∅as: ˆϵ(v) = ϵ(∅) + λ(ϵ(v) −ϵ(∅)), where, λ is the guidance scale and we omit c(.), z and t for brevity. In style-guided image synthesis, this guidance pushes both style and context uniformly with λ. The uniform guidance could face limitations since the spectrum of “style” of artistic paintings is wider than that of natural photos. Given this diversity, a more nuanced control mechanism is required. Furthermore, there exist demands to individually control style and context in the art-making process. To this end, we propose style and context guidance as in below. ˆϵ(v) = ϵ(∅) + λs[ϵ(v) −ϵ(vc)] + λc[ϵ(vc) −ϵ(∅)] + λc[ϵ(v) −ϵ(vs)] + λs[ϵ(vs) −ϵ(∅)] (3) where, vs, vc are the embeddings of prompts C, Cc, respectively. λs, λc denote style and context guidance scale. We derive Eq. (3) by decomposing v into vs, vc. We employ two paired terms to balance the influence of each guidance. Please refer to Suppl. for detailed derivation and analysis. 
By separating the guidance into style and context, users are afforded the flexibility to control these elements individually. Specifically, an increase in λc increases the model’s sensitivity towards context (e.g. inference prompt or content image), whereas amplifying λs leads the model towards a more faithful style reproduction. This flexible design allows users to generate stylistic output tailored to their individual preferences, and it also facilitates the adoption of various styles, each with a range of complexities (Hong et al. 2023). Style Transfer DreamStyler transmits styles by inverting a content image into a noisy sample and then denoising it towards the style domain (Meng et al. 2021). With this approach, however, the preservation of content would be suboptimal (Ahn et al. 2023). To improve this, we inject additional conditions from the content image into the model (Zhang and Agrawala 2023) (Figure 2). This straightforward pipeline well preserves with the structure of the content image, while effectively replicating styles. Moreover, by leveraging a powerful prior knowledge from text-to-image models, the style quality of DreamStyler surpasses that of traditional methods. Experiment Implementation details. We use T = 6 for multi-stage TI and utilize human feedback-based context prompts by default. Please refer to Suppl. for more details. Datasets. We collected a set of 32 images representing varThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 677 21 22 23 24 25 26 27 28 Text Score 25.0 27.5 30.0 32.5 Style Score TI DreamBooth CustomDiffusion XTI InST DreamStyler Figure 6: Performance of text and style scores in styleguided text-to-image synthesis. DreamStyler effectively balances these metrics and surpasses the majority of methods. Method Text Style User Score Score Score Textual Inversion (Gal et al. 2022) 24.11 26.84 2.1% DreamBooth (Ruiz et al. 2023) 22.48 25.20 3.9% CustomDiffusion (Kumari et al. 2023) 21.43 33.45 4.8% XTI (Voynov et al. 2023) 26.36 27.07 4.5% InST (Zhang et al. 2023) 27.05 23.97 1.8% DreamStyler (Ours) 26.40 28.74 82.9% Table 1: Quantitative comparison on the style-guided textto-image synthesis task. Bold: best, underline: second best. ious artistic styles, following the literature on style transfer (Tan et al. 2019). To evaluate text-to-image synthesis, we prepared 40 text prompts, as described in Suppl. Baselines. In terms of text-to-image synthesis, we compare DreamStyler against diffusion-based personalized methods, ranging from textual inversion to model-optimization approaches. For the style transfer task, we compare our method to state-of-the-art style transfer frameworks. We utilize official codes for all the methods used in the comparison. Evaluation. Text and image scores, based on CLIP, measure the alignment with a given text prompt and style image, respectively. Style score assesses the style consistency by calculating the similarity of Gram features between the style and generated images. More details are provided in Suppl. Style-Guided Text-to-Image Synthesis Table 1 and Figure 6 show quantitative results. DreamStyler delivers a robust performance while managing the trade-off between text and style scores. A tendency is noted that an overemphasis on input text prompts may lead to a compromise in style quality. Despite this, DreamStyler effectively balances these aspects, yielding a performance that goes beyond the trade-off line, indicative of outstanding capability. 
User score also supports the distinction of DreamStyler. As shown in Figure 4, previous inversion-based methods (TI, InST, and XTI) effectively preserve the context of text prompts but fall short in adopting the intrinsic artwork of style images. Conversely, the model optimization-based methods (DreamBooth, CustomDiffusion) excel in delivering styles but struggle to adhere to the prompt or introduce Style & object (a) DreamStyler (b) Textual inversion (c) CustomDiffusion Figure 7: My object in my style. Textual inversion faces challenges in accurately capturing both style and context from the reference images. Although CustomDiffusion successfully recreates the object’s appearance, it tends to generate objects in a realistic style, which does not entirely match the target style image. On the other hand, DreamStyler excels at synthesizing the object in the user-specified style. Method Text Image User Score Score Score AdaAttN (Liu et al. 2021) 56.67 56.76 8.6% AesUST (Wang et al. 2022) 58.05 58.09 6.8% IEContraAST (Chen et al. 2021) 59.38 59.42 8.6% StyTr2 (Deng et al. 2022) 56.18 56.28 21.2% AesPA-Net (Hong et al. 2023) 58.08 58.15 8.6% InST (Zhang et al. 2023) 65.32 65.37 2.3% DreamStyler (Ours) 66.04 66.05 44.1% Table 2: Quantitative comparison on the style transfer task. objects in style images (3rd row). DreamStyler, in contrast, not only faithfully follows text prompts but also accurately reflects the delicate artistic features of style images. Style Transfer As an extended application, DreamStyler also conducts style transfer. As shown in Table 2, we quantitatively compare with previous style transfer studies. Note that since most prior studies have employed Gram loss to boost style quality, we report a CLIP-based image score as an evaluation metric for a more fair comparison. In this benchmark, DreamStyler achieves state-of-the-art performance across text and image scores as well as user preference. Figure 5 also provides evidence of DreamStyler’s effectiveness. Our method adeptly captures style features such as polygon shapes or subtle brushwork present in style images. These results highlight the method’s capacity to accurately mirror both the thematic intent and the stylistic nuances of the source artwork. Stylize My Own Object in My Own Style Beyond style transfer that stylizes my image, one might desire to stylize my object (Sohn et al. 2023). In such a scenario, a user leverages both their object and style images. As DreamStyler employs an inversion-based approach, this can be readily accomplished by simply training an additional embedding for the object. Subsequently, the user The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 678 23.7 23.9 24.1 24.3 24.5 Text Score 26 28 30 Style Score T=1 T=2 T=4 T=6 T=8 T=10 T=12 Figure 8: Study on the number of stages (T) in multi-stage TI. We vary T from 1 to 12 and select T = 6 as the final model, considering the trade-off between text and style. “desert and oasis” “a man with a bearded face” Prompt T = 1 T = 2 T = 6 Style image Figure 9: Visual comparison of varying T in multi-stage TI. At T = 1, the model fails in both style replication and prompt understanding. As T increases, the style quality and text alignment are drastically enhanced. freely merges style and object tokens in the inference prompt to generate images. As depicted in Figure 7, DreamStyler excels in accurately reflecting both the style and object Model Analysis Ablation study. In Table 3, we evaluate each component of our method. 
The usage of multi-stage TI substantially augments both the text and style score, with a marked increase in style quality, accentuating the pivotal role of this module in creating artistic stylization products. A context-aware prompt yields a modest alteration in the quantitative metrics, yet provides a considerable contribution to the qualitative, which we will discuss in the following section. Style and context (S&C) guidance considerably impacts scores, reinforcing its significance in sustaining the comprehensive quality and coherence of the generated outputs. Multi-stage TI. In Figure 8, we delve into the influence of the number of stages (T) on performance. A transition from T = 1 to 4 results in substantial improvement. Upon reaching T = 6, the performance begins to navigate trade-off contours, prompting us to select T = 6 for the final model, as we seek to improve the text alignment of the synthesized images. Nevertheless, users have the flexibility to choose a different T value according to their preference. In Figure 9, we provide a visual comparison of the outcomes when T is set to 1, 2, and 6. While T = 1 struggles to reflect the artistic features of the style image or comprehend the input prompt, Method Text Score Style Score Baseline (Gal et al. 2022) 23.78 25.23 + Multi-Stage TI 24.74 29.86 + Context-Aware Prompt 24.65 29.50 + S&C Guidance (Ours) 25.38 29.62 Table 3: Model ablation study. Upon the textual inversion baseline (Gal et al. 2022), we attach the proposed components to measure the effectiveness of our method. “desert and oasis” “a cat” Style image Prompt w/o description + BLIP2 + Human feedback Figure 10: Comparison of three prompt strategies. The model trained without contextual description struggles to disentangle style and context from the style image, generating elements present in the style reference (e.g. the same composition in 1st row, a yellow dress in 2nd row). The contextual prompt alleviates this issue to some extent, but the BLIP2-based construction cannot completely eliminate it (e.g. the same vanishing point in 1st row). The issue is thoroughly addressed when human feedback is utilized. T = 2 uplifts the quality, yet it also falls short of embracing the style. In contrast, T = 6 proves proficient at mimicking the style image, effectively replicating delicate brushwork (1st row) or emulating the pointillism style (2nd row). Context-aware prompt. Figure 10 presents a visual comparison of three prompt constructions. Training the model without any contextual description (i.e. using “A painting in S∗style.”) poses a significant challenge, as it struggles to distinguish style from the context within the style image. Subsequently, this often results in the generation of elements that exist in the style reference, such as objects or scene perspective. The introduction of a contextual prompt considerably alleviates this issue, aiding the model in better separating stylistic elements from context. However, the automatic prompt construction does not fully resolve this, as BLIP-based captions often fail to capture all the details of the style image. The most effective solution is leveraging human feedback in the construction of prompts. This approach effectively tackles the issue, resulting in a more robust separation of style and context in the generated outputs. Guidance. In Figure 11, we explore style and context guidance by adjusting the scale parameters. 
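At each sampling step, the quantity being adjusted here is the combined noise prediction of Eq. (3); a small sketch is given below, where the function and tensor names are illustrative and the prompt pairings follow the description given with Eq. (3). As the sanity check shows, setting λs = λc = λ recovers ordinary classifier-free guidance with scale 2λ.

```python
import torch


def style_context_guidance(eps_uncond: torch.Tensor, eps_c: torch.Tensor,
                           eps_s: torch.Tensor, eps_full: torch.Tensor,
                           lambda_s: float, lambda_c: float) -> torch.Tensor:
    """Combine the four noise predictions as in Eq. (3).

    eps_uncond -- eps(null), unconditional prediction
    eps_c      -- eps(v_c), prompt with the context description C_c
    eps_s      -- eps(v_s), prompt C carrying the style tokens
    eps_full   -- eps(v), full prompt with both style and context
    """
    return (eps_uncond
            + lambda_s * (eps_full - eps_c) + lambda_c * (eps_c - eps_uncond)
            + lambda_c * (eps_full - eps_s) + lambda_s * (eps_s - eps_uncond))


# Sanity check: with lambda_s == lambda_c == lam the expression collapses to
# eps_uncond + 2 * lam * (eps_full - eps_uncond), i.e. plain classifier-free
# guidance with scale 2 * lam.
e0, ec, es, ev = (torch.randn(1, 4, 64, 64) for _ in range(4))
lam = 3.5
out = style_context_guidance(e0, ec, es, ev, lam, lam)
assert torch.allclose(out, e0 + 2 * lam * (ev - e0), atol=1e-5)
```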
When we amplified the style guidance strength (λs), the model mirrors the style image, illustrating style guidance’s capability in managing the image’s aesthetics. Yet, overemphasis on style risks compromising the context, leading to outputs that, while stylisThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 679 Context guidance Style guidance Style image Style image + + Figure 11: Study on the style and context guidance. Inference prompt: “A cat”. By adjusting the scale parameters (λs, λc), we assess the influence of style and context guidance on the synthesized image. Increasing the style guidance strength causes the model to align more closely with the aesthetics of the style image; however, an excessive emphasis on style could compromise the context. Conversely, increasing the context guidance strength ensures the output corresponds with the inference prompt, but overly strong context guidance could deviate the output from the original style. Style Mixing “a lighthouse on a cliff” “a bridge” Style image A B C Prompt A T 0 T 0 A A B T 0 A B C Baseline B T 0 T 0 B B A T 0 B C A A + B + C A + B + C Figure 12: Style mixing. Multi-stage TI facilitates style mixing from various style references. A user can customize a new style by substituting style tokens at different stages t. For example, the style token closer to t = T tends to influence the structure of the image, while those closer to t = 0 have a stronger effect on local and detailed attributes. For comparison, we display the baseline that employs all style tokens at every stage (i.e. using “A painting in SA t , SB t , SC t style” at all stages). tically congruent, might diverge from the intended context. On the other hand, strengthening context guidance (λc) ensures the output resembles the inference prompt, highlighting context guidance’s essential role in preserving contextual integrity. However, excessively strong context guidance could steer the output away from the original style, underlining the need for a nuanced balance of guidance for generating visually appealing and contextually accurate images. Nevertheless, this offers a new dimension of control over the synthesized image, differing from the classifier-free guidance (Ho and Salimans 2022). The additional control is a crucial element in the workflow of digital art production, considering its delicate and nuanced final outcomes. Style mixing. As shown in Figure 12, multi-stage TI opens up a novel avenue for an intriguing aspect of style mixing from diverse style references. This process empowers users to customize a unique style by deploying different style tokens at each stage t. The style tokens close to t = T predominantly impact the structure of the image, akin to broad strokes, while tokens closer to t = 0 affect local and detailed attributes, akin to intricate brushwork. To provide a concrete point of comparison, we present a baseline model that incorporates all style tokens at every stage, using the prompt “A painting in SA t , SB t , SC t styles”. While the baseline produces reasonable style quality, it lacks a control factor for extracting partial stylistic features from the reference. Consequently, the fusion of styles with multi-stage TI underscores the creative and flexible nature of our model, offering users a broad range of applications for artistic creation. Conclusion We have introduced DreamStyler, a novel image generation method with a given style reference. 
By optimizing multistage TI with a context-aware text prompt, DreamStyler achieves remarkable performance in both text-to-image synthesis and style transfer. Content and style guidance provides a more adaptable way of handling diverse style references. Limitations. While DreamStyler exhibits outstanding ability in generating artistic imagery, it is important to acknowledge its limitations within the intricate context of artistic expression. The vast spectrum of artistry, spanning from primitive elements to more nuanced and abstract styles (such as surrealism), demands thorough definition and examination from both artistic and technological perspectives. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 680 References Ahn, N.; Kwon, P.; Back, J.; Hong, K.; and Kim, S. 2023. Interactive Cartoonization with Controllable Perceptual Factors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16827–16835. Balaji, Y.; Nah, S.; Huang, X.; Vahdat, A.; Song, J.; Kreis, K.; Aittala, M.; Aila, T.; Laine, S.; Catanzaro, B.; et al. 2022. ediffi: Text-to-image diffusion models with an ensemble of expert denoisers. arXiv preprint arXiv:2211.01324. Chen, H.; Wang, Z.; Zhang, H.; Zuo, Z.; Li, A.; Xing, W.; Lu, D.; et al. 2021. Artistic style transfer with internalexternal learning and contrastive learning. Advances in Neural Information Processing Systems, 34: 26561–26573. Choi, J.; Lee, J.; Shin, C.; Kim, S.; Kim, H.; and Yoon, S. 2022. Perception prioritized training of diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11472–11481. Deng, Y.; Tang, F.; Dong, W.; Ma, C.; Pan, X.; Wang, L.; and Xu, C. 2022. StyTr2: Image Style Transfer with Transformers. In CVPR, 11326–11336. Gal, R.; Alaluf, Y.; Atzmon, Y.; Patashnik, O.; Bermano, A. H.; Chechik, G.; and Cohen-Or, D. 2022. An image is worth one word: Personalizing text-to-image generation using textual inversion. arXiv preprint arXiv:2208.01618. Gatys, L. A.; Ecker, A. S.; and Bethge, M. 2016. Image style transfer using convolutional neural networks. In CVPR, 2414–2423. Ho, J.; and Salimans, T. 2022. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598. Hong, K.; Jeon, S.; Lee, J.; Ahn, N.; Kim, K.; Lee, P.; Kim, D.; Uh, Y.; and Byun, H. 2023. AesPA-Net: Aesthetic Pattern-Aware Style Transfer Networks. Huang, X.; and Belongie, S. 2017. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 1501–1510. Kumari, N.; Zhang, B.; Zhang, R.; Shechtman, E.; and Zhu, J.-Y. 2023. Multi-concept customization of text-to-image diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1931–1941. Li, J.; Li, D.; Savarese, S.; and Hoi, S. 2023. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597. Li, W.; Xu, X.; Xiao, X.; Liu, J.; Yang, H.; Li, G.; Wang, Z.; Feng, Z.; She, Q.; Lyu, Y.; et al. 2022. UPainting: Unified Text-to-Image Diffusion Generation with Cross-modal Guidance. arXiv preprint arXiv:2210.16031. Liu, S.; Lin, T.; He, D.; Li, F.; Wang, M.; Li, X.; Sun, Z.; Li, Q.; and Ding, E. 2021. Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In Proceedings of the IEEE/CVF international conference on computer vision, 6649–6658. Meng, C.; He, Y.; Song, Y.; Song, J.; Wu, J.; Zhu, J.-Y.; and Ermon, S. 2021. 
Sdedit: Guided image synthesis and editing with stochastic differential equations. arXiv preprint arXiv:2108.01073. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, 8748–8763. PMLR. Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 10684– 10695. Ruiz, N.; Li, Y.; Jampani, V.; Pritch, Y.; Rubinstein, M.; and Aberman, K. 2023. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 22500–22510. Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E. L.; Ghasemipour, K.; Gontijo Lopes, R.; Karagol Ayan, B.; Salimans, T.; et al. 2022. Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neural Information Processing Systems, 35: 36479–36494. Sohn, K.; Ruiz, N.; Lee, K.; Chin, D. C.; Blok, I.; Chang, H.; Barber, J.; Jiang, L.; Entis, G.; Li, Y.; et al. 2023. StyleDrop: Text-to-Image Generation in Any Style. arXiv preprint arXiv:2306.00983. Tan, W. R.; Chan, C. S.; Aguirre, H.; and Tanaka, K. 2019. Improved ArtGAN for Conditional Synthesis of Natural Image and Artwork. IEEE Transactions on Image Processing, 28(1): 394–409. Voynov, A.; Chu, Q.; Cohen-Or, D.; and Aberman, K. 2023. P+: Extended Textual Conditioning in Text-to-Image Generation. arXiv preprint arXiv:2303.09522. Wang, Z.; Zhang, Z.; Zhao, L.; Zuo, Z.; Li, A.; Xing, W.; and Lu, D. 2022. AesUST: towards aesthetic-enhanced universal style transfer. In Proceedings of the 30th ACM International Conference on Multimedia, 1095–1106. Zhang, L.; and Agrawala, M. 2023. Adding conditional control to text-to-image diffusion models. arXiv preprint arXiv:2302.05543. Zhang, Y.; Huang, N.; Tang, F.; Huang, H.; Ma, C.; Dong, W.; and Xu, C. 2023. Inversion-based style transfer with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10146–10156. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 681 | 2024 | 76 |
18,584 | Discretization-Induced Dirichlet Posterior for Robust Uncertainty Quantification on Regression Xuanlong Yu1,2, Gianni Franchi2, Jindong Gu3, Emanuel Aldea1 1SATIE, Paris-Saclay University 2U2IS, ENSTA Paris, Institut Polytechnique de Paris 3University of Oxford Abstract Uncertainty quantification is critical for deploying deep neural networks (DNNs) in real-world applications. An Auxiliary Uncertainty Estimator (AuxUE) is one of the most effective means to estimate the uncertainty of the main task prediction without modifying the main task model. To be considered robust, an AuxUE must be capable of maintaining its performance and triggering higher uncertainties while encountering Out-of-Distribution (OOD) inputs, i.e., to provide robust aleatoric and epistemic uncertainty. However, for vision regression tasks, current AuxUE designs are mainly adopted for aleatoric uncertainty estimates, and AuxUE robustness has not been explored. In this work, we propose a generalized AuxUE scheme for more robust uncertainty quantification on regression tasks. Concretely, to achieve a more robust aleatoric uncertainty estimation, different distribution assumptions are considered for heteroscedastic noise, and Laplace distribution is finally chosen to approximate the prediction error. For epistemic uncertainty, we propose a novel solution named Discretization-Induced Dirichlet pOsterior (DIDO), which models the Dirichlet posterior on the discretized prediction error. Extensive experiments on age estimation, monocular depth estimation, and super-resolution tasks show that our proposed method can provide robust uncertainty estimates in the face of noisy inputs and that it can be scalable to both image-level and pixel-wise tasks. 1 Introduction Uncertainty quantification in deep learning has gained significant attention in recent years (Blundell et al. 2015; Kendall and Gal 2017; Lakshminarayanan, Pritzel, and Blundell 2017; Abdar et al. 2021). Deep Neural Networks (DNNs) frequently provide overconfident predictions and lack uncertainty estimates, especially for regression models outputting single point estimates, affecting the interpretability and credibility of the prediction results. There are two types of uncertainty in DNNs: unavoidable aleatoric uncertainty caused by data noise, and reducible epistemic or knowledge uncertainty due to insufficient training data (H¨ullermeier and Waegeman 2021; Kendall and Gal 2017; Malinin and Gales 2018). Disentangling and estimating them can better guide the decision-making based on DNN predictions. Many seminal methods (Blundell Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. et al. 2015; Gal and Ghahramani 2016; Lakshminarayanan, Pritzel, and Blundell 2017; Kendall and Gal 2017; Wen, Tran, and Ba 2020; Franchi et al. 2022) have been proposed to capture these two types of uncertainty. However, these methods require extensive modifications to the underlying model structure or more computational cost. Furthermore, since DNNs are often designed as task-oriented, obtaining uncertainty estimates by changing the structure of DNNs might reduce main task performance. As one of the most effective methods, Auxiliary Uncertainty Estimators (AuxUE) (Corbi`ere et al. 2019; Yu, Franchi, and Aldea 2021; Jain et al. 2021; Corbi`ere et al. 2021; Besnier et al. 2021; Upadhyay et al. 2022; Shen et al. 2023) aim to obtain uncertainty estimates without affecting the main task performance. 
AuxUEs are DNNs that rely on the main task models used for estimating the uncertainty of the main task prediction. They are trained using the input, output, or intermediate features of the pretrained main task model. In practice, the model inputs can be distribution-shifted from the training set, such as samples disturbed by noise (Hendrycks and Dietterich 2019), or even Out-of-Distribution (OOD) data. The pre-trained main task models mainly exhibit aleatoric uncertainty in the outputs given the In-Distribution (ID) inputs. Meanwhile, higher epistemic uncertainty is expected to be raised when OOD data is fed. A robust AuxUE is required in this case to provide robust aleatoric uncertainty estimates when facing InDistribution (ID) inputs and epistemic uncertainty estimates when encountering OOD inputs. This can help to make effective decisions under anomalies and uncertainty (Guo et al. 2022), such as in autonomous driving (Arnez et al. 2020). To achieve robustness, disentangling the two types of uncertainty becomes a prerequisite, aiding in improved epistemic uncertainty estimation and a more robust aleatoric uncertainty estimation solution. For vision regression tasks, basic AuxUE addresses only aleatoric uncertainty estimation (Yu, Franchi, and Aldea 2021). Recent works (Upadhyay et al. 2022; Qu et al. 2022) aim to improve the generalization ability of the basic AuxUEs. In DEUP (Jain et al. 2021), the authors propose to add a density estimator based on normalizing flows (Rezende and Mohamed 2015) in the AuxUE, yet challenging to apply on pixel-wise vision tasks. In the current context, both the robustness analysis and modeling of epistemic uncertainty are The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6835 underexplored for vision regression problems. To further explore robust aleatoric and epistemic uncertainty estimation in vision regression tasks, in this work, we propose a novel uncertainty quantification solution based on AuxUE. For estimating aleatoric uncertainty, we follow the approach of previous works such as (Nix and Weigend 1994; Kendall and Gal 2017; Yu, Franchi, and Aldea 2021; Upadhyay et al. 2022) and model the heteroscedastic noise using different distribution assumptions. For epistemic uncertainty quantification, we apply a discretization approach to the continuous prediction errors of the main task. This helps to mitigate the numerical impact of the training targets, which may be distributed in a long-tailed manner. With the discretized prediction errors, we propose parameterizing Dirichlet posterior (Sensoy, Kaplan, and Kandemir 2018; Charpentier, Z¨ugner, and G¨unnemann 2020; Joo, Chung, and Seo 2020) for estimating epistemic uncertainty without relying on OOD data during the training process. In summary, our contributions are as follows: (1) We propose a generalized AuxUE solution for aleatoric and epistemic uncertainty estimation; (2) We propose DiscretizationInduced Dirichlet pOsterior (DIDO), a new epistemic uncertainty estimation strategy for regression, which, to the best of our knowledge, is the only existing work employing this distribution for regression; (3) We demonstrate that assuming the noise which affects the main task predictions to follow Laplace distribution can help AuxUE achieve a more robust aleatoric uncertainty estimation; (4) We propose a new evaluation strategy for the OOD analysis of pixel-wise regression tasks based on systematically non-annotated patterns. 
We show the robustness and scalability of the proposed generalized AuxUE and DIDO on the age estimation, super-resolution and monocular depth estimation tasks. 2 Related Works Auxiliary uncertainty estimation Auxiliary uncertainty estimation strategies can be divided into two categories: unsupervised and supervised. For the former, Dropout layer injection (Mi et al. 2022; Gal and Ghahramani 2016) samples the network by forward propagations, and (Hornauer and Belagiannis 2022) proposed to use the gradients from the back-propagation. For the latter, AuxUEs are applied to obtain the uncertainty. In addition to regression-oriented ones presented in Section 1, we here introduce classificationoriented solutions. ConfidNet (Corbi`ere et al. 2019) and KLoS (Corbi`ere et al. 2021) learn the true class probability and evidence for the DNNs, respectively. Shen et al. (Shen et al. 2023) apply evidential classification (Joo, Chung, and Seo 2020) to their AuxUE. ObsNet (Besnier et al. 2021) uses adversarial noise to provide more abundant training targets in semantic segmentation task for their AuxUE. Evidential deep learning and Dirichlet networks Evidential deep learning (Ulmer 2021) (EDL) is a modern application of the Dempster-Shafer Theory (Dempster 1968) to estimate epistemic uncertainty with single forward propagation. In classification tasks, EDL is usually formed as parameterizing a prior (Malinin and Gales 2018, 2019) or a posterior (Joo, Chung, and Seo 2020; Charpentier, Z¨ugner, and G¨unnemann 2020; Charpentier et al. 2022; Sensoy, Kaplan, and Kandemir 2018) Dirichlet distribution. In regression, EDL estimates parameters of the conjugate prior of Gaussian distribution (Amini et al. 2020; Charpentier et al. 2022; Malinin et al. 2020). Multi-task learning is also applied to alleviate main task performance degradation (Oh and Shin 2022), yet using AuxUE will not affect main task performance. Therefore, we apply EDL to our AuxUE. Moreover, we are the first to apply the Dirichlet network to the regression tasks by discretizing the main task prediction errors. Robustness of uncertainty estimation A robust uncertainty estimator should show stable performance when encountering images perturbed to varying degrees (Michaelis et al. 2019; Hendrycks and Dietterich 2019; Kamann and Rother 2021). Similar studies are applied to evaluate the robustness of uncertainty estimates (Yeo, Kar, and Zamir 2021; Franchi et al. 2022). Meanwhile, it should provide a higher uncertainty when facing OOD data, such as in classification tasks (Hendrycks and Gimpel 2017; Liang, Li, and Srikant 2018). In image-level regression, we can use the definition of OOD from image classification (Techapanurak and Okatani 2021) in, for example, age estimation task. But for pixel-wise regression tasks, the notion of OOD data is illdefined. Typical OOD analysis estimates uncertainty on a different dataset than the training dataset (Charpentier et al. 2022). Yet, image patterns that are rarely assigned ground truth values in the training set can also be regarded as OOD. In this work, we also provide a new evaluation strategy for OOD patterns based on outdoor depth estimation to compensate for this experimental shortfall. 3 Method We define a training dataset D = {x(i), y(i)}N i where N is the number of images. We consider that x, y are drawn from a joint distribution P(x, y). A pipeline for the main task and auxiliary uncertainty estimation is shown in Fig. 1. 
We define a main task DNN fω with trainable parameters ω, as shown in the blue area in Fig. 1. Similar to (Blundell et al. 2015), we view fω as a probabilistic model P(y|x, ω) which follows a Gaussian distribution N(y|µ, σ²) (Bishop and Nasrabadi 2006). The variable σ² represents the variance of the noise in the DNN's prediction, and the variable µ is the prediction ŷ = fω(x) in this case. The noise is considered here to be homoscedastic, i.e., all data share the same noise. The parameter ω is optimized by maximizing the log-likelihood, $\hat{\omega} = \arg\max_{\omega} \log P(\mathcal{D}|\omega)$, which is often performed by minimizing the Negative Log Likelihood (NLL) loss in practice. With the above-mentioned Gaussian assumption on ŷ, the NLL loss optimizes the same objective as the Mean Square Error loss (Bishop and Nasrabadi 2006); thus, only the prediction target y is considered, and uncertainty modeling is absent from the main task training objective. AuxUE aims to obtain this missing uncertainty estimation without modifying ω̂. We consider two DNNs, σΘ1 and σΘ2, in our generalized AuxUE with parameters Θ1 and Θ2, i.e., the two DNNs in the orange area of Fig. 1. σΘ1 estimates the aleatoric uncertainty u_alea, and σΘ2 estimates the epistemic uncertainty u_epis. (Figure 1: Pipeline of our proposed AuxUE solution. A generalized AuxUE is considered with two DNNs σΘ1 and σΘ2 for estimating aleatoric and epistemic uncertainty, respectively. Presented notations are consistent with and described in Section 3. The encoder parts of both DNNs can be shared; we compare the performance in Section 4.3. The input of AuxUE can be the input, output, or intermediate features of fω̂; we here simplify it to the image x^(i) for brevity.) The backbones of σΘ1 and σΘ2 are based on basic AuxUEs such as ConfidNet (Corbière et al. 2019), BayesCap (Upadhyay et al. 2022) and SLURP (Yu, Franchi, and Aldea 2021), depending on the tasks. The input of AuxUE can be the input, output, or intermediate features of fω̂; it depends on the design of the basic AuxUEs and is not the focus of this paper. We detail the inputs for the different experiments in Supplementary material (Supp)1 Section A. 3.1 Aleatoric Uncertainty Estimation on AuxUE Based on the preliminaries above, we now start with the first AuxUE σΘ1, which addresses the u_alea estimation problem as in SLURP and BayesCap. We consider that the data-dependent noise (Goldberg, Williams, and Bishop 1997; Bishop and Quazaz 1996; Nix and Weigend 1994) follows N(0, σ²). Then we use the DNN σΘ1 to estimate the heteroscedastic aleatoric uncertainty u_alea (Nix and Weigend 1994; Kendall and Gal 2017). Θ̂1 and the loss function L(Θ1) are given by: $\hat{\Theta}_1 = \arg\max_{\Theta_1} P(\mathcal{D}|\hat{\omega}, \Theta_1) = \arg\max_{\Theta_1} \sum_{i=1}^{N} \log P(y^{(i)}|x^{(i)}, \hat{\omega}, \Theta_1)$ and $\mathcal{L}(\Theta_1) = \frac{1}{N}\sum_{i=1}^{N}\Big[\frac{1}{2}\log\sigma_{\Theta_1}(x^{(i)}) + \frac{(y^{(i)} - f_{\hat{\omega}}(x^{(i)}))^2}{2\,\sigma_{\Theta_1}(x^{(i)})}\Big]$ (1). The top of σΘ1 is an exponential or Softplus function to keep the output non-negative. The aleatoric uncertainty estimate is û^(i)_alea = σΘ1(x^(i)). Minimizing L(Θ1) is also equivalent to making σΘ1 correctly predict the main task errors on the training set according to likelihood maximization. The errors set is denoted as $\epsilon = \{\epsilon^{(i)}\}_{i=1}^{N} = \{(y^{(i)} - f_{\hat{\omega}}(x^{(i)}))^2\}_{i=1}^{N}$.
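To make Eq. (1) concrete, a minimal PyTorch-style sketch of the aleatoric head's loss is given below. It is a sketch under our reading of the text, not the authors' released code; names such as aux_net and main_net are illustrative.

```python
import torch

def aleatoric_nll_loss(sigma_pred, y_true, y_main):
    """Gaussian NLL of Eq. (1); sigma_pred = sigma_Theta1(x) is the predicted
    variance (kept positive by a Softplus/exp head), y_main = f_omega_hat(x)."""
    sq_err = (y_true - y_main) ** 2                   # the error epsilon^(i)
    return (0.5 * torch.log(sigma_pred) + sq_err / (2.0 * sigma_pred)).mean()

# Illustrative usage (aux_net / main_net are assumed names):
# sigma_pred = torch.nn.functional.softplus(aux_net(x)) + 1e-6
# loss = aleatoric_nll_loss(sigma_pred, y, main_net(x).detach())
```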
Since the distribution assumed for the noise affecting ŷ can be different from Gaussian, the Laplacian (Marks et al. 1978) and Generalized Gaussian (Nadarajah 2005; Upadhyay et al. 2022) distributions are also considered in this work; the corresponding loss functions are provided in Supp Section B. The objective remains unchanged: employing AuxUE to estimate and predict the component associated with aleatoric uncertainty using various distribution assumptions. Perturbing input data in various ways with different types of noise makes it challenging to accurately identify the actual noise distribution. Relying on a single distribution assumption and loss function can affect the reliability of aleatoric uncertainty estimates. In Section 4.3, we assess the impact of different distribution assumptions and losses on the robustness of these estimates. 3.2 Epistemic Uncertainty Estimation on AuxUE Modeling AuxUEs as formalized in Eq. 1 helps to estimate aleatoric uncertainty for fω̂. Yet, taking this uncertainty prediction as an indicator for epistemic uncertainty is not methodologically grounded. Evidential learning is considered to be an effective uncertainty estimation approach (Ulmer 2021). In regression tasks, the DNN estimates the parameters of the conjugate prior of the Gaussian distribution, such as the Normal Inverse Gamma (NIG) distribution (Amini et al. 2020). The training makes the model fall back onto a NIG prior for rare samples by attaching lower evidence to the samples with higher prediction errors, using a regularization term in the loss function (Amini et al. 2020). Yet, long-tailed prediction errors in a standard AuxUE lead to assigning high evidence to most data points, diminishing its ability to estimate epistemic uncertainty, as confirmed by our experiments. In contrast to previous works, which consider the numerical value of the prediction errors for both aleatoric and epistemic uncertainty estimation, we disentangle them and apply discretization to mitigate the numerical bias from long-tailed prediction errors. Specifically, σΘ1 focuses on aleatoric uncertainty by considering the numerical value of prediction errors, while for epistemic uncertainty, σΘ2 considers the value-free categories of the prediction errors. To this end, we propose the Discretization-Induced Dirichlet pOsterior (DIDO), which involves discretizing prediction errors and estimating a Dirichlet posterior based on the discrete errors. Discretization on prediction errors To mitigate the numerical bias due to imbalanced data in our prediction error estimation, we employ a balanced discretization approach. Discretization is widely applied in classification approaches for regression (Yu, Franchi, and Aldea 2022). Popular discretization methods can be generally divided into handcrafted (Cao, Wu, and Shen 2017) and adaptive (Bhat, Alhashim, and Wonka 2021) ones. The latter requires computationally expensive components like mini-ViT (Dosovitskiy et al. 2021) to extract global features. Thus, we discretize prediction errors in a handcrafted way. For pixel-wise scenarios, discretization is applied using per-image prediction errors, and for other cases, such as image-level tasks and 1D signal estimation, we use per-dataset prediction errors. Details and demo-code can be found in Supp Section C.1 and C.2, respectively. 1 Refer to: https://arxiv.org/abs/2308.09065
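As a rough illustration of the handcrafted, equal-frequency binning just described (the actual demo-code is in the Supp), the following NumPy sketch bins prediction errors into K balanced classes and produces one-hot training targets. The helper name and K value are ours.

```python
import numpy as np

def discretize_errors(errors, K):
    """Equal-frequency ("balanced") binning of prediction errors into K classes.
    Returns the bin index k of each error and its one-hot training target."""
    errors = np.asarray(errors, dtype=np.float64).ravel()
    # inner bin edges chosen so each subset holds roughly the same number of errors
    edges = np.quantile(errors, np.linspace(0.0, 1.0, K + 1))[1:-1]
    idx = np.searchsorted(edges, errors, side="right")   # subset index in [0, K-1]
    one_hot = np.eye(K)[idx]                             # one-hot targets for sigma_Theta2
    return idx, one_hot

# e.g., per-dataset usage: idx, targets = discretize_errors((y - y_pred) ** 2, K=8)
```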
We divide the set of errors ϵ, defined in Section 3.1, into K subsets, where the k-th subset is denoted by the subscript k. To do this, we sort the errors in ascending order and create a new set ϵ′ with the same elements as ϵ. Then we divide ϵ′ into K subsets of equal size, represented by $\{\epsilon_k\}_{k=1}^{K}$. Each error value ϵ^(i) is then replaced by the index of its corresponding subset k ∈ [1, K] and transformed into a one-hot vector, denoted by ϵ̄^(i), as the final training target. Specifically, the one-hot vector is defined as $\bar{\epsilon}^{(i)} = [\bar{\epsilon}^{(i)}_{1} \dots \bar{\epsilon}^{(i)}_{k} \dots \bar{\epsilon}^{(i)}_{K}]^{T} \in \mathbb{R}^{K}$ (2), where $\bar{\epsilon}^{(i)}_{k} = 1$ if ϵ^(i) belongs to the k-th subset, and 0 otherwise. Each subset or bin represents a class of error severity. This process creates a new dataset $\bar{\mathcal{D}} = \{x^{(i)}, \bar{\epsilon}^{(i)}\}_{i=1}^{N}$, consisting of discretized prediction errors represented as one-hot vectors, which serves for training the epistemic uncertainty estimator σΘ2. Modeling epistemic uncertainty using ϵ in AuxUE In a Bayesian framework, given an input x, the predictive uncertainty of a DNN is modeled by P(y|x, D). Since we have a trained main task DNN, and as proposed in (Malinin and Gales 2018), we assume a point estimate ω̂ of ω: $P(\omega|\mathcal{D}) = \delta(\omega - \hat{\omega}) \;\Rightarrow\; P(y|x, \mathcal{D}) \approx P(y|x, \hat{\omega})$ (3), with δ being the Dirac function. We follow the previous assumption, i.e., the prediction is drawn from a Gaussian distribution N(y|µ, σ²), and according to (Amini et al. 2020), we denote by α the parameters of the prior distributions of (µ, σ²), so that P(µ, σ²|α, ω̂) = P(µ|σ², α, ω̂)P(σ²|α, ω̂). After introducing α and Eq. 3, we can approximate P(y|x, D) as: $P(y|x,\mathcal{D}) = \iint P(y|x,\sigma^2)\,P(\sigma^2|\omega)\,P(\omega|\mathcal{D})\,d\sigma^2\,d\omega = \int P(y|x,\sigma^2)\,P(\sigma^2|\mathcal{D})\,d\sigma^2 \approx \int P(y|x,\sigma^2)\,P(\sigma^2|x,\alpha,\hat{\omega})\,d\sigma^2$ (4). A detailed derivation can be found in Supp Section C.3. We can consider ϵ to be drawn from a continuous distribution parameterized by σ². The discrepancy in variances P(σ²|D) can describe the epistemic uncertainty of the final prediction, and the variational approach can be applied (Joo, Chung, and Seo 2020; Malinin and Gales 2018): P(σ²|x, α, ω̂) ≈ P(σ²|D). After discretization, we can transform the approximation to P(π|x, α, ω̂) ≈ P(π|D̄), with D̄ defined above, π the parameters of a discrete distribution, and α re-defined as the prior distribution parameters of this discrete distribution. In the next section, we omit ω̂ and x for the sake of brevity. Dirichlet posterior for epistemic uncertainty According to the previous discussion on epistemic uncertainty modeling and error discretization, we model a Dirichlet posterior (Sensoy, Kaplan, and Kandemir 2018; Joo, Chung, and Seo 2020; Charpentier et al. 2022) on the discrete errors ϵ̄ to obtain the epistemic uncertainty on the main task. Intuitively, we consider each one-hot prediction error ϵ̄^(i) to be drawn from a categorical distribution, and $\pi^{(i)} = (\pi^{(i)}_{1}, \dots, \pi^{(i)}_{K})$ denotes the random variable over this distribution, where $\sum_{k=1}^{K}\pi^{(i)}_{k} = 1$ and $\pi^{(i)}_{k} \in [0, 1]$ for k ∈ {1, ..., K}. The conjugate prior of the categorical distribution is a Dirichlet distribution: $P(\pi^{(i)}|\alpha^{(i)}) = \frac{\Gamma(S^{(i)})}{\prod_{k=1}^{K}\Gamma(\alpha^{(i)}_{k})}\prod_{k=1}^{K}\big(\pi^{(i)}_{k}\big)^{\alpha^{(i)}_{k}-1}$ (5), with Γ(·) the Gamma function, α^(i) the positive concentration parameters of the Dirichlet distribution, and $S^{(i)} = \sum_{k=1}^{K}\alpha^{(i)}_{k}$ the Dirichlet strength. To get access to the epistemic uncertainty, the categorical posterior P(π|D̄) is needed, yet it is intractable.
Approximating P(π| ¯D) using Monte-Carlo sampling (Gal and Ghahramani 2016) or ensembles (Lakshminarayanan, Pritzel, and Blundell 2017) comes with an increased computational cost. Instead, we adopt a variational way to learn a Dirichlet distribution in Eq. 5 to approximate P(π| ¯D) as in (Joo, Chung, and Seo 2020). Here, σΘ2 outputs the concentration parameters α of P(π|α), and α update according to the observed inputs. It can also be viewed as collecting the evidence e as a measure for supporting the classification decisions for each class (Sensoy, Kaplan, and Kandemir 2018), akin to estimating the Dirichlet posterior. Since the numbers of data points are identical for each class in ¯D, and no e(i) output before training, we set the initial α as 1 so that the Dirichlet concentration parameters can be formed as in (Sensoy, Kaplan, and Kandemir 2018; Charpentier, Z¨ugner, and G¨unnemann 2020): α(i) = e(i) + 1 = σΘ2(x(i)) + 1, where e(i) is given by an exponential function on the top of σΘ2. Then we minimize the KullbackLeibler (KL) divergence between the variational distribution P(π|x, Θ2) and the true posterior P(π| ¯D) to achieve bΘ2: bΘ2 = argmin Θ2 KL[P(π|x, Θ2)||P(π| ¯D)] = argmin Θ2 -EP (π|x,Θ2)[log P( ¯D|π)]+KL[P(π|x, Θ2)||P(π)] The loss function will be equivalent to minimizing the negative evidence lower bound (Jordan et al. 1999), considering the prior distribution P(π) as Dir(1): L(Θ2) = 1 N N X i=1 K X k=1 [¯ϵ(i) k (ψ(S(i)) −ψ(α(i) k ))] +λKL(Dir(α(i))||Dir(1)) (6) where ψ is the digamma function, λ is a positive hyperparameter for the regularization term and ¯ϵ is given by Eq. 2. For measuring epistemic uncertainty, we consider using the spread in the Dirichlet distribution (Shen et al. 2023; Charpentier, Z¨ugner, and G¨unnemann 2020), which is The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6838 shown in (Shen et al. 2023) to outperform other metrics, e.g. differential entropy. Specifically, the epistemic uncertainty is inversely proportional to the Dirichlet strength: ˆu(i) epis = σbΘ2(x(i)) = K S(i) . The class corresponding to the maximum output from σΘ2 can also represent the aleatoric uncertainty. Yet, this is a rough estimate due to quantization errors and underperforming the other solutions. We provide the corresponding results in Supp Tab. A14. Overall, we take only σΘ1 output as the aleatoric uncertainty. In conclusion, we propose a generalized AuxUE with two components, namely σΘ1 and σΘ2, to quantify the uncertainty of main task model outputs. By assuming different distributions on heteroscedastic noise in training data (Section 3.1), σΘ1 is trained for aleatoric uncertainty estimation. Meanwhile, applying the proposed DIDO on σΘ2 and measuring the spread of the Dirichlet distribution (Section 3.2) aids in estimating epistemic uncertainty. Overall, we integrate the optimization for both uncertainty estimators, and the final loss for training the generalized AuxUE is: LAuxUE = L(Θ1) + L(Θ2) (7) For L(Θ1), in addition to the Gaussian NLL, we will test other NLL loss functions according to different distribution assumptions in the experiment. 4 Experiments In this section, we first show the feasibility of the proposed generalized AuxUE on toy examples. Then, we demonstrate the effectiveness of epistemic uncertainty estimation using the proposed DIDO on age estimation and monocular depth estimation (MDE) tasks, and investigate the robustness of aleatoric uncertainty estimation on MDE task. 
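Before turning to the experiments, a compact sketch of the DIDO objective of Eq. (6) and the Dirichlet-strength uncertainty may help; it is a paraphrase under our reading of the text (the default λ and the constant term of the KL regularizer are our choices), not the authors' released implementation.

```python
import torch

def dido_loss(evidence, onehot_targets, lam=0.01):
    """Eq. (6): evidence e >= 0 from the sigma_Theta2 head (exp on top), alpha = e + 1."""
    alpha = evidence + 1.0
    S = alpha.sum(dim=1, keepdim=True)                       # Dirichlet strength S
    nll = (onehot_targets * (torch.digamma(S) - torch.digamma(alpha))).sum(dim=1)
    K = alpha.shape[1]
    # KL(Dir(alpha) || Dir(1)); lgamma(K) is the constant normaliser of Dir(1)
    kl = (torch.lgamma(S.squeeze(1)) - torch.lgamma(alpha).sum(dim=1)
          - torch.lgamma(torch.tensor(float(K)))
          + ((alpha - 1.0) * (torch.digamma(alpha) - torch.digamma(S))).sum(dim=1))
    return (nll + lam * kl).mean()

def epistemic_uncertainty(evidence):
    alpha = evidence + 1.0
    return alpha.shape[1] / alpha.sum(dim=1)                 # u_epis = K / S
```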
Due to page limitations, the experiments for an example of OOD detection in tabular data regression and the super-resolution task are provided in Supp Section A.2 and A.4 respectively. In the result tables, the top two performing methods are highlighted in color. All the results are averaged by three runs. The shar.enc. and sep.enc. denote respectively sharedparameters for the encoders and separate encoders of σΘ1 and σΘ2 in the generalized AuxUE. For epistemic uncertainty, we compare our proposed method with the solutions based on modified main DNN: LDU (Franchi et al. 2022), Evidential learning (Evi.) (Amini et al. 2020; Joo, Chung, and Seo 2020) and Deep Ensembles (DEns.) (Lakshminarayanan, Pritzel, and Blundell 2017). The detailed implementations and the main task performance for all experiments are provided in Supp Section A. 4.1 Toy Examples: Simple 1D Regression We generate two toy datasets to illustrate uncertainty estimates given by our proposed AuxUE, as shown in Fig. 2. In both examples, a tight aleatoric uncertainty estimation is provided on training data areas. For epistemic uncertainty, in Fig. 2-A, DIDO provides small uncertainty until reaching the unknown inputs x /∈[−3, 3]. In Fig. 2-B, we report the ‘in-between’ uncertainty estimates (Foong et al. 2019). On the in-between part x ∈[−1, 3], DIDO can provide higher epistemic uncertainty than in training set regions . . y epistemic uncertainty A 3*aleatoric uncertainty estimates x epistemic uncertainty estimates x B epistemic uncertainty y training samples predictions Figure 2: Results on toy examples. Aleatoric and epistemic uncertainty estimations given by our AuxUE are presented respectively as the uncertainty interval and degree (0-1). x ∈[−3, −1] and x ∈[3, 5]. In summary, the generalized AuxUE provides reliable uncertainty estimates in regions where training data is either present or absent. 4.2 Age Estimation and OOD Detection Epistemic uncertainty estimation for age estimation is similar to one for classification problems but has rarely been discussed in previous works. We use (unmodified) official ResNet34 (He et al. 2016) checkpoints from Coral (Cao, Mirjalili, and Raschka 2020) as the main task models. Our AuxUE is applied in a ConfidNet (Corbi`ere et al. 2019) style since it is more suitable for image-level tasks. Evaluation settings and datasets We train the models on AFAD (Niu et al. 2016) training set and choose AFAD test set as the ID dataset for the OOD detection task. We take CIFAR10 (Krizhevsky, Hinton et al. 2009), SVHN (Netzer et al. 2011), MNIST (LeCun 1998), FashionMNIST (Xiao, Rasul, and Vollgraf 2017), Oxford-Pets (Parkhi et al. 2012) and Noise image generated by Pytorch (Paszke et al. 2019) (FakeData) as the OOD datasets. We employ the Areas Under the receiver operating Characteristic (AUC) and the Precision-Recall curve (AUPR) (higher is better for both) to evaluate OOD detection performance. Results OOD detection results are shown in Tab. 1. DIDO performs the best on most datasets. On the Pets dataset, DIDO performs worse than DEns. and aleatoric uncertainty estimation head σΘ1. We argue that images of pets provide features closer to facial information, resulting in higher evidence estimates given by DIDO. While σΘ1 performs better in this case, which can jointly make AuxUE a better uncertainty estimator. Overall, we consider that using generalized AuxUE with DIDO is an alternative that can better detect OOD inputs than ensembling-based solutions. 
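For reference, the OOD-detection numbers of the kind reported in Tab. 1 can be computed from per-image uncertainties roughly as follows. The sketch assumes, as in the evaluation here, that OOD samples form the positive class and that the estimated uncertainty is used as the detection score.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def ood_detection_scores(u_id, u_ood):
    """AUC / AUPR with OOD samples as the positive class and the estimated
    (epistemic) uncertainty as the detection score."""
    scores = np.concatenate([np.asarray(u_id), np.asarray(u_ood)])
    labels = np.concatenate([np.zeros(len(u_id)), np.ones(len(u_ood))])
    return roc_auc_score(labels, scores), average_precision_score(labels, scores)
```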
4.3 Monocular Depth Estimation Task For the MDE task, we will evaluate both aleatoric and epistemic uncertainty estimation performance based on the AuxUE SLURP (Yu, Franchi, and Aldea 2021). Our generalized AuxUE is also constructed using SLURP as the backbone. We use BTS (Lee et al. 2019) as the main task model and KITTI (Geiger et al. 2013; Uhrig et al. 2017) Eigensplit (Eigen, Puhrsch, and Fergus 2014) training set for training both BTS and AuxUE models. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6839 Ours Modified main DNN OOD sets Metrics σΘ1 σΘ2(DIDO) LDU Evi. DEns. CIFAR10 AUC ↑ 96.0 100 95.2 50.0 99.2 AUPR ↑ 91.7 100 88.3 23.4 95.1 SVHN AUC ↑ 98.3 100 94.8 50.0 99.2 AUPR ↑ 98.1 100 93.2 44.3 97.8 MNIST AUC ↑ 97.8 100 97.6 50.0 99.6 AUPR ↑ 93.9 100 93.8 23.4 97.2 Fashion MNIST AUC ↑ 97.7 100 95.6 50.0 99.1 AUPR ↑ 94.0 100 89.3 23.4 93.8 Oxford Pets AUC ↑ 82.9 55.9 31.5 50.1 56.1 AUPR ↑ 53.3 23.9 12.5 18.5 21.3 Fake Data AUC ↑ 67.0 80.8 70.0 50.0 33.2 AUPR ↑ 59.7 70.2 58.8 49.5 37.8 Table 1: OOD detection results on Age estimation task. ID data is from the Asian Face Age Dataset (AFAD). A B C D Figure 3: Illustrations on MDE. A: input image, green points represent pixels with depth ground truth; B: depth prediction; C and D: aleatoric and epistemic uncertainty estimations. In B, C, and D, brighter pixels correspond to higher values. The areas lacking depth ground truth, e.g. sky and tramway, are assigned high uncertainty using DIDO. Aleatoric uncertainty estimation In this section, the goal is to analyze the fundamental performance and robustness of aleatoric uncertainty estimation under different distribution assumptions. We choose simple Gaussian (Sgau) (Nix and Weigend 1994), Laplacian (Lap), Generalized Gaussian (Ggau) (Upadhyay et al. 2022) and Normal-Inverse-Gamma (NIG) (Amini et al. 2020) distributions. We modify the loss functions and the head of the SLURP to output the desired parameters of the distributions. Evaluation settings and datasets We first build Sparsification curves (SC) (Bruhn and Weickert 2006): we achieve predictive SC by computing the prediction error of the remaining pixels after removing a certain partition of pixels (5% in our experiment) each time according to the highest uncertainty estimations. We can also obtain an Oracle SC by removing the pixels according to the highest prediction errors. Then, we have the same metrics used in (Poggi et al. 2020): Area Under the Sparsification Error (AUSE, lower is better), and Area Under the Random Gain (AURG, higher is better). We choose absolute relative error (REL) and root mean square error (RMSE) as the prediction error metrics. We generate KITTI-C from KITTI Eigen-split validation set using the code of ImageNet-C (Hendrycks and Dietterich 2019) to have different corruptions on the images to check the robustness of the uncertainty estimation solutions. We S Metrics Ggau Sgau NIG Lap sep. enc. σΘ1 Lap shar. enc. 
σΘ1 0 AUSE-REL ↓ 0.014 0.013 0.012 0.013 0.013 AUSE-RMSE ↓ 0.258 0.202 0.208 0.203 0.205 AURG-REL ↑ 0.023 0.023 0.024 0.023 0.023 AURG-RMSE ↑ 1.815 1.871 1.865 1.870 1.869 1 AUSE-REL ↓ 0.021 0.019 0.018 0.019 0.018 AUSE-RMSE ↓ 0.482 0.332 0.335 0.336 0.332 AURG-REL ↑ 0.029 0.031 0.032 0.031 0.032 AURG-RMSE ↑ 2.215 2.365 2.362 2.361 2.365 2 AUSE-REL ↓ 0.026 0.023 0.022 0.023 0.022 AUSE-RMSE ↓ 0.707 0.463 0.479 0.468 0.464 AURG-REL ↑ 0.035 0.039 0.039 0.038 0.039 AURG-RMSE ↑ 2.535 2.779 2.763 2.774 2.777 3 AUSE-REL ↓ 0.036 0.031 0.031 0.031 0.031 AUSE-RMSE ↓ 1.176 0.737 0.806 0.730 0.749 AURG-REL ↑ 0.044 0.049 0.049 0.049 0.049 AURG-RMSE ↑ 2.862 3.301 3.232 3.308 3.289 4 AUSE-REL ↓ 0.057 0.050 0.053 0.049 0.051 AUSE-RMSE ↓ 2.380 1.364 1.582 1.268 1.430 AURG-REL ↑ 0.051 0.058 0.054 0.059 0.056 AURG-RMSE ↑ 2.817 3.834 3.615 3.929 3.767 5 AUSE-REL ↓ 0.082 0.064 0.069 0.059 0.066 AUSE-RMSE ↓ 3.878 2.043 2.414 1.760 2.157 AURG-REL ↑ 0.045 0.063 0.057 0.067 0.061 AURG-RMSE ↑ 2.377 4.213 3.842 4.496 4.098 Table 2: Aleatoric uncertainty estimation results on MDE. S = 0 represents original KITTI dataset and S > 0 represents KITTI-C datasets. apply eighteen perturbations with five severities, including Gaussian noise, shot noise, etc., and take it along with the original KITTI for evaluation. Results As shown in Tab. 2, the Laplace assumption is more robust when the severity increases, while Gaussian one works better when the noise severity is smaller. We also check the proposed generalized AuxUE with a shared encoder. It shows that the epistemic uncertainty estimation branch affects the robustness of aleatoric uncertainty estimation in this case, especially under stronger noise. The next sections show epistemic uncertainty estimation results based on different methods. Furthermore, in Supp Tab. A15 and Tab. A16, we also verify whether aleatoric uncertainty methods based on different distribution assumptions can generalize to the OOD data, i.e., provide high uncertainty to the unseen patterns, even without explicitly modeling epistemic uncertainty. Robustness under dataset change This experiment will explore the predictive uncertainty performance encountering the dataset change. Supervised MDE is an ill-posed problem that heavily depends on the training dataset. In our case, the main task model is trained on the KITTI dataset, so the model will output meaningless results on the indoor data, which should trigger a high uncertainty estimation. The results are shown in Tab. 3. Evaluation settings and datasets We take AUC and AUPR as evaluation metrics. We take all the valid pixels from the KITTI validation set (ID) as the negative samples and the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6840 AuxUE with DIDO Modified main DNN Metrics Ours σΘ2 sep. enc. Ours σΘ2 shar. enc. LDU Evi. DEns. AUC ↑ 98.1 98.4 58.1 70.6 62.1 AUPR ↑ 99.3 99.4 79.5 77.8 76.7 Table 3: Epistemic uncertainty estimation results encountering dataset change on MDE. The evaluation dataset here is NYU indoor depth dataset. AuxUE with DIDO Modified main DNN S Metrics Ours σΘ2 sep. enc. Ours σΘ2 shar. enc. LDU Evi. DEns. 
0 AUC ↑ 100.0 99.9 96.5 76.7 93.5 AUPR ↑ 100.0 99.0 93.8 42.6 70.0 Sky-All ↓ 0.015 0.018 0.278 0.986 0.005 1 AUC ↑ 100.0 99.9 96.3 69.7 92.8 AUPR ↑ 99.9 98.9 93.5 37.4 68.0 Sky-All ↓ 0.016 0.018 0.277 0.988 0.005 2 AUC ↑ 99.9 99.9 95.9 65.4 92.3 AUPR ↑ 99.8 98.8 93.0 34.5 67.0 Sky-All ↓ 0.017 0.018 0.280 0.990 0.005 3 AUC ↑ 99.9 99.7 95.9 62.3 91.6 AUPR ↑ 99.7 98.1 92.8 32.8 65.7 Sky-All ↓ 0.018 0.020 0.283 0.992 0.005 4 AUC ↑ 99.6 99.5 96.1 58.8 91.8 AUPR ↑ 99.1 97.2 92.9 31.2 67.2 Sky-All ↓ 0.023 0.022 0.288 0.994 0.005 5 AUC ↑ 98.5 99.0 96.5 58.5 92.2 AUPR ↑ 97.1 96.1 93.7 32.8 70.4 Sky-All ↓ 0.035 0.026 0.295 0.996 0.005 Table 4: Epistemic uncertainty estimation results encountering unseen pattern on MDE. The evaluation datasets here are KITTI Seg-Depth (S=0) and KITTI Seg-Depth-C (S>0). valid pixels from the NYU (Nathan Silberman and Fergus 2012) validation set (OOD) as the positive samples. Results Tab. 3 shows whether different uncertainty estimators can give correct indications facing the dataset change. The evidential learning method can provide competitive results, while our DIDO provides the best performance. Robustness on unseen patterns during training This experiment focuses on how uncertainty estimators behave on unseen patterns during training. The unseen patterns are drawn from the same dataset distribution as the patterns used in training, and the outputs of the main task model for such patterns may be reasonable. Still, they cannot be evaluated and thus are unreliable. High uncertainty should be assigned to these predictions. Since this topic is rarely considered in MDE, we try to give a benchmark in this work. Evaluation settings and datasets We select sky areas in KITTI as OOD patterns. This setting is based on the following reasons: due to the generalization ability of MDE DNNs, it is inappropriate to treat all pixels without ground truth as OOD. However, there is consistently no ground truth for the sky parts since LIDAR is used in depth acquisition. During training, sky patterns are masked and never seen by the DNNs (including the AuxUEs). Meanwhile, they are annotated in KITTI semantic segmentation dataset (Alhaija et al. 2018) (200 images), thus can be used for evaluation. Three metrics are applied for evaluating OOD detection performance as shown in Tab. 4. AUC and AUPR: we select 49 images that are not in the training set and have both depth and semantic segmentation annotations. For each image, we take the sky pixels as the positive class and the pixels with depth ground truth as the negative class. We use AUC and AUPR to assess the uncertainty estimation performance. Note that this metric does not guarantee that the uncertainty of the sky is the largest in the whole uncertainty map. Thus, we have Sky-All (lower is better): all 200 images with semantic segmentation annotations are selected for evaluation. The ground truth uncertainties are set as 1 for the sky areas. Then we normalize the predicted uncertainty, take the sky areas ˆusky from the whole uncertainty map and measure: mean((1 −ˆusky)2). For simplicity, we denote KITTI Seg-Depth for both evaluation datasets. We also generate a corruption dataset KITTI Seg-Depth-C using the same way in the aleatoric uncertainty estimation section. Results Fig. 3 shows a qualitative example of typical uncertainty maps computed on KITTI images. More visualizations are presented in Supp Section E. In Tab. 
4, Deep Ensembles can better assign consistent and higher uncertainty to the sky areas, but it is inadequate for identifying the ID and OOD areas. As outlined in Section 3.2, DIDO prioritizes rare patterns and then generalizes the uncertainty estimation ability to the unseen patterns. This results in assigning higher uncertainty to some few-shot pixels that have ground truth, making Sky-All results slightly worse. Yet, it can achieve a balanced performance on all the metrics, and at the same time, it maintains robust performance in the presence of noise. 4.4 Ablation Study We conduct the ablation study in Supp Section D. Hyperparameters. We analyze the effect of the number of sets K defined in Section 3.2 for discretization and the regularization weight λ in Eq. 6. Necessity of using AuxUE. We also apply DIDO on the main task model to check impacts on main task performance. Effectiveness of Dirichlet modeling. We show the effectiveness of Dirichlet modeling instead of using the normal Categorical modeling based on the discretized prediction errors. For the former, we apply classical cross-entropy on the Softmax outputs given by the AuxUE. 5 Conclusion In this paper, we propose a new solution for uncertainty quantification on regression problems based on a generalized AuxUE. We design and implement the experiments based on four different regression problems. By modeling heteroscedastic noise using Laplace distribution, the proposed AuxUE can achieve more robust aleatoric uncertainty. Meanwhile, the novel DIDO solution in our AuxUE can provide better epistemic uncertainty estimation performance on both image-level and pixel-wise tasks. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6841 Acknowledgements We acknowledge the support of the Saclay-IA computing platform. We also thank M˘ad˘alina Olteanu for the thoughtprovoking discussion for the article. References Abdar, M.; Pourpanah, F.; Hussain, S.; Rezazadegan, D.; Liu, L.; Ghavamzadeh, M.; Fieguth, P.; Cao, X.; Khosravi, A.; Acharya, U. R.; et al. 2021. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Information Fusion, 76: 243–297. Alhaija, H.; Mustikovela, S.; Mescheder, L.; Geiger, A.; and Rother, C. 2018. Augmented Reality Meets Computer Vision: Efficient Data Generation for Urban Driving Scenes. International Journal of Computer Vision (IJCV). Amini, A.; Schwarting, W.; Soleimany, A.; and Rus, D. 2020. Deep evidential regression. NeurIPS. Arnez, F.; Espinoza, H.; Radermacher, A.; and Terrier, F. 2020. A comparison of uncertainty estimation approaches in deep learning components for autonomous vehicle applications. arXiv preprint arXiv:2006.15172. Besnier, V.; Bursuc, A.; Picard, D.; and Briot, A. 2021. Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation. In ICCV. Bhat, S. F.; Alhashim, I.; and Wonka, P. 2021. Adabins: Depth estimation using adaptive bins. In CVPR. Bishop, C.; and Quazaz, C. 1996. Regression with inputdependent noise: A Bayesian treatment. NeurIPS. Bishop, C. M.; and Nasrabadi, N. M. 2006. Pattern recognition and machine learning, volume 4. Springer. Blundell, C.; Cornebise, J.; Kavukcuoglu, K.; and Wierstra, D. 2015. Weight uncertainty in neural network. In ICML. Bruhn, A.; and Weickert, J. 2006. A confidence measure for variational optic flow methods. Computational Imaging and Vision, 31: 283. Cao, W.; Mirjalili, V.; and Raschka, S. 2020. 
Rank consistent ordinal regression for neural networks with application to age estimation. Pattern Recognition Letters, 140: 325– 331. Cao, Y.; Wu, Z.; and Shen, C. 2017. Estimating depth from monocular images as classification using deep fully convolutional residual networks. IEEE Transactions on Circuits and Systems for Video Technology, 28(11): 3174–3182. Charpentier, B.; Borchert, O.; Z¨ugner, D.; Geisler, S.; and G¨unnemann, S. 2022. Natural Posterior Network: Deep Bayesian Predictive Uncertainty for Exponential Family Distributions. In ICLR. Charpentier, B.; Z¨ugner, D.; and G¨unnemann, S. 2020. Posterior network: Uncertainty estimation without ood samples via density-based pseudo-counts. NeurIPS. Corbi`ere, C.; Lafon, M.; Thome, N.; Cord, M.; and P´erez, P. 2021. Beyond First-Order Uncertainty Estimation with Evidential Models for Open-World Recognition. In ICML 2021 Workshop on Uncertainty and Robustness in Deep Learning. Corbi`ere, C.; Thome, N.; Bar-Hen, A.; Cord, M.; and P´erez, P. 2019. Addressing failure prediction by learning model confidence. In NeurIPS. Dempster, A. P. 1968. A generalization of Bayesian inference. Journal of the Royal Statistical Society: Series B (Methodological), 30(2): 205–232. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR. Eigen, D.; Puhrsch, C.; and Fergus, R. 2014. Depth map prediction from a single image using a multi-scale deep network. NeurIPS. Foong, A. Y.; Li, Y.; Hern´andez-Lobato, J. M.; and Turner, R. E. 2019. ’In-Between’Uncertainty in Bayesian Neural Networks. arXiv preprint arXiv:1906.11537. Franchi, G.; Yu, X.; Bursuc, A.; Aldea, E.; Dubuisson, S.; and Filliat, D. 2022. Latent Discriminant deterministic Uncertainty. In ECCV. Gal, Y.; and Ghahramani, Z. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In ICML. Geiger, A.; Lenz, P.; Stiller, C.; and Urtasun, R. 2013. Vision meets Robotics: The KITTI Dataset. International Journal of Robotics Research (IJRR). Goldberg, P.; Williams, C.; and Bishop, C. 1997. Regression with input-dependent noise: A Gaussian process treatment. NeurIPS. Guo, Z.; Wan, Z.; Zhang, Q.; Zhao, X.; Chen, F.; Cho, J.H.; Zhang, Q.; Kaplan, L. M.; Jeong, D. H.; and Jøsang, A. 2022. A Survey on Uncertainty Reasoning and Quantification for Decision Making: Belief Theory Meets Deep Learning. arXiv preprint arXiv:2206.05675. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In CVPR. Hendrycks, D.; and Dietterich, T. 2019. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations. In ICLR. Hendrycks, D.; and Gimpel, K. 2017. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks. ICLR. Hornauer, J.; and Belagiannis, V. 2022. Gradient-Based Uncertainty for Monocular Depth Estimation. In ECCV. H¨ullermeier, E.; and Waegeman, W. 2021. Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods. Machine Learning, 110: 457–506. Jain, M.; Lahlou, S.; Nekoei, H.; Butoi, V.; Bertin, P.; Rector-Brooks, J.; Korablyov, M.; and Bengio, Y. 2021. Deup: Direct epistemic uncertainty prediction. arXiv preprint arXiv:2102.08501. Joo, T.; Chung, U.; and Seo, M.-G. 2020. Being bayesian about categorical probability. In ICML. Jordan, M. I.; Ghahramani, Z.; Jaakkola, T. S.; and Saul, L. 
K. 1999. An introduction to variational methods for graphical models. Machine learning, 37: 183–233. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6842 Kamann, C.; and Rother, C. 2021. Benchmarking the robustness of semantic segmentation models with respect to common corruptions. IJCV. Kendall, A.; and Gal, Y. 2017. What uncertainties do we need in bayesian deep learning for computer vision? In NeurIPS. Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple layers of features from tiny images. Technical report. Lakshminarayanan, B.; Pritzel, A.; and Blundell, C. 2017. Simple and scalable predictive uncertainty estimation using deep ensembles. In NeurIPS. LeCun, Y. 1998. The MNIST database of handwritten digits. https://yann.lecun.com/exdb/mnist/. Lee, J. H.; Han, M.-K.; Ko, D. W.; and Suh, I. H. 2019. From big to small: Multi-scale local planar guidance for monocular depth estimation. arXiv preprint arXiv:1907.10326. Liang, S.; Li, Y.; and Srikant, R. 2018. Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks. In ICLR. Malinin, A.; Chervontsev, S.; Provilkov, I.; and Gales, M. 2020. Regression prior networks. arXiv preprint arXiv:2006.11590. Malinin, A.; and Gales, M. 2018. Predictive uncertainty estimation via prior networks. NeurIPS. Malinin, A.; and Gales, M. 2019. Reverse kl-divergence training of prior networks: Improved uncertainty and adversarial robustness. NeurIPS. Marks, R. J.; Wise, G. L.; Haldeman, D. G.; and Whited, J. L. 1978. Detection in Laplace noise. IEEE Transactions on Aerospace and Electronic Systems, 866–872. Mi, L.; Wang, H.; Tian, Y.; and Shavit, N. 2022. TrainingFree Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate. In AAAI. Michaelis, C.; Mitzkus, B.; Geirhos, R.; Rusak, E.; Bringmann, O.; Ecker, A. S.; Bethge, M.; and Brendel, W. 2019. Benchmarking robustness in object detection: Autonomous driving when winter is coming. arXiv preprint arXiv:1907.07484. Nadarajah, S. 2005. A generalized normal distribution. Journal of Applied statistics, 32(7): 685–694. Nathan Silberman, P. K., Derek Hoiem; and Fergus, R. 2012. Indoor Segmentation and Support Inference from RGBD Images. In ECCV. Netzer, Y.; Wang, T.; Coates, A.; Bissacco, A.; Wu, B.; and Ng, A. Y. 2011. Reading Digits in Natural Images with Unsupervised Feature Learning. In NeurIPS. Niu, Z.; Zhou, M.; Wang, L.; Gao, X.; and Hua, G. 2016. Ordinal Regression With Multiple Output CNN for Age Estimation. In CVPR. Nix, D.; and Weigend, A. 1994. Estimating the mean and variance of the target probability distribution. In ICNN. Oh, D.; and Shin, B. 2022. Improving evidential deep learning via multi-task learning. In AAAI. Parkhi, O. M.; Vedaldi, A.; Zisserman, A.; and Jawahar, C. V. 2012. Cats and Dogs. In CVPR. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; Desmaison, A.; Kopf, A.; Yang, E.; DeVito, Z.; Raison, M.; Tejani, A.; Chilamkurthy, S.; Steiner, B.; Fang, L.; Bai, J.; and Chintala, S. 2019. PyTorch: An Imperative Style, HighPerformance Deep Learning Library. In NeurIPS. Poggi, M.; Aleotti, F.; Tosi, F.; and Mattoccia, S. 2020. On the uncertainty of self-supervised monocular depth estimation. In CVPR. Qu, H.; Li, Y.; Foo, L. G.; Kuen, J.; Gu, J.; and Liu, J. 2022. Improving the reliability for confidence estimation. In ECCV. Rezende, D.; and Mohamed, S. 2015. Variational inference with normalizing flows. In ICML. Sensoy, M.; Kaplan, L.; and Kandemir, M. 2018. 
Evidential deep learning to quantify classification uncertainty. NeurIPS. Shen, M.; Bu, Y.; Sattigeri, P.; Ghosh, S.; Das, S.; and Wornell, G. 2023. Post-hoc Uncertainty Learning using a Dirichlet Meta-Model. In AAAI. Techapanurak, E.; and Okatani, T. 2021. Practical evaluation of out-of-distribution detection methods for image classification. arXiv preprint arXiv:2101.02447. Uhrig, J.; Schneider, N.; Schneider, L.; Franke, U.; Brox, T.; and Geiger, A. 2017. Sparsity Invariant CNNs. In 3DV. Ulmer, D. 2021. A survey on evidential deep learning for single-pass uncertainty estimation. arXiv preprint arXiv:2110.03051. Upadhyay, U.; Karthik, S.; Chen, Y.; Mancini, M.; and Akata, Z. 2022. BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks. In ECCV. Wen, Y.; Tran, D.; and Ba, J. 2020. BatchEnsemble: an alternative approach to efficient ensemble and lifelong learning. In ICLR. Xiao, H.; Rasul, K.; and Vollgraf, R. 2017. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747. Yeo, T.; Kar, O. F.; and Zamir, A. 2021. Robustness via cross-domain ensembles. In CVPR. Yu, X.; Franchi, G.; and Aldea, E. 2021. SLURP: Side Learning Uncertainty for Regression Problems. In BMVC. Yu, X.; Franchi, G.; and Aldea, E. 2022. On Monocular Depth Estimation and Uncertainty Quantification Using Classification Approaches for Regression. In ICIP. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6843 | 2024 | 760 |
18,585 | Attacks on Continual Semantic Segmentation by Perturbing Incremental Samples Zhidong Yu1, Wei Yang1,2*, Xike Xie1,3*, Zhenbo Shi1,3 1School of Computer Science and Technology, University of Science and Technology of China, Hefei 230026, China 2Hefei National Laboratory, Hefei 230088, China 3Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou 215123, China [email protected] Abstract As an essential computer vision task, Continual Semantic Segmentation (CSS) has received a lot of attention. However, security issues regarding this task have not been fully studied. To bridge this gap, we study the problem of attacks in CSS in this paper. We first propose a new task, namely, attacks on incremental samples in CSS, and reveal that the attacks on incremental samples corrupt the performance of CSS in both old and new classes. Moreover, we present an adversarial sample generation method based on class shift, namely Class Shift Attack (CS-Attack), which is an offline and easy-to-implement approach for CSS. CS-Attack is able to significantly degrade the performance of models on both old and new classes without knowledge of the incremental learning approach, which undermines the original purpose of the incremental learning, i.e., learning new classes while retaining old knowledge. Experiments show that on the popular datasets Pascal VOC, ADE20k, and Cityscapes, our approach easily degrades the performance of currently popular CSS methods, which reveals the importance of security in CSS. Introduction Semantic segmentation is a crucial computer vision task extensively applied in diverse real-world scenarios (Siam et al. 2018; Milioto, Lottes, and Stachniss 2018; Asgari Taghanaki et al. 2021). Recently, numerous models (Shelhamer, Long, and Darrell 2017; Chen et al. 2017; Cheng, Schwing, and Kirillov 2021) have been developed to tackle this task, displaying encouraging outcomes. However, these models encounter a significant hurdle known as the catastrophic forgetting (Michieli and Zanuttigh 2019) in the scenario of continual learning. In other words, the network learns new classes while rapidly forgetting those it has already acquired. The continual semantic segmentation (CSS) task was originally proposed by Michieli et al. (Michieli and Zanuttigh 2019). After that, some methods are proposed to solve this task with better results. A number of works (Cermelli et al. 2020; Douillard et al. 2021; Michieli and Zanuttigh 2021a,b; Phan et al. 2022) address the catastrophic forgetting of this task by distilling the knowledge of the old model to the new one. For example, MiB (Cermelli et al. 2020) *Corresponding Authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Mt-1 Mt ... Dt-1 Dt ... Mt-1 Gt Perturbations Attack for Samples Incremental Training Dt Figure 1: Overview of adversarial attacks on incremental samples. In this case, the attack on the incremental samples is separate from the incremental training. The attack phase occurs before the incremental training, using only the old training model and the new data Dt. The incremental model forgets the old knowledge due to the interference of the training data. takes into account the problem of background bias in CSS and models it to alleviate the confusion of new classes and the old knowledge. Moreover, there are works (Cha et al. 2021; Zhang et al. 
2022) that utilize other techniques to retain the old classes, such as saliency detection and model compression. However, with the rapid development of CSS, a potential risk in this task remains neglected. It is well known that the standard CSS task is to update the model by incremental samples to learn new classes and retain old knowledge. These incremental samples may be provided by third parties, and thus they can be exploited to corrupt the old knowledge of the model for attack purposes. Specifically, it is feasible to corrupt the performance of the model by perturbing the incremental samples so that the predictions of certain pixels deviate from the previous model. Adversarial attacks against incremental classification were first proposed by Han et al. (Han et al. 2022). It assumes that the losses of the model in incremental training can be obtained. Based on this, adversarial samples are generated in real-time to attack the model, which is an online attack. Unlike it, we first consider the adversarial attack in CSS. Furthermore, we propose a more difficult and practical The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6844 setting. Specifically, the training process of the incremental model is difficult to obtain, so we consider the CSS attack method in an offline mode. The settings are as follows: 1) given incremental data Dt without any old data Dt−1, 2) the prediction map of the trained model at step t−1 on the sample is available, and 3) the specific structure and parameters of the model and the incremental learning paradigm are unknown. Under the proposed settings, we propose an attack method for incremental data in CSS, which aims to make the incremental model forget the learned knowledge quickly when trained on the perturbed incremental data. Fig. 1 illustrates the attack at step t without knowing the incremental learning paradigm and the model structure. The attack model Gt is first trained on the data Dt and the old segmentation model. Then, Gt generates perturbations for Dt. The disturbed Dt crashes the incremental training. In addition, we propose a loss based on class shift. It uses adversarial attacks to apply perturbations to the incremental data so that the old model produces the same prediction for all pixels on the image, limiting the exploitation of old class knowledge. Extensive experiments demonstrate the destruction of CS-Attack on CSS, proving that the learned knowledge is quickly forgotten due to incremental data being attacked. This reminds us that it is crucial to consider the attack on incremental data streams when designing CSS schemes. Our main contributions can be summarized as follows: • We reveal for the first time the potential risk in CSS and propose a novel task, namely, the attack against incremental samples in CSS. • We propose an attack method, namely CS-Attack, for incremental data that uses class shift to guide the generation of samples. • We conduct extensive experiments at multiple incremental settings on the standard benchmarks, and the proposed method substantially reduces the performance of CSS on old and new classes. Related Work Class Incremental Learning Concerns surrounding continual learning, also termed incremental or lifelong learning, have been steadily growing. Previous works are divided into three main categories: regularization-based, replay-based, and parameter isolationbased. Regularization-based methods (Zenke, Poole, and Ganguli 2017; Dhar et al. 2019; Douillard et al. 
2020) can be subdivided into two categories: data-focused and prior-focused. The former utilizes techniques like distillation (Hinton et al. 2015) to generate an additional loss that acts as a regularization constraint to prevent forgetting. The latter preserves acquired knowledge by controlling the variation of parameters with differing levels of importance. Replay-based methods (Rebuffiet al. 2017; Castro et al. 2018; Hou et al. 2019; Iscen et al. 2020) select or generate examples of previous steps, which the model incorporates alongside new data to learn the updated classes. Then, the model employs these examples along with the new data to learn the new classes. Parameter isolation-based methods (Mallya, Davis, and Lazebnik 2018; Liu et al. 2020) allocate an independent set of model parameters to each task, aiming to forestall forgetting. Class Incremental Semantic Segmentation Michieli et al. (Michieli and Zanuttigh 2019) propose continual semantic segmentation and put forward a general framework to retain old knowledge through knowledge distillation. Subsequently, MiB (Cermelli et al. 2020) initially highlights the background shift in CSS, addressing it by modeling the background to alleviate transfer issues. PLOP (Douillard et al. 2021) introduces Local POD, preserving both long and short-distance spatial relationships at the feature level. SDR (Michieli and Zanuttigh 2021a) uses prototype matching and contrast learning to construct robust features. The REMINDER (Phan et al. 2022) designs CSWKD, adjusting the distillation weight of each class based on the similarity between new and old classes. Rong et al. (Rong et al. 2022) focus on utilizing historical information to guide class-incremental semantic segmentation in remote sensing images. Furthermore, several other approaches (Cha et al. 2021; Zhang et al. 2022) achieve promising results with additional models or structures. For instance, SSUL (Cha et al. 2021) relies on the saliency detection model to discover potential objects, which requires models trained on other datasets. On the other hand, RCIL (Zhang et al. 2022) utilizes parallel convolutions to enhance performance. In this paper, we focus on the risk of attacks against incremental samples that are overlooked by the CSS schemes, and design CSS attacks to verify the feasibility of attacks against them. Adversarial Attack The adversarial attack (Goodfellow, Shlens, and Szegedy 2015) usually refers to the use of perturbed samples to cause machine learning models to make incorrect predictions. It is a way to generate such examples through various techniques. It can be divided into test-time adversarial attack methods (Goodfellow, Shlens, and Szegedy 2015; Carlini and Wagner 2017; Madry et al. 2018; Gu et al. 2022; Agnihotri and Keuper 2023) and training-time ones (Feng, Cai, and Zhou 2019). The former generates adversarial images during inference and confuses the model to produce false predictions. The latter generates images at training time to deviate the training of the model from expectations. In addition, (Han et al. 2022) explores an online attack method for incremental learning in classification for the first time. Unlike them, we explore the possibility of offline style attacks against training samples in CSS, i.e., attacks without knowledge of the incremental learning paradigm and the information in training. Methodology Problem Definition for CSS Before introducing the problem, we first introduce some related concepts. 
The purpose of CSS is to train a segmentation model based on the data stream D1 to Dt in T steps so as to learn new classes without forgetting the old ones. (Figure 2: Overview of the CS-Attack. Gt is trainable, while the model Mt−1 trained at step t−1 is frozen. The attack model outputs the relevant perturbations based on the input images, and Mt−1 outputs the predictions of the original and perturbed samples; the constructed losses are used to optimize Gt. The objective function contains three parts: Lcs and Ld are designed to destroy the prediction map of Mt−1, and Lr constrains the perturbed image to be close to the original image. After that, the perturbed images are generated and passed to the model for CSS.) We define Ct as the set of classes learned at step t, and C1:t−1 denotes all the classes seen from step 1 to step t−1. For step t, we present a dataset Dt, which comprises a set of pairs (Xt, Y t), where Xt is an image of size H × W, and Y t is the ground truth segmentation map, which contains only the background and the classes Ct learned in the current step. Catastrophic forgetting means that the performance of the model on C1:t−1 degrades rapidly while learning Ct. Typically, the segmentation model at step t−1 is defined as Mt−1, which generates the corresponding semantic segmentation prediction map with |C1:t−1| channels. Adversarial Attack for Incremental Samples As mentioned earlier, CSS constantly learns new classes and retains old knowledge from an incremental data stream. Therefore, one possible attack is to add perturbations to the incremental data so that the model quickly forgets what it has learned during incremental training. We define this as a new problem, i.e., the adversarial attack on incremental samples. Attacks against incremental samples require us to generate adversarial samples that interfere with the training of the model, so that the incremental learning method fails on these samples, i.e., it cannot do the job of learning new classes while retaining old ones. For this task, we consider a more difficult but easier-to-implement setup, i.e., generating perturbations for the current step of data without knowing the incremental training process, so that the model quickly forgets the existing knowledge when trained on them. Specifically, the settings of this task are as follows: 1) The incremental training process and the incremental model are agnostic, and information such as the loss produced by the model during training is not available. 2) Following the standard CSS task, only the data from the current step is available; all old data is unavailable. 3) The model trained at step t−1 is available, and its prediction map for each sample is available. In the following, we illustrate the feasibility of training the model with incremental samples so as to make it forget the old knowledge. For the CSS approach, the model Mt−1 from the previous step is usually used to provide the old knowledge contained in the picture, and Mt is supervised using two losses, namely Llearn and Lretain. Llearn is used to learn new classes from those regions that are labeled as new classes, and Lretain relies on Mt−1 to retain knowledge from those regions that are labeled as background.
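To make this two-loss structure explicit, a schematic sketch is given below. The loss names are placeholders for whatever concrete losses a given CSS method uses, since the text deliberately does not fix their form.

```python
# Schematic CSS training objective at step t; L_learn and L_retain stand for
# whatever concrete losses a given CSS method uses (not fixed in the text).
def css_step_loss(pred_t, label_t, pred_t_minus_1, new_mask, bg_mask,
                  L_learn, L_retain):
    # learn the classes of C^t from pixels annotated at step t
    loss_new = L_learn(pred_t[new_mask], label_t[new_mask])
    # retain old knowledge by matching the frozen M_{t-1} on background pixels;
    # this is exactly the signal a perturbed incremental sample can corrupt
    loss_old = L_retain(pred_t[bg_mask], pred_t_minus_1[bg_mask])
    return loss_new + loss_old
```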
Therefore, attacking the regions in the background so that Mt−1 produces results that contradict its original predictions will destroy the old knowledge, i.e., it is feasible to make the model forget the old knowledge by training it on perturbed incremental samples. Note that we do not specify the type of losses, i.e., any loss function can be employed. The Proposed Attack Paradigm The proposed attack paradigm is shown in Fig. 2. Note that this process is independent of the incremental training. In step t, the perturbation attack model Gt is trained using Mt−1 and Dt, where Mt−1 is the segmentation model. Mt−1 is frozen and does not update its parameters, but it computes the gradients used to train the model Gt. The perturbation is obtained as $r = G(X^t)$ (1), where Xt is an incremental sample of step t, whose label Y t contains only the classes of Ct and the background. The attacked image X̂t is defined as $\hat{X}^t = X^t + r$ (2), where all attacked images X̂t constitute the adversarial data D̂t at step t. We impose a constraint loss Lr between the original and perturbed images to avoid excessive image modification, which is calculated as $\mathcal{L}_r = \|\hat{X}^t - X^t\|_2$ (3), where $\|\cdot\|_2$ denotes the L2-norm. Then, we train the generative model by creating constraints between the prediction maps of the perturbed image and the original image, where these maps are obtained by Mt−1. The goal is to make the generated perturbation aggressive and significantly reduce the performance of the model in incremental training. For semantic segmentation, Xt contains rich information that can be categorized into two types: the information that has already been learned, and the information of all new classes. For the former, the CSS method uses these regions to mine old class information as a way to retain old knowledge. Therefore, we first construct a class shift loss. This loss, denoted as Lcs, forces Gt to generate a perturbation r that confuses Mt−1 into predicting these locations as the background. Lcs erases any useful information that might be contained in the original image, especially about the old classes. The prediction map of X̂t generated by Mt−1 is PX̂t. The definition of Lcs is $\mathcal{L}_{cs} = -\sum_{Y^t_{i,j} \notin C^t} y_b \log(\hat{p}_{i,j})$ (4), where p̂i,j is the prediction vector of PX̂t at pixel (i, j), yb is the one-hot label of the background, and Y t i,j is the label of pixel (i, j). yb is a vector of length |C1:t−1| whose background dimension is 1 and whose other entries are 0. For the latter, the model needs to learn these classes. This process corrects the learned knowledge of the old model (which may cover old or new classes) into the new classes. This part is usually supervised by the cross-entropy loss after softmax processing: $\mathcal{L}_{ce} = -\sum_{Y^t_{i,j} \in C^t} y_c \log(p_{i,j})$ (5), where yc is the one-hot label of the new class in the current step, and pi,j is the prediction vector of PXt at pixel (i, j). According to the softmax function, the outputs of the other old classes are suppressed when learning new classes. For these regions containing new classes, we wish to utilize the attack to change the output of the old model so that they are predicted as a non-background old class. Therefore, we construct a disorder loss Ld. It drives the prediction map of the perturbed images output by Mt−1 to deviate from the prediction map of the original images, thus corrupting the learning of new classes in incremental learning.
Ld is defined as:
$\mathcal{L}_d = \frac{1}{\sum_{Y^t_{i,j} \in C^t \,\wedge\, \hat{Y}_{i,j} \neq bk} \mathcal{L}_{KL}\big(p_{i,j}, \hat{p}_{i,j}\big)}$,   (6)
where pi,j is the prediction vector of PXt at pixel (i, j), Ŷi,j is the pseudo label given by P X̂t at pixel (i, j), Ŷi,j ≠ bk means that the old model does not predict the background for X̂t at that pixel, and LKL is the KL-divergence loss. Finally, the objective loss used to train the model Gt is obtained as:
$\mathcal{L}_{obj} = \mathcal{L}_r + \mathcal{L}_d + \lambda \mathcal{L}_{cs}$,   (7)
where λ is a weighting factor and the weights of the other two terms are 1.

Algorithm 1: Attack process at step t
Input: Segmentation model Mt−1 and incremental data Dt
Parameter: Attack model Gt and segmentation model Mt at step t
Output: Segmentation model Mt
1: while the current epoch is less than the total number of epochs do
2:   for each (Xt, Y t) ∈ Dt do
3:     Get the perturbations: r = Gt(Xt)
4:     Get the perturbed image with Eq. (2)
5:     Get the original and perturbed predictions from Mt−1
6:     Update Gt with Eq. (7)
7:   end for
8: end while
9: Generate the perturbations r for all Xt ∈ Dt and construct the adversarial data D̂t
10: Perform incremental training of Mt with D̂t
11: return Mt

Algorithm 1 shows the pseudocode for the attack process at step t. The proposed method uses the model from step t−1 and the data from step t to train Gt. After that, the trained Gt generates attack samples and submits them to the incremental model for learning. The algorithm is offline and easy to implement: it does not require real-time generation of adversarial data during incremental learning.

Experiments
Experimental Setup
Datasets. We validate CS-Attack on standard semantic segmentation datasets: Pascal VOC2012 (Everingham et al. 2010), ADE20k (Zhou et al. 2017) and Cityscapes (Cordts et al. 2016). The Pascal VOC2012 dataset contains 20 object classes plus the background, with 10,582 training images and 1,449 validation images. The ADE20k dataset contains 150 object classes, with 20,210 training images and 2,000 test images. The Cityscapes dataset contains 19 classes from 21 cities, with 2,975 training images, 500 validation images and 1,525 test images.
Setting. MIB (Cermelli et al. 2020) defines two experimental protocols: disjoint and overlapped. We consider the latter more realistic and challenging, and recent works mainly report their results in the overlapped setting. Therefore, we evaluate the performance in the overlapped setting for each dataset.
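As a concrete companion to Algorithm 1 and the objective in Eq. (7) above, one update of Gt might be sketched as follows. This is a schematic reading of the losses defined earlier, not a reference implementation: the helper name, the mean/sum reductions, and the small stabilizer inside Ld are assumptions, and Lcs is written here for the background-labeled pixels as a proxy for "pixels not labeled with a new class".

```python
import torch
import torch.nn.functional as F

def train_gt_step(G_t, M_t_minus_1, x, y, new_classes, bg_index=0, lam=1.0):
    """One schematic update of the perturbation generator G_t (Eq. (7): L_r + L_d + lam * L_cs).

    M_t_minus_1 is the frozen step t-1 model; its parameters are assumed to have
    requires_grad=False so that gradients only reach G_t.
    """
    r = G_t(x)                                   # Eq. (1): perturbation
    x_adv = x + r                                # Eq. (2): perturbed image

    with torch.no_grad():
        p_clean = F.softmax(M_t_minus_1(x), dim=1)          # predictions on original images
    log_p_adv = F.log_softmax(M_t_minus_1(x_adv), dim=1)    # predictions on perturbed images

    # L_r (Eq. (3)): keep the perturbed image close to the original
    loss_r = ((x_adv - x) ** 2).mean()

    # L_cs (Eq. (4)): on pixels not labeled as a new class, push M_{t-1} towards the background,
    # erasing the old-class information it would otherwise provide
    new_mask = torch.zeros_like(y, dtype=torch.bool)
    for c in new_classes:
        new_mask |= y == c
    loss_cs = -log_p_adv[:, bg_index][~new_mask].mean()

    # L_d (Eq. (6)): on new-class pixels where the old model does not predict background,
    # minimize the reciprocal of the KL divergence between clean and perturbed predictions,
    # i.e., drive the perturbed prediction away from the clean one
    pseudo = log_p_adv.argmax(dim=1)
    sel = new_mask & (pseudo != bg_index)
    kl = (p_clean * (p_clean.clamp_min(1e-8).log() - log_p_adv)).sum(dim=1)
    loss_d = 1.0 / (kl[sel].sum() + 1e-6)        # the 1e-6 stabilizer is an assumption

    return loss_r + loss_d + lam * loss_cs
```

In the experiments reported later, λ is set to 1, so in practice all three terms would enter the objective with equal weight.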
We conduct experiments on Pascal VOC2012 in three settings: adding 1 class after training 19 classes (191), adding 5 classes after training 15 classes (15-5), and The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6847 Method 19-1 (2 tasks) 15-5 (2 tasks) 15-5s (6 tasks) 0-19 ↓ 20 ↓ all ↓ 0-15 ↓ 16-20 ↓ all ↓ 0-15 ↓ 16-20 ↓ all ↓ ILT 67.71 11.65 65.04 67.14 39.24 60.50 8.74 7.94 8.55 +Noise 67.43 9.86 64.69 66.77 38.63 60.07 8.20 8.49 8.27 CS-Attack (ours) 42.75 4.63 40.93 45.23 9.74 36.78 2.75 1.03 2.34 MiB 70.29 33.28 68.53 75.32 48.79 69.00 39.41 14.71 33.53 +Noise 70.77 23.82 68.53 75.76 48.32 69.23 39.27 14.81 33.45 CS-Attack (ours) 46.56 12.36 44.93 47.51 18.54 40.61 11.48 8.32 10.73 PLOP 72.78 28.12 70.65 74.54 47.58 68.12 60.00 18.46 50.11 +Noise 72.77 30.39 70.75 73.71 48.45 67.70 60.00 17.18 49.80 CS-Attack (ours) 55.02 21.74 53.43 61.07 30.30 53.74 36.21 4.35 28.62 RCIL 77.00 25.11 74.53 78.95 50.66 72.21 69.91 22.82 58.70 +Noise 75.34 23.62 72.88 76.44 48.65 69.82 68.14 25.76 58.05 CS-Attack (ours) 58.99 8.70 56.60 65.15 28.36 56.39 37.23 7.82 30.22 Joint 77.45 77.94 77.47 78.88 72.63 77.39 78.88 72.63 77.39 Table 1: mIoU for different incremental learning settings on the dataset Pascal VOC2012. For each CSS method, noise is added to the samples (+Noise) and adversarial samples are generated (CS-Attack) in incremental learning. Best results for each CSS method are marked in boldface. Method 11-5 11-1s 1-1s all ↓ all ↓ all ↓ PLOP 61.52 58.46 45.14 +Noise 58.59 52.18 42.76 CS-Attack (ours) 45.15 30.22 17.59 Table 2: mIoU for different incremental learning settings on the dataset Cityscapes. mIoU of all classes after incremental training is reported. adding 5 classes sequentially after training 15 classes (155s). For ADE20k, we perform experiments with four settings: adding 50 classes after training 100 classes (100-50), adding 50 classes each time after training 50 classes (5050), and adding 10 classes each time sequentially after training 100 classes (100-10s). For Cityscapes, as in the previous work (Douillard et al. 2021), we treat the training data for each city as a class and apply three settings: adding 5 classes each time after training 11 classes (11-5), adding 5 classes each time sequentially after training 11 classes (11-1s), and adding one class at a time (1-1s). Evaluation metrics. The mean Intersection over Union (mIoU) metric is frequently used to measure the performance of the model in semantic segmentation. And for a comprehensive evaluation, we report different mIoUs in CSS. Initially, the mIoU of all initial classes assesses the ability of the model to retain the old knowledge. Subsequently, the mIoU of all new classes indicates the ability of the model to acquire novel knowledge. Finally, the mIoU of all classes (all) evaluates the performance of the model. Details. We validate the attack effectiveness of CS-Attack on several state-of-the-art CSS methods RCIL (Zhang et al. 2022), PLOP (Douillard et al. 2021), MIB (Cermelli et al. 2020) and ILT (Michieli and Zanuttigh 2019). All results are from the Deeplabv3 (Chen et al. 2017) architecture, which is pre-trained on ImageNet (Deng et al. 2009). There are no special requirements for the specific structure of the generative model Gt, and herein a simple encoder-decoder structure is used in our experiments. The encoder contains a 7×7 convolution with 64 channels and three 3 × 3 convolutions with 128, 256 and 512 channels, respectively. 
The decoder is composed of four 3 × 3 convolutions with 512, 256, 128 and 64 channels, respectively. The final output is a perturbation map r of size H × W. It is worth noting that there is no attack method specifically set up for this task, hence we imposed additional Gaussian noise (+Noise) on the images as a comparison. Moreover, we use CS-Attack as a new baseline for this task. Our experiments are conducted on 4 NVIDIA 2080Ti GPUs. To train the model Gt, we use the stochastic gradient descent (SGD) optimizer, where the base learning rate is 0.01 for all datasets, and λ is 1 in our experiments. Gt is trained for 30 epochs on PASCAL VOC2012 and ADE20k datasets and 50 epochs on Cityscapes dataset. As for the incremental learning process, we follow the setup of the original works. Quantitative Results Results on the Pascal VOC2012 dataset. Tab. 1 shows the attack results of CS-Attack on the Pascal VOC dataset with different incremental settings. First, the existing CSS methods achieve satisfactory results on the PASCAL VOC2012 dataset, especially PLOP, and RCIL. Second, the application of Gaussian noise to the original image does not lead to a significant decrease in the effectiveness of these methods. This suggests that these methods have some robustness as they only learn the probability distribution of old knowledge instead of labels. In contrast, the proposed method has a significant impact on almost all of the CSS methods, and their performance decreases significantly. In particular, in the long-term incremental process, we considerably destroy their results on the old classes. Results on the Cityscapes dataset. For Cityscapes, we report the results of the proposed attack on PLOP in different settings. As shown in Tab. 2, PLOP is a structure changebased approach that achieves promising results in various The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6848 Method 100-50 (2 tasks) 50-50 (3 tasks) 100-10s (6 tasks) 0-100 ↓ 101-150 ↓ all ↓ 0-50 ↓ 51-150 ↓ all ↓ 0-100 ↓ 101-150 ↓ all ↓ ILT 18.46 17.02 17.99 2.93 11.82 8.82 0.45 0.98 0.63 +Noise 18.88 14.95 17.57 3.25 12.99 9.74 0.44 2.19 1.03 CS-Attack (ours) 6.86 6.70 6.81 1.45 4.89 3.73 0.24 1.36 0.61 MiB 40.81 18.96 33.57 45.90 21.64 29.84 37.97 12.40 29.50 +Noise 40.53 17.56 32.87 46.16 21.46 29.80 37.25 10.21 28.30 CS-Attack (ours) 19.81 9.36 16.35 20.90 12.54 15.36 19.35 5.14 14.64 PLOP 41.27 16.65 33.12 47.64 21.21 30.13 39.28 11.63 30.13 +Noise 41.63 14.32 32.59 46.74 21.07 29.74 39.49 13.75 30.97 CS-Attack (ours) 27.84 4.28 20.04 23.81 12.49 16.31 21.04 7.45 16.54 RCIL 40.68 19.97 33.82 47.23 18.93 28.49 38.05 22.33 32.84 +Noise 41.81 18.24 34.01 47.74 20.67 29.69 38.21 21.03 32.52 CS-Attack (ours) 29.19 10.61 23.04 23.30 13.97 17.12 21.05 10.93 17.70 Joint 44.34 28.21 39.00 51.21 32.77 39.00 44.34 28.21 39.00 Table 3: mIoU for different incremental learning settings on the dataset ADE20k. Ld Lcs 0-15 ↓ 16-20 ↓ all ↓ 73.91 48.45 67.70 ✓ 70.98 37.81 63.08 ✓ ✓ 61.07 30.30 53.74 Table 4: Ablation study of different components on the 15-5 (2 tasks) setting of the Pascal VOC dataset. The CSS method is PLOP. settings. Random noise causes some attack effects. In contrast, CS-Attack is still useful and substantially waves the performance of the model, especially for the training setting with more incremental steps (1-1s). Results on the ADE20k dataset. Tab. 3 shows the results of attacking various CSS methods using CS-Attack under different settings on the ADE20k dataset. 
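Looking back at the attack model Gt described under Details, a minimal sketch of such an encoder-decoder perturbation generator is given below. The kernel sizes and channel widths follow the description above; the strides, normalization, upsampling mode, and output scaling are assumptions, and the class name is ours.

```python
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Schematic encoder-decoder G_t that outputs an H x W perturbation map per input image."""
    def __init__(self, in_ch=3):
        super().__init__()
        def down(cin, cout, k):   # conv + BN + ReLU with stride-2 downsampling (assumed)
            return nn.Sequential(nn.Conv2d(cin, cout, k, stride=2, padding=k // 2),
                                 nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        def up(cin, cout):        # bilinear upsampling followed by a 3x3 conv (assumed)
            return nn.Sequential(nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                                 nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        # encoder: one 7x7 conv with 64 channels and three 3x3 convs with 128, 256, 512 channels
        self.encoder = nn.Sequential(down(in_ch, 64, 7), down(64, 128, 3),
                                     down(128, 256, 3), down(256, 512, 3))
        # decoder: four 3x3 convs with 512, 256, 128 and 64 channels
        self.decoder = nn.Sequential(up(512, 512), up(512, 256), up(256, 128), up(128, 64))
        self.head = nn.Conv2d(64, in_ch, 3, padding=1)

    def forward(self, x):
        r = self.head(self.decoder(self.encoder(x)))
        return torch.tanh(r) * 0.1   # small-amplitude perturbation; the scaling factor is an assumption
```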
Firstly, the recently proposed methods perform well on this data. Similar to that on PASCAL VOC2012 dataset, the performance of the CSS methods is not significantly affected after imposing noise. This suggests that simple random noise does not affect performance. In contrast, our method plays a destructive role in all three different settings, and several CSS methods show drastic performance degradation. Moreover, the degradation is proportionally greater for the ADE20k dataset. This is due to the significantly larger incremental sample, which makes the attacked sample increase and the model corrupted to a greater extent. Through the above experiments on multiple incremental methods on multiple datasets, we demonstrate that the existing CSS methods are vulnerable and can be easily corrupted by attacks targeting incremental samples. Ablation Study Effectiveness of different components. We evaluate the impact of the proposed modules, and the performance analysis is shown in Tab. 4. For a fair comparison, these experiments are performed on Pascal VOC2012 with the setting 15-5. The baseline is PLOP without other attacks, and the method achieves promising incremental learning. Then, we 1 2 3 4 5 6 step 30 40 50 60 70 80 mIoU (%) PLOP CS-Attack Figure 3: The mIoU (%) at each step in the PASCAL VOC2012 dataset with the 15-5s setting. add adversarial attacks, and use Ld with Lr as supervision to generate adversarial samples. These samples are used for incremental learning, where the effect of PLOP drops significantly (-4.62%). After that, we introduce the class shift loss (Lcs) in the adversarial phase, and the performance of PLOP decreases significantly (-9.34 %) after learning with perturbations. Integrating all the modules, PLOP finally decreases to 53.74%. This shows that CS-Attack is effective for the incremental sample attack of CSS, which defeats the original purpose of CSS. Attack effect in different steps. To show the performance of CS-Attack at each step of the attack, we show in Fig. 3 the results of PLOP at each step on both the original incremental data and the data of the attack. They have the same performance before incremental learning. Then CS-Attack starts attacking the model at each incremental step. The introduction of CS-Attack corrupts the incremental results. As the incremental steps are gradually added, our method further destroys the performance of the model and The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6849 (a) Images (b) PLOP (c) CS-Attack (d) GT Figure 4: Visualization results of PLOP, with imposed noise (+Noise) and proposed adversarial attack (CS-Attack) on the dataset PASCAL VOC2012 with 15-5s settings on several test images. (a) Input image. (b) The baseline is PLOP. (c) The results of PLOP for incremental training using the attack samples generated by CS-Attack. (d) Hand-annotated labels. S1 S2 S3 S4 S5 all ↓ 54.68 ✓ 37.00 ✓ 38.62 ✓ 35.67 ✓ 36.35 ✓ 37.14 ✓ ✓ ✓ ✓ ✓ 28.62 (-26.06) Table 5: Ablation experiment for attacks on different incremental steps on the PASCAL VOC2012 dataset with 15-5s (6 tasks) settings. the effect degrades even more. Experimentation of different attack points. To visualize the impact of the attack on the incremental results in CSS, we show the results of the proposed attack on different incremental steps in Tab. 5. The experiments are conducted using the CSS method PLOP with the PASCAL VOC2012 dataset in the 15-5s setting. The Si denotes the attack at step i. First, this attack is valid for any incremental step. 
Similar results are achieved for any step of the attack. This is because once the model is attacked, its learned knowledge is corrupted and cannot be recovered again in subsequent increments. Moreover, the effect is most effective when all incremental steps are attacked. Qualitative Evaluation Fig. 4 shows the original predictions of PLOP and the prediction results after training with adversarial samples in the PASCAL VOC2012 dataset with the 15-5s setting. PLOP is able to retain the old knowledge better during the incremental process, which produces clear results. However, its performance drops dramatically after training with adversarial samples, and the original knowledge is induced as background in the incremental training. (a) Images (b) +Noise (c) CS-Attack Figure 5: Visualization results of some adversarial samples on ADE20k in the 100-50 setting. (a) Original images. (b) Images of adding Gaussian noise. (c) The adversarial samples generated in the second incremental learning step. Fig. 5 shows some original and perturbed images after incremental training under ADE20k dataset with the 100-50 setting. These adversarial samples are generated in the second step of incremental learning, i.e., 50 classes are added. The CSS method is PLOP. Thanks to the loss term Lr, there is a gap between the adversarial samples and the original images, but the difference is almost imperceptible. Conclusions In this paper, we focused on the potential risk in continual semantic segmentation. We showed for the first time that the attack for incremental samples in CSS can substantially disrupt the performance of incremental models and reduce the retention of old knowledge. Moreover, we proposed a new task, namely, an adversarial attack on incremental samples in CSS. Specifically, perturbing pictures by adversarial attacks on incremental data streams leads to the failure of incremental learning methods, thus defeating the purpose of incremental learning, i.e., the retention of the learned knowledge. On this basis, we proposed a class shift-based attack method to disrupt the incremental process by changing the predictive distribution of the old model over the incremental data stream and obfuscating the old knowledge generated by the model. We validated the proposed approach in several classic CSS methods and experimentally demonstrated that CSS methods greatly forget the old knowledge due to the attack on incremental samples. Acknowledgments This work was supported by the National Natural Science Foundation of China (Nos. 62172385 and 62072428), and the Innovation Program for Quantum Science and Technology (No. 2021ZD0302900). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6850 References Agnihotri, S.; and Keuper, M. 2023. CosPGD: a unified white-box adversarial attack for pixel-wise prediction tasks. arXiv preprint arXiv:2302.02213. Asgari Taghanaki, S.; Abhishek, K.; Cohen, J. P.; CohenAdad, J.; and Hamarneh, G. 2021. Deep semantic segmentation of natural and medical images: a review. Artificial Intelligence Review, 54: 137–178. Carlini, N.; and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In 2017 ieee symposium on security and privacy (sp), 39–57. Ieee. Castro, F. M.; Mar´ın-Jim´enez, M. J.; Guil, N.; Schmid, C.; and Alahari, K. 2018. End-to-end incremental learning. In Proceedings of the European conference on computer vision (ECCV), 233–248. Cermelli, F.; Mancini, M.; Bulo, S. R.; Ricci, E.; and Caputo, B. 2020. 
Modeling the background for incremental learning in semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9233–9242. Cha, S.; Yoo, Y.; Moon, T.; et al. 2021. SSUL: Semantic Segmentation with Unknown Label for Exemplar-based Class-Incremental Learning. Advances in Neural Information Processing Systems, 34: 10919–10930. Chen, L.-C.; Papandreou, G.; Schroff, F.; and Adam, H. 2017. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587. Cheng, B.; Schwing, A.; and Kirillov, A. 2021. Per-pixel classification is not all you need for semantic segmentation. Advances in Neural Information Processing Systems, 34: 17864–17875. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; and Schiele, B. 2016. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3213–3223. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and FeiFei, L. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, 248–255. Ieee. Dhar, P.; Singh, R. V.; Peng, K.-C.; Wu, Z.; and Chellappa, R. 2019. Learning without memorizing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5138–5146. Douillard, A.; Chen, Y.; Dapogny, A.; and Cord, M. 2021. Plop: Learning without forgetting for continual semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4040–4050. Douillard, A.; Cord, M.; Ollion, C.; Robert, T.; and Valle, E. 2020. Podnet: Pooled outputs distillation for small-tasks incremental learning. In European Conference on Computer Vision, 86–102. Springer. Everingham, M.; Van Gool, L.; Williams, C. K.; Winn, J.; and Zisserman, A. 2010. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2): 303–338. Feng, J.; Cai, Q.-Z.; and Zhou, Z.-H. 2019. Learning to confuse: generating training time adversarial data with autoencoder. Advances in Neural Information Processing Systems, 32. Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2015. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations. Gu, J.; Zhao, H.; Tresp, V.; and Torr, P. H. 2022. Segpgd: An effective and efficient adversarial attack for evaluating and boosting segmentation robustness. In European Conference on Computer Vision, 308–325. Springer. Han, G.; Choi, J.; Hong, H.; and Kim, J. 2022. Training Time Adversarial Attack Aiming the Vulnerability of Continual Learning. In NeurIPS ML Safety Workshop. Hinton, G.; Vinyals, O.; Dean, J.; et al. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7). Hou, S.; Pan, X.; Loy, C. C.; Wang, Z.; and Lin, D. 2019. Learning a unified classifier incrementally via rebalancing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 831–839. Iscen, A.; Zhang, J.; Lazebnik, S.; and Schmid, C. 2020. Memory-efficient incremental learning through feature adaptation. In European conference on computer vision, 699–715. Springer. Liu, Y.; Parisot, S.; Slabaugh, G.; Jia, X.; Leonardis, A.; and Tuytelaars, T. 2020. More classifiers, less forgetting: A generic multi-classifier paradigm for incremental learning. In European Conference on Computer Vision, 699–716. Springer. 
Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2018. Towards Deep Learning Models Resistant to Adversarial Attacks. In International Conference on Learning Representations. Mallya, A.; Davis, D.; and Lazebnik, S. 2018. Piggyback: Adapting a single network to multiple tasks by learning to mask weights. In Proceedings of the European Conference on Computer Vision (ECCV), 67–82. Michieli, U.; and Zanuttigh, P. 2019. Incremental learning techniques for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops. Michieli, U.; and Zanuttigh, P. 2021a. Continual semantic segmentation via repulsion-attraction of sparse and disentangled latent representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1114–1124. Michieli, U.; and Zanuttigh, P. 2021b. Knowledge distillation for incremental learning in semantic segmentation. Computer Vision and Image Understanding, 205: 103167. Milioto, A.; Lottes, P.; and Stachniss, C. 2018. Real-time semantic segmentation of crop and weed for precision agriculture robots leveraging background knowledge in CNNs. In 2018 IEEE international conference on robotics and automation (ICRA), 2229–2235. IEEE. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6851 Phan, M. H.; Phung, S. L.; Tran-Thanh, L.; Bouzerdoum, A.; et al. 2022. Class Similarity Weighted Knowledge Distillation for Continual Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16866–16875. Rebuffi, S.-A.; Kolesnikov, A.; Sperl, G.; and Lampert, C. H. 2017. icarl: Incremental classifier and representation learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2001–2010. Rong, X.; Sun, X.; Diao, W.; Wang, P.; Yuan, Z.; and Wang, H. 2022. Historical information-guided class-incremental semantic segmentation in remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 60: 1–18. Shelhamer, E.; Long, J.; and Darrell, T. 2017. Fully convolutional networks for semantic segmentation. IEEE transactions on pattern analysis and machine intelligence, 39(4): 640–651. Siam, M.; Gamal, M.; Abdel-Razek, M.; Yogamani, S.; Jagersand, M.; and Zhang, H. 2018. A comparative study of real-time semantic segmentation for autonomous driving. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 587–597. Zenke, F.; Poole, B.; and Ganguli, S. 2017. Continual learning through synaptic intelligence. In International Conference on Machine Learning, 3987–3995. PMLR. Zhang, C.-B.; Xiao, J.-W.; Liu, X.; Chen, Y.-C.; and Cheng, M.-M. 2022. Representation Compensation Networks for Continual Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7053–7064. Zhou, B.; Zhao, H.; Puig, X.; Fidler, S.; Barriuso, A.; and Torralba, A. 2017. Scene parsing through ade20k dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, 633–641. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6852 | 2024 | 761 |
18,586 | Data-Free Hard-Label Robustness Stealing Attack Xiaojian Yuan1, Kejiang Chen*1, Wen Huang1, Jie Zhang2, Weiming Zhang1, Nenghai Yu1 1University of Science and Technology of China 2Nanyang Technological University [email protected], [email protected], [email protected], jie [email protected], [email protected], [email protected] Abstract The popularity of Machine Learning as a Service (MLaaS) has led to increased concerns about Model Stealing Attacks (MSA), which aim to craft a clone model by querying MLaaS. Currently, most research on MSA assumes that MLaaS can provide soft labels and that the attacker has a proxy dataset with a similar distribution. However, this fails to encapsulate the more practical scenario where only hard labels are returned by MLaaS and the data distribution remains elusive. Furthermore, most existing work focuses solely on stealing the model accuracy, neglecting the model robustness, while robustness is essential in security-sensitive scenarios, e.g., face-scan payment. Notably, improving model robustness often necessitates the use of expensive techniques such as adversarial training, thereby further making stealing robustness a more lucrative prospect. In response to these identified gaps, we introduce a novel Data-Free Hard-Label Robustness Stealing (DFHL-RS) attack in this paper, which enables the stealing of both model accuracy and robustness by simply querying hard labels of the target model without the help of any natural data. Comprehensive experiments demonstrate the effectiveness of our method. The clone model achieves a clean accuracy of 77.86% and a robust accuracy of 39.51% against AutoAttack, which are only 4.71% and 8.40% lower than the target model on the CIFAR-10 dataset, significantly exceeding the baselines. Our code is available at: https://github.com/LetheSec/DFHL-RS-Attack. Introduction Machine learning as a service (MLaaS) has gained significant popularity due to its ease of deployment and costeffectiveness, which provides users with pre-trained models and APIs. Unfortunately, MLaaS is susceptible to privacy attacks, with Model Stealing Attacks (MSA) being particularly harmful (Tram`er et al. 2016; Orekondy, Schiele, and Fritz 2019; Jagielski et al. 2020; Yuan et al. 2022; Wang et al. 2022), where an attacker can train a clone model by querying its public API, without accessing its parameters or training data. This attack not only poses a threat to intellectual property but also compromises the privacy of individuals whose data was used to train the original model. Moreover, the clone model can serve as a surrogate *Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. model for other black-box attacks, e.g., adversarial examples (AE) (Zhang et al. 2022b), membership inference (Shokri et al. 2017), and model inversion (Yuan et al. 2023). Furthermore, numerous security-sensitive scenarios require deployed models are not only accurate but also robust to various attacks, such as adversarial attacks. To address this issue, MLaaS providers can employ adversarial training (AT) techniques (Madry et al. 2018) to improve the robustness of their models (Goodman and Xin 2020; Shafique et al. 2020). 
Despite the target model’s robustness, most existing MSA are limited to Accuracy Stealing, i.e., reconstructing a model with similar accuracy to the target model, and fail at Robustness Stealing, i.e., acquiring the adversarial robustness of the target model while maintaining accuracy. Since the improvement of model robustness requires much more computational resources and extra data (Schmidt et al. 2018; Gowal et al. 2021), robustness stealing will bring greater losses to MLaaS providers. Moreover, if an attacker seeks to train a clone model for transfer-based adversarial attacks against a robust target model, then it becomes crucial to employ robustness stealing to achieve effective attack performance (Dong et al. 2020; Gao et al. 2020). In addition, most previous MSA require MLaaS to provide prediction logits, i.e., soft labels. However, this requirement is overly stringent in typical scenarios where MLaaS can only return top-1 prediction, i.e., hard label, for each query. Since models that require robustness are more likely trained on sensitive or private datasets, it is difficult for attackers to obtain public data with similar distributions, let alone access to the original data. Hence, the investigation of MSA targeting robustness in a data-free hard-label setting is highly valuable and remains unexplored. To tackle the above issues, we propose Data-Free HardLabel Robustness Stealing (DFHL-RS) attack, which can effectively steal both the accuracy and robustness of the target model. We first demonstrate that direct use of AT during MSA is suboptimal and point out the limitations of using Uncertain Example (UE) (Li et al. 2023a) for robustness stealing. Then the concept of High-Entropy Example (HEE) is introduced, which can characterize a more complete classification boundary shape of the target model. By imitating the target model’s prediction of HEE, the clone model gradually approaches its classification boundaries, so as to achieve prediction consistency for various samples. MoreThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6853 over, to eliminate the reliance on natural data and the logits of the target model, we design a data-free robustness stealing framework in a hard-label setting. Specifically, we first train a generator to synthesize substitute data for approximating the distribution of the target data. Since only hard labels are available, we cannot use the target model for gradient backpropagation (Chen et al. 2019; Zhang et al. 2022a) or gradient estimation (Truong et al. 2021; Kariyappa, Prakash, and Qureshi 2021). Thus, we use the clone model as a surrogate to guide the direction of the synthesized images. To prevent the generator from overfitting to the clone model, we adopt label smoothing and data augmentation techniques. Then we sample multiple batches from the memory bank storing synthesized images and use the proposed algorithm to construct HEE. Finally, we employ HEE to query the target model and obtain pseudo-labels for training the clone model. Our contributions can be summarized as follows: • For the first time, we explore a novel attack namely Data-Free Hard-Label Robustness Stealing (DFHL-RS) to achieve both accuracy and robustness stealing by leveraging only hard labels without any natural data. • We propose the concept of High-Entropy Examples (HEE), which can better characterize the complete shape of the classification boundary. 
• Extensive experiments demonstrate the effectiveness and stability of our proposed attack framework under various configurations. Related Work Data-Free Knowledge Distillation. Knowledge distillation (Hinton, Vinyals, and Dean 2015) aims to transfer the knowledge of a large teacher model to a smaller student model. In some cases, it is not feasible to access the training data due to storage costs or privacy concerns. Therefore, some proposed distillation techniques utilizing proxy datasets with similar distributions (Lopes, Fenu, and Starner 2017; Addepalli et al. 2020). ZSKD (Nayak et al. 2019) first proposed Data-Free Knowledge Distillation (DFKD), which uses the teacher’s predictions to optimize synthetic data. DAFL (Chen et al. 2019) introduced the generator for synthesizing query samples, and proposed several generative losses to promote the diversity. Adversarial DFKD (Micaelli and Storkey 2019) utilized adversarial learning to explore the data space more efficiently. Some follow-up work attempted to mitigate the catastrophic overfitting (Binici et al. 2022b,a), mode collapse (Fang et al. 2021) in DFKD, and to speed up the training process (Fang et al. 2022). However, all of these methods necessitate white-box access to the teacher. ZSDB3KD (Wang 2021) proposed DFKD in blackbox scenarios, but it has high computational costs and requires a large number of queries (4000 million). Data-Free Model Stealing. The main difference between Data-Free Model Stealing (DFMS) and DFKD is that it only has black-box access to the teacher model, i.e., the target model. Some early work required the use of a proxy dataset for attacks (Orekondy, Schiele, and Fritz 2019; Barbalau et al. 2020; Wang et al. 2022). Based on Adversarial DFKD, recent works MAZE (Kariyappa, Prakash, and Qureshi 2021) and DFME (Truong et al. 2021) utilized gradient estimation techniques to achieve DFMS, which require the target model to return soft labels. Therefore, DFMSHL (Sanyal, Addepalli, and Babu 2022) extended the problem to the hard-label setting. However, it still needs to use a proxy dataset or a synthetic dataset of random shapes generated on colored backgrounds, which breaks the truly datafree setting. To address this issue, DS (Beetham et al. 2023) proposed to train two student models simultaneously, which allows the generator to use one of the students as a proxy for the target model. However, these methods only achieved accuracy stealing, but cannot obtain the model’s robustness. Adversarial Robustness Distillation. Large models tend to be more robust than small models due to their greater capacity. Therefore, Goldblum et al. (Goldblum et al. 2020) first proposed Adversarial Robustness Distillation (ARD). By distilling the robustness of the teacher, the student obtained higher robustness than AT from scratch. RSLAD (Zi et al. 2021) found that using pseudo-labels provided by a robust teacher can further improve the robustness. IAD (Zhu et al. 2022) found that the guidance from the teacher model is progressively unreliable and proposed a multi-stage strategy to address this issue. However, these methods require access to the training set and the parameters of the teacher. BEST (Li et al. 2023a) proposed to steal the robustness of the target model in the black-box setting, but it necessitated a proxy dataset. DFARD (Wang et al. 2023) proposed ARD in a data-free setting, yet still required white-box access. How To Steal Robustness? 
In this section, for a fair comparison, we first assume that the attacker can obtain a proxy dataset for attacks, as in (Li et al. 2023a). Given a target model MT for a classification task that is built via AT and exhibits certain adversarial robustness, our goal is to train a clone model MC that performs similarly to MT on both clean and adversarial examples. The attacker has no prior knowledge of MT, including its architecture, parameters, and training strategy, and is only granted black-box access to MT. We consider the typical MLaaS scenario, where MT only returns top-1 predicted labels, i.e., the hard-label setting. Most MSA methods typically use proxy data as query samples to query MT, with the purpose of merely stealing accuracy. Recently, there have been new attempts at robustness stealing (Li et al. 2023a). Here, we provide a comprehensive summary and analysis.

Adversarial Training (AT). A straightforward way to steal robustness is to conduct AT during the attack. Specifically, the attacker first queries MT with query samples xq and obtains top-1 predictions yq, and then conducts standard AT on MC with (xq, yq) as follows:
$\min_{\theta_{M_C}} \mathbb{E}_{(x_q, y_q)\in \mathcal{D}_p}\big[\max_{\delta} \mathcal{L}_{CE}(\theta_{M_C}, x_q+\delta, y_q)\big]$,   (1)
where LCE is the cross-entropy loss function, θMC represents the parameters of the clone model, and xq + δ represents the adversarial example generated using PGD (Madry et al. 2018). However, this method also inherits the shortcomings of AT, which seriously reduce the clean accuracy.

Figure 1: Illustration of the superiority of HEE over UE in characterizing the classification boundaries. We make a two-dimensional dataset with four classes to train a two-layer MLP, represented by four colors. Blue points in (a) and (b) represent UE and HEE constructed at different steps according to Eq. (2) and Eq. (3), respectively.

Table 1: Quantitative comparison of attacks using different query samples. MT is ResNet18 trained on the CIFAR-10 training set, MC is ResNet18 or MobileNetV2, and the proxy dataset is a random half of the CIFAR-100 test set. Clean Acc and Robust Acc denote the accuracy (%) on clean samples and on adversarial samples generated by PGD, respectively.
Clone Model | Method | Clean Acc | Robust Acc | Avg.
ResNet18 | AT | 34.22 | 22.58 | 28.40
ResNet18 | UE | 43.98 | 15.78 | 29.88
ResNet18 | AE | 34.76 | 20.32 | 27.54
ResNet18 | HEE | 49.38 | 19.60 | 34.49
MobileNet | AT | 34.42 | 22.64 | 28.53
MobileNet | UE | 48.84 | 14.86 | 31.85
MobileNet | AE | 35.46 | 21.00 | 28.23
MobileNet | HEE | 50.12 | 18.98 | 34.55

Uncertain Examples (UE). In addition, Li et al. introduced Uncertain Examples (UE) to achieve robustness stealing. The main idea is to find samples that the model predicts with the highest uncertainty across all classes and to use them to query MT. The construction of UE is similar to that of AE, i.e., small noise is iteratively added to the query sample. Specifically, each query sample is first assigned the same target Y = [1/K, . . . , 1/K] (K is the number of classes), and the Kullback-Leibler (KL) divergence is then used to compute and minimize the distance between the prediction of MT and Y. Given a starting point $x^0_{UE}$, which is a random neighbor of the original query sample, an iterative update is performed with:
$x^{t+1}_{UE} = \Pi_{\mathcal{B}_\epsilon[x_{UE}]}\big(x^{t}_{UE} - \alpha\cdot \mathrm{sign}(\nabla_{x^{t}_{UE}} \mathcal{L}_{KL}(M_C(x^{t}_{UE})\,\|\,Y))\big)$,   (2)
where LKL(·‖·) is the KL divergence, $\nabla_{x^{t}_{UE}}$ denotes the gradient of the loss function w.r.t. the uncertain example $x^{t}_{UE}$ at step t, α denotes the step size, and $\Pi_{\mathcal{B}_\epsilon[x_{UE}]}(\cdot)$ projects its input onto the ε-bounded neighborhood of the original sample.

Table 2: Ablation study about HEE. "w/o H(·)" uses the same objective as UE. "w/ l∞-norm" indicates that l∞-norm constraints are used when constructing HEE, as in UE. MC is MobileNetV2.
Method | Clean Acc | Robust Acc | Avg.
HEE | 50.12 | 18.98 | 34.55
HEE w/o H(·) | 48.26 | 18.26 | 33.26
HEE w/ l∞-norm | 48.86 | 15.90 | 32.38

However, although this method alleviates the negative impact on clean accuracy, it reduces the robustness of MC, as shown in Table 1. We analyze the limitations of UE:
• First, using the manually defined Y as the target in Eq. (2) is too hard an optimization objective, which makes UE more inclined to be located at the junction of the classification boundaries of all classes. The iterative process in Fig. 1(a) confirms this conjecture.
• Additionally, UE, like AE, uses the l∞-norm to constrain the magnitude of the added noise with the aim of improving the visual quality of samples. However, this sacrifices attack performance, as shown in Table 2. In fact, the constraint is not necessary in MSA, because the attacker typically uses abnormal samples, or even unnatural synthetic samples in the data-free setting.

High-Entropy Examples (HEE). We hope to construct samples that characterize a more complete shape of the classification boundary, not just the junction as UE does. Intuitively, samples located near the classification boundaries usually have larger prediction entropy, and the model gives similar confidence to multiple classes. Therefore, we propose the concept of High-Entropy Examples (HEE), which can be constructed by directly maximizing the prediction entropy (a schematic implementation is sketched below):
$x^{t+1}_{HEE} = x^{t}_{HEE} + \alpha\cdot \mathrm{sign}(\nabla_{x^{t}_{HEE}} H(M_C(x^{t}_{HEE})))$,   (3)
where H(·) computes the entropy of the prediction, $\nabla_{x^{t}_{HEE}}$ denotes the gradient of the entropy w.r.t. the high-entropy example $x^{t}_{HEE}$ at step t, and α is the step size. Note that we no longer use the l∞-norm to constrain the modification of the sample during the iterations. Compared to manually assigning the same confidence value to all classes as in UE, Eq. (3) provides an adaptive optimization objective, allowing samples to explore among several similar classes rather than all classes. As seen in Fig. 1(b), HEE gradually distribute near the classification boundaries over the steps. Therefore, keeping the predictions of MC on these samples consistent with MT enables it to learn the shape of the classification boundary.

Adversarial Examples (AE). Previous work (He, Li, and Song 2018; Li et al. 2023a) noticed that AE are also close to the classification boundaries, which may achieve a similar effect to UE and HEE.
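Before completing the AE discussion, a minimal sketch of the HEE construction in Eq. (3) is given below. The default step size 0.03 and 10 iterations match the values reported later in the implementation details, while the clamp to [0, 1] and the function name are assumptions of ours.

```python
import torch

def construct_hee(clone_model, x, alpha=0.03, steps=10):
    """Build High-Entropy Examples by iteratively maximizing prediction entropy (Eq. (3))."""
    clone_model.eval()
    x_hee = x.clone().detach()
    for _ in range(steps):
        x_hee.requires_grad_(True)
        probs = torch.softmax(clone_model(x_hee), dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        grad = torch.autograd.grad(entropy, x_hee)[0]
        # ascend the entropy; unlike UE/AE, no l_inf projection is applied
        x_hee = (x_hee + alpha * grad.sign()).clamp(0, 1).detach()
    return x_hee
```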
For robustness stealing, we can first obtain pseudo-labels of proxy data by querying MT and use the standard PGD to construct AE. Then we use AE to query MT again for corresponding labels to train MC. Comparison of Different Query Samples For quantitative evaluation, we follow the same setting as (Li et al. 2023a). As shown in Table 1, although AT and AE can make MT obtain higher robustness, they will greatly reduce the clean accuracy. While UE can improve the clean accuracy, the robustness will be significantly reduced. Our proposed HEE achieves the best balance between clean accuracy and robustness. In Table 2, we also conduct an ablation study on different components of HEE. To illustrate the superiority of HEE over UE in characterizing classification boundaries, we make a two-dimensional dataset with four classes to train a MLP (see appendix for details), as shown in Fig. 1. As the iteration steps increase during the construction process, UE will gradually concentrate at the junction of classification boundaries, while HEE will be uniformly distributed around the classification boundaries, thus characterizing its more complete shape. This is in line with our previous analysis. Therefore, querying with HEE allows MC to better approximate the classification boundaries of MT . Data-Free Hard-Label Robustness Stealing In this section, we further explore the challenging data-free scenario, where the attacker CAN NOT obtain any similar natural samples as proxy data. The framework is illustrated in Fig. 2 and the training algorithm is in the appendix. Overview Unlike previous work (Sanyal, Addepalli, and Babu 2022; Beetham et al. 2023), which uses MT as a discriminator and plays a min-max game, we decouple the training process of the generator and the training process of MC into two stages: 1) Substitute Data Generation and 2) Clone Model Training. In the first stage, we train a generator to synthesize substitute data to approximate the distribution of the target data and store them in a memory bank. Due to the hard-label setting, we use MC to guide the training of the generator, rather than MT as in previous work (Kariyappa, Prakash, and Qureshi 2021; Truong et al. 2021). In the second stage, we randomly sample multiple batches of substitute data from the memory bank and use Eq. (3) to construct HEE, then use HEE to query MT for hard labels to optimize the parameters of MC. These two stages are executed alternately in each epoch. Substitute Data Generation We adopt the “batch-by-batch” manner to synthesize substitute data, that is, only one batch of images will be synthesized by a new generator in each epoch. Specifically, at the beginning of an arbitrary epoch i, we resample a batch of latent code zi ∼N(0, 1) and reinitialize the generator Gi with parameters from the last epoch. Then we randomly sample a batch of corresponding labels ˜yi from the uniform distribution. Intuitively, we hope that the images synthesized by the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6856 generator can be classified into the specified class by MT . This means that the distribution of the synthetic data is similar to the target data. For this, we can iteratively optimize the latent code zi as well as the parameters θGi as follows: Lcls = arg min zi,θGi LCE(MT (Gi(zi; θGi), ˜yi)), (4) However, the backpropagation of Eq. (4) will violate the principles of a black-box setting. The gradient estimation techniques (Truong et al. 
2021; Kariyappa, Prakash, and Qureshi 2021) also cannot be applied due to the unavailability of soft labels. To address this issue, we use the latest MC as a surrogate for MT , providing gradients for optimization. The optimization problem can be formulated as follows: Lcls = arg min zi,θGi LCE(MCi−1(Gi(zi; θGi), ˜yi)). (5) To prevent overfitting of synthetic data to the clone model, we use a small number of iterations (see appendix for details). We perform standard data augmentation on synthetic samples to ensure that they are not deceptive (e.g., adversarial example). In addition, we also adopt label smoothing (Szegedy et al. 2016) to soften the corresponding labels to further alleviate overfitting. Moreover, merely ensuring the generator produces images from the desired distribution is inadequate, as it may still encounter issues such as mode collapse and insufficient diversity (Chen et al. 2019; Sanyal, Addepalli, and Babu 2022). In a hard-label setting, a lack of diversity among classes can significantly hamper the learning process of MC, particularly for the less represented classes. Therefore, to promote the generation of diverse images across all classes in each batch, we adopt a class-diversity loss as follows: Ldiv = K X j=0 αj log αj, αj = 1 N N X i=1 softmax(MC (xi))j, (6) where K denotes the number of classes, αj denotes the expected confidence value for every class j over a batch with N samples and the Ldiv calculates the negative entropy. When the loss is minimized, the entropy of the number of synthetic samples per class gets maximized such that each class has a similar amount of samples. By combining the aforementioned two loss functions, we obtain the final objective: Lgen = Lcls + λ · Ldiv, (7) where λ is a hyperparameter for balancing two different terms. Note that we does not require training a global generator to model the entire distribution of the target data. Instead, in each epoch, only a temporary generator is trained to handle a specific data distribution for one batch. Besides, we keep the parameters of MC fixed in this stage. After optimization by Eq. (7), we store this batch of synthetic data into the memory bank and discard the generator. Clone Model Training In the second stage, we train MC with synthetic substitute data. However, if we optimize MC using only one batch of samples synthesized in the first stage of the current epoch, it will suffer from catastrophic forgetting (Binici et al. 2022b,a; Do et al. 2022). The reason is that the substitute data is synthesized under the guidance of MC, as shown in Eq. (5). With each epoch, the gap between MC and MT decreases, causing a shift in the distribution of synthetic data over time. Hence, if MC does not periodically relearn previously synthesized samples, the knowledge acquired during the early training phase may be lost. This can result in performance degradation or even failure to converge over time. To address this issue, we use a memory bank to store all previously synthesized samples in the first stage. In each epoch, we optimize MC for NC steps. In each step, we randomly select a batch of synthetic samples from the memory bank. Then we use Eq. (3) to construct HEE xHEE, but before that, we need to perform strong augmentation (see appendix for details) on x to promote the diversity . This is beneficial to characterize more complete classification boundaries. Finally, we feed xHEE into MT to query the hard label ˆy and use cross-entropy loss to optimize MC as follows: LC = LCE(MC(xHEE), ˆy). 
(8) By minimizing LC, MC can obtain similar classification boundaries as MT by imitating its predictions for xHEE, thus simultaneously stealing the accuracy and robustness. Experiments In this section, we first provide the detailed experimental settings. To demonstrate the effectiveness of our method, we evaluate it from several perspectives. Experimental Settings Datasets. We consider two benchmark datasets commonly used in AT research (Madry et al. 2018; Zhang et al. 2019; Li et al. 2023b), CIFAR-10 and CIFAR-100 (Krizhevsky, Hinton et al. 2009), as target datasets. Prior work requires a proxy dataset with samples from the same distribution (Tram`er et al. 2016; Jagielski et al. 2020; Orekondy, Schiele, and Fritz 2019) or the same task domain (Sanyal, Addepalli, and Babu 2022; Li et al. 2023a) as the target dataset. However, we avoid using any natural data. Models. We evaluate our attack on different models with various architectures. MT is selected from ResNet18 (He et al. 2016) and WideResNet-34-10 (Zagoruyko and Komodakis 2016). MC may be different from MT in architecture, so we use two additional models, i.e., ResNet34 and MobileNetV2 (Sandler et al. 2018). We employ two commonly used AT strategies, namely PGD-AT (Madry et al. 2018) and TRADES (Zhang et al. 2019), and a state-of-theart method, STAT-AWP (Li et al. 2023b), to improve the robustness of MT . We follow the same generator architecture as (Fang et al. 2021, 2022). Baselines. This is the first work to achieve data-free hardlabel robustness stealing attack. The closest work to ours is BEST (Li et al. 2023a), but it requires a proxy dataset to attack. Therefore, we also make some modifications to the second stage of our framework as baselines: (1) Data-Free The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6857 Target Data Data-Free Method Clean Acc FGSM PGD-20 PGD-100 CW-100 AA CIFAR10 / Target Model 82.57 56.99 51.31 50.92 49.68 47.91 × BEST (Li et al. 2023a) 67.31 32.68 27.48 27.27 28.33 28.33 ✓ Data-Free AT 36.15 15.86 11.87 11.73 12.03 11.43 Data-Free AE 67.78 32.20 28.20 28.07 28.50 27.89 Data-Free UE 74.24 39.78 35.00 34.80 35.28 34.41 DFHL-RS(Ours) 77.86 44.94 40.07 39.87 40.64 39.51 CIFAR100 / Target Model 56.76 31.96 28.96 28.83 26.84 26.84 × BEST (Li et al. 2023a) 28.33 18.78 18.78 15.08 16.28 14.59 ✓ Data-Free AT 20.22 9.79 8.63 8.62 8.73 8.74 Data-Free AE 37.76 15.41 13.15 13.00 13.90 12.77 Data-Free UE 39.25 16.88 14.18 14.06 14.94 14.94 DFHL-RS(Ours) 51.94 23.68 20.02 19.88 20.91 19.30 Table 3: Attack performance comparison between different methods. MT and MC are ResNet18. The AT strategy is PGD-AT. We report the clean accuracy and robust accuracy (%) of MC. Target Model AT Strategy Clean Acc PGD-100 AA ResNet18 PGD-AT 77.86 39.87 39.51 TRADES 72.15 37.24 37.09 STAT-AWP 72.24 37.79 37.73 WideResNet PGD-AT 77.75 36.65 36.55 TRADES 71.04 34.20 33.94 STAT-AWP 73.81 38.16 38.13 Table 4: Attack performance under different architectures of MT and various AT strategies. MC is ResNet18. AT: use synthetic samples to query MT for corresponding labels and then perform AT on MC. (2) Data-Free UE: construct uncertain examples with synthetic samples to query MT . (3) Data-Free AE: construct adversarial examples with synthetic samples to query MT . Metrics. In this task, MC should not only have good usability, but also need to have certain robustness. 
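Stepping back to the attack procedure for a moment, the two alternating stages described above (Eqs. (5)-(8)) can be summarized by the following schematic epoch. The hyperparameter values follow the CIFAR-10 implementation details given below; standard_augment, strong_augment, memory_bank, target_hard_label, and construct_hee (sketched earlier) are hypothetical helpers, and merging the latent code and generator into one optimizer is a simplification of the separate learning rates ηz and ηG.

```python
import torch
import torch.nn.functional as F

def dfhl_rs_epoch(clone, target_hard_label, generator, memory_bank, num_classes,
                  z_dim=256, batch_size=256, g_steps=10, c_steps=500, lam=3.0):
    """One schematic epoch of the two-stage attack (stage 1: Eqs. (5)-(7); stage 2: Eqs. (3) and (8))."""
    # ---- Stage 1: substitute data generation (the clone model is kept fixed) ----
    z = torch.randn(batch_size, z_dim, requires_grad=True)
    y_fake = torch.randint(0, num_classes, (batch_size,))
    opt_g = torch.optim.Adam(list(generator.parameters()) + [z], lr=2e-3, betas=(0.5, 0.999))
    for _ in range(g_steps):
        x_syn = standard_augment(generator(z))                 # hypothetical augmentation helper
        logits = clone(x_syn)
        loss_cls = F.cross_entropy(logits, y_fake, label_smoothing=0.2)   # Eq. (5) with label smoothing
        mean_conf = F.softmax(logits, dim=1).mean(dim=0)
        loss_div = (mean_conf * mean_conf.clamp_min(1e-8).log()).sum()    # Eq. (6): negative entropy
        opt_g.zero_grad()
        (loss_cls + lam * loss_div).backward()                            # Eq. (7)
        opt_g.step()
    memory_bank.append(generator(z).detach())                 # store this batch; the generator is discarded

    # ---- Stage 2: clone model training with HEE and hard labels ----
    opt_c = torch.optim.SGD(clone.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
    for _ in range(c_steps):
        x = strong_augment(memory_bank.sample(batch_size))     # hypothetical helpers
        x_hee = construct_hee(clone, x)                        # Eq. (3), as sketched earlier
        y_hard = target_hard_label(x_hee)                      # black-box query: top-1 labels only
        loss_c = F.cross_entropy(clone(x_hee), y_hard)         # Eq. (8)
        opt_c.zero_grad()
        loss_c.backward()
        opt_c.step()
```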
Therefore, in addition to considering clean accuracy, i.e., the accuracy over clean samples, we also measure robust accuracy against various adversarial attacks, including FGSM (Goodfellow, Shlens, and Szegedy 2014), PGD-20, PGD-100 (Madry et al. 2018), CW-100 (Carlini and Wagner 2017) and AutoAttack (AA) (Croce and Hein 2020). The attack settings are ϵ = 8/255 and η = 2/255. The number of attack step is 20 for PGD-20, and 100 for PGD-100 and CW-100. Implementation Details. For substitute data generation, we use Adam optimizer with β = (0.5, 0.999) and set the hyperparameter λ = 3 in Eq. (7). For CIFAR-10, we set learning rates ηG = 0.002, ηz = 0.01, number of iterations NG = 10 and the label smoothing factor is set to 0.2. For CIFAR-100, we set learning rates ηG = 0.005, ηz = 0.015, number of iterations NG = 15 and the label smoothing factor is set to 0.02. For training MC, we use SGD optimizer with an initial learning rate of 0.1, a momentum of 0.9 and a weight decay of 1e−4. For constructing HEE, the step size α in Eq. (3) is set to 0.03 and the number of iterations is set to Clone Model Clean Acc PGD-100 AA MobileNet 73.50 33.32 33.19 ResNet34 78.49 40.25 39.97 WideResNet 77.38 40.66 40.33 Table 5: Attack performance using different architectures of MC. MT is ResNet18 using PGD-AT strategy. 10. We set the iterations of the clone model NC = 500. The batch sizes for CIFAR-10 and CIFAR100 are set to B = 256 and B = 512, respectively. We apply a cosine decay learning rate schedule and the training epoch is E = 300. Experimental Results Performance on Robustness Stealing. We first compare the attack effects of different methods by evaluating the clone model, as shown in Table 3. Our DFHL-RS significantly outperforms the baselines in both accuracy and robustness. When the target data is CIFAR-10, our method achieves 77.86% clean accuracy, which is only 4.71% lower than the target model. Meanwhile, it also achieves 39.51% robust accuracy against AA, which is only 8.40% lower than the target model. When the target data is CIFAR100, our method achieves 51.94% clean accuracy and 19.30% robust accuracy against AA, which are only 4.82% and 7.53% lower than the target model, respectively. It should be emphasized that our method does not require any natural data, but still outperforms BEST which requires a proxy dataset. Different Model Architectures and AT Strategies. In real scenarios, MLaaS providers may use different architectures and strategies to improve the robustness of the target model, so we study the impact of different target model architectures using various AT strategies on the attack. See appendix for the performance of all target models. As shown in Table 4, when the target model is ResNet18, PGD-AT is more vulnerable to robustness stealing attacks. Although the target model using PGD-AT has lower robustness than using STAT-AWP, the clone model obtained by the attack has The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6858 Epsilon(ϵ) Attack White-box AT Proxy Data-Free Model Stealing Method MAZE DFME DFMS-HL DFHL-RS(Ours) 4/255 FGSM 12.49 9.84 4.36 3.90 6.09 10.29 PGD-20 13.54 10.27 4.31 3.71 6.11 10.54 8/255 FGSM 25.58 21.44 10.51 9.25 13.95 22.77 PGD-20 31.25 24.33 10.13 9.06 14.52 24.59 12/255 FGSM 36.99 31.94 17.42 15.57 22.07 34.30 PGD-20 49.02 40.44 71.73 15.80 23.93 39.25 Table 6: Attack Success Rate (ASR) (%) of transfer-based adversarial attack using the clone model obtained by different methods as a surrogate model. 
AT Proxy represents the adversarial training model on the target data. The target model is trained on CIFAR-10 dataset using PGD-AT with ϵ = 8/255. Modification Query Budget Clean Acc AA Default 38.4M 77.86 39.51 B 256 →128 19.2M 75.79 36.42 E 300 →150 73.81 34.73 NC 500 →250 74.07 35.19 Table 7: Attack performance after modifying hyperparameters to halve query budget. the best performance. When the target model is WideResNet with a different architecture, STAT-AWP can make the clone model obtain the highest robustness, but PGD-AT still has the best clean accuracy. Consider the black-box setting, the attacker may use different clone model architectures for the attack. As shown in Table 5, close attack performance is achieved when the clone model is ResNet34 and WideResNet, while the attack performance drops slightly when the clone model is MobileNetv2, probably due to the larger difference in architecture. In conclusion, our method achieves stable attack performance across different configurations. Transfer-Based Adversarial Attacks. We study the effectiveness of black-box transfer-based adversarial attacks on the target model using the clone model as a surrogate. As shown in Table 6, although black-box attacks on robust models are extremely challenging (Dong et al. 2019, 2020), our method significantly improves the attack success rate (ASR) at various ϵ compared to the baselines. In most cases, our method even slightly exceeds the AT Proxy, which directly uses the target data to train a surrogate model, and is slightly worse than the white-box attack. As ϵ decreases, the gap between our method and the white-box attack gets smaller. Attack Cost of DFHL-RS. We also consider the attack cost of our DFHL-RS. In terms of query budget, our method eliminates the need for any querying during the data generation stage. It only requires querying the target model with each HEE sample for the corresponding label. Therefore, the total query budget depends on the training epoch, batch size and the clone model iterations, i.e., E×B×NC. Our default query budget for CIFAR-10 is 38M, whereas it is 20M for DFME and DS, and 30M for MAZE. This is reasonable because acquiring robustness typically comes at a higher cost. Clean Acc AA DFHL-RS 77.86 39.51 w/o Div Loss 76.87 38.87 w/o Label Smoothing 77.10 38.42 w/o Standard Augmentation 74.99 36.80 w/o Strong Augmentation 77.13 38.66 Table 8: Ablation study of different components. By modifying these three hyperparameters, we can control the query budget as shown in Table 7. When the query budget is halved, the clean accuracy and robust accuracy are only reduced by approximately 2% and 3%, respectively. More results can be found in the appendix. In terms of time overhead, since only a few iterations per epoch are sufficient for the first stage, it takes very little time. The time overhead of the second stage is mainly concentrated on the inner loop for constructing HEE, and its computational complexity is similar to that of standard AT. The time overhead mainly depends on the number of training samples per epoch, i.e., B × NC, which is comparable to many AT techniques. Ablation Study. We further investigate the effects of the various components in our DFHL-RS framework as shown in Table 8. When the diversity loss in Eq. (7) is discarded, the clean accuracy and robust accuracy both decrease. In addition, removing the label smoothing and standard augmentation in the first stage will also affect the attack performance. 
The strong data augmentation in the second stage improves the attack effect by promoting the diversity of HEE. Conclusion In this paper, we first explore a novel and challenging task called Data-Free Hard-Label Robustness Stealing (DFHLRS) attack, which can steal both clean accuracy and adversarial robustness by simply querying a target model for hard labels without the participation of any natural data. Experiments show that our method achieves excellent and stable attack performance under various configurations. Our work aims to advance research to improve security and privacy by raising awareness of the vulnerabilities of machine learning models through attacks. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6859 Acknowledgements This work was supported in part by the Natural Science Foundation of China under Grant U20B2047, 62102386, U2336206, 62072421 and 62121002, and by Xiaomi Young Scholars Program. References Addepalli, S.; Nayak, G. K.; Chakraborty, A.; and Radhakrishnan, V. B. 2020. Degan: Data-enriching gan for retrieving representative samples from a trained classifier. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04): 3130–3137. Barbalau, A.; Cosma, A.; Ionescu, R. T.; and Popescu, M. 2020. Black-Box Ripper: Copying black-box models using generative evolutionary algorithms. Advances in Neural Information Processing Systems, 33: 20120–20129. Beetham, J.; Kardan, N.; Mian, A. S.; and Shah, M. 2023. Dual Student Networks for Data-Free Model Stealing. In International Conference on Learning Representations. Binici, K.; Aggarwal, S.; Pham, N. T.; Leman, K.; and Mitra, T. 2022a. Robust and resource-efficient data-free knowledge distillation by generative pseudo replay. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6): 6089– 6096. Binici, K.; Pham, N. T.; Mitra, T.; and Leman, K. 2022b. Preventing catastrophic forgetting and distribution mismatch in knowledge distillation via synthetic data. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 663–671. Carlini, N.; and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), 39–57. IEEE. Chen, H.; Wang, Y.; Xu, C.; Yang, Z.; Liu, C.; Shi, B.; Xu, C.; Xu, C.; and Tian, Q. 2019. Data-free learning of student networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3514–3522. Croce, F.; and Hein, M. 2020. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In International Conference on Machine Learning, 2206–2216. PMLR. Do, K.; Le, T. H.; Nguyen, D.; Nguyen, D.; Harikumar, H.; Tran, T.; Rana, S.; and Venkatesh, S. 2022. Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation. Advances in Neural Information Processing Systems, 35: 10055–10067. Dong, Y.; Fu, Q.-A.; Yang, X.; Pang, T.; Su, H.; Xiao, Z.; and Zhu, J. 2020. Benchmarking Adversarial Robustness on Image Classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Dong, Y.; Pang, T.; Su, H.; and Zhu, J. 2019. Evading defenses to transferable adversarial examples by translationinvariant attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4312– 4321. Fang, G.; Mo, K.; Wang, X.; Song, J.; Bei, S.; Zhang, H.; and Song, M. 2022. Up to 100x faster data-free knowledge distillation. 
Proceedings of the AAAI Conference on Artificial Intelligence, 36(6): 6597–6604. Fang, G.; Song, J.; Wang, X.; Shen, C.; Wang, X.; and Song, M. 2021. Contrastive Model Inversion for Data-Free Knolwedge Distillation. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, 2374–2380. Gao, L.; Zhang, Q.; Song, J.; Liu, X.; and Shen, H. T. 2020. Patch-wise attack for fooling deep neural network. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVIII 16, 307–322. Springer. Goldblum, M.; Fowl, L.; Feizi, S.; and Goldstein, T. 2020. Adversarially robust distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04): 3996–4003. Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Goodman, D.; and Xin, H. 2020. Attacking and defending machine learning applications of public cloud. arXiv preprint arXiv:2008.02076. Gowal, S.; Rebuffi, S.-A.; Wiles, O.; Stimberg, F.; Calian, D. A.; and Mann, T. A. 2021. Improving robustness using generated data. Advances in Neural Information Processing Systems, 34: 4218–4233. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 770–778. He, W.; Li, B.; and Song, D. 2018. Decision boundary analysis of adversarial examples. In International Conference on Learning Representations. Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Jagielski, M.; Carlini, N.; Berthelot, D.; Kurakin, A.; and Papernot, N. 2020. High accuracy and high fidelity extraction of neural networks. In USENIX Security Symposium, 1345–1362. Kariyappa, S.; Prakash, A.; and Qureshi, M. K. 2021. Maze: Data-free model stealing attack using zeroth-order gradient estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13814–13823. Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple layers of features from tiny images. Tech Report. Li, G.; Xu, G.; Guo, S.; Qiu, H.; Li, J.; and Zhang, T. 2023a. Extracting Robust Models with Uncertain Examples. In The Eleventh International Conference on Learning Representations. Li, Q.; Guo, Y.; Zuo, W.; and Chen, H. 2023b. Squeeze Training for Adversarial Robustness. In International Conference on Learning Representations. Lopes, R. G.; Fenu, S.; and Starner, T. 2017. Data-free knowledge distillation for deep neural networks. arXiv preprint arXiv:1710.07535. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6860 Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2018. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations. Micaelli, P.; and Storkey, A. J. 2019. Zero-shot knowledge transfer via adversarial belief matching. Advances in Neural Information Processing Systems, 32. Nayak, G. K.; Mopuri, K. R.; Shaj, V.; Radhakrishnan, V. B.; and Chakraborty, A. 2019. Zero-shot knowledge distillation in deep networks. In International Conference on Machine Learning, 4743–4751. PMLR. Orekondy, T.; Schiele, B.; and Fritz, M. 2019. Knockoff nets: Stealing functionality of black-box models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4954–4963. 
Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; and Chen, L.-C. 2018. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4510– 4520. Sanyal, S.; Addepalli, S.; and Babu, R. V. 2022. Towards data-free model stealing in a hard label setting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15284–15293. Schmidt, L.; Santurkar, S.; Tsipras, D.; Talwar, K.; and Madry, A. 2018. Adversarially robust generalization requires more data. Advances in Neural Information Processing Systems, 31. Shafique, M.; Naseer, M.; Theocharides, T.; Kyrkou, C.; Mutlu, O.; Orosa, L.; and Choi, J. 2020. Robust machine learning systems: Challenges, current trends, perspectives, and the road ahead. IEEE Design & Test, 37(2): 30–57. Shokri, R.; Stronati, M.; Song, C.; and Shmatikov, V. 2017. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), 3–18. IEEE. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; and Wojna, Z. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2818–2826. Tram`er, F.; Zhang, F.; Juels, A.; Reiter, M. K.; and Ristenpart, T. 2016. Stealing Machine Learning Models via Prediction APIs. In USENIX Security Symposium, volume 16, 601–618. Truong, J.-B.; Maini, P.; Walls, R. J.; and Papernot, N. 2021. Data-free model extraction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4771–4780. Wang, Y.; Chen, Z.; Yang, D.; Guo, P.; Jiang, K.; Zhang, W.; and Qi, L. 2023. Model Robustness Meets Data Privacy: Adversarial Robustness Distillation without Original Data. arXiv preprint arXiv:2303.11611. Wang, Y.; Li, J.; Liu, H.; Wang, Y.; Wu, Y.; Huang, F.; and Ji, R. 2022. Black-box dissector: Towards erasing-based hard-label model stealing attack. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part V, 192–208. Springer. Wang, Z. 2021. Zero-shot knowledge distillation from a decision-based black-box model. In International Conference on Machine Learning, 10675–10685. PMLR. Yuan, X.; Chen, K.; Zhang, J.; Zhang, W.; Yu, N.; and Zhang, Y. 2023. Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3): 3349–3357. Yuan, X.; Ding, L.; Zhang, L.; Li, X.; and Wu, D. O. 2022. Es attack: Model stealing against deep neural networks without data hurdles. IEEE Transactions on Emerging Topics in Computational Intelligence, 6(5): 1258–1270. Zagoruyko, S.; and Komodakis, N. 2016. Wide residual networks. arXiv preprint arXiv:1605.07146. Zhang, H.; Yu, Y.; Jiao, J.; Xing, E.; El Ghaoui, L.; and Jordan, M. 2019. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, 7472–7482. PMLR. Zhang, J.; Chen, C.; Dong, J.; Jia, R.; and Lyu, L. 2022a. QEKD: query-efficient and data-free knowledge distillation from black-box models. arXiv preprint arXiv:2205.11158. Zhang, J.; Li, B.; Xu, J.; Wu, S.; Ding, S.; Zhang, L.; and Wu, C. 2022b. Towards Efficient Data Free Black-Box Adversarial Attack. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15115– 15125. Zhu, J.; Yao, J.; Han, B.; Zhang, J.; Liu, T.; Niu, G.; Zhou, J.; Xu, J.; and Yang, H. 2022. 
Reliable adversarial distillation with unreliable teachers. In International Conference on Learning Representations. Zi, B.; Zhao, S.; Ma, X.; and Jiang, Y.-G. 2021. Revisiting adversarial robustness distillation: Robust soft labels make student better. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 16443–16452. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6861 | 2024 | 762 |
18,587 | Efficient Conditional Diffusion Model with Probability Flow Sampling for Image Super-resolution Yutao Yuan, Chun Yuan Tsinghua University [email protected], [email protected] Abstract Image super-resolution is a fundamentally ill-posed problem because multiple valid high-resolution images exist for one low-resolution image. Super-resolution methods based on diffusion probabilistic models can deal with the ill-posed nature by learning the distribution of high-resolution images conditioned on low-resolution images, avoiding the problem of blurry images in PSNR-oriented methods. However, existing diffusion-based super-resolution methods have high time consumption with the use of iterative sampling, while the quality and consistency of generated images are less than ideal due to problems like color shifting. In this paper, we propose Efficient Conditional Diffusion Model with Probability Flow Sampling (ECDP) for image super-resolution. To reduce the time consumption, we design a continuoustime conditional diffusion model for image super-resolution, which enables the use of probability flow sampling for efficient generation. Additionally, to improve the consistency of generated images, we propose a hybrid parametrization for the denoiser network, which interpolates between the data-predicting parametrization and the noise-predicting parametrization for different noise scales. Moreover, we design an image quality loss as a complement to the score matching loss of diffusion models, further improving the consistency and quality of super-resolution. Extensive experiments on DIV2K, ImageNet, and CelebA demonstrate that our method achieves higher super-resolution quality than existing diffusion-based image super-resolution methods while having lower time consumption. Our code is available at https://github.com/Yuan-Yutao/ECDP. Introduction Image super-resolution, the task of recovering highresolution (HR) images from low-resolution (LR) images, is fundamentally an ill-posed problem. Given an LR image, there are more than one HR images consistent with the input. Existing PSNR-oriented super-resolution methods (Dong et al. 2014; Lim et al. 2017; Zhang et al. 2018b) that learn deterministic mappings from LR images to HR images using pixel losses are effectively predicting the mean of all plausible HR images, and tend to generate blurry HR images with unsatisfactory visual quality. Super-resolution methods based on generative models deal with the ill-posed Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. nature by learning the distribution of HR images conditioned on LR images, allowing for the generation of multiple diverse results from a single input image and avoiding the problem of blurry images. Recently, the use of diffusion probabilistic models (Ho, Jain, and Abbeel 2020; Song et al. 2021), a trending class of generative models, have grown popular in image super-resolution. SR3 (Saharia et al. 2021) and SRDiff (Li et al. 2021) adapts Diffusion Denoising Probabilistic Models (DDPMs) (Ho, Jain, and Abbeel 2020) for image superresolution. They define a Markovian forward process that gradually adds Gaussian noise into image data, and use denoiser neural networks conditioned on LR images to learn its reverse process and generate new images from noise. They are able to generate diverse and realistic HR images with fine details. 
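As a small illustration of the conditioning mechanism described above, one common way to make the denoiser aware of the LR image is simply to upscale it to the HR resolution and concatenate it with the noisy HR image along the channel dimension (the strategy used by SR3); the PyTorch sketch below shows only this input construction and is not the exact interface of either method.

import torch
import torch.nn.functional as F

def conditional_denoiser_input(x_noisy, lr_image, scale=4):
    # Bicubically upscale the LR image to HR resolution and concatenate it channel-wise
    # with the noisy HR image, so the denoiser sees the LR condition at every step.
    lr_up = F.interpolate(lr_image, scale_factor=scale, mode="bicubic", align_corners=False)
    return torch.cat([x_noisy, lr_up], dim=1)  # shape (B, 2 * C, H, W)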
However, there are still challenging aspects that remain to be improved for diffusion-based super-resolution. Diffusion models typically generate new images iteratively using a Markov chain, which necessitates many neural network evaluations and makes the super-resolution process timeconsuming. Additionally, they are prone to problems like color shifting, making the quality and consistency of generated images less than ideal and reducing their performance on super-resolution. To tackle the challenges, we propose Efficient Conditional Diffusion Model with Probability Flow Sampling (ECDP) for image super-resolution. It gradually corrupts HR images using stochastic differential equations (SDEs), and learns to restore the original images with a denoiser network conditioned on LR images. We generate super-resolution images using probability flow sampling, which can be performed with low time consumption using ordinary differential equation (ODE) solvers. Additionally, to improve the consistency of generated images with LR input, we use a hybrid parametrization in the denoiser network. It uses the x0parametrization that predicts the clean data directly in addition to the commonly used ϵ-parametrization, and smoothly interpolates between them for different noise scales. Moreover, we introduce an image quality loss as a complement to the score matching loss of diffusion models. It measures the feature-space distance of generated HR images and groundtruth images in the dataset, further improving the consistency and quality of super-resolution. Extensive experiments The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6862 on multiple datasets encompassing face super-resolution and general super-resolution demonstrate the effectiveness of our approach. Our main contributions are summarized as follows: • We propose Efficient Conditional Diffusion Model with Probability Flow Sampling (ECDP) for image superresolution, which generates realistic super-resolution images with low time costs. • With a continuous-time conditional diffusion model based on SDEs designed for super-resolution, we can generate super-resolution images using probability flow sampling, which reduces the time consumption of superresolution. • We propose score matching with hybrid-parametrization and design an image quality loss for diffusion-based image super-resolution, improving the consistency and quality of generated images. • Extensive experiments on DIV2K, ImageNet, and CelebA demonstrate that our method achieves higher super-resolution quality than existing diffusion-based image super-resolution methods while having lower time consumption. Related Work Diffusion probabilistic models Diffusion probabilistic models are a family of deep generative models with great success in image generation. DDPM (Ho, Jain, and Abbeel 2020) defines a Markovian diffusion process on image data, gradually adding noise into the image, and learns to reproduce the original image with a sequence of denoisers. It achieves impressive high-quality image generation results. Various improvements to the model have been proposed, including improved architectures (Dhariwal and Nichol 2021) loss reweighting (Nichol and Dhariwal 2021), and fast sampling (Song, Meng, and Ermon 2021). DDPMs are shown to be equivalent to denoising score matching over multiple noise levels (Song and Ermon 2019), which are unified and generalized to the continuous case with SDEs (Song et al. 2021). 
We build our super-resolution method on top of the SDE formulation by extending it and conditioning on LR images. Besides image generation, diffusion models have been applied to a large range of tasks in computer vision. A popular strategy among them is to formulate the task as a conditional generation problem, using diffusion models to predict the distribution of outputs conditioned on the inputs. This strategy has achieved success in text-to-image generation (Saharia et al. 2022b), image super-resolution (Saharia et al. 2021; Li et al. 2021), image inpainting (Saharia et al. 2022a), and image colorization (Saharia et al. 2022a), among others. A different line of research focuses on using existing diffusion models in a zero-shot manner. By taking an unconditional diffusion model and enforcing consistency with reference images during sampling, it is possible to perform image editing (Meng et al. 2022; Choi et al. 2021) and image inpainting (Lugmayr et al. 2022) without task-specific training. More recently, this approach has been generalized for a family of linear and non-linear inverse problems (Kawar et al. 2022; Wang, Yu, and Zhang 2023; Chung et al. 2023). Image super-resolution A lot of super-resolution methods based on deep learning have been proposed in recent years. Most of the early work takes a regression-based approach (Dong et al. 2014; Lim et al. 2017; Zhang et al. 2018b), learning a deterministic one-to-one mapping from LR images to HR images with L2 or L1 losses. Since the posterior distribution of HR images is highly multimodal, these methods tend to generate blurry images, effectively predicting the mean of the distribution. To improve the visual quality of generated images, GAN-based approaches (Ledig et al. 2017; Wang et al. 2018) are proposed for super-resolution. They are able to generate HR images with high quality, but tend to suffer from mode collapse, difficult optimization and low consistency with LR images. Normalizing flows have also been used for image super-resolution (Lugmayr et al. 2020; Liang et al. 2021). They are able to estimate the distribution of HR images conditioned on LR images, allowing for diverse and realistic image super-resolution. However, normalizing flows require invertible architectures, limiting the expressiveness of the models. Recently, several super-resolution methods using diffusion models have been proposed. SR3 (Saharia et al. 2021) and SRDiff (Li et al. 2021) adapts DDPM (Ho, Jain, and Abbeel 2020) for super-resolution, making the model conditional on LR images. They are able to generate realistic, high quality HR images. SR3 concatenates upscaled LR images to noisy HR images as the input to the denoiser network, making the model conditional on LR images. SRDiff uses an LR encoder to extract features from LR input, and further uses residual prediction to improve the convergence speed and performance of the model. Some methods (Chung, Sim, and Ye 2022; Chung et al. 2023; Fei et al. 2023; Zhu et al. 2023) use a different zero-shot approach as opposed to aforementioned supervised methods, taking an unconditional diffusion model trained on image generation and modifying its sampling process with guidance. However, their performance is often less than ideal compared to supervised methods due to lack of dedicated training. Preliminaries Given a dataset X = {xi} that follows an unknown distribution p(x), continuous-time diffusion probabilistic models define a forward process that gradually injects noise into data x. 
This process can be described as the solution to an SDE running from t = 0 to t = T, starting with i.i.d. samples x(0) from the dataset (Song et al. 2021):
dx = f(x, t) dt + g(t) dw (1)
where f(x, t) and g(t) are predefined functions, and w is Brownian motion. Denote by x(t) the solution at time t and pt(x) its probability distribution. The parameters of the forward process are chosen so pT ends up as a prior distribution with tractable sampling.
Figure 1: Overview of ECDP. Top left: Continuous-time conditional diffusion uses a forward SDE to transform images into noise, and generates new images from noise using probability flow. Bottom: The conditional score in the probability flow is approximated with a hybrid-parametrization score predictor sθ, which is trained using score matching. Top right: An additional image quality loss that compares the generated HR images with the ground truth is computed using probability flow sampling, improving the quality of super-resolution results.
It has been shown (Song et al. 2021) that the data distribution p0 can be recovered from pT using another reverse SDE, running backwards from t = T to t = 0:
dx = [f(x, t) − g(t)² ∇x log pt(x)] dt + g(t) dw (2)
The term ∇x log pt(x) is the score of the distribution pt(x), which is intractable because the data distribution is unknown. Diffusion models learn the data distribution by approximating ∇x log pt(x) with a score prediction network sθ(x, t). Due to the intractability of the marginal distribution pt(x), score matching techniques are deployed in training, giving rise to the score matching loss:
L = Ex,t Ex(t) [ ∥sθ(x(t), t) − ∇x(t) log p0t(x(t) | x)∥₂² ] (3)
where p0t is the transition probability of the forward process and can often be computed analytically. In practice the loss is often reweighted, where terms associated with different t are assigned different weights, to ensure better convergence in training.
Besides the reverse SDE, it is also possible to sample from the learned distribution using probability flow, which takes the form of an ODE:
dx = [f(x, t) − (1/2) g(t)² ∇x log pt(x)] dt (4)
It can be solved using off-the-shelf ODE solvers, allowing efficient sampling using much less time than the reverse SDE.
Proposed Method
In this section, we present Efficient Conditional Diffusion Model with Probability Flow Sampling (ECDP) for image super-resolution. From a dataset of paired HR and LR images D = {(xi, yi)}, our model learns the conditional distribution p(x | y). Given LR images, super-resolution images are generated by sampling from this conditional distribution. Our method is illustrated in Figure 1.
Continuous-Time Conditional Diffusion for Image Super-Resolution
In our method, we design a conditional diffusion model for image super-resolution with a forward SDE that transforms HR images x into noise while conditioned on LR images y:
dx = −(1/2) β(t) (x − µ(y)) dt + √(β(t) σ²(y)) dw (5)
where µ(y) and σ²(y) are the per-pixel mean and variance of p(x | y) respectively, and β(t) is a hyperparameter that controls how fast noise is injected into data. Since it is impossible to compute µ(y) and σ²(y) without direct access to the true data distribution, we approximate µ(y) by upscaling y using bicubic interpolation, and set σ²(y) to a predefined constant determined using the empirical variance of x − µ(y) over the training dataset. It is worth noting that the use of µ(y) in the forward process is similar to residual prediction (Li et al. 2021), which subtracts upscaled LR images from HR images before diffusion. We differ from residual prediction in that we integrate the LR images into the forward process of diffusion directly, and we additionally consider the variance of HR images. With the use of µ(y) and σ²(y), the forward process has the mean-preserving and variance-preserving properties, as formalized below:
Proposition 1. The forward process given by (5) keeps the mean and variance of x(t) conditioned on y unchanged during the transform from t = 0 to t = T. More specifically:
E[x(t) | y] = µ(y) (6)
Var[x(t) | y] = σ²(y) (7)
By maintaining the mean and variance of data during the forward process, the amount of change in the data distribution is minimized, making model training easier and image generation faster.
To enable conditional generation of HR images, we learn to approximate the conditional score ∇x log pt(x | y) with a conditional score prediction network sθ(x, y, t). The score prediction network is trained with denoising score matching (Vincent 2011) using the following loss:
Lscore = E(x,y)∼D Et Eϵ∼N(0,I) [ ∥sθ(µ̂(x, y, t) + √(σ̂²(y, t)) ϵ, y, t) + ϵ / √(σ̂²(y, t))∥₂² ] (8)
where
µ̂(x, y, t) = √(α(t)) (x − µ(y)) + µ(y) (9)
σ̂²(y, t) = (1 − α(t)) σ²(y) (10)
α(t) = exp(−∫_0^t β(s) ds) (11)
For efficient generation of super-resolution images, we sample HR images with probability flow, which can be much faster than SDE-based sampling due to its deterministic nature. Given an LR image y, new HR images can be sampled from the learned conditional distribution by solving the following ODE from t = T to t = 0, starting with x(T) sampled from the limit distribution of the forward SDE:
x(T) ∼ N(µ(y), σ²(y) I) (12)
dx = [−(1/2) β(t) (x − µ(y)) − (1/2) β(t) σ²(y) sθ(x, y, t)] dt (13)
Hybrid-Parametrization Score Matching
Existing diffusion probabilistic models (Ho, Jain, and Abbeel 2020; Song et al. 2021) typically parametrize the denoiser network in such a way that its output is matched against the normalized noise component ϵ in noisy data x(t) during training, so the network output has identical variance across different t. We denote it the ϵ-parametrization.
Figure 2: The denoised images produced by (a) the ϵ-parametrization, (b) the x0-parametrization and (c) the hybrid-parametrization.
ϵ-parametrization has been proved effective in image generation (Ho, Jain, and Abbeel 2020), improving the quality of denoising predictions when the amount of noise is low. However, ϵ-parametrization alone does not work well for image super-resolution. In image super-resolution, the space of plausible HR images x is strongly constrained by the consistency with paired LR images y and concentrated around a single point. ϵ-parametrization has difficulty learning this consistency constraint, because it can only recover the clean HR images indirectly by subtracting the predicted noise from noisy images, requiring wildly varying prediction values for different noisy data. It tends to produce very inconsistent denoising results when the amount of noise is large. This effect can be seen in Figure 2a, where ϵ-parametrization does not produce satisfactory denoising predictions, leaving artifacts in the denoised image. As a result, super-resolution methods using the ϵ-parametrization tend to produce HR images inconsistent with LR input.
A natural alternative to ϵ-parametrization is to predict the clean data component in noisy data as opposed to the noise component. We denote it the x0-parametrization. x0-parametrization has been investigated in the context of unconditional image generation (Benny and Wolf 2022; Salimans and Ho 2022), where it was found to have better performance than ϵ-parametrization in certain cases. In image super-resolution, x0-parametrization produces clean images with no artifacts even when the amount of noise is large, because the denoiser network can use the LR inputs to recover a good estimate of clean HR images directly.
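To illustrate how training pairs for the score-matching objective in Eqs. (8)–(11) can be formed in practice, here is a minimal NumPy sketch; the linear β(t) schedule with β(0) = 0.1 and β(T) = 20 is taken from the experimental settings reported later, while T = 1 and the lower bound on t are assumptions made for illustration, and the code is a sketch rather than the authors' implementation.

import numpy as np

BETA_0, BETA_1, T = 0.1, 20.0, 1.0  # linear beta(t); T = 1 is an assumed time horizon

def alpha(t):
    # alpha(t) = exp(-int_0^t beta(s) ds) for beta(s) = BETA_0 + (BETA_1 - BETA_0) * s / T
    return np.exp(-(BETA_0 * t + 0.5 * (BETA_1 - BETA_0) * t ** 2 / T))

def make_training_sample(x, mu_y, sigma2_y, rng):
    # x: clean HR image, mu_y: bicubic-upscaled LR image, sigma2_y: scalar variance constant.
    t = rng.uniform(1e-3, T)               # avoid t = 0, where the noise variance vanishes
    a = alpha(t)
    mean = np.sqrt(a) * (x - mu_y) + mu_y  # Eq. (9)
    var = (1.0 - a) * sigma2_y             # Eq. (10)
    eps = rng.standard_normal(x.shape)
    x_t = mean + np.sqrt(var) * eps        # noisy input fed to s_theta(x_t, y, t)
    score_target = -eps / np.sqrt(var)     # the conditional score the network should match
    return x_t, t, score_target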
For the conditional image diffusion in our method, ϵparametrization and x0-parametrization are defined by expressing the score prediction values in terms of neural networks ϵθ and x0θ respectively: sθ,ϵ(x, y, t) = − 1 p ˆσ2(y, t) ϵθ(x, y, t) (14) sθ,x0(x, y, t) = − 1 ˆσ2(y) (x −ˆµ(x0θ(x, y, t), y, t)) (15) where ϵθ and x0θ are trained using the following reweighted The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6865 version of the score matching loss (8): Lscore = E(x,y)∼DEtEϵ∼N (0,I) h ∥ϵθ(xt, y, t) −ϵ∥2 2 + ∥x0θ(xt, y, t) −x∥2 2 i (16) where xt = ˆµ(x, y, t) + p ˆσ2(y, t)ϵ (17) As the ϵ-parametrization has low estimation errors in the low-noise region, while x0-parametrization has low estimation errors in the high-noise region, to take advantage of both parametrizations, we use a hybrid approach to smoothly interpolate between these two parametrizations: sθ(x, y, t) = λ(t)sθ,ϵ(x, y, t) + (1 −λ(t))sθ,x0(x, y, t) (18) where λ(t) is the interpolation coefficient. In our method, we choose λ(t) = α(t)c, where c is a constant in the range [0.5, 1.5], selected for each dataset individually to minimize the prediction error for the hybrid parametrization. The hybrid-parametrization is able to achieve low estimation errors in all noise scales. Image Quality Loss with Probability Flow Sampling Diffusion probabilistic models do not directly optimize for the quality of generated images during training. Instead, the quality is only optimized indirectly in the learning of data distribution with score matching loss. In image superresolution, each LR image has only one paired HR image in the dataset, and it is difficult for diffusion models to estimate the conditional distribution of HR images accurately from this single data point. It is therefore important for diffusion-based super-resolution methods to add additional image quality losses into training. In CNN-based super-resolution methods, losses like the pixel loss and the perceptual loss directly measure the distance between super-resolution images and the ground truth. However, these losses have not received usage in diffusionbased super-resolution methods, as they require generating new HR images during training, which is computationally expensive for diffusion models using stochastic sampling. With the use of probability flow sampling in our method for efficient super-resolution, it becomes possible to introduce such losses into the training of the diffusion model. Therefore, we propose an image quality loss for diffusionbased image super-resolution, defined as the feature-space distance between generated HR images and the ground truth: Lquality = E(x,y)∼D [∥F(SRθ(y)) −F(x)∥] (19) where SRθ(y) is an HR image sample generated using the probability flow. F is chosen as the feature maps of a VGG network pretrained on image classification, making Lquality equivalent to the perceptual loss in CNN-based methods, but in principle it can be any function that converts images to feature vectors. To compute gradients of the image quality loss with regards to the network parameters, it is necessary to backpropagate through SRθ(y), the solution of the probability (a) no Lquality (b) with Lquality Figure 3: Visualization of images generated by our method trained without and with Lquality. The image generated by the model with Lquality has more visible structure (the lines on the pillar) and less background noise. flow ODE. This can be achieved efficiently using the adjoint method (Chen et al. 
2018), which expresses the gradient of ODE solutions with regards to model parameters in terms of another augmented ODE, making it possible to compute gradients without depending on the intermediate values of the original ODE. Compared with direct backpropagation through the ODE solver, the memory consumption of computing the image quality loss is reduced from O(s) to O(1), where s is the number of sampling steps. As seen in Figure 3, the image quality loss can significantly improve the quality of super-resolution images for diffusion-based methods. The model trained with Lquality is able to generate cleaner images with more visible structure and less background noise compared to the model trained without Lquality. Experiments In this section, we conduct experiments on multiple datasets encompassing both face images and general images to demonstrate the performance of our method. Experimental Setup Model architecture and hyperparameters In our experiments, we set β(t) in our forward process (5) to a linear function increasing from β(0) = 0.1 and β(T) = 20, matching the settings in VP-SDE (Song et al. 2021). Score prediction from ϵ-parametrization and x0-parametrization are produced using a single network with two output heads, using an U-Net architecture with BigGAN (Brock, Donahue, and Simonyan 2019) residual blocks. To make the model conditional on LR images, we use an LR feature extractor with the RRDB (Wang et al. 2018) architecture and add the features to each layer of the score prediction network. To improve the training efficiency, we use only the standard score matching loss in early stages of training, and then add our image quality loss later on. To generate HR images, we perform probability flow sampling using a standard RungeKutta ODE solver with absolute and relative error tolerance of 10−4. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6866 Datasets For general image super-resolution (4×), we evaluate the performance of various methods on two datasets, DIV2K (Agustsson and Timofte 2017) and ImageNet (Russakovsky et al. 2015). For DIV2K, models are trained using random HR image crops of size 160 × 160 and evaluated with full-size images. The training dataset is augmented with images from Flickr2K following the practice in earlier methods (Lugmayr et al. 2020; Li et al. 2021). For ImageNet, images are center cropped and resized to 256 × 256 for HR following SR3 (Saharia et al. 2021), and further downsampled to 64 × 64 with bicubic interpolation for LR. For face image super-resolution (8×), we train and evaluate on CelebA. Images are cropped and resized to 160×160 following the procedures in (Lugmayr et al. 2020), and then downsampled as LR using bicubic interpolation. Baselines We compare our image super-resolution method with the following baselines: PSNR-oriented method RRDB (Wang et al. 2018), GAN-based method ESRGAN (Wang et al. 2018), normalizing flow-based methods SRFlow (Lugmayr et al. 2020) and HCFlow (Liang et al. 2021), as well as several diffusion-based methods, SR3 (Saharia et al. 2021), SRDiff (Li et al. 2021), IR-SDE (Luo et al. 2023), GDP (Fei et al. 2023), and DiffPIR (Zhu et al. 2023). Among diffusion-based methods, GDP and DiffPIR uses pretrained unconditional diffusion models, while other methods including ours train conditional models for superresolution. We use official results for baselines where available, and train the models from scratch otherwise. 
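As an illustration of this sampling procedure, the sketch below integrates the probability flow ODE of Eqs. (12)–(13) from t = T down to a small positive time with SciPy's adaptive Runge-Kutta solver at the stated tolerances; score_fn is a placeholder for the trained network sθ (assumed to already close over the LR conditioning y), mu_y and sigma2_y are the bicubic-upscaled LR image and the dataset variance constant, T = 1 is an assumption, and the code is schematic rather than the released implementation.

import numpy as np
from scipy.integrate import solve_ivp

BETA_0, BETA_1, T = 0.1, 20.0, 1.0  # linear beta(t) schedule; T = 1 is an assumption

def beta(t):
    return BETA_0 + (BETA_1 - BETA_0) * t / T

def sample_sr(score_fn, mu_y, sigma2_y, rng, t_end=1e-3):
    shape = mu_y.shape
    # x(T) ~ N(mu(y), sigma^2(y) I), Eq. (12)
    x_T = mu_y + np.sqrt(sigma2_y) * rng.standard_normal(shape)

    def ode_rhs(t, x_flat):
        x = x_flat.reshape(shape)
        # Eq. (13): dx/dt = -0.5 * beta(t) * ((x - mu(y)) + sigma^2(y) * s_theta(x, y, t))
        return (-0.5 * beta(t) * ((x - mu_y) + sigma2_y * score_fn(x, t))).ravel()

    sol = solve_ivp(ode_rhs, (T, t_end), x_T.ravel(), method="RK45", rtol=1e-4, atol=1e-4)
    return sol.y[:, -1].reshape(shape)  # approximate sample from p(x | y)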
Evaluation metrics In addition to PSNR and SSIM, two standard metrics for image super-resolution, we use LPIPS (Zhang et al. 2018a) to measure the visual quality of super-resolution results. It is a perceptual metric that is known to correlate better with human perception than traditional metrics like PSNR and SSIM, and is therefore considered the main metric in our experiments. Results of Image Super-Resolution General image super-resolution The quantitative results on DIV2K and ImageNet are shown in Table 1 and 2 respectively, and the visual results are shown in Figure 4. On both datasets, our method outperforms all baseline methods in LPIPS and achieves the best overall super-resolution quality. RRDB generates blurry images and has high LPIPS distance despite achieving high PSNR and SSIM scores, further confirming that PSNR and SSIM does not correlate well with perceptual quality. SR3 has good PSNR and SSIM metrics on ImageNet, but its performance is poor in the main metric LPIPS. GDP and DiffPIR, the two diffusion-based methods that use pretrained unconditional models as opposed to training dedicated conditional models, have inferior performance in the three metrics compared to other diffusion-based methods, demonstrating the importance of designing diffusion models specifically for image super-resolution. In addition, the sampling speed of diffusion-based superresolution methods in our experiments is listed in Table 3. Our method is able to perform super-resolution with the least Method LPIPS↓ PSNR↑ SSIM↑ #Params RRDB 0.253 29.44 0.84 16.7M ESRGAN 0.124 26.22 0.75 16.7M SRFlow 0.120 27.09 0.76 39.5M HCFlow 0.110 26.61 0.74 23.2M SR3 0.175 25.90 0.75 97.8M SRDiff 0.136 27.41 0.79 37.6M IR-SDE 0.231 25.90 0.66 137.1M Ours 0.108 28.03 0.79 42.6M Table 1: Results of 4× image super-resolution on DIV2K. Diffusion-based methods and non-diffusion-based methods are grouped together respectively. Best results among diffusion-based methods are highlighted in bold. Method LPIPS↓ PSNR↑ SSIM↑ #Params RRDB 0.245 27.23 0.78 16.7M ESRGAN 0.123 24.18 0.67 16.7M SRFlow 0.142 24.09 0.67 39.5M HCFlow 0.129 25.07 0.70 23.2M SR3 0.191 26.40 0.76 625M SRDiff 0.154 24.04 0.59 37.6M GDP 0.240 24.42 0.68 — DiffPIR 0.219 25.19 0.70 — Ours 0.110 25.81 0.74 37.6M Table 2: Results of 64 × 64 →256 × 256 image superresolution on ImageNet. Best results among diffusion-based methods are highlighted in bold. #Params for GDP and DiffPIR are omitted because they use pretrained unconditional diffusion models and do not train new models on their own. Method SR3 SRDiff IR-SDE Inference Time 83.1s 2.4s 6.2s Method GDP DiffPIR Ours Inference Time 94.6s 13.0s 2.0s Table 3: Time required to generate a single 256 × 256 HR image for diffusion-based methods. Method LPIPS↓ PSNR↑ SSIM↑ #Params RRDB 0.230 26.59 0.77 16.7M ESRGAN 0.120 22.88 0.63 16.7M SRFlow 0.110 25.24 0.71 40.0M HCFlow 0.090 24.83 0.69 27.0M SR3 0.109 24.26 0.68 97.8M SRDiff 0.106 25.38 0.74 12.0M Ours 0.097 25.48 0.73 40.0M Table 4: Results of face image super-resolution on CelebA. Best results among diffusion-based methods are highlighted in bold. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6867 LR Ground Truth RRDB ESRGAN SRFlow HCFlow SR3 SRDiff IR-SDE Ours LR RRDB ESRGAN SRFlow HCFlow SR3 SRDiff GDP DiffPIR Ours Figure 4: Visual results of general image super-resolution. The first two rows are results on DIV2K validation set. The last two rows are results on ImageNet dev set. 
Parametrization Lquality LPIPS↓ PSNR↑ SSIM↑ hybrid, c = 1.0 ✓ 0.108 28.03 0.79 hybrid, c = 0.5 ✓ 0.109 28.07 0.79 hybrid, c = 1.5 ✓ 0.107 28.01 0.79 ϵ-parametrization ✓ 0.128 28.02 0.79 x0-parametrization ✓ 0.112 27.72 0.77 hybrid, c = 1.0 × 0.120 27.26 0.76 Table 5: Results of the ablation study. All results are measured on the DIV2K 4× task. amount of time among all diffusion-based methods, confirming the efficiency of probability flow sampling. Face image super-resolution The results on CelebA are shown in Table 4. Similar to the general image superresolution case, our method reaches state-of-the-art performance and has the best LPIPS score among diffusion-based methods. Our method is able to produce faces with realistic details without generating unnecessary noise and distorting the images. Ablation Study To study the influence of difference choices of parametrizations in the denoiser network as well as the use of image quality loss, we conduct ablation studies as illustrated in Table 5. Our hybrid-parametrization achieves the best super-resolution results in all metrics among three choices of parametrizations since it has the advantages of both ϵ and x0-parametrization. ϵ-parametrization has the lowest LR-PSNR score among all, confirming our analysis that it generates images with low consistency. The performance of hybrid-parametrization is mostly insensitive to the choice of c. For the image quality loss, it can be seen that the model trained with Lquality achieves significantly better metrics than the model without Lquality, confirming its importance for diffusion-based super-resolution. Conclusion In this paper, we proposed ECDP, an image super-resolution framework with a continuous-time conditional diffusion model. It deploys a hybrid-parametrization denoiser network to learn the conditional score function, and generates superresolution images efficiently using probability flow sampling. An additional image quality loss for diffusion-based super-resolution is introduced, which is computed efficiently and improves the quality of super-resolution results. Experiments demonstrate that our method achieves higher superresolution quality than existing diffusion-based image superresolution methods while having lower time consumption. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6868 Acknowledgments This work was supported by the National Key R&D Program of China (2022YFB4701400/4701402), SSTIC Grant (JCYJ20190809172201639, WDZC20200820200655001), Shenzhen Key Laboratory (ZDSYS20210623092001004), and Beijing Key Lab of Networked Multimedia. References Agustsson, E.; and Timofte, R. 2017. NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2017, Honolulu, HI, USA, July 21-26, 2017, 1122–1131. IEEE Computer Society. Benny, Y.; and Wolf, L. 2022. Dynamic Dual-Output Diffusion Models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, 11472–11481. IEEE. Brock, A.; Donahue, J.; and Simonyan, K. 2019. Large Scale GAN Training for High Fidelity Natural Image Synthesis. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Chen, T. Q.; Rubanova, Y.; Bettencourt, J.; and Duvenaud, D. 2018. Neural Ordinary Differential Equations. In Bengio, S.; Wallach, H. 
M.; Larochelle, H.; Grauman, K.; CesaBianchi, N.; and Garnett, R., eds., Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montr´eal, Canada, 6572–6583. Choi, J.; Kim, S.; Jeong, Y.; Gwon, Y.; and Yoon, S. 2021. ILVR: Conditioning Method for Denoising Diffusion Probabilistic Models. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, 14347–14356. IEEE. Chung, H.; Kim, J.; McCann, M. T.; Klasky, M. L.; and Ye, J. C. 2023. Diffusion Posterior Sampling for General Noisy Inverse Problems. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net. Chung, H.; Sim, B.; and Ye, J. C. 2022. Come-CloserDiffuse-Faster: Accelerating Conditional Diffusion Models for Inverse Problems through Stochastic Contraction. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 1824, 2022, 12403–12412. IEEE. Dhariwal, P.; and Nichol, A. Q. 2021. Diffusion Models Beat GANs on Image Synthesis. In Ranzato, M.; Beygelzimer, A.; Dauphin, Y. N.; Liang, P.; and Vaughan, J. W., eds., Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, 8780– 8794. Dong, C.; Loy, C. C.; He, K.; and Tang, X. 2014. Learning a Deep Convolutional Network for Image Super-Resolution. In Fleet, D. J.; Pajdla, T.; Schiele, B.; and Tuytelaars, T., eds., European Conference on Computer Vision, Lecture Notes in Computer Science, 184–199. Springer. Fei, B.; Lyu, Z.; Pan, L.; Zhang, J.; Yang, W.; Luo, T.; Zhang, B.; and Dai, B. 2023. Generative Diffusion Prior for Unified Image Restoration and Enhancement. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, BC, Canada, June 17-24, 2023, 9935–9946. IEEE. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising Diffusion Probabilistic Models. In Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.; and Lin, H., eds., Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Kawar, B.; Elad, M.; Ermon, S.; and Song, J. 2022. Denoising Diffusion Restoration Models. In NeurIPS. Ledig, C.; Theis, L.; Huszar, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A. P.; Tejani, A.; Totz, J.; Wang, Z.; and Shi, W. 2017. Photo-Realistic Single Image SuperResolution Using a Generative Adversarial Network. In IEEE Conference on Computer Vision and Pattern Recognition, 105–114. IEEE Computer Society. Li, H.; Yang, Y.; Chang, M.; Feng, H.; Xu, Z.; Li, Q.; and Chen, Y. 2021. SRDiff: Single Image Super-Resolution with Diffusion Probabilistic Models. arXiv:2104.14951. Liang, J.; Lugmayr, A.; Zhang, K.; Danelljan, M.; Gool, L. V.; and Timofte, R. 2021. Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, 4056–4065. IEEE. Lim, B.; Son, S.; Kim, H.; Nah, S.; and Lee, K. M. 2017. Enhanced Deep Residual Networks for Single Image SuperResolution. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, 1132–1140. IEEE Computer Society. Lugmayr, A.; Danelljan, M.; Gool, L. V.; and Timofte, R. 2020. 
SRFlow: Learning the Super-Resolution Space with Normalizing Flow. In Vedaldi, A.; Bischof, H.; Brox, T.; and Frahm, J., eds., Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part V, volume 12350 of Lecture Notes in Computer Science, 715–732. Springer. Lugmayr, A.; Danelljan, M.; Romero, A.; Yu, F.; Timofte, R.; and Gool, L. V. 2022. RePaint: Inpainting using Denoising Diffusion Probabilistic Models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, 11451– 11461. IEEE. Luo, Z.; Gustafsson, F. K.; Zhao, Z.; Sj¨olund, J.; and Sch¨on, T. B. 2023. Image Restoration with Mean-Reverting Stochastic Differential Equations. In Krause, A.; Brunskill, E.; Cho, K.; Engelhardt, B.; Sabato, S.; and Scarlett, J., eds., International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, 23045– 23066. PMLR. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6869 Meng, C.; He, Y.; Song, Y.; Song, J.; Wu, J.; Zhu, J.; and Ermon, S. 2022. SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. Nichol, A. Q.; and Dhariwal, P. 2021. Improved Denoising Diffusion Probabilistic Models. In Meila, M.; and Zhang, T., eds., Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, 8162–8171. PMLR. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M. S.; Berg, A. C.; and Fei-Fei, L. 2015. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis., 115(3): 211–252. Saharia, C.; Chan, W.; Chang, H.; Lee, C. A.; Ho, J.; Salimans, T.; Fleet, D. J.; and Norouzi, M. 2022a. Palette: Image-to-Image Diffusion Models. In Nandigjav, M.; Mitra, N. J.; and Hertzmann, A., eds., SIGGRAPH ’22: Special Interest Group on Computer Graphics and Interactive Techniques Conference, Vancouver, BC, Canada, August 7 - 11, 2022, 15:1–15:10. ACM. Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E. L.; Ghasemipour, S. K. S.; Lopes, R. G.; Ayan, B. K.; Salimans, T.; Ho, J.; Fleet, D. J.; and Norouzi, M. 2022b. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. In NeurIPS. Saharia, C.; Ho, J.; Chan, W.; Salimans, T.; Fleet, D. J.; and Norouzi, M. 2021. Image Super-Resolution via Iterative Refinement. arXiv:2104.07636. Salimans, T.; and Ho, J. 2022. Progressive Distillation for Fast Sampling of Diffusion Models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. Song, J.; Meng, C.; and Ermon, S. 2021. Denoising Diffusion Implicit Models. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Song, Y.; and Ermon, S. 2019. Generative Modeling by Estimating Gradients of the Data Distribution. In Wallach, H. M.; Larochelle, H.; Beygelzimer, A.; d’Alch´e-Buc, F.; Fox, E. B.; and Garnett, R., eds., Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, 11895– 11907. 
Song, Y.; Sohl-Dickstein, J.; Kingma, D. P.; Kumar, A.; Ermon, S.; and Poole, B. 2021. Score-Based Generative Modeling through Stochastic Differential Equations. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Vincent, P. 2011. A Connection Between Score Matching and Denoising Autoencoders. Neural Comput., 23(7): 1661– 1674. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; and Loy, C. C. 2018. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. In Leal-Taix´e, L.; and Roth, S., eds., Computer Vision - ECCV 2018 Workshops Munich, Germany, September 8-14, 2018, Proceedings, Part V, volume 11133 of Lecture Notes in Computer Science, 63– 79. Springer. Wang, Y.; Yu, J.; and Zhang, J. 2023. Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net. Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018a. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, 586–595. Computer Vision Foundation / IEEE Computer Society. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; and Fu, Y. 2018b. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Ferrari, V.; Hebert, M.; Sminchisescu, C.; and Weiss, Y., eds., European Conference on Computer Vision, Lecture Notes in Computer Science, 294–310. Springer. Zhu, Y.; Zhang, K.; Liang, J.; Cao, J.; Wen, B.; Timofte, R.; and Gool, L. V. 2023. Denoising Diffusion Models for Plug-and-Play Image Restoration. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023 - Workshops, Vancouver, BC, Canada, June 17-24, 2023, 1219–1229. IEEE. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6870 | 2024 | 763 |
18,588 | SD-MVS: Segmentation-Driven Deformation Multi-View Stereo with Spherical Refinement and EM Optimization Zhenlong Yuan1, Jiakai Cao1, Zhaoxin Li2, 3*, Hao Jiang1, Zhaoqi Wang1 1Institute of Computing Technology, Chinese Academy of Sciences 2Agricultural Information Institute, Chinese Academy of Agricultural Sciences 3Key Laboratory of Agricultural Big Data, Ministry of Agriculture and Rural Affairs [email protected], [email protected], [email protected], {jianghao, zqwang}@ict.ac.cn Abstract In this paper, we introduce Segmentation-Driven Deformation Multi-View Stereo (SD-MVS), a method that can effectively tackle challenges in 3D reconstruction of textureless areas. We are the first to adopt the Segment Anything Model (SAM) to distinguish semantic instances in scenes and further leverage these constraints for pixelwise patch deformation on both matching cost and propagation. Concurrently, we propose a unique refinement strategy that combines spherical coordinates and gradient descent on normals and pixelwise search interval on depths, significantly improving the completeness of reconstructed 3D model. Furthermore, we adopt the Expectation-Maximization (EM) algorithm to alternately optimize the aggregate matching cost and hyperparameters, effectively mitigating the problem of parameters being excessively dependent on empirical tuning. Evaluations on the ETH3D high-resolution multi-view stereo benchmark and the Tanks and Temples dataset demonstrate that our method can achieve state-of-the-art results with less time consumption. Introduction Multi-view stereo (MVS) is a technique that employs images to reconstruct 3D objects or scenes. Its application spans various fields, including autonomous driving (Orsingher et al. 2022), augmented reality (Cao et al. 2021), and robotics (Li, Gogia, and Kaess 2019). Recently, PatchMatch-based methods (Sch¨onberger et al. 2016; Xu and Tao 2019; Lee et al. 2021) exhibits remarkable capabilities in sub-pixel reconstruction for large-scale imagery while being reliable for unstructured image set. These methods typically initiate by computing the matching cost of fixed patches between images, then proceeding with propagation and refinement for accurate depth estimation. Nonetheless, they typically encounter difficulties in textureless areas where the absence of texture results in unreliable depth estimations. To address this issue, several techniques have been introduced, including plane prior (Xu and Tao 2020), superpixel-wise planarization (Romanoni and Matteucci 2019), epipolar geometry (Xu et al. 2020) and confidence-based interpolation (Li et al. 2020). Yet when *Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Comparative analysis of patch deformation strategies between APD-MVS and our approach. APD-MVS (a) selects green anchor pixels from pixels characterized by similar colors but may have inconsistent depths to help reconstruct central red pixel, leading to potential inaccuracy. Conversely, our method (b) utilizes neighboring pixels inside the segmentation boundary for reconstruction. facing large textureless areas, these methods perform unsatisfactory and leave room for further improvement. Differently, learning-based methods leverages network to build learnable 3D cost volumes and thereby ameliorating the reconstruction quality. Several methods (Yao et al. 2019; Yan et al. 
2020) attempt to employ the gated recurrent unit (GRU) to provide a more rational interpretation in reconstruction, while this often leads to unaffordable time and memory cost. Others (Su and Tao 2023) try to utilize residual learning module to refine depth estimates by rectifying the upsampling errors. Yet, such networks typically lack generalization when facing scenes different from training datasets, posing challenges for their practical application. Edges in the color image are usually consistent with depth boundaries. Thus, edge information plays a pivotal role in both computation of PatchMatch and construction of 3D cost volumes. Nonetheless, problems like shadows and occlusions in complicated scenes tend to weaken the linkage between edge and depth boundaries. Consequently, several methods (Yuesong Wang et al. 2023) struggle to harness edge information effectively, often skipping edges and consequently calculating regions with inconsistent depth, leading to detail distortion, as shown in Fig. 1. Additionally, certain superpixel segmentation approaches (Kuhn, Lin, and Erdler 2019) face challenges in precisely segmenting edges The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6871 and lack semantic information to broaden receptive field. Differently, as an instance segmentation model, the Segment Anything Model (SAM) (Kirillov et al. 2023) can subtly mitigates the aforementioned disturbances, thereby segmenting instances with different depths across diverse scenes. Therefore, we introduce SD-MVS, a PatchMatch-based method that integrates SAM-based instance segmentation to better exploits edge information for patch deformation. Specifically, we first employ the instance segmentation results derived from SAM to adaptively deform the patches for matching cost and propagation, thereby accommodating the distinct characteristics of different pixels. Moreover, we employ multi-scale matching cost and propagation scheme to extract diverse information, addressing the challenges posed by textureless areas. To optimize memory consumption, we introduce an architecture promoting multi-scale consistency in parallel, consequently reducing the program’s runtime. Moreover, we propose the spherical gradient refinement to optimize previous refinement strategies. Concerning with normal refinement, we randomly select two orthogonal unit vectors perpendicular to the current normal for perturbation and incorporate gradient descent to further refine perturbation directions in subsequent rounds, thereby improving the accuracy for each hypothesis. Regarding depth refinement, we adopt pixelwise search interval derived from the deformed patch for local perturbations. Furthermore, we introduce an EM-based hyperparameter optimization to address the issue of empirical determination of hyperparameters in existing methods. By alternately optimizing the aggregated cost and the hyperparameters, we implement an excellent strategy for automatic parameter tuning, thereby facilitating a balanced consideration against diverse information. Evaluation results on the ETH3D and the Tanks and Temples benchmarks illustrate that our method surpasses the existing state-of-the-art (SOTA) methods. In summary, our contributions are as follows: • Based on SAM segmentation, we propose an adaptive patch deformation with multi-scale consistency on both matching cost and propagation to better utilize image edge information and memory cost. 
• We introduce the spherical gradient refinement, which leverages spherical coordinates and gradient descent on normals and employs pixelwise search interval to constrain depths, thereby enhancing search precision. • We propose the EM-based hyperparameter optimization by adopting the EM algorithm to alternately optimizing the aggregate cost and the hyperparameters. Related Work Traditional MVS Methods Traditional Multi-View Stereo (MVS) algorithms can primarily be classified into four categories (Seitz et al. 2006): voxel-based methods (Vogiatzis et al. 2007), surface evolution-based methods (Cremers and Kolev 2011) , patch-based methods (Bleyer, Rhemann, and Rother 2011), and depth-map based methods (Yao et al. 2019). Our methodology aligns with the last category, where depth maps are generated from images and their corresponding camera parameters, further leading to point cloud construction via fusion. Within this category, PatchMatch-based methods are the most well-known subclass. Numerous innovative PatchMatch-based methods have been proposed and accomplished a great enhancement in both accuracy and completeness. ACMM (Xu and Tao 2019) uses multi-view consistency and cascading structure to tackle reconstruction of textureless areas, while subsequent works such as ACMMP (Xu et al. 2022) further introduce a plane-prior probabilistic graph model and thus provide plane hypothesis for textureless areas. In contrast, TAPA-MVS (Romanoni and Matteucci 2019) and PCFMVS (Kuhn, Lin, and Erdler 2019) employ superpixel for image segmentation and planarization of textureless areas. However, the reconstruction performance in textureless areas is contingent upon the actual segmentation and fitting of the superpixels. CLD-MVS (Li et al. 2020) incorporate a confidence estimator to interpolate unreliable pixels, but their definition way of the confidence makes the result susceptible to occlusion and highlights. MAR-MVS (Xu et al. 2020) leverages epipolar geometry to determine the optimal neighborhood images and scale for pixels, yet its fixed patch size limits its adaptability across various application scenarios. APD-MVS (Yuesong Wang et al. 2023) employs patches with adaptive deformation strategy and pyramid architecture, but the time consumption of its iterative process poses a challenge in large-scale datasets. Learning-based MVS Methods Unlike traditional MVS methods that suffer from hand-crafted image features, learning-based MVS methods typically leverage convolutional neural networks to extract high-dimensional image features, thereby enabling a more rational 3D reconstruction. MVSNET (Yao et al. 2018) has pioneered the construction through introducing differentiable 3D cost volumes using deep neural network, enabling numerous methods for further research. Certain classic multi-stage methods, including Cas-MVSNet (Gu et al. 2020), utilize a coarse-to-fine strategy to refine and upscale depth from low-resolution, thereby reducing the cost volumes while expanding the receptive field. In terms of memory reduction, several methods like Iter-MVS (Wang et al. 2022a) leverage GRU to regulate the 3D cost volumes along the depth direction. Concerning feature extraction, AA-RMVSNET (Wei et al. 2021) aggregates multi-scale variable convolution for adaptive feature extraction. Additionally, MVSTER (Wang et al. 2022b) integrates the transformer architecture into MVS tasks to capture multi-dimensional attention feature information. 
Despite these advancements, it is worth noting that numerous learning-based MVS methods risk severe degradation when applied to target domains that deviate from the training set. Method Given a series of input images I = {Ii|i = 1, ..., N}, each one with specific camera parameters Pi = {Ki, Ri, Ci}. Our goal is to estimate the depth map Di for each image and subsequently merge them into a 3D point cloud. Fig. 2 illustrates our overall pipeline, specific design of each component will be detailed in subsequent sections. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6872 Figure 2: An illustrated pipeline of our proposed method. Images with multi views are initially downsampled and further allocated into our multi-scale architecture. Through leveraging the SAM-based segmentation, we carry out patch deformation on the matching cost to gain multi-scale matching costs Cms. By integrating Cms with the projection color error Cpc and the reprojection error Crp, the aggregated cost is acquired. Then we again employ the SAM-based segmentation for patch deformation in propagation, succeeded by load-balancing within each search domain. Subsequently, we alternately iterates spherical gradient refinement on normals and pixelwise search interval on depths for enhanced accuracy. Finally, we employ EM-based optimization for the hyperparameter tuning of wms, wrp, wpc and reassign them for the next iteration procedure. Figure 3: Comparative analysis of patch deformation strategies between the SAM-based instance segmentation and the Canny edge detection on partial scenes of ETH3D dadaset (office and kicker). From top to bottom, (a), (b) and (c) respectively show the original images, the SAM-based segmentation results and the Canny edge detection results. Representative areas in red boxes illustrate the advantages of SAM-based segmentation over Canny edge detection. Why Using Segment Anything Model? The Segment Anything Model (SAM) can effectively discriminate between different instances, extracting subtle edge while neglecting strong illumination disturbances. To validate its effectiveness, we conduct the SAM-based instance segmentation and the Canny edge detection for patch deformation on partial scenarios of ETH3D datasets. As shown in Fig. 3, when confronting with scenarios characterized by extensive similar colors and occlusion like office, SAM can effectively separate edges that exhibit similar colors on both sides with inconsistent depths, whereas Canny edge detection simply ignores them. Additionally, textureless areas like floors and walls in kicker can be effectively separated into different instances through SAM segmentation without illumination interference. In contrast, Canny edge detection incorrectly detects these illumination areas as edges, adversely affecting patch deformation. Segmentation-Driven Patch Deformation Patch Deformation on Matching Cost Some recent methods (Wang et al. 2021; Yuesong Wang et al. 2023) attempt to leverage patch deformation to improve matching cost or propagation scheme. As shown in Fig. 1, due to their insufficiency in exploiting edge information, they often cross boundary and reference areas with discontinuous depths, thereby yielding unsatisfactory results, especially when confronting with scenarios characterized by extensive similar colors and occlusions like forests and farmlands. 
Simultaneously, superpixel-based segmentation approaches (Romanoni and Matteucci 2019) also struggle to precisely recognize certain critical edges within these scenarios. They also lack instance-level semantic information to broaden the receptive field, and thus fail to adapt to pixelwise characteristics. SAM segmentation can mitigate this issue, as it separates different instances to extract subtle edge information while ignoring strong illumination disturbances. Consequently, we can leverage instance segmentation to better exploit edge information and further introduce it into patch deformation.

Figure 4: Patch deformation on matching cost. (a) is the matching cost scheme from ACMMP, (b) shows the distance along each direction, and (c) illustrates the deformed patch.

Specifically, we perform instance segmentation using SAM, denoted as $F$, on the input image $I_i$ to generate masks for the diverse instances. Hence we have $M = F(I_i)$, where $M$ is an image mask whose size is consistent with $I_i$. For each pixel $p$, we compute the bilateral weighted adaptation of the normalized cross-correlation score (NCC) (Schönberger et al. 2016) between the reference image $I_i$ and the source image $I_j$, which can be calculated as follows: $\rho(p, W_p^i) = \frac{\mathrm{cov}(W_p^i, W_p^j)}{\sqrt{\mathrm{cov}(W_p^i, W_p^i)\,\mathrm{cov}(W_p^j, W_p^j)}}$ (1), where $\mathrm{cov}$ is the weighted covariance, and $W_p^i$ and $W_p^j$ are the corresponding image patches on images $I_i$ and $I_j$, respectively. The goal of minimizing the matching cost is to obtain the optimal matching depths via the computation of color differences. However, when objects with varying depths exhibit similar colors, they are susceptible to matching inaccuracies, as shown in Fig. 4(a). Therefore, we introduce patch deformation to compute the matching cost on a sample patch $W$ that intersects different instances. Specifically, we first measure the distances from the central pixel $p$ to the left, right, lower and upper boundaries of $M$, denoted respectively as $d_l$, $d_r$, $d_d$, and $d_u$. Then we deform the shape of $W$ to match these boundaries. The new shape $(L_h, L_v)$ of the deformed patch is defined as $\left(\frac{d_l + d_r}{d_l + d_r + d_d + d_u}L,\ \frac{d_d + d_u}{d_l + d_r + d_d + d_u}L\right)$ (2), where $L$ denotes the side length of the square patch before deformation. Additionally, we reposition the patch's center by adding an offset: $\Delta o(p) = \left(\frac{d_l - d_r}{d_l + d_r}L_h,\ \frac{d_d - d_u}{d_d + d_u}L_v\right)$ (3), where $L_h$ and $L_v$ are respectively the horizontal and vertical lengths of the deformed patch. The new center of the sample patch thus becomes $p + \Delta o(p)$.

Figure 5: Patch deformation on propagation. (a) is the propagation pattern of ACMMP, (b) depicts the length of each propagation branch, and (c) illustrates different search domains with different colors.

Both patch deformation and the center offset allow pixels positioned in boundary regions to orient their patches more intensively toward the center of their own instance. Enhancing the receptive field for homogeneous pixels in this way yields more robust results, consequently reducing potential estimation errors. Note that, considering the runtime, we restrict the number of calculations for each window such that the number of calculations after deformation never surpasses the initial total number $(L/2)^2$.

Patch Deformation on Propagation After SAM-based instance segmentation, pixels within the same instance typically exhibit similar depths, whereas noticeable depth discontinuities frequently arise at the boundaries between instances.
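Before turning to the propagation scheme, the following minimal NumPy sketch makes the patch deformation of Eqs. (2)-(3) concrete for a single pixel. It assumes the SAM masks have already been merged into a per-pixel instance-ID map; the function name, the clipping of boundary distances to the original patch radius, and the sign convention of the offset are illustrative assumptions, not taken from the SD-MVS code.

```python
import numpy as np

def deform_patch(instance_map: np.ndarray, y: int, x: int, L: int = 11):
    """Sketch of SD-MVS patch deformation for one pixel (Eqs. 2-3).

    instance_map: (H, W) integer array, one SAM instance id per pixel.
    (y, x): central pixel p.  L: side length of the square patch before deformation
    (L = 11 follows the paper's parameter setting).
    Returns the deformed patch size (L_h, L_v) and the center offset.
    """
    H, W = instance_map.shape
    inst = instance_map[y, x]
    half = L // 2

    def dist(dy_step, dx_step):
        # Distance from p to the instance boundary in one direction,
        # clipped to the original patch radius (an assumption of this sketch).
        d, yy, xx = 0, y, x
        while d < half:
            yy, xx = yy + dy_step, xx + dx_step
            if not (0 <= yy < H and 0 <= xx < W) or instance_map[yy, xx] != inst:
                break
            d += 1
        return max(d, 1)  # avoid zero-length sides

    d_l, d_r = dist(0, -1), dist(0, 1)
    d_d, d_u = dist(1, 0), dist(-1, 0)   # image y grows downward
    s = d_l + d_r + d_d + d_u

    # Eq. 2: deformed patch size.
    L_h = (d_l + d_r) / s * L
    L_v = (d_d + d_u) / s * L

    # Eq. 3: center offset pulling the patch toward the instance interior
    # (the sign convention here is a guess consistent with that intent).
    off_x = (d_l - d_r) / (d_l + d_r) * L_h
    off_y = (d_d - d_u) / (d_d + d_u) * L_v
    return (L_h, L_v), (off_x, off_y)

# Toy usage: a pixel near the border between two instances.
labels = np.zeros((50, 50), dtype=int)
labels[:, 30:] = 1
print(deform_patch(labels, y=25, x=28))
```

As the offset shows, a pixel close to an instance boundary ends up with a patch shifted away from that boundary, which is exactly the behavior the paper relies on to avoid mixing depths across instances.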
Considering that propagation involves updating the potential depths and normals within the surrounding area of each pixel, depth discontinuities will inevitably affect propagation. Consequently, we leverage patch deformation to adaptively alter the propagation scheme. The adaptive checkerboard propagation scheme (Xu and Tao 2019) is conducted by introducing the optimal hypotheses from four near and four far search domains, as illustrated in Fig. 5(a). However, its search domain between two adjacent diagonal directions is too dense, which leads to an imbalanced search-space density and a risk of selecting redundant values. Hence we modify its oblique direction into a straight line extending to the corner of each patch. Subsequently, we propose patch deformation on propagation via SAM, which adjusts the propagation patch shape and direction for each pixel. As illustrated in Fig. 5(b), we adapt the propagation directions according to the shape of the surrounding mask. Specifically, denoting $l_l$, $l_r$, $l_d$, and $l_u$ as the lengths from the central pixel $p$ to the left, right, lower and upper edges of the patch, respectively, we obtain: $l_u = \frac{d_u}{d_u + d_d}L_v,\ l_l = \frac{d_l}{d_l + d_r}L_h$ (4). Both $l_r$ and $l_d$ can be obtained similarly. Therefore, the length and direction of the slanted branch $l_{ul}$ are given by: $l_{ul} = \sqrt{l_u^2 + l_l^2},\ \alpha_{ul} = \arctan\frac{l_u}{l_l}$ (5), where $l_{ul}$ refers to the length of this slanted branch, and $\alpha_{ul}$ represents the angle between the upward branch and this slanted branch. The corresponding lengths and directions of the other three slanted branches can be obtained similarly.

Figure 6: Different design architectures between ACMMP and our method. (a) illustrates the cascading network architecture employed in ACMMP, whereas (b) depicts our method with the multi-scale architecture.

Having adjusted all directions and lengths, we encounter another challenge: the search domain for each branch is unbalanced. Since the process of selecting the pixel with the minimal cost is essentially a spatial neighborhood search, an imbalance emerges due to the different lengths of the branches. The search along a shorter branch suffers from unreliable results due to its smaller search domain. To address this, we accordingly modify the searching strategy in the propagation scheme, as shown in Fig. 5(c). Specifically, we employ eight different colors to depict separate search domains along the eight directions centered on $p$. Instead of taking the central pixel $p$ as the dividing point, we use the midpoint of the sum of the lengths of two opposite branches to divide the search domain. In experiments, pixels with the same color are grouped into the same domain, with CUDA operators balancing the load of searching for minima within each color-specific region. Therefore, our proposed strategy ensures load balance across all directions and allows for faster convergence.

Multi-scale Consistency Many conventional methods adopt cascading architectures by sequentially loading different scales of images into the GPU, as shown in Fig. 6(a). This can be time-consuming due to the limited transfer speed between CPU and GPU. Therefore, we draw inspiration from mipmaps (Williams 1983) in computer graphics, a technique that loads different scales of images in parallel at once, to replace the previous cascading architecture with our proposed parallel architecture. Specifically, we first perform image downsampling in the CPU.
Subsequently, the multi-scale images are assembled and loaded together into the GPU, as depicted in Fig. 6(b). The multi-scale images are then processed together through matching cost, propagation and refinement in the GPU. Finally, all predicted depth images are transferred back to the CPU. Denoting the maximum memory consumption of the ACMMP cascading architecture as $\sigma$, and the number of memory read operations as $k$, this technique enables us to load all scales of images into GPU memory at a reasonable cost of $\frac{4}{3}\sigma$ instead of sequentially loading images, thereby eliminating the need for $k-1$ additional memory read operations. Based on this architecture, we further introduce multi-scale consistency on matching cost and propagation. Regarding the matching cost, we first apply SAM segmentation to the $k$-th level downsampled image. Based on the segmentation results, we construct the deformed patch and further compute the $k$-th level matching cost, denoted as $c_k$. Therefore, the multi-scale matching cost is given by: $C_{ms} = \frac{\sum_k c_k}{k}$ (6). Concerning propagation, the multi-scale consistency aggregates the search domains of all scales in each direction, yielding a total of eight distinct search domains. Conclusively, the eight values with the lowest cost within each domain are chosen as new hypotheses for further computation.

Aggregated Cost During the patch-matching phase, we consider not only the multi-scale matching cost $C_{ms}$, but also the reprojection error $C_{rp}$ and the projection color gradient error $C_{pc}$. $C_{rp}$, proposed in ACMMP, validates the depth estimation from geometric consistency. $C_{pc}$ measures the color consistency between the current pixel $p_i$ in the reference image $I_i$ and its corresponding pixel $p_j$ in the source image $I_j$: $C_{pc} = \max\{\|\nabla I_j(p_j) - \nabla I_i(p_i)\|, \tau\}$ (7), where $\nabla$ represents the Laplacian operator, $p_j$ denotes the pixel in image $I_j$ projected from pixel $p_i$ in $I_i$, and $\tau$ is the truncation threshold that robustifies the cost against outliers. With these terms, the aggregated cost $C_{ag}$ is given by: $C_{ag} = w_{ms}C_{ms} + w_{rp}C_{rp} + w_{pc}C_{pc}$ (8), where $w_{ms}$, $w_{rp}$, and $w_{pc}$ respectively represent the aggregation weights of each component.

Spherical Gradient Refinement Two types of refinement strategies are adopted in ACMMP: 1. local perturbation, a local search conducted by perturbing the current depth and normal with a small value; 2. random selection, which achieves a global search to suit potential depth discontinuities by assigning a random value. Since the edge information has already been segmented out through SAM, we only need to consider local perturbations. Given depth $d$ and normal $\mathbf{n} = (n_x, n_y, n_z)$ in Cartesian coordinates, the new depth $d'$ and normal $\mathbf{n}'$ after the local perturbation are defined by: $d' \leftarrow d + \delta_d$, $\mathbf{n}' \leftarrow V_N(n_x + \delta_x, n_y + \delta_y, n_z + \delta_z)$ (9), where $V_N$ is a normalization function ensuring $\|\mathbf{n}'\| = 1$, and $\delta$ denotes a random value chosen from a fixed interval.

Figure 7: Spherical gradient refinement procedure. (a) illustrates the rotation from $\mathbf{n}$ to $\mathbf{n}'$, (b) illustrates the rotation from $\mathbf{n}'$ to $\mathbf{n}''$, and (c) indicates the two old and new orthogonal perturbation directions $e_1, e_2$ and $e_1', e_2'$.

However, this strategy is incompatible with the definition of a normal. It introduces a higher sensitivity to axes with smaller values during the search process, resulting in an unequal ratio of change along the xyz axes. Therefore, we propose the spherical gradient descent refinement, which utilizes a structured representation to converge to more accurate hypotheses.
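Before detailing the spherical refinement, the short sketch below illustrates the cost aggregation of Eqs. (6)-(8) as it might look in NumPy. The per-pixel cost maps are assumed to be computed elsewhere (Eqs. 1 and 7 and the reprojection error of ACMMP); the default weights follow the paper's parameter setting, but all array names are illustrative.

```python
import numpy as np

def multi_scale_cost(per_scale_costs):
    """Eq. 6: combine the k per-scale matching costs c_k into C_ms (their average)."""
    c = np.stack(per_scale_costs, axis=0)          # (k, H, W)
    return c.mean(axis=0)

def aggregated_cost(c_ms, c_rp, c_pc, w=(1.0, 0.2, 0.2)):
    """Eq. 8: C_ag = w_ms*C_ms + w_rp*C_rp + w_pc*C_pc.
    The default weights (1, 0.2, 0.2) are the paper's reported setting."""
    w_ms, w_rp, w_pc = w
    return w_ms * c_ms + w_rp * c_rp + w_pc * c_pc

# Illustrative usage with random per-pixel cost maps (3 scales).
H, W, K = 64, 80, 3
costs = [np.random.rand(H, W) for _ in range(K)]
c_ag = aggregated_cost(multi_scale_cost(costs),
                       c_rp=np.random.rand(H, W),
                       c_pc=np.random.rand(H, W))
print(c_ag.shape)
```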
Spherical Coordinate As shown in Fig. 7, given the normalized normal, we first randomly choose two orthogonal vectors, $e_1$ and $e_2$, perpendicular to the normal $\mathbf{n}$ as the perturbation directions. We then use the angles $\theta_1$ and $\theta_2$ as the degrees of rotation for iterative refinement. The normal first undergoes a counterclockwise rotation by $\theta_1$ degrees around $e_1$ as the rotation axis. Subsequently, the normal is further rotated counterclockwise by $\theta_2$ degrees around $e_2$ as the rotation axis. According to Rodrigues' rotation formula, the ultimately updated normal $\mathbf{n}''$ is given by: $\mathbf{n}' = \cos\theta_1 \cdot \mathbf{n} + \sin\theta_1(e_1 \times \mathbf{n})$, $\mathbf{n}'' = \cos\theta_2 \cdot \mathbf{n}' + \sin\theta_2(e_2 \times \mathbf{n}')$ (10). This is analogous to sliding a vertex directed by the normal on the surface of a sphere, which ensures that the normal vector remains normalized both before and after rotation. By finding two orthogonal bases perpendicular to the normal for refinement, it can be ensured that perturbations in each direction are equivalent. This approach aligns more closely with the geometric essence of the normal, which is defined on a sphere rather than on individual axes of the xyz coordinate system. As a result, our approach boosts the robustness and stability of the refinement process.

Gradient Descent We also utilize gradient descent in our method. Its primary merit lies in its ability to restrict the search space to the vicinity of probable solutions. Denoting the total number of iterations as $N_{max}$, the rotation angle $\theta$ for the $i$-th round is randomly selected from the range $[0,\ 5 \cdot 2^{N_{max}-i}]$. After one round of refinement for depth $d$ and normal $\mathbf{n}$, we determine the new directions for local perturbations, $e_1'$ and $e_2'$, based on the result of the previous search. As such, we get: $e_1' \leftarrow \mathbf{n}'' - \mathbf{n}$, $e_2' \leftarrow e_1' \times \mathbf{n}''$ (11). Here, $e_1'$ is aligned with the vector sum of the previous round's perturbation, while $e_2'$ is a vector perpendicular to both $\mathbf{n}''$ and $e_1'$, as shown in Fig. 7(c). By restricting the search domain to the neighbourhood of previous solutions, each round of search takes place on the orthogonal plane defined by the previous search direction and the current normal direction, thereby enabling faster convergence to the optimal solution.

Pixelwise Depth Interval Search ACMMP employs a fixed interval for local perturbations on depth, but a static perturbation range cannot adapt well to locally varying scene depth. To address this, we introduce a pixelwise depth search interval chosen within the deformed patch. Specifically, for each pixel, we extract the depth values of all pixels encompassed by its deformed patch, and choose the maximal and minimal values of this set as the depth boundaries for perturbation. Additionally, considering our iterative refinement strategy, during the $i$-th iteration the pixelwise search interval is chosen within the deformed patch obtained from the $i$-th downsampled image, thereby narrowing the perturbation interval to yield more accurate hypotheses.

EM-based Hyperparameters Optimization While computing the aggregated matching cost, the hyperparameters of each component are typically determined empirically, which may result in suboptimal outcomes for different scenes. To mitigate this, we leverage the Expectation-Maximization (EM) algorithm to alternately optimize the hyperparameters and the aggregated cost, thereby enhancing both the robustness and effectiveness of our method.
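A minimal NumPy sketch of the spherical gradient refinement of Eqs. (10)-(11) is given below, before turning to the E- and M-steps. The cost function, the keep-if-better acceptance rule, and the normalization of the new perturbation axes are simplifying assumptions of this sketch rather than details from the paper.

```python
import numpy as np

def rotate(n, axis, theta):
    """Rodrigues rotation of a unit normal n about a unit axis orthogonal to it (Eq. 10 form)."""
    return np.cos(theta) * n + np.sin(theta) * np.cross(axis, n)

def spherical_refine(n, cost_fn, n_iters=3):
    """Rotate the normal on the unit sphere around two orthogonal axes, keep the update
    if the cost improves, and derive the next axes from the previous move (Eq. 11).
    n_iters = 3 matches the paper's N_max."""
    n = n / np.linalg.norm(n)
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(n, helper); e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)
    best = cost_fn(n)
    for i in range(1, n_iters + 1):
        # Shrinking rotation range, as in the paper: theta ~ U(0, 5 * 2**(N_max - i)) degrees.
        lim = np.deg2rad(5 * 2 ** (n_iters - i))
        t1, t2 = np.random.uniform(0, lim, size=2)
        n_prime = rotate(n, e1, t1)
        n_new = rotate(n_prime, e2, t2)
        n_new /= np.linalg.norm(n_new)
        c = cost_fn(n_new)
        if c < best:                         # acceptance rule assumed for this sketch
            best = c
            e1 = n_new - n                   # Eq. 11: follow the accepted move
            if np.linalg.norm(e1) > 1e-8:
                e1 /= np.linalg.norm(e1)     # normalization added for numerical stability
            e2 = np.cross(e1, n_new)
            n = n_new
    return n

# Toy usage: pull a normal toward a target direction.
target = np.array([0.0, 0.0, 1.0])
refined = spherical_refine(np.array([0.6, 0.0, 0.8]), lambda v: -float(v @ target))
print(refined)
```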
E-Step: Optimize $C_{ag}$ By fixing $w_{ms}$, $w_{rp}$, and $w_{pc}$, we can optimize the aggregated cost $C_{ag}$, formulated as: $\min_{C_{ms}, C_{rp}, C_{pc}} C_{ag} = w_{ms}C_{ms} + w_{rp}C_{rp} + w_{pc}C_{pc}$ (12). After optimization, we obtain the optimal depth estimation under the current hyperparameters.

M-Step: Optimize $w_{ms}$, $w_{rp}$, $w_{pc}$ By fixing $C_{ms}$, $C_{rp}$ and $C_{pc}$, we can optimize $w_{ms}$, $w_{rp}$ and $w_{pc}$, defined by: $\min_{w_{ms}, w_{rp}, w_{pc}} C_{ag} = w_{ms}C_{ms} + w_{rp}C_{rp} + w_{pc}C_{pc}$, s.t. $w_{ms} + w_{rp} + w_{pc} = 1$, $w_{ms}, w_{rp}, w_{pc} > \eta$ (13). All hyperparameters are required to exceed a minimal value $\eta$, and we impose a normalization constraint ensuring that their sum equals 1 to mitigate significant variances. Following the E-step optimization, we alternately optimize the hyperparameters and feed them back into the E-step for the next round of aggregated-cost optimization. Since it may be challenging to obtain an analytical solution to the optimization problem in the M-step, we use numerical optimization methods such as Newton's method (Qi and Sun 1993) to obtain the optimal solutions for $w_{ms}$, $w_{rp}$, and $w_{pc}$. A comprehensive derivation of the optimization can be found in the supplementary material. In practice, some pixels may have depth estimation errors if all pixels are selected. Hence, we only select pixels whose SIFT features can be matched between different images, and then calculate the aggregated cost between the pixels corresponding to these features.

Figure 8: An illustration of the qualitative results on partial scenes of the ETH3D dataset (office, old computer, and pipes). Some challenging areas are shown in red boxes. It is obvious that our method outperforms the others, especially in large textureless areas.

Method | Train Acc. | Train Comp. | Train F1 | Test Acc. | Test Comp. | Test F1
PatchMatchNet | 64.81 | 65.43 | 64.21 | 69.71 | 77.46 | 73.12
IterMVS-LS | 79.79 | 66.08 | 71.69 | 84.73 | 76.49 | 80.06
MVSTER | 68.08 | 76.92 | 72.06 | 77.09 | 82.47 | 79.01
EPP-MVSNet | 82.76 | 67.58 | 74.00 | 85.47 | 81.79 | 83.40
EPNet | 79.36 | 79.28 | 79.08 | 80.37 | 87.84 | 83.72
COLMAP | 91.85 | 55.13 | 67.66 | 91.97 | 62.98 | 73.01
PCF-MVS | 84.11 | 75.73 | 79.42 | 82.15 | 79.29 | 80.38
MAR-MVS | 81.98 | 77.19 | 79.21 | 80.24 | 84.18 | 81.84
ACMP | 90.12 | 72.15 | 79.79 | 90.54 | 75.58 | 81.51
ACMMP | 90.63 | 77.61 | 83.42 | 91.91 | 81.49 | 85.89
APD-MVS | 89.14 | 84.83 | 86.84 | 89.54 | 85.93 | 87.44
SD-MVS (ours) | 89.63 | 84.52 | 86.94 | 88.96 | 87.49 | 88.06
Table 1: Quantitative results on the ETH3D benchmark at a threshold of 2 cm. Our method accomplishes the best F1 score.

Experiments
Datasets and Implementation Details We evaluate our work on both the ETH3D high-resolution benchmark (Schöps et al. 2017) and the Tanks and Temples benchmark (TNT) (Knapitsch et al. 2017). We compare our work against state-of-the-art learning-based methods, including PatchMatchNet (Wang et al. 2021), IterMVS-LS (Wang et al. 2022a), MVSTER (Wang et al. 2022b), EPP-MVSNet (Ma et al. 2021) and EPNet (Su and Tao 2023), and traditional MVS methods, including COLMAP (Schönberger et al. 2016), PCF-MVS (Kuhn, Lin, and Erdler 2019), MAR-MVS (Xu et al. 2020), ACMP (Xu and Tao 2020), ACMMP (Xu et al. 2022) and APD-MVS (Yuesong Wang et al. 2023). Note that experiments are carried out on downsampled images with half of the original resolution on ETH3D, and on original images on TNT. Concerning the parameter setting, $\{w_{ms}, w_{rp}, w_{pc}, L, k, \tau, N_{max}, \eta\} = \{1, 0.2, 0.2, 11, 3, 2, 3, 0.1\}$. In the cost calculation, we adopt the matching strategy of every other row and column. Our method is implemented on a system equipped with an Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz and an NVIDIA GeForce RTX 3080 graphics card.
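For completeness, the following toy sketch illustrates the EM-style alternation between the aggregated cost and its weights described above. The paper solves the M-step with Newton's method under the constraints of Eq. (13); the inverse-mean-cost update below is only a heuristic placeholder for that step, and the approximate projection onto the constraint set is an assumption of this sketch.

```python
import numpy as np

def em_tune_weights(c_ms, c_rp, c_pc, n_rounds=3, eta=0.1):
    """Toy alternation: E-step evaluates C_ag under fixed weights; M-step re-balances
    the weights under sum(w) = 1 and w > eta (heuristic stand-in, not the paper's solver)."""
    w = np.array([1.0, 0.2, 0.2])            # initialized from the paper's setting
    w = w / w.sum()
    comps = np.stack([c_ms, c_rp, c_pc], axis=0)      # (3, H, W)
    for _ in range(n_rounds):
        # E-step: aggregated cost under the current weights (Eq. 12).
        c_ag = np.tensordot(w, comps, axes=1)          # (H, W)
        # M-step (heuristic): down-weight components with large average cost.
        mean_cost = comps.reshape(3, -1).mean(axis=1)
        w = 1.0 / (mean_cost + 1e-8)
        w = np.maximum(w / w.sum(), eta)               # approximate constraint projection
        w = w / w.sum()
    return w, c_ag

# Toy usage with random cost maps.
H, W = 32, 32
w, c_ag = em_tune_weights(np.random.rand(H, W), np.random.rand(H, W), np.random.rand(H, W))
print(w, c_ag.shape)
```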
We take ACMP Method Intermediate Advanced Pre. Rec. F1 Pre. Rec. F1 PatchMatchNet 43.64 69.37 53.15 27.27 41.66 32.31 CasMVSNet 47.62 74.01 56.84 29.68 35.24 31.12 IterMVS-LS 47.53 74.69 56.94 28.70 44.19 34.17 MVSTER 50.17 77.50 60.92 33.23 45.90 37.53 EPP-MVSNet 53.09 75.58 61.68 40.09 34.63 35.72 EPNet 57.01 72.57 63.68 34.26 50.54 40.52 COLMAP 43.16 44.48 42.14 31.57 23.96 27.24 PCF-MVS 49.82 65.68 55.88 34.52 35.36 35.69 ACMP 49.06 73.58 58.41 34.57 42.48 37.44 ACMMP 53.28 68.50 59.38 33.79 44.64 37.84 APD-MVS 55.58 75.06 63.64 33.77 49.41 39.91 SD-MVS (ours) 53.78 77.63 63.31 35.53 47.37 40.18 Table 2: Quantitative results on TNT dataset. Our method accomplishes competitive F1 score with SOTA methods. (Xu and Tao 2020) as the backbone of our method. Results on ETH3D and TNT Qualitative results on ETH3D are illustrated in Fig. 8. It is obvious that our method reconstructs the most comprehensive results, especially in large textureless areas like floors, walls and doors, without introducing conspicuous detail distortion. More qualitative results on ETH3D and TNT benchmark can be referred in supplementary material. Tab. 1 and Tab. 2 respectively present quantitative results on the ETH3D and the TNT benchmark. Note that the first group is learning-based methods and the second is traditional methods. Meanwhile, the best results are marked in bold while the second-best results are underlined. Our method achieves the highest F1 score on ETH3D datasets, giving rise to state-of-the-art performance. Meanwhile, our method achieves competitive results with SOTA methods in TNT datasets like EPNET and APD-MVS, falling short by less than 0.5% in F1 score. Especially, our method shows significant improvement in completeness in both datasets, demonstrating its robustness in recovering textureless areas. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6877 Method 2cm 10cm Acc. Comp. F1 Acc. Comp. F1 w/. ACM. Cost 90.16 74.61 81.27 98.01 89.04 93.16 w/o. Adp. Cost 89.92 78.01 83.42 97.92 91.87 94.71 w/o. Mul. Cost 89.84 79.94 84.55 97.9 93.36 95.53 w/. ACM. Pro. 89.83 79.96 84.52 97.91 93.58 95.54 w/o. Adp. Pro. 89.57 81.74 85.38 97.81 94.96 96.29 w/o. Mul. Pro. 89.69 81.97 85.54 97.87 95.17 96.44 w/o. Ref. 86.75 70.45 77.6 97.04 85.37 90.72 w/. Gip. Ref. 89.3 78.51 83.43 97.74 91.56 94.48 w/. ACM. Ref. 89.42 79.83 84.25 97.79 92.64 95.11 w/o. EM A 89.74 78.16 83.45 97.89 91.78 94.57 w/o. EM B 89.45 79.87 84.27 97.81 93.05 95.3 SD-MVS 89.63 84.52 86.94 97.85 96.74 97.28 Table 3: Quantitative results of the ablation studies on ETH3D benchmark to validate each proposed component. Memory and Runtime Comparison To demonstrate the efficiency of our method, we compare both GPU memory usage and runtime among various methods on ETH3D training datasets, as depicted in Fig. 9. Note that all experiments are executed on original images whose number have been standardized to 10 across all scenes. Moreover, to exclude the impact of unrelated variables, all methods are conducted on a same system, whose hardware configuration has been specified in previous section. Concerning learning-based methods, while IterMVS-LS exhibits the shortest runtime, its memory overhead exceeds the maximum capacity of mainstream GPUs. Other stateof-the-art (SOTA) learning-based methods also suffer from excessive memory consumption, making them impractical for the reconstruction of large-scale outdoor scenarios. 
Although SD-MVS consumes approximately one-third more memory usage than traditional SOTA methods like APD-MVS and ACMMP, our runtime is only half of them, thanks to our multi-scale consistency architecture. Therefore, our method strikes the optimal balance between time and memory usage without sacrificing performance, demonstrating its effectiveness and practicality. Ablation Studies We validate the rationale behind the design of each part of our method through ablation studies, as shown in Tab. 3. Matching Cost with Adaptive Patch In terms of matching cost, we respectively remove patch deformation (w/o. Adp. Cost), multi-scale consistency (w/o. Mul. Cost) and both of them (w/. ACM. Cost). Since w/. ACM. Cost has neither deformable nor multi-scale, it produces the worst results. w/o. Mul. Cost slightly outperformed w/o. Adp. Cost, yet both are inferior to SD-MVS, implying that patch deformation contribute more than multi-scale consistency. Adaptive Propagation with Load-balancing In terms of propagation, we respectively remove patch deformation (w/o. Adp. Pro.), multi-scale consistency (w/o. Mul. Pro.) and apply propagation scheme from ACMMP (w/. ACM. Pro.). Given that patches in ACMMP do not deform in accordance with the patch, its performance fell short of expectations. Both w/o. Adp. Pro. and w/o. Mul. Pro. delivered Figure 9: GPU memory usage (GB) and runtime (second) between different methods on ETH3D training datasets. similar results, yet fell short in comparison to SD-MVS, indicating that both patch deformation and multi-scale consistency on propagation are equally crucial. Spherical Gradient Refinement In terms of refinement, we respectively remove refinement (w/o. Ref.), exchange the refinement module into Gipuma (Galliani, Lasinger, and Schindler 2015) (w/. Gip. Ref.) and switch the refinement module into ACMMP (w/. ACM. Ref.). As observed, the absence of refinement significantly diminishes the results. However, introducing Gipuma refinement brings about noticeable progress, with further advancements achieved after adopting ACMMP refinement. Nonetheless, both refinement methods are worse than SD-MVS, proving the necessity of spherical gradient refinement. EM-based Hyperparameters Optimization We conduct two experiments (w/o. EM A and w/o. EM B) by removing EM-based Optimization and respectively setting (wms, wrp, wpc) to (1, 0.5, 0.5) and (1, 0.2, 0.2). The results highlight the impact of hyperparameter settings on the final results. Furthermore, their inferior performances compared to SD-MVS evidences the importance of automatic parameter tuning by the proposed EM-based Optimization. Conclusion In this paper, we presented SD-MVS, a novel MVS method designed to effectively address challenges posed by textureless areas. The proposed method consists of an adaptive patch deformation with multi-scale consistency, a spherical gradient refinement and EM-based hyperparameter optimization. Our method has achieved state-of-the-art performance on ETH3D high-resolution benchmark, while being memory-friendly and with less time cost. In the future, we will tackle difficulty in highlight areas in matching cost and view selection strategy in pursuit of superior performance. Acknowledgements This work was supported by the National Natural Science Foundation of China under Grant 62172392, the Central Public-interest Scientific Institution Basal Research Funds(No. Y2022QC17) and the Innovation Research Program of ICT CAS (E261070). 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6878 References Bleyer, M.; Rhemann, C.; and Rother, C. 2011. PatchMatch Stereo - Stereo Matching with Slanted Support Windows. In Proc. Brit. Mach. Vis. Conf. (BMVC), 14.1–14.11. Cao, M.; Zheng, L.; Jia, W.; Lu, H.; and Liu, X. 2021. Accurate 3-D Reconstruction Under IoT Environments and Its Applications to Augmented Reality. IEEE Trans. Ind. Inf., 17(3): 2090–2100. Cremers, D.; and Kolev, K. 2011. Multiview Stereo and Silhouette Consistency via Convex Functionals over Convex Domains. IEEE Trans. Pattern Anal. Mach. Intell., 33(6): 1161–1174. Galliani, S.; Lasinger, K.; and Schindler, K. 2015. Massively Parallel Multiview Stereopsis by Surface Normal Diffusion. In Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 873– 881. Gu, X.; Fan, Z.; Zhu, S.; Dai, Z.; Tan, F.; and Tan, P. 2020. Cascade Cost Volume for High-Resolution MultiView Stereo and Stereo Matching. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2492–2501. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A. C.; Lo, W.Y.; Doll´ar, P.; and Girshick, R. 2023. Segment Anything. arXiv:2304.02643. Knapitsch, A.; Park, J.; Zhou, Q.-Y.; and Koltun, V. 2017. Tanks and Temples: Benchmarking Large-Scale Scene Reconstruction. ACM Trans. Graph., 36(4). Kuhn, A.; Lin, S.; and Erdler, O. 2019. Plane Completion and Filtering for Multi-View Stereo Reconstruction. In Proc. DAGM German Conf. (GCPR), volume 11824, 18–32. Lee, J. Y.; DeGol, J.; Zou, C.; and Hoiem, D. 2021. PatchMatch-RL: Deep MVS with Pixelwise Depth, Normal, and Visibility. In Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 6138–6147. Li, Z.; Gogia, P. C.; and Kaess, M. 2019. Dense Surface Reconstruction from Monocular Vision and LiDAR. In Proc. IEEE Conf. Robot. Automat. (ICRA), 6905–6911. Li, Z.; Zuo, W.; Wang, Z.; and Zhang, L. 2020. ConfidenceBased Large-Scale Dense Multi-View Stereo. IEEE Trans. on Image Process., 29: 7176–7191. Ma, X.; Gong, Y.; Wang, Q.; Huang, J.; Chen, L.; and Yu, F. 2021. EPP-MVSNet: Epipolar-assembling based Depth Prediction for Multi-view Stereo. In Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 5712–5720. Orsingher, M.; Zani, P.; Medici, P.; and Bertozzi, M. 2022. Revisiting PatchMatch Multi-View Stereo for Urban 3D Reconstruction. In Proc. IEEE Intelligent Vehicles Symp. (IV), 190–196. Qi, L.; and Sun, J. 1993. A nonsmooth version of Newton’s method. Math. Program., 58: 353–367. Romanoni, A.; and Matteucci, M. 2019. TAPA-MVS: Textureless-Aware PAtchMatch Multi-View Stereo. In Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 10412–10421. Sch¨ops, T.; Sch¨onberger, J. L.; Galliani, S.; Sattler, T.; Schindler, K.; Pollefeys, M.; and Geiger, A. 2017. A MultiView Stereo Benchmark with High-Resolution Images and Multi-Camera Videos. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR). Sch¨onberger, J. L.; Zheng, E.; Frahm, J.-M.; and Pollefeys, M. 2016. Pixelwise View Selection for Unstructured MultiView Stereo. In Proc. Eur. Conf. Comput. Vis. (ECCV), volume 9907, 501–518. Seitz, S.; Curless, B.; Diebel, J.; Scharstein, D.; and Szeliski, R. 2006. A Comparison and Evaluation of Multi-View Stereo Reconstruction Algorithms. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), volume 1, 519–528. Su, W.; and Tao, W. 2023. Efficient Edge-Preserving MultiView Stereo Network for Depth Estimation. In Proc. of the AAAI Conf. Artif. Intell. (AAAI), 2348–2356. Vogiatzis, G.; Hernandez Esteban, C.; Torr, P. 
H.; and Cipolla, R. 2007. Multiview Stereo via Volumetric GraphCuts and Occlusion Robust Photo-Consistency. IEEE Trans. Pattern Anal. Mach. Intell., 29(12): 2241–2246. Wang, F.; Galliani, S.; Vogel, C.; and Pollefeys, M. 2022a. IterMVS: Iterative Probability Estimation for Efficient Multi-View Stereo. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 8596–8605. Wang, F.; Galliani, S.; Vogel, C.; Speciale, P.; and Pollefeys, M. 2021. PatchmatchNet: Learned Multi-View Patchmatch Stereo. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 14189–14198. Wang, X.; Zhu, Z.; Huang, G.; Qin, F.; Ye, Y.; He, Y.; Chi, X.; and Wang, X. 2022b. MVSTER: Epipolar Transformer for Efficient Multi-view Stereo. In Proc. Eur. Conf. Comput. Vis. (ECCV), volume 13691, 573–591. Wei, Z.; Zhu, Q.; Min, C.; Chen, Y.; and Wang, G. 2021. AA-RMVSNet: Adaptive Aggregation Recurrent Multiview Stereo Network. In Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 6167–6176. Williams, L. 1983. Pyramidal Parametrics. In Proc. of the 10th Annu. Conf. on Comput. Graph. and Interact. Techn. (SIGGRAPH), 1–11. Xu, Q.; Kong, W.; Tao, W.; and Pollefeys, M. 2022. MultiScale Geometric Consistency Guided and Planar Prior Assisted Multi-View Stereo. IEEE Trans. Pattern Anal. Mach. Intell., 1–18. Xu, Q.; and Tao, W. 2019. Multi-Scale Geometric Consistency Guided Multi-View Stereo. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 5478–5487. Xu, Q.; and Tao, W. 2020. Planar Prior Assisted PatchMatch Multi-View Stereo. In Proc. of the AAAI Conf. Artif. Intell. (AAAI), volume 34, 12516–12523. Xu, Z.; Liu, Y.; Shi, X.; Wang, Y.; and Zheng, Y. 2020. MARMVS: Matching Ambiguity Reduced Multiple View Stereo for Efficient Large Scale Scene Reconstruction. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 5980–5989. Yan, J.; Wei, Z.; Yi, H.; Ding, M.; Zhang, R.; Chen, Y.; Wang, G.; and Tai, Y.-W. 2020. Dense hybrid recurrent multi-view stereo net with dynamic consistency checking. In Proc. Eur. Conf. Comput. Vis. (ECCV), 674–689. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6879 Yao, Y.; Luo, Z.; Li, S.; Fang, T.; and Quan, L. 2018. MVSNet: Depth Inference for Unstructured Multi-view Stereo. In Proc. Eur. Conf. Comput. Vis. (ECCV), volume 11212, 785– 801. Yao, Y.; Luo, Z.; Li, S.; Shen, T.; Fang, T.; and Quan, L. 2019. Recurrent MVSNet for High-Resolution Multi-View Stereo Depth Inference. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 5520–5529. Yuesong Wang; Zhaojie Zeng; Tao Guan; Wei Yang; Zhuo Chen; Wenkai Liu; Luoyuan Xu; and Yawei Luo. 2023. Adaptive Patch Deformation for Textureless-Resilient Multi-View Stereo. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 1621–1630. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6880 | 2024 | 764 |
18,589 | KeDuSR: Real-World Dual-Lens Super-Resolution via Kernel-Free Matching Huanjing Yue1, Zifan Cui1, Kun Li2, Jingyu Yang1* 1School of Electrical and Information Engineering, Tianjin University, China 2College of Intelligence and Computing, Tianjin University, China {huanjing.yue, cuizifan, lik, yjy}@tju.edu.cn Abstract Dual-lens super-resolution (SR) is a practical scenario for reference (Ref) based SR by utilizing the telephoto image (Ref) to assist the super-resolution of the low-resolution wide-angle image (LR input). Different from general RefSR, the Ref in dual-lens SR only covers the overlapped field of view (FoV) area. However, current dual-lens SR methods rarely utilize these specific characteristics and directly perform dense matching between the LR input and Ref. Due to the resolution gap between LR and Ref, the matching may miss the best-matched candidate and destroy the consistent structures in the overlapped FoV area. Different from them, we propose to first align the Ref with the center region (namely the overlapped FoV area) of the LR input by combining global warping and local warping to make the aligned Ref be sharp and consistent. Then, we formulate the aligned Ref and LR center as value-key pairs, and the corner region of the LR is formulated as queries. In this way, we propose a kernelfree matching strategy by matching between the LR-corner (query) and LR-center (key) regions, and the corresponding aligned Ref (value) can be warped to the corner region of the target. Our kernel-free matching strategy avoids the resolution gap between LR and Ref, which makes our network have better generalization ability. In addition, we construct a DuSR-Real dataset with (LR, Ref, HR) triples, where the LR and HR are well aligned. Experiments on three datasets demonstrate that our method outperforms the second-best method by a large margin. Our code and dataset are available at https://github.com/ZifanCui/KeDuSR. Introduction Single Image Super-Resolution (SISR) (Liang et al. 2021; Yang, Liu, and Yang 2023) aims to reconstruct a highresolution (HR) image from a low-resolution (LR) input, which is challenging due to the limited available information. In contrast, Reference-based SR (RefSR) introduces a similar high-resolution reference image (Ref) to assist the reconstruction process and has achieved better results. However, the development of RefSR is constrained since obtaining similar Ref for a given LR in real scenarios is difficult. Fortunately, modern smartphones are equipped with multiple cameras of different fields of view (FoV), where the *Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. wide-angle lens sacrifices resolution to increase the FoV, while the telephoto lens has a smaller FoV but higher resolution. Therefore, dual-lens (or dual-camera) SR is proposed (Wang et al. 2021), where the telephoto camera serves as the reference to super-resolve the wide-angle camera by transferring the matched reference details to the LR, as shown in Fig. 1. However, only the center region (namely the overlapped FoV area) of the LR image has reference content, with differences in viewpoints and colors. Meanwhile, it is difficult for the corner region 1 of the LR image to find similar contents from the reference due to the large resolution gap (as shown in Fig. 1) between the telephoto and wideangle cameras. 
Therefore, the key question for real duallens SR is how to improve the matching and warping performance between LR and Ref when they have large resolution gaps and different FoV? The matching problem has been widely explored in RefSR and dual-lens SR. Previous RefSR methods (Yang et al. 2020; Lu et al. 2021; Jiang et al. 2021) are conducted with synthesized LR (namely the down-sampled version of the HR) images from CUFED5 dataset, and the matching is conducted between HR↓(namely LR) and Ref↓(or Ref). However, when the HR and Ref are captured with different focal lengths, the resolution gap still exists. Similarly, DCSR (Wang et al. 2021) also utilizes synthesized pairs, namely that the original LR and the reference image are downsampled to generate the training triples, namely {LR↓, Ref↓, LR}, where the original LR serves as the ground truth. The matching is conducted between LR↓and Ref↓↓. However, the simple downsampling operation cannot simulate the resolution gap between LR and Ref since they are captured by different focal lengths, as shown in Fig. 1. SelfDZSR (Zhang et al. 2022b) directly performs matching between Ref and auxiliary-LR features, which also did not pay attention to the resolution gap. Different from them, we did not perform matching between the features of Ref↓and LR. We propose a kernel-free matching strategy by matching between LRcorner and LR-center features, which perfectly avoids the resolution gap problem. In this way, our method also has a good generalization ability. 1We divide the LR image into two parts, where the overlapped FoV region is named as the center region and the remaining regions in the LR are named as corner region. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6881 The warping strategy is the second key problem for duallens SR. General RefSR, such as (Yang et al. 2020; Huang et al. 2022) performs globally pixel-wise or patch-wise dense matching since the corresponding matched content may locate in any position in the reference and then utilize the matched index for Ref warping. The benchmark duallens SR work DCSR (Wang et al. 2021) also utilizes this strategy. However, for dual-lens SR, the center region of the LR has the same scene as that of the reference. Directly performing dense patch matching between the LR and Ref may miss the best-matched patch for the center region due to the large resolution gap between the LR and Ref and the matching index may be incongruent. Therefore, SelfDZSR (Zhang et al. 2022b) proposes to paste the Ref back to the center area of the warped Ref features. However, the LR center and Ref are not pixel-wise aligned and this operation introduces misalignments between the center and corner regions in the SR result. Different from them, we propose a novel center warping strategy to find the matched content for the center region, which jointly utilizes global warping and local warping. For the corner region, we utilize the kernel-free matching index for corner warping, leading to a well-aligned reference in both corner and center regions. The third problem is how to adapt to real captured LR images in dual-lens SR? Since there is no pairwise real duallens SR dataset, DCSR (Wang et al. 2021) is trained with a synthesized dataset. It adapts the network to real images by finetuning with self-supervised loss. However, the finetuning strategy cannot solve the domain gap problem between synthesized and real captured ones. In contrast, SelfDZSR (Zhang et al. 
2022b) proposes a self-supervised learning strategy, which utilizes the warped telephoto image as the ground truth (GT) and the center region of it serves as the reference. It introduces an auxiliary LR to make the warped LR and Ref aligned with GT during training. This makes the network pay more attention to the alignment process other than detail generation. Recently, ZeDuSR (Xu, Yao, and Xiong 2023) proposes zero-shot learning to deal with real captured LR images by training with center region pairs, but it requires a long inference time due to online learning. Different from them, we argue that a well-aligned dual-lens SR dataset is required to further boost the real dual-lens SR performance. Therefore, we construct the first well-aligned DuSR-Real dataset, where the HR is aligned with the real captured LR and the reference is also real captured. The LR and Ref have overlapped FoV regions. In addition, we reorganize the previous dual-lens SR datasets and construct another two real datasets, namely RealMCVSR-Real and CamereFusionReal, for comprehensively evaluation. In summary, our contributions are as follows. • We are the first to explore the real dual-lens SR problem via supervised learning. We propose a center warping and corner warping strategy to align the reference with the LR input, which greatly improves the alignment quality. • We propose a kernel-free matching strategy by matching between LR-center and LR-corner, which avoids the resolution gap between the LR and Ref and makes the result be consistent across the whole image. • We constructed the first well-aligned DuSR-Real dataset. Extensive experiments on three datasets demonstrate the superiority of the proposed method. In addition, our method has the best generalization ability. Related Work Reference-Based SR RefSR, which leverages an HR reference to improve the SR performance, is a classical topic. From traditional methods to deep learning-based methods, the RefSR performance has been greatly improved. The key problem in RefSR is matching and warping. To improve the matching and warping performance, many sophisticated methods have been proposed, such as dense patch matching based (Yue et al. 2013; Zheng et al. 2017; Zhang et al. 2019b; Yang et al. 2020), optical-flow based warping (Zheng et al. 2018), and dense matching assisted DCN (deformable convolution network) warping, such as (Shim, Park, and Kweon 2020; Jiang et al. 2021; Huang et al. 2022). Specifically, C2-matching (Jiang et al. 2021) utilized contrastive learning to overcome scale and rotation transformation gaps and employed a teacherstudent correlation distillation network to address the resolution gap. To accelerate the matching process, MASA (Lu et al. 2021) and AMSA (Xia et al. 2022) explore efficient matching via a coarse-to-fine matching approach. Besides matching, advanced training and strategies are also emerging. RRSR (Zhang et al. 2022a) proposed a novel reciprocal training strategy. DATSR (Cao et al. 2022) incorporated transformer into RefSR and have achieved SOTA performance. However, all these methods perform matching between LR and Ref. Different from them, we propose a kernel-free matching strategy tailored for dual-lens SR, by matching between LR-corner and LR-center regions, which avoids the resolution gap between the LR and Ref. Dual-Lens SR Compared with RefSR, dual-lens SR is more practical since the telephoto camera can directly serve as the reference for the wide-angle camera. DCSR (Wang et al. 
2021) was the pioneer in introducing the dual-lens SR task. Since the training was conducted with a synthesized dataset, they further proposed a self-supervised domain adaptation strategy to generalize to real-world images. To enhance matching robustness, Zou et al. (2023) introduced geometric constraints to make the matching results be smooth. To adapt to real images, SelfDZSR (Zhang et al. 2022b) proposed a selfsupervised learning framework that directly utilized weakly aligned real-world pairs for training. ZeDuSR (Xu, Yao, and Xiong 2023) proposed a zero-shot learning strategy by training with the pairs inside the overlapped FoV region, which had a good generalization ability. RefVSR (Lee et al. 2022) and ERVSR (Kim et al. 2023) extended the dual-lens SR strategy to video SR. Unlike them, we jointly utilize center and corner warping to improve the alignment performance between LR and Ref. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6882 Coarse Alignment Fine Alignment Color Correction Cropping Cropping Cropping Resolution Gap Resolution Gap 𝑋 (Wide-angel) LR LR Ref ↓ (Ref ↓) ↓↑ 𝐼Ref 𝐼HR 𝐼LR 𝑌 (Telephoto) Figure 1: Illustration of our DuSR-Real dataset construction process and the resolution gap between LR and Ref. Real-World SR Datasets The quality of the dataset is an important factor in promoting network development and improving SR performance. The widely used SR datasets are usually constructed by downsampling the GT, thus resulting in well-aligned LRHR pairs, such as the SISR datasets, i.e., DIV2K (Timofte et al. 2017) and the RefSR dataset CUFED5 (Zhang et al. 2019b). However, the models trained on the synthesized dataset cannot generalize well to real degraded images. Therefore, many real-world SR datasets are collected by capturing with different focuses, such as City100 (Chen et al. 2019), SR-Raw (Zhang et al. 2019a), DRealSR (Wei et al. 2020), which have greatly improved the model’s ability in dealing with real captured LR images. However, these datasets cannot be directly utilized for dual-lens SR due to the lack of triples (LR, Ref, and HR). The benchmark duallens SR datasets, i.e., CameraFusion (Wang et al. 2021) and RealMCVSR (Lee et al. 2022), construct the training triples by downsampling the LR and Ref, generating {LR↓, Ref↓, LR}, where the original LR image serves as the GT. To deal with real LR images, SelfDZSR (Zhang et al. 2022b) proposes to use the misaligned triples for training, which requires tedious operations to deal with the misalignment problem. Afterward, ZeDuSR (Xu, Yao, and Xiong 2023) explores zero-shot learning to solve the real dual-lens SR. Different from them, we argue that a real triple dataset is still needed to further boost the development of real-world dual-lens SR. Therefore, we construct a DuSR-Real dataset with well-aligned LR and HR pairs and corresponding assisted HR references with overlapped FoV. DuSR-Real Dataset Construction In the literature, there are two datasets, i.e., CameraFusion (Wang et al. 2021) and RealMCVSR (Lee et al. 2022) for dual-lens SR. However, they only provide LR and Ref pairs, without HR ground truth for the LR. In this work, we construct real triples for dual-lens SR. Data Collection. The scene numbers in CameraFusion and RealMCVSR are relatively small. Therefore, we collect a new large dataset for real dual-lens SR. Specifically, we use an iPhone 13 to capture dual-lens images through the DoubleTake App 2. The focal length of the telephoto lens is two times that of the wide-angle lens. 
Note that, different from CameraFusion (Wang et al. 2021), which utilizes lens switching to capture the same scene, we simultaneously activate both lenses for capturing. In this way, our dataset can avoid the misalignments between the two cameras in dynamic scenes and is consistent with real applications. Data Processing. For supervised learning, we need to generate the HR GT for the input LR. We propose to warp the telephoto image with the wide-angle image to generate the HR GT and the original center area of the telephoto image can serve as the Ref. We adopt the coarse-tofine alignment strategy proposed in (Yue, Zhang, and Yang 2022) to create well-aligned LR-HR pairs. As depicted in Fig. 1, X and Y represent the original images captured by the wide-angle lens and telephoto lens, respectively. Firstly, we employ SIFT (Lowe 2004) and RANSAC (Fischler and Bolles 1981) to calculate the optimal homography matrix to coarsely align Y with X. Then, we adopt Deepflow (Weinzaepfel et al. 2013) for fine alignment. In this work, we focus on the SR task and the color differences between LR and HR will affect the learning process. Therefore, we further utilize color correction, namely a linear scaling coefficient for each channel (Yue, Zhang, and Yang 2022), to make them have similar colors. Then, we crop the overlapped area between X and warped Y, generating the LR-HR pair ILR and IHR. Then, we further crop the central area (according to the relative position between the two lenses) of Y, generating IRef to serve as the reference. We totally captured 730 pairs, and manually removed 255 triples with alignment errors. Among the remaining triples (ILR,IRef,IHR), 420 triples are used for training, and 55 triples are used for testing. In order to perform cross-dataset evaluation, we further apply the same processing approach on the CameraFusion (Wang et al. 2021) and RealMCVSR (Lee et al. 2022) datasets, to generate well-aligned real LR-HR pairs and the pairs that cannot be well aligned are removed. The reorganized datasets are named CameraFusion-Real and RealMCVSR-Real, respectively. Detailed information about the three datasets is provided in our supp. file. Method Framework Overview Dual-lens SR is different from general RefSR since the Ref in dual-lens SR shares the same scene with that of the LR center. Therefore, we propose to deal with the center and corner region differently. As shown in Fig. 3, the Ref image first goes through center warping, and then with the index obtained from kernel-free matching, we further perform corner warping on the Ref. Combining the warped reference feature and the LR feature via an adaptive fusion module and reconstruction module, we obtain the final result ISR. The following gives details about these modules in our Kernelfree matching based Dual-lens SR (termed as KeDuSR). 2https://apps.apple.com/us/app/doubletake-by-filmicpro/id1478041592 The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6883 (b) (a) 𝒒𝒊 𝒌𝒊 𝑰LR 𝒗𝒊 𝑰 Ref 𝑰LRC Figure 2: Illustration of the similarity between the corner and center regions, where (a) shows the similar patch pairs and (b) is the matching curve. The LR-center region is circled by a white dotted box and ¯IRef is its corresponding HR Ref. Center Warping Given the LR image ILR and its reference IRef, we need to first identify the overlapped area between them. Specifically, we utilize SIFT (Lowe 2004) matching to find the matched points between IRef and ILR. 
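This global-warping step (SIFT matching followed by the RANSAC-based homography estimation described next) can be sketched with OpenCV as follows. The ratio-test threshold, the RANSAC reprojection error, and warping the reference directly into the LR coordinate frame are assumptions of this sketch; in the paper the warped reference is kept at the telephoto resolution.

```python
import cv2
import numpy as np

def coarse_align(ref_img: np.ndarray, lr_img: np.ndarray):
    """Sketch of the global (coarse) alignment used for both dataset construction and
    center warping: SIFT keypoints, ratio-test matching, RANSAC homography, warping."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(cv2.cvtColor(ref_img, cv2.COLOR_BGR2GRAY), None)
    kp_lr, des_lr = sift.detectAndCompute(cv2.cvtColor(lr_img, cv2.COLOR_BGR2GRAY), None)

    # Ratio-test matching (Lowe 2004); 0.75 is an illustrative threshold.
    matcher = cv2.BFMatcher()
    matches = [m for m, n in matcher.knnMatch(des_ref, des_lr, k=2)
               if m.distance < 0.75 * n.distance]

    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_lr[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC filters outliers; inliers determine the homography.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)

    h, w = lr_img.shape[:2]
    warped_ref = cv2.warpPerspective(ref_img, H, (w, h))
    return warped_ref, H
```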
Then, we utilize RANSAC (Fischler and Bolles 1981) to filter outliers, and use the inliers to calculate the homography matrix, which is applied to $I_{Ref}$ to generate the warped reference $\bar{I}_{Ref}$. Afterward, we crop the area in $I_{LR}$ that corresponds to $\bar{I}_{Ref}$ and name it the center region of the LR, denoted as $I_{LRC}$. After this global warping, $\bar{I}_{Ref}$ is coarsely aligned with $I_{LRC}$. However, there are still small displacements between $\bar{I}_{Ref}$ and $I_{LRC}$, which may cause alignment errors in the subsequent corner warping. Therefore, we further utilize the faster and differentiable flow-guided DCN (Chan et al. 2022) for local warping. Note that we did not utilize DeepFlow for center warping since it is much slower and non-differentiable. To reduce the computation cost, we downsample $\bar{I}_{Ref}$ so that it has the same scale as $I_{LRC}$. Then, we utilize the pretrained SpyNet (Ranjan and Black 2017) to compute the optical flow $f$ between $\bar{I}_{Ref}\!\downarrow$ and $I_{LRC}$. We then utilize residual blocks (ResBlocks) to extract features from $I_{LRC}$ and $\bar{I}_{Ref}$, generating $F_{LRC}$ and $\bar{F}_{Ref}$. Afterwards, the upsampled optical flow $f\!\uparrow$ is utilized to guide a DCN (Dai et al. 2017; Zhu et al. 2019) to align $\bar{F}_{Ref}$ with $F_{LRC}$, generating the fine-aligned features $\hat{F}_{Ref}$. Note that, compared with the widely used dense patch-matching-based warping, our combined global- and local-warping strategy preserves the reference image structures, which not only improves the subsequent corner-warping performance but also improves the final SR quality in the center region.

Kernel-Free Matching and Corner Warping An intuitive strategy for warping the corner region is to perform patch matching between the corner of $I_{LR}$ and the reference $I_{Ref}$. However, there is a large resolution gap between them. Even after downsampling, the resolution gap still exists between $I_{LR}$ and $I_{Ref}\!\downarrow\downarrow\uparrow$ (as shown in Fig. 1), since simple downsampling cannot simulate the mapping between the two cameras. Another strategy is to learn the mapping process via KernelGAN (Bell-Kligler, Shocher, and Irani 2019) or a probabilistic degradation model (PDM) (Luo et al. 2022) and utilize the learned kernel to degrade $I_{Ref}$. However, the kernel depends on the cameras, which means a kernel learned with one specific camera does not generalize well to other cameras. In contrast, we observe that, due to non-local similarity, for a query patch ($q_i$) in the corner region we can find a similar patch ($k_i$) in the center region, as shown in Fig. 2(a). To visualize the similarity between the corner and center regions, we plot the matching curve, namely hit rate versus error rate, in Fig. 2(b). The error rate is defined as $e_r = \|q - k\|_2 / \|q\|_2$, where $k$ is the matched patch from the center region with the minimal mean square error for $q$. The hit rate is the percentage of query patches whose error rate is smaller than $e_r$. The blue matching curve is obtained by matching between the HR query and the HR key, while the other curve uses the index matched in the LR domain and evaluates $e_r$ on the corresponding HR patches. For more than 90% of the patches, the error rates are smaller than 0.3, which indicates a high similarity between the corner and center regions. In addition, matching in the LR domain can well approximate matching in the HR domain. Therefore, in this work, we propose to perform matching between the corner and center regions of the LR image. Since this matching process is not influenced by different camera kernels, we formulate it as kernel-free matching. Following (Yang et al. 2020; Wang et al. 2021), we also perform matching in the VGG (denoted as $\phi$) feature space by extracting features from $I_{LR}$ and $I_{LRC}$. The features are densely divided into 3 × 3 patches with a stride of 1, and the cosine similarity $S_{i,j}$ is computed between each patch pair, namely $P_i^{LR}$ and $P_j^{LRC}$. For $P_i^{LR}$, its matched patch is the one with the highest similarity score, and the matched index $M_i$ and confidence score $C_i$ are obtained by $M_i = \arg\max_j S_{i,j},\ C_i = \max_j S_{i,j}$ (1). Then, we utilize the matched index map to extract HR matched patches from the Ref. Therefore, the reference warping result for the corner region is $\tilde{F}^{Ref}_i = \hat{F}^{Ref}_{M_i}$ (2), where $\tilde{F}^{Ref}_i$ denotes the patch value of $\tilde{F}_{Ref}$ at the $i$-th patch position. Since the query patches are densely extracted, the overlapping value patches are averaged in the overlapped regions. In addition, the center region of $\tilde{F}_{Ref}$ is exactly the center warping result $\hat{F}_{Ref}$. Correspondingly, the center region of the confidence map is set to 1.

SISR Encoder As demonstrated in (Huang et al. 2022), coupling the image super-resolution task on the input LR image with the texture transfer task from the reference introduces interference. Therefore, we also decouple the SISR task into a separate module, which is constructed from 24 residual blocks with channel attention. The extracted features are upsampled (denoted as $F_{LR}$) to match the size of the aligned reference feature. Note that, different from the two-stage training approach used in (Huang et al. 2022), we train our SISR encoder as an integral part of the entire network.
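A minimal PyTorch sketch of the kernel-free matching in Eq. (1) is given below. The VGG feature extractor, the corner/center cropping, and the warping of HR value patches with the matched index (Eq. 2) are omitted; tensor shapes are illustrative, and for real image sizes the dense similarity matrix would typically be computed in tiles rather than at once.

```python
import torch
import torch.nn.functional as F

def kernel_free_matching(feat_corner: torch.Tensor, feat_center: torch.Tensor):
    """Densely compare 3x3 patches of the LR-corner feature (queries) with 3x3 patches
    of the LR-center feature (keys) by cosine similarity, and return the matched
    index map M and confidence map C (Eq. 1).

    feat_corner: (1, C, Hq, Wq) feature of the LR corner region (query).
    feat_center: (1, C, Hk, Wk) feature of the LR center region (key).
    """
    # Dense 3x3 patches with stride 1 (padding keeps one patch per pixel).
    q = F.unfold(feat_corner, kernel_size=3, padding=1)          # (1, C*9, Nq)
    k = F.unfold(feat_center, kernel_size=3, padding=1)          # (1, C*9, Nk)
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)

    # Cosine similarity between every (query, key) patch pair: (1, Nq, Nk).
    sim = torch.bmm(q.transpose(1, 2), k)
    conf, index = sim.max(dim=2)                                  # C_i, M_i in Eq. 1
    return index, conf

# Toy usage with random stand-ins for VGG features.
fq = torch.randn(1, 64, 32, 48)   # corner feature
fk = torch.randn(1, 64, 40, 40)   # center feature
index, conf = kernel_free_matching(fq, fk)
print(index.shape, conf.shape)    # (1, 1536) each, one entry per query patch
```

The returned index map would then be used to gather the corresponding HR patches from the aligned reference feature, averaging overlapping patches as described above.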
2021), we also perform matching in the VGG (denoted as ϕ) feature space by extracting features from ILR and ILRC. The features are densely divided into 3 × 3 patches with a stride of 1, and the cosine similarity Si,j is computed between each patch pair, namely P LR i and P LRC j . For P LR i , its matched patch is the one that has the highest similarity score, and the matched index Mi and confidence score Ci can be obtained by Mi = arg max j Si,j, Ci = max j Si,j. (1) Then, we utilize the matched index map to extract HR matched patches from the Ref. Therefore, the reference warping result for the corner region is ˜F Ref 2i = ˆF Ref Mi , (2) where ˜F Ref i denotes the patch value of ˜F Ref in the ith patch position. Since the query patches are densely extracted, the overlapped value patches are averaged in the overlapped region. In addition, the center region of ˜F Ref is indeed the center warping result ˆF Ref. Correspondingly, the center region of the confidence map is set to 1. SISR Encoder As demonstrated in (Huang et al. 2022), coupling the image super-resolution task from the input LR image with the texture transfer task from the reference will introduce interference. Therefore we also decouple the SISR task into a separate module, which is constructed by 24 residual blocks with channel attention. The extracted features are upsampled (named as F LR) to cope with the size of the aligned reference feature. Note that, different from the two-stage training approach used in (Huang et al. 2022), we train our SISR encoder as an integral part of the entire network. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6884 VGG Extractor Center Warping Flow-Guided DCN SISR Encoder VGG Extractor Corner Warping ResBlocks ResBlocks SpyNet AdaFusion Reconstruction K-F Matching 𝐼SR 𝐼 Ref 𝐹 Ref 𝐹 Ref 𝐹LRC 𝑓 𝑀 𝐶 𝐹LR 𝐹SR 𝐹 Ref 𝐼LR 𝐼LRC Figure 3: The framework of our KeDuSR. Kernel-Free Matching is performed between LR-corner and LR-center(ILRC) to obtain the index map M and confidence map C. Then, employing center warping and corner warping, we obtain the warped high-resolution feature map ˜F Ref of the reference. After fusion with F LR, we generate the SR result ISR. Adaptive Fusion The warped reference feature ˜F Ref and LR image feature F LR are complementary to each other, and we utilize a fusion model to fuse them together. Since the matching quality for different positions is different, inspired by (Wang et al. 2021), we also utilize adaptive fusion by introducing the confidence map obtained in the matching process. In addition, instead of fusing ˜F Ref, we fuse its high-frequency part ˜F Ref hf = ˜F Ref −˜F Ref ↓↑. This process is formulated as F SR = Φ(concat(g(C) · ˜F Ref hf , F LR)), (3) where C denotes the confidence map, g() represents the convolution operations, · represents the element-wise multiplication, and Φ represent the AdaFusion block. The AdaFusion module is constructed by ResBlocks with spatial and channel attention. Loss Functions Following previous RefSR methods, we also utilize hybrid loss functions. First, for reconstruction loss, we utilize Charbonnier loss (Lai et al. 2017), which is a differentiable variant of ℓ1 loss, denoted as Lch = q ∥IHR −ISR ∥2 2 +ε, (4) where ε = 1 × 10−6. IHR denotes the HR GT, while ISR denotes the SR result. For better visual effects, we further incorporate perceptual loss and adversarial loss. The perceptual loss is expressed as Lper =∥ϕi(IHR) −ϕi(ISR)∥2, (5) where ϕi denotes the i-th layer of VGG19. 
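Returning to the kernel-free matching of Eq. (1) and the corner warping of Eq. (2), the following PyTorch sketch illustrates one possible implementation based on dense unfolding and folding. The handling of the HR patch size and of overlap averaging follows our reading of the paper, and the tensor names are illustrative; this is a sketch rather than the authors' implementation.

import torch
import torch.nn.functional as F

def kernel_free_matching(feat_lr, feat_lrc):
    # feat_lr: (1, C, H, W) VGG features of the full LR image.
    # feat_lrc: (1, C, Hc, Wc) VGG features of the LR-center region.
    # Returns the matched index map M (1, H*W) and confidence map C (1, H*W) of Eq. (1).
    q = F.unfold(feat_lr, kernel_size=3, padding=1, stride=1)    # (1, C*9, H*W)
    k = F.unfold(feat_lrc, kernel_size=3, padding=1, stride=1)   # (1, C*9, Hc*Wc)
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    sim = torch.bmm(q.transpose(1, 2), k)                        # cosine similarity S_{i,j}
    conf, idx = sim.max(dim=2)                                   # C_i and M_i
    return idx, conf

def corner_warp(idx, feat_ref_hr, lr_hw, scale=2):
    # Eq. (2): gather HR reference patches according to the matched index and
    # average them in overlapping regions. feat_ref_hr: (1, C, scale*Hc, scale*Wc),
    # the centre-warped and fine-aligned reference feature; lr_hw = (H, W).
    v = F.unfold(feat_ref_hr, kernel_size=3 * scale, padding=scale, stride=scale)
    gathered = v[:, :, idx.squeeze(0)]                           # pick matched value patches
    h, w = lr_hw
    out_size = (h * scale, w * scale)
    fold = lambda x: F.fold(x, output_size=out_size, kernel_size=3 * scale,
                            padding=scale, stride=scale)
    # Folding ones counts how many patches cover each pixel, so dividing averages the overlaps.
    return fold(gathered) / fold(torch.ones_like(gathered)).clamp(min=1e-6)

The confidence map returned by the matching is what later modulates the high-frequency reference feature in the adaptive fusion of Eq. (3).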
We adopt the Relativistic GANs (Jolicoeur-Martineau 2018) as our adversarial loss (Goodfellow et al. 2014), denoted as Ladv. In summary, our hybrid loss can be represented as L = Lch + λ1Lper + λ2Ladv, (6) where the weighting parameters λ1 and λ2 are set to 1×10−3 and 1×10−4, respectively. Note that, we provide two results in experiments. One is trained with only the reconstruction loss Lch and the other is trained with the hybrid loss L. Experiments Training Details and Datasets During training, the batch size is 4, and the patch size for the input LR is 128 × 128. We utilized the Adam optimizer (Kingma and Ba 2014) and the cosine annealing scheme (Loshchilov and Hutter 2016). The learning rate is initially set to 10−4 and is decayed to 10−6. All experiments were conducted using PyTorch (Paszke et al. 2019) on an Nvidia GeForce RTX 3090 GPU. We conduct comparisons on three datasets, namely our DuSR-Real, the reorganized CameraFusion-Real, and RealMCVSR-Real datasets. Our DuSR-Real contains 420 training triples and 55 testing images, and the HR GT has a resolution of 1792 × 896. The RealMCVSR-Real (CameraFusion-Real) consists of 330 (83) training triples and 50 (15) testing images, and the GT has a resolution of 1792 × 896 (3584 × 2560). Comparison with State-of-the-arts To evaluate the effectiveness of our KeDuSR, we compare with three kinds of SR methods, including the SISR methods: RCAN (Zhang et al. 2018b), SwinIR (Liang et al. 2021), ESRGAN (Wang et al. 2018), BSRGAN (Zhang et al. 2021), the RefSR methods: TTSR (Yang et al. 2020), MASA (Lu et al. 2021), DASTR (Cao et al. 2022), and the dual-lens SR methods: DCSR (Wang et al. 2021), SelfDZSR (Zhang et al. 2022b), ZeDuSR (Xu, Yao, and Xiong 2023). For a fair comparison, we retrained the aforementioned methods with the same training set as that used in our method. SISR methods use the LR-HR pair during training, while RefSR methods use the LR-Ref-HR triples (except for ZeDuSR, which uses LR-Ref pairs). For SelfDZSR, since the Ref and LR have large and irregular displacements in our dataset, we did not paste the center Ref back to its warped features to avoid misalignment artifacts. Quantitative Comparison. We evaluate all the methods on three datasets, as shown in Tables 1, 2, 3. Full-image results represent the quantitative results of the entire image, the center-image corresponds to the results in overlapped The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6885 Method L(s) Full-Image Corner-Image PSNR / SSIM / LPIPS PSNR / SSIM RCAN-ℓ 0.69 26.44 / 0.8676 / 0.147 26.33 / 0.8667 SwinIR-ℓ 2.85 26.14 / 0.8601 / 0.157 26.11 / 0.8597 ESRGAN 0.08 25.78 / 0.8622 / 0.152 25.77 / 0.8617 BSRGAN 0.45 24.77 / 0.8227 / 0.202 24.71 / 0.8225 TTSR-ℓ 7.51 26.48 / 0.8676 / 0.147 26.17 / 0.8631 MASA-ℓ 1.52 26.36 / 0.8592 / 0.160 26.25 / 0.8582 DATSR-ℓ 9.35 26.17 / 0.8583 / 0.157 26.11 / 0.8579 DCSR-ℓ 0.84 26.77 / 0.8748 / 0.134 26.29 / 0.8635 DCSR 0.84 26.19 / 0.8553 / 0.110 25.75 / 0.8425 SelfDZSR-ℓ 0.17 26.27 / 0.8559 / 0.158 26.10 / 0.8548 SelfDZSR 0.17 25.98 / 0.8455 / 0.105 25.81 / 0.8442 ZeDuSR-ℓ 180 25.41 / 0.8247 / 0.191 25.21 / 0.8216 KeDuSR-ℓ 0.51 27.66 / 0.8890 / 0.117 27.24 / 0.8750 KeDuSR 0.51 27.18 / 0.8752 / 0.084 26.77 / 0.8593 Table 1: Quantitative comparisons on DuSR-Real. Bold and underlined indicate the best and second-best performance, respectively. -ℓdenotes training with only reconstruction loss. L (Latency) indicates the time required to generate one HR result (1792 × 896) using one NVIDIA 3090 GPU. 
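Before turning to the remaining results tables, the hybrid objective of Eqs. (4)-(6) can be sketched as follows. The Charbonnier term is applied per pixel and the perceptual term uses a mean-squared feature distance, which are common implementation choices rather than details confirmed by the paper; the relativistic adversarial term is passed in as a precomputed value, and the weights follow the quoted lambda1 = 1e-3 and lambda2 = 1e-4.

import torch
import torch.nn.functional as F
from torchvision.models import vgg19

def charbonnier_loss(sr, hr, eps=1e-6):
    # Per-pixel Charbonnier penalty (differentiable variant of L1), averaged over the image.
    return torch.sqrt((sr - hr) ** 2 + eps).mean()

class PerceptualLoss(torch.nn.Module):
    # Mean-squared distance between VGG19 features of the SR result and the HR GT.
    # ImageNet input normalisation is omitted here for brevity.
    def __init__(self, n_layers=35):
        super().__init__()
        self.features = vgg19(weights="IMAGENET1K_V1").features[:n_layers].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, sr, hr):
        return F.mse_loss(self.features(sr), self.features(hr))

def hybrid_loss(sr, hr, perceptual, adv_term=None, lam1=1e-3, lam2=1e-4):
    # Eq. (6): L = L_ch + lambda1 * L_per + lambda2 * L_adv.
    loss = charbonnier_loss(sr, hr) + lam1 * perceptual(sr, hr)
    if adv_term is not None:
        loss = loss + lam2 * adv_term
    return loss

Dropping the perceptual and adversarial terms recovers the reconstruction-only setting reported as KeDuSR-l in the tables.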
Method Full-Image Corner-Image PSNR↑/SSIM↑/LPIPS↓ PSNR / SSIM RCAN-ℓ 25.96 / 0.8033 / 0.234 26.12 / 0.8065 SwinIR-ℓ 25.78 / 0.7982 / 0.246 25.94 / 0.8015 TTSR-ℓ 25.92 / 0.8017 / 0.235 25.98 / 0.8036 MASA-ℓ 25.95 / 0.7989 / 0.239 26.07 / 0.8020 DATSR-ℓ 25.81 / 0.7975 / 0.242 25.95 / 0.8007 DCSR-ℓ 26.28 / 0.8111 / 0.217 26.08 / 0.8048 DCSR 25.85 / 0.7966 / 0.186 25.58 / 0.7793 SelfDZSR-ℓ 25.33 / 0.7928 / 0.246 25.30 / 0.7952 SelfDZSR 25.24 / 0.7786 / 0.175 25.23 / 0.7805 ZeDuSR-ℓ 24.98 / 0.7702 / 0.262 24.93 / 0.7720 KeDuSR-ℓ 27.05 / 0.8406 / 0.180 26.56 / 0.8139 KeDuSR 26.42 / 0.8184 / 0.127 25.95 / 0.7875 Table 2: Quantitative comparisons on RealMCVSR-Real. FoV area (the quantitative comparison of center-image is provided in our supp), and the corner-image represents excluding the center-image from the full-image. For the models trained with only reconstruction loss, we denote it with −ℓ. Otherwise, the corresponding model is trained with hybrid loss functions as proposed in their paper. On all three datasets, our method outperforms the second-best method by a large margin in terms of PSNR, SSIM (2004), and LPIPS (2018a). In addition, our method achieves the best performance in both the center and corner regions. For TTSR, its center result is better than that of RCAN due to the introduction of HR Ref. However, its corner result is worse since it cannot utilize Ref patches well. Meanwhile, DCSR works much better in the center region than TTSR due to its robust feature warping and fusion strategy. Different from them, we utilize a tailored center warping and kernel-free matching based corner warping, which greatly improves the matching performance in both center and corner regions. Meanwhile, the zero-shot learning (ZeDuSR) method can only utilize the single image informaFull-Image Corner-Image Method PSNR↑/SSIM↑/LPIPS↓ PSNR / SSIM RCAN-ℓ 25.67 / 0.8049 / 0.308 25.45 / 0.8012 SwinIR-ℓ 25.32 / 0.8007 / 0.315 25.22 / 0.7985 TTSR-ℓ 25.83 / 0.8044 / 0.311 25.62 / 0.7996 MASA-ℓ 25.78 / 0.8030 / 0.303 25.58 / 0.7988 DCSR-ℓ 26.02 / 0.8123 / 0.293 25.51 / 0.8016 DCSR 25.47 / 0.7605 / 0.165 25.08 / 0.7512 DCSR-SRA 24.75 / 0.7347 / 0.189 24.55 / 0.7254 SelfDZSR-ℓ 25.94 / 0.8041 / 0.283 25.68 / 0.8005 SelfDZSR 25.64 / 0.7790 / 0.151 25.39 / 0.7753 ZeDuSR-ℓ 26.16 / 0.7920 / 0.279 25.87 / 0.7871 KeDuSR-ℓ 27.53 / 0.8292 / 0.276 26.93 / 0.8169 KeDuSR 27.00 / 0.7931 / 0.133 26.43 / 0.7768 Table 3: Quantitative comparisons on CameraFusion-Real. Variant Traditional Matching Center Warping Corner Warping PSNR / SSIM A ✓ 27.03 / 0.8801 B ✓ 27.16 / 0.8757 C ✓ 27.50 / 0.8804 D ✓ ✓ 27.66 / 0.8890 Table 4: Ablation study on our key modules, evaluated on DuSR-Real dataset. tion. Therefore, their performance is also inferior to ours. Note that, ZeDuSR works better in the CameraFusion-Real dataset since the image size in this dataset is large, which makes ZeDuSR extract more training pairs from one single image. In, addition, our method ranks second among all the RefSR methods in terms of latency. We also evaluate the benefits of training with real pairs over finetuning. We utilize the released weights of DCSR, which is finetuned by the Self-supervised Real-image Adaptation (SRA) strategy, and term it DCSR-SRA. As shown in Table 3, DCSR-SRA falls largely behind the original DCSR trained with our constructed real pairs. Qualitative Comparison. Fig. 4 presents the visual comparison results. 
In the center region, our method achieves results that closely resemble the HR GT, surpassing other existing methods by a significant margin. In the corner region, our approach is capable of recovering more fine-grained details if similar textures are present in the Ref. In addition, for large LR input, the majority of methods need to divide the LR input into blocks due to the limitation of available memory. In this case, our method can avoid the blocking artifacts due to our kernel-free matching strategy while the compared methods suffer from these artifacts. More results are provided in our supp. file. Ablation Study We conduct ablation experiments on our proposed matching and warping strategy. First, we remove the proposed center warping and corner warping and replace them with traditional dense feature matching, namely matching between the VGG features of ILR and IRef ↓. Then utilize the matching result for reference warping. As shown in Table 4, this The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6886 GT Ours SelfDZSR DCSR TTSR RCAN MASA LR (top) / Ref (bottom) ZeDuSR RealMCVSR-Real CameraFusion-Real DuSR-Real Figure 4: Visual comparisons on real-world dual-lens datasets. The white dotted box indicates the overlapped FoV area between LR and Ref. The presented results are obtained with only reconstruction loss. (variant A) degrades the result by 0.63 dB. Note that, variant A is our baseline, which is better than SOTA RefSR methods, verifying that our proposed baseline is a robust baseline for the dual-lens SR task. For variant B, we remove the corner warping process, namely that we only utilize the center warping result and there is no reference for the corner region. Variant B still outperforms variant A in terms of PSNR since the center region is well reconstructed. This demonstrates that our global and local warping combined strategy is effective for the center region. For variant C, we remove the center warping, namely that we only utilize the coarsely aligned reference feature ¯F Ref for the center region, and thus the key-value pair in kernel-free matching is not accurately aligned. Therefore, the result of variant C is worse than our full model (variant D). In summary, our center warping and corner warping are essential for improving the dual-lens SR performance. Generalization Evaluation We also evaluate the generalization ability of different models trained on DuSR-Real by testing on the two other datasets. As shown in Table 5, our method has the best generalization ability, even outperforming the zero-shot learning method ZeDuSR. The main reason is that our kernel-free matching strategy is independent of cameras. Method RealMCVSR-Real CameraFusion-Real PSNR / SSIM / LPIPS PSNR / SSIM / LPIPS TTSR-ℓ 24.67 / 0.7814 / 0.248 25.23 / 0.7760 / 0.289 MASA-ℓ 24.99 / 0.7830 / 0.258 25.45 / 0.7769 / 0.291 DCSR-ℓ 25.46 / 0.7986 / 0.226 25.58 / 0.7931 / 0.263 SelfDZSR-ℓ 24.86 / 0.7778 / 0.252 25.55 / 0.7805 / 0.285 ZeDuSR-ℓ 24.98 / 0.7702 / 0.262 26.16 / 0.7920 / 0.279 KeDuSR-ℓ 26.55 / 0.8325 / 0.186 27.24 / 0.8178 / 0.215 Table 5: Generalization evaluation with the model trained on DuSR-Real. Conclusion In this work, we proposed a KeDuSR network to deal with the real dual-lens SR task. We designed a global and local combined warping strategy to make the Ref well-aligned with the center region of LR input. 
Then, we formulate the LR center and the aligned reference as key-value pairs and propose a kernel-free matching strategy, whose matching index is used for corner warping. Afterward, we fuse the features of enhanced LR input and the features of corner and center well-aligned reference to generate the SR result. Experiments demonstrate the superiority of the proposed method. We also construct a DuSR-Real dataset with well-aligned pairs to facilitate research in this area. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6887 Acknowledgments This work was supported in part by the National Natural Science Foundation of China under Grant 62072331, Grant 62231018, and Grant 62171317. References Bell-Kligler, S.; Shocher, A.; and Irani, M. 2019. Blind super-resolution kernel estimation using an internal-gan. Advances in Neural Information Processing Systems, 32. Cao, J.; Liang, J.; Zhang, K.; Li, Y.; Zhang, Y.; Wang, W.; and Gool, L. V. 2022. Reference-Based Image SuperResolution with Deformable Attention Transformer. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XVIII, 325–342. Springer. Chan, K. C.; Zhou, S.; Xu, X.; and Loy, C. C. 2022. Basicvsr++: Improving video super-resolution with enhanced propagation and alignment. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5972–5981. Chen, C.; Xiong, Z.; Tian, X.; Zha, Z.-J.; and Wu, F. 2019. Camera lens super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1652–1660. Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; and Wei, Y. 2017. Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision, 764–773. Fischler, M. A.; and Bolles, R. C. 1981. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6): 381–395. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. Advances in neural information processing systems, 27. Huang, Y.; Zhang, X.; Fu, Y.; Chen, S.; Zhang, Y.; Wang, Y.-F.; and He, D. 2022. Task Decoupled Framework for Reference-based Super-Resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5931–5940. Jiang, Y.; Chan, K. C.; Wang, X.; Loy, C. C.; and Liu, Z. 2021. Robust reference-based super-resolution via c2matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2103–2112. Jolicoeur-Martineau, A. 2018. The relativistic discriminator: a key element missing from standard GAN. arXiv preprint arXiv:1807.00734. Kim, Y.; Lim, J.; Cho, H.; Lee, M.; Lee, D.; Yoon, K.-J.; and Choi, H.-J. 2023. Efficient Reference-based Video SuperResolution (ERVSR): Single Reference Image Is All You Need. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1828–1837. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Lai, W.-S.; Huang, J.-B.; Ahuja, N.; and Yang, M.-H. 2017. Deep laplacian pyramid networks for fast and accurate super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition, 624–632. Lee, J.; Lee, M.; Cho, S.; and Lee, S. 2022. Reference-based video super-resolution using multi-camera video triplets. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 17824–17833. Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; and Timofte, R. 2021. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF international conference on computer vision, 1833–1844. Loshchilov, I.; and Hutter, F. 2016. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983. Lowe, D. G. 2004. Distinctive image features from scaleinvariant keypoints. International journal of computer vision, 60: 91–110. Lu, L.; Li, W.; Tao, X.; Lu, J.; and Jia, J. 2021. Masa-sr: Matching acceleration and spatial adaptation for referencebased image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6368–6377. Luo, Z.; Huang, Y.; Li, S.; Wang, L.; and Tan, T. 2022. Learning the degradation distribution for blind image superresolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6063–6072. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32. Ranjan, A.; and Black, M. J. 2017. Optical flow estimation using a spatial pyramid network. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4161–4170. Shim, G.; Park, J.; and Kweon, I. S. 2020. Robust referencebased super-resolution with similarity-aware deformable convolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8425–8434. Timofte, R.; Agustsson, E.; Van Gool, L.; Yang, M.-H.; and Zhang, L. 2017. Ntire 2017 challenge on single image super-resolution: Methods and results. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 114–125. Wang, T.; Xie, J.; Sun, W.; Yan, Q.; and Chen, Q. 2021. Dual-camera super-resolution with aligned attention modules. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2001–2010. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; and Change Loy, C. 2018. Esrgan: Enhanced superresolution generative adversarial networks. In Proceedings of the European conference on computer vision (ECCV) workshops, 0–0. Wang, Z.; Bovik, A. C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image quality assessment: from error visibility to The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6888 structural similarity. IEEE transactions on image processing, 13(4): 600–612. Wei, P.; Xie, Z.; Lu, H.; Zhan, Z.; Ye, Q.; Zuo, W.; and Lin, L. 2020. Component divide-and-conquer for real-world image super-resolution. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part VIII 16, 101–117. Springer. Weinzaepfel, P.; Revaud, J.; Harchaoui, Z.; and Schmid, C. 2013. DeepFlow: Large displacement optical flow with deep matching. In Proceedings of the IEEE international conference on computer vision, 1385–1392. Xia, B.; Tian, Y.; Hang, Y.; Yang, W.; Liao, Q.; and Zhou, J. 2022. Coarse-to-fine embedded patchmatch and multi-scale dynamic aggregation for reference-based super-resolution. In Proceedings of the AAAI Conference on Artificial Intelligence, 2768–2776. Xu, R.; Yao, M.; and Xiong, Z. 2023. Zero-Shot Dual-Lens Super-Resolution. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9130– 9139. Yang, F.; Yang, H.; Fu, J.; Lu, H.; and Guo, B. 2020. Learning texture transformer network for image super-resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5791–5800. Yang, Q.; Liu, Y.; and Yang, J. 2023. Two-branch crisscross network for realistic and accurate image super-resolution. Displays, 80: 102549. Yue, H.; Sun, X.; Yang, J.; and Wu, F. 2013. Landmark image super-resolution by retrieving web images. IEEE Transactions on Image Processing, 22(12): 4865–4878. Yue, H.; Zhang, Z.; and Yang, J. 2022. Real-RawVSR: Real-World Raw Video Super-Resolution with a Benchmark Dataset. In Proceedings of the European conference on computer vision (ECCV), 608–624. Zhang, K.; Liang, J.; Van Gool, L.; and Timofte, R. 2021. Designing a practical degradation model for deep blind image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 4791–4800. Zhang, L.; Li, X.; He, D.; Li, F.; Wang, Y.; and Zhang, Z. 2022a. RRSR: Reciprocal Reference-Based Image SuperResolution with Progressive Feature Alignment and Selection. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XIX, 648–664. Springer. Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018a. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, 586–595. Zhang, X.; Chen, Q.; Ng, R.; and Koltun, V. 2019a. Zoom to learn, learn to zoom. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3762–3770. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; and Fu, Y. 2018b. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European conference on computer vision (ECCV), 286–301. Zhang, Z.; Wang, R.; Zhang, H.; Chen, Y.; and Zuo, W. 2022b. Self-supervised Learning for Real-World SuperResolution from Dual Zoomed Observations. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XVIII, 610– 627. Springer. Zhang, Z.; Wang, Z.; Lin, Z.; and Qi, H. 2019b. Image super-resolution by neural texture transfer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 7982–7991. Zheng, H.; Ji, M.; Han, L.; Xu, Z.; Wang, H.; Liu, Y.; and Fang, L. 2017. Learning Cross-scale Correspondence and Patch-based Synthesis for Reference-based SuperResolution. In BMVC, volume 1, 2. Zheng, H.; Ji, M.; Wang, H.; Liu, Y.; and Fang, L. 2018. Crossnet: An end-to-end reference-based super resolution network using cross-scale warping. In Proceedings of the European conference on computer vision (ECCV), 88–104. Zhu, X.; Hu, H.; Lin, S.; and Dai, J. 2019. Deformable convnets v2: More deformable, better results. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 9308–9316. Zou, H.; Xu, L.; and Okatani, T. 2023. Geometry Enhanced Reference-Based Image Super-Resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6123–6132. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6889 | 2024 | 765 |
18,590 | SurgicalSAM: Efficient Class Promptable Surgical Instrument Segmentation Wenxi Yue1, Jing Zhang1, Kun Hu1, Yong Xia2, Jiebo Luo3, Zhiyong Wang1 1School of Computer Science, The University of Sydney 2School of Computer Science, Northwestern Polytechnical University 3Department of Computer Science, University of Rochester {wenxi.yue, jing.zhang1, kun.hu, zhiyong.wang}@sydney.edu.au, [email protected], [email protected] Abstract The Segment Anything Model (SAM) is a powerful foundation model that has revolutionised image segmentation. To apply SAM to surgical instrument segmentation, a common approach is to locate precise points or boxes of instruments and then use them as prompts for SAM in a zeroshot manner. However, we observe two problems with this naive pipeline: (1) the domain gap between natural objects and surgical instruments leads to inferior generalisation of SAM; and (2) SAM relies on precise point or box locations for accurate segmentation, requiring either extensive manual guidance or a well-performing specialist detector for prompt preparation, which leads to a complex multi-stage pipeline. To address these problems, we introduce SurgicalSAM, a novel end-to-end efficient-tuning approach for SAM to effectively integrate surgical-specific information with SAM’s pre-trained knowledge for improved generalisation. Specifically, we propose a lightweight prototype-based class prompt encoder for tuning, which directly generates prompt embeddings from class prototypes and eliminates the use of explicit prompts for improved robustness and a simpler pipeline. In addition, to address the low inter-class variance among surgical instrument categories, we propose contrastive prototype learning, further enhancing the discrimination of the class prototypes for more accurate class prompting. The results of extensive experiments on both EndoVis2018 and EndoVis2017 datasets demonstrate that SurgicalSAM achieves state-of-the-art performance while only requiring a small number of tunable parameters. The source code is available at https://github.com/wenxi-yue/SurgicalSAM. Introduction Surgical instrument segmentation (SIS) is a crucial task in surgical vision, aimed at precisely delineating surgical instruments in operative scenes. It provides vital assistance to surgeons and facilitates the development of advanced computer-assisted operation systems (Shademan et al. 2016; Jin et al. 2021; Liu et al. 2021; Jian et al. 2020; Yue et al. 2023; Zhang and Tao 2020). Existing deep learning methods for SIS have achieved impressive results through the design and training of specialist models featuring task-specific components. Nevertheless, these methods usually require Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Precise Prompt SAM Precise Prompt … 0 t Track Anything 0 0 t Class: Monopolar Curved Scissors Input Image Ground-Truth Mask Detection-based Tracking-based Reference-based PerSAM 0 0 SurgicalSAM SAM Mask Decoder SAM Image Encoder Prototype-based Class Prompt Encoder SurgicalSAM Frozen Point Tuning Bounding Box k Class Prototype for Class k 1 2 3 4 5 t t t Figure 1: Comparison of our SurgicalSAM against existing detection-based, tracking-based, and reference-based zeroshot SAM frameworks for surgical instrument segmentation. training the complete set of model parameters (i.e., full training) using SIS datasets, resulting in inefficiency. 
In addition, due to the limited scale of the SIS datasets, the trained models tend to exhibit subpar generalisation performance. The Segment Anything Model (SAM) (Kirillov et al. 2023) has recently gained significant attention as a pioneering foundation model for promptable segmentation. Utilising SAM for downstream medical tasks holds great promise for enhancing training efficiency and leveraging strong pretrained knowledge. Current research predominantly employs SAM in a zero-shot manner for medical image segmentation. However, the lack of sufficient medical data in SAM pre-training and the substantial domain gap between natural objects and medical targets hinders the direct generalisation of SAM towards medical tasks. Many studies have reported subpar performance of SAM in zero-shot medical image segmentation (Deng et al. 2023; He et al. 2023; Wald et al. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6890 (a) SAM Prediction Mask mAP vs. Bounding Box Prompt Jitter (b) Scale Jitter -0.2 (c) GT Bounding Box (d) Scale Jitter 0.4 (e) Position Jitter -0.2 (f) GT Mask (g) Position Jitter 0.4 Figure 2: Prompt robustness study of SAM against bounding box jitter in terms of scale and position for surgical instrument segmentation. A jitter factor of 0 represents the ground-truth bounding box with no jitter; a higher absolute value of the jitter factor indicates larger prompt noises. 2023; Mazurowski et al. 2023; Huang et al. 2023; Cheng et al. 2023; Wang et al. 2023a,b). Specifically, surgical instruments differ significantly from natural objects in terms of specialised appearance, complex anatomical background, and high inter-category similarity. We evaluate three essential zero-shot SAM strategies on SIS: (1) MT-RCNN (MaskTrack-RCNN) (Yang, Fan, and Xu 2019) or Mask2Former (Cheng et al. 2022) as a bounding box detector followed by SAM, (2) Track Anything (Yang et al. 2023), and (3) PerSAM (Zhang et al. 2023), representing detection-based, tracking-based, and reference-based frameworks, respectively. As shown in Fig. 1, these methods demonstrate inferior results, where detection-based and tracking-based methods depict incorrect contours and the reference-based method misidentifies the instrument class. This further highlights the challenge of bridging the naturalsurgical domain gap and emphasises the necessity of SAM tuning. In addition, the performance of SAM relies on the precise locations of explicit prompts (Cheng et al. 2023; Wald et al. 2023). We confirm this through a prompt robustness study on SIS by introducing various scale and position jitters to the ground-truth bounding box as a prompt for SAM and recording the prediction mAP. As shown in Fig. 2, our study demonstrates SAM’s sensitivity to prompt jitters: even minor deviations in the provided bounding box prompts can significantly impair segmentation accuracy. As a result, existing zero-shot SAM frameworks often involve complex multi-stage pipelines, requiring either precise manual guidance or a well-performing specialist detector to provide accurate points or bounding boxes for accurate prompting. This complexity further restricts the direct application of SAM in the surgical domain. To address the above challenges, we propose SurgicalSAM, an end-to-end approach that effectively mitigates the surgical-natural domain gap through efficient tuning of SAM. A comparison of SurgicalSAM against existing pipelines is shown in Fig. 1. 
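The bounding-box perturbation behind the robustness study above can be sketched as follows. The exact protocol used for Fig. 2 is not spelled out beyond signed scale and position jitter factors, so the parameterisation below is an assumption.

import numpy as np

def jitter_box(box, factor, mode="scale"):
    # box: (x1, y1, x2, y2) ground-truth box; factor: signed jitter factor (0 keeps the GT box).
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    cx, cy = x1 + w / 2.0, y1 + h / 2.0
    if mode == "scale":            # enlarge (factor > 0) or shrink (factor < 0) the box
        w, h = w * (1.0 + factor), h * (1.0 + factor)
    else:                          # shift the box centre by a fraction of its size
        cx, cy = cx + factor * w, cy + factor * h
    return np.array([cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0])

Feeding such jittered boxes to SAM and recording the mAP at each factor yields curves of the kind summarised in Fig. 2.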
We propose a lightweight prototype-based class prompt encoder, which takes an instrument class as a prompt and learns the class prototypes by interacting with the image embedding to directly generate prompt embeddings for the mask decoder. By tuning the prototype-based class prompt encoder and the mask decoder, surgical knowledge is integrated with SAM’s pre-trained knowledge, effectively mitigating the domain gap. Moreover, our strategy of directly generating latent prompt embeddings from class prompts and eliminating the use of explicit points and bounding boxes further addresses the poor robustness associated with explicit prompts as well as maintains an end-to-end pipeline. In SurgicalSAM, the class prototypes play a vital role in effectively prompting the instrument of interest from an image. However, different surgical instrument categories often exhibit high similarity and low inter-class differences, thus posing a big challenge. To address this, we further propose contrastive prototype learning, utilising contrastive loss to acquire discriminative learned class prototypes. This method enhances the distinction between fine-grained instrument categories, resulting in more accurate class prompting and improved segmentation outcomes. In summary, the contributions of this paper are threefold: • We introduce SurgicalSAM to integrate surgical instrument knowledge with the pre-trained knowledge in SAM through efficient tuning for class promptable surgical instrument segmentation. It outperforms both specialist models and complex multi-stage solutions. • We propose a prototype-based class prompt encoder that eliminates the use of explicit prompts and facilitates direct learning of latent prompt embeddings from class prompts for an end-to-end pipeline. We also propose contrastive prototype learning to enhance the discrimination of the prototypes of fine-grained instrument categories for more accurate class prompting. • We conduct extensive experiments on the challenging EndoVis2018 and EndoVis2017 datasets, achieving state-of-the-art (SOTA) performance while significantly improving training efficiency. Related Work Surgical Instrument Segmentation Current research addresses SIS by training customised specialist models. Early research employs a pixel classification paradigm to predict pixel-wise class probabilities in a frame. Notably, TernausNet pioneers this direction using a U-Netbased encoder-decoder network (Shvets et al. 2018). This The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6891 has been later extended with feature pyramid attention (Ni et al. 2020) and flow-based temporal priors (Jin et al. 2019; Zhao et al. 2020). Nevertheless, these approaches encounter spatial class inconsistency, where one instrument may be assigned multiple instrument types. An alternative paradigm is mask classification, which aims to predict a set of masks and associate each mask with a class label, inherently reducing spatial class inconsistency. ISINet introduces mask classification to instrument segmentation with Mask-RCNN (Gonz´alez, Bravo-S´anchez, and Arbelaez 2020; He et al. 2017). Later, Baby et al. (2023) improve its classification performance by designing a specialised classification module. In addition, TraSeTR integrates tracking cues with a track-to-segment transformer (Zhao, Jin, and Heng 2022) and MATIS incorporates temporal consistency with Mask2Former (Ayobi et al. 2023; Cheng et al. 2022). 
Although various methods have been proposed for surgical instrument segmentation, they primarily rely on designing specialist models and training the complete set of model parameters, which is inefficient. Particularly with the small datasets in the surgical domain, these models may exhibit subpar generalisation performance. Segment Anything Model SAM is recognised as a pioneering foundation model for image segmentation. The large-scale pre-training equips it with excellent zero-shot generalisation capabilities, driving various downstream applications (Wang et al. 2023c; Li et al. 2023; Yan et al. 2023). However, SAM has been shown to struggle with zero-shot generalisation to medical scenarios (Deng et al. 2023; He et al. 2023; Mazurowski et al. 2023; Huang et al. 2023; Cheng et al. 2023) due to the substantial domain gap between natural objects and medical subjects. Moreover, SAM relies on explicit points and bounding boxes at precise locations for accurate segmentation (Cheng et al. 2023; Wald et al. 2023). As a result, extensive manual guidance or a specialist detector is often required, leading to a complex multi-stage pipeline (Wang et al. 2023a). To bridge the natural-medical domain gap, some studies seek to adapt SAM through domain-specific fine-tuning. However, they either require accurate point or bounding box prompts (Ma et al. 2023; Wu et al. 2023) or employ universal prompt embeddings for all classes which lack discrimination for fine-grained surgical instrument categories (Zhang and Liu 2023; Chen et al. 2023; Wang et al. 2023b). In contrast, our approach introduces a novel efficient-tuning approach for SAM with a prototype-based prompt encoder, which generates prompt embeddings from contrastivelylearned class prototypes. This enhances the discrimination of fine-grained classes while simplifying the pipeline by eliminating the need for explicit prompts. Methodology Overview In this work, we address the task of surgical instrument segmentation in a class promptable manner through efficient tuning of SAM. Specifically, given a surgical image I ∈RH×W ×3 with spatial resolution H × W and the class of an instrument in the image c as prompt, our goal is to predict the class c mask of the image, denoted as M (c): M (c) = SurgicalSAM(I, c). (1) SurgicalSAM is composed of three core components as shown in Fig. 3(a): an image encoder, a prototype-based class prompt encoder, and a mask decoder. Similar to SAM, the image encoder EI first extracts the embedding of the input image as FI ∈Rh×w×d, with h × w denoting the shape of the image embedding and d representing the number of embedding channels. Then, our prototype-based class prompt encoder ECP utilises the class prototypes B to activate the image embedding and leverages the obtained activated feature conditioned on the prompt class c to generate prompt embeddings, including dense prompt embeddings T (c) D and sparse prompt embeddings T (c) S . Finally, the image embedding and prompt embeddings are used to predict the mask M (c) by the mask decoder DM. The above process can be expressed as: FI = EI(I), (2) T (c) D , T (c) S = ECP (FI, B, c), (3) M (c) = DM(FI, [T (c) D , T (c) S , TO]), (4) where TO denotes the learnable output tokens in SAM. Prototype-based Class Prompt Encoder The prototype-based class prompt encoder exploits the similarity between the image and class prototypes to create prompt embeddings. Specifically, as shown in Fig. 
3(b), the spatial-wise similarity between the image embedding and the class prototype is computed to activate class-specific regions within the image, resulting in a class-activated feature to generate prompt embeddings for the mask decoder. Furthermore, inspired by the utilisation of both foreground and background point prompts in SAM, we propose to not only employ the prototype of the prompted class but integrate all class prototypes to incorporate both positive and negative cues. Such a strategy provides more robust priors for the model to effectively distinguish between instrument classes with high similarity. Specifically, the prototype-based class prompt encoder ECP is built upon a prototype bank B = concat({B(k)}k∈{1,2,...,C}) ∈RC×d consisting of a representative prototype for each class, where C is the total number of classes. Given an image I with image embedding FI, we construct a similarity matrix S = concat({S(k)}k∈{1,2,...,C}) ∈RC×h×w to represent the spatial-wise similarity of the image with the prototypes of all classes. It is generated by computing the dot product between the image embedding at every spatial location and each class prototype: S(k) = FI × B(k), for k ∈{1, 2, ..., C}. (5) The similarity matrix is then employed as spatial attention to activate the class-specific regions, resulting in class-activated feature for all classes F C I = The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6892 Image Encoder Mask Decoder Prototype-based Class Prompt Encoder Prompt: Class 4 Image Embedding Input Image Output Mask (Class 4) 1 2 3 4 5 Training Data 1 2 3 4 5 Class Prototypes within Embedding Space Dense Prompt Embeddings Sparse Prompt Embeddings 𝐼 𝐹𝐼 𝐵 𝑐 𝑇𝐷 (𝑐) 𝑇𝑆 (𝑐) 𝑀(𝑐) (a) Overview of SurgicalSAM Reshape Image Embedding Sparse Prompt Embeddings Dense Prompt Embeddings MLPS MLPD Matrix Multiply Element-Wise Multiply Element-Wise Sum 1 2 3 4 5 Prompt: Class 4 MLPS C Positive ClassActivated Feature Negative ClassActivated Feature Shared Parameters 𝐵 𝐹𝐼 𝑆 𝑐 𝐹𝐼 𝐶 𝑇𝐷 (𝑐) 𝑇𝑆 (𝑐) 𝜆+ 𝜆− 𝐹𝐼 (𝑐) Similarity Matrix Concat (b) Prototype-based Class Prompt Encoder Figure 3: SurgicalSAM for class promptable surgical instrument segmentation through efficient tuning of SAM. concat({F (k) I }k∈{1,2,...,C}) ∈RC×h×w×d: F (k) I = FI ◦S(k) + FI, for k ∈{1, 2, ..., C}, (6) where ◦and + represents element-wise multiplication and addition, respectively, and F (k) I ∈Rh×w×d represents the class-activated feature for class k. Finally, the class-activated feature is used to formulate dense and sparse prompt embeddings. In SAM, dense prompt embeddings are derived from foreground masks, providing positive cues for segmenting the object. Imitating this, we leverage the class-activated feature of the positive class, i.e., the prompted class c, for encoding dense prompt embeddings T (c) D ∈Rh×w×d. This is achieved through a two-layer Multilayer Perceptron (MLP): T (c) D = gD(ReLU(fD(F (c) I ))), (7) where fD and gD are two linear projection functions with intermediate dimension rD. On the other hand, the sparse prompt embeddings in SAM are encoded from both positive information (foreground points and bounding boxes) and negative information (background points). Inspired by this, we generate sparse prompt embeddings using the class-activated feature of all classes that include both positive, prompted class and negative, non-prompted classes. The positive and negative classes are then distinguished through a pair of positive and negative embeddings. 
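To make Eqs. (5)-(7) concrete, a minimal PyTorch sketch of the prototype bank, the spatial similarity, the class activation, and the dense prompt branch is given below. Module and variable names are ours, and the 256-dimensional embedding and the intermediate width of 128 are taken from the described setting only as assumptions.

import torch
import torch.nn as nn

class PrototypeClassPromptEncoder(nn.Module):
    def __init__(self, num_classes: int, d: int = 256, r_dense: int = 128):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, d))   # prototype bank B
        self.dense_mlp = nn.Sequential(                               # f_D and g_D in Eq. (7)
            nn.Linear(d, r_dense), nn.ReLU(), nn.Linear(r_dense, d)
        )

    def forward(self, feat: torch.Tensor, cls: int) -> torch.Tensor:
        # feat: (h, w, d) image embedding; cls: index of the prompted class.
        # Eq. (5): similarity of every spatial location with every class prototype.
        sim = torch.einsum("hwd,kd->khw", feat, self.prototypes)      # (C, h, w)
        # Eq. (6): class-activated features for all classes.
        activated = feat.unsqueeze(0) * sim.unsqueeze(-1) + feat.unsqueeze(0)  # (C, h, w, d)
        # Eq. (7): dense prompt embeddings from the activated feature of the prompted class.
        return self.dense_mlp(activated[cls])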
Specifically, F C I is first fed into a two-layer MLP to obtain positivity-agnostic sparse prompt embeddings ˆT C S = concat({ ˆT (k) S }k∈{1,2,...,C}) ∈RC×n×d: ˆT C S = gS(ReLU(fS(F C I ))), (8) where fS and gS are two linear projection functions with intermediate dimension rS, n indicates the number of sparse tokens per class, and ˆT (k) S ∈Rn×d represents the positivityagnostic sparse prompt embedding activated by class k. Then, a pair of positive and negative embeddings, λ+ ∈Rd and λ−∈Rd, are respectively added to the embeddings corresponding to positive class (class c) and negative classes (classes other than c), resulting in the final sparse prompt embeddings T (c) S ∈RC×n×d that are positivity-aware: T (c) S = concat({ ˆT (k) S + 1(k = c)λ+ + (1 −1(k = c))λ−}), for k ∈{1, 2, ..., C}. (9) T (c) S is then reshaped to Cn × d and is fed with T (c) D into the mask decoder for mask prediction. Contrastive Prototype Learning Our method relies on discriminative class prototypes for precise instrument category identification and accurate class region activation. However, obtaining accurate class prototypes in surgical scenarios with highly similar instrument appearances is challenging. To enhance prototype discriminativeness for more accurate class prompting, we propose contrastive prototype learning to acquire the optimised class prototypes during tuning of the framework, as illustrated in Fig. 4. Specifically, we propose prototype contrastive loss motivated by infoNCE loss (van den Oord, Li, and Vinyals 2019; Poole et al. 2019), where the class prototypes are considered as anchors and the SAM-based class embeddings in training images are regarded as samples. Given image embedding FI, the ground-truth binary mask of class c, G(c), is processed to resolution h × w and used to extract the SAMbased class embedding v(c) ∈Rd for class c by averaging the foreground features: v(c) = Phw i (FI ◦G(c)) Phw i G(c) . (10) To this end, the prototype contrastive loss is expressed as: LP CL = −1 C C X k=1 log exp(B(k) · v(k)/τ) PC q=1 exp(B(k) · v(q)/τ) , (11) where τ refers to the temperature parameter for modulating the similarities and B(k) is the prototype of class k. It can The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6893 Input Image Image Encoder Image Embedding Ground-Truth Mask of Class 4 Reshape Element-Wise Multiply Mean Class Prototypes SAM-based Class Embedding Repel Attract Embedding Space 1 2 3 4 5 𝐼 𝐺(𝑐) 𝐹𝐼 𝑣(𝑐) 𝐵(𝑐) Figure 4: Contrastive Prototype Learning. be seen that LP CL strengthens the similarity between the prototype of class k (anchor) and the SAM-based class embeddings of k (positive samples), simultaneously suppressing the similarity between the prototype of class k (anchor) with the SAM-based class embeddings of the classes other than k (negative samples). This results in more discriminative prototype representations and enhanced surgical domain knowledge infusion through SAM tuning. Efficient Tuning SurgicalSAM is of high training efficiency. During tuning, the large image encoder is frozen and only the parameters of the lightweight prototype-based prompt encoder and mask decoder are updated. 
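Contrastive prototype learning in Eqs. (10)-(11) reduces to masked average pooling of the image embedding followed by an InfoNCE-style cross-entropy over prototype-embedding similarities, as sketched below. In an actual training image only the instrument classes present would contribute embeddings; the sketch assumes one embedding per class for brevity, and the names are illustrative.

import torch
import torch.nn.functional as F

def masked_class_embedding(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # Eq. (10). feat: (h, w, d) image embedding; mask: (h, w) binary GT mask of one class.
    mask = mask.unsqueeze(-1)                                # (h, w, 1)
    return (feat * mask).sum(dim=(0, 1)) / mask.sum().clamp(min=1.0)

def prototype_contrastive_loss(prototypes: torch.Tensor,
                               class_embeds: torch.Tensor,
                               tau: float = 0.07) -> torch.Tensor:
    # Eq. (11). prototypes: (C, d) anchors B; class_embeds: (C, d) SAM-based samples v.
    logits = prototypes @ class_embeds.t() / tau             # row k holds B(k) . v(q) / tau
    targets = torch.arange(prototypes.size(0), device=prototypes.device)
    # Cross-entropy with the diagonal as positives equals the class-averaged InfoNCE term.
    return F.cross_entropy(logits, targets)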
The tuning is end-to-end, supervised by a loss function consisting of two terms: dice loss for segmentation (Milletari, Navab, and Ahmadi 2016) and prototype contrastive loss for prototype learning: L = LDICE + LP CL, (12) LDICE = 2 PHW i migi PHW i m2 i + PHW i g2 i , (13) where mi and gi are the predicted logit and the ground-truth binary value at pixel i of the image, respectively. Experiments and Discussion Datasets and Evaluation We use the EndoVis2018 (Allan et al. 2020) and EndoVis2017 (Allan et al. 2019) datasets and adhere to the standard protocols defined by Shvets et al. (2018) and Gonz´alez, Bravo-S´anchez, and Arbelaez (2020). EndoVis2017 consists of eight videos, each with 255 frames, for which we perform 4-fold cross-validation following Shvets et al. (2018). EndoVis2018 offers 11 training videos and four validation videos with each consisting of 149 frames. Both datasets provide seven instrument categories. For evaluation, we follow prior research and adopt three segmentation metrics: Challenge IoU (Allan et al. 2019), IoU, and mean class IoU (mc IoU) (Gonz´alez, BravoS´anchez, and Arbelaez 2020; Baby et al. 2023; Ayobi et al. 2023). The efficiency of our method is evaluated in terms of training speed, training GPU usage, and inference speed. Implementation Details The data from EndoVis2017 and EndoVis2018 are preprocessed following Shvets et al. (2018). For the prototypebased prompt encoder, the intermediate dimensions rD and rS are both set to 128 and the number of tokens per class n is set to 2 and 4 for EndoVis2018 and EndoVis2017, respectively. For prototype contrastive loss, a temperature τ of 0.07 is used. In terms of training, we initialise the image encoder, the mask decoder, and the positive and negative embeddings (λ+ and λ−) of SurgicalSAM with SAM’s pre-trained weight of the ViT-H version (Dosovitskiy et al. 2020). The image encoder and the positive and negative embeddings of our model remain frozen while the weights of the prompt encoder and mask decoder are updated. We employ an Adam optimiser with a learning rate of 0.001 and 0.0001 for EndoVis2018 and EndoVis2017, respectively. To reduce computational load, we adopt pre-computed image embeddings in training, employing a batch size of 32. Our model is implemented using PyTorch and trained and evaluated on an Nvidia Tesla V100 16GB GPU. Main Results The comparison of SurgicalSAM with existing methods on EndoVis2018 and EndoVis2017 are presented in Table 1 and Table 2, respectively. A visual comparison of the predictions is shown in Fig. 5. The evaluated instrument categories include Bipolar Forceps (BF), Prograsp Forceps (PF), Large Needle Driver (LND), Suction Instrument (SI), Vessel Sealer (VS), Clip Applier (CA), Grasping Retractor (GR), Monopolar Curved Scissors (MCS), and Ultrasound Probe (UP). In our comparison, we categorise existing strategies into specialist models and SAM-based models. Remarkably, SurgicalSAM surpasses existing SAM-based models, matching or even exceeding the performance of SOTA specialist models, while using only a few tunable parameters. In terms of SAM-based models, the three zero-shot SAM baselines: MT-RCNN or Mask2Former with SAM (Yang, Fan, and Xu 2019; Cheng et al. 2022) (detection-based), Track Anything (Yang et al. 2023) (tracking-based), and PerSAM (Zhang et al. 2023) (reference-based), all exhibit inferior performance. 
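As a side note on the training objective of Eqs. (12)-(13): as printed, Eq. (13) equals the Dice coefficient rather than a quantity to be minimised, so the sketch below follows the usual convention of minimising 1 - Dice and combines it with the prototype contrastive term. The helper names and the callable for the contrastive term are illustrative.

import torch

def dice_loss(pred, target, eps=1e-6):
    # pred: predicted mask probabilities in [0, 1]; target: binary GT mask of the same shape.
    inter = (pred * target).sum()
    denom = (pred ** 2).sum() + (target ** 2).sum()
    return 1.0 - 2.0 * inter / (denom + eps)

def surgicalsam_loss(pred, target, prototypes, class_embeds, pcl_fn, tau=0.07):
    # Eq. (12): L = L_DICE + L_PCL, with the contrastive term supplied as a callable.
    return dice_loss(pred, target) + pcl_fn(prototypes, class_embeds, tau)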
In particular, PerSAM is notably unsuitable for the task due to its reliance on a single instance for visual reference and a simple two-point prompting mechanism. Given the substantial intra-class variance and low inter-class variance among surgical instruments, a single instance lacks the necessary information for accurately referencing an instrument, resulting in missing instances in prediction, as shown in Fig. 5(b) and (d). Additionally, the use of just one foreground point and one background point fails to effectively prompt SAM for zero-shot instrument segmentation due to SAM’s lack of surgical domain knowledge, leading to an incorrect interpretation of the instrument contours (Fig. 5(a), (b), and (c)). While Track Anything exhibits improved performance compared to PerSAM, its efficacy heavily relies on the quality of prompts, as shown by the large gap between the results obtained from prompting with one point versus five points. Furthermore, the significant motion of instruments often causes Track Anything to lose The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6894 Instrument Categories Method Category Method Challenge IoU IoU mc IoU BF PF LND SI CA MCS UP #Params Specialist Model TernausNet 46.22 39.87 14.19 44.20 4.67 0.00 0.00 0.00 50.44 0.00 32.20M MF-TAPNet 67.87 39.14 24.68 69.23 6.10 11.68 14.00 0.91 70.24 0.57 37.73M Dual-MF 70.40 35.09 74.10 6.80 46.00 30.10 7.60 80.90 0.10 203.80M ISINet 73.03 70.94 40.21 73.83 48.61 30.98 37.68 0.00 88.16 2.16 162.52M TraSeTr 76.20 47.71 76.30 53.30 46.50 40.60 13.90 86.20 17.15 S3Net 75.81 74.02 42.58 77.22 50.87 19.83 50.59 0.00 92.12 7.44 68.41M MATIS Frame 82.37 77.01 48.65 83.35 38.82 40.19 64.49 4.32 93.18 16.17 68.72M SAM-based Model MT-RCNN + SAM 78.49 78.49 56.07 79.83 74.86 43.12 62.88 16.74 91.62 23.45 57.67M Mask2Former + SAM 78.72 78.72 52.50 85.95 82.31 44.08 0.00 49.80 92.17 13.18 68.72M TrackAnything (1 Point) 40.36 38.38 20.62 30.20 12.87 24.46 9.17 0.19 55.03 12.41 TrackAnything (5 Points) 65.72 60.88 38.60 72.90 31.07 64.73 10.24 12.28 61.05 17.93 PerSAM 49.21 49.21 34.55 51.26 34.40 46.75 16.45 15.07 52.28 25.62 PerSAM (Fine-Tune) 52.21 52.21 37.24 57.19 36.13 53.86 14.34 25.94 54.66 18.57 2 SurgicalSAM (Ours) 80.33 80.33 58.87 83.66 65.63 58.75 54.48 39.78 88.56 21.23 4.65M GT Centroid + SAM 60.26 60.26 63.34 44.35 65.92 30.99 87.14 69.69 80.04 65.26 GT Bbox + SAM 88.04 88.04 84.23 87.10 86.81 72.23 91.21 75.91 93.08 83.24 Table 1: Comparative Results on the EndoVis2018 Dataset. #Params represents number of tunable parameters. 
Instrument Categories Method Category Method Challenge IoU IoU mc IoU BF PF LND VS GR MCS UP Specialist Model TernausNet 35.27 12.67 10.17 13.45 12.39 20.51 5.97 1.08 1.00 16.76 MF-TAPNet 37.25 13.49 10.77 16.39 14.11 19.01 8.11 0.31 4.09 13.40 Dual-MF 45.80 26.40 34.40 21.50 64.30 24.10 0.80 17.90 21.80 ISINet 55.62 52.20 28.96 38.70 38.50 50.09 27.43 2.10 28.72 12.56 TraSeTr 60.40 32.56 45.20 56.70 55.80 38.90 11.40 31.30 18.20 S3Net 72.54 71.99 46.55 75.08 54.32 61.84 35.50 27.47 43.23 28.38 MATIS Frame 68.79 62.74 37.30 66.18 50.99 52.23 32.84 15.71 19.27 23.90 SAM-based Model Mask2Former + SAM 66.21 66.21 55.26 66.84 55.36 83.29 73.52 26.24 36.26 45.34 TrackAnything (1 Point) 54.90 52.46 55.35 47.59 28.71 43.27 82.75 63.10 66.46 55.54 TrackAnything (5 Points) 67.41 64.50 62.97 55.42 44.46 62.43 83.68 62.59 67.03 65.17 PerSAM 42.47 42.47 41.80 53.99 25.89 50.17 52.87 24.24 47.33 38.16 PerSAM (Fine-Tune) 41.90 41.90 39.78 46.21 28.22 53.12 57.98 12.76 41.19 38.99 SurgicalSAM (Ours) 69.94 69.94 67.03 68.30 51.77 75.52 68.24 57.63 86.95 60.80 GT Centroid + SAM 44.42 44.42 54.41 63.42 36.03 22.57 54.21 75.18 70.17 59.25 GT Bbox + SAM 76.31 76.31 81.18 89.36 73.44 67.67 90.04 87.79 94.03 65.91 Table 2: Comparative Results on the EndoVis2017 Dataset. track or confuse between instruments with similar appearances (Fig. 5(b), (c), and (d)). Detection-based SAM shows the most promising performance among the three zero-shot SAM baselines. However, its effectiveness relies on a welltrained detector model which requires significant training effort. Also, without SAM tuning, the lack of domain knowledge can result in incomplete masks or misidentification of instrument categories (Fig. 5(a), (b), and (c)). SurgicalSAM outperforms all three zero-shot SAM baselines. Different from these solutions, SurgicalSAM integrates surgical domain knowledge with SAM’s pre-trained general knowledge, enhancing its expertise with surgical instruments and resulting in more accurate segmentation (Fig. 5). Meanwhile, the tuning of SurgicalSAM is highly efficient, requiring significantly fewer tunable parameters than the detection-based model (4.65M for SurgicalSAM vs. 57.67M for MT-RCNN + SAM). Furthermore, SurgicalSAM utilises learned prototypes as references, which are more general and descriptive than the single instance reference in PerSAM, and eliminates the use of explicit prompts for a pipeline much simpler than the multi-stage detectionbased pipeline. We also establish two oracle scenarios by employing ground-truth centroids or ground-truth bounding boxes as prompts for SAM. As shown in Table 1 and Table 2, SurgicalSAM demonstrates substantial superiority over the utilisation of ground-truth centroids, achieving an improvement of 20.07% and 25.52% in Challenge IoU for EndoVis2018 and EndoVis2017, respectively. These promising results show that SurgicalSAM already attains superior results compared to employing basic manual guidance. Moreover, SurgicalSAM achieves SOTA performance competitive with the specialist models while requiring substantially fewer tunable parameters (4.65M for SurgicalSAM vs. 68.72M for MATIS Frame). Particularly, significant improvements can be observed in mean class IoU, indicating that the general knowledge in foundation models serves as extra priors that help to diminish the class imbalance problem in small datasets. In summary, our method achieves promising performance with high efficiency. 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6895 SurgicalSAM Ground Truth MaskTrack-RCNN + SAM PerSAM (Fine-Tune) Track Anything (5 Points) Bipolar Forceps Prograsp Forceps Large Needle Driver (a) (b) (c) (d) Suction Instrument Monopolar Curved Scissors Figure 5: Visualisation of Predicted Masks. Challenge IoU mc IoU Challenge IoU mc IoU n \LP CL ✗ ✓ 2 76.38 53.95 80.33 58.87 4 78.26 56.54 79.46 58.40 6 77.28 53.71 79.67 56.97 8 76.98 53.94 80.10 58.30 Table 3: Ablation Study on SurgicalSAM. Ablation Study We conduct an ablation study on EndoVis2018 for contrastive prototype learning and the number of tokens n. Specifically, we remove the contrastive prototype learning module and use fixed class prototypes computed by taking the average of the class embeddings across all training samples. The results, as depicted in Table 3, show a significant difference. Without the contrastive learning process, the precomputed fixed prototypes tend to be overly similar across different instrument categories due to their highly similar appearance. Contrastive prototype learning helps the model to learn more discriminative class prototypes and accurately identify the instrument classes. Moreover, the efficacy of contrastive prototype learning remains consistent across different numbers of tokens. Regarding the impact of different numbers of tokens on our complete model, as shown in Table 3, no notable changes can be observed. In contrast to the original SAM which is sensitive to the number of points provided (Cheng et al. 2023), the use of class prompt in our work demonstrates enhanced robustness. Cross-Dataset Generalisation We verify the cross-dataset generalisability of SurgicalSAM by training it on one dataset and evaluating it on another. The results are shown in Table 4, where only the instrument classes shared by both datasets are considered. Compared to the SOTA specialist model MATIS Frame, our method consistently performs better in both ways (EndoVis2018 to EndoVis2017 and EndoVis2017 to EndoVis2018). Notably, when trained on EndoVis2018 and evaluated on EndoVis2017, we achieve a large improvement of 11.43% in Instrument Categories (IoU) T V Method BF PF LND MCS Mean IoU 18 17 MATIS Frame 45.57 32.62 44.98 58.84 45.50 SurgicalSAM 70.95 35.21 45.46 76.08 56.93 17 18 MATIS Frame 65.55 13.89 38.25 65.58 45.81 SurgicalSAM 44.50 27.17 50.76 62.94 46.34 Table 4: Cross-Dataset Generalisation. T: training dataset; V: validation dataset; 18: EndoVis2018; 17: EndoVis2017. SpeedT (fps) MemoryT (GB) Method bz=2 bz=16 bz=32 bz=2 bz=16 bz=32 MATIS Frame 3.1 13.1 MT-RCNN+SAM 8.2 12.8 3.2 13.9 SurgicalSAM 40.1 57.4 59.8 1.9 5.9 9.6 SpeedI (fps) Method Online Feature Offline Feature MT-RCNN+SAM 1.6 14.3 SurgicalSAM 1.7 91.7 Table 5: Complexity Analysis. T : Training; I: Inference. the IoU averaged over all classes. This underscores the advantage of SurgicalSAM over dedicated specialist models in terms of its ability to effectively generalise to new data distributions, owing to its integration of both foundation general knowledge and surgical domain expertise. Complexity Analysis We conduct a complexity analysis of SurgicalSAM against the best-performing zero-shot SAM baseline (MT-RCNN + SAM) and the SOTA specialist model MATIS Frame (Ayobi et al. 2023). Their comparison regarding training efficiency across three batch sizes (bz) and inference efficiency is depicted in Table 5. 
In training, our method demonstrates considerably improved efficiency with notably faster speed and lower GPU memory consumption. Owing to the small number of tunable parameters, SurgicalSAM utilises less than 1/6 of the GPU memory of MATIS Frame with the same batch size, while achieving training over 10 times faster. In inference, the end-to-end pipeline of SurgicalSAM allows it to run faster than the complex multi-stage SAM baseline. Conclusion In this paper, we present SurgicalSAM, a novel method to efficiently tune SAM for surgical instrument segmentation. SurgicalSAM introduces a prototype-based class prompt encoder, which generates prompt embeddings directly from class prototypes. This eliminates the need for explicit points or boxes from manual guidance or specialist detectors, enabling an end-to-end pipeline and enhancing prompt robustness. We also introduce contrastive prototype learning to enhance the discriminative capability of class prototypes, improving differentiation among fine-grained instrument categories. Our method achieves state-of-the-art performance on both EndoVis2018 and EndoVis2017 with remarkable training and inference efficiency. It shows great promise for adapting SAM for surgical instrument segmentation. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6896 Acknowledgements This study was partially supported by Australian Research Council (ARC) grant DP210102674. References Allan, M.; Kondo, S.; Bodenstedt, S.; Leger, S.; Kadkhodamohammadi, R.; Luengo, I.; Fuentes, F.; Flouty, E.; Mohammed, A.; Pedersen, M.; Kori, A.; Alex, V.; Krishnamurthi, G.; Rauber, D.; Mendel, R.; Palm, C.; Bano, S.; Saibro, G.; Shih, C.-S.; Chiang, H.-A.; Zhuang, J.; Yang, J.; Iglovikov, V.; Dobrenkii, A.; Reddiboina, M.; Reddy, A.; Liu, X.; Gao, C.; Unberath, M.; Kim, M.; Kim, C.; Kim, C.; Kim, H.; Lee, G.; Ullah, I.; Luna, M.; Park, S. H.; Azizian, M.; Stoyanov, D.; Maier-Hein, L.; and Speidel, S. 2020. 2018 Robotic Scene Segmentation Challenge. arXiv:2001.11190. Allan, M.; Shvets, A.; Kurmann, T.; Zhang, Z.; Duggal, R.; Su, Y.-H.; Rieke, N.; Laina, I.; Kalavakonda, N.; Bodenstedt, S.; Herrera, L.; Li, W.; Iglovikov, V.; Luo, H.; Yang, J.; Stoyanov, D.; Maier-Hein, L.; Speidel, S.; and Azizian, M. 2019. 2017 Robotic Instrument Segmentation Challenge. arXiv:1902.06426. Ayobi, N.; P´erez-Rond´on, A.; Rodr´ıguez, S.; and Arbel´aez, P. 2023. MATIS: Masked-Attention Transformers for Surgical Instrument Segmentation. In ISBI, 1–5. Baby, B.; Thapar, D.; Chasmai, M.; Banerjee, T.; Dargan, K.; Suri, A.; Banerjee, S.; and Arora, C. 2023. From Forks to Forceps: A New Framework for Instance Segmentation of Surgical Instruments. In WACV, 6180–6190. IEEE. Chen, T.; Zhu, L.; Deng, C.; Cao, R.; Wang, Y.; Zhang, S.; Li, Z.; Sun, L.; Zang, Y.; and Mao, P. 2023. SAM-Adapter: Adapting Segment Anything in Underperformed Scenes. In ICCV Workshops, 3367–3375. Cheng, B.; Misra, I.; Schwing, A. G.; Kirillov, A.; and Girdhar, R. 2022. Masked-attention Mask Transformer for Universal Image Segmentation. In CVPR, 1290–1299. Cheng, D.; Qin, Z.; Jiang, Z.; Zhang, S.; Lao, Q.; and Li, K. 2023. SAM on Medical Images: A Comprehensive Study on Three Prompt Modes. arXiv:2305.00035. Deng, R.; Cui, C.; Liu, Q.; Yao, T.; Remedios, L. W.; Bao, S.; Landman, B. A.; Tang, Y.; Wheless, L. E.; Coburn, L. A.; Wilson, K. T.; Wang, Y.; Fogo, A. B.; Yang, H.; and Huo, Y. 2023. Segment Anything Model (SAM) for Digital Pathology: Assess Zero-shot Segmentation on Whole Slide Imaging. 
In Medical Imaging with Deep Learning, short paper track. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In ICLR. Gonz´alez, C.; Bravo-S´anchez, L.; and Arbelaez, P. 2020. ISINet: An Instance-Based Approach for Surgical Instrument Segmentation. In MICCAI, 595–605. Springer. He, K.; Gkioxari, G.; Doll´ar, P.; and Girshick, R. 2017. Mask R-CNN. In ICCV, 2961–2969. He, S.; Bao, R.; Li, J.; Stout, J.; Bjornerud, A.; Grant, P. E.; and Ou, Y. 2023. Computer-Vision Benchmark SegmentAnything Model (SAM) in Medical Images: Accuracy in 12 Datasets. arXiv:2304.09324. Huang, Y.; Yang, X.; Liu, L.; Zhou, H.; Chang, A.; Zhou, X.; Chen, R.; Yu, J.; Chen, J.; Chen, C.; et al. 2023. Segment Anything Model for Medical Images? Medical Image Analysis, 103061. Jian, Z.; Yue, W.; Wu, Q.; Li, W.; Wang, Z.; and Lam, V. 2020. Multitask Learning for Video-based Surgical Skill Assessment. In DICTA, 1–8. Jin, Y.; Cheng, K.; Dou, Q.; and Heng, P.-A. 2019. Incorporating Temporal Prior from Motion Flow for Instrument Segmentation in Minimally Invasive Surgery Video. In MICCAI, 440–448. Springer. Jin, Y.; Long, Y.; Chen, C.; Zhao, Z.; Dou, Q.; and Heng, P.A. 2021. Temporal Memory Relation Network for Workflow Recognition From Surgical Video. IEEE Transactions on Medical Imaging, 40(7): 1911–1923. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A. C.; Lo, W.Y.; Dollar, P.; and Girshick, R. 2023. Segment Anything. In ICCV, 4015–4026. Li, Y.; Zhang, J.; Teng, X.; and Lan, L. 2023. RefSAM: Efficiently Adapting Segmenting Anything Model for Referring Video Object Segmentation. arXiv:2307.00997. Liu, D.; Li, Q.; Jiang, T.; Wang, Y.; Miao, R.; Shan, F.; and Li, Z. 2021. Towards Unified Surgical Skill Assessment. In CVPR, 9522–9531. Ma, J.; He, Y.; Li, F.; Han, L.; You, C.; and Wang, B. 2023. Segment Anything in Medical Images. arXiv:2304.12306. Mazurowski, M. A.; Dong, H.; Gu, H.; Yang, J.; Konz, N.; and Zhang, Y. 2023. Segment Anything Model for Medical Image Analysis: An Experimental Study. Medical Image Analysis, 102918. Milletari, F.; Navab, N.; and Ahmadi, S.-A. 2016. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. In 3DV, 565–571. IEEE. Ni, Z.-L.; Bian, G.-B.; Wang, G.-A.; Zhou, X.-H.; Hou, Z.G.; Chen, H.-B.; and Xie, X.-L. 2020. Pyramid Attention Aggregation Network for Semantic Segmentation of Surgical Instruments. In AAAI, volume 34, 11782–11790. Poole, B.; Ozair, S.; Van Den Oord, A.; Alemi, A.; and Tucker, G. 2019. On Variational Bounds of Mutual Information. In ICML, 5171–5180. PMLR. Shademan, A.; Decker, R. S.; Opfermann, J. D.; Leonard, S.; Krieger, A.; and Kim, P. C. 2016. Supervised Autonomous Robotic Soft Tissue Surgery. Science Translational Medicine, 8(337): 337ra64–337ra64. Shvets, A. A.; Rakhlin, A.; Kalinin, A. A.; and Iglovikov, V. I. 2018. Automatic Instrument Segmentation in RobotAssisted Surgery Using Deep Learning. In ICMLA, 624– 628. IEEE. van den Oord, A.; Li, Y.; and Vinyals, O. 2019. Representation Learning with Contrastive Predictive Coding. arXiv:1807.03748. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6897 Wald, T.; Roy, S.; Koehler, G.; Disch, N.; Rokuss, M. R.; Holzschuh, J.; Zimmerer, D.; and Maier-Hein, K. 2023. SAM. 
MD: Zero-Shot Medical Image Segmentation Capabilities of the Segment Anything Model. In Medical Imaging with Deep Learning, short paper track. Wang, A.; Islam, M.; Xu, M.; Zhang, Y.; and Ren, H. 2023a. SAM Meets Robotic Surgery: An Empirical Study in Robustness Perspective. arXiv:2304.14674. Wang, A.; Islam, M.; Xu, M.; Zhang, Y.; and Ren, H. 2023b. SAM Meets Robotic Surgery: An Empirical Study on Generalization, Robustness and Adaptation. In MICCAI Workshops. Wang, D.; Zhang, J.; Du, B.; Xu, M.; Liu, L.; Tao, D.; and Zhang, L. 2023c. SAMRS: Scaling-up Remote Sensing Segmentation Dataset with Segment Anything Model. In NeurIPS Datasets and Benchmarks Track. Wu, J.; Zhang, Y.; Fu, R.; Fang, H.; Liu, Y.; Wang, Z.; Xu, Y.; and Jin, Y. 2023. Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation. arXiv:2304.12620. Yan, Z.; Li, J.; Li, X.; Zhou, R.; Zhang, W.; Feng, Y.; Diao, W.; Fu, K.; and Sun, X. 2023. RingMo-SAM: A Foundation Model for Segment Anything in Multimodal RemoteSensing Images. IEEE Transactions on Geoscience and Remote Sensing, 61: 1–16. Yang, J.; Gao, M.; Li, Z.; Gao, S.; Wang, F.; and Zheng, F. 2023. Track Anything: Segment Anything Meets Videos. arXiv:2304.11968. Yang, L.; Fan, Y.; and Xu, N. 2019. Video Instance Segmentation. In ICCV, 5188–5197. Yue, W.; Liao, H.; Xia, Y.; Lam, V.; Luo, J.; and Wang, Z. 2023. Cascade Multi-Level Transformer Network for Surgical Workflow Analysis. IEEE Transactions on Medical Imaging. Zhang, J.; and Tao, D. 2020. Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things. IEEE Internet of Things Journal, 8(10): 7789–7817. Zhang, K.; and Liu, D. 2023. Customized Segment Anything Model for Medical Image Segmentation. arXiv:2304.13785. Zhang, R.; Jiang, Z.; Guo, Z.; Yan, S.; Pan, J.; Ma, X.; Dong, H.; Gao, P.; and Li, H. 2023. Personalize Segment Anything Model with One Shot. arXiv:2305.03048. Zhao, Z.; Jin, Y.; Gao, X.; Dou, Q.; and Heng, P.-A. 2020. Learning Motion Flows for Semi-supervised Instrument Segmentation from Robotic Surgical Video. In MICCAI, 679–689. Springer. Zhao, Z.; Jin, Y.; and Heng, P.-A. 2022. TraSeTR: Track-toSegment Transformer with Contrastive Query for Instancelevel Instrument Segmentation in Robotic Surgery. In ICRA, 11186–11193. IEEE. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6898 | 2024 | 766 |
18,591 | Unveiling Details in the Dark: Simultaneous Brightening and Zooming for Low-Light Image Enhancement Ziyu Yue1,2, Jiaxin Gao1, Zhixun Su1,2* 1Dalian University of Technology 2Key Laboratory for Computational Mathematics and Data Intelligence of Liaoning Province {11901015, zxsu}@mail.dlut.edu.cn, [email protected] Abstract Existing super-resolution methods exhibit limitations when applied to nighttime scenes, primarily due to their lack of adaptation to low-pair dynamic range and noise-heavy darklight images. In response, this paper introduces an innovative customized framework to simultaneously Brighten and Zoom in low-resolution images captured in low-light conditions, dubbed BrZoNet. The core method begins by feeding lowlight, low-resolution images and their corresponding ground truths into the Retinex-induced siamese decoupling network. This process yields distinct reflectance maps and illuminance maps, guided by supervision from the ground truth’s decomposition maps. Subsequently, these reflectance and illuminance maps transition into an intricate super-resolution sub-network. This sub-network employs a meticulously designed cross-layer content-aware interactor - Illumination-aware Interaction Unit (IaIU), elegantly endowed with a gating mechanism. The IaIU facilitates meaningful feature interaction between illuminance and reflectance features while effectively reducing unwanted noise. An intricate super-resolution cage is also constructed to comprehensively integrate information, ultimately resulting in the generation of high-resolution images featuring intricate details. Thorough and diverse experiments validate the superiority of the proposed BrZoNet, surpassing contemporary cutting-edge technologies by proficiently augmenting brightness and intricately recovering complex details, showcasing advancements of 7.1% in PSNR, 2.4% in SSIM, and an impressive 36.8% in LPIPS metrics. Introduction The task of low-light image processing has been a hot research topic in the field of computer vision (Sharma and Tan 2021; Jin, Yang, and Tan 2022; Jin et al. 2023; Tan et al. 2021; Xie et al. 2023). It has practical applications across diverse domains, encompassing nighttime photography, nighttime detection, and segmentation, as well as security surveillance and autonomous driving (Wu and Deng 2022; Deng et al. 2022). These tasks necessitate the acquisition of bright and detailed images or video frames to ensure effective visual perception and capture rich feature information, thereby enhancing high-level perceptual performance. Hence, enhancing the brightness of low-light images along with improving their Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: We explore three different approaches to address the challenging task of super-resolution in low-light scenes: (a) task cascade, (b) direct super-resolution, and (c) siamese decoupling and illumination-guided super-resolution. resolution to capture more details holds substantial research significance for the aforementioned task. To tackle this issue, two of the most intuitive approaches are considered: one involves cascading low-light enhancement with the super-resolution method, while the other entails training the existing super-resolution network model directly on the corresponding dataset. The corresponding schemes are shown in Figure 1(a) and Figure 1(b). 
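To make the two baseline schemes concrete, a minimal inference-flow sketch is given below. The modules `TinyEnhancer` and `TinySR` are placeholders standing in for a real low-light enhancement network (e.g., SCI or LLFormer) and a real super-resolution network (e.g., HAT); the sketch only illustrates how the two pipelines are wired, not the released implementations of those methods.

```python
import torch
import torch.nn as nn

class TinyEnhancer(nn.Module):
    """Placeholder for a low-light enhancement network."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
    def forward(self, x):
        return self.body(x)

class TinySR(nn.Module):
    """Placeholder for a super-resolution network, x2 upscaling via PixelShuffle."""
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(3, 3 * scale * scale, 3, padding=1),
                                  nn.PixelShuffle(scale))
    def forward(self, x):
        return self.body(x)

@torch.no_grad()
def cascade_sr(enhancer, sr_model, dark_lr):
    """Scheme (a): enhance the dark input first, then super-resolve it."""
    return sr_model(enhancer(dark_lr))

@torch.no_grad()
def direct_sr(sr_model, dark_lr):
    """Scheme (b): feed the dark low-resolution image straight into the SR model."""
    return sr_model(dark_lr)

dark_lr = torch.rand(1, 3, 64, 64) * 0.1      # simulate a dark low-resolution input
enhancer, sr_model = TinyEnhancer(), TinySR()
print(cascade_sr(enhancer, sr_model, dark_lr).shape,
      direct_sr(sr_model, dark_lr).shape)
```

In the cascade variant, any noise or color bias left by the enhancer is inherited by the SR stage, which is the failure mode discussed above.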
We explored the effectiveness of these two implementation strategies by using the state-of-the-art super-resolution model (i.e., HAT (Chen et al. 2023)) as well as the low-light enhancement models (i.e., LLFormer (Wang et al. 2023) and SCI (Ma et al. 2022)) for direct and cascade training. The corresponding visual results are presented in Figure 2. Notably, we retrained both the cascaded and standalone super-resolution models using the low-light super-resolution dataset to ensure a fair comparison. Nevertheless, both of the aforementioned approaches present sub-optimal solutions to this joint task. As evidenced in Figure 2, the results produced by cascaded models, namely SCI+HAT and LLFormer+HAT, are contingent upon the efficacy of the low-light enhancement model. However, this approach is hindered by prominent noise and a deficiency of detail. Direct utilization of existing super-resolution models produces artifacts and is not sufficiently adaptable to different levels of darkness. The root cause of these limitations lies in the inherent characteristics of existing super-resolution The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6899 Input SCI+HAT LLFormer+HAT HAT Ours GT Figure 2: Visual results of various methods in low-light scenes. Cascade mode and direct super-resolution mode exhibit inferior performance. In contrast, the proposed approach achieves more natural and intricate texture details. methods under normal lighting conditions, making it challenging to robustly learn rich features from extremely dark images to enhance super-resolution. However, in the cascade model, because of the inherent limitations of low illumination enhancement methods, the enhanced input propagated to the subsequent super-resolution network tends to suffer from color biases, noise, and even blurred details, thereby significantly impeding the overall performance of the superresolution model. Conversely, network models without a specific design tailored to low-light images face considerable challenges when attempting to glean sufficient fine-grained information from such inputs, given their low pixel values and narrow dynamic ranges. Consequently, these models are prone to generating issues such as artifacts, noise, and color biases. Moreover, the actual images and video frames captured in real-world scenarios often exhibit varying degrees of darkness, thus further compounding the challenge of adapting to diverse levels of low-light conditions. This variability necessitates a flexible and adaptable approach to address the specific darkness level in a given scenario. In order to address the above challenges, we have devised a novel framework and corresponding training strategy, which leverages the principles of the Retinex theory within the context of decomposition space. The corresponding scheme is shown in Figure 1(c). Specifically, a decomposition space network is introduced, trained by siamese unsupervised learning, to decompose the low-illumination image into distinct reflection and illumination maps. These individual components, representing the inherent reflectance and illuminance characteristics, are then directed into a subsequent multi-scale contextual UNet for dedicated enhancement. To further improve the quality of the reflectance maps and adapt them to varying degrees of darkness, we have introduced a cross-layer aware interactor with gating mechanisms, which serves a dual purpose of implicit denoising and illuminating the reflectance maps. 
By employing gating mechanisms, we can selectively regulate the information flow, enabling the reflectance maps to better acclimate to diverse low-light conditions. Furthermore, the enriched multi-scale illumination and reflection features undergo a meticulous super-resolution fusion process, meticulously designed within an intricate fusion cage. This fusion facilitates comprehensive information integration across multiple scales and ultimately leads to the reconstruction of super-resolution images. The final results are obtained by a dot product, which effectively enhances both luminance and fine-grained details, thus achieving superior visual enhancement outcomes. The main contributions of the paper are summarized: • In order to solve the low-light super-resolution problem, BrZoNet is proposed from the perspective of decomposition space for simultaneously Brightening and Zooming low-light low-resolution images. • To enable the interaction of information on different scales of illuminance and reflectance features, this paper proposes a cross-layer content-aware interactive component - Illumination-aware Interaction Unit (IaIU), which enhances the adaptability of features to different darkness levels and suppresses the noise implicitly. • In order to enhance the detail of the final reconstruction results, this paper proposes an intricate super-resolution fusion cage - Multi-stream Super-resolution Cage (MSC), which emphasizes faithful texture details by fusing reflectance and illumination features at different scales. • Thorough and comprehensive experimentation unequivocally establishes the superiority of the proposed BrZoNet over contemporary cutting-edge technologies, as it not only enhances brightness but also adeptly restores intricate details, achieving remarkable improvements of 7.1% in PSNR, 2.4% in SSIM, and an impressive 36.8% in LPIPS metrics. Related Work Image Super-resolution In recent years, remarkable advancements have been achieved in image super-resolution algorithms, driven by the rapid development of deep learning (Zhou et al. 2023a; Sun, Pan, and Tang 2022). The utilization of convolutional neural network (Ledig et al. 2017; Zamir et al. 2020; Gao et al. 2023a) and generative adversarial network (Wang et al. 2018, 2021) in addressing image super-resolution challenges has resulted in outstanding performance across various datasets. The application of transformer-based networks (Liu et al. 2021c; Liang et al. 2021; Chen et al. 2023; Gao et al. 2023b) in the realm of image super-resolution has significantly contributed to advancements in model architecture, computational efficiency, and practical application, thereby fostering progress in the research and development of super-resolution tasks. These methods promote the development of super-resolution tasks in terms of model structure, computational consumption, and application to realistic scenarios. However, these methods are not specifically designed for nighttime scenes, and their direct use can result in insufficient brightening, artifacts, color deviation, and noise amplification. 
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6900 MSC MSC Illuminationaware Interaction Unit (IaIU) Softmax IaIU IaIU IaIU IaIU (a) Retinex-induced Siamese Decoupling (c) Intricate SR Fusion Cage (b) Cross-layer Content-aware Interactor (d) Training Strategy of BrZoNet (e) Key Operation/Description Refined Content Unit Ilumination/Reflection Constraint Multi-stream SR Cage Decoupling Constraint Element-wise Multiplication Matrix Multiplication MSC Element-wise Addition Reconstruction/Perception Constraint Ref. and Ill. Branch Feed Figure 3: The pipeline of the BrZoNet. The overall network consists of three parts: Part (a) utilizes a siamese decoupling module to decompose the low-light low-resolution input. Part (b) constructs cross-layer content-aware interactor between the illumination and reflection branches, proposing an illumination-based perceptual-guided reflection for fine enhancement. Part (c) builds a multi-stream feature aggregation super-resolution module to improve high-frequency details and enlarge the resolution. (d) illustrates the proposed training strategy and the specific constraint losses (i.e., marked with red dashed arrows) introduced in each module, including decoupling constraint, illumination-reflection constraint, and reconstruction constraint, respectively. Low-light Image Enhancement The integration of Retinex theory (Rahman, Jobson, and Woodell 2004) into deep learning represents a predominant approach in addressing the majority of current methods (Wei et al. 2018; Wang et al. 2019; Zhang et al. 2021; Ma et al. 2023; Gao et al. 2023a; Liu et al. 2023; Li et al. 2024). Further, Retinex theory is also combined with neural architecture search (Liu, Simonyan, and Yang 2018; Liu et al. 2021b,a) and unrolling modes (Wu et al. 2022) to address low-light enhancement tasks. Besides supervised methods, there are semi-supervised and unsupervised approaches (Jiang et al. 2021; Guo et al. 2020; Ma et al. 2022; Liu et al. 2022) for low-light enhancement. Drawing inspiration from the Retinex theory principles, this paper delves into the learning process of super-resolution tasks in low-light scenarios. Methodology The problem of super-resolution in low-light scenarios is a complex and challenging task within a practical application context, and as such, it has not received sufficient attention for an extended period. It aims to reconstruct a super-resolution image with normal illumination xnsr from a low-resolution image captured under low-lighting conditions xllr. In the following, we propose a specialized methodology that encompasses three essential processing steps: siamese decoupling, cross-layer interaction, and fusion reconstruction. The overall pipeline of the proposed method is illustrated in Figure 3. Firstly, leveraging the principles of Retinex theory, we establish a Retinex-induced siamese decoupling module Nrsd to derive the low-resolution illumination mapping ullr (or unhr) and reflection mapping vllr (or vnhr) in terms of xllr and its corresponding normal-light high-resolution image ynhr. This decoupling process can be formulated as xllr/ynhr Nrsd −→{ullr/unhr, vllr/vnhr}. (1) Building upon this foundation, we further devise a crosslayer content-aware interactor Ncci that facilitates feature exchange between two mapping branches. 
Finally, through the meticulous integration of an intricate super-resolution fusion cage and the Retinex-based element-wise multiplication, we obtain the enhanced super-resolution image $x_{nsr}$ with faithfully restored lighting conditions. The workflow can be succinctly formalized as follows:
$\{u_{llr}, v_{llr}\} \xrightarrow{\ \mathcal{N}_{cci}\ } \{u_{nsr}, v_{nsr}\} \xrightarrow{\ \odot\ } x_{nsr}$. (2)

Retinex-induced Siamese Decoupling

We posit that the decomposition mechanism guided by Retinex theory can be effectively extended across different resolutions. Thus, for a given data pair $(x_{llr}, y_{nhr})$, where $x_{llr}$ represents the low-resolution input captured under weak lighting conditions and $y_{nhr}$ corresponds to the high-resolution ground truth with optimal illumination, we jointly input them into a shared-parameter representation space. Specifically, we construct a parallel Retinex-induced siamese decoupling network $\mathcal{N}_{rsd}$ parameterized by $\theta_{rsd}$, which is an improved version of the UNet-type architecture. It is worth noting that, in order to enhance the representational capacity of features, we introduce refined content reconstruction units as fundamental building blocks at each layer, as depicted in Figure 4. The entire module is learned by incorporating a decoupling constraint1, yielding the low-resolution illumination map and reflection map. The entire process can be formalized as follows:
$\{u_{llr}, v_{llr}\} = \mathcal{N}_{rsd}(x_{llr}; \theta_{rsd}), \quad x_{llr} = u_{llr} \odot v_{llr}$,
$\{u_{nhr}, v_{nhr}\} = \mathcal{N}_{rsd}(y_{nhr}; \theta_{rsd}), \quad y_{nhr} = u_{nhr} \odot v_{nhr}$. (3)

Cross-layer Content-aware Interactor

Building upon the aforementioned framework, the separated illumination mapping and reflection mapping $\{u_{llr}, v_{llr}\}$ are fed into two parallel UNet-style networks, designed for fine-grained feature enhancement and interaction. This module is denoted as $\mathcal{N}_{cci}$ parameterized by $\theta_{cci}$, which is formulated as $\{u_{nsr}, v_{nsr}\} = \mathcal{N}_{cci}(u_{llr}, v_{llr}; \theta_{cci})$. To elaborate further, we first extract features at $n$ different scales from the decoder of the illumination sub-network, denoted as $\{u^{llr}_{F_i}\}_{i=1}^{n}$. Subsequently, utilizing the designed Illumination-aware Interaction Unit (IaIU; $\psi_{IaIU}$), the preceding features are employed as guidance masks to fuse with the features $\{v^{llr}_{F_i}\}_{i=1}^{n}$ from the reflection sub-network at each layer, ultimately obtaining the guided reflection map $v_{nsr}$:
$v^{llr}_{F_{i+1}} = \psi_{IaIU}(u^{llr}_{F_i}, v^{llr}_{F_i}), \quad i = 1, \cdots, n$. (4)
The $\psi_{IaIU}$ is illustrated in the middle of Figure 3. Specifically, the illumination features $u^{llr}_{F_i}$ and reflection features $v^{llr}_{F_i}$ obtained from the encoder are fed separately into the multi-dconv block with norm layers, denoted as DcN. After that, the dimensions are reshaped to obtain the illumination-aware guidance map $G_{IaIU}$, which represents the relationship between the illumination and reflection features, expressed as
$G_{IaIU} = \mathrm{Softmax}\big(\mathrm{DcN}(u^{llr}_{F_i}) \otimes \mathrm{DcN}(v^{llr}_{F_i})\big)$, (5)
where $\otimes$ denotes the matrix multiplication operation. Using this guidance map as an attention map, we apply softmax normalization and modulate the transformed reflection features. At the same time, we introduce a feed-forward block with a stack of gated convolutional layers, denoted as $\tilde{\Psi}_{Feed}$. This stack performs a self-coordinated transformation and produces a reflection map with the same dimensions as the input, i.e.,
$F^{IaIU}_{i} = \tilde{\Psi}_{Feed}\big(\mathrm{DcN}(v^{llr}_{F_i}) \otimes G_{IaIU}\big)$. (6)
It is important to note that the illumination map contains more structural detail information. Thus, it can serve as a mask to guide the enhancement of the reflection map.
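For illustration, the sketch below re-implements the spirit of Eqs. (4)–(6): normalized depthwise-convolution projections of the illumination and reflection features form a channel-wise guidance map that modulates the reflection features, followed by a gated feed-forward block and a Retinex-style product with the illumination feature. The layer sizes, normalization, scaling, and gating here are our own simplifications rather than the authors' IaIU implementation, and the upsampling step described next is omitted.

```python
import torch
import torch.nn as nn

class SimplifiedIaIU(nn.Module):
    """Illumination-aware interaction in the spirit of Eqs. (4)-(6):
    an attention map built from illumination and reflection features
    re-weights the reflection features."""
    def __init__(self, channels):
        super().__init__()
        # norm + pointwise + depthwise conv, loosely matching the "DcN" blocks
        def dcn():
            return nn.Sequential(
                nn.GroupNorm(1, channels),
                nn.Conv2d(channels, channels, 1),
                nn.Conv2d(channels, channels, 3, padding=1, groups=channels))
        self.dcn_illu, self.dcn_refl, self.dcn_value = dcn(), dcn(), dcn()
        # gated feed-forward block standing in for the paper's gated conv stack
        self.feed = nn.Sequential(nn.Conv2d(channels, 2 * channels, 1),
                                  nn.GLU(dim=1),
                                  nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, illu_feat, refl_feat):
        b, c, h, w = illu_feat.shape
        q = self.dcn_illu(illu_feat).flatten(2)      # (B, C, HW)
        k = self.dcn_refl(refl_feat).flatten(2)      # (B, C, HW)
        v = self.dcn_value(refl_feat).flatten(2)     # (B, C, HW)
        # channel-wise guidance map, cf. Eq. (5)
        guidance = torch.softmax(q @ k.transpose(1, 2) / (h * w) ** 0.5, dim=-1)
        modulated = (guidance @ v).view(b, c, h, w)  # cf. Eq. (6), before feed-forward
        refined = self.feed(modulated)
        # Retinex-style element-wise product with the illumination feature
        return illu_feat * refined

iaiu = SimplifiedIaIU(channels=32)
illu, refl = torch.randn(2, 32, 16, 16), torch.randn(2, 32, 16, 16)
print(iaiu(illu, refl).shape)   # torch.Size([2, 32, 16, 16])
```

Reshaping to a channel-by-channel (C × C) guidance map keeps the attention cost independent of spatial resolution, which is one plausible reading of the reshaping step in Eq. (5).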
Moreover, inspired by the Retinex theory, we perform elementwise multiplication between the reflection map and the illumination map in the feature level, formulated as vllr Fi+1 = Up↑(ullr Fi ⊙FIaIU i ), where Up↑denotes the ConvTranspose layer for upsampling. The purpose of this step is to obtain the guided reflection feature vllr Fi+1 that are more consistent in terms of contextual content. 1Please refer to the self-regularized decoupling loss in Eq. (8). Figure 4: Illustrations of the basic RCU module. Figure 5: Architectural details of the MSC module. Intricate Super-resolution Fusion Cage We designed the Multi-stream Super-resolution Cage (MSC) module for the illumination and reflection branches to perform feature aggregation and amplification across multiple scales. As shown in Figure 5, for each specific indexed scale layer of the illumination and reflection features, they undergo sequential refined content unit (RCU) and selective attention mechanism (SKFF) layers. The SKFF layer is inspired by the setting of method (Zamir et al. 2020), enabling feature selection and aggregation interaction between the two scales to enhance representational capacity. By applying two sets of the same operation forms, along with residual connections, we obtain the amplified illumination and reflection maps through convolution and upsampling operations at the desired magnification ratio. Loss Function The following is the loss function used in this paper: Ltotal = λSD ∗LSD + λF R ∗LF R + λRP ∗LRP , (7) where LSD and LF R and LRP are self-regularized decoupling loss, and fusion resolution loss and reconstruction perception loss respectively. λSD, λF R and λRP are the corresponding loss weights. Self-regularized Decoupling Loss. Within the decomposition space, following the principles of Retinex, we impose constraints on the input data pairs separately, ensuring that the decomposed illumination and reflection components satisfy fundamental imaging rules. The constructed decoupling The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6902 Scale Metrics EDSR D-DBPN ESRGAN RDN RCAN SRFBN PAN MSRResNet MIRNet SwinIR Restormer SRFormer HAT Ours ×2 PSNR↑ 18.38 18.70 18.08 18.79 19.76 18.42 18.78 18.15 21.05 18.38 21.21 19.55 20.21 22.79 SSIM↑ 0.679 0.682 0.655 0.701 0.712 0.662 0.693 0.677 0.720 0.640 0.727 0.704 0.719 0.745 LPIPS↓0.466 0.460 0.300 0.455 0.426 0.510 0.450 0.451 0.436 0.577 0.385 0.469 0.454 0.243 RMSE↓0.125 0.120 0.135 0.120 0.110 0.125 0.119 0.128 0.095 0.125 0.095 0.110 0.103 0.078 FSIM↑ 0.851 0.862 0.873 0.874 0.881 0.847 0.867 0.848 0.889 0.845 0.892 0.877 0.882 0.902 SRE↑ 55.62 55.80 55.52 55.86 56.37 55.67 55.85 55.52 57.06 55.64 57.09 56.23 56.58 57.90 ×4 PSNR↑ 17.69 17.96 17.18 18.21 19.07 17.67 18.10 17.59 19.78 17.53 20.29 18.72 19.75 21.41 SSIM↑ 0.679 0.674 0.647 0.701 0.712 0.665 0.700 0.684 0.704 0.663 0.720 0.705 0.715 0.726 LPIPS↓0.623 0.575 0.471 0.584 0.550 0.640 0.559 0.581 0.599 0.688 0.492 0.613 0.561 0.383 RMSE↓0.135 0.132 0.149 0.128 0.119 0.136 0.129 0.137 0.109 0.139 0.106 0.121 0.110 0.090 FSIM↑ 0.832 0.848 0.858 0.866 0.874 0.836 0.859 0.841 0.878 0.840 0.885 0.869 0.873 0.886 SRE↑ 58.51 58.64 58.30 58.77 59.21 58.51 58.71 58.45 59.63 58.43 59.82 59.02 59.57 60.39 Table 1: Quantitative comparison of ×2 and ×4 tasks on the RELLISUR dataset. Best results are bolded. Six reference indicators, including PSNR, SSIM, LPIPS, RMSE, FSIMC and SRE are quantitatively analyzed. Figure 6: Qualitative comparisons of ×2 tasks on RELLISUR dataset. 
The top row shows the full result images, the middle row shows local zoom-ins of the red boxes, and the bottom row shows the statistical distributions of the RGB channels. constraints can be represented as follows:
$\mathcal{L}^{u,v}_{SD} = \sum_{p \in \{llr,\, nhr\}} \|u_p \odot v_p - x_p\|_1 + \mathrm{SATV}(u_p, v_p)$, (8)
where $\mathrm{SATV}(\cdot, \cdot)$ denotes the structure-aware total variation loss (Wei et al. 2018).

Fusion Resolution Loss. After merging the two branches and performing super-resolution, we apply constraints separately to the resulting illumination and reflection components to ensure their consistency with the corresponding ground-truth illumination and reflection content. Additionally, we introduce an illumination smoothness constraint to guarantee structural smoothness. This loss term can be represented as follows:
$\mathcal{L}_{FR} = \sum_{q \in \{u, v\}} \|q_{nsr} - q_{nhr}\|_1 + \mathrm{SATV}(u_{nsr}, v_{nsr})$. (9)

Reconstruction and Perception Loss. In the concluding stage of the network reconstruction, we apply the following loss terms to enforce content consistency and perceptual content consistency, expressed as
$\mathcal{L}_{RP} = \|y_{nsr} - y_{nhr}\|_1 + \|\psi(y_{nsr}) - \psi(y_{nhr})\|_{Perc}$, (10)
where $\psi$ denotes the pretrained VGG-19 network with specific network layers. By imposing content consistency and perceptual consistency losses, the network is encouraged to generate visually consistent and more realistic enhancement results.

Experiments

Experimental Setting. We use the widely recognized RELLISUR dataset (https://vap.aau.dk/rellisur/) to train and evaluate the proposed method. The dataset comprises paired images, including low-resolution dark-light/high-resolution normal-light images at ×1, ×2, and ×4 resolutions. The training set consists of 3610 pairs, while the test set includes 425 pairs of images with varying darkness levels at each resolution. We experimented with data at ×2 and ×4 resolution. To augment our dataset, we utilize three data augmentation techniques, namely random cropping, random rotation, and random flipping (Liu et al. 2020). The experiments were conducted using the PyTorch 2.0.1 framework on a single NVIDIA GeForce GTX 2080Ti GPU, and the optimizer used was AdamW with 15W (i.e., 150,000) iterations. Dynamic patch size and batch size were employed during training. The initial learning rate was set to $2 \times 10^{-3}$, and we opted for CosineAnnealingRestartCyclicLR as the learning rate tuning method.

Experimental Evaluation. To fully verify the effectiveness of our method, we compare against 13 state-of-the-art normal-light super-resolution methods, including MSRFBN (Li et al. 2019), D-DBPN (Haris, Shakhnarovich, and Ukita 2018), RDN (Zhang et al. 2018c), EDSR (Lim et al. 2017), ESRGAN (Wang et al. 2018), RCAN (Zhang et al. 2018b), PAN (Zhao et al. 2020), ESRGAN (Wang et al. 2018), SwinIR (Liang et al.
2021), MIRNet (Zamir et al. 2020), Restormer (Zamir et al. 2022), SRFormer (Zhou et al. 2023b) and HAT (Chen et al. 2023). All compared methods are retrained on the RELLISUR dataset according to the official parameters. And we choose six evaluation metrics: Peak Signal to Noise Ratio (PSNR) (Chan and Whiteman 1983) and Structural Similarity Index Measure (SSIM) (Wang et al. 2004), Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al. 2018a), Root Mean Square Error (RMSE) (Ferrari et al. 2018), Feature-based Similarity Index (FSIMC) (Zhang et al. 2011) and Signal to Reconstruction Error Ratio (SRE) (Lanaras et al. 2018). Quantitative Evaluation. As shown in Table 1, our BrZoNet outperforms other state-of-the-art techniques, ranking first in all six evaluation metrics. This indicates that the results recovered by the proposed method consistently exhibit superior performance in terms of illumination, color, and texture details compared to alternative approaches. Particularly for the ×2 upscaling task, PSNR and RMSE show improvements of 7.4% and 17.8%, respectively. For the ×4 upscaling task, the improvements are 5.5% for PSNR and 15.1% for RMSE. Qualitative Evaluation. The qualitative comparison visualization results are presented in Figure 6 and Figure 7. In comparison to other state-of-the-art methods, the proposed method excels not only enhances illumination but also exhibits superior restoration of texture details. It effectively suppresses the generation of artifacts and noise issues. Furthermore, the statistical distribution of RGB values indicates that, our method produces results with color distributions closer to those of the reference image, as compared to the contrastive methods. To demonstrate that our proposed method can adaptively enhance and super-resolve low-light low-resolution images of different darkness levels, we show the comparison results in Figure 8. Compared to these above methods, the proposed The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6904 Scale Method Metric IllNet IaIU MSC PSNR↑ SSIM↑ LPIPS↓ ×2 ✗ ✗ ✗ 21.91 0.725 0.270 ✓ ✗ ✗ 22.16↑0.25 0.734↑0.009 0.261↓0.009 ✓ ✓ ✗ 22.31↑0.40 0.733↑0.008 0.255↓0.015 ✓ ✓ ✓ 22.79↑0.88 0.745↑0.020 0.243↓0.027 ×4 ✗ ✗ ✗ 20.78 0.720 0.387 ✓ ✗ ✗ 20.90↑0.12 0.723↑0.003 0.392↓0.005 ✓ ✓ ✗ 21.16↑0.38 0.724↑0.004 0.389↓0.002 ✓ ✓ ✓ 21.42↑0.64 0.726↑0.006 0.383↓0.004 Table 2: Ablation study regarding the proposed network components (i.e., IllNet, IaIU and MSC) on ×2 and ×4 task. Each numerical subscript in the bottom right corner is marked, indicating the difference from the baseline method. w/ percep GT w/o percep Figure 9: Illustrating ablation analysis of the perceptual loss. method accurately recover the brightness and detail information of images with different levels of darkness, effectively avoiding color deviation. This confirms the remarkable adaptability of our method to scenes with varying levels of darkness and challenging lighting conditions. Ablation Study Effects of Decomposition Space. To validate the effectiveness of addressing low-light image super-resolution from the perspective of decomposition space, we conducted ablation experiments by training models with the retained illuminance sub-network (i.e., IllNet) and models without the illuminance sub-network. The results in the second and sixth rows of Table 2 demonstrate a significant performance improvement after incorporating the IllNet, thereby confirming the efficacy from the perspective of decomposition space. Effects of IaIU. 
To demonstrate the effectiveness of the IaIU, we excluded the MSC while maintaining the IllNet. The results in the third and seventh rows of Table 2 show that the cross-layer content-aware interactor with IaIU improves model performance compared to results without its inclusion. Effects of MSC. To confirm the effectiveness of the MSC, we Method PSNR↑ SSIM↑ LPIPS↓ RSD w/o ynhr (×2) 22.46 0.738 0.252 Ours (×2) 22.79↑0.33 0.745↑0.007 0.243↓0.009 RSD w/o ynhr (×4) 21.25 0.724 0.378 Ours (×4) 21.42↑0.17 0.726↑0.012 0.383↓0.005 Table 3: Ablation study regarding the siamese decoupling for ynhr on ×2 and ×4 task. The numbers annotated at the bottom right corner indicate the differences. LSD (w/o SATV) LF R (w/o SATV) LRP PSNR↑ 0.5 0.5 0.5 22.33 1 0.1 0.1 22.57 0.8 0.5 0.5 22.74 1 0.5 0.5 22.79 w/o skip connection 22.17 Table 4: Analysis of loss weights and skip connections (Note that the SATV loss is by default set with a weight of 1). compare the quantitative results of the complete network with those from the third and seventh rows in Table 2. The inclusion of the super-resolution fusion cage with MSC improves model performance, leading to more detailed reconstruction results. Effects of Siamese Decoupling. To showcase the enhanced performance of the decomposition subnetwork with the siamese decoupling training strategy in our method, we excluded the decomposition of the normal light high-resolution images (i.e., RSD w/o ynhr) and the corresponding loss function during training. The resulting quantitative improvements are evident in Table 3, affirming the effectiveness of the siamese decoupling strategy. Effects of Loss Weights and Perceptual Loss. To validate the impact of perceptual loss, we present the results of a qualitative comparison in Figure 9. It is evident that the perceptual loss effectively enhances detail recovery and mitigates color deviation. Table 4 shows the impact of different loss weights and the skip connections of Unet on the final performance. It can be observed that the decoupling loss accounts for the largest proportion, and removing the skip connections results in a 0.62dB decrease in PSNR performance. Conclusion This study introduces the BrZoNet framework to address limitations in existing super-resolution methods for nighttime scenes. It enhances adaptability to low-pair dynamic range and noise-laden dark-light images by combining siamese decomposition and a super-resolution network, featuring the IaIU for effective feature interaction and noise reduction. BrZoNet achieves high-resolution images through comprehensive information integration in a super-resolution cage. Extensive experiments demonstrate its superiority over state-of-theart techniques, with significant improvements in brightness and detail recovery. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6905 Acknowledgements This paper was supported by the National Key R&D Program of China (No. 2018AAA0100300) and National Natural Science Foundation of China (No. 61976041). Ziyu Yue and Jiaxin Gao equally contributed to this work. Professor Zhixun Su is the corresponding author of this paper (The corresponding author is marked with ∗), thank him for his guidance. References Chan, L. C.; and Whiteman, P. 1983. Hardware-constrained hybrid coding of video imagery. IEEE Transactions on Aerospace and Electronic Systems, (1): 71–84. Chen, X.; Wang, X.; Zhou, J.; Qiao, Y.; and Dong, C. 2023. Activating more pixels in image super-resolution transformer. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 22367–22377. Deng, X.; Wang, P.; Lian, X.; and Newsam, S. 2022. NightLab: A dual-level architecture with hardness detection for segmentation at night. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16938–16948. Ferrari, V.; Hebert, M.; Sminchisescu, C.; and Weiss, Y. 2018. Computer Vision–ECCV 2018: 15th European Conference, Munich, Germany, September 8–14, 2018, Proceedings, Part V, volume 11209. Springer. Gao, J.; Liu, X.; Liu, R.; and Fan, X. 2023a. Learning adaptive hyper-guidance via proxy-based bilevel optimization for image enhancement. The Visual Computer, 39(4): 1471– 1484. Gao, J.; Yue, Z.; Liu, Y.; Xie, S.; Fan, X.; and Liu, R. 2023b. Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments. arXiv preprint arXiv:2309.05267. Guo, C.; Li, C.; Guo, J.; Loy, C. C.; Hou, J.; Kwong, S.; and Cong, R. 2020. Zero-reference deep curve estimation for lowlight image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1780–1789. Haris, M.; Shakhnarovich, G.; and Ukita, N. 2018. Deep back-projection networks for super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1664–1673. Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; and Wang, Z. 2021. Enlightengan: Deep light enhancement without paired supervision. IEEE transactions on image processing, 30: 2340–2349. Jin, Y.; Lin, B.; Yan, W.; Yuan, Y.; Ye, W.; and Tan, R. T. 2023. Enhancing visibility in nighttime haze images using guided apsf and gradient adaptive convolution. In Proceedings of the 31st ACM International Conference on Multimedia, 2446– 2457. Jin, Y.; Yang, W.; and Tan, R. T. 2022. Unsupervised night image enhancement: When layer decomposition meets lighteffects suppression. In European Conference on Computer Vision, 404–421. Lanaras, C.; Bioucas-Dias, J.; Galliani, S.; Baltsavias, E.; and Schindler, K. 2018. Super-resolution of Sentinel-2 images: Learning a globally applicable deep neural network. ISPRS Journal of Photogrammetry and Remote Sensing, 146: 305– 319. Ledig, C.; Theis, L.; Husz´ar, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. 2017. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4681–4690. Li, Y.; Xu, R.; Niu, Y.; Guo, W.; and Zhao, T. 2024. Perceptual Decoupling with Heterogeneous Auxiliary Tasks for Joint Low-Light Image Enhancement and Deblurring. IEEE Transactions on Multimedia. Li, Z.; Yang, J.; Liu, Z.; Yang, X.; Jeon, G.; and Wu, W. 2019. Feedback network for image super-resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 3867–3876. Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; and Timofte, R. 2021. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1833–1844. Lim, B.; Son, S.; Kim, H.; Nah, S.; and Mu Lee, K. 2017. Enhanced deep residual networks for single image superresolution. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 136–144. Liu, H.; Simonyan, K.; and Yang, Y. 2018. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055. 
Liu, M.; Yan, P.; Lian, C.; and Cao, X. 2020. Machine Learning in Medical Imaging: 11th International Workshop, MLMI 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Proceedings, volume 12436. Springer Nature. Liu, R.; Gao, J.; Liu, X.; and Fan, X. 2023. Learning with Constraint Learning: New Perspective, Solution Strategy and Various Applications. arXiv preprint arXiv:2307.15257. Liu, R.; Gao, J.; Zhang, J.; Meng, D.; and Lin, Z. 2021a. Investigating bi-level optimization for learning and vision from a unified perspective: A survey and beyond. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(12): 10045–10067. Liu, R.; Ma, L.; Ma, T.; Fan, X.; and Luo, Z. 2022. Learning with nested scene modeling and cooperative architecture search for low-light vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(5): 5953–5969. Liu, R.; Ma, L.; Zhang, J.; Fan, X.; and Luo, Z. 2021b. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10561–10570. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021c. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10012–10022. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6906 Ma, L.; Jin, D.; An, N.; Liu, J.; Fan, X.; Luo, Z.; and Liu, R. 2023. Bilevel fast scene adaptation for low-light image enhancement. International Journal of Computer Vision, 1–19. Ma, L.; Ma, T.; Liu, R.; Fan, X.; and Luo, Z. 2022. Toward fast, flexible, and robust low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5637–5646. Rahman, Z.-u.; Jobson, D. J.; and Woodell, G. A. 2004. Retinex processing for automatic image enhancement. Journal of Electronic imaging, 13(1): 100–110. Sharma, A.; and Tan, R. T. 2021. Nighttime visibility enhancement by increasing the dynamic range and suppression of light effects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11977–11986. Sun, L.; Pan, J.; and Tang, J. 2022. Shufflemixer: An efficient convnet for image super-resolution. Advances in Neural Information Processing Systems, 35: 17314–17326. Tan, X.; Xu, K.; Cao, Y.; Zhang, Y.; Ma, L.; and Lau, R. W. 2021. Night-time scene parsing with a large real dataset. IEEE Transactions on Image Processing, 30: 9085–9098. Wang, R.; Zhang, Q.; Fu, C.-W.; Shen, X.; Zheng, W.-S.; and Jia, J. 2019. Underexposed photo enhancement using deep illumination estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 6849– 6857. Wang, T.; Zhang, K.; Shen, T.; Luo, W.; Stenger, B.; and Lu, T. 2023. Ultra-high-definition low-light image enhancement: A benchmark and transformer-based method. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 2654–2662. Wang, X.; Xie, L.; Dong, C.; and Shan, Y. 2021. Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1905–1914. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; and Change Loy, C. 2018. Esrgan: Enhanced super-resolution generative adversarial networks. In Proceedings of the European conference on computer vision (ECCV) workshops, 0–0. Wang, Z.; Bovik, A. 
C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4): 600–612. Wei, C.; Wang, W.; Yang, W.; and Liu, J. 2018. Deep retinex decomposition for low-light enhancement. arXiv preprint arXiv:1808.04560. Wu, A.; and Deng, C. 2022. Single-domain generalized object detection in urban scene via cyclic-disentangled selfdistillation. In Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition, 847–856. Wu, W.; Weng, J.; Zhang, P.; Wang, X.; Yang, W.; and Jiang, J. 2022. Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5901–5910. Xie, Z.; Wang, S.; Xu, K.; Zhang, Z.; Tan, X.; Xie, Y.; and Ma, L. 2023. Boosting Night-time Scene Parsing with Learnable Frequency. IEEE Transactions on Image Processing. Zamir, S. W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F. S.; and Yang, M.-H. 2022. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5728–5739. Zamir, S. W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F. S.; Yang, M.-H.; and Shao, L. 2020. Learning enriched features for real image restoration and enhancement. In European Conference on Computer Vision, 492–511. Springer. Zhang, L.; Zhang, L.; Mou, X.; and Zhang, D. 2011. FSIM: A feature similarity index for image quality assessment. IEEE transactions on Image Processing, 20(8): 2378–2386. Zhang, R.; Isola, P.; Efros, A. A.; Shechtman, E.; and Wang, O. 2018a. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In CVPR. Zhang, Y.; Guo, X.; Ma, J.; Liu, W.; and Zhang, J. 2021. Beyond brightening low-light images. International Journal of Computer Vision, 129: 1013–1037. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; and Fu, Y. 2018b. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European conference on computer vision (ECCV), 286–301. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; and Fu, Y. 2018c. Residual dense network for image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2472–2481. Zhao, H.; Kong, X.; He, J.; Qiao, Y.; and Dong, C. 2020. Efficient image super-resolution using pixel attention. In European Conference on Computer Vision, 56–72. Springer. Zhou, M.; Yan, K.; Pan, J.; Ren, W.; Xie, Q.; and Cao, X. 2023a. Memory-augmented deep unfolding network for guided image super-resolution. International Journal of Computer Vision, 131(1): 215–242. Zhou, Y.; Li, Z.; Guo, C.-L.; Bai, S.; Cheng, M.-M.; and Hou, Q. 2023b. SRFormer: Permuted Self-Attention for Single Image Super-Resolution. arXiv preprint arXiv:2303.09735. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6907 | 2024 | 767 |
18,592 | Weakly-Supervised Temporal Action Localization by Inferring Salient Snippet-Feature Wulian Yun, Mengshi Qi, Chuanming Wang, Huadong Ma* Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, China {yunwl,qms,wcm,mhd}@bupt.edu.cn Abstract Weakly-supervised temporal action localization aims to locate action regions and identify action categories in untrimmed videos simultaneously by taking only video-level labels as the supervision. Pseudo label generation is a promising strategy to solve the challenging problem, but the current methods ignore the natural temporal structure of the video that can provide rich information to assist such a generation process. In this paper, we propose a novel weaklysupervised temporal action localization method by inferring salient snippet-feature. First, we design a saliency inference module that exploits the variation relationship between temporal neighbor snippets to discover salient snippet-features, which can reflect the significant dynamic change in the video. Secondly, we introduce a boundary refinement module that enhances salient snippet-features through the information interaction unit. Then, a discrimination enhancement module is introduced to enhance the discriminative nature of snippetfeatures. Finally, we adopt the refined snippet-features to produce high-fidelity pseudo labels, which could be used to supervise the training of the action localization network. Extensive experiments on two publicly available datasets, i.e., THUMOS14 and ActivityNet v1.3, demonstrate our proposed method achieves significant improvements compared to the state-of-the-art methods. Our source code is available at https://github.com/wuli55555/ISSF. Introduction Temporal action localization (TAL) (Shou, Wang, and Chang 2016; Zhao et al. 2017; Chao et al. 2018; Huang, Wang, and Li 2022; He et al. 2022) aims to find action instances from untrimmed videos, i.e., predicting the start positions, end positions, and categories of certain actions. It is an important yet challenging task in video understanding and has been widely used in surveillance and video summarization. To achieve accurate localization, most existing methods (Shou, Wang, and Chang 2016; Zhao et al. 2017; Chao et al. 2018; Lin et al. 2018; Long et al. 2019) rely on training a model in a fully supervised manner with the help of human-labeled precise temporal annotations. However, fine-detailed labeling of videos is labor-intensive and expensive. In contrast, weakly-supervised methods recently *Corresponding authors: Huadong Ma, Mengshi Qi. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Difference Values Action Scores GT Action Action Background Background Background Action Figure 1: Illustration of difference values among snippets, action scores and Ground-Truth (GT). The action and background snippets are marked as red and black boxes, respectively. have gained increasing attention from both academia and industry, since they only utilize video-level labels for temporal action localization, achieving competitive results while reducing the cost of manual annotations. Weakly-supervised TAL methods (Xu et al. 2019; Shi et al. 2020; Qu et al. 2021; Lee et al. 2021; Liu et al. 2021; Narayan et al. 2021) mainly utilize a “localization by classification” framework, where a series of Temporal Class Activation Maps (TCAMs) (Nguyen et al. 
2018; Paul, Roy, and Roy-Chowdhury 2018) are obtained by snippet-wise classification, and then TCAMs are used to generate temporal proposals for action localization. However, the classifiers primarily tend to focus on easily distinguishable snippets while ignoring other subtle yet equally important information, so there is a discrepancy between classification and localization. To balance the performance of classification and localization, pseudo label based methods (Huang, Wang, and Li 2022; He et al. 2022; Pardo et al. 2021; Zhai et al. 2020; Luo et al. 2020; Li et al. 2022) have been proposed, which supervises the training of the model mainly by generating snippet-level pseudo label information. Nevertheless, accurately generating pseudo label remains challenging, since existing methods ignore the important role played in the temporal structure of videos. We observed that neighbor snippets exhibit obvious distinctively difference relationships, which can discover salient features and identify differentiated boundaries. As shown in Figure 1, neighbour snippet-features with substantial variaThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6908 tions (higher difference value) may correspond to the junctions between action and background, alternations between action, or abrupt changes between background. However, how to find these features and refine them into more discriminative features is the key to discover action boundaries and then improve the localization performance. Inspired by this observation, we propose a novel weaklysupervised TAL method, which takes a new perspective that boosts the generation of high-fidelity pseudo-labels by leveraging the temporal variation. First, we design a saliency inference module to discover significant snippet-feature by leveraging the variation and calculating the difference values of neighbor snippet pairs. However, this process only considers local relationships and ignores the global information in the video. Thus, we propose a boundary refinement module to enhance salient features through information interaction while making the model focus on the entire temporal structure. Subsequently, considering diverse action information can provide additional clues, we propose a discrimination enhancement module to further refine the feature by constructing a memory to introduce the same category of action knowledge. Finally, the output features are fed into the classification head to generate the final refined pseudo labels for supervision. The contributions can be summarized as follows: (1) We propose a new pseudo-label generation strategy for weakly-supervised TAL by inferring salient snippet-feature, which can exploit the dynamic variation. (2) We design a boundary refinement module and a discrimination enhancement module to enhance the discriminative nature of action and background, respectively. (3) We conduct extensive experiments and the results show our model achieves 46.8 and 25.8 average mAP on THUMOS14 and ActivityNet v1.3, respectively. Related Work Fully-supervised temporal action localization. Fullysupervised TAL has been an active research area in video understanding (Qi et al. 2021, 2020, 2019, 2018; Liu et al. 2016, 2018) for many years and existing methods are divided into two categories, i.e., one-stage methods and twostage methods. One-stage methods (Long et al. 2019; Lin, Zhao, and Shou 2017; Yang et al. 2020; Lin et al. 2021) predict action boundaries as well as labels simultaneously. 
On the contrary, two-stage methods (Shou, Wang, and Chang 2016; Zhao et al. 2017; Chao et al. 2018; Zeng et al. 2019) first find candidate action proposals and then predict their labels. However, these fully-supervised methods are trained with instance-level human annotation, leading to an expensive and time-consuming process. Weakly-supervised temporal action localization. Weaklysupervised TAL methods (Xu et al. 2019; Min and Corso 2020; Shi et al. 2020; Lee et al. 2021; Liu et al. 2021; Narayan et al. 2021; Zhai et al. 2020; Huang, Wang, and Li 2022; Chen et al. 2022) mainly learn from video-level labels, which avoid labor-intensive annotations compared to the fully-supervised methods. UntrimmedNet (Wang et al. 2017) and STPN (Nguyen et al. 2018) generate class activation sequences by Multiple Instance Learning (MIL) framework and then locate action instances by thresholding processing. RPN (Huang et al. 2020) and 3C-Net (Narayan et al. 2019) use metric learning algorithms to learn more discriminative features. Lee et al. (Lee, Uh, and Byun 2020) design a background suppression network to suppress background snippets activation. However, there is still a discrepancy between classification and localization. Recently, numerous methods (Pardo et al. 2021; Luo et al. 2020; Zhai et al. 2020; Yang et al. 2021; Huang, Wang, and Li 2022; He et al. 2022) attempt to generate pseudo labels to supervise the model and thus alleviate the discrepancy. RefineLoc (Pardo et al. 2021) alleviates the discrepancy between classification and localization by extending the previous detection results to generate pseudo labels. Luo et al. (Luo et al. 2020) exploit the Expectation–Maximization framework (Moon 1996) to generate pseudo labels by alternately updating the key-instance assignment branch and the foreground classification branch. TSCN (Zhai et al. 2020) generates frame-level pseudo labels by later fusing attention sequences in consideration of two-stream consensus. Li et al. (Li et al. 2022) exploit contrastive representation learning to enhance the feature discrimination ability. ASMLoc (He et al. 2022) generates action proposals as pseudo labels by using the standard MIL-based methods. In contrast, our method exploits the variation between neighbor snippetfeatures to find salient snippet-features, and further designs a boundary refinement module and a discrimination enhancement module to generate high-fidelity pseudo labels. Methodology In this section, we will begin by presenting the problem definition of weakly-supervised TAL and provide an overview of our proposed method. Next, we will describe the different modules of our method in detail, which are designed to generate high-fidelity pseudo labels by utilizing the variation between snippet-features. Finally, we introduce the training details of optimizing the temporal localization model. Problem definition. Weakly-supervised TAL aims to predict a group of action instances (c, q, ts, te) for each test video with the assistance of a set of untrimmed training videos {Vi}N i=1 and their corresponding ground-truth labels {yi}N i=1. Specifically, yi ∈RC is a binary vector indicating the presence/absence of each of C actions. For one action instance, c denotes the action category, q refers to the prediction confidence score, ts and te mean the start time and end time of the action, respectively. Overview. 
The overview of our proposed method is shown in Figure 2, which mainly contains four parts: (a) base branch, (b) saliency inference module, (c) boundary refinement module, and (d) discrimination enhancement module. First, in the base branch, we exploit a fixed pre-trained backbone network (e.g., I3D) to extract T snippet-features from both the appearance (RGB) and motion (optical flow) of the input video. Then, a learnable classification head is adopted to classify each snippet and obtain the predicted TCAMs. Second, we utilize the saliency inference module to generate salient snippet-features by calculating the difference between adjacent pairs of snippet-features. Subsequently, the boundary refinement module and the discrimination enhancement module both utilize the information interaction unit to refine coarse boundaries by enhancing salient snippet-features and the separability of action snippet-features from those of the background. Finally, the output features are fed into the classification head to generate high-fidelity pseudo labels as a supervised signal for the base branch. Figure 2: Overview of our model. Firstly, the base branch (a) extracts features from RGB and optical flow in a video and uses the classification head to predict TCAMs. Then, the saliency inference module (b) exploits the variation relationship between snippet-features to discover salient snippet-features. Next, the boundary refinement module (c) utilizes the information interaction unit to enhance salient snippet-features. Subsequently, the discrimination enhancement module (d) leverages action information stored in memory to enhance the discrimination of action and background. Finally, (c) and (d) generate high-fidelity pseudo labels to supervise the base branch. Base Branch Given an untrimmed video V, we follow (Nguyen et al. 2018; Huang, Wang, and Li 2022) to split it into multiple non-overlapping snippets $\{v_i\}_{i=1}^{T}$, and then we use the I3D (Carreira and Zisserman 2017) network pre-trained on the Kinetics-400 (Kay et al. 2017) dataset to extract features from the RGB and optical flow streams for each snippet. An embedding layer takes the concatenation of these two types of features to fuse them together, and the fused features of all snippets are treated as the snippet-features of the video $F = \{f_1, f_2, \cdots, f_T\} \in \mathbb{R}^{T \times D}$, where $T$ is the number of snippets and $D$ denotes the dimension of one snippet-feature. Next, we use the classification head to obtain Temporal Class Activation Maps (TCAMs) $\mathcal{T} \in \mathbb{R}^{T \times (C+1)}$, where $C+1$ denotes the number of action categories plus the background class. Specifically, following previous work (Huang, Wang, and Li 2022), the classification head consists of a Class-agnostic Attention (CA) head and a Multiple Instance Learning (MIL) head.
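To make the base branch concrete, the following is a minimal sketch (not the authors' released code) of how pre-extracted I3D RGB and optical-flow snippet features could be fused by an embedding layer and mapped to TCAMs. The class name, the feature dimensions, and the single-linear-layer stand-in for the CA/MIL classification head are our own simplifications.

```python
import torch
import torch.nn as nn

class BaseBranchSketch(nn.Module):
    """Toy base branch: fuse pre-extracted RGB/flow I3D features and predict TCAMs."""
    def __init__(self, feat_dim=1024, embed_dim=2048, num_classes=20):
        super().__init__()
        # Embedding layer fuses the concatenated appearance + motion features.
        self.embed = nn.Sequential(nn.Linear(2 * feat_dim, embed_dim), nn.ReLU())
        # Stand-in for the CA + MIL classification head: C action classes + 1 background class.
        self.classifier = nn.Linear(embed_dim, num_classes + 1)

    def forward(self, rgb, flow):
        # rgb, flow: (T, feat_dim) snippet-level features from a frozen I3D backbone.
        f = self.embed(torch.cat([rgb, flow], dim=-1))   # F: (T, D)
        tcam = self.classifier(f)                        # TCAM: (T, C + 1)
        return f, tcam

T = 100
model = BaseBranchSketch()
f, tcam = model(torch.randn(T, 1024), torch.randn(T, 1024))
print(f.shape, tcam.shape)   # torch.Size([100, 2048]) torch.Size([100, 21])
```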
Saliency Inference Module The significant variation of temporal neighbor snippets can indicate whether each snippet belongs to a salient snippet-feature. Therefore, we propose a saliency inference module that utilizes such variation to explore the difference between neighbor snippet pairs and then uses it to identify salient boundaries in the video. Given a video and its snippet-level representation $F \in \mathbb{R}^{T \times D}$, we first calculate the difference value $\tau_{(t-1,t)}$ of each pair of temporally adjacent snippet-features $\{f_{t-1}, f_t\}$ as: $\tau_{(t-1,t)} = \sum_{d=1}^{D} |\mathrm{diff}(f_t, f_{t-1}, d)|$, (1) where $\mathrm{diff}$ denotes the operation of dimension-wise subtraction, and $d \in \{1, \cdots, D\}$ is the element index of the feature. Subsequently, we obtain the difference set $\tau$ of the input video by calculating the difference for all pairs: $\tau = \{\tau_{(1,2)}, \tau_{(2,3)}, \cdots, \tau_{(T-1,T)}\}$. (2) To obtain the salient snippet-features of the video, we first perform a descending sort on the difference set $\tau$, and then assign the initial labels $B = \{b_i\}_{i=1}^{T}$ to each snippet based on the sorted $\tau$. The snippets with the top $K$ sorted scores are selected as salient snippet-features, while the remaining ones are treated as non-salient snippet-features; the process of assigning labels can be formulated as: $b_t = \begin{cases} 1, & \text{if } \tau_{(t-1,t)} \in \mathrm{Top}(\mathrm{sorted}(\tau), K) \\ 0, & \text{otherwise,} \end{cases}$ (3) where $b_t = 1$ denotes that the corresponding snippet $f_t$ belongs to the salient snippet-features, and otherwise to the non-salient snippet-features. In this way, salient snippet-features are discovered in a simple manner. However, since these snippet-features cannot yet be determined to be actions or backgrounds, directly using them to supervise the learning of the base branch may lead to poor performance. Next, we present how to refine these salient snippet-features. Boundary Refinement Module In the saliency inference module, we calculate the difference values between each pair of adjacent snippets; this operation can be seen as exploiting local relationships, but the relationships among non-local snippets are still underexplored. Therefore, we propose a boundary refinement module that enhances salient snippet-features by exploring the contextual relationships among the salient snippet-features, the non-salient snippet-features, and the snippet-features of the same video via an information interaction unit along the channel and temporal dimensions, respectively. First, we collect the salient snippet-feature ($b_i = 1$) and non-salient snippet-feature ($b_i = 0$) candidates to form $F^a \in \mathbb{R}^{T^a \times D}$ and $F^b \in \mathbb{R}^{T^b \times D}$, respectively, where $F^a \cup F^b = F$, $T^a + T^b = T$, $T^a$ denotes the number of salient snippet-features, and $T^b$ denotes the number of non-salient snippet-features. Then, we leverage a channel-level information interaction unit in the squeeze-and-excitation pattern to generate the feature $\hat{F}^a \in \mathbb{R}^{T^a \times D}$: $\hat{F}^a = \frac{\exp(\theta(F^a))}{\sum_{d=1}^{D} \exp(\theta(F^a_{\cdot,d}))} \otimes F^a + F^a$, (4) where $\otimes$ denotes element-wise multiplication and $\theta$ is a simple multi-layer perceptron consisting of FC-ReLU-FC. We set the weight of the first FC to $W_1 \in \mathbb{R}^{D \times (D/r)}$ and that of the second FC to $W_2 \in \mathbb{R}^{(D/r) \times D}$, where $r$ is a scaling factor. A residual connection is adopted to maintain the stability of training. Subsequently, we conduct a temporal-level information interaction unit to capture the global contextual relationships between $\hat{F}^a$ and $F$ as in the following equation: $\tilde{F}^a = \mathrm{softmax}(F \odot (\hat{F}^a)^{T}) \odot \hat{F}^a$, (5) where $\odot$ denotes matrix multiplication.
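As an illustration of Eqs. (4)-(5), the sketch below shows one plausible reading of the channel-level (squeeze-and-excitation style) and temporal-level information interaction units; the module names, feature dimension, and reduction factor are assumptions on our part rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as nnf

class ChannelInteraction(nn.Module):
    """Channel-level unit, roughly Eq. (4): a softmax-gated squeeze-and-excitation
    block (FC-ReLU-FC) with a residual connection."""
    def __init__(self, dim=2048, r=4):
        super().__init__()
        self.theta = nn.Sequential(nn.Linear(dim, dim // r), nn.ReLU(), nn.Linear(dim // r, dim))

    def forward(self, fa):                          # fa: (T_a, D) salient candidates
        gate = nnf.softmax(self.theta(fa), dim=-1)  # normalize over the channel dimension
        return gate * fa + fa                       # (T_a, D)

def temporal_interaction(f, fa_hat):
    """Temporal-level unit, roughly Eq. (5): every snippet in F attends to the
    channel-enhanced candidates and aggregates them."""
    attn = nnf.softmax(f @ fa_hat.t(), dim=-1)      # (T, T_a) attention weights
    return attn @ fa_hat                            # (T, D) enhanced features

f = torch.randn(100, 2048)   # all snippet-features F of one video
fa = f[:50]                  # salient candidates (b_i = 1), here simply the first half
fa_tilde = temporal_interaction(f, ChannelInteraction()(fa))
print(fa_tilde.shape)        # torch.Size([100, 2048])
```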
By integrating such information, we obtain a set of discriminative snippetfeatures ˜Fa ∈RT ×D. However, some information contained in Fb maybe neglected, which contains some action-related or backgroundrelated information. Thus, utilizing the information in Fb can help boost the diversity of snippet-features, and we also utilize the information interaction unit to generate nonsalient enhanced features ˜Fb between Fb and F through Eq.(4) and Eq.(5). Note that the parameters in Eq.(4) are not shared between Fa and Fb. Finally, we apply a weighted sum operation to balance the contribution between ˜Fa and ˜Fb to obtain the enhanced features ˜F ∈RT ×D as follows: ˜F = sum( ˜Fa, ˜Fb, σ) = σ ˜Fa + (1 −σ) ˜Fb, (6) where σ denotes a trade-off factor. Discrimination Enhancement Module Action information from videos of the same category can provide additional clues to help improve the discriminative nature of the snippet-features and the quality of the generated pseudo-labels. Therefore, we design a discrimination enhancement module that utilizes the correlation among videos to make action and background snippet-features more separable. First, we introduce a memory bank M ∈RC×N×D as the action knowledge base to store the diverse action information from the entire dataset during training, where C denotes the number of classes, N indicates the number of stored snippets of each class, and D is the dimension number. Initially, we use the classification head to predict the scores of the salient snippet-features and select the snippets with the highest N classification scores to initialize the memory M along with the scores. At t-th training iteration, we select N snippet-features F(t) [c] with the high scores for each class to update the memory of last iteration M(t−1) [c] . The process can be formulated as: M(t) [c] ←(1 −η) · M(t−1) [c] + η · F(t) [c] , (7) where η denotes the momentum coefficient. To boost the robustness, we exploit the momentum update strategy (He et al. 2020) to update memory M, so η is adjusted by: η = η0 · log (exp (e/E) + 1) , (8) where η0 denotes the initial momentum coefficient, e is the current epoch, E denotes the total epoch, and c is the class index of the current snippet. Meanwhile, we use the temporal-level information interaction unit to implement the interaction between the mixed features ˜F in the boundary refinement module and memory M(t) [c] to bring the class information of the entire dataset into ˜F, which can be formulated as: ˆF = softmax( ˜F ⊙(M(t) [c] )T ) ⊙M(t) [c] . (9) Finally, we get the output features ˜F and ˆF from the boundary refinement module and discrimination enhancement module. Then, we feed them to the classification head to output two TCAMs, i.e., ˜T and ˆT , which are summed after to obtain T p as the pseudo labels to supervise the learning of the base branch. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6911 Sup Method Feature mAP@IoU(%) AVG AVG AVG 0.1 0.2 0.3 0.4 0.5 0.6 0.7 (0.1:0.5) (0.3:0.7) (0.1:0.7) Full S-CNN (Shou, Wang, and Chang 2016) 47.7 43.5 36.3 28.7 19.0 10.3 5.3 35.0 19.9 27.3 SSN (Zhao et al. 2017) 66.0 59.4 51.9 41.0 29.8 49.6 TAL-Net (Chao et al. 2018) 59.8 57.1 53.2 48.5 42.8 33.8 20.8 52.3 39.8 45.1 GTAN (Long et al. 2019) 69.1 63.7 57.8 47.2 38.8 55.3 Weak* STAR (Xu et al. 2019) I3D 68.8 60.0 48.7 34.7 23.0 47.0 3C-Net (Narayan et al. 2019) I3D 59.1 53.5 44.2 34.1 26.6 8.1 43.5 Weak STPN (Nguyen et al. 2018) I3D 52.0 44.7 35.5 25.8 16.9 9.9 4.3 35.0 18.5 27.0 RPN (Huang et al. 
2020) I3D 62.3 57.0 48.2 37.2 27.9 16.7 8.1 46.5 27.6 36.8 BaS-Net (Lee, Uh, and Byun 2020) I3D 58.2 52.3 44.6 36.0 27.0 18.6 10.4 43.6 27.3 35.3 DGAM (Shi et al. 2020) I3D 60.0 56.0 46.6 37.5 26.8 17.6 9.0 45.6 27.5 37.0 TSCN (Zhai et al. 2020) I3D 63.4 57.6 47.8 37.7 28.7 19.4 10.2 47.0 28.8 37.8 A2CL-PT (Min and Corso 2020) I3D 61.2 56.1 48.1 39.0 30.1 19.2 10.6 46.9 29.4 37.8 UM (Lee et al. 2021) I3D 67.5 61.2 52.3 43.4 33.7 22.9 12.1 51.6 32.9 41.9 CoLA (Zhang et al. 2021) I3D 66.2 59.5 51.5 41.9 32.2 22.0 13.1 50.3 32.1 40.9 AUMN (Luo et al. 2021) I3D 66.2 61.9 54.9 44.4 33.3 20.5 9.0 52.1 32.4 41.5 UGCT (Yang et al. 2021) I3D 69.2 62.9 55.5 46.5 35.9 23.8 11.4 54.0 34.6 43.6 D2-Net (Narayan et al. 2021) I3D 65.7 60.2 52.3 43.4 36.0 51.5 FAC-Net (Huang, Wang, and Li 2021) I3D 67.6 62.1 52.6 44.3 33.4 22.5 12.7 52.0 33.1 42.2 DCC (Li et al. 2022) I3D 69.0 63.8 55.9 45.9 35.7 24.3 13.7 54.1 35.1 44.0 RSKP (Huang, Wang, and Li 2022) I3D 71.3 65.3 55.8 47.5 38.2 25.4 12.5 55.6 35.9 45.1 ASM-Loc (He et al. 2022) I3D 71.2 65.5 57.1 46.8 36.6 25.2 13.4 55.4 35.8 45.1 DELU (Chen et al. 2022) I3D 71.5 66.2 56.5 47.7 40.5 27.2 15.3 56.5 37.4 46.4 FBA-Net (Moniruzzaman and Yin 2023) I3D 71.9 65.8 56.7 48.6 39.3 26.4 14.2 56.5 37.0 46.1 Ours I3D 72.4 66.9 58.4 49.7 41.8 25.5 12.8 57.8 37.6 46.8 Table 1: Comparison with state-of-the-art methods on THUMOS14 dataset. The AVG columns show average mAP under IoU thresholds of 0.1:0.5, 0.3:0.7 and 0.1:0.7. I3D denotes the utilization of the I3D network as the feature extractor, respectively. * indicates the methods use extra information. The best results are highlighted in bold. Sup means supervision manner. Training loss Following previous methods, the whole learning process is jointly driven by video-level classification loss Lcls, knowledge distillation loss Lkd and attention normalization loss Latt (Zhai et al. 2020). The total loss function can be formulated as: L = Lcls + Lkd + λLatt, (10) where λ denotes trade-off factors. The knowledge distillation Lkd in (Huang, Wang, and Li 2022) is used to implement the process of T p supervising T for training. The video-level classification loss is the combination of two losses calculated from the CA head and MIL head, which can be formulated as: Lcls = LCA + θLMIL, (11) where θ is a hyper-parameter. More details about each loss function please refer to the corresponding references. Experiments Datasets and Evaluation Metrics. We conduct our experiments on the two commonly-used benchmark datasets, including THUMOS14 (Jiang et al. 2014) and AcitivityNet v1.3 (Heilbron et al. 2015). Following the general weak-supervised setting, we only use the video-level category labels in the training process. THUMOS14 includes 200 untrimmed validation videos and 212 untrimmed test videos, where videos are collected from 20 action categories. Following the previous work (Wang et al. 2017; He et al. 2022; Huang, Wang, and Li 2021), we use the validation videos to train our model and test videos for evaluation. ActivityNet v1.3 contains 10,024 training videos, 4,926 validation videos, and 5,044 testing videos of 200 action categories. Following (Lee et al. 2021; Huang, Wang, and Li 2022), we use the training videos to train our model and validation videos for evaluation. Evaluation metrics. We evaluate the performance of our method with the standard evaluation metrics: mean average precise (mAP) under different intersection over union (IoU) thresholds. 
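For reference, the snippet below sketches the standard temporal IoU used when matching a predicted segment to a ground-truth instance at a given threshold; it is the usual definition rather than the authors' evaluation script.

```python
def temporal_iou(pred, gt):
    """IoU between two temporal segments given as (start, end) in seconds."""
    (ps, pe), (gs, ge) = pred, gt
    inter = max(0.0, min(pe, ge) - max(ps, gs))
    union = (pe - ps) + (ge - gs) - inter
    return inter / union if union > 0 else 0.0

# A prediction counts as correct at threshold t if it matches an unmatched
# ground-truth instance of the same class with IoU >= t; mAP then averages
# the per-class average precision over all classes.
print(temporal_iou((2.0, 7.5), (3.0, 8.0)))  # 0.75
```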
For THUMOS14 dataset, we report the mAP under thresholds IoU={0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7}. For ActivityNet v1.3 dataset, we report the mAP under thresholds [0.5:0.05:0.95]. At the same time, we also calculate the average mAP for different IoU ranges on the two datasets. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6912 Implementation Details We implement our model with the PyTorch framework and train the model with Adam optimizer (Kingma and Ba 2015). The scaling factor r is set to 4. The hyper-parameter θ and λ are set to 0.2 and 0.1, respectively. The feature is extracted using the I3D (Carreira and Zisserman 2017), which is pre-trained on the Kinetics-400 (Kay et al. 2017) dataset. For THUMOS14 dataset, we train 180 epochs with a learning rate of 0.00005, the batch size is set to 10, σ is set to 0.88, and K is set to ⌊50% ∗T⌋, where T is the number of video snippets. For ActivityNet v1.3 dataset, we train 100 epochs with a learning rate of 0.0001, the batch size is set to 32, σ is set to 0.9, and K is set to ⌊90% ∗T⌋. Comparison with State-of-the-Art Methods THUMOS14. We first compare our method with the state-of-the-art (SOTA) methods on THUMOS14 dataset. These SOTA methods contain fully-supervised methods and weakly-supervised methods, the results are shown in Table 1. We can observe that our proposed model outperforms the SOTA weakly-supervised temporal action localization methods. Our proposed method reaches 46.8 at average mAP for IoU thresholds 0.1:0.7. Meanwhile, our result can reach 41.8 at [email protected]. The reasons for the improved performance stem from 1) our method uses the variation relationships between snippet-features to generate salient snippet-features and then considers contextual information to enhance salient snippet-features, thereby improving the discriminative of snippet-features; 2) we introduce additional clues to leverage the relationships between videos, improving the discriminative nature of the action and background snippet-features. Thus, generating more high-fidelity pseudo labels can significantly improve the performance. ActivityNet v1.3. Table 2 shows the evaluation results in terms of mAP@IoU on ActivityNet v1.3 dataset. From the table, our model achieves competitive performance compared to other SOTA methods. In addition, our method achieves 25.8 for average mAP, which is 0.7 higher than ASM-Loc, demonstrating the superiority of our method. Ablation Study We conduct experiments to demonstrate the impact of different components in our method on THUMOS14 dataset. Impact of Saliency Inference Module. To find a proper function in Eq.(1), we explore several strategies to calculate the difference between each pair of neighbor snippets, including cosine distance, L1 distance, and L2 distance, and the results are reported in Table 3. In addition, we explore other ways of generating salient snippet-features, such as random assignment and classification. Among them, random assignment means randomly assigning salient or non-salient labels to each snippet, and classification uses the pre-trained classification head of the base model to classify snippets into salient and non-salient. The results show that L2 distance can achieve higher mAP than cosine distance, and L1 distance yields the best results compared to other methods, so we adopt it as the default diff function. The reason is that L1 focuses on the subtle variations between features by computing the absolute differences, which is important for TAL. 
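The ablation on the difference function can be made concrete with a small sketch of the saliency inference step (Eqs. (1)-(3)) in which the difference measure is switchable among the compared L1, L2 and cosine variants; the function names are ours, and the 50% ratio follows the THUMOS14 setting of K reported above.

```python
import torch

def adjacent_difference(feats, metric="l1"):
    """Difference values between temporally adjacent snippet-features (Eqs. 1-2).
    feats: (T, D) tensor; returns a (T-1,) tensor of difference values."""
    a, b = feats[1:], feats[:-1]
    if metric == "l1":
        return (a - b).abs().sum(dim=-1)
    if metric == "l2":
        return (a - b).pow(2).sum(dim=-1).sqrt()
    if metric == "cosine":
        return 1.0 - torch.nn.functional.cosine_similarity(a, b, dim=-1)
    raise ValueError(metric)

def salient_labels(feats, ratio=0.5, metric="l1"):
    """Assign b_t = 1 to snippets whose difference value is in the top-K (Eq. 3)."""
    diff = adjacent_difference(feats, metric)
    k = int(ratio * diff.numel())
    labels = torch.zeros(feats.size(0), dtype=torch.long)
    topk = diff.topk(k).indices + 1   # +1: tau_(t-1,t) is attached to snippet t
    labels[topk] = 1
    return labels

feats = torch.randn(100, 2048)
print(salient_labels(feats, metric="l1").sum())  # roughly half of the snippets marked salient
```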
Method mAP@IoU(%) 0.5 0.75 0.95 AVG STPN (Nguyen et al. 2018) 29.3 16.9 2.6 16.3 CMCS (Liu, Jiang, and Wang 2019) 34.0 20.9 5.7 21.2 BaS-Net (Lee, Uh, and Byun 2020) 34.5 22.5 4.9 22.2 TSCN (Zhai et al. 2020) 35.3 21.4 5.3 21.7 A2CL-PT (Min and Corso 2020) 36.8 22.0 5.2 22.5 TS-PAC (Liu et al. 2021) 37.4 23.5 5.9 23.7 UGCT (Yang et al. 2021) 39.1 22.4 5.8 23.8 AUMN (Luo et al. 2021) 38.3 23.5 5.2 23.5 FAC-Net (Huang, Wang, and Li 2021) 37.6 24.2 6.0 24.0 DCC (Li et al. 2022) 38.8 24.2 5.7 24.3 RSKP (Huang, Wang, and Li 2022) 40.6 24.6 5.9 25.0 ASM-Loc (He et al. 2022) 41.0 24.9 6.2 25.1 Ours 39.4 25.8 6.4 25.8 Table 2: Comparison with state-of-the-art methods on ActivityNet v1.3 dataset. The AVG column shows the averaged mAP under the IoU thresholds [0.5:0.05:0.95]. Method mAP@IoU(%) 0.1 0.3 0.5 0.7 AVG random 68.6 53.1 34.4 11.7 42.3 classification 67.7 53.4 36.9 11.7 43.0 cosine distance 68.6 53.5 36.6 12.3 43.3 L2 distance 70.7 55.5 36.6 12.3 44.2 L1 distance (Ours) 72.4 58.4 41.8 12.8 46.8 Table 3: Ablation studies about different strategies of detecting salient snippet-feature on THUMOS14 dataset. Whereas cosine distance calculates relative differences and L2 may suppress these subtle differences by squaring and then taking the square root. Impact of Different Modules. We evaluate the effect of the boundary refinement module and discrimination enhancement module. The results are presented in Table 4. We set the base branch as Base and progressively add the boundary refinement module and discrimination enhancement module to Base, and the performances are continuously improved by 3.4 and 6.3 for average mAP. Impact of Boundary Refinement Module. We evaluate the impact of different variants of boundary refinement module. The results are reported in Table 5, in which 1) self denotes utilize temporal-level information interaction unit to interact information between video snippet-features F and itself; 2) w/o salient and w/o non-salient denote removing the salient and non-salient snippet-features, respectively; 3) salient + non-salient denotes directly adding the two types of features together; 4) weighted sum denotes using weighted sum operation to fuse two types of features; 5) temporal denotes enhancing snippet-features only using the temporal-level information interaction unit. We can observe that 1) the salient snippet-features have a significant influence on the performance, removing them will lead to a significant drop in the performance; 2) weighted sum is more effective compared to directly adding, which can assist the utilizing of the inforThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6913 Method mAP@IoU(%) 0.1 0.3 0.5 0.7 AVG Base 62.7 45.5 29.3 10.4 37.1 Base + BRM 66.3 49.7 32.6 11.3 40.5 Base + BRM + DEM 72.4 58.4 41.8 12.8 46.8 Table 4: The effects of different modules on THUMOS14 dataset. BRM and DEM denote boundary refinement module and discrimination enhancement module, respectively. Method mAP@IoU(%) 0.1 0.3 0.5 0.7 AVG self 65.8 48.1 30.6 10.9 39.2 w/o salient 49.9 34.8 20.3 5.9 27.6 w/o non-salient 71.7 56.9 39.9 12.4 45.9 salient + non-salient 71.0 54.1 35.0 10.3 43.1 weighted sum 72.4 58.4 41.8 12.8 46.8 temporal-level 71.4 57.4 39.5 12.8 46.0 Table 5: The effect of different components in boundary refinement module on THUMOS14 dataset. mation in non-salient snippet-features; 3) information interaction unit at both the channel-level and temporal-level can enhance the discriminative nature of features better. 
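A minimal sketch of the weighted-sum fusion in Eq. (6), whose effect is ablated in Table 5; σ = 0.88 follows the THUMOS14 setting reported in the implementation details, and the function name is ours.

```python
import torch

def fuse_enhanced_features(fa_tilde, fb_tilde, sigma=0.88):
    """Eq. (6): trade off the salient vs. non-salient enhanced streams.
    sigma = 0.5 would reduce to the 'salient + non-salient' direct-add variant
    of Table 5, up to a constant scale."""
    return sigma * fa_tilde + (1.0 - sigma) * fb_tilde

fa_tilde = torch.randn(100, 2048)   # enhanced salient stream, shape (T, D)
fb_tilde = torch.randn(100, 2048)   # enhanced non-salient stream, shape (T, D)
print(fuse_enhanced_features(fa_tilde, fb_tilde).shape)  # torch.Size([100, 2048])
```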
Impact of Memory Update Strategies. We explore the impact of different memory update strategies in the discrimination enhancement module. The evaluation results are shown in Table 6. We evaluate two variants of memory update strategy, i.e., only using the high-confidence action snippetfeatures to direct update memory, and only using the momentum update strategy. From the table, we can see that our method obtains better performance than only using the momentum update strategy, because the momentum update strategy will include many noisy features and impair the learning of intra-video relation. The results indicate that our method effectively incorporates more action information compared to the direct update strategy. GT Our Base ASM-Loc Our Full Figure 3: Qualitative comparisons of our method, our Base, and ASM-Loc on “Shotput” on THUMOS14. Method mAP@IoU(%) 0.1 0.3 0.5 0.7 AVG direct update 71.9 58.1 40.3 12.3 46.4 momentum update 71.0 55.7 37.4 11.3 44.5 Ours 72.4 58.4 41.8 12.8 46.8 Table 6: The effect of different memory update strategies on THUMOS14 dataset. Background Foreground (b) Our Full (a) Our Base Foreground Background Figure 4: T-SNE visualization of foreground and background features on example “CliffDiving” on THUMOS14. Qualitative results To help understand the effect of our proposed method, we present some qualitative results in this subsection. First, we show one case selected from THUMOS14 dataset in Figure 3, and we observe that our method can locate more accurate action and background regions than our Base (base branch) and ASM-Loc. Meanwhile, we adopt t-SNE technology to project the embedding features in THUMOS14 dataset into a 2-dimensional features space, and the results are shown in Figure 4. We observe that our method can accurately bring the embedding features of foregrounds together, and make them away from the background. Conclusion In this paper, we propose a novel weakly-supervised TAL method by inferring salient snippet-feature, of which several modules are designed to assist pseudo label generation by exploring the information variation and interaction. Comprehensive experiments demonstrate the effectiveness and superiority of our proposed method. Acknowledgements This work was partly supported by the Funds for Innovation Research Group Project of NSFC under Grant 61921003, NSFC Project under Grant 62202063, and 111 Project under Grant B18008. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6914 References Carreira, J.; and Zisserman, A. 2017. Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4724–4733. Chao, Y.-W.; Vijayanarasimhan, S.; Seybold, B.; Ross, D. A.; Deng, J.; and Sukthankar, R. 2018. Rethinking the Faster R-CNN Architecture for Temporal Action Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1130–1139. Chen, M.; Gao, J.; Yang, S.; and Xu, C. 2022. DualEvidential Learning for Weakly-supervised Temporal Action Localization. In Proceedings of the European Conference on Computer Vision, 192–208. He, B.; Yang, X.; Kang, L.; Cheng, Z.; Zhou, X.; and Shrivastava, A. 2022. ASM-Loc: Action-Aware Segment Modeling for Weakly-Supervised Temporal Action Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13925–13935. He, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. 2020. 
Momentum Contrast for Unsupervised Visual Representation Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9729–9738. Heilbron, F. C.; Escorcia, V.; Ghanem, B.; and Niebles, J. C. 2015. ActivityNet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 961–970. Huang, L.; Huang, Y.; Ouyang, W.; and Wang, L. 2020. Relational Prototypical Network for Weakly Supervised Temporal Action Localization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 11053–11060. Huang, L.; Wang, L.; and Li, H. 2021. Foreground-Action Consistency Network for Weakly Supervised Temporal Action Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision, 8002–8011. Huang, L.; Wang, L.; and Li, H. 2022. Weakly Supervised Temporal Action Localization via Representative Snippet Knowledge Propagation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3272–3281. Jiang, Y.-G.; Liu, J.; Zamir, A. R.; Toderici, G.; Laptev, I.; Shah, M.; and Sukthankar, R. 2014. THUMOS challenge: Action recognition with a large number of classes. https: //www.crcv.ucf.edu/THUMOS14/. Kay, W.; Carreira, J.; Simonyan, K.; Zhang, B.; Hillier, C.; Vijayanarasimhan, S.; Viola, F.; Green, T.; Back, T.; Natsev, P.; Suleyman, M.; and Zisserman, A. 2017. The Kinetics Human Action Video Dataset. CoRR, abs/1705.06950. Kingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization. In Proceedings of the International Conference on Learning Representations. Lee, P.; Uh, Y.; and Byun, H. 2020. Background Suppression Network for Weakly-Supervised Temporal Action Localization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 11320–11327. Lee, P.; Wang, J.; Lu, Y.; and Byun, H. 2021. Weaklysupervised Temporal Action Localization by Uncertainty Modeling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 1854–1862. Li, J.; Yang, T.; Ji, W.; Wang, J.; and Cheng, L. 2022. Exploring Denoised Cross-Video Contrast for WeaklySupervised Temporal Action Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19914–19924. Lin, C.; Xu, C.; Luo, D.; Wang, Y.; Tai, Y.; Wang, C.; Li, J.; Huang, F.; and Fu, Y. 2021. Learning Salient Boundary Feature for Anchor-free Temporal Action Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3320–3329. Lin, T.; Zhao, X.; and Shou, Z. 2017. Single Shot Temporal Action Detection. In Proceedings of the 25th ACM International Conference on Multimedia, 988–996. Lin, T.; Zhao, X.; Su, H.; Wang, C.; and Yang, M. 2018. BSN: Boundary Sensitive Network for Temporal Action Proposal Generation. In Proceedings of the European Conference on Computer Vision, 3–19. Liu, D.; Jiang, T.; and Wang, Y. 2019. Completeness Modeling and Context Separation for Weakly Supervised Temporal Action Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1298–1307. Liu, K.; Liu, W.; Gan, C.; Tan, M.; and Ma, H. 2018. T-C3D: Temporal Convolutional 3D Network for Real-time Action Recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 7138–7145. Liu, X.; Liu, W.; Ma, H.; and Fu, H. 2016. Large-scale Vehicle Re-identification in Urban Surveillance Videos. 
In Proceedings of the IEEE International Conference on Multimedia and Expo, 1–6. Liu, Y.; Chen, J.; Chen, Z.; Deng, B.; Huang, J.; and Zhang, H. 2021. The Blessings of Unlabeled Background in Untrimmed Videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6176– 6185. Long, F.; Yao, T.; Qiu, Z.; Tian, X.; Luo, J.; and Mei, T. 2019. Gaussian Temporal Awareness Networks for Action Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 344–353. Luo, W.; Zhang, T.; Yang, W.; Liu, J.; Mei, T.; Wu, F.; and Zhang, Y. 2021. Action Unit Memory Network for Weakly Supervised Temporal Action Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9969–9979. Luo, Z.; Guillory, D.; Shi, B.; Ke, W.; Wan, F.; Darrell, T.; and Xu, H. 2020. Weakly-Supervised Action Localization with Expectation-Maximization Multi-Instance Learning. In Proceedings of the European Conference on Computer Vision, 729–745. Min, K.; and Corso, J. J. 2020. Adversarial BackgroundAware Loss for Weakly-Supervised Temporal Activity Localization. In Proceedings of the European Conference on Computer Vision, 283–299. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6915 Moniruzzaman, M.; and Yin, Z. 2023. Collaborative Foreground, Background, and Action Modeling Network for Weakly Supervised Temporal Action Localization. IEEE Transactions on Circuits and Systems for Video Technology, 33(11): 6939–6951. Moon, T. 1996. The expectation-maximization algorithm. IEEE Signal Processing Magazine, 13(6): 47–60. Narayan, S.; Cholakkal, H.; Hayat, M.; Khan, F. S.; Yang, M.-H.; and Shao, L. 2021. D2-Net: Weakly-Supervised Action Localization via Discriminative Embeddings and Denoised Activations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 13608–13617. Narayan, S.; Cholakkal, H.; Khan, F. S.; and Shao, L. 2019. 3C-Net: Category Count and Center Loss for WeaklySupervised Action Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision, 8679–8687. Nguyen, P.; Han, B.; Liu, T.; and Prasad, G. 2018. Weakly Supervised Action Localization by Sparse Temporal Pooling Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6752–6761. Pardo, A.; Alwassel, H.; Heilbron, F. C.; Thabet, A.; and Ghanem, B. 2021. RefineLoc: Iterative Refinement for Weakly-Supervised Action Localization. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 3319–3328. Paul, S.; Roy, S.; and Roy-Chowdhury, A. K. 2018. WTALC: Weakly-supervised Temporal Activity Localization and Classification. In Proceedings of the European Conference on Computer Vision, 563–579. Qi, M.; Li, W.; Yang, Z.; Wang, Y.; and Luo, J. 2019. Attentive Relational Networks for Mapping Images to Scene Graphs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3957–3966. Qi, M.; Qin, J.; Li, A.; Wang, Y.; Luo, J.; and Van Gool, L. 2018. stagNet: An Attentive Semantic RNN for Group Activity Recognition. In Proceedings of the European Conference on Computer Vision, 104–120. Qi, M.; Qin, J.; Yang, Y.; Wang, Y.; and Luo, J. 2021. Semantics-Aware Spatial-Temporal Binaries for CrossModal Video Retrieval. IEEE Transactions on Image Processing, 30: 2989–3004. Qi, M.; Wang, Y.; Li, A.; and Luo, J. 2020. STC-GAN: Spatio-Temporally Coupled Generative Adversarial Networks for Predictive Scene Parsing. 
IEEE Transactions on Image Processing, 29: 5420–5430. Qu, S.; Chen, G.; Li, Z.; Zhang, L.; Lu, F.; and Knoll, A. 2021. ACM-Net: Action Context Modeling Network for Weakly-Supervised Temporal Action Localization. arXiv preprint arXiv:2104.02967. Shi, B.; Dai, Q.; Mu, Y.; and Wang, J. 2020. WeaklySupervised Action Localization by Generative Attention Modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1009–1019. Shou, Z.; Wang, D.; and Chang, S.-F. 2016. Temporal Action Localization in Untrimmed Videos via Multi-stage CNNs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1049–1058. Wang, L.; Xiong, Y.; Lin, D.; and Van Gool, L. 2017. UntrimmedNets for Weakly Supervised Action Recognition and Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4325–4334. Xu, Y.; Zhang, C.; Cheng, Z.; Jianwen, X.; Niu, Y.; Pu, S.; and Wu, F. 2019. Segregated Temporal Assembly Recurrent Networks for Weakly Supervised Multiple Action Detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 9070–9078. Yang, L.; Peng, H.; Zhang, D.; Fu, J.; and Han, J. 2020. Revisiting Anchor Mechanisms for Temporal Action Localization. IEEE Transactions on Image Processing, 29: 8535– 8548. Yang, W.; Zhang, T.; Yu, X.; Qi, T.; Zhang, Y.; and FengWu. 2021. Uncertainty Guided Collaborative Training for Weakly Supervised Temporal Action Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 53–63. Zeng, R.; Huang, W.; Gan, C.; Tan, M.; Rong, Y.; Zhao, P.; and Huang, J. 2019. Graph Convolutional Networks for Temporal Action Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision, 7094–7103. Zhai, Y.; Wang, L.; Tang, W.; Zhang, Q.; Yuan, J.; and Hua, G. 2020. Two-Stream Consensus Network for WeaklySupervised Temporal ActionLocalization. In Proceedings of the European Conference on Computer Vision, 37–54. Zhang, C.; Cao, M.; Yang, D.; Chen, J.; and Zou, Y. 2021. CoLA: Weakly-Supervised Temporal Action Localization With Snippet Contrastive Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16010–16019. Zhao, Y.; Xiong, Y.; Wang, L.; Wu, Z.; Tang, X.; and Lin, D. 2017. Temporal Action Detection with Structured Segment Networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2914–2923. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6916 | 2024 | 768 |
18,593 | Behavioral Recognition of Skeletal Data Based on Targeted Dual Fusion Strategy Xiao Yun1, Chenglong Xu1, Kevin Riou2, Kaiwen Dong1*, Yanjing Sun1, Song Li1, Kevin Subrin2, Patrick Le Callet2 1School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, 221116, China 2Nantes Universit´e, Ecole Centrale Nantes, CNRS, LS2N, UMR 6004, Nantes, France {xyun, clongxu, dongkaiwen, yjsun, lisong}@cumt.edu.cn, {kevin.riou, kevin.subrin, patrick.lecallet}@univ-nantes.fr Abstract The deployment of multi-stream fusion strategy on behavioral recognition from skeletal data can extract complementary features from different information streams and improve the recognition accuracy, but suffers from high model complexity and a large number of parameters. Besides, existing multi-stream methods using a fixed adjacency matrix homogenizes the model’s discrimination process across diverse actions, causing reduction of the actual lift for the multi-stream model. Finally, attention mechanisms are commonly applied to the multi-dimensional features, including spatial, temporal and channel dimensions. But their attention scores are typically fused in a concatenated manner, leading to the ignorance of the interrelation between joints in complex actions. To alleviate these issues, the Front-Rear dual Fusion Graph Convolutional Network (FRF-GCN) is proposed to provide a lightweight model based on skeletal data. Targeted adjacency matrices are also designed for different front fusion streams, allowing the model to focus on actions of varying magnitudes. Simultaneously, the mechanism of Spatial-Temporal-Channel Parallel Attention (STCP), which processes attention in parallel and places greater emphasis on useful information, is proposed to further improve model’s performance. FRF-GCN demonstrates significant competitiveness compared to the current state-of-theart methods on the NTU RGB+D, NTU RGB+D 120 and Kinetics-Skeleton 400 datasets. Our code is available at: https://github.com/sunbeam-kkt/FRF-GCN-master. Introduction Human action recognition (HAR) aims to determine the current behavior category of a person based on a series of welltrained recognition models, and it has wide range of applications, such as human-computer interaction (Liu and Wang 2020), video surveillance (Xin et al. 2023), and autonomous driving (Saleh et al. 2022). In particular, after the introduction of skeleton-based HAR by ST-GCN (Yan, Xiong, and Lin 2018), algorithms based on GCN for skeleton-based behavior recognition have emerged rapidly. Recent approaches on GCN based behavior recognition mostly focused on 3 axis. *Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. The first is how the adjacency matrix in GCN can better learn the relationship between joints to enhance the learning ability of the model. ST-GCN adopts the natural connections between human joints as the topological relations to be learned, but is unable to establish new generative relations as unnatural connections. 2S-AGCN (Shi et al. 2019b) proposes an adaptive adjacency matrix to solve this problem in order to learn more information. Further, MS-AAGCN (Shi et al. 2020) adds learnable coefficients to the adjacency matrix to make it more flexible. It has also been argued that more relations between joints do not always help action learning, for example, STSF-GCN (Fang et al. 
2022) has obtained relatively good performance using fewer relations on a GCN model with joints as data input. The second is different attention mechanisms are adopted to better capture the spatio-temporal as well as the channel attention scores. Numerous studies have explored the influence of attentional mechanisms on action judgments, which is very reasonable from a biological point of view. All attention mechanisms are roughly divided into three categories: temporal (Liu et al. 2020), spatial (Song et al. 2020), and channel (Chen et al. 2021c). Many scholars have addressed one or two of these, but more advanced models incorporate all three, such as DC-GCN (Zhou et al. 2023), AM-GCN (Sun et al. 2022). The last is how multi-stream information can be better fused. Different types of data often contain distinct feature information, and the fusion of multiple streams of information generally achieves better results than using a single stream. Currently, most behavior recognition models based on skeleton data employ architectures that fuse multiple streams of information (Hu et al. 2022; Qin et al. 2022; Wu, Zhang, and Zou 2023; Zhang et al. 2023). The data fusion schemes used can be broadly classified into two categories: 1) Rear fusion of data (Chen et al. 2021d; Liu et al. 2022a; Xiong et al. 2022), where each stream of information is processed by the same model to obtain behavior category scores. These scores are then combined through weighted fusion to produce a final behavior classification result. 2) Front fusion of data (Song et al. 2020, 2022) where the input data is fused prior to being processed by the model, followed by feature extraction. We neutralized these two schemes in FRF-GCN and obtained better fusion results. We noticed 3 main limitations in these recent developThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6917 Data Preprocess Joint Bone J_M B_M GCN_TCN_ unit GCN_TCN_ unit STC-P GAP Softmax GAP Softmax Front fusion Rear fusion BN B1-B4 STC-P STC-P STC-P B8-B10 GAP softmax Internal composition of GCN_TCN_unit Input Data Figure 1: Overall flow chart of FRF-GCN, the J M and B M are the joint motion and bone motion, respectively, where ⊕ denotes information fusion (including forward fusion and backward fusion), GCN TCN unit is the spatial temporal joint unit, the upper and lower branches use a targeted adjacency matrix AJB and AJBM, STC-P is the spatial temporal channel parallel attention mechanism, GAP and Softmax are the global average pooling and classifier, respectively. ments: (1) Different adjacency matrices are not selected according to the different characteristics of multi-stream information, which may overlook valuable information. (2) Most of the existing attention mechanisms use a serial processing approach, learning spatial, temporal and channel features sequentially or in a different order. The attention learned at the back-end may be influenced by the frontend, while some interconnections may be lost. Moreover, we know from our experiments that joint spatio-temporal attention and channel attention do not seem to enjoy equal status for behavioral classification tasks. (3) Incorporating novel modalities increases the complexity of the architecture, and while some efficient solutions have been proposed for multistream GCN based behavior recognition, there is still a significant gap with lightweight solutions. To alleviate these limitations, we propose the FRF-GCN, which incorporates the following 3 key novelties: 1. 
Instead of using a fixed adjacency matrix, we propose targeted adjacency matrices for the fusion of two different information sources. This approach enhances the efficiency of GCN computations and minimizes redundant calculations. 2. To simultaneously capture attention scores in the temporal, spatial, and channel dimensions of skeleton data, we introduce the STC-P attention mechanism. By incorporating it into FRF-GCN, we preserve the interdependencies between temporal, spatial, and channel attention, resulting in more effective extraction of skeleton information. 3. We propose a lightweight architecture for fusing multiple streams of skeleton information through bidirectional fusion. This approach reduces the model’s parameterization while maintaining its performance. FRF-GCN achieves a balance between parameter size and performance on the NTU60, NTU120 and KineticsSkeleton datasets, demonstrating competitiveness against current state-of-the-art models. Related Works Multi-Stream Information Data Fusion In recent years, many studies have demonstrated that multistream information fusion can lead to better performance, such as MS-AAGCN (Shi et al. 2020), MST-GCN (Chen et al. 2021d) and 4S-ACE-Ens (Qin et al. 2022). The number of information streams ranges from dual streams (Shi et al. 2019b; Wu, Wu, and Kittler 2021), to triple streams (Song et al. 2022), quadruple streams (Chen et al. 2021a; Yang et al. 2021), and even more (Chen et al. 2021b). The types of information are also diverse, including not only joint bones and their motion information, but also angular information and so on. They are fused at the end or at the beginning of the model to obtain better results after fusion. Based on the results done in previous studies, it is demonstrated that front fusion leads to a significant reduction in the number of parameters but a decrease in performance, while rear fusion leads to a significant increase in performance but an exponential increase in the number of parameters. In contrast, our FRF-GCN model utilizes a dual data fusion scheme, combining both forward and post fusion approaches, along with targeted adjacency matrices. This approach effectively reduces the parameter count while minimizing performance loss. By leveraging both forward and post fusion techniques, our model achieves a balance between parameter reduction and preserving performance. Attention Mechanisms in Behavior Recognition Attention plays an extremely important role in behavioral recognition models, and there are two main categories of its forms of action: local attention and global attention. In previous studies, local attention has been the majority and has given significant impetus to later studies. For example, ST-GCN (Yan, Xiong, and Lin 2018) and AT-GCN (Sheng and Li 2021) use the local attention mechanism to capture the local association information in the skeleton sequence. Later, with the rise of transformer and other attention mechanisms in Natural Language Processing (NLP), many scholars introduced them into behavior recognition tasks, such as KA-AGTN (Liu et al. 2022b), ACT (Mazzia et al. 2022), HGCT (Bai et al. 2022), CSCMFT (Liu et al. 2023), etc. The idea of transformer is adopted. And the attention mechanism represented by transformer focuses on the global state information. In addition to the transformer model, excellent GCN models such as CTR-GCN (Chen et al. 2021c) and EfficientNet (Song et al. 2022) also choose to adopt a more global focus on attention information. 
This also makes sense biologically, as people are often used to adopting a top-down strategy (Lange and Lappe 2006) when judging what category an action belongs to, which shows the importance of global information. The STC-P attention mechanism we deployed The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6918 in FRF-GCN also adopts the idea of global attention. Model Architecture The model of FRF-GCN proposed in this paper is shown in Figure 1, and its main component structures are specified by following subsections. Front-Rear Dual Fusion Strategy To balance the number of parameters and performance of the state of the art models, this paper designs a dual fusion strategy, combining both front fusion and rear fusion. The idea is straightforward: before inputting the information into the network, the four streams (i.e. joint, bone, joint motion and bone motion) are fused in pairs, which the joint and bone information are fused, as well as the joint motion and bone motion information, as shown in Figure 1. Subsequently, the two fusion streams are input into the network using a dualstream model. The front fusion stage is to splice the two sets of information mentioned earlier in terms of channel dimensions, after which it is fed into the GCN TCN unit shown in Figure 1. Unlike the conventional two-stream GCN model, the adjacency matrix in FRF-GCN is based on different focuses on input data characteristics. After the front fusion stage, the adjacency matrix AJB is selected based on the fact that the joint and bone fusion flow information is more focused on large-amplitude movements (Figure 2, left). And to focus more on small-amplitude movements as well as the motion of the joint points themselves, we also select adjacency matrix AJBM (Figure 2, right) based on the features that the joint motion and bone motion fusion streams provide. The features obtained from the two tributaries enter a rear fusion phase. This process can be described as the following equation. fout = αfc (GAP (fJB)) + βfc (GAP (fJBM)) (1) The final output scores of the FRF-GCN model are shown in equation (1). α and β are set to 0.6 and 0.4, respectively, as the backward fusion weights after a small grid search. fc is the fully connected layer, GAP is the average pooling layer, fJB is the behavioral discriminant score of joint plus bone fusion flow, and fJBM is the behavioral discriminant score of joint motion plus bone motion fusion flow. This reduces the model’s parameter count by half and enhances performance through targeted adjacency matrices compared to pure front fusion. Relevant experiments are presented in Table 4. Targeted Spatial Graph Convolution In the model of 2S-AGCN (Shi et al. 2019b), the original adjacency matrix can be represented as the following equation. Where I stands for the joint point itself, Aske records the physical connections inherent in the body. A = I + Aske (2) Motivated by previous research (Fang et al. 2022), we studied whether different adjacency matrices should be used 1 2 4 5 6 3 7 1 2 4 5 6 3 7 Figure 2: Illustration of targeted adjacency matrix, the left is AJB, contains only the newly learned relationships between joints, and the right is AJBM, includes both the inherent connections between joints (the orange connections) and the motion patterns of individual joints themselves (the red connections), best view in the color mode. for different information streams. 
Through extensive experimentation, we found that considering relationships between all joints may lead to misjudgments by the model. However, using only identity matrix I and learnable masks is not sufficient for the fused information after front fusion. Moreover, existing attention mechanisms often prioritize actions with larger variations by the connections learned with adjacency matrix, which may not be suitable for subtle actions with smaller variations. Additionally, using a fixed adjacency matrix for different information streams is not always optimal, as it introduces more computilities and overlooks data characteristics. To alleviate these issues, FRF-GCN performs targeted adjacency matrix selection for different information flows. This approach increases the utilization of effective computility and reduces the chances of redundant information interfering with action judgments. For joint and bone fusion streams, both joint and bone information represent absolute positional information. The coordinates of the data indicate the positions of the respective joints or bones. In this case, it is important to focus on the newly generated relationships, as illustrated in the left diagram of Figure 2. Thus, we utilize an adjacency matrix AJB to represent the fused information of joint and bone. AJB represents the parameterized initial physical topology relationship (A) as shown in the left side of Figure 2. The parameters in the matrix indicates whether there is a connection between two articulations and the strength of the connection. The parameters are updated each time by back propagation and there is no restriction on the parameters in AJB to maintain its learning capability. The formula for the joint and bone fusion flow is shown in equation (3), where AJB is initialized by A. Where Kv follows the setting of ST-GCN (Yan, Xiong, and Lin 2018), which is set to 3, and Wk is the corresponding weight. fout = Kv X k=1 Wkfin (AJBk + Pk) (3) For the joint motion and bone motion fusion streams, both fused information belong to relative position information, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6919 which is different to joint and bone information. Moreover, the fusion flow information of joint motion and bone motion itself focuses on motion information, adding I and Aske can make it fully exploit small-amplitude movements. AJBM is shown in the right of Figure 2, and its calculation formula is as in Eq. (4), fout = Kv X k=1 Wkfin (AJBMk + Pk) (4) AJBMk = I + Aske + AJBk, I represents the movement of the joint itself, and Aske is the normalized relationship matrix between the joints. Pk is a unique graph learned for each sample, obtained by calculating the similarity between two vertices by two normalized embedding Gaussian functions (θ and φ) and then normalizing, as in equations. Where θ and φ are realized by two 1*1 convolutions, N is the number of joints in a single human skeleton, i and j ∈[1, N]. The computation process of f can be expressed as Equation (6). f (vi, vj) = eθ(vi)T φ(vj) PN j=1 eθ(vi)T φ(vj) (5) Pk = softmax f T inW T θkWφkfin (6) In theory, large-scale movements often prioritize newly generated relationships. Small-scale movements typically emphasize the motion of individual joints or the motion of initial physical connections. The joint and bone fusion streams using AJB are better able to focus on the behaviors that serve as criteria for large-scale movements. 
Meanwhile, the fusion of joint motion and bone motion streams using AJBM is more attentive to behaviors that serve as criteria for small-scale movements. These two types of information complement each other in the rear fusion stage, resulting in improved fusion performance. The relative experiments could be seen in experiments section. Multi-Field Temporal Depth-Point Convolution To address the issues with conventional temporal convolution in terms of parameter explosion and limited temporal receptive field, FRF-GCN replaces it with a multiscale depth-point convolution, which combines depth-wise and point-wise convolutions with different receptive field sizes achieved through dilated convolutions. This allows for a more comprehensive extraction of temporal information. The number of parameters in the whole model drops dramatically due to the effective splitting of the convolution process, achieving lightweight design. The multi-scale depth-point temporal graph convolution used in FRF-GCN was called MD-TGCN. MD-TGCN is shown in Figure 3 and consists of three branches, regular depth-point convolution, large field of view depth-point convolution, and residual connection. The regular depth-point convolution captures the local detail information, the large field of view depth-point convolution captures the global information, and the residual join is added with the information from the other two branches after stitching to optimize the training process. i C / C i i C / C i i C / C r i C / C r i C / C 1* 1 Conv 5* 1 D -Conv(dil = 1) 1* 1 P -Conv 1* 1 P -Conv res(1* 1) concat C 5* 1 D -Conv(dil = 3) Figure 3: Temporal convolution flow chart, where Ci is the number of embedded channels, Cr is the number of singlebranch output channels, D −Conv and P −Conv represent depth-wise convolution and point-wise convolution, respectively. The depth-point convolution model with multiple fields of view operates on the input sequence as in equation (7) and (8), where fin is the input data, fD1 is the depth-wise convolution with an expansion factor of 1, fD2 is the expansion depth-wise convolution with an expansion factor of 3, fP is the 1∗1 point-wise convolution, ⊕represents the splicing in the channel dimension, fD−P is the combined output of the depth-point convolution, Res is the residual connection. fD−P = fP (fD1 (fin)) ⊕fP (fD2 (fin)) (7) fout = Res (fD−P , fin) (8) STC-P Attentional Mechanisms In addition to the intuitive spatio-temporal features, the hidden channel features should not be ignored in the extraction of attention. As the number of channels changes, during the learning process of the model, many channels start to contribute less to the overall performance, but expend considerable computilities to re-learn their weights, especially in convolutional layers with a large number of channels. To alleviate this impunity, we propose the Spatial-TemporalChannel Parallel Attention Module (STC-P), which integrates channel-level attention based on the Squeeze-andExcitation Network concept (Hu, Shen, and Sun 2018). The specific operations are illustrated in Figure 4. The extraction of spatio-temporal attention (Song et al. 2022) is shown in the upper left corner of Figure 4. However, an additional branch is introduced to extract channel attention. The whole process is simple and is divided into three major steps as shown in the bottom left corner of Figure 4: T-Pooling, S-Pooling, and FC. 
The resulting channel attention map is multiplied element-wise with the input feature map to obtain the final channel-attention-weighted feature map, and the final output is obtained via a fusion module. The STC-P module therefore obtains attention scores in three dimensions simultaneously: spatial, temporal, and channel. The spatial features learned by the STC-P module are connected to the prior features learned by the preceding spatial convolutional network, which makes the learned spatial attention map more accurate.
Figure 4: The STC-P attention module. "Hardswish" refers to the activation function used, and r_d and r_a denote reduction factors; temporal pooling and spatial pooling are denoted T-Pooling and S-Pooling, respectively. The variants of STC-P are also shown, with STC-P2 in the green area and STC-P3 in the yellow area, indicating only how they differ from STC-P.
During the experiments, we found that the best results were achieved when STC-P was added only to layers 5, 6, and 7; the other configurations caused varying degrees of performance loss. Based on the experimental results, this may be due to the close relationship between the channel attention mechanism and the distribution of channel numbers across the convolutional layers. The function of the STC-P attention module can be represented by the following equations:

f_{st} = \theta\left(\left(\mathrm{pool}_t(f_{in}) \oplus \mathrm{pool}_v(f_{in})\right) \cdot w\right)    (9)

f_{stj} = f_{in} \odot \left(\sigma(f_{st} \cdot w_t) \otimes \sigma(f_{st} \cdot w_v)\right)    (10)

f_c = \sigma\left(w_k\left(w_l \cdot \mathrm{pool}_{tv}(f_{in})\right)\right) \odot f_{in}    (11)

f_{out} = \varnothing\left(\mathrm{BN}\left(f_{stj} \oplus f_c\right)\right)    (12)

where f_stj is the joint spatial-temporal attention feature, f_c is the channel attention feature, and f_out is the final output combining the three parallel attentions. w, w_t, and w_v are the corresponding weights, which are updated by backpropagation, and θ, σ, and ∅ are activation functions. Additionally, we designed two other variants of STC-P, referred to as STC-P2 and STC-P3, to compare their performance with the proposed STC-P attention mechanism. This comparison aims to validate the importance of the joint relationship between spatial, temporal, and channel attention for action recognition. Their design diagrams are shown on the right of Figure 4.
Experiments
In this section, we conduct experimental evaluations of the proposed FRF-GCN model on three large-scale datasets. We also perform extensive ablation experiments to validate the effectiveness of the proposed components. To reduce the complexity of the experiments, unless otherwise stated, the ablation experiments are conducted only under the Cross-Subject (CS) protocol of the NTU RGB+D 60 dataset.
Datasets
NTU RGB+D 60. The NTU RGB+D 60 dataset (Shahroudy et al. 2016) is a large-scale skeleton dataset for human action recognition.
It contains 56,880 action sequences covering 60 action categories performed by different subjects, of which 40 are daily behaviors, 9 are health-related actions, and 11 are two-person interactions. The dataset is evaluated with two protocols: Cross-Subject and Cross-View.
NTU RGB+D 120. The NTU RGB+D 120 dataset (Liu et al. 2019) is a supplement to the NTU RGB+D 60 dataset. Evaluation is conducted using two protocols: Cross-Subject and Cross-Setup. In the Cross-Subject evaluation, the 106 participants are divided into training and testing sets, each containing 53 subjects. In the Cross-Setup evaluation, action sequences from even-numbered setup IDs are used for training, while sequences from odd-numbered setup IDs are used for testing.
Kinetics-Skeleton 400. Kinetics-Skeleton 400 (Carreira and Zisserman 2017) is a very large dataset for behavior recognition. It contains 400 actions, each video lasts about 10 seconds, and the clips cover both indoor and outdoor scenes; the actions are not only highly varied but also involve interaction with the scene, which makes it challenging to identify behavior categories correctly. The data were processed in the same way as 2S-AGCN (Shi et al. 2019b), and we report the Top-1 and Top-5 accuracies of the model.
Table 1: Selection and comparison of adjacency matrices for different fusion flows.
J+B           JM+BM          Acc (%)   Param (M)
I             I + A_ske      89.57     2.38
I + A_ske     I              90.03     2.38
I + A_ske     I + A_ske      90.19     2.38
I + A_ske     A_JB           90.45     2.40
A_JB          I + A_ske      91.19     2.40
A_JB          A_JB           91.18     2.42
A_JBM         A_JB           91.10     2.42
A_JBM         A_JBM          90.90     2.42
A_JB (ours)   A_JBM (ours)   91.29     2.42
Table 2: Performance comparison of different attention mechanisms (NTU RGB+D 60, CS).
Attention mechanism               Acc (%)   Param (M)
STC-P                             91.29     2.42
STC-P2                            90.95     2.42
STC-P3                            91.18     2.44
ST-joint-Att (Song et al. 2022)   89.72     2.42
Implementation Details
All experiments were conducted using the PyTorch deep learning framework. Stochastic gradient descent (SGD) with Nesterov momentum (0.9) was employed as the optimizer, with a batch size of 56. The cross-entropy function was selected as the loss for backpropagation. The weight decay was set to 0.0003 and the initial learning rate to 0.1. For NTU RGB+D 60, NTU RGB+D 120, and Kinetics-Skeleton, the total number of epochs was set to 50, 60, and 65, respectively, and the two learning-rate decay milestones were set to epochs (30, 40), (30, 50), and (45, 55), respectively. Unlike most previous studies, FRF-GCN uses interpolation as a preprocessing step to extend each skeleton sequence to 300 frames. Each sample defaults to two individuals; if only one person is present, the second is padded with zeros. Additionally, a warm-up strategy (He et al. 2016) was applied during the first 5 epochs to enhance training stability. All experiments were conducted on RTX 3080 GPUs.
Ablation Studies
Adjacency Matrix Targeting. In Table 1, using A_JBM for all information performs worse than using A_JB for all information, which shows that establishing more relations does not necessarily improve recognition and that selecting a suitable adjacency matrix is very important. The best performance is obtained with the targeted adjacency matrices used in FRF-GCN, which reflects the complementary nature of the attention patterns produced by targeted adjacency matrices and agrees with our analysis. Because P_k is a graph unique to each sample and is present by default, it is not shown in Table 1.
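Since the sample-specific graph P_k enters every configuration in Table 1, the following is a minimal PyTorch-style sketch of how it can be computed from Eqs. (5)-(6). The embedding dimension and the pooling over time are simplifications made for the sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SampleGraph(nn.Module):
    """Sketch of the sample-specific graph P_k from Eqs. (5)-(6).

    theta and phi are 1x1 convolutions; the similarity between every pair of
    joints is normalised with a softmax (embedded Gaussian form).
    """
    def __init__(self, in_channels, inter_channels=16):
        super().__init__()
        self.theta = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(in_channels, inter_channels, kernel_size=1)

    def forward(self, x):                          # x: (B, C, T, N)
        q = self.theta(x).mean(dim=2)              # (B, C', N), temporal pooling as a simplification
        k = self.phi(x).mean(dim=2)                # (B, C', N)
        sim = torch.einsum("bcn,bcm->bnm", q, k)   # pairwise joint similarity, (B, N, N)
        return torch.softmax(sim, dim=-1)          # P_k: each row sums to one
```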
Comparison of the Effects of Different Attention Mechanisms. As mentioned in the model architecture, STC-P2 removes the fusion step of temporal and spatial features while keeping the same number of parameters in the attention module. However, this modification results in a certain degree of performance degradation, indicating the importance of jointly processing temporal and spatial information. STC-P3 fuses temporal, spatial, and channel attention and then extracts the attention of the three separately. Its recognition rate decreases slightly compared with STC-P and the number of model parameters increases slightly, which indicates that channel attention plays a moderating role in the learning of joint spatial-temporal attention; if the weight of the channel attention score is too large, it hinders the normal learning of spatial-temporal attention and degrades performance. Compared to ST-joint-Att, STC-P significantly improves the performance of FRF-GCN while adding almost no parameters.
Table 3: Comparison between different temporal convolutions.
Model              Acc (%)   Param (M)
Conventional TCN   90.13     6.94
MD-TGCN            91.29     2.42
Table 4: Comparison between different fusion strategies.
Category                   Acc (%)   Param (M)
Joint                      88.55     1.21
Bone                       89.63     1.21
Joint motion               86.37     1.21
Bone motion                86.23     1.21
BF-GCN                     91.56     4.84
FF-GCN                     90.03     1.21
Joint+Bone                 89.93     1.21
Joint motion+Bone motion   86.79     1.21
FRF-GCN                    91.29     2.42
The Temporal Depth-Point Convolutional Layer. As shown in Table 3, after replacing the conventional temporal convolution with the multi-field depth-point convolution, the number of parameters of the model is reduced to about 35% of the original while the performance is further improved. This demonstrates the effectiveness of fusing different temporal receptive fields and the efficiency of depth-point convolution.
Comparison Among Different Fusion Strategies. Table 4 compares different fusion strategies, where BF-GCN refers to a purely rear (backward) fusion strategy and FF-GCN to a purely front (forward) fusion strategy. From Table 4, we can conclude that the information carried by the two front-fusion streams is complementary. FRF-GCN makes a trade-off between performance and parameter count: it consumes only half the parameters of BF-GCN while losing very little accuracy, which further reflects the advantages of the front-rear dual fusion strategy.
Table 5: Performance comparison with various methods on the NTU RGB+D and NTU RGB+D 120 datasets.
Algorithm                              Cross-Subject (%)   Cross-View (%)   X-Sub120 (%)   X-Set120 (%)   Param (M)
ST-GCN (Yan, Xiong, and Lin 2018)      81.5                88.3             –              –              3.10
AS-GCN (Li et al. 2019)                86.8                94.2             –              –              9.50
DGNN (Shi et al. 2019a)                89.9                96.1             –              –              26.24
2S-AGCN (Shi et al. 2019b)             88.5                95.1             –              –              6.94
SGN (Zhang et al. 2020)                89.0                94.5             79.2           81.5           0.69
4S-Shift-GCN (Cheng et al. 2020)       90.7                96.5             85.9           87.6           2.76
MS-G3D (Liu et al. 2020)               91.5                96.2             86.9           88.4           6.40
MS-AAGCN (Shi et al. 2020)             90.0                96.2             –              –              3.77
Dynamic-GCN (Ye et al. 2020)           91.5                96.0             87.3           88.6           14.40
AdaSGN (Shi et al. 2021)               90.5                95.3             85.9           86.8           2.05
SEFN (Kong, Deng, and Jiang 2021)      90.7                96.4             86.2           87.8           34.7
Graph2Net (Wu, Wu, and Kittler 2021)   90.1                96.0             86.0           87.6           0.9
MST-GCN (Chen et al. 2021d)            91.5                96.6             87.5           88.8           12.00
FR-AGCN (Hu et al. 2022)               90.5                95.8             86.6           87.0           13.88
EfficientGCN-B0 (Song et al. 2022)     90.2                94.9             86.6           85.0           0.29
SMotif-GCN+TBs (Wen et al. 2022)       90.5                96.1             87.1           87.7           –
ASE-GCN (Xiong et al. 2022)            89.4                96.2             –              –              6.00
4s-ACE-Ens (Qin et al. 2022)           91.6                96.3             88.2           89.2           5.80
ML-STGNet (Zhu et al. 2022)            91.9                96.2             88.6           90.0           5.76
2M-STGCN (Zhang et al. 2023)           90.8                96.2             –              –              –
4s STF-Net (Wu, Zhang, and Zou 2023)   91.1                96.5             86.5           88.2           6.80
FRF-GCN (ours)                         91.3                96.5             87.1           88.4           2.42
Table 6: Performance comparison with various methods on the Kinetics-Skeleton dataset.
Algorithm                              Top-1   Top-5
ST-GCN (Yan, Xiong, and Lin 2018)      30.7    52.8
AS-GCN (Li et al. 2019)                34.8    56.5
2S-AGCN (Shi et al. 2019b)             36.1    58.7
MS-G3D (Liu et al. 2020)               38.0    60.9
MST-GCN (Chen et al. 2021d)            38.1    60.8
SMotif-GCN+TBs (Wen et al. 2022)       37.8    60.6
ASE-GCN (Xiong et al. 2022)            36.9    59.7
ML-STGNet (Zhu et al. 2022)            38.9    62.2
2M-STGCN (Zhang et al. 2023)           39.0    61.6
STF-Net (Wu, Zhang, and Zou 2023)      36.1    58.9
FRF-GCN (ours)                         37.9    60.7
Comparisons With SOTA Methods
Beyond the direct comparison with other methods, Table 5 highlights three methods that deserve special attention: ST-GCN, MS-G3D, and EfficientGCN. ST-GCN has become a baseline model for many subsequent studies and has had a significant impact; FRF-GCN improves accuracy by 9.8% and 8.2% over ST-GCN under the CS and CV evaluation criteria, respectively, while reducing model complexity by about 22%. MS-G3D is one of the state-of-the-art methods; it achieves slightly higher recognition accuracy than FRF-GCN, but FRF-GCN has significantly fewer parameters, reaching comparable performance with approximately 38% of the parameter count. EfficientGCN is one of our baselines; to keep the comparison consistent, we compare FRF-GCN only with EfficientGCN-B0 (without the compound scaling strategy). From Table 5, although our model has a higher parameter count than EfficientGCN-B0, FRF-GCN outperforms it. Moreover, FRF-GCN's input stream information is easier to obtain than that of EfficientGCN-B0, and the front fusion process is simpler and more practical. It is worth noting that FRF-GCN not only exhibits competitive performance against multiple current SOTA methods but also has a lightweight overall model with a low parameter count.
Table 5 also demonstrates the competitiveness of FRF-GCN on the NTU RGB+D 120 dataset compared to mainstream methods. NTU RGB+D 120 is larger than NTU RGB+D 60 and contains more subtle as well as similar actions; 4s-ACE-Ens and ML-STGNet employ ingenious designs for this characteristic, enabling them to outperform FRF-GCN on NTU RGB+D 120. The results of each model on Kinetics-Skeleton 400 are shown in Table 6 and indicate that FRF-GCN still performs well on action classes with less regularity.
Conclusion
In this work, we have developed FRF-GCN, a novel lightweight model for skeleton-based action recognition whose front-fused information streams align well with the targeted adjacency matrices. As a result, FRF-GCN achieves state-of-the-art recognition results on three large-scale action recognition datasets. Although the model performs well, FRF-GCN still struggles to discriminate extremely similar actions; future work will continue in this direction, for example through multimodal data fusion and the design of more elaborate attention mechanisms.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (62071472) and the Fundamental Research Funds for the Central Universities (2020ZDPYMS26).
References Bai, R.; Li, M.; Meng, B.; Li, F.; Jiang, M.; Ren, J.; and Sun, D. 2022. Hierarchical graph convolutional skeleton transformer for action recognition. In 2022 IEEE International Conference on Multimedia and Expo (ICME), 01–06. IEEE. Carreira, J.; and Zisserman, A. 2017. Quo vadis, action recognition? a new model and the kinetics dataset. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 6299–6308. Chen, T.; Zhou, D.; Wang, J.; Wang, S.; Guan, Y.; He, X.; and Ding, E. 2021a. Learning multi-granular spatiotemporal graph network for skeleton-based action recognition. In Proceedings of the 29th ACM international conference on multimedia, 4334–4342. Chen, T.; Zhou, D.; Wang, J.; Wang, S.; Guan, Y.; He, X.; and Ding, E. 2021b. Learning multi-granular spatiotemporal graph network for skeleton-based action recognition. In Proceedings of the 29th ACM international conference on multimedia, 4334–4342. Chen, Y.; Zhang, Z.; Yuan, C.; Li, B.; Deng, Y.; and Hu, W. 2021c. Channel-wise topology refinement graph convolution for skeleton-based action recognition. In Proceedings of the IEEE/CVF international conference on computer vision, 13359–13368. Chen, Z.; Li, S.; Yang, B.; Li, Q.; and Liu, H. 2021d. Multi-scale spatial temporal graph convolutional network for skeleton-based action recognition. In Proceedings of the AAAI conference on artificial intelligence, volume 35, 1113– 1122. Cheng, K.; Zhang, Y.; He, X.; Chen, W.; Cheng, J.; and Lu, H. 2020. Skeleton-based action recognition with shift graph convolutional network. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 183–192. Fang, Z.; Zhang, X.; Cao, T.; Zheng, Y.; and Sun, M. 2022. Spatial-temporal slowfast graph convolutional network for skeleton-based action recognition. IET Computer Vision, 16(3): 205–217. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. Hu, J.; Shen, L.; and Sun, G. 2018. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7132–7141. Hu, Z.; Pan, Z.; Wang, Q.; Yu, L.; and Fei, S. 2022. Forwardreverse adaptive graph convolutional networks for skeletonbased action recognition. Neurocomputing, 492: 624–636. Kong, J.; Deng, H.; and Jiang, M. 2021. Symmetrical enhanced fusion network for skeleton-based action recognition. IEEE Transactions on Circuits and Systems for Video Technology, 31(11): 4394–4408. Lange, J.; and Lappe, M. 2006. A model of biological motion perception from configural form cues. Journal of Neuroscience, 26(11): 2894–2906. Li, M.; Chen, S.; Chen, X.; Zhang, Y.; Wang, Y.; and Tian, Q. 2019. Actional-structural graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 3595–3603. Liu, J.; Shahroudy, A.; Perez, M.; Wang, G.; Duan, L.-Y.; and Kot, A. C. 2019. Ntu rgb+ d 120: A large-scale benchmark for 3d human activity understanding. IEEE transactions on pattern analysis and machine intelligence, 42(10): 2684–2701. Liu, Y.; and Wang, X. 2020. The analysis of driver’s behavioral tendency under different emotional states based on a Bayesian Network. IEEE Transactions on Affective Computing. Liu, Y.; Zhang, H.; Xu, D.; and He, K. 2022a. Graph transformer network with temporal kernel attention for skeletonbased action recognition. 
Knowledge-Based Systems, 240: 108146. Liu, Y.; Zhang, H.; Xu, D.; and He, K. 2022b. Graph transformer network with temporal kernel attention for skeletonbased action recognition. Knowledge-Based Systems, 240: 108146. Liu, Z.; Cheng, Q.; Song, C.; and Cheng, J. 2023. Crossscale cascade transformer for multimodal human action recognition. Pattern Recognition Letters, 168: 17–23. Liu, Z.; Zhang, H.; Chen, Z.; Wang, Z.; and Ouyang, W. 2020. Disentangling and unifying graph convolutions for skeleton-based action recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 143–152. Mazzia, V.; Angarano, S.; Salvetti, F.; Angelini, F.; and Chiaberge, M. 2022. Action Transformer: A self-attention model for short-time pose-based human action recognition. Pattern Recognition, 124: 108487. Qin, Z.; Liu, Y.; Ji, P.; Kim, D.; Wang, L.; McKay, R.; Anwar, S.; and Gedeon, T. 2022. Fusing higher-order features in graph neural networks for skeleton-based action recognition. IEEE Transactions on Neural Networks and Learning Systems. Saleh, K.; Mihaita, A.-S.; Yu, K.; and Chen, F. 2022. Realtime Attention-Augmented Spatio-Temporal Networks for Video-based Driver Activity Recognition. In 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), 1579–1585. IEEE. Shahroudy, A.; Liu, J.; Ng, T.-T.; and Wang, G. 2016. Ntu rgb+ d: A large scale dataset for 3d human activity analysis. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1010–1019. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6924 Sheng, W.; and Li, X. 2021. Multi-task learning for gaitbased identity recognition and emotion recognition using attention enhanced temporal graph convolutional network. Pattern Recognition, 114: 107868. Shi, L.; Zhang, Y.; Cheng, J.; and Lu, H. 2019a. Skeletonbased action recognition with directed graph neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 7912–7921. Shi, L.; Zhang, Y.; Cheng, J.; and Lu, H. 2019b. Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 12026– 12035. Shi, L.; Zhang, Y.; Cheng, J.; and Lu, H. 2020. Skeletonbased action recognition with multi-stream adaptive graph convolutional networks. IEEE Transactions on Image Processing, 29: 9532–9545. Shi, L.; Zhang, Y.; Cheng, J.; and Lu, H. 2021. Adasgn: Adapting joint number and model size for efficient skeletonbased action recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 13413– 13422. Song, Y.-F.; Zhang, Z.; Shan, C.; and Wang, L. 2020. Stronger, faster and more explainable: A graph convolutional baseline for skeleton-based action recognition. In proceedings of the 28th ACM international conference on multimedia, 1625–1633. Song, Y.-F.; Zhang, Z.; Shan, C.; and Wang, L. 2022. Constructing stronger and faster baselines for skeleton-based action recognition. IEEE transactions on pattern analysis and machine intelligence, 45(2): 1474–1488. Sun, Y.; Huang, H.; Yun, X.; Yang, B.; and Dong, K. 2022. Triplet attention multiple spacetime-semantic graph convolutional network for skeleton-based action recognition. Applied Intelligence, 52(1): 113–126. Wen, Y.-H.; Gao, L.; Fu, H.; Zhang, F.-L.; Xia, S.; and Liu, Y.-J. 2022. Motif-GCNs with local and non-local temporal blocks for skeleton-based action recognition. 
IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(2): 2009–2023. Wu, C.; Wu, X.-J.; and Kittler, J. 2021. Graph2Net: Perceptually-enriched graph learning for skeleton-based action recognition. IEEE transactions on circuits and systems for video technology, 32(4): 2120–2132. Wu, L.; Zhang, C.; and Zou, Y. 2023. SpatioTemporal focus for skeleton-based action recognition. Pattern Recognition, 136: 109231. Xin, W.; Liu, R.; Liu, Y.; Chen, Y.; Yu, W.; and Miao, Q. 2023. Transformer for Skeleton-based action recognition: A review of recent advances. Neurocomputing. Xiong, X.; Min, W.; Wang, Q.; and Zha, C. 2022. Human skeleton feature optimizer and adaptive structure enhancement graph convolution network for action recognition. IEEE Transactions on Circuits and Systems for Video Technology, 33(1): 342–353. Yan, S.; Xiong, Y.; and Lin, D. 2018. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proceedings of the AAAI conference on artificial intelligence, volume 32. Yang, H.; Yan, D.; Zhang, L.; Sun, Y.; Li, D.; and Maybank, S. J. 2021. Feedback graph convolutional network for skeleton-based action recognition. IEEE Transactions on Image Processing, 31: 164–175. Ye, F.; Pu, S.; Zhong, Q.; Li, C.; Xie, D.; and Tang, H. 2020. Dynamic gcn: Context-enriched topology learning for skeleton-based action recognition. In Proceedings of the 28th ACM international conference on multimedia, 55–63. Zhang, H.; Liu, X.; Yu, D.; Guan, L.; Wang, D.; Ma, C.; and Hu, Z. 2023. Skeleton-based action recognition with multistream, multi-scale dilated spatial-temporal graph convolution network. Applied Intelligence, 1–15. Zhang, P.; Lan, C.; Zeng, W.; Xing, J.; Xue, J.; and Zheng, N. 2020. Semantics-guided neural networks for efficient skeleton-based human action recognition. In proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 1112–1121. Zhou, H.; Xiang, X.; Qiu, Y.; and Liu, X. 2023. Graph convolutional network with STC attention and adaptive normalization for skeleton-based action recognition. The Imaging Science Journal, 1–11. Zhu, Y.; Shuai, H.; Liu, G.; and Liu, Q. 2022. Multilevel Spatial–Temporal Excited Graph Network for SkeletonBased Action Recognition. IEEE Transactions on Image Processing, 32: 496–508. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6925 | 2024 | 769 |
18,594 | Context Enhanced Transformer for Single Image Object Detection in Video Data Seungjun An*1, Seonghoon Park*1, Gyeongnyeon Kim*1, Jeongyeol Baek2, Byeongwon Lee2, Seungryong Kim1 1Korea University, Seoul, Korea 2SK Telecom, Seoul, Korea {dkstmdwns, seong0905, kkn9975}@korea.ac.kr, {jeongyeol.baek, bwon.lee}@sk.com, seungryong [email protected] Abstract With the increasing importance of video data in real-world applications, there is a rising need for efficient object detection methods that utilize temporal information. While existing video object detection (VOD) techniques employ various strategies to address this challenge, they typically depend on locally adjacent frames or randomly sampled images within a clip. Although recent Transformer-based VOD methods have shown promising results, their reliance on multiple inputs and additional network complexity to incorporate temporal information limits their practical applicability. In this paper, we propose a novel approach to single image object detection, called Context Enhanced TRansformer (CETR), by incorporating temporal context into DETR using a newly designed memory module. To efficiently store temporal information, we construct a class-wise memory that collects contextual information across data. Additionally, we present a classification-based sampling technique to selectively utilize the relevant memory for the current image. In the testing, We introduce a test-time memory adaptation method that updates individual memory functions by considering the test distribution. Experiments with CityCam and ImageNet VID datasets exhibit the efficiency of the framework on various video systems. The project page and code will be made available at: https://ku-cvlab.github.io/CETR. Introduction Object detection is one of the fundamental and essential tasks in the computer vision field with its extensive versatility across a wide range of applications. Moreover, various applications in real-world scenarios, including video surveillance (Nascimento and Marques 2006; Fu et al. 2019), autonomous driving (Chen et al. 2015, 2016), and robot navigation (Hern´andez et al. 2016), heavily rely on video data. Despite the remarkable success of object detectors for a single image, directly applying them to video data encounters challenges due to appearance deterioration caused by motions and occlusions. To address this challenge, video object detection (VOD) models (Zhu et al. 2017a; Wang et al. 2018) have been proposed to improve object detection performance by leveraging temporal information. Previous approaches usually *These authors contributed equally. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Comparisons between existing works and ours. (a) standard DETR (Carion et al. 2020), (b) DETR-based video object detection (Zhou et al. 2022), and (c) our proposed framework, dubbed CETR. Our method effectively detects objects in video data without adding heavy components. aggregate features from nearby frames exploiting optical flow (Zhu et al. 2017a; Wang et al. 2018) or LSTM (Kang et al. 2017a,b). Nevertheless, these methods primarily focus on short-term frames, thus limiting their ability to capture a more extensive feature representation. To overcome this limitation, attention-based approaches (Chen et al. 2020; Deng et al. 2019a,b) attempt to capture long-range temporal dependency by utilizing memory structures to aggregate features globally or locally. 
Yet, they depend on randomly sampled images within a clip, which struggle to integrate holistic contextual information from video data. Furthermore, the construction of stacked memory modules to store features of adjacent frames incurs high computational costs and unnecessary memory usage. On the other hand, in light of the remarkable performance of Transformer-based models in image object detection (Carion et al. 2020; Liu et al. 2022), e.g., detection with Transformers (DETR), researchers have commenced extending them to the video domain (Zhou et al. 2022; Wang et al. 2022a). However, these methods exhibit an essential The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 682 reliance on auxiliary networks and the need for multiple sequential frames, as shown in Fig. 1. Such prerequisites lead to a considerable decrease in processing speed, thereby failing to fulfill the real-time operational demands of various systems. As a consequence, there is a need for more efficient and streamlined approaches that can meet the real-time requirements essential for practical applications. In this work, we propose a novel single image object detection method, dubbed Context Enhanced TRansformer (CETR), that effectively incorporates contextual information across the given data. Following the recent trend of Transformer-based detectors, we adopt DETR as our baseline. Due to the inherent attention mechanism within the Transformer framework, our approach effectively incorporates temporal information through attention modules. In order to utilize temporal context without requiring additional reference frames or networks, we present a context memory module (CMM) that stores class-wise feature representations and updates effectively using momentum update in a non-parametric manner. The memory also represents each class as a set of prototypes, allowing intra-classes to contain a variety of attributes. In addition, to effectively capture relevant information for the current features, we introduce a score-based sampling methodology. By propagating the encoded memory features through a classification network for making predictions, CETR employs a sampled class-specific memory that closely aligns with the current input. Furthermore, we introduce an adaptive memory updating technique tailored to the test domain across different camera settings. Unlike the uniform exponential moving average update employed during training, we implement an online updating strategy aligned with the class-wise distribution of the test domain. Utilizing a weighted sum of the target and source domain memories, this strategy facilitates adaptation toward the test data distribution while retaining contextual information from the training phase. To validate the effectiveness of the proposed method, we conduct extensive experiments on the CityCam dataset (Zhang et al. 2017), one of the real traffic video data. Furthermore, experiments on the ImageNet VID (Russakovsky et al. 2015) demonstrate that our framework achieves comparable accuracy with the state-of-the-art video object detectors with a much faster speed and efficient memory resource. We also perform detailed ablation studies and deeply analyze the memory module to confirm that it is effective at capturing contextual information. Related Work Single image object detection Single image object detectors have been extensively explored due to the development of deep convolutional neural networks (CNNs). 
CNN-based object detectors can be classified into two pipelines: twostage and one-stage detectors. Two-stage detectors (Girshick 2015; Ren et al. 2015; Dai et al. 2016) generate coarse object proposals and then classify the proposals and regress the bounding boxes to refine them. In contrast, one-stage detectors (Duan et al. 2019; Tian et al. 2019) directly predict object locations and categories in an image by utilizing densely designed anchors. In recent years, DETR (Carion et al. 2020), a prominent Transformer-based object detector, casts object detection as a direct set prediction problem by removing hand-crafted representations and post-processing techniques. Many follow-up works (Li et al. 2022; Liu et al. 2022; Meng et al. 2021; Zhu et al. 2020) have attempted to address the slow training convergence of DETR’s inefficient design and use of queries. In this paper, we choose these variants of DETR as our baseline considering this efficiency. Video object detection. Video object detection (VOD) methods aim to address the challenging cases, such as motion blur, and occlusion, suffered from the single frame. To tackle this problem, many studies have focused on improving the performance of the current frame by leveraging temporal information across videos. For example, FGFA (Zhu et al. 2017a), MANET (Wang et al. 2018), and THP (Zhu et al. 2018) utilize optical flow derived from FlowNet (Dosovitskiy et al. 2015) by aligning and aggregating the nearby features from current frames. TPN (Kang et al. 2017a) and TCNN (Kang et al. 2017b) exploit LSTM (Hochreiter and Schmidhuber 1997) to construct temporal coherence between detected bounding boxes. To capture long-range dependencies, numerous methods adopt self-attention mechanism. Among them, SELSA (Wu et al. 2019) presents to use of global temporal cues by taking the full-sequence level feature aggregation. OGEMN (Deng et al. 2019a) proposes to use object-guided external memory for further global aggregation. MEGA (Chen et al. 2020) presents a memory module that considers aggregating global and local information to enhance the feature representation. Recently, TransVOD (Zhou et al. 2022) extends the DETR detector into the video object detection domain via a temporal Transformer. Test-time adaptation. Test-time adaptation (TTA) attempts to adapt pre-trained models to test data without relying on source domain data or incurring labeling costs. Existing TTA methods (Wang et al. 2020, 2022b; Li et al. 2016) typically recalibrate batch normalization (BN) layers using a batch of test samples. However, since Transformers typically do not contain a BN layer, it is not appropriate to apply the re-estimating BN statistics method to Transformer-based models. Alternatively, several studies (Chen et al. 2022; Iwasawa and Matsuo 2021; Jang and Chung 2022) adopt pseudo labels generated at test time for updating the model. In the domain of object detection, TTAOD (Chen et al. 2023) focuses on enhancing real-time robustness across target domains via self-training and feature distribution alignment. This paper introduces a test-time adaptation technique suitable for Transformer-based image detectors, leveraging a newly designed memory module. Methodology In this section, we first review DETR framework (Carion et al. 2020), and then introduce our proposed framework, called CETR, which is a single frame context-aware object detector with context memory module (CMM), score-based sampling strategy, and memory-guided Transformer decoder (MGD) in detail. 
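Before the detailed formulation below, the following is a rough, self-contained PyTorch sketch of how these three components could fit together in a forward pass. The use of standard nn.Transformer layers, the single fixed threshold, and the way the decoder consumes the sampled memory are simplifications and assumptions, not the authors' implementation (cf. Eqs. (6)-(14) below for the actual design).

```python
import torch
import torch.nn as nn

class CETRSketch(nn.Module):
    """High-level sketch of the CETR pipeline (placeholder modules, not the authors' code).

    A class-wise memory of C x K prototypes is concatenated with the image tokens
    before a Transformer encoder; the encoded memory is scored by a classification
    head and thresholded, and object queries attend to image tokens and sampled memory.
    """
    def __init__(self, d_model=256, num_classes=10, num_protos=3, num_queries=100):
        super().__init__()
        # Stored as a parameter for brevity; the paper updates the memory non-parametrically.
        self.memory = nn.Parameter(torch.randn(num_classes * num_protos, d_model))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), num_layers=2)
        self.queries = nn.Parameter(torch.randn(num_queries, d_model))
        self.cls_head = nn.Linear(d_model, 1)            # per-prototype classification score

    def forward(self, tokens):                           # tokens: (B, HW, d) backbone features
        b, hw, _ = tokens.shape
        mem = self.memory.unsqueeze(0).expand(b, -1, -1)
        enc = self.encoder(torch.cat([tokens, mem], dim=1))          # joint aggregation
        feat, enc_mem = enc[:, :hw], enc[:, hw:]
        scores = torch.sigmoid(self.cls_head(enc_mem))               # classification-based scores
        sampled = torch.where(scores > 0.5, mem, torch.zeros_like(mem))  # single-threshold sampling
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        # The paper uses a separate memory cross-attention; concatenation is a simplification.
        return self.decoder(q, torch.cat([feat, sampled], dim=1))
```

For example, CETRSketch()(torch.randn(2, 196, 256)) returns decoded queries of shape (2, 100, 256), which would then feed classification and box-regression heads.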
Figure 2: Overview of our framework. CETR builds upon the DETR (Carion et al. 2020) architecture. Within our framework, a pivotal component is the context memory module (CMM), which serves as an input to the Transformer encoder. Subsequently, the encoded memory features are passed through the classification network, and the predicted probability serves as a threshold for score-based sampling. The sampled class-wise memory is aggregated with the query using the cross-attention mechanism within the memory-guided Transformer decoder (MGD).
Figure 3: Details of the score-based sampling module.
Preliminaries: Revisiting DETR
DETR and its variants are based on an encoder-decoder Transformer architecture. Each encoder layer consists of multi-head self-attention and a feed-forward network (FFN), and each decoder layer has additional cross-attention layers. Specifically, given an input image I, a CNN backbone extracts a feature map F ∈ \mathbb{R}^{HW \times d}, where d denotes the feature dimension and H, W are the height and width of the feature map, respectively. Then F, augmented with positional encoding, is fed into the Transformer encoder (denoted by Enc(·)):

F = \mathrm{Enc}(F).    (1)

Note that we omit the positional encoding in the description for clarity. Enc(·) is composed of self-attention layers, which are applied to F to generate the query Q, key K, and value V vectors for exchanging information among features at all spatial positions. Self-attention in the Transformer encoder is conducted as

\mathrm{Attn}(Q = F, K = F, V = F),    (2)

where multi-head attention is defined as

\mathrm{Attn}(Q, K, V) = \mathrm{Softmax}\!\left(\frac{QK^{T}}{\sqrt{d}}\right)V.    (3)

The image feature F is input to the Transformer decoder together with object queries O. The Transformer decoder is composed of two types of attention layers, multi-head self-attention and multi-head cross-attention:

O^{l}_{sa} = \text{Self-Attn}(Q = O^{l}, K = O^{l}, V = O^{l}),    (4)

O^{l+1} = \text{Cross-Attn}(Q = O^{l}_{sa}, K = F, V = F),    (5)

where the decoder blocks are repeated L times and O^{l} denotes the object queries of the l-th decoder block. The final object queries O^{L+1}, which have acquired semantic information from the image features, are passed through a feed-forward network (FFN) for classification and box regression. Finally, one-to-one matching between predicted objects and their corresponding ground-truth targets is established by the Hungarian algorithm. We propose a memory module that can be applied to such single frame DETR-like methods for processing video data.
Our Approach: CETR
Overview. Most VOD methods that utilize memory modules or reference images have a large memory footprint, which limits the amount of information that can be used at
However, these approaches also have the drawback of either requiring information from the entire frame beforehand or being able to reference unnecessary information. To address these limitations, we propose CETR that selectively stores only the necessary information from the data to form a class-specific memory with a fixed size, and utilizes only the information useful for the current frame in memory. This method enables models to efficiently leverage contextual information from the entire dataset for each frame. For this, as illustrated in Fig. 2, we introduce three modules applicable to single frame DETR-like methods for the video data: 1) the context memory module (CMM), which stores contextual information of the entire dataset in a fixed size; 2) the score-based sampling, which samples only the necessary information from memory for the current frame data; and 3) the memory-guided Transformer decoder (MGD), which enhances the semantic information of object queries using the sampled spatio-temporal memory information. Context Memory Module. Most single frame DETR-like methods employ a Transformer encoder to aggregate spatial information from current image features, enabling each image feature to include spatial context information from the current image. However, when the CNN backbone extracts ambiguous features from the current image due to challenges such as low image quality or part occlusion, they may struggle to effectively refine the image features. To mitigate this limitation, We use the fixed size memory obtained from the entire dataset to enhance each single frame image feature using temporal context information, without the need to directly use data from other frames. Our proposed CMM has a multi-prototype class-wise memory M ∈RC·K×d with K prototypes for each of the C classes. In our pipeline, the feature map F and M would be fed into the Transformer encoder: [F, M] = Enc([F, M]), (6) where [·, ·] means concatenation. In the Transformer encoder, the current information of F and the spatio-temporal contextual information of M are aggregated. Consequently, image feature F gains rich contextual information from M, and simultaneously, encoded memory feature M obtains class information fitted to the current image. Then, M is passed to the score-based sampling module to obtain the classification score of the current image, and F is forwarded to the Transformer decoder similar to DETR. F is also utilized in the context memory module for memory update. The context memory module extracts N instance features ˜F = {fn}N n=1 from image feature F and the set of class-wise memory M = {mc,k}C,K c,k=1 is updated as: mc,kn ←αmc,kn + (1 −α)fn, (7) where kn = argmaxk{⟨fn, mc,k⟩}K k=1, (8) where ⟨·, ·⟩is defined as correlation between two features, and α ∈[0, 1] is a momentum coefficient. During training, Image Big bus Black sedan Taxi Figure 4: Visualization of the attention map. For certain classes, we present an attention map showing the correlation between class-wise memory and current image features. ˜F is extracted from the ground-truth box, while during inference, ˜F is extracted from the predicted box. Aggregated F with M in the Transformer encoder contains abundant contextual information. Accordingly, M updated recurrently by F in each image of the video dataset acquires contextual information about the entire dataset that was previously observed. 
Additionally, the class-wise memory with multiprototypes can also accommodate diverse distributions of instance features appearing in the entire dataset. Score-based Sampling. Carefully selecting the relevant information from the memory is equally important as creating a high-quality memory. However, recent VOD works that employ memory modules often fail to guarantee optimal memory sampling by either randomly sampling memory or utilizing only the memory information around the current frame. Empirically, we have discovered that utilizing class information from the current image to selectively sample memory leads to significant performance improvement. Detailed results of the related experiments are provided in Table 5. Motivated by this, we introduce a scorebased sampling module to extract information relevant to the current image from the class-wise memory M (see Fig. 3). The score-based sampling module consists of two parts: a classification part and a multi-threshold sampling part. In the classification part, the sampling module obtains classification score pc,k ∈[0, 1] by passing encoded memory feature M = {mc,k}C,K c,k=1, which contains the class information of the current image. This operation is executed via a classification head FFNc,k composed independently for each mc,k, followed by a sigmoid function: pc,k = Sigmoid(FFNc,k(mc,k)). (9) Subsequently, in the multi-threshold sampling part, the sampled memory ˜ M = { ˜mc,k}C,K c,k=1 is obtained by: ˜mc,k = Proj( ˜m1 c,k, ˜m2 c,k, . . . , ˜mT c,k), (10) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 685 Figure 5: Qualitative Results on the CityCam dataset (Zhang et al. 2017). Comparison between the baseline (Liu et al. 2022) (top) and our proposed method (bottom) is shown. As exemplified, our method provides more robust detection results compared to the baseline. where ˜mt c,k = st c,kmc,k + (1 −st c,k)∅, (11) where T is the number of sampling index st c,k, and ∅denotes learnable no-class embedding. Here st c,k is derived by binarizing pc,k using T thresholds τt (i.e., st c,k = δ(pc,k > τt)). The delta function δ outputs 1 when the condition is true, and 0 otherwise. The projection layer Proj(·) combines multi-thresholded memory information with varying confidences to generate the final sampled memory. During training, we employ asymmetric loss (Ben-Baruch et al. 2020) additionally to train the classification head and enhance the class discrimination capability of the Transformer encoder. Memory-guided Transformer Decoder. Many recent works that have built upon DETR-like methods address the ambiguity in the role of object queries by incorporating positional information into the object queries (Meng et al. 2021; Liu et al. 2022). This has clarified the positional information of object queries, enabling them to locate objects at various positions within the current image. However, if object queries are aggregated with poor image features of the current data, they might acquire incorrect semantic information. To address this issue, we propose a method to enhance the semantic information of object queries, using a memoryattention layer. 
One block of our proposed memory-guided Transformer decoder (MGD) is composed of three types of attention layers, formed by adding a memory cross-attention layer to the components of the existing decoder blocks: Ol sa = Self-Attn(Q = Ol, K = Ol, V = Ol), (12) Ol ca = Cross-Attn(Q = Ol sa, K = F, V = F), (13) Ol+1 = Mem.Cross-Attn(Q = Ol ca, K = ˜ M, V = ˜ M), (14) where O means object queries. In the MGD, O acquires semantic information about the current image from the existing attention layers, and then enhances its associated class information through the memory cross-attention layer. Finally, each output object query of MGD is transformed by an FFN to output a class score and box location for each object. The subsequent processes, such as Hungarian matching and losses, follow DETR (Carion et al. 2020). Test-time Memory Adaptation Given the non-parametric design of our CMM, adapting our memory to test data becomes achievable without the need for additional fine-tuning. To facilitate this, we introduce a test-time adaptation strategy utilizing CMM. Within this framework, the CMM preserves representations acquired during training and independently stores contextual information specific to the target domain. To mitigate the storage of initial noisy representations, uniform coefficients are employed for the momentum update of individual memories during the training stage. During testing, on the other hand, we update the test memory module M ′ by individually adjusting the update rate of each memory m′ c,k to align with the memory distribution in the test domain. m′ c,k ← 1 ic,k + J (ic,km′ c,k + J X j=1 fj), (15) where ic,k indicates the total number of detected instances before current frame, and J is the number of instance features fj at current frame. The update process involves adding to memory the instance features in the current image that are highly correlated with the mean values in test and source memory. For memory retrieval, a weighted sum is applied to combine memory from training and memory originating from the target domain: M ′ ←βM + (1 −β)M ′, (16) where β ∈[0, 1] denotes the weight of the source domain. This adaptation technique ensures that memory is adapted to the target domain while preserving important contextual information from the source distribution. We can also tailor the memory to specific individual cameras, resulting in a more robust test memory module for various camera systems. Experiments Experimental Setup Datasets. In this study, we assess the effectiveness of our framework through experiments conducted on two distinct datasets: the CityCam dataset (Zhang et al. 2017) and the ImageNet VID dataset (Russakovsky et al. 2015). The CityCam dataset consists of approximately 60K labeled frames, with 900K annotated objects across 10 vehicle classes. It is composed of 16 camera locations in downtown and parkway areas, spanning four typical weather conditions and time periods. For our experiments, We use 13 camera locations for training and 3 camera locations for testing. ImageNet VID consists of 3,862 training videos and 555 validation videos across 30 object classes. Following common settings in previous works (Zhou et al. 2022; Chen et al. 2020), we train CETR on the training split of ImageNet VID and DET datasets. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 686 Model FPS ↑ Mem ↓ (GB) AP AP50 APS APM APL Faster R-CNN (Girshick 2015) 37.8 0.42 23.3 39.6 18.5 39.0 36.3 Conditional DETR (Meng et al. 
2021) 36.7 0.46 23.0 41.9 17.7 38.9 43.2 Conditional DETR + CETR 33.8 0.48 24.4 42.4 19.3 40.0 45.0 DAB-DETR (Liu et al. 2022) 33.5 0.47 23.8 40.2 18.6 39.4 43.4 DAB-DETR + CETR 30.7 0.48 25.0 43.0 19.7 40.4 48.4 Deformable DETR (Zhu et al. 2020) 37.9 0.42 24.2 41.0 20.1 41.4 48.0 Deformable DETR + TransVOD (Zhou et al. 2022) 4.3 4.19 23.6 40.2 19.8 40.2 47.4 Deformable DETR + CETR 30.6 0.48 25.7 43.0 20.5 41.8 53.6 Table 1: Quantitative results on the CityCam dataset (Zhang et al. 2017). Implementation details. Our framework is trained on 24GB RTX-3090 GPUs with a batch size of 16, using the AdamW (Loshchilov and Hutter 2017) optimizer. We train CETR for 150K iterations, with a learning rate of 10−4 for the first 120K iterations and 10−5 for the last 30K iterations. For fast convergence, we employed variants of DETR as our baseline. In the CityCam experimentation, ResNet50 (He et al. 2016) is used as the backbone and initialized with pre-trained weights from ImageNet dataset (Russakovsky et al. 2015), while the Transformer encoder and decoder were initialized randomly. In the ImageNet VID experiment, ResNet-101 is used as the backbone and the entire network is initialized with pre-trained weights from COCO dataset (Lin et al. 2014). For the ImageNet VID experiment and the ablation study, we used DAB-DETR with CETR. Evaluation metric. We follow the standard COCO evaluation. We report the average precision under different IoU thresholds (AP), AP scores at IoU thresholds are 0.5 (AP50), and different object scales (APS, APM, APL). For the ImageNet VID dataset, we follow the common protocol (Zhou et al. 2022; Chen et al. 2020; Deng et al. 2019b) and leverage average precision at IoU thresholds are 0.5 (AP50) as the evaluation metric. Experimental Results Quantitative results. Table 1 presents our main results on the CityCam testing set, which we have divided. We applied our proposed framework to the single frame DETRlike methods and conducted quantitative comparisons with other single frame detection methods and a multi-frame DETR-like method, TransVOD (Zhou et al. 2022) that use Deformable DETR as their baseline. Compared to the single frame baseline (Zhu et al. 2020), our method showed improvements of 1.5% AP and 2.0% AP50, with only a marginal increase of 0.06 GB in allocated memory and a decrease of 7.3 FPS, while the multi-frame DETR-like method showed an increase of 3.77 GB in allocated memory and a decrease of 33.6 FPS. Additionally, the multiframe method that has mainly been utilized with video clip data of consistent short-frame intervals exhibits poor performance on the CityCam dataset with wider frame intervals. We also compare our approach with SELSA (Wu et al. Model Online AP50 FPS ↑ #Params ↓ (M) Mem ↓ (GB) SELSA 80.3 7.2 LRTR 80.6 10 RDN 81.8 10.6 TransVOD 80.5 32.3 74.2 2.94 DFF 73.1 20.25 97.8 D&T 75.8 7.8 LWDN 76.3 20 77.5 OGEMN 76.8 14.9 PSLA 77.1 18.7 63.7 LSTS 77.2 23.0 64.5 CETR 79.6 23.3 65.7 0.55 Table 2: Performance comparison with state-of-the-art realtime VOD methods with ResNet-101 backbone on ImageNet VID dataset (Russakovsky et al. 2015). Here we use AP50, which is commonly used as mean average precision (mAP) in other VOD methods. 2019), LRTR (Shvets, Liu, and Berg 2019), RDN (Deng et al. 2019b), TransVOD Lite (Zhou et al. 2022), DFF (Zhu et al. 2017b), D&T (Feichtenhofer, Pinz, and Zisserman 2017), LWDN (Jiang et al. 2019), OGEMN (Deng et al. 2019a), PSLA (Guo et al. 2019), and LSTS (Jiang et al. 2020) on ImageNet VID dataset. 
As shown in Table 2, our method demonstrates competitive performance with other VOD methods without the need for multi-frame approaches, which require significant allocated memory and disrupt general online inference. Among online methods, it achieves the highest AP50 of 79.6%. Furthermore, our method exhibits higher FPS compared to most approaches, except for the method that requires multi-frame image inference at once. Qualitative results. As shown in Fig. 4, we provide a visualization of the correlation between the proposed memory and image features. To do this, we utilize the attention map in the 4th layer of the Transform encoder. The CMM exhibits notable attention scores for objects of the corresponding class. This observation emphasizes that class-specific memory clearly contains the relevant information specific The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 687 Methods CMM MGD SS MT AP50 Baseline 40.2 Ours ✓ 40.5 ✓ ✓ 40.5 ✓ ✓ ✓ 41.1 ✓ ✓ ✓ ✓ 42.5 Table 3: Ablation study on main components. CMM, SS, MT, and MGD denote Context Memory Module, scorebased sampling, Multi-level thresholding, and MemoryGuided Transformer Decoder, respectively. # Prototype (K) AP AP50 APS APM APL 1 23.7 41.9 18.2 38.9 46.6 3 24.8 42.5 19.6 40.0 47.4 5 23.2 41.1 17.4 38.8 45.1 10 23.3 40.0 18.2 39.1 45.6 Table 4: Performance for the number of prototypes per class. to each class. In addition, we visually compare the object detection results of the baseline model and our approach in Fig. 5. The results showcase higher confidence scores for most classes in our approach compared to the baseline. Notably, comparing the 2nd and 4th columns, we notice that our model performs well in detecting objects that are partially visible within the frame, while the baseline fails. In addition, the first qualitative result shows the superiority of our model in capturing even rare classes (e.g., small trucks) within the dataset. Ablation Study and Analysis Memory module analysis. We first investigate the effect of our main components. As shown in Table 3, when our CMM is used only in the Transformer encoder, it has a 0.3% improvement AP50 compared to the single frame baseline (Liu et al. 2022). When used in isolation, the MGD does not exhibit any performance enhancement. However, when used in combination with the score-based sampling method, there was an increase of 0.6% AP50. In addition, when used in conjunction with the multi-level thresholding method, it shows an additional improvement of 1.4% AP50. The number of prototypes. Table 4 illustrates the ablation study on the number of prototypes of each class in our context memory module. When using one prototype and three prototypes, our approach achieves performance improvements of 1.7% and 2.3% AP50 compared to the single frame baseline, respectively. However, we observe that the performance of our approach in the CityCam dataset decreases as the number of prototypes is increased beyond 3. This outcome is believed to be due to the CityCam dataset consisting solely of classes grouped under the vehicle category that share similar types. As a result, a small number of prototypes is enough to represent the distribution of class features, and too many prototypes may, in fact, hinder the utilization of class features. Method AP50 Baseline 40.2 Learnable memory 40.9 Full memory 40.5 Random sampling 41.2 Score-based sampling 42.5 GT sampling (oracle) 57.6 Table 5: Experiments on sampling strategy. 
Method AP AP50 APS APM APL Our Baseline 24.8 42.5 19.6 40.0 47.4 + Memory update 24.9 42.8 19.7 40.2 47.8 + Cam specific 25.0 43.0 19.7 40.4 48.4 Table 6: Experiments on the test-time memory adaptation. Sampling strategy. Table 5 reports the performance of our approach according to various class-wise memory sampling strategies for the memory-guided Transformer decoder. When employing the class-wise memory sampling strategy using ground-truth images, If we have an oraclelevel knowledge of the correct answers, it shows a significant performance improvement of 17.4% AP50. Taking this result as motivation, we designed our score-based sampling module. The performance of the approaches improves in each case when using all learnable memory or randomly sampling class-wise memory. However, the classification scorebased sampling strategy leads to the most improvement in performance. This result demonstrates the effectiveness of our score-based sampling method. Test-time memory adaptation. Lastly, we conduct extensive experiments to assess the effectiveness of the CMMbased test-time adaptation approach, as shown in Table 6. The results highlight that the memory update technique adapted to the target domain yields meaningful performance improvements without requiring further training or parameter optimization. From the table, we also notice that by adding a camera-specific way of configuring memory, performance can be improved by 0.5% for the AP50 over the baseline. Conclusion To handle video data with a single frame approach, we introduced a context memory module that enables the use of spatio-temporal contextual information from the entire dataset. In addition, we used a score-based sampling and a memory-guided transformer decoder to effectively make use of our context memory. Our method exhibited a meaningful performance improvement over the single frame baseline in the CityCam dataset, with only a slight increase in allocated memory and a low decrease in FPS. Furthermore, our method demonstrated a remarkable performance improvement over other real-time online video object detection methods when evaluated on the ImageNet VID dataset. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 688 Acknowledgments This research was supported by the MSIT, Korea (IITP2023-2020-0-01819, RS-2023-00222280), National Research Foundation of Korea (NRF-2021R1C1C1006897, NRF-2018M3E3A1057288). References Ben-Baruch, E.; Ridnik, T.; Zamir, N.; Noy, A.; Friedman, I.; Protter, M.; and Zelnik-Manor, L. 2020. Asymmetric loss for multi-label classification. arXiv preprint arXiv:2009.14119. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, 213–229. Springer. Chen, C.; Seff, A.; Kornhauser, A.; and Xiao, J. 2015. Deepdriving: Learning affordance for direct perception in autonomous driving. In Proceedings of the IEEE international conference on computer vision, 2722–2730. Chen, D.; Wang, D.; Darrell, T.; and Ebrahimi, S. 2022. Contrastive test-time adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 295–305. Chen, X.; Kundu, K.; Zhang, Z.; Ma, H.; Fidler, S.; and Urtasun, R. 2016. Monocular 3d object detection for autonomous driving. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2147–2156. Chen, Y.; Cao, Y.; Hu, H.; and Wang, L. 
2020. Memory enhanced global-local aggregation for video object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 10337–10346. Chen, Y.; Xu, X.; Su, Y.; and Jia, K. 2023. STFAR: Improving Object Detection Robustness at Test-Time by SelfTraining with Feature Alignment Regularization. arXiv preprint arXiv:2303.17937. Cui, Y. 2023. Feature Aggregated Queries for TransformerBased Video Object Detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6365–6376. Dai, J.; Li, Y.; He, K.; and Sun, J. 2016. R-fcn: Object detection via region-based fully convolutional networks. Advances in neural information processing systems, 29. Deng, H.; Hua, Y.; Song, T.; Zhang, Z.; Xue, Z.; Ma, R.; Robertson, N.; and Guan, H. 2019a. Object guided external memory network for video object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 6678–6687. Deng, J.; Pan, Y.; Yao, T.; Zhou, W.; Li, H.; and Mei, T. 2019b. Relation distillation networks for video object detection. In Proceedings of the IEEE/CVF international conference on computer vision, 7023–7032. Dosovitskiy, A.; Fischer, P.; Ilg, E.; Hausser, P.; Hazirbas, C.; Golkov, V.; Van Der Smagt, P.; Cremers, D.; and Brox, T. 2015. Flownet: Learning optical flow with convolutional networks. In Proceedings of the IEEE international conference on computer vision, 2758–2766. Duan, K.; Bai, S.; Xie, L.; Qi, H.; Huang, Q.; and Tian, Q. 2019. Centernet: Keypoint triplets for object detection. In Proceedings of the IEEE/CVF international conference on computer vision, 6569–6578. Feichtenhofer, C.; Pinz, A.; and Zisserman, A. 2017. Detect to track and track to detect. In Proceedings of the IEEE international conference on computer vision, 3038–3046. Fu, Z.; Chen, Y.; Yong, H.; Jiang, R.; Zhang, L.; and Hua, X.-S. 2019. Foreground gating and background refining network for surveillance object detection. IEEE Transactions on Image Processing, 28(12): 6077–6090. Girshick, R. 2015. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, 1440–1448. Guo, C.; Fan, B.; Gu, J.; Zhang, Q.; Xiang, S.; Prinet, V.; and Pan, C. 2019. Progressive sparse local attention for video object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3909–3918. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. Hern´andez, A. C.; G´omez, C.; Crespo, J.; and Barber, R. 2016. Object detection applied to indoor environments for mobile robot navigation. Sensors, 16(8): 1180. Hochreiter, S.; and Schmidhuber, J. 1997. Long short-term memory. Neural computation, 9(8): 1735–1780. Iwasawa, Y.; and Matsuo, Y. 2021. Test-time classifier adjustment module for model-agnostic domain generalization. Advances in Neural Information Processing Systems, 34: 2427–2440. Jang, M.; and Chung, S.-Y. 2022. Test-time adaptation via self-training with nearest neighbor information. arXiv preprint arXiv:2207.10792. Jiang, Z.; Gao, P.; Guo, C.; Zhang, Q.; Xiang, S.; and Pan, C. 2019. Video Object Detection with Locally-Weighted Deformable Neighbors. In AAAI Conference on Artificial Intelligence. Jiang, Z.; Liu, Y.; Yang, C.; Liu, J.; Gao, P.; Zhang, Q.; Xiang, S.; and Pan, C. 2020. Learning where to focus for efficient video object detection. 
In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23– 28, 2020, Proceedings, Part XVI 16, 18–34. Springer. Kang, K.; Li, H.; Xiao, T.; Ouyang, W.; Yan, J.; Liu, X.; and Wang, X. 2017a. Object detection in videos with tubelet proposal networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 727–735. Kang, K.; Li, H.; Yan, J.; Zeng, X.; Yang, B.; Xiao, T.; Zhang, C.; Wang, Z.; Wang, R.; Wang, X.; et al. 2017b. Tcnn: Tubelets with convolutional neural networks for object detection from videos. IEEE Transactions on Circuits and Systems for Video Technology, 28(10): 2896–2907. Li, F.; Zhang, H.; Liu, S.; Guo, J.; Ni, L. M.; and Zhang, L. 2022. Dn-detr: Accelerate detr training by introducing query denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13619–13627. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 689 Li, Y.; Wang, N.; Shi, J.; Liu, J.; and Hou, X. 2016. Revisiting batch normalization for practical domain adaptation. arXiv preprint arXiv:1603.04779. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Doll´ar, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In Computer Vision– ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, 740– 755. Springer. Liu, S.; Li, F.; Zhang, H.; Yang, X.; Qi, X.; Su, H.; Zhu, J.; and Zhang, L. 2022. Dab-detr: Dynamic anchor boxes are better queries for detr. arXiv preprint arXiv:2201.12329. Loshchilov, I.; and Hutter, F. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. Meng, D.; Chen, X.; Fan, Z.; Zeng, G.; Li, H.; Yuan, Y.; Sun, L.; and Wang, J. 2021. Conditional detr for fast training convergence. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3651–3660. Nascimento, J. C.; and Marques, J. S. 2006. Performance evaluation of object detection algorithms for video surveillance. IEEE Transactions on Multimedia, 8(4): 761–774. Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. 2015. Imagenet large scale visual recognition challenge. International journal of computer vision, 115: 211– 252. Shvets, M.; Liu, W.; and Berg, A. C. 2019. Leveraging longrange temporal relationships between proposals for video object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9756–9764. Tian, Z.; Shen, C.; Chen, H.; and He, T. 2019. Fcos: Fully convolutional one-stage object detection. In Proceedings of the IEEE/CVF international conference on computer vision, 9627–9636. Wang, D.; Shelhamer, E.; Liu, S.; Olshausen, B.; and Darrell, T. 2020. Tent: Fully test-time adaptation by entropy minimization. arXiv preprint arXiv:2006.10726. Wang, H.; Tang, J.; Liu, X.; Guan, S.; Xie, R.; and Song, L. 2022a. Ptseformer: Progressive temporal-spatial enhanced transformer towards video object detection. In European Conference on Computer Vision, 732–747. Springer. Wang, Q.; Fink, O.; Van Gool, L.; and Dai, D. 2022b. Continual test-time domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7201–7211. Wang, S.; Zhou, Y.; Yan, J.; and Deng, Z. 2018. 
Fully motion-aware network for video object detection. In Proceedings of the European conference on computer vision (ECCV), 542–557. Wu, H.; Chen, Y.; Wang, N.; and Zhang, Z. 2019. Sequence level semantics aggregation for video object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 9217–9225. Zhang, S.; Wu, G.; Costeira, J. P.; and Moura, J. M. 2017. Understanding traffic density from large-scale web camera data. In Proceedings of the IEEE conference on computer vision and pattern recognition, 5898–5907. Zhou, Q.; Li, X.; He, L.; Yang, Y.; Cheng, G.; Tong, Y.; Ma, L.; and Tao, D. 2022. TransVOD: end-to-end video object detection with spatial-temporal transformers. IEEE Transactions on Pattern Analysis and Machine Intelligence. Zhu, X.; Dai, J.; Yuan, L.; and Wei, Y. 2018. Towards high performance video object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7210–7218. Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2020. Deformable detr: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159. Zhu, X.; Wang, Y.; Dai, J.; Yuan, L.; and Wei, Y. 2017a. Flow-guided feature aggregation for video object detection. In Proceedings of the IEEE international conference on computer vision, 408–417. Zhu, X.; Xiong, Y.; Dai, J.; Yuan, L.; and Wei, Y. 2017b. Deep feature flow for video recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2349–2358. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 690 | 2024 | 77 |
18,595 | Zero-Shot Aerial Object Detection with Visual Description Regularization Zhengqing Zang1,2*, Chenyu Lin1,2*, Chenwei Tang1,2, Tao Wang1,2†, Jiancheng Lv1,2 1College of Computer Science, Sichuan University, Chengdu, 610065, P. R. China 2Engineering Research Center of Machine Learning and Industry Intelligence, Ministry of Education, Chengdu, 610065, P. R. China {2022223045158, 2022223040017}@stu.scu.edu.cn, [email protected], [email protected] [email protected] Abstract Existing object detection models are mainly trained on largescale labeled datasets. However, annotating data for novel aerial object classes is expensive since it is time-consuming and may require expert knowledge. Thus, it is desirable to study label-efficient object detection methods on aerial images. In this work, we propose a zero-shot method for aerial object detection named visual Description Regularization, or DescReg. Concretely, we identify the weak semantic-visual correlation of the aerial objects and aim to address the challenge with prior descriptions of their visual appearance. Instead of directly encoding the descriptions into class embedding space which suffers from the representation gap problem, we propose to infuse the prior inter-class visual similarity conveyed in the descriptions into the embedding learning. The infusion process is accomplished with a newly designed similarity-aware triplet loss which incorporates structured regularization on the representation space. We conduct extensive experiments with three challenging aerial object detection datasets, including DIOR, xView, and DOTA. The results demonstrate that DescReg significantly outperforms the state-of-the-art ZSD methods with complex projection designs and generative frameworks, e.g., DescReg outperforms best reported ZSD method on DIOR by 4.5 mAP on unseen classes and 8.1 in HM. We further show the generalizability of DescReg by integrating it into generative ZSD methods as well as varying the detection architecture. Codes will be released at https://github.com/zq-zang/DescReg. Introduction Aerial object detection aims to detect objects from aerial images (Xia et al. 2018; Yang et al. 2019; Ding et al. 2021), e.g., images captured from an unmanned aerial vehicle (UAV). It plays an important role in many remote sensing applications, such as UAV-aided environmental monitor and disaster response systems. Benefiting from the development of deep convolution neural networks (CNNs), aerial object detection has been extensively studied and advanced (Yang et al. 2019; Zhu et al. 2021; Li et al. 2022, 2020; Deng et al. 2020; Han et al. 2021) in recent years. Prior research mainly focuses on improving the accuracy or efficiency based on *These authors contributed equally. †Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Illustration of weak semantic-visual correlation problem. We perform hierarchical clustering with semantic embeddings and show the radial dendrogram for the 20 common object classes from the Pascal VOC dataset (left) and the 20 aerial object classes from the DIOR dataset(right). The common object classes show clear clustering result which corresponds well to visual appearance (e.g., horse, cow, and sheep), while the semantic clustering of aerial object classes are inevident and shows much less correlation with visual appearance. Best viewed with zoom-in. a fully supervised paradigm. 
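(As an aside, the semantic-embedding clustering behind the analysis in Fig. 1 can be reproduced along the following lines. This is a purely illustrative sketch: the class subset and the placeholder load_word_embedding lookup are our own assumptions, not the script used to produce the figure.)

```python
# Hierarchical clustering of class-name embeddings, as in the Fig. 1 analysis.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

dior_classes = ["airport", "bridge", "dam", "overpass", "vehicle", "ship"]  # small subset, for illustration

def load_word_embedding(name: str) -> np.ndarray:
    # Placeholder: in practice this would return a word2vec/BERT-style embedding of the class name.
    rng = np.random.default_rng(abs(hash(name)) % (2**32))
    return rng.normal(size=300)

X = np.stack([load_word_embedding(c) for c in dior_classes])
Z = linkage(X, method="average", metric="cosine")  # agglomerative clustering on cosine distance
dendrogram(Z, labels=dior_classes)                 # the radial layout in Fig. 1 is a styling choice
plt.tight_layout()
plt.show()
```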
However, labeling objects for large-scale aerial images is extremely costly due to the small object size and irregular viewing angle. Hence expanding the vocabulary becomes a challenge for fully supervised aerial object detection methods (Lam et al. 2018). Zero-shot object detection (ZSD), which aims to detect unseen object classes without bounding box annotations (Bansal et al. 2018; Demirel, Cinbis, and IkizlerCinbis 2018; Rahman, Khan, and Porikli 2018), appears as a promising approach for reducing the copious label demand in aerial object detection. ZSD methods mainly leverage the semantic relation between object classes to detect unseen classes, e.g., Cat and Dog are semantically similar, and thus the knowledge learned on Cat could be transferred to recognize Dog. Methodologically, this knowledge transfer process is typically realized through learning a command embedding function to align visual and semantic features (Rahman, Khan, and Porikli 2018; Bansal et al. 2018; Zheng et al. 2020; Rahman, Khan, and Barnes 2020), or learning a universal synthesizer function to generate training samThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6926 ples (Hayat et al. 2020; Zhao et al. 2020; Zhu, Wang, and Saligrama 2019; Huang et al. 2022; Sarma, Kumar, and Sur 2022). However, we find that existing ZSD methods perform poorly on aerial images due to weak semantic-visual correlation. Concretely, as shown in Fig. 1, our core observation is that objects in natural images tend to be visually distinct and align well with semantic clustering, yet objects from aerial images often appear vague and lack semantic correlation. Such an issue hinders effective recognition of unseen classes. Based on the analysis, we aim to incorporate textual descriptions to enhance the semantic understanding of aerial object classes. These descriptions, which detail visual characteristics, act as prior knowledge. We initially encoded these descriptions using semantic embeddings from a pretrained language model, noting a performance improvement, though limited. This limitation is likely due to the visualsemantic representation gap (Wang and Chen 2017), more pronounced in aerial images. Consequently, we shift our approach, using textual descriptions for structural regularization. Our proposed method, Description Regularization (DescReg), aims to maintain the visual similarity structure in the classification space, enhancing knowledge transfer from seen to unseen classes. For this, we designed an adaptive triplet loss, treating each projected class embedding as separate samples. This involves sampling positive pairs from similar classes and negative pairs from dissimilar ones, using their difference as the margin. This similarity-aware triplet loss effectively preserves inter-class similarity relations in the embedding space during optimization. To validate the above method, we establish two challenging zero-shot aerial object detection setups with DOTA and xView datasets. Together with the existing aerial ZSD setup on the DIOR dataset (Huang et al. 2022), we conduct extensive experiments on the two-stage Faster R-CNN detector and further show generalization the multi-stage Cascaded R-CNN detector (Cai and Vasconcelos 2018) and the popular one-stage YOLOv8 detector (Redmon and Farhadi 2017; Jocher, Chaurasia, and Qiu 2023). DescReg effectively improves the detection accuracy of raw baseline method on both seen and unseen classes. 
Remarkbaly, DescReg with simple one-layer projection outperforms the SOTA generative ZSD methods (Huang et al. 2022) by 4.5 in unseen mAP and 8.1 in HM, with the same detection architecture. We further incorporate our method into the generative ZSD method by regularizing the visual feature synthesizing process and observe significant improvement, which demonstrates the strong generalizability of our DescReg as a structural similarity regularization method. In summary, Our contributions are four-fold: • Our study is the first comprehensive analysis in zero-shot aerial object detection, combining thorough investigation with specialized method development. • Addressing the weak semantic-visual link in aerial imagery, we use prior visual text descriptions as a solution. • We introduce a novel triplet loss that accounts for inter-class similarity, embedding structural regularization through textual descriptions. • Utilizing the DOTA and xView datasets, we establish two new challenging ZSD setups and conduct extensive experiments with various detection architectures to assess our method. Related Work Zero-shot Object Detection Driven by zero-shot learning (ZSL) research (Mishra et al. 2018; Tang et al. 2019; Jasani and Mazagonwalla 2019; Demirel, Cinbis, and Ikizler-Cinbis 2019; Tang et al. 2020, 2021), which transfers knowledge from seen to unseen classes, the challenging task of zero-shot detection (ZSD) has gained attention since its introduction in 2018 (Bansal et al. 2018). ZSD not only categorizes but also localizes unseen objects. Similar to ZSL, ZSD strategies are either embedding-based or generative-based. Embeddingbased methods learn a visual→semantic projection for aligning two spaces (Demirel, Cinbis, and Ikizler-Cinbis 2018; Li et al. 2019b), including refining background vectors for better differentiation from unseen classes (Zheng et al. 2020). Alternatively, generative methods (Zhu, Wang, and Saligrama 2019; Zhao et al. 2020) use GANs to create visual samples of unseen classes, enabling classifier and regressor training. These approaches focus on maintaining inter-class structure and increasing intra-class diversity, with novel components like robust feature synthesizers (Huang et al. 2022) and loss functions (Sarma, Kumar, and Sur 2022) for visual-semantic alignment. However, the critical role of category embedding representation in ZSD’s effectiveness, which is our study’s focus, remains underexplored. Aerial Object Detection Aerial images, taken by sensors on satellites, aircraft, or drones, are essential for gathering Earth’s surface data from afar. While object detection in natural images has advanced significantly, it remains a challenge in aerial imagery. Previous research has mainly addressed aerial-specific issues like small target sizes (Recasens et al. 2018; Yang et al. 2019; Meethal, Granger, and Pedersoli 2023; Li et al. 2020; Yang, Huang, and Wang 2022; Koyun et al. 2022) and object rotation (Cheng, Zhou, and Han 2016; Zhang et al. 2020b; Cheng et al. 2019). However, there’s limited focus on labelefficient detection methods in this field. Some have explored few-shot learning (Wolf et al. 2021; Lu et al. 2023) for aerial detection, but these still need target labels. Our work investigates using training data from known classes for direct application to unknown classes, i.e., zero-shot detection. 
Proposed Method Overview Given the bounding box annotation on a set of seen object categories F = {C1, C2, ...CN}, zero-shot object detection (ZSD) aims at training on the seen data and generalizing to a set of target unseen object categories F∗= {C1, C2, ...CM}. In the following paragraphs, we first present our main detection architecture and then introduce our approach with indepth analyses under the context of ZSD. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6927 Figure 2: The overall framework of the proposed method. Object Detection Architecture The classical two-stage Faster R-CNN (Ren et al. 2015) detection model consists of a visual feature extraction backbone T , a region proposal network R, a shared multi-layer feature transformation network F. a region classifier C, and a box regressor B. Given input image I, the model first extracts image feature F: F = T (I). Then object candidate proposals are predicted by the region proposal network: {pi} = R(F). With the proposals, feature pooling is conducted on the image feature map F to obtain the proposal region feature {vi}. The feature is then further refined by the shared network: vi = F(fi). Finally, object classification scores and refined bounding boxes are predicted by the classifier and the regressor: si = C(vi), bi = B(pi, vi). The detection model can be trained on the seen class data with proposal loss, classification loss, and regression loss: L = Lprop + Lcls + Lreg. The trained region proposal network is class-agnostic and thus may generalize directly to predict the unseen classes. The box regressor is also not sensitive to classes and thus can be applied directly to unseen classes, i.e., by using the class-agnostic version or using prediction from seen classes (Huang et al. 2022). The major challenge here is to generalize the classification to unseen classes, as the region classifier is only trained on the seen class data and cannot predict the unseen classes. Detecting the Unseen with Semantic Bridging While unseen class data is not available, the semantic relation can be efficiently represented with semantic word embeddings. These embeddings can be obtained from pre-trained word embedding models such as Word2vec (Mikolov et al. 2013) and large language models such as BERT (Devlin et al. 2018): cj = W(Cj), where cj is the vectorized representation and W is the embedding model. With these embeddings and trained detection models on the seen class data, existing zero-shot object detection methods mainly focus on bridging the gap between seen and unseen classes. These methods can be classified into embedding-alignment and generative methods. The embedding-alignment methods (Khandelwal et al. 2023; Zhang et al. 2020a; Yan et al. 2022) aim to bridge the gap between visual and semantic space by learning representation alignment. For example, learning an alignment function ϕ to align semantic embeddings to visual features (Zhang et al. 2020a): wj = ϕ(cj) (1) where wj is the visually-aligned class representation. With wj, the visual features vi can be classified based on similarity metrics such as cosine similarity, and the seen classification supervision is employed to learn the alignment function. The learned alignment function is expected to generalize to unseen classes by utilizing unseen class embeddings, and thus the detection model can detect unseen objects. 
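As a concrete illustration of this embedding-alignment scheme, the sketch below projects class embeddings into the visual space with a small alignment network and classifies region features by cosine similarity. It is a generic sketch of the paradigm just described (the one-layer projection, temperature, dimensions, and names are assumptions), not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticAligner(nn.Module):
    """Maps semantic class embeddings c_j into the visual feature space (w_j = phi(c_j))."""
    def __init__(self, sem_dim: int = 768, vis_dim: int = 1024):
        super().__init__()
        self.phi = nn.Linear(sem_dim, vis_dim)  # a simple one-layer projection

    def forward(self, class_embeds: torch.Tensor) -> torch.Tensor:
        return self.phi(class_embeds)           # [num_classes, vis_dim]

def zero_shot_logits(region_feats, class_embeds, aligner, tau: float = 0.07):
    """Cosine-similarity classification of region features against aligned class embeddings."""
    w = F.normalize(aligner(class_embeds), dim=-1)  # [C, D]
    v = F.normalize(region_feats, dim=-1)           # [N, D]
    return v @ w.t() / tau                          # [N, C] logits

# Training uses seen-class boxes only; at test time the same aligner is applied to the
# unseen-class embeddings, so the classifier extends to unseen categories without retraining.
aligner = SemanticAligner()
logits = zero_shot_logits(torch.randn(5, 1024), torch.randn(20, 768), aligner)
loss = F.cross_entropy(logits, torch.randint(0, 20, (5,)))  # seen-class supervision
```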
The Semantic-Visual Correlation Challenge
Although such embedding-alignment methods are shown to be effective on natural image datasets such as Pascal (Everingham et al. 2010) and COCO (Lin et al. 2014), they suffer from poor semantic-visual correlation on aerial images: the semantic embedding cj has poor correlation with the visual features vi, which makes it severely difficult for the learned embedding function ϕ to generalize to the unseen classes. Note that this observation also applies to the generative methods, which will be discussed in the next section. Based on this observation, we aim to improve the semantic-visual correlation by augmenting the semantic embeddings with extra visual cues. The visual cues are instantiated as simple textual descriptions. This is motivated by prior works in zero-shot learning that employ textual descriptions to augment the recognition of unseen classes (Elhoseiny, Saleh, and Elgammal 2013; Paz-Argaman et al. 2020). The descriptions can be obtained from experts or simply through large-scale pre-trained large language models (LLMs), e.g., GPT (OpenAI 2023).

DescReg Formulation
The textual descriptions are free-form and efficient to acquire, and they can provide valuable information such as shape, color, and context for the aerial objects. However, we find that simply encoding them into the semantic representation offers limited gain. This is likely due to the following issues: 1) the semantic feature space and the visual feature space have distinct distributions, which causes ineffective transformation of the descriptions into the visual feature space; 2) the image feature representations of aerial objects are less discriminative due to their smaller size compared with common objects, so it is more difficult to classify them against the false classes and backgrounds. To address the above issues, we leverage the inter-class visual similarity information as a structural regularization to learn more discriminative alignment functions. Specifically, given the visual descriptions for all seen and unseen classes {Tj}, pre-trained language models such as BERT (Devlin et al. 2018) are used to encode them into vectorized representations:

t_j = W(T_j)   (2)

where W is the employed language model and t_j is the obtained representation. Then we compute the pair-wise cosine similarity and obtain the similarity matrix S:

S(j,k) = \frac{t_j^{\top} t_k}{\|t_j\|_2 \, \|t_k\|_2}   (3)

To encourage more discriminative inter-class similarity and keep the similarity score within the value range (0, 1], we introduce a self-excluding Softmax:

\hat{S}(j,k) = \begin{cases} \dfrac{e^{S(j,k)/\tau}}{\sum_{k' \neq j} e^{S(j,k')/\tau}}, & \text{if } k \neq j \\ S(j,k), & \text{otherwise} \end{cases}   (4)

where \hat{S} is the normalized similarity matrix, of which all the diagonal elements are 1, corresponding to self-similarity, and the other elements are in the value range (0, 1), corresponding to inter-class similarity. The visual characteristics of object classes are now encoded structurally in this similarity matrix. We then integrate it into the embedding alignment learning process. Motivated by the triplet loss (Frome et al. 2013; Akata et al. 2015), we treat the visually-aligned semantic representations wj as independent feature samples and perform positive-negative sampling based on the similarity; a triplet loss is then imposed on the samples:

\mathcal{L}^{j}_{trip} = \max\{0,\ d(w_j, w_{h(j)}) - d(w_j, w_{l(j)}) + \Delta\}   (5)

where d(·, ·) is the Euclidean distance.
h(j) denotes sampling a similar class for class j, and l(j) means sampling a less similar, or dissimilar, class. The sampling is conducted based on the similarity scores \hat{S}(j, :), and \Delta is the margin. However, such a direct adoption of the triplet loss does not consider the similarity level between classes; e.g., a bridge may be very similar to a dam but less similar to an overpass, while a vehicle may look a bit dissimilar to a boat but is very distinct from a baseball field. We thus propose to employ the similarity gap as the margin for the triplet regularization:

\Delta_j = \hat{S}(j, h(j)) - \hat{S}(j, l(j))   (6)

Such a second-order metric helps encode the discrepancy in similarity level into the margin regularization. It facilitates the structural learning of the alignment function, and thus the knowledge learned from seen classes can better transfer to the unseen classes. The improved similarity-aware triplet loss is thus:

\mathcal{L}^{j}_{trip} = \max\{0,\ d(w_j, w_{h(j)}) - d(w_j, w_{l(j)}) + \Delta_j\}   (7)

Unlike prior works that apply contrastive objectives (Huang et al. 2022; Yan et al. 2022), the proposed margin-adaptive triplet loss is less greedy and allows strong flexibility in the representation space. The loss is summed over all the seen and unseen classes to compute the full regularization objective:

\mathcal{L}_{trip} = \sum_{j} \mathcal{L}^{j}_{trip}   (8)

During the learning of the alignment function, the classification objective (e.g., cross-entropy) is usually computed on the seen classes. So the complete objective with DescReg is:

\mathcal{L} = \mathcal{L}_{cls} + \mathcal{L}_{trip}   (9)

Fig. 2 shows the overall framework integrated with Faster R-CNN.

Generalization to Generative Methods
Unlike the above embedding-alignment methods, the generative methods (Hayat et al. 2020; Zhu, Wang, and Saligrama 2019; Huang et al. 2022; Rahman, Khan, and Barnes 2020) aim to learn universal visual feature synthesizers. The method can be simplified as generating visual feature samples based on semantic embeddings:

\hat{v}_j = \varphi(c_j, z)   (10)

where \hat{v}_j is the synthesized visual feature and z is random noise to encourage feature diversity. The synthesized features can be employed to train the classifier for both seen and unseen classes. Similar to the above-mentioned semantic-visual correlation challenge, the synthesizer here also faces generalization issues on the unseen classes. The proposed similarity-aware triplet loss can then be easily added to the training process of generative networks:

\mathcal{L}^{j}_{trip} = \max\{0,\ d(\hat{v}_j, \hat{v}_{h(j)}) - d(\hat{v}_j, \hat{v}_{l(j)}) + \Delta_j\}   (11)

Experiments
We study four questions in the experiments. 1) How does DescReg improve the performance of zero-shot aerial object detection? Is it efficient? 2) Is DescReg sensitive to visual descriptions and embedding generation methods? 3) How does each component take effect? 4) Can DescReg generalize to the generative ZSD methods and be applied to different object detection meta-architectures?

Datasets and Experiment Setup
We evaluate the proposed method on three challenging remote sensing image object detection datasets: DIOR (Li et al. 2019a), xView (Lam et al. 2018), and DOTA (Xia Method ZSD GZSD Recall@100 mAP Recall@100 mAP IoU=0.4 IoU=0.5 IoU=0.6 S U HM S U HM BLC (Zheng et al. 2020) 6.1 0.4 0.8 SU (Hayat et al. 2020) 10.5 30.9 2.9 5.3 RRFS (Huang et al. 2022) 11.3 30.9 3.4 6.1 V2S† (Khandelwal et al. 2023) 14.1 11.9 10.1 4.1 78.2 15.8 26.3 57.0 1.4 2.7 RRFS† (Huang et al. 2022) 22.1 19.8 18.1 9.7 60.0 19.9 29.9 41.9 2.8 5.2 ContrastZSD† (Yan et al.
2022) 24.9 22.3 20.1 8.7 69.2 25.9 37.7 51.4 3.9 7.2 DescReg (ours) 37.9 34.6 31.5 15.2 82.0 34.3 48.4 68.7 7.9 14.2 Table 1: Comparison with state-of-the-art methods under ZSD and GZSD settings on DIOR dataset. † denotes our implementation results. ”S” and ”U” denote seen classes and unseen classes, respectively. et al. 2017). For DIOR, we follow the setting in prior work (Huang et al. 2022). For xView and DOTA, we conduct semantic clustering and sample classes within clusters to ensure unseen class diversity and semantic relatness(Rahman, Khan, and Porikli 2018; Huang et al. 2022). The resulting xView contains 48 seen classes and 12 unseen classes, and the resulting DOTA contains 11 seen classes and 4 unseen classes. We also perform cropping on the xView and DOTA images to simplify the data. Due to space limits, please refer to our supplementary file for more details. Throughout the experiments, unless otherwise stated, we adopt the Faster RCNN model as the base detection model and IOU=0.5 for the evaluation. Implementation Details Following prior works (Yan et al. 2022; Huang et al. 2022; Yan et al. 2022), we adopt Faster R-CNN with ResNet101 (He et al. 2016) as the base detection architecture and conduct two-stage training. In the first stage, the model is first trained on the seen class data as conventional detection training, In the second stage, the model is frozen and the semantic-visual projection is fine-tuned with the proposed DescReg. In addition to Faster R-CNN, we also validate our method on the newly released one-stage YOLOv8 model (Jocher, Chaurasia, and Qiu 2023) and the cascaded detection model (Cai and Vasconcelos 2018). Due to space limit, please refer to supplementary for more details on implementation. Main Results Comparison with State-of-the-arts on DIOR In Tab. 1, we compare the results with state-of-the-art methods on the DIOR dataset. The proposed method outperforms all compared methods in both ZSD and GZSD settings. Under the ZSD setting, our method achieves more than 11.0% absolute gain for recalls of different IOU thresholds, and nearly 4.0% mAP increase compared to the best-reported method, demonstrating its much stronger ability to detect unseen categories compared to All other ZSD methods. Under the GZSD setting, the proposed method achieves the best mAP performance on seen classes, surpassing the best-compared method by 11.7% in mAP, this result shows that our zeroshot learning method achieves the least interference on the seen class recognition. Furthermore, our method achieves 7.9% unseen mAP and 14.2% HM, which also significantly outperforms the prior methods. Similar observations hold on the recall metrics. Experiments on xView and DOTA In addition to DIOR, we further conduct zero-shot detection experiments on the challenging xView and DOTA datasets. We compare to RRFS (Huang et al. 2022) and ContrastZSD (Yan et al. 2022) as representatives of SOTA generative methods and embedding-alignment methods. The result is shown in Tab. 2. On both datasets, our method shows higher performances compared to the baselines. Specifically, Under both ZSD and GZSD settings of xView, the proposed method achieves nearly two-fold improvement in unseen mAP compared to the best-performing ContrastZSD method (4.1% to 8.3%, 2.9% to 5.8%). The corresponding gains on DOTA are about 50% relatively. 
We also observe that with xView, ContrastZSD achieves similar or higher recalls on the GZSD setting compared to our method, but the unseen mAP is lower, which indicates its unseen images may be less discriminative against the background, and thus predicts more false positives. Class-wise Resutls We also report the class-wise mAP performance in terms of ZSD and GZSD for all three aerial object detection datasets. The results are shown in Tab. 3. We note some unseen classes are very challenging and show near 0% AP on the test set (e.g. 0.1% and 0.4% for helicopters on DOTA, under ZSD and GZSD settings respectively). This phenomenon is also observed in prior ZSD works (Yan et al. 2022; Huang et al. 2022; Yan et al. 2022), it is mainly caused by the weak discriminability of unseen class representations and remains a good topic for future ZSD research. Notably, benefiting from the introduced cross-class representation regularization, our method achieves relatively good performances on many unseen classes(e.g. 20.0% GZSD AP and 45.7% ZSD AP for groundtrackfield class on DIOR). Generalizability We further validate whether DescReg generalizes to the generative ZSD method and other detection architectures. DescReg with Generative ZSD Methods Generative methods aim at synthesizing samples for unseen classes, the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6930 xView DOTA Method ZSD GZSD ZSD GZSD RE@100 mAP RE@100 mAP RE@100 mAP RE@100 mAP 0.4 0.5 0.6 S U HM S U HM 0.4 0.5 0.6 S U HM S U HM RRFS 17.6 14.3 11.3 2.2 19.1 5.8 8.9 10.2 1.6 2.8 17.5 14.4 11.5 2.9 71.4 14.2 23.7 47.1 2.2 4.2 ContrastZSD 29.0 27.1 25.9 4.1 27.6 13.9 18.5 16.8 2.9 4.9 28.7 25.4 23.9 6.0 69.1 12.2 20.7 41.6 2.8 5.2 DescReg 45.9 43.0 40.1 8.3 28.0 12.8 17.6 17.1 5.8 8.7 37.3 34.4 29.6 8.5 83.8 29.9 44.0 68.7 4.7 8.8 Table 2: Performance of our proposed model on xView and DOTA datasets for ZSD and GZSD settings. DIOR DOTA xView Setting Method airport bask. f. gr. tra. f. windmill tenn. c. heli. soccer. field swim. pool heli. bus pic. track tru. tra. w/ tox tra. mar. vessel motorb. barge reach stacker mobile crane scraper excavator ship. cont. RRFS 3.1 2.0 6.3 0.0 4.4 0.0 4.5 0.0 0.8 2.0 0.0 0.0 3.9 5.5 5.2 0.1 0.0 1.1 0.0 0.4 ContrastZSD 5.2 2.1 8.1 0.0 3.5 2.9 4.8 0.0 8.1 5.5 0.1 1.2 6.3 9.7 1.2 0.0 0.0 0.1 3.1 0.0 GZSD DescReg 0.0 9.2 20.0 2.4 9.1 0.1 9.5 0.0 21.9 4.5 0.0 6.1 13.3 9.1 8.2 0.0 0.0 0.4 6.1 0.0 RRFS 12.3 6.2 19.7 0.6 5.4 0.1 6.1 0.0 0.1 2.4 0.0 0.1 6.9 1.5 9.2 0.5 0.0 5.1 0.0 0.0 ContrastZSD 9.7 3.9 21.2 0.1 7.4 4.5 11.9 0.0 14.1 5.7 0.0 2.8 7.3 9.7 1.1 0.1 0.0 0.6 7.6 0.0 ZSD DescReg 0.1 10.9 45.7 3.9 11.3 0.4 22.2 0.1 36.0 4.9 0.1 7.5 19.8 9.1 10.2 0.0 0.4 0.9 10.3 0.0 Table 3: Class-wise AP comparison of different methods on unseen classes of three aerial image datasets. proposed DescReg can be integrated into the framework for generating more discriminative samples. As shown in Tab. 4, by augmenting with DescReg, the best-reporting generative method of RRFS is improved on PASCAL VOC dataset. Specifically, with DescReg, the mAP performance for unseen classes is improved from 65.5% to 66.4% on ZSD setting, and from 49.1% to 50.4% on GZSD setting. Method ZSD GZSD S U HM SAN (2018) 59.1 48.0 37.0 41.8 HRE (2018) 54.2 62.4 25.5 36.2 BLC (2020) 55.2 58.2 22.9 32.9 RRFS (2022) 65.5 47.1 49.1 48.1 RRFS w/. DescReg 66.4 47.1 50.4 48.6 Table 4: ZSD and GZSD performance of the generative method on the PASCAL VOC dataset. 
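To connect these numbers back to the formulation, the following is a minimal PyTorch sketch of the description-similarity matrix with the self-excluding Softmax and the similarity-aware triplet loss (Eqs. (3)-(8)); the same term is attached to the synthesized features for the generative variant of Eq. (11) reported in Table 4. The hard argmax/argmin choice of h(j) and l(j) and the hyperparameters are simplified assumptions (the paper describes similarity-based sampling), not the released implementation.

```python
import torch
import torch.nn.functional as F

def self_excluding_softmax(sim: torch.Tensor, tau: float = 0.03) -> torch.Tensor:
    """Normalize off-diagonal similarities per row (Eq. 4); the diagonal stays at 1."""
    C = sim.size(0)
    off_diag = ~torch.eye(C, dtype=torch.bool, device=sim.device)
    exp = torch.exp(sim / tau) * off_diag              # zero out the diagonal before normalizing
    s_hat = exp / exp.sum(dim=1, keepdim=True)
    return s_hat + torch.eye(C, device=sim.device)     # self-similarity (= 1) back on the diagonal

def descreg_triplet_loss(w: torch.Tensor, desc_embeds: torch.Tensor, tau: float = 0.03) -> torch.Tensor:
    """Similarity-aware triplet regularization over projected class embeddings (Eqs. 5-8).
    w: [C, D] visually-aligned class representations (or synthesized features, Eq. 11).
    desc_embeds: [C, E] language-model embeddings of the class descriptions."""
    t = F.normalize(desc_embeds, dim=-1)
    s_hat = self_excluding_softmax(t @ t.t(), tau)      # Eqs. 3-4
    loss = w.new_zeros(())
    for j in range(w.size(0)):
        sims = s_hat[j].detach().clone()
        sims[j] = -1.0                                  # exclude the class itself
        h = int(sims.argmax())                          # most similar class (positive pair)
        l = int(sims.argmin())                          # least similar class (negative pair)
        margin = s_hat[j, h] - s_hat[j, l]              # adaptive margin, Eq. 6
        d_pos = torch.dist(w[j], w[h])                  # Euclidean distances
        d_neg = torch.dist(w[j], w[l])
        loss = loss + F.relu(d_pos - d_neg + margin)    # Eq. 7
    return loss                                         # Eq. 8: sum over all classes
```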
DescReg with Other Detection Architectures In addition to Faster R-CNN, we further validate DescReg on the one-stage YOLOv8 (Jocher, Chaurasia, and Qiu 2023) and the multi-stage Cascaded R-CNN (Cai and Vasconcelos 2018). The results are shown in Tab. 5. Our method applies well to the two detection models, e.g. with 15.6% ZSD mAP and Cascased R-CNN and 6.4% ZSD mAP on YOLOv8 which achieves 64 FPS inference speed. Analysis We conduct several ablation studies and experimental analyses to better understand how the proposed method works. Architecture ZSD GZSD FPS S U HM Faster-RCNN 15.2 68.7 7.9 14.2 11 Cascaded-RCNN 15.6 70.0 8.1 14.5 8 YOLOv8-s 6.4 49.9 4.2 7.7 64 Table 5: Performance with different detection architectures on the DIOR dataset. FPS denotes frame per seconds. Please refer to supplementary for qualitative results. Ablation Study on the Proposed Triplet Loss As shown in Tab. 6, when replacing the semantic class embeddings with the visual description embeddings, the baseline performance is improved but the improvement is limited (e.g., 1.0% on ZSD mAP). This result means naively incorporating the visual description information as the class semantic representation cannot help much due to the representation gap between semantic feature space and visual feature space. Additionally, by applying the proposed interclass triplet loss, the performance is significantly improved (from 7.1% to 10.1% on ZSD mAP) which indicates that simple similarity-based triplet regularization could improve the zero-shot detection performance. By further introducing the proposed similarity-aware margin, the ZSD mAP is improved by 5.0% and HM is improved by 5.6%, meaning the adaptive margin helps better regularize the class representation space. We also observe the temperature value of 0.03 achieves the best performance, which is slightly higher than 0.01 and 0.05. Based on the best-performing model, further adding the visual description embeddings cannot offer imThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6931 0 2 4 6 8 10 12 0 50 100 Epoch mAP seen w DR seen w/o DR 0 5 10 unseen w DR unseen w/o DR Figure 3: Learning dynamics of DescReg. w/wo DescReg denotes DescReg and baseline without DescReg. provement, indicating our method may already incorporate the visual characteristics into the embeddings through structural similarity regularization. Fig. 3 shows how the learning dynamics of seen and unseen classes, apparently, with DescReg, the performance on both seen and unseen classes are higher and the learning process is more stable. S→V Desc-Softmax Desc-Adaptive Margin ZSD HM 6.1 5.3 ✓ 7.1 5.9 ✓(0.03) 10.1 8.2 ✓(0.01) ✓ 15.1 13.8 ✓(0.03) ✓ 15.2 14.2 ✓(0.05) ✓ 14.5 13.6 ✓ ✓(0.03) ✓ 15.3 13.9 Table 6: Ablation study of the proposed similarity-aware triplet loss. S→V means replacing the semantic embeddings with visual description embeddings. Desc-Softmax and Desc-Adaptive-Margin denote the proposed self-excluding Softmax and the similarity-aware triplet loss. The numbers in the parentheses are the temperature used in the Softmax. Effect of Varying Descriptions We investigate how sensitive is DescReg to the input visual descriptions by varying the description sources. We evaluate how different human and GPT-4 (OpenAI 2023) description inputs affect the zero-shot detection performance. As shown in Tab. 7, when focusing on the semantics, the performance of both human and GPT-4 descriptions is low (e.g., 5.0% ZSD mAP for human input and 6.9% mAP for GPT-4 input). 
The reason is that simple semantic description contains much fewer visual details of the objects. When switching to descriptions that focus on visual details from an aerial view, the performance is significantly improved by more than 8.0% in ZSD mAP and 7.0% in HM, benefiting from the visual details that generate effective similarity measures. We also test how sensitive the method works with different descriptions of varying lengths. The result shows that our method is not very sensitive to the description length. In addition, we observe that GPT-generated descriptions offer higher performance than that of human inputs. While we did not dedicatedly optimize the human input, the result shows the application of large language models in ZSD is very efficient. Method ZSD HM Human semantic 5.0 4.7 aerial 13.1 11.9 semantic 6.9 6.1 aerial-long 15.2 14.2 aerial-medium 15.1 13.9 GPT-4 aerial-short 13.5 13.2 Table 7: Varying descriptions. We acquire visual descriptions through both Human and GPT-4. semantic means simply describing the object class while aerial means focusing on the visual appearance in aerial images. long, medium, and short denote descriptions with varying lengths. Conclusion In this paper, we investigate the zero-shot object detection (ZSD) problem in the context of aerial images. We identified the weak semantic-visual correlation problem of aerial objects and propose to learn stronger visually-aligned class representations with external visual descriptions in text format. Our method is extensively validated on three challenging aerial object detection datasets and shows significantly improved performance to the prior ZSD methods. To the best of our knowledge, we are the first to conduct a comprehensive study on zero-shot aerial object detection. we hope our method and newly established experimental setups provide a baseline for future research. Limitations and Future Work While our method significantly improves the baselines, we note the performance on unseen classes is still low. The major challenge arises from the strong inter-class confusion and background confusion among aerial objects, which is further exacerbated by the small object size. While our method mitigates these problems, two future directions could further address them: 1) The non-uniform spatial processing approaches (Recasens et al. 2018; Yang et al. 2019) could be explored to amplify the small object signal for improved zero-shot recognition. 2) Based on our proposed regularization, other labelefficient methods could be incorporated to improve the performance, e.g. few-shot approach and open-vocabulary detection approach (Kang et al. 2019; Wang 2023). Acknowledgments This work is supported by the Key Program of National Science Foundation of China under Grant 61836006, the Fundamental Research Funds for the Central Universities under Grant YJ202342 and 1082204112364, the National Science Foundation of China under Grant 62106161, the Key R&D Program of Sichuan Province under Grant 2022YFN0017 and 2023YFG0278 and Engineering Research Center of Machine Learning and Industry Intelligence, Ministry of Education. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6932 References Akata, Z.; Perronnin, F.; Harchaoui, Z.; and Schmid, C. 2015. Label-embedding for image classification. IEEE transactions on pattern analysis and machine intelligence, 38(7): 1425–1438. Bansal, A.; Sikka, K.; Sharma, G.; Chellappa, R.; and Divakaran, A. 2018. Zero-Shot Object Detection. 
In European Conference on Computer Vision. Cai, Z.; and Vasconcelos, N. 2018. Cascade r-cnn: Delving into high quality object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 6154–6162. Cheng, G.; Han, J.; Zhou, P.; and Xu, D. 2019. Learning Rotation-Invariant and Fisher Discriminative Convolutional Neural Networks for Object Detection. IEEE Transactions on Image Processing, 28: 265–278. Cheng, G.; Zhou, P.; and Han, J. 2016. Learning RotationInvariant Convolutional Neural Networks for Object Detection in VHR Optical Remote Sensing Images. IEEE Transactions on Geoscience and Remote Sensing, 54: 7405–7415. Demirel, B.; Cinbis, R. G.; and Ikizler-Cinbis, N. 2018. Zero-Shot Object Detection by Hybrid Region Embedding. In British Machine Vision Conference. Demirel, B.; Cinbis, R. G.; and Ikizler-Cinbis, N. 2019. Learning Visually Consistent Label Embeddings for ZeroShot Learning. 2019 IEEE International Conference on Image Processing (ICIP), 3656–3660. Deng, S.; Li, S.; Xie, K.; Song, W.; Liao, X.; Hao, A.; and Qin, H. 2020. A global-local self-adaptive network for drone-view object detection. IEEE Transactions on Image Processing, 30: 1556–1569. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Ding, J.; Xue, N.; Xia, G.-S.; Bai, X.; Yang, W.; Yang, M. Y.; Belongie, S.; Luo, J.; Datcu, M.; Pelillo, M.; et al. 2021. Object detection in aerial images: A large-scale benchmark and challenges. IEEE transactions on pattern analysis and machine intelligence, 44(11): 7778–7796. Elhoseiny, M.; Saleh, B.; and Elgammal, A. 2013. Write a classifier: Zero-shot learning using purely textual descriptions. In Proceedings of the IEEE International Conference on Computer Vision, 2584–2591. Everingham, M.; Van Gool, L.; Williams, C. K.; Winn, J.; and Zisserman, A. 2010. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2): 303–338. Frome, A.; Corrado, G. S.; Shlens, J.; Bengio, S.; Dean, J.; Ranzato, M.; and Mikolov, T. 2013. Devise: A deep visualsemantic embedding model. Advances in neural information processing systems, 26. Han, J.; Ding, J.; Li, J.; and Xia, G.-S. 2021. Align deep features for oriented object detection. IEEE Transactions on Geoscience and Remote Sensing, 60: 1–11. Hayat, N.; Hayat, M.; Rahman, S.; Khan, S. H.; Zamir, S. W.; and Khan, F. S. 2020. Synthesizing the Unseen for Zero-shot Object Detection. In Asian Conference on Computer Vision. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. Huang, P.; Han, J.; Cheng, D.; and Zhang, D. 2022. Robust Region Feature Synthesizer for Zero-Shot Object Detection. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 7612–7621. Jasani, B.; and Mazagonwalla, A. 2019. Skeleton based Zero Shot Action Recognition in Joint Pose-Language Semantic Space. ArXiv, abs/1911.11344. Jocher, G.; Chaurasia, A.; and Qiu, J. 2023. YOLO by Ultralytics. Kang, B.; Liu, Z.; Wang, X.; Yu, F.; Feng, J.; and Darrell, T. 2019. Few-shot object detection via feature reweighting. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8420–8429. Khandelwal, S.; Nambirajan, A.; Siddiquie, B.; Eledath, J.; and Sigal, L. 2023. 
Frustratingly Simple but Effective Zeroshot Detection and Segmentation: Analysis and a Strong Baseline. arXiv preprint arXiv:2302.07319. Koyun, O. C.; Keser, R. K.; Akkaya, I. B.; and T¨oreyin, B. U. 2022. Focus-and-Detect: A small object detection framework for aerial images. Signal Processing: Image Communication, 104: 116675. Lam, D.; Kuzma, R.; McGee, K.; Dooley, S.; Laielli, M.; Klaric, M. K.; Bulatov, Y.; and McCord, B. 2018. xView: Objects in Context in Overhead Imagery. ArXiv, abs/1802.07856. Li, C.; Yang, T.; Zhu, S.; Chen, C.; and Guan, S. 2020. Density map guided object detection in aerial images. In proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, 190–191. Li, K.; Wan, G.; Cheng, G.; Meng, L.; and Han, J. 2019a. Object Detection in Optical Remote Sensing Images: A Survey and A New Benchmark. ArXiv, abs/1909.00133. Li, W.; Chen, Y.; Hu, K.; and Zhu, J. 2022. Oriented reppoints for aerial object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 1829–1838. Li, Z.; Yao, L.; Zhang, X.; Wang, X.; Kanhere, S. S.; and Zhang, H. 2019b. Zero-Shot Object Detection with Textual Descriptions. In AAAI Conference on Artificial Intelligence. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Doll´ar, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, 740–755. Springer. Lu, X.; Sun, X.; Diao, W.; Mao, Y.; Li, J.; Zhang, Y.; Wang, P.; and Fu, K. 2023. Few-shot object detection in aerial imagery guided by text-modal knowledge. IEEE Transactions on Geoscience and Remote Sensing, 61: 1–19. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6933 Meethal, A.; Granger, E.; and Pedersoli, M. 2023. Cascaded Zoom-in Detector for High Resolution Aerial Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2045–2054. Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G. S.; and Dean, J. 2013. Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems, 26. Mishra, A.; Verma, V. K.; Reddy, M. S. K.; Subramaniam, A.; Rai, P.; and Mittal, A. 2018. A Generative Approach to Zero-Shot and Few-Shot Action Recognition. 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), 372–380. OpenAI. 2023. GPT-4. https://chat.openai.com/. Accessed: 2024-01-24. Paz-Argaman, T.; Atzmon, Y.; Chechik, G.; and Tsarfaty, R. 2020. Zest: Zero-shot learning from text descriptions using textual similarity and visual summarization. arXiv preprint arXiv:2010.03276. Rahman, S.; Khan, S.; and Barnes, N. 2020. Improved visual-semantic alignment for zero-shot object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 11932–11939. Rahman, S.; Khan, S.; and Porikli, F. 2018. Zero-shot object detection: Learning to simultaneously recognize and localize novel concepts. In Asian Conference on Computer Vision, 547–563. Springer. Recasens, A.; Kellnhofer, P.; Stent, S.; Matusik, W.; and Torralba, A. 2018. Learning to zoom: a saliency-based sampling layer for neural networks. In Proceedings of the European conference on computer vision (ECCV), 51–66. Redmon, J.; and Farhadi, A. 2017. YOLO9000: better, faster, stronger. arXiv preprint. Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. 
In Advances in neural information processing systems, 91–99. Sarma, S.; Kumar, S.; and Sur, A. 2022. Resolving Semantic Confusions for Improved Zero-Shot Detection. arXiv preprint arXiv:2212.06097. Tang, C.; He, Z.; Li, Y.; and Lv, J. 2021. Zero-shot learning via structure-aligned generative adversarial network. IEEE transactions on neural networks and learning systems, 33(11): 6749–6762. Tang, C.; Lv, J.; Chen, Y.; and Guo, J. 2019. An anglebased method for measuring the semantic similarity between visual and textual features. Soft Computing, 23: 4041–4050. Tang, C.; Yang, X.; Lv, J.; and He, Z. 2020. Zero-shot learning by mutual information estimation and maximization. Knowledge-Based Systems, 194: 105490. Wang, Q.; and Chen, K. 2017. Zero-shot visual recognition via bidirectional latent embedding. International Journal of Computer Vision, 124: 356–383. Wang, T. 2023. Learning to detect and segment for open vocabulary object detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7051–7060. Wolf, S.; Meier, J.; Sommer, L.; and Beyerer, J. 2021. Double head predictor based few-shot object detection for aerial imagery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 721–731. Xia, G.-S.; Bai, X.; Ding, J.; Zhu, Z.; Belongie, S.; Luo, J.; Datcu, M.; Pelillo, M.; and Zhang, L. 2018. DOTA: A large-scale dataset for object detection in aerial images. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3974–3983. Xia, G.-S.; Bai, X.; Ding, J.; Zhu, Z.; Belongie, S. J.; Luo, J.; Datcu, M.; Pelillo, M.; and Zhang, L. 2017. DOTA: A Large-Scale Dataset for Object Detection in Aerial Images. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3974–3983. Yan, C.; Chang, X.; Luo, M.; Liu, H.; Zhang, X.; and Zheng, Q. 2022. Semantics-guided contrastive network for zeroshot object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence. Yang, C.; Huang, Z.; and Wang, N. 2022. QueryDet: Cascaded sparse query for accelerating high-resolution small object detection. In IEEE Conference on computer vision and pattern recognition, 13668–13677. Yang, F.; Fan, H.; Chu, P.; Blasch, E.; and Ling, H. 2019. Clustered object detection in aerial images. In IEEE International conference on computer vision, 8311–8320. Zhang, L.; Wang, X.; Yao, L.; Wu, L.; and Zheng, F. 2020a. Zero-shot object detection via learning an embedding from semantic space to visual space. In Twenty-Ninth International Joint Conference on Artificial Intelligence. Zhang, Z.; Jiang, R.; Mei, S.; Zhang, S.; and Zhang, Y. 2020b. Rotation-Invariant Feature Learning for Object Detection in VHR Optical Remote Sensing Images by DoubleNet. IEEE Access, 8: 20818–20827. Zhao, S.; Gao, C.; Shao, Y.; Li, L.; Yu, C.; Ji, Z.; and Sang, N. 2020. GTNet: Generative Transfer Network for ZeroShot Object Detection. ArXiv, abs/2001.06812. Zheng, Y.; Huang, R.; Han, C.; Huang, X.; and Cui, L. 2020. Background Learnable Cascade for Zero-Shot Detection. Zhu, P.; Wang, H.; and Saligrama, V. 2019. Don’t Even Look Once: Synthesizing Features for Zero-Shot Detection. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 11690–11699. Zhu, P.; Wen, L.; Du, D.; Bian, X.; Fan, H.; Hu, Q.; and Ling, H. 2021. Detection and tracking meet drones challenge. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11): 7380–7399. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6934 | 2024 | 770 |
18,596 | Controllable Mind Visual Diffusion Model Bohan Zeng1*, Shanglin Li1*, Xuhui Liu1, Sicheng Gao1 Xiaolong Jiang3, Xu Tang3, Yao Hu3, Jianzhuang Liu4, Baochang Zhang1,2,5† 1Institute of Artificial Intelligence, Hangzhou Research Institute, Beihang University, China 2Nanchang Institute of Technology, Nanchang, China 3Xiaohongshu Inc 4Shenzhen Institute of Advanced Technology, Shenzhen, China 5 Zhongguancun Laboratory, Beijing, China {bohanzeng, shanglin, bczhang}@buaa.edu.cn Abstract Brain signal visualization has emerged as an active research area, serving as a critical interface between the human visual system and computer vision models. Diffusion-based methods have recently shown promise in analyzing functional magnetic resonance imaging (fMRI) data, including the reconstruction of high-quality images consistent with original visual stimuli. Nonetheless, it remains a critical challenge to effectively harness the semantic and silhouette information extracted from brain signals. In this paper, we propose a novel approach, termed as Controllable Mind Visual Diffusion Model (CMVDM). Specifically, CMVDM first extracts semantic and silhouette information from fMRI data using attribute alignment and assistant networks. Then, a control model is introduced in conjunction with a residual block to fully exploit the extracted information for image synthesis, generating high-quality images that closely resemble the original visual stimuli in both semantic content and silhouette characteristics. Through extensive experimentation, we demonstrate that CMVDM outperforms existing state-of-theart methods both qualitatively and quantitatively. Our code is available at https://github.com/zengbohan0217/CMVDM. Introduction Understanding the cognitive processes that occur in the human brain when observing visual stimuli (e.g., natural images) has long been a primary focus for neuroscientists. Both objective visual stimuli and subjective cognitive activities can elicit the transmission of intricate neural signals in the visual cortex of the brain, thus laying the foundation for higher-order cognitive and decision-making processes. With the advancement of techniques such as functional magnetic resonance imaging (fMRI), it has become possible to capture real-time brain activity signals with greater accuracy and finer granularity, thereby accelerating the progress of neuroscientific research. Deciphering and reconstructing from these intricate signals remain a great challenge to both cognitive neuroscience and downstream applications like Brain*These authors contributed equally. †Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Ground Truth (GT) Ours MinD-Vis Figure 1: Illustration of synthesis results. A recent method MinD-Vis (Chen et al. 2023) can generate photo-realistic results, but they cannot well match the visual stimuli in terms of semantics and silhouette. Our method can generate better results more consistent with the GT visual stimuli. Computer Interfaces (BCI) (Nicolas-Alonso and Gomez-Gil 2012; Milekovic et al. 2018). Early attempts (Van Gerven et al. 2010; Damarla and Just 2013; Horikawa and Kamitani 2017; Akamatsu et al. 2020) at analyzing brain activity on visual tasks mainly focus on matching human subjects’ brain activity with observed natural images, or reconstructing visual patterns of simple geometric shapes (Miyawaki et al. 2008; Schoenmakers et al. 2013; Van Gerven, De Lange, and Heskes 2010). 
These explorations demonstrate the feasibility of deriving semantic information for perceived images from brain signals, yet they have poor generalization to unseen semantic categories or complicated reconstruction tasks. Recent studies (Beliy et al. 2019; Gaziv et al. 2022; Ozcelik et al. 2022; Chen et al. 2023; Takagi and Nishimoto 2023) have made significant progress in reconstructing visual stimuli from brain signals. (Beliy et al. 2019; Gaziv et al. 2022) can generate images that are similar in shape to the original visual stimuli, but the images suffer from severe distortion and blur issues. (Ozcelik et al. 2022; Chen The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6935 et al. 2023; Takagi and Nishimoto 2023) have employed commonly used generative models, such as Generative Adversarial Networks (GAN) or diffusion models, to generate high-quality RGB images that maintain semantic consistency with the original visual stimuli conditioned on corresponding fMRI signals. However, such methods struggle with positional inconsistency, as shown in Fig. 1. In general, existing methods have not effectively utilized the semantic and spatial features inherent in fMRI signals. In this paper, we present a Controllable Mind Visual Diffusion Model (CMVDM) that enables the mind diffusion model with a control network to leverage the extracted faithful semantic and silhouette information for high-fidelity human vision reconstruction. Specifically, we first finetune a pretrained latent diffusion model (LDM) with a semantic alignment loss and pretrain a silhouette extractor to estimate accurate semantic and silhouette information of the fMRI data. Taking inspiration from ControlNet, we then introduce a control network, which takes the silhouette information as a condition, into the pretrained LDM to guide the diffusion process to generate desired images that match the original visual stimuli in terms of both semantic and silhouette information. Fig. 1 shows two examples where CMVDM outperforms the previous state-of-the-art approach, MinD-Vis. In summary, the main contributions of this paper are as follows: • We propose a novel Controllable Mind Visual Diffusion Model (CMVDM) that leverages both semantic and spatial visual patterns in brain activity to reconstruct photorealistic images. A control network is utilized to enable effective manipulation over the positions of generated objects or scenes in the reconstructed images, providing a much better structural similarity to the original visual stimuli. • We design two extractors to extract semantic and silhouette attributes to provide accurate information for generating images that closely resemble the visual stimuli. Besides, we build a residual module to provide information beyond semantics and silhouette. • We conduct comprehensive experiments on two datasets to evaluate the performance of our method. It achieves state-of-the-art qualitative and quantitative results compared to existing methods, demonstrating the efficacy of CMVDM for decoding high-quality and controllable images from fMRI signals. Related Work Diffusion Probabilistic Models. Diffusion models (DMs) were initially introduced by (Sohl-Dickstein et al. 2015) as a novel generative model that gradually denoises images corrupted by Gaussian noise to produce samples. 
Recent advances in DMs have demonstrated their superior performance in image synthesis, with notable models including (Ho, Jain, and Abbeel 2020; Song, Meng, and Ermon 2020; Dhariwal and Nichol 2021; Vahdat, Kreis, and Kautz 2021; Rombach et al. 2022; Peebles and Xie 2022). DDGAN (Xiao, Kreis, and Vahdat 2022) is a model that reduces the number of sampling steps by directly predicting the ground truth in each timestep. DMs have also achieved state-of-theart performance in other synthesis tasks, such as text-toimage generation with GLIDE (Nichol et al. 2021), speech synthesis with (Kong et al. 2020; Liu et al. 2021), and superresolution with (Li et al. 2022a; Saharia et al. 2022; Gao et al. 2023). In addition, DMs have been applied to text-to3D synthesis in (Poole et al. 2022; Lin et al. 2022), and other 3D object syntheses in (Anciukeviˇcius et al. 2022; Li et al. 2022b; Luo and Hu 2021). Furthermore, DMs have found applications in video synthesis (Ho et al. 2022b,a), semantic segmentation (Baranchuk et al. 2021), text-to-motion generation (Tevet et al. 2022), face animation (Zeng et al. 2023), and object detection (Chen et al. 2022). (Kulikov et al. 2022; Wang et al. 2022) are models that generate diverse results by learning the internal patch distribution from a single image. ControlNet employs a control network on a pretrained textconditioned LDM for controllable image synthesis. Overall, DMs have shown promising results and have been widely adopted in various synthesis tasks. Neural Decoding of Visual Stimuli. Neural decoding of visual stimuli has been a topic of growing interest in recent years. Numerous studies have explored the possibility of using machine learning algorithms to decode visual information from patterns of neural activity in the human brain. For instance, (Naselaris et al. 2009) demonstrates that it is possible to reconstruct natural images from fMRI data using a linear decoder. Similarly, (Kay et al. 2008) shows that the orientation of gratings from patterns of activity in the early visual cortex can be decoded using a support vector machine. More recent studies have built on these findings by exploring more complex visual stimuli, such as natural scenes (Nishimoto et al. 2011) and faces (Kriegeskorte et al. 2007), and by developing more sophisticated machine learning algorithms, such as deep neural networks (Yamins et al. 2014). To enable decoding of novel scenarios, some works use an identification-based approach (Horikawa and Kamitani 2017; Akamatsu et al. 2020; Kay et al. 2008), where they model the relationship between brain activity and visual semantic knowledge such as image features extracted by a CNN (Horikawa and Kamitani 2017; Akamatsu et al. 2020). These studies provide valuable insights into the interpretation of human brain signals in the visual cortex, which can help the development of more effective decoding algorithms for a wide range of neuroimaging applications, such as Brain-Computer Interfaces. However, these methods require a large amount of paired stimuli-responses data that is hard to obtain. Therefore, decoding novel image categories accurately remains a challenge. fMRI-to-Image Reconstruction With the remarkable advancements in generative models, recent studies have focused on the reconstruction of images from human brain activity. These studies employ various approaches, such as building an encoder-decoder structure to align image features with corresponding fMRI data, as demonstrated by (Beliy et al. 2019) and (Gaziv et al. 2022). 
To further enhance the quality of image reconstruction, researchers have turned to more sophisticated techniques, including generaThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6936 visual stimuli 1D fMRI signals Region of Interest (ROI) Data Acquisition Paired fMRIImage data (limited) Only fMRI data (largescale) Silhouette Extraction Pretrained LDM fMRI Embedding Finetuning LDM CMVDM Reconstructed Image Data Add Noise Fixed weights 𝐸𝑛𝑐𝑜𝑑𝑒𝑟 𝐵𝑙𝑜𝑐𝑘! 𝐸𝑛𝑐𝑜𝑑𝑒𝑟 𝐵𝑙𝑜𝑐𝑘" 𝐷𝑒𝑐𝑜𝑑𝑒𝑟 𝐵𝑙𝑜𝑐𝑘#$! 𝐷𝑒𝑐𝑜𝑑𝑒𝑟 𝐵𝑙𝑜𝑐𝑘#$" 𝐸𝑛𝑐𝑜𝑑𝑒𝑟 𝐵𝑙𝑜𝑐𝑘! 𝐸𝑛𝑐𝑜𝑑𝑒𝑟 𝐵𝑙𝑜𝑐𝑘" 𝑍𝑒𝑟𝑜𝐶𝑜𝑛𝑣 𝑍𝑒𝑟𝑜𝐶𝑜𝑛𝑣 𝑀𝑖𝑑 𝐵𝑙𝑜𝑐𝑘 Diffusion Process 𝑍𝑒𝑟𝑜𝐶𝑜𝑛𝑣 Latent Diffusion Control Model Denoising Process Conditions Copy weights Skip connection Addition Semantic Extraction Structural Alignment T × Diffusion / Denoising Steps 𝑀𝑖𝑑𝐵𝑙𝑜𝑐𝑘 𝑍𝑒𝑟𝑜𝐶𝑜𝑛𝑣 Figure 2: Overview of our proposed method. Initially, we train Efmri and Dslh in the “Finetuning LDM” and “Silhouette Extraction” parts, respectively. Subsequently, we utilize Efmri, Dslh, and Fres to extract semantic, silhouette, and supplement information from fMRI signals as conditions. Finally, we integrate the control network with the LDM to generate high-fidelity and controllable results tailored to the aforementioned conditions. tive adversarial networks (GAN) (Ozcelik et al. 2022) and diffusion models (Takagi and Nishimoto 2023; Chen et al. 2023). These methods have shown promise in achieving more plausible image reconstruction. Nonetheless, the approaches described above have limitations in terms of image reconstruction quality and localization accuracy, resulting in unreliable reconstruction outcomes and inadequate utilization of the deep semantic and shallow positional information inherent in fMRI signals. Method In this section, we describe the CMVDM model, which combines attribute extractors and a control model to produce precise and controllable outcomes from fMRI signals. Fig. 2 illustrates the architecture of CMVDM. Problem Statement and Overview of CMVDM Let the paired {fMRI, image} dataset Ω = {(cfmri,i, Ii)}n i=1, where cfmri,i ∈ R1×N and Ii ∈ RH×W ×3. The fMRI data is extracted as a 1D signal from the region of interest (ROI) on the visual cortex averaged across the time during which the visual stimuli are presented. N denotes the number of voxels of the extracted signal. We adopt the pretrained image encoder of the LDM (Rombach et al. 2022) to encode the observed image I into the latent code z. Our CMVDM aims to learn an estimation of the data distribution p(z|cfmri) through a Markov chain with T timesteps. Following (Ho, Jain, and Abbeel 2020; Song, Meng, and Ermon 2020; Rombach et al. 2022), we define the fixed forward Markov diffusion process q as: q (z1:T | z0) = T Y t=1 q (zt | zt−1) , q (zt | zt−1) = N zt | p 1 −βtzt−1, βtI , (1) where z0 denotes the latent code of an image. This Markov diffusion process propagates by adding Gaussian noise, with variances βt ∈(0, 1) in T iterations. Given z0, the distribution of zt can be represented by: q (zt | z0) = N (zt | √γtz0, (1 −γt)I) , (2) where γt = Qt i=1 (1 −βi). In the inference process, CMVDM learns the conditional distributions pθ(zt−1|zt, cfmri) and conducts a reverse Markov process from Gaussian noise zT ∼N(0, I) to a target latent code z0 as: pθ (z0:T | cfmri) = p (zT ) T Y t=1 pθ (zt−1 | zt, cfmri) , p (zT ) = N (zT | 0, I) , pθ (zt−1 | zt, cfmri) = N zt−1 | µθ (cfmri, zt, t) , σ2 t I , (3) where σt = 1−γt−1 1−γt βt. The pretrained image decoder of the LDM (Rombach et al. 2022) turns the final latent code to an image. 
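For concreteness, the forward sampling of Eq. (2) and a single reverse step of Eq. (3) can be written out as a minimal sketch. This sketch assumes a linear beta schedule and takes the conditional mean mu_theta produced by the denoising network as a given tensor; the helper names (make_schedule, q_sample, p_step) are illustrative and not part of the released code.

import torch

def make_schedule(T=1000, beta_start=1e-4, beta_end=2e-2):
    # Linear beta schedule (an assumption); gamma_t = prod_{i<=t}(1 - beta_i) as in Eq. (2).
    betas = torch.linspace(beta_start, beta_end, T)
    gammas = torch.cumprod(1.0 - betas, dim=0)
    return betas, gammas

def q_sample(z0, t, gammas):
    # Forward process q(z_t | z_0) = N(sqrt(gamma_t) z0, (1 - gamma_t) I), Eq. (2).
    eps = torch.randn_like(z0)
    g = gammas[t]
    zt = g.sqrt() * z0 + (1.0 - g).sqrt() * eps
    return zt, eps

@torch.no_grad()
def p_step(zt, t, mu, betas, gammas):
    # One reverse step of p_theta(z_{t-1} | z_t, c_fmri) = N(mu_theta, sigma_t^2 I), Eq. (3);
    # `mu` stands in for the mean predicted by the conditional denoising network.
    gamma_prev = gammas[t - 1] if t > 0 else torch.tensor(1.0)
    sigma2 = (1.0 - gamma_prev) / (1.0 - gammas[t]) * betas[t]
    noise = torch.randn_like(zt) if t > 0 else torch.zeros_like(zt)
    return mu + sigma2.sqrt() * noise

In a typical training iteration, q_sample would supply (z_t, eps) for the denoising loss, while p_step would only be used at inference when walking the chain from z_T back to z_0.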
Furthermore, we extract the attributes and control the generated results. Firstly, we extract the semantic and silhouette information by utilizing the fMRI encoder Efmri and the silhouette estimating network Dslh, respectively. This step enables us to accurately decouple the fMRI information cfmri. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6937 Subsequently, we utilize the control model Fctrl to generate high-quality images that match the visual stimuli in terms of both semantic and silhouette information. Fctrl is able to leverage the extracted information to produce better results. Besides, the residual module Fres is designed to provide information beyond semantics and silhouette. Finetuning of the Pretrained LDM Before extracting the silhouette information and controlling the generated results, we need to finetune the pretrained LDM (Rombach et al. 2022) to enable it to generate consistent images and extract the semantic information based on the input fMRI signals. Following MinD-Vis, we employ the fMRI encoder Efmri pretrained on the HCP dataset (Van Essen et al. 2013) to encode the brain activity signals to the fMRI embeddings. Besides, we use the pretrained LDM to generate output images. By optimizing the fMRI encoder Efmri and the cross-attention layers in the LDM, while freezing the other blocks during the finetuning process, we can obtain reliable consistent generated results. The finetuning loss is defined as follows: Lf = Ez0,t,cfmri,ϵ∼N (0,1)[||ϵ −ϵθ(zt, t, Efmri(cfmri))||2 2], (4) where ϵθ is the denoising network of the LDM. In this way, the LDM can ensure the consistency of the generated results. Let cctx = Efmri(cfmri) be the semantic information extracted from the fMRI signals. Due to the lack of direct semantic supervision, Efmri may be insufficient for providing enough semantic information. Therefore, we design a noval alignment loss Lalign to further enhance the semantic information cctx: Lalign = e−cosine(fimg,MLP(cctx)), (5) where cosine(·, ·) denotes the cosine similarity, fimg is the image feature extracted by the CLIP image encoder (Radford et al. 2021), and MLP represents a trainable multilayer perceptron. After this training stage, the LDM can make the generated images consistent with the fMRI signals. Nonetheless, due to the absence of explicit positional condition guidance, it is still a challenge for the LDM to generate silhouette-matched results. In the next two sections, we will describe how to extract silhouette information from the fMRI signals and control the final results. Silhouette Extraction In this section, we aim to extract silhouette information from fMRI signals. (Gaziv et al. 2022) uses a combination of selfsupervised and supervised learning to reconstruct images similar to visual stimuli. Despite the low fidelity of the image generation quality, their generated results demonstrate a notable ability to accurately replicate the silhouette of the visual stimuli (see Fig. 3). Based on this, we devise a silhouette estimation network that is capable of providing rough positional guidance for CMVDM. Our silhouette estimation network consists of two components: an encoder Eslh and a decoder Dslh. The encoder Eslh projects the input images to the fMRI signal space, while the decoder Dslh performs the inverse transformation. Let cfmri,i be the ground truth (GT) fMRI signal, Ii be the corresponding GT image, and ˆ cfmri,i = Eslh(Ii) be the estimated fMRI signal. 
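Before the training losses are introduced, the E_slh / D_slh round trip can be sketched as below. The stand-in convolutional architectures, the voxel count n_voxels, and the working resolution are assumptions made purely for illustration; only the input/output shapes follow the paper.

import torch
import torch.nn as nn

class SilhouetteEncoder(nn.Module):
    # Stand-in for E_slh: maps an RGB image to an N-voxel pseudo-fMRI vector.
    def __init__(self, n_voxels, img_size=112):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * (img_size // 4) ** 2, n_voxels),
        )

    def forward(self, img):          # (B, 3, H, W) -> (B, N)
        return self.net(img)

class SilhouetteDecoder(nn.Module):
    # Stand-in for D_slh: maps a (real or estimated) fMRI vector to a coarse image.
    def __init__(self, n_voxels, img_size=112):
        super().__init__()
        self.side = img_size // 4
        self.fc = nn.Linear(n_voxels, 64 * self.side ** 2)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, c_fmri):       # (B, N) -> (B, 3, H, W)
        h = self.fc(c_fmri).view(-1, 64, self.side, self.side)
        return self.up(h)

# Self-supervised round trip on unlabeled images: phi_hat = D_slh(E_slh(phi)).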
We define the encoder training loss Le by a combination of the Mean Square Error (MSE) loss and cosine similarity: Le = 1 |Ω| |Ω| X i=1 [α1 · ∥cfmri,i − ˆ cfmri,i∥2 + α2 · (1 −cosine(cfmri,i, ˆ cfmri,i))], (6) where αi∈{1,2} are the hyperparameters set empirically to α1 = 1 and α2 = 0.3. After completing the training of Eslh, we fix its parameters and train the reverse process for the decoder Dslh. Due to the limited availability of paired {fMRI, image} data, mapping fMRI signals to images is challenging. Inspired by (Gaziv et al. 2022), we utilize semi-supervised training to extract intricate silhouette information. The self-supervised process can be simply represented as: ˆϕi = Dslh(Eslh(ϕi)), where ϕi ∈Φ denotes the image from ImageNet (without corresponding fMRI data) (Deng et al. 2009), and ˆϕi denotes the reconstructed image. By minimizing the disparity between ϕi and ˆϕi, the self-supervised process helps Eslh and Dslh to learn more generalized image representation. We employ the Structural Similarity (SSIM) loss besides the Mean Absolute Error (MAE) loss to penalize the spatial distances between the reconstructed images and the GT images. The two losses are: Lmae = 1 |Ω| |Ω| X i=1 | ˆIi −Ii| | {z } supervised + 1 |Φ| |Φ| X i=1 | ˆϕi −ϕi| | {z } self−supervised , (7) Lssim = 1 − (2µIµˆI + C1)(2σI ˆI + C2) (µ2 I + µ2 ˆI + C1)(σ2 I + σ2 ˆI + C2), (8) where µˆI, µI, σˆI, and σI represent the mean and std values of the reconstructed images ˆI and GT images I, C1 and C2 are constants to stabilize the calculation. The decoder loss Ld is defined as the combination of the two losses: Ld = Lmae + Lssim. (9) After training, Dslh is able to generate images ˆI from cfmri that provide positional guidance for CMVDM. To avoid confusion, we’ll refer to ˆI as cslh in the following section. Training of Control Model After obtaining the enhanced semantic information cctx = Efmri(cfmri) and the reliable silhouette information cslh = Dslh(cfmri) from cfmri, we use them to control the generated results as shown in Fig. 2. Inspired by ControlNet, we design a control model to control the overall composition of the generated images. Specifically, we freeze all the parameters in the denoising network ϵθ and clone the U-Net encoder of ϵθ into the trainable Fctrl(·; Θc) with a set of parameters The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6938 Method GOD BOLD5000 Acc (%) PCC SSIM Acc(%) PCC SSIM Beliy (2019) 4.288 0.48285 0.51795 / / / Gaziv (2022) 9.128 0.68326 0.64857 / / / IC-GAN (2022) 29.386 0.44857 0.54489 / / / MinD-Vis (2023) 26.644 0.53159 0.52669 25.918 0.54486 0.52379 CMVDM (Ours) 30.112 0.76751 0.63167 27.791 0.55691 0.53459 Table 1: Quantitative comparison with four state-of-the-art (SOTA) methods. Bold results denote the best results and underlined results denote the second-best results. Θc (the red blocks of control model in Fig. 2). The inputs of Fctrl include zt, cctx, and the silhouette feature cslh. The combined condition code x′ c,t can be formulated as: x′ c,t = Z(Fctrl(zt + Z(cslh), cctx; Θc)), (10) where Z(·) denotes the zero convolution operation (Zhang and Agrawala 2023). Furthermore, in order to compensate for the fMRI data loss during attribute extraction, we utilize a trainable residual block denoted as Fres. This block is trained in conjunction with Fctrl. The final combined condition code xc,t is represented as: xc,t =Z(Fctrl(zt+ Z(cslh + Z(Fres(cfmri))), cctx; Θc)). 
(11) Then the output features xc,t of the control model are added to the U-Net decoder features of the frozen ϵθ, as shown in Fig. 2. Finally, we use the following loss Lctrl to supervise the training of the control model and Fres in our CMVDM: Lctrl = Ez0,t,cfmri,ϵ∼N (0,1)[||ϵ −ϵθ(zt, t, cctx, xc,t)||2 2]. (12) Note that with their losses, the control model training, the pretrained LDM finetuning, and the Dslh training are independent. In our framework, we separately pretrained Efmri and Dslh and froze their weights to jointly train Fres and Fctrl (as depicted in Fig 2). Experiments Datasets and Implementation Datasets. In this study, we employ two public datasets with paired fMRI signals and images: Generic Object Decoding (GOD) dataset (Horikawa and Kamitani 2017), and Brain, Object, Landscape Dataset (BOLD5000) (Chang et al. 2019). The GOD dataset is a well-known and extensively researched collection of fMRI-based brain signal decoding data. It comprises 1250 distinct images belonging to 200 different categories, with 50 images designated for testing. The BOLD5000 dataset is a rich resource for studying the neural representation of visual stimuli, as it contains diverse images from natural and artificial domains. The images are drawn from three existing datasets: SUN (Xiao et al. 2010), COCO (Lin et al. 2014), and ImageNet (Deng et al. 2009), which contain images of various categories of objects and animals. BOLD5000 was acquired from four subjects who underwent fMRI scanning while viewing 5,254 images in 15 sessions. The fMRI data were preprocessed and aligned to a common anatomical space, resulting in 4803 fMRI-image pairs for training and 113 for testing. The dataset provides a unique opportunity to investigate how the human brain encodes visual information across different levels of abstraction and complexity. Additionally, we use the large-scale fMRI data from Human Connectome Project (HCP) (Van Essen et al. 2013) in an unsupervised manner to pretrain the fMRI encoder Efmri in our method, which aims to fully extract the features of fMRI signals. Training Details. We adopt 1 A100-SXM4-40GB GPU for the training of Efmri and the control model, and 1 V100SXM2-32GB GPU for Dslh training. Both Efmri and the control model are trained by the AdamW (Loshchilov and Hutter 2017) with β = (0.9, 0.999) and eps = 1e −8 for 500 epochs. Dslh is optimized using Adam (Kingma and Ba 2015) with a learning rate of 5e −3 and β = (0.5, 0.99) for 150 epochs. Evaluation Metrics N-way Classification Accuracy (Acc). Following (Gaziv et al. 2022; Chen et al. 2023), we employ the n-way top-1 classification task to evaluate the semantic correctness of the generated results, where multiple trials for top-1 classification accuracies are calculated in n −1 randomly selected classes with the correct class. Specifically, we follow MinDVis and use a pretrained ImageNet-1K classifier (Dosovitskiy et al. 2020) to estimate the accuracy. Firstly, we input the generated results and the ground-truth images into the classifier, and then check whether the top-1 classification matches the correct class. Pearson Correlation Coefficient (PCC). The Pearson correlation coefficient (PCC) measures the degree of linear association between two variables. PCC is used to measure the correlation between the pixel values of the generated results and those of the ground truth, with +1 indicating a perfect positive linear relationship and -1 indicating a perfect negative linear relationship. 
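As an illustration, the per-image PCC reported here can be computed over flattened pixel values as in the following minimal sketch (the function name and the epsilon guard are illustrative, not taken from the paper).

import numpy as np

def pearson_cc(pred, gt, eps=1e-12):
    # Pearson correlation between pixel values of a generated image and the ground truth.
    x = pred.astype(np.float64).ravel()
    y = gt.astype(np.float64).ravel()
    x -= x.mean()
    y -= y.mean()
    return float((x * y).sum() / (np.linalg.norm(x) * np.linalg.norm(y) + eps))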
The larger the PCC value, the stronger the relevance between visual stimuli and generated images. Structure Similarity Index Measure (SSIM). We adopt SSIM to evaluate the reconstruction faithfulness of the generated results. As analyzed in (Wang et al. 2004), the structural similarity of two images is measured by three different factors, brightness, contrast, and structure, where the mean The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6939 Ground Truth Ours MinD-Vis IC-GAN Gaziv Beliy Figure 3: Comparison with four SOTA methods on the GOD dataset. is used as the estimate of brightness, the standard deviation as the estimate of contrast, and the covariance as the measurement of structural similarity. Comparison with State-of-the-Art Methods Methods. We compare our CMVDM with four stateof-the-art (SOTA) methods: MinD-Vis, IC-GANs (Ozcelik et al. 2022), Gaziv (Gaziv et al. 2022), and Beliy (Beliy et al. 2019). We use their official pretrained models for all the comparisons, which are trained on the GOD dataset. For the BOLD5000 dataset, we only compare with the official pretrained MinD-Vis model, because other works (Beliy et al. 2019; Gaziv et al. 2022; Ozcelik et al. 2022) did not conduct experiments and release their models on BOLD5000. Results on the GOD Dataset. We conduct a quantitative comparison between CMVDM and the four SOTA models using the testing dataset of GOD. Table 1 summarizes the results, revealing that CMVDM overall outperforms the other methods significantly. Compared to MinD-Vis and ICGAN, both of which yield good results, CMVDM outperforms them significantly in terms of SSIM. This indicates that the images generated by CMVDM exhibit a higher degree of resemblance to the visual stimuli in terms of object silhouette and image structure. Additionally, Fig. 3 demonstrates that CMVDM generates visually impressive images with semantic and structural information closest to the visual stimuli. Gaziv achieves remarkable results in terms of SSIM, but their accuracy reported in Table 1 and visual results presented in Fig. 3 demonstrate that their method is not Method Acc (%) PCC SSIM MinD-Vis 26.644 0.53159 0.54489 MinD-Vis+Lalign 27.362 0.56686 0.52628 MinD-Vis+Control Model 28.438 0.75730 0.63404 CMVDM 30.112 0.76751 0.63167 Table 2: Ablation study of CMVDM’s components. capable of generating high-fidelity images. Results on the BOLD5000 Dataset. We conduct a comparative analysis between our CMVDM and the most recent method MinD-Vis using the testing dataset of BOLD5000. As depicted in Table 1, it is evident that CMVDM consistently outperforms MinD-Vis across all evaluation metrics. Additionally, Fig. 4 provides visualizations of some results from both methods, clearly demonstrating that CMVDM generates more realistic outcomes that are more similar to the GT visual stimuli. Notably, the BOLD5000 dataset, being more complex than the GOD dataset, further validates the effectiveness of our proposed method. Ablation Study We further conduct experiments on the GOD dataset to analyze the effectiveness of each module of CMVDM. Specifically, we employ MinD-Vis as the baseline and design two comparison models: (1) adding the semantic align loss Lalign to MinD-Vis, (2) adding the control model to MinDVis. The results, presented in Table 2, demonstrate the efThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6940 Ground Truth Ours MinD-Vis Figure 4: Comparison with MinD-Vis on the BOLD5000 dataset. 
ficacy of both Lalign and the control model within our CMVDM. MinD-Vis with Lalign yields improved results in terms of ACC and PCC, which illustrate that Lalign can improve the capability of CMVDM to obtain semantic information. Furthermore, MinD-Vis+Control Model outperforms MinD-Vis+Lalign in each metric, particularly in SSIM, indicating that the silhouette contains valuable semantic information that is used in the control model. Consistency Analysis To further verify the generative stability of CMVDM, we conduct an analysis to compare the consistency of two diffusion-based methods. As shown in Fig. 5, we sample three images reconstructed by CMVDM and MinD-Vis from the same fMRI signal. The images generated by CMVDM demonstrate a high degree of consistency to GT images both semantically and structurally. However, the results generated by MinD-Vis are capable of reproducing GT images semantically but are not consistent in structure. Further Analysis The impact of using the residual module Fres in our CMVDM is significant on the BOLD5000 dataset, as demonstrated in Table 3. However, the effect of Fres on the GOD dataset is not as pronounced. We believe that there are two reasons for this discrepancy. Firstly, the voxels of a single fMRI signal provided by the BOLD5000 dataset are Ground Truth Sample-1 Sample-2 Ours MinD-Vis Sample-3 Figure 5: Consistency analysis of the generated results. Dataset Method Acc(%) PCC SSIM BOLD5000 w/o Fres 25.393 0.54184 0.52951 w Fres 27.791 0.55691 0.53459 GOD w/o Fres 29.436 0.75837 0.63894 w Fres 30.112 0.76751 0.63167 Table 3: Quantitative analysis of the residual block in CMVDM. much less than that provided by the GOD dataset, making it more challenging to extract valid semantic and silhouette information from BOLD5000. Therefore, Fres is necessary to compensate for the information gap. Secondly, compared to GOD, BOLD5000 has more diverse images, including scenes that are not present in GOD. The semantic judgment and position alignment of the images in BOLD5000 are more complex than those in GOD. Therefore, we utilize Fres to provide more information and improve the reconstruction performance. Conclusion In this paper, we propose a Controllable Mind Visual Diffusion Model (CMVDM) for decoding fMRI signals. Firstly, we simultaneously train a semantic encoder and perform finetuning on a pretrained latent diffusion model to generate semantically consistent images from fMRI signals. Secondly, we incorporate a silhouette extractor to derive reliable position information from fMRI signals. Furthermore, we design a control model to ensure CMVDM generates semantically-consistent and spatially-aligned images with the original visual stimuli. Extensive experiments demonstrate that our approach achieves state-of-the-art performance in generating high-quality images from fMRI signals. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6941 Acknowledgements This research was supported by Zhejiang Provincial Natural Science Foundation of China under Grant No. LD24F020007, Beijing Natural Science Foundation L223024, National Natural Science Foundation of China under Grant 62076016. The work was also supported by the National Key Research and Development Program of China (Grant No. 2023YFC3300029) and “One Thousand Plan” projects in Jiangxi Province Jxsg2023102268 and ATR key laboratory grant 220402. References Akamatsu, Y.; Harakawa, R.; Ogawa, T.; and Haseyama, M. 2020. 
Brain decoding of viewed image categories via semisupervised multi-view Bayesian generative model. IEEE Transactions on Signal Processing. Anciukeviˇcius, T.; Xu, Z.; Fisher, M.; Henderson, P.; Bilen, H.; Mitra, N. J.; and Guerrero, P. 2022. RenderDiffusion: Image Diffusion for 3D Reconstruction, Inpainting and Generation. arXiv:2211.09869. Baranchuk, D.; Voynov, A.; Rubachev, I.; Khrulkov, V.; and Babenko, A. 2021. Label-efficient semantic segmentation with diffusion models. In ICLR. Beliy, R.; Gaziv, G.; Hoogi, A.; Strappini, F.; Golan, T.; and Irani, M. 2019. From voxels to pixels and back: Selfsupervision in natural-image reconstruction from fMRI. In NeurIPS. Chang, N.; Pyles, J. A.; Marcus, A.; Gupta, A.; Tarr, M. J.; and Aminoff, E. M. 2019. BOLD5000, a public fMRI dataset while viewing 5000 visual images. Scientific data. Chen, S.; Sun, P.; Song, Y.; and Luo, P. 2022. Diffusiondet: Diffusion model for object detection. arXiv:2211.09788. Chen, Z.; Qing, J.; Xiang, T.; Yue, W. L.; and Zhou, J. H. 2023. Seeing beyond the brain: Conditional diffusion model with sparse masked modeling for vision decoding. In CVPR. Damarla, S. R.; and Just, M. A. 2013. Decoding the representation of numerical values from brain activation patterns. Human Brain Mapping. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and FeiFei, L. 2009. Imagenet: A large-scale hierarchical image database. In CVPR. Dhariwal, P.; and Nichol, A. 2021. Diffusion models beat gans on image synthesis. In NeurIPS. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Gao, S.; Liu, X.; Zeng, B.; Xu, S.; Li, Y.; Luo, X.; Liu, J.; Zhen, X.; and Zhang, B. 2023. Implicit Diffusion Models for Continuous Super-Resolution. arXiv preprint arXiv:2303.16491. Gaziv, G.; Beliy, R.; Granot, N.; Hoogi, A.; Strappini, F.; Golan, T.; and Irani, M. 2022. Self-supervised natural image reconstruction and large-scale semantic classification from brain activity. NeuroImage. Ho, J.; Chan, W.; Saharia, C.; Whang, J.; Gao, R.; Gritsenko, A.; Kingma, D. P.; Poole, B.; Norouzi, M.; Fleet, D. J.; et al. 2022a. Imagen video: High definition video generation with diffusion models. arXiv:2210.02303. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. In NeurIPS. Ho, J.; Salimans, T.; Gritsenko, A.; Chan, W.; Norouzi, M.; and Fleet, D. J. 2022b. Video diffusion models. In NeurIPS. Horikawa, T.; and Kamitani, Y. 2017. Generic decoding of seen and imagined objects using hierarchical visual features. Nature Communications. Kay, K. N.; Naselaris, T.; Prenger, R. J.; and Gallant, J. L. 2008. Identifying natural images from human brain activity. Nature. Kingma, D. P.; and Ba, J. 2015. Adam: A method for stochastic optimization. In ICLR. Kong, Z.; Ping, W.; Huang, J.; Zhao, K.; and Catanzaro, B. 2020. Diffwave: A versatile diffusion model for audio synthesis. arXiv:2009.09761. Kriegeskorte, N.; Formisano, E.; Sorger, B.; and Goebel, R. 2007. Individual faces elicit distinct response patterns in human anterior temporal cortex. Proceedings of the National Academy of Sciences, 104(51): 20600–20605. Kulikov, V.; Yadin, S.; Kleiner, M.; and Michaeli, T. 2022. SinDDM: A Single Image Denoising Diffusion Model. arXiv:2211.16582. Li, H.; Yang, Y.; Chang, M.; Chen, S.; Feng, H.; Xu, Z.; Li, Q.; and Chen, Y. 2022a. 
Srdiff: Single image superresolution with diffusion probabilistic models. Neurocomputing. Li, M.; Duan, Y.; Zhou, J.; and Lu, J. 2022b. Diffusion-SDF: Text-to-Shape via Voxelized Diffusion. arXiv:2212.03293. Lin, C.-H.; Gao, J.; Tang, L.; Takikawa, T.; Zeng, X.; Huang, X.; Kreis, K.; Fidler, S.; Liu, M.-Y.; and Lin, T.-Y. 2022. Magic3D: High-Resolution Text-to-3D Content Creation. arXiv:2211.10440. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Doll´ar, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In ECCV. Liu, J.; Li, C.; Ren, Y.; Chen, F.; Liu, P.; and Zhao, Z. 2021. Diffsinger: Diffusion acoustic model for singing voice synthesis. arXiv:2105.02446. Loshchilov, I.; and Hutter, F. 2017. Decoupled weight decay regularization. In ICLR. Luo, S.; and Hu, W. 2021. Diffusion probabilistic models for 3d point cloud generation. In CVPR. Milekovic, T.; Sarma, A. A.; Bacher, D.; Simeral, J. D.; Saab, J.; Pandarinath, C.; Sorice, B. L.; Blabe, C.; Oakley, E. M.; Tringale, K. R.; et al. 2018. Stable long-term BCIenabled communication in ALS and locked-in syndrome using LFP signals. Journal of Neurophysiology, 120(7): 343– 360. Miyawaki, Y.; Uchida, H.; Yamashita, O.; Sato, M.-a.; Morito, Y.; Tanabe, H. C.; Sadato, N.; and Kamitani, Y. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6942 2008. Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron, 60(5): 915–929. Naselaris, T.; Prenger, R. J.; Kay, K. N.; Oliver, M.; and Gallant, J. L. 2009. Bayesian reconstruction of natural images from human brain activity. Neuron, 63(6): 902–915. Nichol, A.; Dhariwal, P.; Ramesh, A.; Shyam, P.; Mishkin, P.; McGrew, B.; Sutskever, I.; and Chen, M. 2021. Glide: Towards photorealistic image generation and editing with textguided diffusion models. arXiv:2112.10741. Nicolas-Alonso, L. F.; and Gomez-Gil, J. 2012. Brain computer interfaces, a review. Sensors, 12(2): 1211–1279. Nishimoto, S.; Vu, A. T.; Naselaris, T.; Benjamini, Y.; Yu, B.; and Gallant, J. L. 2011. Reconstructing visual experiences from brain activity evoked by natural movies. Current biology, 21(19): 1641–1646. Ozcelik, F.; Choksi, B.; Mozafari, M.; Reddy, L.; and VanRullen, R. 2022. Reconstruction of perceived images from fMRI patterns and semantic brain exploration using instance-conditioned GANs. In IJCNN. Peebles, W.; and Xie, S. 2022. Scalable Diffusion Models with Transformers. arXiv:2212.09748. Poole, B.; Jain, A.; Barron, J. T.; and Mildenhall, B. 2022. Dreamfusion: Text-to-3d using 2d diffusion. arXiv:2209.14988. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In ICML. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In CVPR. Saharia, C.; Ho, J.; Chan, W.; Salimans, T.; Fleet, D. J.; and Norouzi, M. 2022. Image super-resolution via iterative refinement. TPAMI. Schoenmakers, S.; Barth, M.; Heskes, T.; and Van Gerven, M. 2013. Linear reconstruction of perceived images from human brain activity. NeuroImage, 83: 951–961. Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; and Ganguli, S. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In ICML. Song, J.; Meng, C.; and Ermon, S. 2020. Denoising diffusion implicit models. arXiv:2010.02502. 
Takagi, Y.; and Nishimoto, S. 2023. High-resolution image reconstruction with latent diffusion models from human brain activity. In CVPR. Tevet, G.; Raab, S.; Gordon, B.; Shafir, Y.; Cohen-Or, D.; and Bermano, A. H. 2022. Human motion diffusion model. arXiv:2209.14916. Vahdat, A.; Kreis, K.; and Kautz, J. 2021. Score-based generative modeling in latent space. In NeurIPS. Van Essen, D. C.; Smith, S. M.; Barch, D. M.; Behrens, T. E.; Yacoub, E.; Ugurbil, K.; Consortium, W.-M. H.; et al. 2013. The WU-Minn human connectome project: an overview. NeuroImage. Van Gerven, M. A.; Cseke, B.; De Lange, F. P.; and Heskes, T. 2010. Efficient Bayesian multivariate fMRI analysis using a sparsifying spatio-temporal prior. NeuroImage. Van Gerven, M. A.; De Lange, F. P.; and Heskes, T. 2010. Neural decoding with hierarchical generative models. Neural Computation, 22(12): 3127–3142. Wang, W.; Bao, J.; Zhou, W.; Chen, D.; Chen, D.; Yuan, L.; and Li, H. 2022. SinDiffusion: Learning a Diffusion Model from a Single Natural Image. arXiv:2211.12445. Wang, Z.; Bovik, A. C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image quality assessment: from error visibility to structural similarity. TIP. Xiao, J.; Hays, J.; Ehinger, K. A.; Oliva, A.; and Torralba, A. 2010. Sun database: Large-scale scene recognition from abbey to zoo. In CVPR. Xiao, Z.; Kreis, K.; and Vahdat, A. 2022. Tackling the Generative Learning Trilemma with Denoising Diffusion GANs. In ICLR. Yamins, D. L.; Hong, H.; Cadieu, C. F.; Solomon, E. A.; Seibert, D.; and DiCarlo, J. J. 2014. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23): 8619–8624. Zeng, B.; Liu, X.; Gao, S.; Liu, B.; Li, H.; Liu, J.; and Zhang, B. 2023. Face Animation with an Attribute-Guided Diffusion Model. arXiv preprint arXiv:2304.03199. Zhang, L.; and Agrawala, M. 2023. Adding conditional control to text-to-image diffusion models. arXiv preprint arXiv:2302.05543. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6943 | 2024 | 771 |
18,597 | MGQFormer: Mask-Guided Query-Based Transformer for Image Manipulation Localization Kunlun Zeng*, Ri Cheng*, Weimin Tan†, Bo Yan† School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University [email protected], [email protected], [email protected], [email protected] Abstract Deep learning-based models have made great progress in image tampering localization, which aims to distinguish between manipulated and authentic regions. However, these models suffer from inefficient training. This is because they use ground-truth mask labels mainly through the crossentropy loss, which prioritizes per-pixel precision but disregards the spatial location and shape details of manipulated regions. To address this problem, we propose a Mask-Guided Query-based Transformer Framework (MGQFormer), which uses ground-truth masks to guide the learnable query token (LQT) in identifying the forged regions. Specifically, we extract feature embeddings of ground-truth masks as the guiding query token (GQT) and feed GQT and LQT into MGQFormer to estimate fake regions, respectively. Then we make MGQFormer learn the position and shape information in ground-truth mask labels by proposing a mask-guided loss to reduce the feature distance between GQT and LQT. We also observe that such mask-guided training strategy has a significant impact on the convergence speed of MGQFormer training. Extensive experiments on multiple benchmarks show that our method significantly improves over state-of-the-art methods. Introduction Digital image manipulation risk has become more serious in recent years due to advances in deep generative models and editing techniques. An increasing number of image processing applications are emerging and easily accessible to produce tampered images in a visually imperceptible way. Traditional local image editing method including copy-move, splicing, and removal, as a currently common forgery category, requires meticulous and skillful processing. Recent deep generative models, such as GAN (Goodfellow et al. 2020) and diffusion (Ho, Jain, and Abbeel 2020) models, can generate realistic false contents in designated areas or use language prompts to modify image semantics and style. As a result, these manipulated images cause various social security issues and can mislead the public. Consequently, it is a realistic requirement to develop a reliable model to accurately locate the manipulated region. *These authors contributed equally. †Corresponding authors: Bo Yan, Weimin Tan. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: The difference between previous methods and ours. Our method uses a query-based transformer which is efficient and explainable. Token similarity means the softmax result of the scalar product between the image feature and query tokens. In addition, we use ground-truth masks to guide the learnable query token (LQT) in identifying the authentic and forged regions. Despite significant advancements that have been made, existing image manipulation localization networks suffer from two shortcomings that lead to poor performance. First, these methods employ the convolutional neural network (CNN) in the final decoder process to classify the per-pixel feature (Lin et al. 2023; Zhang et al. 2021), as shown in Figure 1 (a). However, the local receptive field property of convolution filters restricts access to global information in the image. 
To address this problem, we propose using a querybased transformer for the image manipulation localization task. Figure 1 (b) shows that the query-based single-stage method utilizes learnable query tokens (LQT) to select pixel embeddings that are highly similar to itself, which makes network processes more explainable and effectively exploits The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6944 the attention mechanism of the transformer. The second shortcoming is that these image manipulation localization networks mainly utilize ground-truth mask labels through cross-entropy loss. However, cross-entropy loss does not exploit the spatial location and shape details of manipulated areas. This is because cross-entropy loss operates at the pixel level to evaluate whether each position estimation is correct, stressing per-pixel precision. As a result, the network training is inefficient. To address this problem, we feed ground-truth masks into the MGQFormer to guide the network focusing on forged regions, leading to an efficient training process. In this paper, we propose a Mask-Guided Query-based Transformer framework (MGQFormer), which uses groundtruth masks to guide the learnable query token (LQT) in identifying forged regions. During training, we first use a multi-branch feature extractor to extract space-channelaware features from the RGB input image. It uses two distinct transformer encoders to extract features from an RGB input image and its noise map, respectively. Then, it uses spatial and channel attention to fuse RGB image and noise map features with different distributions and domains. Finally, the fused feature is fed into our proposed query-based transformer decoder to output the location of the forged region in the image. As shown in Figure 1(b), we utilize authentic and forged LQT to distinguish manipulated regions from authentic ones. The closer a token is to the authentic query token, the more authentic it is, whereas the closer it is to a forged query token, the more fraudulent it is. In order to force the LQT to concentrate on the forged regions, we extract the ground-truth mask feature as authentic and forged guiding query tokens (GQT) and input them into the decoder to also estimate the location of forged regions. Since GQT comes from the ground-truth mask, which is the target of the predicted mask, the GQT will contain the spatial location and shape details of forged regions. Hence, we propose a mask-guided loss to reduce the feature distance between GQT and LQT. After the model is trained, the LQT also makes the network focus on the position and shape of the forge region. As a result, we only use LQT during inference to locate manipulated regions in our query-based transformer decoder. In summary, our main contributions are summarized as follows: • We introduce the Mask-Guided Query-based Transformer, which contains a query-based transformer decoder utilizing the learnable query token (LQT) to locate the manipulated regions. • We propose a mask-guided training approach, which applies the guiding query token (GQT) extracted from the GT mask as the guidance to refine LQT. In addition, we design mask-guided loss to force the GQT to guide LQT concentrating on the spatial location and shape details of manipulated regions. • We conduct extensive experiments on multiple benchmarks and demonstrate that our method achieves stateof-the-art performance on several datasets. 
Related Work Image Manipulation Localization Although early methods achieve excellent performance on a specific type of manipulation, including splicing (Cozzolino, Poggi, and Verdoliva 2015b; Huh et al. 2018; Kniaz, Knyaz, and Remondino 2019; Lyu, Pan, and Zhang 2014; Salloum, Ren, and Kuo 2018; Wu, Abd-Almageed, and Natarajan 2017), copy-move (Cozzolino, Poggi, and Verdoliva 2015a; D’Amiano et al. 2018; Islam et al. 2020; Wu, Abd-Almageed, and Natarajan 2018b), and removal (Wu and Zhou 2021; Wu, Abd-Almageed, and Natarajan 2018a; Yang et al. 2020; Zhu et al. 2018), they cannot generalize well to other unknown and diverse forgery, restricting their practical application. Recent studies (Zhou et al. 2018; Wu, AbdAlmageed, and Natarajan 2019; Hu et al. 2020; Liu et al. 2022; Wang et al. 2022; Chen et al. 2021; Cozzolino and Verdoliva 2019) attempt to build a unified model to tackle multiple forgery types. RGB-N (Zhou et al. 2018) adopts the steganalysis-rich model and Faster R-CNN (Ren et al. 2015), but it can only output bounding boxes instead of segmenting masks. SPAN (Hu et al. 2020) models spatial correlation at multiple scales through the pyramid structure of local self-attention blocks. PSCC-Net (Liu et al. 2022) utilizes a progressive mechanism and spatial and channel-wise correlations to enhance feature representation. ObjectFormer (Wang et al. 2022) combines RGB features and frequency features to identify the tampering artifacts, and ERMPC (Li et al. 2023) exploits the edge information to model the inconsistency between the forged and authentic regions. In this work, we exploit the novel query-based model and fulfill the task by introducing ground-truth masks that serve as guidance. Efficient Training for Query-based Transformers Query-based transformers employ learnable query embeddings to generate predictions (Strudel et al. 2021; Li et al. 2022b; Cheng, Schwing, and Kirillov 2021), and benefit from global attention that they can capture information from the whole image, achieving a better result than convolutional networks. However, it causes the problem that the process of training becomes difficult due to the global computation. For example, DETR (Zhu et al. 2020) suffers from low efficiency in training, which requires 500 epochs. Therefore, methods aiming to ease training for Transformers are proposed. DN-DETR (Li et al. 2022a) comes up with the idea of adding noised ground-truth boxes as positional queries for denoising training, and this approach is proved effective to speed up detection. Except for the target of the detection, Mask2Former (Cheng et al. 2022) proposes mask attention in segmentation which adds predicted masks as attention masks and speeds up query refinement compared with other query-based models. FastInst (He et al. 2023) uses instance activation-guided queries, which selects pixels with high semantics from feature maps and holds rich information about potential objects at the initial to improve the efficiency of query iterations in the Transformer decoder. MP-Former (Zhang et al. 2023) aims to address the inconsistent prediction problem in Mask2Former that leads to the The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6945 Figure 2: An overview of the proposed framework MGQFormer consisting of two-branch transformer encoder, a fusing module, and a mask-guided transformer decoder. 
During training, the input is a suspicious image (H × W × 3) and a ground-truth mask, the output includes a predicted mask and an auxiliary mask (H × W × 1) which are both involved in loss computation (Lloc and Laux). Note that the red-line part is mask-guided training and is not required during inference. low utilization of queries and uses noised GT masks as the attention masks to further stable training at the early stage. These methods apply the guidance to Transformer decoder and intend to refine class queries efficiently. Our approach differs from previous query-based segmentation methods as we use extra tokens and encode the ground truth mask into GQT. We further propose auxiliary loss and mask-guided loss to guide LQT refinement. Method Our approach aims to identify the manipulated area in a suspicious image using a mask-guided query-based transformer (MGQFormer). Figure 2 is an overview of our framework. We denote the input image as X ∈RH×W ×3, where H and W are the height and width of the image, respectively. We first extract the RGB and noise feature from the input image with BayarConv and Transformer Encoder. Then, the multi-modal features are fused by a spatial and channel attention module (SCAM). We design two learnable query tokens (LQT) to represent authentic and forged features, which are used to search manipulated regions in our proposed query-based transformer decoder. To make the query token refine effectively and our query-based decoder converge rapidly, we propose a mask-guided training strategy, which exploits the ground-truth mask’s spatial location and shape details. Specifically, we input the noised GT masks into MGQFormer to obtain guiding query token (GQT) and auxiliary masks Maux. Then, an auxiliary loss Laux is used to make GQT contain forged regions’ spatial and shape information. Furthermore, we propose a mask-guided loss Lguide to reduce the distance between LQT and GQT. Multi-Branch Feature Extractor The image manipulation localization usually contains elaborate post-processing, making detecting minute differences and forgery traces challenging for the RGB domain. Therefore, we employ a two-branch transformer encoder to entirely exploit the information from two domains. A BayarConv first processes the input image X to extract the noise feature Xn ∈RH×W ×3. Then the input image and noise map are sent to Transformer Encoder. Specifically, we divide X and Xn into patches with size P, and the patch is reshaped to embeddings Xp ∈RN×D, where N = HW/P 2 is the number of patches, and D is the dimension of the embedding. Learnable position embeddings pos ∈RN×D are added to image embeddings to produce the sequenced tokens Z = Xp + pos, then these tokens are processed through L Transformer layers. The same settlement mentioned above is also performed on the noise branch. After the Transformer Encoder, the output of two branches is concatenated and we get Zc ∈RN×2D, which is used for subsequent fusing. The contextualized tokens from the two-branch Transformer Encoder have distinct domains and separate distributions. Therefore, we use the Spatial and Channel Attention Module (SCAM) to fulfill this task. We first reshape the tokens Zc and use a convolution layer to get Zm ∈Rh×w×c, where h = H/P, w = W/P and c = D. 
Next we project The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6946 and transpose Zm as V = proj(Zm) ∈Rhw×c, K = proj(Zm) ∈Rhw×c and Q = transpose(proj(Zm)) ∈ Rc×hw, where each proj is a distinct projection layer including a 1 × 1 convolution and a reshape operation. Then we perform the channel attention module as follows: CAM(Zm) = proj(V (softmax(QK))). (1) In the meanwhile, we proceed to compute spatial attention, nearly the same as channel attention, except for transposed Q and K. Subsequently, we can obtain the contextualized tokens as follows: SAM(Zm) = proj(softmax(QT KT )V ), (2) Zf = CAM(Zm) + SAM(Zm) + Zm. (3) Then the image feature tokens Zf ∈RN×D are sent to query-based transformer decoder. Mask Transformer Decoder We first introduce the decoder at the stage of inference. For the proposed query-based transformer decoder, we employ authentic and forged learnable query tokens LQT ∈ R2×D. The queries are randomly initialized and represent the forged and authentic features. Specifically, the image feature tokens Zf and LQT are processed simultaneously by the decoder consisting of n transformer-based layers. During the attention mechanism, LQT interacts with feature tokens Zf and extracts the rich forgery information. After that, we obtain the contextualized image feature Z∗ f and the LQT∗. Then, the mask is computed as follows: M ∗= norm(proj(Z∗ f)) ∗(norm(proj(LQT∗))T , (4) where proj is a linear layer, norm represents L2 normalization, and we get the M ∗∈RN×2 by performing a scalar product between refined image features and learnable query tokens. To get the final mask, we reshape the sequence to the mask M ∗∗∈Rh×w×2 and apply a softmax on the class dimension: M = upsample(norm(softmax(M ∗∗))), (5) where M ∈RH×W is the predicted mask, and upsample is a bilinear upsampling operation to resize the mask to the same size as the input image. In conclusion, our query-based method utilizes authentic and forged LQT to select regions that are highly similar to itself, which makes the process of predicting the forged regions more explainable and effective. Next, we will describe the training stage for the transformer decoder in the following section. Mask-Guided Training The query-based model has achieved great success in the corresponding tasks. However, these models have been proven to suffer from the low efficiency of query refinement. Previous methods have proposed approaches like denoising (Li et al. 2022a) and masked attention (Cheng et al. 2022). We point out that previous methods lack direct supervision of LQT by the location and shape details of forged regions, leading to inefficient training. These methods mainly utilize ground-truth masks through cross-entropy loss, prioritizing per-pixel precision. To address this issue, we propose a mask-guided training strategy, which uses guiding query tokens (GQT) to force the LQT to focus on the location and shape of forged regions. GQT is obtained by extracting the feature of the noised ground-truth mask, and we use the auxiliary loss to make GQT contain the spatial and shape information of forged regions. As a result, the convergence speed of MGQFormer training will be improved. Specifically, we first add noise to the ground-truth mask. This step is because predicting the auxiliary mask from the pristine ground-truth mask may be too simple for transformer decoder and retard training. We apply point noises to the mask, analogous to DN-DETR (Li et al. 2022a) for box denoising training, to obtain more robust models. 
We randomly select the points within the mask and invert the original value to represent the distinct region. In addition, we use a tuned parameter µ to denote the noised percentage of area, so the number of noised points is µ · HW. Given the noised mask, we further convert the mask to GQT with a convolution network to maintain the spatial information in the mask, and the ground-truth mask G ∈ RH×W is transformed into the GQT ∈R2×N. After that, the GQT together with image features Zf and LQT are sent to the transformer decoder. In the decoder, the groundtruth information GQT serves as guidance to interact with other queries and assists the decoder in refining the LQT. After the transformer decoder, we obtain the image feature Z∗ f and query tokens LQT∗and GQT∗, which are already guided by the ground-truth token GQT. An auxiliary mask Maux ∈RH×W is further calculated by performing scalar product on Z∗ f and GQT∗with the same process described in the mask transformer decoder section. We then make the Maux get involved in the loss computation. Auxiliary Loss. Since we employ a convolution network to convert the ground-truth mask to queries, and the mask is noised to keep robustness, supervision for the convolution network is needed to make the auxiliary mask more accurate. Therefore, we use the pixel-level cross-entropy loss as follows to make GQT contain the spatial and shape information of forged regions: Laux = − HW X i=1 Gi · log(Maux,i) (6) where G ∈RH×W is the ground-truth mask. Note that we calculate the auxiliary loss using the pristine GT masks G without applying noises to make the model predict the desired accurate mask. Mask-Gudied Loss. The purpose of GQT is to guide the LQT, and both are processed the same to generate the predicted mask M and auxiliary mask Maux. Thus, we expect the LQT to become similar to GQT to make the prediction more precise. A cosine similarity loss is applied to reduce the distance of both queries, which can be formulated as: Lguide = 1 −cos(LQT∗, GQT∗) (7) where cos denotes computing the cosine similarity. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6947 Method Col. CASIA NIST16 IMD20 Avg. ManTra. 82.4 81.7 79.5 74.8 79.6 SPAN 93.6 79.7 84.0 75.0 83.1 Object. 95.5 84.3 87.2 82.1 87.3 ERMPC 96.8 87.6 89.5 85.6 89.9 Ours 97.1 88.6 86.2 88.3 90.1 Table 1: Comparison of manipulation localization AUC(%) scores of different pre-trained models. Loss Function The total loss function L includes three parts: the auxiliary loss to make Maux accurate, the mask-guided loss to make LQT∗and GQT∗closer, and the localization loss Lloc for the predicted mask M, which employs the same crossentropy loss as auxiliary loss: L = Lloc + Laux + λLguide (8) where λ is a weight parameter and set to 0.5 during training. Experiment Experiment Setup Testing Datasets. We first pre-train our model with the dataset synthesized by PSCC-Net (Liu et al. 2022). Then we evaluate our model on CASIA dataset (Dong, Wang, and Tan 2013), Columbia dataset (Hsu and Chang 2006), NIST16 dataset (Guan et al. 2019) and IMD20 dataset (Novozamsky, Mahdian, and Saic 2020). Specifically, CASIA provides splicing and copy-move images, which widely appear in the image forgery field. Columbia consists of 180 splicing images, which are uncompressed and lack post-process. NIST16 is a challenging dataset with 564 high-resolution images that are hard for the eyes to recognize. 
IMD20 collects 35,000 real images captured by different camera models and is comprised of different types of manipulation generated by various inpainting methods. Evaluation Metrics. To evaluate the localization performance of the proposed MGQFormer, following PSCCNet (Liu et al. 2022), we report the image-level F1 score and Area Under Curve (AUC) as the evaluation metric. We adopt the fixed threshold to binarize the predicted masks, which are necessary to calculate F1 scores. Implementation Details. The MGQFormer is implemented on the Pytorch with an NVIDIA GTX 1080 Ti GPU. All input images are resized to 384 × 384. We use Adam as the optimizer, and the learning rate decays from 2.5e-7 to 1.5e-8 with a batch size of 2. The feature extractor is initialized using the ImageNet pre-trained ViT model weights (Steiner et al. 2021) with 12 layers and a patch size of 16, while the decoder is initialized using random weights from a truncated normal distribution with 6 layers. Comparison with State-of-the-Art Methods We compare our model with other state-of-the-art methods under two settings: 1) training on the synthetic dataset and evaluating on the full test datasets. 2) fine-tuning the Method CASIA AUC F1 RGB-N 79.5 40.8 SPAN 83.8 38.2 PSCCNet 87.5 55.4 ObjectFormer 88.2 57.9 ERMPC 90.4 58.6 Ours 91.5 58.8 Table 2: Comparison of manipulation localization results using fine-tuned models. pre-trained model on the training split of test datasets and evaluating on their test split. For the pre-trained model, we evaluate the performance with ManTraNet (Wu, AbdAlmageed, and Natarajan 2019), SPAN (Hu et al. 2020), ObjectFormer (Wang et al. 2022), and ERMPC (Li et al. 2023), while further comparing with RGB-N (Zhou et al. 2018) and PSCCNet (Liu et al. 2022) for the fine-tuned model. Pre-trained Model. Table 1 reports the best localization AUC(%) scores with pre-trained models. We can observe that MGQFormer achieves the highest performance on Columbia, CASIA, IMD20 and the average AUC(%) of all datasets, and gets competitive performance on NIST16. In particular, MGQFormer achieves 88.3 % on the real-world IMD20 dataset and outperforms ERMPC by 2.7%. This validates our method has the outstanding ability of capturing tampering traces and the generalization to high-quality datasets. On NIST16 dataset, we fail to achieve the best performance. We believe that the performance of Transformer networks is influenced by the training resolution. High performance can be fully achieved if the resolution at test time is close to training. However, NIST16 is a high-resolution dataset that greatly exceeds our training dataset. Fine-tuned Model. To compensate for the difference in visual quality between the synthesized datasets and standard datasets, the network weights of the pre-trained model are used to initiate the fine-tuned models, which will be trained on the training split of CASIA dataset. As shown in Table 2, we compare the AUC and F1 results (%) with other methods, and our model achieves the best performance, which demonstrates that MGQFormer can capture subtle tampering artifacts effectively by query. Robustness Evaluation We apply different image distortion methods on raw images from Columbia dataset and evaluate the robustness of our MGQFormer. The distortion types include: 1) Resize the image with different scales, 2) Gaussian blurring with a kernel size k, and 3) JPEG compression with a quality factor q. 
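To make these corruption settings reproducible, one possible OpenCV-based sketch of the three distortions is shown below. The helper name distort and the choice to resize the image back to its original resolution after rescaling are assumptions made for illustration, not details taken from the paper.

import cv2
import numpy as np

def distort(img: np.ndarray, kind: str, value):
    # img: H x W x 3 uint8 image; returns the corrupted copy used for robustness testing
    if kind == "resize":      # value = scale factor, e.g. 0.78 or 0.25
        h, w = img.shape[:2]
        small = cv2.resize(img, (max(1, int(w * value)), max(1, int(h * value))))
        return cv2.resize(small, (w, h))        # assumed: restore original size afterwards
    if kind == "blur":        # value = Gaussian kernel size k (odd), e.g. 3 or 15
        return cv2.GaussianBlur(img, (value, value), 0)
    if kind == "jpeg":        # value = JPEG quality factor q, e.g. 100 or 50
        ok, buf = cv2.imencode(".jpg", img, [int(cv2.IMWRITE_JPEG_QUALITY), int(value)])
        return cv2.imdecode(buf, cv2.IMREAD_COLOR)
    raise ValueError(f"unknown distortion: {kind}")

For instance, distort(img, "blur", 15) would correspond to the harshest Gaussian blur setting reported in the robustness comparison.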
We compare the manipulation localization performance (AUC scores) of our pre-trained models on the pristine dataset and the corrupted data, and report the results in Table 3. Compared to previous methods, MGQFormer has the best robustness against all distortions. Especially, when facing the resizing and JPEG Compress, the performance of our method The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6948 Figure 3: AUC scores (%) of MGQFormer without and with mask-guided training on the validation split of the synthesized dataset in different training epochs. Distortion SPAN PSCC-Net Ours no distortion 93.6 98.19 97.10 Resize (0.78×) 89.99 93.40 96.69 ↓0.41 Resize (0.25×) 69.08 78.41 96.85 ↓0.25 Blur (k=3) 78.97 84.18 92.75 ↓4.35 Blur (k=15) 67.7 73.24 77.84 ↓19.26 Compress (q=100) 93.32 97.97 97.05 ↓0.06 Compress (q=50) 74.62 89.11 95.62 ↓1.48 Table 3: Localization performance on Columbia dataset under various distortions. AUC scores are reported (in %), (Blur: Gaussian Blur, Compress: JPEG Compress). drops a little, denoting that the patch-wise MGQFormer has robustness against low-quality images. Ablation Analysis The design of MGQFormer contains the multi-branch feature extractor and mask-guided training. The multi-branch feature extractor employs an additional BayarConv branch to exploit the noise information and fuse both domains using SCAM. The mask-guided training is utilized to add groundtruth information, which guides the LQT to focus on the target area and improve the efficiency of the query refinement. Ablation Study of Noise Branch. The quantitative results are listed in Table 4. The baseline denotes that we just use a single encoder and the query-based transformer decoder. To evaluate the the effectiveness of noise branch, we use a single RGB branch and remove SCAM. We can observe that without the noise branch, the AUC scores drop by 1.1% on Columbia and 2.3% on CASIA. The performance promotion validates that the use of multi-branch feature extractor effectively improves the performance of our model. Ablation Study of Mask-guided Training. To prove the impression of Mask-guided training, we leave only LQT in Transformer decoder with the image feature and take out the input of ground-truth mask during training. As shown in Table 4, without mask-guided training, the AUC scores decrease by 2.8% on Columbia and 3.6% on CASIA. Figure 4: The effect of parameter µ in mask-guided training. Variants Columbia CASIA baseline 94.2 81.4 w/o noise branch 96.0 86.3 w/o MG training 94.3 85.0 w/o noised mask 96.2 87.4 Ours 97.1 88.6 Table 4: Ablation study of noise branch, mask-guided training (MG training), and applying noises to GT guiding masks on CASIA and Columbia dataset with pre-trained models. Except for the promotion of localization, mask-guided training further boosts the speed of the convergence. To evaluate this effect, we compare the result of the presence and absence of the training strategy in different epochs. As shown in Figure 3, we display the AUC (%) scores on the validation split of the synthesized dataset during training. Proof by facts, MGQFormer significantly boosts training at the beginning, which surpasses the model without maskguided training by 12.7% in the first epoch, and prominently reaches the convergence faster. This reveals that GQT certainly helps the Transformer decoder to improve the efficiency of refining LQT. Ablation Study of applying noises to GT guiding masks. 
In Figure 4, we show the different values of parameter µ denoting the percentage of noised points to verify its effect over Columbia and IMD20. With its increasing, the ground truth mask has more noised points to get a more robust and generalized model; however, a large value may cause damage to the spatial information and mislead the network. In contrast, a smaller value of µ provides a more accurate ground-truth mask but may be too easy for the model to predict the auxiliary mask and retard training. It can be seen from the comparison that the setting of 0.01 is the optimal solution. The usage of point noises achieves 0.9%/1.2% AUC gain as shown in Table 4. Visualization Results Qualitative Results. As shown in Figure 5, we provide predicted forgery masks of various methods. We can obThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6949 Figure 5: Visualization of the predicted manipulation mask by different methods. From top to bottom, we show forged images, GT masks, predictions of ManTraNet, PSCC-Net, and ours. serve that PSCC-Net and ManTraNet either output the false region or make unclear predictions. The comparison of visualization results demonstrates that our method can not only locate the tampering regions more accurately but also output clear regions. It benefits from the multi-modal information and the query-based transformer decoder that employs global attention to generate masks. Visualization of Mask-guided Training. To verify the effectiveness of the mask-guided training, we show the masks predicted by MGQFormer, the masks generated without mask-guided training, and further the auxiliary masks in Figure 6. It is clear that MGQFormer makes use of ground-truth masks to focus on the forged region, which can be seen from the similarity between the predicted mask and the auxiliary mask. Specifically, the network without mask-guided training will make false judgments about objects that are relatively small. In Figure 7, we further show the differences between attention maps for the LQT representing the forgery in Transformer decoder from MGQFormer and attention maps without mask-guided training. It is obvious that with maskguided training, the LQT can focus on the target area accurately due to the guidance of the GQT. In contrast, the LQT without mask-guided training does not detect forgery well and is even assigned to the totally opposite region representing the authentic place. This comparison demonstrates that the proposed GQT containing the spatial and shape information from GT mask can force the LQT to concentrate on the correct type of area that we assign to the LQT. Conclusion In this paper, we propose a novel mask-guided query-based transformer framework (MGQFormer). In detail, the first step is to extract RGB and noise features with a two-branch transformer encoder and further fuse them. In the second Figure 6: Visualization of results’ comparison for the proposed mask-guided training. From left to right, we show the forged images, GT masks, predicted masks generated w/o MG-training, the auxiliary masks, and the predicted masks from MGQFormer. Figure 7: Visualization of attention maps for the proposed mask-guided training. From left to right, we display the forged images, GT masks, and attention maps for LQT representing the forgery in Transformer decoder without (w/o) and with (w/) mask-guided training, respectively. 
step, we convert noised ground truth masks to the guiding query token (GQT) and feed GQT and LQT into MGQFormer to estimate fake regions, respectively. We further propose auxiliary loss and mask-guided loss to guide LQT refinement. Visualization results show that the proposed mask-guided training strategy has a significant impact on the convergence speed of MGQFormer training and the performance of localization. Extensive experimental results on several benchmarks demonstrate the effectiveness of our algorithm. Acknowledgments This work is supported by NSFC (GrantNo.: U2001209 and 62372117) and Natural Science Foundation of Shanghai (21ZR1406600). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6950 References Chen, X.; Dong, C.; Ji, J.; Cao, J.; and Li, X. 2021. Image manipulation detection by multi-view multi-scale supervision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 14185–14193. Cheng, B.; Misra, I.; Schwing, A. G.; Kirillov, A.; and Girdhar, R. 2022. Masked-attention mask transformer for universal image segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 1290–1299. Cheng, B.; Schwing, A.; and Kirillov, A. 2021. Per-pixel classification is not all you need for semantic segmentation. Advances in Neural Information Processing Systems, 34: 17864–17875. Cozzolino, D.; Poggi, G.; and Verdoliva, L. 2015a. Efficient dense-field copy–move forgery detection. IEEE Transactions on Information Forensics and Security, 10(11): 2284– 2297. Cozzolino, D.; Poggi, G.; and Verdoliva, L. 2015b. Splicebuster: A new blind image splicing detector. In 2015 IEEE International Workshop on Information Forensics and Security (WIFS), 1–6. IEEE. Cozzolino, D.; and Verdoliva, L. 2019. Noiseprint: A CNNbased camera model fingerprint. IEEE Transactions on Information Forensics and Security, 15: 144–159. Dong, J.; Wang, W.; and Tan, T. 2013. Casia image tampering detection evaluation database. In 2013 IEEE China summit and international conference on signal and information processing, 422–426. IEEE. D’Amiano, L.; Cozzolino, D.; Poggi, G.; and Verdoliva, L. 2018. A patchmatch-based dense-field algorithm for video copy–move detection and localization. IEEE Transactions on Circuits and Systems for Video Technology, 29(3): 669– 682. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2020. Generative adversarial networks. Communications of the ACM, 63(11): 139–144. Guan, H.; Kozak, M.; Robertson, E.; Lee, Y.; Yates, A. N.; Delgado, A.; Zhou, D.; Kheyrkhah, T.; Smith, J.; and Fiscus, J. 2019. MFC datasets: Large-scale benchmark datasets for media forensic challenge evaluation. In 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW), 63– 72. IEEE. He, J.; Li, P.; Geng, Y.; and Xie, X. 2023. FastInst: A Simple Query-Based Model for Real-Time Instance Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 23663–23672. Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising Diffusion Probabilistic Models. In Advances in Neural Information Processing Systems, volume 33, 6840–6851. Curran Associates, Inc. Hsu, J.; and Chang, S. 2006. Columbia uncompressed image splicing detection evaluation dataset. Columbia DVMM Research Lab, 6. Hu, X.; Zhang, Z.; Jiang, Z.; Chaudhuri, S.; Yang, Z.; and Nevatia, R. 2020. SPAN: Spatial pyramid attention network for image manipulation localization. 
In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXI 16, 312– 328. Springer. Huh, M.; Liu, A.; Owens, A.; and Efros, A. A. 2018. Fighting fake news: Image splice detection via learned selfconsistency. In Proceedings of the European conference on computer vision (ECCV), 101–117. Islam, A.; Long, C.; Basharat, A.; and Hoogs, A. 2020. Doagan: Dual-order attentive generative adversarial network for image copy-move forgery detection and localization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4676–4685. Kniaz, V. V.; Knyaz, V.; and Remondino, F. 2019. The point where reality meets fantasy: Mixed adversarial generators for image splice detection. Advances in neural information processing systems, 32. Li, D.; Zhu, J.; Wang, M.; Liu, J.; Fu, X.; and Zha, Z.J. 2023. Edge-Aware Regional Message Passing Controller for Image Forgery Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8222–8232. Li, F.; Zhang, H.; Liu, S.; Guo, J.; Ni, L. M.; and Zhang, L. 2022a. Dn-detr: Accelerate detr training by introducing query denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13619– 13627. Li, Z.; Wang, W.; Xie, E.; Yu, Z.; Anandkumar, A.; Alvarez, J. M.; Luo, P.; and Lu, T. 2022b. Panoptic segformer: Delving deeper into panoptic segmentation with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1280–1289. Lin, X.; Wang, S.; Deng, J.; Fu, Y.; Bai, X.; Chen, X.; Qu, X.; and Tang, W. 2023. Image manipulation detection by multiple tampering traces and edge artifact enhancement. Pattern Recognition, 133: 109026. Liu, X.; Liu, Y.; Chen, J.; and Liu, X. 2022. PSCC-Net: Progressive spatio-channel correlation network for image manipulation detection and localization. IEEE Transactions on Circuits and Systems for Video Technology, 32(11): 7505– 7517. Lyu, S.; Pan, X.; and Zhang, X. 2014. Exposing region splicing forgeries with blind local noise estimation. International journal of computer vision, 110: 202–221. Novozamsky, A.; Mahdian, B.; and Saic, S. 2020. IMD2020: A large-scale annotated dataset tailored for detecting manipulated images. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 71–80. Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6951 Salloum, R.; Ren, Y.; and Kuo, C.-C. J. 2018. Image splicing localization using a multi-task fully convolutional network (MFCN). Journal of Visual Communication and Image Representation, 51: 201–209. Steiner, A.; Kolesnikov, A.; Zhai, X.; Wightman, R.; Uszkoreit, J.; and Beyer, L. 2021. How to train your vit? data, augmentation, and regularization in vision transformers. arXiv preprint arXiv:2106.10270. Strudel, R.; Garcia, R.; Laptev, I.; and Schmid, C. 2021. Segmenter: Transformer for semantic segmentation. In Proceedings of the IEEE/CVF international conference on computer vision, 7262–7272. Wang, J.; Wu, Z.; Chen, J.; Han, X.; Shrivastava, A.; Lim, S.-N.; and Jiang, Y.-G. 2022. Objectformer for image manipulation detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2364–2373. Wu, H.; and Zhou, J. 2021. 
IID-Net: Image inpainting detection network via neural architecture search and attention. IEEE Transactions on Circuits and Systems for Video Technology, 32(3): 1172–1185. Wu, Y.; Abd-Almageed, W.; and Natarajan, P. 2017. Deep matching and validation network: An end-to-end solution to constrained image splicing localization and detection. In Proceedings of the 25th ACM international conference on Multimedia, 1480–1502. Wu, Y.; Abd-Almageed, W.; and Natarajan, P. 2018a. Busternet: Detecting copy-move image forgery with source/target localization. In Proceedings of the European conference on computer vision (ECCV), 168–184. Wu, Y.; Abd-Almageed, W.; and Natarajan, P. 2018b. Image copy-move forgery detection via an end-to-end deep neural network. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), 1907–1915. IEEE. Wu, Y.; AbdAlmageed, W.; and Natarajan, P. 2019. Mantranet: Manipulation tracing network for detection and localization of image forgeries with anomalous features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9543–9552. Yang, Q.; Yu, D.; Zhang, Z.; Yao, Y.; and Chen, L. 2020. Spatiotemporal trident networks: Detection and localization of object removal tampering in video passive forensics. IEEE Transactions on Circuits and Systems for Video Technology, 31(10): 4131–4144. Zhang, H.; Li, F.; Xu, H.; Huang, S.; Liu, S.; Ni, L. M.; and Zhang, L. 2023. MP-Former: Mask-piloted transformer for image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18074–18083. Zhang, Y.; Zhu, G.; Wu, L.; Kwong, S.; Zhang, H.; and Zhou, Y. 2021. Multi-task SE-network for image splicing localization. IEEE Transactions on Circuits and Systems for Video Technology, 32(7): 4828–4840. Zhou, P.; Han, X.; Morariu, V. I.; and Davis, L. S. 2018. Learning rich features for image manipulation detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1053–1061. Zhu, X.; Qian, Y.; Zhao, X.; Sun, B.; and Sun, Y. 2018. A deep learning approach to patch-based image inpainting forensics. Signal Processing: Image Communication, 67: 90–99. Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2020. Deformable detr: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6952 | 2024 | 772 |
18,598 | Weakly-Supervised Mirror Detection via Scribble Annotations Mingfeng Zha1, Yunqiang Pei1, Guoqing Wang1*, Tianyu Li1, Yang Yang1, Wenbin Qian2, Heng Tao Shen1 1University of Electronic Science and Technology of China 2Jiangxi Agricultural University [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Mirror detection is of great significance for avoiding false recognition of reflected objects in computer vision tasks. Existing mirror detection frameworks usually follow a supervised setting, which relies heavily on high quality labels and suffers from poor generalization. To resolve this, we instead propose the first weakly-supervised mirror detection framework and also provide the first scribble-based mirror dataset. Specifically, we relabel 10,158 images, most of which have a labeled pixel ratio of less than 0.01 and take only about 8 seconds to label. Considering that the mirror regions usually show great scale variation, and also irregular and occluded, thus leading to issues of incomplete or over detection, we propose a local-global feature enhancement (LGFE) module to fully capture the context and details. Moreover, it is difficult to obtain basic mirror structure using scribble annotation, and the distinction between foreground (mirror) and background (non-mirror) features is not emphasized caused by mirror reflections. Therefore, we propose a foreground-aware mask attention (FAMA), integrating mirror edges and semantic features to complete mirror regions and suppressing the influence of backgrounds. Finally, to improve the robustness of the network, we propose a prototype contrast loss (PCL) to learn more general foreground features across images. Extensive experiments show that our network outperforms relevant state-of-the-art weakly supervised methods, and even some fully supervised methods. The dataset and codes are available at https://github.com/winter-flow/WSMD. Introduction Mirrors are commonly used in everyday, but their reflective properties can disrupt tasks such as image enhancement (Wu et al. 2023) (Wang et al. 2021a) (Wang, Sun, and Sowmya 2019) (Wang, Sun, and Sowmya 2021), segmentation (Jain et al. 2021), and visual language navigation (An et al. 2021), making the study of mirror detection (MD) an important topic. Current research on MD utilizes pixel-level labels as supervised signals to train models. However, obtaining dense pixel labels is expensive. In this paper, we propose a weakly-supervised MD method. In the weakly supervised learning paradigm, there are four types of supervised signals: image-level, point-level, scribble-level, *Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. (a) Image (b) Scribble (c) GT (e) SCWS (d) SS (f) Ours Figure 1: (a) Original image. (b) Scribbled image. (c) Ground-truth pixel-level annotations. (d) (Zhang et al. 2020) and (e) (Yu et al. 2021) are weakly supervised SOD models. (f) is our detection result. and box-level. We provide scribble annotation and use it to formulate our framework because it directly gives the location of mirror regions and offers flexibility in handling complex scenes. Therefore, we relabel 10,158 images, including 3,063 from MSD dataset (Yang et al. 2019), 5,095 from PMD dataset (Lin, Wang, and Lau 2020), 2,000 from MirrorRGBD dataset (Mei et al. 2021) and name the new dataset S-Mirror. 
The labeling time for each image slightly differs as the varying scene complexity of these datasets, averaging around 5s, 6s, and 8s, respectively. As shown in Figure 2, the percentage of labeled pixels is less than 0.01 for most images, significantly lower than full annotation and relevant weak annotation works (about half of (He et al. 2023)). Compared to traditional image detection tasks, MD shows some task-specific challenges: a) the scale of mirror regions varies greatly with some occupying more than half of the image and some occupying less than one tenth; b) many of the mirror regions are irregular and subject to occlusion; c) the diverse imagings and the varying surroundings of mirrors cause high noise as reflective property, thus making it a crucial task to distinguish between imagings (reThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6953 Histogram of Labeled Pixel Ratio in Images 1750 Number of Images 1500 250 500 750 1000 1250 Percentage of Labeled Pixels 0.000 0.005 0.010 0.015 0.020 0.025 0.030 fg+bg bg Figure 2: Percentage of labeled pixels in S-Mirror dataset. fg and bg denote the foreground and background, respectively, i.e., the red and blue scribbles in Figure 1. flective objects) and entities (objects outside the mirrors). See the supplementary material for some examples showing the above cases, which make it a great challenge to formulate a weakly-supervised MD framework, and there are only few related weakly supervised works, i.e., scribble-based salient object detection (SOD) and camouflage object detection (COD). However, these methods cannot be directly applied to MD tasks due to the following reasons: 1) logical and physical associations between imagings and entities are not established; 2) mirror regions are not as salient as entities; 3) most camouflage objects have a single form, while mirror regions are diverse as reflection. To resolve this, we for the first time formulate a weaklysupervised MD framework utilizing scribble-based supervision. As shown in Figure 1, our method achieves promising results. We propose a local-global feature enhancement (LGFE) module with both global context understanding (e.g., establishing logical and physical associations between imagings and entities, mirror scale variation perception) and local details enhancement (e.g., edges, textures, colors) to improve long- and short-distance dependence sensitivity. Moreover, scribble is difficult to represent the underlying structure information. The foreground feature representation is not salient and distinctive enough as reflection interference. Therefore, we propose a foreground-aware mask attention (FAMA), fusing the initial prediction foreground mask and edge mask for semantic and boundary awareness to refine the mirror mask. Furthermore, to improve the robustness of the network, we propose to mine the prototype features of various foreground and background features and formulate it as a novel prototype contrast loss (PCL), which aims at pulling the foreground prototypes closer, pushing the foreground and background prototypes away, thus producing more generalizable image feature representations. In summary, our main contributions are as follows: • We propose the first weakly supervised MD dataset based on scribble annotations. Compared to pixel-level annotations, quickly and flexibly annotating few pixels allows us to obtain the location and partial structure information of the foreground and background regions. 
• We propose the first weakly supervised MD network that efficiently detects mirror regions with only simple scribble annotations and mirror edges as supervision signals. • We formulate a local-global feature enhancement module (LGFE) and a foreground-aware mask attention (FAMA) to mitigate scale variation, occlusion, irregularity, and reflection interference. Additionally, we design a prototype contrast loss (PCL) to leverage inter-image information for improving network robustness. • Extensive experiments on three mirror datasets show that our network outperforms relevant state-of-the-art methods on all evaluation metrics and achieves performance comparable to fully supervised approaches. Related Works Salient Object Detection. SOD aims to discover salient regions in images and has achieved significant progress. Ma et al. (Ma, Xia, and Li 2021) proposed aggregating adjacent feature layers to reduce interference. In recent years, some weakly supervised SOD works have also emerged. Zhang et al. (Zhang et al. 2020) proposed the first SOD method based on scribble annotations, which greatly reducing image annotation workload while achieving good performance. Yu et al. (Yu et al. 2021) proposed an end-to-end detection network based on structure consistency. Gao et al. (Gao et al. 2022) first proposed a multi-round training detection method based on point annotations. In addition, there are also similar works. For example, He et al. (He et al. 2023) first proposed a COD method based on scribble annotations, designing multiple functions to guide and constrain the model. Mirror Detection. MD aims to detect mirror regions in images. Currently, there are many fully-supervised detection methods proposed. Yang et al. (Yang et al. 2019) first introduced the task and proposed MirrorNet, which explores feature differences inside and outside mirrors. Lin et al. (Lin, Wang, and Lau 2020) proposed a progressive detection approach, exploring local feature similarity. Guan et al. (Guan, Lin, and Lau 2022) discovered potential feature correlations from a semantic association perspective. In addition, some works attempt to explore characteristics of mirrors. Mei et al. (Mei et al. 2021) incorporated depth information because the depth of mirror regions can differ significantly from their surroundings. Huang et al. (Huang et al. 2023) designed a dual-stream network based on Swin Transformer (Liu et al. 2021b), using symmetry invariance. Some works also consider the constraints of practical application scenarios. For example, He et al. (He, Lin, and Lau 2023) designed a efficient network by selectively processing structures based on the differences between low-level and high-level features. Methodology Overview The overall framework of our method is shown in Figure 3. It consists of four important parts, i.e., edge generation (EG) module (four 1×1 convolutions), CFM (Dong et al. 2021), local-global feature enhancement module (LGFE), foreground-aware mask attention (FAMA) and The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6954 PCL FAMA CFM LGFE EG I EG CFM C Channel Concatenation Element-wise Multiplication · Convolution C C · · C X1 X2 X4 X4 X3 X2 E Lsal_ref Lsmooth Lsal_init Ledge Figure 3: The overall structure of our proposed method. We first use PVT network (Wang et al. 2021b) as the backbone to extract multi-scale long-range dependency feature maps. We then utilize EG module to generate edge maps and LGFE module to enhance low-level feature maps. 
We progressively decode features using CFM and apply FAMA to fuse semantic and edge features. Finally, we use saliency maps, edges, and auxiliary PCL loss as the entire loss function to supervise model training. prototype contrast learning loss (PCL). We first feed an image I ∈ R3×H×W to generate multi-scale feature maps Xi ∈RCi× H 4i × W 4i , where i ∈{1, 2, 3, 4}, Ci ∈ {64, 128, 320, 512}, H and W denote height and width respectively. We then feed low-level features X1 and X2, along with high-level feature X4 into EG to produce edge map E. We also feed X1 into LGFE to obtain the enhanced feature map X1 en, which combines context and details information. Next, the initial prediction map Sinit is decoded by progressively fusing X2, X3, and X4. We integrate CFM’s final feature map Fla with Sinit, X ′ 1 en (after adjustment based on X1 en), and E jointly into FAMA. Through the semantic and edge-aware fusion, we generate refined prediction map Sref. Furthermore, we design PCL as an auxiliary loss to enhance the model’s robustness. Local-global Feature Enhancement Module We found that mirror regions can be highly variable in scale, irregular in shape, and prone to occlusion. Although the feature maps Xi generated from PVT network contain longrange dependencies and rich contextual semantics, they lack local information construction. In addition, in this paper, we introduce object edges as auxiliary supervised signals, which may introduce interference, particularly in weakly supervised scenarios. Therefore, enhancing local useful features and suppressing background information (e.g., noisy texture, edges) is essential to retain details of mirror regions. To achieve this, we propose a local-global feature enhancement (LGEF) module to process X1, as shown in Figure 4. To illustrate, we create a duplicate of X1 and name it X1 loc to handle local features. For X1, we employ SqueezeC CBAM SE C Channel Concatenation Element-wise Multiplication Dense ASPP CBAM SE C Figure 4: Structure of Local Global Feature Enhancement (LGFE) module. We first use DenseASPP on X1 loc to obtain local features at different scales, and then use CBAM and SE on the fused feature maps to acquire spatial and channel attention, respectively. A similar process is performed on X1. Finally, we fuse X1 with four attentions. and-Excitation (SE) Attention (Hu, Shen, and Sun 2018) and Convolutional Block Attention Module (CBAM) (Woo et al. 2018) to obtain the channel attention map ca1 and spatial attention map sa1, respectively, ca1 = SE(X1), sa1 = CBAM(X1) (1) For X1 loc, DenseASPP (Zhang et al. 2020) is first applied to extract local features using various dilation rates, generating the feature map X ′ 1 loc that contains local perceptions. To further integrate contextual and local information while suppressing noise interference, we concatenate X1 and X ′ 1 loc along the channel axis and use 1×1 convolution to reduce channels by half. Subsequently, SE and CBAM are employed to obtain channel attention map ca2 The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6955 T Matrix Multiplication Element-wise Multiplication Transpose Convolution T T T Semantic-aware Branch Edge-aware Branch Global Reasoning T Figure 5: Structure of Foreground-Aware Mask Attention (FAMA). We first input Fla, Sinit and X ′ 1 en, E to the semantic and edge-aware branches, respectively, and then to the cross-attention for fusion. and spatial attention map sa2. 
The process is formulated as: ca2 = SE(Conv1×1(concat(X1, X1 loc ′))), sa2 = CBAM(Conv1×1(concat(X1, X1 loc ′))) (2) Finally, we fuse X1 with the four attentions to generate the enhanced X1 en, X1 en = (ca1 ⊙ca2) ⊙X1 ⊙(sa1 ⊙sa2)) (3) where ⊙is element-wise multiplication. X1 en can provide a robust foundation for the subsequent refinement of the prediction map. Foreground-aware Mask Attention Mirror regions are susceptible to interference from complex imagings and extra-mirror entities, resulting in less distinctive features from surroundings. Besides, weak annotations do not contain complete semantic regions, making it difficult to predict the object structure completely. To this end, we propose a foreground-aware mask attention (FAMA) that fuses foreground feature representation and edge guidance to obtain more complete mirror structure, as shown in Figure 5. Specifically, FAMA is divided into two branches: semantic-aware branch and edge-aware branch. The semantic-aware branch enhances the detection of mirror regions by incorporating a foreground mask prior, while the edge-aware branch refines the structure information by integrating edge maps. These two branches interact with each other to improve the overall detection quality. The core module of FAMA is based on multi-Dconv head transposed attention (MDTA) (Zamir et al. 2022), an efficient improved self-attention (SA) (Vaswani et al. 2017), which can be expressed as: MDTA(Q, K, V ) = softmax(QKT α )V (4) The generation of Q, K, and V is similar to SA, with the difference that MDTA uses a 3×3 depth-wise convolution (Sandler et al. 2018) to encode local features. And MDTA explores global feature dependencies from the channel dimension rather than spatial. α is a learnable scaling parameter that allows the gradient to remain stable during training. For the semantic-aware branch, the input Fla ∈ R32× H 8 × W 8 is processed by 3×3 and 1×1 convolutions to generate the query, key, and value matrices. To compute the associations of the mirror region features, we perform element-wise multiplication of Sinit ∈R1× H 8 × W 8 with the query and key matrices to obtain Q ′ f and K ′ f, while keeping the value matrix Vf unchanged. The subsequent operations are the same as those in MDTA. This process is written as: Fseg = softmax( Q ′ fK ′T f α )Vf (5) Similarly, for the edge-aware branch, we use the two inputs X ′ 1 en ∈R32× H 8 × W 8 (adjust the size of X1 en ∈ R64× H 4 × W 4 sequentially using 1×1 and 3×3 convolutions.) and E ∈R1× H 8 × W 8 to obtain Q ′ e, K ′ e, and Ve. The edge map E can be generated by: E = EG(X1, X2, X4) (6) Then we can obtain Fedge fused with edge priors, Fedge = softmax(Q ′ eK ′T e α )Ve (7) The features processed by these two branches possess semantic and edge contextual associations, respectively. To enrich the mirror region with more complex underlying structure features, we design a global reasoning module. Specifically, the semantic feature Fseg and the edge feature Fedge undergo the same convolutional processing to generate Qs, Kedge and Vedge. The subsequent operations are the same as MDTA, generating Fref, Fref = softmax( QsKT edge α )Vedge (8) Finally, we can obtain the refined prediction map Sref ∈ R1× H 8 × W 8 by compressing the channels of Frefine to 1 using a 1×1 convolution. 
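For readers who prefer code, a minimal single-head PyTorch sketch of one mask-modulated MDTA branch is given below. The class name MaskedMDTA, the single-head simplification, and the exact projection layout are illustrative assumptions; the authors' implementation may differ (for example, multiple heads or separate depth-wise convolutions per projection).

import torch
import torch.nn as nn

class MaskedMDTA(nn.Module):
    # Hedged sketch of one FAMA branch: multi-Dconv head transposed attention whose
    # query and key are modulated by a prior mask before channel-wise attention.
    def __init__(self, dim: int = 32):
        super().__init__()
        self.to_qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)
        self.dwconv = nn.Conv2d(dim * 3, dim * 3, kernel_size=3, padding=1, groups=dim * 3)
        self.alpha = nn.Parameter(torch.ones(1))      # learnable temperature
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) branch features (e.g. Fla or X1_en'); mask: (B, 1, H, W) prior (S_init or E)
        b, c, h, w = feat.shape
        q, k, v = self.dwconv(self.to_qkv(feat)).chunk(3, dim=1)
        q = (q * mask).flatten(2)          # (B, C, HW): query modulated by the foreground/edge mask
        k = (k * mask).flatten(2)          # (B, C, HW): key modulated the same way
        v = v.flatten(2)                   # (B, C, HW): value kept unchanged, as in Eq. (5)
        attn = torch.softmax((q @ k.transpose(-2, -1)) / self.alpha, dim=-1)   # (B, C, C) channel attention
        out = (attn @ v).reshape(b, c, h, w)
        return self.proj(out)

In the full module, two such branches (semantic with Sinit, edge with E) would be computed and then fused by a third attention for global reasoning, following Eq. (8).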
Prototype Contrast Loss The semantic representation of mirror (forground) and nonmirror (background) regions in images differs, leading to closer feature distances for mirror regions and far distances between mirror and non-mirror regions in high-dimension feature space. Considering these, we design the PCL to learn more robust and essential feature representations. In particular, we use Fsal ∈R64× HW 64 (Similar operations (Zhang et al. 2020) are performed based on Sref, further fuse edge features and merge dimensions to generate) and Sref ∈R1× HW 64 (After width, height expansion and dimensions merging) to generate foreground prototype feature Pf ∈R1×64, while background prototype feature Pb ∈R1×64 is generated using the background mask 1 −Sref ∈R1× HW 64 instead Sref. So We have: Pf = Sref ⊗F T sal, Pb = (1 −Sref) ⊗F T sal (9) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6956 Next, we use cosine similarity to calculate the distance sim between the two prototypes, and subsequently compute the negative sample (foreground and background prototypes pair) loss function. The sim is written as: sim = Pf · Pb ∥Pf ∥× ∥Pb ∥ (10) where · represents dot product, ∥· ∥represents l2 norm. If there are n samples, these sims will form a list. Different samples have different inital similarity, and we tend to focus on negative samples with lower similarity and positive samples with higher similarity. To achieve this, we perform weighted calculations. The weight for the i-th element in the sim list can be expressed as: wi = esimi Pn j esimj (11) The weight list and the sim list can be multiplied correspondingly to get the weighted sim wsim. Finally, the negative sample loss can be written as: L−= −1 n2 n X i=1 n X j=1 log(1 −wsim) (12) Similarly, we can get positive sample loss by calculating the distance between foreground features, writing: L+ = − 1 n(n −1) n X i=1 n X j=1 I[i̸=j]log(wsim) (13) where the function I represents 1 when i and j are not equal, and 0 otherwise. Loss Function Inspired by (Zhang et al. 2020), we adpot four functions to supervise the model training. Partial cross entropy (PCE) is used for the initial and refined saliency maps, i.e., Lsal init and Lsal ref. Smooth loss (SL) is employed to align the mirror region with image structure, i.e., Lsmooth (using the input grayscale map). Cross entropy (CE) is applied to the edge detection network, i.e., Ledge. Finally, PCL is utilized to reinforce foreground and background feature learning. The entire loss function can be defined as: Lfinal = PCE(Sinit, mask) + PCE(Sref, mask) +SL(Sinit, gray) + SL(Sref, gray) +αCE(E, gt) + β(L−+ L+) (14) where mask denotes the product of the foreground and full scribble masks, gt is generated by the canny edge detector (Canny 1986). The performance may be better if a more advanced edge detection method is used, for example, RCF (Liu et al. 2017). α and β are hyperparameters. Experiments Datasets. We collect training images from MSD, PMD, and Mirror-RGBD datasets, totaling 10,158 images, and relabel them as the training set of S-Mirror dataset. Models are evaluated using the testing sets of the above three datasets. Implementation Details. We implement our network using PyTorch and conduct experiments on an A100 GPU. Specifically, We use PVT network pretrained on ImageNet as the backbone to accelerate convergence. Various data augmentation methods are employed, such as random rotation, horizontal and vertical flipping. All images are resized to 352×352. 
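For clarity, a hedged PyTorch sketch of the prototype contrast loss defined above (Eqs. 9-13) is given below before continuing with the training settings. The batched tensor shapes, the softmax weighting over all prototype pairs, and the clamping added for numerical stability are illustrative assumptions rather than the authors' exact implementation.

import torch
import torch.nn.functional as F

def prototype_contrast_loss(f_sal: torch.Tensor, s_ref: torch.Tensor, eps: float = 1e-6):
    # f_sal: (n, C, HW) fused saliency features, s_ref: (n, 1, HW) refined masks in [0, 1]; assumes n >= 2
    p_f = torch.bmm(s_ref, f_sal.transpose(1, 2)).squeeze(1)        # (n, C) foreground prototypes, Eq. (9)
    p_b = torch.bmm(1.0 - s_ref, f_sal.transpose(1, 2)).squeeze(1)  # (n, C) background prototypes
    p_f, p_b = F.normalize(p_f, dim=-1), F.normalize(p_b, dim=-1)   # dot products then equal cosine similarity, Eq. (10)

    sim_neg = p_f @ p_b.t()                                           # fg/bg (negative) pairs
    sim_pos = p_f @ p_f.t()                                           # fg/fg (positive) pairs across images
    w_neg = torch.softmax(sim_neg.flatten(), dim=0).view_as(sim_neg)  # re-weighting, Eq. (11)
    w_pos = torch.softmax(sim_pos.flatten(), dim=0).view_as(sim_pos)

    wsim_neg = (w_neg * sim_neg).clamp(eps, 1.0 - eps)               # clamping is an added safeguard, not in the paper
    wsim_pos = (w_pos * sim_pos).clamp(eps, 1.0 - eps)
    loss_neg = -torch.log(1.0 - wsim_neg).mean()                     # push fg and bg prototypes apart, Eq. (12)
    off_diag = ~torch.eye(p_f.size(0), dtype=torch.bool, device=p_f.device)
    loss_pos = -torch.log(wsim_pos[off_diag]).mean()                 # pull fg prototypes together, Eq. (13)
    return loss_neg + loss_pos

In Eq. (14) this term enters the total objective with weight beta, alongside the partial cross-entropy, smoothness, and edge losses.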
During the training phase, the batch size is 16, the initial learning rate is 1e-4, the decay rate is 0.9, Adam is used as the optimizer, and the epoch is 150. We first train our model on MSD dataset and then use the trained model weights as initial weights for further training on PMD and Mirror-RGBD dataset. No post-processing strategies are used during the testing phase. Evaluation Metrics. We use five evaluation metrics: Smeasure (Sm) (Fan et al. 2017), mean E-measure (Em) (Fan et al. 2018), weighted F-measure (F w β ) (Margolin, ZelnikManor, and Tal 2014), Mean Absolute Error (MAE), and Intersection over union (IoU). Comparison with State-of-the-arts To demonstrate the superiority of our method, we first compare it with several state-of-the-art models on RGB-based MSD and PMD dataset. As shown in Table 1, we select eight SOD models, namely CPDNet (Wu, Su, and Huang 2019), MINet (Pang et al. 2020b), LDFNet (Wei et al. 2020), VST (Liu et al. 2021a), R3Net (Deng et al. 2018), EGNet (Zhao et al. 2019), PoolNet (Liu et al. 2019), SETR (Zheng et al. 2021), four MD models, namely MirrorNet (Yang et al. 2019), PMDNet (Lin, Wang, and Lau 2020), HetNet (He, Lin, and Lau 2023), SATNet (Huang et al. 2023), and three related weakly supervised models, namely SS (Zhang et al. 2020), SCWS (Yu et al. 2021), WSCOD (He et al. 2023). Our method outperforms all the weakly supervised models and achieves comparable performance to fully supervised SOD and MD models. More evaluations regarding the robustness of our method utilizing PrecionRecall and F-Measure curves are provided in the supplementary material. We also select some representative samples for visual comparison. As shown in Figure 6, the first row demonstrates scene where the mirror region is occluded, our method can effectively establish logical and physical associations of objects, distinguish between occlusion and mirror area. In the second row, there is significant mirror reflection, our method can accurately tell whether it is a imaging or an entity, achieving complete detection. The third and fourth rows show scenes with large scale mirror variations, Our method can capture long and short-range dependencies, obtaining accurate mirror regions. We also compare our method with seven RGBD SOD models, namely A2dele (Piao et al. 2020), HDFNet (Pang et al. 2020a), S2MA (Liu, Zhang, and Han 2020), JL-DCF (Fu et al. 2020), DANet (Zhao et al. 2020), BBSTNet (Fan et al. 2020), VST (Liu et al. 2021a), and two MD models, namely PDNet (using depth information) and SATNet, as well as RGB-based SS, SCWS, and WSCOD on MirrorRGBD dataset. As shown in Table 2, our method also outperforms all the related weakly supervised detection methods and reduces the gap with fully supervised methods. We select several examples for comparison. As shown in Figure 7, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6957 Methods Sup. 
MSD PMD Sm↑ Em↑ F w β ↑ IoU↑ MAE↓ Sm↑ Em↑ F w β ↑ IoU↑ MAE↓ CPDNet F 0.725 0.770 0.625 0.576 0.116 0.779 0.817 0.651 0.600 0.041 MINet F 0.792 0.819 0.715 0.664 0.088 0.794 0.822 0.667 0.601 0.038 LDF F 0.821 0.867 0.773 0.729 0.068 0.799 0.833 0.683 0.633 0.038 VST F 0.861 0.901 0.818 0.791 0.054 0.783 0.814 0.639 0.591 0.036 R3Net F 0.723 0.743 0.615 0.554 0.111 0.720 0.756 0.561 0.496 0.045 EGNet F 0.771 0.776 0.668 0.630 0.096 0.617 0.593 0.362 0.210 0.088 PoolNet F 0.804 0.831 0.717 0.691 0.094 0.588 0.532 0.313 0.192 0.089 SETR F 0.797 0.840 0.750 0.690 0.071 0.753 0.775 0.633 0.564 0.035 MirrorNet F 0.850 0.891 0.812 0.790 0.065 0.761 0.841 0.663 0.585 0.043 PMDNet F 0.875 0.908 0.845 0.815 0.047 0.810 0.859 0.716 0.660 0.032 HetNet F 0.881 0.921 0.854 0.824 0.043 0.828 0.865 0.734 0.690 0.029 SATNet F 0.887 0.916 0.865 0.834 0.033 0.826 0.858 0.739 0.684 0.025 SS W 0.681 0.747 0.567 0.527 0.158 0.726 0.790 0.571 0.513 0.055 SCWS W 0.770 0.814 0.678 0.659 0.121 0.759 0.807 0.599 0.579 0.059 WSCOD W 0.786 0.851 0.728 0.685 0.092 0.764 0.819 0.609 0.586 0.055 Ours W 0.828 0.878 0.780 0.750 0.078 0.773 0.824 0.630 0.600 0.051 Table 1: Quantitative comparison on MSD and PMD datasets with five evaluation metrics. F, W denote fully supervised and weakly supervised, respectively. The best weakly supervised performances are bolded. Methods Mirror-RGBD Sm↑ Em↑ F w β ↑ IoU↑ MAE↓ A2dele 0.641 0.730 0.505 0.428 0.120 HDFNet 0.671 0.663 0.521 0.447 0.095 S2MA 0.765 0.797 0.646 0.609 0.075 JL-DCF 0.815 0.861 0.750 0.696 0.057 DANet 0.800 0.842 0.728 0.678 0.063 BBSTNet 0.840 0.881 0.786 0.743 0.048 VST 0.815 0.859 0.751 0.702 0.054 PDNet 0.856 0.906 0.825 0.778 0.042 SATNet 0.857 0.901 0.829 0.784 0.031 SS 0.654 0.722 0.537 0.444 0.127 SCWS 0.690 0.743 0.547 0.498 0.118 WSCOD 0.698 0.762 0.581 0.518 0.106 Ours 0.754 0.806 0.655 0.616 0.088 Table 2: Quantitative comparison on Mirror-RGBD dataset with five evaluation metrics. The best weakly supervised performances are bolded. the first row demonstrates that our method can exploit context and obtain complete detection results when the mirror region is similar to the surroundings and has a large scale. In the second row, the mirror region has a small scale, causing A2dele to even miss, but our method can determine. The third row shows that our method can establish the relationship between multiple objects. Although our method does not use depth information, it still performs well. To verify the lightness of our model, we compare it with related weakly supervised models. As shown in Table 3, our method is also efficient. Methods Input Size Params. FLOPs SS 352×352 16.80 70.85 SCWS 352×352 63.54 53.80 WSCOD 352×352 32.65 14.27 Ours 352×352 26.16 21.39 Table 3: Model Efficiency Comparison. We compare with three related weakly supervised models on Parameters (M), FLOPs (GMAC). Ablation Study We conduct ablation experiments on MSD dataset, as shown in Table 4. We also select a representative image to visualize the ablation process, as shown in Figure 8. Effect of LGFE. Based on the Baseline, we enhance X1 by adding an LDEF module to obtain richer feature representations with more semantic and detailed information. As a result, we achieve improvements of 1.7%, 2.4%, 2.8%, 2.4%, 1.5% on the Sm, Em, F w β , IoU, and MAE metrics, respectively. LGFE module can establish global and local dependencies, effectively distinguishing between imagings and objects. 
The visualization results show that after adding a LGFE module, the non-mirror region is significantly reduced without affecting the mirror area. Effect of FAMA. We evaluate the performance of FAMA on both the Baseline and “Baseline+LGFE” network. Compared to the Baseline, we observe improvements of 2.5%, 3.4%, 4.6%, 4.0%, and 2.1% on the five metrics, respectively. After adding a LGFE module, the performance is further enhanced, demonstrating the complementarity of the two modules. The visualization results also show that the addition of FAMA effectively integrated edge features, reduce The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6958 Image GT Ours WSCOD SCWS SS CPDNet EGNet R3Net Figure 6: Qualitative comparison on MSD and PMD datasets. Occlusion, mirror reflection, large-scale and small-scale scenes are shown from top to bottom. Method Sm↑ Em↑ F w β ↑ IoU↑ MAE↓ B 0.793 0.833 0.717 0.695 0.106 B+I1 0.810 0.857 0.745 0.719 0.091 B+I2 0.818 0.867 0.763 0.735 0.085 B+I3 0.799 0.845 0.725 0.701 0.098 B+I1+I2 0.820 0.873 0.772 0.740 0.081 Ours 0.828 0.878 0.780 0.750 0.078 Table 4: Results of ablation study on MSD dataset. B, I1, I2, and I3 indicate Baseline, LGFE, FAMA, and PCL, respectively. Based on our good baseline and added incrementally, the proposed method reaches the best performances (bolded data). Image GT Ours WSCOD SCWS SS A2dele HDFNet Depth Figure 7: Qualitative comparison on Mirror-RGBD dataset, showing large-scale&similar to surroundings, small-scale and multi-objects scenes from top to bottom. mirror interference, leading to more accurate foreground detection. Although the “Baseline+LGFE+FAMA” network achieves promising detection results, very close to the GT, it suffers from the issue of excessive de-interference. Effect of PCL. Similar to evaluating FAMA, we introduce PCL as an auxiliary loss to the Baseline and “Baseline+LGFE+FAMA” network. If the obtained mirror features are not accurate enough, foreground and background prototypes may contain noise, resulting in little improvements. On the contrary, with the addition of LGFE module and FAMA, the mirror regions become more complete, Image GT Baseline +I3 +I1 +I2 Ours +I1+I2 Figure 8: Visualization results of ablation study. Baseline and stage models suffer from over- or under-detection. Our method achieves more accurate detection. leading to significant improvements. The visualization results after adding PCL to the Baseline show that the model mistakenly identifies the lower right area of the image as a mirror, despite greatly reducing the misidentified area on the left. Based on the “Baseline+LGFE+FAMA” network, PCL can fully utilize the high-quality foreground features among images to alleviate over-detection. Conclusion In this paper, we propose the first scribble-based weakly supervised MD dataset, requiring less than 0.01 of pixel annotation and offering a simple and flexible process. Using the relabeled dataset, we propose a novel MD framework with three carefully designed components. Firstly, we propose a local-global feature enhancement (LGFE) module to tackle problems such as scale variation, irregularity, and occlusion of mirror region, thereby improving the representation quality for fine details. Secondly, we design a foreground-aware mask attention (FAMA) by combining foreground semantics and edge features, which promotes the expansion and completeness of scribble regions while reducing interference from mirror imaging. 
Finally, we formulate a prototype contrast loss (PCL) to learn the similarity of foreground-background semantic features between images, enabling more robust feature representations. Extensive experiments show that our method surpasses state-of-the-art weakly supervised approaches, achieving performance comparable to fully supervised learning while being lightweight. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6959 Acknowledgments This work was supported in part by the National Natural Science Foundation of China under grant U23B2011, 62102069, U20B2063 and 62220106008, the Sichuan Science and Technology Program under grant 2022YFG0032, and the China Academy of Space Technology (CAST) Innovation Program. References An, D.; Qi, Y.; Huang, Y.; Wu, Q.; Wang, L.; and Tan, T. 2021. Neighbor-view enhanced model for vision and language navigation. In Proceedings of the 29th ACM International Conference on Multimedia, 5101–5109. Canny, J. 1986. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 679–698. Deng, Z.; Hu, X.; Zhu, L.; Xu, X.; Qin, J.; Han, G.; and Heng, P.-A. 2018. R3net: Recurrent residual refinement network for saliency detection. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, 684– 690. Dong, B.; Wang, W.; Fan, D.-P.; Li, J.; Fu, H.; and Shao, L. 2021. Polyp-pvt: Polyp segmentation with pyramid vision transformers. arXiv preprint arXiv:2108.06932. Fan, D.; Gong, C.; Cao, Y.; Ren, B.; Cheng, M.; and Borji, A. 2018. Enhanced-alignment Measure for Binary Foreground Map Evaluation. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, 698–704. Fan, D.-P.; Cheng, M.-M.; Liu, Y.; Li, T.; and Borji, A. 2017. Structure-measure: A new way to evaluate foreground maps. In Proceedings of the IEEE International Conference on Computer Vision, 4548–4557. Fan, D.-P.; Lin, Z.; Zhang, Z.; Zhu, M.; and Cheng, M.-M. 2020. Rethinking RGB-D salient object detection: Models, data sets, and large-scale benchmarks. IEEE Transactions on Neural Networks and Learning Systems, 2075–2089. Fu, K.; Fan, D.-P.; Ji, G.-P.; and Zhao, Q. 2020. JL-DCF: Joint learning and densely-cooperative fusion framework for RGB-D salient object detection. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, 3052–3062. Gao, S.; Zhang, W.; Wang, Y.; Guo, Q.; Zhang, C.; He, Y.; and Zhang, W. 2022. Weakly-supervised salient object detection using point supervision. In Proceedings of the AAAI Conference on Artificial Intelligence, 670–678. Guan, H.; Lin, J.; and Lau, R. W. 2022. Learning semantic associations for mirror detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5941–5950. He, R.; Dong, Q.; Lin, J.; and Lau, R. W. 2023. Weaklysupervised camouflaged object detection with scribble annotations. In Proceedings of the AAAI Conference on Artificial Intelligence, 781–789. He, R.; Lin, J.; and Lau, R. W. 2023. Efficient Mirror Detection via Multi-Level Heterogeneous Learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 790–798. Hu, J.; Shen, L.; and Sun, G. 2018. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7132–7141. Huang, T.; Dong, B.; Lin, J.; Liu, X.; Lau, R. W.; and Zuo, W. 2023. Symmetry-Aware Transformer-based Mirror Detection. In Proceedings of the AAAI Conference on Artificial Intelligence, 935–943. 
Jain, J.; Singh, A.; Orlov, N.; Huang, Z.; Li, J.; Walton, S.; and Shi, H. 2021. Semask: Semantically masked transformers for semantic segmentation. arXiv preprint arXiv:2112.12782. Lin, J.; Wang, G.; and Lau, R. W. 2020. Progressive mirror detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3697–3705. Liu, J.-J.; Hou, Q.; Cheng, M.-M.; Feng, J.; and Jiang, J. 2019. A simple pooling-based design for real-time salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3917– 3926. Liu, N.; Zhang, N.; and Han, J. 2020. Learning selective self-mutual attention for RGB-D saliency detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13756–13765. Liu, N.; Zhang, N.; Wan, K.; Shao, L.; and Han, J. 2021a. Visual saliency transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 4722–4732. Liu, Y.; Cheng, M.-M.; Hu, X.; Wang, K.; and Bai, X. 2017. Richer convolutional features for edge detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3000–3009. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021b. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 10012–10022. Ma, M.; Xia, C.; and Li, J. 2021. Pyramidal feature shrinking for salient object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, 2311–2318. Margolin, R.; Zelnik-Manor, L.; and Tal, A. 2014. How to evaluate foreground maps? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 248– 255. Mei, H.; Dong, B.; Dong, W.; Peers, P.; Yang, X.; Zhang, Q.; and Wei, X. 2021. Depth-aware mirror segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3044–3053. Pang, Y.; Zhang, L.; Zhao, X.; and Lu, H. 2020a. Hierarchical dynamic filtering network for RGB-D salient object detection. In Proceedings of the European Conference on Computer Vision, 235–252. Pang, Y.; Zhao, X.; Zhang, L.; and Lu, H. 2020b. Multi-scale interactive network for salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9413–9422. Piao, Y.; Rong, Z.; Zhang, M.; Ren, W.; and Lu, H. 2020. A2dele: Adaptive and attentive depth distiller for efficient The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6960 RGB-D salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9060–9069. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; and Chen, L.-C. 2018. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4510–4520. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L. u.; and Polosukhin, I. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems, 5998–6008. Wang, G.; Sun, C.; and Sowmya, A. 2019. Erl-net: Entangled representation learning for single image de-raining. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5644–5652. Wang, G.; Sun, C.; and Sowmya, A. 2021. Contextenhanced representation learning for single image deraining. International Journal of Computer Vision, 1650–1674. Wang, G.; Yang, Y.; Xu, X.; Li, J.; and Shen, H. 2021a. 
| 2024 | 773 |
18,599 | Towards Compact 3D Representations via Point Feature Enhancement Masked Autoencoders Yaohua Zha1,2, Huizhen Ji1, Jinmin Li1, Rongsheng Li1, Tao Dai3*, Bin Chen4, Zhi Wang1, Shu-Tao Xia1,2 1Tsinghua Shenzhen International Graduate School, Tsinghua University 2Research Center of Artificial Intelligence, Peng Cheng Laboratory 3College of Computer Science and Software Engineering, Shenzhen University 4Harbin Institute of Technology, Shenzhen [email protected] Abstract Learning 3D representation plays a critical role in masked autoencoder (MAE) based pre-training methods for point cloud, including single-modal and cross-modal based MAE. Specifically, although cross-modal MAE methods learn strong 3D representations via the auxiliary of other modal knowledge, they often suffer from heavy computational burdens and heavily rely on massive cross-modal data pairs that are often unavailable, which hinders their applications in practice. Instead, single-modal methods with solely point clouds as input are preferred in real applications due to their simplicity and efficiency. However, such methods easily suffer from limited 3D representations with global random mask input. To learn compact 3D representations, we propose a simple yet effective Point Feature Enhancement Masked Autoencoders (Point-FEMAE), which mainly consists of a global branch and a local branch to capture latent semantic features. Specifically, to learn more compact features, a shareparameter Transformer encoder is introduced to extract point features from the global and local unmasked patches obtained by global random and local block mask strategies, followed by a specific decoder to reconstruct. Meanwhile, to further enhance features in the local branch, we propose a Local Enhancement Module with local patch convolution to perceive fine-grained local context at larger scales. Our method significantly improves the pre-training efficiency compared to cross-modal alternatives, and extensive downstream experiments underscore the state-of-the-art effectiveness, particularly outperforming our baseline (Point-MAE) by 5.16%, 5.00%, and 5.04% in three variants of ScanObjectNN, respectively. Code is available at https://github.com/ zyh16143998882/AAAI24-PointFEMAE. Introduction Point cloud, as an efficient representation of 3D objects, has been widely used in extensive applications like autonomous driving, robotics, and the metaverse for its rich geometric, shape, and structural details. Recently, with the rapid advancements of deep learning-based point cloud understanding (Qi et al. 2017a; Wang et al. 2019; Xiong et al. 2023; Gao *Corresponding author. ([email protected]) Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. et al. 2023), masked autoencoder (MAE) based pre-training methods (Pang et al. 2022; Zhang et al. 2022b; Dong et al. 2022; Zhang et al. 2022c; Qi et al. 2023), which aim to learn latent 3D representations from vast unlabeled point clouds, have received much attention, and can be categorized into two classes, i.e., single-modal (Pang et al. 2022; Zhang et al. 2022b) and cross-modal (Dong et al. 2022; Guo, Li, and Heng 2023; Zhang et al. 2022c; Qi et al. 2023) methods. Among them, cross-modal MAE methods, leveraging insights from other modalities, have achieved remarkable performance by acquiring holistic 3D representations. 
However, these methods rely heavily on transferring knowledge from massive paired images or texts, which are often unavailable in practice. Specifically, they utilize pre-trained image or language models to extract cross-modal knowledge, along with techniques like projection or knowledge distillation for cross-modal knowledge transfer. Such complex operations require heavy computational costs and thus hinder their applications in practice. As shown in Table 1, cross-modal methods like Recon (Qi et al. 2023) obtain performance gains of about 5% on ScanObjectNN while requiring 5× the pre-training parameters, compared to the single-modal Point-MAE (Pang et al. 2022).

Method | Input | Mask Strategy | #Params (M) | GFLOPS | Time (h) | ScanObjectNN | ModelNet40
Single-Modal MAE-based Methods
Point-MAE (baseline) | PC | Global Random | 29.0 | 2.3 | 13 | 85.18 | 93.8
Point-M2AE | PC | Multi-Scale | 15.3 (0.5×) | 3.7 (1.6×) | 29 (2.2×) | 86.43 (↑1.25) | 94.0 (↑0.2)
Cross-Modal MAE-based Methods
ACT | PC | Global Random | 135.5 (4.7×) | 31.0 (13.5×) | 52 (4.0×) | 88.21 (↑3.03) | 93.7 (↓0.1)
Joint-MAE | PC & I | Global Random | – | – | – | 86.07 (↑0.89) | 94.0 (↑0.2)
I2P | PC & I | 2D-Guided | 74.9 (2.6×) | 16.8 (7.3×) | 64 (4.9×) | 90.11 (↑4.93) | 94.1 (↑0.3)
Recon | PC & I & L | Global Random | 140.9 (4.9×) | 20.9 (9.1×) | 34 (2.6×) | 90.63 (↑5.45) | 94.5 (↑0.7)
Point-FEMAE | PC | Hybrid | 41.5 (1.4×) | 5.0 (2.2×) | 21 (1.6×) | 90.22 (↑5.04) | 94.5 (↑0.7)
Table 1: Comparison of single-modal and cross-modal MAE methods in terms of pre-training efficiency and representational capability. For pre-training efficiency, we evaluate parameters, GFLOPS, and actual pre-training time. For representational capability, we fine-tuned the pre-trained models to evaluate classification accuracy. PC is point cloud, I is images, and L is language.

For these reasons, single-modal methods with solely point clouds as input are preferred in real applications due to their simplicity and efficiency (Table 1). However, existing single-modal methods rely heavily on the global random masked point cloud (shown in Figure 1 (a)) generated by the global random masking strategy to learn 3D representations, which gives the model robust global shape perception but insufficient local detail representation. As shown in Table 2, such single-modal methods work well on the global masked point cloud (GMPC) while failing on the local masked point cloud (LMPC), thus resulting in limited 3D representations for single-modal MAE models.

To learn compact 3D representations for point clouds, we propose a simple yet highly effective model, Point Feature Enhancement Masked Autoencoders (Point-FEMAE), which mainly consists of a global branch and a local branch to capture latent global and local features, respectively. Specifically, during the pre-training stage, as illustrated in Figure 2 (a), we subject a complete point cloud to both global random masking and local block masking to generate globally-biased and locally-biased inputs, respectively. Subsequently, a partially parameter-shared encoder is employed to capture latent global and local features in the global and local branches, and the masked inputs are rebuilt with a branch-independent decoder. The encoder of both branches shares the same Transformer parameters to ensure a comprehensive understanding of the global points. Furthermore, an additional Local Enhancement Module (LEM) with local patch convolution is introduced within the local branch to perceive fine-grained local context at larger scales.
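Neither the paper excerpt nor this dump includes reference code, so the following is only a minimal PyTorch sketch of how the patch centers and the two masking strategies just described could be realized; the function names, the greedy FPS loop, the 0.6 mask ratio, and the single-seed block mask are assumptions rather than the authors' implementation.

```python
# Hedged sketch (not the authors' code): patch centers come from a simple farthest point
# sampling loop; both masks operate on patch indices and mask the same fraction of patches.
import torch

def farthest_point_sample(xyz: torch.Tensor, num_patches: int) -> torch.Tensor:
    """Greedy FPS over an (N, 3) point cloud; returns indices of the patch centers."""
    n = xyz.shape[0]
    centers = torch.zeros(num_patches, dtype=torch.long)
    dist = torch.full((n,), float("inf"))
    farthest = torch.randint(0, n, (1,)).item()
    for i in range(num_patches):
        centers[i] = farthest
        d = ((xyz - xyz[farthest]) ** 2).sum(-1)       # distance to the newest center
        dist = torch.minimum(dist, d)
        farthest = int(torch.argmax(dist))             # next center: farthest remaining point
    return centers

def global_random_mask(num_patches: int, mask_ratio: float) -> torch.Tensor:
    """Mask a random subset of patches scattered over the whole object (GMPC-style input)."""
    num_mask = int(num_patches * mask_ratio)
    perm = torch.randperm(num_patches)
    mask = torch.zeros(num_patches, dtype=torch.bool)
    mask[perm[:num_mask]] = True                       # True = masked patch
    return mask

def local_block_mask(centers_xyz: torch.Tensor, mask_ratio: float) -> torch.Tensor:
    """Mask one contiguous block of patches around a random seed center (LMPC-style input)."""
    num_patches = centers_xyz.shape[0]
    num_mask = int(num_patches * mask_ratio)
    seed = torch.randint(0, num_patches, (1,)).item()
    d = ((centers_xyz - centers_xyz[seed]) ** 2).sum(-1)
    block = torch.topk(d, k=num_mask, largest=False).indices   # centers nearest to the seed
    mask = torch.zeros(num_patches, dtype=torch.bool)
    mask[block] = True
    return mask

if __name__ == "__main__":
    pc = torch.rand(1024, 3)                           # one point cloud with N = 1024 points
    center_idx = farthest_point_sample(pc, 64)         # p = 64 patch centers
    g_mask = global_random_mask(64, mask_ratio=0.6)
    l_mask = local_block_mask(pc[center_idx], mask_ratio=0.6)
    print(g_mask.sum().item(), l_mask.sum().item())    # same number of masked patches
```

The point to notice is that both strategies drop the same fraction of patches; they differ only in whether the masked patches are scattered across the object (global random) or concentrated around a seed center (local block).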
During the fine-tuning phase, as depicted in Figure 2(b), owing to the availability of comprehensive global and local information in the complete input point cloud, we employ the encoder from the local branch to learn compact 3D representations of the downstream task point clouds. Our main contributions are summarized as follows:
• We have found that existing single-modal MAE-based point cloud pre-training methods suffer from limited 3D representations due to the use of a global random masking strategy.
• We propose Point Feature Enhancement Masked Autoencoders (Point-FEMAE), which combine global and local mask reconstruction to capture latent enhanced point features. Besides, a Local Enhancement Module (LEM) is introduced into the encoder to perceive fine-grained local context at larger scales.
• Our method significantly improves pre-training efficiency compared to cross-modal methods. Notably, extensive experiments demonstrate the effectiveness of our method over other MAE-based methods. In particular, our method significantly outperforms Point-MAE by 5.16%, 5.00%, and 5.04% on the three variants of ScanObjectNN, respectively.

Related Work
Point Cloud Self-supervised Learning
Self-supervised learning (SSL) has achieved remarkable success in many fields such as NLP and computer vision. This approach first applies a pretext task to learn latent semantic information and then fine-tunes the weights of the model on the target task to achieve higher performance. Existing pretext tasks can be divided into discriminative tasks (Becker and Hinton 1992; Wu et al. 2018; Chen et al. 2020; Zhang et al. 2023) and generative tasks (He et al. 2022; Lin, Wang, and Liu 2021; Baevski et al. 2022). Discriminative approaches (Xie et al. 2020) distinguish different views of the same instance from other instances; in the point cloud field, PointContrast (Xie et al. 2020) first explored learning 3D representations using contrastive learning on features of the same points in different views. CrossPoint (Afham et al. 2022) learns point cloud representations within the 3D domain by contrastive learning and then performs further cross-modal contrastive learning. Generative methods (Vincent et al. 2008; Radford et al. 2018; Devlin et al. 2018; Ferles, Papanikolaou, and Naidoo 2018; Zhang et al. 2022a) typically rely on an autoencoder to learn latent features of the data by reconstructing the original input. Masked autoencoders (MAE) (He et al. 2022), a classical autoencoder that tries to recover the original input from a masked version and thereby learns more robust features, have received a lot of research attention.

MAE-based Point Cloud Pre-training
MAE-based point cloud pre-training methods can be grouped into two categories, i.e., single-modal (Pang et al. 2022; Zhang et al. 2022b) and cross-modal (Dong et al. 2022; Guo, Li, and Heng 2023; Zhang et al. 2022c; Qi et al. 2023) methods. Point-MAE (Pang et al. 2022) pioneered the use of masked autoencoders for self-supervised pre-training in point clouds. It divides point clouds into patches and employs a mini-PointNet to extract patch embeddings; mask reconstruction is then performed with standard Transformers, with impressive results. Afterward, Point-M2AE (Zhang et al. 2022b) proposes a multi-scale masking strategy, but still relies on a global random masking strategy at the first scale. Subsequent work mainly focuses on using cross-modal knowledge to aid point cloud model learning. For instance, ACT (Dong et al. 2022) utilized a pre-trained ViT (Dosovitskiy et al. 2020) as a teacher network to guide the learning of the point cloud student network.
I2P-MAE (Zhang et al. 2022c) proposed 2D-guided masking and 2D semantic reconstruction to assist point cloud model learning. Recon (Qi et al. 2023) learns from both generative modeling teachers and cross-modal contrastive teachers through ensemble distillation. Other MAE-based works (Chen et al. 2023; Yang et al. 2023; Tian et al. 2023) focus on using scene and LiDAR point clouds for pre-training, specifically for detection tasks. IDPT (Zha et al. 2023b) first proposed to introduce prompt tuning in pre-trained point cloud models. Our work focuses on single-modal point cloud pre-training to learn compact 3D representations.

Methodology
Observations
Despite the high efficiency, existing single-modal MAE-based pre-training pipelines with global/local random mask strategies obtain much worse performance than cross-modal methods (as shown in Table 1). It still remains unknown how the random mask strategies affect the single-modal MAE models. To this end, we first identified a substantial gap in the data distribution between the input data during pre-training and fine-tuning in the context of existing MAE-based methods.

Figure 1: Differences in data distribution between pre-training and fine-tuning. (a) Global Masked Point Cloud (GMPC) input during pre-training with global random masking. (b) Local Masked Point Cloud (LMPC) input during pre-training with local block masking. (c) Complete Point Cloud (CPC) input during downstream fine-tuning.

During the pre-training stage, conventional masked autoencoders typically employ a global random masking strategy to learn 3D representations, as shown in Figure 1 (a), where a portion of the points is randomly masked. This masking strategy retains the global shape of the point cloud while sacrificing local details. Another strategy, local block masking, randomly masks entire point blocks from the complete point cloud at the same ratio, preserving some local details but disrupting global shapes, as shown in Figure 1 (b), which has been demonstrated to yield limited performance (Yu et al. 2022; Pang et al. 2022). However, during the fine-tuning stage, complete point clouds containing full information are often utilized to learn 3D representations, as depicted in Figure 1 (c). Our empirical observations suggest that such masked inputs during the pre-training stage may lead to limited 3D representations due to the lack of complete information.

Specifically, we employ two straightforward masking strategies, global random mask and local block mask, illustrated in Figure 1 (a) and (b), to dissect the representation efficacy of Point-MAE models pre-trained with these inputs.

Pre-training Model | Reconstruction (↓) GMPC | Reconstruction (↓) LMPC | Classification (↑) GMPC | Classification (↑) LMPC
Point-MAE w/ GM | 2.1902 | 2.8538 | 92.77 | 88.98
Point-MAE w/ LM | 2.3533 | 2.4064 | 92.08 | 88.81
Point-FEMAE | 2.1880 | 2.3941 | 93.46 | 89.33
Table 2: Models with varying mask strategies are assessed using LMPC and GMPC for classification and reconstruction. We measure the reconstructed CD distance on the ShapeNet test set. Additionally, we gauge the classification accuracy on ScanObjectNN (OBJ-BG).
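Table 2's reconstruction numbers are Chamfer distances (CD), the same metric the paper later adopts as its reconstruction loss in Eq. (4). The snippet below is a generic, hedged sketch of an l2 Chamfer distance rather than the authors' exact implementation; the reduction (mean over points and over the batch) is an assumption.

```python
# Hedged sketch of an l2 (squared) Chamfer Distance between two point sets.
import torch

def chamfer_l2(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """pred, gt: (B, M, 3) point sets. Returns the mean bidirectional squared-L2 Chamfer distance."""
    d = torch.cdist(pred, gt, p=2) ** 2                  # (B, M_pred, M_gt) pairwise squared distances
    d_pred_to_gt = d.min(dim=2).values.mean(dim=1)       # each predicted point -> nearest GT point
    d_gt_to_pred = d.min(dim=1).values.mean(dim=1)       # each GT point -> nearest predicted point
    return (d_pred_to_gt + d_gt_to_pred).mean()

# Example: compare reconstructed masked patches against their ground-truth points.
rec = torch.rand(8, 32, 3)                               # 8 patches, 32 points each
gt = torch.rand(8, 32, 3)
print(chamfer_l2(rec, gt).item())
```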
We assess the models' performance across reconstruction and classification tasks on pertinent test datasets. By introducing point cloud inputs biased toward local details (LMPC) and biased toward global shapes (GMPC) into the model, we gauge its competence in capturing both global and local point representations. The rationale behind this is as follows: for a model utilizing a global random masking strategy, the GMPC inputs are sparsely and randomly spread across the entire object, causing local details to be severely disrupted. Despite this, the overall global shape remains preserved, leading the model to prioritize extracting global features. Conversely, in the case of LMPC inputs, all points are clustered within a few local regions, prompting the model to emphasize learning representations centered on the local surface. Consequently, models exhibiting proficiency on GMPC highlight strong global representation, while those excelling on LMPC underscore potent local representation capabilities.

As illustrated in Table 2, Point-MAE w/ global random masking demonstrates impressive reconstruction and classification results when tested on GMPC, but its performance is subpar on LMPC. This observation suggests that the model excels in global representation capabilities. Conversely, Point-MAE w/ local block masking also displays superior performance on GMPC as opposed to LMPC. However, in comparison to global random masking, local block masking encounters a more substantial decline in GMPC performance and a greater enhancement in LMPC performance.

The above observations indicate that existing single-modal pre-trained models employing these two straightforward masking strategies lack the ability to excel simultaneously on both LMPC and GMPC, i.e., these models fail to effectively capture both local and global representations. Previous research (Qi et al. 2017b; Wang et al. 2019; Li et al. 2021; Wu, Qi, and Fuxin 2019) has demonstrated that models capable of robustly representing both global and local features exhibit higher potential. This insight motivates us to develop a model that learns compact 3D representations by comprehensively exploring global and local information.

Figure 2: The pipeline of our Point-FEMAE. During the pre-training stage, we perform mask reconstruction in both the global and local branches to learn compact 3D representations. During the fine-tuning stage, we only employ the encoder of the local branch to learn the 3D representation of downstream data.

Point Feature Enhancement Masked Autoencoders
The overall pipeline of our Point Feature Enhancement Masked Autoencoders (Point-FEMAE) is shown in Figure 2. During the pre-training stage, due to the issue of information loss in masked inputs, we perform mask reconstruction in both the global and local branches to learn compact 3D representations.
During the fine-tuning stage, owing to the complete input, we only employ the encoder of the local branch to learn the 3D representation of downstream data.

Masking and Embedding. Given a point cloud $PC \in \mathbb{R}^{N \times 3}$ with $N$ points, we initially divide it into $p$ point patches $P \in \mathbb{R}^{p \times m \times 3}$ by farthest point sampling (FPS) and K-Nearest Neighborhood (KNN), with each point patch comprising $m$ local points. Subsequently, in the global branch, we apply global random patch masking to yield unmasked patches $P^g_u \in \mathbb{R}^{(1-r)p \times m \times 3}$ and masked patches $P^g_m \in \mathbb{R}^{rp \times m \times 3}$, where $r$ denotes the mask ratio. Analogously, within the local branch, we utilize random local block masking to generate $P^l_u \in \mathbb{R}^{(1-r)p \times m \times 3}$ and $P^l_m \in \mathbb{R}^{rp \times m \times 3}$. Finally, $P^g_u$ and $P^l_u$ are embedded via a light PointNet, and positional encodings are incorporated to derive patch tokens $E^g_0 \in \mathbb{R}^{(1-r)p \times C}$ and $E^l_0 \in \mathbb{R}^{(1-r)p \times C}$ for the global and local branches.

Encoder. We employ a share-parameter Transformer encoder to extract features from the unmasked patches in both the global and local branches. This encoder consists of a series of $n$ encoder layers, each incorporating a standard Transformer block and a Local Enhancement Module (LEM), as depicted in Figure 3. The Transformer block integrates multi-head self-attention and a feed-forward network, predominantly focused on perceiving global information. The Local Enhancement Module (LEM), situated after the Transformer block, is mainly designed to capture local information about the object; it is used in the local branch during pre-training and throughout fine-tuning. Specifically, for the global branch, during the $i$-th layer forward phase, the feature $E^g_{i-1}$ only passes through the $i$-th standard Transformer block $\mathcal{T}_i$, allowing the standard Transformer to focus more on the global feature representations. For the local branch, the feature $E^l_{i-1}$ passes through the $i$-th standard Transformer block and is then fed into the $i$-th Local Enhancement Module $\mathcal{M}_i$, enabling the Local Enhancement Module to focus more on representing local features. Finally, after $n$ layers of forward propagation, the two branches yield the features $E^g_n$ and $E^l_n$, respectively. The forward process of each layer is defined as
$$[E^g_i;\ E^l_i]_0 = [\mathcal{T}_i(E^g_{i-1});\ \mathcal{M}_i(\mathcal{T}_i(E^l_{i-1}))]_0, \quad (1)$$
where $i$ takes values from $1$ to $n$, and $[\,;\,]_0$ denotes concatenation along the batch dimension.

Local Enhancement Module. Existing MAE-based methods have exhibited limited local representation, primarily relying on PointNet (Qi et al. 2017a) for extracting patch embeddings to represent limited local contexts. This approach is hindered by two key issues: 1) PointNet inherently lacks localization capabilities, and 2) it struggles to effectively capture locality at broader scales. To tackle these issues, drawing inspiration from EdgeConv (Wang et al. 2019), we introduce a local patch convolution with coordinate-based nearest neighbors at the patch scale as a dedicated Local Enhancement Module (LEM), to perceive fine-grained local context at larger scales.
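To make Eq. (1) concrete, here is a hedged PyTorch sketch of one encoder layer: a Transformer block whose parameters are shared by both branches, followed by a Local Enhancement Module applied only on the local path. The LEM body anticipates the patch-level edge convolution detailed in the next paragraph; the embedding width (384), head count, k = 8 neighbors, pre-norm layout, and the sign convention of the edges are assumptions, not the paper's released code.

```python
# Hedged sketch (not the authors' code) of one encoder layer implementing Eq. (1).
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """Standard pre-norm self-attention + feed-forward block (the globally oriented T_i)."""
    def __init__(self, dim: int = 384, heads: int = 6):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):                                  # x: (B, P, C) patch tokens
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.ffn(self.norm2(x))

class LocalEnhancementModule(nn.Module):
    """Patch-scale edge convolution (M_i): fuse each token with its k nearest patches by coordinates."""
    def __init__(self, dim: int = 384, k: int = 8):
        super().__init__()
        self.k = k
        self.reduce = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU())   # MLP on [token, edge]
        self.out = nn.Sequential(nn.Linear(dim, dim), nn.GELU())

    def forward(self, tokens, centers):                    # tokens: (B, P, C), centers: (B, P, 3)
        # k nearest patch centers (the patch itself is included among its neighbors here).
        idx = torch.cdist(centers, centers).topk(self.k, largest=False).indices      # (B, P, k)
        B, P, C = tokens.shape
        gather_idx = idx.reshape(B, P * self.k, 1).expand(-1, -1, C)
        neighbors = torch.gather(tokens, 1, gather_idx).reshape(B, P, self.k, C)
        edges = tokens.unsqueeze(2) - neighbors            # relative edges; sign is an assumption
        feat = torch.cat([tokens.unsqueeze(2).expand(-1, -1, self.k, -1), edges], dim=-1)
        feat = self.reduce(feat).max(dim=2).values         # max-pool over the k local edges
        return self.out(feat)

class EncoderLayer(nn.Module):
    """Eq. (1): the global branch uses T_i only; the local branch uses T_i followed by M_i."""
    def __init__(self, dim: int = 384, k: int = 8):
        super().__init__()
        self.block = TransformerBlock(dim)                 # parameters shared by both branches
        self.lem = LocalEnhancementModule(dim, k)          # local branch only

    def forward(self, e_g, e_l, centers_l):
        return self.block(e_g), self.lem(self.block(e_l), centers_l)
```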
Figure 3: The Encoder Layer's structure, where each layer incorporates a globally oriented Transformer Block and a locally oriented LEM. Within the LEM, information from the k nearest neighbor patches is fused based on the patches' coordinates, facilitating a broader scope of local perception.

Specifically, each patch token in $E^l_{i-1}$ first undergoes a Transformer block to yield the current patch tokens $E^l_i$. The patch coordinates $C$ then undergo a K-Nearest Neighbor (KNN) search to obtain the indices $I$ of the $K$ nearest neighboring patches. Through these indices, the relative edges between patches are calculated (e.g., for neighboring patches $a$ and $b$, the edge is computed as $E^l_i(a) - E^l_i(b)$). Each patch in $E^l_i$ is then replicated $K$ times and concatenated with the corresponding edges to form the final edge tensor $G_i$. We apply a single-layer MLP for dimension reduction and use max pooling to aggregate the $K$ local edges. Lastly, the result goes through another MLP to yield the output tokens $E^l_i$ for the $i$-th layer.

Decoder. We employ two distinct decoders, $\mathcal{D}^g$ and $\mathcal{D}^l$, both structured identically. In the local branch, we first concatenate the encoder output $E^l_n$ with randomly initialized learnable mask tokens $E^l_m$ and direct this composite input into $\mathcal{D}^l$. Subsequently, we pass the decoded mask tokens through a linear head $\mathcal{H}^l$ for coordinate reconstruction, yielding $R^l_m$. Finally, we calculate the reconstruction loss between $R^l_m \in \mathbb{R}^{rp \times m \times 3}$ and the ground truth $P^l_m$. A similar process is undertaken for the global branch. Specifically, the forward process of each branch is defined as
$$R^l_m = \mathcal{H}^l(\mathcal{D}^l([E^l_n;\ E^l_m]_1)[:, rp:]), \quad (2)$$
$$R^g_m = \mathcal{H}^g(\mathcal{D}^g([E^g_n;\ E^g_m]_1)[:, rp:]), \quad (3)$$
where $[\,;\,]_1$ denotes concatenation along the token dimension and $[:, rp:]$ denotes the last $rp$ patch tokens.

Loss Function. We use the $\ell_2$ Chamfer Distance (CD) (Fan, Su, and Guibas 2017) as our reconstruction loss. Our reconstruction target is to recover the coordinates of the masked point patches of the local and global branches. Our loss function $\mathcal{L}$ is given in Eq. (4):
$$\mathcal{L} = \mathrm{CD}(R^g_m, P^g_m) + \mathrm{CD}(R^l_m, P^l_m). \quad (4)$$

Experiments
Pre-training on ShapeNet
We use ShapeNet (Chang et al. 2015) as our pre-training dataset, encompassing over 50,000 distinct 3D models spanning 55 prevalent object categories. We extract 1024 points from each 3D model to serve as input for pre-training. The input point cloud is further divided into 64 point patches, with each patch containing 32 points. Table 1 presents a comparison of our method and other approaches concerning pre-training efficiency and efficacy.

Single-Modal. Compared to the single-modal baseline, Point-MAE (Pang et al. 2022), our method shows only slight increases in parameters, GFLOPS, and pre-training time, which are negligible considering the significant performance improvements. In contrast to Point-M2AE (Zhang et al. 2022b), while we possess more parameters and GFLOPS, our pre-training time is notably shorter. This variance arises from Point-M2AE's utilization of a larger input point count and more patches (2048 points and 512 patches), in contrast to our utilization of 1024 points and 64 patches.

Cross-Modal. In comparison to cross-modal methods, our approach showcases a substantial reduction in parameters (30%∼60%), GFLOPS (16%∼30%), and pre-training times (30%∼60%) due to our simple pipeline and input. Remarkably, while maintaining pre-training efficiency, our method achieves performance comparable to the state-of-the-art cross-modal method, Recon (Qi et al. 2023), underscoring the excellence of our approach.
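Putting the pieces together, a hedged sketch of one pre-training step per Eqs. (2)–(4) is given below. It reuses TransformerBlock, EncoderLayer, and chamfer_l2 from the earlier sketches; the decoder depth, the shared learnable mask token, the omission of positional embeddings in the decoder, and the per-patch Chamfer reduction are assumptions rather than the authors' exact recipe.

```python
# Hedged sketch of one pre-training step (Eqs. (2)-(4)); builds on the earlier sketches.
import torch
import torch.nn as nn

class BranchDecoder(nn.Module):
    """Branch-independent decoder D with a linear reconstruction head H."""
    def __init__(self, dim: int = 384, depth: int = 4, group_size: int = 32):
        super().__init__()
        self.layers = nn.ModuleList([TransformerBlock(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, group_size * 3)         # H: one token -> m x 3 coordinates

    def forward(self, visible, mask_tokens):
        x = torch.cat([visible, mask_tokens], dim=1)       # [E_n ; E_m]_1 along the token dimension
        for layer in self.layers:
            x = layer(x)
        masked = x[:, -mask_tokens.shape[1]:]              # keep only the last r*p (mask) tokens
        b, num_masked, _ = masked.shape
        return self.head(masked).reshape(b, num_masked, -1, 3)   # R_m: (B, r*p, m, 3)

def pretrain_step(encoder_layers, dec_g, dec_l, mask_token, batch):
    """`mask_token` is an nn.Parameter of shape (1, 1, dim); `batch` holds embedded visible tokens,
    local patch centers, and ground-truth masked patches (B, r*p, m, 3) for each branch."""
    e_g, e_l = batch["tokens_g"], batch["tokens_l"]
    for layer in encoder_layers:                           # shared-parameter encoder, Eq. (1)
        e_g, e_l = layer(e_g, e_l, batch["centers_l"])
    num_masked = batch["gt_g"].shape[1]
    m = mask_token.expand(e_g.shape[0], num_masked, -1)    # positional embeddings omitted for brevity
    rec_g = dec_g(e_g, m)                                  # Eq. (3)
    rec_l = dec_l(e_l, m)                                  # Eq. (2)
    def per_patch_cd(rec, gt):                             # Chamfer per masked patch, then averaged
        return chamfer_l2(rec.flatten(0, 1), gt.flatten(0, 1))
    return per_patch_cd(rec_g, batch["gt_g"]) + per_patch_cd(rec_l, batch["gt_l"])   # Eq. (4)
```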
Fine-tuning on Downstream Tasks We assess the efficacy of our approach by fine-tuning our pre-trained models on downstream tasks, including classification, few-shot learning, and part segmentation. Object Classification. We initially assess the overall classification accuracy of our pre-trained models on both realscanned (ScanObjectNN (Uy et al. 2019)) and synthetic (ModelNet40 (Wu et al. 2015)) datasets. ScanObjectNN is a prevalent dataset consisting of approximately 15,000 real-world scanned point cloud samples from 15 categories. These objects represent indoor scenes and are often characterized by cluttered backgrounds and occlusions caused by other objects. ModelNet40 is a well-known synthetic point cloud dataset, comprising 12,311 meticulously crafted 3D CAD models distributed across 40 categories. To ensure a fair comparison, we follow the practices of previous studies (Dong et al. 2022; Qi et al. 2023; Zhang et al. 2022c). For the ScanObjectNN dataset, we employ data augmentation through simple rotations and report results without voting mechanisms. Additionally, for each input point cloud, we sample 2048 points. Regarding the ModelNet40 dataset, we sample 1024 points for each input point cloud and report overall accuracy for both the withoutvote and with-vote configurations and during the fine-tuning phase in ModelNet40, we only update the parameters of our local enhancement modules and the classification head to mitigate overfitting. As presented in Table 3, in comparison to baseline PointMAE, our method showcases substantial enhancements in accuracy across various datasets. Specifically, we observe improvements of 5.16%, 5.00%, and 5.04% on three variants of ScanObjectNN, as well as gains of 0.8% and 0.7% on the ModelNet40 (w/o vote and w/ vote respectively). Furthermore, when compared to the leading cross-modal method The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6966 Method #Params ScanObjectNN ModelNet40 Input OBJ-BG OBJ-ONLY PB-T50-RS Input w/o Vote w/ Vote Supervised Learning Only PointNet (Qi et al. 2017a) 3.5 1k PC 73.3 79.2 68.0 1k PC 89.2 PointNet++ (Qi et al. 2017b) 1.5 1k PC 82.3 84.3 77.9 1k PC 90.7 DGCNN (Wang et al. 2019) 1.8 1k PC 82.8 86.2 78.1 1k PC 92.9 SimpleView (Goyal et al. 2021) 6 I 80.5 6 I 93.9 PointMLP (Ma et al. 2022) 12.6 1k PC 85.2 1k PC 94.1 94.5 SFR (Zha et al. 2023a) 20 I 87.8 12 I 93.9 P2P-HorNet (Wang et al. 2022) 195.8 40 I 89.3 40 I 94.0 Single-Modal Self-Supervised Learning Point-BERT (Yu et al. 2022) 22.1 1k PC 87.43 88.12 83.07 1k PC 92.7 93.2 MaskPoint (Liu, Cai, and Lee 2022) 22.1 2k PC 89.30 88.10 84.30 1k PC 93.8 Point-MAE (Pang et al. 2022) 22.1 2k PC 90.02 88.29 85.18 1k PC 93.2 93.8 Point-M2AE (Zhang et al. 2022b) 15.3 2k PC 91.22 88.81 86.43 1k PC 93.4 94.0 Point-FEMAE 27.4 2k PC 95.18 93.29 90.22 1k PC 94.0 94.5 Improvement (baseline: Point-MAE) +5.16 +5.00 +5.04 +0.8 +0.7 Cross-Modal Self-Supervised Learning ACT (Dong et al. 2022) 22.1 2k PC 93.29 91.91 88.21 1k PC 93.2 93.7 Joint-MAE (Guo, Li, and Heng 2023) 2k PC 90.94 88.86 86.07 1k PC 94.0 I2P-MAE (Zhang et al. 2022c) 15.3 2k PC 94.15 91.57 90.11 1k PC 93.7 94.1 Recon (Qi et al. 2023) 44.3 2k PC 95.18 93.29 90.63 1k PC 94.1 94.5 Table 3: Classification accuracy on real-scanned (ScanObjectNN) and synthetic (ModelNet40) point clouds. In ScanObjectNN, we report the overall accuracy (%) on three variants. In ModelNet40, we report the overall accuracy (%) for both without and with voting. ”#Params” represents the model’s parameters. 
Method 5-way 10-way 10-shot 20-shot 10-shot 20-shot Single-Modal Self-Supervised Learning Point-BERT 94.6±3.1 96.3±2.7 91.0±5.4 92.7±5.1 MaskPoint 95.0±3.7 97.2±1.7 91.4±4.0 93.4±3.5 Point-MAE 96.3±2.5 97.8±1.8 92.6±4.1 95.0±3.0 Point-M2AE 96.8±1.8 98.3±1.4 92.3±4.5 95.0±3.0 PointFEMAE 97.2±1.9 98.6±1.3 94.0±3.3 95.8±2.8 Improvement +0.9 +0.8 +1.4 +0.8 Cross-Modal Self-Supervised Learning ACT 96.8±2.3 98.0±1.4 93.3±4.0 95.6±2.8 Joint-MAE 96.7±2.2 97.9±1.8 92.6±3.7 95.1±2.6 I2P-MAE 97.0±1.8 98.3±1.3 92.6±5.0 95.5±3.0 Recon 97.3±1.9 98.9±1.2 93.3±3.9 95.8±3.0 Table 4: Few-shot learning on ModelNet40. We report the average classification accuracy (%) with the standard deviation (%) of 10 independent experiments. Recon (Qi et al. 2023), our approach achieves almost equivalent accuracy, while requiring only 62% of the parameters. These results underscore the unmatched efficiency and efficacy of our pre-trained models, affirming the superiority of our design. Few-shot Learning. Following previous works (Pang et al. 2022; Qi et al. 2023), we conduct few-shot learning experiments on the ModelNet40 (Wu et al. 2015) dataset using the ”n-way, m-shot” configuration, where n is the number of randomly sampled categories and m is the number of samples in each category. We use the above-mentioned n × m samples for training, while 20 unseen samples from each category for testing. Following standard protocol, we conducted 10 independent experiments for each setting and reported mean accuracy with standard deviation. As indicated in Table 4, with limited downstream finetuning data, our Point-FEMAE exhibits competitive performance among existing single-modal and cross-modal methods, e.g.+1.4% classification accuracy to Point-MAE on the 10-way 10-shot split. Part Segmentation. We assess the performance of PointFEMAE in part segmentation using the ShapeNetPart dataset (Chang et al. 2015), comprising 16,881 samples across 16 categories. Employing the same experimental settings and segmentation head as Point-MAE and the mean IoU across all categories, i.e., mIoUc (%), and the mean IoU across all instances, i.e., mIoUI (%) are reported. We did not include the results for Point-M2AE and I2P-MAE due to their utilization of a more intricate segmentation head. As shown in Table 5, our Point-FEMAE exhibits competitive performance among both existing single-modal and cross-modal methods, e.g.+0.7% mIoUc to Point-MAE (Pang et al. 2022) and slightly improvement compared to Recon (Qi et al. 2023). These results demonstrate that our approach exhibits superior performance in tasks such as part segmentation, which demands a more fine-grained understanding of point clouds, demonstrating the superiority of The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6967 Methods Reference mIoUc mIoUI Supervised Learning Only PointNet (Qi et al. 2017a) CVPR’17 80.4 83.7 PointNet++ (Qi et al. 2017b) NIPS’17 81.9 85.1 PointMLP (Ma et al. 2022) ICLR’22 84.6 86.1 Single-Modal Self-Supervised Learning Transformer NIPS’17 83.4 84.7 Transformer-OcCo ICCV’21 83.4 85.1 Point-BERT CVPR’22 84.1 85.6 MaskPoint ECCV’22 84.4 86.0 Point-MAE ECCV’22 84.2 86.1 Point-FEMAE 84.9 86.3 Improvement +0.7 +0.2 Cross-Modal Self-Supervised Learning ACT ICLR’23 84.7 86.1 Recon ICML’23 84.8 86.4 Table 5: Part segmentation results on the ShapeNetPart. The mean IoU across all categories, i.e., mIoUc (%), and the mean IoU across all instances, i.e., mIoUI (%) are reported. the compact representations learned by our method. 
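Since only the local-branch encoder is kept for downstream tasks, fine-tuning can be sketched as attaching a small classification head on top of pooled patch tokens, as below. This reuses the EncoderLayer from the earlier sketch; the max+mean pooling, head sizes, and the ModelNet40-style freezing of the shared Transformer blocks (updating only LEMs and the head, as described above) are assumptions about details the excerpt does not fully specify.

```python
# Hedged sketch of downstream fine-tuning with the pre-trained local-branch encoder.
import torch
import torch.nn as nn

class FineTuneClassifier(nn.Module):
    def __init__(self, enc_layers, dim: int = 384, num_classes: int = 15):
        super().__init__()
        self.enc_layers = enc_layers                        # pre-trained encoder layers (with LEMs)
        self.head = nn.Sequential(nn.Linear(2 * dim, 256), nn.GELU(), nn.Dropout(0.5),
                                  nn.Linear(256, num_classes))

    def forward(self, tokens, centers):                     # complete (unmasked) patch tokens
        x = tokens
        for layer in self.enc_layers:
            # Reuse only the local-branch path; the global path output is discarded here
            # (a dedicated local-only forward would avoid the wasted computation).
            _, x = layer(x, x, centers)
        feat = torch.cat([x.max(dim=1).values, x.mean(dim=1)], dim=-1)   # max + mean pooling
        return self.head(feat)

def freeze_shared_blocks(enc_layers):
    """ModelNet40-style protocol mentioned above: freeze the shared Transformer blocks and
    update only the LEMs and the classification head."""
    for layer in enc_layers:
        for p in layer.block.parameters():
            p.requires_grad = False
```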
Ablation Study Effects of data augmentation, masking strategy, and LEM. Comparing our fine-tuning with the baseline PointMAE on ScanObjectNN (Uy et al. 2019), our method has two main differences. 1) Masking strategy: we use a hybrid global and local branch point masking strategy (e.g.Hybrid Mask). 2) Network architecture: we add our Local Enhancement Module (LEM) after each standard Transformer block in the local branch. We examined the effect of each factor separately. We designed four different structures to explore the effects of these factors, as shown in Table 6, A uses PointMAE as the baseline, B has a simple hybrid global and local branch mask reconstruction without local enhancement module (LEM), C add our LEM at each layer of the Encoder based on the Point-MAE with global random mask, and D is our Point-FEMAE model. 1 and 2 indicate two different data augmentations. Table 6 reports our ablation results, we can discover that: 1) simply combining two mask reconstructions can lead to a suboptimal encoder (comparing A-B). 2) introducing LEM to Point-MAE provides a slight improvement (comparing AC), and this improvement may be due to the introduction of additional parameters, we will discuss this issue in the next subsection. 4) Comparing D with other results, we can discover a significant improvement, which illustrates the superiority of our design, which artfully combines a hybrid global and local branch masking strategy and local enhancement modules. Effects of Additional Parameters. To illustrate whether our improvement is due to more parameters, we introduced the patch-independent MLP and Self-Attention module that focuses on global patches to replace our Local Enhancement #Params Hybrid Mask LEM PB-T50-RS A 22.1 ✘ ✘ 88.41 (baseline) B 22.1 ✔ ✘ 88.75 (↑0.34) C 27.4 ✘ ✔ 89.17 (↑0.76) D 27.4 ✔ ✔ 90.22 (↑1.81) Table 6: Effects of data augmentation, hybrid masking strategy, and LEM on the ScanObjectNN dataset. Addition Module #Params (M) PB-T50-RS Hybrid Mask w/o LEM 22.1 88.75 Hybrid Mask w/ 1-layer MLP 23.9 89.14 Hybrid Mask w/ 3-layer MLPs 27.4 89.17 Hybrid Mask w/ Self-Attention 29.2 89.42 Hybrid Mask w/ LEM 27.4 90.22 Table 7: Effects of additional network and parameters. Module, respectively, within our masking and reconstruction pipeline for pre-training. We reported their respective finetuned results on the ScanObjectNN in Table 7. These outcomes demonstrate that incorporating an additional 1-layer MLP exhibits some enhancement when compared to the Hybrid Mask w/o LEM. However, with the escalation of parameters, the model exhibits a limited potential, likely due to the MLP employing shared parameters for individual patch processing, regardless of patch correlations, similar to the Transformer’s feed-forward network. Similarly, the additional Self-Attention layer, requiring more parameters, yields a certain improvement, yet it parallels the behavior of the Self-Attention layer within the Transformer, consequently capping potential. These comparisons underscore that the advancement of our approach stems from the excellence of ingeniously combining the strategy of hybrid global and local branch mask reconstruction with the design based on local patch convolution, rather than being driven by additional parameters. 
Conclusions In this paper, we first compare the pre-training efficiency and efficacy of current single-modal and cross-modal MAEbased point cloud pre-training pipelines and experimentally demonstrate that the limited 3D representation of existing single-modal MAE-based point cloud pre-training methods is due to biases in the existing masking strategies towards global and local representations. To address this issue, we propose to learn compact 3D representations via effective Point Feature Enhancement Masked Autoencoders, which mainly consist of a global branch and local branch to capture latent semantic features. Meanwhile, to further perceive fine-grained local context at larger scales, we propose a Local Enhancement Module with local patch convolution in the local branch. Extensive experiments demonstrate the advancement of our design. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6968 Acknowledgments This work is supported in part by the National Key Research and Development Program of China, under Grant No. 2023YFF0905502, National Natural Science Foundation of China, under Grant (62302309,62171248), Shenzhen Science and Technology Program (Grant No. RCYX20200714114523079, JCYJ20220818101014030, JCYJ20220818101012025), and the PCNL KEY project (PCL2023AS6-1), and Tencent “Rhinoceros Birds” - Scientific Research Foundation for Young Teachers of Shenzhen University. References Afham, M.; Dissanayake, I.; Dissanayake, D.; Dharmasiri, A.; Thilakarathna, K.; and Rodrigo, R. 2022. Crosspoint: Self-supervised cross-modal contrastive learning for 3d point cloud understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9902–9912. Baevski, A.; Hsu, W.-N.; Xu, Q.; Babu, A.; Gu, J.; and Auli, M. 2022. Data2vec: A general framework for selfsupervised learning in speech, vision and language. arXiv preprint arXiv:2202.03555. Becker, S.; and Hinton, G. E. 1992. Self-organizing neural network that discovers surfaces in random-dot stereograms. Nature, 355(6356): 161–163. Chang, A. X.; Funkhouser, T.; Guibas, L.; Hanrahan, P.; Huang, Q.; Li, Z.; Savarese, S.; Savva, M.; Song, S.; Su, H.; et al. 2015. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012. Chen, A.; Zhang, K.; Zhang, R.; Wang, Z.; Lu, Y.; Guo, Y.; and Zhang, S. 2023. Pimae: Point cloud and image interactive masked autoencoders for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5291–5301. Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020. A simple framework for contrastive learning of visual representations. In International conference on machine learning, 1597–1607. PMLR. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Dong, R.; Qi, Z.; Zhang, L.; Zhang, J.; Sun, J.; Ge, Z.; Yi, L.; and Ma, K. 2022. Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning? arXiv preprint arXiv:2212.08320. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Fan, H.; Su, H.; and Guibas, L. J. 2017. A point set generation network for 3d object reconstruction from a single image. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, 605–613. Ferles, C.; Papanikolaou, Y.; and Naidoo, K. J. 2018. Denoising autoencoder self-organizing map (DASOM). Neural Networks, 105: 112–131. Gao, K.; Bai, J.; Wu, B.; Ya, M.; and Xia, S.-T. 2023. Imperceptible and Robust Backdoor Attack in 3D Point Cloud. IEEE Transactions on Information Forensics and Security. Goyal, A.; Law, H.; Liu, B.; Newell, A.; and Deng, J. 2021. Revisiting point cloud shape classification with a simple and effective baseline. In International Conference on Machine Learning, 3809–3820. PMLR. Guo, Z.; Li, X.; and Heng, P. A. 2023. Joint-mae: 2d-3d joint masked autoencoders for 3d point cloud pre-training. arXiv preprint arXiv:2302.14007. He, K.; Chen, X.; Xie, S.; Li, Y.; Doll´ar, P.; and Girshick, R. 2022. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16000–16009. Li, G.; M¨uller, M.; Qian, G.; Perez, I. C. D.; Abualshour, A.; Thabet, A. K.; and Ghanem, B. 2021. Deepgcns: Making gcns go as deep as cnns. IEEE transactions on pattern analysis and machine intelligence. Lin, K.; Wang, L.; and Liu, Z. 2021. End-to-end human pose and mesh reconstruction with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1954–1963. Liu, H.; Cai, M.; and Lee, Y. J. 2022. Masked discrimination for self-supervised learning on point clouds. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part II, 657–675. Springer. Ma, X.; Qin, C.; You, H.; Ran, H.; and Fu, Y. 2022. Rethinking network design and local geometry in point cloud: A simple residual MLP framework. arXiv preprint arXiv:2202.07123. Pang, Y.; Wang, W.; Tay, F. E.; Liu, W.; Tian, Y.; and Yuan, L. 2022. Masked autoencoders for point cloud selfsupervised learning. arXiv preprint arXiv:2203.06604. Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017a. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 652–660. Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017b. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in neural information processing systems, 30. Qi, Z.; Dong, R.; Fan, G.; Ge, Z.; Zhang, X.; Ma, K.; and Yi, L. 2023. Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining. arXiv preprint arXiv:2302.02318. Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.; et al. 2018. Improving language understanding by generative pre-training. Tian, X.; Ran, H.; Wang, Y.; and Zhao, H. 2023. GeoMAE: Masked Geometric Target Prediction for Self-supervised Point Cloud Pre-Training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13570–13580. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6969 Uy, M. A.; Pham, Q.-H.; Hua, B.-S.; Nguyen, T.; and Yeung, S.-K. 2019. Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data. In Proceedings of the IEEE/CVF international conference on computer vision, 1588–1597. Vincent, P.; Larochelle, H.; Bengio, Y.; and Manzagol, P.-A. 2008. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, 1096–1103. Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S. 
E.; Bronstein, M. M.; and Solomon, J. M. 2019. Dynamic graph cnn for learning on point clouds. Acm Transactions On Graphics (tog), 38(5): 1–12. Wang, Z.; Yu, X.; Rao, Y.; Zhou, J.; and Lu, J. 2022. P2p: Tuning pre-trained image models for point cloud analysis with point-to-pixel prompting. arXiv preprint arXiv:2208.02812. Wu, W.; Qi, Z.; and Fuxin, L. 2019. Pointconv: Deep convolutional networks on 3d point clouds. In Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition, 9621–9630. Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; and Xiao, J. 2015. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1912–1920. Wu, Z.; Xiong, Y.; Yu, S. X.; and Lin, D. 2018. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3733–3742. Xie, S.; Gu, J.; Guo, D.; Qi, C. R.; Guibas, L.; and Litany, O. 2020. Pointcontrast: Unsupervised pre-training for 3d point cloud understanding. In European conference on computer vision, 574–591. Springer. Xiong, J.; Dai, T.; Zha, Y.; Wang, X.; and Xia, S.-T. 2023. Semantic Preserving Learning for Task-Oriented Point Cloud Downsampling. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. IEEE. Yang, H.; He, T.; Liu, J.; Chen, H.; Wu, B.; Lin, B.; He, X.; and Ouyang, W. 2023. GD-MAE: generative decoder for MAE pre-training on lidar point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9403–9414. Yu, X.; Tang, L.; Rao, Y.; Huang, T.; Zhou, J.; and Lu, J. 2022. Point-bert: Pre-training 3d point cloud transformers with masked point modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19313–19322. Zha, Y.; Li, R.; Dai, T.; Xiong, J.; Wang, X.; and Xia, S.-T. 2023a. SFR: Semantic-Aware Feature Rendering of Point Cloud. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. IEEE. Zha, Y.; Wang, J.; Dai, T.; Chen, B.; Wang, Z.; and Xia, S.-T. 2023b. Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models. arXiv preprint arXiv:2304.07221. Zhang, C.; Zhang, C.; Song, J.; Yi, J. S. K.; Zhang, K.; and Kweon, I. S. 2022a. A survey on masked autoencoder for self-supervised learning in vision and beyond. arXiv preprint arXiv:2208.00173. Zhang, R.; Guo, Z.; Gao, P.; Fang, R.; Zhao, B.; Wang, D.; Qiao, Y.; and Li, H. 2022b. Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pretraining. arXiv preprint arXiv:2205.14401. Zhang, R.; Wang, L.; Qiao, Y.; Gao, P.; and Li, H. 2022c. Learning 3D Representations from 2D Pre-trained Models via Image-to-Point Masked Autoencoders. arXiv preprint arXiv:2212.06785. Zhang, T.; He, S.; Tao, D.; Chen, B.; Wang, Z.; and Xia, S.-T. 2023. Vision-Language Pre-training with Object Contrastive Learning for 3D Scene Understanding. arXiv preprint arXiv:2305.10714. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 6970 | 2024 | 774 |