Few Shot Part Segmentation Reveals Compositional Logic for Industrial Anomaly Detection
Soopil Kim1,3, Sion An1, Philip Chikontwe1, Myeongkyun Kang1,3, Ehsan Adeli3, Kilian M. Pohl3, Sang Hyun Park1,2
1Robotics and Mechatronics Engineering, DGIST, Daegu, Korea 2AI Graduate School, DGIST, Daegu, Korea 3Stanford University, Stanford, CA 94305, USA
{soopilkim, shpark13135}@dgist.ac.kr

Abstract
Logical anomalies (LA) refer to data violating underlying logical constraints, e.g., the quantity, arrangement, or composition of components within an image. Accurately detecting such anomalies requires models to reason about various component types through segmentation. However, curating pixel-level annotations for semantic segmentation is both time-consuming and expensive. Although prior few-shot and unsupervised co-part segmentation algorithms exist, they often fail on images of industrial objects, whose components share similar textures and shapes and are therefore difficult to differentiate precisely. In this study, we introduce a novel component segmentation model for LA detection that leverages a few labeled samples and unlabeled images sharing logical constraints. To ensure consistent segmentation across unlabeled images, we employ a histogram matching loss in conjunction with an entropy loss. As segmentation predictions play a crucial role, we propose to enhance both local and global sample validity detection by capturing key aspects of visual semantics via three memory banks: class histograms, component composition embeddings, and patch-level representations. For effective LA detection, we propose an adaptive scaling strategy to standardize anomaly scores from the different memory banks at inference. Extensive experiments on the public benchmark MVTec LOCO AD show that our method achieves 98.1% AUROC in LA detection vs. 89.6% for competing methods.

Introduction
In industrial images, defects can be categorized into two main types: structural and logical anomalies (Bergmann et al. 2022). Structural anomalies (e.g., cracks and contamination) occur in localized regions that are typically absent in normal data, whereas logical anomalies refer to data that does not adhere to underlying logical constraints, e.g., component composition and arrangement. Hence, effective detection requires considering long-range dependencies within and across images.

Existing research on anomaly detection (AD) for industrial images has primarily focused on unsupervised approaches that aim to learn the distribution of normal data and detect outliers as anomalies. This has resulted in state-of-the-art models that report impressive scores exceeding 99% (Roth et al. 2022). These high scores can be attributed to the nature of public benchmarks (e.g., MVTec AD (Bergmann et al. 2019) and MTD (Huang et al. 2020)), which predominantly comprise structural anomalies; the same models perform much worse when targeting logical anomalies (Bergmann et al. 2022).

Figure 1: Comparison of approaches at a conceptual level. (A) The anomaly detection (AD) model is directly trained using images. (B) Our proposed method guides part segmentation models using a few labeled samples to accurately segment components and then uses the segments for AD. (C) Examples of logical anomalies show the importance of semantically segmenting components for detection.

To address logical AD, current methods implicitly consider global dependencies among multiple components for effective detection, as described in Fig. 1A. For example, (Bergmann et al. 2022) proposed a hybrid feature reconstruction model, while (Tzachor, Hoshen et al. 2023) introduced a histogram-based density estimation model. Despite these advances, performance is constrained by the inability to accurately differentiate the various components. For more accurate logical AD, it is essential to semantically segment the product's components, as they often exhibit similar features (e.g., peaches vs. mandarins in Fig. 1C). This task is closely related to co-part segmentation (Hung et al. 2019), as the similar components of normal samples follow pre-defined logic. However, existing unsupervised methods (Hung et al. 2019; Gao et al. 2021) often fail to precisely segment such components, since they cannot distinguish similar features without supervised guidance.

A more effective approach is to guide part segmentation with a set of labeled images, employing manufacturers' prior knowledge about the individual elements required for product assembly, as shown in Fig. 1B. However, creating pixel-level annotations for numerous training images is costly and labor-intensive. While few-shot segmentation methods have made impressive advances in reducing the number of labeled samples (Wang et al. 2023; Hong et al. 2022), they equally fail to segment parts that have similar textures or shapes. To this end, we introduce a novel part segmentation model tailored to distinguish components in industrial images using a few labeled images and several unlabeled images. Specifically, we utilize positional features for prediction and minimize a histogram matching loss for unlabeled images, ensuring each image maintains a consistent number of pixels per class. The combination of different losses enables the model to accurately segment elements across the images.

We integrate accurate part segmentation into our novel AD method, PSAD (Part Segmentation-based Anomaly Detection). Specifically, PSAD detects local and global dependencies of elements by relying on memory banks for class histograms, component composition embeddings, and patch-level representations. To obtain a unified anomaly score from the differently scaled outputs of the memory banks, we propose an adaptive strategy to re-scale anomaly scores using scores from the training data. We evaluate the proposed method on a public dataset with five categories containing both logical and structural anomalies, and report higher AUROC than the state of the art not only in logical AD but also in structural AD. Our contributions can be summarized as follows:
• We propose a novel anomaly detection method, PSAD, that employs three different memory banks by utilizing visual features and semantic segmentation.
• We propose a new part segmentation method that is supervised by a limited number of labeled images and regularized using logical constraints shared across unlabeled images.
• We propose an adaptive scaling method to aggregate anomaly scores with different scales.
• Our method achieves state-of-the-art performance in both logical and structural anomaly detection.
Related Works
Anomaly Detection in Industrial Images: In the literature, existing anomaly detection (AD) methods often train models to first learn the distribution of normal data and then detect outliers as anomalies. These methods can be broadly categorized into reconstruction-, self-supervision-, and density estimation-based models. Reconstruction-based methods learn to reconstruct normal input samples and determine the anomaly score based on the difference between the input and its reconstruction (Lee, Lee, and Song 2022; Liu et al. 2023b; Tien et al. 2023). Self-supervised methods create synthetic abnormal samples and use them to train a classifier; for instance, CutPaste (Li et al. 2021) and DRAEM (Zavrtanik et al. 2021) generate abnormal samples for learning abnormality. Density estimation methods first extract features from normal samples using pre-trained models and then compare them with test sample features to compute anomaly scores (Roth et al. 2022; Jiang et al. 2022; Hyun et al. 2023). We note that existing methods focus on utilizing local features, since most benchmarks mainly contain structural rather than logical anomalies.

Following the release of the first dataset comprising logical anomalies (Bergmann et al. 2022), several unsupervised methods have been proposed. GCAD (Bergmann et al. 2022) trains local and global models that reconstruct pre-trained image features based on local and global dependencies. SINBAD (Tzachor, Hoshen et al. 2023) extracts a set of orderless elements and randomly projects element features to compute a histogram, with anomaly scores obtained via density estimation. ComAD (Liu et al. 2023a) applies K-Means clustering to pre-trained features to segment multiple components within an image; however, its performance is limited because precisely discriminating different components is challenging. We observe that product manufacturers are aware of the logical constraints on the various components, and this prior knowledge can be leveraged for AD. In this paper, we introduce a novel method, PSAD, that uses density estimation and semantic segmentation to precisely differentiate components. PSAD does not demand many labeled images thanks to our proposed few-shot segmentation method.

When using multiple anomaly scores, previous works either simply add the scores (Tsai et al. 2022) or manually set hyper-parameters to scale them (Liu et al. 2023a). However, these approaches may degrade performance when the multiple scores follow different distributions or the hyper-parameters are set incorrectly (Table 3). Although (Bergmann et al. 2022) attempted to normalize two distinct anomaly scores without defining hyper-parameters, they utilized a validation dataset to determine the statistics of these scores, potentially sacrificing valuable training data. Instead, we propose an adaptive scaling of the scores that relies solely on the training data by treating each training sample as a test sample.

Object Part Segmentation: As part segmentation is vital for logical AD, one could train a supervised model for object part segmentation (Chen et al. 2014). However, due to costly pixel labeling, unsupervised models (Sra and Dhillon 2005) that can learn arbitrary segmentations from a collection of unlabeled images are preferable. (Hung et al. 2019) proposed an end-to-end segmentation model with pre-trained CNN features using semantic consistency and geometric concentration losses. Later, (Siarohin et al. 2021) and (Gao et al. 2021) trained segmentation models using a part-assembly procedure that reconstructs a target image by transforming parts of a source image. Though viable alternatives for industrial image segmentation, learning objectives based on geometric concentration or affine transformations are not generally applicable to industrial images, since multiple objects can appear in distant positions (e.g., mandarins in 'breakfast box' and hexagonal nuts in 'screw bag' in the MVTec LOCO dataset (Bergmann et al. 2022)). In addition, such models often under- or over-segment object parts, as the labels are arbitrarily optimized. In this paper, we instead propose a new part segmentation model that can segment components in various industrial images using only a few labeled samples.

Figure 2: Illustration of PSAD (Part Segmentation-based Anomaly Detection). During training, three different memory banks are constructed using normal images. The anomaly score of a test image is computed by finding its nearest neighbor (NN search) and adaptive scaling.

Few Shot Semantic Segmentation: Few-shot segmentation (FSS) has been proposed to overcome the data-hungry nature of deep learning models, with different approaches employing generated or augmented images (Mondal, Dolz, and Desrosiers 2018), generative models (Tritrong et al. 2021; Han et al. 2022; Baranchuk et al. 2021), meta-learning (Hong et al. 2022; Kim et al. 2023), transductive inference (Boudiaf et al. 2021), and foundation models (Wang et al. 2023). In general, FSS models employing pre-trained generative models report good part segmentation, especially on well-aligned images such as faces or cars. However, training generative models is challenging and requires many samples to guarantee good performance. Our method is closely related to the transductive approach RePRI (Boudiaf et al. 2021), which uses a fixed pre-trained backbone and trains a pixel classifier with several regularization losses; during inference, only the (prototype-based) classifier is updated with few samples. While impressive, its training is regularized by the initial segmentation, which may often be noisy. We instead update both the backbone and the classifier with a histogram matching loss to better utilize the logical constraints shared across normal images.

Methods
Problem Setting
Unsupervised anomaly detection (AD) aims to train a model that can identify abnormal data from a set of normal data $\{X^1, \dots, X^{N_{train}}\}$, where $N_{train}$ is the number of data points, all labeled 0 (normal). The model is trained to distinguish between normal and abnormal test data, predicting labels as 0 (normal) or 1 (anomalous). To detect logical anomalies (LA), accurate part segmentation must come first. In this process, the class of each component is defined by the manufacturer, as each normal image contains a predetermined number of specific parts appearing in predefined locations.
Consequently, variations in an object's location may lead to different classes, even for the same object (e.g., 'pushpins' and 'splicing connectors' in Fig. 4). This differs from instance segmentation, where predefined class labels are not assigned to the instances. Since constructing pixel-level annotations for many images is labor-intensive, we assume that only a scarce number of labeled images $\{X^{l,i} \in \mathbb{R}^{W \times H \times 3}, Y^{l,i} \in \mathbb{R}^{W \times H \times N_C}\}_{i=1}^{N_L}$ and a substantial set of unlabeled images $\{X^{u,j}\}_{j=1}^{N_U}$ are provided for training. Here, $N_L$, $N_U$, and $N_C$ denote the numbers of labeled images, unlabeled images, and classes, respectively. The model is optimized to reduce a combination of supervised losses for $X^l$ and unsupervised losses for $X^u$.

Overview
Our proposed PSAD (Part Segmentation-based Anomaly Detection) consists of two parts: semantic part segmentation and AD using part segmentation. For part segmentation, we design a model that distinguishes multiple components based on visual and positional features (Fig. 3). A visual feature extractor and a pixel classifier are jointly optimized with a few labeled images and logical constraints shared across numerous unlabeled images. For AD with part segmentation, the segmentation model applied to normal samples is leveraged to construct three distinct memory banks (Fig. 2): (1) a class histogram memory bank $\mathcal{M}_{hist}$ that records the quantity and arrangement of each component to assess the relative abundance of different components within the images; (2) a component composition memory bank $\mathcal{M}_{comp}$ that helps determine the validity of various compositions of components, to identify anomalies arising from unexpected or irregular component arrangements; and (3) a memory bank $\mathcal{M}_{patch}$ specifically designed for patch-level features to capture fine-grained details within the image. From these memory banks, we generate three different anomaly scores, each with a different scale and distribution. To effectively compare and combine the scores, we perform adaptive scaling using statistics obtained from the training data, ensuring scores can be reliably compared across different scales.

Figure 3: Proposed part segmentation model that predicts segmentation utilizing visual and positional features.

Part Segmentation Using Limited Annotations
The segmentation model consists of a feature extractor $f_\theta$ and a pixel classifier $g_\phi$, with $f_\theta(X)$ being a feature map of the same size as the input $X$. Since an object's location is important, the pixels' coordinates $c \in \mathbb{R}^{W \times H \times 2}$ and $f_\theta(X)$ are concatenated as input for $g_\phi$. During training, given the model's prediction probability $p = g_\phi(f_\theta(X) \oplus c)$, the parameters $\theta$ and $\phi$ are optimized via:

$\mathcal{L} = \mathcal{L}_{Dice} + \lambda_1 \mathcal{L}_{CE} + \lambda_2 \mathcal{L}_{\mathcal{H}} + \lambda_3 \mathcal{L}_{hist}$, (1)

where each $\lambda$ is a hyper-parameter. For labeled images $X^l$, our model relies on the cross-entropy loss $\mathcal{L}_{CE}$ and the Dice similarity loss $\mathcal{L}_{Dice}$. Note that predictions on unlabeled images $X^u$ may be uncertain, especially with limited $X^l$. A common approach to handle this is to incorporate an entropy loss $\mathcal{L}_{\mathcal{H}}$ that reduces uncertainty (Wang et al. 2022). Nonetheless, minimizing $\mathcal{L}_{\mathcal{H}}$ with only a few labeled images can lead to unexpected training outcomes and potentially degrade accuracy. To mitigate this, we propose a histogram matching loss $\mathcal{L}_{hist}$ that encourages each part to be segmented with a consistent number of pixels. We randomly select a label $Y^l$ from $\{Y^{l,i}\}_{i=1}^{N_L}$ and compare its class-level volume with the predictions $p^u$ on unlabeled images:

$\mathcal{L}_{hist} = \frac{1}{N_C} \sum_{n=1}^{N_C} \left| \frac{1}{WH} \sum_{w,h} Y^l_{w,h,n} - \frac{1}{WH} \sum_{w,h} p^u_{w,h,n} \right|$ (2)
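To make Eq. (2) concrete, the following is a minimal PyTorch-style sketch of the histogram matching loss. It reflects our reading of the equation rather than the authors' released code, and the class-first tensor layout is an assumption.

```python
import torch

def histogram_matching_loss(p_u: torch.Tensor, y_l: torch.Tensor) -> torch.Tensor:
    """Sketch of the histogram matching loss in Eq. (2).

    p_u: (N_C, H, W) softmax probabilities predicted for an unlabeled image.
    y_l: (N_C, H, W) one-hot mask of a randomly drawn labeled image of the
         same product type.
    Each map is reduced to per-class pixel fractions ((1/WH) * sum over
    pixels); the loss is the mean absolute difference of the two histograms.
    """
    num_classes = p_u.shape[0]
    hist_pred = p_u.flatten(start_dim=1).mean(dim=1)   # (N_C,)
    hist_label = y_l.flatten(start_dim=1).mean(dim=1)  # (N_C,)
    return (hist_label - hist_pred).abs().sum() / num_classes
```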
While the model parameters are updated to reduce uncertain predictions under the constraints from $\mathcal{L}_{CE}$, $\mathcal{L}_{Dice}$, and $\mathcal{L}_{hist}$, the model also learns consistent segmentation on numerous unlabeled images based on both the visual and positional similarity of each component.

Handling Multiple Types of Products
In industrial image datasets, products may comprise various subtypes (e.g., 'juice bottle' and 'splicing connectors' in the MVTec LOCO AD dataset). In such cases, it is necessary to ensure that $X^l$ and $X^u$ belong to the same product type for the comparison in $\mathcal{L}_{hist}$. To classify $X^u$ without human annotation, we compare unlabeled and labeled images in latent space and find the nearest labeled image for each unlabeled image. Specifically, we extract global-average-pooled features from the images using a pre-trained encoder before training the segmentation model. Each $X^u$'s type is determined as the type of its nearest labeled image in the latent space. Subsequently, $X^l$ and $X^u$ of the same type can be used together in $\mathcal{L}_{hist}$. As a result, our model can effectively handle datasets that contain multiple types of products.

Anomaly Detection Using Part Segmentation
Our proposed PSAD follows a density estimation approach (Defard et al. 2021) in which features of normal data are stored in a memory bank $\mathcal{M} = \{e_k\}_{k=1}^{N_\mathcal{M}}$, with $N_\mathcal{M}$ denoting the number of elements in $\mathcal{M}$. To determine the anomaly score $s$ of a test sample $e^{test}$, we compute the distance to its nearest neighbor among the elements in $\mathcal{M}$:

$s = \min_{e \in \mathcal{M}} \lVert e^{test} - e \rVert_2$. (3)

Patch-level density-based methods (Roth et al. 2022; Jiang et al. 2022) have proven effective at detecting structural anomalies by focusing on local features. However, logical anomalies often arise when multiple components appear together to form a single product or entity.

Class Histogram Memory Bank: The first memory bank, $\mathcal{M}_{hist}$, quantifies the number of components of each class through a class histogram. Given normal images and their corresponding segmentations, we construct a histogram that represents the distribution of pixels among the different classes. The histograms are stored in $\mathcal{M}_{hist}$, and the respective anomaly scores are predicted using Eq. (3).

Component Composition Memory Bank: It is worth noting that relying solely on $\mathcal{M}_{hist}$ cannot verify whether the components are combined correctly. To address this, we introduce a component composition memory bank $\mathcal{M}_{comp}$ that stores feature compositions of the different parts within an image. After a feature map $w = h_\psi(X)$ is extracted using a pre-trained encoder $h_\psi$, the segmentation map allows us to define a class embedding as the averaged feature vector of the pixels belonging to each class. A concatenation of these class embeddings is saved in $\mathcal{M}_{comp}$ to capture the visual features of each component and their composition within the image. The anomaly score is again predicted using Eq. (3).

Patch Representation Memory Bank: Finally, we construct the memory bank $\mathcal{M}_{patch}$ by storing patch-level representations to capture fine-grained features, following established approaches (Defard et al. 2021; Roth et al. 2022): $\mathcal{M}_{patch} = \bigcup_{k=1}^{N_{train}} \{h_\psi(X^k)_l\}_{l=1}^{N_P}$, where $h_\psi(X^k)_l$ is the $l$-th patch representation extracted from $X^k$ and $N_P$ denotes the number of patches. The anomaly score of $X^{test}$ is predicted as:

$s = \max_{e \in \{h_\psi(X^{test})_l\}_{l=1}^{N_P}} \min_{e' \in \mathcal{M}_{patch}} \lVert e - e' \rVert_2$.
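The three banks differ only in what they store; the nearest-neighbor scoring of Eq. (3) and the max-min patch score are the same simple computations in each case. Below is a minimal NumPy sketch of both, plus a helper that builds the class histogram from a segmentation map; it reflects our reading of the text, and the function names are our own, not from the authors' code.

```python
import numpy as np

def class_histogram(seg_map: np.ndarray, num_classes: int) -> np.ndarray:
    """Per-class pixel fractions of a predicted segmentation map (an M_hist entry)."""
    counts = np.bincount(seg_map.ravel(), minlength=num_classes)
    return counts / seg_map.size

def nn_anomaly_score(e_test: np.ndarray, bank: np.ndarray) -> float:
    """Eq. (3): distance to the nearest stored normal element.
    Used for M_hist (class histograms) and M_comp (concatenated class embeddings)."""
    return float(np.linalg.norm(bank - e_test, axis=1).min())

def patch_anomaly_score(test_patches: np.ndarray, patch_bank: np.ndarray) -> float:
    """M_patch score: max over test patches of the distance to the nearest
    stored normal patch representation."""
    dists = np.linalg.norm(test_patches[:, None, :] - patch_bank[None, :, :], axis=2)
    return float(dists.min(axis=1).max())
```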
Aggregating Anomaly Scores of Different Scales
Given the distinct scales and distributions of the memory banks' outputs, an appropriate hyper-parameter would be needed for each anomaly score, and arbitrarily configuring a single hyper-parameter can negatively impact overall accuracy. To mitigate this, our solution is to scale $s$ based on the anomaly scores of the training data in each memory bank. In particular, we derive a set of anomaly scores $S_{train} = \{s_1, \dots, s_{N_\mathcal{M}}\}$ from the training data by treating each data point $e_k$ as a test sample and constructing the memory bank from all other training samples, excluding $e_k$:

$s_k = \min_{e \in \mathcal{M}, e \neq e_k} \lVert e_k - e \rVert_2$.

In the context of $\mathcal{M}_{hist}$ and $\mathcal{M}_{comp}$, $e$ stands for a class histogram and a component composition embedding derived from a data sample, respectively. For $\mathcal{M}_{patch}$, however, $s_k$ is defined differently, since multiple elements are stored per data sample:

$s_k = \max_{e \in \{h_\psi(X^k)_l\}_{l=1}^{N_P}} \min_{e' \in \mathcal{M}'_{patch}} \lVert e - e' \rVert_2$, where $\mathcal{M}'_{patch} = \bigcup_{m=1, m \neq k}^{N_{train}} \{h_\psi(X^m)_l\}_{l=1}^{N_P}$.

We then define a normalized anomaly score using the statistics of $S_{train}$ as $\hat{s}_\mathcal{M} = s / \max\{s_1, \dots, s_{N_\mathcal{M}}\}$. This adaptive scaling improves accuracy and robustness in detecting anomalies. The final anomaly score is the sum of the three anomaly scores from the different memory banks, $s = \hat{s}_{\mathcal{M}_{hist}} + \hat{s}_{\mathcal{M}_{comp}} + \hat{s}_{\mathcal{M}_{patch}}$, facilitating both structural and logical anomaly scoring.
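A compact sketch of this adaptive scaling for the single-embedding banks (M_hist and M_comp) follows; it is our illustration of the leave-one-out procedure described above, assuming the whole bank fits in memory.

```python
import numpy as np

def adaptive_scale(bank: np.ndarray) -> float:
    """Leave-one-out training scores s_k = min_{e != e_k} ||e_k - e||_2 for a
    bank of per-image embeddings; the scale factor is max{s_1, ..., s_N}.
    (For M_patch, s_k would instead be the max-min score over each image's patches.)"""
    dists = np.linalg.norm(bank[:, None, :] - bank[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)  # treat each e_k as the test sample
    return float(dists.min(axis=1).max())

# Normalized scores and the final sum:
#   s_hat = s / adaptive_scale(bank)
#   s_final = s_hat_hist + s_hat_comp + s_hat_patch
```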
Implementation Details: We use a pre-trained Wide ResNet101 (Zagoruyko and Komodakis 2016) to initialize the parameters of the segmentation model $f_\theta$. Among the four convolutional blocks in $f_\theta$, the features extracted from the first three blocks are resized to the size of the input $X$ and concatenated to obtain $v \in \mathbb{R}^{W \times H \times (256+512+1024)}$. Labeled images were augmented following (Buslaev et al. 2020). For training, we used an AdamW optimizer with a learning rate of 0.001 and a batch size of 5 per iteration on an NVIDIA RTX A5000 GPU workstation. The model was first trained for 50 epochs using only $\mathcal{L}_{CE}$ and $\mathcal{L}_{Dice}$; after this warm-up with the supervised losses, it was trained with Eq. (1) for an additional 50 epochs. As $\mathcal{L}_{Dice}$ is usually larger than the other losses, the hyper-parameters $\lambda_1$, $\lambda_2$, and $\lambda_3$ were all set to 10. $h_\psi$ uses Wide ResNet101 as the visual feature encoder with a 3×3 average pooling operation, following the setting of PatchCore (Roth et al. 2022), one of the state-of-the-art models for detecting structural anomalies.

Experiments
Experimental Setting: We evaluated our method on the MVTec LOCO AD dataset (Bergmann et al. 2022), to the best of our knowledge the only benchmark for detecting logical anomalies. This dataset consists of five categories (breakfast box / juice bottle / pushpins / screw bag / splicing connectors). For each category, 351/335/372/360/360 normal images were used for training and 275/330/310/341/312 images for testing, following the setting of the comparison methods. Test data are categorized into good, structural anomaly (SA), and logical anomaly (LA). For the segmentation task, we used 5 labeled images, since existing FSS models show more stable accuracy in the 5-shot setting. If a product has multiple types (e.g., 3 types within 'juice bottle' and 'splicing connectors'), we created a labeled image for each type, thereby employing a total of 3 labeled images. State-of-the-art AD methods including PatchCore (Roth et al. 2022), RD4AD (Deng and Li 2022), DRAEM (Zavrtanik et al. 2021), AST (Rudolph et al. 2023), ST (Bergmann et al. 2020), ComAD (Liu et al. 2023a), GCAD (Bergmann et al. 2022), SINBAD (Tzachor, Hoshen et al. 2023), and SLSG (Yang et al. 2023) are used for comparison. The models are evaluated on SA and LA detection separately. We resized all images so that the longer side (of width and height) has 512 pixels. The area under the ROC curve (AUROC) is used as the metric, following previous works (Roth et al. 2022).

Comparison with State-of-the-art Methods: Table 1 lists the AUROC for LA and SA detection of methods trained and tested on the MVTec LOCO AD dataset. Existing methods designed to focus on local features (e.g., PatchCore, RD4AD, DRAEM, AST, ST) score lower on LA, as they cannot capture the global dependencies among the multiple components in an image, despite showing better scores in SA detection. While recent methods for detecting LA (e.g., ComAD, GCAD, SINBAD, SLSG) improve over those focusing on local features, they still fail to precisely distinguish between different components within the image. In contrast, our proposed PSAD shows significant gains over the others (i.e., +8.5 average AUROC across the 5 categories in LA detection). Notably, we achieved a 99.3% AUROC in the 'screw bag' category, where the highest accuracy among the other methods was only 80.1%. In addition, our proposed method also achieves the best average score in SA detection. These results show that using semantic information can be beneficial for detecting both LA and SA. As a result, our proposed method obtains the best AUROC scores in both tasks.

Qualitative Comparison of FSS Methods: In Fig. 4, we show a qualitative comparison of different segmentation models on the MVTec LOCO AD dataset. When we applied the unsupervised co-part segmentation models SCOPS (Hung et al. 2019) and Part-Assembly (Gao et al. 2021), we obtained arbitrary segmentations that cannot discriminate between components that should be assigned to different classes. In some cases, Part-Assembly failed to produce proper results at all, as it focuses on single-body objects. We also evaluated various state-of-the-art FSS models: a meta-learning-based model, VAT (Hong et al. 2022); a foundation model, SegGPT (Wang et al. 2023); and a transductive model, RePRI (Boudiaf et al. 2021). VAT and RePRI, each with a ResNet-101 backbone pre-trained on the PASCAL-5i dataset (Shaban et al. 2017), showed poor performance in most cases. This is due to (1) the encoder being frozen during training, (2) the encoder being pre-trained on data from a different domain, and (3) the models relying on inaccurate or noisy initial predictions. Though SegGPT showed relatively good results in some categories such as 'juice bottle', its performance was limited when multiple components have similar textures but different classes. For example, in Fig. 4, SegGPT fails to distinguish the left and right parts of 'splicing connectors' and the short and long bolts in 'screw bag', as they share similar textures.
The limitations of existing FSS methods are mainly attributable to training on existing segmentation datasets that do not require reasoning about position or length comparisons. As most meta-learning FSS models focus on discriminating classes based on texture and shape, they likewise show limited accuracy on industrial images, which may contain multiple similar components belonging to different classes. When we trained our model using only the supervised losses for labeled images, we obtained better results on 'pushpins' and 'splicing connectors' but poor predictions in the other categories, even with $\mathcal{L}_{\mathcal{H}}$; for example, the model failed to discriminate between the short and long bolts in 'screw bag'. However, when $\mathcal{L}_{hist}$ was employed, accurate segmentation results were obtained on various types of products. This shows that using $\mathcal{L}_{hist}$ together with the other loss functions helps obtain consistent segmentation by leveraging the logical constraints.

Table 1: Performance comparison of the proposed model PSAD against state-of-the-art models on the MVTec LOCO AD dataset. LA and SA denote logical and structural anomalies, respectively. (Per-category scores for SLSG are not available.)

| Category | PatchCore | RD4AD | DRAEM | ST | AST | GCAD | SINBAD | ComAD | SLSG | PSAD |
| LA: Breakfast Box | 74.8 | 66.7 | 75.1 | 68.9 | 80.0 | 87.0 | 96.5 | 91.1 | – | 100.0 |
| LA: Juice Bottle | 93.9 | 93.6 | 97.8 | 82.9 | 91.6 | 100.0 | 96.6 | 95.0 | – | 99.1 |
| LA: Pushpins | 63.6 | 63.6 | 55.7 | 59.5 | 65.1 | 97.5 | 83.4 | 95.7 | – | 100.0 |
| LA: Screw Bag | 57.8 | 54.1 | 56.2 | 55.5 | 80.1 | 56.0 | 78.6 | 71.9 | – | 99.3 |
| LA: Splicing Connectors | 79.2 | 75.3 | 75.2 | 65.4 | 81.8 | 89.7 | 89.3 | 93.3 | – | 91.9 |
| Average (LA) | 74.0 | 70.7 | 72.0 | 66.4 | 79.7 | 86.0 | 88.9 | 89.4 | 89.6 | 98.1 |
| SA: Breakfast Box | 80.1 | 60.3 | 85.4 | 68.4 | 79.9 | 80.9 | 87.5 | 81.6 | – | 84.9 |
| SA: Juice Bottle | 98.5 | 95.2 | 90.8 | 99.3 | 95.5 | 98.9 | 93.1 | 98.2 | – | 98.2 |
| SA: Pushpins | 87.9 | 84.8 | 81.5 | 90.3 | 77.8 | 74.9 | 74.2 | 91.1 | – | 89.8 |
| SA: Screw Bag | 92.0 | 89.2 | 85.0 | 87.0 | 95.9 | 70.5 | 92.2 | 88.5 | – | 95.7 |
| SA: Splicing Connectors | 88.0 | 95.9 | 95.5 | 96.8 | 89.4 | 78.3 | 76.7 | 94.9 | – | 89.3 |
| Average (SA) | 89.3 | 85.1 | 87.6 | 88.4 | 87.7 | 80.7 | 84.7 | 90.9 | 91.4 | 91.6 |
| Average | 81.7 | 77.9 | 79.8 | 77.4 | 83.7 | 83.4 | 86.8 | 90.1 | 90.3 | 94.9 |

Table 2: Average AUROC scores of our proposed PSAD using different FSS models.

| Models | LA | SA |
| SCOPS (Hung et al. 2019) | 82.5 | 90.2 |
| Part-Assembly (Gao et al. 2021) | 80.3 | 85.6 |
| SegGPT (Wang et al. 2023) | 88.7 | 87.2 |
| VAT (Hong et al. 2022) | 79.2 | 87.8 |
| RePRI (Boudiaf et al. 2021) | 83.6 | 88.4 |
| Ours (Lsup) | 95.9 | 89.6 |
| Ours (Lsup + LH) | 96.3 | 90.0 |
| Ours (Lsup + LH + Lhist) | 98.1 | 91.6 |

Table 3: Average AUROC scores of our proposed PSAD using different combinations of memory banks. 'AS' stands for adaptive scaling.

| Mhist | Mcomp | Mpatch | AS | LA | SA |
| ✓ | | | | 94.2 | 71.1 |
| | ✓ | | | 90.9 | 85.4 |
| | | ✓ | | 73.9 | 89.3 |
| ✓ | ✓ | ✓ | | 96.8 | 87.6 |
| ✓ | ✓ | ✓ | ✓ | 98.1 | 91.6 |

Anomaly Detection Using Different Segmentation Models: Based on the segmentation results, we also check whether more accurate segmentation correlates with better AD performance. Table 2 shows the average AUROC scores of our method using different segmentation models. The FSS models (SegGPT, VAT, and RePRI) and unsupervised models (SCOPS and Part-Assembly) yield low LA detection performance, as they under-perform when segmenting data from some categories, such as 'pushpins'. Nevertheless, it is worth noting that their LA detection scores are still higher than that of our baseline, PatchCore; this shows that leveraging even imperfect segmentation can benefit LA detection. On the other hand, our model trained with $\mathcal{L}_{sup}$ ($= \mathcal{L}_{CE} + \mathcal{L}_{Dice}$) showed significantly improved scores, even though it is not as accurate as our final model.
This shows that our approach of jointly training the encoder and classifier, utilizing image augmentations and positional information, is beneficial for segmenting industrial image data. When we use $\mathcal{L}_{\mathcal{H}}$ and $\mathcal{L}_{hist}$ together, LA detection scores improve further. Interestingly, SA detection performance is also enhanced by the more accurate segmentation results. These findings show the crucial role of accurate segmentation in achieving precise AD.

Effect of Various Memory Banks and Adaptive Scaling: Table 3 shows AD performance using different combinations of memory banks. When each memory bank is employed alone, $\mathcal{M}_{hist}$ and $\mathcal{M}_{comp}$ perform well on LA detection but poorly on SA detection, whereas $\mathcal{M}_{patch}$ shows the opposite behavior. When we add the anomaly scores from the three memory banks without any scaling strategy, we obtain a better LA detection score but a degraded SA detection score, mainly because the scale of the anomaly scores varies across memory banks. When we apply adaptive scaling, we obtain the best scores in both LA and SA detection.

Figure 4: Qualitative comparison of FSS models. Lsup (= LCE + LDice) denotes the supervised loss for labeled images. For the unsupervised methods SCOPS and Part-Assembly, we arbitrarily set the number of parts to 10.

Figure 5: Histogram visualizations of anomaly scores from the different memory banks and of the unified anomaly scores.

Figure 5 illustrates histograms of the anomaly scores obtained from the various memory banks and of the unified anomaly scores after adaptive scaling. It shows that the scale of the anomaly scores varies across memory banks, underlining the importance of scaling the scores before aggregation. Notably, the anomaly scores from the patch representation memory bank are poor discriminators on their own due to their reliance on local features. Nevertheless, after normalizing each score and integrating them into a unified score, a clear separation between normal and abnormal samples is observed. Overall, these findings highlight the significance of adaptive scaling in improving the effectiveness of AD with multiple memory banks.

Logical Anomaly Detection Using Fewer Training Samples: Table 4 lists the AUROC of LA detection using varying numbers of unlabeled images. Despite a slight decrease in the average AUROC with less data, our approach still outperforms the other methods with the reduced dataset. This finding underscores the significance of accurate segmentation maps in enabling precise LA detection even with limited data.

Table 4: AUROC in LA detection of our proposed PSAD using different numbers of normal images Ntrain. In this experiment, a combination of Mhist and Mcomp is used.

| Ntrain | 100% | 50% | 25% | 12.5% |
| Avg. AUROC | 97.4 | 97.1 | 96.6 | 96.2 |

Conclusion
In this paper, we incorporate part segmentation into anomaly detection (AD) to detect logical and structural anomalies. To avoid constructing a large training dataset for segmentation, we propose a new segmentation model that utilizes a few labeled images and logical constraints shared across normal images. We also propose a novel AD method that constructs three distinct memory banks based on the segmentation.
To generate a unified anomaly score from anomaly scores of varying scales, we introduce an adaptive scaling strategy. By doing so, our model can detect both LA and SA, and yields substantial improvements with minimal effort required from users. As future few-shot segmentation models evolve to require fewer labeled images and produce better results, our AD model will achieve even better performance with less effort from users.

Acknowledgments
This work was supported by the DGIST R&D program of the Ministry of Science and ICT of KOREA (22-KUJoint-02 and 21-DPIC-08) and the Digital Innovation Hub project supervised by the Daegu Digital Innovation Promotion Agency (DIP) grant funded by the Korea government (MSIT and Daegu Metropolitan City) in 2023 (DBSD1-01).

References
Baranchuk, D.; Rubachev, I.; Voynov, A.; Khrulkov, V.; and Babenko, A. 2021. Label-efficient semantic segmentation with diffusion models. arXiv preprint arXiv:2112.03126.
Bergmann, P.; Batzner, K.; Fauser, M.; Sattlegger, D.; and Steger, C. 2022. Beyond dents and scratches: Logical constraints in unsupervised anomaly detection and localization. International Journal of Computer Vision, 130(4): 947–969.
Bergmann, P.; Fauser, M.; Sattlegger, D.; and Steger, C. 2019. MVTec AD–A comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9592–9600.
Bergmann, P.; Fauser, M.; Sattlegger, D.; and Steger, C. 2020. Uninformed students: Student-teacher anomaly detection with discriminative latent embeddings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4183–4192.
Boudiaf, M.; Kervadec, H.; Masud, Z. I.; Piantanida, P.; Ben Ayed, I.; and Dolz, J. 2021. Few-shot segmentation without meta-learning: A good transductive inference is all you need? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13979–13988.
Buslaev, A.; Iglovikov, V. I.; Khvedchenya, E.; Parinov, A.; Druzhinin, M.; and Kalinin, A. A. 2020. Albumentations: Fast and flexible image augmentations. Information, 11(2): 125.
Chen, X.; Mottaghi, R.; Liu, X.; Fidler, S.; Urtasun, R.; and Yuille, A. 2014. Detect what you can: Detecting and representing objects using holistic models and body parts. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1971–1978.
Defard, T.; Setkov, A.; Loesch, A.; and Audigier, R. 2021. PaDiM: A patch distribution modeling framework for anomaly detection and localization. In International Conference on Pattern Recognition, 475–489. Springer.
Deng, H.; and Li, X. 2022. Anomaly detection via reverse distillation from one-class embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9737–9746.
Gao, Q.; Wang, B.; Liu, L.; and Chen, B. 2021. Unsupervised co-part segmentation through assembly. In International Conference on Machine Learning, 3576–3586. PMLR.
Han, M.; Zheng, H.; Wang, C.; Luo, Y.; Hu, H.; and Du, B. 2022. Leveraging GAN priors for few-shot part segmentation. In Proceedings of the 30th ACM International Conference on Multimedia, 1339–1347.
Hong, S.; Cho, S.; Nam, J.; Lin, S.; and Kim, S. 2022. Cost aggregation with 4D convolutional Swin transformer for few-shot segmentation. In European Conference on Computer Vision, 108–126. Springer.
Huang, Y.; et al. 2020. Surface defect saliency of magnetic tile. The Visual Computer, 36: 85–96.
Hung, W.-C.; Jampani, V.; Liu, S.; Molchanov, P.; Yang, M.-H.; and Kautz, J. 2019. SCOPS: Self-supervised co-part segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 869–878.
Hyun, J.; Kim, S.; Jeon, G.; Kim, S. H.; Bae, K.; and Kang, B. J. 2023. ReConPatch: Contrastive patch representation learning for industrial anomaly detection. arXiv preprint arXiv:2305.16713.
Jiang, X.; Liu, J.; Wang, J.; Nie, Q.; Wu, K.; Liu, Y.; Wang, C.; and Zheng, F. 2022. SoftPatch: Unsupervised anomaly detection with noisy data. Advances in Neural Information Processing Systems, 35: 15433–15445.
Kim, S.; Chikontwe, P.; An, S.; and Park, S. H. 2023. Uncertainty-aware semi-supervised few shot segmentation. Pattern Recognition, 137: 109292.
Lee, S.; Lee, S.; and Song, B. C. 2022. CFA: Coupled-hypersphere-based feature adaptation for target-oriented anomaly localization. IEEE Access, 10: 78446–78454.
Li, C.-L.; Sohn, K.; Yoon, J.; and Pfister, T. 2021. CutPaste: Self-supervised learning for anomaly detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9664–9674.
Liu, T.; Li, B.; Du, X.; Jiang, B.; Jin, X.; Jin, L.; and Zhao, Z. 2023a. Component-aware anomaly detection framework for adjustable and logical industrial visual inspection. arXiv preprint arXiv:2305.08509.
Liu, W.; Chang, H.; Ma, B.; Shan, S.; and Chen, X. 2023b. Diversity-measurable anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12147–12156.
Mondal, A. K.; Dolz, J.; and Desrosiers, C. 2018. Few-shot 3D multi-modal medical image segmentation using generative adversarial learning. arXiv preprint arXiv:1810.12241.
Roth, K.; Pemula, L.; Zepeda, J.; Schölkopf, B.; Brox, T.; and Gehler, P. 2022. Towards total recall in industrial anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14318–14328.
Rudolph, M.; Wehrbein, T.; Rosenhahn, B.; and Wandt, B. 2023. Asymmetric student-teacher networks for industrial anomaly detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2592–2602.
Shaban, A.; Bansal, S.; Liu, Z.; Essa, I.; and Boots, B. 2017. One-shot learning for semantic segmentation. arXiv preprint arXiv:1709.03410.
Siarohin, A.; Roy, S.; Lathuilière, S.; Tulyakov, S.; Ricci, E.; and Sebe, N. 2021. Motion-supervised co-part segmentation. In 2020 25th International Conference on Pattern Recognition (ICPR), 9650–9657. IEEE.
Sra, S.; and Dhillon, I. 2005. Generalized nonnegative matrix approximations with Bregman divergences. Advances in Neural Information Processing Systems, 18.
Tien, T. D.; Nguyen, A. T.; Tran, N. H.; Huy, T. D.; Duong, S.; Nguyen, C. D. T.; and Truong, S. Q. 2023. Revisiting reverse distillation for anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 24511–24520.
Tritrong, N.; et al. 2021. Repurposing GANs for one-shot semantic part segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4475–4485.
Tsai, C.-C.; et al. 2022. Multi-scale patch-based representation learning for image anomaly detection and segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 3992–4000.
Tzachor, N. C.; Hoshen, Y.; et al. 2023. Set features for fine-grained anomaly detection. arXiv preprint arXiv:2302.12245.
Wang, X.; Zhang, X.; Cao, Y.; Wang, W.; Shen, C.; and Huang, T. 2023. SegGPT: Segmenting everything in context. arXiv preprint arXiv:2304.03284.
Wang, Y.; Wang, H.; Shen, Y.; Fei, J.; Li, W.; Jin, G.; Wu, L.; Zhao, R.; and Le, X. 2022. Semi-supervised semantic segmentation using unreliable pseudo-labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4248–4257.
Yang, M.; Liu, J.; Yang, Z.; and Wu, Z. 2023. SLSG: Industrial image anomaly detection by learning better feature embeddings and one-class classification. arXiv preprint arXiv:2305.00398.
Zagoruyko, S.; and Komodakis, N. 2016. Wide residual networks. arXiv preprint arXiv:1605.07146.
Zavrtanik, V.; et al. 2021. DRAEM – A discriminatively trained reconstruction embedding for surface anomaly detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8330–8339.
VITA: 'Carefully Chosen and Weighted Less' Is Better in Medication Recommendation
Taeri Kim1*, Jiho Heo1*, Hongil Kim2*, Kijung Shin3†, Sang-Wook Kim1†
1Department of Computer Science, Hanyang University, South Korea 2Department of Artificial Intelligence, Hanyang University, South Korea 3Kim Jaechul Graduate School of AI & School of Electrical Engineering, KAIST, South Korea
{taerik, linda0123, hong0814, wook}@hanyang.ac.kr, [email protected]
*These authors contributed equally as co-first authors. †Co-corresponding authors.

Abstract
We address the medication recommendation problem, which aims to recommend effective medications for a patient's current visit by utilizing information (e.g., diagnoses and procedures) given at the patient's current and past visits. While a number of recommender systems have been designed for this problem, we point out that they struggle to accurately capture the relation (specifically, the degree of relevance) between the current visit and each past visit when obtaining the patient's current health status, which is the basis for recommending medications. To address this limitation, we propose a novel medication recommendation framework, named VITA, based on the following two novel ideas: (1) relevant-Visit selectIon; (2) Target-aware Attention. Through extensive experiments using real-world datasets, we demonstrate the superiority of VITA (specifically, up to 5.67% higher accuracy, in terms of Jaccard, than the best competitor) and the effectiveness of its two core ideas. The code is available at https://github.com/jhheo0123/VITA.

1 Introduction
Medication recommendation aims to help doctors prescribe effective medications for a patient's current visit (Zhang et al. 2017; Shang et al. 2019b). When prescribing medications to a patient, doctors should consider the following factors: (1) the diagnoses and procedures given at the current visit; (2) the patient's past health records; (3) the relations between the medications to be prescribed (e.g., the possibility of adverse effects when taken together). This is a time-consuming and challenging process even for experienced doctors (Wu et al. 2022b). Therefore, medication recommendation, which can alleviate these difficulties, has become an important research area in medical tasks.

Early medication recommender systems recommend medications to a patient by considering only the diagnoses and procedures given at the current visit (i.e., current visit information); these are called instance-based methods (Zhang et al. 2017; Wang et al. 2018; Gong et al. 2021). However, they overlook the fact that, although two patients may be given the same diagnoses at the current visit, the cause of the diagnoses may differ per patient, so the appropriate medications could also differ (Shang et al. 2019b; Yang et al. 2021b); this is because they do not utilize the patients' past health records (Wu et al. 2022b). To alleviate this limitation, longitudinal medication recommender systems have emerged, which utilize not only a patient's current visit information but also her past health records (Shang et al. 2019a,b; Bhoi et al. 2021; Wang et al. 2021a,b; Yang et al. 2021b; Wu et al. 2022a,b). These methods usually consist of an encoder and a predictor. For their encoders, most of them (Shang et al. 2019b; Bhoi et al. 2021; Wang et al. 2021a,b; Yang et al. 2021b; Wu et al. 2022a)
learn a patient representation indicative of her current health status by aggregating her current visit information and the diagnoses and procedures given at her past visits (i.e., past visit information), in consideration of the visit order, via a Recurrent Neural Network (RNN)-based model. Then, for their predictors, they recommend medications by utilizing other past health records (e.g., medications prescribed at her past visits) based on their relevance to her patient representation, via various deep-learning models (e.g., an attention network (Bahdanau, Cho, and Bengio 2015)). On the other hand, COGNet (Wu et al. 2022b), a recently proposed longitudinal method, encodes only a patient's current visit information, unlike the above methods, to obtain her current health status in the encoder, by capturing the relation between diagnoses (resp. procedures) at her current visit via a transformer (Vaswani et al. 2017)-based model. Then, in the predictor, COGNet additionally takes into account the similarity between (a) the medications that should be prescribed at her current visit when considering only her current visit information and (b) those prescribed at her past visits, to improve upon the predictors of the aforementioned methods. Please refer to the Appendix¹ for more detailed information on these related studies. (¹The Appendix for VITA can be found at https://github.com/jhheo0123/VITA.)

Although longitudinal methods achieve higher accuracy than instance-based methods by utilizing a patient's past health records, we point out a key limitation: when obtaining the patient's current health status (i.e., in the encoder), they have difficulty accurately capturing the relation (specifically, the degree of relevance) of each past visit to the current one. They either compute the relevance score by simply considering the order of visits via an RNN-based model (Shang et al. 2019b; Bhoi et al. 2021; Wang et al. 2021a,b; Yang et al. 2021b; Wu et al. 2022a), or do not attempt to capture the degree of relevance at all, not taking any of the patient's past visits into account (Wu et al. 2022b). However, when obtaining the patient's current health status, accurately capturing the degree of relevance between her current and past visits is highly important: her past visit information relevant (resp. irrelevant) to her current visit should (resp. should not) be reflected in representing her current health status. Our empirical findings supporting this claim are detailed in Section 2.

In this paper, we propose VITA (relevant-visit selection and target-aware attention), a novel medication recommendation framework based on a more accurate understanding of the patient's current health status. To achieve this, we learn a patient representation indicative of her current health status by taking into account an accurate relevance score between the current visit and each of the past ones, without relying solely on the visit order. Furthermore, we do not consider past visits irrelevant to the current one at all, rather than simply assigning them lower relevance scores, since this enhances accurate medication recommendation. Our empirical validation of these ideas can be found in Section 4.
Our contributions are summarized as follows:
• Important Discovery: We discovered that existing medication recommender systems struggle to accurately capture the degree of relevance of each past visit to the current one when obtaining a patient's current health status. In addition, we are the first to demonstrate that using past visits irrelevant to the current one has a negative effect on recommending accurate medications.
• Novel Framework: To address this key limitation, we propose a novel medication recommendation framework, named VITA, that recommends medications by employing an enhanced patient representation, based on the following two novel ideas: (1) relevant-Visit selectIon, which automatically excludes past visits irrelevant to the current one; (2) Target-aware Attention, which accurately captures the relevance score of the past visits to the current one.
• Extensive Evaluation: We validate the effectiveness of VITA through extensive experiments using public real-world datasets. Most importantly, VITA surpasses all six state-of-the-art competitors, achieving an improvement of up to 5.67% in terms of Jaccard. Also, the two core ideas of VITA can be orthogonally combined with existing medication recommender systems; when combined, they elevate accuracy beyond that of the original systems.

2 Motivation
In this section, we demonstrate the limitations of existing medication recommender systems via preliminary experiments answering the following preliminary questions (PQs):
• PQ1: Does using past visit information in representing a patient's current health status help accurate medication recommendation?
• PQ2: Which past visit information is beneficial to better representing a patient's current health status?

Figure 1: Accuracies of GAMENet when varying the use of past visit information of a patient. 'All' (resp. 'No') means to use all (resp. not to use any) past visit information. 'Top-1/Mid.-1/Bot.-1' means to use only the one past visit whose information is the most/moderately/the least similar to the current visit information, respectively. All differences are statistically significant with a p-value ≤ 0.001.

Experimental Settings. We conducted experiments using the GAMENet model (Shang et al. 2019b)², the MIMIC-III dataset (Johnson et al. 2016), and the accuracy measure Jaccard, which are most commonly used in medication recommendation (Bhoi et al. 2021; Wang et al. 2021a,b; Yang et al. 2021a,b; Wu et al. 2022b). Please refer to Section 4 for more detailed information on the experimental settings.

PQ1: Effect of the use of a patient's past visit information. We first analyze whether it is really helpful to use a patient's past visit information in representing her current health status (i.e., in her patient representation) when recommending medications for the current visit. To this end, we compared the accuracy of GAMENet using all past and current visit information as input to the RNN-based model in the encoder (i.e., the original method) with that of a counterpart using only her current visit information. The results of these two methods are represented as 'All' and 'No', respectively, in Figure 1-(PQ1). From the results, we found that using not only the current visit information but also the past one in the patient representation is helpful for accurate medication recommendation.
PQ2: Effect of the use of a patient's past visit information depending on its degree of relevance to her current visit. Next, we analyze which past visit information of a patient helps obtain a better patient representation for medication recommendation. To this end, we first calculated the Jaccard similarity between the current visit information and each piece of past visit information for each patient. Specifically, we represented the diagnoses (resp. procedures) given at each visit as a multi-hot vector, and then represented each visit as a single vector by concatenating the two multi-hot vectors of the diagnoses and procedures to calculate the similarity. After that, for each patient, we measured the accuracy of GAMENet using only the past visit information most similar to the current visit, along with the current one, as input to the RNN-based model in the encoder. The result of this method is represented as 'Top-1' in Figure 1-(PQ2). For comparison, we additionally measured the accuracy using the moderately (resp. least) similar past visit information instead of the most similar one (the moderately similar past visit is the one with the median similarity); the result of this method is represented as 'Mid.-1' (resp. 'Bot.-1') in Figure 1-(PQ2).

As shown in Figure 1-(PQ2), the accuracy decreases in the order of using only the most similar, moderately similar, and least similar past visit information along with the current one. In addition, we found that the accuracy when using only the most similar (resp. least similar) past visit information along with the current one is actually higher (resp. lower) than that of the original method using all past visit information (resp. the method using only the current visit information); compare 'All' (resp. 'No') with 'Top-1' (resp. 'Bot.-1') in Figure 1-(PQ1) and (PQ2).

Summary. Based on the above results, we draw the following conclusions: (1) referencing past visit information when obtaining the patient representation is helpful for medication recommendation; furthermore, (2) using only past visit information relevant (i.e., similar) to the current one is more helpful; however, (3) using past visit information irrelevant (i.e., dissimilar) to the current one has a negative effect.

(²The results on other existing medication recommender systems showed similar tendencies to those on GAMENet; these results are shown in the Appendix.)
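The visit-level similarity used in PQ2 is straightforward to reproduce; below is a minimal sketch (our illustration, not the authors' code) of the Jaccard similarity between two visits encoded as concatenated multi-hot diagnosis/procedure vectors.

```python
import numpy as np

def visit_vector(diag_idx, proc_idx, num_diag: int, num_proc: int) -> np.ndarray:
    """Concatenated multi-hot encoding of one visit's diagnoses and procedures."""
    v = np.zeros(num_diag + num_proc, dtype=bool)
    v[list(diag_idx)] = True
    v[[num_diag + i for i in proc_idx]] = True
    return v

def visit_jaccard(v_a: np.ndarray, v_b: np.ndarray) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two multi-hot visit vectors."""
    union = np.logical_or(v_a, v_b).sum()
    return float(np.logical_and(v_a, v_b).sum() / union) if union else 0.0
```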
3 VITA: Proposed Framework
In this section, we detail our proposed framework, VITA, based on relevant-visit selection and target-aware attention.

3.1 Problem Definition
Definition 1: Patient Health Records. In Electronic Health Records (EHR) data (e.g., the MIMIC-III dataset), the health records of each patient $x$ consist of sequential visits $\mathcal{V}_x = [\mathcal{V}^1_x, \cdots, \mathcal{V}^{(T-1)}_x, \mathcal{V}^T_x]$, where $\mathcal{V}^{(T-1)}_x$ denotes the $(T-1)$-th visit of patient $x$. For simplicity, following (Zhang et al. 2017; Shang et al. 2019a,b; Wu et al. 2022b), we omit the subscript indicating the patient (i.e., $x$) and describe VITA for a single patient; the patient's visits are thus denoted by $\mathcal{V} = [\mathcal{V}^1, \cdots, \mathcal{V}^{(T-1)}, \mathcal{V}^T]$. Each visit $\mathcal{V}^{(T-1)}$ consists of three subsets of all diagnoses $\mathcal{D}$, all procedures $\mathcal{P}$, and all medications $\mathcal{M}$; the subsets are represented as multi-hot vectors (e.g., $d^{(T-1)} \in \mathbb{R}^{|\mathcal{D}|}$, $p^{(T-1)} \in \mathbb{R}^{|\mathcal{P}|}$, and $m^{(T-1)} \in \mathbb{R}^{|\mathcal{M}|}$).

Definition 2: EHR and DDI Graphs. The EHR and the Drug-Drug Interaction (DDI) graphs are denoted by $G_{EHR} = (\mathcal{M}, \mathcal{E}_{EHR})$ and $G_{DDI} = (\mathcal{M}, \mathcal{E}_{DDI})$, respectively, where $\mathcal{E}_{EHR}$ is the set of edges between medications that have been prescribed together at any visit of any patient, and $\mathcal{E}_{DDI}$ is the set of edges between medications that may cause adverse effects if taken together. Their adjacency matrices $A_{EHR}, A_{DDI} \in \mathbb{R}^{|\mathcal{M}| \times |\mathcal{M}|}$ satisfy: $A_{EHR}[i,j] = 1$ if and only if medications $i$ and $j$ have been prescribed together at any visit of any patient, and $A_{DDI}[i,j] = 1$ if and only if medications $i$ and $j$ can be harmful when taken together. The same EHR and DDI graphs are used for all patients.

Medication Recommendation Problem. Given a patient's past health records $[\mathcal{V}^1, \cdots, \mathcal{V}^{(T-1)}]$ and current visit information (i.e., the diagnoses $d^T$ and procedures $p^T$ given at the current visit $T$), together with the EHR and DDI graphs, the goal is to recommend the medications $\hat{\mathcal{M}}^T$ for her current visit $T$. Key notations used in this paper can be found in the Appendix.

3.2 Key Components in VITA
VITA consists of two components, an encoder and a predictor; its schematic overview is presented in Figure 2.

Figure 2: Overview of VITA, composed of two components: an encoder based on relevant-visit selection and target-aware attention; a predictor based on (a) current health-aware and (b) current health-relevant past medication representations.

In the following, we delve into the details of each component.

Encoder. In its encoder, VITA aims to obtain an enhanced patient representation $q^T \in \mathbb{R}^{dim}$, which denotes her current health status (i.e., at the $T$-th visit), by employing relevant-visit selection and target-aware attention. To achieve this, VITA first concatenates each visit $t$'s diagnoses $d^t$ and procedures $p^t$ and feeds the result into an embedding layer to obtain a dense representation $v^t$ of visit $t$:

$v^t = \mathrm{concat}(d^t, p^t) W_e, \quad \forall t \in \{1, \cdots, (T-1), T\}$, (1)

where $W_e \in \mathbb{R}^{(|\mathcal{D}|+|\mathcal{P}|) \times dim}$ denotes the learnable weight matrix of the embedding layer.
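As a concrete reading of Eq. (1), the dense visit representation is just a linear projection of the concatenated multi-hot vectors. A minimal PyTorch-style sketch follows; the dimensions are illustrative, not from the paper.

```python
import torch

num_diag, num_proc, dim = 2000, 1500, 64           # illustrative sizes
W_e = torch.nn.Parameter(torch.randn(num_diag + num_proc, dim) * 0.01)

def visit_representation(d_t: torch.Tensor, p_t: torch.Tensor) -> torch.Tensor:
    """Eq. (1): v_t = concat(d_t, p_t) W_e.
    d_t: (|D|,) multi-hot diagnoses; p_t: (|P|,) multi-hot procedures."""
    return torch.cat([d_t, p_t], dim=-1) @ W_e     # (dim,)
```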
Relevant-Visit Selection. Then, VITA aggregates the dense representations $v^t$ of all visits to obtain the patient representation $q^T$. Recall that, in Section 2, we showed that using only the past visit information relevant to the current one is helpful for accurate medication recommendation. Therefore, we design a novel relevant-visit selection module to find only the past visits relevant to the current visit. However, here we encounter the following challenge: the number of past visits relevant to the current visit may differ per patient; for some patients, it is possible that no past visit is relevant to the current visit. To address this challenge, we predict per patient whether each of the past visits is relevant to her current one or not (i.e., whether to select it or not). Specifically, VITA obtains a set $V_{rel}$ of past visits relevant to the current visit via the relevant-visit selection module. To this end, for each past visit, VITA first concatenates the dense representation $v^t$ of the past visit $t$ with $v^T$ of the current visit $T$, providing the module with context for deciding whether to select visit $t$, and then feeds the result into a Multi-Layer Perceptron (MLP) layer to obtain the probability $s^t$ of the past visit $t$ being selected:

$$\forall t \in \{1, \cdots, (T-1)\}, \quad s^t = \mathrm{sigmoid}(\mathrm{concat}(v^t, v^T) W_s + b_s), \quad (2)$$

where $W_s \in \mathbb{R}^{2dim \times 1}$ and $b_s \in \mathbb{R}$ denote the weight matrix and bias of the MLP layer, respectively. Finally, VITA employs the Gumbel-softmax (Jang, Gu, and Poole 2017), which transforms the discrete sampling problem into a differentiable continuous one, thereby allowing gradients to flow, to select the past visits relevant to the current visit (i.e., to obtain the set $V_{rel}$):

$$V_{rel} = \{v^t : t \in \{1, \cdots, (T-1)\} \text{ and } \lfloor o^t_1 + 0.5 \rfloor = 1\}, \quad (3)$$

where $o^t_\gamma = \frac{\exp((\log \pi^t_\gamma + z^t_\gamma)/\tau_g)}{\sum_{\delta=1}^{2} \exp((\log \pi^t_\delta + z^t_\delta)/\tau_g)}$, and $\pi^t \in \mathbb{R}^{2 \times 1}$ denotes the vector obtained by concatenating the probability $s^t$ and the probability $(1 - s^t)$ of the past visit $t$ not being selected; $z^t_\gamma$ and $\tau_g$ denote the $\gamma$-th sample from the Gumbel(0, 1) distribution for the past visit $t$ and the temperature hyperparameter, respectively.

Target-Aware Attention. Given the set $V_{rel}$, VITA aggregates the dense representations $v^t \in V_{rel}$ and $v^T$ of the current visit $T$ to obtain the ultimate patient representation $q^T$. Here, to the best of our knowledge, we are the first to employ an attention network in the encoder of a medication recommender system; this aims to accurately capture the relevance score $\alpha^t$ between the current visit $T$ and each relevant past visit $t$, without relying solely on the visit order. (We nevertheless experimented with a variant of VITA that incorporates positional encoding into our target-aware attention, but the current version of VITA showed higher accuracy.) Moreover, to obtain an accurate relevance score $\alpha^t$ even when all past visits are only weakly relevant to the current visit, we design a novel target-aware attention module that assigns the relevance score $\alpha^t$ of each past visit $t$ relative not only to the other past visits but also to the current visit itself.

Specifically, using the dense representation $v^T$ of the current visit $T$ as the query, and all dense representations $v^t \in V_{rel}$ of the relevant past visits together with $v^T$ as keys and values, VITA computes the relevance score $\alpha^t$ between the current visit and each relevant past visit:

$$\forall v^t \in V_{rel} \cup \{v^T\}, \quad \alpha^t = \frac{\exp((v^T W_\alpha {v^t}^\top)/\sqrt{dim})}{\sum_{v^f \in V_{rel} \cup \{v^T\}} \exp((v^T W_\alpha {v^f}^\top)/\sqrt{dim})}, \quad (4)$$

where $W_\alpha \in \mathbb{R}^{dim \times dim}$ denotes the learnable weight matrix of the target-aware attention module. It is worth noting that the relevance score $\alpha^T$ for the current visit $T$ is also calculated; thus, the sum of the softmax values over the selected past visits (excluding the current visit) is not necessarily fixed to 1, and it can be much smaller than 1 if all selected visits are only weakly relevant to the current visit. Finally, VITA obtains the patient representation $q^T$ by aggregating the dense representation $v^T$ of the current visit $T$ and all dense representations $v^t \in V_{rel}$ of the relevant past visits, weighted by their relevance scores $\alpha^t$:

$$q^T = \sum_{v^t \in V_{rel} \cup \{v^T\}} \alpha^t v^t. \quad (5)$$
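A condensed PyTorch sketch of the encoder (Eqs. (2)-(5)) follows. The class and variable names are ours, batching is simplified, and the way the hard selection interacts with gradients in the actual implementation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VitaEncoder(nn.Module):
    """Sketch of relevant-visit selection (Eqs. 2-3, via Gumbel-softmax)
    followed by target-aware attention (Eqs. 4-5)."""
    def __init__(self, dim, tau_g=1.0):
        super().__init__()
        self.select = nn.Linear(2 * dim, 1)                 # W_s, b_s in Eq. (2)
        self.W_alpha = nn.Parameter(torch.empty(dim, dim))  # Eq. (4)
        nn.init.xavier_uniform_(self.W_alpha)
        self.tau_g, self.dim = tau_g, dim

    def forward(self, v_past, v_T):
        # v_past: (T-1, dim) past-visit embeddings; v_T: (dim,) current visit.
        ctx = torch.cat([v_past, v_T.expand_as(v_past)], dim=-1)
        s = torch.sigmoid(self.select(ctx)).squeeze(-1)              # Eq. (2)
        logits = torch.log(torch.stack([s, 1.0 - s], dim=-1) + 1e-10)
        keep = F.gumbel_softmax(logits, tau=self.tau_g, hard=True)[:, 0]  # Eq. (3)

        # Target-aware attention: the current visit is among the keys/values,
        # so weights on weakly relevant past visits can shrink towards zero.
        keys = torch.cat([v_past, v_T.unsqueeze(0)], dim=0)
        scores = (v_T @ self.W_alpha @ keys.T) / self.dim ** 0.5     # Eq. (4)
        mask = torch.cat([keep, torch.ones(1, device=keep.device)])
        scores = scores.masked_fill(mask == 0, float('-inf'))
        alpha = torch.softmax(scores, dim=-1)
        return alpha @ keys                                          # Eq. (5): q^T
```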
We highlight that VITA's two core ideas (i.e., the relevant-visit selection and target-aware attention modules) can be orthogonally combined with the encoder of any medication recommender system, thereby improving its accuracy; this claim is empirically validated in Section 4.

Predictor. In its predictor, VITA aims to recommend the set $\hat{\mathcal{M}}^T$ of necessary medications for a patient based on the obtained patient representation $q^T$. To this end, VITA first obtains, using $q^T$, two medication representations: (a) $p^T_k \in \mathbb{R}^{dim}$, which captures features of the $k$-th medication that is necessary at her current visit $T$ given her current health status, and (b) $\bar{p}^T_k \in \mathbb{R}^{|\mathcal{M}|}$, which captures the probabilities that the medications prescribed at her past visits will be recommended as the $k$-th medication at her current visit $T$ given her current health status. Then, VITA fuses them to obtain the $k$-th medication among the necessary medications for her current visit. This process is performed iteratively until all necessary medications for her current visit are obtained. (In practice, VITA predicts an <END> token (class), indicating that the medication prediction for the patient's current visit is complete. We also compared the accuracies of this one-by-one approach against an all-at-once approach within our framework; the one-by-one approach was more accurate.)

(a) Current Health-Aware Medication Representation. To obtain the current health-aware medication representation $p^T_k$, VITA first obtains enriched medication representations $e_i$ that reflect the relations between medications, by applying an independent two-layer Graph Convolutional Network (GCN) to each of the EHR and DDI graphs with randomly initialized medication representations $e_i$, and then fusing the outputs of the two GCNs per medication (Shang et al. 2019b). Subsequently, VITA obtains $p^T_k$ using the patient representation $q^T$ and the medication representations $e_i$. In this process, VITA focuses on the medications already predicted (i.e., medications up to the $(k-1)$-th; for $k = 1$, a randomly initialized <START> token representation is used) rather than on all medications, since this helps ensure that the relations between the recommended medications are considered when recommending the next one. This process can be formally expressed as follows:

$$p^T_k = \mathrm{softmax}\!\left(\frac{\mathrm{transformer}(e^{T,1}_*, \ldots, e^{T,(k-1)}_*) \odot q^T}{\sqrt{dim}}\right) q^T, \quad (6)$$

where $\mathrm{transformer}(\cdot)$ denotes a transformer-based model as used in (Wu et al. 2022b), which aggregates its inputs while considering the relations between them; $e^{T,(k-1)}_*$ denotes the medication representation of the $(k-1)$-th predicted medication at the current visit $T$; and $\odot$ denotes the dot product.

(b) Current Health-Relevant Past Medication Representation. Next, VITA obtains the current health-relevant past medication representation $\bar{p}^T_k$. To achieve this, VITA evaluates two levels of relevance (specifically, medication-level $r^t_{m,i} \in \mathbb{R}$ and visit-level $r^t_v \in \mathbb{R}$) between the current visit and each past visit (Wu et al. 2022b).
Specifically, the medication-level relevance $r^t_{m,i}$ indicates how much each medication $i$ prescribed at each of the patient's past visits $t$ is related to the $k$-th medication needed at her current visit $T$ (i.e., $p^T_k$):

$$\forall i \in m^t, \; \forall t \in \{1, \cdots, (T-1)\}, \quad r^t_{m,i} = \frac{\exp((e^t_i \odot p^T_k)/\sqrt{dim})}{\sum_{j=1}^{|m^t|} \exp((e^t_j \odot p^T_k)/\sqrt{dim})}, \quad (7)$$

where $e^t_i$ denotes the representation of medication $i$ prescribed at the $t$-th visit, and $m^t$ is defined in Section 3.1. The visit-level relevance $r^t_v$, in turn, indicates how much the patient's health status at each past visit $t$ is related to her current health status. To compute it, VITA first obtains the patient representation $q^t$ at each past visit $t$ in the same way as in its encoder. Then, VITA employs the target-aware attention module, using the patient representation $q^T$ as the query and the patient representations $q^T$ and $q^t$ for all her visits as keys and values (refer to Eq. (4)).

Given the two levels of relevance $r^t_{m,i}$ and $r^t_v$ between the current visit and each past visit, VITA obtains the current health-relevant past medication representation $\bar{p}^T_k$ as follows:

$$\bar{p}^T_k = \sum_{t=1}^{T-1} r^t_v \, r^t_m \quad (r^t_m \in \mathbb{R}^{|\mathcal{M}|}), \quad \text{where } \forall i \in \mathcal{M}, \; r^t_m[i] = \begin{cases} r^t_{m,i}, & i \in m^t, \\ 0, & \text{otherwise.} \end{cases} \quad (8)$$

Finally, VITA fuses the two medication representations $p^T_k$ and $\bar{p}^T_k$ to obtain the $k$-th medication for the patient's current visit $T$:

$$\hat{\mathcal{M}}^T = \hat{\mathcal{M}}^T \cup \arg\max_{i \in \{1,\ldots,|\mathcal{M}|\}} (\hat{p}^T_k[i]), \quad \text{where } \hat{p}^T_k = \lambda_k \, \mathrm{softmax}(p^T_k W_p + b_p) + (1 - \lambda_k)\, \bar{p}^T_k, \quad (9)$$

where the set $\hat{\mathcal{M}}^T$ starts as an empty set; $\hat{p}^T_k \in \mathbb{R}^{|\mathcal{M}|}$ denotes the probabilities of all medications being recommended as the $k$-th medication at her current visit $T$; $W_p \in \mathbb{R}^{dim \times |\mathcal{M}|}$ and $b_p \in \mathbb{R}^{|\mathcal{M}|}$ denote a learnable weight matrix and bias vector, respectively; and $\lambda_k \in \mathbb{R}$ denotes a learnable parameter. In other words, based on $\hat{p}^T_k$, VITA predicts the medication with the highest probability as the $k$-th medication.

3.3 Training

For the medications predicted by VITA, we employ the cross-entropy loss as the objective function, as in (Wu et al. 2022b), to learn the medication representations $e_i$ and the other learnable parameters of VITA:

$$\mathcal{L} = -\sum_{t=1}^{T} \sum_{i=1}^{|\mathcal{M}|} m^t_i \log(\hat{m}^t_i). \quad (10)$$
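A minimal sketch of the fusion step (Eq. (9)) and a literal reading of the objective (Eq. (10)) is given below. The function names are ours, the decoding loop (appending medications until <END>) is omitted, and how the loss is accumulated across visits in the actual implementation is an assumption.

```python
import torch

def fuse_and_pick(p_k, p_bar_k, W_p, b_p, lam):
    """Eq. (9): blend current health-aware scores with the health-relevant
    past-medication probabilities, then greedily pick the k-th medication."""
    p_hat = lam * torch.softmax(p_k @ W_p + b_p, dim=-1) + (1 - lam) * p_bar_k
    return p_hat, int(p_hat.argmax())

def recommendation_loss(p_hat, m_true):
    """Eq. (10), taken literally: L = -sum_t sum_i m^t_i log(m_hat^t_i).
    p_hat, m_true: (T, |M|) predicted probabilities and multi-hot targets."""
    return -(m_true * torch.log(p_hat.clamp_min(1e-8))).sum()
```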
4 Evaluation

In this section, we conduct extensive experiments aimed at answering the following key research questions (RQs):

• RQ1: Does VITA provide more accurate recommendations than state-of-the-art medication recommenders?
• RQ2: Is each of VITA's two core ideas effective for medication recommendation?
• RQ3: Does equipping existing methods with VITA's two core ideas consistently improve their accuracy?
• RQ4: Which past visit information does VITA select from all of a patient's past visit information?

4.1 Experimental Settings

Datasets. We used the MIMIC-III dataset (Johnson et al. 2016), widely used in medication recommendation studies (Shang et al. 2019b; Yang et al. 2021a,b; Wu et al. 2022b), and the MIMIC-IV dataset (Johnson et al. 2023), a follow-up to MIMIC-III. Following (Shang et al. 2019b; Yang et al. 2021a,b; Wu et al. 2022b), we filtered out patients who visited only once, along with the medical records related only to them. Please refer to the Appendix for detailed statistics of the two datasets.

Competitors. We compared VITA with six competitors: one basic classifier (Nearest, as used in (Shang et al. 2019b)) and five state-of-the-art medication recommender systems (LEAP (Zhang et al. 2017), GAMENet (Shang et al. 2019b), MRSC (Wang et al. 2021b), SafeDrug (Yang et al. 2021b), and COGNet (Wu et al. 2022b)).

Evaluation Protocols. We randomly split the patients in each dataset into training (4/6), validation (1/6), and test (1/6) sets, as in (Shang et al. 2019b; Yang et al. 2021b; Wu et al. 2022b). To measure accuracy, as in (Shang et al. 2019b; Wang et al. 2021a,b; Yang et al. 2021b; Wu et al. 2022b), we used three measures: Jaccard, PRAUC, and F1. In addition, following (Shang et al. 2019b; Wang et al. 2021a; Yang et al. 2021b; Wu et al. 2022b), we used the DDI rate (Shang et al. 2019b) to measure the possibility of adverse effects between recommended medications. Due to space limitations, we present only the accuracy results here; please refer to the Appendix for the DDI-rate results. Furthermore, for each measure, we report the average over five independent runs for most experiments; in Tables 1 and 2 and Figure 3, all improvements are statistically significant with a p-value ≤ 0.001.

4.2 Results and Analysis

Due to space limitations, for RQ3 and RQ4 we present results on the MIMIC-III dataset only; please refer to the Appendix for results on the MIMIC-IV dataset.

Table 1: Accuracies of the six competitors and VITA (MIMIC-III / MIMIC-IV). The best and second-best results in each column (i.e., each measure) are in bold and underlined, respectively.

| Method   | Jaccard       | PRAUC         | F1            |
|----------|---------------|---------------|---------------|
| Nearest  | 0.392 / 0.452 | 0.381 / 0.446 | 0.547 / 0.605 |
| LEAP     | 0.431 / 0.427 | 0.638 / 0.595 | 0.594 / 0.584 |
| GAMENet  | 0.435 / 0.460 | 0.675 / 0.703 | 0.590 / 0.611 |
| MRSC     | 0.486 / 0.449 | 0.741 / 0.691 | 0.645 / 0.606 |
| SafeDrug | 0.506 / 0.473 | 0.754 / 0.698 | 0.663 / 0.625 |
| COGNet   | 0.509 / 0.494 | 0.757 / 0.707 | 0.667 / 0.647 |
| VITA     | 0.528 / 0.522 | 0.767 / 0.715 | 0.682 / 0.669 |

RQ1: Comparison with Six Competitors. To demonstrate the superiority of VITA, we compared the accuracy of VITA against the six competitors. (We carefully tuned the hyperparameters of all methods; please refer to the Appendix for the specific hyperparameter values.) As shown in Table 1, VITA consistently outperforms all competitors on both datasets for all measures. Specifically, on the MIMIC-III and MIMIC-IV datasets, VITA outperforms the best competitor (i.e., COGNet) by up to 3.73% and 5.67% in terms of Jaccard, respectively. The improvement gap thus increases from 3.73% on MIMIC-III to 5.67% on MIMIC-IV. We interpret this as our idea of selecting and using only the past visits relevant to the patient's current visit becoming more important as the number of visits increases (compared to MIMIC-III, MIMIC-IV contains about 2.15 times as many patients and about 2.53 times as many visits, with the average number of visits per patient increasing from 2.59 to 3.05). In the Appendix, we also analyze the training time of VITA and its competitors for a more comprehensive understanding of the efficiency of VITA's encoder.

RQ2: Ablation Study. To verify the effectiveness of VITA's two novel ideas (relevant-visit selection and target-aware attention), we conducted comparative experiments using VITA and its variants.

VITA's variants for confirming the effectiveness of relevant-visit selection: (1) VITA-R does not use the relevant-visit selection module. (2) VITA-R_T-1 uses only the one past visit most similar to the patient's current visit in terms of Jaccard similarity, together with the current visit information, as input to its encoder, instead of using the relevant-visit selection module.
Also, we note that when the temperature of the attention network approaches zero, the differences between the attention weights increase, sharpening their distribution (Hinton, Vinyals, and Dean 2015; Nguyen, Pernkopf, and Kosmider 2020). This makes our target-aware attention module behave similarly to using only the past visits relevant to the current visit; in other words, it can play a role similar to that of the relevant-visit selection module. Therefore, we measured the accuracies of VITA-R while varying the temperature $\tau_a$ of its target-aware attention module from 1 to 0.2 in steps of 0.2 (please refer to the Appendix for the detailed results of this experiment); we considered (3) VITA-R_sha., which is VITA-R with the best-performing temperature $\tau_a$.

VITA's variants for confirming the effectiveness of target-aware attention: (1) VITA-T_avg. (resp. (2) VITA-T_rnn) employs mean pooling (resp. an RNN-based model, specifically a GRU) instead of the target-aware attention module when fusing the past visit information relevant to the current visit. (3) VITA-T_att. employs a typical attention network that uses the current visit information as the query and the relevant past visit information as keys and values, instead of the target-aware attention module.

Table 2: The effects of VITA's two core ideas (relevant-visit selection and target-aware attention) on MIMIC-III / MIMIC-IV. The best result in each column (i.e., each measure) is in bold.

| Method      | Jaccard       | PRAUC         | F1            |
|-------------|---------------|---------------|---------------|
| VITA        | 0.528 / 0.522 | 0.767 / 0.715 | 0.682 / 0.669 |
| VITA-R      | 0.516 / 0.520 | 0.756 / 0.709 | 0.671 / 0.659 |
| VITA-R_T-1  | 0.514 / 0.480 | 0.741 / 0.694 | 0.670 / 0.635 |
| VITA-R_sha. | 0.519 / 0.520 | 0.762 / 0.709 | 0.672 / 0.664 |
| VITA-T_avg. | 0.512 / 0.513 | 0.751 / 0.707 | 0.665 / 0.660 |
| VITA-T_rnn  | 0.515 / 0.517 | 0.754 / 0.701 | 0.668 / 0.663 |
| VITA-T_att. | 0.518 / 0.520 | 0.759 / 0.709 | 0.670 / 0.662 |

Table 2 shows the accuracies of VITA and all its variants.

Results regarding the effectiveness of relevant-visit selection. We observed that VITA outperforms VITA-R. This indicates that using only the past visits relevant to the current visit is important for recommending effective medications; in other words, past visits irrelevant to the current visit should not be considered at all. However, even though VITA-R_T-1 employs the one past visit most relevant to the patient's current visit in terms of Jaccard similarity, it shows lower accuracy than VITA-R. This is because, although the number of relevant past visits may differ per patient, VITA-R_T-1 uses a fixed number of past visits for all patients, which adversely affects learning by including (resp. excluding) past visits that are actually irrelevant (resp. relevant) to the current visit. This supports the design of our relevant-visit selection module, which has no such restriction. We also observed that VITA-R_sha. shows lower accuracy than VITA. Note that it is difficult for VITA-R_sha. to flexibly select an arbitrary number of relevant past visits per patient, because it operates increasingly like an argmax function as the distribution of attention weights sharpens.

Results regarding the effectiveness of target-aware attention.
We observed that VITA outperforms all its variants related to target-aware attention (i.e., VITA-T_avg., VITA-T_rnn, and VITA-T_att.), confirming the effectiveness of the target-aware attention module. We also observed that the accuracies on most measures improve in the order VITA-T_avg., VITA-T_rnn, VITA-T_att.. These results indicate that (1) capturing the degree of relevance between the current visit and each past visit plays an important role in achieving higher accuracy (since VITA-T_avg. is worse than VITA-T_rnn, VITA-T_att., and VITA on most measures); (2) such relevance may not be consistent with the order of the visits (since VITA-T_rnn is worse than VITA-T_att. and VITA on most measures); and (3) it is important to accurately capture the degree of relevance between the current visit and each past visit (since VITA-T_att. is worse than VITA on all measures). These findings again support the limitations of existing works that we pointed out earlier. It also bears mentioning that all variants of VITA except VITA-R_T-1 outperform the best competitor (i.e., COGNet) on most measures (compare Tables 1 and 2), even though they incorporate only one of VITA's two core ideas; this validates that each of VITA's core ideas alone is beneficial for improving the accuracy of medication recommendation.

RQ3: Compatibility of VITA's Two Core Ideas. In Section 3.2, we claimed that VITA's two core ideas can be applied orthogonally to most medication recommenders, thereby improving their accuracy. To validate this claim, we compare the accuracy of four longitudinal-based methods (GAMENet, MRSC, SafeDrug, and COGNet) against their variants, each equipped with the two core ideas of VITA (GAMENet(VITA), MRSC(VITA), SafeDrug(VITA), and COGNet(VITA)); in Figure 3, each method is abbreviated by the first three letters of its name.

Figure 3: Accuracies (Jaccard) of the four longitudinal-based methods and their variants equipped with the two core ideas of VITA.

As shown in Figure 3, the variants of the existing methods equipped with VITA's two core ideas outperform the original methods in Jaccard (please refer to the Appendix for the PRAUC and F1 results); that is, our two core ideas can be applied orthogonally to most medication recommenders, improving their accuracy.

RQ4: Analysis of Selected Past Visits. One of VITA's two core ideas, relevant-visit selection, automatically selects only the past visits relevant to the current visit. To investigate which past visits were selected by the relevant-visit selection module, we analyzed the visit information of the patients in the test set. We first calculated the Jaccard similarity between the current visit and each past visit of each patient in the test set, as in Section 2. Then, we divided the past visits into two groups: (i) the past visits selected by the relevant-visit selection module and (ii) those not selected; the statistics of the Jaccard similarity for each group, along with those of all past visits for comparison, are shown in Figure 4.
As shown in Figure 4, the relevant-visit selection module tended to select the past visit information with high Jaccard similarity to the current visit information.

Figure 4: Statistics of the Jaccard similarity between patients' current visit and (i) all past visits, (ii) the past visits selected by the relevant-visit selection module, and (iii) those not selected.

Numerically, the average Jaccard similarity between the past visit information selected by the relevant-visit selection module and the current visit information is 0.2164, which is 22.81% higher than that for the past visits not selected (0.1762). Note that the relevant-visit selection module did not always select the past visit information most similar to the patient's current one in Jaccard similarity. This implies that, even if a past visit is not similar to the current one in Jaccard similarity (i.e., on the face of it), it can still be useful for accurate medication recommendation (e.g., due to a possible inherent relevance to the current visit), as inferred from the superior performance of VITA, which is equipped with relevant-visit selection. Additionally, a case study on past visits that are dissimilar to the current visit in Jaccard similarity yet selected by the relevant-visit selection module is provided in the Appendix.

5 Conclusions

In this paper, we demonstrated two important points regarding the representation of a patient's current health status: (1) it is necessary to accurately capture the relevance between the current visit information and each piece of past visit information; and (2) using past visit information irrelevant to the current one is harmful; in other words, only the past visits relevant to a patient's current visit should be carefully chosen, and the chosen past visits should be weighted according to their degree of relevance when aggregated. Considering these points, we proposed a novel medication recommendation framework, named VITA, based on two core ideas: relevant-visit selection, which flexibly selects only the past visits relevant to the current visit per patient (even allowing the selection of all past visits, or none, if necessary); and target-aware attention, which accurately captures the relevance score between a patient's current visit and each past visit (even when all past visits are only weakly relevant to the current visit). Our two novel ideas are effective in improving accuracy, making VITA, the final version equipped with both ideas, consistently and significantly more accurate than its six competitors on real-world datasets. Furthermore, our two core ideas can be applied orthogonally to various medication recommender systems (and even to recommender systems in other domains).

Acknowledgments

This work was supported by the Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism in 2023 (Project Name: Development of Intelligent Personalized Rehabilitation Service Technology, Project Number: SR202104001, Contribution Rate: 33.34%) and by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00352 and No. RS-2022-00155586).
References

Bahdanau, D.; Cho, K.; and Bengio, Y. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the International Conference on Learning Representations (ICLR).
Bhoi, S.; Lee, M. L.; Hsu, W.; Fang, A. H. S.; and Tan, N. C. 2021. Personalizing Medication Recommendation with a Graph-Based Approach. ACM Transactions on Information Systems, 40: 1–23.
Gong, F.; Wang, M.; Wang, H.; Wang, S.; and Liu, M. 2021. SMR: Medical Knowledge Graph Embedding for Safe Medicine Recommendation. Big Data Research, 23: 100174.
Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the Knowledge in a Neural Network. arXiv preprint arXiv:1503.02531.
Jang, E.; Gu, S.; and Poole, B. 2017. Categorical Reparameterization with Gumbel-Softmax. In Proceedings of the International Conference on Learning Representations (ICLR).
Johnson, A.; Bulgarelli, L.; Pollard, T.; Horng, S.; Celi, L. A.; and Mark, R. 2023. MIMIC-IV, a freely accessible electronic health record dataset. Scientific Data, 10: 1.
Johnson, A. E.; Pollard, T. J.; Shen, L.; Lehman, L.-w. H.; Feng, M.; Ghassemi, M.; Moody, B.; Szolovits, P.; Anthony Celi, L.; and Mark, R. G. 2016. MIMIC-III, a freely accessible critical care database. Scientific Data, 3: 1–9.
Nguyen, T.; Pernkopf, F.; and Kosmider, M. 2020. Acoustic Scene Classification for Mismatched Recording Devices Using Heated-Up Softmax and Spectrum Correction. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 126–130.
Shang, J.; Ma, T.; Xiao, C.; and Sun, J. 2019a. Pre-training of Graph Augmented Transformers for Medication Recommendation. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 5953–5959.
Shang, J.; Xiao, C.; Ma, T.; Li, H.; and Sun, J. 2019b. GAMENet: Graph Augmented MEmory Networks for Recommending Medication Combination. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), volume 33, 1126–1133.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is All you Need. In Proceedings of the International Conference on Neural Information Processing Systems (NeurIPS), volume 30, 5998–6008.
Wang, L.; Zhang, W.; He, X.; and Zha, H. 2018. Personalized Prescription for Comorbidity. In Proceedings of the International Conference on Database Systems for Advanced Applications (DASFAA), 3–19.
Wang, Y.; Chen, W.; Pi, D.; Yue, L.; Wang, S.; and Xu, M. 2021a. Self-Supervised Adversarial Distribution Regularization for Medication Recommendation. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 3134–3140.
Wang, Y.; Chen, W.; Pi, D.; Yue, L.; Xu, M.; and Li, X. 2021b. Multi-hop Reading on Memory Neural Network with Selective Coverage for Medication Recommendation. In Proceedings of the International ACM Conference on Information and Knowledge Management (CIKM), 2020–2029.
Wu, J.; Qian, B.; Li, Y.; Gao, Z.; Ju, M.; Yang, Y.; Zheng, Y.; Gong, T.; Li, C.; and Zhang, X. 2022a. Leveraging multiple types of domain knowledge for safe and effective drug recommendation. In Proceedings of the International ACM Conference on Information and Knowledge Management (CIKM), 2169–2178.
Wu, R.; Qiu, Z.; Jiang, J.; Qi, G.; and Wu, X. 2022b. Conditional Generation Net for Medication Recommendation. In Proceedings of the International ACM Conference on World Wide Web (WWW), 935–945.
Yang, C.; Xiao, C.; Glass, L.; and Sun, J. 2021a. Change Matters: Medication Change Prediction with Recurrent Residual Networks. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 3728–3734.
Yang, C.; Xiao, C.; Ma, F.; Glass, L.; and Sun, J. 2021b. SafeDrug: Dual Molecular Graph Encoders for Recommending Effective and Safe Drug Combinations. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 3735–3741.
Zhang, Y.; Chen, R.; Tang, J.; Stewart, W. F.; and Sun, J. 2017. LEAP: Learning to Prescribe Effective and Safe Treatment Combinations for Multimorbidity. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 1315–1324.
Optimal Quasi-clique: Hardness, Equivalence with Densest-k-Subgraph, and Quasi-partitioned Community Mining

Aritra Konar1, Nicholas D. Sidiropoulos2
1KU Leuven, Leuven, Belgium
2University of Virginia, Charlottesville, USA
[email protected], [email protected]

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Dense subgraph discovery (DSD) is a key primitive in graph mining that typically deals with extracting cliques and near-cliques. In this paper, we revisit the optimal quasi-clique (OQC) formulation for DSD and establish that it is NP–hard. In addition, we reveal the hitherto unknown property that OQC can be used to explore the entire spectrum of densest subgraphs of all distinct sizes by appropriately varying a single hyperparameter, thereby forging an intimate link with the classic densest-k-subgraph problem (DkS). We corroborate these findings on real-world graphs by applying the simple greedy algorithm for OQC with improved hyperparameter tuning, to quickly generate high-quality approximations of the size-density frontier. Our findings indicate that OQC not only extracts high-quality (near-)cliques, but also large and loosely-connected subgraphs that exhibit well-defined local community structure. The latter discovery is particularly intriguing, since OQC is not explicitly geared towards community detection.

Introduction

Dense subgraph detection (DSD) is a key primitive in graph mining that aims to extract highly interconnected subsets of vertices from a graph. Applications of the problem range from discovering regulatory motifs in genomic DNA, mining trending topics in social media, and finding functional modules in gene co-expression networks, to detecting communities in social networks; see (Cadena, Chen, and Vullikanti 2018; Lanciano et al. 2023) and references therein. In recent years, DSD has also found application in spotting fraudulent behavior in user-product graphs (Hooi et al. 2016) and financial transaction networks (Zhang et al. 2017; Li et al. 2020; Chen and Tsourakakis 2022).

Directly maximizing subgraph density (defined as the fraction of the maximum number of possible edges in a subgraph) admits trivial solutions such as a single edge. This motivates using alternative surrogates for density maximization. The classic Densest Subgraph (DSG) problem (Goldberg 1984) aims to extract a dense vertex subset that maximizes the average induced degree. DSG can be solved exactly in polynomial time via maximum-flow (Goldberg 1984). In practice, a simple vertex-peeling-based greedy approximation algorithm (Charikar 2000) is used, as it enjoys linear-time complexity and provides a 0.5-approximation guarantee for DSG. Recently, "multi-pass" generalizations of the greedy algorithm have been developed which exhibit superior performance (Boob et al. 2020; Chekuri, Quanrud, and Torres 2022). Another well-known formulation is the core decomposition (Seidman 1983), which is tantamount to maximizing the minimum induced degree; the resulting vertex subset is known as the maxcore, which can be obtained via a slight modification of the greedy peeling algorithm for DSG. These approaches suffer from an inherent limitation: there is no means of explicitly controlling the size of the extracted subgraphs. Hence, one cannot rule out the possibility that these extracted subgraphs will have low density. Unfortunately, such cases can occur on real-world graphs.
For example, the peeling algorithm for DSG can output the entire graph as the solution (Tsourakakis et al. 2013). Meanwhile, empirical studies have revealed that maxcores typically do not form a dense quasi-clique (Shin, Eliassi-Rad, and Faloutsos 2016). If the density of the DSG solution or the maxcore proves unsatisfactory, the Densest-k-Subgraph (DkS) problem (Feige, Peleg, and Kortsarz 2001) can be employed: given a pre-specified size parameter k, extract the densest size-k vertex subset (i.e., the one harboring the maximum number of induced edges). By solving the problem for various k, we obtain a collection of the densest subgraphs of distinct sizes, from which a solution of desired density can be selected. We designate the entire spectrum of such subgraphs (i.e., the densest of each distinct size) the optimal size-density frontier. Unfortunately, this extra flexibility comes at a price: DkS is NP–hard and notoriously difficult to approximate in the worst case (Manurangsi 2017). Notwithstanding this fact, practical algorithms which work well for this problem on real graphs include (Papailiopoulos et al. 2014; Konar and Sidiropoulos 2021). However, a limitation of these approaches is that they entail solving an optimization problem for each k, which can prove computationally expensive when generating candidate solutions of various sizes. An alternative is the recent Generalized Mean Densest Subgraph (GMDSG) framework (Veldt, Benson, and Kleinberg 2021), which employs a single parameter p for computing generalized means of the degree sequence of a subgraph.

Figure 1: (Top panel): Size-density frontiers generated using greedyOQC (blue) and DkS (red) on the Facebook dataset. (Left): Subgraphs in the range spanned by α ∈ [0.33, 0.99] and (right): in the range α ∈ [0.01, 0.33). Denser subgraphs mined by greedyOQC correspond to larger values of α. (Bottom panel): Visualizing local communities in loosely-knit subgraphs extracted by OQC via the block-diagonal structure of their adjacency matrices. (Left): size = 574, density = 0.21. (Right): size = 1297, density = 0.07.

By varying p, one can extract a family of dense subgraphs which obey different notions of density, with DSG and maxcore corresponding to the choices p = 1 and p = −∞, respectively. For p ≥ 1, GMDSG can be solved optimally in polynomial time via maximum-flow, and it is also amenable to high-quality approximation via a generalized greedy peeling algorithm. While a useful generalization of DSG, it is presently unknown whether the solution of GMDSG (for a given p) corresponds to the densest subgraph of that size, i.e., whether it matches the solution of DkS (in terms of density) for a given k.

In this paper, our primary goal is to highlight an alternative means of mining subgraphs from the optimal size-density frontier, as opposed to employing DkS. To this end, we revisit the optimal quasi-clique (OQC) formulation proposed in (Tsourakakis et al. 2013). Similar to GMDSG, the framework employs a single parameter α to quantify subgraph density; in particular, how unexpected the density of a subgraph is with respect to a random subgraph model. In (Tsourakakis et al. 2013), a greedy peeling algorithm was developed for OQC and tested with α = 1/3 to demonstrate that it outperforms DSG on real-world graphs. However, the merits of this parameter choice have not been formally investigated. In fact, the precise role played by α remains ill-understood.
Loosely speaking, the OQC formulation (2) can be viewed as a "regularized" counterpart of DkS, with α serving as a trade-off parameter between subgraph size and density. Building on this intuition, we provide several important insights regarding the problem. Our contributions can be summarized as follows.

1. Hardness: We prove that OQC is NP–hard for undirected, unweighted graphs, thereby settling a long-standing conjecture regarding the complexity of the problem originally posed in (Tsourakakis et al. 2013).

2. Equivalence with DkS: We demonstrate that the densities of the maximizers of OQC obtained by continuous variation of the parameter α equal those of the maximizers of DkS obtained by variation of the discrete size parameter k. In other words, by varying their respective parameters, both formulations generate the optimal size-density frontier of a graph (such a property is currently not known for GMDSG). To establish this, we prove the existence of sub-intervals of α where the maximizers of OQC are the densest subgraphs of a particular size k. We remark that such an equivalence between non-convex, combinatorial problems is surprising, since unlike establishing equivalences between regularized and constrained variants of continuous problems, we cannot appeal to strong duality (Boyd and Vandenberghe 2004) or to penalty-based approaches (Bertsekas 2014).

3. Quickly exploring the size-density frontier: Since both DkS and OQC are difficult to solve exactly, in practice there can be a difference in the quality of the subgraphs they extract. An implication of our results is that the greedy peeling algorithm for OQC (Tsourakakis et al. 2013) is a natural baseline for benchmarking the performance of DkS methods. Besides its linear-time complexity, an attractive feature of this peeling method is that the peeling order does not depend on α. Hence, by running the method once to obtain the order, different values of α can be used in a post-processing step to select subgraphs of different densities and sizes. This is in stark contrast to methods for DkS, which must be run for each distinct k. An illustrative example of the performance of greedy peeling and the convex relaxation algorithm (Konar and Sidiropoulos 2021) for DkS on the Facebook dataset (obtained from (?)) is provided in Figure 1. In the top panels, we display the size-density frontiers of OQC and DkS for two ranges of subgraph sizes. Notice how closely the curves match, with the peeling method exhibiting slightly better densities for subgraph sizes below 110. Additionally, we noted that increasing α beyond 1/3 generally improves the density performance; e.g., with α = 1/3 we obtain a subgraph of size 200 with density 0.78, whereas with α = 0.99 we obtain a clique of size 69. It can also be observed that the frontier generated by OQC is coarser than that of DkS; this is due to the nature of the peeling algorithm (see Experiments for a more detailed discussion).

4. Large and sparse quasi-cliques can also be interesting: DSD is mostly concerned with extracting cliques and near-cliques, which reside in the high-density region of the optimal size-density frontier. Thus, the task of mining larger, less cohesive subgraphs is a priori not well motivated. Unexpectedly, it turns out that in real-world graphs, quasi-cliques with density as low as 7% can exhibit well-defined, non-trivial local community structure.
This is illustrated in the bottom panel of Figure 1: as α is decreased, the peeling algorithm "zooms out" to reveal sparsely connected subgraphs which harbor loosely interconnected communities of smaller dense subgraphs. This discovery is surprising, since the objective function of OQC does not explicitly promote community structure. Our results on other real-world graphs reveal a similar pattern (see Experiments).

The Optimal Quasi-clique Problem

Consider an undirected, unweighted graph G := (V, E) on n vertices with m edges. Given a subset of vertices S ⊆ V, let e(S) denote the number of edges in the subgraph G_S induced by S. The density of S is defined as $\rho(S) := e(S)/\binom{|S|}{2}$. The optimal quasi-clique (OQC) formulation proposed in (Tsourakakis et al. 2013) aims at finding the subgraph that maximizes the objective function

$$f_\alpha(S) := e(S) - \alpha \binom{|S|}{2}. \quad (1)$$

The first term encourages the subgraph induced by S to have a large number of edges, while the second term penalizes large subgraph sizes. The regularization parameter α ∈ (0, 1) plays a balancing act in trading off subgraph density for size. The objective function admits the following interpretation: the second term can be viewed as the number of edges that appear in expectation in a random Erdos-Renyi graph defined on the vertex subset S, where α ∈ (0, 1) denotes the probability of an edge connecting a pair of vertices. Thus, f_α(S) assigns a greater reward to subgraphs G_S which exhibit a large surplus of edges with respect to the random subgraph model. Overall, OQC aims to solve the optimization problem

$$\max_{S \subseteq V} f_\alpha(S). \quad (2)$$

The choice of the parameter α affects the size and density of the extracted solution. Intuitively, selecting a small value of α allows large, non-dense subgraphs to exhibit a large edge surplus. As the value of α is increased, dense subgraphs of smaller size are favored. In (Tsourakakis et al. 2013), it was recommended to set α = 1/3.
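For reference, both quantities are straightforward to evaluate for a given subset; the following Python sketch assumes a NetworkX graph (the function names are ours, and a brute-force search over all subsets for (2) would of course be exponential).

```python
import networkx as nx

def density(G, S):
    """rho(S) = e(S) / C(|S|, 2)."""
    k = len(S)
    return 2 * G.subgraph(S).number_of_edges() / (k * (k - 1)) if k > 1 else 0.0

def edge_surplus(G, S, alpha):
    """f_alpha(S) = e(S) - alpha * C(|S|, 2), cf. Eq. (1)."""
    k = len(S)
    return G.subgraph(S).number_of_edges() - alpha * k * (k - 1) / 2
```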
Hardness

Little is known regarding the computational complexity of problem (2): it has long been suspected to be NP–hard (Tsourakakis et al. 2013), but a formal proof has remained elusive thus far. We point out that a generalization of the OQC problem studied in (Cadena, Vullikanti, and Aggarwal 2016), where the edges of G are allowed to have arbitrary weights, has been shown to be NP–hard. An analogous result for undirected graphs where each edge has unit weight, however, is not presently known. Our first major contribution settles the matter by furnishing a proof of NP–hardness via a reduction from the decision version of the MAXCLIQUE problem, which is known to be NP–complete (Karp 1972). Given G and a positive integer k ≥ 3, the decision variant of MAXCLIQUE asks whether the maximum clique size is at least k. We demonstrate that for every choice of k, there exists a sub-interval of α ∈ (0, 1) for which there is a one-to-one correspondence between the solutions of problem (2) and MAXCLIQUE. Hence, OQC is at least as hard as solving an arbitrary decision instance of MAXCLIQUE. Our reduction utilizes the following key result in extremal graph theory.

Fact 1 (Turán's theorem (Turan 1941)). Every graph on n vertices that does not contain a k-clique can have at most the following number of edges:

$$\tau(n, k) := \left(1 - \frac{1}{k-1}\right) \frac{n^2}{2}. \quad (3)$$

In other words, if the number of edges in an n-vertex graph exceeds τ(n, k), then it must contain a k-clique.

We adopt the following approach: if a graph contains a k-clique, then the clique is a subset of some subgraph of size at least k with "sufficiently" large edge density. That is, if we can locate an induced subgraph G_S whose number of induced edges e(S) exceeds the threshold τ(|S|, k), then that subgraph must harbor a k-clique. Since such a k-clique in G_S is also a k-clique in G, an affirmative answer to the instance of MAXCLIQUE has then been determined. The task of detecting such a subgraph and tying it to the solution of the OQC problem (2) is formalized in the following result.

Theorem 1. The optimal quasi-clique problem on undirected graphs (2) is NP–hard.

Proof. We briefly sketch the outline. Given a decision instance of MAXCLIQUE, we utilize Turán's theorem to determine the smallest constant α ∈ (0, 1) for which a subgraph G_S induced by S ⊆ V obeys the inequality

$$\alpha \binom{|S|}{2} \geq \tau(|S|, k) = \left(1 - \frac{1}{k-1}\right) \frac{|S|^2}{2}. \quad (4)$$

For such a choice of α, if it additionally holds that the edge surplus f_α(S) > 0, then we have the implication e(S) > τ(|S|, k), which in turn implies that G_S harbors a k-clique. It turns out that a sufficient choice of α is

$$\alpha_k := 1 - \frac{1}{(k-1)^2}. \quad (5)$$

We can show that solving (2) with α = α_k and examining whether the size of the solution is at least k, or smaller than k, corresponds to solving any decision instance of MAXCLIQUE.

Remark 1. When testing for the presence of a k-clique, the above result remains unchanged if the threshold α = α_k is replaced by any value of α in the sub-interval [α_k, 1). This is because, for a fixed value of k, α_k is the smallest constant that satisfies inequality (4); clearly, any value of α exceeding this threshold is also a valid choice.
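The quantities in Eqs. (3)-(5) are simple to evaluate numerically; a small sketch follows (the variable names are ours, and we check inequality (4) only for subgraph sizes s ≥ k, the regime relevant to the reduction).

```python
def turan_bound(s, k):
    """tau(s, k): max edges of an s-vertex graph with no k-clique, Eq. (3)."""
    return (1.0 - 1.0 / (k - 1)) * s * s / 2.0

def alpha_k(k):
    """Eq. (5): for |S| >= k, alpha_k * C(|S|, 2) >= tau(|S|, k), so a
    positive edge surplus at alpha_k certifies a k-clique."""
    return 1.0 - 1.0 / (k - 1) ** 2

# Sanity check of inequality (4) for k = 5 over a range of subgraph sizes.
for s in range(5, 50):
    assert alpha_k(5) * s * (s - 1) / 2 >= turan_bound(s, 5)
```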
Unveiling the Role of α

As mentioned previously, the choice of α plays a key role in determining the quality (in terms of size and density) of the subgraph extracted by OQC. However, the question of which subgraphs of G correspond to maximizers of OQC for a given value of α ∈ (0, 1) has not been formally investigated. This is the main object of our study. Given a parameter k ∈ [K] := {2, · · · , n}, we denote the optimal (i.e., maximum) density across all size-k subgraphs as $\rho^*_k := \max_{|S|=k} \rho(S)$. The collection of pairs $\{(k, \rho^*_k)\}_{k \in [K]}$ then corresponds to the optimal size-density frontier of G; i.e., each frontier point denotes the maximum subgraph density for a given size. Regarding the relationship among the optimal density values $\{\rho^*_k\}_{k \in [K]}$, the following result is known (Kawase and Miyauchi 2018, Lemma 1).

Lemma 1. For any graph G, the optimal size-k subgraph density $\rho^*_k$ is a monotonically non-increasing function of the size; i.e., it always holds that

$$\rho^*_k \geq \rho^*_{k+1}, \quad \forall k \in [K]. \quad (6)$$

Let ω denote the size of the maximum clique in G, and let C_ω be a subset of vertices constituting a maximum clique. The result implies that for every size k ≤ ω, the maximum density is $\rho^*_k = 1$. This is because a fixed-size subgraph attains a density of 1 (the maximum possible value) if and only if it is a clique, and the maximum clique contains cliques of every smaller size. For sizes k > ω, the optimal density $\rho^*_k$ is bounded away from 1; i.e., the densest subgraphs in this range of sizes are quasi-cliques. Furthermore, by virtue of Lemma 1, the density $\rho^*_k$ of these optimal quasi-cliques is a monotone non-increasing function of the size k.

Our second major contribution establishes that solving OQC with varying α is equivalent to mining subgraphs corresponding to different points on the optimal size-density frontier. To be precise, we show that for every unique density value in the set $\{\rho^*_k\}_{k \in [K]}$, there exists a sub-interval of α ∈ (0, 1) for which the solution of OQC corresponds to the largest subgraph attaining that density. For example, our result implies that there is a range of α for which the maximizers of problem (2) are the maximum cliques, which are the largest subgraphs in G with density 1. As expected, our results show that large values of α enable OQC to mine maximum cliques and optimal near-cliques lying on the optimal size-density frontier, with smaller values extracting larger subgraphs of lower density on this frontier.

Extracting the Maximum Clique

We provide sufficient conditions on α under which the optimal solutions of problem (2) coincide with the maximum cliques in G. First, we establish the following warm-up result. Consider a vertex subset S of size at most ω with density ρ(S) ∈ [0, 1]. Then, for any choice of α ∈ (0, 1) in the edge-surplus function (1), the following statement is true.

Lemma 2. For any subgraph of size |S| ≤ ω, it always holds that

$$f_\alpha(S) \leq f_\alpha(C_\omega). \quad (7)$$

Since the maximum clique size ω is unique, inequality (7) is satisfied with equality if and only if S constitutes a maximum clique in G. We conclude from this result that all subgraphs of G which lie in the "shadow" of the maximum clique, i.e., which are dominated in size and density by C_ω, are always sub-optimal for (2), irrespective of the choice of α ∈ (0, 1). Consequently, if $S^*_\alpha$ denotes an optimal solution of (2), then for every value of α ∈ (0, 1) it must hold that

$$|S^*_\alpha| \geq \omega \quad \text{and} \quad f_\alpha(S^*_\alpha) \geq f_\alpha(C_\omega). \quad (8)$$

Going forward, we are interested in determining for what range of α the above pair of inequalities is satisfied with equality, which implies that the optimal solution of (2) coincides with the maximum clique. We expect the required value of α to be large in order for the edge surplus attained by the maximum cliques of G to dominate that of all other subgraphs. Let $\rho^*_{\omega+1}$ denote the density of the densest quasi-clique of size larger than ω. Define the threshold

$$\hat{\alpha} := \rho^*_{\omega+1} - (1 - \rho^*_{\omega+1}) \cdot c_0, \quad (9)$$

where $c_0 := \omega(\omega-1)/\big((n-\omega)(n+\omega+1)\big)$. Then, we have the following result.

Theorem 2. For all α ∈ ($\hat{\alpha}$, 1), the maximizers of OQC are the maximum cliques in G.

Remark 2. We point out that extracting a maximum clique C_ω corresponds to extracting all points $\{(k, 1)\}_{k \leq \omega}$ on the optimal size-density frontier, since C_ω contains cliques of all smaller sizes.

Extracting Optimal Quasi-Cliques Larger than the Maximum Clique

Define the set [L] := {1, · · · , n − ω}. For a fixed parameter ℓ ∈ [L], let $\mathcal{Q}_\ell$ denote the set of all quasi-cliques in G of size ω + ℓ, and let $Q^*_\ell \in \mathcal{Q}_\ell$ denote an optimal quasi-clique of size ω + ℓ attaining the maximum density $\rho^*_{\omega+\ell}$; i.e., $Q^*_\ell \in \arg\max_{|S|=\omega+\ell} \rho(S)$. Next, we show that α can be selected such that the maximizers of OQC correspond to the optimal quasi-cliques $\{Q^*_\ell\}_{\ell \in [L]}$. Note that such optimal quasi-cliques correspond to the points $(\omega+\ell, \rho^*_{\omega+\ell})$ on the optimal size-density frontier. Our analysis requires the following assumption.

Assumption 1: Every optimal density value in the range $\{\rho^*_{\omega+\ell}\}_{\ell \in [L]}$ is unique. In other words, the optimal density values are not repeated for subgraph sizes larger than ω.
While reasonable, this condition does not hold without loss of generality (e.g., in a 4-cycle, $\rho^*_3 = \rho^*_4$). Nevertheless, its primary utility is to keep the derivations simple; it can be relaxed at the expense of more cumbersome technical arguments.

Warm-up: We first consider the case ℓ = 1, which corresponds to extracting $Q^*_1$; the extension to the general case is described afterwards. In order for $Q^*_1$ to be the unique maximizer of (2), α should satisfy each of the following conditions.

1. $f_\alpha(Q^*_1) > f_\alpha(Q_1), \; \forall Q_1 \in \mathcal{Q}_1 \setminus Q^*_1$. This reflects the requirement that $Q^*_1$ have the maximum edge surplus among all quasi-cliques $Q_1$ of size ω + 1.
2. $f_\alpha(Q^*_1) > f_\alpha(Q_\ell), \; \forall Q_\ell \in \mathcal{Q}_+ \setminus \mathcal{Q}_1$. This ensures that the edge surplus attained by $Q^*_1$ dominates that of the quasi-cliques of size larger than ω + 1.
3. $f_\alpha(Q^*_1) > f_\alpha(C_\omega)$. That is, the edge surplus of $Q^*_1$ must exceed that of the maximum clique $C_\omega$. Recall the assertion of Lemma 2, which states that for any choice of α ∈ (0, 1) we have $f_\alpha(C_\omega) \geq f_\alpha(S)$ for all subgraphs of size |S| ≤ ω. Hence, satisfying $f_\alpha(Q^*_1) > f_\alpha(C_\omega)$ also guarantees $f_\alpha(Q^*_1) > f_\alpha(S)$ for all subgraphs S smaller than ω.

Define the thresholds

$$LB(1) := \rho^*_{\omega+2} - (\rho^*_{\omega+1} - \rho^*_{\omega+2}) \cdot c_1, \quad (10a)$$
$$UB(1) := \rho^*_{\omega+1} - (1 - \rho^*_{\omega+1})(\omega - 1)/2, \quad (10b)$$

where $c_1$ is a constant dependent on n and ω. We can show that these thresholds define an open sub-interval in which α satisfies the above three conditions, leading to the following result.

Theorem 3. For all α ∈ (LB(1), UB(1)), the maximizers of OQC correspond to the optimal quasi-cliques $Q^*_1$.

The general case: Next, we consider the extraction of optimal quasi-cliques of sizes ℓ ∈ {2, · · · , n − ω}. Again, three conditions must be met for $Q^*_\ell$ to be the unique maximizer of (2).

4. $f_\alpha(Q^*_\ell) > f_\alpha(Q_\ell), \; \forall Q_\ell \in \mathcal{Q}_\ell \setminus Q^*_\ell$. This condition is the same as (1).
5. $f_\alpha(Q^*_\ell) > f_\alpha(Q_k), \; \forall Q_k \in \mathcal{Q}_k, \; k \in [K]_\ell := \{\ell+1, \cdots, n-\omega\}$. This generalizes condition (2), ensuring that the edge surplus of $Q^*_\ell$ dominates that of all subgraphs of larger size.
6. $f_\alpha(Q^*_\ell) > f_\alpha(Q_j), \; \forall Q_j \in \mathcal{Q}_j, \; j \in \{1, \cdots, \ell-1\}$. This generalizes condition (3), ensuring that the edge surplus of the optimal quasi-clique $Q^*_\ell$ of size ω + ℓ exceeds that of all quasi-cliques of smaller sizes.

Define the thresholds

$$LB(\ell) := \rho^*_{\omega+(\ell+1)} - (\rho^*_{\omega+\ell} - \rho^*_{\omega+(\ell+1)}) \cdot c_\ell, \quad (11a)$$
$$UB(\ell) := \rho^*_{\omega+\ell} - (\rho^*_{\omega+(\ell-1)} - \rho^*_{\omega+\ell}) \cdot \frac{\omega + (\ell - 2)}{2}, \quad (11b)$$

where $c_\ell$ is a constant dependent on n, ℓ, and ω. By an appropriate generalization of the arguments underpinning Theorem 3, we can utilize these thresholds to obtain the following result.

Theorem 4. For all α ∈ (LB(ℓ), UB(ℓ)), the maximizers of OQC correspond to the optimal quasi-cliques $Q^*_\ell$.

Overall, our results demonstrate that for any graph G, there exists a choice of α such that the maximizers of OQC correspond to the largest quasi-clique attaining a given unique density on the optimal size-density frontier.

Relationship with Densest-k-Subgraph

In the previous section, we demonstrated that there exist choices of α in OQC which enable extraction of the subgraphs comprising the optimal size-density frontier, i.e., subgraphs of G corresponding to the pairs $\{(k, \rho^*_k)\}_{k \in [K]}$. An alternate means of traversing this frontier is to employ the DENSEST-k-SUBGRAPH (DkS) formulation: given a size parameter k ∈ [K], DkS aims to find maximizers of the optimization problem $\max_{|S|=k} \rho(S)$.
Clearly, any size-k maximizer of DkS corresponds to the point $(k, \rho^*_k)$ on the optimal size-density frontier. Thus, by varying k, DkS can be used to sweep the frontier comprising the pairs $\{(k, \rho^*_k)\}_{k \in [K]}$. This implies that, in terms of the optimal density value attainable for each specific subgraph size, OQC and DkS are equivalent. A natural follow-up question is the relationship between the maximizers of the twin formulations: can the problems be viewed as equivalent in this respect as well? To this end, we define the following notation. For a fixed value of α ∈ (0, 1), let $\mathcal{S}^*_\alpha$ denote the collection of maximizers of OQC; i.e., a subgraph $S^*_\alpha \in \mathcal{S}^*_\alpha$ is a maximizer of OQC. Similarly, for a fixed size k ∈ [K], let $\mathcal{S}^*_k$ denote the collection of maximizers of DkS.

Theorem 5. For every α ∈ (0, 1), there exists a value of k ∈ [K] such that $\mathcal{S}^*_\alpha \subseteq \mathcal{S}^*_k$. However, there exist maximizers of DkS which are not maximizers of OQC.

The second case corresponds to scenarios where the optimal density value is repeated across successive subgraph sizes. Hence, the two formulations are not entirely equivalent with respect to their maximizers. However, we can show that when such an event occurs, the maximizers of OQC correspond to the largest quasi-clique among all optimal quasi-cliques attaining the same density value (across successive sizes). Additionally, we can show that the largest such quasi-clique contains all quasi-cliques of smaller sizes with the same density.

Lemma 3. Let $Q^*_k$ and $Q^*_{k+1}$ be optimal quasi-cliques with densities $\rho^*_k = \rho^*_{k+1}$. Then, $Q^*_{k+1}$ harbors a size-k optimal quasi-clique with density $\rho^*_k$.

Note that the above result generalizes the fact that the maximum clique contains cliques of all sizes less than ω.

Experiments

In principle, both OQC and DkS can be employed to mine dense subgraphs of differing sizes from the optimal size-density frontier $\{(k, \rho^*_k)\}_{k \in [K]}$ of G. However, these problems are NP–hard in the worst case. In light of this fact, we resort to approximation algorithms for each formulation, which are not guaranteed to find optimal solutions in general. Thus, in practice, depending on the effectiveness of the selected algorithm, the quality (in terms of size and density) of the subgraphs extracted by the two formulations can differ. In this section, we conduct an empirical comparison of the subgraphs extracted by approximation methods for DkS and OQC on real-world graphs, and we provide guidelines on which formulation to use.

Table 1: Summary of graph statistics: the number of vertices (n), the number of edges (m), and network type.

| Dataset      | n    | m     | Network Type   |
|--------------|------|-------|----------------|
| FACEBOOK     | 4K   | 88K   | Social         |
| SOC-GOOGLE   | 211K | 1.14M | Social         |
| WEB-STANFORD | 281K | 2.31M | Web graph      |
| MATHSCINET   | 332K | 820K  | Co-authorship  |
| CA-DBLP      | 540K | 15M   | Co-authorship  |
| WEB-GOOGLE   | 875K | 5.10M | Web graph      |
| PATENTS      | 3.7M | 16.7M | Citation graph |

Lovász Relaxation for DkS: We employ the recent convex relaxation approach of (Konar and Sidiropoulos 2021), wherein the Lovász extension of the supermodular objective function of DkS is maximized over the convex hull of the sum-to-k constraints. The resulting problem is solved using the Alternating Direction Method of Multipliers (ADMM) (Condat 2013). As the solution is not guaranteed to be integral, a rounding post-processing step is used to obtain a candidate subgraph of the desired size k.
Greedy peeling for OQC: We employ the greedy vertex-peeling algorithm originally proposed in (Tsourakakis et al. 2013). Starting from the entire graph G, the algorithm repeatedly peels off the lowest-degree vertex until no vertices remain. In the process, a sequence of nested subgraphs is generated, and the one attaining the largest edge surplus is returned as the solution. The algorithm can be implemented efficiently in O(n + m) time. In (Tsourakakis et al. 2013), the choice α = 1/3 was recommended for selecting the subgraph with the largest edge surplus. As our theoretical analysis reveals that larger values of α are more suitable for detecting dense quasi-cliques, in our experiments we employ larger values of α. In practice, given a graph G, it is difficult to determine the exact sub-interval of α required to extract a quasi-clique of a desired size, since we do not know a priori all the parameters required for constructing the requisite sub-interval, including the range of subgraph sizes over which the optimal density values are repeated. Consequently, we resort to empirically chosen values of α. Note that fine-tuning the selection of α can be accomplished in a post-processing step, independently of the algorithm; i.e., the algorithm has to be executed only once to obtain a ranking of the vertices based on the iteration at which they were eliminated (this procedure does not depend on the value of α). Thereafter, different values of α can be tested to extract the best solution relative to the corresponding edge-surplus function. In this manner, the algorithm can be employed to quickly generate an approximation of the optimal size-density curve of G; a sketch of this peel-once, sweep-α procedure is given below.

Datasets, pre-processing and implementation: We used a collection of datasets (summarized in Table 1) obtained from standard repositories (Leskovec and Krevl 2014) to test the performance of all methods. Each dataset was pre-processed by symmetrizing any directed arcs, removing self-loops, and extracting the largest connected component. All experiments were performed in Matlab on a MacBook equipped with 16GB of RAM and an M2 processor. The code for the ADMM algorithm for solving DkS was the same as that employed in (Konar and Sidiropoulos 2021).

Performance on real-world graphs: After running the GREEDYOQC algorithm on a dataset, we perform a grid search over α in the range [0.01, 0.99], in increments of 0.01. Each value of α defines a different edge-surplus function, with which the subgraph attaining the largest edge surplus among the family of nested subgraphs generated by GREEDYOQC is selected. The smallest and largest subgraph sizes obtained by this procedure are then set as the lower and upper limits on k for the ADMM algorithm for DkS, respectively.
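The following Python sketch illustrates the two-stage procedure described above: one peeling pass records the removal order and the edge counts of the nested suffix subgraphs, after which sweeping α is a cheap post-processing step. It assumes a NetworkX graph; the variable names are ours, and ties among minimum-degree vertices are broken arbitrarily.

```python
import heapq
import networkx as nx

def greedy_peel(G):
    """One peeling pass: repeatedly remove a minimum-degree vertex.
    Returns the removal order and e(S_i) for each nested suffix S_i."""
    deg = dict(G.degree())
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    removed, order, edges_left = set(), [], []
    m = G.number_of_edges()
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != deg[v]:
            continue                  # stale heap entry
        order.append(v)
        edges_left.append(m)          # edges of the suffix starting at v
        removed.add(v)
        for u in G.neighbors(v):
            if u not in removed:
                deg[u] -= 1
                m -= 1
                heapq.heappush(heap, (deg[u], u))
    return order, edges_left

def best_suffix(order, edges_left, alpha):
    """Post-processing: the nested suffix with the largest edge surplus."""
    n, best, best_f = len(order), None, float('-inf')
    for i in range(n):
        size = n - i
        f = edges_left[i] - alpha * size * (size - 1) / 2
        if f > best_f:
            best_f, best = f, order[i:]
    return best, best_f

# Peel once, then sweep alpha to trace an approximate size-density frontier:
# order, edges = greedy_peel(G)
# frontier = {a: best_suffix(order, edges, a) for a in (0.33, 0.6, 0.9, 0.99)}
```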
Datasets, pre-processing and implementation: We used a collection of datasets (summarized in Table 1) obtained from standard repositories (Leskovec and Krevl 2014) to test the performance of all methods. Each dataset is pre-processed by symmetrizing any directed arcs, removing self-loops, and extracting the largest connected component. All our experiments were performed in Matlab on a Macbook equipped with 16GB RAM and an M2 processor. The code for the ADMM algorithm for solving DkS was the same as that employed in (Konar and Sidiropoulos 2021).

Performance on real-world graphs: After running the GREEDYOQC algorithm on a dataset, we perform a grid search on $\alpha$ in the range $[0.01, 0.99]$, in increments of 0.01. Each value of $\alpha$ defines a different edge-surplus function, using which the subgraph with the largest edge surplus amongst the family of nested subgraphs generated by GREEDYOQC is selected. The smallest and largest subgraphs obtained by this procedure are then set to be the lower and upper limits on $k$ in the ADMM algorithm for DkS, respectively. The size-density frontiers generated by these two methods on the aforementioned datasets are depicted in Figure 2.

Figure 2: Size-density frontiers generated by GREEDYOQC and ADMM-DkS. For OQC, denser subgraphs correspond to smaller values of α.

We make the following general observations.

1. There exist "gaps" in the size-density frontier generated by GREEDYOQC. This is because, for each choice of $\alpha$, the solution is always restricted to be chosen from the same family of $n$ nested subgraphs generated by the peeling process. We empirically confirmed that this can result in the same subgraph in the family attaining the largest edge surplus for successive values of $\alpha$. Owing to these "resolution limits", the extracted subgraphs can correspond to a coarse approximation of the optimal size-density frontier in terms of the range of subgraph sizes spanned. The results also show that larger values of $\alpha$ do indeed retrieve denser subgraphs; in particular, the originally recommended (Tsourakakis et al. 2013) choice of $\alpha = 1/3$ can be sub-optimal in this regard.

2. Since the ADMM-based relaxation of DkS is designed to output a subgraph of a distinct size, it does not exhibit gaps in its generated size-density frontier. Thus, it offers a more fine-grained approximation of the optimal size-density frontier in terms of subgraph sizes compared to GREEDYOQC. However, this comes at the cost of extra computational time, as the algorithm has to be run for each distinct value of $k$.

3. For subgraph sizes corresponding to the intersection of the twin size-density frontiers, the two algorithms are closely matched in general. However, for smaller subgraph sizes ($\leq 100$), the ADMM algorithm can perform worse than GREEDYOQC, which attains high-quality solutions in this range.

We conclude that GREEDYOQC can be used to quickly obtain a high-quality approximation of the optimal size-density frontier. However, since it is limited in its resolution (in terms of the range of subgraph sizes spanned), ADMM-DkS can be employed over a smaller range of interest in order to obtain a finer-grained approximation of the frontier.

Large and loose quasi-cliques matter too: In dense subgraph discovery, one is typically interested in exploring the "high" (density) end of the optimal size-density frontier of $G$, which is comprised of cliques and near-cliques. Consequently, scant attention has been paid to exploring the opposing "low" end of the frontier, consisting of large subgraphs with low density. At first, it may seem that there is no apparent reason for doing so, since the subgraphs comprising this regime are not dense to begin with. However, since GREEDYOQC can be utilized to quickly explore any region of the frontier (by appropriate selection of $\alpha$ in the post-processing step), we analyzed the characteristics of the subgraphs comprising the low end. Our results indicate that subgraphs with density as low as 2-10% can be interesting in their own right. Figure 3 depicts the sparsity pattern of the adjacency matrices of these extracted subgraphs across various datasets. Although these subgraphs are too large and sparse to be labelled dense (having only 5-10% density), the block-diagonal structure of their adjacency matrices reveals a striking property: the presence of local community structure. Evidently, these subgraphs are composed of multiple components of non-trivial size which exhibit sparse external connectivity and high internal cohesion. In order to reveal this community structure, we applied spectral clustering (Von Luxburg 2007) on the extracted subgraph.
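The sketch below illustrates this step, assuming scikit-learn and a dense adjacency matrix A of the extracted subgraph; the paper cites (Von Luxburg 2007) without committing to a particular implementation, so the choice of library here is ours.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def reveal_communities(A, n_clusters):
    """Apply spectral clustering to the adjacency matrix A of an extracted
    low-density subgraph, then reorder A by cluster label so that any local
    community structure appears as diagonal blocks (as in Figure 3)."""
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed",
        assign_labels="kmeans", random_state=0,
    ).fit_predict(A)
    perm = np.argsort(labels)          # group vertices by cluster
    return labels, A[np.ix_(perm, perm)]
```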
It is well known that real-world graphs lack global community structure (Leskovec et al. 2009), and global partitioning methods such as normalized-cut (Shi and Malik 2000), of which spectral clustering can be viewed as a relaxation, typically fail to find well-connected clusters. Hence, a body of research has blossomed around local community detection (Spielman and Teng 2004; Andersen, Chung, and Lang 2006; Kloster and Gleich 2014; Orecchia and Zhu 2014; Veldt, Gleich, and Mahoney 2016; Wang et al. 2017), which uses specialized techniques and algorithms tailored for detecting local communities.

Figure 3: Presence of local communities in low-density subgraphs identified using OQC, as visualized by the block-diagonal structure of their respective adjacency matrices.

In that context, our results are surprising since (a) the edge-surplus function is a surrogate for maximizing the internal connectivity of a subgraph and is not explicitly geared towards promoting community structure, and (b) it is not obvious a priori that running the peeling process simply based on removing the lowest-degree vertex will "chip" away at the global structure in the right places to reveal local communities. It is striking that this indeed happens consistently across various real-world graphs.

Conclusions

We revisited the OQC problem and revealed that the densities of its solutions, obtained by continuous variation of $\alpha$, are equivalent to those of the classic Densest-$k$-Subgraph problem. This opened the door to utilizing the GREEDYOQC algorithm for mining dense subgraphs comprising the optimal size-density frontier. On real-world graphs, we demonstrated that the algorithm quickly generates a high-quality approximation of the frontier, comparable to that generated by the more computationally intensive DkS baseline, albeit with possibly limited resolution. On turning the spotlight towards large, loosely connected quasi-cliques, we made the surprising discovery that they harbor well-defined local communities, even though the OQC formulation does not explicitly promote community structure.

Acknowledgments

Supported by the KU Leuven Special Research Fund (BOF/STG-22-040) and the National Science Foundation, USA (IIS-1908070).

References

Andersen, R.; Chung, F.; and Lang, K. 2006. Local graph partitioning using pagerank vectors. In 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06), 475–486. IEEE.
Bertsekas, D. P. 2014. Constrained Optimization and Lagrange Multiplier Methods. Academic Press.
Boob, D.; Gao, Y.; Peng, R.; Sawlani, S.; Tsourakakis, C.; Wang, D.; and Wang, J. 2020. Flowless: Extracting densest subgraphs without flow computations. In Proceedings of The Web Conference 2020, 573–583.
Boyd, S. P.; and Vandenberghe, L. 2004. Convex Optimization. Cambridge University Press.
Cadena, J.; Chen, F.; and Vullikanti, A. 2018. Graph anomaly detection based on Steiner connectivity and density. Proc. of the IEEE, 106(5): 829–845.
Cadena, J.; Vullikanti, A. K.; and Aggarwal, C. C. 2016. On dense subgraphs in signed network streams. In 2016 IEEE 16th International Conference on Data Mining (ICDM), 51–60. IEEE.
Charikar, M. 2000. Greedy approximation algorithms for finding dense components in a graph. In International Workshop on Approximation Algorithms for Combinatorial Optimization, 84–95. Springer.
Chekuri, C.; Quanrud, K.; and Torres, M. R. 2022. Densest subgraph: Supermodularity, iterative peeling, and flow. In Proc. of SODA, 1531–1555. SIAM.
Chen, T.; and Tsourakakis, C. 2022. Antibenford subgraphs: Unsupervised anomaly detection in financial networks. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2762–2770.
Condat, L. 2013. A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. Journal of Optimization Theory and Applications, 158(2): 460–479.
Feige, U.; Peleg, D.; and Kortsarz, G. 2001. The dense k-subgraph problem. Algorithmica, 29(3): 410–421.
Goldberg, A. V. 1984. Finding a maximum density subgraph. Technical report, University of California, Berkeley, CA.
Hooi, B.; Song, H. A.; Beutel, A.; Shah, N.; Shin, K.; and Faloutsos, C. 2016. Fraudar: Bounding graph fraud in the face of camouflage. In Proc. of SIGKDD, 895–904. ACM.
Karp, R. M. 1972. Reducibility among combinatorial problems. In Complexity of Computer Computations, 85–103. Springer.
Kawase, Y.; and Miyauchi, A. 2018. The densest subgraph problem with a convex/concave size function. Algorithmica, 80: 3461–3480.
Kloster, K.; and Gleich, D. F. 2014. Heat kernel based community detection. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1386–1395.
Konar, A.; and Sidiropoulos, N. D. 2021. Exploring the Subgraph Density-Size Trade-off via the Lovász Extension. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, 743–751.
Lanciano, T.; Miyauchi, A.; Fazzone, A.; and Bonchi, F. 2023. A survey on the densest subgraph problem and its variants. arXiv:2303.14467.
Leskovec, J.; and Krevl, A. 2014. SNAP Datasets: Stanford Large Network Dataset Collection. https://snap.stanford.edu/data. Accessed: 2023-08-15.
Leskovec, J.; Lang, K. J.; Dasgupta, A.; and Mahoney, M. W. 2009. Community structure in large networks: Natural cluster sizes and the absence of large well-defined clusters. Internet Mathematics, 6(1): 29–123.
Li, X.; Liu, S.; Li, Z.; Han, X.; Shi, C.; Hooi, B.; Huang, H.; and Cheng, X. 2020. Flowscope: Spotting money laundering based on graphs. In Proc. of AAAI, volume 34, 4731–4738.
Manurangsi, P. 2017. Almost-polynomial ratio ETH-hardness of approximating densest k-subgraph. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, 954–961.
Orecchia, L.; and Zhu, Z. A. 2014. Flow-based algorithms for local graph clustering. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, 1267–1286. SIAM.
Papailiopoulos, D.; Mitliagkas, I.; Dimakis, A.; and Caramanis, C. 2014. Finding dense subgraphs via low-rank bilinear optimization. In ICML, 1890–1898.
Seidman, S. B. 1983. Network structure and minimum degree. Social Networks, 5(3): 269–287.
Shi, J.; and Malik, J. 2000. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8): 888–905.
Shin, K.; Eliassi-Rad, T.; and Faloutsos, C. 2016. Corescope: Graph mining using k-core analysis—patterns, anomalies and algorithms. In 2016 IEEE 16th International Conference on Data Mining (ICDM), 469–478. IEEE.
Spielman, D. A.; and Teng, S.-H. 2004. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, 81–90.
Tsourakakis, C.; Bonchi, F.; Gionis, A.; Gullo, F.; and Tsiarli, M. 2013. Denser than the densest subgraph: extracting optimal quasi-cliques with quality guarantees. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 104–112.
Turán, P. 1941. On an extremal problem in graph theory. Mat. Fiz. Lapok, 48: 436–452.
Veldt, N.; Benson, A. R.; and Kleinberg, J. 2021.
The generalized mean densest subgraph problem. In Proc. of SIGKDD, 1604–1614.
Veldt, N.; Gleich, D.; and Mahoney, M. 2016. A simple and strongly-local flow-based method for cut improvement. In International Conference on Machine Learning, 1938–1947. PMLR.
Von Luxburg, U. 2007. A tutorial on spectral clustering. Statistics and Computing, 17: 395–416.
Wang, D.; Fountoulakis, K.; Henzinger, M.; Mahoney, M. W.; and Rao, S. 2017. Capacity releasing diffusion for speed and locality. In International Conference on Machine Learning, 3598–3607. PMLR.
Zhang, S.; Zhou, D.; Yildirim, M. Y.; Alcorn, S.; He, J.; Davulcu, H.; and Tong, H. 2017. Hidden: hierarchical dense subgraph detection with application to financial fraud detection. In Proceedings of the 2017 SIAM International Conference on Data Mining, 570–578. SIAM.
2024
957
18,803
Learning Persistent Community Structures in Dynamic Networks via Topological Data Analysis

Dexu Kong, Anping Zhang, Yang Li*
Shenzhen Key Laboratory of Ubiquitous Data Enabling, Shenzhen International Graduate School, Tsinghua University
[email protected], [email protected], [email protected]

*Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Dynamic community detection methods often lack effective mechanisms to ensure temporal consistency, hindering the analysis of network evolution. In this paper, we propose a novel deep graph clustering framework with temporal consistency regularization on inter-community structures, inspired by the observation that network topology changes minimally within short intervals. Specifically, to address the representation collapse problem, we first introduce MFC, a matrix factorization-based deep graph clustering algorithm that preserves node embedding. Based on static clustering results, we construct probabilistic community networks and compute their persistent homology, a robust topological measure, to assess structural similarity between them. Moreover, a novel neural network regularization, TopoReg, is introduced to ensure the preservation of topological similarity between inter-community structures over time intervals. Our approach enhances temporal consistency and clustering accuracy on real-world datasets with both fixed and varying numbers of communities. It is also a pioneering application of TDA to temporally persistent community detection, offering an insightful contribution to the field of network analysis. Code and data are available at the public git repository: https://github.com/kundtx/MFC-TopoReg.

Introduction

Community detection on dynamic networks is crucial for graph analysis. Understanding real-world processes such as the formation of social ties, economic transactions, and the unfolding of human mobility and communication hinges on identifying meaningful substructures, and their evolution, hidden in temporal complex systems. Static community detection algorithms are well researched and developed, such as the Louvain method (Blondel et al. 2008), submodularity-based methods (Liu et al. 2013), and spectral clustering (Ng, Jordan, and Weiss 2001). As graph neural networks have shown strong capabilities in fields such as node classification and link prediction, deep graph clustering methods have come to the fore (Zhou et al. 2022) and have gradually been adopted for static community detection (Su et al. 2022). However, dynamic community detection methods are still slow to develop due to the lack of a clear definition of communities in dynamic networks.

Figure 1: Inconsistent inter-community structure in dynamic community detection. The top row shows three snapshots of a dynamic graph constructed at the vertex level, undergoing a transient perturbation; different node colors represent the true community labels. The inconsistent communities are outlined by rectangles. The second row shows the corresponding community-level networks, which exhibit a falsely detected merge.

Some dynamic community detection algorithms apply improved static algorithms to each snapshot of the network (Javed et al. 2018). Nevertheless, these methods focus on snapshot-optimal solutions, and their results lack temporal consistency.
In many dynamic community detection scenarios, it is reasonable to assume that changes in the relationships between communities occur smoothly and that the inter-community structure remains relatively stable over time. Research has shown that the structure of the network itself does not change significantly over a short period of time (Corne, Handl, and Knowles 2010). Moreover, the lack of temporal consistency in dynamic community detection makes it difficult to distinguish real community evolution from network perturbation, posing challenges for subsequent matching and analysis. For example, Fig. 1 illustrates inconsistent community detection results in a simple network with three true communities, e.g., social groups. In the second time step, perturbations such as temporary changes in membership or interactions would cause algorithms to incorrectly infer a merging event between two communities, even though they return to being relatively independent in the next time step. A good dynamic community detection algorithm should correct the clustering results based on information from nearby snapshots. Similar motivations can be found in previous work such as ESPRA (Wang, Gao, and Ma 2017), which introduces quantum physics to model graph perturbations. While such methods try to smooth structural perturbations at the vertex level, we focus on stability at the community level. The community networks shown in the second row of Fig. 1 are a good model of the inter-community structure, where each node is a community and the weight of an edge equals the sum of the weights of the corresponding inter-community edges. Focusing on the community-level structure has a more direct impact on clustering results than simply modifying individual nodes and edges. In this paper, we investigate how to constrain the structural consistency of community networks within nearby snapshots.

We believe that the key to distinguishing inter-community structures lies in the topological characteristics of community networks. For example, in Fig. 1, the difference between a triangle and a line illustrates the difference in the structure of community networks. Since 2009, there have been increasing research efforts on Topological Data Analysis (TDA), which integrates algebraic topology, computational geometry, and data mining (Carlsson 2009). TDA characterizes intrinsic topological changes in graph data through persistent homology, which quantifies the topological features in the data across continuous scales. Topological graph analysis is a special class of TDA for graph data. Unlike heuristically designed topological features, such as the RA index (Zhou, Lü, and Zhang 2009), persistent homology is far more scale-independent and robust to perturbations, making it a better choice for quantifying structural similarity in community networks.

In this work, we propose a novel dynamic community detection framework, which jointly performs graph clustering at the vertex level and temporal consistency regularization at the community level. Specifically, we solve two main challenges. First, the widely used self-supervised clustering module (Xie, Girshick, and Farhadi 2016) collapses the local structure of the embedding distribution (Guo et al. 2017). To preserve the structure of the embedding space, we propose a novel deep graph clustering algorithm called MFC, which is inspired by non-negative matrix factorization.
The second challenge lies in how to incorporate TDA-based consistency constraints on community-level structures into vertex-level graph clustering methods. In this work, we design a differentiable operator that associates topological features with the cluster assignment distribution in deep clustering algorithms. Thus, the gradient of our Topological Regularization (TopoReg) can be back-propagated to the clustering module to penalize topological differences in the community structure across neighboring snapshots. The main contributions of our work are summarized as follows:
• We introduce topological graph analysis into dynamic community detection to learn consistent inter-community structures end-to-end.
• We propose a novel deep clustering algorithm that implements matrix factorization with relaxed sparsity constraints via neural networks.
• We empirically demonstrate the superiority of our proposed clustering algorithm and the importance of community structure preservation in dynamic community detection with both fixed and varying numbers of communities.

Related Works

Deep Graph Clustering

Deep graph clustering, i.e., clustering the nodes of a graph into communities, is an emerging field in machine learning and social networks. We divide existing deep graph clustering methods into two classes: static and dynamic graph clustering.

Static graph clustering. Most frameworks perform clustering on a lower-dimensional embedding of the graph, based on popular architectures like GANs (Creswell et al. 2018) and Graph Auto-Encoders (Kipf and Welling 2016). A naive approach directly applies traditional community detection methods to node embeddings. EGAE (Zhang et al. 2022), a work very similar to our approach, is a typical example: it finds an ideal space for clustering, but still uses k-means. In contrast, self-optimized deep clustering frameworks jointly optimize the learned embedding and perform clustering, such as DAEGC (Wang et al. 2019), whose core clustering module comes from DEC (Xie, Girshick, and Farhadi 2016). Recent models improve deep graph clustering by better learning of node features, e.g., SDCN (Bo et al. 2020), AGCN (Peng et al. 2021), and DCRN (Liu et al. 2022), but the core clustering module remains the same.

Dynamic graph clustering. Although there have been successful studies on evolving graphs, they mostly focus on node classification (Pareja et al. 2020) or temporal network clustering, which typically yields a single clustering result for networks with changing edge weights (Liu et al. 2023). In contrast, this paper focuses on tracking community changes over discrete snapshots, which is relatively underexplored. Traditional dynamic community detection algorithms, such as RTSC (You et al. 2021), ESPRA (Wang, Gao, and Ma 2017), and DECS (Liu et al. 2020), often solve a multi-objective optimization problem. CGC (Park et al. 2022) improves graph clustering based on contrastive learning and extends it to dynamic graphs, but its experiments are performed on a dataset with binary labels only. Dynamic graph embedding algorithms combine recurrent neural networks with graph autoencoders, such as DynAE, DynRNN, and DynAERNN (Goyal, Chhetri, and Canedo 2020). Though they lack a dedicated clustering module, some clustering functionality is available.

Topological Graph Analysis

TDA can be extended to graphs by representing them as simplicial complexes, which encode their topology and structural properties.
Numerous graph filtration methods have been proposed to compute the persistent homology of graphs, such as Vietoris-Rips filtration (Dey and Wang 2022), weighted simplex filtration (Huang and Ribeiro 2016), and vertex-based clique filtration (Rieck et al. 2017). The graph topological features extracted by these filtration methods are widely used on biological and social graph data, among others. A series of studies on brain networks using TDA was presented by Songdechakraiwut and Chung (2023), such as learning MRI signals via graph filtration to understand complex relations in brain networks. Periodic phenomena in temporal traffic networks were studied using WRCF by Lozeve (2018), while Hajij et al. (2018) used Rips filtration to visualize structural changes in dynamic networks. In addition, recent works (Yan et al. 2021) have shown that adding topological graph analysis to GNNs can effectively improve their learning ability and performance.

Notations and Problem Formulation

Given a graph $G = (V, E)$, $V$ is a set of vertices and $E$ is a set of tuples $(u, v)$ with $u, v \in V$. A dynamic graph $G_\tau$ is defined as an ordered set $\{G^{(1)}, G^{(2)}, \ldots, G^{(t)}\}$. Dynamic community detection aims to find the best cluster assignment $Y^{(t)}$ for each snapshot $G^{(t)}$ at time step $t$. Our criteria for good dynamic community detection are twofold: on the one hand, we need to achieve cohesive clustering results at each snapshot; on the other hand, we expect the structure of the detected communities to maintain a certain degree of topological stability and continuity during dynamic changes of the input network. Our method provides a trade-off between snapshot coherence and temporal consistency.

Methodology

In this section, we present our topology-preserving dynamic community detection framework in detail. We start by introducing a novel static deep graph clustering algorithm, followed by the topological consistency regularization for communities derived from clustering results within neighboring time windows. Note that we omit the time dimension in the Matrix Factorization Clustering section to keep the notation readable.

Matrix Factorization Clustering

Node embeddings learned by common deep clustering methods tend to collapse to cluster centroids, which is desirable for node label inference but difficult to train in the presence of regularization. For example, DEC works by taking the proximity between the node embeddings and the cluster centers in the embedding space as the cluster assignment distribution. By constraining the sparsity of the distribution, the samples in each cluster are pulled toward the center. However, there is no guarantee that samples near the margin will be pulled into the correct cluster, and this destroys the structure of the embedding space to some extent. Therefore, we propose Matrix Factorization Clustering (MFC), a novel end-to-end deep graph clustering algorithm that learns a relaxed matrix factorization of the node embeddings using a Graph Auto-Encoder (GAE). In contrast to DEC, MFC is a dimension reduction technique that avoids lossy compression, thereby preserving the structure of the embedding space.

Figure 2: Framework of matrix factorization clustering. It consists of a graph auto-encoder and a clustering module.
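A minimal sketch of the GAE half of this framework (described in detail next), assuming PyTorch, a dense 0/1 adjacency matrix, and a single GCN layer:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GAE(nn.Module):
    """Minimal graph auto-encoder: a one-layer GCN encoder and an
    inner-product decoder reconstructing the adjacency matrix."""
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, emb_dim, bias=False)

    def forward(self, X, A):
        # Symmetrically normalized adjacency with self-loops:
        # D^{-1/2} (A + I) D^{-1/2}
        A_hat = A + torch.eye(A.size(0), device=A.device)
        d = A_hat.sum(1)
        A_norm = A_hat / torch.sqrt(d[:, None] * d[None, :])
        Z = F.relu(self.W(A_norm @ X))        # node embeddings
        A_rec = torch.sigmoid(Z @ Z.T)        # decoder sigma(Z Z^T)
        return Z, A_rec

def gae_loss(A_rec, A):
    # Binary cross-entropy between the reconstruction and the adjacency
    # matrix (A as a float tensor with entries in {0, 1}).
    return F.binary_cross_entropy(A_rec, A)
```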
GAE combines the traditional auto-encoder with different kinds of GNNs, such as GCNs (Zhang et al. 2019) and GAT (Salehi and Davulcu 2020). Like a common auto-encoder, a GAE consists of an encoder and a decoder. The encoder learns a latent representation $Z = [z_1, z_2, \ldots, z_n]^T$ of a graph input with $n$ nodes via GNN layers, while the decoder reconstructs the adjacency matrix $A$ from the embedding $Z$; it is usually designed as $\sigma(ZZ^T)$, where $\sigma(\cdot)$ denotes the sigmoid function. The reconstruction loss $L_{gae}$ is the binary cross-entropy loss between $A$ and $\sigma(ZZ^T)$. In general, clustering results can be obtained by directly performing k-means or other heuristic algorithms on the embedding $Z$. However, the clustering results obtained in this way are not differentiable, so it is difficult to further optimize the topology of the detected community structure.

Matrix factorization has been proven to be essentially equivalent to k-means, spectral clustering, and many other clustering algorithms (Du et al. 2023). In our method, with relaxed sparsity constraints, neural networks can be used to learn the two low-rank matrices decomposed from the embedding matrix. Specifically, the optimization problem solved by k-means can be expressed as:

$$\min_{C,Q} \|Z - QC\|_F^2, \quad \text{s.t. } q_{ij} \in \{0, 1\},\; Q\mathbf{1}_k = \mathbf{1}_n, \tag{1}$$

where $Q = (q_{ij})$ and $C = [\mu_1, \mu_2, \ldots, \mu_k]^T$. Here, $\mu_j$ denotes the center of the $j$-th cluster and $q_{ij}$ is an indicator: $q_{ij} = 1$ if the $i$-th point is assigned to the $j$-th cluster, and $q_{ij} = 0$ otherwise. The above problem is hard to solve directly due to the discrete constraint on $Q$. We first derive the closed-form solution for $Q$ in the unconstrained situation when $C$ is fixed. The objective function can be expanded as:

$$J_{km} = \|Z - QC\|_F^2 = \mathrm{tr}(Z^T Z) - 2\,\mathrm{tr}(Z^T Q C) + \mathrm{tr}(C^T Q^T Q C). \tag{2}$$

Taking the derivative of $J_{km}$ with respect to $Q$ and setting it to zero,

$$\nabla_Q J_{km} = 2(QCC^T - ZC^T) = 0 \;\Rightarrow\; Q = ZC^T(CC^T)^{-1} = ZC^{\dagger}. \tag{3}$$

Inspired by Pseudo-Inverse Learning (Guo and Lyu 2004) and the Alternating Direction Method of Multipliers (ADMM) (Boyd et al. 2011), we treat $C$ as a set of weights in the neural network, thus combining the alternating optimization of $C$ and $Q$ with the training process. Specifically, $Q$ is updated as $Q = g(ZC^{\dagger})$ in forward propagation, while the encoder of the GAE and $C$ are updated by gradient descent in backward propagation. Here, $g$ is a function that projects $Q$ onto the feasible region. Specifically, we relax the discrete constraints on $Q$ to a soft assignment problem, i.e., $q_{ij} \in (0, 1)$ and $\sum_{j=1}^{k} q_{ij} = 1$. Normalized by Softmax, min-max normalization, or other algorithms, any continuous matrix can satisfy this condition. In this paper, we choose min-max normalization applied to each row $q$ of $Q$ as the relaxed constraint:

$$g(q) = \frac{q - \min(q)}{\max(q) - \min(q)}. \tag{4}$$

The MSE (mean squared error) between $Z$ and $g(Q)C$ is calculated as $L_c$ to jointly train the neural network with $L_{gae}$ using back-propagation until convergence. The detailed learning procedure of MFC is shown in Algorithm 1. When the best clustering centers $C^*$ are learned, the index of the row maximum in $Q^*$ can be taken as the final clustering result of each node: $y_i = \arg\max_u q_{iu}$. One limitation of MFC is that, since the method is based on matrix decomposition, the algorithm fails when the dimension of the node embeddings is less than the number of clusters: in that case $C \in \mathbb{R}^{k \times d}$ is rank-deficient, $CC^T$ is singular, and the closed form $C^{\dagger} = C^T(CC^T)^{-1}$ used above is undefined.
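A minimal sketch of the clustering step (cf. Algorithm 1 below), assuming PyTorch; gradients flow into the center matrix C through the pseudo-inverse:

```python
import torch
import torch.nn.functional as F

def mfc_clustering_loss(Z, C):
    """One forward pass of the MFC clustering module.

    Z: (n, d) node embeddings from the GAE encoder.
    C: (k, d) learnable cluster-center matrix (a weight of the network).
    Returns the soft assignment Q and the clustering loss
    L_c = MSE(Z, g(Q) C).
    """
    Q = Z @ torch.linalg.pinv(C)                   # Q = Z C†  (Eq. 3)
    # Min-max normalize each row of Q              (Eq. 4)
    qmin = Q.min(dim=1, keepdim=True).values
    qmax = Q.max(dim=1, keepdim=True).values
    Q = (Q - qmin) / (qmax - qmin + 1e-12)
    loss_c = F.mse_loss(Z, Q @ C)                  # L_c
    return Q, loss_c

# Final hard labels after training: y = Q.argmax(dim=1)
```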
Algorithm 1: Optimizing the clustering module in MFC
Input: Node embedding matrix Z of graph G learned by the GAE
Parameter: Trade-off parameter α
1: Initialize the weights C.
2: repeat
3:   Calculate Q = ZC†.
4:   Normalize each row of Q with the function g.
5:   Calculate the differentiable clustering loss Lc = MSE(Z, g(Q)C).
6:   Calculate the total loss and its gradients: Lgae + α · Lc.
7:   Update the GAE weights and C by gradient descent.
8: until convergence or exceeding the maximum number of iterations
9: Output: assignment matrix Q, clustering centers C, and GAE encoder parameters {Wi}, i = 1, ..., L

Topological Regularization of Dynamic Clustering Consistency

We propose TopoReg, an end-to-end regularization that ensures the topological consistency of the community networks, making the clustering results more accurate and stable. It is a sliding-window-style loss function designed to reduce the topological distance between neighboring community networks. Furthermore, to feed the gradient back into the deep graph clustering module, we devise an elaborate community network construction that makes the community topology a differentiable function of cluster membership. The complete process is shown in Fig. 4.

Construction of Community Topology. Given a graph $G$, the cluster assignment distribution $Q$ is computed by most deep graph clustering methods, such as DAEGC and our MFC algorithm. We can always assign a pseudo-label $s_i$ to each node $i$ based on the index of the maximum row value of $Q$. This pseudo-division of $G$ forms a new graph containing the community structure, called the community network, whose nodes are communities. The weight of an edge connecting two communities $A$ and $B$ is determined by summing the weights of all edges with one end in $A$ and the other in $B$.

Figure 3: A demo showing how the community graph is computed. The first row shows the clustering results, and the second row shows the community graphs derived from them. From left to right, the assignment distribution of the nodes marked in red changes as the gradient of the edge weight of the community graph decreases, and the corresponding topology changes.

The following equations demonstrate the derivation of the community network from $Q$ and the weighted adjacency matrix $W$ of $G$. The pseudo-label of node $i$ is $s_i = \arg\max_k (Q_{ik}),\; \forall i \in \{1, 2, \ldots, n\}$. Assuming that graph $G$ has $K$ ground-truth clusters, we have $s_i \in \{1, 2, \ldots, K\}$. Organizing the pseudo-labels of all nodes into a vector $S = [s_1, s_2, \cdots, s_n]$, we define the indicator function $\mathbb{1}(S = k) = [\mathbb{1}(s_1 = k), \mathbb{1}(s_2 = k), \ldots, \mathbb{1}(s_n = k)]^T$, where $\mathbb{1}(s_i = k) = 1 \iff s_i = k$. If we take the $k$-th column of $Q$, denoted $Q_k = [q_{1k}, q_{2k}, \cdots, q_{nk}]^T$, the filtered distribution matrix is $\hat{Q}_k = Q_k \odot \mathbb{1}(S = k)$, which removes the entries in the $k$-th column that are not row maxima.

Figure 4: Illustration of topological regularization. A sequential process across three time steps is shown. Rows trace graph evolution, with columns showing snapshots, embeddings, community graphs, and persistence barcodes. Different colors are used to differentiate the real category labels. Red arrows highlight the temporal consistency loss on persistence barcodes and the backpropagation path for the $t$-th snapshot $G^{(t)}$.
We define the edge weight between community 1 and community 2 as:

$$M_{12} = \sum_j \sum_i \mathbb{1}(s_i = 1)\, q_{i1}\, w_{ij}\, \mathbb{1}(s_j = 2)\, q_{j2} = \sum_j \sum_i \hat{q}_{i1}\, w_{ij}\, \hat{q}_{j2} = \hat{Q}_1^T W \hat{Q}_2. \tag{5}$$

For intuition, assume that $Q$ is a discrete matrix in which each row contains a single 1 and the rest are 0s. In this scenario, $M_{12}$ equals the number of edges between the two communities. If we organize the $\hat{Q}_k$ into a matrix $\hat{Q} = [\hat{Q}_1, \hat{Q}_2, \ldots, \hat{Q}_K]$, then the adjacency matrix of the community network can be written as $\hat{M} = \hat{Q}^T W \hat{Q}$. To adapt to networks of different sizes, we normalize $\hat{M}$ into $M \in \mathbb{R}^{K \times K}$:

$$M = \frac{\hat{Q}^T W \hat{Q}}{\sum_i \sum_j W_{ij}}, \tag{6}$$

where the denominator is the sum of the weights of all edges. By this construction, each edge weight in the community graph equals the sum, over the vertex-level edges connecting the corresponding communities, of the products of the assignment probabilities of the edge endpoints. Given any graph filtration $f$, the persistence diagram of the community network is $\mathrm{dgm}(M)$, which quantifies the topological characteristics of the community network. Since the Betti numbers and the graph weights are in one-to-one correspondence, the gradient of a loss function based on the Wasserstein distance can be back-propagated to the parameters of the graph encoder, which changes the node embedding $Z$. The clustering results are thus optimized to ensure a persistent community topology.

Topological Loss Definition. In our method, we perform Weight Rank Clique Filtration (WRCF) (Petri et al. 2013) on the community network to calculate the 0th and 1st persistence diagrams (PDs) of the community topology. They record the Betti numbers $\beta_0$ and $\beta_1$, reflecting the connected components and the independent cycles, respectively. WRCF sequentially adds edges with higher weights to form simplicial complexes. This technique identifies maximal cliques based on the subgraph at each filtration level for topological analysis, and it facilitates the application of persistent homology to study structural changes over time. Given a dynamic graph $G_\tau = \{G^{(t)}\}_{t=0}^{T}$, we apply deep graph clustering to each snapshot and compute the community networks $\{M^{(t)}\}_{t=0}^{T}$ based on the clustering assignment distribution matrices. WRCF is then applied to them to obtain a series of PDs $\{\mathrm{dgm}(M^{(t)})\}_{t=0}^{T}$. By calculating the Wasserstein distance (Carriere, Cuturi, and Oudot 2017) between the PD at the current snapshot and the PDs before and after it, we construct a constraint on the consistency of the clustering results, formulated as:

$$L_{topo} = \sum_{t=1}^{T-1} \sum_{k \in \{1,2\}} \Big[ W_{p,q}\big(\mathrm{dgm}_k(M^{(t)}), \mathrm{dgm}_k(M^{(t-1)})\big) + W_{p,q}\big(\mathrm{dgm}_k(M^{(t)}), \mathrm{dgm}_k(M^{(t+1)})\big) \Big]. \tag{7}$$

One technicality is that the two diagrams may have different cardinalities; in this case, the extra points are mapped to the diagonal in the Wasserstein distance. In practice, we choose $p = 1$ and $q = \infty$, which correspond to the Earth Mover's distance and the infinity norm, respectively.

Figure 5: Illustration of Weight Rank Clique Filtration (WRCF) applied to a toy graph. The first row shows a weighted graph followed by the simplicial complexes at three filtration levels; the second row shows the persistence barcode corresponding to this filtration. The black lines represent the 0th persistent Betti number, while the red line represents the 1st persistent Betti number.
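The differentiable community-network construction of Eqs. (5)-(6) can be sketched as follows in PyTorch; this is our illustration, and the repository cited in the abstract is the reference implementation:

```python
import torch

def community_graph(Q, W):
    """Differentiable community-network construction (Eqs. 5-6).

    Q: (n, K) soft cluster assignment distribution.
    W: (n, n) weighted adjacency matrix of the vertex-level graph.
    Returns M: (K, K) normalized community adjacency matrix. Gradients of
    any loss on M flow back into Q (and hence into the encoder); only the
    argmax mask itself is non-differentiable, matching the construction.
    """
    s = Q.argmax(dim=1)                                  # pseudo-labels s_i
    mask = torch.nn.functional.one_hot(s, Q.size(1)).to(Q.dtype)
    Q_hat = Q * mask                                     # zero non-maximal entries
    M_hat = Q_hat.T @ W @ Q_hat                          # Eq. 5 for all pairs
    return M_hat / W.sum()                               # Eq. 6 normalization
```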
The complete training process of the model is summarized in Algorithm 2.

Topological Optimization. To introduce the topological loss into the deep learning framework, we need to calculate its gradient. Computation of persistent homology is typically based on non-continuous matrix reduction algorithms. Since the output is in the form of a multiset, calculating the gradient directly poses difficulties. Following (Gabrielsson et al. 2020), given a graph filtration, each birth-death pair in a persistence diagram can be mapped to the edges that respectively created and destroyed the homology class. If the ordering on simplices is strict, the map is unique, and we can obtain the gradient by inverse-mapping the birth-death values to edge weights. Note that if the ordering is not strict, which is more likely, we can still extend the total order to a strict order either deterministically or randomly.

Algorithm 2: Complete Training Process
Require: A dynamic graph Gτ = {G(t)}, t = 0, ..., T, and trade-off hyper-parameter α
for t = 1 to T do
  Optimize the GAE on the snapshot G(t) to obtain the node embedding Z(t) by minimizing the reconstruction loss Lgae.
  Perform MFC on the embedding Z(t) to learn the clustering assignment Q(t) by optimizing the composite loss Lgae + αLc.
  Compute the community network M(t) based on Q(t).
  Compute the persistence diagrams dgm(M(t)).
end for
Integrate topological insights by calculating the topological loss Ltopo, and apply backpropagation to refine the model.

Figure 6: Visualization of the demo graphs with their embeddings and community topology. The three columns show the graph, the embedding, and the community topology, respectively. Different markers and colors represent different real clusters. The triangle in (e) represents a 2-simplex, and the red lines highlight a β1 feature.

Experiments and Results

In this section, experiments on both synthetic data and real-world datasets are conducted to evaluate the performance and effectiveness of our algorithm, compared against state-of-the-art algorithms. Each experiment is repeated five times, and the average results are reported to account for variation in the results.

Experiments on Synthetic Data

We reproduce a scenario similar to Fig. 1: there are five groups of people, and two of them gain additional links due to an ephemeral collaboration. We will show that TopoReg successfully ensures the temporal structure consistency of the community detection results. In Fig. 6, a Gaussian random partition graph is initialized by creating 5 clusters, each with a size drawn from a normal distribution N(20, 1). Nodes are connected within clusters with probability 0.5 and between clusters with probability 0.001. A second graph is then created by randomly adding a moderate number of edges between 2 of the 5 clusters. Their node embeddings are visualized in the middle column by reducing them to 2D via t-distributed stochastic neighbor embedding (t-SNE) (Van der Maaten and Hinton 2008). The third column shows that the collaboration leads to a sudden change in the community topology. Specifically, two 2-cliques are wiped out: the initial graph has 4 β0 and 2 β1 features, whereas the graph after the interaction has only 3 β0 and no β1, since the 2-simplex no longer exists.
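The exact generator the authors used is not specified in this excerpt; the following sketch approximates the described setup using NetworkX's stochastic block model as a stand-in for the Gaussian random partition generator:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Five clusters with sizes drawn from N(20, 1); dense within clusters
# (p = 0.5), sparse across clusters (p = 0.001).
sizes = [max(2, int(round(s))) for s in rng.normal(20, 1, size=5)]
P = np.full((5, 5), 0.001)
np.fill_diagonal(P, 0.5)
G1 = nx.stochastic_block_model(sizes, P, seed=0)

# Perturbed snapshot: add a moderate number of random edges between two
# clusters to mimic the ephemeral collaboration (30 edges is our guess).
blocks = G1.graph["partition"]          # ground-truth node sets per block
a, b = list(blocks[0]), list(blocks[1])
G2 = G1.copy()
for _ in range(30):
    G2.add_edge(int(rng.choice(a)), int(rng.choice(b)))
```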
In this situation, our algorithm is shown to have a binding influence on the node embedding, thus changing the clustering results and maintaining a stable topology between communities. We cluster the two graphs separately and optimize the clustering result of the second graph with $L_{topo}$. In Fig. S1 (Appendix), we track the changes in node embedding, topological loss, and clustering quality. We find that the two clusters of points, which were initially more concentrated, gradually separate as the loss decreases. The clustering accuracy also improves from 81% to 98%.

Figure 7: A comparison of node embeddings before and after applying TopoReg to the GEC algorithm. The data is obtained from a single snapshot of the Highschool dataset, and the colors indicate the ground truth of the community labels.

Experiments on Real-world Datasets

Datasets. We collected and processed four labeled dynamic network datasets without node features: Enron, Highschool (Crawford and Milenković 2018), DBLP, and Cora (Hou et al. 2020). Noting that each node in these existing datasets has a fixed label, we processed a new dataset, DBLPdyn, from the original data, recalculating each node's label at each snapshot. Brief information on these datasets is summarized in Table S1.

Baselines.
• Static Baselines. We first compare with state-of-the-art deep graph clustering methods focusing on static community detection. DAEGC (Wang et al. 2019) first introduced the clustering module of DEC (Xie, Girshick, and Farhadi 2016) into the graph clustering problem. To simplify the problem and make the comparison fair, we also compare with a version that replaces GAT with a basic GCN in the encoder, leaving other parts unchanged, which we call Graph Embedding Clustering (GEC). SDCN (Bo et al. 2020) improves deep clustering by integrating structural information into the representation learning module, but its core clustering module remains the same.
• Temporal Baselines. ESPRA (Wang, Gao, and Ma 2017) and DECS (Liu et al. 2020) are two local-smoothing dynamic community detection methods. Since they solve a multi-objective optimization problem with graph data in the form of a 3D matrix, they are too time- and memory-consuming to run on the DBLP and Cora datasets. DynAE, DynRNN, and DynAERNN (Goyal, Chhetri, and Canedo 2020) are a family of dynamic graph embedding algorithms that use recurrent neural networks to model temporal information in dynamic networks; DynRNN and DynAERNN expand the network parameters of DynAE. Limited by GPU memory, we only report DynAE results. Note that DECS is a label propagation algorithm and cannot fix the number of clusters, so we take the top k−1 clusters in its result and merge the remaining nodes into one cluster.

Figure 8: Comparison of the Wasserstein distances between the community topology under the ground-truth labels and under the deep clustering labels, before and after applying TopoReg.

Metrics. We use Accuracy (ACC), Normalized Mutual Information (NMI), and Adjusted Rand Index (ARI) to reflect how well the clustering results match the ground-truth labels, and Modularity to reflect clustering quality.
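These metrics follow their standard definitions; the paper does not list its exact implementation, but a typical computation (assuming scikit-learn and SciPy) looks like this, with clustering ACC computed via a Hungarian matching of predicted clusters to labels:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_accuracy(y_true, y_pred):
    """Clustering ACC: best one-to-one match between predicted clusters and
    ground-truth labels, found with the Hungarian algorithm."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    count = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        count[p, t] += 1
    row, col = linear_sum_assignment(-count)     # maximize matched pairs
    return count[row, col].sum() / len(y_true)

# nmi = normalized_mutual_info_score(y_true, y_pred)
# ari = adjusted_rand_score(y_true, y_pred)
```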
All metrics are calculated at each snapshot, and the average values are reported in Table 1.

Experimental Settings. The experiments in this paper follow these settings: the node embedding dimension is 30 and the learning rate is 0.001. Each backbone method uses one GNN layer (GCN or GAT). The other baseline models use the default settings of their authors. Training is divided into two stages: α = 10, β = 0 when training the backbone model, and α = 1, β = 10 when training TopoReg, with 500 epochs for each stage to ensure convergence. All weights in the neural networks are initialized with Glorot initialization (Glorot and Bengio 2010). Note that the nodes and edges in our datasets may appear and disappear at each snapshot, so the number of nodes per snapshot is not constant. For temporal baselines that require a fixed-size dynamic adjacency matrix as input, we add the nodes that do not appear in the current snapshot as isolated nodes; to be fair, we remove these isolated nodes when calculating the metrics.

Results with Fixed Community Number

Table 1 shows the experimental results on four real-world datasets with constant node labels. In this situation, we assume that the community number k is known and fixed, so k-means clustering is performed on each learned embedding. Topologically optimized MFC consistently achieves the best or second-best accuracy, which demonstrates the superiority of our methodology. In addition, TopoReg yields an average improvement of 11.54%, 5.90%, and 1.38% on the three backbone models GEC, DAEGC, and MFC, respectively. Fig. 7 is a t-SNE visualization of the node embedding on one snapshot of the Highschool dataset before and after applying TopoReg to GEC. It shows that the embedding becomes more scattered, because TopoReg mitigates representation collapse by smoothing the inter-community structure.

Data        Metric       GEC     DAEGC   SDCN    DECS    ESPRA   DynAE   MFC     GEC+Topo  DAEGC+Topo  MFC+Topo
Enron       ACC          58.66   58.15   57.86   57.24   59.82   58.49   58.44   60.32     58.33       59.31
            NMI          15.42   15.69   12.44   15.63   13.85   7.10    18.9    18.14     17.53       19.14
            ARI          0.47    -0.81   -1.10   -1.90   -0.30   1.47    2.17    1.00      -0.36       2.73
            Modularity   30.42   30.08   -1.84   45.40   -2.10   -1.47   45.54   39.34     39.18       46.36
Highschool  ACC          49.21   49.33   24.02   65.77   26.44   18.82   68.91   63.51     62.66       70.67
            NMI          28.11   42.58   9.71    62.36   12.31   5.64    63.14   59.41     56.22       65.78
            ARI          13.57   26.43   0.46    36.18   0.12    0.11    48.77   44.44     38.93       50.87
            Modularity   49.82   56.35   -0.25   72.80   -0.95   -0.11   76.99   73.62     68.37       77.87
DBLP        ACC          56.38   56.23   56.22   OOM     OOM     68.31   56.83   57.62     56.54       57.98
            NMI          1.75    2.41    1.57    OOM     OOM     0.28    6.32    6.31      3.92        7.97
            ARI          0.25    0.35    0.37    OOM     OOM     0.05    0.98    1.37      0.62        1.41
            Modularity   56.07   59.28   4.54    OOM     OOM     0.07    85.39   76.71     71.61       86.65
Cora        ACC          35.18   37.56   34.17   OOM     OOM     37.85   50.66   43.18     41.6        52.53
            NMI          3.21    6.79    1.45    OOM     OOM     0.24    24.27   16.10     11.66       27.52
            ARI          1.18    3.05    0       OOM     OOM     0.18    13.49   8.04      5.50        14.62
            Modularity   47.70   49.94   3.59    OOM     OOM     1.32    74.09   63.05     55.53       75.61

Table 1: Experimental results on four datasets with known cluster numbers. GEC, DAEGC, and SDCN are static baselines; DECS, ESPRA, and DynAE are temporal baselines; MFC is the ablation; the +Topo columns are our full method. K-means clustering is performed on the graph embedding to obtain the final community membership. OOM means out-of-memory.

Data      Metric       GEC*_Q   DAEGC*_Q   MFC*_Q
DBLPdyn   ACC          39.49    39.06      42.82
          NMI          2.91     2.13       9.32
          ARI          0.75     -3.61      2.68
          Modularity   61.27    34.83      84.71

Table 2: Experimental results on a dataset with an unknown cluster number. The label of each node is assigned as the index of the row maximum in the cluster assignment distribution matrix Q. * indicates clustering results after applying TopoReg.
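The assignment rule in Table 2's caption amounts to a row-wise argmax over Q; a small sketch, assuming NumPy:

```python
import numpy as np

def communities_from_Q(Q):
    """Assign labels via the row-wise argmax of the assignment matrix Q
    (with the clustering dimension K set deliberately large). The number of
    detected communities is then simply the number of distinct labels used,
    which may vary from snapshot to snapshot."""
    labels = Q.argmax(axis=1)
    return labels, np.unique(labels).size
```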
Meanwhile, to verify that we obtain a more stable community structure after using TopoReg, we computed the Wasserstein distance between the persistence diagrams of the deep clustering results and those of the ground-truth communities. The distribution of the distances is shown in Fig. 8. It can be seen that the topological consistency improvements on the different datasets are reflected in different ways, including a decrease in the median or the variance, and the enhancement tends to concentrate on β0 or β1 depending on network properties. Overall, the inter-community structure after TopoReg is much closer to the one in the ground truth. Since the ground-truth labels are consistent over time, this suggests that we have found a more stable dynamic community detection result.

Results with Varying Community Number

Table 2 shows the results on the DBLPdyn dataset. The true number of communities is unknown in most cases. A heuristic algorithm is often used to select the number of clusters, such as the elbow method (Liu and Deng 2020). Instead, we set the clustering dimension K to a relatively large value in the model and obtain the clustering result via arg max Q. The number of clusters obtained then follows the clustering structure, and we finally obtain a detection result in which the number of communities varies dynamically. DEC-based backbone models are not as well adapted to TopoReg as our MFC in this situation.

Conclusion

This work proposes an end-to-end framework for dynamic community detection. It uses a neural network module, MFC, to implement matrix factorization for clustering, which outperforms the widely used self-supervised clustering method in the absence of node features. The regularization module TopoReg optimizes the cluster assignment distribution in deep graph clustering based on the topology of nearby snapshots. We demonstrate through experiments on synthetic and real datasets that TopoReg improves dynamic graph clustering results and preserves persistent community structure in terms of its topological features. It has good theoretical interpretability and can easily be extended to other deep graph clustering architectures. The two modules provide a trade-off between node clustering quality and the topological stability of the communities. Compared to DEC-based backbone models such as DAEGC, TopoReg combines better with MFC when the number of communities is unknown; in this case, we can obtain a dynamically changing number of clusters from the learned cluster assignment distribution. Using distributed computing and scalable graph representations like FastGAE (Salha et al. 2021), our method could be efficiently extended to large-scale graphs, since the topological regularization is not directly affected by the size of the graph, but only by the number of clusters.

Acknowledgments

This work is supported in part by the Natural Science Foundation of China (Grant 62371270), the Tsinghua SIGS Scientific Research Start-up Fund (Grant QD2021012C), and the Shenzhen Key Laboratory of Ubiquitous Data Enabling (No. ZDSYS20220527171406015).

References

Blondel, V. D.; Guillaume, J.-L.; Lambiotte, R.; and Lefebvre, E. 2008. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10): P10008.
Bo, D.; Wang, X.; Shi, C.; Zhu, M.; Lu, E.; and Cui, P. 2020. Structural deep clustering network.
In Proceedings of The Web Conference 2020, 1400–1410.
Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J.; et al. 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1): 1–122.
Carlsson, G. 2009. Topology and data. Bulletin of the American Mathematical Society, 46(2): 255–308.
Carriere, M.; Cuturi, M.; and Oudot, S. 2017. Sliced Wasserstein kernel for persistence diagrams. In International Conference on Machine Learning, 664–673. PMLR.
Corne, D.; Handl, J.; and Knowles, J. 2010. Evolutionary Clustering, 332–337. Boston, MA: Springer US. ISBN 978-0-387-30164-8.
Crawford, J.; and Milenković, T. 2018. ClueNet: Clustering a temporal network based on topological similarity rather than denseness. PloS One, 13(5): e0195993.
Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; and Bharath, A. A. 2018. Generative adversarial networks: an overview. IEEE Signal Processing Magazine, 35(1): 53–65.
Dey, T. K.; and Wang, Y. 2022. Computational Topology for Data Analysis. Cambridge University Press.
Du, K.-L.; Swamy, M. N. S.; Wang, Z.-Q.; and Mow, W. H. 2023. Matrix factorization techniques in machine learning, signal processing, and statistics. Mathematics, 11(12).
Gabrielsson, R. B.; Nelson, B. J.; Dwaraknath, A.; and Skraba, P. 2020. A topology layer for machine learning. In International Conference on Artificial Intelligence and Statistics, 1553–1563. PMLR.
Glorot, X.; and Bengio, Y. 2010. Understanding the difficulty of training deep feedforward neural networks. In Teh, Y. W.; and Titterington, M., eds., Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, 249–256. PMLR.
Goyal, P.; Chhetri, S. R.; and Canedo, A. 2020. dyngraph2vec: Capturing network dynamics using dynamic graph representation learning. Knowledge-Based Systems, 187: 104816.
Guo, P.; and Lyu, M. R. 2004. A pseudoinverse learning algorithm for feedforward neural networks with stacked generalization applications to software reliability growth data. Neurocomputing, 56: 101–121.
Guo, X.; Gao, L.; Liu, X.; and Yin, J. 2017. Improved deep embedded clustering with local structure preservation. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI'17, 1753–1759. AAAI Press.
Hajij, M.; Wang, B.; Scheidegger, C.; and Rosen, P. 2018. Visual detection of structural changes in time-varying graphs using persistent homology. In 2018 IEEE Pacific Visualization Symposium (PacificVis), 125–134.
Hou, C.; Zhang, H.; He, S.; and Tang, K. 2020. GloDyNE: Global topology preserving dynamic network embedding. IEEE Transactions on Knowledge and Data Engineering, 34(10): 4826–4837.
Huang, W.; and Ribeiro, A. 2016. Persistent homology lower bounds on high-order network distances. IEEE Transactions on Signal Processing, 65(2): 319–334.
Javed, M. A.; Younis, M. S.; Latif, S.; Qadir, J.; and Baig, A. 2018. Community detection in networks: A multidisciplinary review. Journal of Network and Computer Applications, 108: 87–111.
Kipf, T. N.; and Welling, M. 2016. Variational graph auto-encoders. stat, 1050: 21.
Liu, F.; and Deng, Y. 2020. Determine the number of unknown targets in open world based on elbow method. IEEE Transactions on Fuzzy Systems, 29(5): 986–995.
Liu, F.; Wu, J.; Xue, S.; Zhou, C.; Yang, J.; and Sheng, Q. 2020.
Detecting the evolving community structure in dynamic social networks. World Wide Web, 23: 715–733.
Liu, M.; Liu, Y.; Liang, K.; Wang, S.; Zhou, S.; and Liu, X. 2023. Deep temporal graph clustering. arXiv:2305.10738.
Liu, M.-Y.; Tuzel, O.; Ramalingam, S.; and Chellappa, R. 2013. Entropy-rate clustering: Cluster analysis via maximizing a submodular function subject to a matroid constraint. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(1): 99–112.
Liu, Y.; Tu, W.; Zhou, S.; Liu, X.; Song, L.; Yang, X.; and Zhu, E. 2022. Deep graph clustering via dual correlation reduction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 7603–7611.
Lozeve, D. 2018. Topological Data Analysis of Temporal Networks. Ph.D. thesis, University of Oxford.
Ng, A. Y.; Jordan, M. I.; and Weiss, Y. 2001. On spectral clustering: Analysis and an algorithm. In Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic, NIPS'01, 849–856. Cambridge, MA, USA: MIT Press.
Pareja, A.; Domeniconi, G.; Chen, J.; Ma, T.; Suzumura, T.; Kanezashi, H.; Kaler, T.; Schardl, T. B.; and Leiserson, C. E. 2020. EvolveGCN: Evolving graph convolutional networks for dynamic graphs. In AAAI Conference on Artificial Intelligence. AAAI Press.
Park, N.; Rossi, R.; Koh, E.; Burhanuddin, I. A.; Kim, S.; Du, F.; Ahmed, N.; and Faloutsos, C. 2022. CGC: Contrastive graph clustering for community detection and tracking. In Proceedings of the ACM Web Conference 2022, 1115–1126.
Peng, Z.; Liu, H.; Jia, Y.; and Hou, J. 2021. Attention-driven graph clustering network. In Proceedings of the 29th ACM International Conference on Multimedia, 935–943.
Petri, G.; Scolamiero, M.; Donato, I.; and Vaccarino, F. 2013. Topological strata of weighted complex networks. PloS One, 8(6): e66506.
Rieck, B.; Fugacci, U.; Lukasczyk, J.; and Leitte, H. 2017. Clique community persistence: A topological visual analysis approach for complex networks. IEEE Transactions on Visualization and Computer Graphics, 24(1): 822–831.
Salehi, A.; and Davulcu, H. 2020. Graph attention auto-encoders. In 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI), 989–996. IEEE Computer Society.
Salha, G.; Hennequin, R.; Remy, J.-B.; Moussallam, M.; and Vazirgiannis, M. 2021. FastGAE: Scalable graph autoencoders with stochastic subgraph decoding. Neural Networks, 142(C): 1–19.
Songdechakraiwut, T.; and Chung, M. K. 2023. Topological learning for brain networks. The Annals of Applied Statistics, 17(1): 403.
Su, X.; Xue, S.; Liu, F.; Wu, J.; Yang, J.; Zhou, C.; Hu, W.; Paris, C.; Nepal, S.; Jin, D.; et al. 2022. A comprehensive survey on community detection with deep learning. IEEE Transactions on Neural Networks and Learning Systems.
Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11).
Wang, C.; Pan, S.; Hu, R.; Long, G.; Jiang, J.; and Zhang, C. 2019. Attributed graph clustering: A deep attentional embedding approach. arXiv preprint arXiv:1906.06532.
Wang, P.; Gao, L.; and Ma, X. 2017. Dynamic community detection based on network structural perturbation and topological similarity. Journal of Statistical Mechanics: Theory and Experiment, 2017(1): 013401.
Xie, J.; Girshick, R.; and Farhadi, A. 2016. Unsupervised deep embedding for clustering analysis. In International Conference on Machine Learning, 478–487. PMLR.
Yan, Z.; Ma, T.; Gao, L.; Tang, Z.; and Chen, C. 2021. Link prediction with persistent homology: An interactive view. In International Conference on Machine Learning, 11659–11669. PMLR.
You, J.; Hu, C.; Kamigaito, H.; Funakoshi, K.; and Okumura, M. 2021. Robust dynamic clustering for temporal networks. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2424–2433.
Zhang, H.; Li, P.; Zhang, R.; and Li, X. 2022. Embedding graph auto-encoder for graph clustering. IEEE Transactions on Neural Networks and Learning Systems.
Zhang, S.; Tong, H.; Xu, J.; and Maciejewski, R. 2019. Graph convolutional networks: a comprehensive review. Computational Social Networks, 6(1): 1–23.
Zhou, S.; Xu, H.; Zheng, Z.; Chen, J.; Bu, J.; Wu, J.; Wang, X.; Zhu, W.; Ester, M.; et al. 2022. A comprehensive survey on deep clustering: Taxonomy, challenges, and future directions. arXiv preprint arXiv:2206.07579.
Zhou, T.; Lü, L.; and Zhang, Y.-C. 2009. Predicting missing links via local information. The European Physical Journal B, 71(4): 623–630.
2024
958
18,804
Spatio-Temporal Pivotal Graph Neural Networks for Traffic Flow Forecasting Weiyang Kong1, Ziyu Guo1, Yubao Liu1,2 1Sun Yat-Sen University, Guangzhou, China 2Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, China [email protected], [email protected], [email protected]

Abstract Traffic flow forecasting is a classical spatio-temporal data mining problem with many real-world applications. Recently, various methods based on Graph Neural Networks (GNN) have been proposed for the problem and achieved impressive prediction performance. However, we argue that the majority of existing methods disregard the importance of certain nodes (referred to as pivotal nodes) that naturally exhibit extensive connections with multiple other nodes. Prediction on pivotal nodes is challenging due to their complex spatio-temporal dependencies compared to other nodes. In this paper, we propose a novel GNN-based method called Spatio-Temporal Pivotal Graph Neural Networks (STPGNN) to address the above limitation. We introduce a pivotal node identification module for identifying pivotal nodes. We propose a novel pivotal graph convolution module, enabling precise capture of spatio-temporal dependencies centered around pivotal nodes. Moreover, we propose a parallel framework capable of extracting spatio-temporal traffic features on both pivotal and non-pivotal nodes. Experiments on seven real-world traffic datasets verify our proposed method's effectiveness and efficiency compared to state-of-the-art baselines.

Introduction Traffic flow forecasting is a classical spatio-temporal data mining problem. The problem has been found useful in many real-world applications such as intelligent route planning, dynamic traffic management, smart location-based applications, and so on (Wu and Tan 2016). The purpose of the problem is to predict the traffic flows of several future time steps based on historical traffic observations (e.g., recorded by sensors of traffic networks). The challenges of traffic prediction mainly stem from the intricate spatio-temporal correlations between sensors. With the rapid advancement of neural networks, deep learning methods have become capable of capturing complex spatio-temporal features (Yu, Yin, and Zhu 2018; Yan, Xiong, and Lin 2018; Seo et al. 2018). Graph Neural Networks (GNNs) have shown promising results in modeling transportation networks (Li et al. 2018; Wu et al. 2019; Li and Zhu 2021; Han et al. 2021; Lan et al. 2022; Wu et al. 2022). Typically, GNNs use graph nodes to represent sensors in transportation networks, while the edges between the nodes signify the connections between the various sensors (Tedjopurnomo et al. 2022; Jin et al. 2023). Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: A toy traffic network example with 5 nodes. The nodes with different colors denote their traffic flow changes (panels show the network at times T1, T2, and T3; colors range from low to high flow).

Although significant improvements have been made in traffic forecasting with spatio-temporal data, we argue that the majority of existing methods overlook a common phenomenon in traffic: certain nodes in a traffic network, either due to their geographical location (in the city center) or characteristics (next to large interchanges), exhibit extensive connections with multiple other nodes, forming a complex spatio-temporal network centered around these nodes. In this paper, we refer to these nodes as pivotal nodes.
For example, a toy traffic network comprising five nodes (sensors) is given in Figure 1. The traffic flow is first aggregated from nodes 1, 2, and 5 to node 3 during time interval [T1, T2]. Then, the traffic flow is subsequently distributed from node 3 to nodes 2 and 5 during time interval [T2, T3]. Compared to other nodes, node 3 exhibits stronger abilities in both aggregating and distributing traffic flow, so we can select node 3 as a pivotal node. Such rich inter-node spatio-temporal relationships endow the pivotal nodes with intricate interdependencies and a potential multiplicity of roles within the entire network structure, making them challenging to predict accurately. Existing methods fall short of accurately representing the spatio-temporal dependencies centered around these pivotal nodes. Most existing methods learn spatial and temporal dependencies separately, which fails to effectively capture the accurate spatio-temporal dependencies. A few methods (Song et al. 2020; Li and Zhu 2021) tried to synchronously capture spatio-temporal relationships, but these methods are constrained to represent simplified spatio-temporal dependencies among all nodes, since accurately considering the spatio-temporal dependencies among all nodes leads to a high time and space complexity. For example, STSGCN (Song et al. 2020) employed a one-hot adjacency matrix to represent local spatio-temporal dependencies. In this work, we focus on modeling the traffic phenomenon around pivotal nodes, which involves the following two challenges: the first challenge is how to identify pivotal nodes, and the second challenge lies in how to precisely extract spatio-temporal features at these pivotal nodes. To tackle the above challenges, we propose a novel GNN-based method called Spatio-Temporal Pivotal Graph Neural Networks (STPGNN). For the first challenge, we propose a Pivotal node Identification Module (PIM), which decomposes the process of traffic flow propagation into two components: aggregation and distribution. We design a scoring mechanism that evaluates the aggregation and distribution capabilities of each node, thus identifying pivotal nodes based on the scoring results. For the second challenge, we construct a pivotal graph with the identified pivotal nodes, and propose a Pivotal Graph Convolution Module (PGCM) to capture the intricate spatio-temporal dependencies around pivotal nodes with the pivotal graph. Similar to existing methods, we adopt graph convolution and linear units to capture the spatial and temporal dependencies on non-pivotal nodes, respectively. A parallel framework is designed to fuse the results from the two graph convolutions. By integrating pivotal nodes, our model effectively establishes extensive correlations, and significantly improves the model's inferential capabilities for accurate prediction tasks. The main contributions of the work are as follows: • We address the traffic prediction problem of certain pivotal nodes with complex spatio-temporal dependencies. We propose a novel method for identifying the pivotal nodes in a traffic network. We construct a pivotal graph and introduce a pivotal graph convolution module to accurately capture the spatio-temporal dependencies on pivotal nodes. • A parallel framework is proposed to capture the spatio-temporal dependencies among all nodes. This framework can integrate the spatio-temporal features captured on both pivotal and non-pivotal nodes.
• We conducted experiments on seven real-world datasets from different sources, and the results demonstrate the effectiveness and efficiency of our approach.

Related Work Traffic Flow Forecasting Traffic flow forecasting is a spatio-temporal data forecasting problem. Similar problems include shared bicycle demand forecasting, bus and taxi demand forecasting, crowd flow forecasting, etc. (Li et al. 2015; Chai, Wang, and Yang 2018; Hu et al. 2021; Zhao et al. 2019). Traditional statistical methods like ARIMA (Williams and Hoel 2003) and SVM (Drucker et al. 1996) are widely used in time series prediction. Since they ignore spatial information, it is difficult for them to handle complex spatio-temporal data. Recently, deep learning methods are often used for handling the non-linearity and complexity of traffic data. Convolutional Neural Networks (CNNs) have been regularly applied to traffic flow prediction (Zhang et al. 2016; Zhang, Zheng, and Qi 2017; Ouyang et al. 2022; Yao et al. 2018). In order to capture the spatial correlations between grid regions, methods with CNNs model the traffic flow readings as an image, where each cell records the number of vehicles passing in that cell in a time period, so similar techniques developed for image recognition can be easily applied (Tedjopurnomo et al. 2022). For better investigation of sequence data, Recurrent Neural Networks (RNNs) were proposed. With their capability to memorize sequences, methods with RNNs were soon applied to traffic flow forecasting (Ye et al. 2019; Shi et al. 2015; Zonoozi et al. 2018). More recently, methods with Graph Neural Networks have been proposed to handle spatio-temporal correlations in traffic flow data and obtain impressive results (Pan et al. 2022; Shen et al. 2022; Sun et al. 2022; Guo et al. 2022). DCRNN (Li et al. 2018) proposes a bi-directional process of diffusion to simulate actual road conditions, and uses gated recurrent units to capture temporal information. ASTGCN (Guo et al. 2019) uses two attention layers to capture the dynamics of spatial dependencies and temporal correlations. STGCN, Graph WaveNet, LSGCN and AGCRN (Yu, Yin, and Zhu 2018; Wu et al. 2019; Huang et al. 2021; Bai et al. 2017) follow and improve the GCN methods to extract spatio-temporal information. In particular, Graph WaveNet designs a self-adaptive matrix to take the influence between nodes and their neighbors into account, while LSGCN uses an attention layer to do similar work. STSGCN, STFGNN and STGODE (Song et al. 2020; Li and Zhu 2021; Fang et al. 2021) propose GCN methods based on similar characteristics that can capture spatio-temporal information synchronously. MTGNN (Wu et al. 2020) proposes a graph learning module that constructs a dynamic graph by computing the similarity between learnable node embeddings. DMSTGCN, TPGNN, and DSTAGNN (Han et al. 2021; Wu et al. 2022; Lan et al. 2022) capture the spatio-temporal characteristics by constructing dynamic associations between nodes. SGP (Cini et al. 2023) proposes a scalable architecture that exploits an efficient encoding of both temporal and spatial dynamics. However, most existing methods have not taken into account the significance of certain pivotal nodes in the traffic network, thereby failing to accurately extract the spatio-temporal features of these nodes. Consequently, this limitation hampers the improvement of the performance of the model.
Graph Neural Networks Graph Neural Networks (GNNs) were originally designed to study the structure of graphs and are widely used in node embedding (Pan et al. 2019), node classification (Kipf and Welling 2017), etc. In recent years, to model the graph structures in transportation systems, GNNs such as graph convolutional and graph attention networks have been used for the problem and achieved SOTA performance. Bruna et al. (2014) propose GCN based on spectral graph theory, which can use a filter to smooth the input graph signal and aggregate the information of neighbor nodes. Defferrard, Bresson, and Vandergheynst (2016) propose a Chebyshev extension to reduce the complexity of the Laplacian computation of GCN. Kipf and Welling (2017) simplify the Chebyshev extension method. Velickovic et al. (2018) propose GAT, which introduces attention mechanisms into graphs to update the weights of a node's neighbors. Monti et al. (2017) apply Gaussian kernels to learn the weights of a node's neighbors. Hamilton, Ying, and Leskovec (2017) propose GraphSAGE, which aggregates the features of nodes' neighbors and themselves through a fixed sampling method.

Preliminary A traffic network is represented by an undirected graph G = (V, E), where V is the set of nodes (sensors), N = |V| denotes the number of nodes, and E is the set of edges between two nodes. The adjacency matrix derived from G is denoted by U ∈ R^{N×N}; U is obtained based on the Euclidean distance between nodes. In our problem, we assume that each node records its traffic flow data as a graph signal. A graph signal is X_t ∈ R^N, where t denotes the t-th time step; it represents the traffic flow values at the t-th time step. Given a traffic network G and its historical S-step graph signal matrix X_{1:S} = (X_1, X_2, ..., X_S) ∈ R^{N×S}, our problem is to predict its next T-step graph signals, namely X_{S+1:S+T} = (X_{S+1}, X_{S+2}, ..., X_{S+T}) ∈ R^{N×T}. We formulate the problem as finding a function F to forecast the next T steps of data based on the past S steps of historical data:

(X_{S+1}, X_{S+2}, ..., X_{S+T}) = F((X_1, X_2, ..., X_S)). (1)
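As a concrete illustration of the forecasting task formalized in Eq. (1), the following minimal sketch builds (X_{1:S}, X_{S+1:S+T}) training pairs from a signal matrix. This is our own hedged illustration assuming NumPy; the function name and window sizes are not part of STPGNN.

import numpy as np

def make_windows(X, S=12, T=12):
    # Slice a signal matrix X of shape (N, L) into (X_{1:S}, X_{S+1:S+T}) pairs.
    # Returns inputs of shape (B, N, S) and targets of shape (B, N, T),
    # matching the mapping F in Eq. (1).
    N, L = X.shape
    inputs, targets = [], []
    for t in range(L - S - T + 1):
        inputs.append(X[:, t:t + S])
        targets.append(X[:, t + S:t + S + T])
    return np.stack(inputs), np.stack(targets)

# Toy check: 5 sensors, 100 time steps, 12-step history and horizon.
X = np.random.rand(5, 100)
hist, fut = make_windows(X, S=12, T=12)
print(hist.shape, fut.shape)  # (77, 5, 12) (77, 5, 12)

With a 5-minute sampling rate, S = T = 12 would match the 60-minute history and forecasting windows used in the experiments below.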
Proposed Model As shown in Figure 2, the main framework of our proposed method consists of an input layer, a Pivotal node Identification Module (PIM), M stacked spatio-temporal layers (ST-Layers) and an output layer. The input layer contains one linear-unit convolution layer. The PIM identifies pivotal nodes with a scoring function and generates a pivotal graph accordingly. Each spatio-temporal layer incorporates parallel structures, namely the pivotal graph convolution module (PGCM) and the graph convolution module with linear unit. The pivotal graph convolution module is designed for feature extraction on pivotal nodes. Meanwhile, in the second branch, the graph convolution module with linear unit is mainly employed for feature extraction on non-pivotal nodes. The output of each spatio-temporal layer is concatenated and then sent to the output layer. The output layer contains two activation layers and two linear-unit convolution layers.

Pivotal node Identification Module In a traffic network, certain nodes, referred to as pivotal nodes, possess more complex node relationships. We introduce the definition of pivotal nodes and the method devised to identify them. Intuitively, pivotal nodes are expected to exhibit stronger capabilities in both aggregating traffic flow from other nodes and distributing it to them. Therefore, our initial focus lies in quantifying these capabilities. We consider the following approach: let H ∈ R^{N×S} be the input, where H_i ∈ R^S represents the traffic features of node i during the S time steps; then we calculate E = (e_{i,j}) ∈ R^{N×N} as follows:

e_{i,j} = \frac{\sum_{k=1+d}^{S} (H_{i,k} H_{j,k-d}^{\top}) w_{i,j}}{\sqrt{\sum_{k=1+d}^{S} H_{i,k}^2} \cdot \sqrt{\sum_{k=1}^{S-d} H_{j,k}^2}}, (2)

Here, k denotes the index of the time step and d represents the time required for traffic flow to propagate between nodes. In this paper, we set d = 1, which corresponds to 5 minutes in the context of the commonly used public dataset PEMS (detailed in the experimental section). W = (w_{i,j}) ∈ R^{N×N} is a trainable parameter matrix. The term e_{i,j} is obtained by calculating the similarity between the traffic flow at node i during the current time step and that at node j during the previous d time steps, where higher similarity indicates a greater influence of the traffic features propagated from node j to node i. The entries in the i-th row of matrix E represent the node dependencies from other nodes to i with time lag d; thereby the summation of each row in matrix E yields a representation of the aggregation capability of each node in the traffic graph. Similarly, the summation of the i-th column of matrix E can represent the distribution capability of node i. Henceforth, we can propose a scoring function capable of measuring the aggregation and distribution capabilities of node i with the generated E as follows:

\mathrm{Score}(i) = \sum_{j=1}^{N} (e_{i,j} + e_{j,i}). (3)

The set of pivotal nodes C is defined as follows:

C = \{ i \mid \mathrm{Score}(i) \in \mathrm{TopK}(\mathrm{Score}) \}, (4)

where K is a hyperparameter to control the number of pivotal nodes. We empirically set K = N/5, and we further analyze this hyperparameter setting in the experimental section. With the identification of pivotal nodes established, the subsequent step involves constructing a graph that incorporates the spatio-temporal dependencies of pivotal nodes. For pivotal nodes, it is reasonable to use the corresponding entries in matrix E as the spatio-temporal dependencies between nodes at adjacent time steps. However, for non-pivotal nodes, it is no longer suitable to use E due to their lack of strong aggregation and distribution capabilities. Hence, the adjacency matrix A = (a_{i,j}) ∈ R^{N×N} is defined as follows:

a_{i,j} = \begin{cases} \mathrm{sigmoid}(e_{i,j}), & \text{if } i \in C \vee j \in C \\ u_{i,j}, & \text{otherwise} \end{cases} (5)

Here, u_{i,j} denotes the corresponding entry in the given adjacency matrix U, and the sigmoid function is used to normalize the entries in matrix E. Figure 2 (a) depicts a pivotal graph with the five nodes of Figure 1.

Pivotal Graph Convolution Module The Graph Convolutional Network is a powerful method to extract a node's features with its neighbors' information. Most existing methods employed it to capture spatial relationships between nodes and were unable to synchronously capture spatio-temporal dependencies. A few methods such
as STSGCN (Song et al. 2020) designed synchronous graph convolutions to capture spatio-temporal dependencies, but these methods can only consider simplified and local spatio-temporal dependencies with a one-hot adjacency matrix, since considering the spatio-temporal dependencies among all nodes inevitably leads to a high time and space complexity of O(T_d^2 N^2), where T_d is the number of time steps spanned by the spatio-temporal dependencies. In this paper, we can focus on the spatio-temporal dependencies around pivotal nodes, which significantly reduces the complexity to O(T_d K N), where K is the number of pivotal nodes.

Figure 2: Detailed framework of STPGNN: (a) pivotal graph; (b) pivotal graph convolution; (c) the overall framework of STPGNN (input layer, PIM, M stacked ST-Layers with residual connections, and output layer); (d) linear unit and graph convolution.

We build a pivotal graph convolution module where the convolutional operation is designed to extract pivotal nodes' features with matrix A. The graph convolutional operation is defined in the vertex domain, which means it can fuse node features with their neighbors without requiring a spectral filter such as the graph Laplacian. Let H^l ∈ R^{S×N×C} be the input graph signal of the l-th ST-Layer, where C is the number of channels. The pivotal convolutional operation can be formulated as follows:

H^l = [H^l_1, H^l_2, ..., H^l_{S-d+1}], (6)

H^{l+1}_i = \sum_{k=1}^{d} \sigma(A H^l_{[i,:,:]} W + B), (7)

Here, H^{l+1} ∈ R^{(S-d+1)×N×C'} represents the output and C' is the number of output channels. d denotes the kernel size in the temporal dimension, and k denotes the index of the kernel. A is the adjacency matrix of the pivotal graph. σ denotes the activation function, such as sigmoid or tanh. W ∈ R^{C×C'} and B ∈ R^{N×C'} are all model parameters. The output H^{l+1} incorporates spatio-temporal features from d time steps. Figure 2 (b) illustrates the computation process of the pivotal graph convolutional operation.

A Parallel Structure for Non-pivotal Nodes Although the PGCM addresses the spatio-temporal dependencies on pivotal nodes, we still need to consider the extraction of features on non-pivotal nodes. Similar to existing methods such as Graph WaveNet (Wu et al. 2019), we adopt graph convolution and linear convolution to capture the spatial and temporal correlations among non-pivotal nodes, respectively, and we propose a parallel structure to simultaneously extract features from both pivotal and non-pivotal nodes. The format of the graph convolution is as follows:

O = \sum_{q=0}^{Q} U^q X L_q, (8)

where U^q represents the q-th power of the diffusion matrix U, X ∈ R^{N×S} denotes the input signals, O ∈ R^{N×S} denotes the output, and L_q ∈ R^{S×S} denotes the matrices of learnable parameters.
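The pipeline from Eqs. (2)-(7) can be summarized in a small NumPy sketch. This is an illustrative reimplementation under our own naming; in the actual model W is trained end-to-end and the convolution runs over channel and batch dimensions.

import numpy as np

def pim_scores(H, W, d=1):
    # Lagged-correlation matrix E (Eq. 2) and node scores (Eq. 3).
    # H: (N, S) node features over S steps; W: (N, N) weight matrix
    # (trainable in the real model, fixed here); d: propagation lag.
    N, S = H.shape
    Hi = H[:, d:]        # H_{i,k} for k = 1+d .. S
    Hj = H[:, :S - d]    # H_{j,k-d} for the same k
    num = (Hi @ Hj.T) * W
    den = np.sqrt((Hi ** 2).sum(axis=1))[:, None] * \
          np.sqrt((Hj ** 2).sum(axis=1))[None, :]
    E = num / (den + 1e-8)
    score = E.sum(axis=1) + E.sum(axis=0)   # aggregation + distribution (Eq. 3)
    return E, score

def pivotal_graph(E, score, U, K):
    # Top-K pivotal set C (Eq. 4) and mixed adjacency A (Eq. 5).
    C = np.argsort(score)[-K:]              # indices of the K highest scores
    is_piv = np.zeros(len(score), dtype=bool)
    is_piv[C] = True
    touch_piv = is_piv[:, None] | is_piv[None, :]   # i in C or j in C
    A = np.where(touch_piv, 1.0 / (1.0 + np.exp(-E)), U)
    return C, A

def pivotal_conv(H, A, W, B, d=1):
    # One pivotal graph convolution (Eqs. 6-7), sketched for d = 1.
    # H: (S, N, C); A: (N, N); W: (C, Cout); B: (N, Cout).
    return np.stack([np.tanh(A @ H[i] @ W + B)
                     for i in range(H.shape[0] - d + 1)])

N, S, C, Cout = 5, 12, 4, 8
H = np.random.rand(N, S)
E, score = pim_scores(H, W=np.ones((N, N)))
Cset, A = pivotal_graph(E, score, U=np.eye(N), K=max(1, N // 5))
out = pivotal_conv(np.random.rand(S, N, C), A, np.random.rand(C, Cout), np.zeros((N, Cout)))
print(Cset, A.shape, out.shape)   # e.g. [3] (5, 5) (12, 5, 8)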
The linear unit (Yu and Koltun 2016) is designed to capture long-range behaviors of temporal features. Mathematically, given an input x ∈ R^T and a filter f ∈ R^K, the linear unit operation of x with f at time step t is represented as

x \star f(t) = \sum_{s=0}^{K-1} f(s) \, x(t - d \times s), (9)

where ⋆ represents the convolution operation. Figure 2 (d) illustrates how the graph convolution and linear unit collaborate with each other.

Loss Function Mean absolute error (MAE) is chosen as the loss function. The objective function is shown below:

L(\hat{X}^{(t+1):(t+T)}; \Theta) = \frac{1}{TN} \sum_{i=1}^{T} \sum_{j=1}^{N} |\hat{X}^{(t+i)}_j - X^{(t+i)}_j|, (10)

Here, \hat{X}^{(t+1):(t+T)} is the prediction value, Θ denotes all learnable model parameters, and X^{(t+i)}_j denotes the ground truth.

Experiments In this section, we evaluate our proposed model by empirically comparing it on seven real-world datasets with the state-of-the-art models for traffic forecasting. We not only experiment on traffic flow datasets but also choose traffic speed and GPS datasets to showcase the generality of our method in addressing the traffic prediction problem.

Datasets and Baseline Methods We conduct experiments on seven widely used real-world public traffic datasets: (1) TaxiBJ (Zhang, Zheng, and Qi 2017); (2) England (http://tris.highwaysengland.co.uk/detail/trafficflowdata); (3) PEMS (http://pems.dot.ca.gov/), comprising PEMS03, PEMS04, PEMS07, PEMS08, and PEMS-BAY. A brief description is given in Table 1.

Dataset   Node  Samples  Sample Rate  Data Type
PEMS-03   358   26208    5 min        Traffic Flow
PEMS-04   307   16992    5 min        Traffic Flow
PEMS-07   883   28224    5 min        Traffic Flow
PEMS-08   170   17856    5 min        Traffic Flow
England   314   17353    15 min       Traffic Flow
TaxiBJ    1024  5596     30 min       Taxi GPS
PEMS-BAY  325   52116    5 min        Traffic Speed
Table 1: Datasets description.

The baselines are ARIMA (Williams and Hoel 2003), DCRNN (Li et al. 2018), GWNet (Wu et al. 2019), STSGCN (Song et al. 2020), MTGNN (Wu et al. 2020), DMSTGCN (Han et al. 2021), DSTAGNN (Lan et al. 2022), TPGNN (Wu et al. 2022), and SGP (Cini et al. 2023).

Experiment Settings To make a fair comparison, we follow existing experimental settings and use the same evaluation metrics as the original publications for each dataset. All the tests adopt 60 minutes as the history time window except for the England dataset, for which we employed a 3-hour window. The forecasting time window is set to be the same as the history time window. We adopt Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Squared Error (RMSE) to measure the performance of different methods. Every experiment is repeated 5 times and the average performance is reported.
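For reference, the three error metrics above have standard definitions; a minimal sketch follows (assuming NumPy; the eps guard for zero flow readings is our own addition, not part of the paper's protocol).

import numpy as np

def mae(y, yhat):
    return np.abs(yhat - y).mean()

def rmse(y, yhat):
    return np.sqrt(((yhat - y) ** 2).mean())

def mape(y, yhat, eps=1e-8):
    # mean absolute percentage error; eps guards against zero readings
    return (np.abs(yhat - y) / (np.abs(y) + eps)).mean() * 100.0

y = np.array([120.0, 80.0, 95.0])
yhat = np.array([110.0, 85.0, 90.0])
print(mae(y, yhat), rmse(y, yhat), mape(y, yhat))  # ~6.67, ~7.07, ~6.61

Lower is better for all three, which is why the relative improvements reported next are computed against the second-best (sub-optimal) results.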
Experiment Results Table 2 summarizes the experimental results, comparing the different approaches on the traffic flow forecasting tasks. From Table 2, we can see our method outperforms the other baseline methods on all seven datasets, except that it performs sub-optimally in terms of MAPE on the PEMS-BAY dataset. We can easily observe that traditional statistical methods such as ARIMA often perform poorly since they cannot efficiently handle complex spatio-temporal data. By utilizing a multi-head attention mechanism, DSTAGNN has achieved sub-optimal (second-best) performance on most datasets. Constructing dynamic graph structures and incorporating auxiliary information, TPGNN and DMSTGCN have achieved sub-optimal results on the remaining datasets. In comparison to these sub-optimal methods, our model shows significant improvements on the TaxiBJ and PEMS08 datasets, with a 14.15% relative improvement in MAPE on TaxiBJ and a 10.32% relative improvement on PEMS08. On the PEMS03, TaxiBJ, PEMS08, and PEMS-BAY datasets, our method outperforms the sub-optimal results by 9.05%, 7.60%, 6.98% and 6.21% in terms of MAE, respectively. Moreover, on the TaxiBJ and PEMS04 datasets, our method achieves 6.54% and 6.03% improvements in terms of RMSE, respectively. In general, the results in Table 2 verify the effectiveness of our model.

Dataset   Metric   ARIMA  DCRNN  GWNet  STSGCN  MTGNN  DMSTGCN  DSTAGNN  TPGNN  SGP    STPGNN  Improve(%)
PEMS03    MAE      26.33  18.18  19.85  17.48   17.23  16.82    15.67    16.88  15.82  14.37   9.05
PEMS03    MAPE(%)  22.90  18.18  19.85  16.78   17.35  16.71    14.74    16.53  15.74  14.23   3.58
PEMS03    RMSE     33.05  30.31  32.94  29.21   25.89  25.81    27.21    25.78  25.92  24.62   4.71
PEMS04    MAE      28.55  24.70  25.45  21.19   19.98  19.75    19.53    19.63  19.57  18.34   6.49
PEMS04    MAPE(%)  19.55  17.12  17.29  13.90   14.13  13.91    12.97    13.04  13.13  12.49   3.84
PEMS04    RMSE     40.36  38.12  39.70  33.65   31.92  31.43    31.46    31.44  31.52  29.64   6.03
PEMS07    MAE      33.89  25.30  26.85  24.26   23.92  23.73    21.42    23.52  23.66  20.52   4.39
PEMS07    MAPE(%)  17.60  11.66  12.12  10.21   12.43  12.21    9.08     11.20  9.92   8.75    3.77
PEMS07    RMSE     46.38  38.58  42.78  29.03   35.86  36.01    34.51    35.20  34.97  33.38   3.39
PEMS08    MAE      31.23  17.86  19.13  17.13   15.03  14.87    15.67    14.92  14.96  13.90   6.98
PEMS08    MAPE(%)  19.25  11.45  12.68  10.96   10.23  10.11    9.94     10.11  10.27  9.01    10.32
PEMS08    RMSE     33.47  27.83  31.05  26.80   23.89  23.86    24.77    23.76  24.03  23.05   3.08
England   MAE      4.23   3.59   3.12   3.02    3.03   2.98     2.97     3.07   3.05   2.87    3.48
England   MAPE(%)  5.72   4.90   4.53   4.48    4.42   4.37     4.39     4.41   4.52   4.19    4.30
England   RMSE     7.68   7.42   7.17   7.03    7.05   6.99     7.02     7.11   7.25   6.81    2.64
TaxiBJ    MAE      25.32  19.81  18.77  17.69   18.07  17.59    16.85    17.23  17.03  15.66   7.60
TaxiBJ    MAPE(%)  59.35  34.19  33.52  31.04   31.98  31.79    29.76    30.89  30.12  26.07   14.15
TaxiBJ    RMSE     51.54  31.68  30.66  28.30   29.97  27.71    27.53    28.19  27.86  25.84   6.54
PEMS-BAY  MAE      2.33   1.74   1.64   1.67    1.68   1.78     1.71     1.65   1.54   1.45    6.21
PEMS-BAY  MAPE(%)  5.40   3.90   3.85   3.75    3.69   4.10     3.60     3.47   3.44   3.57    *
PEMS-BAY  RMSE     4.76   3.97   3.75   3.82    3.74   3.97     3.71     3.65   3.52   3.46    1.73
Table 2: Performance comparison of different approaches on seven datasets.

In the pivotal node identification module, we employ a scoring mechanism to identify pivotal nodes, which includes a TopK function where K is a hyperparameter controlling the number of pivotal nodes. We conduct experiments to validate the appropriate value for K. As shown in Table 3, we report the performance of STPGNN under different sizes of K on the PEMS07 and PEMS08 datasets. Initially, the performance of the model consistently improves as K increases. However, after surpassing a certain threshold, the performance gradually declines. This threshold is approximately one-fifth of the total number of nodes.

Component Analysis To further verify the effectiveness of the different modules of STPGNN, we conduct experiments on PEMS08 (as shown in Figure 3 (a)) and England (as shown in Figure 3 (b)); the results on other datasets are similar. In particular, we design three variants of the STPGNN model: (1) RemPG: this variant removes the pivotal node identification module and randomly selects nodes as pivotal nodes. (2) RemPGCN: this variant removes the pivotal graph convolution module and solely utilizes the graph convolution module and linear units to extract features from all nodes. (3) RemGCN: this variant removes the graph convolution module and linear units and exclusively employs the pivotal graph convolution module to construct the ST-Layers.
Dataset: PEMS07 (883 sensors)
K        100    125    150    175    200    225
MAE      21.32  21.13  20.85  20.52  20.56  20.63
MAPE(%)  9.32   9.13   8.86   8.75   8.79   8.91
RMSE     34.25  33.96  33.51  33.38  33.42  33.63
Dataset: PEMS08 (170 sensors)
K        5      15     25     35     45     55
MAE      15.21  14.63  14.02  13.77  13.96  14.38
MAPE(%)  10.17  9.53   9.22   8.96   9.31   9.74
RMSE     23.97  23.25  23.03  22.90  23.11  23.68
Table 3: The performance of STPGNN with different K.

Figure 3: Performance comparison of the different variants on the PEMS08 (a) and England (b) datasets.

From the results in Figure 3, we have the following findings. When removing the pivotal node identification module, the method fails to capture the pivotal nodes, resulting in a significant decrease in prediction accuracy. The performance of the model declines after the removal of either PGCM or GCN. However, comparing RemPGCN and RemGCN reveals that the effectiveness of using PGCM varies across different datasets. We speculate that this might be due to there being more traffic flow propagation around pivotal nodes in the England dataset. In summary, identifying pivotal nodes and extracting spatio-temporal dependencies on these nodes can substantially enhance the precision of model predictions.

Effect of the Pivotal Nodes We present a case study to gain a better understanding of our proposed approach. We investigate the pivotal graph from our method trained on the PEMS-BAY dataset to verify whether our model has truly identified the pivotal nodes. We have plotted all the sensors in the PEMS-BAY dataset on a map, as shown in Figure 4 (a), with node 90 highlighted in red.

Figure 4: An illustrative case study conducted on the PEMS-BAY dataset. (a) The geographical positions of all nodes; pivotal node 90 is highlighted in red. (b) Node 90 is situated on an overpass. (c) The top 25 nodes aggregating towards node 90. (d) Similar to (c), the top 25 nodes to which traffic is distributed from node 90.

In the scoring mechanism, node 90 received the highest score due to its exceptional performance in both aggregation and distribution capabilities. Figure 4 (b) illustrates the precise location of node 90. As evident, node 90 is located on an overpass with high traffic flow, which validates the suitability of our chosen pivotal nodes in real-world scenarios. In Figure 4 (c) and (d), we present highly connected sensors selected from the 90-th row and the 90-th column of matrix E, respectively. It can be observed that the majority of these nodes are situated near node 90, with a few nodes at a greater distance. Additionally, the nodes involved in aggregation and distribution are located on opposite sides of the pivotal node. This demonstrates that our method is indeed capable of capturing the process of traffic aggregation and distribution at pivotal nodes. To validate the effectiveness of our method in modeling pivotal nodes, we evaluate its predictive performance on pivotal nodes. We choose an identified pivotal node (Node 0) and a non-pivotal node (Node 113) from the PEMS08 dataset. We present the traffic data records of the two nodes over a span of three days, as shown in Figure 5 (a).

Figure 5: An illustrative case study conducted on the PEMS08 dataset. (a) The ground truth for pivotal and non-pivotal nodes. (b) The prediction results of STPGNN and DSTAGNN on pivotal node 0.
It can be observed that the pivotal node exhibits higher traffic flow compared to the non-pivotal node, and its traffic flow undergoes significant variations throughout each day, indicating the challenge in predicting its behavior. Furthermore, in Figure 5 (b), we compare our method with the sub-optimal method DSTAGNN in terms of predictions for Node 0 on the second day. It is evident that our method successfully captures the abrupt changes in traffic flow (highlighted by the colored region), demonstrating its effectiveness in predicting the traffic variations of pivotal nodes. These results provide strong evidence that our proposed method outperforms alternative approaches in accurately forecasting the traffic flow on pivotal nodes.

Computation Time To demonstrate the efficiency of our method, we compare STPGNN with DMSTGCN, DSTAGNN, and TPGNN, which achieved sub-optimal results on multiple datasets. The training time refers to the duration it takes for the model to complete one epoch with the same batch size. Every experiment is repeated 10 times and the average performance is reported. Table 4 shows the computation cost on the PEMS04 and PEMS08 datasets; similar conclusions are drawn from the testing results on the other datasets. STPGNN and DMSTGCN are faster than DSTAGNN and TPGNN in both training and inference. Although DSTAGNN achieves significant improvements in prediction accuracy through the use of multi-head attention mechanisms, it requires a substantial computational cost, leading to significantly increased training and inference time.

Dataset: PEMS04
Model         DSTAGNN  DMSTGCN  TPGNN  STPGNN
Training(s)   150.37   40.36    70.28  32.36
Inference(s)  10.21    5.53     8.32   3.93
Dataset: PEMS08
Model         DSTAGNN  DMSTGCN  TPGNN  STPGNN
Training(s)   117.41   35.37    58.25  27.32
Inference(s)  8.94     4.28     7.53   3.16
Table 4: The computation cost on the PEMS04 and PEMS08 datasets.

Conclusion In this paper, we propose a novel GNN-based method for traffic flow forecasting problems, in which we address a common phenomenon concerning pivotal nodes. We design a scoring mechanism to identify these pivotal nodes in the traffic network and propose a novel pivotal graph convolution module to extract spatio-temporal features at these nodes. Furthermore, we introduce a parallel framework to concurrently capture spatio-temporal dependencies on both pivotal and non-pivotal nodes. Extensive experiments demonstrate the effectiveness and efficiency of our method.

Acknowledgments The authors would like to thank the anonymous reviewers for their helpful comments. This work was supported by the NSFC 61572537 and the CCF-Huawei Populus Grove Challenge Fund 202305. Yubao Liu is the corresponding author.

References Bai, L.; Yao, L.; Li, C.; Wang, X.; and Wang, C. 2017. Adaptive graph convolutional recurrent network for traffic forecasting. In NIPS. Bruna, J.; Zaremba, W.; Szlam, A.; and LeCun, Y. 2014. Spectral Networks and Locally Connected Networks on Graphs. In ICLR. Chai, D.; Wang, L.; and Yang, Q. 2018. Bike Flow Prediction with Multi-Graph Convolutional Networks. In SIGSPATIAL. Cini, A.; Marisca, I.; Bianchi, F. M.; and Alippi, C. 2023. Scalable Spatiotemporal Graph Neural Networks. In Proceedings of the 37th AAAI Conference on Artificial Intelligence. Defferrard, M.; Bresson, X.; and Vandergheynst, P. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In NIPS. Drucker, H.; Burges, C. J.; Kaufman, L.; Smola, A.; and Vapnik, V. 1996.
Support vector regression machines. In NIPS. Fang, Z.; Long, Q.; Song, G.; and Xie, K. 2021. Spatial-temporal graph ODE networks for traffic flow forecasting. In SIGKDD. Guo, S.; Lin, Y.; Feng, N.; Song, C.; and Wan, H. 2019. Attention based spatial-temporal graph convolutional networks for traffic flow forecasting. In AAAI. Guo, S.; Lin, Y.; Wan, H.; Li, X.; and Cong, G. 2022. Learning Dynamics and Heterogeneity of Spatial-Temporal Graph Data for Traffic Forecasting. IEEE Transactions on Knowledge and Data Engineering, 34(11): 5415–5428. Hamilton, W.; Ying, Z.; and Leskovec, J. 2017. Inductive representation learning on large graphs. In NIPS. Han, L.; Du, B.; Sun, L.; Fu, Y.; Lv, Y.; and Xiong, H. 2021. Dynamic and Multi-Faceted Spatio-Temporal Deep Learning for Traffic Speed Forecasting. In SIGKDD. Hu, Q.; Ming, L.; Xi, R.; Chen, L.; Jensen, C. S.; and Zheng, B. 2021. SOUP: A Fleet Management System for Passenger Demand Prediction and Competitive Taxi Supply. In ICDE. Huang, R.; Huang, C.; Liu, Y.; Dai, G.; and Kong, W. 2021. LSGCN: Long short-term traffic prediction with graph convolutional networks. In IJCAI. Jin, G.; Liang, Y.; Fang, Y.; Huang, J.; Zhang, J.; and Zheng, Y. 2023. Spatio-Temporal Graph Neural Networks for Predictive Learning in Urban Computing: A Survey. Kipf, T. N.; and Welling, M. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR. Lan, S.; Ma, Y.; Huang, W.; Wang, W.; Yang, H.; and Li, P. 2022. DSTAGNN: Dynamic Spatial-Temporal Aware Graph Neural Network for Traffic Flow Forecasting. In ICML. Li, M.; and Zhu, Z. 2021. Spatial-temporal fusion graph neural networks for traffic flow forecasting. In AAAI. Li, Y.; Yu, R.; Shahabi, C.; and Liu, Y. 2018. Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting. In ICLR. Li, Y.; Zheng, Y.; Zhang, H.; and Chen, L. 2015. Traffic Prediction in a Bike-Sharing System. In SIGSPATIAL. Monti, F.; Boscaini, D.; Masci, J.; Rodola, E.; Svoboda, J.; and Bronstein, M. M. 2017. Geometric Deep Learning on Graphs and Manifolds Using Mixture Model CNNs. In CVPR. Ouyang, K.; Liang, Y.; Liu, Y.; Tong, Z.; Ruan, S.; Zheng, Y.; and Rosenblum, D. S. 2022. Fine-Grained Urban Flow Inference. IEEE Transactions on Knowledge and Data Engineering, 34(6): 2755–2770. Pan, S.; Hu, R.; Fung, S.-f.; Long, G.; Jiang, J.; and Zhang, C. 2019. Learning graph embedding with adversarial training methods. IEEE Transactions on Cybernetics, 50(6): 2475–2487. Pan, Z.; Zhang, W.; Liang, Y.; Zhang, W.; Yu, Y.; Zhang, J.; and Zheng, Y. 2022. Spatio-Temporal Meta Learning for Urban Traffic Prediction. IEEE Transactions on Knowledge and Data Engineering, 34(3): 1462–1476. Seo, Y.; Defferrard, M.; Vandergheynst, P.; and Bresson, X. 2018. Structured Sequence Modeling with Graph Convolutional Recurrent Networks. In ICONIP. Shen, Y.; Jin, C.; Hua, J.; and Huang, D. 2022. TTPNet: A Neural Network for Travel Time Prediction Based on Tensor Decomposition and Graph Embedding. IEEE Transactions on Knowledge and Data Engineering, 34(9): 4514–4526. Shi, X.; Chen, Z.; Wang, H.; Yeung, D.; Wong, W.; and Woo, W. 2015. Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting. In NIPS. Song, C.; Lin, Y.; Guo, S.; and Wan, H. 2020. Spatial-temporal synchronous graph convolutional networks: A new framework for spatial-temporal network data forecasting. In AAAI.
Sun, J.; Zhang, J.; Li, Q.; Yi, X.; Liang, Y.; and Zheng, Y. 2022. Predicting Citywide Crowd Flows in Irregular Regions Using Multi-View Graph Convolutional Networks. IEEE Transactions on Knowledge and Data Engineering, 34(5): 2348–2359. Tedjopurnomo, D. A.; Bao, Z.; Zheng, B.; Choudhury, F. M.; and Qin, A. K. 2022. A Survey on Modern Deep Neural Network for Traffic Prediction: Trends, Methods and Challenges. IEEE Transactions on Knowledge and Data Engineering, 34(4): 1544–1561. Velickovic, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2018. Graph Attention Networks. stat, 1050: 4. Williams, B. M.; and Hoel, L. A. 2003. Modeling and forecasting vehicular traffic flow as a seasonal ARIMA process: Theoretical basis and empirical results. Journal of Transportation Engineering, 129(6): 664–672. Wu, Y.; and Tan, H. 2016. Short-term traffic flow forecasting with spatial-temporal correlation in a hybrid deep learning framework. arXiv preprint arXiv:1612.01022. Wu, Z.; Pan, S.; Long, G.; Jiang, J.; Chang, X.; and Zhang, C. 2020. Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks. In SIGKDD. Wu, Z.; Pan, S.; Long, G.; Jiang, J.; Chang, X.; and Zhang, C. 2022. Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks. In NIPS. Wu, Z.; Pan, S.; Long, G.; Jiang, J.; and Zhang, C. 2019. Graph WaveNet for deep spatial-temporal graph modeling. In IJCAI. Yan, S.; Xiong, Y.; and Lin, D. 2018. Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition. In AAAI. Yao, H.; Tang, X.; Wei, H.; Zheng, G.; Yu, Y.; and Li, Z. 2018. Modeling spatial-temporal dynamics for traffic prediction. arXiv preprint arXiv:1803.01254. Ye, J.; Sun, L.; Du, B.; Fu, Y.; Tong, X.; and Xiong, H. 2019. Co-Prediction of Multiple Transportation Demands Based on Deep Spatio-Temporal Neural Network. In SIGKDD. Yu, B.; Yin, H.; and Zhu, Z. 2018. Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting. In IJCAI. Yu, F.; and Koltun, V. 2016. Multi-Scale Context Aggregation by Dilated Convolutions. In ICLR. Zhang, J.; Zheng, Y.; and Qi, D. 2017. Deep Spatio-Temporal Residual Networks for Citywide Crowd Flows Prediction. In AAAI. Zhang, J.; Zheng, Y.; Qi, D.; Li, R.; and Yi, X. 2016. DNN-Based Prediction Model for Spatio-Temporal Data. In SIGSPATIAL. Zhao, B.; Xu, P.; Shi, Y.; Tong, Y.; Zhou, Z.; and Zeng, Y. 2019. Preference-aware task assignment in on-demand taxi dispatching: An online stable matching approach. In AAAI. Zonoozi, A.; Kim, J.-J.; Li, X.; and Cong, G. 2018. Periodic-CRN: A Convolutional Recurrent Model for Crowd Density Prediction with Recurring Periodic Patterns. In IJCAI.
2024
959
18,805
SRFormer: Text Detection Transformer with Incorporated Segmentation and Regression Qingwen Bu1,2, Sungrae Park3*, Minsoo Khang3, Yichuan Cheng4 1Shanghai Jiao Tong University 2Shanghai AI Laboratory 3Upstage AI 4City University of Hong Kong [email protected], {sungrae.park, mkhang}@upstage.ai, [email protected]

Abstract Existing techniques for text detection can be broadly classified into two primary groups: segmentation-based and regression-based methods. Segmentation models offer enhanced robustness to font variations but require intricate post-processing, leading to high computational overhead. Regression-based methods undertake instance-aware prediction but face limitations in robustness and data efficiency due to their reliance on high-level representations. In our academic pursuit, we propose SRFormer, a unified DETR-based model with amalgamated Segmentation and Regression, aiming at the synergistic harnessing of the inherent robustness in segmentation representations, along with the straightforward post-processing of instance-level regression. Our empirical analysis indicates that favorable segmentation predictions can be obtained at the initial decoder layers. In light of this, we constrain the incorporation of segmentation branches to the first few decoder layers and employ progressive regression refinement in subsequent layers, achieving performance gains while minimizing the computational load from the mask. Furthermore, we propose a Mask-informed Query Enhancement module. We take the segmentation result as a natural soft-ROI to pool and extract robust pixel representations, which are then employed to enhance and diversify instance queries. Extensive experimentation across multiple benchmarks has yielded compelling findings, highlighting our method's exceptional robustness, superior training and data efficiency, as well as its state-of-the-art performance. Our code is available at https://github.com/retsuh-bqw/SRFormer-Text-Det.

Introduction Scene text detection and recognition have made many strides in recent years, garnering increasing attention within both the research community and industries, thanks to their wide range of practical applications, such as autonomous driving and document intelligence. Despite being a thoroughly investigated area, text detection remains a challenging endeavor within the realm of existing methodologies, particularly when confronted with complex scenarios involving overlapping, irregularly shaped, and stylized text instances. Previous work on detecting texts can be roughly divided into two streams: regression- and segmentation-based methods. (*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.) Regression-based methods offer notable advantages, including computational efficiency and adaptability to texts of varying sizes, making them suitable for real-time applications and the detection of both small and large text instances. Additionally, their end-to-end learning approach simplifies the pipeline, enabling post-processing with geometric calculations. However, these methods may exhibit slightly lower localization precision compared to segmentation-based approaches, particularly in the context of irregular or curved text instances (Zhao et al. 2019). They may also struggle when text contrasts poorly with the surrounding background, making them vulnerable in complex environments. Segmentation-based models also have their own advantages and limitations.
While they can provide pixel-level localization and are more robust in addressing variations in text appearance, such as diverse font styles, sizes, and orientations, they require intricate post-processing to extract complete text instances from the binary masks, involving further algorithmic intervention, which is not amenable to GPU parallel processing (Gu, Bai, and Kong 2022). This impedes their ability to achieve stable and fast detection. Can we harness the strengths of both regression- and segmentation-based methods, while mitigating their drawbacks, by combining these two methods into one unified model? DEtection TRansformers (DETR), a recent popular method in object detection, present a suitable framework for the integration of these two representations (Li et al. 2023). While DETR variants have demonstrated notable success (Zhang et al. 2022; Ye et al. 2023b,a), there remains discernible scope for further enhancement of performance across various text detection benchmarks. Furthermore, most DETR models adhere to the regression-based paradigm, thereby necessitating prolonged training iterations and substantial datasets to attain optimal performance. To address the aforementioned issues, we propose SRFormer, a new DETR-based model with separated decoder chunks: the segmentation chunk bootstraps models to learn more robust pixel-level representations, helps the model better separate text and non-text regions, and provides positional priors for finer-grained regression; the regression chunk directs the queries to capture high-level semantic features and provides further refinement of localization results with minimal post-processing. Rather than utilizing the segmentation mask directly as the ultimate prediction output, which necessitates accurate prediction and complex post-processing procedures, we introduce a Mask-informed Query Enhancement module that leverages masks as inherent indicators of Regions of Interest (ROI) to extract distinctive features in localized regions, further enhancing and diversifying queries for improved optimization. Utilizing the proposed module alongside supervision signals for both segmentation and regression empowers our model to harness the distinct advantages of each component while alleviating their inherent constraints. Our contributions are threefold: • We incorporate regression and segmentation into a unified DETR model, which creates new state-of-the-art performance across several scene text detection benchmarks by leveraging the distinct characteristics of both sides. • Through the strategic incorporation of the segmentation map solely within the initial layers of the decoder, the model gains the capacity to acquire robust pixel-level features and mitigates the need for intricate post-processing steps, ensuring stable and fast inference. • In comparison to regression-based approaches, our proposed method exhibits superior performance in terms of training efficiency and data utilization, as well as improved robustness across diverse data domains.

Related Work Detection Transformers DETR (DEtection TRansformer) (Carion et al. 2020) represents a pioneering model that introduced a fully end-to-end transformer-based paradigm for object detection. By formulating object detection as a set prediction task, it eliminates the need for non-maximum suppression (NMS) and substantially reduces post-processing requirements.
However, DETR's training convergence and feature resolution limitations have hindered its competitiveness compared to traditional detectors. In response, Deformable DETR (Zhu et al. 2020) addresses these concerns by introducing sparse multi-scale features to enhance efficiency. Additionally, other variants such as Conditional DETR (Meng et al. 2021), Anchor DETR (Wang et al. 2022), and DAB-DETR (Liu et al. 2022) introduced improved positional priors to expedite the training process. Furthermore, approaches like Group DETR (Chen et al. 2022) and DN-DETR (Li et al. 2022) concentrate on label assignment strategies, significantly improving matching stability, particularly during early training stages. Our study primarily focuses on the transformer decoder part, aiming to enhance the quality of query representations and expedite training convergence.

Segmentation-based Scene Text Detectors Segmentation-based approaches commonly integrate pixel-level prediction with subsequent post-processing algorithms to obtain the bounding boxes or polygons corresponding to the detected objects. CRAFT (Baek et al. 2019) utilizes a weakly supervised approach to train character segmentation models. PSENet (Wang et al. 2019a) first predicts the text center region (text kernel) and then obtains the result of text instance segmentation by a progressive scale expansion algorithm. DBNet (Liao et al. 2020) embeds differentiable binarization into the network and predicts the corresponding threshold map in addition to learning the binary segmentation map of the text region. Learning low-level representations makes segmentation-based methods more robust towards domain gaps and font variations. However, the total inference time is considerably impacted by post-processing operations on the CPU. Our proposed model seamlessly integrates the prowess of representation learning, while being free from the need for intricate post-processing.

Regression-based Scene Text Detectors Regression-based methods directly predict the polygon coordinates or Bezier control points. EAST (Zhou et al. 2017) represents an end-to-end anchor-free method that adopts pixel-level regression techniques to handle multi-oriented text instances. ABCNet (Liu et al. 2021) is the first to introduce Bezier curve control points for arbitrarily shaped texts. TESTR (Zhang et al. 2022) and DPText (Ye et al. 2023a) exploit the efficacy of the DETR architecture, wherein they utilize learnable queries as inputs and employ a straightforward MLP head to predict polygon coordinates. We preserve the procedural simplicity inherent in the regression-based methods, while enhancing performance and robustness through a judicious fusion of segmentation.

Methodology Overview Model Architecture. Fig. 1 shows the overall structure of SRFormer. We first leverage ResNet50 (He et al. 2016) as the backbone. Upon updating the flattened features with the transformer encoder, we combine the backbone and updated features with a feature pyramid network module. The fused features are then channeled into both the decoding stage and the mask prediction head, thereby serving as the foundational reference feature within the framework. This interaction between query representations and high-resolution backbone features addresses the information bottleneck observed in the original DETR segmentation heads.
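The front-end just described (backbone, flattened tokens, transformer encoder, FPN-style fusion) can be sketched as a minimal, runnable PyTorch module. This is our own simplified stand-in, not the released SRFormer code: it uses a single feature scale and a 1×1 convolution in place of the full feature pyramid network.

import torch
import torch.nn as nn

class TinySRFormerFrontEnd(nn.Module):
    def __init__(self, d_model=64):
        super().__init__()
        # crude single-scale "backbone" producing 1/8-resolution features
        self.backbone = nn.Sequential(
            nn.Conv2d(3, d_model, 3, stride=8, padding=1), nn.ReLU())
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.fuse = nn.Conv2d(2 * d_model, d_model, 1)  # FPN-style fusion, simplified

    def forward(self, images):
        feat = self.backbone(images)                            # (B, C, H/8, W/8)
        B, C, H, W = feat.shape
        tokens = self.encoder(feat.flatten(2).transpose(1, 2))  # (B, HW, C)
        enc = tokens.transpose(1, 2).reshape(B, C, H, W)
        pixel_map = self.fuse(torch.cat([feat, enc], dim=1))    # feeds decoder + mask head
        return tokens, pixel_map

x = torch.randn(1, 3, 64, 64)
tokens, pixel_map = TinySRFormerFrontEnd()(x)
print(tokens.shape, pixel_map.shape)  # (1, 64, 64) and (1, 64, 8, 8)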
Figure 1: The overview of the proposed SRFormer. We propose a two-step mechanism in the decoder: firstly, acquiring a coarse positional prior with segmentation results, followed by iterative regression refinements in a layer-by-layer fashion. We aim to optimize the intermediate representations and final predictions for improved performance in a concise framework.

Subsequently, we employ a two-stage approach, where a shared group of decoder embeddings is initialized by the encoder output and fed into the decoder to gather richer features through the cross-attention mechanism. We set the first few layers as the Segmentation & Regression chunk to make instance-wise mask predictions along with point-wise coordinate predictions, followed by a Regression-only chunk performing layer-by-layer refinement to obtain more precise polygon control points. Several heads, for mask, coordinate and class score predictions, are adopted in a parallel manner. Query Formulation. Derived from previous successes, we initialize decoder queries with encoder outputs for better performance and faster training convergence. Instead of setting the number of learnable parameters to the number of proposals K, we only set 16 (i.e., the number of polygon control points) groups of learnable embeddings to capture point-wise features and universal control point correlations. They are then equipped with top-K encoder queries q_e ∈ R^{K×d} to provide instance-wise information:

q_d = q_e[\mathrm{ArgTopK}(s_{cls})] + q_p, (1)

where q_d ∈ R^{K×16×d} is the decoder embedding, s_{cls} denotes the classification score predicted from encoder queries, and q_p ∈ R^{16×d} is the point-wise learnable embedding. By combining instance-level and control-point-level queries to form a hierarchical representation, we can effectuate the filtration of similar predictions through instance-level attention, and model global point-to-point relative relationships through point-level attention. In addition, to better utilize the bounding box output from the encoder, we sample 16 equidistant points uniformly along the longer side of the box in a clockwise manner, as proposed in (Ye et al. 2023a). These sampled points are subsequently employed as the initial polygon prediction. We use a sinusoidal positional encoding function PE(·) in conjunction with a two-layer MLP scaling network MLP(·) to enable a precise positional representation for each control point:

q_{pos} = \mathrm{MLP}(\mathrm{PE}(p_{xy})) ∈ R^{K×16×d}, (2)

where p_{xy} ∈ R^{K×16×2} represents the coordinates of all polygon control points.

Segmentation & Regression Chunk Mask prediction. As demonstrated in Fig. 1, we only perform text instance segmentation at the initial layers of the decoder, based on the experimental finding that instance segmentation masks show favorable results in the first few layers and can hardly be refined layer-by-layer even with improved query representations in deeper decoder layers. With this implementation, we can also reduce the computation cost in the decoder with a minimal performance drop. To perform mask prediction, we build the pixel embedding map fused from backbone and encoder features. Given the hierarchical nature of the queries in the decoder, it becomes imperative to aggregate point-level queries for text instance-level prediction. We show a closer look at our mask head in Fig. 2. Specifically, we first use a 1D Conv with a large kernel size (k = 9 in our default setting) to capture inter-point geometry knowledge, followed by a 1 × 1 Conv layer to learn point-level aggregation weights.
Then we adopt a weighted summation of the queries along the control point dimension to adaptively formulate the mask embedding. Finally, we dot-product each mask embedding q_m with the pixel embedding map F^{1/8} to obtain instance masks \hat{m}:

\hat{m} = F(q_m) \cdot P(F^{1/8}) ∈ R^{K×H'×W'}, (3)

where F denotes a two-layer MLP and P is a convolutional layer that makes a linear projection for semantic alignment.

Figure 2: A detailed structure of the mask prediction head.

Mask as regression prior. To bridge the gap between the dense representation of segmentation masks and polygon control points, we first formulate a dense anchor grid map G ∈ R^{H'×W'×2} of the same resolution as the segmentation masks:

G = \mathrm{meshgrid}(\mathrm{linspace}(\tfrac{1}{H'+1}, \tfrac{H'}{H'+1}, H'), \mathrm{linspace}(\tfrac{1}{W'+1}, \tfrac{W'}{W'+1}, W')), (4)

where the linspace(start, end, num) function evenly generates num points in the closed interval [start, end]. Subsequently, we perform a Hadamard product between the anchor grids and the normalized text segmentation results to obtain the 'center of gravity' for each text instance:

\hat{p}_a = \sum^{H'W'} \mathrm{softmax}_{H'W'}(\hat{m}/\tau) \odot G, (5)

where \hat{p}_a ∈ R^{B×K×2} are the anchor points for subsequent regression, and τ is a scaling factor set to 0.3. The mask results are normalized using the softmax function across the spatial dimension to ensure the output \hat{p}_a falls within the interval [0, 1]. Empirical analysis has demonstrated that text confined to a small spatial area exhibits anchors that are noticeably attracted to the central region of the image. The scaling factor is applied to enhance the discriminative contrast between the response values pertaining to text and non-text regions, to mitigate the potential influence of non-textual areas, characterized by lower response values yet encompassing larger pixel extents, on the final anchor outcome. Given that most text instances are regular convex geometries, their center of gravity coincides with the geometric center, making it a suitable reference point for regression purposes. Discussion. In addition, our approach differs from MaskDINO (Li et al. 2023) in how positional priors are obtained from instance masks. MaskDINO employs the boundary rectangles of connected regions within binary segmentation maps, which can prove inaccurate in text detection tasks, particularly during the initial training stages. Instance segmentation masks are obtained by computing the representational similarity between query and pixel embeddings, potentially leading to multiple responses surpassing a given threshold. This limitation becomes more pronounced in text detection, primarily due to the abundance of visually akin instances, especially in scenarios featuring high text density, such as documents. Our methodology inherently does not necessitate highly precise mask predictions.
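The "mask as regression prior" step of Eqs. (4)-(5) amounts to a soft-argmax over the mask; a compact sketch follows (assuming PyTorch; function and variable names are ours, not the released implementation).

import torch

def anchors_from_masks(masks, tau=0.3):
    # Soft 'center of gravity' anchors from instance masks (Eqs. 4-5).
    # masks: (K, H, W) raw mask responses; returns (K, 2) anchors in (0, 1).
    K, H, W = masks.shape
    ys = torch.arange(1, H + 1) / (H + 1)           # linspace(1/(H+1), H/(H+1), H)
    xs = torch.arange(1, W + 1) / (W + 1)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")  # dense anchor grid G (Eq. 4)
    # temperature tau sharpens the spatial softmax, as discussed above
    w = torch.softmax(masks.reshape(K, -1) / tau, dim=-1).reshape(K, H, W)
    ax = (w * gx).sum(dim=(1, 2))                   # expected x-coordinate
    ay = (w * gy).sum(dim=(1, 2))
    return torch.stack([ax, ay], dim=-1)

m = torch.randn(3, 32, 32)
print(anchors_from_masks(m))  # three (x, y) anchors, each in (0, 1)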
Specifically, we build instance-level ROI indicator ˆr ∈RK×H′×W ′ from mask prediction of all scales by: ˆr = softmaxK( ˆm) ⊙em (6) we introduce the semantic segmentation mask em to achieve two objectives: firstly, to softly filter out non-textual regions within ˆm, and secondly, to incorporate supplementary supervision as proposed in (Long et al. 2022). The ROI for each instance is softly excluded from each other, thereby augmenting the differentiation among query representations and facilitating model optimization. Subsequently, we extract instance-level and global text features from multi-scale Figure 3: The Mask-informed Query Enhancement (MQE) module extracts multi-level pixel features guided by instance and global ROI indicators, which are derived from instance and semantic segmentation masks respectively. encoder features normalized by the spatial-wise summation of ROI indicators to craft the final output I ∈RK×d for query enhancement: I = MHA( PL PH′W ′ (ˆrl ⊙F l) α · L ) + PH′W ′ (Γ( em) ⊙F 1) β (7) where F l represents encoder features of level l, the normalization factors α, β are formulated as PH′W ′ ˆr and PH′W ′ Γ( em) respectively, Γ denotes a simple interpolation function to perform spatial alignment and MHA(·) represents a multi-head attention module to capture inter-instance relations. After applying linear projection to align with the original queries, we integrate the output of the MQE module directly into the query tensor. Enhanced queries are then directed into the subsequent decoder layers. End-to-End Optimization Matching. The primary objective of the matching process is to ascertain an optimal permutation σ : [ ˆY ] −→[Y ] of N elements that minimize the matching cost between the set predictions ˆY and ground truths Y : arg min σ N X n=1 C( ˆY (σ(n)), Y (n)) (8) where N is the number of ground truth instances per image. We use Hungarian matching to solve the corresponding bipartite matching problem. The cost function for decoder output is formulated as: Cdec =λclsFL(ˆsσ(n) dec ) + λcoordΣ16 i=1∥ˆpσ(n) i −pn i ∥ + λmaskDice( ˆmσ(n), mn gt) (9) where ˆmσ(n) is the mask prediction for text instances, ˆpσ(n) i denotes coordination prediction of the i-th control point and λcls, λcoord, and λgiou are hyper-parameters to balance different cost proportions. FL is defined as the difference between the positive and negative term: FL(x) = The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 858 Methods External Data Total-Text CTW1500 ICDAR19-ArT P R F1 P R F1 P R F1 Segmentation-based Methods TextSnake (Long et al. 2018) Syn800K 82.7 74.5 78.4 67.9 85.3 75.6 PAN (Wang et al. 2019b) Syn800K 89.3 81.0 85.0 86.4 81.2 83.7 61.1 79.4 69.1 CRAFT (Baek et al. 2019) Syn800K+IC13 87.6 79.9 83.6 86.0 81.1 83.5 77.2 68.9 72.9 DB (Liao et al. 2020) Syn800K 87.1 82.5 84.7 86.9 80.2 83.4 I3CL (Du et al. 2022) Syn800K(+MLT17+LSVT) 89.2 83.7 86.3 87.4 84.5 85.9 82.7 71.3 76.6 Regression-based Methods ABCNet v2 (Liu et al. 2021) Syn150k+MLT17 90.2 84.1 87.0 85.6 83.8 84.7 FSG (Tang et al. 2022) Syn800K 90.7 85.7 88.1 88.1 82.4 85.2 TESTR (Zhang et al. 2022) Syn150k+MLT17 93.4 81.4 86.9 92.0 82.6 87.1 SwinTextSpotter (Huang et al. 2022) Syn150k+MLT17+IC13/15 88.0 88.0 TextBPN++ (Zhang et al. 2023) Syn800K+MLT17 91.8 85.3 88.5 87.3 83.8 85.5 81.1 71.1 75.8 DPText (Ye et al. 
2023a) Syn150k+MLT17(+LSVT) 91.3 86.3 88.7 91.7 86.2 88.8 83.0 73.7 78.1 Ours-SRFormer (#1Seg) Syn150k+MLT17(+LSVT) 92.2 86.6 89.3 91.6 87.7 89.6 86.2 73.1 79.1 Ours-SRFormer (#2Seg) Syn150k+MLT17(+LSVT) 92.2 87.9 90.0 89.4 89.6 89.5 86.2 73.4 79.3 Ours-SRFormer (#3Seg) Syn150k+MLT17(+LSVT) 91.5 87.9 89.7 89.4 89.8 89.6 86.1 73.5 79.3 Table 1: Quantitative detection results on several benchmarks. “P”, “R” and “F1” denote Precision (%), Recall (%) and F1-score (%), respectively. The backbone network is all ResNet50, except for SwinTextSpotter (SwinT), PAN (ResNet18), CRAFT and TextSnake (VGG16). We use #Seg to denote the number of decoder layers assigned to the Segmentation&Regression chunk. −α(1 −x)γlog(x) + (1 −α)xγlog(1 −x). Dice loss is exclusively incorporated within the layers of Segmentation & Regression chunk. Loss function. We leverage the focal loss with α = 0.25, γ = 2 for instance classification. Dice loss in corporation with BCE loss are exploited to supervise both instance and semantic mask predictions. In addition, L1 distance loss is used for regressed polygon control points: L =λclsLcls(ˆsσ(n) dec , sn gt) + λmaskLmask( ˆmσ(n), mn gt) + λmaskLmask( em, ΣNmn gt) + λregLreg(ˆpσ(n), pn gt) (10) where λcls, λmask and λreg are balancing factors. Experiment Datasets and benchmarks. TotalText (Ch’ng and Chan 2017) features a diverse range of text instances, including horizontal, multi-oriented, and curved text in natural scenes. The dataset contains over 1,500 high-resolution images with annotations, making it suitable for evaluating the robustness of text detection models across different text layouts and orientations. Rot.Total-Text constitutes a test set derived from the Total-Text test set, as initially proposed in (Ye et al. 2023a). We also integrate it to facilitate the development of optimal performance models. CTW1500 (Liu et al. 2019) consists of 1,000 training images and 500 test images, with various text instances exhibiting diverse orientations, fonts, and perspectives. ICDAR19-ArT (Chng et al. 2019) is a large arbitrary-shape scene text benchmark, which includes multiple languages. We also adopt the following additional datasets for pre-training: SynthText150k (Liu et al. 2020) is synthesized by overlaying computer-generated text on natural images. This approach allows for large-scale data generation and fine-grained control over text characteristics, such as size, font, and orientation. The dataset contains contains 94,723 images with multi- oriented texts and 54,327 images with curved texts, providing a rich resource for pretraining text detection models under various synthetic scenarios. MLT17 (Nayef et al. 2017) is introduced as part of the ICDAR17 Robust Reading Competition, which is a multi-language large-scale scene text dataset. Implementation details. We adopt ResNet-50 (He et al. 2016) as the backbone, followed by a deformable transformer encoder with 8 heads and 4 sampling points to update the features. We set the number of proposals to 100 and polygon control point embedding to 16. Model pre-training is made on a mixture of SynthText150K, MLT17 and TotalText dataset for a total number of 300k iterations. The starting learning rate is 1e-4 and decays to 1e-5 at the 240k iteration. We fine-tune our model on TotalText and CTW1500 with 30k iteration with learning rates set to 1e-4 and 5e-5 respectively, which is then divided by 10 at the 24k iteration. For evaluation on ICDAR19-ArT dataset, we also incorporate LSVT for pre-training, following (Sun et al. 
2019). The loss weights for classification, mask prediction and ctrl-points regression are set to λcls=2, λmask = λreg = 5, respectively. We adopt multi-scale training strategy with the shortest edge ranging from 480 to 896, and the longest edge kept within 1,600, following most of previous studies. For evaluation, we resize the shorter side to 1,000 and keep the longer side within 1,800. All training and evaluation are conducted on a system with 8 NVIDIA 3090 graphics cards. Comparison with SoTA Methods Our proposed methodology is evaluated on three benchmark datasets, namely Total-Text, CTW1500, and ICDAR19 ArT. The obtained quantitative results are then systematically compared with those achieved by prior approaches, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 859 #Seg Layer #Reg Layer P R F1 FPS 1 5 88.6 84.5 86.5 9.7 2 4 89.0 85.1 87.0 8.6 3 3 88.0 86.1 87.1 7.9 Table 2: Ablation results of several variations of SRFormer with different decoder layers allocation. AnchorReg MQE F1 Improv. Extra Param. FPS 85.5 10.5 ✓ 86.0 0.5 0.39M 10.5 ✓ 86.7 1.2 2.95M 7.9 ✓ ✓ 87.1 1.6 3.34M 7.9 Table 3: Ablations on test sets with SRFormer (#3Seg). “AnchorReg” denotes the control point regression based on mask-generated anchor points. “MQE” represents our proposed Mask-guided Query Enhancement module. as illustrated in Table 1. Our method consistently achieves state-of-the-art performance across these datasets. We use #Seg to denote the number of layers in the Segmentation & Regression Chunk, where the total number of decoder layers stays 6. Compared to previous SoTA methods, for example, SRFormer outperforms the state-of-the-art DPText by +1.3%, +0.7% and +1.2% on TotalText, CTW1500 and IDCAR19-ArT respectively. Additionally, SRFormer surpasses SoTA segmentation-based method I3CL by a notable gap of +2.7%, +3.6% and +2.7% on three benchmarks. Ablation Studies All the ablation experiments are conducted on TotalText without any pre-training. All models, unless specified otherwise, are trained for 50K iterations. Decoder layer number. In this study, we undertook an investigation into the impact of varying the number of layers assigned to Segmentation & Regression Chunk on the final performance of the model. In general, placing greater emphasis on segmentation learning tends to yield improved recall rates, albeit at the potential cost of reduced precision, which can be attributed to the absence of a finegrained, layer-by-layer polygon refinement process in the Regression-only Chunk. We’ve also noticed that the decoder’s first layer achieves favorable segmentation results that can hardly be further improved in subsequent layers, which could partially explain the marginal performance gain by simply adding more segmentation layers. It’s worth noting that our method yields a competitive 87.1% F1-score with only 50k iteration training solely on TotalText. Regression from anchor points As listed in Table. 3, leveraging the anchor prior provided by instance masks brings about +0.5% performance improvement. The utilization of mask-generated anchor points constitutes a valuable positional prior, especially at early training stages, enabling the model to learn geometric relationships and characteristics between control points. (a) TotalText (b) Rot.TotalText Figure 4: Training convergence of DPText and ours. Mask-informed Query Enhancement Incorporating MQE solely brings a notable +1.2% performance gain, as shown in Table. 
3 MQE module extracts distinctive pixel features for different queries by utilizing existing instance and semantic mask predictions, introducing less than 1M parameters at each layer. We believe that MQE can be treated as a cross-attention mechanism, where the mask functions analogously to positional embedding, guiding the model to extract richer features in a designated region. Discussion Training efficiency. Fig. 4 shows convergence curves, showcasing the fluctuation of the evaluation F1-score with increasing training iterations. When training from scratch on TotalText and Rot.TotalText, The observed trend in the figure reveals that our model consistently outperforms DPText in all tests beyond the 5,000th iteration, within the context of the 50k-iteration training schedule. In addition, we extended the training schedule of DPText twofold and generated its corresponding convergence plot. Despite the doubled training schedule, the performance of DPText still falls short of our model on both datasets. These findings emphasize the prospective advantages associated with the integration of segmentation, leading to enhanced convergence and superior performance in contrast to approaches solely reliant on regression methodologies. Predictions at each decoder layer. We take a closer look at the output of each decoder layer in SRFormer and DPText to further reveal potential benefits brought by combined segmentation and regression, as listed in Table. 4. In the context of pretrained models, our method produces SoTA result of 88.95% F1-score with only two decoder layers, exceeding The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 860 test layer# Ours DPText Ours∗ DPText∗ layer 0 86.67 81.64 82.13 (+26.96) 55.17 layer 1 88.95 86.67 84.32 76.87 layer 2 89.33 87.98 85.75 83.86 layer 5 89.96 88.72 86.23 85.64 Table 4: F1 scores (%) on TotalText when using different decoder layers in the same model. ∗denotes models trained from scratch for 50k iterations. Data Proportion DPText SRFormer (Ours) P R F1 P R F1 10% Data 83.0 69.3 75.6 82.9 71.8 76.9 50% Data 86.7 77.7 82.0 87.4 81.4 84.3 Table 5: Evaluation performance (%) of DPText and our model on TotalText dataset with fewer labeled images for training provided. the six-layer DPText. For models trained from scratch, this gap is even more pronounced. The F1-scores of the predictions made by first decoder layer (referred to as “layer 0” in the table) exhibit a noteworthy gap of 26.96%. Data efficiency. Segmentation-based techniques are commonly recognized for their superior data efficiency, demanding a considerably smaller training dataset to attain satisfactory generalization performance, which could be attributed to learning pixel-level representations by engaging with lowlevel features. Since our method utilizes a segmentation technique, our method can show better data efficiency. To reveal this property, we trained SRFormer and DPText with only 10% and 50% samples in the TotalText dataset for 30k and 50k iterations, respectively. Table. 5 shows the result. Using a limited training dataset comprising only 10% of available samples, we achieved a F1-score of 76.9%, demonstrating an improvement of approximately 1.9% in comparison to DPText. Notably, upon increasing the training data availability to 50%, the performance disparity further expands to 2.3%, and our proposed method exhibits a F1-score of 84.3%, underscoring its superior efficacy. Robustness. As listed in Table. 
6, we evaluate the crossdomain robustness inherent in two models by subjecting them to training and testing regimens involving disparate datasets. While TotalText addresses a more limited scope within scenes and languages, MLT, in contrast, encompasses a wide array of both domains. When trained on the combination of all, our model exhibits a relatively superior performance. The exclusion of MLT data engenders an observable decrement in the performance of DPText. In contrast, our proposed model has an elevated level of robustness, evidenced by a significant performance upswing of +13.18% on MLT and +4.56% on TotalText, respectively. From an alternative vantage point, this performance differential tends to narrow when both models are only trained on synthetic data, reflecting SRFormer’s capacity to cultivate real-world generalization even with limited samples. Training Set DPText SRFormer (Ours) MLT TT MLT TT SynthText + TT + MLT 70.54 80.93 71.11 81.81 SynthText + TT 50.10 67.40 63.28 71.96 SynthText 41.14 48.71 42.87 51.52 Table 6: Evaluation F1-score (%) of DPText and SRFormer on MLT17 and TotalText (TT) dataset. Figure 5: Average time spent on the GPU (i.e., for inference) and CPU (i.e., for post-processing) side per image. For SRFormer and DPText, we keep the longer side of input image within 1,800, while we resize the input to 1,600 for DBNet. Inference time analysis. As mentioned in Sec , segmentation methods inherently require complex post-processing to obtain the final outputs from the identified segmentation map. While our method incorporates both segmentation and regression losses during training, the final output is determined by the regression head, eliminating the need for the post-processing step. Fig. 5 shows the required time for model inference and post-processing. The segmentation method, DBNet, incurs significant post-processing time, resulting in four times longer than model inference time and high variability per image. In contrast, the regression method, DPText, and ours demonstrate negligible post-processing time. Additionally, it’s worth denoting that SRFormer#2L, our model with only two decoding layers, shows a similar inference cost to DPText but achieves better performances (as listed in Table. 4). Conclusion We propose SRFormer, a DETR-based model with incorporated segmentation and regression. By introducing mask prediction, we utilize it to provide a location prior for regression and to extract distinctive information for decoder queries from pixel features, enhancing robustness against textual deformations and improving domain transferability. Without compromising the simplicity of post-processing inherent to regression models, various experiments demonstrate that our method yields notable improvements in training efficiency, data utilization, and overall performance across various benchmarks. While the efficacy of the proposed method is substantiated within the context of text detection, we believe its prospective effectiveness in divergent detection tasks necessitating domain robustness and data efficiency. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 861 Acknowledgements The work is supported in part by National Key R&D Program of China (2022ZD0160201), HK RIF (R7030-22), HK ITF (GHP/169/20SZ), the Huawei Flagship Research Grants in 2021 and 2023, the HKU-SCF FinTech Academy R&D Funding Schemes in 2021 and 2022, Hong Kong RGC GRF (HKU 17208223), and the Shanghai Artificial Intelligence Laboratory (Heming Cui is the Ph.D. 
advisor of Qingwen Bu and a courtesy researcher in this lab). References Baek, Y.; Lee, B.; Han, D.; Yun, S.; and Lee, H. 2019. Character region awareness for text detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 9365–9374. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In European conference on computer vision, 213–229. Springer. Chen, Q.; Chen, X.; Zeng, G.; and Wang, J. 2022. Group detr: Fast training convergence with decoupled one-to-many label assignment. arXiv preprint arXiv:2207.13085. Ch’ng, C. K.; and Chan, C. S. 2017. Total-text: A comprehensive dataset for scene text detection and recognition. In 2017 14th IAPR international conference on document analysis and recognition (ICDAR), volume 1, 935–942. IEEE. Chng, C. K.; Liu, Y.; Sun, Y.; Ng, C. C.; Luo, C.; Ni, Z.; Fang, C.; Zhang, S.; Han, J.; Ding, E.; et al. 2019. Icdar2019 robust reading challenge on arbitrary-shaped text-rrc-art. In 2019 International Conference on Document Analysis and Recognition (ICDAR), 1571–1576. IEEE. Du, B.; Ye, J.; Zhang, J.; Liu, J.; and Tao, D. 2022. I3cl: Intra-and inter-instance collaborative learning for arbitraryshaped scene text detection. International Journal of Computer Vision, 130(8): 1961–1977. Gu, W.; Bai, S.; and Kong, L. 2022. A review on 2D instance segmentation based on deep neural networks. Image and Vision Computing, 120: 104401. He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. Huang, M.; Liu, Y.; Peng, Z.; Liu, C.; Lin, D.; Zhu, S.; Yuan, N.; Ding, K.; and Jin, L. 2022. Swintextspotter: Scene text spotting via better synergy between text detection and text recognition. In proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4593–4603. Li, F.; Zhang, H.; Liu, S.; Guo, J.; Ni, L. M.; and Zhang, L. 2022. Dn-detr: Accelerate detr training by introducing query denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13619–13627. Li, F.; Zhang, H.; Xu, H.; Liu, S.; Zhang, L.; Ni, L. M.; and Shum, H.-Y. 2023. Mask dino: Towards a unified transformer-based framework for object detection and segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3041–3050. Liao, M.; Wan, Z.; Yao, C.; Chen, K.; and Bai, X. 2020. Real-time scene text detection with differentiable binarization. In Proceedings of the AAAI conference on artificial intelligence, volume 34, 11474–11481. Liu, S.; Li, F.; Zhang, H.; Yang, X.; Qi, X.; Su, H.; Zhu, J.; and Zhang, L. 2022. Dab-detr: Dynamic anchor boxes are better queries for detr. arXiv preprint arXiv:2201.12329. Liu, Y.; Chen, H.; Shen, C.; He, T.; Jin, L.; and Wang, L. 2020. Abcnet: Real-time scene text spotting with adaptive bezier-curve network. In proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 9809– 9818. Liu, Y.; Jin, L.; Zhang, S.; Luo, C.; and Zhang, S. 2019. Curved scene text detection via transverse and longitudinal sequence connection. Pattern Recognition, 90: 337–345. Liu, Y.; Shen, C.; Jin, L.; He, T.; Chen, P.; Liu, C.; and Chen, H. 2021. Abcnet v2: Adaptive bezier-curve network for realtime end-to-end text spotting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11): 8048–8064. 
Long, S.; Qin, S.; Panteleev, D.; Bissacco, A.; Fujii, Y.; and Raptis, M. 2022. Towards end-to-end unified scene text detection and layout analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1049–1059. Long, S.; Ruan, J.; Zhang, W.; He, X.; Wu, W.; and Yao, C. 2018. Textsnake: A flexible representation for detecting text of arbitrary shapes. In Proceedings of the European conference on computer vision (ECCV), 20–36. Meng, D.; Chen, X.; Fan, Z.; Zeng, G.; Li, H.; Yuan, Y.; Sun, L.; and Wang, J. 2021. Conditional detr for fast training convergence. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3651–3660. Nayef, N.; Yin, F.; Bizid, I.; Choi, H.; Feng, Y.; Karatzas, D.; Luo, Z.; Pal, U.; Rigaud, C.; Chazalon, J.; et al. 2017. Icdar2017 robust reading challenge on multi-lingual scene text detection and script identification-rrc-mlt. In 2017 14th IAPR international conference on document analysis and recognition (ICDAR), volume 1, 1454–1459. IEEE. Sun, Y.; Ni, Z.; Chng, C.-K.; Liu, Y.; Luo, C.; Ng, C. C.; Han, J.; Ding, E.; Liu, J.; Karatzas, D.; et al. 2019. ICDAR 2019 competition on large-scale street view text with partial labeling-RRC-LSVT. In 2019 International Conference on Document Analysis and Recognition (ICDAR), 1557–1562. IEEE. Tang, J.; Zhang, W.; Liu, H.; Yang, M.; Jiang, B.; Hu, G.; and Bai, X. 2022. Few could be better than all: Feature sampling and grouping for scene text detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4563–4572. Wang, W.; Xie, E.; Li, X.; Hou, W.; Lu, T.; Yu, G.; and Shao, S. 2019a. Shape robust text detection with progressive scale expansion network. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 9336– 9345. Wang, W.; Xie, E.; Song, X.; Zang, Y.; Wang, W.; Lu, T.; Yu, G.; and Shen, C. 2019b. Efficient and accurate arbitraryshaped text detection with pixel aggregation network. In The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 862 Proceedings of the IEEE/CVF international conference on computer vision, 8440–8449. Wang, Y.; Zhang, X.; Yang, T.; and Sun, J. 2022. Anchor detr: Query design for transformer-based detector. In Proceedings of the AAAI conference on artificial intelligence, volume 36, 2567–2575. Ye, M.; Zhang, J.; Zhao, S.; Liu, J.; Du, B.; and Tao, D. 2023a. Dptext-detr: Towards better scene text detection with dynamic points in transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, 3241–3249. Ye, M.; Zhang, J.; Zhao, S.; Liu, J.; Liu, T.; Du, B.; and Tao, D. 2023b. Deepsolo: Let transformer decoder with explicit points solo for text spotting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19348–19357. Zhang, S.-X.; Yang, C.; Zhu, X.; and Yin, X.-C. 2023. Arbitrary shape text detection via boundary transformer. IEEE Transactions on Multimedia. Zhang, X.; Su, Y.; Tripathi, S.; and Tu, Z. 2022. Text spotting transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9519– 9528. Zhao, Z.-Q.; Zheng, P.; Xu, S.-t.; and Wu, X. 2019. Object detection with deep learning: A review. IEEE transactions on neural networks and learning systems, 30(11): 3212–3232. Zhou, X.; Yao, C.; Wen, H.; Wang, Y.; Zhou, S.; He, W.; and Liang, J. 2017. East: an efficient and accurate scene text detector. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 5551–5560. 
Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2020. Deformable detr: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159.
2024
96
18,806
Knowledge-Aware Explainable Reciprocal Recommendation Kai-Huang Lai1, Zhe-Rui Yang1, Pei-Yuan Lai2, Chang-Dong Wang1*, Mohsen Guizani3, Min Chen4,5 1School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China 2South China Technology Commercialization Center, Guangzhou, China 3Machine Learning Department, Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), Abu Dhabi, UAE 4School of Computer Science and Engineering, South China University of Technology, Guangzhou, China 5Pazhou Lab, Guangzhou, China [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Reciprocal recommender systems (RRS) have been widely used in online platforms such as online dating and recruitment. They can simultaneously fulfill the needs of both parties involved in the recommendation process. Due to the inherent nature of the task, interaction data is relatively sparse compared to other recommendation tasks. Existing works mainly address this issue through content-based recommendation methods. However, these methods often implicitly model textual information from a unified perspective, making it challenging to capture the distinct intentions held by each party, which further leads to limited performance and the lack of interpretability. In this paper, we propose a Knowledge-Aware Explainable Reciprocal Recommender System (KAERR), which models metapaths between two parties independently, considering their respective perspectives and requirements. Various metapaths are fused using an attention-based mechanism, where the attention weights unveil dual-perspective preferences and provide recommendation explanations for both parties. Extensive experiments on two real-world datasets from diverse scenarios demonstrate that the proposed model outperforms state-of-the-art baselines, while also delivering compelling reasons for recommendations to both parties. Introduction Reciprocal recommender systems (RRS) (Pizzato et al. 2010) have become increasingly popular in various online platforms such as online dating (Neve and Palomares 2019; Xia et al. 2016) and job recruitment (Jiang et al. 2020; Yang et al. 2022). Unlike traditional recommender systems that make uni-directional recommendations to users, RRS aims to fulfill the bilateral needs between two parties by making reciprocal recommendations (e.g. recommending satisfactory date partners to each other, matching job seekers and recruiters, etc.). However, building an effective RRS faces unique challenges compared to traditional recommender systems. One *Corresponding author Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. major issue is the sparsity of interaction data. For example, in job recommendation, once a job seeker accepts an offer, the interaction between the job seeker and the recruiter becomes inactive until the job seeker looks for new jobs. Meanwhile, as the job position gets filled, the recruiter will stop interacting with candidates for that position. Such bidirectional inactivation after a successful matching leads to significantly fewer historical interaction signals for accurately modeling the preferences of both sides, compared to the abundant user-item interactions in traditional recommender systems. To alleviate the data sparsity issue, existing works (Akehurst et al. 2011; Luo et al. 2020) have explored leveraging side information such as resumes and job postings. 
However, they rely on textual data with inconsistent formats from both sides. The free-form nature of such user-generated content makes it difficult to precisely extract and match preferences. In addition, they treat the information in a unified view without distinguishing between the two parties involved. However, the two sides often have distinct intentions and preferences when evaluating the same content. The inability to capture such dual perspectives from inconsistent data formats limits the accuracy and interpretability of existing models. These limitations highlight the need for modeling dual perspectives in reciprocal recommendation. For instance, in job matching scenarios, a candidate and recruiter may align on certain dimensions like skills and industry, but have mismatches in other dimensions like location and education preferences due to their different focuses. Capturing such nuanced differences in intentions and motivations is crucial for improving the accuracy of matches. To address the limitations of existing methods, in this paper, we propose a Knowledge-Aware Explainable Reciprocal Recommender System (KAERR) that incorporates side information from both parties involved in the recommendation process in a knowledge graph. By extracting metapaths between the two parties, KAERR can explicitly capture their distinct preferences and intentions. We encode the metapaths from dual perspectives using a bidirectional LSTM (Hochreiter and Schmidhuber 1996) and fuse them with an attention The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8636 mechanism to distill important signals differently. In addition, we make reciprocal predictions and optimize the model with a bilateral quadruple-based loss function. The learned attention weights also provide explainability by revealing the relative importance of different metapaths. By effectively modeling the knowledge graph information from dual perspectives, KAERR can improve recommendation accuracy and provide explanations. Extensive experiments verify the effectiveness of KAERR over state-of-the-art baselines. The contributions of our work are summarized as follows: • We propose a novel Knowledge-Aware Explainable Reciprocal Recommender System (KAERR) that models metapaths from a knowledge graph independently from the dual perspectives of the two parties involved using a bidirectional LSTM encoder. • Extensive experiments on two real-world datasets demonstrate that KAERR consistently outperforms stateof-the-art baselines on reciprocal recommendation tasks. • To the best of our knowledge, our model is the first reciprocal recommender system that can provide compelling reasons for the recommendations to both parties by revealing the relative importance of different metapaths through attention weights. Related Work Reciprocal Recommendation Existing RRS studies can be grouped into several categories based on their methodology, including collaborative filtering-based methods (Cai et al. 2012; Xia et al. 2016; Neve and Palomares 2019), content-based methods (Alanazi and Bain 2013; Akehurst et al. 2011; Yang et al. 2017), hybrid methods (Zhou et al. 2023), and sequential-based methods (Zheng et al. 2023a). Collaborative filtering methods use past user interactions to infer preferences from similar patterns, but struggle with minimal history (cold-start). Content-based approaches need detailed text data to match user profiles, relying on the quality of this data. 
Hybrid methods combine behavior and content analysis to improve recommendations. Sequential methods use neural networks for sequence matching, yet also require extensive interaction history. Generally, current RRS inadequately address sparse interactions or the mutual aspect between two sides. Developing an approach that can overcome sparse bilateral signals and suit the reciprocal setting remains an open challenge. Knowledge-Aware Recommendation Knowledge graphs offer valuable context by mapping entities and their relationships, improving recommender systems’ representation learning. Knowledge-aware recommendation techniques fall into three categories: embeddingbased methods (Zhang et al. 2016; Wang et al. 2018; Cao et al. 2019) use entity and relation embeddings from knowledge graphs in user and item representations; path-based methods (Wu, Zhang, and Lin 2022; Li et al. 2022) extract knowledge graph metapaths to understand user-item connections; and GNN-based methods (Wu, Zhang, and Lin 2022; Li et al. 2022) utilize graph neural networks to learn from knowledge graph structures. While these methods enrich semantics, they are not specifically tailored for reciprocal recommendations and fail to differentiate the distinct intentions and preferences of both parties involved. Explainable Recommendation Explainable recommendation is a key research area with diverse explanation styles, such as predefined templates (Li, Chen, and Dong 2021), ranked sentences (Li, Zhang, and Chen 2021), knowledge graph paths (Xian et al. 2019), reasoning rules (Shi et al. 2020), and generated natural language (Li, Zhang, and Chen 2020). These styles range from using fixed templates and selected review sentences to leveraging knowledge graph semantics, inference rules, and language models to create tailored explanations. Nevertheless, most systems generate generic explanations without considering the unique needs of each party in a reciprocal recommendation scenario, a significant drawback for reciprocal recommendations where individual motivations and priorities vary greatly. Preliminaries and Notations To facilitate discussion in the following sections, we take the example of an online recruitment platform. Next, we formally define the notations for the concepts involved. Bilateral Interaction Assume that we have a set of candidates C = {c1, c2, · · · , cM} and a set of jobs J = {j1, j2, · · · , jN} posted by recruiters, where M and N are the total numbers of candidates and jobs. Each candidate or recruiter can send requests to jobs or candidates that meet their criteria. All accepted requests form a matching set M = {(ci, jk) | ci ∈C, jk ∈J }. Rejected requests lead to unilateral matches, which are recorded in matrix UM×N, where uik = 1 means candidate ci applied for job jk but got rejected, and uik = −1 means recruiter of job jk invited candidate ci but got declined, and the default value within the matrix is 0. Knowledge Graph We construct a knowledge graph G = {(h, r, t)|h, t ∈E, r ∈R}, where E represents the set of entities including candidates, jobs and their attributes, and R represents the set of relations between entities. Each triplet (h, r, t) denotes a head entity h, relation r and tail entity t. For an example, (CandidateA, HasSkill, Java) indicates CandidateA has the skill Java. Metapath A metapath is a sequence of entity types and relation types that defines a specific semantic path between entities. 
For instance, a metapath of “Candidate-HasSkillSkill-RequireSkill-Job” reveals the skills that candidates possess which are required for certain jobs. We pre-define a set of metapath patterns and extract all metapath instances from the knowledge graph. A metapath p ∈P can be denoted as (e1, r1, e2, · · · , rn−1, en), where ei ∈E represents entities and rj ∈R represents relations. Problem Definition Given the bilateral interaction history M, U and knowledge graph G, our goal is to learn a matching function f(ci, jk) that predicts the matching probability between candidate ci and job jk based on their interaction records and the metapaths Pi,k that connect them within the knowledge graph. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8637 Candidate Salary Job Skill Degree DesireSalary RequireSkill RequireDegree HasDegree OfferSalary HasSkill Knowledge Graph Embedding Layer Metapath Encoder Extracted Metapath KG HigherSalary HigherDegree SimilarSkill Attention Attention Predict Predict Figure 1: The overall framework of KAERR. Method In this section, we present the proposed Knowledge-Aware Explainable Reciprocal Recommender System (KAERR) (shown in Figure 1), which incorporates two main modules: 1) Dual-Perspective Metapath Encoder, which encodes the metapaths from the perspectives of both candidates and jobs independently using a BiLSTM encoder; 2) Attentive Metapath Fusion, which learns attention weights for each metapath and fuses the dual representations based on the attention weights. In addition to the above two modules, we adopt MLP-based methods to predict matching probabilities from both perspectives and average them as the final prediction. Finally, we optimize the model by minimizing a proposed bilateral quadruple-based loss that considers both bilateral matches and unilateral matches to enhance performance in reciprocal recommendation. Dual-Perspective Metapath Encoder To capture the distinct preferences of candidates and jobs over each metapath, we first encode the metapath instances from each side independently. This is because the same metapath may imply different intentions from two sides. For example, the metapath “Candidate-HasDegreePhD-LowerDegree-Bachelor-RequireDegree-Job” indicates a positive signal for the recruiter that the candidate’s education level meets the requirement. However, it may not be that important or even negative for the candidate who pursues a higher degree. Therefore, modeling metapaths from dual perspectives is necessary. We choose to use a BiLSTM encoder because each metapath can be seen as a sequence consisting of entities and relations. LSTM is adept at feature extraction from sequential data, including the ability to handle dependencies within sequences. Our choice to opt for LSTM over Transformers (Vaswani et al. 2017) is motivated by the need for computational efficiency and to mitigate overfitting risks, considerations that become significant in the context of processing metapaths that are typically short and exhibit limited variability. By treating the candidate and job as the start of the sequence respectively in two LSTM directions, the BiLSTM encoder is able to learn the dual-perspective representations for each metapath. For modeling each metapath instance p = (e1, r1, ..., rn−1, en) ∈ Pi,k between candidate ci and job jk, we first map the elements in the metapath to low-dimensional embeddings through a Knowledge Graph Embedding Layer initialized by TransR (Lin et al. 2015). 
TransR is able to capture the structural features of entities and relations, which facilitates subsequent metapath modeling. Specifically, the embedding of the metapath instance is:
$$E = \operatorname{Embed}(p) = [e_1, e_2, \ldots, e_T], \quad (1)$$
where $E \in \mathbb{R}^{T \times d_e}$, $e_t \in \mathbb{R}^{d_e}$ is the $d_e$-dimensional knowledge graph embedding of the $t$-th element, and $T = 2n - 1$ is the length of the metapath. The embedding sequence $E$ is then fed into a bidirectional LSTM encoder to learn contextual representations:
$$\overrightarrow{h}_t = \operatorname{LSTM}(e_t, \overrightarrow{h}_{t-1}), \quad (2)$$
$$\overleftarrow{h}_t = \operatorname{LSTM}(e_t, \overleftarrow{h}_{t+1}), \quad (3)$$
where $\overrightarrow{h}_t, \overleftarrow{h}_t \in \mathbb{R}^{d_h}$ are the $d_h$-dimensional forward and backward hidden states at step $t$, respectively. To represent the perspectives of candidate $c_i$ and job $j_k$ over the metapath instance $p$, we compute dual-perspective aggregated representations by averaging the BiLSTM hidden states from the two directions:
$$p^c = \frac{1}{T}\sum_{t=1}^{T} \overrightarrow{h}_t, \qquad p^j = \frac{1}{T}\sum_{t=1}^{T} \overleftarrow{h}_t, \quad (4)$$
where $p^c, p^j \in \mathbb{R}^{d_h}$ are the metapath representations of candidate $c_i$ and job $j_k$, respectively. By encoding the metapath sequentially from its two ends, the BiLSTM model is able to learn distinct preferences over the same metapath from dual perspectives.
Attentive Metapath Fusion
For the same metapath instance, the attention assigned by the candidate and recruiter sides may differ, as it implies distinct intentions for them. To capture such dual-perspective preferences, we adopt an attention mechanism for metapath representation aggregation. The attention module learns soft weights that highlight influential metapaths while suppressing irrelevant ones, differently for the two sides. Specifically, for each candidate-job pair $(c_i, j_k)$, given their metapath representations $\{p^c_l\}_{l=1}^{L}$ and $\{p^j_l\}_{l=1}^{L}$ from the dual-perspective metapath encoder, where $L$ is the number of metapaths, we compute the attention weights as:
$$\alpha^c_l = \sigma(p^c_l w^a_c + b^a_c), \qquad \alpha^j_l = \sigma(p^j_l w^a_j + b^a_j), \quad (5)$$
where $w^a_c, w^a_j \in \mathbb{R}^{d_h}$ and $b^a_c, b^a_j \in \mathbb{R}$ are trainable weight vectors and bias terms, and $\sigma(\cdot)$ is the sigmoid function that squashes the attention weights between 0 and 1 for soft selection:
$$\sigma(x) = \frac{1}{1 + e^{-x}}. \quad (6)$$
The fused metapath representations are computed as weighted sums using the attention weights:
$$m^c = \sum_{l=1}^{L} \alpha^c_l p^c_l, \qquad m^j = \sum_{l=1}^{L} \alpha^j_l p^j_l, \quad (7)$$
where $m^c$ and $m^j$ represent the aggregated candidate and job representations, which imply their preferences over each other. The attention weights $\alpha^c_l$ and $\alpha^j_l$ indicate the relative importance of different metapaths from the dual perspectives, which provides explanations for the recommendation results.
Prediction
Unlike traditional recommender systems, which make predictions by fusing representations from both sides, we make dual-perspective predictions separately and then average them. Specifically, given the aggregated metapath representations $m^c$ and $m^j$ of candidate $c_i$ and job $j_k$, we have:
$$\hat{y}_{c_i \to j_k} = \sigma(m^c w^p_c + b^p_c), \qquad \hat{y}_{j_k \to c_i} = \sigma(m^j w^p_j + b^p_j), \quad (8)$$
where $w^p_c, w^p_j \in \mathbb{R}^{d_h}$ are trainable weight vectors that transform the aggregated metapath representations into matching probabilities, and $b^p_c, b^p_j \in \mathbb{R}$ are trainable bias terms. Here, $\hat{y}_{c_i \to j_k}$ predicts the probability of job $j_k$ satisfying candidate $c_i$ based on the candidate's aggregated preferences over metapaths, while $\hat{y}_{j_k \to c_i}$ predicts the probability in the opposite direction. To combine the dual-perspective predictions, we take their average as the final matching probability:
$$\hat{y}_{i,k} = \frac{1}{2}\left(\hat{y}_{c_i \to j_k} + \hat{y}_{j_k \to c_i}\right).$$
(9) Optimization To optimize the model parameters, we propose a bilateral quadruple loss that incorporates bilateral matching loss and unilateral matching loss. Follow the previous work (Yang et al. 2022), for each positive sample match ⟨ci, jk⟩, we construct negative samples ⟨ci, j′ k⟩and ⟨c′ i, jk⟩, where c′ i and j′ k are randomly sampled negative candidate and job respectively. The training set can be denoted as D = {(i, k, i′, k′) | (i, k) ∈M, (i, k′) ∈ M, (i′, k) ∈M}, where M and M are the matched and unmatched sets, and (i, k, i′, k′) is the abbreviation of quadruple (ci, jk, c′ i, j′ k). The bilateral matching loss is defined as: Lbm = −1 |D| X (i,k,i′,k′)∈D log (σ (2ˆyi,k −ˆyi,k′ −ˆyi′,k)) , (10) where σ denotes the sigmoid function. The negative samples may contain some unilaterally matched ones, which can be identified by the unilateral match matrix U. Specifically, uik = 1 indicates candidate ci applied but got rejected by job jk, uik = −1 indicates the opposite direction, and uik = 0 means no unilateral match between them. The unilateral matching loss is defined as: Lum = −1 |D| X (i,k,i′,k′)∈D log(σ(f(i, k, i′, k′))), (11) where f(i, k, i′, k′) = uik′(ˆyi→k′ −ˆyk′→i)+ui′k(ˆyi′→k −ˆyk→i′). (12) By combining the two parts, the final loss function is: L = Lbm + λLum, (13) where λ balances the two loss terms. By minimizing this loss function, the model parameters can be optimized to improve the matching prediction performance. Compared to the previous methods that use either crossentropy loss or pairwise loss, our bilateral quadruple-based loss models the reciprocal matching in two directions simultaneously, and also accommodates unilateral matches in reciprocal recommendation. Experienments To answer the following questions, we conduct experiments on two real-world datasets from different scenarios. Our code is available at: https://github.com/AllminerLab/Codefor-KAERR-master. • RQ1: How does our model perform compared to the existing state-of-the-art methods? • RQ2: What are the contributions of different components of our model to the overall performance? • RQ3: How do parameters influence the results of KAERR? • RQ4: Can our model provide intuitive explanations for the prediction results? The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8639 Dataset Zhaopin UEM # Candidates/Researchers 4,500 3,124 # Jobs/Demands 19,114 6,247 # Interactions 29,792 18,960 Sparsity 99.97% 99.90% # Match 28,195 11,245 # KG Entity Types 9 11 # KG Relation Types 24 13 # KG Entities 35,471 58,214 # KG Relations 431,831 585,794 Table 1: Statistics of the experimental datasets. Datasets We evaluate our model on two real-world datasets from different reciprocal recommendation scenarios. The overall statistics are shown in Table 1. • Online Recruitment. We use a dataset from the Aliyun Programming Competition on Person-Job Fitting1, provided by a large Chinese online recruitment platform, namely Zhaopin. For simplicity, the dataset is called Zhaopin. In this dataset, if a job seeker views and applies for a job posting, and the recruiter accepts the application, this candidate-job pair is treated as a positive match, indicating mutual satisfaction. • University-Enterprise Matching. We have compiled a dataset derived from industry collaboration records spanning the last five years at Sun Yat-sen University. For simplicity, the dataset is called UEM. In this scenario, we need to recommend suitable university researchers for the technology demands proposed by enterprises. 
Meanwhile, we also need to recommend appropriate enterprise demands for the researchers based on their capabilities. Baselines We conduct experiments to compare our proposed model with the following baseline methods: • BPRMF (Rendle et al. 2012) is a matrix factorization model that learns user and item representations by optimizing a pairwise Bayesian Personalized Ranking loss. • NCF (He et al. 2017) replaces the inner product in matrix factorization with a multi-layer perceptron, which helps to capture non-linear relationships. • LFRR (Neve and Palomares 2019) is a latent factor model adapted for reciprocal recommendation. • LightGCN (He et al. 2020) is a simplified graph convolutional network for recommendation that captures collaborative filtering signals to generate personalized recommendations efficiently. • PJFNN (Zhu et al. 2018) is a convolutional neural network model for person-job fit prediction. It learns joint representations of person and job from historical application data in an end-to-end manner. • BPJFNN (Qin et al. 2018)is an RNN-based model for person-job fit prediction. It uses BiLSTM to derive semantic representations for job requirements and applicant experiences. 1https://tianchi.aliyun.com/dataset/31623 • APJFNN (Qin et al. 2018) employs hierarchical attention on RNN-derived job and applicant representations to identify key requirements and relevant experiences. • DPGNN (Zhou et al. 2023) uses graph representation learning with two nodes per entity to capture two-way selection preferences and interactions. We categorize the baseline models into three groups according to their core techniques: (1) Collaborative filtering methods including BPRMF, NCF, LFRR and LightGCN, which make recommendations based on user-item interactions; (2) Content-based methods including PJFNN, BPJFNN, and APJFNN, which rely on profile content features; (3) Hybrid method DPGNN that combines collaborative filtering and content-based filtering. Except for BPRMF and NCF, all the other baseline models are proposed specifically for the reciprocal recommendation scenario. It’s crucial to highlight that we omitted comparisons with sequential recommendation models like ReSeq (Zheng et al. 2023b) due to their dependence on extensive interaction histories and sequential data, requirements that our dataset does not meet. Evaluation Following (Yang et al. 2022), we adopt four common ranking metrics: Recall (R@k), Precision (P@k), Normalized Discounted Cumulative Gain (NDCG@k) and Mean Reciprocal Rank (MRR@k). We set k to 5 for evaluation. We perform evaluation from both sides simultaneously for each positive match, which is well suited for reciprocal recommendation. Specifically, for each positive match, we sample 20 negative instances for both sides to construct two ranking lists. We then report the average ranking metrics across both lists. Implementation Details We implement the baseline models using RecBole (Zhao et al. 2022) library. Hyper-parameters for all methods are tuned through grid search. The Adam optimizer is utilized for model training. The learning rate is selected from {0.01, 0.001, 0.0001} via tuning. Early stopping with a patience of 10 epochs is adopted to prevent overfitting. Performance Comparison (RQ1) Table 2 presents the comparison results. It can be observed that collaborative filtering-based baselines perform poorly due to limitations in modeling sparse interactions. 
Although content-based baselines achieve some improvements compared to collaborative filtering methods, they still underperform the hybrid methods. The hybrid method DPGNN achieves the second-best performance across all metrics, indicating that utilizing both text descriptions and interactions is important. In comparison, our proposed KAERR method achieves superior performance over all baselines on both datasets. Unlike the existing methods, KAERR explicitly models the metapaths from dual perspectives and fuses them with attention weights, which allows for capturing the distinct intentions of each party and focusing on influential metapaths. This leads to more accurate matching between the two sides. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8640 Dataset Perspective Candidates/Researchers Jobs/Demands Metric R@5 P@5 NDCG@5 MRR@5 R@5 P@5 NDCG@5 MRR@5 Zhaopin BPRMF 0.2769 0.0570 0.2164 0.1997 0.3500 0.0789 0.2576 0.2367 NCF 0.3606 0.0739 0.2378 0.2012 0.3236 0.0731 0.2257 0.2020 LFRR 0.2833 0.0582 0.2215 0.2045 0.3545 0.0802 0.2577 0.2352 LightGCN 0.2981 0.0611 0.2312 0.2089 0.3601 0.0814 0.2631 0.2393 PJFNN 0.6929 0.1425 0.4984 0.4392 0.6468 0.1384 0.4605 0.4057 BPJFNN 0.3056 0.0625 0.1970 0.1632 0.2318 0.0480 0.1389 0.1107 APJFNN 0.3074 0.0631 0.1900 0.1536 0.2319 0.0485 0.1396 0.1116 DPGNN 0.7777 0.1617 0.6144 0.5658 0.7460 0.1628 0.5869 0.5441 KAERR 0.9477 0.1979 0.8275 0.7895 0.9499 0.2059 0.8333 0.7990 UEM BPRMF 0.3202 0.0723 0.1987 0.2156 0.3825 0.0845 0.2663 0.2538 NCF 0.3542 0.0721 0.2335 0.1998 0.3278 0.0756 0.2198 0.2047 LFRR 0.3389 0.0698 0.2354 0.2496 0.3875 0.0925 0.2734 0.2547 LightGCN 0.3458 0.0714 0.2402 0.2547 0.3928 0.0944 0.2792 0.2601 PJFNN 0.7321 0.1623 0.5123 0.4578 0.6723 0.1502 0.4789 0.4156 BPJFNN 0.7514 0.1687 0.5236 0.4629 0.6915 0.1552 0.4887 0.4486 APJFNN 0.7587 0.1695 0.5287 0.4655 0.6963 0.1618 0.4894 0.4523 DPGNN 0.8259 0.1897 0.6532 0.5923 0.8064 0.1921 0.6243 0.5940 KAERR 0.9146 0.1932 0.8014 0.7507 0.9202 0.1961 0.8128 0.7667 Table 2: Performance comparison of all methods. Dataset Perspective Candidates/Researchers Jobs/Demands Metric R@5 P@5 NDCG@5 MRR@5 R@5 P@5 NDCG@5 MRR@5 Zhaopin KAERR 0.9477 0.1979 0.8275 0.7895 0.9499 0.2059 0.8333 0.7990 w/o DPME 0.8936 0.1531 0.7897 0.7612 0.9178 0.1736 0.7912 0.7714 w/o AMF 0.9105 0.1582 0.7875 0.7624 0.9235 0.1821 0.7890 0.7727 w/o BQL 0.9079 0.1597 0.7759 0.7595 0.9201 0.1799 0.7823 0.7612 UEM KAERR 0.9146 0.1732 0.8014 0.7507 0.9202 0.1961 0.8128 0.7667 w/o DPME 0.8653 0.1427 0.7652 0.7301 0.8827 0.1725 0.7698 0.7221 w/o AMF 0.8702 0.1495 0.7781 0.7399 0.8785 0.1925 0.7754 0.7344 w/o BQL 0.8669 0.1380 0.7664 0.7368 0.8802 0.1786 0.7721 0.7302 Table 3: Performance comparison between KAERR and its variants. Ablation Study (RQ2) To verify the effectiveness of our proposed components, we conducted ablation studies by removing each of the key designs in KAERR. Specifically, we consider the following three variants of KAERR: (1) KAERR w/o DPME replaces the dual-perspective metapath encoder with a shared LSTM encoder, where candidates and jobs use a common metapath representation; (2) KAERR w/o AMF substituting the attentive metapath fusion by simple mean pooling; (3) KAERR w/o BQL changes the bilateral quadruple loss to the conventional BPR loss. The results in Table 3 demonstrate a performance decline when removing any of the above components. This confirms that all the components in KAERR make pivotal contributions to improving KAERR’s performance. 
Hyper-Parameter Analysis (RQ3) The parameter tuning results are shown in Figure 2. We study the impacts of three key hyper-parameters: the maximum number of metapaths Lm, the knowledge graph embedding size he, and λ in the loss function. Lm controls the amount of semantics captured from the knowledge graph. Testing Lm ∈ {2, 4, 8, 16, 32}, we find that Lm = 16 achieves the best performance, as values that are too small lead to insufficient semantics while values that are too large incorporate noisy information. For he, the optimal value differs between the two datasets due to their knowledge graph sizes. λ balances the weights of the bilateral and unilateral matching losses, and the best value is 2: since the proportion of unilateral matches is relatively small in the interaction data, moderately increasing this coefficient helps to better exploit their information. This analysis provides insights into how to properly set these key factors.
Figure 2: The performance of KAERR with different settings of Lm, he, and λ (Recall@5 on Zhaopin and UEM; panels (a) Lm, (b) he, (c) λ).
Case Study (RQ4) Figure 3 shows a successful and an unsuccessful Candidate-Job match.
Figure 3: Examples of a successful Candidate-Job match (top) and an unsuccessful Candidate-Job match (bottom); each panel lists both sides' top-5 metapaths with per-metapath prediction scores and the overall matching score.
In the successful match, our model predicts high scores from both the candidate's and the recruiter's perspectives. The attention weights on metapaths highlight the key factors influencing the match. For the candidate, the top weights are on location and salary, suggesting these are the primary considerations. The recruiter, on the other hand, places more emphasis on education, skills, and experience. In contrast, the unsuccessful match depicted at the bottom shows a different scenario. Even though the candidate satisfies the job requirements with suitable education, work experience, and skills, indicated by a high prediction score from the job's perspective, the candidate's own prediction score for the job remains low. This discrepancy in scores is due to the job's location not meeting the candidate's preferences, ultimately resulting in a low overall matching score. This example underscores the importance of considering both parties' preferences in the matching process and demonstrates the nuanced interpretability our model provides in real-world recommendation scenarios.
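Since the explanations in Figure 3 come directly from the learned attention weights, extracting them amounts to a simple ranking step; a hypothetical helper illustrating this (not part of the released code):

```python
def top_k_explanations(metapaths, weights, k=5):
    """Rank one side's metapath instances by attention weight (Eq. (5))
    and return the top-k as recommendation explanations."""
    ranked = sorted(zip(metapaths, weights), key=lambda x: x[1], reverse=True)
    return ranked[:k]
```

Applied once with the candidate-side weights and once with the job-side weights, this yields the two top-5 metapath lists shown in Figure 3.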
Conclusion In this paper, we proposed a novel Knowledge-Aware Explainable Reciprocal Recommender System (KAERR) that effectively incorporates knowledge graph information to address the sparsity issue in the reciprocal recommendation. By extracting metapaths and modeling them from the dual perspectives of the two involved parties, KAERR is able to capture their distinct intentions and preferences. An attention mechanism is adopted to fuse the metapath representations by learning soft weights indicating the importance of each metapath. Extensive experiments on two real-world datasets verified that KAERR achieves state-of-the-art performance. Furthermore, the attention weights provide interpretability by revealing the relative influence of different metapaths. For future work, we plan to explore incorporating metapath modeling with other graph learning techniques to capture more information from knowledge graphs. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8642 Acknowledgements This work was supported by Guangdong Basic and Applied Basic Research Foundation (2022B1515120059), NSFC (62276277 and 62276109), and Guangdong Provincial Engineering Research Center of Intelligent Matching for Technology Commercialization (2022A175). And Mohsen Guizani appreciates the research support provided by the Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI) (8481000021). References Akehurst, J.; Koprinska, I.; Yacef, K.; Pizzato, L. A. S.; Kay, J.; and Rej, T. 2011. CCR - A Content-Collaborative Reciprocal Recommender for Online Dating. In IJCAI, 2199– 2204. Alanazi, A.; and Bain, M. 2013. A people-to-people contentbased reciprocal recommender using hidden markov models. In RecSys, 303–306. Cai, X.; Bain, M.; Krzywicki, A.; Wobcke, W.; Kim, Y. S.; Compton, P.; and Mahidadia, A. 2012. Reciprocal and Heterogeneous Link Prediction in Social Networks. In PAKDD (2), 193–204. Cao, Y.; Wang, X.; He, X.; Hu, Z.; and Chua, T. 2019. Unifying Knowledge Graph Learning and Recommendation: Towards a Better Understanding of User Preferences. In WWW, 151–161. He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; and Wang, M. 2020. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. In SIGIR, 639–648. He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; and Chua, T. 2017. Neural Collaborative Filtering. In WWW, 173–182. Hochreiter, S.; and Schmidhuber, J. 1996. LSTM can Solve Hard Long Time Lag Problems. In NIPS, 473–479. Jiang, J.; Ye, S.; Wang, W.; Xu, J.; and Luo, X. 2020. Learning Effective Representations for Person-Job Fit by Feature Fusion. In CIKM, 2549–2556. Li, L.; Chen, L.; and Dong, R. 2021. CAESAR: contextaware explanation based on supervised attention for service recommendations. J. Intell. Inf. Syst., 57(1): 147–170. Li, L.; Zhang, Y.; and Chen, L. 2020. Generate Neural Template Explanations for Recommendation. In CIKM, 755– 764. Li, L.; Zhang, Y.; and Chen, L. 2021. EXTRA: Explanation Ranking Datasets for Explainable Recommendation. In SIGIR, 2463–2469. Li, T.; Li, X.; Wang, C.; Li, X.; Gao, S.; and Han, D. 2022. FF-KGAT: Feature Fusion Based Knowledge Graph Attention Network for Recommendation. In AIPR, 468–474. Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; and Zhu, X. 2015. Learning Entity and Relation Embeddings for Knowledge Graph Completion. In AAAI, 2181–2187. Luo, L.; Yang, L.; Xin, J.; Fang, Y.; Zhang, X.; Yang, X.; Chen, K.; Zhang, Z.; and Liu, K. 2020. 
2024
960
18,807
Adaptive Hardness Negative Sampling for Collaborative Filtering Riwei Lai1, 2, Rui Chen1*, Qilong Han1*, Chi Zhang1, Li Chen2 1 College of Computer Science and Technology, Harbin Engineering University 2 Department of Computer Science, Hong Kong Baptist University {lai, ruichen, hanqilong, zhangchi20}@hrbeu.edu.cn, [email protected] *Corresponding authors Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Abstract Negative sampling is essential for implicit collaborative filtering to provide proper negative training signals so as to achieve desirable performance. We experimentally unveil a common limitation of all existing negative sampling methods: they can only select negative samples of a fixed hardness level, leading to the false positive problem (FPP) and the false negative problem (FNP). We then propose a new paradigm called adaptive hardness negative sampling (AHNS) and discuss its three key criteria. By adaptively selecting negative samples with appropriate hardnesses during the training process, AHNS can well mitigate the impacts of FPP and FNP. Next, we present a concrete instantiation of AHNS called AHNSp<0, and theoretically demonstrate that AHNSp<0 can fit the three criteria of AHNS well and achieve a larger lower bound of normalized discounted cumulative gain. Besides, we note that existing negative sampling methods can be regarded as more relaxed cases of AHNS. Finally, we conduct comprehensive experiments, and the results show that AHNSp<0 can consistently and substantially outperform several state-of-the-art competitors on multiple datasets.
Introduction Collaborative filtering (CF), as the most representative technique for recommendation, focuses on modeling user interests from observed user-item interactions (Wang et al. 2019; He et al. 2020). In many cases, it is not always possible to obtain a large amount of high-quality explicit feedback. As a result, implicit feedback, such as clicks or purchases, has become the default choice for training a CF model (Lai et al. 2023). In implicit feedback, each observed interaction normally indicates a user's interest in an item and corresponds to a positive training sample. As for negative training samples, a widely adopted approach is to randomly select some uninteracted items for users. An implicit CF model is then optimized to give positive samples higher scores than negative ones (Rendle et al. 2009). Similar to many semi-supervised learning problems, existing implicit CF models rely heavily on mining negative samples to provide proper negative training signals. Without auxiliary data describing items, two lines of work have been proposed.
Figure 1: Average hardness of selected negative items in RNS, DNS, and DENS on two Amazon datasets.
The first line consists of static negative sampling, which assigns a static probability for each candidate to be sampled. For example, random negative sampling (RNS) (Rendle et al. 2009) chooses uninteracted items with equal probability, and popularity-biased negative sampling (PNS) (Chen et al. 2017; Wu et al. 2019) adopts item-popularity-biased distributions to favor popular items. The other line is hard negative sampling, such as dynamic negative sampling (DNS) (Zhang et al. 2013) and disentangled negative sampling (DENS) (Lai et al. 2023), which focuses on selecting hard negative samples that are difficult to distinguish from the positive samples using dynamic distributions.
Such hard negative samples can provide more informative training signals so that user interests can be better characterized (Xu et al. 2022). Although the above two lines of work on negative sampling have achieved some promising results, we point out that all these methods can only select negative samples of a certain "hardness" level, preventing them from achieving better performance. Without loss of generality, assume that positive samples' predicted scores are always positive. We can define the hardness of a negative sample as its relative predicted score, i.e., the ratio of its predicted score to that of its corresponding positive sample, in order to smooth the influence of the simultaneous increase in the predicted scores of all items during the training process. As illustrated in Fig. 1, throughout the training process, RNS can only select easy negative samples with hardness around 0, while DNS and DENS can only choose hard negative samples with hardness around 0.3 and 0.4, respectively. Unavoidably, these fixed hardness negative sampling methods may suffer from two significant problems: (1) false positive problem (FPP): as shown in the upper part of Fig. 2, when only easy negative samples can be selected during the training process, items of no interest but with initially high predicted scores may not be sufficiently updated and will still be recommended to users, resulting in suboptimal recommendation results; (2) false negative problem (FNP): as shown in the lower part of Fig. 2, if only hard negative samples with a fixed hardness level are selected during the training process, items of interest that have not been interacted with yet may be selected as negative and ranked lower in the recommendation list, which worsens recommendation results. We have conducted extensive experiments to verify the existence of FPP and FNP (see RQ2 of Experiments for more details).
Figure 2: Issues of fixed hardness negative sampling.
To address the above two problems and obtain better recommendation results, we propose to adaptively select negative samples with different hardness levels during the training process. A straightforward attempt is to introduce curriculum learning (Chu et al. 2021) into negative sampling, where a predefined pacing function schedules the hardness levels of negative samples in different training epochs. However, such an implementation still selects negative samples with a fixed hardness level within the same epoch, rather than adaptively selecting negative samples with different hardness levels for different positive samples. In this paper, we introduce a brand new negative sampling paradigm called Adaptive Hardness Negative Sampling (AHNS), and analyze its three key criteria. We then present a concrete instantiation of AHNS called AHNSp<0, where p is a predefined smoothing parameter we will explain later. Comprehensive theoretical analyses confirm that AHNSp<0 satisfies the three criteria of AHNS and prove that implicit CF models with AHNSp<0 can achieve a larger lower bound on normalized discounted cumulative gain (NDCG) than with a fixed hardness negative sampling method. Furthermore, we discuss the relation between AHNS and other negative sampling methods and note that existing negative sampling methods can be considered as more relaxed cases of AHNS.
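The hardness measure underpinning this discussion is simple to compute. The following minimal sketch evaluates per-sample hardness from inner-product scores; the embedding tensors and batch layout are illustrative assumptions, not the paper's code.

```python
import torch

def hardness(user_emb: torch.Tensor,
             pos_emb: torch.Tensor,
             neg_emb: torch.Tensor) -> torch.Tensor:
    """Hardness of a negative = its predicted score divided by the
    predicted score of the corresponding positive (assumed > 0)."""
    pos_score = (user_emb * pos_emb).sum(dim=-1)  # inner-product scores
    neg_score = (user_emb * neg_emb).sum(dim=-1)
    return neg_score / pos_score

# Toy usage: a batch of 4 (user, positive, negative) triples.
u, p, n = torch.randn(4, 64), torch.randn(4, 64), torch.randn(4, 64)
print(hardness(u, p, n))  # ~0 for easy negatives, larger for hard ones
```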
Our main contributions are summarized as follows:
• We are the first to identify and address FPP and FNP in existing negative sampling methods via adaptively selecting the hardnesses of negative samples, which brings a new perspective on negative sampling for implicit CF.
• We propose a new negative sampling paradigm, AHNS, with three criteria, which generalizes existing negative sampling methods. We present a concrete instantiation, AHNSp<0, and theoretically show that it fits the three criteria well and achieves a larger lower bound on NDCG.
• We conduct extensive experiments to demonstrate that AHNSp<0 achieves significant improvements over several representative state-of-the-art negative sampling methods.
Related Work
Static Negative Sampling Static negative sampling focuses on identifying good distributions from which to draw negative samples. For example, as the simplest and most prevalent static negative sampling method, Bayesian personalized ranking (BPR) (Rendle et al. 2009) randomly selects uninteracted items as negative. However, this method makes it hard to guarantee the quality of selected negative samples, and thus some studies (Chen et al. 2017; Wu et al. 2019; Yang et al. 2020) propose to replace the uniform distribution with other distributions. Inspired by the word-frequency-based and node-degree-based negative sampling distributions for network embedding (Mikolov et al. 2013), NNCF (Chen et al. 2017) and NCE-PLRec (Wu et al. 2019) adopt an item-popularity-based sampling distribution to select more popular items as negative, which helps to alleviate the widespread popularity bias issue in recommender systems (Chen et al. 2023).
Hard Negative Sampling Hard negative sampling methods emphasize the importance of oversampling hard negative samples to speed up the training process and find more precise delineations of user interests. More specifically, this is achieved either by assigning higher sampling probabilities to items with larger predicted scores (Zhang et al. 2013; Ding et al. 2020; Huang et al. 2021; Zhu et al. 2022; Lai et al. 2023; Shi et al. 2023; Zhao et al. 2023) or by leveraging adversarial learning techniques (Wang et al. 2017; Cai and Wang 2018; Park and Chang 2019). For instance, dynamic negative sampling (DNS) (Zhang et al. 2013) selects the item with the highest predicted score in a candidate negative sample set. SRNS (Ding et al. 2020) oversamples items with both high predicted scores and high variances to tackle the false negative problem. DENS (Lai et al. 2023) disentangles relevant and irrelevant factors of items and identifies the best negative samples with a factor-aware sampling strategy. Instead of directly selecting negative samples from uninteracted items, MixGCF (Huang et al. 2021) synthesizes hard negative samples by mixing positive information into negative samples, which further improves performance. However, we experimentally find that all the above negative sampling methods can only select negative samples of a fixed hardness level during the training process, leading to the false positive problem and the false negative problem. Driven by this limitation, we propose an adaptive hardness negative sampling paradigm, which adaptively selects negative samples with appropriate hardnesses and achieves better recommendation results.
Figure 3: An illustration of adaptive hardness negative sampling.
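To contrast the two families just reviewed, here is a minimal sketch of random negative sampling (RNS) versus dynamic negative sampling (DNS); the inner-product scoring and candidate-set size are illustrative assumptions, and interacted items are not filtered out for brevity.

```python
import torch

def rns(num_items: int, batch: int) -> torch.Tensor:
    """Static sampling: every (uninteracted) item is equally likely."""
    return torch.randint(0, num_items, (batch,))

def dns(user_emb: torch.Tensor, item_emb: torch.Tensor, M: int = 16) -> torch.Tensor:
    """Hard sampling: draw M random candidates per user and keep the one
    with the highest predicted (inner-product) score."""
    batch, num_items = user_emb.size(0), item_emb.size(0)
    cand = torch.randint(0, num_items, (batch, M))                 # (batch, M)
    scores = (user_emb.unsqueeze(1) * item_emb[cand]).sum(-1)      # (batch, M)
    return cand.gather(1, scores.argmax(dim=1, keepdim=True)).squeeze(1)
```

RNS keeps hardness near zero while DNS pushes it up to whatever the candidate maximum happens to be; both exhibit exactly the fixed-hardness behavior that AHNS is designed to move beyond.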
Proposed Method
Problem Formulation In this section, we formulate the problem of negative sampling in implicit CF. Let U and I be the set of users and the set of items, respectively. We denote the set of observed interactions, i.e., implicit feedback, by O+ = {(u, i+) | u ∈ U, i+ ∈ I}, where each pair (u, i+) indicates an interaction between user u and item i+. Implicit CF aims to characterize user interests from their observed interactions. Interacted items are generally used to form positive pairs, while uninteracted items are considered candidates for generating negative samples. Specifically, given a positive pair (u, i+), a negative sampling strategy identifies an item i− that has not been previously interacted with by u as a negative sample. The implicit CF model is then optimized to give positive pairs higher scores than negative pairs by the Bayesian personalized ranking (BPR) loss function (Rendle et al. 2009):
$\mathcal{L}_{BPR} = \sum_{(u, i^+, i^-)} -\ln \sigma(e_u^\top e_{i^+} - e_u^\top e_{i^-})$,  (1)
where $e_u$, $e_{i^+}$, and $e_{i^-}$ are the embeddings of user u, positive sample i+, and negative sample i−, respectively; the inner product is used to measure the score of positive and negative pairs, and σ(·) is the sigmoid function.
Method Design
Paradigm. To achieve adaptive selection of the hardnesses of negative samples and alleviate the false positive problem (FPP) and false negative problem (FNP), we propose the adaptive hardness negative sampling (AHNS) paradigm. As shown in Fig. 3, unlike fixed hardness negative sampling, AHNS simultaneously satisfies the following three key criteria:
• C1: The hardness of a selected negative sample should be positive-aware. Instead of setting a specific hardness level of negative samples for each training epoch as in curriculum learning (Chu et al. 2021), AHNS is expected to identify the appropriate hardness of a negative sample according to its corresponding positive sample.
• C2: The hardness of a selected negative sample should be negatively correlated with the predicted score of its corresponding positive sample. On the one hand, for positive samples with higher predicted scores, AHNS should select items with lower hardnesses as negative, which can effectively avoid the FNP. On the other hand, for positive samples with lower predicted scores, AHNS should select items with higher hardnesses as negative, which can accelerate the optimization of positives and enable negatives with higher hardnesses to be sufficiently updated, thus alleviating the FPP.
• C3: The hardness of selected negative samples should be adjustable. To cover a variety of practical recommendation scenarios, e.g., different datasets or evaluation metrics (Shi et al. 2023), AHNS should be able to adjust the hardness of selected negative samples.
Algorithm 1: AHNSp<0
1: Input: Set of observed interactions O+ = {(u, i+) | u ∈ U, i+ ∈ I}, number of candidate negatives M, predefined hyperparameters α, β, and p
2: Output: Set of training triples T
3: T ← {}  ▷ Initialize an empty set for training triples
4: for each positive pair (u, i+) in O+ do
5:     C ← {}  ▷ Initialize an empty set for candidate negative samples
6:     for m = 1 to M do
7:         im ← Randomly sample an uninteracted item
8:         Add im to C
9:     end for
10:    R ← {}  ▷ Initialize an empty set for ratings of candidate negative samples
11:    for each candidate negative sample im in C do
12:        $r_m \leftarrow \left| e_u^\top e_{i_m} - \beta \cdot (e_u^\top e_{i^+} + \alpha)^{p+1} \right|$
13:        Add rm to R
14:    end for
15:    i− ← Select the im with the smallest rm in R
16:    Add (u, i+, i−) to T
17: end for
18: return T
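Algorithm 1 translates almost line-for-line into vectorized code. The sketch below is an illustrative PyTorch rendering under assumed embedding tensors and placeholder hyperparameter defaults; it is not the authors' released implementation.

```python
import torch

def ahns_sample(user_emb: torch.Tensor,     # (B, d) user embeddings
                pos_emb: torch.Tensor,      # (B, d) positive-item embeddings
                item_emb: torch.Tensor,     # (N, d) all item embeddings
                M: int = 32, alpha: float = 0.5,
                beta: float = 0.5, p: float = -2.0) -> torch.Tensor:
    """Pick, per positive pair, the candidate whose score is closest to the
    adaptive target beta * (s_pos + alpha)^(p+1) (Algorithm 1, line 12)."""
    B, N = user_emb.size(0), item_emb.size(0)
    cand = torch.randint(0, N, (B, M))                               # candidate set C
    cand_scores = (user_emb.unsqueeze(1) * item_emb[cand]).sum(-1)   # (B, M)
    pos_scores = (user_emb * pos_emb).sum(-1, keepdim=True)          # (B, 1)
    # Clamp guards against non-positive bases; the paper assumes positive scores.
    target = beta * (pos_scores + alpha).clamp_min(1e-8) ** (p + 1)
    r = (cand_scores - target).abs()                                 # ratings r_m
    return cand.gather(1, r.argmin(dim=1, keepdim=True)).squeeze(1)  # i^-
```

Because the rating is the absolute gap to the target, candidates whose scores are closest to the target hardness are chosen, so the sampled hardness falls as the positive's score rises, exactly as criterion C2 requires.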
Instantiation. Next, we give a concrete instantiation of AHNS called AHNSp<0, whose entire procedure is detailed in Algo. 1. Specifically, for a positive pair (u, i+), we follow conventional methods (Chen et al. 2022; Lai et al. 2023) and adopt the two-pass sampling idea, which first randomly samples a fixed number of uninteracted items to form a candidate set, and then selects a negative sample from the candidate set according to predefined rating functions and sampling rules. For the first pass, the size M of the candidate set C is usually much smaller than the total number of items |I|, which boosts sampling efficiency. For the second pass, the rating function and sampling rule play a critical role in identifying the final negative sample and are the focus of all negative sampling methods. Therefore, we introduce three hyperparameters and carefully design the rating function in AHNSp<0. For each candidate negative item im ∈ C, the rating function is formulated as:
$r_m = \left| e_u^\top e_{i_m} - \beta \cdot (e_u^\top e_{i^+} + \alpha)^{p+1} \right|$,  (2)
where α > 0, β > 0, and p < 0 are predefined hyperparameters, whose effects are given in the subsequent Thm. 3. After calculating the ratings of all candidate negative items, we obtain a rating set R, and the final negative sample is identified by selecting the im with the smallest rm in R:
$i^- = i_{\arg\min_m r_m}$.  (3)
Theoretical Analysis In this section, we conduct in-depth analyses of AHNSp<0. We first show that AHNSp<0 satisfies the three criteria of AHNS, and then establish that implicit CF models with AHNSp<0 can achieve a larger lower bound on normalized discounted cumulative gain than with a fixed hardness negative sampling method as training progresses.
Theorem 1. AHNSp<0 satisfies C2 of AHNS.
Proof. Consider a positive pair (u, i+). Let $i^-_*$ be the ideal negative sample selected by AHNSp<0. According to Eq. (2) and Eq. (3), we have:
$e_u^\top e_{i^-_*} = \beta \cdot (e_u^\top e_{i^+} + \alpha)^{p+1}$.  (4)
To simplify the calculation, we substitute $e_u^\top e_{i^+}$ with $(e_u^\top e_{i^+} + \alpha)$ to calculate the hardness of $i^-_*$:
$\mathrm{Hardness}(i^-_*) = \frac{e_u^\top e_{i^-_*}}{e_u^\top e_{i^+}} \approx \frac{e_u^\top e_{i^-_*}}{e_u^\top e_{i^+} + \alpha} = \frac{\beta \cdot (e_u^\top e_{i^+} + \alpha)^{p+1}}{e_u^\top e_{i^+} + \alpha} = \beta \cdot (e_u^\top e_{i^+} + \alpha)^{p}$.  (5)
Based on the chain rule, we have:
$\frac{d\,\mathrm{Hardness}(i^-_*)}{d(e_u^\top e_{i^+})} = \frac{d(\beta \cdot (e_u^\top e_{i^+} + \alpha)^{p})}{d(e_u^\top e_{i^+} + \alpha)} \cdot \frac{d(e_u^\top e_{i^+} + \alpha)}{d(e_u^\top e_{i^+})} = p \cdot \beta \cdot (e_u^\top e_{i^+} + \alpha)^{p-1}$.  (6)
Clearly, $d\,\mathrm{Hardness}(i^-_*)/d(e_u^\top e_{i^+}) < 0$ always holds when $e_u^\top e_{i^+} > 0$, α > 0, β > 0, and p < 0, which means that the hardness of $i^-_*$ is always negatively correlated with the predicted score of i+. This completes the proof.
Theorem 2. AHNSp<0 satisfies C1 of AHNS.
Proof. Consider two different positive pairs $(u, i^+_1)$ and $(u, i^+_2)$. Let $i^-_{1*}$ and $i^-_{2*}$ be the ideal negative samples selected by AHNSp<0 for $(u, i^+_1)$ and $(u, i^+_2)$, respectively. According to Eq. (5), we have:
$\mathrm{Hardness}(i^-_{1*}) = \beta \cdot (e_u^\top e_{i^+_1} + \alpha)^{p}, \quad \mathrm{Hardness}(i^-_{2*}) = \beta \cdot (e_u^\top e_{i^+_2} + \alpha)^{p}$.  (7)
Figure 4: Hardness of the ideal negative sample $i^-_*$ w.r.t. different p (curves of AHNSp=0, −1, −2, −3 over the predicted score of i+, with α = 0.1 and β = 0.5; all curves pass through the point (0.9, 0.5)).
It has been proved in Thm. 1 that $\mathrm{Hardness}(i^-_*)$ monotonically decreases as $e_u^\top e_{i^+}$ increases. Thus when $e_u^\top e_{i^+_1} \neq e_u^\top e_{i^+_2}$, $\mathrm{Hardness}(i^-_{1*}) \neq \mathrm{Hardness}(i^-_{2*})$. This completes the proof.
Theorem 3. AHNSp<0 satisfies C3 of AHNS.
Proof. According to Eq. (5), we plot the curves of the hardness of $i^-_*$ under different values of the predicted score of i+ in Fig. 4.
It is clear that p affects the magnitude of the curves: smaller p leads to larger magnitudes. In addition, all curves pass through the point (1 − α, β), indicating the effect of α and β in adjusting the hardness of selected negative samples. This completes the proof.
Theorem 4. As training progresses, implicit CF models with AHNSp<0 can achieve a larger lower bound on normalized discounted cumulative gain (NDCG) than with a fixed hardness negative sampling method.
Proof. Given a user u, let $\pi_{f_u}$ be the ranking function induced by recommender system f for user u, and $\pi_{f_u}(i)$ the rank of item i. Let y be a binary indicator: $y_i = 1$ if item i has been interacted with by u, otherwise $y_i = 0$. Let $I(u) = \{i \mid y_i = 1\}$ be the set of items interacted with by u, and let $\mathbb{I}$ be the indicator function. First, we consider the discounted cumulative gain (DCG). With $1 + z \le 2^z$ when $z \ge 1$, we have:
$\mathrm{DCG}(u) = \sum_{i=1}^{|I|} \frac{2^{y_i} - 1}{\log_2(1 + \pi_{f_u}(i))} = \sum_{i=1}^{|I(u)|} \frac{1}{\log_2(1 + \pi_{f_u}(i))} \ge \sum_{i=1}^{|I(u)|} \frac{1}{\pi_{f_u}(i)} = \sum_{i=1}^{|I(u)|} \frac{1}{1 + \sum_{j \in |I| \setminus \{i\}} \mathbb{I}(e_u^\top e_j - e_u^\top e_i > 0)} \ge \sum_{i=1}^{|I(u)|} \frac{1}{1 + \sum_{j \in |I| \setminus \{i\}} \exp(e_u^\top e_j - e_u^\top e_i)}$.  (8)
Next, we consider the ideal DCG (IDCG). Let $\pi^*_{f_u}$ be the ideal ranking function, which sorts the items in the ground-truth order:
$\mathrm{IDCG}(u) = \sum_{i=1}^{|I|} \frac{2^{y_i} - 1}{\log_2(1 + \pi^*_{f_u}(i))} = \sum_{i=1}^{|I(u)|} \frac{1}{\log_2(1 + i)} \le \sum_{i=1}^{|I(u)|} 1 = |I(u)|$.  (9)
Clearly, we have:
$\frac{1}{\mathrm{IDCG}(u)} \ge \frac{1}{|I(u)|}$.  (10)
Finally, we consider NDCG:
$\mathrm{NDCG}(u) = \frac{\mathrm{DCG}(u)}{\mathrm{IDCG}(u)} \ge \frac{1}{|I(u)|}\mathrm{DCG}(u) \ge \frac{1}{|I(u)|} \sum_{i=1}^{|I(u)|} \frac{1}{1 + \sum_{j \in |I| \setminus \{i\}} \exp(e_u^\top e_j - e_u^\top e_i)} \approx \frac{1}{|I(u)|} \sum_{i=1}^{|I(u)|} \frac{1}{1 + \exp(e_u^\top e_{i^-} - e_u^\top e_i)}$.  (11)
As illustrated in Fig. 4, it is not difficult to derive that as the predicted score of i+ increases, the hardness of $i^-_*$ sampled by AHNSp<0 (the solid lines) is lower than that of a fixed hardness negative sampling method (the dotted line), leading to a lower value of $\exp(e_u^\top e_{i^-} - e_u^\top e_i)$. Thus implicit CF models with AHNSp<0 can achieve a larger lower bound on NDCG. This completes the proof.
Discussion In this section, we discuss the relation between AHNS and other negative sampling methods. We point out that existing negative sampling methods can be considered as more relaxed cases that satisfy part of the three criteria of AHNS. For example, DENS (Lai et al. 2023) proposes a positive gating layer to disentangle items' factors for negative sampling; the hardness of its selected negative samples thus becomes positive-aware and satisfies C1 of AHNS. By using an anti-curriculum pacing function to schedule the hardnesses of negative samples over training epochs, CuCo (Chu et al. 2021) partially satisfies C2 of AHNS. To adapt to different datasets and top-K metrics, DNS(M, N) (Shi et al. 2023) adjusts the hardnesses of selected negative samples via predefined hyperparameters, which satisfies C3 of AHNS. In addition, we note that the main idea of AHNS in negative sampling is consistent with that of focal loss (Lin et al. 2017) in object detection, i.e., putting more focus on lower-ranked positives and higher-ranked negatives (hard, misclassified examples), which may bring some new insights into negative sampling for implicit CF.
Experiments In this section, we perform extensive experiments to evaluate AHNSp<0 and answer the following research questions:
• RQ1: How does AHNSp<0 perform compared with previous negative sampling methods?
• RQ2: Does AHNSp<0 achieve adaptive selection of the hardnesses of negative samples and alleviate the false positive problem (FPP) and false negative problem (FNP)?
• RQ3: What are the impacts of the hyperparameters (e.g., α, β) on AHNSp<0?
• RQ4: Does AHNSp<0 have an advantage in terms of sampling efficiency?
Experimental Setup
Datasets and Evaluation Metrics. We consider four widely used public benchmark datasets in experiments: MovieLens-1M1 (ML-1M), Amazon-Phones2 (Phones), Amazon-Sports2 (Sports), and Amazon-Tools2 (Tools). Following (He et al. 2020; Shi et al. 2023), we randomly split each user's interactions into training/test sets with a ratio of 80%/20%, and build the validation set by randomly sampling 10% of the interactions in the training set. Tab. 1 summarizes the statistics of the four datasets. We report recommendation performance in terms of Recall@20 (R@20) and NDCG@{20, 50} (N@{20, 50}), where higher values indicate better performance.
Table 1: The statistics of four datasets.
Dataset | #user (|U|) | #item (|I|) | #inter. (|R|) | avg. inter. per user | density
ML-1M  |  6.0k |  3.7k | 1000.2k | 165.6 | 4.47%
Phones | 27.9k | 10.4k |  194.4k |   7.0 | 0.07%
Sports | 35.6k | 18.4k |  296.3k |   8.3 | 0.05%
Tools  | 16.6k | 10.2k |  134.5k |   8.1 | 0.08%
Baseline Methods. We compare AHNSp<0 with a wide range of representative negative sampling methods:
• RNS (Rendle et al. 2009) randomly selects uninteracted items as negative.
• SSM (Wu et al. 2022) achieves better performance by sampling more items as negative.
• DNS (Zhang et al. 2013) chooses the item with the highest predicted score in a candidate set as negative.
• MixGCF (Huang et al. 2021) synthesizes harder negative samples by injecting information from positive samples.
• DENS (Lai et al. 2023) identifies better negative samples by disentangling factors of items.
• DNS(M, N) (Shi et al. 2023) controls the sampling hardness via predefined hyperparameters.
• CuCo (Chu et al. 2021) proposes a negative sampling method adopting curriculum learning in graph representation learning. We transfer this method to CF.
1 https://grouplens.org/datasets/movielens/
2 https://jmcauley.ucsd.edu/data/amazon/
Table 2: Performances (%) of AHNSp=−1, AHNSp=−2, and baseline methods. The best results are in bold, and the second best are underlined. Improvements are calculated over the best baseline method and are statistically significant with p-value < 0.01.
Method | ML-1M (R@20, N@20, N@50) | Phones (R@20, N@20, N@50) | Sports (R@20, N@20, N@50) | Tools (R@20, N@20, N@50)
RNS       | 22.86, 35.46, 37.41 | 11.06, 5.98, 7.35 | 6.73, 3.60, 4.68 | 5.53, 2.99, 3.75
SSM       | 24.87, 37.74, 39.71 | 11.37, 6.13, 7.48 | 7.08, 3.80, 4.87 | 5.72, 3.10, 3.88
DNS       | 24.66, 36.64, 38.31 | 12.08, 6.64, 7.99 | 7.74, 4.25, 5.32 | 6.66, 3.78, 4.52
MixGCF    | 24.75, 37.54, 38.95 | 12.20, 6.73, 8.13 | 7.68, 4.32, 5.36 | 6.82, 3.88, 4.59
DENS      | 25.07, 37.67, 39.11 | 12.16, 6.68, 8.13 | 7.90, 4.35, 5.50 | 6.66, 3.76, 4.55
DNS(M, N) | 25.09, 37.58, 39.22 | 12.27, 6.75, 8.15 | 7.84, 4.31, 5.35 | 6.86, 3.76, 4.61
CuCo      | 25.12, 37.53, 39.20 | 12.19, 6.68, 8.11 | 7.68, 4.25, 5.36 | 6.76, 3.82, 4.59
AHNSp=−1  | 25.17, 37.72, 39.31 | 13.02, 7.08, 8.71 | 8.42, 4.58, 5.82 | 7.27, 4.02, 4.95
AHNSp=−2  | 25.51, 38.77, 40.57 | 13.03, 7.14, 8.74 | 8.52, 4.61, 5.81 | 7.42, 4.05, 4.92
Improv.   |  1.6%,  2.7%,  2.2% |  6.2%, 5.8%, 7.2% | 7.8%, 6.0%, 5.8% | 8.2%, 4.4%, 7.4%
Implementation Details. We strictly follow the experimental setting in DENS (Lai et al. 2023). We utilize matrix factorization (MF) as the implicit CF model. The embedding dimension is fixed to 64, and the embedding parameters are initialized with the Xavier method.
We optimize all parameters with Adam (Kingma and Ba 2015) and use the default learning rate of 0.001 and default mini-batch size of 2,048. The number of training epochs is set to 100. For AHNSp<0, the candidate negative size M is searched in the range {4, 8, 16, 32, 64}. The hyperparameters α and β are tuned over {0.1, 0.2, ..., 0.9, 1.0} independently. The hyperparameters of all baseline methods are carefully tuned by grid search. Our code is publicly available at https://github.com/Riwei-HEU/AHNS.
RQ1: Performance Comparison Tab. 2 shows the performances of AHNSp=−1, AHNSp=−2, and the baseline methods. We can observe the following:
• Compared to randomly selecting uninteracted items as negative (RNS), increasing the number (SSM) or the hardness (DNS, MixGCF, DENS, etc.) of negative samples leads to a substantial performance improvement.
• By introducing curriculum learning into negative sampling, CuCo draws negative samples with different hardnesses in different training epochs, achieving performance comparable to hard negative sampling methods.
• Benefiting from positive-aware adaptive selection of the hardnesses of negative samples, AHNSp=−1 and AHNSp=−2 significantly outperform RNS by 20% on average. Meanwhile, the two methods also show a large performance boost over the other hard negative sampling methods and the curriculum-learning-based method.
RQ2: Hardness Visualization To justify the motivation of AHNS, i.e., adaptively selecting the hardnesses of negative samples to alleviate FPP and FNP, we plot the curves of average negative hardness and NDCG@20 of RNS, DNS, DENS, and AHNSp=−2 in Fig. 5. Due to space limitations, we only report the results on the three Amazon datasets.
Figure 5: Average negative hardness and NDCG@20 of RNS, DNS, DENS, and AHNSp=−2 over training epochs (panels: (a)/(b) Phones, (c)/(d) Sports, (e)/(f) Tools).
From these figures, we have the following key findings:
• As shown in Fig. 5(a), 5(c), and 5(e), compared to the fixed hardness negative sampling methods RNS, DNS, and DENS, AHNSp=−2 can adaptively adjust the hardnesses of negative samples as training progresses. Specifically, in the early stages of training, AHNSp=−2 favors negative samples with higher hardnesses, while in the later stages of training, it prefers negative samples with lower hardnesses.
• As shown in Fig. 5(b), 5(d), and 5(f), the performance of RNS peaks in the early stages of training and remains stable thereafter; DNS and DENS perform better than RNS but suffer a significant performance drop in the later stages of training; and AHNSp=−2 achieves the best performance while maintaining stability similar to RNS.
• The average negative hardness and NDCG@20 of RNS (the blue line) verify the existence of FPP: when only easy negative samples can be selected during the training process, items of no interest but with initially high predicted scores may not be sufficiently updated and will still be recommended to users, leading to the suboptimal performance of RNS.
• The average negative hardness and NDCG@20 of DNS (the orange line) and DENS (the green line) verify the existence of FNP: when only hard negative samples can be selected during the training process, items of interest may be selected as negative and ranked lower in the recommendation list, resulting in the performance drop of DNS and DENS.
• The average negative hardness and NDCG@20 of AHNSp=−2 (the red line) justify our motivation. For positives with lower predicted scores, by selecting items with higher hardnesses as negative, AHNSp=−2 alleviates FPP and reaches a higher peak; for positives with higher predicted scores, by selecting items with lower hardnesses as negative, AHNSp=−2 avoids FNP and thus prevents the performance drop.
RQ3: Hyperparameter Study As discussed in Thm. 3, the hyperparameters α and β affect the hardnesses of selected negative samples. Here we study how these hyperparameters affect recommendation performance. Fig. 6 shows Recall@20 and NDCG@20 of AHNSp=−2 under different α or β values, with the other hyperparameters unchanged, on the three Amazon datasets. We can see that it is intractable to identify universally optimal values of α and β, since they differ across datasets and evaluation metrics. In practice, however, we can achieve desirable performance within a relatively wide range of α or β values, which relieves the overhead of hyperparameter tuning.
Figure 6: Performance of AHNSp=−2 w.r.t. different hyperparameters (panels: varying α with β = 0.1 and varying β with α = 1.0, on Phones, Sports, and Tools).
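Since the selected hardness follows Eq. (5), Hardness(i−∗) = β·(e_u^T e_{i+} + α)^p, the combined effect of α, β, and p can also be inspected directly. The sketch below simply tabulates that curve and is purely illustrative.

```python
import numpy as np

def target_hardness(pos_score: np.ndarray, alpha: float, beta: float, p: float) -> np.ndarray:
    """Hardness of the ideal negative as a function of the positive's score (Eq. (5))."""
    return beta * (pos_score + alpha) ** p

s = np.linspace(0.5, 3.0, 6)            # predicted scores of the positive
for p in (-1.0, -2.0, -3.0):
    h = target_hardness(s, alpha=0.1, beta=0.5, p=p)
    print(f"p={p}: {np.round(h, 3)}")   # smaller p -> steeper decay
# Every curve passes through (1 - alpha, beta): here (0.9, 0.5), matching Fig. 4.
```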
RQ4: Efficiency Analysis As presented in Algo. 1, AHNSp<0 does not introduce additional time cost compared to the simplest hard negative sampling method, DNS. Here we empirically compare the per-epoch training time of AHNSp=−2 and the other baseline methods on the ML-1M dataset. All methods are implemented under the same framework and with optimal hyperparameters to ensure fairness. The results are shown in Fig. 7. DNS(M, N) takes the longest time, as it requires an extremely large candidate negative set to adjust the hardness of negative samples. SSM costs the second-longest time because multiple negative samples are selected to participate in the training of the CF model. The time difference between AHNSp=−2 and the other hard negative sampling methods is marginal, and RNS undoubtedly takes the least time. Considering the performance improvements that AHNSp=−2 brings (Tab. 2), we believe that AHNSp=−2 is the best negative sampling method in terms of both efficiency and performance.
Figure 7: Time (seconds) for training per epoch on ML-1M w.r.t. different methods (log scale): DNS(M, N) 393.09, SSM 63.62, MixGCF 39.54, DENS 37.41, DNS 37.08, AHNSp=−2 36.72, CuCo 36.55, RNS 5.09.
Conclusion In this paper, we propose a new negative sampling paradigm, AHNS, with three key criteria, which enables adaptive selection of the hardnesses of negative samples to alleviate FPP and FNP. We devise a concrete instantiation, AHNSp<0, and theoretically demonstrate that it fits the three criteria of AHNS well and achieves a larger lower bound on NDCG. Comprehensive experiments confirm that AHNSp<0 provides a promising new research direction for negative sampling to further boost the performance of implicit CF models.
Acknowledgments This work was supported by the Heilongjiang Key R&D Program of China under Grant No. GA23A915 and the National Natural Science Foundation of China under Grant No. 62072136. It was also partially supported by the Hong Kong Baptist University IG-FNRA project under Grant No. RC-FNRA-IG/21-22/SCI/01.
References
Cai, L.; and Wang, W. Y. 2018. KBGAN: Adversarial Learning for Knowledge Graph Embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 1470-1480.
Chen, J.; Dong, H.; Wang, X.; Feng, F.; Wang, M.; and He, X. 2023. Bias and Debias in Recommender System: A Survey and Future Directions. ACM Transactions on Information Systems, 41(3): 1-39.
Chen, J.; Lian, D.; Jin, B.; Zheng, K.; and Chen, E. 2022. Learning Recommenders for Implicit Feedback with Importance Resampling. In Proceedings of the ACM Web Conference 2022, 1997-2005.
Chen, T.; Sun, Y.; Shi, Y.; and Hong, L. 2017. On Sampling Strategies for Neural Network-based Collaborative Filtering. In Proceedings of the 23rd ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 767-776.
Chu, G.; Wang, X.; Shi, C.; and Jiang, X. 2021. CuCo: Graph Representation with Curriculum Contrastive Learning. In Proceedings of the 30th International Joint Conference on Artificial Intelligence, 2300-2306.
Ding, J.; Quan, Y.; Yao, Q.; Li, Y.; and Jin, D. 2020. Simplify and Robustify Negative Sampling for Implicit Collaborative Filtering. In Proceedings of the 34th International Conference on Neural Information Processing Systems.
He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; and Wang, M. 2020. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 639-648.
Huang, T.; Dong, Y.; Ding, M.; Yang, Z.; Feng, W.; Wang, X.; and Tang, J. 2021. MixGCF: An Improved Training Method for Graph Neural Network-based Recommender Systems. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 665-674.
Kingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization.
In Proceedings of the 3rd International Conference on Learning Representations.
Lai, R.; Chen, L.; Zhao, Y.; Chen, R.; and Han, Q. 2023. Disentangled Negative Sampling for Collaborative Filtering. In Proceedings of the 16th International Conference on Web Search And Data Mining, 96-104.
Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; and Dollár, P. 2017. Focal Loss for Dense Object Detection. In Proceedings of the IEEE International Conference on Computer Vision, 2980-2988.
Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G. S.; and Dean, J. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of the 27th International Conference on Neural Information Processing Systems, 3111-3119.
Park, D. H.; and Chang, Y. 2019. Adversarial Sampling and Training for Semi-Supervised Information Retrieval. In Proceedings of the 28th International Conference on World Wide Web, 1443-1453.
Rendle, S.; Freudenthaler, C.; Gantner, Z.; and Schmidt-Thieme, L. 2009. BPR: Bayesian Personalized Ranking from Implicit Feedback. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, 452-461.
Shi, W.; Chen, J.; Feng, F.; Zhang, J.; Wu, J.; Gao, C.; and He, X. 2023. On the Theories Behind Hard Negative Sampling for Recommendation. In Proceedings of the ACM Web Conference 2023, 812-822.
Wang, J.; Yu, L.; Zhang, W.; Gong, Y.; Xu, Y.; Wang, B.; Zhang, P.; and Zhang, D. 2017. IRGAN: A Minimax Game for Unifying Generative and Discriminative Information Retrieval Models. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, 515-524.
Wang, X.; He, X.; Wang, M.; Feng, F.; and Chua, T.-S. 2019. Neural Graph Collaborative Filtering. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, 165-174.
Wu, G.; Volkovs, M.; Soon, C. L.; Sanner, S.; and Rai, H. 2019. Noise Contrastive Estimation for One-Class Collaborative Filtering. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, 135-144.
Wu, J.; Wang, X.; Gao, X.; Chen, J.; Fu, H.; Qiu, T.; and He, X. 2022. On the Effectiveness of Sampled Softmax Loss for Item Recommendation. arXiv preprint arXiv:2201.02327.
Xu, L.; Lian, J.; Zhao, W. X.; Gong, M.; Shou, L.; Jiang, D.; Xie, X.; and Wen, J.-R. 2022. Negative Sampling for Contrastive Representation Learning: A Review. arXiv preprint arXiv:2206.00212.
Yang, Z.; Ding, M.; Zhou, C.; Yang, H.; Zhou, J.; and Tang, J. 2020. Understanding Negative Sampling in Graph Representation Learning. In Proceedings of the 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1666-1676.
Zhang, W.; Chen, T.; Wang, J.; and Yu, Y. 2013. Optimizing Top-N Collaborative Filtering via Dynamic Negative Item Sampling. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, 785-788.
Zhao, Y.; Chen, R.; Lai, R.; Han, Q.; Song, H.; and Chen, L. 2023. Augmented Negative Sampling for Collaborative Filtering. In Proceedings of the 17th ACM Conference on Recommender Systems.
Zhu, Q.; Zhang, H.; He, Q.; and Dou, Z. 2022. A Gain-Tuning Dynamic Negative Sampler for Recommendation. In Proceedings of the Web Conference 2022, 277-285.
2024
961
18,808
MDFL: Multi-Domain Diffusion-Driven Feature Learning Daixun Li, Weiying Xie*, Jiaqing Zhang, Yunsong Li State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an 710071, China [email protected], [email protected], jqzhang [email protected], [email protected] *Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Abstract High-dimensional images, known for their rich semantic information, are widely applied in remote sensing and other fields. The spatial information in these images reflects the object's texture features, while the spectral information reveals the potential spectral representations across different bands. Currently, the understanding of high-dimensional images remains limited to a single-domain perspective, with degraded performance. Motivated by the masking texture effect observed in the human visual system, we present a multi-domain diffusion-driven feature learning network (MDFL), a scheme to redefine the effective information domain that the model really focuses on. This method employs diffusion-based posterior sampling to explicitly consider joint information interactions between the high-dimensional manifold structures in the spectral, spatial, and frequency domains, thereby eliminating the influence of masking texture effects in visual models. Additionally, we introduce a feature reuse mechanism to gather deep and raw features of high-dimensional data. We demonstrate that MDFL significantly improves the feature extraction performance of high-dimensional data, thereby providing a powerful aid for revealing the intrinsic patterns and structures of such data. The experimental results on three multi-modal remote sensing datasets show that MDFL reaches an average overall accuracy of 98.25%, outperforming various state-of-the-art baseline schemes. Code is available at https://github.com/LDXDU/MDFL-AAAI-24.
Introduction Recently, remarkable strides have been achieved in precise classification within the realm of natural images, partially attributable to the distinctive representations found within the ImageNet dataset (Li et al. 2022a). These representations frequently encompass local structures, repetitive patterns, and hierarchical arrangements (Yao et al. 2019), rendering them advantageous for deep learning models. However, progress in the domain of high-dimensional data analysis has been relatively sluggish in comparison to that in natural images due to several factors, including the absence of prior knowledge, sparse distributions, and low spatial resolutions (Jiang et al. 2021). Consequently, it becomes imperative to delve into intelligent interpretation algorithms that can be effectively applied to high-dimensional data, thereby improving their classification accuracy and effectiveness.
Figure 1: The forward diffusion process in the spectral-spatial domain: I0 denotes the image at time step 0 and IT the noised image at time step T, together with feature visualization results of the hyperspectral image at different time steps.
Multi-modal learning (MML), as a vital endeavor in high-dimensional data processing, plays a pivotal role in remote sensing applications (Zhao et al. 2023) by harnessing essential information from multiple sensor source images to enhance data acquisition (Feng et al. 2023; Hong et al. 2020). In scenarios where the availability of prior samples is limited, researchers have begun exploring fusion analysis approaches for high-dimensional data.
Many of these approaches leverage Convolutional Neural Network (CNN) architectures for inter-layer fusion. For instance, Hong et al. (Hong et al. 2022) proposed a deep encoder-decoder network architecture for hyperspectral and LiDAR data classification, effectively capturing spectral and spatial information in a deep latent space. Subsequently, larger-scale models like transformers (Roy et al. 2023) and diffusion models (Zhao et al. 2023) have emerged. Typically, the optimization of these methods involves two distinct steps: extracting features from high-dimensional data using large-scale model architectures, and subsequently classifying the extracted features using a classifier. While these methods mitigate the curse of dimensionality, their feature extraction process does not adequately cater to the manifold structures of high-dimensional data, resulting in suboptimal solutions. Moreover, compared to natural images, whose feature domains span the spatial, spectral, and frequency domains, high-dimensional data often exhibit more compact data relationships (Hu et al. 2022). Therefore, the majority of current techniques that exhibit exceptional performance in natural image scenarios are not applicable in high-dimensional contexts. The aforementioned methodologies approach the fusion of multi-modal data as a mere feature extraction problem. Such methods fail to fully exploit the particularity of multi-modal fusion, which involves preserving the spectral, spatial, and frequency domains inherent in high-dimensional data; they thus treat the fusion process as a black-box deep learning problem. Motivated by the masking texture effect observed in the human visual system (Liu et al. 2021): bottom-up attention driven by objective content can locate and analyze important target domains in detail, while other regions are only roughly analyzed or even ignored; this characteristic effectively enhances focus on image content and improves target recognition accuracy. However, excessive attention to a single target domain leads to the loss of analytical capability for other effective information domains in the image (Liang et al. 2022), which is consistent with the texture masking effect (Song et al. 2020; Li et al. 2022b). This observation is validated in the subsequent multi-domain learning experiments. As shown in Figure 1, a spectral-spatial sample of multi-modal data is obtained in the forward process of diffusion. However, as the time step increases, the texture occlusion effect caused by the digital-domain information makes it difficult to extract effective features. For the multi-modal fusion problem, it is clear that preserving the spatial, spectral, and frequency domains is the main objective. Therefore, deep learning methods should explicitly focus on this aspect, which inspires our proposed deep network, "MDFL". To the best of our knowledge, this is the first discussion of diffusion characteristics in the frequency domain of high-dimensional data, and the first exploration of the intrinsic properties of high-dimensional data that combines spatial and spectral information:
• A novel feature reuse mechanism is developed to integrate learned deep and shallow features. Consisting of two parallel attention modules, it can effectively aggregate cross-level features at minimal additional cost.
• We demonstrate the superiority of the proposed approach over existing SOTA methods on multiple multi-modal datasets. In addition, the effectiveness of the feature reuse mechanism and multi-domain learning for high-dimensional data interpretation is verified by ablation experiments.
Related Works The existing mainstream methods for multi-modal fusion mainly use CNNs and Transformers (Li et al. 2024), but both have significant problems in the field of high-dimensional image processing. High-dimensional image semantic segmentation usually needs to consider the spatial information of the image, and the sparse distribution and adjacency of ground objects have an important impact on the classification results. Traditional CNNs do not consider the spatial relationships between pixels, so their effectiveness in remote sensing image classification may be limited. The Transformer is mainly designed for sequence data; for high-dimensional image semantic segmentation, spatial dependencies are more important, so the Transformer is not the first choice either. The diffusion model is based on non-equilibrium thermodynamics and involves a Markov chain of diffusion steps that adds random noise to the data and learns the reverse diffusion process to generate desired data samples from the noise (Croitoru et al. 2023). Unlike VAEs or flow models, diffusion models have a fixed forward process and high-dimensional latent variables (Nichol and Dhariwal 2021). Diffusion models are good at capturing fine details and realistic textures of high-dimensional images and have proven to be very scalable to high-dimensional data. Moreover, diffusion models can effectively extract manifold features at different noise steps. Diffusion models have been applied to various fields such as image super-resolution (Esser, Rombach, and Ommer 2021), semantic segmentation (Baranchuk et al. 2021), and classification (Han, Zheng, and Zhou 2022). Research in this area can be categorized into three directions: effective sampling, improved likelihood estimation, and handling data with special structures (Yang et al. 2022). Effective sampling involves generating samples using iterative methods with a large number of evaluation steps. Karras et al. (Karras et al. 2022) determined optimal time discretizations and applied high-order Runge-Kutta methods for sampling. They also evaluated different sampler schedules and analyzed the role of randomness in the sampling process. Remote sensing data, which comprise observations from satellites or sensors, often exhibit spatial correlations and geographic distributions. Zhou et al. (Zhou et al. 2023) designed a temporal leap feature library and a dynamic feature fusion module to utilize rich temporal leap features and learn information-rich multi-temporal representations. Han et al. (Han, Zheng, and Zhou 2022) introduced the classification and regression diffusion model, which combines a denoising diffusion-driven generative model with a pretrained mean estimator for more accurate instance-level confidence evaluations in classification tasks. Hyperspectral images are a widely used representative type of high-dimensional data, and diffusion-based classification models have been proposed to fully utilize their digital-domain characteristics.
Although these methods have achieved satisfactory results, challenges persist in the fusion of multi-modal data, such as the inability to capture the close relationships among multiple target domains. Additionally, current diffusion models used for remote sensing classification do not consider the manifold structure of high-dimensional data.
Methodology
Problem Formulation The problem of high-dimensional land cover classification can be defined as accurately assigning each pixel in a remote sensing image to one of several land cover classes. Given a high-dimensional image I, we focus on how to train on data of different modalities $I_1, I_2 \in \mathbb{R}^{h \times w \times c}$ to achieve the classification task, where h, w, and c denote the height, width, and number of channels, and $I_1(p)$ is the p-th pixel of the first modality. The two modalities capture the same scene and share the same label information, denoted $L \in \mathbb{R}^{h \times w \times c}$ with $C \in \mathbb{N}^*$ classification categories representing various land cover categories, such as buildings, roads, and fields. The objective of image classification is to develop a model $\Theta(I_1, I_2)$ that can effectively map input images from various modalities to a novel representation $C_{max}(I_1, I_2)$ indicating the probability of each pixel being associated with each category. By thresholding the maximum class probability at τ, a binary prediction map is obtained through hard classification; the values 1 and 0 in the map respectively indicate the presence of the specific category and of other categories, which is defined as:
$\Theta(I_1, I_2) = \begin{cases} 0, & \text{if } C_{max}(I_1, I_2) < \tau, \\ 1, & \text{otherwise.} \end{cases}$  (1)
Based on this basic model, we propose the MDFL network architecture. As shown in Figure 2, the network consists of two branches driven by a diffusion model for multi-domain learning. Thus, the fused feature map $C_{max}(I_1, I_2)$ in our work can be expressed as:
$C_{max}(I_1, I_2) = \Phi(I_1, I_2 \mid \alpha_1, \alpha_2)$,  (2)
where the nonlinear objective model Φ(·) transforms the image space into the classification space, and $\alpha_1, \alpha_2$ represent the corresponding parameters of the two branches.
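As a worked illustration of Eqs. (1)-(2), the following sketch thresholds per-pixel class probabilities into a binary prediction map; the fused-probability tensor is only a stand-in for the output of Φ and is assumed for illustration.

```python
import numpy as np

def hard_classification(prob_map: np.ndarray, tau: float = 0.5) -> np.ndarray:
    """Eq. (1): 1 where the maximum class probability reaches tau, else 0."""
    c_max = prob_map.max(axis=-1)           # per-pixel max over C classes
    return (c_max >= tau).astype(np.uint8)  # binary prediction map

# Toy usage: h x w x C softmax-like scores standing in for Phi(I1, I2 | a1, a2).
h, w, C = 4, 4, 5
probs = np.random.dirichlet(np.ones(C), size=(h, w))  # each pixel's probs sum to 1
print(hard_classification(probs, tau=0.5))
```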
Network Architecture In this section, the network architecture of the diffusion-driven multi-domain feature fusion learning method is introduced (Figure 2) to explicitly model a compact representation of each modality's target domains for the fusion classification of multi-modal data.
Figure 2: Overview of the multi-domain diffusion-driven feature learning framework, which combines a spatial-spectral matrix and a frequency domain parser (FDP, built from FFT, a learnable parser, and IFFT) for joint learning of multi-modal data (HSI and LiDAR). The MDFL framework integrates the two modalities at the feature level via feature reuse and a fusion module, after which the extracted features are passed to the downstream model for classification.
Diffusion-driven spectral-spatial feature learning In the realm of high-dimensional data analysis, a thorough investigation of the spectral and spatial domains is crucial for elucidating the diffusion characteristics. This section proposes an approach to learning diffusion-driven spectral-spatial features in a multi-domain posterior sampling framework. The proposed method utilizes a conditional generation module and a maximum likelihood estimation module, which jointly analyze the spectral, spatial, and frequency domains. First, we add noise to the digital-domain instance in the forward propagation stage and take the fusion of multiple noise scales as the input to the model. As shown in Figure 1, the forward diffusion process is inspired by the principles of non-equilibrium thermodynamics. As a Markov chain, the one-step formula can be obtained from the iterative formula:
$I_t = \sqrt{\bar{\alpha}_t}\, I_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon, \qquad \bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i$,  (3)
where $\{\alpha_t\}$ is the hyperparameter set given by the noise schedule, and $\epsilon \sim \mathcal{N}(0, 1)$ is Gaussian noise. The noised maps are first concatenated along the channel dimension as:
$\tilde{I} = \mathrm{Concat}[I_0, I_{50}, I_{100}, I_{200}, I_{400}]$.  (4)
During training, a U-Net framework $f_\theta(z_t, t)$ is trained to predict $\tilde{I}$ from $z_t$ by minimizing the training objective with an $\ell_2$ loss: $\mathcal{L}_{train} = \frac{1}{2}\| f_\theta(z_t, t) - \tilde{I} \|^2$. Based on Bayes' theorem, the posterior $q(z_{t-1} \mid z_t, z_0)$ is found to be a Gaussian distribution as well:
$q(z_{t-1} \mid z_t, z_0) = \mathcal{N}(z_{t-1};\, \tilde{\mu}(z_t, z_0),\, \tilde{\beta}_t \tilde{I})$,  (5)
where
$\tilde{\mu}_t(z_t, z_0) = \frac{\sqrt{\bar{\alpha}_{t-1}}(1 - \alpha_t)}{1 - \bar{\alpha}_t} z_0 + \frac{\sqrt{\alpha_t}(1 - \bar{\alpha}_{t-1})}{1 - \bar{\alpha}_t} z_t$,  (6)
and
$\tilde{\beta}_t = \frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t}(1 - \alpha_t)$,  (7)
are the mean and variance of this Gaussian distribution. We can obtain a sample from $q(z_0)$ by first sampling from $q(z_T)$ and running the reverse steps $q(z_{t-1} \mid z_t)$ until $z_0$. Besides, the distribution of $q(z_T)$ is nearly an isotropic Gaussian for a sufficiently large T and a reasonable schedule of $\beta_t$ ($\beta_t \to 0$), which makes it trivial to sample $z_T \sim \mathcal{N}(0, \tilde{I})$. Moreover, we can approximate $q(z_{t-1} \mid z_t)$ using a neural network, since calculating $q(z_{t-1} \mid z_t)$ exactly would depend on the entire data distribution. The network is optimized to predict a mean $\mu_\theta$ and a diagonal covariance matrix $\Sigma_\theta$:
$p_\theta(z_{t-1} \mid z_t) := \mathcal{N}(z_{t-1};\, \mu_\theta(z_t, t),\, \Sigma_\theta(z_t, t))$.  (8)
At the inference stage, the data sample $z_0$ is reconstructed from noise $z_T$ with the model $f_\theta$ using an iterative updating rule, i.e., $z_T \to z_{T-\Delta} \to \ldots \to z_0$.
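A minimal sketch of the forward noising in Eq. (3) and the multi-scale concatenation in Eq. (4) follows; the linear beta schedule and tensor shapes are assumptions for illustration, not the released configuration.

```python
import torch

T = 500
betas = torch.linspace(1e-4, 2e-2, T)           # assumed linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t

def q_sample(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Eq. (3): I_t = sqrt(abar_t) * I_0 + sqrt(1 - abar_t) * eps."""
    eps = torch.randn_like(x0)
    return alphas_bar[t].sqrt() * x0 + (1.0 - alphas_bar[t]).sqrt() * eps

x0 = torch.randn(1, 64, 7, 7)  # a spectral-spatial patch (64 bands, 7x7 crop)
# Eq. (4): concatenate the clean patch with several noise scales channel-wise.
x_multi = torch.cat([x0] + [q_sample(x0, t) for t in (50, 100, 200, 400)], dim=1)
print(x_multi.shape)  # torch.Size([1, 320, 7, 7])
```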
Multi-head self-attention splits the queries, keys, and values into $h$ parts, performs the attention function in parallel, and then concatenates and linearly projects the output values of the heads to form the final output. The resulting tensor is then reshaped to its original dimensions, providing a refined representation of the input tensor that captures relevant spatial and spectral features. This module enables our model to effectively capture and exploit the intrinsic interdependencies between elements in the input tensor, thereby facilitating enhanced feature extraction and improving the interpretation performance on high-dimensional data in the digital domain.

Frequency-aware discriminative feature learning

Interpreting high-dimensional data in the digital domain alone induces a visual masking effect. As suggested by prior work, the phase of a blurry image plays an important role in deblurring, providing faithful information about motion patterns (Pan et al. 2019). Blur naturally exists in remote sensing images, whose resolution is constrained by the imaging distance. We therefore introduce the frequency domain to enhance detail information (such as texture and color) across the different modalities and make objects more discriminative. The pipeline and effect of frequency-aware discriminative feature learning are shown in Figure 4. Our main idea is to learn a parameterized filter applied to the Fourier-space features. Let $X^{(p)} \in \mathbb{R}^{H \times W \times C}$ be the input feature matrix, where $H$, $W$, and $C$ indicate the height, width, and number of channels of the feature, respectively. First, a 2D FFT is performed along the spatial dimensions:

$$M = \mathcal{F}\left(X^{(p)}\right) \in \mathbb{C}^{H \times W \times C}, \tag{11}$$

where $\mathcal{F}(\cdot)$ denotes the 2D FFT. We then apply a modulating convolution to obtain an attention map that exposes the importance of the different frequency components; in other words, the convolution weights can be regarded as a learnable version of the frequency filters widely used in digital image processing:

$$M' = W \otimes M, \tag{12}$$

where $W$ denotes the trainable weights in the frequency domain and $\otimes$ is the element-wise product. The filter regulates the noise at high frequencies while learning, according to its spectral characteristics, from the output of the spectral-spatial convolution in the preceding layer of the network. We revert to the spatial domain by applying the inverse FFT:

$$X' = \mathcal{F}^{-1}(M'). \tag{13}$$

Different from the spectral-spatial domain, FDFL makes global adjustments to specific frequency components to enhance the discriminative information, and it can be learned to constrain the different frequency components for adaptive integration.

Figure 3: Comparison of single-step training and multi-step fusion training under different synchronization lengths on (a) Houston2013, (b) Trento, and (c) MUUFL; each panel plots OA, AA, and kappa (accuracy, %) against the time step t.

Figure 4: Visualization of the frequency domain parser (FFT, learnable parser, inverse FFT).

Feature reuse module

In this framework, all network layers share one objective: extracting diverse types of features to achieve accurate recognition. The downstream model is expected to leverage both the deep and shallow features of the upstream U-Net to optimize efficiency.
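A minimal sketch of the frequency domain parser of Eqs. (11)-(13) follows; storing the learnable filter as a real/imaginary pair and giving it one complex weight per frequency bin are assumptions about details the paper does not spell out.

```python
import torch
import torch.nn as nn

class FrequencyDomainParser(nn.Module):
    """Learnable frequency filter, Eqs. (11)-(13): FFT, element-wise
    modulation by trainable complex weights, then inverse FFT."""

    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # Trainable filter W; last dim holds the (real, imag) pair.
        self.weight = nn.Parameter(torch.randn(channels, height, width, 2) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) real-valued feature map
        m = torch.fft.fft2(x, dim=(-2, -1))             # Eq. (11)
        w = torch.view_as_complex(self.weight)          # learnable filter W
        m = w * m                                       # Eq. (12), element-wise product
        return torch.fft.ifft2(m, dim=(-2, -1)).real   # Eq. (13), back to spatial domain
```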
To actualize this concept, we introduce a feature reuse mechanism that combines the inherent and semantic information of an image. Specifically, we propose two parallel attention modules, each of which modulates the current layer to attain high-level features. The procedure is

$$X_{\mathrm{FRM}} = \mathrm{FRM}\left(X_{\mathrm{low}}, X_{\mathrm{deep}}\right) = X_{\mathrm{low}} + X_{\mathrm{deep}}, \tag{14}$$

where $\mathrm{FRM}(\cdot)$ represents the mapping function learned by the feature reuse module, and $X_{\mathrm{low}}$ and $X_{\mathrm{deep}}$ denote the shallow and deep features of the U-Net encoder, respectively. On the right-hand side, $X_{\mathrm{low}}$ and $X_{\mathrm{deep}}$ are the attention features of the shallow and deep layers, obtained as

$$X_{\mathrm{deep}} = \mathrm{Sigmoid}\left(F_d(X_{\mathrm{low}})\right) F_b(X_{\mathrm{deep}}) + F_b(X_{\mathrm{low}}), \tag{15}$$

$$X_{\mathrm{low}} = \mathrm{Sigmoid}\left(F_l(X_{\mathrm{low}})\right) F_b(X_{\mathrm{deep}}) + F_b(X_{\mathrm{low}}), \tag{16}$$

where $F_b(\cdot)$ is a 1×1 convolutional layer that constrains high-frequency noise, $F_d(\cdot)$ is a 1×1 deformable convolutional layer used on the deep layer, and $F_l(\cdot)$ is a 1×1 deformable convolutional layer used on the shallow layer. The embedding $X_{\mathrm{FRM}}$ is then injected into the downstream model to provide prior knowledge for recognizing the input image, which can be expressed as

$$X^{(p)} = \mathrm{MLP}\left(\mathrm{FRM}\left(\mathrm{FRM}\left(X^1_0, X^1_l\right),\, \mathrm{FRM}\left(X^2_0, X^2_l\right)\right)\right), \tag{17}$$

with $l \in \{1, \dots, L\}$. The original and deep information of each single modality are first fused by feature reuse, and the FRM results of the two modalities are then fused in the same way for classification by the downstream model.

Experiments and Analysis

Experimental Settings

Datasets. To validate the effectiveness of the proposed method in analyzing high-dimensional data, three multi-modal remote sensing datasets with hyperspectral imagery, namely the Houston2013, Trento, and MUUFL datasets, are selected to verify the proposed classification model. To assess classification performance on the test images, three metrics are used: overall accuracy (OA), average accuracy (AA), and the kappa (κ) coefficient. OA measures the ratio of correctly classified test samples to the total number of test samples; AA is the average accuracy across all classes; and κ measures the agreement between the classification maps generated by the model and the provided ground truth.

Implementation details. The experiments are conducted on a machine equipped with an NVIDIA A100 Tensor Core GPU. Training samples are randomly cropped to a size of 7 × 7. The Adam optimizer is employed with an initial learning rate of 1e-3 and a weight decay of 5e-3. Training spans 1000 epochs. In addition, a step scheduler with a step size of 50 and a gamma value of 0.9 is utilized, and the batch size is set to 64. For incorporating noise, a total of 500 steps is used, and the values t = 0, 50, 100, 200, 400 are selected.

Ablation Study

In this section, we conduct an ablation study to assess the individual contributions of the different components of the proposed MDFL to MML.

Figure 5: t-SNE visualizations for different time steps: (a) t = 0, 50, 100, 200, 400 (fused); (b) t = 50; (c) t = 100; (d) t = 200; (e) t = 400.
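Returning to the feature reuse module of Eqs. (14)-(16), a minimal sketch follows; plain 1×1 convolutions stand in for the deformable convolutions F_d and F_l of the paper, and identical feature shapes for the shallow and deep inputs are assumed.

```python
import torch
import torch.nn as nn

class FeatureReuseModule(nn.Module):
    """Sketch of the feature reuse module, Eqs. (14)-(16)."""

    def __init__(self, channels: int):
        super().__init__()
        self.f_b = nn.Conv2d(channels, channels, kernel_size=1)  # F_b: plain 1x1 conv
        self.f_d = nn.Conv2d(channels, channels, kernel_size=1)  # F_d: stand-in for deformable 1x1 conv
        self.f_l = nn.Conv2d(channels, channels, kernel_size=1)  # F_l: stand-in for deformable 1x1 conv

    def forward(self, x_low: torch.Tensor, x_deep: torch.Tensor) -> torch.Tensor:
        fb_low, fb_deep = self.f_b(x_low), self.f_b(x_deep)
        # Eq. (15): deep attention feature, gated by the shallow input.
        deep_att = torch.sigmoid(self.f_d(x_low)) * fb_deep + fb_low
        # Eq. (16): shallow attention feature, with its own gate.
        low_att = torch.sigmoid(self.f_l(x_low)) * fb_deep + fb_low
        # Eq. (14): sum of the two attention features.
        return low_att + deep_att
```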
Method | Houston2013 OA / AA / κ(×100) | Trento OA / AA / κ(×100) | MUUFL OA / AA / κ(×100)
RNN | 62.61 / 62.24 / 59.64 | 96.43 / 92.38 / 95.21 | 88.79 / 75.84 / 85.18
Cross | 91.84 / 92.70 / 91.16 | 97.82 / 97.35 / 97.09 | 87.29 / 63.81 / 82.75
CNN-2D | 92.30 / 92.69 / 91.65 | 98.65 / 94.58 / 98.19 | 91.88 / 78.44 / 89.22
CALC | 88.97 / 90.78 / 88.06 | 94.62 / 91.33 / 92.81 | 93.94 / 74.09 / 92.00
ViT | 85.05 / 86.83 / 83.84 | 96.47 / 94.56 / 95.28 | 92.15 / 78.50 / 89.56
MFT | 89.80 / 91.54 / 88.93 | 98.32 / 95.98 / 97.75 | 94.34 / 81.48 / 92.51
MDFL | 99.16 / 96.72 / 94.09 | 99.43 / 96.13 / 94.41 | 96.96 / 82.63 / 94.63

Table 1: OA (%), AA (%), and κ (×100) on the Houston2013, Trento, and MUUFL datasets, considering HSI and LiDAR data.

Specifically, four scenarios are designed: (A) a comparison of our selected noise synchronization lengths against single-step training and multi-step fusion training; (B) omitting the frequency-domain analysis during the training of the diffusion model; (C) excluding the original information of the upstream model from feature reuse; and (D) the complete proposed model, MDFL.

Description | OA(%) | AA(%) | κ(×100)
(B) | 97.75 | 95.62 | 93.24
(C) | 98.10 | 90.44 | 89.39
(D) | 98.51 | 91.82 | 94.37

Table 2: Average ablation results for MDFL over the three datasets. The best result is highlighted.

As shown in Figure 3, the line plots of the three datasets show that single-step training performs worse than multi-step fusion training in the forward diffusion process, validating the effectiveness of multi-step noise fusion. To further confirm the benefit of multi-step time-step fusion for classification, we show t-SNE maps at different time steps in Figure 5. Panel (a) exhibits excellent clustering performance, with class 2 clearly separated from the other classes in the two-dimensional projection space, whereas in (b), (c), (d), and (e), class 2 is incorrectly mixed with other classes, which confirms the effectiveness of multi-step fusion. Additionally, Table 2 shows that the average OA of MDFL across the three datasets is 98.51%. In (B), removing the joint analysis in the frequency domain decreases the result to 97.75%, a performance loss of 0.76%, which indicates the importance of multi-domain joint learning. The effectiveness of feature reuse on high-dimensional manifold structures is validated in (C). Therefore, in high-dimensional feature extraction, multi-target-domain joint learning and the feature reuse module can effectively enhance performance.

Comparisons with Previous Methods

The accuracy of the proposed model and of competing models on the Houston2013, Trento, and MUUFL datasets is presented in Table 1, with the best results in bold. As comparison methods, we consider a classic deep learning technique, RNN (Cho et al. 2014); mainstream RS multi-modal methods, including Cross (Hong et al. 2020), CNN-2D, and CALC (Ding et al. 2022); and transformer methods such as ViT (Dosovitskiy et al. 2021) and MFT (Roy et al. 2023). Our evaluation demonstrates that the proposed method attains the highest OA, AA, and κ scores on most classification tasks, thus surpassing the other methods. Specifically, on the Houston2013 dataset, MDFL exhibits superior accuracy across all categories, outperforming mainstream methods on these coefficients.
Although some traditional methods perform competitively in accuracy when combining HSI and LiDAR data, our model still presents significant advances over the other models. Compared with the transformer architecture MFT, MDFL demonstrates a 9.36% increase in OA, a 5.18% increase in AA, and a 5.16% increase in κ. Compared with Cross, MDFL increases OA by 7.32%, AA by 4.02%, and κ by 2.93%. Overall, MDFL achieves comprehensive performance improvements across the three multi-modal datasets and offers an effective solution for feature extraction from high-dimensional data.

Result Visualization

The results are depicted in Figures 6, 7, and 8, illustrating the macroscopic performance of the classification maps produced by MDFL. To achieve this, we employ a visualization technique that assigns a unique color to each class. MDFL reconstructs high-dimensional features by integrating the joint information from the spectral, spatial, and frequency domains. This approach effectively reduces the granularity of textures in the classification maps, resulting in a more diverse and finely detailed representation. Overall, MDFL is well suited to generating classification maps with enhanced performance and intricate detail, making it particularly suitable for land use and scene classification applications.

Figure 6: Visualization on the Houston2013 dataset: (a) false-color HSI map, (b) LiDAR map, (c) ground truth, (d) MDFL classification map. Classes: grass-healthy, grass-stressed, grass-synthetic, tree, soil, water, residential, commercial, road, highway, railway, parking-lot 1, parking-lot 2, tennis court, running track.

Figure 7: Visualization on the Trento dataset: (a) false-color HSI map, (b) LiDAR map, (c) ground truth, (d) MDFL classification map. Classes: apples, buildings, ground, woods, vineyard, roads.

Figure 8: Visualization on the MUUFL dataset: (a) false-color HSI map, (b) LiDAR map, (c) ground truth, (d) MDFL classification map. Classes: trees, grass-pure, grass-groundsurface, dirt-and-sand, road materials, water, buildings' shadow, buildings, sidewalk, yellow curb, cloth panels.

Conclusion

In the context of high-dimensional image feature extraction, a significant challenge lies in developing a unified MML model capable of facilitating joint information exchange and feature extraction across multiple domains; this problem has not been fully addressed by previous frameworks, which primarily focus on a single spatial domain. Taking inspiration from the texture masking effect observed in the human visual system, we propose a novel approach for joint feature learning in high-dimensional multi-modal images. The approach encompasses the spectral, spatial, and frequency domains to comprehensively capture intricate features. Additionally, we consider the manifold structure of high-dimensional data and introduce a feature reuse mechanism to aggregate deep and primitive features from multiple modalities. With multi-domain joint learning, our method not only reveals the underlying patterns and structures of high-dimensional data but also maximizes the performance of feature extraction. Extensive experiments have been conducted to validate the efficacy of the proposed method, MDFL, in tackling MML problems. The results consistently demonstrate that MDFL outperforms existing methods, showcasing superior performance and highlighting its potential for addressing the challenges of high-dimensional data analysis.
References

Baranchuk, D.; Rubachev, I.; Voynov, A.; Khrulkov, V.; and Babenko, A. 2021. Label-efficient semantic segmentation with diffusion models. arXiv preprint arXiv:2112.03126.
Cho, K.; Van Merriënboer, B.; Bahdanau, D.; and Bengio, Y. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.
Croitoru, F.-A.; Hondru, V.; Ionescu, R. T.; and Shah, M. 2023. Diffusion models in vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
Ding, K.; Lu, T.; Fu, W.; Li, S.; and Ma, F. 2022. Global–Local Transformer Network for HSI and LiDAR Data Joint Classification. IEEE Transactions on Geoscience and Remote Sensing (TGRS), 60: 1–13.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference on Learning Representations (ICLR).
Esser, P.; Rombach, R.; and Ommer, B. 2021. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 12873–12883.
Fan, H.; Xiong, B.; Mangalam, K.; Li, Y.; Yan, Z.; Malik, J.; and Feichtenhofer, C. 2021. Multiscale vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 6824–6835.
Feng, Z.; Song, L.; Yang, S.; Zhang, X.; and Jiao, L. 2023. Cross-Modal Contrastive Learning for Remote Sensing Image Classification. IEEE Transactions on Geoscience and Remote Sensing (TGRS).
Han, X.; Zheng, H.; and Zhou, M. 2022. Card: Classification and regression diffusion models. Advances in Neural Information Processing Systems (NIPS), 35: 18100–18115.
Hong, D.; Gao, L.; Hang, R.; Zhang, B.; and Chanussot, J. 2022. Deep Encoder–Decoder Networks for Classification of Hyperspectral and LiDAR Data. IEEE Geoscience and Remote Sensing Letters (GRSL), 19: 1–5.
Hong, D.; Gao, L.; Yokoya, N.; Yao, J.; Chanussot, J.; Du, Q.; and Zhang, B. 2020. More diverse means better: Multimodal deep learning meets remote-sensing imagery classification. IEEE Transactions on Geoscience and Remote Sensing (TGRS), 59(5): 4340–4354.
Hu, X.; Cai, Y.; Lin, J.; Wang, H.; Yuan, X.; Zhang, Y.; Timofte, R.; and Van Gool, L. 2022. HDNet: High-resolution dual-domain learning for spectral compressive imaging. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 17542–17551.
Jiang, K.; Xie, W.; Lei, J.; Jiang, T.; and Li, Y. 2021. LREN: Low-rank embedded network for sample-free hyperspectral anomaly detection. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), volume 35, 4139–4146.
Karras, T.; Aittala, M.; Aila, T.; and Laine, S. 2022. Elucidating the design space of diffusion-based generative models. Advances in Neural Information Processing Systems (NIPS), 35: 26565–26577.
Li, D.; Ling, H.; Kim, S. W.; Kreis, K.; Fidler, S.; and Torralba, A. 2022a. BigDatasetGAN: Synthesizing ImageNet with pixel-wise annotations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 21330–21340.
Li, D.; Xie, W.; Li, Y.; and Fang, L. 2024. FedFusion: Manifold-Driven Federated Learning for Multi-Satellite and Multi-Modality Fusion. IEEE Transactions on Geoscience and Remote Sensing (TGRS), 62: 1–13.
Li, W.; Lin, Z.; Zhou, K.; Qi, L.; Wang, Y.; and Jia, J. 2022b. MAT: Mask-aware transformer for large hole image inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10758–10768.
Liang, J.; Hu, D.; Feng, J.; and He, R. 2022. DINE: Domain adaptation from single and multiple black-box predictors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8003–8013.
Liu, S.; Wang, S.; Liu, X.; Gandomi, A. H.; Daneshmand, M.; Muhammad, K.; and De Albuquerque, V. H. C. 2021. Human memory update strategy: a multi-layer template update mechanism for remote visual monitoring. IEEE Transactions on Multimedia (TM), 23: 2188–2198.
Nichol, A. Q.; and Dhariwal, P. 2021. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning (ICML), 8162–8171. PMLR.
Pan, L.; Hartley, R.; Liu, M.; and Dai, Y. 2019. Phase-only image based kernel estimation for single image blind deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 6034–6043.
Roy, S. K.; Deria, A.; Hong, D.; Rasti, B.; Plaza, A.; and Chanussot, J. 2023. Multimodal fusion transformer for remote sensing image classification. IEEE Transactions on Geoscience and Remote Sensing (TGRS).
Song, K.; Wei, X.-S.; Shu, X.; Song, R.-J.; and Lu, J. 2020. Bi-modal progressive mask attention for fine-grained recognition. IEEE Transactions on Image Processing (TIP), 29: 7006–7018.
Yang, L.; Zhang, Z.; Song, Y.; Hong, S.; Xu, R.; Zhao, Y.; Shao, Y.; Zhang, W.; Cui, B.; and Yang, M.-H. 2022. Diffusion models: A comprehensive survey of methods and applications. arXiv preprint arXiv:2209.00796.
Yao, T.; Pan, Y.; Li, Y.; and Mei, T. 2019. Hierarchy parsing for image captioning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2621–2629.
Zhao, Z.; Bai, H.; Zhu, Y.; Zhang, J.; Xu, S.; Zhang, Y.; Zhang, K.; Meng, D.; Timofte, R.; and Van Gool, L. 2023. DDFM: denoising diffusion model for multi-modality image fusion. arXiv preprint arXiv:2303.06840.
Zhou, J.; Sheng, J.; Fan, J.; Ye, P.; He, T.; Wang, B.; and Chen, T. 2023. When Hyperspectral Image Classification Meets Diffusion Models: An Unsupervised Feature Learning Framework. arXiv preprint arXiv:2306.08964.
CoreRec: A Counterfactual Correlation Inference for Next Set Recommendation

Kexin Li1, Chengjiang Long2*, Shengyu Zhang1, Xudong Tang1, Zhichao Zhai1, Kun Kuang1, Jun Xiao1
1 Zhejiang University  2 Meta Reality Labs
{12221004, sy zhang, 22051028, wheltz, junx}@zju.edu.cn, [email protected], [email protected]

Abstract

Next set recommendation aims to predict the items that are likely to be bought in the next purchase. Central to this endeavor is the task of capturing intra-set and cross-set correlations among items. However, the modeling of cross-set correlations poses specific challenges. Primarily, these correlations are often implicit, and the prevailing approach of establishing indiscriminate links across the entire set of objects neglects factors such as purchase frequency and the correlations between purchased items; such hastily formed connections across sets introduce substantial noise. Additionally, the preeminence of high-frequency items in numerous sets can overshadow and distort correlation modeling with respect to low-frequency items. We therefore devote this work to mitigating misleading inter-set correlations. With a fresh perspective rooted in causality, we delve into the question of whether correlations between a particular item and items from other sets should be relied upon for item representation learning and set prediction. Technically, we introduce the Counterfactual Correlation Inference framework for next set recommendation, denoted CoreRec. This framework establishes a counterfactual scenario in which the recommendation model impedes cross-set correlations to generate intervened predictions. By contrasting these intervened predictions with the original ones, we gauge the causal impact of inter-set neighbors on set prediction, essentially assessing whether they contribute to spurious correlations. During testing, we introduce a post-trained switch module that selects between set-aware item representations derived from either the original or the counterfactual scenario. To validate our approach, we experiment extensively on three real-world datasets, affirming both the effectiveness of CoreRec and the cogency of our analytical approach.

Introduction

In retail and e-commerce contexts, it is customary for patrons to acquire multiple items within a single transaction, often termed a "purchase set". Such sequentially acquired sets may unveil inherent interdependencies. Consequently, it is natural to discern users' inclinations and fathom their underlying motives by scrutinizing their historical purchase sets. This scrutiny can then pave the way for anticipatory forecasting of ensuing purchase sets.

*The corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: (a) Next set prediction; (b) regular graphs; (c) CoreRec. Our objective is to forecast succeeding sets from a given sequence of sets. Our investigation reveals that leveraging homogeneous information within the sets to establish connections inadvertently inflates the occurrence of high-frequency objects. To address this, we propose incorporating heterogeneous information and causal graphs, which effectively rectify the bias stemming from the amplified prevalence of high-frequency objects.

Prior research (Hu and He 2019; Jung et al. 2021; Li et al. 2023; Qin, Wang, and Li 2021; Sun et al. 2020; Wang et al. 2020; Yu et al. 2022a, 2023, 2022b) has predominantly focused on investigating intra-set and inter-set correlations among items, aiming to enhance item representations and thereby facilitate a more nuanced understanding of user preferences. Notably, Yu et al. introduced temporal graphs to establish connections among items within the same set, thereby modeling intra-set relationships (Yu et al. 2020). In a similar vein, Sun et al. utilized a co-transformer framework to aggregate item representations at both the intra-set and inter-set levels (Sun et al. 2020).

In spite of the progress achieved in existing studies, two pivotal concerns impede the precise modeling of item correlations. Firstly, inter-set correlations, lacking inherent labels, remain implicit. This can lead models to erroneously connect unrelated items, possibly assimilating spurious correlations into item representation learning. Secondly, for an individual user, numerous high-frequency items recur across historical sets. For instance, as illustrated in Figure 1(a), items such as "napkin" and "gauze mask" are pervasive but not necessarily correlated, unlike less frequent items like "laptop" and "headset". Notably, the dominance of high-frequency items due to their extensive co-occurrence can distort correlation modeling, particularly for low-frequency items. Consequently, the inadvertent absorption of misleading correlations during item representation learning can undermine the accuracy of next set recommendation and suppress low-frequency items in the recommendations. Given that real-world recommenders are continuously refined in real time through user interactions, this phenomenon can trigger the Matthew effect and consequent performance degradation (Wang et al. 2021a).

Our objective is to mitigate spurious inter-set correlations. A pivotal aspect of this objective involves scrutinizing the impact of potentially correlated items on both item-level representation learning and prediction, which inherently entails estimating the causal effects of correlations on the prediction of subsequent sets. To accomplish this, we adopt the causal graph framework (Feng et al. 2021; Jr. 2005; Wang et al. 2021b; Wei et al. 2021; Zhang et al. 2021a,b) to delineate the causal relationships within next set recommendation (depicted in Figure 4). The crux of our approach is to create a counterfactual scenario for each item after an intervention, in which inter-set correlations are effectively blocked, compelling the model to depend solely on intra-set correlations for item representation learning. We can then ascertain the causal effect of inter-set correlations by comparing the predictions made under standard conditions with those made in the post-intervention situation.
Technically, we propose a Counterfactual Correlation Inference framework, namely CoreRec, which operates at the intersection of counterfactual analysis and correlation inference. CoreRec is designed to dissect and harness inter-set correlations in a controlled manner, enabling us to illuminate causal relationships within complex systems. The framework employs two distinct weighted graphs: a regular graph capturing the regular inter-set correlations and an intervened graph with these correlations strategically suppressed (depicted in Figure 1). Furthermore, CoreRec invokes a causal intervention that perturbs the aggregation mechanism, compelling the model to pivot towards the inherent intra-set correlation attributes of individual items. To operationalize this, we introduce a switch module that judiciously toggles between the intervened and regular item representations, taking into account multifaceted determinants such as causal influence and prediction reliability.

Extensive experiments are conducted to validate the effectiveness of our framework. CoreRec achieves state-of-the-art performance on three commonly used datasets. In particular, CoreRec achieves 61.46 PHR@20 and 47.28 Recall@20 on the JD dataset (vs. ETGNN: 56.52 PHR@20 and 38.12 Recall@20), improvements of 8.74% and 22.77%, respectively, over ETGNN. In summary, our contributions are three-fold:

• We identify two critical issues that hinder accurate inter-set correlation modeling, and formulate the causal graph of correlation-based next set recommendation.
• We propose a novel framework named CoreRec, which constructs two weighted graphs and a switch module, together formulating counterfactual inference and achieving adaptive inter-set correlation modeling.
• We conduct extensive experiments on three real-world datasets, and the experimental results strongly demonstrate that CoreRec outperforms all the baseline methods.

Related Work

Next Set Prediction is receiving increasing attention in recommendation system research (Hu et al. 2020; Li et al. 2023; Qin, Wang, and Li 2021; Yu et al. 2023). Notably, Rendle et al. (Rendle, Freudenthaler, and Schmidt-Thieme 2010) proposed a classical method for next basket recommendation that learns both sequential behaviors and personal tastes based on personalized transition graphs over underlying Markov chains. More recently, Yu et al. (Yu et al. 2020) adopted a method that learns element relationships from a set-level co-occurrence graph and uses attention-based temporal dependency learning for next set prediction. However, previous methods lack a meticulous examination of the authenticity of these presumed associations. To address this gap, we introduce a discerning switch model imbued with counterfactual techniques, which critically evaluates whether these associations are essential.

Causality-aware Model Prediction. Causal inference finds extensive utility across a spectrum of machine learning domains. In recommendation, causal inference (Pearl 2009) predominantly centers on mitigating the diverse biases intrinsic to user feedback, including position bias (Joachims, Swaminathan, and Schnabel 2017; Shengyu et al. 2023; Wang et al. 2021c; Zhang et al. 2021c), clickbait-related concerns, and popularity-induced bias. For example, Saito et al.
calculated an exposure propensity for each user-item pair and employed sample re-weighting to tackle the challenge of data missing not at random (Saito et al. 2020). Nevertheless, such methods lean heavily on the precision of propensity estimation and are often hampered by high propensity variance. Consequently, we focus on switch techniques, which are frequently enlisted as a subsequent remedy.

Problem Formalization

Let $U = \{u_1, \cdots, u_{|U|}\}$ denote a collection of users, $V = \{v_1, \cdots, v_{|V|}\}$ denote all the available items, and $E \in \mathbb{R}^{|V| \times d}$ denote the embedding matrix of all items. We use $\hat S_i = \left[v_i^1, v_i^2, \cdots, v_i^{|\hat S_i|}\right]$ to denote a user's historical interactions and organize temporal sets $S_i = \left\{s_i^1, s_i^2, \cdots, s_i^t\right\}$ by treating the items bought at the same timestamp as a set, in which $s_i^t = \left\{v^1, v^2, \cdots, v^{|s_i^t|}\right\}$ represents a set interacted with by user $u_i$ at time $t$. For a user, given the user's historical interaction temporal sets $S_i$, the goal is to predict the next set according to the historical records, that is,

$$s_i^{t+1} = F(S_i, W), \tag{1}$$

where $W$ represents the trainable parameters.

Approach

Previous methods often employed fixed item associations to formulate predictions. However, this rigid approach, characterized by predetermined artificial associations, hinders adaptability to data-specific nuances. CoreRec introduces a dynamic solution by training a switch model to grasp the distinct item associations inherent in each dataset, thus yielding data-adaptive item representations. CoreRec comprises graph-based item representation learning and a switch designed by counterfactual intervention. The method begins by constructing two graphs to delineate the standard and post-intervention conditions. Next, a switch model is trained through causal reasoning on historical sequences. This culminates in the prediction of forthcoming sets, anchored in the item representations selected by the switch mechanism.

Graph-based Item Representation Learning

We design two weighted graphs: the regular graph, encompassing connections among items both intra- and inter-set, and the intervened graph, encompassing solely the intra-set connections. During the item representation learning process, we comprehensively incorporate factors such as purchase time intervals and long- and short-term sequences.

Time Interval Matrices for Graph Construction. To reinforce the representation of the user's recently active interests, we extract a short-term sequence from the long-term original sequence, thereby obtaining two input sequences. We use $S_i^L = [s_i^1, \dots, s_i^{t/2}, \dots, s_i^t]$ to denote the long-term sequence, and $S_i^S = [s_i^{t/2}, \dots, s_i^t]$ to denote the extracted short-term sequence. Our investigation reveals that sets separated by extended time intervals display a comparatively diminished correlation. Capitalizing on this discernment, we formulate a time interval matrix to encapsulate the influence of temporal gaps between items. Note that the distance between two items in the same set is 0, and the distance between items in set $s_i^j$ and set $s_i^k$ is $|j - k|$. For arbitrary nodes $v_n^j \in s_i^j$ and $v_m^k \in s_i^k$, the time interval value $\delta_{n,m}$ is defined to be negatively correlated with the distance, i.e.,

$$\delta_{n,m} = |S_i^*| - |j - k|, \tag{2}$$

where $S_i^* \in \{S_i^L, S_i^S\}$ denotes the long-term or short-term sequence of user $i$, and $|S_i^*|$ denotes the total number of sets in the current sequence.
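As an illustration of Eq. (2), the following simplified sketch builds a single item-by-item time-interval matrix directly (the paper instead builds one matrix per set pair and averages them, as described next); all names are illustrative.

```python
import numpy as np

def time_interval_matrix(set_of_item: list, num_sets: int) -> np.ndarray:
    """Builds an item-by-item matrix of Eq. (2) values,
    delta_{n,m} = |S*_i| - |j - k|, then normalizes by the maximum."""
    n = len(set_of_item)
    t = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            t[a, b] = num_sets - abs(set_of_item[a] - set_of_item[b])
    return t / t.max()

# Toy sequence of 3 sets; each entry is the set index of an item.
t_i = time_interval_matrix([0, 0, 1, 2, 2], num_sets=3)
```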
The time-interval matrix is then constructed naturally from $\{\delta_{n,m}\}$. Specifically, we consider the time-interval values for each set with itself and for all 2-set pairs, which results in $C^2_{|S_i^*|} + 1$ time-interval matrices $\{T_{i,k}\}$, where $C^k_n$ denotes the number of $k$-combinations of a set of $n$ elements. Finally, we average all of these time-interval matrices and normalize the average by its maximum value to obtain the time-interval matrix $T_i$.

Regular Graphs Construction. We take a sequence containing three sets as an example, as shown in Figure 3. First, we define the purchase frequency of item $v_i$ as

$$f_{v_i} = D_{v_i} \Big/ \sum_{v_j \in \hat S_i} D_{v_j}, \tag{3}$$

where $D_{v_i}$ denotes the number of interactions with item $v_i$ in the sequence. We then divide the items into high-frequency and low-frequency items using a threshold $\varepsilon$:

$$v_{i,*} = \begin{cases} v_{i,*}^{\text{high}}, & \text{if } f_{v_i} > \varepsilon, \\ v_{i,*}^{\text{low}}, & \text{otherwise.} \end{cases} \tag{4}$$

We connect the items within the same set and also connect all the low-frequency items $v_{i,*}^{\text{low}}$ between any two different sets. Counting the co-occurrence pairs yields the co-occurrence matrix $C_i^{\text{regular}}$ of the regular graph. The weighted matrix of the regular graph is then

$$W_i^{\text{regular}} = \mathrm{Norm}_{\max}\left(C_i^{\text{regular}}\right) + \lambda\, T_i \odot \left(C_i^{\text{regular}} > 0\right), \tag{5}$$

where $\odot$ indicates the element-wise product, $C_i^{\text{regular}}$ denotes the co-occurrence matrix of the regular graph, and $T_i$ denotes the time interval matrix. The hyperparameter $\lambda$ controls the contribution of the time interval matrix. Finally, we add self-connections to the weighted matrix and construct the regular graph from it.

Intervened Graphs Construction. We connect all the items in the same set, obtaining three fully connected graphs. We then merge the same items across the different graphs and obtain the co-occurrence matrix $C_i^{\text{inter}}$. The weighted matrix $W_i^{\text{inter}}$ integrates the co-occurrence matrix $C_i^{\text{inter}}$ and the time-interval matrix $T_i$:

$$W_i^{\text{inter}} = \mathrm{Norm}_{\max}\left(C_i^{\text{inter}}\right) + \lambda\, T_i \odot \left(C_i^{\text{inter}} > 0\right), \tag{6}$$

where $\odot$ indicates the element-wise product, and the hyperparameter $\lambda$ controls the scale of the time interval matrix; we analyze the variation of $\lambda$ in the experimental section. We also add a self-connection for each item appearing in the sequence, which helps to reduce information loss. In this way, we construct an intervened graph with weights from the weighted matrix.
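The weighted matrices of Eqs. (5) and (6) share one form and differ only in which item pairs are counted in the co-occurrence matrix; a minimal sketch follows, in which the self-connection weight of 1 is an assumption (the paper adds self-connections without specifying their weight).

```python
import numpy as np

def weighted_matrix(cooc: np.ndarray, t_i: np.ndarray, lam: float) -> np.ndarray:
    """Shared form of Eqs. (5) and (6):
    W = Norm_max(C) + lambda * T_i (x) (C > 0), plus self-connections."""
    w = cooc / max(cooc.max(), 1e-12)                 # Norm_max(C)
    w = w + lam * t_i * (cooc > 0)                    # time-interval term, masked by co-occurrence
    np.fill_diagonal(w, np.maximum(np.diag(w), 1.0))  # self-connections (weight assumed)
    return w

def purchase_frequency(interaction_counts: np.ndarray) -> np.ndarray:
    """Eq. (3): f_{v_i} = D_{v_i} / sum_j D_{v_j}. Comparing against the
    threshold eps (Eq. (4)) splits items into high- and low-frequency,
    and only low-frequency items receive cross-set edges in the regular graph."""
    return interaction_counts / interaction_counts.sum()
```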
GNNs for Feature Encoding. We run GNNs on the two types of weighted graphs above. Let $G_i = (V_i, E_i)$ denote a graph with weighted matrix $W \in \mathbb{R}^{|V_i| \times |V_i|}$, where $V_i$ denotes the items in $S_i$ and $E_i$ denotes the edges in $G_i$. Each item in graph $G_i$ is linearly combined according to attention scores. Let $N(m)$ be the set of neighborhood nodes of $v_m$; the neighborhood representation $h_{N(m)}$ is

$$h_{N(m)} = \sum_{v_n \in N(m)} \pi(v_m, v_n)\, h_{v_n}, \tag{7}$$

where $\pi(v_m, v_n)$ estimates the importance weights of the different neighbors. We implement $\pi(v_m, v_n)$ as

$$\pi(v_m, v_n) = \mathrm{Relu}\left(w_1 \left[\left(h_{v_m} \odot h_{v_n}\right) \,\Vert\, \hat w_{mn}\right]\right), \tag{8}$$

where $\Vert$ indicates the concatenation operation, $\hat w_{mn}$ denotes the weight of the corresponding edge in the graph, and $w_1$ denotes trainable parameters. We concatenate the item representation $h_{v_m}$ and its neighborhood representation $h_{N(m)}$ to obtain the final sequence of item representations $R_i = \{h_{v_m} \mid m = 1, \ldots, |\hat S_i|\}$.

Figure 2: Overview of CoreRec. (a) Construct time interval matrices to capture temporal dependency. (b) Construct regular graphs and intervened graphs to obtain the item representation sequences $r_{i,*}^{\text{regular}}$ and $r_{i,*}^{\text{inter}}$, respectively. (c) Choose the final sequence of item representations $R_i$ via the switch model. (d) Train the switch by applying causal intervention.

Figure 3: Intervened graph and regular graph construction. The fusion function is given by Eqs. (5) and (6).

Next Set Prediction. From the above two weighted graphs, we obtain two sequences of item representations, $r_{i,*}^{\text{regular}}$ and $r_{i,*}^{\text{inter}}$; we then employ the post-trained switch to choose one sequence of item representations $R_i$. Existing methods usually form a set embedding by a pooling operation, but this causes information loss, so we directly use the sequence of item representations $R_i$ as input. We apply self-attention to capture temporal dependency:

$$Z_i = \mathrm{softmax}\left(\frac{(R_i W_q)(R_i W_k)^{\top}}{\sqrt{d_k}}\right)(R_i W_v), \tag{9}$$

where $W_q$, $W_k$, and $W_v$ are trainable parameters. We then employ $Z_i$ to update the original item embedding matrix $E_i$:

$$E^{\text{update}}_{i, I(j)} = E_{i, I(j)} + Z_{i,j}, \tag{10}$$

where $I(\cdot)$ is a function mapping item $v_{i,j}$ to its corresponding index in $E_i$. In Equation (10), the item representations are updated according to both the co-occurrence relationships and the temporal dependencies of the items; we maintain the original representations of all other items. The probability of each item appearing in the subsequent set can then be computed from the current state:

$$\hat y_i = \mathrm{sigmoid}\left(E^{\text{update}}_i \cdot w_o + b_o\right), \tag{11}$$

where $w_o \in \mathbb{R}^d$ and $b_o \in \mathbb{R}$ are trainable parameters providing the final prediction result. During training, predicting the next set can be treated as a multi-label learning problem (Ghamrawi and McCallum 2005; Yu et al. 2020; Zhang and Zhou 2006, 2013), so we adopt a binary cross-entropy loss with L2 regularization:

$$\mathcal{L} = -\frac{1}{N} \sum_{i}^{N} \frac{1}{|V|} \sum_{j}^{|V|} \left[ y_i^j \log \hat y_i^j + \left(1 - y_i^j\right) \log\left(1 - \hat y_i^j\right) \right] + \gamma \lVert W \rVert^2, \tag{12}$$

where $N$ denotes the number of training samples, $y_i$ and $\hat y_i$ denote the ground truth and the predicted appearance probabilities for the next set of user $u_i$, and $\gamma$ is a hyperparameter controlling the importance of the L2 regularization.

Switch Designed by Counterfactual Intervention

In recommendation systems, item correlations vary significantly across datasets, shaped by the unique attributes of the users who generate them. However, previous approaches have been restricted to static patterns for designing and establishing associations between items, and such methods struggle to flexibly adapt to distinct data structures.

Figure 4: Cause-effect view of set recommendation: (a) causal graph; (b) original prediction; (c) post-intervention prediction. Nodes denote the inter-set information, the intra-set feature, the item representation, and the prediction.
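Before turning to the switch, the prediction layer of Eqs. (9)-(11) can be sketched as follows; single-head attention and unique item indices are simplifying assumptions, and training would minimize the binary cross-entropy of Eq. (12) on the returned scores.

```python
import torch
import torch.nn as nn

class Predictor(nn.Module):
    """Sketch of the prediction layer, Eqs. (9)-(11)."""

    def __init__(self, d: int):
        super().__init__()
        self.wq = nn.Linear(d, d, bias=False)   # W_q
        self.wk = nn.Linear(d, d, bias=False)   # W_k
        self.wv = nn.Linear(d, d, bias=False)   # W_v
        self.out = nn.Linear(d, 1)              # w_o and b_o of Eq. (11)

    def forward(self, r: torch.Tensor, e: torch.Tensor, idx: torch.Tensor) -> torch.Tensor:
        # r: (n, d) item representation sequence R_i; e: (|V|, d) embedding
        # matrix E_i; idx: (n,) indices I(j), assumed unique for simplicity.
        q, k, v = self.wq(r), self.wk(r), self.wv(r)
        attn = torch.softmax(q @ k.t() / (k.size(-1) ** 0.5), dim=-1)
        z = attn @ v                            # Eq. (9)
        e_upd = e.clone()
        e_upd[idx] = e[idx] + z                 # Eq. (10): residual update of E_i
        return torch.sigmoid(self.out(e_upd)).squeeze(-1)   # Eq. (11): per-item scores
```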
To address this limitation, we advocate an approach in which the choice of item representation is informed by a switch model trained on historical datasets.

Causal Graph. The underpinning causal relationships among variables are elegantly captured by the causal graph illustrated in Figure 4. We conceptualize the inference of the next set prediction framework within this causal graph, which encapsulates four key variables:

• $X$, which denotes the item representation from the graph based on the intra-set correlations;
• $H$, which denotes the inter-set information;
• $\bar X$, which denotes the final item representation;
• $Y$, which denotes the prediction result.

The final item representation $\bar X$ is directly affected by the intra-set item representation $X$ and the inter-set information $H$. Existing set prediction methods can be divided into two categories. One simply aggregates item embeddings into set embeddings for prediction, directly utilizing the information within the set; here, the intra-set information $X$ between items directly impacts the result $Y$. The other uses a sequence of item embeddings for prediction, considering both intra- and inter-set information. As the inter-set information between items is introduced only when updating the item embeddings used for prediction, there is no direct effect from $H$ to $Y$. Moreover, the temporal set prediction result $Y$ is directly affected by the final item representation $\bar X$, represented as $\bar X \to Y$.

Causal Intervention. We utilize causal intervention (Galles and Pearl 2013; Mueller, Li, and Pearl 2021; Pearl 2012) to assess the causal effect of the inter-set information on the prediction (i.e., the causal effect of $H = h$). This method entails deliberately assigning a value to the treatment variable. In the causal graph described above, the causal effect $e$ is precisely defined as

$$e = f(x, h \mid \hat\theta) - f\left(x, \mathrm{do}(H = \varnothing) \mid \hat\theta\right) = f(x, h \mid \hat\theta) - f(x, \varnothing \mid \hat\theta) = \hat y - \hat y_h, \tag{13}$$

where the term $\mathrm{do}(H = \varnothing)$ denotes a forceful causal intervention assigning a reference status to $H$. This intervention yields the post-intervention prediction $f(x, \varnothing \mid \hat\theta)$ (see Figure 4(c)). Since $H$ has no predecessor, $f(x, \mathrm{do}(H = \varnothing) \mid \hat\theta) = f(x, \varnothing \mid \hat\theta)$, expressed as $\hat y_h$. Intuitively, the post-intervention prediction signifies the outcome that would be obtained if the inter-set information were absent from the target item representation. We contend that $e$ offers insight into selecting a more expressive item representation for the target item.

Algorithm 1: CoreRec Training
Input: training set sequence $S_{\text{train}}$.
Output: $\hat\theta$, the parameters of $\mathrm{GNN}(\cdot)$; $\hat\eta$, the parameters of $g(\cdot)$.
1: Optimize Eq. (11) with $S_{\text{train}}$, obtain $\hat\theta$; ▷ train the GNN
2: Construct the data $D$, including the regular and intervened cases;
3: Calculate the causal effect $e$; ▷ causal intervention
4: Optimize Eq. (17), obtain $\hat\eta$; ▷ train the switch model
5: Return $\hat\theta$ and $\hat\eta$.

Algorithm 2: CoreRec Inference
Input: testing set sequence $S_{\text{test}}$; parameters $\hat\theta$ and $\hat\eta$.
Output: binary prediction results $y$ for each item.
1: Extract features through $\mathrm{GNN}(\hat\theta)$ with $S_{\text{test}}$; ▷ encoding
2: Calculate $f(x, h \mid \hat\theta)$; ▷ regular prediction
3: Calculate $f(x, \varnothing \mid \hat\theta)$; ▷ post-intervention prediction
4: Calculate the causal effect $e$;
5: Calculate the final item embedding through Eq. (15) with $\hat\eta$;
6: Return the final classification $y$. ▷ final prediction
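A sketch of the intervention of Eq. (13): the same frozen predictor is run on the regular and the intervened representations and the two outputs are differenced; `predictor` is the hypothetical module sketched above, not a name from the paper.

```python
import torch

@torch.no_grad()
def causal_effect(predictor, r_regular, r_inter, e_items, idx):
    """Eq. (13): e = f(x, h | theta) - f(x, do(H = empty) | theta).
    The intervened graph blocks inter-set edges, so predictions from
    r_inter play the role of the do(H = empty) counterfactual."""
    y_hat = predictor(r_regular, e_items, idx)   # regular prediction, y_hat
    y_hat_h = predictor(r_inter, e_items, idx)   # post-intervention prediction, y_hat_h
    return y_hat - y_hat_h
```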
Switch Model. We train the set prediction model with the regular graph and the intervened graph, respectively. To construct the training data, we calculate the ground truth according to the correctness of $\hat z$ and $\hat z_h$, where $\hat z = \arg\max \hat y$ and $\hat z_h = \arg\max \hat y_h$ (see Figure 2(d)). We then train the switch model, with the GNN parameters fixed, on the two item representations $r_{i,*}^{\text{regular}}$ and $r_{i,*}^{\text{inter}}$ and the causal effect $e$. We devise the switch model as a multi-layer perceptron that makes a choice between $r_{i,*}^{\text{regular}}$ and $r_{i,*}^{\text{inter}}$:

$$r_{i,*} = \begin{cases} r_{i,*}^{\text{regular}}, & \hat p \ge \kappa, \\ r_{i,*}^{\text{inter}}, & \hat p < \kappa, \end{cases} \qquad \hat p = g\left(r_{i,*}^{\text{regular}}, r_{i,*}^{\text{inter}}, e \mid \eta\right), \tag{14}$$

where $g(\cdot)$ represents a binary classifier parameterized by $\eta$, whose output $\hat p$ guides the decision-making process, with $\kappa$ serving as the decision threshold. The training of the switch model is thus formulated as

$$\hat\eta = \min_{\eta} \sum_{(x, p) \in D} l(\hat p, p), \tag{15}$$

where $p$ and $\hat p$ denote the ground truth and the predicted probability.

Experiments

To evaluate the effectiveness of the proposed method, we conduct experiments on three real-world datasets.

TaFeng (TF) (www.kaggle.com/chiranjivdas09/ta-feng-grocery-dataset): a public dataset containing Chinese grocery store transactions from November 2000 to February 2001. We remove users whose purchase time spans fewer than 10 days.

Dataset | Method | K=20 (PHR / NDCG / Recall) | K=40 | K=60 | K=80
TaFeng | TOP | 35.45 / 9.21 / 10.17 | 44.65 / 10.24 / 13.09 | 52.77 / 11.32 / 16.77 | 57.82 / 11.98 / 19.03
TaFeng | DeepFM | 46.18 / 12.37 / 16.68 | 57.48 / 14.04 / 22.57 | 62.38 / 14.45 / 23.82 | 65.98 / 14.73 / 24.46
TaFeng | Sets2Sets | 49.86 / 13.56 / 17.79 | 61.01 / 14.94 / 23.44 | 70.45 / 15.78 / 25.15 | 72.02 / 16.65 / 27.33
TaFeng | DSNTSP | 60.01 / 16.40 / 21.50 | 68.69 / 18.56 / 26.57 | 70.90 / 19.42 / 28.54 | 72.05 / 19.57 / 29.65
TaFeng | DNNTSP | 62.77 / 16.45 / 22.13 | 69.70 / 18.09 / 26.96 | 72.67 / 19.43 / 29.37 | 74.25 / 19.44 / 31.14
TaFeng | ETGNN | 62.45 / 16.47 / 21.06 | 69.73 / 18.67 / 27.13 | 71.88 / 19.52 / 29.65 | 73.92 / 19.53 / 31.29
TaFeng | CoreRec | 62.38 / 16.91 / 22.27 | 70.39 / 18.73 / 27.45 | 74.46 / 19.70 / 30.59 | 76.34 / 20.22 / 32.30
TaFeng | improv.(%) | -0.63 / 2.67 / 0.63 | 0.95 / 0.32 / 1.18 | 2.46 / 0.92 / 3.17 | 2.81 / 3.32 / 3.23
TaoBao | TOP | 3.99 / 0.35 / 0.50 | 5.03 / 0.39 / 0.71 | 6.71 / 0.46 / 0.98 | 8.39 / 0.51 / 1.11
TaoBao | DeepFM | 22.09 / 2.98 / 2.71 | 27.61 / 3.27 / 3.71 | 29.90 / 3.37 / 4.34 | 31.57 / 3.44 / 4.71
TaoBao | Sets2Sets | 23.95 / 3.62 / 4.83 | 31.29 / 4.21 / 6.62 | 34.43 / 4.56 / 7.75 | 35.63 / 4.77 / 8.41
TaoBao | DSNTSP | 29.76 / 4.66 / 6.04 | 39.71 / 5.06 / 7.69 | 45.73 / 5.25 / 8.61 | 48.83 / 5.40 / 9.33
TaoBao | DNNTSP | 29.69 / 4.70 / 5.83 | 41.50 / 5.39 / 8.29 | 47.16 / 5.88 / 9.95 | 49.68 / 6.16 / 10.97
TaoBao | ETGNN | 32.52 / 5.60 / 6.24 | 42.87 / 5.78 / 8.45 | 47.93 / 6.12 / 10.23 | 50.01 / 6.42 / 11.32
TaoBao | CoreRec | 36.05 / 6.07 / 7.63 | 46.33 / 6.73 / 10.48 | 49.68 / 7.11 / 11.63 | 50.94 / 7.34 / 12.30
TaoBao | improv.(%) | 10.85 / 8.39 / 26.3 | 8.07 / 16.44 / 26.4 | 3.65 / 16.18 / 16.8 | 1.86 / 14.33 / 12.1
JingDong | TOP | 21.87 / 6.48 / 14.19 | 31.77 / 8.17 / 21.46 | 35.42 / 8.76 / 24.21 | 40.10 / 9.43 / 27.73
JingDong | DeepFM | 40.57 / 15.02 / 23.85 | 49.34 / 16.40 / 27.84 | 55.34 / 17.50 / 34.47 | 56.26 / 17.85 / 35.91
JingDong | Sets2Sets | 48.96 / 17.71 / 34.12 | 54.69 / 21.37 / 40.28 | 59.03 / 22.97 / 46.05 | 60.33 / 23.58 / 48.04
JingDong | DSNTSP | 50.52 / 24.36 / 38.51 | 64.45 / 25.09 / 46.59 | 69.21 / 25.54 / 48.84 | 71.72 / 26.39 / 55.15
JingDong | DNNTSP | 53.12 / 22.06 / 36.36 | 65.10 / 24.52 / 46.00 | 67.70 / 25.71 / 50.08 | 71.87 / 26.87 / 56.45
JingDong | ETGNN | 56.52 / 24.26 / 38.12 | 67.11 / 25.78 / 47.97 | 69.83 / 27.11 / 52.38 | 73.43 / 27.43 / 57.83
JingDong | CoreRec | 61.46 / 28.37 / 47.28 | 70.31 / 30.51 / 55.36 | 76.04 / 31.82 / 61.11 | 78.13 / 32.45 / 64.05
JingDong | improv.(%) | 8.74 / 16.46 / 22.77 | 4.77 / 18.35 / 15.41 | 8.89 / 17.37 / 16.67 | 6.41 / 18.30 / 10.76

Table 1: Top-K performance comparison. Bold values indicate the best score and underlined values the best among the baselines; the improvement (%) of CoreRec is computed relative to the underlined score.
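Returning to the switch module of Eqs. (14)-(15) before describing the remaining datasets: a minimal sketch of the binary classifier $g(\cdot \mid \eta)$ and the selection rule follows; the hidden size and the way the causal effect is featurized per item are assumptions, and training would minimize a binary cross-entropy between $\hat p$ and the correctness flag from Figure 2(d).

```python
import torch
import torch.nn as nn

class Switch(nn.Module):
    """Sketch of the switch g(. | eta), Eq. (14): an MLP scoring the
    regular vs. intervened representation of each item, given the
    estimated causal effect e."""

    def __init__(self, d: int, hidden: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * d + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, r_regular, r_inter, effect):
        # r_regular, r_inter: (n, d); effect: (n, 1) per-item causal effect, Eq. (13)
        return torch.sigmoid(self.mlp(torch.cat([r_regular, r_inter, effect], dim=-1)))

def select(r_regular, r_inter, p_hat, kappa: float = 0.5):
    """Eq. (14): choose the regular representation when p_hat >= kappa."""
    keep = (p_hat >= kappa).float()
    return keep * r_regular + (1.0 - keep) * r_inter
```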
TaoBao (TB) (tianchi.aliyun.com/dataset/dataDetail?dataId=649): a subset of Taobao user behavior data (Zhu et al. 2018) containing clicks, purchases, add-to-cart actions, and item favoring; we select all purchasing behaviors.

JingDong (JD) (jdata.jd.com/html/detail.html?id=8): user action records from February 1, 2018, to April 15, 2018. We remove users whose purchase time spans fewer than 5 days.

For the readers' convenience, we provide the statistics of the three datasets in Table 2.

Data | items | sets | users | cate
TF | 21,858 | 76,251 | 8,816 | 1,954
TB | 242,111 | 34,642 | 4,827 | 5,070
JD | 41,212 | 270,397 | 2,011 | 6

Table 2: Statistics of the three datasets after pre-processing. Note that "I/S" indicates the average ratio of items to sets, and "S/U" the average ratio of sets to users.

Implementation Details. We treat all items bought in the same order as a set and split each dataset into train, validation, and test sets across users with ratios of 80%, 10%, and 10%. For evaluation, we generate a ranking list of the top-K items from the output, with K ranging from 10 to 100 in intervals of 10. The number of epochs is set to 100, 300, and 500 on the TaFeng, TaoBao, and JD datasets, respectively. We adopt Adam (Kingma and Ba 2014) with a learning rate of 0.001 as the optimizer. The hidden dimension and batch size are set to 32 and 64. Furthermore, $\lambda$ in Eq. (5) and Eq. (6) is set to 0.4, and $\gamma$ in Eq. (12) is set to 0.8. The short-term sequence length is set to 1/2 of the original sequence. We implement our model with PyTorch (Paszke et al. 2019) and train it on 4 GeForce GTX 1080Ti GPUs. As metrics, we use Personal Hit Ratio@K (PHR@K), Normalized Discounted Cumulative Gain@K (NDCG@K), and Recall@K to evaluate the performance of temporal set prediction.

Comparison

To demonstrate the effectiveness of CoreRec, we compare it with six competing baselines: TOP, DeepFM (Guo et al. 2017), Sets2Sets (Hu and He 2019), DSNTSP (Sun et al. 2020), DNNTSP (Yu et al. 2020), and ETGNN (Yu et al. 2022b). To ensure a fair comparison, we use the same settings for all methods and train all models from scratch. The experimental results are summarized in Table 1. ETGNN performs significantly better than TOP, DeepFM, Sets2Sets, DSNTSP, and DNNTSP; nonetheless, our CoreRec outperforms ETGNN.
Notably, the JingDong and TaoBao datasets encompass a broader spectrum of items, characterized by a pronounced disparity between low-frequency and high-frequency items.

Dataset | Method | K=50 (PHR / NDCG / Recall) | K=100 (PHR / NDCG / Recall)
TaFeng | w/o s | 72.67 / 18.78 / 28.75 | 77.33 / 20.21 / 33.43
TaFeng | w/o t | 72.08 / 18.57 / 28.92 | 77.23 / 20.01 / 33.71
TaFeng | w/o Greg | 71.78 / 18.59 / 28.45 | 75.94 / 19.96 / 32.94
TaFeng | w/o Ginter | 72.18 / 18.54 / 28.85 | 76.53 / 19.91 / 33.39
TaFeng | w/o switch | 72.06 / 18.67 / 28.59 | 76.43 / 20.13 / 33.12
TaFeng | CoreRec | 73.07 / 19.26 / 29.15 | 77.33 / 20.74 / 34.06
TaoBao | w/o s | 44.44 / 6.81 / 9.91 | 51.36 / 7.46 / 11.95
TaoBao | w/o t | 46.54 / 6.19 / 10.33 | 52.62 / 6.87 / 12.56
TaoBao | w/o Greg | 45.49 / 5.68 / 9.27 | 51.32 / 6.58 / 11.97
TaoBao | w/o Ginter | 47.37 / 6.81 / 10.69 | 51.24 / 7.45 / 12.65
TaoBao | w/o switch | 45.58 / 6.25 / 9.32 | 51.41 / 6.92 / 12.03
TaoBao | CoreRec | 47.58 / 6.90 / 11.04 | 52.62 / 7.47 / 12.67
JingDong | w/o s | 68.22 / 25.52 / 52.70 | 77.08 / 27.94 / 64.26
JingDong | w/o t | 70.31 / 25.39 / 55.64 | 77.08 / 30.84 / 64.18
JingDong | w/o Greg | 66.14 / 25.16 / 48.46 | 75.12 / 27.79 / 62.45
JingDong | w/o Ginter | 70.31 / 25.28 / 53.58 | 80.21 / 27.42 / 64.34
JingDong | w/o switch | 69.22 / 28.81 / 55.59 | 76.02 / 30.15 / 64.02
JingDong | CoreRec | 73.96 / 31.37 / 59.20 | 79.69 / 32.86 / 66.21

Table 3: Ablation study of CoreRec: s denotes the short-term sequence, t the time interval matrices, Greg the regular graph, and Ginter the intervened graph.

Metric | Method | A | B | C | D | E | F
PHR@10 | DNNTSP | 5.07 | 10.41 | 12.77 | 18.24 | 25.09 | 54.55
PHR@10 | CoreRec | 5.12 | 10.62 | 13.12 | 20.13 | 25.19 | 54.58
NDCG@10 | DNNTSP | 2.28 | 3.97 | 3.94 | 6.84 | 9.01 | 29.12
NDCG@10 | CoreRec | 2.41 | 4.05 | 4.10 | 7.28 | 9.11 | 28.89
Recall@10 | DNNTSP | 2.89 | 7.16 | 7.58 | 11.33 | 16.54 | 45.55
Recall@10 | CoreRec | 3.03 | 7.24 | 7.76 | 12.08 | 16.60 | 45.27

Table 4: Performance on different purchase-frequency groups on TaFeng.

This distinction accentuates the susceptibility to spurious correlations. Consequently, our model excels on the JingDong and TaoBao datasets by mitigating such spurious correlations through the switch module, as illustrated in Table 1. Furthermore, the data presented in Table 3 underscores the more discernible impact of incorporating the time interval matrices on the JingDong dataset.

Ablation Study

To investigate the efficacy of the key modules in CoreRec, we consider several model variants and draw the following conclusions. 1) Impact of the Short Sequence. Table 3 shows that short-term sequences help achieve better performance, because the short-term sequence represents the user's recently active interests. 2) Impact of the Time Interval Matrices. We posit that two sets with a small time interval are strongly correlated, while two sets with a large time interval are weakly correlated; Table 3 shows that the time interval matrices are effective. 3) Impact of the Regular Graph. The regular graph not only explores the items' intra-set correlations but also considers the items' purchase frequency and the items' inter-set correlations, which are effective overall. 4) Impact of the Intervened Graph.

Figure 5: (a) The relative improvement of CoreRec in each group compared with DNNTSP (PHR, NDCG, Recall); (b) the proportion of items in each group. Items are grouped by purchase frequency: group 'A' contains items purchased 0-20 times, and, analogously, 'B' (20-50), 'C' (50-100), 'D' (100-200), 'E' (200-500), and 'F' (500+).
The intervened graph connects the items within the same set, which formulates a counterfactual situation. From Table 3, we learn that the items' intra-set correlations are also effective overall. 5) Impact of the Post-trained Switch Module. The switch module determines whether to aggregate information across sets. Compared with undifferentiated aggregation, the switch dynamically selects the more appropriate representation for each item.

Performance Varies with Purchase Frequency. Initial analysis of the data reveals a prevalent long-tail distribution of item frequency across multiple scenarios. This distribution raises concerns about low-frequency suppression, wherein items appearing in a limited number of sets establish fewer connections, resulting in diminished recommendations. Our CoreRec offers a viable solution to this challenge. Insights drawn from our analysis, as depicted in Figure 5 and Table 4, highlight that items with fewer than 500 purchases (encompassing groups A, B, C, D, and E) constitute a substantial majority (99.8%) of all items. For these items, the proposed CoreRec surpasses DNNTSP; conversely, for items exceeding 500 purchases, CoreRec performs comparably to DNNTSP.

Conclusion

In this study, we delve into the detrimental impact of spurious inter-set correlations on model performance. These correlations often emerge from the confluence of high-frequency items and extraneous noise. To address this challenge, we introduce CoreRec, an innovative framework designed to mitigate spurious inter-set correlations. CoreRec draws upon the principles of causal intervention, manifested through two distinct graphs: an intervened graph and a regular graph. By incorporating a purpose-built switch, CoreRec adeptly navigates between post-intervention predictions and original predictions. Extensive experimentation across three benchmark datasets robustly demonstrates the effectiveness of the proposed CoreRec framework.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 62337001, 62376243, 62037001) and the Young Elite Scientists Sponsorship Program by CAST (2021QNRC001).

References

Feng, F.; Huang, W.; He, X.; Xin, X.; Wang, Q.; and Chua, T. 2021. Should Graph Convolution Trust Neighbors? A Simple Causal Inference Method. In SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1208–1218. ACM.
Galles, D.; and Pearl, J. 2013. Testing Identifiability of Causal Effects. CoRR, abs/1302.4948.
Ghamrawi, N.; and McCallum, A. 2005. Collective multi-label classification. In Proceedings of the 14th ACM International Conference on Information and Knowledge Management, 195–200.
Guo, H.; Tang, R.; Ye, Y.; Li, Z.; and He, X. 2017. DeepFM: A Factorization-Machine based Neural Network for CTR Prediction. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI 2017), 1725–1731.
Hu, H.; and He, X. 2019. Sets2Sets: Learning from sequential sets with neural networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1491–1499.
Hu, H.; He, X.; Gao, J.; and Zhang, Z. 2020.
Modeling Personalized Item Frequency Information for Next-basket Recommendation. In Huang, J.; Chang, Y.; Cheng, X.; Kamps, J.; Murdock, V.; Wen, J.; and Liu, Y., eds., Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, 1071–1080. ACM. Joachims, T.; Swaminathan, A.; and Schnabel, T. 2017. Unbiased learning-to-rank with biased feedback. In Proceedings of the tenth ACM international conference on web search and data mining, 781–789. Jr., H. E. K. 2005. Judea Pearl, Causality, Cambridge University Press (2000). Artif. Intell., 169(2): 174–179. Jung, S.; Park, Y.; Jeong, J.; Kim, K.; Kim, H.; Kim, M.; and Kwak, H. 2021. Global-Local Item Embedding for Temporal Set Prediction. In Pamp´ın, H. J. C.; Larson, M. A.; Willemsen, M. C.; Konstan, J. A.; McAuley, J. J.; GarciaGathright, J.; Huurnink, B.; and Oldridge, E., eds., RecSys ’21: Fifteenth ACM Conference on Recommender Systems, Amsterdam, The Netherlands, 27 September 2021 - 1 October 2021, 674–679. ACM. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Li, M.; Jullien, S.; Ariannezhad, M.; and de Rijke, M. 2023. A next basket recommendation reality check. ACM Transactions on Information Systems, 41(4): 1–29. Mueller, S.; Li, A.; and Pearl, J. 2021. Causes of Effects: Learning individual responses from population data. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32: 8026–8037. Pearl, J. 2009. Causality. Cambridge university press. Pearl, J. 2012. The Do-Calculus Revisited. Qin, Y.; Wang, P.; and Li, C. 2021. The World is Binary: Contrastive Learning for Denoising Next Basket Recommendation. In Diaz, F.; Shah, C.; Suel, T.; Castells, P.; Jones, R.; and Sakai, T., eds., SIGIR ’21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, 859–868. ACM. Rendle, S.; Freudenthaler, C.; and Schmidt-Thieme, L. 2010. Factorizing personalized markov chains for nextbasket recommendation. In Proceedings of the 19th international conference on World wide web, 811–820. Saito, Y.; Yaginuma, S.; Nishino, Y.; Sakata, H.; and Nakata, K. 2020. Unbiased recommender learning from missingnot-at-random implicit feedback. In Proceedings of the 13th International Conference on Web Search and Data Mining, 501–509. Shengyu, Z.; Yunze, T.; Kun, K.; Fuli, F.; Jiezhong, Q.; Jin, Y.; Zhou, Z.; Hongxia, Y.; Zhongfei, Z.; and Fei, W. 2023. Stable Prediction on Graphs with Agnostic Distribution Shifts. In The KDD’23 Workshop on Causal Discovery, Prediction and Decision, 49–74. PMLR. Sun, L.; Bai, Y.; Du, B.; Liu, C.; Xiong, H.; and Lv, W. 2020. Dual Sequential Network for Temporal Sets Prediction. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 1439–1448. Wang, W.; Feng, F.; He, X.; Wang, X.; and Chua, T. 2021a. Deconfounded Recommendation for Alleviating Bias Amplification. In Zhu, F.; Ooi, B. C.; and Miao, C., eds., KDD ’21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, Singapore, August 14-18, 2021, 1717–1725. ACM. Wang, W.; Feng, F.; He, X.; Zhang, H.; and Chua, T. 2021b. 
Wang, W.; Feng, F.; He, X.; Zhang, H.; and Chua, T.-S. 2021c. Clicks can be cheating: Counterfactual recommendation for mitigating clickbait issue. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1288–1297.
Wang, Z.; Wei, W.; Cong, G.; Li, X.-L.; Mao, X.-L.; and Qiu, M. 2020. Global Context Enhanced Graph Neural Networks for Session-based Recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM.
Wei, T.; Feng, F.; Chen, J.; Wu, Z.; Yi, J.; and He, X. 2021. Model-Agnostic Counterfactual Reasoning for Eliminating Popularity Bias in Recommender System. In Zhu, F.; Ooi, B. C.; and Miao, C., eds., KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, Singapore, August 14-18, 2021, 1791–1800. ACM.
Yu, L.; Liu, Z.; Zhu, T.; Sun, L.; Du, B.; and Lv, W. 2022a. Modelling Evolutionary and Stationary User Preferences for Temporal Sets Prediction. arXiv:2204.05490.
Yu, L.; Liu, Z.; Zhu, T.; Sun, L.; Du, B.; and Lv, W. 2023. Predicting temporal sets with simplified fully connected networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 4835–4844.
Yu, L.; Sun, L.; Du, B.; Liu, C.; Xiong, H.; and Lv, W. 2020. Predicting Temporal Sets with Deep Neural Networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1083–1091.
Yu, L.; Wu, G.; Sun, L.; Du, B.; and Lv, W. 2022b. Element-guided Temporal Graph Representation Learning for Temporal Sets Prediction. In Laforest, F.; Troncy, R.; Simperl, E.; Agarwal, D.; Gionis, A.; Herman, I.; and Médini, L., eds., WWW '22: The ACM Web Conference 2022, Virtual Event, Lyon, France, April 25 - 29, 2022, 1902–1913. ACM.
Zhang, M.-L.; and Zhou, Z.-H. 2006. Multi-label neural networks with applications to functional genomics and text categorization. IEEE Transactions on Knowledge and Data Engineering, 18(10): 1338–1351.
Zhang, M.-L.; and Zhou, Z.-H. 2013. A review on multi-label learning algorithms. IEEE Transactions on Knowledge and Data Engineering, 26(8): 1819–1837.
Zhang, S.; Yao, D.; Zhao, Z.; Chua, T.; and Wu, F. 2021a. CauseRec: Counterfactual User Sequence Synthesis for Sequential Recommendation. In Diaz, F.; Shah, C.; Suel, T.; Castells, P.; Jones, R.; and Sakai, T., eds., SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, 367–377. ACM.
Zhang, S.; Yao, D.; Zhao, Z.; Chua, T.-S.; and Wu, F. 2021b. CauseRec: Counterfactual user sequence synthesis for sequential recommendation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 367–377.
Zhang, Y.; Feng, F.; He, X.; Wei, T.; Song, C.; Ling, G.; and Zhang, Y. 2021c. Causal intervention for leveraging popularity bias in recommendation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 11–20.
Zhu, H.; Li, X.; Zhang, P.; Li, G.; He, J.; Li, H.; and Gai, K. 2018. Learning tree-based deep model for recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1079–1088.
2024
963
18,810
Ada-Retrieval: An Adaptive Multi-Round Retrieval Paradigm for Sequential Recommendations
Lei Li1, Jianxun Lian2, Xiao Zhou1*, Xing Xie2
1Gaoling School of Artificial Intelligence, Renmin University of China
2Microsoft Research Asia
[email protected], [email protected], [email protected], [email protected]

Abstract
Retrieval models aim at selecting a small set of item candidates that match the preference of a given user. They play a vital role in large-scale recommender systems, since subsequent models such as rankers depend heavily on the quality of the item candidates. However, most existing retrieval models employ a single-round inference paradigm, which may not adequately capture the dynamic nature of user preferences and may get stuck in one area of the item space. In this paper, we propose Ada-Retrieval, an adaptive multi-round retrieval paradigm for recommender systems that iteratively refines user representations to better capture potential candidates in the full item space. Ada-Retrieval comprises two key modules: the item representation adapter and the user representation adapter, designed to inject context information into items' and users' representations. The framework maintains a model-agnostic design, allowing seamless integration with various backbone models such as RNNs or Transformers. We perform experiments on three widely used public datasets, incorporating five powerful sequential recommenders as backbone models. Our results demonstrate that Ada-Retrieval significantly enhances the performance of various base models, with consistent improvements observed across different datasets. Our code and data are publicly available at: https://github.com/ll0ruc/AdaRetrieval.

Introduction
Recommender systems have become a crucial element in a wide range of online applications, encompassing e-commerce, social media, and entertainment platforms (Covington, Adams, and Sargin 2016; Ying et al. 2018). By providing personalized recommendations tailored to users' historical behavior and preferences, these systems enhance user experience and engagement. Among the diverse types of recommender systems, sequential recommender systems (Rendle 2010; Tang and Wang 2018) have attracted considerable interest due to their capacity to effectively capture temporal dynamics in user history and accurately forecast near-future user behaviors. In this domain, various backbone models have been proposed, including recurrent neural networks (RNNs) (Hidasi et al. 2015), convolutional neural networks (CNNs) (Tang and Wang 2018), transformers (Kang and McAuley 2018), and graph neural networks (GNNs) (Wu et al. 2019), each contributing to the ongoing advancement of sequential recommendation techniques.

*Corresponding Author.

Figure 1: Illustrations of (a) the conventional single-round retrieval paradigm, and (b) our proposed adaptive multi-round retrieval paradigm, in which the final retrieval result is the union of each individual retrieval K_i.

This paper does not aim to propose a stronger backbone model. Instead, we observe that most existing models employ a single-round inference paradigm to retrieve the top-k item candidates (He and McAuley 2016; Tang and Wang 2018; Sun et al. 2019).
Specifically, given a user's profile, such as a behavior history, the model runs its forward process and generates a user representation, which is then used as a query to match the top-k most similar items in the database. However, this single-round inference paradigm may not adequately capture the dynamic nature of user preferences or adapt to the ever-changing diversity of the item space. As illustrated in Figure 1(a), once the model's forward pass is completed, the user representation remains fixed, resulting in a top-k search area in the item space that is confined to a static region. If the initial user representation is inaccurate or the user's future preferences are diverse, this paradigm may fail to deliver satisfactory performance.

We argue that a multi-round inference paradigm offers a more effective retrieval approach for recommender systems. As illustrated in Figure 1(b), the objective of retrieving k items is divided into n batches, with each batch representing a round of retrieving k/n items. The forward passes of the user representation in different rounds are conducted independently. If the previously retrieved items do not adequately match the user's preferences, the user representation will be adjusted in the next round, allowing the model to search for target items in a different region of the item space. Taking the search-engine scenario as an analogy (Zhang and Nasraoui 2006), users may rewrite their queries if the currently retrieved information does not accurately address their questions. In this regard, previous rounds' retrieval can serve as feedback information (Lewandowski 2008), which helps refine the user representation if necessary. Thus, this multi-round paradigm presents a significant advantage, as it prevents user representations from being confined to a static area, enabling more dynamic and diverse recommendations.

As an embodiment of the new paradigm, we present Ada-Retrieval, an adaptive multi-round retrieval approach for recommender systems. Fundamentally, Ada-Retrieval alters the traditional training and inference process while maintaining a model-agnostic design, allowing seamless integration with various backbone models such as RNNs or Transformers. Ada-Retrieval comprises two key modules: the item representation adapter and the user representation adapter. Both modules aim to inject context information, which refers to the previous user representations and retrieved items up to the current retrieval round, into items' and users' representations. The item representation adapter consists of a learnable filter (LFT) layer and a context-aware attention (CAT) layer, designed to adjust item embeddings in the user history according to the retrieval context. This enables the user model to optimize for the next round of retrieval by considering feedback from the item candidate space. The user representation adapter, on the other hand, is composed of a Gated Recurrent Unit (GRU) layer and a Multi-Layer Perceptron (MLP) layer. The GRU layer encodes all user representations generated in previous rounds as user context, while the MLP layer fuses this context with the current user representation to produce an adapted one.
By incorporating these components, Ada-Retrieval can integrate contextual information from the retrieval process into traditional sequential recommendation models, generating progressively refined user representations for item retrieval while remaining lightweight and model-agnostic. We perform experiments on three widely used public datasets, incorporating five powerful sequential recommenders as backbone models. Comprehensive results demonstrate that Ada-Retrieval can significantly enhance the performance of various base models, with consistent improvements observed across different datasets. For instance, on the Beauty dataset, Ada-Retrieval boosts SASRec's performance by 8.55% in terms of NDCG@50 and improves the best base model, FMLPRec, by 5.66%. The key contributions of this paper are summarized as follows:
• We propose Ada-Retrieval, a novel adaptive multi-round retrieval framework for sequential recommendations. Unlike traditional single-round retrieval, Ada-Retrieval iteratively refines user representations to better capture potential candidates across the entire item space.
• We design several key components, including LFT and CAT for the item representation adapter, and GRU and MLP for the user representation adapter. These components enable the integration of contextual information in a model-agnostic manner.
• We conduct extensive experiments on real-world datasets to demonstrate the effectiveness of Ada-Retrieval, showing significant improvements over various sequential recommender systems.

Related Work
Deep Retrieval
In practical recommender systems, the retrieval stage (candidate generation) aims to efficiently retrieve a small subset of items, typically in the hundreds, from large corpora (Xie et al. 2020). With the rise of deep learning, there has been a surge in efforts to construct sophisticated retrieval models for recommender systems. Embedding-based methods often adopt a two-tower architecture, as seen in FM (Rendle 2010), YoutubeDNN (Covington, Adams, and Sargin 2016), and AFT (Hao et al. 2021), dividing the construction of user and item representations into two distinct branches. Innovations like TDM (Zhu et al. 2018) and JTM (Zhu et al. 2019) offer fresh perspectives on leveraging user-item dynamics through tree-based structures. Additionally, graph-based matching models (Xie et al. 2021) have been proposed to learn user/item representations. Departing from the single-round inference paradigm of these methods, our model introduces a multi-round inference paradigm, providing a more effective retrieval approach for recommender systems.

Sequential Recommendation
Sequential recommendation, which predicts the future items a user will interact with based on correlations in item transitions within user activity sequences, has evolved from foundational Markov chain models (He and McAuley 2016) to contemporary deep learning technologies. Caser (Tang and Wang 2018) employed CNNs to analyze sequences of item embeddings, while GRU4Rec (Hidasi et al. 2015) used Gated Recurrent Units (GRUs) for session-driven recommendations. More recently, SASRec (Kang and McAuley 2018) incorporated self-attention mechanisms to selectively aggregate relevant items, refining user modeling. Inspired by the Cloze task, Bert4Rec (Sun et al. 2019) predicted masked items by jointly utilizing preceding and succeeding contexts. Training frameworks like CL4SRec (Xie et al. 2022) integrated contrastive approaches for diverse perspectives through data enhancement.
Despite the success of these models, a challenge remains in generating diverse user feature representations. In addressing this, our model iteratively refines user representations, enhancing the capture of dynamic user preferences through inserted contextual information.

Preliminaries
Problem Formulation
Let us assume a set of users $U = \{u_1, u_2, \ldots, u_{|U|}\}$ and items $I = \{i_1, i_2, \ldots, i_{|I|}\}$, with $u \in U$ representing a user and $i \in I$ representing an item. The user behavior can be denoted as $S = \{s_1, s_2, \ldots, s_{|U|}\}$. In sequential recommendation, a user's behavior sequence is typically ordered chronologically: $s_u = \{i_1, i_2, \ldots, i_n\}$, where $s_u \in S$ and $u \in U$. The objective of sequential recommendation is to predict the next item the user is likely to interact with, denoted as $p(i_{n+1} \mid i_{1:n})$.

Figure 2: An overview of the traditional retrieval model (a) and our proposed Ada-Retrieval paradigm (b), which consists of two key parts: the item representation adapter (c) and the user representation adapter (d). We use colored elements to indicate the new components in Ada-Retrieval.

Base Sequential Model
A common architecture for a sequential recommender system typically consists of three key components: an embedding lookup layer $\mathrm{EMB}(\cdot)$, a sequential encoding layer $\mathrm{SEL}(\cdot)$, and a prediction layer $\mathrm{PRED}(\cdot)$, as illustrated in Figure 2(a). When provided with a user behavior sequence $s_u = \{i_1, i_2, \ldots, i_n\}$, the sequence initially passes through the embedding lookup layer $\mathrm{EMB}(\cdot)$, resulting in a sequence of corresponding item embeddings:

$E_u = \mathrm{EMB}(s_u) = \{e_1, e_2, \ldots, e_n\}$  (1)

Subsequently, the embeddings of this item sequence are processed through a sequential user encoder, represented as $\mathrm{SEL}(\cdot)$, which can be implemented using appropriate backbones such as RNNs or Transformers:

$F_u = \mathrm{SEL}(E_u)$  (2)

Here, $F_u$ represents the embedding vector serving as the user representation. Then, $F_u$ is combined with a target item vector $E_i$ as the input to the prediction layer $\mathrm{PRED}(\cdot)$:

$\hat{y}_{ui} = \mathrm{PRED}(F_u, E_i)$  (3)

The prediction layer is commonly implemented using either a dot product or cosine similarity, particularly for retrieval purposes.

Methodology
Our model, Ada-Retrieval, introduces an adaptive multi-round retrieval approach for recommender systems. The overall framework is depicted in Figure 2(b). At the core of the adaptive retrieval paradigm are two meticulously crafted adaptation modules: the Item Representation Adapter (IRA) and the User Representation Adapter (URA). These modules seamlessly integrate contextual information into user preference modeling.
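To make the paradigm concrete before detailing each module, the following is a minimal PyTorch sketch of the multi-round inference loop, assuming a base model that exposes separate embedding and encoding steps. All function and variable names here are illustrative assumptions, not the released API.

import torch

def ada_retrieval_inference(seq, model, ira, ura, item_emb, T=5, k=50):
    # Hedged sketch: T rounds, each retrieving k/T items; context handling
    # and round sizes follow our reading of the paper, not its code.
    item_ctx, user_ctx, retrieved = [], [], []
    for t in range(T):
        E_u = model.embed(seq)                           # Eq. (1)
        if item_ctx:                                     # Eq. (4): adapt item reps
            ctx_emb = item_emb[torch.cat(item_ctx)].unsqueeze(0)
            E_u = ira(E_u, ctx_emb)
        F_u = model.encode(E_u)                          # Eq. (2)
        if user_ctx:                                     # Eq. (5): adapt user rep
            F_u = ura(F_u, torch.stack(user_ctx, dim=1))
        scores = F_u @ item_emb.T                        # dot-product scoring, Eq. (3)
        top = scores.topk(k // T).indices.flatten()      # this round's batch of items
        retrieved.append(top)
        item_ctx.append(top)                             # grow the item context pool
        user_ctx.append(F_u)                             # stack of user reps (Eq. (15))
    return torch.cat(retrieved)                          # union over the T rounds

The per-round results are concatenated, so the final top-k set is the union of T smaller retrievals, each issued from a differently adapted user representation.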
In contrast to traditional sequential recommendation models, our modifications primarily focus on the input and output of the sequential encoding layer $\mathrm{SEL}(\cdot)$:

$E_u^{(a)} = \mathrm{IRA}(E_u; E_u^c)$  (4)

$F_u^{(a)} = \mathrm{URA}(F_u; F_u^c)$  (5)

where $E_u^c$ and $F_u^c$ are the feature representations of the item context and user context, and $E_u^{(a)}$ and $F_u^{(a)}$ represent the adjusted feature representations of the items and user. In the next section, we introduce the details of each proposed component.

Item Representation Adapter
The item representation adapter is designed to recalibrate item embeddings within users' historical data based on the prevailing retrieval item context. Further details are illustrated in Figure 2(c).

Learnable Filter Layer. Considering potential noise in the item context information from previous rounds, we use a single learnable filter block to refine item features efficiently. This approach draws inspiration from the filter-enhanced MLP (Zhou et al. 2022) used in recommendation systems, which typically employs multiple stacked blocks to enhance item feature representations by removing noise. Upon receiving the item context information for the current round, denoted as $C_u^t = \{i_1, i_2, \cdots, i_{|C_u^t|}\}$, where each $i$ is an item retrieved in a previous round, we first pass it through an encoder layer to extract its features, $E_u^c = \mathrm{EMB}(C_u^t)$. Subsequently, we apply the Fast Fourier Transform (FFT), denoted as $\mathcal{F}(\cdot)$, along the item dimension. This operation transforms the item context representation matrix $E_u^c$ into the frequency domain:

$X_u^c = \mathcal{F}(E_u^c)$  (6)

Note that $X_u^c$ is a complex tensor representing the spectrum of $E_u^c$. We then multiply it with a learnable filter, denoted as $W$:

$\widetilde{X}_u^c = W \odot X_u^c$  (7)

where $\odot$ is element-wise multiplication. Finally, we employ the inverse FFT to revert the modulated spectrum $\widetilde{X}_u^c$ back into the time domain, subsequently updating the sequence representations:

$\widetilde{E}_u^c = \mathcal{F}^{-1}(\widetilde{X}_u^c)$  (8)

where $\mathcal{F}^{-1}(\cdot)$ denotes the inverse 1D FFT, which converts the complex tensor back into a real-valued tensor. To avoid overfitting, a dropout layer (Srivastava et al. 2014), a residual connection (He et al. 2016), and layer normalization (Ba, Kiros, and Hinton 2016) are applied to obtain the output $H_u^c$:

$H_u^c = \mathrm{LayerNorm}(E_u^c + \mathrm{Dropout}(\widetilde{E}_u^c))$  (9)

Context-aware Attention Layer. Attention mechanisms have proven effective in recommender systems (Kang and McAuley 2018; Tan et al. 2021a). They empower the model to selectively focus on different segments of the sequence based on their relevance to the immediate prediction task. The following is the standard dot-product attention:

$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d}}\right)V$  (10)

Here, $Q$ denotes queries, $K$ stands for keys, and $V$ represents values. The embedding size is denoted by $d$, and the scale factor $\sqrt{d}$ is introduced to prevent excessively large values in the inner product. In the typical self-attention mechanism, $Q$, $K$, and $V$ are derived from the same input vector but are produced using distinct weight matrices. In our scenario, however, $Q$ corresponds to the item sequence features, while $K$ embodies the item context features. Therefore, the context-aware attention mechanism can be articulated as:

$\widetilde{H}_u^c = \mathrm{Attention}(E_u, H_u^c, H_u^c)$  (11)

Certain pieces of item context information wield more influence than others in determining the subsequent item with which the user might engage; this attention mechanism enables the model to autonomously pinpoint these pivotal items, bestowing upon them greater weights.
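Taken together, Eqs. (6)–(11) amount to a compact module. The following is a hedged PyTorch rendering; the class name, the use of the real FFT, and the filter initialization scale are our assumptions rather than details of the released code.

import torch
import torch.nn as nn

class ItemRepresentationAdapter(nn.Module):
    # Learnable frequency filter (Eqs. 6-9) followed by
    # context-aware attention (Eqs. 10-11) and the residual of Eq. (12) below.
    def __init__(self, max_ctx_len, dim, dropout=0.1):
        super().__init__()
        # Complex filter W stored as (real, imag) pairs over rFFT bins.
        self.filter = nn.Parameter(0.02 * torch.randn(max_ctx_len // 2 + 1, dim, 2))
        self.drop = nn.Dropout(dropout)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.scale = dim ** 0.5

    def forward(self, E_u, E_ctx):
        # E_u: (B, n, d) history item embeddings; E_ctx: (B, m, d) item context.
        X = torch.fft.rfft(E_ctx, dim=1)                           # Eq. (6)
        W = torch.view_as_complex(self.filter)[: X.size(1)]
        E_tilde = torch.fft.irfft(W * X, n=E_ctx.size(1), dim=1)   # Eqs. (7)-(8)
        H = self.norm1(E_ctx + self.drop(E_tilde))                 # Eq. (9)
        # Queries are history items; keys/values are the filtered context.
        attn = torch.softmax(E_u @ H.transpose(1, 2) / self.scale, dim=-1)
        H_tilde = attn @ H                                         # Eqs. (10)-(11)
        return self.norm2(E_u + self.drop(H_tilde))                # Eq. (12), given next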
Following this, we incorporate layer normalization and dropout operations to alleviate gradient vanishing and unstable training:

$E_u^{(a)} = \mathrm{LayerNorm}(E_u + \mathrm{Dropout}(\widetilde{H}_u^c))$  (12)

A Case with a Sequential User Model. After acquiring the adjusted item sequence feature representation, it seamlessly integrates into a conventional sequential recommendation model. Taking SASRec as an illustration, the user feature representation is denoted as $F_u = \mathrm{TRFM}(E_u)$, where $\mathrm{TRFM}(\cdot)$ signifies the Transformer architecture within SASRec. In the context of Ada-Retrieval, the user feature representation is expressed as $F_u = \mathrm{TRFM}(E_u^{(a)})$, representing a modification of the input. It is noteworthy that Ada-Retrieval, with its model-agnostic nature, refrains from altering the intrinsic parameters of SASRec. Instead, it dynamically adjusts the current item sequence input features based on the item context information.

User Representation Adapter
Utilizing the available user context information, we design the user representation adapter to produce adaptive user representations, as illustrated in Figure 2(d).

Gated Recurrent Unit Layer. Recurrent Neural Networks (RNNs) were developed to model variable-length sequence data (Sherstinsky 2020) and have shown promising advancements in recommendation systems (Li et al. 2017; Guo et al. 2020). Their efficacy stems from their capacity to capture a user's sequential behavior. Gated Recurrent Units (GRUs) (Cho et al. 2014) represent a more sophisticated variant of RNNs designed to address the vanishing gradient challenge. Essentially, user features derived from earlier rounds exert influence on the current one, with this influence diminishing as the round distance grows. Therefore, we leverage the GRU module to encode the user representations accumulated from previous rounds:

$\widetilde{F}_u^c = \mathrm{GRU}(F_u^c)$  (13)

$F_u^c$ serves as the feature representation encapsulating user context, consisting of the adjusted user feature representations generated in previous rounds. Keeping the context feature extractor simple, we use the GRU's final hidden state as the user context representation $\widetilde{F}_u^c$.

Multi-Layer Perceptron Layer. After deriving the user's context feature representation $\widetilde{F}_u^c$, we concatenate it with the currently generated user feature representation $F_u$. This combined representation then undergoes processing through a two-layer Multi-Layer Perceptron (MLP) with a ReLU activation function. The process is defined as follows:

$F_u^{(a)} = W_2\, \mathrm{ReLU}(W_1[\widetilde{F}_u^c; F_u] + b_1) + b_2$  (14)

where $W_1$, $b_1$, $W_2$, $b_2$ are trainable parameters. Subsequently, we incorporate skip-connection and layer normalization operations, as detailed in Eq. (12), to produce the final user representation $F_u^{(a)}$.
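Analogously, the user representation adapter of Eqs. (13)–(14) can be sketched as a small module; again, the names and the exact residual placement are illustrative assumptions rather than the released implementation.

import torch
import torch.nn as nn

class UserRepresentationAdapter(nn.Module):
    # GRU over previous rounds' user representations, then an MLP fusion.
    def __init__(self, dim, dropout=0.1):
        super().__init__()
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.drop = nn.Dropout(dropout)
        self.norm = nn.LayerNorm(dim)

    def forward(self, F_u, F_ctx):
        # F_u: (B, d) current user rep; F_ctx: (B, t-1, d) stacked earlier reps.
        _, h_n = self.gru(F_ctx)                              # Eq. (13)
        F_tilde = h_n.squeeze(0)                              # final hidden state, (B, d)
        fused = self.mlp(torch.cat([F_tilde, F_u], dim=-1))   # Eq. (14)
        return self.norm(F_u + self.drop(fused))              # skip connection + LayerNorm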
Context Information Generator
To collect the user context information generated in each round, we utilize a stacking methodology to assemble an array of context vectors:

$F_{u;t}^c = \mathrm{STACK}(\{F_{u;1}^{(a)}, F_{u;2}^{(a)}, \cdots, F_{u;t-1}^{(a)}\})$  (15)

Concurrently, from the entire pool of candidate items, we retrieve the top-k items whose scores align best with the user representation of the current round, $F_u^{(a)}$. The corresponding item IDs are then added to the item context pool:

$C_u^t = C_u^{t-1} + \operatorname{top-}k\ \operatorname{argmax}_{i \in I}\ \mathrm{Sim}(F_{u;t-1}^{(a)}; E_i)$  (16)

where $\mathrm{Sim}$ is a function measuring the feature similarity between users and items; here, we use the dot product. It is crucial to highlight that the item context determined through this similarity measure is subsequently fed into the embedding lookup layer to retrieve the corresponding feature representations.

Model Prediction and Optimization
After $T$ iterative rounds, Ada-Retrieval yields $T$ user representations $\{F_{u;1}^{(a)}, F_{u;2}^{(a)}, \cdots, F_{u;T}^{(a)}\}$. We then multiply each by the item embedding matrix $E$ to predict the relevance of a candidate item:

$\hat{y}_{ui;t} = E_i^{T} F_{u;t}^{(a)}$  (17)

Each of these representations is employed to retrieve $k/T$ items, which are then sequentially concatenated to form the final set of top-k items. We expect the actual item $i$ chosen by user $u$ to receive a higher score $\hat{y}_{ui}$. Hence, we utilize the cross-entropy loss to optimize the model parameters. The objective function for the $t$-th round is formulated as:

$\mathcal{L}_t = -\sum_{u \in U, i \in I} \left[ y_{ui;t} \log(\sigma(\hat{y}_{ui;t})) + (1 - y_{ui;t}) \log(1 - \sigma(\hat{y}_{ui;t})) \right]$  (18)

To emphasize early and accurate predictions of the positive item, we introduce a decay factor $\lambda$ to weigh each round's contribution to the overall training loss:

$\mathcal{L} = \sum_{t=1}^{T} \lambda^{t} \mathcal{L}_t$  (19)

To optimize training efficiency, we employ a two-phase training approach. Initially, we pre-train a foundational sequential model. Subsequently, in the second phase, we fine-tune Ada-Retrieval, utilizing the pre-trained model as a starting point. In this phase, we concurrently update both the parameters $\Theta$ of Ada-Retrieval and the parameters $\Phi$ of the base model.
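Read concretely, Eqs. (18)–(19) amount to a decay-weighted sum of per-round binary cross-entropies, so that early rounds dominate the objective. A minimal sketch follows, assuming the round weight is the geometric factor λ^t and that per-round logits and labels are supplied; the function name is ours.

import torch.nn.functional as F

def multi_round_loss(logits_per_round, labels_per_round, lam=0.5):
    # Eq. (18): per-round binary cross-entropy on sigmoid scores;
    # Eq. (19): rounds weighted by the decay factor lam ** t.
    total = 0.0
    for t, (logits, labels) in enumerate(
            zip(logits_per_round, labels_per_round), start=1):
        total = total + (lam ** t) * F.binary_cross_entropy_with_logits(logits, labels)
    return total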
Experiments
Experimental Settings
Datasets. To validate our proposed method across diverse data types, we assess the model on three publicly available benchmark datasets. Beauty and Sports are subsets of the Amazon Product dataset (McAuley et al. 2015), capturing user reviews of Amazon.com products. The Yelp dataset (https://www.yelp.com/dataset) is a sizable collection of extended item sequences derived from business recommendations, using only transaction records after January 1st, 2019. For uniformity, we group interaction records by user or session and sequence them chronologically based on timestamps. Following (Sun et al. 2019; Li, Wang, and McAuley 2020), we filter out users/items with fewer than 5 interactions. Detailed statistics for each dataset are summarized in Table 1.

Dataset       Beauty    Sports    Yelp
# Sequences   22,363    25,598    30,431
# Items       12,101    18,357    20,033
# Actions     198,502   296,337   316,354
Sparsity      99.93%    99.95%    99.95%
Table 1: Statistics of datasets after preprocessing.

Evaluation Settings. To facilitate comprehensive model evaluation, we employ the leave-one-out strategy (Kang and McAuley 2018; Zhou et al. 2020) to partition each user's item sequence into training, validation, and test sets. Diverging from conventional sampling practices, our approach considers all items not previously engaged with by the user as candidate items (Krichene and Rendle 2020). The evaluation metrics adopted for assessing model performance are the top-k Hit Ratio (HR@k) and the top-k Normalized Discounted Cumulative Gain (NDCG@k).

Implementation Details. When comparing with existing models, we adopt the optimal parameter settings from their original papers and conduct a meticulous grid search around these configurations for the baseline models. Ada-Retrieval is implemented using Python 3.8 and PyTorch 1.12.1, executed on NVIDIA V100 GPUs with 32 GB of memory. Training uses the Adam optimizer with a learning rate of 0.001 and a batch size of 1024. Across all datasets, we set the maximum sequence length to 50, the embedding dimension to 64, and the maximum number of training epochs to 200. For Ada-Retrieval, we varied the hyperparameters T and λ within the ranges [3, 8] and [0.1, 0.9], respectively, with step sizes of 1 and 0.2. Each experiment was conducted five times, and results are reported as averages with standard deviations. We also employed an early-stopping strategy, halting training if HR@50 performance on the validation set continuously decreased over 10 consecutive epochs.

Main Results with Various Backbone Models
Backbones. As Ada-Retrieval is model-agnostic, we evaluate its performance with representative sequential recommenders, including GRU4Rec (Hidasi et al. 2015), SASRec (Kang and McAuley 2018), NextItNet (Yuan et al. 2019), SRGNN (Wu et al. 2019), and FMLPRec (Zhou et al. 2022). These models employ diverse architectures, encompassing RNNs, CNNs, GNNs, and MLPs.

Results. We train the base sequential models and their corresponding Ada-Retrieval counterparts on the three datasets. The top-50 recommendation results are shown in Table 2.

Dataset  Model    GRU4Rec         SASRec          NextItNet       SRGNN           FMLPRec
                  HR      NDCG    HR      NDCG    HR      NDCG    HR      NDCG    HR      NDCG
Beauty   Base     13.126  4.574   17.110  6.506   12.539  4.064   12.411  4.242   17.935  6.876
         Ada.     14.175  4.915   17.741  7.062   12.948  4.288   13.274  4.375   18.531  7.265
         Improv.  7.99%   7.46%   3.69%   8.55%   3.27%   5.50%   6.96%   3.12%   3.32%   5.66%
Sports   Base     7.644   2.447   10.924  4.046   7.939   2.563   7.704   2.504   11.607  4.238
         Ada.     8.226   2.683   11.352  4.234   8.425   2.663   8.551   2.731   11.903  4.444
         Improv.  7.61%   9.67%   3.92%   4.65%   6.12%   3.90%   11.00%  9.07%   2.55%   4.85%
Yelp     Base     9.252   2.765   12.062  3.770   9.828   2.971   10.501  3.141   13.013  4.029
         Ada.     10.415  2.985   12.637  3.852   11.224  3.316   11.688  3.454   13.430  4.157
         Improv.  12.57%  7.98%   4.77%   2.19%   14.20%  11.60%  11.31%  9.96%   3.20%   3.18%
Table 2: Top-50 performance comparison of five backbone models and Ada-Retrieval (Ada.) on three datasets. All metrics are percentages with '%' omitted.

Here, we observe that Ada-Retrieval consistently and significantly outperforms all base sequential models across all datasets and metrics. Notably, Ada-Retrieval (GRU4Rec) exhibits an average improvement of 8.37% in terms of NDCG@50 over GRU4Rec on the three datasets, while Ada-Retrieval (SASRec) demonstrates an improvement of 5.57% over SASRec. Whether the backbone is RNN-based (GRU4Rec), Transformer-based (SASRec), CNN-based (NextItNet), GNN-based (SRGNN), or MLP-based (FMLPRec), Ada-Retrieval integrates seamlessly and consistently enhances performance. Notably, Ada-Retrieval shares the same embedding layer, user model architecture, and prediction layer with its respective base model. This shared architecture underscores its effectiveness in adapting a base model according to varying contextual information for the specific task. In essence, Ada-Retrieval embodies a plug-and-play property, allowing the augmentation of any given base model with adaptation modules while preserving the original architecture's integrity.
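As a note on how the numbers in these tables are computed: under the leave-one-out protocol described above, each test user has exactly one held-out ground-truth item, so both HR@k and NDCG@k reduce to simple functions of that item's rank among all candidates (for a single relevant item, the ideal DCG is 1). The small helper below is our own illustration, not the paper's evaluation code.

import numpy as np

def hr_ndcg_at_k(ranks, k):
    # ranks: 1-based rank of each user's single held-out item.
    ranks = np.asarray(ranks)
    hits = ranks <= k
    hr = hits.mean()                                              # HR@k
    ndcg = np.where(hits, 1.0 / np.log2(ranks + 1), 0.0).mean()   # NDCG@k
    return hr, ndcg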
Comparison with Multi-Interest Models
Baselines. Our method employs multi-round adaptive learning to progressively generate multiple user representations. Although it fundamentally differs from another research direction, multi-interest-aware user modeling, the two approaches share several similarities, such as producing multiple user representations during inference. In this regard, we compare Ada-Retrieval with several multi-interest retrieval models as baselines: DNN (Covington, Adams, and Sargin 2016) (also known as YouTube DNN), MIND (Li et al. 2019), ComiRec (Cen et al. 2020), and SINE (Tan et al. 2021b).

Results. The overall results are presented in Table 3.

Methods   Beauty          Sports          Yelp
          HR      NDCG    HR      NDCG    HR      NDCG
DNN       13.705  4.726   8.798   2.890   11.241  3.317
MIND      14.045  5.002   8.888   2.918   11.320  3.443
ComiRec   14.394  5.232   9.270   3.250   11.479  3.523
SINE      13.191  4.325   9.087   2.978   12.091  3.724
Ada.      17.741  7.062   11.352  4.234   12.637  3.852
Table 3: Top-50 performance comparison of several baselines and Ada-Retrieval (SASRec) on three datasets.

It is evident that approaches utilizing multiple user representation vectors (such as MIND, ComiRec, and Ada-Retrieval) exhibit superior performance compared to those employing a single representation (DNN). This finding highlights the effectiveness of multiple user representation vectors in capturing diverse user interests and, consequently, elevating recommendation accuracy. Overall, Ada-Retrieval consistently outperforms the other models across all metrics on the three datasets, underscoring its effectiveness. This success can be attributed to two key factors: 1) Ada-Retrieval's multi-round retrieval paradigm, which refines user representations iteratively based on contextual information, enabling more precise identification of potential candidates across the entire item space; and 2) unlike multi-interest methods that rely on heuristic rules to determine the number of interests, Ada-Retrieval autonomously discerns the depth and range of users' preferences.

Ablation Study
Our proposed Ada-Retrieval includes a learnable filter layer (LFT) and a context-aware attention layer (CAT) within the item representation adapter (IRA), and a GRU layer and an MLP layer in the user representation adapter (URA). We conduct an ablation study comparing Ada-Retrieval (SASRec) with SASRec on three datasets to analyze the contribution of each part. Additionally, we explore different training strategies, such as training without pre-training (w/o PT). The results are reported in Table 4.

Methods   Beauty          Sports          Yelp
          HR      NDCG    HR      NDCG    HR      NDCG
Base      17.110  6.506   10.924  4.046   12.062  3.770
w/o LFT   17.549  6.945   11.287  4.186   12.474  3.834
w/o CAT   17.536  6.948   11.141  4.182   12.566  3.826
w/o IRA   17.209  6.670   10.980  4.101   12.331  3.815
w/o GRU   17.348  6.845   11.020  4.137   12.464  3.817
w/o MLP   17.227  6.716   10.692  3.963   12.026  3.691
w/o URA   17.308  6.793   10.782  4.017   12.147  3.720
w/o PT    17.200  6.553   10.739  3.969   12.008  3.692
Ada.      17.741  7.062   11.352  4.234   12.637  3.852
Table 4: Results of the ablation study.
Upon omitting the filter layer, there is a discernible drop in performance, suggesting that learnable filters play a pivotal role in mitigating the effects of noisy data within the item context. When we replace the attention layer with the average of the item embeddings in the context, the decline in performance suggests that the attention mechanism allows the model to automatically identify key items by assigning higher weights to them. The most pronounced degradation, in w/o IRA, highlights the vital role of item context data in Ada-Retrieval's user simulation process. In terms of user context, removing either the GRU layer or the MLP module results in a significant performance drop compared to Ada-Retrieval, highlighting the effectiveness of our user representation adapter in integrating context information. Notably, omitting the MLP causes a more pronounced decline in performance than using the model without the URA. This suggests that directly incorporating the user context vector into the current user representation introduces noise, emphasizing the importance of carefully designing a fusion module to effectively leverage user context. Additionally, jointly training $\Phi$ and $\Theta$ from scratch results in inferior performance compared to Ada-Retrieval on the three datasets, highlighting the significance of the two-stage training procedure. This can be attributed to the pre-trained base model's ability to generate more accurate and robust context information, which facilitates the training of the model.

Hyper-parameter Analysis
We further investigate the impact of our model's hyperparameters, specifically T and λ, on the three datasets.

Figure 3: Effect of the number of recommendation batches (HR@50 on Beauty, Sports, and Yelp).

Figure 4: Sensitivity analysis of parameter λ (HR@50 on Beauty, Sports, and Yelp).

In Figure 3, the optimal performance is observed when T is set to 5 for the Beauty dataset, 8 for Sports, and 6 for Yelp. The model's performance exhibits a monotonically increasing trend as T rises from 1 to the optimal T*. However, exceeding T* introduces unpredictability due to excessive inference rounds. Figure 4 reveals that the performance of Ada-Retrieval initially increases with λ, gradually reaching its peak when λ is 0.3 for Beauty, 0.5 for Sports, and 0.7 for Yelp; subsequently, performance begins to decline. When the factor λ is too low or too high, it fails to provide useful supervisory information for training. Therefore, choosing an appropriate value for λ on a validation set is crucial.

Analysis of Each Round
One core aspect of the Ada-Retrieval model is its cascading multi-round preference modeling of users. Thus, we compare the performance difference between Ada-Retrieval and the base model in each round, assessing the improvement in top-k recommendations made by Ada-Retrieval (SASRec) over SASRec, as depicted in Figure 5.

Figure 5: Improvement between Ada-Retrieval and SASRec on each round (Improv. (%) on HR@100, on Beauty, Sports, and Yelp).

Ada-Retrieval consistently exhibits substantial improvements in the early rounds, achieving enhancements of 7.5% on Beauty and 15% on Sports and Yelp in terms of HR@100. However, as the rounds progress, the performance advantage narrows to a modest 3% on Sports and experiences a slight dip of -2.5% on Beauty. Nevertheless, when considering the overall performance, Ada-Retrieval consistently outperforms. This suggests that Ada-Retrieval excels at rapidly and precisely elevating items that the base model either misses or ranks lower during the preliminary rounds.
Conclusion
In this paper, we introduce Ada-Retrieval, a novel adaptive multi-round retrieval paradigm for sequential recommendations, which provides a more dynamic and diverse approach than the traditional single-round inference paradigm. This model-agnostic framework incorporates key components, such as the item representation adapter and the user representation adapter, to effectively refine the retrieval process in a progressive manner. Extensive experiments on publicly available datasets demonstrate the effectiveness of Ada-Retrieval, emphasizing its potential to enhance the performance of various sequential recommender systems. Future work may include investigating the theoretical foundations of the benefits provided by the multi-round retrieval paradigm and expanding its application to large language models, such as augmenting task-planning abilities.

Acknowledgements
This work was supported by the National Natural Science Foundation of China (NSFC Grant No. 62106274) and the Fundamental Research Funds for the Central Universities, Renmin University of China (No. 22XNKJ24). We also wish to acknowledge the support provided by the Intelligent Social Governance Platform, Major Innovation & Planning Interdisciplinary Platform for the "Double-First Class" Initiative.

References
Ba, J. L.; Kiros, J. R.; and Hinton, G. E. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.
Cen, Y.; Zhang, J.; Zou, X.; Zhou, C.; Yang, H.; and Tang, J. 2020. Controllable multi-interest framework for recommendation. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2942–2951.
Cho, K.; Van Merriënboer, B.; Bahdanau, D.; and Bengio, Y. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.
Covington, P.; Adams, J.; and Sargin, E. 2016. Deep neural networks for YouTube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, 191–198.
Guo, Q.; Sun, Z.; Zhang, J.; and Theng, Y.-L. 2020. An attentional recurrent neural network for personalized next location recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, 83–90.
Hao, X.; Liu, Y.; Xie, R.; Ge, K.; Tang, L.; Zhang, X.; and Lin, L. 2021. Adversarial feature translation for multi-domain recommendation. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2964–2973.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778.
He, R.; and McAuley, J. 2016. Fusing similarity models with Markov chains for sparse sequential recommendation. In 2016 IEEE 16th International Conference on Data Mining (ICDM), 191–200. IEEE.
Hidasi, B.; Karatzoglou, A.; Baltrunas, L.; and Tikk, D. 2015. Session-based recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939.
Kang, W.-C.; and McAuley, J. 2018. Self-attentive sequential recommendation. In 2018 IEEE International Conference on Data Mining (ICDM), 197–206. IEEE.
Krichene, W.; and Rendle, S. 2020. On sampled metrics for item recommendation. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1748–1757.
Lewandowski, D. 2008. The retrieval effectiveness of web search engines: considering results descriptions. Journal of Documentation, 64(6): 915–937.
Li, C.; Liu, Z.; Wu, M.; Xu, Y.; Zhao, H.; Huang, P.; Kang, G.; Chen, Q.; Li, W.; and Lee, D. L. 2019. Multi-interest network with dynamic routing for recommendation at Tmall. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 2615–2623.
Li, J.; Ren, P.; Chen, Z.; Ren, Z.; Lian, T.; and Ma, J. 2017. Neural attentive session-based recommendation. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 1419–1428.
Li, J.; Wang, Y.; and McAuley, J. 2020. Time interval aware self-attention for sequential recommendation. In Proceedings of the 13th International Conference on Web Search and Data Mining, 322–330.
McAuley, J.; Targett, C.; Shi, Q.; and Van Den Hengel, A. 2015. Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, 43–52.
Rendle, S. 2010. Factorization machines. In 2010 IEEE International Conference on Data Mining, 995–1000. IEEE.
Sherstinsky, A. 2020. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Physica D: Nonlinear Phenomena, 404: 132306.
Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1): 1929–1958.
Sun, F.; Liu, J.; Wu, J.; Pei, C.; Lin, X.; Ou, W.; and Jiang, P. 2019. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 1441–1450.
Tan, Q.; Zhang, J.; Liu, N.; Huang, X.; Yang, H.; Zhou, J.; and Hu, X. 2021a. Dynamic memory based attention network for sequential recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, 4384–4392.
Tan, Q.; Zhang, J.; Yao, J.; Liu, N.; Zhou, J.; Yang, H.; and Hu, X. 2021b. Sparse-interest network for sequential recommendation. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, 598–606.
Tang, J.; and Wang, K. 2018. Personalized top-n sequential recommendation via convolutional sequence embedding. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, 565–573.
Wu, S.; Tang, Y.; Zhu, Y.; Wang, L.; Xie, X.; and Tan, T. 2019. Session-based recommendation with graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, 346–353.
Xie, R.; Liu, Q.; Liu, S.; Zhang, Z.; Cui, P.; Zhang, B.; and Lin, L. 2021. Improving accuracy and diversity in matching of recommendation with diversified preference network. IEEE Transactions on Big Data, 8(4): 955–967.
Xie, R.; Qiu, Z.; Rao, J.; Liu, Y.; Zhang, B.; and Lin, L. 2020. Internal and Contextual Attention Network for Cold-start Multi-channel Matching in Recommendation. In IJCAI, 2732–2738.
Xie, X.; Sun, F.; Liu, Z.; Wu, S.; Gao, J.; Zhang, J.; Ding, B.; and Cui, B. 2022. Contrastive learning for sequential recommendation. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), 1259–1273. IEEE.
Ying, R.; He, R.; Chen, K.; Eksombatchai, P.; Hamilton, W. L.; and Leskovec, J. 2018. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 974–983.
Yuan, F.; Karatzoglou, A.; Arapakis, I.; Jose, J. M.; and He, X. 2019. A simple convolutional generative network for next item recommendation. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, 582–590.
Zhang, Z.; and Nasraoui, O. 2006. Mining search engine query logs for query recommendation. In Proceedings of the 15th International Conference on World Wide Web, 1039–1040.
Zhou, K.; Wang, H.; Zhao, W. X.; Zhu, Y.; Wang, S.; Zhang, F.; Wang, Z.; and Wen, J.-R. 2020. S3-Rec: Self-supervised learning for sequential recommendation with mutual information maximization. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 1893–1902.
Zhou, K.; Yu, H.; Zhao, W. X.; and Wen, J.-R. 2022. Filter-enhanced MLP is all you need for sequential recommendation. In Proceedings of the ACM Web Conference 2022, 2388–2399.
Zhu, H.; Chang, D.; Xu, Z.; Zhang, P.; Li, X.; He, J.; Li, H.; Xu, J.; and Gai, K. 2019. Joint optimization of tree-based index and deep model for recommender systems. Advances in Neural Information Processing Systems, 32.
Zhu, H.; Li, X.; Zhang, P.; Li, G.; He, J.; Li, H.; and Gai, K. 2018. Learning tree-based deep model for recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1079–1088.
2024
964
18,811
CONSIDER: Commonalities and Specialties Driven Multilingual Code Retrieval Framework
Rui Li1, 2, Liyang He1, 2, Qi Liu1, 2*, Yuze Zhao1, 2, Zheng Zhang1, 2, Zhenya Huang1, 2, Yu Su3, Shijin Wang2, 4
1Anhui Province Key Laboratory of Big Data Analysis and Application & School of Computer Science and Technology, University of Science and Technology of China
2State Key Laboratory of Cognitive Intelligence
3School of Computer Science and Artificial Intelligence, Hefei Normal University
4iFLYTEK AI Research (Central China), iFLYTEK Co., Ltd.
{ruili2000, heliyang, yuzezhao, zhangzheng}@mail.ustc.edu.cn, {huangzhy, qiliuql}@ustc.edu.cn, [email protected], [email protected]

Abstract
Multilingual code retrieval aims to find code snippets relevant to a user's query from a multilingual codebase; it plays a crucial role in software development and expands the application scenarios of retrieval compared to classical monolingual code retrieval. Despite the performance improvements achieved by previous studies, two crucial problems are overlooked in the multilingual scenario. First, certain programming languages face data scarcity in specific domains, resulting in limited representation capabilities within those domains. Second, different programming languages can be used interchangeably within the same domain, making it challenging for multilingual models to accurately identify the intended programming language of a user's query. To address these issues, we propose the CommONalities and SpecIalties Driven Multilingual CodE Retrieval Framework (CONSIDER), which includes two modules. The first module enhances the representation of various programming languages by modeling pairwise and global commonalities among them. The second module introduces a novel contrastive learning negative sampling algorithm that leverages language confusion to automatically extract language-specific features. Through our experiments, we confirm the significant benefits of our model in various aspects of real-world multilingual code retrieval scenarios. Furthermore, an evaluation demonstrates the effectiveness of our proposed CONSIDER framework in monolingual scenarios as well. Our source code is available at https://github.com/smsquirrel/consider.

Introduction
Code retrieval is a foundational task in code intelligence (Mukherjee, Jermaine, and Chaudhuri 2020; Kim et al. 2010). As illustrated in Figure 1 (a), given a natural language query and a selected programming language, a classical monolingual code retrieval model aims to find code snippets in a large-scale codebase (Haldar et al. 2020; Wan et al. 2019). Such a system can assist developers in code reuse (Shuai et al. 2020; Nie et al. 2016) and in understanding complex software libraries (Ling et al. 2021).

*Corresponding Author.

Figure 1: (a) Monolingual scenarios: the user's query and selected language guide the system's choice of model and code repository. (b) Multilingual scenarios: the retrieval model autonomously searches for code snippets in a multilingual library without user-specified language selection.
With the advancement of artificial intelligence technologies, many advanced monolingual code retrieval methods (Cambronero et al. 2019; Chen and Zhou 2018; Shuai et al. 2020) have been proposed and have made tremendous progress.

As the scope of application scenarios expands, there has been a surge in demand for code retrieval. For example, software projects hosted on platforms like GitHub (https://github.com) and GitLab (https://about.gitlab.com/) are increasingly developed using multiple programming languages. This trend has led to a growing need for multilingual code retrieval capabilities (Li, Xu, and Chen 2022; Ma et al. 2023). Compared to monolingual code retrieval, multilingual code retrieval models provide significant advantages. First, they eliminate the requirement to deploy and maintain multiple retrieval models for different languages, resulting in cost reduction. Second, as illustrated in Figure 1 (b), multilingual models compare the similarity between queries and code across various programming languages, thereby extending the scope of retrieval scenarios. This capability allows them to retrieve the relevant code from a multilingual codebase. However, existing work faces the following two problems in the multilingual code retrieval scenario.

First, some programming languages often encounter data scarcity in certain specific domains. For example, in the domain of deep learning, the C++ language plays a significant role due to its exceptional performance. However, owing to the complexity of C++ and its steep learning curve, fewer resources are available for it in this domain. In contrast, Python offers a plethora of libraries and frameworks for deep learning, making it rich in application resources within this domain. In such cases, we can leverage the rich information provided by Python in deep learning to enhance the representation of C++ code in this domain. Therefore, it is necessary to exploit the commonalities of programming languages in multilingual code retrieval models to enhance performance in scenarios where data is scarce for certain programming languages.

Second, while the commonalities among programming languages can be beneficial in data-scarce scenarios, they can also make it difficult for current multilingual code retrieval models to accurately determine the intended language of a given query. Consider the example query in Figure 1 (b): "fine-tuning the pre-trained BERT model." In this case, the retrieval model may struggle to discern whether the intended language is C++ or Python, as both languages have applications in the field of deep learning. However, it is worth noting that C++ is often employed for hardware acceleration and deployment optimization in deep learning, whereas Python is commonly chosen for model design and training; the target language is therefore more likely to be Python in this context. Consequently, it is crucial to incorporate the specialties of programming languages into the modeling process to accurately identify the language intent of a user's query in multilingual scenarios.
To this end, we propose the CommONalities and SpecIalties Driven Multilingual CodE Retrieval Framework (CONSIDER) to tackle the aforementioned problems; it facilitates the seamless integration of existing pre-trained Transformer models into multilingual scenarios. The model consists of two modules. First, to capture the commonalities between programming languages, we introduce the pairwise commonality extraction module and the global commonality extraction module. These modules enable us to effectively model the commonalities and enhance code representations across different languages. Second, to model the specificity of programming languages, we propose a novel algorithm for sampling negative examples in contrastive learning. We utilize confusion matrices between different programming languages to construct negative samples, aiming to automatically capture the differences between easily confused languages. Moreover, introducing a confusion matrix into the sampling process leads to an imbalance within and between languages; to address this issue, we further employ a distribution-balancing technique to ensure stable training. By combining these modules, CONSIDER offers a comprehensive approach to accurately identifying language intent and enhancing performance in data-scarce scenarios within multilingual code retrieval. In summary, our main contributions can be summarized as follows:
• We propose a novel framework, CONSIDER, for multilingual code retrieval that incorporates two crucial aspects of multiple programming languages: their commonalities and their specialties.
• We introduce a pairwise commonality extraction module and a global commonality extraction module to model the commonalities of programming languages, enabling effective modeling of the shared characteristics among programming languages.
• We propose the Confusion-Matrix-Guided Sampling Algorithm, which leverages confusion matrices to capture the specialties of programming languages, thereby enhancing the ability to discern query intent.
• We conduct experiments in real-world multilingual retrieval scenarios, demonstrating the unique advantages of our proposed CONSIDER framework over other multilingual code retrieval models in this setting. The experimental results also show that our model can enhance the performance of multiple languages in monolingual scenarios.

Related Work
Code Retrieval. We introduce code retrieval in two parts: monolingual code retrieval models and multilingual code retrieval models. For monolingual code retrieval, one line of research involves query enhancement (Arakelyan et al. 2022; Lv et al. 2015; Lemos et al. 2014; Zhang et al. 2018). This method supplements the query with additional knowledge, enriching the query's information before matching it against the code. A second line involves multi-perspective modeling of the code (Chen and Zhou 2018; Kim et al. 2010; Zubkov et al. 2022), using technologies such as ASTs, CFGs, and DFGs to extract structured features of the code and thereby strengthen its representation. A third approach comes from the perspective of multi-task learning (Yao, Peddamail, and Sun 2019; Ye et al. 2020), enhancing the retrieval task through the design of auxiliary tasks related to retrieval, including comment generation and code generation. As for multilingual code retrieval models, there is presently less related work.
One method employs knowledge distillation for multilingual code retrieval: first train a monolingual teacher model for each language, and then use these teachers to guide the training of a multilingual student model. This approach effectively trains a multilingual code retrieval model. A second approach uses the LLVM compiler to pre-generate a consistent intermediate representation (IR) for each programming language, thereby obtaining a unified representation across languages. However, both methods overlook modeling the commonalities among programming languages, leading to suboptimal performance in scenarios where data for some languages is sparse. Additionally, they struggle to identify language intent from the linguistic features within user queries.

Figure 2: The framework of CONSIDER: (a) the overall framework structure, (b) the commonalities enhancement module, and (c) the confusion-matrix-guided sampling algorithm.

Contrastive learning. Contrastive learning aims to make representations agree with each other under proper transformations. It has attracted attention across fields, including CV (He et al. 2020), NLP (He et al. 2023; Giorgi et al. 2021; Zhao et al. 2023), and other domains (Tong et al. 2020; Ning et al. 2023). Recently, researchers in code retrieval have also begun leveraging contrastive learning to enhance task performance. Bui, Yu, and Jiang (2021) employ semantics-preserving program transformations to generate functionally equivalent code snippets as positive samples, aiming to identify semantically equivalent and non-equivalent code segments. Li et al. (2022) construct positive contrastive learning samples through representation-level data augmentation. Huang et al. (2021) introduced CoCLR, which uses query rewriting techniques such as random word deletion to create positive query samples. Differing from prior research, we devise a contrastive learning approach specifically tailored for multilingual code retrieval, aiming to model programming language characteristics. Compared to other negative-sample construction methods, ours is essentially a heuristic approach that requires no additional inference overhead for selecting negative samples.

CONSIDER Framework
Problem Definition
Code Retrieval. Given a (query, code snippet) space (Q, C), we denote a (query, code snippet) pair as (q, c) ∈ (Q, C), where q = {q_1, q_2, ..., q_n} is a query composed of n tokens and c = {c_1, c_2, ..., c_m} is a code snippet composed of m tokens. Our goal is to train a model f such that, for any query, we can find the code snippet c with the highest matching score:

\forall q \in Q, \quad \max_{c \in C} f(q, c), \qquad (1)

where f(q, c) is the matching score between query q and code snippet c. This formalization can be adapted to both monolingual and multilingual code retrieval. In the monolingual case, the (Q, C) space is composed of a single programming language dataset. In contrast, multilingual code retrieval involves a (query, code snippet) space (Q, C) that encompasses multiple language datasets, denoted as Q = \cup_{i=1}^{N} Q_i and C = \cup_{i=1}^{N} C_i, where Q_i and C_i are the queries and code snippets of language i, respectively.
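To make the formulation concrete, the sketch below scores every snippet in a (possibly multilingual) codebase and returns the top matches. It is a minimal sketch assuming the bi-encoder setup introduced later, with f(q, c) instantiated as cosine similarity between pooled embeddings; the function name and tensor layout are illustrative, not part of CONSIDER's released code.

```python
import torch
import torch.nn.functional as F

def retrieve(query_emb: torch.Tensor, code_embs: torch.Tensor, top_k: int = 10):
    """Rank a codebase by matching score f(q, c) as in Eq. (1).

    query_emb: (d,) embedding of the query q.
    code_embs: (|C|, d) embeddings of all code snippets in the codebase.
    """
    scores = F.cosine_similarity(query_emb.unsqueeze(0), code_embs, dim=-1)
    return scores.topk(top_k).indices  # indices of the best-matching snippets

# In the multilingual setting, code_embs simply stacks the embeddings of
# C = C_1 ∪ ... ∪ C_N, so the maximization in Eq. (1) ranges over all languages.
```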
Framework Overview
Figure 2 depicts our framework, which we design around the two challenges described above. First, we model different attention patterns for different languages by adding language tokens, allowing better representation modeling of each language. Second, to extract language commonalities, we design a pairwise commonality extraction module and a global commonality extraction module, which extract the commonalities between language pairs and among all languages, respectively. Third, to model language-specific features, we introduce a novel negative sampling algorithm based on contrastive learning: using the confusion matrix of the languages on the validation set, we sample languages that are easily confused with a selected base language as negative samples, thereby automatically learning language-specific features.

Feature Extraction
We begin with a bi-encoder architecture that uses pre-trained Transformer models as the backbone, as depicted in Figure 2(a). To capture the representation of the query, we prepend a [CLS] token to the original query tokens q, resulting in q':

q' = [CLS] \circ q \circ [SEP], \qquad (2)

where \circ denotes concatenation and [SEP] is a special token marking the end of the input. We feed q' into the query encoder E_\phi, yielding the output E_\phi(q'), where \phi denotes the parameters of the query encoder. The representation of the [CLS] token, denoted e^q_{CLS}, serves as the representation of the query q.

To effectively model programming language features, we introduce language tokens and employ distinct attention patterns to extract features of different programming languages. For each programming language i, we introduce a language token denoted [L_i]. We prepend all language tokens to the code snippet c, together with a leading [CLS] token, resulting in a new input c':

c' = [CLS] \circ [L_1] ... [L_n] \circ c \circ [SEP]. \qquad (3)

To integrate the introduced [L_i] tokens smoothly, we initialize them with the embedding of the [CLS] token, since the [CLS] token tends to capture the overall meaning of the entire sentence (Clark et al. 2019; Kovaleva et al. 2019). To avoid interfering with the positional encoding of the original input, we set the position ids of all [L_i] tokens to 0, while the original code tokens are numbered starting from 1. We then feed c' into the code encoder E_\theta to obtain the output E_\theta(c'), where \theta denotes the parameters of the code encoder. The representation of the [CLS] token is denoted e^c_{CLS}, and the representations of the [L_i] tokens are denoted e_{L_i}, for i = 1, ..., n.
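As a concrete illustration of Eqs. (2) and (3), the sketch below builds the code-side input c' with a HuggingFace-style tokenizer. The [L_i] token spellings, the CodeBERT backbone used here, and assigning position 0 to the [CLS] token as well are our assumptions; the paper specifies only that language tokens receive position id 0 while code tokens are numbered from 1.

```python
from transformers import AutoTokenizer

LANGS = ["ruby", "javascript", "go", "python", "java", "php"]
tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")  # assumed backbone
tok.add_special_tokens({"additional_special_tokens": [f"[L_{l}]" for l in LANGS]})

def build_code_input(code: str, max_len: int = 320):
    lang_ids = tok.convert_tokens_to_ids([f"[L_{l}]" for l in LANGS])
    code_ids = tok(code, add_special_tokens=False, truncation=True,
                   max_length=max_len - len(LANGS) - 2)["input_ids"]
    # c' = [CLS] ∘ [L_1] ... [L_n] ∘ c ∘ [SEP]   (Eq. 3)
    input_ids = [tok.cls_token_id] + lang_ids + code_ids + [tok.sep_token_id]
    # Language tokens share position id 0 so they do not disturb the original
    # positional encoding; code tokens (and [SEP]) are numbered from 1.
    position_ids = [0] * (1 + len(LANGS)) + list(range(1, len(code_ids) + 2))
    return input_ids, position_ids
```

In the model itself, the embedding rows of the newly added [L_i] tokens would additionally be initialized from the [CLS] embedding, as described above.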
Commonalities Enhancement Module
To alleviate data scarcity in certain domains for a programming language, we design a pairwise commonality extraction module and a global commonality extraction module to capture the commonalities between language pairs and across all languages, respectively.

To extract commonalities between programming language i and the other languages, we employ the pairwise commonality extraction module, which uses the relevance between language i and each other language to identify shared features. Specifically, we compute the relevance score between language i and the representation of language j as

s_{ij} = (e_{L_i} W_i^{\top}) \cdot (e_{L_j} W_j^{\top}), \qquad (4)

where W_i and W_j are parameters specific to programming languages i and j, respectively, and \cdot is the dot product. We then use the relevance scores as weights to aggregate the commonalities from the other languages for language i:

\tilde{e}_{L_i} = \sum_{j=1, j \neq i}^{N} \frac{e^{s_{ij}}}{\sum_{k=1, k \neq i}^{N} e^{s_{ik}}} \, e_{L_j}. \qquad (5)

To extract the overall commonality across all languages, we draw inspiration from (Ou et al. 2021) and maximize the mutual information between the embedding of the [CLS] token and the embedding of each language token. To estimate this mutual information we use the Jensen-Shannon divergence estimator (JSDE) (Nowozin, Cseke, and Tomioka 2016), a widely used estimator that is insensitive to the number of negative samples, and we optimize it jointly with the model parameters:

\tilde{I}_{\delta}(e^c_{CLS}) = \sum_{i=1}^{N} \big( \mathbb{E}_{P(e^c_{CLS}, e_{L_i})}[-\mathrm{softplus}(-D_{\delta}(e^c_{CLS}, e_{L_i}))] - \mathbb{E}_{P(e^c_{CLS})P(e_{L_i})}[\mathrm{softplus}(D_{\delta}(e^c_{CLS}, e_{L_i}))] \big), \qquad (6)

where P(e^c_{CLS}, e_{L_i}) is the joint probability distribution, P(e^c_{CLS})P(e_{L_i}) is the product of the marginals, the softplus function is defined as softplus(x) = log(1 + e^x), and D_{\delta}(\cdot, \cdot) is a discriminator realized by a neural network with parameters \delta. By maximizing this estimate, we capture the commonalities that exist across all languages.

Finally, we employ a fusion strategy that combines the language token representation e_{L_i}, the pairwise commonality representation \tilde{e}_{L_i}, and the [CLS] token representation e^c_{CLS} into a unified representation, which is fed into a multilayer perceptron (MLP) to produce the final representation \hat{e}^c of the code snippet c:

\hat{e}^c = MLP(e_{L_i} \circ \tilde{e}_{L_i} \circ e^c_{CLS}). \qquad (7)

By explicitly modeling the commonalities between language pairs and across all languages, this module enhances the final representation of code snippets in data-scarce domains.
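A minimal PyTorch sketch of the pairwise commonality extraction and fusion steps follows; masking the base language out of the softmax implements the j ≠ i constraint in Eq. (5). The per-language projection shapes and the two-layer fusion MLP are assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn

class PairwiseCommonality(nn.Module):
    """Sketch of Eqs. (4)-(5) and (7): relevance-weighted aggregation of
    language-token embeddings, then fusion via an MLP."""

    def __init__(self, n_langs: int, dim: int = 768):
        super().__init__()
        # One projection matrix per language, stored as W_j^T (Eq. 4).
        self.proj = nn.Parameter(torch.randn(n_langs, dim, dim) * 0.02)
        self.fuse = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, dim))

    def forward(self, e_L: torch.Tensor, e_cls: torch.Tensor, i: int):
        # e_L: (n_langs, dim) language-token embeddings; e_cls: (dim,).
        z = torch.einsum("nd,ndk->nk", e_L, self.proj)   # e_{L_j} W_j^T for all j
        s = z[i] @ z.T                                   # s_{ij} for all j (Eq. 4)
        s[i] = float("-inf")                             # exclude j = i from the softmax
        w = torch.softmax(s, dim=-1)                     # e^{s_ij} / sum_k e^{s_ik}
        e_tilde = w @ e_L                                # Eq. (5)
        return self.fuse(torch.cat([e_L[i], e_tilde, e_cls]))  # Eq. (7)
```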
Confusion-Matrix-Guided Sampling Algorithm
To capture the specialty features of each programming language, we design a novel negative sampling algorithm based on contrastive learning. As illustrated in Figure 2(c), we use the confusion matrix between programming languages to determine how many samples of each language to include when constructing a batch. This groups languages that are often confused with each other into the same batch for contrastive learning, thereby automatically learning the characteristic features of different languages and better distinguishing the target language of a query. Specifically, we initialize the confusion matrix C ∈ R^{N×N} as the sum of an identity matrix I and an all-ones matrix 1 (i.e., C = I + 1), which yields uniform sampling across the other programming languages. During training, we periodically recompute the confusion matrix on the validation set using a procedure similar to computing Mean Reciprocal Rank (MRR).

Specifically, at regular training intervals, to compute C_{ij} we consider each query of the i-th language in the validation set Q^{val}_i. Using the current model, we retrieve the top-K code snippets D_K = {d_1, d_2, ..., d_K} most similar to the query, and sum the reciprocal ranks rank_d over the retrieved snippets d whose language is j:

C_{ij} = \sum_{q \in Q^{val}_i} \left( \sum_{d \in \{d \mid d \in D_K, L(d) = j\}} \frac{1}{rank_d} \right), \qquad (8)

where L(d) denotes the programming language of code snippet d.

Next, we use the idea of stratified sampling to guide the batch construction. We first set the probability of selecting each base programming language as p^l = {p^l_1, p^l_2, ..., p^l_N}, where the probability of selecting the i-th language is

p^l_i = \frac{N_i}{\sum_{j=1}^{N} N_j}, \qquad (9)

and N_i is the number of training examples of the i-th language. We then compute the confusion vector of language i from the confusion matrix:

v_i = C_{i\cdot}^{\top} + C_{\cdot i}, \qquad (10)

where the row C_{i\cdot} describes how queries whose target language is i are confused with each language (analogous to precision), and the column C_{\cdot i} describes how queries of every target language are confused with language i (analogous to recall). Considering both gives a comprehensive measure of the confusion level between languages. We then apply sum normalization norm(\cdot) to v_i to obtain the sampling distribution p_i = norm(v_i) = {p_{i1}, p_{i2}, ..., p_{iN}} for language i, where p_{ij} is the probability that a query targeting the i-th language is confused with the j-th language.

Considering only the confusion level may leave some easily confused base languages under-sampled. We therefore boost the probability of sampling the base language i itself:

p'_{ii} = p_{ii} + (\alpha - p_{ii})\beta, \qquad (11)

where \alpha is a threshold and \beta controls the strength of the boost: when the probability of base language i falls below \alpha, it is increased. Since p_{ii} is replaced by p'_{ii}, we renormalize {p_{i1}, ..., p'_{ii}, ..., p_{iN}} to obtain the final probabilities {\hat{p}_{i1}, ..., \hat{p}_{ii}, ..., \hat{p}_{iN}} and construct the next batch D_{sel} ⊆ (Q, C) according to each language's probability \hat{p}_{ij}.

The sampling process above may introduce inter-language and intra-language imbalance. First, the sampled quantity for each language may deviate from the actual language distribution, causing inter-language imbalance. To address this, we rebalance the language selection probability p^l using the actually realized sampling probability p^r:

\hat{p}^l = norm\!\left(p^l \cdot norm\!\left(\frac{p^l}{p^r}\right)\right), \qquad (12)

where p^r denotes the sampling probability actually realized for each language, measured during the sampling process. Second, repeatedly drawing already sampled examples under \hat{p}^l causes intra-language imbalance; to mitigate this, we apply exponential decay to the sampling probability of already sampled examples, reducing the chance of resampling them.
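The sampling distribution of Eqs. (10) and (11) can be sketched as follows; the default α and β values follow the grid searched later in the paper, and the clipping step is our own safeguard.

```python
import numpy as np

def language_sampling_probs(C: np.ndarray, i: int,
                            alpha: float = 0.6, beta: float = 1.75):
    """Sketch of Eqs. (10)-(11): turn a confusion matrix into per-language
    sampling probabilities for base language i."""
    v = C[i, :] + C[:, i]                     # Eq. (10): row (precision) + column (recall)
    p = v / v.sum()                           # sum normalization
    if p[i] < alpha:                          # boost an under-sampled base language
        p[i] = p[i] + (alpha - p[i]) * beta   # Eq. (11)
    p = np.clip(p, 0.0, 1.0)
    return p / p.sum()                        # renormalize to a distribution

# Usage: probs = language_sampling_probs(C, i=0); language counts for the next
# batch can then be drawn with np.random.multinomial(batch_size, probs).
```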
Finally, we conduct contrastive learning on the constructed batch D_{sel}:

L = -\sum_{(q,c) \in D_{sel}} \log \frac{\exp(\varphi(e^{q}_{CLS}, \hat{e}^{c}))}{\sum_{(q',c') \in D_{sel}} \exp(\varphi(e^{q}_{CLS}, \hat{e}^{c'}))}, \qquad (13)

where \varphi(\cdot, \cdot) measures the cosine similarity between the query and code representations. For negative samples of the same language, the contrastive objective mainly widens the semantic gap between the query and non-matching code. For negative samples of different languages, it additionally accentuates the differences in language-specific features, so contrastive learning models linguistic characteristics more effectively. The advantage of this algorithm is that it constructs negative samples according to the confusion matrix, automatically learning the differences between easily confused languages. Furthermore, the confusion matrix is built during regular evaluation, so selecting negative samples consumes no additional inference cost.
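A minimal sketch of the in-batch objective in Eq. (13) is shown below. The temperature parameter is our addition; the paper's formula scores pairs with plain cosine similarity.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(q_emb: torch.Tensor, c_emb: torch.Tensor,
                              temperature: float = 1.0):
    """Sketch of Eq. (13) over a batch D_sel whose language composition was
    drawn with the confusion-matrix-guided sampler. q_emb, c_emb: (B, d);
    row i of c_emb is the positive for row i of q_emb, other rows are negatives."""
    q = F.normalize(q_emb, dim=-1)
    c = F.normalize(c_emb, dim=-1)
    sim = q @ c.T / temperature               # cosine similarities phi(e_q, e_c')
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(sim, targets)      # -log softmax over the batch
```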
Experiment
In this section, we conduct experiments on monolingual and multilingual tasks with a real-world code retrieval dataset to verify the effectiveness of our proposed approach.

Experimental Setup
Dataset. Since we need to evaluate model performance in a multilingual environment, we choose CodeSearchNet (Husain et al. 2019) as our dataset. It collects code snippets and queries for six programming languages (Go, Python, Java, JavaScript, Ruby, and PHP) from GitHub and is the largest and most widely used dataset for assessing code retrieval performance. Table 1 reports the dataset statistics.

Language     Training   Validation   Test    Codebase
Ruby         2.5K       1.4K         1.2K    4.4K
JavaScript   5.8K       3.9K         3.3K    13.9K
Go           16.7K      7.3K         8.1K    28.1K
Python       25.2K      13.9K        14.9K   43.8K
Java         16.4K      5.2K         10.9K   40.3K
PHP          24.1K      13.0K        14.0K   52.7K

Table 1: CodeSearchNet dataset statistics.

Evaluation Tasks. We evaluate model performance in both monolingual and multilingual scenarios. In the monolingual scenario, we evaluate the model on the test set of each individual language; to simulate the real-world multilingual context, we merge the test sets of all languages into a single multilingual test set. We use mean reciprocal rank (MRR) (Hull 1999) as the evaluation metric for all models:

MRR = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{rank_i}. \qquad (14)

Comparison Methods. We compare our framework with two other multilingual retrieval training methods: 1) Multilingual: mixing the training datasets of all programming languages and then applying contrastive learning for training; 2) Distill: the multilingual training framework proposed by (Li, Xu, and Chen 2022), which first trains a monolingual teacher model for each programming language and then trains a multilingual student model in a multilingual environment. Moreover, to demonstrate that our multilingual retrieval framework is model-agnostic and improves different pre-trained Transformers, we fine-tune four popular pre-trained code retrieval Transformers on CodeSearchNet: RoBERTa(code) (Husain et al. 2019), CodeBERT (Feng et al. 2020), GraphCodeBERT (Guo et al. 2021), and UniXCoder (Guo et al. 2022).

Implementation Details
Our CONSIDER framework is implemented in PyTorch. For all models, we map the final output dimension to 768 and use the AdamW optimizer (Loshchilov and Hutter 2017). The batch size, learning rate, and number of training steps are 256, 2e-5, and 50K, respectively. The maximum sequence lengths for text and code are 128 and 320, respectively. All experiments are conducted on two Tesla A100 GPUs. We consider hyperparameters α ∈ {0.5, 0.6, 0.7} and β ∈ {1.5, 1.75, 2.0} and perform a grid search across scenarios to identify their optimal combinations.

Figure 3: Comparison of the overall performance (MRR) between our framework and the baselines in a monolingual scenario.

Overall Results
First, we test all multilingual frameworks in the multilingual scenario, as shown in Table 2.

Framework     Model           Ruby        JavaScript  Go          Python      Java        PHP         Overall
Multilingual  RoBERTa(code)   46.0        46.3        82.1        54.7        56.1        52.3        56.2
              CodeBERT        50.2        50.1        83.8        59.2        59.9        55.6        59.8
              GraphCodeBERT   51.7        51.4        84.6        62.6        61.4        58.6        61.7
              UniXCoder       57.1        56.8        86.4        65.3        65.6        60.3        65.3
Distill       RoBERTa(code)   45.3        46.5        81.6        56.0        57.7        53.5        56.8
              CodeBERT        49.8        49.1        82.2        60.9        61.8        57.1        60.2
              GraphCodeBERT   51.3        50.3        84.0        64.2        63.6        60.3        62.3
              UniXCoder       58.2        58.9        88.2        67.0        67.3        61.9        65.4
CONSIDER      RoBERTa(code)   52.6(+6.6)  53.4(+7.1)  87.4(+5.3)  60.2(+5.5)  62.7(+6.6)  59.1(+6.8)  62.6(+5.8)
              CodeBERT        56.2(+6.0)  57.0(+6.9)  89.0(+5.2)  64.6(+5.4)  66.5(+6.6)  62.3(+6.7)  65.9(+6.1)
              GraphCodeBERT   57.8(+6.1)  57.6(+6.2)  89.5(+4.9)  66.1(+3.5)  67.3(+5.9)  63.1(+4.5)  66.9(+5.2)
              UniXCoder       61.6(+3.4)  60.9(+2.0)  90.2(+2.0)  69.7(+4.4)  69.9(+4.3)  65.0(+4.7)  69.6(+4.3)

Table 2: Comparison of the overall performance (MRR) between our framework and the baselines in a multilingual scenario. Values in parentheses are improvements over the Multilingual framework with the same backbone; CONSIDER achieves the best result in every column.

We find that, compared to the monolingual scenario, both directly applying a multilingual training set and adopting the knowledge distillation method suffer a more significant performance decline, while the degradation of our CONSIDER framework in the multilingual scenario is relatively small. This indicates that, by modeling language specialties, our method better identifies users' language intent in multilingual scenarios. Next, we conduct experiments in the monolingual scenario, as shown in Figure 3. Directly applying a multilingual training set improves the overall performance, especially for low-resource languages, but hurts some high-resource languages; knowledge distillation improves overall performance, but the gains are relatively small due to the limitations of the teacher models. In contrast, our CONSIDER framework shows a stable improvement across all languages compared to the monolingual model, suggesting that it leverages the similarities between programming languages to enhance the performance of multiple languages.

Figure 4: Visualization of commonalities and specialties modeling effectiveness in CONSIDER: (a) pairwise commonality module visualizations, (b) confusion matrix visualizations, and (c) t-SNE visualizations.
Figure 5: Ablation experiments (MRR) in the monolingual and multilingual scenarios, comparing full CONSIDER with variants keeping only the commonalities enhancement module, keeping only the CMGS algorithm, or removing both.

Model Analysis
Ablation Study. To investigate the impact of each module, we conduct an ablation study, initializing our framework with the CodeBERT model and experimenting in both monolingual and multilingual scenarios. The results are shown in Figure 5. Removing any module decreases performance, indicating the effectiveness of our design. The commonalities enhancement module improves overall performance, while modeling language-specific features produces larger gains in the multilingual scenario. We also find that the CMGS algorithm improves performance even in the monolingual scenario; we speculate that updating the confusion matrix diversifies the sample distribution in contrastive learning.

Visualization. First, to examine how well our framework models language commonalities, we statistically analyze and visualize the correlation scores for each pair of languages in the pairwise commonality extraction module, as depicted in Figure 4(a). The language pairs JavaScript and PHP, and Python and Ruby, show strong correlations (JavaScript and PHP are both commonly used in web development). This confirms that the commonalities enhancement module can exploit the commonalities between languages to improve their performance. Next, to investigate how well our framework models language specialties, we apply the Multilingual and CONSIDER frameworks to multilingual fine-tuning of the CodeBERT model and visualize the resulting confusion matrices on the test set. As shown in Figure 4(b), our algorithm yields a confusion matrix much closer to a diagonal matrix. Additionally, to demonstrate this visually, we randomly select 500 code snippets from the test set of each language and project their representations onto a 2D space using t-SNE (Van der Maaten and Hinton 2008). In Figure 4(c), code representations of different programming languages are shown in distinct colors, and we compare our framework with the multilingual method. The visualization for CONSIDER shows clearly separated clusters per language, while the plot for the multilingual baseline shows large areas of overlap among languages. This indicates that CONSIDER effectively models the characteristics of programming languages, whereas other multilingual retrieval frameworks struggle to accurately capture the specialty features of different languages.

Conclusion
In this paper, we investigated the task of multilingual code retrieval and proposed a novel framework, CONSIDER, to enhance the capability of retrieval models in multilingual scenarios. Specifically, we first modeled the commonalities between programming languages and used them to enhance the representation of each language. Then, we introduced a novel confusion-matrix-guided sampling algorithm to model the specialties of languages.
Through extensive experiments in both monolingual and multilingual retrieval scenarios, we demonstrated that CONSIDER can leverage the commonalities between programming languages to boost overall performance and can effectively model the specialties of languages, thereby better understanding the target language of user queries. We also conducted additional analysis experiments to substantiate the effectiveness and rationality of CONSIDER.

Acknowledgements
This research was partially supported by grants from the National Key Research and Development Program of China (No. 2021YFF0901003), the National Natural Science Foundation of China (No. 62106244), and the University Synergy Innovation Program of Anhui Province (No. GXXT-2022-042).

References
Arakelyan, S.; Hakhverdyan, A.; Allamanis, M.; Garcia, L.; Hauser, C.; and Ren, X. 2022. NS3: Neuro-symbolic Semantic Code Search. In NeurIPS.
Bui, N. D. Q.; Yu, Y.; and Jiang, L. 2021. Self-Supervised Contrastive Learning for Code Retrieval and Summarization via Semantic-Preserving Transformations. In SIGIR 2021, 511–521.
Cambronero, J.; Li, H.; Kim, S.; Sen, K.; and Chandra, S. 2019. When Deep Learning Met Code Search. In ESEC/FSE 2019, 964–974.
Chen, Q.; and Zhou, M. 2018. A Neural Framework for Retrieval and Summarization of Source Code. In ASE 2018, 826–831.
Clark, K.; Khandelwal, U.; Levy, O.; and Manning, C. D. 2019. What Does BERT Look At? An Analysis of BERT's Attention. In BlackboxNLP Workshop at ACL 2019.
Feng, Z.; Guo, D.; Tang, D.; Duan, N.; Feng, X.; Gong, M.; Shou, L.; Qin, B.; Liu, T.; Jiang, D.; and Zhou, M. 2020. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. In Findings of EMNLP 2020, 1536–1547.
Giorgi, J.; Nitski, O.; Wang, B.; and Bader, G. 2021. DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations. In ACL-IJCNLP 2021.
Guo, D.; Lu, S.; Duan, N.; Wang, Y.; Zhou, M.; and Yin, J. 2022. UniXcoder: Unified Cross-Modal Pre-training for Code Representation. In ACL 2022, 7212–7225.
Guo, D.; Ren, S.; Lu, S.; Feng, Z.; Tang, D.; Liu, S.; Zhou, L.; Duan, N.; Svyatkovskiy, A.; Fu, S.; Tufano, M.; Deng, S. K.; Clement, C. B.; Drain, D.; Sundaresan, N.; Yin, J.; Jiang, D.; and Zhou, M. 2021. GraphCodeBERT: Pre-training Code Representations with Data Flow. In ICLR 2021.
Haldar, R.; Wu, L.; Xiong, J.; and Hockenmaier, J. 2020. A Multi-Perspective Architecture for Semantic Code Search. In ACL 2020, 8563–8568.
He, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. 2020. Momentum Contrast for Unsupervised Visual Representation Learning. In CVPR 2020.
He, L.; Huang, Z.; Chen, E.; Liu, Q.; Tong, S.; Wang, H.; Lian, D.; and Wang, S. 2023. An Efficient and Robust Semantic Hashing Framework for Similar Text Search. ACM Trans. Inf. Syst., 41(4).
Huang, J.; Tang, D.; Shou, L.; Gong, M.; Xu, K.; Jiang, D.; Zhou, M.; and Duan, N. 2021. CoSQA: 20,000+ Web Queries for Code Search and Question Answering. In ACL-IJCNLP 2021, 5690–5700.
Hull, D. 1999. Xerox TREC-8 Question Answering Track Report.
Husain, H.; Wu, H.; Gazit, T.; Allamanis, M.; and Brockschmidt, M. 2019. CodeSearchNet Challenge: Evaluating the State of Semantic Code Search. CoRR, abs/1909.09436.
Kim, J.; Lee, S.; Hwang, S.; and Kim, S. 2010. Towards an Intelligent Code Search Engine. In AAAI 2010.
Kovaleva, O.; Romanov, A.; Rogers, A.; and Rumshisky, A. 2019. Revealing the Dark Secrets of BERT. In EMNLP-IJCNLP 2019.
Lemos, O. A. L.; de Paula, A. C.; Zanichelli, F. C.; and Lopes, C. V. 2014. Thesaurus-Based Automatic Query Expansion for Interface-Driven Code Search. In MSR 2014, 212–221.
Li, H.; Miao, C.; Leung, C.; Huang, Y.; Huang, Y.; Zhang, H.; and Wang, Y. 2022. Exploring Representation-level Augmentation for Code Search. In EMNLP 2022, 4924–4936.
Li, W.; Xu, J.; and Chen, Q. 2022. Knowledge Distillation-Based Multilingual Code Retrieval. Algorithms, 15(1): 25.
Ling, X.; Wu, L.; Wang, S.; Pan, G.; Ma, T.; Xu, F.; Liu, A. X.; Wu, C.; and Ji, S. 2021. Deep Graph Matching and Searching for Semantic Code Retrieval. ACM Trans. Knowl. Discov. Data, 15(5): 88:1–88:21.
Loshchilov, I.; and Hutter, F. 2017. Fixing Weight Decay Regularization in Adam. CoRR, abs/1711.05101.
Lv, F.; Zhang, H.; Lou, J.-g.; Wang, S.; Zhang, D.; and Zhao, J. 2015. CodeHow: Effective Code Search Based on API Understanding and Extended Boolean Model. In ASE 2015.
Ma, Y.; Yu, Y.; Li, S.; Jia, Z.; Ma, J.; Xu, R.; Dong, W.; and Liao, X. 2023. MulCS: Towards a Unified Deep Representation for Multilingual Code Search. In SANER 2023, 120–131.
Mukherjee, R.; Jermaine, C.; and Chaudhuri, S. 2020. Searching a Database of Source Codes Using Contextualized Code Search. Proc. VLDB Endow., 13(10): 1765–1778.
Nie, L.; Jiang, H.; Ren, Z.; Sun, Z.; and Li, X. 2016. Query Expansion Based on Crowd Knowledge for Code Search. IEEE Trans. Serv. Comput., 9(5): 771–783.
Ning, Y.; Huang, Z.; Lin, X.; Chen, E.; Tong, S.; Gong, Z.; and Wang, S. 2023. Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training. In AAAI 2023, 13409–13418.
Nowozin, S.; Cseke, B.; and Tomioka, R. 2016. f-GAN: Training Generative Neural Samplers Using Variational Divergence Minimization. In NeurIPS 2016, 271–279.
Ou, Z.; Su, Q.; Yu, J.; Zhao, R.; Zheng, Y.; and Liu, B. 2021. Refining BERT Embeddings for Document Hashing via Mutual Information Maximization. In Findings of EMNLP 2021, 2360–2369.
Shuai, J.; Xu, L.; Liu, C.; Yan, M.; Xia, X.; and Lei, Y. 2020. Improving Code Search with Co-Attentive Representation Learning. In ICPC 2020, 196–207.
Tong, W.; Tong, S.; Huang, W.; He, L.; Ma, J.; Liu, Q.; and Chen, E. 2020. Exploiting Knowledge Hierarchy for Finding Similar Exercises in Online Education Systems. In ICDM 2020, 1298–1303.
Wan, Y.; Shu, J.; Sui, Y.; Xu, G.; Zhao, Z.; Wu, J.; and Yu, P. S. 2019. Multi-modal Attention Network Learning for Semantic Source Code Retrieval. In ASE 2019, 13–25.
Yao, Z.; Peddamail, J. R.; and Sun, H. 2019. CoaCor: Code Annotation for Code Retrieval with Reinforcement Learning. In WWW 2019, 2203–2214.
Ye, W.; Xie, R.; Zhang, J.; Hu, T.; Wang, X.; and Zhang, S. 2020. Leveraging Code Generation to Improve Code Retrieval and Summarization via Dual Learning. In WWW 2020, 2309–2319.
Zhang, F.; Niu, H.; Keivanloo, I.; and Zou, Y. 2018. Expanding Queries for Code Search Using Semantically Related API Class-names. IEEE Transactions on Software Engineering, 44(11): 1070–1082.
Zhao, C.; Zhao, H.; He, M.; Zhang, J.; and Fan, J. 2023. Cross-Domain Recommendation via User Interest Alignment. In WWW 2023, 887–896.
Zubkov, M.; Spirin, E.; Bogomolov, E.; and Bryksin, T. 2022. Evaluation of Contrastive Learning with Various Code Representations for Code Clone Detection. CoRR, abs/2206.08726.
UniGen: A Unified Generative Framework for Retrieval and Question Answering with Large Language Models

Xiaoxi Li*, Yujia Zhou*, Zhicheng Dou
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China
Engineering Research Center of Next-Generation Intelligent Search and Recommendation, Ministry of Education
{xiaoxi_li, zhouyujia, dou}@ruc.edu.cn
*These authors contributed equally.

Abstract
Generative information retrieval, encompassing the two major tasks of Generative Document Retrieval (GDR) and Grounded Answer Generation (GAR), has gained significant attention in information retrieval and natural language processing. Existing methods for GDR and GAR rely on separate retrieval and reader modules, which hinders simultaneous optimization. To overcome this, we present UniGen, a Unified Generative framework for retrieval and question answering that integrates both tasks into a single generative model leveraging the capabilities of large language models. UniGen employs a shared encoder and two distinct decoders for generative retrieval and question answering. To facilitate the learning of both tasks, we introduce connectors, generated by large language models, to bridge the gaps between query inputs and generation targets, as well as between document identifiers and answers. Furthermore, we propose an iterative enhancement strategy that leverages generated answers and retrieved documents to iteratively improve both tasks. Through extensive experiments on the MS MARCO and NQ datasets, we demonstrate the effectiveness of UniGen, showcasing its superior performance in both the retrieval and the question answering tasks.

Introduction
Generative information retrieval has been a focal point of research in recent years. It concerns generating relevant information from a vast corpus, such as Wikipedia, in response to a specific query, and primarily encompasses two tasks: Generative Document Retrieval (GDR) (Metzler et al. 2021; Tay et al. 2022; Zhuang et al. 2022; Wang et al. 2022) and Grounded Answer Generation (GAR) (Guu et al. 2020; Lewis et al. 2020; Izacard and Grave 2020). GDR retrieves a ranked list of documents in response to a query through an encoder-decoder architecture that directly generates document identifiers (docids). GAR, in turn, generates an answer grounded in a specific segment of supporting information in response to the user's query.

Figure 1: Illustration of the unified generative framework, which combines retrieval and question answering tasks through LLM-generated connectors.

The generative information retrieval landscape has been dramatically reshaped by recent advances in GDR and GAR. For the GDR task, the seminal work of Metzler et al. (2021) has been instrumental: document retrieval is accomplished by directly generating document identifiers with end-to-end trained generation models. Subsequent research has built upon this work, notably enhancing the indexing strategy (Tay et al. 2022; Zhuang et al. 2022; Wang et al. 2022), identifier design (Tay et al. 2022; Bevilacqua et al. 2022; Zhou et al. 2022b; Sun et al. 2023; Zhang et al. 2023; Tang et al. 2023), and support for dynamic corpora (Mehta et al. 2022).
For the GAR task, prevalent models such as REALM (Guu et al. 2020), RAG (Lewis et al. 2020), FID (Izacard and Grave 2020), EMDR2 (Singh et al. 2021), and Atlas (Izacard et al. 2022) have employed dense retrieval models to retrieve relevant documents, which are then synthesized by generative models to yield the final answer.

Despite these advances, optimizing the generative retrieval and question answering (QA) tasks individually requires separate training techniques, distinct training data, and additional time cost. To address this, we propose to optimize both tasks simultaneously with a single model. We note that both tasks can employ an encoder-decoder structure and share two essential characteristics: (1) the need for a profound comprehension of the semantics behind the query input, and (2) the need to comprehend and memorize knowledge in the corpus. Drawing on these shared characteristics, we propose a unified framework capable of jointly generating docids and answers, facilitating knowledge sharing and ultimately reinforcing performance on downstream tasks.

More specifically, we propose UniGen, a Unified Generative framework that enhances retrieval and QA concurrently. UniGen employs a shared encoder and two distinct decoders: a retrieval decoder and a QA decoder. The shared encoder improves the model's comprehension of the input through knowledge shared across both tasks; as shown in Figure 1, the retrieval decoder generates docids for the retrieval task, while the QA decoder generates answers for the QA task. Together, the shared encoder and separate decoders improve the performance of both tasks.

Nevertheless, two notable gaps in such a unified generative IR framework hinder training: (1) the input-output gap: input queries are often brief and lack contextual semantics, creating a disparity between query inputs and generation targets; and (2) the docid-answer gap: conventional docids are typically unreadable sequences, which makes them hard to learn jointly with answer generation. To bridge these gaps, we introduce the concept of Connectors. Specifically, the Q-Connector enriches the query's context and the D-Connector refines the document's content, bridging the input-output gap and the docid-answer gap, respectively. Since generating these connectors is a highly knowledge-intensive task, we propose leveraging large language models (LLMs), which have recently gained significant attention (Touvron et al. 2023a; Chiang et al. 2023; Chowdhery et al. 2022), to accomplish it; Figure 1 illustrates this approach.

Furthermore, previous works (Lewis et al. 2020; Mao et al. 2020) have shown that integrating the retrieval and QA tasks creates a mutually beneficial relationship: documents acquired through the retrieval decoder serve as supplementary knowledge for answer generation, while answers generated by the QA decoder contribute to more effective document retrieval. Building on this insight, we further propose an Iterative Enhancement Strategy that optimizes the performance of both retrieval and QA tasks at the data level.
This strategy uses the retrieved documents and generated answers from the previous iteration as inputs for the next model iteration. By continuously refining the model input through this iterative process, we obtain superior performance on both tasks. A series of experiments conducted on the public datasets MS MARCO Question Answering (Nguyen et al. 2016) and Natural Questions (NQ) (Kwiatkowski et al. 2019) validates the effectiveness of our proposed methods: the results demonstrate significant improvements in both retrieval and QA performance compared to baseline models.

The paper makes the following key contributions:
• Unified Generative Framework: We develop a generative framework that incorporates a multi-decoder structure to simultaneously learn retrieval and QA tasks.
• LLM-enhanced Connectors: We introduce the Q-Connector and D-Connector generated by LLMs, which establish semantic connections in the input-output and docid-answer spaces, enhancing query semantics and refining document content, respectively.
• Iterative Enhancement Strategy: We propose an iterative approach to improve both generative retrieval and QA by leveraging the generated answers and the retrieved documents.

Related Work
Generative Retrieval. Generative retrieval is an innovative approach to information retrieval that leverages the parameters of pre-trained language models as differentiable indices (Tay et al. 2022), enabling the direct generation of relevant document identifiers. Recent research in this field primarily focuses on document representation and model training. For document representation, existing studies draw inspiration from DSI (Tay et al. 2022) and explore approaches such as atomic identifiers, text fragments, and semantic clusters; among these, text fragments stand out for their ease of use and interpretability. For instance, Ultron (Zhou et al. 2022b) uses the document URL and title as representations, SEAL (Bevilacqua et al. 2022) considers all n-grams within a document as potential identifiers, and MINDER (Li et al. 2023) takes a multi-view approach incorporating synthetic identifiers, titles, and substrings. For model training, a simple yet effective method uses generated pseudo-query data to teach the model the mapping between pseudo-queries and their corresponding docids (Zhuang et al. 2022; Wang et al. 2022; Zhou, Dou, and Wen 2023; Wang et al. 2023; Zhou et al. 2022a), after which labeled query-docid data further refines the model. Another notable contribution is TOME (Ren et al. 2023), which proposes a two-stage structure that first generates a passage relevant to the query and then generates the URL associated with that passage.

Open-Domain Question Answering. Open-domain question answering answers queries without relying on provided context. It takes two primary forms: closed-book and open-book. In closed-book QA, models cannot access external knowledge bases and must internalize all necessary information within their parameters. Earlier works such as T5 (Raffel et al. 2020), BART (Lewis et al. 2019), and GPT (Brown et al. 2020) attempt closed-book QA by pre-training on massive text corpora, but still struggle with knowledge-intensive questions. In open-book QA, models can utilize knowledge bases like Wikipedia during answer generation.
The typical process involves two main components: a retrieval module that searches knowledge bases for relevant contexts, and a reading module that analyzes the retrieved information to formulate an answer. For example, popular models like DPR (Karpukhin et al. 2020), RAG (Lewis et al. 2020), and EMDR2 (Singh et al. 2021) employ a dual-encoder dense retriever built upon BERT (Devlin et al. 2018), along with another BERT-based model for answer extraction or a T5/BART-based model for answer generation. Large language models (LLMs) have recently shown promising results in open-domain QA (Yu et al. 2022; Sun et al. 2022; Ram et al. 2023; Shi et al. 2023; Borgeaud et al. 2022; Liu et al. 2023). For instance, GenRead (Yu et al. 2022) prompts an LLM to generate context documents instead of using a retriever. Combining generation and retrieval techniques can further improve performance: RECITE (Sun et al. 2022) asks the LLM to generate supporting paragraphs containing the answer, which are then used as an additional prompt alongside the question. Inspired by these methods, we present a unified approach that casts both the retrieval and QA tasks in generative form, optimizes both through a single generative model, and leverages previous results for iterative generation, enhancing the model's overall performance.

Figure 2: The comparison between traditional separate frameworks and our unified framework for retrieval and QA. (a) Traditional approaches typically employ separate and independently designed structures for retrieval and QA tasks. (b) Our proposed framework incorporates a multi-decoder structure to simultaneously achieve retrieval and QA in a generative manner; to effectively enhance both tasks, we introduce the LLM-generated Q-Connector and D-Connector, along with an iterative enhancement strategy.

Methodology
In this section, we present a complete overview of our proposed framework for generative retrieval and QA. We begin by defining the tasks and then describe the structure and training methodology of our unified framework.

Task Formulation
Consider a document d in a document corpus, and let d' denote the pre-built docid of d. For the generative retrieval task, given a query q, we obtain the relevance R between q and each document d by

R(q, d) = f_{retr}(d' \mid q; \theta, \phi) = \prod_{i=1}^{T} f_{retr}(d'_i \mid d'_{<i}, q; \theta, \phi), \qquad (1)

where T is the length of the target document identifier d', d'_i is the i-th token of d', and f_{retr} is the generative retrieval model comprising an encoder with parameters \theta and a retrieval decoder with parameters \phi. The model is trained to maximize the likelihood of generating the target document identifier in Eq. (1).
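A minimal sketch of scoring a candidate docid under Eq. (1), computed in log space for numerical stability, is given below. It assumes a HuggingFace-style encoder-decoder model, which matches the T5 backbone reported later but is not UniGen's released code.

```python
import torch.nn.functional as F

def docid_log_likelihood(model, query_ids, docid_ids):
    """Sketch of Eq. (1): score a candidate docid d' as the product of its
    token probabilities under an encoder-decoder model.
    query_ids: (1, L_q) tokenized query; docid_ids: (1, T) tokenized docid."""
    out = model(input_ids=query_ids, labels=docid_ids)
    log_probs = F.log_softmax(out.logits, dim=-1)             # (1, T, vocab)
    token_lp = log_probs.gather(-1, docid_ids.unsqueeze(-1))  # log f(d'_i | d'_<i, q)
    return token_lp.sum()  # log R(q, d) = sum_i log f(d'_i | ...)

# At inference time, UniGen instead performs constrained beam search over a
# prefix tree of valid docids, so only identifiers present in the corpus decode.
```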
Teacher forcing is used during training to optimize the cross-entropy loss

L_{retr} = -\sum_{i=1}^{T} \log f_{retr}(d'_i \mid d'_{<i}, q; \theta, \phi). \qquad (2)

Similarly, for the QA task, given a query q, the probability A of generating answer a is

A(a \mid q) = f_{qa}(a \mid q; \theta, \mu) = \prod_{i=1}^{T'} f_{qa}(a_i \mid a_{<i}, q; \theta, \mu), \qquad (3)

where T' is the length of answer a, a_i is its i-th token, and f_{qa} is the generative QA model with a shared encoder (parameters \theta) and a distinct QA decoder (parameters \mu). The parameters \theta and \mu are likewise optimized with the standard sequence-to-sequence objective, maximizing the likelihood of the target sequence in Eq. (3) via teacher forcing. The QA loss is

L_{qa} = -\sum_{i=1}^{T'} \log f_{qa}(a_i \mid a_{<i}, q; \theta, \mu). \qquad (4)

UniGen: Unified Generative Retrieval and QA
This section discusses the details of the proposed UniGen framework, including the overall model structure, LLM-based connector generation, the joint learning method, and the iterative enhancement strategy.

Model Architecture
Our proposed UniGen framework introduces a multi-decoder structure to tackle the retrieval and QA tasks simultaneously, in contrast to conventional methods that depend on separate, independently designed architectures for each task. Figure 2 illustrates this contrast: dense retrieval relies on large-scale document indices, and generative retrieval and QA methods are usually distinct modules. Our model comprises an encoder and two separate decoder heads: a retrieval decoder and a QA decoder. The encoder takes the enhanced query generated by the LLM, the Q-Connector, as input. The retrieval decoder employs constrained beam search within a prefix tree to generate a ranked list of document identifiers, where each identifier is the connector generated by the LLM from the document side, the D-Connector. At the same time, the QA decoder generates the answer text. By using a joint architecture for retrieval and QA, our model optimizes both tasks simultaneously, resulting in enhanced overall system performance.

Figure 3: An example of generating LLM-based connectors from the query side and the document side, with the labeled answer highlighted in the green box. Compared with traditional docids, the D-Connector (a key-information summary of the document) is far easier to learn jointly with answer generation.
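The multi-decoder structure can be sketched as follows. Instantiating two full T5 models and tying their encoders is our simplification; the paper states only that a T5-base encoder is shared while the retrieval and QA decoders are distinct.

```python
import torch.nn as nn
from transformers import T5ForConditionalGeneration

class UniGenSketch(nn.Module):
    """Illustrative sketch: one shared T5 encoder feeding a retrieval decoder
    and a QA decoder (not UniGen's released code)."""

    def __init__(self, name: str = "t5-base"):
        super().__init__()
        self.retr = T5ForConditionalGeneration.from_pretrained(name)
        self.qa = T5ForConditionalGeneration.from_pretrained(name)
        self.qa.encoder = self.retr.encoder  # share the encoder parameters

    def forward(self, input_ids, attention_mask, docid_labels, answer_labels):
        enc = self.retr.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Reuse the same encoder states for both decoder heads.
        retr_out = self.retr(encoder_outputs=enc, attention_mask=attention_mask,
                             labels=docid_labels)
        qa_out = self.qa(encoder_outputs=enc, attention_mask=attention_mask,
                         labels=answer_labels)
        return retr_out.loss, qa_out.loss
```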
LLM-based Connectors Generation
Learning to generate docids and answers concurrently from query inputs is challenging. Query inputs are typically short and lack context, while documents are long and contain redundant information, so directly mapping queries to documents and answers is difficult. Moreover, existing docid representations are often meaningless sequences, which hinders joint learning of the generative retrieval and QA tasks. To address these issues, we propose using an LLM to generate Q-Connectors and D-Connectors on the query side and the document side, respectively. These connectors serve as bridges between query inputs, documents, and answer outputs; Figure 3 provides an example of the LLM-generated connectors.

First, for D-Connector generation, the LLM takes the prompt "Summarize the key information of the following document in about {m} words.\n Document:{d}" together with a document d as input and outputs a summary of the document called the D-Connector dc. The D-Connector serves as the document's docid and captures its essential information, greatly reducing the difficulty of memorizing long documents. Additionally, since an answer is typically a short phrase or sentence, such a docid is easier to learn jointly with answer generation in the unified framework proposed in this paper.

Second, for Q-Connector generation, the LLM takes the prompt "Write a context to the following question in about {n} words.\n Question:{q}" and a question q as input and generates the Q-Connector qc. The Q-Connector provides a contextual representation of the query, which aids in generating relevant docids and accurate answers: it lets the model better understand the query and its related context, enabling effective mapping to relevant docids while supplying contextual knowledge for the QA task. This approach does not rely on external corpora and achieves impressive QA results.

Joint Learning of Retrieval and QA
Taking the Q-Connector qc as the model input, we define the relevance between query q and each document d in the set D and the probability of generating an answer a as f_{retr}(dc \mid qc; \theta, \phi) and f_{qa}(a \mid qc; \theta, \mu), respectively, where \theta, \phi, and \mu are the parameters of the shared encoder, the retrieval decoder, and the QA decoder. The retrieval and QA losses of Eqs. (2) and (4) become

L'_{retr} = -\sum_{i} \log f_{retr}(dc_i \mid dc_{<i}, qc; \theta, \phi), \qquad (5)

L'_{qa} = -\sum_{i} \log f_{qa}(a_i \mid a_{<i}, qc; \theta, \mu), \qquad (6)

where dc_i and a_i denote the i-th tokens in the generation of dc and a, respectively.

To equip the model with initial generative capabilities for both tasks, we first train it on synthetic data, as previous research has shown that synthetic data enhances generative retrieval and question answering (Zhuang et al. 2022; Puri et al. 2020). We therefore adopt a two-stage approach comprising a pre-training stage and a fine-tuning stage. In the pre-training stage, for each document d, we employ the DocT5Query model (Nogueira, Lin, and Epistemic 2019) to generate K pseudo-queries q_k, k ∈ {1, ..., K}. We then feed each pseudo-query q_k and its corresponding document d into the large language model LLaMA2-13B-Chat (Touvron et al. 2023b) to generate label answers a_k. To simulate the Q-Connector qc generated by the LLM, we concatenate q_k and d as the input of our generative model, denoted q_k + d. This yields K pairs of retrieval and QA training data, <q_k + d, dc> and <q_k + d, a_k>, for each document d.
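The connector-generation step can be sketched with the exact prompt templates quoted above. The chat helper and the default word budgets m = n = 50 are our assumptions; the paper reports using the gpt-3.5-turbo-0613 API but does not publish client code.

```python
from openai import OpenAI

client = OpenAI()

def chat(prompt: str) -> str:
    # Hypothetical thin wrapper around the LLM API named in the paper.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def d_connector(document: str, m: int = 50) -> str:
    return chat(f"Summarize the key information of the following document "
                f"in about {m} words.\n Document:{document}")

def q_connector(question: str, n: int = 50) -> str:
    return chat(f"Write a context to the following question in about {n} words."
                f"\n Question:{question}")
```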
In the fine-tuning stage, we train the model on labeled <qc, dc> and <qc, a> data, where qc is generated by the LLM from query q. To optimize the model for both generative retrieval and QA, UniGen combines the losses of Eqs. (5) and (6) into a single overall loss for jointly optimizing the encoder parameters \theta, retrieval decoder parameters \phi, and QA decoder parameters \mu:

L = \lambda L'_{retr} + (1 - \lambda) L'_{qa}, \qquad (7)

where \lambda is a regularization weight. Following this training process and optimizing this loss, the model effectively learns both the retrieval and QA tasks simultaneously. We refer to this foundational model as UniGen-Base.
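A sketch of one joint optimization step under Eq. (7) follows, reusing the illustrative UniGenSketch module from the architecture sketch above; the default λ = 0.6 matches the implementation details reported below.

```python
def joint_training_step(model, optimizer, batch, lam: float = 0.6):
    """One training step of the joint objective in Eq. (7)."""
    retr_loss, qa_loss = model(
        input_ids=batch["qc_ids"],          # tokenized Q-Connector
        attention_mask=batch["qc_mask"],
        docid_labels=batch["dc_ids"],       # tokenized D-Connector (docid)
        answer_labels=batch["answer_ids"],  # tokenized answer
    )
    loss = lam * retr_loss + (1 - lam) * qa_loss  # Eq. (7)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```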
Iterative Enhancement Strategy

To further enhance retrieval and QA performance at the data level, we propose an iterative enhancement strategy. The objective is to use the retrieved documents and generated answers from the previous iteration as inputs for the next round of the model, as shown in the dashed portion of Figure 2(c). In each iteration, we feed the top-k documents, the answer, and the query from the previous round into a large language model, aiming to generate a higher-quality Q-Connector qc and thereby increase the likelihood of producing the correct answer and retrieving more relevant documents. To accomplish this, we use the following prompt: "Given the following potentially relevant documents and the potentially correct answer, please provide the context for the question in {n} words. \n Document:{d} \n Answer:{a} \n Question:{q}". The parameter n controls the length of qc. Through this iterative approach, we continuously refine the model's retrieval and answering performance, ultimately improving its overall effectiveness. To strike a balance between performance and efficiency, we create an enhanced version of the model, UniGen-Iter, which runs two iterations on top of UniGen-Base.
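The iterative loop can be summarized as follows. This is a sketch, assuming `call_llm` and `make_q_connector` from the earlier snippet; `model.retrieve` and `model.answer` are hypothetical wrappers around constrained-beam-search docid decoding and greedy answer decoding.

```python
def refine_q_connector(question, docs, answer, n=50):
    """One LLM call of the iterative enhancement strategy (prompt quoted above)."""
    prompt = (f"Given the following potentially relevant documents and the "
              f"potentially correct answer, please provide the context for "
              f"the question in {n} words. \nDocument:{' '.join(docs)} "
              f"\nAnswer:{answer} \nQuestion:{question}")
    return call_llm(prompt)

def unigen_iter(question, model, rounds=2, k=10):
    """UniGen-Iter: two refinement rounds on top of UniGen-Base."""
    qc = make_q_connector(question)
    for _ in range(rounds):
        docs = model.retrieve(qc, top_k=k)     # previous round's evidence
        answer = model.answer(qc)              # previous round's answer
        qc = refine_q_connector(question, docs, answer)
    return model.retrieve(qc, top_k=k), model.answer(qc)
```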
Experimental Settings

Datasets. To thoroughly evaluate the retrieval and question answering performance of our proposed model, we utilize two well-known datasets: MS MARCO and Natural Questions.

MS MARCO Question Answering (Nguyen et al. 2016) is designed to train and test systems that generate the most probable answer to a real-world user query. We use the QnA (v2.1) dataset and extract passages from the corpus that contain labeled data, resulting in a collection of approximately 100k passages and 94,871 training query-answer-relevant-document triplets.

Natural Questions (NQ) (Kwiatkowski et al. 2019) consists of questions sampled from the Google search engine. Following Karpukhin et al. (2020), we divide each Wikipedia article into non-overlapping chunks of 100 words. To ensure a robust evaluation, we identify passages in the corpus that contain labeled data based on the training set, yielding a diverse collection of around 100k passages and 38,191 training query-answer-relevant-document triplets.

Baselines. We choose several baseline models for the retrieval and QA tasks, grouped into different classes. For the retrieval task, we select three classes of models. The first class consists of Sparse Retrieval models, including BM25 (Robertson, Zaragoza et al. 2009) and DocT5Query (Nogueira, Lin, and Epistemic 2019). The second class comprises Dense Retrieval models, such as DPR (Karpukhin et al. 2020) and ANCE (Xiong et al. 2020). Lastly, the Generative Retrieval class includes DSI (Tay et al. 2022), DSI-QG (Zhuang et al. 2022), NCI (Wang et al. 2022), and Ultron (Zhou et al. 2022b). For the QA task, we consider three types of baselines. The first type is Closed-book Generation models, represented by T5 (Raffel et al. 2020) and BART (Lewis et al. 2019). The second type is Retrieval-augmented Generation models, including RAG (Lewis et al. 2020) and combinations of DPR, NCI, or Ultron with Fusion-in-Decoder (FID) (Izacard and Grave 2020). The last type is LLM-based Generation models, where we directly evaluate the QA performance of gpt-3.5-turbo-0613 and LLaMA2-13B-Chat (Touvron et al. 2023b).

Evaluation Metrics. Retrieval models are evaluated using MRR and recall, which measure the average reciprocal rank of the first relevant document and the proportion of relevant documents retrieved, respectively. For QA evaluation, we use BLEU-1 (B-1) and ROUGE-L (R-L) on MS MARCO: B-1 measures uni-gram overlap, while R-L measures longest-common-subsequence overlap. On the NQ dataset, we use Exact Match (EM) and F1 score, which measure exact matches and the harmonic mean of precision and recall, respectively.
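For concreteness, the two retrieval metrics can be computed per query as below; corpus-level scores are means over all queries. This is a sketch of the standard definitions, not project code.

```python
def mrr_at_k(ranked, relevant, k=10):
    """Reciprocal rank of the first relevant document within the top k."""
    for rank, doc_id in enumerate(ranked[:k], start=1):
        if doc_id in relevant:
            return 1.0 / rank
    return 0.0

def recall_at_k(ranked, relevant, k):
    """Fraction of the relevant documents found in the top k."""
    return len(set(ranked[:k]) & set(relevant)) / len(relevant)

# e.g. MRR@10 = sum(mrr_at_k(r, rel, 10) for r, rel in queries) / len(queries)
```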
Implementation Details. In our experiments, we utilize the pre-trained T5-base encoder as the shared encoder. Both the retrieval decoder and the QA decoder also employ the T5-base decoder with pre-trained parameters from HuggingFace Transformers (Wolf et al. 2019). We use the gpt-3.5-turbo-0613 API as the LLM in our system. To generate training data, we create 10 pseudo-queries and 10 pseudo-answers for each document. During training, we set $\lambda$ to 0.6, with a batch size of 128, a learning rate of 5e-4, and 2k learning-rate warm-up steps. During inference, we employ constrained beam search for generative retrieval decoding and greedy search for QA decoding. Due to memory and time constraints, we limit the beam size to a maximum of 10. The experiments are conducted on 4 NVIDIA RTX 3090 GPUs.

Experimental Results

In this section, we present the results of our experiments evaluating the proposed unified model on both retrieval and QA tasks.

Passage Retrieval Performances

We evaluate retrieval performance; the overall results are summarized in Table 1.

| Model | # Params | R@1 (MS) | R@5 (MS) | R@10 (MS) | MRR@10 (MS) | R@1 (NQ) | R@5 (NQ) | R@10 (NQ) | MRR@10 (NQ) |
|---|---|---|---|---|---|---|---|---|---|
| Sparse Retrieval | | | | | | | | | |
| BM25 | - | 25.70 | 53.28 | 65.85 | 37.79 | 45.36 | 72.86 | 81.72 | 57.18 |
| DocT5Query | - | 31.14 | 60.04 | 68.29 | 42.93 | 49.43 | 76.25 | 84.10 | 60.81 |
| Dense Retrieval | | | | | | | | | |
| DPR | 220M | 36.96 | 70.92 | 80.18 | 50.69 | 60.25 | 82.60 | 86.97 | 69.90 |
| ANCE | 220M | 37.70 | 72.34 | 81.52 | 51.70 | 61.45 | 84.25 | 88.71 | 71.30 |
| Generative Retrieval | | | | | | | | | |
| DSI-Semantic | 250M | 28.84 | 46.22 | 52.94 | 36.60 | 46.70 | 66.34 | 70.79 | 54.73 |
| DSI-QG | 250M | 35.41 | 68.38 | 73.34 | 46.48 | 59.52 | 78.35 | 81.93 | 67.94 |
| NCI | 267M | 37.89 | 72.23 | 77.39 | 49.41 | 63.00 | 84.61 | 88.90 | 71.77 |
| Ultron-PQ | 257M | 37.38 | 72.07 | 78.09 | 51.47 | 63.54 | 85.01 | 86.34 | 72.68 |
| Unified Generative Retrieval and QA (Retrieval Decode) | | | | | | | | | |
| UniGen-Base | 367M | 38.75† | 72.69† | 79.07 | 52.64† | 63.71† | 86.39† | 88.74 | 72.81† |
| UniGen-Iter | 367M | 42.34† | 75.99† | 81.85† | 56.38† | 64.92† | 88.15† | 90.01† | 74.61† |

Table 1: Overall retrieval performance on MS MARCO (MS) and Natural Questions (NQ), where # Params indicates the size of model parameters. The best results among all experiments are emphasized in bold in the original paper, and the best baseline results are underlined there. The symbol "†" signifies that our basic model achieved superior results over all baselines in a statistically significant manner (t-test, p < 0.05).

(1) Our proposed UniGen-Base model outperforms existing baseline models on most metrics. Specifically, for MRR@10, UniGen-Base outperforms the best baselines on MS MARCO and NQ by 1.81% and 0.83%, respectively. This can be attributed to the joint learning of the retrieval and question-answering tasks, which makes the shared encoder more robust, alleviates overfitting, and improves the understanding of query inputs. In addition, the connectors generated by the LLM on the query and document sides enrich the contextual semantics of queries and distill the corpus documents, thereby facilitating the model's learning of the mapping between queries and relevant docids.

(2) Regarding data-level unification, the proposed UniGen-Iter model achieves the best retrieval performance after two iterations, outperforming existing generative, dense, and sparse retrieval models. Specifically, UniGen-Iter surpasses the best baselines on MS MARCO and NQ by 11.76% and 2.17% in terms of R@1, respectively. Furthermore, as shown by the blue lines in Figure 4, retrieval performance (MRR@10) on both datasets improves continuously when moving from the non-iterative UniGen-Base to 1 through 5 iterations. This clearly demonstrates the effectiveness of the proposed iterative enhancement strategy: previously retrieved documents provide relevant external knowledge, and generated answers serve as references, enabling the LLM to generate more relevant Q-Connectors and continuously enhance retrieval over iterations.

In summary, the proposed UniGen model demonstrates superior retrieval performance compared to existing models, and the iterative enhancement strategy proves effective in improving retrieval performance.
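Since generative retrieval is only valid when decoded docids exist in the corpus, the constrained beam search mentioned in the implementation details is typically realized with a prefix trie over all docid token sequences. The sketch below uses HuggingFace's `prefix_allowed_tokens_fn` hook of `generate`; the start-token skipping and the pad/eos fallbacks are simplifying assumptions (token ids 0/1 match T5's pad/eos).

```python
def build_docid_trie(docid_sequences):
    """Prefix trie over the token-id sequences of all D-Connector docids."""
    trie = {}
    for seq in docid_sequences:
        node = trie
        for tok in seq:
            node = node.setdefault(tok, {})
    return trie

def make_prefix_fn(trie):
    """Callback for generate(..., prefix_allowed_tokens_fn=fn): at every
    step, only trie continuations of the docid decoded so far are allowed."""
    def fn(batch_id, input_ids):
        node = trie
        for tok in input_ids[1:].tolist():   # skip the decoder start token
            node = node.get(tok)
            if node is None:
                return [0]                   # dead end: emit pad
        return list(node.keys()) or [1]      # docid complete: emit eos
    return fn

# usage sketch:
# model.generate(enc_ids, num_beams=10,
#                prefix_allowed_tokens_fn=make_prefix_fn(trie))
```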
Question Answering Performances

We also assess the performance of the proposed model on the QA task; the results are shown in Table 2.

| Model | # Params | B-1 (MS) | R-L (MS) | EM (NQ) | F1 (NQ) |
|---|---|---|---|---|---|
| Closed-book Answer Generation | | | | | |
| T5 | 220M | 14.49 | 18.74 | 30.22 | 38.40 |
| BART | 340M | 14.83 | 19.26 | 29.76 | 37.34 |
| LLM-based Answer Generation (w/o finetuning) | | | | | |
| GPT-3.5 | - | 16.67 | 19.41 | 28.03 | 36.59 |
| LLaMA2 | - | 16.08 | 18.61 | 25.27 | 33.56 |
| Retrieval-augmented Answer Generation | | | | | |
| RAG | 540M | 17.32 | 21.83 | 53.32 | 62.86 |
| DPR + FID | 440M | 18.97 | 23.15 | 54.47 | 63.53 |
| DSI + FID | 470M | 15.64 | 19.79 | 43.58 | 52.08 |
| Ultron + FID | 477M | 17.21 | 21.96 | 55.75 | 65.05 |
| Unified Generative Retrieval and QA (QA Decode) | | | | | |
| UniGen-Base | 367M | 17.02† | 21.90† | 44.06† | 53.39† |
| UniGen-Iter | 367M | 24.46‡ | 30.31‡ | 57.83‡ | 67.24‡ |

Table 2: Overall QA performance on MS MARCO (MS) and NQ, where # Params indicates the size of model parameters. The best results among all experiments are emphasized in bold in the original paper, and the best baseline results are underlined there. The symbols "†" and "‡" signify that our model achieved superior results in the closed-book and open-book settings, respectively.

(1) Under the closed-book setting, where external corpora are not accessible, the model directly generates answers to input questions. Comparing small models fine-tuned with labeled data and large models without fine-tuning, the proposed UniGen-Base model significantly outperforms existing baselines (p < 0.05). On MS MARCO, UniGen-Base surpasses BART by 9.10% in terms of BLEU-1, and on NQ it outperforms T5 by 45.80% in terms of exact match (EM). Even without access to external documents, UniGen-Base outperforms some retrieval-based models. This can be attributed to the Q-Connector generated by the LLM, which provides effective contextual information for query inputs. Besides, the joint learning of answer generation and D-Connector generation enhances the model's robustness in generating answers.

(2) Under the open-book setting, compared with existing retrieval-augmented answer generation models, the proposed UniGen-Iter outperforms the DPR+FID model by 28.94% in terms of BLEU-1 on MS MARCO and surpasses Ultron+FID by 3.73% in terms of EM on NQ. In addition, Figure 4 illustrates the improvement in QA performance of the iterative methods over the non-iterative UniGen-Base, as indicated by the red lines. This again highlights the effectiveness of the proposed iterative enhancement strategy: the enhanced Q-Connectors contribute to a consistent improvement of QA performance across iterations.

[Figure 4: Analysis of retrieval and QA performance with different iterations on (a) MS MARCO and (b) Natural Questions; blue lines show MRR@10, red lines show BLEU-1/EM.]

To summarize, the UniGen model outperforms existing models on QA tasks, and the iterative enhancement strategy contributes significantly to its improved performance.

Ablation Studies

To validate the effectiveness of our proposed unified framework for retrieval and QA, we conduct experiments in which we systematically remove each module and observe the resulting performance degradation, as presented in Table 3.

| Model | R@1 | R@10 | B-1 | R-L |
|---|---|---|---|---|
| UniGen-Base | 38.75 | 79.07 | 17.02 | 21.90 |
| w/o shared encoder | 37.89 | 78.69 | 16.49 | 21.30 |
| w/o Q-Connector | 36.14 | 77.58 | 12.32 | 15.98 |
| w/o D-Connector | 37.44 | 78.25 | 15.06 | 18.74 |
| UniGen-Iter | 42.34 | 81.85 | 24.46 | 30.31 |
| w/o shared encoder | 41.76 | 81.29 | 23.72 | 29.71 |
| w/o Q-Connector | 37.43 | 78.66 | 21.21 | 26.39 |
| w/o D-Connector | 41.28 | 81.59 | 22.35 | 27.54 |

Table 3: Ablation study of our unified generation model on the MS MARCO dataset (retrieval: R@1/R@10; QA: B-1/R-L).

We find that removing any of the modules (the shared encoder, the Q-Connector, or the D-Connector) leads to a noticeable decline in both retrieval and QA performance. The largest drop is observed when the Q-Connector is removed, which highlights the significance of leveraging large language models as external knowledge sources to provide query-relevant context. Removing the D-Connector also has a significant impact on final performance. This demonstrates the contribution of the D-Connector in bridging the gap between document and answer, surpassing traditional approaches such as hierarchical-clustering-based document identifiers. For variants that do not utilize a shared encoder, we still observe a performance decrease, underscoring the advantages of our unified structure: it enables the training of more robust encoders, resulting in improved input representations and enhanced retrieval and QA performance.

Study of Learning Curves

To demonstrate the effectiveness of our approach during training, we plot learning curves of retrieval and QA performance on the MS MARCO and NQ datasets. We use a combination of synthetic and labeled data to train the UniGen model. Figure 5 illustrates these curves, with means and standard deviations for each metric obtained from five separate training runs on each dataset. Retrieval performance (MRR@10) is represented by the blue curve, while the red curve represents QA performance (BLEU-1 for MS MARCO, EM for NQ). Both tasks exhibit stable optimization throughout training, confirming the effectiveness of our proposed unified framework for the simultaneous learning of retrieval and QA.

[Figure 5: Learning curves of retrieval and QA performance over training steps on (a) MS MARCO and (b) Natural Questions, with DPR and GPT-3.5 as reference baselines.]

Conclusion

In this paper, we present UniGen, a unified generative framework for retrieval and question answering. Our approach optimizes both tasks simultaneously and employs connectors generated by large language models to establish semantic connections in the input-output and docid-answer spaces. Additionally, our iterative enhancement approach proves effective in improving retrieval and QA performance. Through extensive experiments conducted on public datasets, we demonstrate the effectiveness of UniGen on both retrieval and QA tasks. This work opens up new possibilities for jointly learning retrieval and other generation tasks.

Acknowledgements

Zhicheng Dou is the corresponding author. This work was supported by the National Natural Science Foundation of China No. 62272467, Beijing Outstanding Young Scientist Program No. BJJWZYJH012019100020098, the fund for building world-class universities (disciplines) of Renmin University of China, and Public Computing Cloud, Renmin University of China. The work was partially done at Beijing Key Laboratory for Big Data Management and Analysis Methods.

References

Bevilacqua, M.; Ottaviano, G.; Lewis, P.; Yih, W.; Riedel, S.; and Petroni, F. 2022. Autoregressive Search Engines: Generating Substrings as Document Identifiers. CoRR, abs/2204.10628.
Borgeaud, S.; Mensch, A.; Hoffmann, J.; Cai, T.; Rutherford, E.; Millican, K.; Van Den Driessche, G. B.; Lespiau, J.-B.; Damoc, B.; Clark, A.; et al. 2022. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, 2206-2240. PMLR.
Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33: 1877-1901.
Chiang, W.-L.; Li, Z.; Lin, Z.; Sheng, Y.; Wu, Z.; Zhang, H.; Zheng, L.; Zhuang, S.; Zhuang, Y.; Gonzalez, J. E.; et al. 2023. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality. See https://vicuna.lmsys.org (accessed 14 April 2023).
Chowdhery, A.; Narang, S.; Devlin, J.; Bosma, M.; Mishra, G.; Roberts, A.; Barham, P.; Chung, H. W.; Sutton, C.; Gehrmann, S.; et al. 2022. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Guu, K.; Lee, K.; Tung, Z.; Pasupat, P.; and Chang, M. 2020. Retrieval augmented language model pre-training. In International Conference on Machine Learning, 3929-3938. PMLR.
Izacard, G.; and Grave, E. 2020. Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282.
Izacard, G.; Lewis, P.; Lomeli, M.; Hosseini, L.; Petroni, F.; Schick, T.; Dwivedi-Yu, J.; Joulin, A.; Riedel, S.; and Grave, E. 2022. Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299.
Karpukhin, V.; Oğuz, B.; Min, S.; Lewis, P.; Wu, L.; Edunov, S.; Chen, D.; and Yih, W.-t. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906.
Kwiatkowski, T.; Palomaki, J.; Redfield, O.; Collins, M.; Parikh, A.; Alberti, C.; Epstein, D.; Polosukhin, I.; Devlin, J.; Lee, K.; et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7: 453-466.
Lewis, M.; Liu, Y.; Goyal, N.; Ghazvininejad, M.; Mohamed, A.; Levy, O.; Stoyanov, V.; and Zettlemoyer, L. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Lewis, P.; Perez, E.; Piktus, A.; Petroni, F.; Karpukhin, V.; Goyal, N.; Küttler, H.; Lewis, M.; Yih, W.-t.; Rocktäschel, T.; et al. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33: 9459-9474.
Li, Y.; Yang, N.; Wang, L.; Wei, F.; and Li, W. 2023. Multiview Identifiers Enhanced Generative Retrieval.
Liu, J.; Jin, J.; Wang, Z.; Cheng, J.; Dou, Z.; and Wen, J.-R. 2023. RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit. arXiv preprint arXiv:2306.05212.
Mao, Y.; He, P.; Liu, X.; Shen, Y.; Gao, J.; Han, J.; and Chen, W. 2020. Generation-augmented retrieval for open-domain question answering. arXiv preprint arXiv:2009.08553.
Mehta, S. V.; Gupta, J.; Tay, Y.; Dehghani, M.; Tran, V. Q.; Rao, J.; Najork, M.; Strubell, E.; and Metzler, D. 2022. DSI++: Updating Transformer Memory with New Documents. arXiv preprint arXiv:2212.09744.
Metzler, D.; Tay, Y.; Bahri, D.; and Najork, M. 2021. Rethinking search: making domain experts out of dilettantes. SIGIR Forum, 55(1): 13:1-13:27.
Nguyen, T.; Rosenberg, M.; Song, X.; Gao, J.; Tiwary, S.; Majumder, R.; and Deng, L. 2016. MS MARCO: A human-generated machine reading comprehension dataset.
Nogueira, R.; Lin, J.; and Epistemic, A. 2019. From doc2query to docTTTTTquery. Online preprint, 6: 2.
Puri, R.; Spring, R.; Patwary, M.; Shoeybi, M.; and Catanzaro, B. 2020. Training question answering models from synthetic data. arXiv preprint arXiv:2002.09599.
Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; and Liu, P. J. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1): 5485-5551.
Ram, O.; Levine, Y.; Dalmedigos, I.; Muhlgay, D.; Shashua, A.; Leyton-Brown, K.; and Shoham, Y. 2023. In-context retrieval-augmented language models. arXiv preprint arXiv:2302.00083.
Ren, R.; Zhao, W. X.; Liu, J.; Wu, H.; Wen, J.; and Wang, H. 2023. TOME: A Two-stage Approach for Model-based Retrieval. CoRR, abs/2305.11161.
Robertson, S.; Zaragoza, H.; et al. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3(4): 333-389.
Shi, W.; Min, S.; Yasunaga, M.; Seo, M.; James, R.; Lewis, M.; Zettlemoyer, L.; and Yih, W.-t. 2023. Replug: Retrieval-augmented black-box language models. arXiv preprint arXiv:2301.12652.
Singh, D.; Reddy, S.; Hamilton, W.; Dyer, C.; and Yogatama, D. 2021. End-to-end training of multi-document reader and retriever for open-domain question answering. Advances in Neural Information Processing Systems, 34: 25968-25981.
Sun, W.; Yan, L.; Chen, Z.; Wang, S.; Zhu, H.; Ren, P.; Chen, Z.; Yin, D.; de Rijke, M.; and Ren, Z. 2023. Learning to Tokenize for Generative Retrieval. CoRR, abs/2304.04171.
Sun, Z.; Wang, X.; Tay, Y.; Yang, Y.; and Zhou, D. 2022. Recitation-augmented language models. arXiv preprint arXiv:2210.01296.
Tang, Y.; Zhang, R.; Guo, J.; Chen, J.; Zhu, Z.; Wang, S.; Yin, D.; and Cheng, X. 2023. Semantic-Enhanced Differentiable Search Index Inspired by Learning Strategies. arXiv preprint arXiv:2305.15115.
Tay, Y.; Tran, V. Q.; Dehghani, M.; Ni, J.; Bahri, D.; Mehta, H.; Qin, Z.; Hui, K.; Zhao, Z.; Gupta, J. P.; Schuster, T.; Cohen, W. W.; and Metzler, D. 2022. Transformer Memory as a Differentiable Search Index. CoRR, abs/2202.06991.
Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. 2023a. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; et al. 2023b. Llama 2: Open Foundation and Fine-Tuned Chat Models. arXiv preprint arXiv:2307.09288.
Wang, Y.; Hou, Y.; Wang, H.; Miao, Z.; Wu, S.; Sun, H.; Chen, Q.; Xia, Y.; Chi, C.; Zhao, G.; Liu, Z.; Xie, X.; Sun, H. A.; Deng, W.; Zhang, Q.; and Yang, M. 2022. A Neural Corpus Indexer for Document Retrieval. CoRR, abs/2206.02743.
Wang, Z.; Zhou, Y.; Tu, Y.; and Dou, Z. 2023. NOVO: Learnable and Interpretable Document Identifiers for Model-Based IR. In CIKM, 2656-2665. ACM.
Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; et al. 2019. HuggingFace's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
Xiong, L.; Xiong, C.; Li, Y.; Tang, K.-F.; Liu, J.; Bennett, P.; Ahmed, J.; and Overwijk, A. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. arXiv preprint arXiv:2007.00808.
Yu, W.; Iter, D.; Wang, S.; Xu, Y.; Ju, M.; Sanyal, S.; Zhu, C.; Zeng, M.; and Jiang, M. 2022. Generate rather than retrieve: Large language models are strong context generators. arXiv preprint arXiv:2209.10063.
Zhang, P.; Liu, Z.; Zhou, Y.; Dou, Z.; and Cao, Z. 2023. Term-Sets Can Be Strong Document Identifiers For Auto-Regressive Search Engines. arXiv preprint arXiv:2305.13859.
Zhou, Y.; Dou, Z.; and Wen, J. 2023. Enhancing Generative Retrieval with Reinforcement Learning from Relevance Feedback. In EMNLP, 12481-12490. Association for Computational Linguistics.
Zhou, Y.; Yao, J.; Dou, Z.; Wu, L.; and Wen, J. 2022a. DynamicRetriever: A Pre-training Model-based IR System with Neither Sparse nor Dense Index. CoRR, abs/2203.00537.
Zhou, Y.; Yao, J.; Dou, Z.; Wu, L.; Zhang, P.; and Wen, J. 2022b. Ultron: An Ultimate Retriever on Corpus with a Model-based Indexer. CoRR, abs/2208.09257.
Zhuang, S.; Ren, H.; Shou, L.; Pei, J.; Gong, M.; Zuccon, G.; and Jiang, D. 2022. Bridging the Gap Between Indexing and Retrieval for Differentiable Search Index with Query Generation. CoRR, abs/2206.10128.
MESED: A Multi-Modal Entity Set Expansion Dataset with Fine-Grained Semantic Classes and Hard Negative Entities

Yangning Li1,2*, Tingwei Lu1*, Hai-Tao Zheng1,2†, Yinghui Li1†, Shulin Huang1, Tianyu Yu1, Jun Yuan3, Rui Zhang3
1Shenzhen International Graduate School, Tsinghua University
2PengCheng Laboratory
3Huawei Noah's Ark Lab
{yn-li23,ltw23}@mails.tsinghua.edu.cn
*These authors contributed equally. †Corresponding author.

Abstract

The Entity Set Expansion (ESE) task aims to expand a handful of seed entities with new entities belonging to the same semantic class. Conventional ESE methods are based on mono-modality (i.e., the literal modality) and struggle with complex real-world entities such as (1) negative entities with fine-grained semantic differences, (2) synonymous entities, (3) polysemous entities, and (4) long-tailed entities. These challenges prompt us to propose the novel Multi-modal Entity Set Expansion (MESE) task, where models integrate information from multiple modalities to represent entities. Intuitively, the benefits of multi-modal information for ESE are threefold: (1) different modalities can provide complementary information; (2) multi-modal information provides a unified signal via common visual properties of the same semantic class or entity; (3) multi-modal information offers robust alignment signals for synonymous entities. To assess model performance on MESE, we constructed the MESED dataset, the first multi-modal dataset for ESE with large-scale and elaborate manual calibration. We also propose a powerful multi-modal model, MultiExpan, pre-trained on four multi-modal pre-training tasks. Extensive experiments and analyses on MESED demonstrate the high quality of the dataset and the effectiveness of our MultiExpan, and point out directions for future research. The benchmark and code are public at https://github.com/THUKElab/MESED.

Introduction

The Entity Set Expansion (ESE) task aims to expand a handful of seed entities with new entities belonging to the same semantic class, based on a given candidate entity vocabulary and corpus (Zhang et al. 2020; Li et al. 2022a). For example, given {Washington D.C., Chicago, Los Angeles}, ESE tries to retrieve other entities of the target semantic class US Cities, such as New York, NYC, and Boston. ESE plays a significant role in knowledge mining and benefits a variety of downstream NLP and IR applications (Chen, Cafarella, and Jagadish 2016; Li et al. 2023b).

[Figure 1: An example of tricky entities that a mono-modal ESE model cannot handle when expanding US Cities: negative entities with fine-grained semantic differences (e.g., Florida), context-sensitive synonymous entities (e.g., SEA, The Big Apple), polysemous entities (e.g., Washington State), and long-tailed entities (e.g., Pateros); visual clues help resolve each case.]
Conventional ESE methods are based on mono-modality (i.e., the literal modality) and typically suffer from limited information and sparse representation. Taking the expansion of US Cities as an example, mono-modal ESE methods struggle to deal with complex real-world entities from the following perspectives:

• Negative entities with fine-grained semantic differences refer to entities that belong to the same coarse-grained semantic class as the target class. These entities share semantics in textual context and are consequently challenging to differentiate in detail. For instance, when expanding US Cities, it is easy to wrongly expand entities with the same parent class (i.e., US Location), such as Florida and Texas, which are also located in the US.

• Synonymous entities are entities with a variety of aliases. An ESE model can readily understand common aliases, while failing to comprehend context-sensitive aliases (Henriksson et al. 2014; Schumacher and Dredze 2019) such as abbreviations and nicknames, since ascertaining their meaning necessitates explicit textual cues. For example, SEA means Seattle only in certain contexts, potentially leading to the omission of its retrieval.
Third, multi-modal information can facilitate the resolution of polysemous entities and provide clues for the alignment of synonymous entities. In addition, we argue that multi-modal information is particularly beneficial to rarely used synonymous entities or long-tail entities, as entities of lower frequencies tend to be more concrete concepts with stable visual representations. Regrettably, despite the availability of diverse multi-modal data types (Li et al. 2023a; Yu et al. 2023a,c; Cheng et al. 2023a,b,c), there is currently no multi-modal dataset structured based on fine-grained semantic classes. To address this gap, we have constructed a large-scale, manually annotated MESE dataset called MESED, comprising 14,489 entities sourced from Wikipedia and 434,675 image-sentence pairs. To the best of our knowledge, MESED is the first multi-modal dataset for ESE with large-scale and elaborate manual calibration. MESED features several elements to accentuate the challenges of ESE. Firstly, we meticulously crafted a semantic class schema that consists of 26 coarse-grained and 70 fine-grained classes, with fine-grained classes that are mutually ambiguous (e.g., Chinese actors versus US actors) being assigned as hard negative classes for each other. Furthermore, synonymous and polysemous entities are added to amplify confusion between entities. Additionally, to evaluate models’ capability in comprehending sparse entities, uncommon semantic classes were deliberately included. In experiments, conventional text-based models, as well as emerging GPT-3.5, and various visual and multi-modal baseline models are evaluated. We also propose a powerful multi-modal model MultiExpan trained with four selfsupervised multi-modal pre-training tasks that we designed, including masked entity prediction, contrastive learning, clustering learning, and momentum distillation. To summarize, the main contributions are as follows: • We present a novel Multi-modal Entity Set Expansion (MESE) task, which expands entities in multiple modalities. • We first release a large-scale human-annotated MESE dataset called MESED, which is challenging as its finegrained semantic classes and ambiguous candidate entities. • We provide strong multi-modal baseline models MultiExpan and explore diverse self-supervised pre-training objectives for representation learning of multi-modal entities. • Extensive experiments demonstrate the effectiveness of our MultiExpan and provide direction for future research. Task Formulation Definition 1 Multi-modal Entity Set Expansion (MESE). The inputs of MESE are a small set S = {e1, e2, ..., ek} that contains several seed entities describing a certain semantic class and a vocabulary V of candidate entities. Besides, a corpus D containing the multi-modal contexts {ei, (ti 1, vi 1), ..., (ti n, vi n)} for each entity ei is given, in which ti n is a sentence comprising ei and (ti n, vi n) forms an imagesentence pair. It is of note that arbitrary modality may be lacking in a given context. Dataset Construction In this section, we demonstrate the MESED construction procedure. Several factors, including the coverage and ambiguity of semantic classes, as well as the relevance between images and entities are considered to ensure the quality of MESED. Data Collection There are two ways to construct a multi-modal ESE dataset. The first straightforward approach is to first collect the imagesentence pairs and label the entities in the sentences. 
Dataset Construction

In this section, we describe the MESED construction procedure. Several factors, including the coverage and ambiguity of semantic classes as well as the relevance between images and entities, are considered to ensure the quality of MESED.

Data Collection

There are two ways to construct a multi-modal ESE dataset. The first, straightforward approach is to collect image-sentence pairs and label the entities in the sentences; then, for each semantic class, human annotators traverse the entire large-scale entity vocabulary once to pick out the corresponding entities. Although plenty of public datasets with massive image-sentence pairs are available, the labour cost of such a bottom-up approach is prohibitive. We therefore adopt the more practical top-down approach: the semantic classes and their corresponding entities are constructed first, and the textual and visual contexts corresponding to the entities are then collected in turn.

Step 1. Semantic Classes and Entities Collection. Wikipedia compiles a vast list of entities corresponding to semantic classes,¹ organized in a hierarchical structure. We pick a selection of semantic classes following certain principles (discussed in the next section) and crawl the corresponding entities. In addition, numerous entities randomly sampled from Wikipedia pages are appended to the entity vocabulary as negative entities. Further, polysemous and synonymous entities are added to the vocabulary as hard negative entities and hard positive entities, respectively.

¹ https://en.wikipedia.org/wiki/List_of_lists_of_lists

Step 2. Entity-Labeled Sentences Collection. We crawl Wikipedia articles containing abundant entity mentions with human-annotated hyperlinks² that uniquely identify an entity. Since the entities crawled in Step 1 carry hyperlinks, we can use these hyperlinks to associate the entities with the respective sentences and convey the textual information to the entities.

² E.g., https://en.wikipedia.org/wiki/Earth

Step 3. Related Images Collection. In this step, images corresponding to the entities or sentences are acquired through the Google Image search engine. To remove the distraction of extraneous content in a sentence, its keywords are extracted with KeyBERT (Grootendorst 2020). We stitch them together with the entity name and semantic class as the search query, and obtain the top 10 images from the search results.

Step 4. Images Re-ranking. One of the 10 images is selected as the visual information of the entity. An ideal image should reflect the content of the sentence and contain the entity simultaneously. With both aspects in mind, we devised a simple but effective image re-ranking algorithm to select the most appropriate image $v_i$ for sentence $t_i$ and entity $e$:

$$\text{score}(v_i, t_i, e) = \alpha \cdot \text{CLIP-IMG}(v_i) \odot \text{CLIP-TEXT}(t_i) + (1 - \alpha) \max_{o_i^j \in \text{Obj}(v_i)} \cos\_\text{sim}\!\left(o_i^j, \text{Img}(e)\right) \quad (1)$$

The first term measures the relevance of image $v_i$ and sentence $t_i$, which is what CLIP excels at. The second term leverages Faster R-CNN (Ren et al. 2015) to detect objects $\text{Obj}(v_i)$ in the image and computes their similarity to the typical image $\text{Img}(e)$ of the entity in the Wikipedia Infobox; it determines whether the entity appears in the image. We take the image with the highest score as the one corresponding to sentence $t_i$ and entity $e$, and leave the exploitation of multiple images for future research.
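A minimal sketch of the re-ranking score in Eq. (1) is shown below, using HuggingFace's CLIP. The Faster R-CNN object crops and the Infobox entity image are assumed to be embedded beforehand, and the value of `alpha` is illustrative.

```python
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def rerank_score(image, sentence, object_embs, entity_emb, alpha=0.5):
    """Score one candidate image for a (sentence, entity) pair, Eq. (1).
    object_embs: embeddings of detected object crops in the image.
    entity_emb: embedding of the entity's Wikipedia Infobox image."""
    inputs = proc(text=[sentence], images=[image],
                  return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = clip(**inputs)
    # first term: CLIP image-text relevance (dot product of unit embeddings)
    rel = (F.normalize(out.image_embeds, dim=-1)
           * F.normalize(out.text_embeds, dim=-1)).sum(-1)
    # second term: does the entity appear among the detected objects?
    obj = F.cosine_similarity(object_embs, entity_emb.unsqueeze(0)).max()
    return alpha * rel.item() + (1 - alpha) * obj.item()
```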
Human Calibration and Annotation

The dataset generated automatically by the above steps is inevitably noisy; in particular, Steps 3 and 4 may produce mismatches between images and sentences. To improve image quality while verifying the effectiveness of the re-ranking algorithm, we hired human annotators to evaluate the relevance of images to sentences and entities in three categories: relevant to both (R/T E&S), relevant to only the sentence (R/T S), and irrelevant to both (IR). For images irrelevant to both after re-ranking, the annotators selected a new image.

From Table 1, we observe that the re-ranking algorithm significantly improves the relevance of images to both text and entities, compared with directly using the Top 1 image returned by the search engine. The inter-annotator agreement measured by Fleiss's Kappa (Fleiss 1971) exceeded 0.8 in all settings, demonstrating the reliability of the annotation. The Top 1 strategy has the highest image diversity (measured by the inverse of the average cosine similarity of image embeddings) because it introduces substantial irrelevant images, whereas the first term of the re-ranking algorithm guarantees the relevance of images and sentences while avoiding a singular selection of typical entity images, so image diversity does not decrease significantly.

| Strategy | R/T E&S (%) | R/T S (%) | IR (%) | Kappa | Diversity |
|---|---|---|---|---|---|
| Top 1 | 52.7 | 14.8 | 32.5 | 0.842 | 1.813 |
| Re-ranking | 78.1 | 15.2 | 6.7 | 0.862 | 1.792 |
| Annotation | 80.8 | 19.2 | 0 | 0.858 | 1.798 |

Table 1: Relevance of images to entities and sentences under different image-processing strategies.

Analysis of MESED

MESED is the first multi-modal ESE dataset with meticulous manual calibration. It consists of 14,489 entities collected from Wikipedia and 434,675 image-sentence pairs. The 70 fine-grained semantic classes in MESED contain 82 entities on average, with a minimum of 23 and a maximum of 362. Each fine-grained class contains 5 queries with three seed entities and 5 queries with five seed entities. MESED may not feature the largest total number of candidate entities, but we believe that entity count alone is not a key measure of dataset quality: most candidate entities in previous datasets are randomly selected negative entities, which differ markedly from the target entities and do not increase the difficulty of the dataset.

| | Wiki | APR | CoNLL | ONs | MESED |
|---|---|---|---|---|---|
| # Classes | 8 | 3 | 4 | 8 | 70 |
| Granularity | Coarse | Coarse | Coarse | Coarse | Fine |
| # Queries / Class | 5 | 5 | 1 | 1 | 10 |
| # Seeds / Query | 3 | 3 | 10 | 10 | 3/5 |
| # Entities | 33K | 76K | 6K | 20K | 14K |
| # Sentences | 973K | 1043K | 21K | 144K | 434K |
| Multi-Modal | No | No | No | No | Yes |

Table 2: Comparison of ESE datasets.

We ensured that MESED is challenging from multiple perspectives: (1) we meticulously designed the semantic class schema, which consists of three granularity layers; fine-grained semantic classes belonging to the same parent class overlap semantically, making them hard negative classes for each other. (2) We included entities sharing words with target entities, obtained through a BM25-based Wikipedia search engine, as hard negative entities in the candidate list. (3) We assessed the model's ability to expand synonymous entities by obtaining each entity's synonyms via Wikidata SPARQL and replacing a portion of the entities with synonyms whose edit distance from the original exceeds 5. Due to space constraints, more detailed analysis and experiments on MESED are placed in the appendices of the Supplementary Material, and they are highly recommended to the reader.
[Figure 2: The training framework of the multi-modal entity representation phase: a 12-layer text Transformer, a ResNet backbone with a 3-layer image Transformer, a 3-layer cross-modal Transformer with classification head f(·), contrastive and clustering projection heads p_con(·) and p_clu(·), and a momentum model updated for momentum distillation.]

Methods

Overall Framework

We describe the proposed MultiExpan method for MESE, which expands the initial entity set with multi-modal contexts. Inspired by ProbExpan (Li et al. 2022b), we divide MultiExpan into two phases: a multi-modal entity representation phase and an entity expansion phase. In the first phase, we design a multi-modal entity-level encoder whose output is the probability distribution of a masked span over candidate entities. The entity is represented as the average of the predicted entity distributions over all sentences containing it. Four multi-modal self-supervised pre-training tasks are proposed to refine the entity representation. In the second phase, MultiExpan retrieves target entities according to the similarities of the probabilistic entity representations. We note that MultiExpan is proposed to provide a robust multi-modal baseline and to explore the effectiveness of different pre-training tasks.

Multi-modal Entity Representation

The multi-modal encoder first processes text and images separately with self-attention Transformers, then combines them for deep cross-modal interaction.

Text. To handle the textual information, we replace entity mentions in sentences with [MASK] to construct the text-modality inputs. For the contextual text $T = \{w_1, w_2, ..., w_{L_1}\}$ with a masked entity mention, we directly use 12 Transformer layers initialized from BERT_BASE (Kenton and Toutanova 2019) to obtain the textual context embeddings:

$$\hat{W} = \{\hat{w}_1, \hat{w}_2, ..., \hat{w}_{L_1}\} = \text{BERT}_{\text{BASE}}(T) \quad (2)$$

where $L_1$ is the maximum token length of the sentences.

Image. Next, we process the image information. Unlike the regional and grid features widely used for image feature extraction, the patch features we adopt are simple yet efficient. We transform each image to a fixed shape, fix the patch size, divide each image into 36 patches $I = \{i_1, i_2, ..., i_{L_2}\}$, and use a ResNet backbone to extract patch features:

$$\{v_1, v_2, ..., v_{L_2}\} = \text{Flat}(\text{ResNet}(I)) \quad (3)$$

where $L_2$ is the number of patches and $\text{Flat}(\cdot)$ reshapes the patch features extracted from the ResNet into one dimension. Since patch segmentation loses position information, we add learnable position embeddings $P = \{p_1, p_2, ..., p_{L_2}\}$, which are combined with the patch features by pair-wise addition. Finally, we build a 3-layer Transformer as the image encoder:

$$\hat{V} = \{\hat{v}_1, \hat{v}_2, ..., \hat{v}_{L_2}\} = \text{Encoder}_V(\text{Flat}(\text{ResNet}(I)) \oplus P) \quad (4)$$
Cross-modal fusion. After obtaining the information of the two modalities, the hidden states $\{h_1, h_2, ..., h_L\}$ are obtained by concatenating the text and visual features, $\text{concat}(\hat{W}, \hat{V})$. We then feed them into a 3-layer Transformer for cross-modal interaction and fusion, so that the image-text pairs are fully aligned:

$$\{\hat{h}_1, \hat{h}_2, ..., \hat{h}_L\} = \text{Encoder}_{cross}(\{h_1, h_2, ..., h_L\}) \quad (5)$$

where $L = L_1 + L_2$ and the Transformer has the same structure as the visual encoder above. A classification head $f$ is attached behind the multi-modal encoder. Given the hidden state at the mask position, the embedding vector is transformed into a probability distribution of the masked entity over the candidate entities by an MLP and a Softmax:

$$\hat{y} = f(\hat{h}_{[MASK]}) = \text{Softmax}(\text{MLP}(\hat{h}_{[MASK]})), \quad \hat{y} \in \mathbb{R}^{V_e} \quad (6)$$

in which $V_e$ is the size of the candidate entity vocabulary.

Four self-supervised pre-training objectives are proposed for training; the multi-modal encoder iteratively optimizes all four.

Masked entity prediction loss. For the masked entity prediction task, the model takes images and masked sentences as input and obtains the entity probability distribution $\hat{y}$ at the masked position as described above. Cross-entropy loss with label smoothing lets the model learn the underlying semantics of entities:

$$\mathcal{L}_{mask} = -\frac{1}{N} \sum_{i}^{N} \sum_{j}^{V_e} y_i[j] \cdot (1-\eta) \cdot \log(\hat{y}_i[j]) + (1 - y_i[j]) \cdot \eta \cdot \log(1 - \hat{y}_i[j]) \quad (7)$$

where the ground truth $y$ is a one-hot vector and $N$ is the batch size. The smoothing factor $\eta$ prevents entities that share semantics with the target entity from being overly suppressed.
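A structural sketch of the encoder forward pass (Eqs. 2-6) is given below, assuming a ResNet-50 backbone pooled to a 6x6 grid to obtain the 36 patches; the pooling choice, hidden sizes, and the single-linear classification head are illustrative simplifications of the architecture described above.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50
from transformers import BertModel

class MultiExpanEncoderSketch(nn.Module):
    """Sketch of the multi-modal encoder: BERT text encoder, ResNet patch
    features + 3-layer image Transformer, 3-layer cross-modal Transformer."""
    def __init__(self, vocab_entities, d=768):
        super().__init__()
        self.text_enc = BertModel.from_pretrained("bert-base-uncased")
        cnn = resnet50(weights="IMAGENET1K_V1")
        self.backbone = nn.Sequential(*list(cnn.children())[:-2])  # conv maps
        self.pool = nn.AdaptiveAvgPool2d(6)             # 6x6 = 36 patches
        self.proj = nn.Linear(2048, d)
        self.pos = nn.Parameter(torch.zeros(1, 36, d))  # learnable positions P
        layer = nn.TransformerEncoderLayer(d, nhead=12, batch_first=True)
        self.img_enc = nn.TransformerEncoder(layer, num_layers=3)    # Eq. (4)
        self.cross_enc = nn.TransformerEncoder(layer, num_layers=3)  # Eq. (5)
        self.head = nn.Linear(d, vocab_entities)        # classification head f

    def forward(self, input_ids, attention_mask, images, mask_pos):
        w = self.text_enc(input_ids,
                          attention_mask=attention_mask).last_hidden_state  # Eq. (2)
        v = self.pool(self.backbone(images)).flatten(2).transpose(1, 2)     # Eq. (3)
        v = self.img_enc(self.proj(v) + self.pos)                           # Eq. (4)
        h = self.cross_enc(torch.cat([w, v], dim=1))                        # Eq. (5)
        h_mask = h[torch.arange(h.size(0)), mask_pos]   # hidden state at [MASK]
        return self.head(h_mask).softmax(-1)                                # Eq. (6)
```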
We employ an alternative projection head, denoted as pclu, to map the input sample xi onto a semantic class subspace, resulting in ci = pclu(ˆh[MASK]). The dimension M of ci corresponds to the number of clusters, namely the number of target semantic classes. Each element of the feature indicates the probability that it belongs to a particular semantic class. We posit that a semantic class can be characterized by the probabilistic responses of a batch of entities towards it. Formally, let C = [c1, · · · , c2i−1, · · · , c2N−1] ∈ RN×M denotes the class probability distribution under samples {x1, · · · , x2i−1, · · · , x2N−1}, and C′ = [c2, · · · , c2i, · · · , c2N] for samples {x2, · · · , x2i, · · · , x2N}. The positive clustering pairs are formed by the semantic classes represented by the same columns of matrices C and C′, due to the fact that the entities x2i−1 and x2i, corresponding to each element of these column vectors, are positive sample pairs originating from the same semantic class. For brevity, we denote the i-th column of C as ˆc2i−1 and ˆc2i for the i-th column of C′. Similarly, dot product is adopted to quantify the similarity between ˆci and ˆcj: ˆs(ˆci, ˆcj) = ˆc⊤ i · ˆcj, i, j ∈[1, 2M] (13) For each semantic class ˆci, the clustering loss ˆli is computed in the same way as contrastive loss defined in Equation (9)(11), which distinguishes ˆci from other 2M −2 semantic classes except its positive counterpart ˆcj. The clustering loss is finally calculated as: Lclu = 2M X i=1 ˆli (14) Momentum distillation loss The image-sentence pairs in our MESED are collected from the web, often accompanied by noise, which causes the collected images may be weakly related to the sentences, or the extended entities belonging to the semantic class are not included in ground truth. To alleviate the above problems, we introduce momentum distillation learning. During training, a momentum version of the model is slowly updated by exponentially shifting the momentum factor m: θt ←mθt + (1 −m)θs and the momentum model is used to generate pseudo-labels as additional supervision, preventing the student model overfitting to noise. The momentum distillation loss is expressed as the KL divergence between the pseudo entities probability distribution ey generated by the momentum model and the predicted ˆy of the multi-modal encoder at current iteration: Lmod = − m X i=1 eyilog(eyi) −eyilog( ˆyi)) (15) Entity Expansion The entity is represented as the average of the predicted entity distributions for all sentences containing it. The semantic class is represented by the weighted average of entities in current expansion set and the weight is dynamically maintained by window search algorithm. In this way, candidate entities with similar distribution are placed in the current set measured by KL divergence. As the expansion process is not the focus of this work, we use window search and entity re-ranking algorithm from the ProbExpan (Li et al. 2022b) and will not repeat them here. Experiments Experiment Setup Compared Methods We compare three categories of models, the first is the traditional text-based ESE approach, including SetExpan (Shen et al. 2017), CaSE (Yu et al. 2019), CGExpan (Zhang et al. 2020), ProbExpan (Li et al. 2022b) and GPT-3.5. Of the above models, SetExpan, CaSE are the traditional statistical probability-based approaches, and CGExpan and ProbExpan are the most advanced methods based on pre-trained language model BERT. We also evaluated vision-based models: VIT (Dosovitskiy et al. 
Experiments

Experiment Setup

Compared Methods. We compare three categories of models. The first is traditional text-based ESE approaches, including SetExpan (Shen et al. 2017), CaSE (Yu et al. 2019), CGExpan (Zhang et al. 2020), ProbExpan (Li et al. 2022b), and GPT-3.5. Among these, SetExpan and CaSE are traditional statistical-probability-based approaches, while CGExpan and ProbExpan are state-of-the-art methods based on the pre-trained language model BERT. We also evaluate vision-based models: VIT (Dosovitskiy et al. 2020), BEIT (Bao et al. 2021), and the image encoder of CLIP (CLIP-IMG). For multi-modal expansion, we explore multi-modal models with different structures, namely CLIP (Radford et al. 2021) and ALBEF (Li et al. 2021). Both the vision-based and the multi-modal models are further pre-trained via the entity prediction task, analogous to the method defined in Equation (7).

Evaluation Metrics. The objective of ESE is to produce an entity list ranked by similarity to the given seed entities in descending order. Following previous research (Zhang et al. 2020; Li et al. 2022b; Yan et al. 2020), two widely used evaluation metrics, MAP@K and P@K, are employed. MAP@K is computed as:

$$\text{MAP@}K = \frac{1}{|Q|} \sum_{q \in Q} \text{AP}_K(R_q, G_q) \quad (16)$$

Here, $Q$ is the collection of queries. $\text{AP}_K(R_q, G_q)$ denotes the average precision at position $K$ given the ranked list $R_q$ and the ground-truth list $G_q$. P@K is the precision of the top-K entities. In the experiments, queries with $\lVert\text{Seed}\rVert = 3$ and $\lVert\text{Seed}\rVert = 5$ are evaluated separately.
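For reference, the two metrics can be computed per query as follows; normalizing AP@K by min(|G_q|, K) is a common convention and an assumption here, since the paper does not spell out the normalizer.

```python
def ap_at_k(ranked, gold, k):
    """Average precision at K for one query q, i.e. AP_K(R_q, G_q)."""
    hits, score = 0, 0.0
    for rank, ent in enumerate(ranked[:k], start=1):
        if ent in gold:
            hits += 1
            score += hits / rank
    return score / min(len(gold), k) if gold else 0.0

def map_at_k(runs, golds, k):
    """Eq. (16): mean of AP@K over the query collection Q."""
    return sum(ap_at_k(r, g, k) for r, g in zip(runs, golds)) / len(runs)

def p_at_k(ranked, gold, k):
    """P@K: precision of the top-K expanded entities."""
    return sum(e in gold for e in ranked[:k]) / k
```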
Main Experiment

The results of the main experiment are presented in Table 3.

∥Seed∥=3:

| Modality | Method | MAP@10 | MAP@20 | MAP@50 | MAP@100 | P@10 | P@20 | P@50 | P@100 | Avg |
|---|---|---|---|---|---|---|---|---|---|---|
| T | SetExpan | 26.10 | 20.98 | 15.83 | 13.91 | 34.25 | 29.58 | 24.25 | 22.96 | 23.48 |
| T | CaSE | 27.71 | 20.93 | 14.63 | 12.02 | 36.85 | 30.57 | 24.83 | 23.63 | 23.90 |
| T | CGExpan | 38.89 | 32.51 | 24.69 | 21.06 | 45.85 | 39.85 | 33.19 | 32.80 | 33.61 |
| T | GPT-3.5 | 31.10 | 24.73 | 19.20 | 17.07 | 37.65 | 31.35 | 26.08 | 25.11 | 26.54 |
| T | GPT+Name | 42.12 | 35.32 | 26.83 | 23.21 | 52.32 | 41.23 | 35.89 | 35.73 | 36.58 |
| T | ProbExpan | 65.47 | 57.50 | 43.96 | 40.73 | 71.30 | 64.35 | 55.73 | 51.99 | 56.38 |
| V | VIT | 65.02 | 55.94 | 41.89 | 32.40 | 67.95 | 59.53 | 46.08 | 36.94 | 50.72 |
| V | BEIT | 68.45 | 58.58 | 43.59 | 33.69 | 71.70 | 62.13 | 47.60 | 37.66 | 52.93 |
| V | CLIP-IMG | 66.39 | 57.04 | 41.72 | 32.42 | 68.85 | 60.90 | 45.79 | 36.81 | 51.24 |
| T+V | CLIP | 76.41 | 65.75 | 49.58 | 40.08 | 79.20 | 69.53 | 53.10 | 43.66 | 59.66 |
| T+V | ALBEF | 83.55 | 75.46 | 63.02 | 54.47 | 86.60 | 79.15 | 68.03 | 61.12 | 71.43 |
| T+V | Ours (MEP) | 86.07 | 79.18 | 67.66 | 58.91 | 89.10 | 82.85 | 72.13 | 65.17 | 75.13 |
| T+V | Ours (Full) | 91.44 | 86.85 | 76.86 | 63.34 | 93.60 | 89.63 | 80.37 | 67.15 | 81.16 |

∥Seed∥=5:

| Modality | Method | MAP@10 | MAP@20 | MAP@50 | MAP@100 | P@10 | P@20 | P@50 | P@100 | Avg |
|---|---|---|---|---|---|---|---|---|---|---|
| T | SetExpan | 25.99 | 20.64 | 15.20 | 13.51 | 34.90 | 29.93 | 24.26 | 23.29 | 23.47 |
| T | CaSE | 32.01 | 24.63 | 17.99 | 14.58 | 41.50 | 34.75 | 28.83 | 27.03 | 27.67 |
| T | CGExpan | 38.86 | 31.49 | 23.54 | 20.23 | 45.55 | 38.28 | 31.88 | 32.15 | 32.75 |
| T | GPT-3.5 | 31.79 | 25.46 | 20.12 | 19.94 | 39.40 | 33.13 | 28.67 | 30.45 | 28.62 |
| T | GPT+Name | 42.32 | 36.48 | 25.76 | 22.36 | 52.94 | 42.10 | 34.68 | 35.12 | 36.47 |
| T | ProbExpan | 66.29 | 59.31 | 48.90 | 42.51 | 73.15 | 66.78 | 58.51 | 54.54 | 58.75 |
| V | VIT | 62.29 | 55.43 | 41.30 | 31.54 | 68.20 | 58.93 | 45.61 | 35.91 | 49.90 |
| V | BEIT | 70.14 | 59.04 | 43.08 | 33.21 | 73.45 | 62.93 | 47.25 | 37.17 | 53.28 |
| V | CLIP-IMG | 67.67 | 57.28 | 41.41 | 31.86 | 70.40 | 60.80 | 45.25 | 35.94 | 51.33 |
| T+V | CLIP | 77.37 | 65.92 | 49.01 | 39.05 | 79.80 | 69.48 | 52.41 | 42.50 | 59.44 |
| T+V | ALBEF | 85.04 | 76.25 | 62.45 | 53.64 | 87.80 | 79.70 | 67.37 | 60.06 | 71.54 |
| T+V | Ours (MEP) | 87.77 | 79.96 | 67.24 | 57.62 | 90.90 | 83.55 | 71.41 | 63.41 | 75.23 |
| T+V | Ours (Full) | 92.67 | 87.27 | 75.70 | 61.36 | 94.30 | 89.68 | 78.56 | 64.46 | 80.50 |

Table 3: Main experiment results. Text-based (T), vision-based (V), and multi-modal (T+V) expansion methods are evaluated.

From the results, we observe that:

(1) The multi-modal methods outperform the mono-modal methods in general. Remarkably, our MultiExpan achieves superior performance solely by employing the masked entity prediction (MEP) task, and the full version of MultiExpan achieves the best performance overall.

(2) Regarding the structure of multi-modal models, ALBEF and our MultiExpan perform deep modality interaction through the Transformer, which suits the ESE task better than CLIP's shallow modal interaction via dot-product similarity. These results indicate that deep modal interaction and fusion is a direction worth exploring in the future.

(3) Among the vision-based models, BEIT excels at leveraging finer-grained image semantics, such as object and background information, thanks to its masked-image-modeling pre-training. In contrast to VIT, which learns overall image semantics by classifying images on the ImageNet dataset, BEIT demonstrates better entity understanding. Meanwhile, the image encoder of CLIP also captures richer semantics than VIT owing to its linkage with the text modality. However, relying solely on the image modality does not suffice to produce satisfactory results; the text modality remains dominant.

(4) Increasing ∥Seed∥ does not necessarily improve overall performance. More seeds describe the semantic class more precisely and retrieve some "must be correct" entities more safely, so MAP/P improves for small K (10, 20). However, more seed entities also imply a larger search space for semantic classes, necessitating a more meticulous analysis of common entity properties than the current model allows; this is the persistent semantic-drift challenge confronting ESE models, so MAP/P decreases for larger K. Increasing ∥Seed∥ does, however, help disambiguate queries whose entities belong to multiple classes: for the semantic class Light Novel, where some seed entities are also Manga, increasing ∥Seed∥ yields an average gain of 17.5% across all metrics.

(5) GPT-3.5 did not achieve satisfactory results and was even inferior to the unsupervised CGExpan. A careful examination of GPT-3.5's performance on specific semantic classes revealed that the model struggles with complex classes (e.g., 108 Martyrs of World War II). We therefore explicitly instructed GPT-3.5 to first reason about the class name and then expand based on it. This modification, named GPT+Name, exhibits a substantial improvement. The approach aligns with the idea of chain-of-thought reasoning (Wei et al. 2022) for large language models (Touvron et al. 2023; Li et al. 2023c; Yu et al. 2023b), i.e., thinking step by step. We suggest that future research explore the combination of chain-of-thought prompting and ESE.
Pre-training Tasks Analysis

We compare the effects of the different pre-training tasks on MultiExpan. The masked entity prediction task enables the model to learn the underlying semantics of entities, and each of the three additional pre-training tasks enhances it further. The results in Table 4 show that every pre-training task yields a gain. Notably, contrastive learning with hard negative entities yields the largest improvement by providing clearer semantic boundaries. Clustering learning brings gains comparable to contrastive learning at MAP/P@10 and @20 but is less effective at larger K, because contrastive learning operates directly on entities and more directly aggregates target entities into tight clusters. In contrast, momentum distillation brings a smaller performance gain, which we attribute mainly to its role in preventing overfitting to noisy data; this observation also underscores the high quality of MESED, particularly the accurate annotation of entities in sentences. Extensive experiments on the hyperparameter sensitivity of the pre-training tasks are presented in the Appendix and demonstrate the robustness of MultiExpan to these parameters.

| Model | MAP@10 | MAP@20 | MAP@50 | MAP@100 | P@10 | P@20 | P@50 | P@100 | Avg |
|---|---|---|---|---|---|---|---|---|---|
| MultiExpan (MEP) | 86.07 | 79.18 | 67.66 | 58.91 | 89.10 | 82.85 | 72.13 | 65.17 | 75.13 |
| + Contrastive | 90.71 | 86.58 | 75.58 | 62.69 | 93.35 | 89.60 | 79.23 | 67.10 | 80.61 |
| + Clustering | 89.10 | 82.83 | 70.85 | 60.48 | 91.65 | 86.05 | 74.75 | 65.92 | 77.70 |
| + Distillation | 86.97 | 80.48 | 68.30 | 59.43 | 89.85 | 83.65 | 72.34 | 65.23 | 75.78 |
| MultiExpan (Full) | 91.44 | 86.85 | 76.86 | 63.34 | 93.60 | 89.63 | 80.37 | 67.15 | 81.16 |

Table 4: Comparison of different pre-training tasks.

Modality Analysis

We also carry out analysis experiments on each modality to answer the following questions.

Are the multiple modalities complementary? We present a Venn diagram illustrating the impact of the different modalities on MESE in Figure 3, where T, V, and T+V represent ProbExpan, BEIT, and our MultiExpan, respectively. The size of each circle corresponds to the proportion of the top-100 ranked entities that belong to the ground truth, and circle intersections represent entity overlap. The analysis shows that the textual modality still prevails over the visual modality. With the visual modality introduced as supplementary information, 15.17% of the target entities are ranked higher by MultiExpan, while 5.17% of the entities originally expanded correctly are excluded due to image noise.

[Figure 3: The contribution of each modality (Venn diagram over T, V, and T+V).]

Is it better to have multi-modal contexts for both seed and candidate entities? During the inference phase, we separately remove the textual or visual information of the candidate or seed entities in MultiExpan. The resulting performances are shown in the last six rows of Table 5, with subscripts indicating whether the operation is applied to seeds (s) or candidates (c). Removing any modal information for any set of entities hurts overall performance. However, removing particular modal information from seed entities causes severe degradation, whereas removing it from candidate entities causes only a slight loss. These findings suggest that modeling the semantics of the seed entity set is more crucial than modeling individual entities. Additionally, MultiExpan's performance also decreases when input text or images are removed during the pre-training phase, further demonstrating its ability to exploit multi-modal information effectively.

| Model | MAP@10 | MAP@20 | MAP@50 | MAP@100 | P@10 | P@20 | P@50 | P@100 | Avg |
|---|---|---|---|---|---|---|---|---|---|
| MultiExpan (MEP) | 86.07 | 79.18 | 67.66 | 58.91 | 89.10 | 82.85 | 72.13 | 65.17 | 75.13 |
| pre-train w/o T | 65.97 | 57.87 | 42.84 | 33.39 | 70.45 | 62.50 | 48.85 | 39.70 | 52.70 |
| pre-train w/o V | 66.87 | 60.18 | 52.26 | 47.57 | 73.90 | 68.13 | 62.45 | 60.88 | 61.53 |
| w/o Ts and Tc | 20.67 | 18.32 | 13.13 | 9.66 | 27.80 | 26.10 | 21.54 | 18.17 | 19.42 |
| w/o Ts | 20.75 | 18.43 | 13.57 | 9.92 | 27.50 | 25.88 | 21.54 | 18.34 | 19.49 |
| w/o Tc | 85.45 | 77.99 | 66.53 | 56.58 | 88.10 | 81.95 | 71.17 | 62.68 | 73.81 |
| w/o Vs and Vc | 58.99 | 50.36 | 40.38 | 35.60 | 64.05 | 56.53 | 48.37 | 47.09 | 50.17 |
| w/o Vs | 60.44 | 51.95 | 41.92 | 37.18 | 65.05 | 57.55 | 49.25 | 47.67 | 51.38 |
| w/o Vc | 84.79 | 76.94 | 64.55 | 55.92 | 87.90 | 81.18 | 69.76 | 63.07 | 73.01 |

Table 5: Ablation study on modality absence.
land, (3) Properties, which demonstrate the common traits of entities to align entities of the same class, such as appearance of Cats, and (4) Other: The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8703 Model MAP P Avg @10 @20 @50 @100 @10 @20 @50 @100 MultiExpan (MEP) 86.07 79.18 67.66 58.91 89.10 82.85 72.13 65.17 75.13 + Contrastive 90.71 86.58 75.58 62.69 93.35 89.60 79.23 67.10 80.61 + Clustering 89.10 82.83 70.85 60.48 91.65 86.05 74.75 65.92 77.70 + Distillation 86.97 80.48 68.30 59.43 89.85 83.65 72.34 65.23 75.78 MultiExpan (Full) 91.44 86.85 76.86 63.34 93.60 89.63 80.37 67.15 81.16 Table 4: Comparison of different pre-training tasks. Model MAP P Avg @10 @20 @50 @100 @10 @20 @50 @100 MultiExpan (MEP) 86.07 79.18 67.66 58.91 89.10 82.85 72.13 65.17 75.13 pre-train w/o T 65.97 57.87 42.84 33.39 70.45 62.50 48.85 39.70 52.70 pre-train w/o V 66.87 60.18 52.26 47.57 73.90 68.13 62.45 60.88 61.53 w/o Ts and Tc 20.67 18.32 13.13 9.66 27.80 26.10 21.54 18.17 19.42 w/o Ts 20.75 18.43 13.57 9.92 27.50 25.88 21.54 18.34 19.49 w/o Tc 85.45 77.99 66.53 56.58 88.10 81.95 71.17 62.68 73.81 w/o Vs and Vc 58.99 50.36 40.38 35.60 64.05 56.53 48.37 47.09 50.17 w/o Vs 60.44 51.95 41.92 37.18 65.05 57.55 49.25 47.67 51.38 w/o Vc 84.79 76.94 64.55 55.92 87.90 81.18 69.76 63.07 73.01 Table 5: Ablation study on modality absence. Visual Clues Proportion P@100 ProbExpan MultiExpan Object 46.3% 57.44 70.21 Scene 21.2% 67.44 72.09 Property 22.2% 66.66 80.00 Others 3.4% 61.90 76.19 Table 6: Model performance under different visual clues. Other important visual clues. We annotate 200 entity images with their corresponding visual clue types and assess MultiExpan’s capacity to leverage different visual clues. As Table 6 shows, all types of visual cues are beneficial to MESE, and visual modalities mainly supplement the textual information by highlighting objects in the images. In contrast, MultiExpan utilizes scenes to a lesser extent as they represent more abstract concepts. Case studies, visual clues examples and detailed performance on each semantic class can be found in Appendix. Conclusion In this paper, we introduce a novel task called Multi-modal Entity Set Expansion (MESE), which aims to leverage multiple modalities to represent and expand entities. The MESED dataset is the first multi-modal dataset for ESE with finegrained semantic classes and hard negative entities. In addition, A powerful multi-modal model MultiExpan is proposed which is pre-trained on four multimodal pre-training tasks. MultiExpan achieves state-of-the-art results compared to other mono/multi-modal models. In the future, we will investigate the applicability of generative PLMs, such as GPT-4, in addressing MESE task. MESED can also serve as a reliable benchmark for assessing the multi-modal entity understanding capacities of large PLMs. Acknowledgments This research is supported by National Natural Science Foundation of China (Grant No.62276154), Research Center for Computer Network (Shenzhen) Ministry of Education, the Natural Science Foundation of Guangdong Province (Grant No. 2023A1515012914), Basic Research Fund of Shenzhen City (Grant No. JCYJ20210324120012033 and JSGG20210802154402007), the Major Key Project of PCL for Experiments and Applications (PCL2021A06), Overseas Cooperation Research Fund of Tsinghua Shenzhen International Graduate School (HW2021008), and Shenzhen Science and Technology Program (WDZC20231128091437002). References Bao, H.; Dong, L.; Piao, S.; and Wei, F. 2021. 
Beit: Bert pre-training of image transformers. arXiv preprint arXiv:2106.08254. Chen, Z.; Cafarella, M.; and Jagadish, H. V. 2016. Long-Tail Vocabulary Dictionary Extraction from the Web. In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining, WSDM ’16, 625–634. New York, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8704 NY, USA: Association for Computing Machinery. ISBN 9781450337168. Cheng, X.; Cao, B.; Ye, Q.; Zhu, Z.; Li, H.; and Zou, Y. 2023a. Ml-lmcl: Mutual learning and large-margin contrastive learning for improving asr robustness in spoken language understanding. In Findings of the Association for Computational Linguistics: ACL 2023, 6492–6505. Cheng, X.; Dong, Q.; Yue, F.; Ko, T.; Wang, M.; and Zou, Y. 2023b. M 3 st: Mix at three levels for speech translation. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. IEEE. Cheng, X.; Xu, W.; Zhu, Z.; Li, H.; and Zou, Y. 2023c. Towards spoken language understanding via multi-level multigrained contrastive learning. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, 326–336. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Fleiss, J. L. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5): 378. Grootendorst, M. 2020. KeyBERT: Minimal keyword extraction with BERT. Henriksson, A.; Moen, H.; Skeppstedt, M.; Daudaraviˇcius, V.; and Duneld, M. 2014. Synonym extraction and abbreviation expansion with ensembles of semantic spaces. Journal of biomedical semantics, 5(1): 1–25. Kenton, J. D. M.-W. C.; and Toutanova, L. K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL-HLT, 4171– 4186. Lauscher, A.; Vuli´c, I.; Ponti, E. M.; Korhonen, A.; and Glavaˇs, G. 2020. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity. In Proceedings of the 28th International Conference on Computational Linguistics, 1371–1383. Li, J.; Selvaraju, R.; Gotmare, A.; Joty, S.; Xiong, C.; and Hoi, S. C. H. 2021. Align before fuse: Vision and language representation learning with momentum distillation. Advances in neural information processing systems, 34: 9694–9705. Li, Y.; Chen, J.; Li, Y.; Xiang, Y.; Chen, X.; and Zheng, H.-T. 2023a. Vision, Deduction and Alignment: An Empirical Study on Multi-Modal Knowledge Graph Alignment. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. IEEE. Li, Y.; Huang, S.; Zhang, X.; Zhou, Q.; Li, Y.; Liu, R.; Cao, Y.; Zheng, H.; and Shen, Y. 2022a. Automatic Context Pattern Generation for Entity Set Expansion. CoRR, abs/2207.08087. Li, Y.; Li, Y.; Chen, X.; Zheng, H.-T.; and Shen, Y. 2023b. Active relation discovery: Towards general and label-aware open relation extraction. Knowledge-Based Systems, 282: 111094. Li, Y.; Li, Y.; He, Y.; Yu, T.; Shen, Y.; and Zheng, H.-T. 2022b. Contrastive Learning with Hard Negative Entities for Entity Set Expansion. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1077–1086. Li, Y.; Ma, S.; Wang, X.; Huang, S.; Jiang, C.; Zheng, H.-T.; Xie, P.; Huang, F.; and Jiang, Y. 2023c. 
EcomGPT: Instruction-tuning Large Language Model with Chain-of-Task Tasks for E-commerce. arXiv preprint arXiv:2308.06966. Li, Y.; Ma, S.; Zhou, Q.; Li, Z.; Yangning, L.; Huang, S.; Liu, R.; Li, C.; Cao, Y.; and Zheng, H. 2022c. Learning from the Dictionary: Heterogeneous Knowledge Guided Fine-tuning for Chinese Spell Checking. In Findings of the Association for Computational Linguistics: EMNLP 2022, 238–249. Abu Dhabi, United Arab Emirates: Association for Computational Linguistics. Li, Y.; Zhou, Q.; Li, Y.; Li, Z.; Liu, R.; Sun, R.; Wang, Z.; Li, C.; Cao, Y.; and Zheng, H.-T. 2022d. The Past Mistake is the Future Wisdom: Error-driven Contrastive Probability Optimization for Chinese Spell Checking. In Findings of the Association for Computational Linguistics: ACL 2022, 3202–3213. Dublin, Ireland: Association for Computational Linguistics. Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, 8748–8763. PMLR. Ren, S.; He, K.; Girshick, R. B.; and Sun, J. 2015. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems. Schumacher, E.; and Dredze, M. 2019. Learning unsupervised contextual representations for medical synonym discovery. JAMIA open, 2(4): 538–546. Shen, J.; Wu, Z.; Lei, D.; Shang, J.; Ren, X.; and Han, J. 2017. Setexpan: Corpus-based set expansion via context feature selection and rank ensemble. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 288–304. Springer. Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozi`ere, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Xia, F.; Chi, E. H.; Le, Q. V.; Zhou, D.; et al. 2022. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. In Advances in Neural Information Processing Systems. Yan, L.; Han, X.; He, B.; and Sun, L. 2020. Global bootstrapping neural network for entity set expansion. In Findings of the Association for Computational Linguistics: EMNLP 2020, 3705–3714. Yu, P.; Huang, Z.; Rahimi, R.; and Allan, J. 2019. Corpusbased set expansion with lexical features and distributed representations. In Proceedings of the 42nd International The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8705 ACM SIGIR Conference on Research and Development in Information Retrieval, 1153–1156. Yu, T.; Hu, J.; Yao, Y.; Zhang, H.; Zhao, Y.; Wang, C.; Wang, S.; Pan, Y.; Xue, J.; Li, D.; Liu, Z.; Zheng, H.-T.; and Sun, M. 2023a. Reformulating Vision-Language Foundation Models and Datasets Towards Universal Multimodal Assistants. arXiv:2310.00653. Yu, T.; Jiang, C.; Lou, C.; Huang, S.; Wang, X.; Liu, W.; Cai, J.; Li, Y.; Li, Y.; Tu, K.; Zheng, H.-T.; Zhang, N.; Xie, P.; Huang, F.; and Jiang, Y. 2023b. SeqGPT: An Out-ofthe-box Large Language Model for Open Domain Sequence Understanding. arXiv:2308.10529. Yu, T.; Li, Y.; Chen, J.; Li, Y.; Zheng, H.-T.; Chen, X.; Liu, Q.; Liu, W.; Huang, D.; Wu, B.; and Wang, Y. 2023c. Knowledge-augmented Few-shot Visual Relation Detection. arXiv:2303.05342. Zhang, Y.; Shen, J.; Shang, J.; and Han, J. 2020. 
Empower Entity Set Expansion via Language Model Probing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 8151–8160.
A Generalized Neural Diffusion Framework on Graphs Yibo Li1, Xiao Wang2, Hongrui Liu3, Chuan Shi1* 1Beijing University of Posts and Telecommunications 2Beihang University 3Ant Group {yiboL, shichuan}@bupt.edu.cn, [email protected], [email protected] Abstract Recent studies reveal the connection between GNNs and the diffusion process, which motivates many diffusion-based GNNs to be proposed. However, since these two mechanisms are closely related, one fundamental question naturally arises: Is there a general diffusion framework that can formally unify these GNNs? The answer to this question can not only deepen our understanding of the learning process of GNNs, but also may open a new door to design a broad new class of GNNs. In this paper, we propose a general diffusion equation framework with the fidelity term, which formally establishes the relationship between the diffusion process with more GNNs. Meanwhile, with this framework, we identify one characteristic of graph diffusion networks, i.e., the current neural diffusion process only corresponds to the first-order diffusion equation. However, by an experimental investigation, we show that the labels of high-order neighbors actually exhibit monophily property, which induces the similarity based on labels among high-order neighbors without requiring the similarity among first-order neighbors. This discovery motives to design a new high-order neighbor-aware diffusion equation, and derive a new type of graph diffusion network (HiD-Net) based on the framework. With the high-order diffusion equation, HiD-Net is more robust against attacks and works on both homophily and heterophily graphs. We not only theoretically analyze the relation between HiD-Net with high-order random walk, but also provide a theoretical convergence guarantee. Extensive experimental results well demonstrate the effectiveness of HiD-Net over state-of-the-art graph diffusion networks. Introduction Graphs, such as traffic networks, social networks, citation networks, and molecular networks, are ubiquitous in the real world. Recently, Graph Neural Networks (GNNs), which are able to effectively learn the node representations based on the message-passing manner, have shown great popularity in tackling graph analytics problems. So far, GNNs have significantly promoted the development of graph analysis towards real-world applications. e.g, node classifification (Abu-ElHaija et al. 2019; Wu, He, and Xu 2019), link prediction (Kipf and Welling 2016b; You, Ying, and Leskovec 2019), *Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. subgraph isomorphism counting (Yu et al. 2023), and graph classifification (Gao and Ji 2019; Zhang et al. 2018). Some recent studies show that GNNs are in fact intimately connected to diffusion equations (Chamberlain et al. 2021; Wang et al. 2021; Thorpe et al. 2022), which can be considered as information diffusion on graphs. Diffusion equation interprets GNNs from a continuous perspective (Chamberlain et al. 2021) and provides new insights to understand existing GNN architectures, which motives some diffusion-based GNNs. For instance, (Wang et al. 2021) proposes continuous graph diffusion. (Thorpe et al. 2022) utilizes the diffusion process to handle oversmoothing issue. (Song et al. 2022) considers a graph as a discretization of a Riemannian manifold and studies the robustness of the information propagation process on graphs. 
Diffusion equation can also build a bridge between traditional GNNs and control theory (Zang and Wang 2020). Although the diffusion process and graph convolution are closely related, little effort has been made to answer: Is there a unified diffusion equation framework to formally unify the current GNN architectures? A well-informed answer can deepen our understanding of the learning mechanism of GNNs, and may inspire to design a broad new class of GNNs based on diffusion equation. Actually, (Chamberlain et al. 2021) has explained GCN (Kipf and Welling 2016a) and GAT (Veliˇckovi´c et al. 2017) from diffusion equation. However, with more proposed GNN architectures, it is highly desired to formally revisit the relation between diffusion equation and GNNs. In this paper, we discover that many GNN architectures substantially can be unified with a general diffusion equation with the fidelity term, such as GCN/SGC [22], APPNP[14], GAT [19], AMP [16], DAGNN [15]. Basically, the diffusion equation describes that the change of a node representation depends on the movement of information on graphs from one node to its neighbors, and the fidelity term constraints that the change of a node representation depends on the difference with its initial feature. Furthermore, we show that the unified diffusion framework can also be derived from an energy function, which explains the whole framework as an energy minimization process in a global view. Compared with other unified frameworks (Zhu et al. 2021; Ma et al. 2021), our framework is from the diffusion perspective, which has many advantages. For example, diffusion-based methods are able to address the common plights of graph learning models such as The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8707 oversmoothing (Chamberlain et al. 2021; Wang et al. 2021). What’s more, the diffusion equation can be seen as partial differential equations (PDEs) (Chamberlain et al. 2021), thus introducing many schemes to solve the graph diffusion equation such as explicit scheme, implicit scheme, and multi-step scheme, some of which are more stable and converge faster. Based on the above findings, we can see that the diffusion process employed by most current GNNs just considers the first-order diffusion equation, which only diffuses messages among 1-hop neighbors. That is, the first-order diffusion has the underlying homophily assumption among 1hop neighbors. While we empirically discover that the labels of 2-hop neighborhoods actually appear monophily property (Altenburger and Ugander 2018), i.e., nodes may have extreme preferences for a particular attribute which are unrelated to their own attribute and 1-hop neighbors’ attribute, but are more likely to be similar with the attribute of their 2-hop neighbors. Simply put, monophily can induce a similarity among 2-hop neighbors without requiring similarity among 1-hop neighbors. So when the 1-hop neighbors are heterophily-dominant or have noise, the 2-hop neighbors will provide more relevant context. Therefore, a more practical diffusion process should take both the first-order and secondorder neighbors into account. How can we design a new type of graph diffusion networks satisfying the above requirement based on our framework? In this paper, we design a new high-order neighbor-aware diffusion equation in our proposed diffusion framework, and then derive a High-order Graph Diffusion Network (HiDNet). 
Specifically, our model simultaneously combines the first-order and second-order diffusion process, then we regularize the diffusion equation by minimizing the discrepancy between the estimated and the original graph features. The whole diffusion equation is finally integrated into the APPNP architecture. With second-order diffusion equation, HiD-Net is more robust against the attacks and more general on both homophily and heterophily graphs. We theoretically prove that HiD-Net is essentially related with the second-order random walk. We also provide the convergence guarantee that HiD-Net will converge to this random walk’s limit distribution as the number of layers increases, and meanwhile, the learned representations do not converge to the same vector over all nodes. The contributions of this paper are summarized as follows: • We propose a novel generalized diffusion graph framework, consisting of diffusion equation and fidelity term. This framework, formally establishing the relation between diffusion process with a wide variety of GNNs, describes a broad new class of GNNs based on the discretized diffusion equations on graphs and provides new insight to the current graph diffusion/neural networks. • We discover the monophily property of labels, and based on our diffusion framework, we propose a high-order graph diffusion network, HiD-Net, which is more general and robust. We theoretically build the relation between HiD-Net and second-order random walk, together with the convergence guarantee. • Our extensive experiments on both the homophily and heterophily graphs clearly show that HiD-Net outperforms the popular GNNs based on diffusion equation. Related Work Graph convolutional networks. Recently, graph convolutional network (GCN) models (Bruna et al. 2013; Defferrard, Bresson, and Vandergheynst 2016; Kipf and Welling 2016a; Veliˇckovi´c et al. 2017; Hamilton, Ying, and Leskovec 2017) have been widely studied. Based on the spectrum of graph Laplacian, (Bruna et al. 2013) generalizes CNNs to graph signal. Then (Defferrard, Bresson, and Vandergheynst 2016) further improves the efficiency by employing the Chebyshev expansion of the graph Laplacian. (Kipf and Welling 2016a) proposes to only aggregate the node features from the one-hop neighbors and simplifies the convolution operation. (Veliˇckovi´c et al. 2017) introduces the attention mechanisms to learn aggregation weights adaptively. (Hamilton, Ying, and Leskovec 2017) uses various ways of pooling for aggregation. More works on GNNs can be found in surveys (Wu et al. 2020b; Zhou et al. 2020). Diffusion equation on graphs. Graph Heat Equation (GHE) (Chung and Graham 1997), which is a well-known generalization of the diffusion equation on graph data, models graph dynamics with applications in spectral graph theory. GRAND (Chamberlain et al. 2021) studies the discretized diffusion PDE on graphs and applies different numerical schemes for their solution. GRAND++ (Thorpe et al. 2022) mitigates the oversmoothing issue of graph neural networks by adding a source term. DGC (Wang et al. 2021) decouples the terminal time and propagation steps of linear GCNs from a perspective of graph diffusion equation, and analyzes why linear GCNs fail to benefit from deep layers. ADC (Zhao et al. 2021) strategies to automatically learn the optimal diffusion time from the data. However, these works focus on specific graph diffusion network, thus there is not a framework to formally unify the GNNs. The unified GNN framework. (Zhu et al. 
2021) establishes a connection between different propagation mechanisms with a unified optimization problem, and finds out that the proposed propagation mechanisms are the optimal solution for optimizing a feature fitting function over a wide class of graph kernels with a graph regularization term. (Ma et al. 2021) establishes the connections between the introduced GNN models and a graph signal denoising problem with Laplacian regularization. It essentially is still an optimization solving framework from the perspective of signal denoising. However, our framework is based on the diffusion equation, where the advantages are two fold: one is that diffusion-based methods are able to address the oversmoothing problem (Chamberlain et al. 2021; Wang et al. 2021). The other is that the diffusion equation can be seen as partial differential equations (PDEs) (Chamberlain et al. 2021) and thus can introduce many schemes that have many good properties such as fast convergence rate and high stability. The Generalized Diffusion Graph Framework Notations. Consider an undirected graph as G = (V, E) with adjacency matrix A ∈Rn×n, where V contains n nodes {v1, . . . , vn} and E is the set of edges. The initial The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8708 node feature matrix is denoted as X(0) ∈Rn×q, where q is the dimension of node feature. We denote the neighbors of node i at exactly k hops/steps away as Nk(i). For example, N1(i) = {j : (i, j) ∈E} are the immediate neighbors of i. Diffusion is a physical process that equilibrates concentration differences without creating or destroying mass. This physical observation can be easily cast in the diffusion equation, which is a parabolic partial differential equation. Fick’s law of diffusion describes the equilibration property (Weickert 1998): J = −f · ∇u, (1) where J is the diffusion flux, which measures the amount of substance that flows through a unit area during a unit time interval. f is the diffusivity coefficient, which can be constant or depend on time and position. ∇u is the concentration gradient. This equation states that a concentration gradient ∇u causes a flux J which aims to compensate for this gradient. The observation that a change in concentration in any part of the system is due to the influx and outflux of substance into and out of that part of the system can be expressed by the continuity equation: ∂u ∂t = −div J, (2) where t denotes the time. Plugging in Fick’s law (1) into the continuity equation we end up with the diffusion equation: ∂u ∂t = div(f · ∇u). (3) As div is the sum of the second derivatives in all directions, please note that normal first order derivatives and second order derivatives are on continuous space and can not be generalized directly to graph which is on discrete space. As (Chamberlain et al. 2021) defined, the first derivative is the difference between the feature of a node and its neighbor. And the second derivative can be considered as the difference between the first derivatives of the node itself and its neighbors. For better illustration, we provide an example. Consider a chain graph in Figure 1, where i, i + 1, and i −1 are the indexes of the nodes. The feature of node i is denoted as xi. Figure 1: Chain graph The first order derivatives on node i is defined as xi+1 −xi and xi −xi−1. The diffusion flux from node j to node i at time t on a graph is: J(t) ij = −f · (∇x)(t) ij = −f(x(t) j −x(t) i ). 
(4) The second order derivative is the difference of first order derivatives: (xi+1 −xi) −(xi −xi−1) = (xi+1 −xi) + (xi−1 −xi). We notice that the chain graph only has one dimension, so the divergence of node i on a chain graph is equal to its second order derivative: div(∇xi) = (xi+1 − xi) + (xi−1 −xi). Thus, on a normal graph, we have the generalized form: div(∇xi) = P j∈N1(i)(xj −xi). Here we normalize the diffusion process utilizing the degree of the nodes to down-weight the high-degree neighbors, and we have div(∇xi) = P j∈N1(i) ˜Aij p ˜di q ˜dj (xj −xi), where ˜Aij is the element of ˜A = A + I, and di = P j ˜Aij. So the diffusion equation on node i can be defined as: ∂x(t) i ∂t = −div J(t) ij = div[f(∇x)(t) ij ] = f X j∈N1(i) ˜Aij p ˜di q ˜dj (x(t) j −x(t) i ). (5) The diffusion equation models the change of representation x(t) i with respect to t, which depends on the difference between the nearby nodes, implying that the greater the difference between a node and its neighbors, the faster it changes. However, how fast x(t) i changes should not only depend on the representation difference between node i and its neighbors, otherwise, it will cause oversmoothing issue, i.e., as the diffusion process goes by, the nodes are not distinguishable. Based on this phenomenon, we think that the representation change of x(t) i should be also related with the node feature x(0) i itself, i.e., if the difference between x(t) i and x(0) i is small, the change of x(t) i should also be small. Then we add another fidelity term and obtain our general graph diffusion framework as follows: ∂x(t) i ∂t = α(x(0) i −x(t) i ) + β div(f(∇x)(t) ij ), (6) where α, β are coefficients. Remark 1. (6) can be derived from the energy function: E(x) := Z Ω  α · (xi −x(0) i )2 + β · |f(∇x)ij|2 dθ, (7) where θ represents the position of the nodes, and Ωrepresents the entire graph domain. The corresponding Euler–Lagrange equation, which gives the necessary condition for an extremum of (7), is given by: 0 = α(xi −x(0) i ) + β div(f(∇x)ij). (8) (8) can also be regarded as the steady-state equation of (6). Based on the energy function, we can see that (6) constrains space variation and time variation of the diffusion process, indicating that the representations of the graph nodes will not change too much between nearby nodes, as well as not change too much from the initial features. Remark 2. The framework (6) is closely related to many GNNs, such as GCN/SGC (Wu et al. 2019), APPNP (Klicpera, Bojchevski, and Günnemann 2018), GAT (Veliˇckovi´c et al. 2017), AMP (Liu et al. 2021), DAGNN (Liu, Gao, and Ji 2020), as demonstrated by the following propositions. We provide the proofs of all the subsequent propositions in Appendix. Proposition 1. With α = 0, β = 1, ∆t = 1 and f = 1 in (6), the diffusion process in SGC/GCN is: ∂x(t) i ∂t = div((∇x)(t) ij ). (9) The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8709 Proposition 2. Introducing η as coefficient, with α = 1, β = 1 −1 η , ∆t = 1 and f = 1 in (6), the diffusion process in APPNP is: ∂x(t) i ∂t = (x(0) i −x(t) i ) + (1 −1 η ) div((∇x)(t) ij ). (10) Proposition 3. With α = 0, β = 1, ∆t = 1 and the learned similarity coefficient f (t) ij between nodes i and j at time t in (6), the diffusion process in GAT is: ∂x(t) i ∂t = div(f (t) ij (∇x)(t) ij ). (11) Proposition 4. With stepsize ϵ and coefficient λ, β(t) i = max 1 − ϵλ (1−2ϵ(1−λ))X(t) i +2ϵ(1−λ) ˜AX(t) i −(X(0))i 2 , 0 ! . 
Let α = 1 −β(t) i , β = 2ϵ(1 −λ)β(t) i , ∆t = 1 and f = 1 in (6), the diffusion process in AMP is: ∂x(t) i ∂t = (1−β(t) i )(x(0) i −x(t) i )+2ϵ(1−λ)β(t) i div((∇x)(t) ij ). (12) Proposition 5. With α + β = 1, ∆t = 1 and the learned coeffiecient f(t) at time t satisfying PT t=0(βf(t))t = 1 α in (6), the diffusion process in DAGNN is: ∂x(t) i ∂t = α(x(0) i −x(t) i ) + β div(f(t)((∇x)(t) ij )). (13) The High-order Graph Diffusion Network High-order Graph Diffusion Equation (a) (b) (c) Figure 2: Illustration of the same node pair in different contexts. In the first-order diffusion process, the diffusion equation only considers 1-hop neighbors. As shown in Figure 2, the nodes in (a), (b), and (c) are the same, but the structures are different. The first-order diffusion flux from node j to node i will be the same, even if the local structure of node i and j is very different. Based on the specific local environments, the diffusion flux should be either different, so as to provide more additional information and make the learned representations more discriminative. To better understand the effect of local structures, we conduct an experiment on six widely used graphs to evaluate the effect of 2-hop neighbors. First, we have the following definition of k-hop neighbor similarity score. Definition 1. Let yi be the label of node i, the k-hop neighbor similarity score hk = | P i∈V 1yi=O({yj,j∈Nk(i)})| |Nk(i)| , and ha+b = | P i∈V 1yi=O({yj,j∈{Na(i),Nb(i)}})| |{Na(i), Nb(i)}| , where O({yj, j ∈Nk(i)}) represents the element with the highest frequency in {yj, j ∈Nk(i)}. The similarity score is based on node labels, and higher similarity score implies the labels of a node and its k-hop neighbors are more consistent. The scores of the six graphs are shown in Table 1. Interestingly, we find that the labels of 2-hop neighbors show monophily property (Altenburger and Ugander 2018), i.e., as can be seen from both the homophily graphs (Cora, Citeseer, Pubmed) and heterophily graphs (Chameleon, Squirrel, Actor), without requiring the similarity among first-order neighbors, the second-order neighbors are more likely to have the same labels. cora citeseer pubmed chameleon squirrel actor h1 0.8634 0.7385 0.7920 0.2530 0.1459 0.2287 h2 0.8696 0.8476 0.7885 0.3131 0.1600 0.3716 h1+2 0.8737 0.8206 0.7880 0.3070 0.1530 0.3363 Table 1: The similarity scores of six graphs. To take advantage of 2-hop neighbors, we regularize the gradient ∇xi utilizing the average gradient of 1-hop neighbors: (∇x)j = avg(∇xjk) = X k∈N1(j) ˜Ajk q ˜dj p ˜dk (xk −xj). (14) We propose the high-order graph diffusion equation: ∂x(t) i ∂t = α(x(0) i −x(t) i )+β div(f(∇x(t) ij )+γ(∇x)(t) j ), (15) where γ is the parameter of the regularization term. The iteration step of (15) is: xt+∆t i = α∆tx(0) i + (1 −α∆t)x(t) i + β∆t div(f(∇x(t) i )) + βγ∆t div((∇x)(t) j ), (16) which is the diffusion-based message passing scheme (DMP) of our model. We can see that DMP utilizes the 2-hop neighbors’ information, where the advantages are two-fold: one is that the 2-hop neighbors capture the local environment around a node, even if there are some abnormal features among 1-hop neighbors, their negative effect can still be alleviated by considering a larger neighborhood size, making the learning process more robust. 
The other is that the monophily property of 2-hop neighbors provides additional stronger correlation with labels, thus even if the 1-hop neighbors may be heterophily, DMP can still make better predictions with information diffused from 2-hop neighbors. Comparison with other GNN models. Though existing GNN iteration steps can capture high-order connectivity through iterative adjacent message passing, they still have their limitations while having the same time complexity as DMP. DMP is superior because it can utilize the monophily property, adjust the balance between first-order and secondorder neighbors, and is based on diffusion equation which has some unique characteristics. More comparisons are discussed in Appendix. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8710 Theoretical Analysis Next, we theoretically analyze some properties of our diffusion-based message passing scheme. Definition 2. Consider a surfer walks from node j to i with probability Pij. Let Xt be a random variable representing the node visited by the surfer at time t. The probability Pij can be represented as a conditional probability P [Xt = i | Xt−∆t = j]. Let Pij =        1 −(α + β)∆t, i = j (β −βγ)∆t ˆ˜Aij, j ∈N1(i) βγ∆tBij, j ∈N2(i) α∆t, restart, (17) where ˆ˜A = ˜D−1 2 ˜A ˜D−1 2 , Bij is the element of B = ˆ˜A2, and restart means that the node i will teleport back to the initial root node i. Based on the definition, we have the following propositions. Proposition 6. Given the probability H(t) ij = P [Xt = i | X0 = j], DMP (16) is equivalent to the second-order random walk with the transition probability Pij in (17): x(t) i = X j∈V H(t) ij x(0) j . (18) Proposition 7. With f = 1, α, β, γ, ∆t ∈ (0, 1], DMP (16) converges, i.e., when t →∞, X(∞) = α((α + β)I −β(1 −γ) ˆ˜A −βγ ˆ˜A2)−1X(0). Proposition 8. When t →∞, the representations of any two nodes on the graph will not be the same as long as the two nodes have different initial features, i.e., ∀i, j ∈V , if x(0) i ̸= x(0) j , then x(t) i ̸= x(t) j as t →∞. The proofs of the above propositions are in Appendix. Our Proposed HiD-Net To incorporate the high-order graph diffusion DMP (16) into deep neural networks, we introduce High-Order Graph Diffusion Network (HiD-Net). In this work, we follow the decoupled way as proposed in APPNP (Klicpera, Bojchevski, and Günnemann 2018): Y′ = DMP  lω  X(0) , t, ∆t, α, β, γ  , (19) where lω is a representation learning model such as an MLP network, ω is the learnable parameters in the model. The training objective is to minimize the cross entropy loss defined by the final prediction Y′ and labels for training data. Because of DMP, HiD-Net is more robust and works well on both homophily and heterophily graphs in comparison with other graph diffusion networks. Time complexity. The time complexity of HiD-Net can be optimized as O(n2ζ), which is the same as the propagation step of GCN, where n is the number of the nodes, ζ is the dimension of the feature vector. We provide the proof in Appendix. Experiments Node Classification Datasets. For comprehensive comparison, we use seven realworld datasets to evaluate the performance of node classification. They are three citation graphs, i.e., Cora, Citeseer, Pubmed (Kipf and Welling 2016a), two Wikipedia networks, i.e., Chameleon and Squirrel (Pei et al. 2020), one Actor cooccurrence network Actor (Pei et al. 2020), one Open Graph Benchmark(OGB) graph ogbn-arxiv(Hu et al. 2020). 
Among the seven datasets, Cora, Citeseer, Pubmed and ogbn-arxiv are homophily graphs, Chameleon, Squirrel, and Actor are heterophily graphs. Details of datasets are in Appendix. Baselines. The proposed HiD-Net is compared with several representative GNNs, including three traditional GNNs: GCN (Kipf and Welling 2016a), GAT (Veliˇckovi´c et al. 2017), APPNP (Klicpera, Bojchevski, and Günnemann 2018), and four graph diffusion networks: GRAND (Chamberlain et al. 2021), GRAND++ (Thorpe et al. 2022), ADC (Zhao et al. 2021), DGC (Wang et al. 2021). They are implemented based on their open repositories, where the code can be found in Appendix. Experimental setting. We perform a hyperparameter search for HiD-Net on all datasets and the details of hyperparameter can be seen in Appendix. For other baseline models: GCN, GAT, APPNP, GRAND, GRAND++, DGC, and ADC, we follow the parameters suggested by (Kipf and Welling 2016a; Veliˇckovi´c et al. 2017; Klicpera, Bojchevski, and Günnemann 2018; Chamberlain et al. 2021; Thorpe et al. 2022; Wang et al. 2021; Zhao et al. 2021) on Cora, Citeseer, and Pubmed, and carefully fine-tune them to get optimal performance on Chameleon, Squirrel, and Actor. For all methods, we randomly run 5 times and report the mean and variance. More detailed experimental settings can be seen in Appendix. Results. Table 2 summarizes the test results. Please note that OGB prepares standardized evaluators for testing results and it only provides accuracy metric for ogbn-arxiv. As can be seen, HiD-Net outperforms other baselines on seven datasets. Moreover, in comparison with the graph diffusion networks, our HiD-Net is generally better than them with a large margin on heterophily graphs, which indicates that our designed graph diffusion process is more practical for different types of graphs. Robustness Analysis Utilizing the information from 2-hop neighbors, our model is more robust in abnormal situations. We comprehensively evaluate the robustness of our model on three datasets (Cora, Citeseer and Squirrel) in terms of attacks on edges and features, respectively. Attacks on edges. To attack edges, we adopt random edge deletions or additions following (Chen, Wu, and Zaki 2020; Franceschi et al. 2019). For edge deletions and additions, we randomly remove or add 5%, 10%, 15%, 20%, 25%, 30%, 35%, 40% of the original edges, which retains the connectivity of the attacked graph. Then we perform node classification task. All the experiments are conducted 5 times and we report the average accuracy. The results are plotted in Figure 3 and Figure 4. 
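For concreteness, the edge perturbation protocol just described can be sketched as follows; the connectivity check mentioned above is omitted, and the function name is ours.

```python
import random

def perturb_edges(edges, num_nodes, rate, mode="delete", seed=0):
    """Randomly delete or add rate * |E| edges of an undirected graph.
    `edges` is a list of (u, v) pairs. A sketch only: the paper
    additionally keeps the attacked graph connected under deletion."""
    rng = random.Random(seed)
    edge_set = set(map(tuple, edges))
    k = int(rate * len(edge_set))
    if mode == "delete":
        edge_set -= set(rng.sample(sorted(edge_set), k))
    else:  # add k new random edges not already present
        while k > 0:
            u, v = rng.randrange(num_nodes), rng.randrange(num_nodes)
            if u != v and (u, v) not in edge_set and (v, u) not in edge_set:
                edge_set.add((u, v))
                k -= 1
    return list(edge_set)
```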
Dataset | Metric | GCN | GAT | APPNP | GRAND | GRAND++ | DGC | ADC | HiD-Net
Cora | F1-macro | 81.5±0.6 | 79.7±0.4 | 82.2±0.5 | 79.4±2.4 | 81.3±3.3 | 82.1±0.1 | 80.0±1.0 | 82.8±0.6
Cora | F1-micro | 82.5±0.6 | 80.1±0.8 | 83.2±0.2 | 80.1±2.7 | 82.95±1.4 | 83.1±0.1 | 81.0±0.7 | 84.0±0.6
Cora | AUC | 97.3±0.1 | 96.4±0.5 | 97.5±0.1 | 96.0±0.3 | 97.3±0.5 | 97.2±0.0 | 97.1±0.1 | 97.6±0.0
Citeseer | F1-macro | 66.4±0.4 | 68.5±0.3 | 67.7±0.6 | 64.9±1.5 | 66.4±2.6 | 68.3±0.4 | 47.0±1.4 | 69.5±0.6
Citeseer | F1-micro | 69.9±0.5 | 72.2±0.3 | 71.0±0.4 | 68.6±1.7 | 70.9±2.3 | 72.5±0.4 | 53.7±1.5 | 73.2±0.2
Citeseer | AUC | 89.9±0.4 | 90.2±0.1 | 90.3±0.0 | 89.5±0.8 | 91.2±2.8 | 91.0±0.0 | 87.1±1.1 | 91.5±0.1
Pubmed | F1-macro | 78.4±0.2 | 76.7±0.5 | 79.3±0.2 | 77.5±3.2 | 78.9±2.5 | 78.4±0.1 | 73.7±2.3 | 80.1±0.1
Pubmed | F1-micro | 79.1±0.4 | 77.3±0.4 | 79.9±0.3 | 78.0±3.2 | 79.8±1.6 | 79.2±0.1 | 74.3±2.3 | 81.1±0.1
Pubmed | AUC | 91.2±0.2 | 90.3±0.5 | 92.2±0.1 | 90.7±1.6 | 91.5±2.2 | 92.0±0.0 | 89.1±1.9 | 92.2±0.1
Chameleon | F1-macro | 38.5±2.1 | 45.0±1.0 | 57.5±1.0 | 35.7±1.8 | 46.3±2.4 | 58.0±0.1 | 32.6±0.6 | 61.0±0.3
Chameleon | F1-micro | 41.8±1.2 | 44.4±1.9 | 57.1±1.4 | 37.7±1.5 | 45.7±3.4 | 58.2±0.1 | 33.2±0.5 | 60.8±0.7
Chameleon | AUC | 69.8±0.5 | 75.5±1.0 | 85.0±0.6 | 69.0±1.3 | 74.8±2.8 | 82.4±0.0 | 63.7±0.8 | 85.2±0.3
Squirrel | F1-macro | 25.2±1.2 | 26.5±1.3 | 41.1±1.1 | 24.7±2.0 | 30.5±3.7 | 42.1±0.4 | 24.7±1.2 | 47.5±0.9
Squirrel | F1-micro | 25.8±0.8 | 27.3±0.7 | 43.2±1.0 | 28.6±1.0 | 34.6±2.5 | 43.1±0.3 | 25.4±1.0 | 48.4±0.8
Squirrel | AUC | 57.5±0.5 | 58.2±1.1 | 78.9±0.3 | 60.2±1.0 | 65.6±1.4 | 74.3±0.0 | 55.0±2.4 | 79.4±0.3
Actor | F1-macro | 21.5±0.4 | 19.7±0.8 | 30.3±4.7 | 28.0±1.1 | 30.4±1.1 | 31.6±0.0 | 20.0±0.6 | 25.7±0.44
Actor | F1-micro | 29.2±0.6 | 27.1±0.5 | 33.2±0.6 | 32.5±1.0 | 33.7±2.3 | 34.1±0.0 | 25.5±0.5 | 34.7±0.4
Actor | AUC | 58.0±0.5 | 55.8±0.4 | 64.8±0.1 | 56.2±2.0 | 60.8±0.6 | 64.7±0.0 | 53.8±0.1 | 68.1±0.2
ogbn-arxiv | Accuracy | 71.5±0.3 | 71.6±0.5 | 71.2±0.3 | 71.7±0.1 | 71.9±0.6 | 70.9±0.2 | 70.0±0.1 | 72.2±0.1
Table 2: Quantitative results (%±σ) on node classification. (bold: best)

[Figure 3: Results of different models under random edge addition; test accuracy vs. addition rate (0.00-0.40) on Cora, Citeseer, and Squirrel.]
[Figure 4: Results of different models under random edge deletion; test accuracy vs. deletion rate (0.00-0.40) on Cora, Citeseer, and Squirrel.]

From the figures, we can see that as the addition or deletion rate rises, the performance of all models on the three datasets degenerates, and HiD-Net consistently outperforms the other baselines.

Attacks on features. To attack features, we inject random perturbations into the node features as in (Wu et al. 2020a). First, we sample a noise matrix M ∈ R^{n×q}, where each entry of M is drawn from the normal distribution N(0, 1). Then, we calculate the reference amplitude r, defined as the mean of the maximal value of each node's features. We add the Gaussian noise µ · r · M to the original feature matrix to obtain the attacked feature matrix, where µ is the noise ratio. The results are reported in Figure 5. Again, HiD-Net consistently outperforms all other baselines under different perturbation rates by a clear margin on the three datasets.

[Figure 5: Results of different models under random feature perturbation; test accuracy vs. feature noise rate (0.00-0.40) on Cora, Citeseer, and Squirrel.]

Non-over-smoothing with Increasing Steps
To demonstrate that our model mitigates the oversmoothing problem compared with other graph diffusion networks, we test the graph diffusion models with the propagation step k increasing from 2 to 20. Baselines include DGC, ADC, and GRAND. The results are plotted in Figure 7. We can see that as k increases, HiD-Net consistently performs better than the other baselines.

[Figure 7: Non-over-smoothing with increasing steps; panels (a) Chameleon and (b) Actor.]
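For concreteness, the propagation that is iterated k times in this experiment can be sketched in the random-walk form of (17), which Proposition 6 shows to be equivalent to DMP (16); this is an illustrative NumPy sketch, not the released implementation.

```python
import numpy as np

def dmp_step(X, X0, A_hat, alpha, beta, gamma, dt=1.0):
    """One DMP update written via the transition probabilities in (17):
    keep 1-(alpha+beta)*dt of the mass on the node itself, spread
    beta*(1-gamma)*dt over 1-hop neighbors (A_hat) and beta*gamma*dt over
    2-hop neighbors (A_hat^2), and restart to the initial features X0
    with weight alpha*dt. A_hat = D^{-1/2}(A + I)D^{-1/2}."""
    one_hop = A_hat @ X
    two_hop = A_hat @ one_hop  # equals (A_hat @ A_hat) @ X
    return (alpha * dt * X0
            + (1.0 - (alpha + beta) * dt) * X
            + beta * (1.0 - gamma) * dt * one_hop
            + beta * gamma * dt * two_hop)

# k propagation steps, as varied from 2 to 20 above:
# X = X0
# for _ in range(k):
#     X = dmp_step(X, X0, A_hat, alpha, beta, gamma)
```

By Propositions 7 and 8, iterating this update with alpha, beta, gamma, dt in (0, 1] converges without collapsing all node representations to a common vector, which is exactly the behavior probed in this experiment.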
Parameter Study
In this section, we investigate the sensitivity of the parameters on all datasets.
Analysis of α. We test the effect of α in (16), varying it from 0 to 1. From Figure 8(a), we can see that as α increases, the performance on the citation graphs rises first and then drops slowly, the performance on Chameleon and Squirrel changes little, and the performance on Actor rises first and then plateaus. Since the citation graphs are more homophilous, the model should focus less on the node itself, implying a small α, whereas on heterophilous graphs it should focus more on the node itself.
Analysis of β. To check the impact of the diffusion term, we study the performance of HiD-Net with β varying from 0 to 1. The results are shown in Figure 8(b). As β increases, the accuracies generally increase, while the accuracy on Actor remains relatively stable, implying that the features diffused from 1-hop and 2-hop neighbors are highly informative.
Analysis of γ. Finally, we test the effect of γ in (16), varying it from 0 to 0.6. The accuracies on the different datasets change little with γ, so we plot each dataset separately for clearer illustration. As can be seen in Figure 6, as γ increases, the performance on Cora and Chameleon rises first and then drops, and different graphs have different best choices of γ. The results on the other datasets are shown in Appendix.

[Figure 6: Analysis of parameter γ; accuracy vs. γ on (a) Cora (roughly 0.826-0.840) and (b) Chameleon (roughly 0.614-0.626).]
[Figure 8: Analysis of parameters α and β; panels (a) α and (b) β.]

Conclusion
In this paper, we propose a generalized diffusion graph framework, which establishes the relation between the diffusion equation and different GNNs. Our framework reveals that current graph diffusion networks mainly consider the first-order diffusion equation; then, based on our finding of the monophily property of labels, we derive a novel high-order graph diffusion network (HiD-Net). HiD-Net is more robust and general on both homophily and heterophily graphs. Extensive experimental results verify the effectiveness of HiD-Net. One potential issue is that our model uses a constant diffusivity coefficient, and a future direction is to explore a learnable diffusivity coefficient that depends on time and position. Our work formally establishes the relation between the diffusion equation and a wide variety of GNNs.
Considering that previous GNNs are designed mainly based on spatial or spectral strategies, this new framework may open a new path to understanding and deriving novel GNNs. We believe that more insights from the research community on the diffusion process will hold great potential for the GNN community in the future. Acknowledgments This work is supported in part by the National Natural Science Foundation of China (No. U20B2045, 62192784, U22B2038, 62002029, 62172052, 62322203). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8713 References Abu-El-Haija, S.; Perozzi, B.; Kapoor, A.; Alipourfard, N.; Lerman, K.; Harutyunyan, H.; Ver Steeg, G.; and Galstyan, A. 2019. Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In international conference on machine learning, 21–29. PMLR. Altenburger, K. M.; and Ugander, J. 2018. Monophily in social networks introduces similarity among friends-of-friends. Nature human behaviour, 2(4): 284–290. Bruna, J.; Zaremba, W.; Szlam, A.; and LeCun, Y. 2013. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203. Chamberlain, B.; Rowbottom, J.; Gorinova, M. I.; Bronstein, M.; Webb, S.; and Rossi, E. 2021. Grand: Graph neural diffusion. In International Conference on Machine Learning, 1407–1418. PMLR. Chen, Y.; Wu, L.; and Zaki, M. 2020. Iterative deep graph learning for graph neural networks: Better and robust node embeddings. Advances in Neural Information Processing Systems, 33: 19314–19326. Chung, F. R.; and Graham, F. C. 1997. Spectral graph theory. 92. American Mathematical Soc. Defferrard, M.; Bresson, X.; and Vandergheynst, P. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. Advances in neural information processing systems, 29. Franceschi, L.; Niepert, M.; Pontil, M.; and He, X. 2019. Learning discrete structures for graph neural networks. In International conference on machine learning, 1972–1982. PMLR. Gao, H.; and Ji, S. 2019. Graph u-nets. In international conference on machine learning, 2083–2092. PMLR. Hamilton, W.; Ying, Z.; and Leskovec, J. 2017. Inductive representation learning on large graphs. Advances in neural information processing systems, 30. Hu, W.; Fey, M.; Zitnik, M.; Dong, Y.; Ren, H.; Liu, B.; Catasta, M.; and Leskovec, J. 2020. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33: 22118–22133. Kipf, T. N.; and Welling, M. 2016a. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Kipf, T. N.; and Welling, M. 2016b. Variational graph autoencoders. arXiv preprint arXiv:1611.07308. Klicpera, J.; Bojchevski, A.; and Günnemann, S. 2018. Predict then propagate: Graph neural networks meet personalized pagerank. arXiv preprint arXiv:1810.05997. Liu, M.; Gao, H.; and Ji, S. 2020. Towards deeper graph neural networks. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, 338–348. Liu, X.; Ding, J.; Jin, W.; Xu, H.; Ma, Y.; Liu, Z.; and Tang, J. 2021. Graph Neural Networks with Adaptive Residual. Advances in Neural Information Processing Systems, 34. Ma, Y.; Liu, X.; Zhao, T.; Liu, Y.; Tang, J.; and Shah, N. 2021. A unified view on graph neural networks as graph signal denoising. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 1202– 1211. Pei, H.; Wei, B.; Chang, K. C.-C.; Lei, Y.; and Yang, B. 2020. 
Geom-gcn: Geometric graph convolutional networks. arXiv preprint arXiv:2002.05287. Song, Y.; Kang, Q.; Wang, S.; Kai, Z.; and Tay, W. P. 2022. On the robustness of graph neural diffusion to topology perturbations. arXiv preprint arXiv:2209.07754. Thorpe, M.; Nguyen, T. M.; Xia, H.; Strohmer, T.; Bertozzi, A.; Osher, S.; and Wang, B. 2022. GRAND++: Graph neural diffusion with a source term. In International Conference on Learning Representations. Veliˇckovi´c, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903. Wang, Y.; Wang, Y.; Yang, J.; and Lin, Z. 2021. Dissecting the diffusion process in linear graph convolutional networks. Advances in Neural Information Processing Systems, 34. Weickert, J. 1998. Anisotropic diffusion in image processing, volume 1. Teubner Stuttgart. Wu, F.; Souza, A.; Zhang, T.; Fifty, C.; Yu, T.; and Weinberger, K. 2019. Simplifying graph convolutional networks. In International conference on machine learning, 6861–6871. PMLR. Wu, J.; He, J.; and Xu, J. 2019. Net: Degree-specific graph neural networks for node and graph classification. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 406–415. Wu, T.; Ren, H.; Li, P.; and Leskovec, J. 2020a. Graph information bottleneck. Advances in Neural Information Processing Systems, 33: 20437–20448. Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; and Philip, S. Y. 2020b. A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems, 32(1): 4–24. You, J.; Ying, R.; and Leskovec, J. 2019. Position-aware graph neural networks. In International Conference on Machine Learning, 7134–7143. PMLR. Yu, X.; Liu, Z.; Fang, Y.; and Zhang, X. 2023. Learning to Count Isomorphisms with Graph Neural Networks. In Williams, B.; Chen, Y.; and Neville, J., eds., ThirtySeventh AAAI Conference on Artificial Intelligence, AAAI 2023, Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence, IAAI 2023, Thirteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2023, Washington, DC, USA, February 7-14, 2023, 4845–4853. AAAI Press. Zang, C.; and Wang, F. 2020. Differential Deep Learning on Graphs and its Applications. Zhang, M.; Cui, Z.; Neumann, M.; and Chen, Y. 2018. An end-to-end deep learning architecture for graph classification. In Thirty-second AAAI conference on artificial intelligence. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8714 Zhao, J.; Dong, Y.; Ding, M.; Kharlamov, E.; and Tang, J. 2021. Adaptive Diffusion in Graph Neural Networks. Advances in Neural Information Processing Systems, 34. Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; and Sun, M. 2020. Graph neural networks: A review of methods and applications. AI Open, 1: 57–81. Zhu, M.; Wang, X.; Shi, C.; Ji, H.; and Cui, P. 2021. Interpreting and unifying graph neural networks with an optimization framework. In Proceedings of the Web Conference 2021, 1215–1226. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8715
Learning to Rank in Generative Retrieval Yongqi Li1, Nan Yang2, Liang Wang2, Furu Wei2, Wenjie Li1, 1The Hong Kong Polytechnic University 2Microsoft [email protected], {nanya,wangliang,fuwei}@microsoft.com, [email protected] Abstract Generative retrieval stands out as a promising new paradigm in text retrieval that aims to generate identifier strings of relevant passages as the retrieval target. This generative paradigm taps into powerful generative language models, distinct from traditional sparse or dense retrieval methods. However, only learning to generate is insufficient for generative retrieval. Generative retrieval learns to generate identifiers of relevant passages as an intermediate goal and then converts predicted identifiers into the final passage rank list. The disconnect between the learning objective of autoregressive models and the desired passage ranking target leads to a learning gap. To bridge this gap, we propose a learning-to-rank framework for generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn to rank passages directly, optimizing the autoregressive model toward the final passage ranking target via a rank loss. This framework only requires an additional learning-to-rank training phase to enhance current generative retrieval systems and does not add any burden to the inference stage. We conducted experiments on three public benchmarks, and the results demonstrate that LTRGR achieves state-of-the-art performance among generative retrieval methods. The code and checkpoints are released at https://github.com/liyongqi67/LTRGR. Introduction Text retrieval is a crucial task in information retrieval and has a significant impact on various language systems, including search ranking (Nogueira and Cho 2019) and open-domain question answering (Chen et al. 2017). At its core, text retrieval involves learning a ranking model that assigns scores to documents based on a given query, a process known as learning to rank. This approach has been enduringly popular for decades and has evolved into point-wise, pair-wise, and list-wise methods. Currently, the dominant implementation is the dual-encoder approach (Lee, Chang, and Toutanova 2019; Karpukhin et al. 2020), which encodes queries and passages into vectors in a semantic space and employs a list-wise loss to learn the similarities. An emerging alternative to the dual-encoder approach in text retrieval is generative retrieval (Tay et al. 2022; Bevilacqua et al. 2022). Generative retrieval employs autoregressive Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. language models to generate identifier strings of passages as an intermediate target for retrieval. An identifier is a distinctive string to represent a passage, such as Wikipedia titles to Wikipedia passages. The predicted identifiers are then mapped to ranked passages as the retrieval results. In this manner, generative retrieval treats passage retrieval as a standard sequence-to-sequence task, maximizing the likelihood of the passage identifiers given the input query, distinct from previous learning-to-rank approaches. There are two main approaches to generative retrieval regarding the identifier types. One approach, exemplified by the DSI system and its variants (Tay et al. 2022), assigns a unique numeric ID to each passage, allowing predicted numeric IDs to directly correspond to passages on a one-to-one basis. 
However, this approach requires memorizing the mappings from passages to their numeric IDs, making it ineffective for large corpora. The other approach (Bevilacqua et al. 2022) takes text spans from the passages as identifiers. While text span-based identifiers are effective on a large-scale corpus, they no longer uniquely correspond to the passages. In their work, a heuristic-based function is employed to rank all the passages associated with the predicted identifiers. Following this line, Li et al. proposed using multiview identifiers, which have achieved comparable results on commonly used benchmarks with a large-scale corpus. In this work, we follow the latter approach to generative retrieval. Despite its rapid development and substantial potential, generative retrieval remains constrained. It relies on a heuristic function to convert predicted identifiers into a passage rank list, which requires sensitive hyperparameters and sits outside the learning framework. More importantly, generative retrieval generates identifiers as an intermediate goal rather than directly ranking candidate passages. This disconnect between the learning objective of generative retrieval and the intended passage ranking target creates a learning gap. Consequently, even though the autoregressive model becomes proficient in generating accurate identifiers, the predicted identifiers cannot ensure an optimal passage ranking order. Tackling the aforementioned issues is challenging, as they are inherent to the novel generative paradigm in text retrieval. However, a silver lining emerges from the extensive evolution of the learning-to-rank paradigm, which has demonstrated adeptness in optimizing the passage ranking objective. Inspired by this progress, we propose to enhance generative retrieval to not solely generate fragments of passages but to directly acquire the skill of ranking passages. This shift aims to bridge the existing gap between the learning focus of generative retrieval and the envisaged passage ranking target. In pursuit of this goal, we propose a learning-to-rank framework for generative retrieval, dubbed LTRGR. LTRGR involves two distinct training phases, as depicted in Figure 1: the learning-to-generate phase and the learning-to-rank phase. In the initial learning-to-generate phase, we train an autoregressive model, consistent with prior generative retrieval methods, via the generation loss, which takes queries as input and outputs the identifiers of target passages. Subsequently, the queries from the training dataset are fed into the trained generative model to predict associated identifiers. These predicted identifiers are mapped to a passage rank list via a heuristic function. The subsequent learning-to-rank phase further trains the autoregressive model using a rank loss over the passage rank list, which optimizes the model toward the objective of the optimal passage ranking order. LTRGR includes the heuristic process in the learning procedure, rendering the whole retrieval process end-to-end and trained with the objective of passage ranking. During inference, we use the trained model to retrieve passages as in typical generative retrieval. Therefore, the LTRGR framework only requires an additional training phase and does not add any burden to the inference stage.
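To give a sense of what such a rank loss looks like, a pairwise margin formulation over passage scores is sketched below. The scores would be derived from the autoregressive model (e.g., by aggregating the likelihoods of a passage's predicted identifiers); LTRGR's exact loss is specified in the method section, so this is only an illustration of the learning-to-rank objective, and all names are ours.

```python
import torch
import torch.nn.functional as F

def pairwise_margin_rank_loss(pos_scores, neg_scores, margin=1.0):
    """Every relevant (positive) passage should outscore every irrelevant
    (negative) passage in the rank list by at least `margin`.
    pos_scores: (P,) and neg_scores: (N,) are model-derived passage scores."""
    diff = neg_scores.unsqueeze(0) - pos_scores.unsqueeze(1) + margin  # (P, N)
    return F.relu(diff).mean()
```

Because such a loss is computed on passage scores rather than on identifier tokens, its gradient pushes the autoregressive model directly toward the final ranking target.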
We evaluate our proposed method on three widely used datasets, and the results demonstrate that LTRGR achieves the best performance in generative retrieval. The key contributions are summarized as follows:
• We introduce the concept of incorporating learning to rank within generative retrieval, effectively aligning the learning objective of generative retrieval with the desired passage ranking target.
• LTRGR establishes a connection between the generative retrieval paradigm and the classical learning-to-rank paradigm. This connection opens doors for potential advancements in this area, including exploring diverse rank loss functions and negative sample mining.
• With only an additional learning-to-rank training phase and no added burden on inference, LTRGR achieves state-of-the-art performance in generative retrieval on three widely-used benchmarks.

Related Work

Generative Retrieval
Generative retrieval is an emerging new retrieval paradigm, which generates identifier strings of passages as the retrieval target. Instead of generating entire passages, this approach uses identifiers to reduce the amount of useless information and make it easier for the model to memorize and learn (Li et al. 2023b). Different types of identifiers have been explored in various search scenarios, including titles (Web URLs), numeric IDs, and substrings, as shown in previous studies (De Cao et al. 2020; Li et al. 2023a; Tay et al. 2022; Bevilacqua et al. 2022; Ren et al. 2023). In 2023, Li et al. proposed multiview identifiers that represent a passage from different perspectives to enhance generative retrieval, and achieved state-of-the-art performance. Despite the potential advantages of generative retrieval, there are still issues inherent in this new paradigm, as discussed in the previous section. Our work aims to address these issues by combining generative retrieval with the learning-to-rank paradigm.

Learning to Rank
Learning to rank refers to machine learning techniques used for training models in ranking tasks (Li 2011). This approach has been developed over several decades and is typically applied in document retrieval. Learning to rank can derive large-scale training data from search log data and automatically create the ranking model, making it one of the key technologies for modern web search. Learning to rank approaches can be categorized into point-wise (Cossock and Zhang 2006; Li, Wu, and Burges 2007; Crammer and Singer 2001), pair-wise (Freund et al. 2003; Burges et al. 2005), and list-wise (Cao et al. 2007; Xia et al. 2008) approaches based on the learning target. In the point-wise and pair-wise approaches, the ranking problem is transformed into classification and pair-wise classification, respectively. Therefore, the group structure of ranking is ignored in these approaches. The list-wise approach addresses the ranking problem more directly by taking ranking lists as instances in both learning and prediction. This approach maintains the group structure of ranking, and ranking evaluation measures can be more directly incorporated into the loss functions in learning.

Method
When given a query text $q$, the retrieval system must retrieve a list of passages $\{p_1, p_2, \ldots, p_n\}$ from a corpus $C$, where both queries and passages consist of a sequence of text tokens. As illustrated in Figure 1, LTRGR involves two training stages: learning to generate and learning to rank. In this section, we will first provide an overview of how a typical generative retrieval system works, i.e.,
learning to generate, and then clarify our learning-to-rank framework within the context of generative retrieval.

Learning to Generate
We first train an autoregressive language model using the standard sequence-to-sequence loss. In practice, we follow the current state-of-the-art (SOTA) generative retrieval method, MINDER (Li et al. 2023b), to train an autoregressive language model. Please refer to the MINDER paper for more details.

Training. We develop an autoregressive language model, referred to as AM, to generate multiview identifiers. The model takes as input the query text and an identifier prefix, and produces a corresponding identifier of the relevant passage as output. The identifier prefix can be one of three types: "title", "substring", or "pseudo-query", representing the three different views. The target text for each view is the title, a random substring, or a pseudo-query of the target passage, respectively. During training, the three different samples are randomly shuffled to train the autoregressive model.

Figure 1: This illustration depicts our proposed learning-to-rank framework for generative retrieval, which involves two stages of training. (a) Learning to generate: LTRGR first trains an autoregressive model via the generation loss, as a normal generative retrieval system. (b) Learning to rank: LTRGR continues training the model via the passage rank loss, which aligns the generative retrieval training with the desired passage ranking target.

For each training sample, the objective is to minimize the sum of the negative log-likelihoods of the tokens $\{i_1, \cdots, i_j, \cdots, i_l\}$ in a target identifier $I$, whose length is $l$. The generation loss is formulated as

$\mathcal{L}_{gen} = -\sum_{j=1}^{l} \log p_\theta(i_j \mid q; I_{<j})$,  (1)

where $I_{<j}$ denotes the partial identifier sequence $\{i_0, \cdots, i_{j-1}\}$, $i_0$ is a pre-defined start token, and $\theta$ denotes the trainable parameters of the autoregressive model AM.

Inference. During the inference process, given a query text, the trained autoregressive language model AM generates predicted identifiers in an autoregressive manner. The FM-index (Ferragina and Manzini 2000) data structure is used to support generating valid identifiers. Given a start token or a string, the FM-index can provide the list of possible token successors. Therefore, we store all identifiers of passages in $C$ in the FM-index and thus force the AM model to generate valid identifiers via constrained generation. Given a query $q$, we can set different identifier prefixes to generate a series of predicted identifiers $I$ via beam search, formulated as

$I = \mathrm{AM}(q; b; \text{FM-index})$,  (2)

where $b$ is the beam size for beam search.
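To make the constrained generation of Eq. 2 concrete, the following is a minimal sketch, not the authors' released code: the paper uses an FM-index over all passage identifiers, while here a simple token trie plays the same role of enumerating valid successor tokens, enforced through the `prefix_allowed_tokens_fn` hook of HuggingFace's `generate`. The toy identifiers, the query prefix, and the number of skipped decoder start tokens are illustrative assumptions.

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

def build_trie(identifier_token_lists):
    # Store every valid identifier as a path in a nested-dict trie.
    trie = {}
    for ids in identifier_token_lists:
        node = trie
        for tok in ids:
            node = node.setdefault(tok, {})
    return trie

def make_prefix_fn(trie, eos_id, n_skip=2):
    # Return the valid successor tokens for the identifier decoded so far;
    # n_skip drops BART's decoder start/BOS tokens (an assumption).
    def allowed(batch_id, input_ids):
        node = trie
        for tok in input_ids.tolist()[n_skip:]:
            node = node.get(tok)
            if node is None:
                return [eos_id]            # dead end: force termination
        return list(node.keys()) or [eos_id]
    return allowed

tok = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
identifiers = ["Prime Rate in Canada", "Canada Prime Rate"]     # toy corpus
trie = build_trie([tok.encode(i, add_special_tokens=False) + [tok.eos_token_id]
                   for i in identifiers])
inputs = tok("title: what is prime rate in canada", return_tensors="pt")
out = model.generate(**inputs, num_beams=15, num_return_sequences=2,
                     return_dict_in_generate=True, output_scores=True,
                     prefix_allowed_tokens_fn=make_prefix_fn(trie, tok.eos_token_id))
# out.sequences are predicted identifiers I; out.sequences_scores are the
# language model scores used as s_{i_p} in the heuristic ranking below.
```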
In order to retrieve passages from a large corpus, a heuristic function is employed to transform the predicted identifiers $I$ into a ranked list of passages. We give a brief explanation here; please refer to the original paper for details. For each passage $p \in C$, we select a subset $I_p$ from the predicted identifiers $I$, where $i_p \in I_p$ if $i_p$ is one of the identifiers of the passage $p$. The rank score of the passage $p$ corresponding to the query $q$ is then calculated as the sum of the scores of its covered identifiers,

$s(q, p) = \sum_{i_p \in I_p} s_{i_p}$,  (3)

where $s_{i_p}$ represents the language model score of the identifier $i_p$, and $I_p$ is the set of selected identifiers that appear in the passage $p$. By sorting the rank score $s(q, p)$, we are able to obtain a ranked list of passages from the corpus $C$. In practice, we can use the FM-index to efficiently locate those passages that contain at least one predicted identifier, rather than scoring all of the passages in the corpus.

Learning to Rank
As previously mentioned, it is insufficient for generative retrieval to only learn how to generate identifiers. Therefore, we develop a framework to enable generative retrieval to learn how to rank passages directly. To accomplish this, we continue training the autoregressive model AM using a passage rank loss.

To begin, we retrieve passages for all queries in the training set using the trained autoregressive language model AM after the learning-to-generate phase. For a given query $q$, we obtain a passage rank list $P = \{p_1, \cdots, p_j, \cdots, p_n\}$, where $n$ is the number of retrieved passages. Each passage $p_j$ is assigned a relevance score $s(q, p_j)$ via Eq. 3, which is calculated as the sum of the language model scores of a set of predicted identifiers. It is important to note that the passage rank list includes both positive passages that are relevant to the query and negative passages that are not. A reliable retrieval system should assign a higher score to positive passages than to negative passages, which is the goal of the learning-to-rank paradigm. To achieve this objective in generative retrieval, we utilize a margin-based rank loss, which is formulated as follows:

$\mathcal{L}_{rank} = \max(0, s(q, p_n) - s(q, p_p) + m)$,  (4)

where $p_p$ and $p_n$ represent a positive and a negative passage in the list $P$, respectively, and $m$ is the margin. Note that the gradients can be propagated to the autoregressive model AM via the language model scores $s_{i_p}$, which are computed from the logits of the neural network. In practice, we use two rank losses based on different sampling strategies for positive and negative passages. In $\mathcal{L}_{rank1}$, the positive and negative passages are the ones with the highest rank scores, respectively. In $\mathcal{L}_{rank2}$, both the positive and negative passages are randomly sampled from the passage rank list. While the rank loss optimizes the autoregressive model toward passage ranking, the generation of identifiers is also crucial for successful passage ranking. Therefore, we also incorporate the generation loss into the learning-to-rank stage. The final loss is formulated in a multi-task format:

$\mathcal{L} = \mathcal{L}_{rank1} + \mathcal{L}_{rank2} + \lambda \mathcal{L}_{gen}$,  (5)

where $\lambda$ is the weight to balance the rank losses and the generation loss. We continue training the autoregressive model AM via Eq. 5. After training, AM can be used to retrieve passages as introduced in the learning-to-generate section. Therefore, our learning-to-rank framework does not add any additional burden to the original inference stage.
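Eqs. 3 and 4 translate directly into a few lines of PyTorch. The sketch below is our illustration (the tensor shapes are assumptions); the point to note is that the passage score stays differentiable in the identifier scores, so the rank loss can back-propagate into the autoregressive model.

```python
import torch

def passage_scores(identifier_scores, coverage):
    # Eq. 3: s(q, p) = sum of the LM scores of the predicted identifiers
    # that occur in passage p.
    # identifier_scores: (K,) differentiable scores of the K predictions
    # coverage: (P, K) binary mask, coverage[p, k] = 1 iff identifier k
    #           is one of the identifiers of passage p
    return coverage @ identifier_scores          # (P,) rank scores

def margin_rank_loss(score_pos, score_neg, margin=500.0):
    # Eq. 4: hinge loss pushing the positive passage above the negative
    # one by at least `margin`.
    return torch.clamp(score_neg - score_pos + margin, min=0.0)
```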
Experiments

Datasets
We conducted experiments using the DPR (Karpukhin et al. 2020) setting on two widely-used open-domain QA datasets: NQ (Kwiatkowski et al. 2019) and TriviaQA (Joshi et al. 2017). Additionally, we evaluated generative retrieval methods on the MSMARCO dataset (Nguyen et al. 2016), which is sourced from the Web search scenario, where queries are web search queries and passages are from web pages. Importantly, we evaluated models on the full corpus set rather than a small sample, and we used widely-used metrics for these benchmarks.

Baselines
We compared LTRGR with several generative retrieval methods, including DSI (Tay et al. 2022), DSI (scaling up) (Pradeep et al. 2023), NCI (Wang et al. 2022), SEAL (Bevilacqua et al. 2022), and MINDER (Li et al. 2023b). Additionally, we included the term-based method BM25, as well as DPR (Karpukhin et al. 2020) and GAR (Mao et al. 2021). All baseline results were obtained from their respective papers.

Implementation Details
To ensure a fair comparison with previous work, we utilized BART-large as our backbone. In practice, we loaded the trained autoregressive model, MINDER (Li et al. 2023b), and continued training it using our proposed learning-to-rank framework. In the learning-to-rank phase, we used the Adam optimizer with a learning rate of 1e-5, trained with a batch size of 4, and conducted training for three epochs. For each query in the training set, we retrieved the top 200 passages and selected positive and negative passages from them. During training, we kept 40 predicted identifiers for each passage and removed any exceeding ones. The margin $m$ and weight $\lambda$ are set to 500 and 1000, respectively. Our main experiments were conducted on a single NVIDIA A100 GPU with 80 GB of memory.
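Putting these pieces together, a single learning-to-rank step could look like the sketch below, which uses the hyperparameters just listed (margin $m = 500$, weight $\lambda = 1000$) and the two sampling strategies behind $\mathcal{L}_{rank1}$ and $\mathcal{L}_{rank2}$; the surrounding data handling is our assumption.

```python
import random
import torch

def ltr_loss(scores, is_positive, gen_loss, margin=500.0, lam=1000.0):
    # scores: (n,) differentiable rank scores of the retrieved passages (Eq. 3)
    # is_positive: n booleans marking the relevant passages
    pos = [i for i, y in enumerate(is_positive) if y]
    neg = [i for i, y in enumerate(is_positive) if not y]
    if not pos or not neg:
        return lam * gen_loss                        # nothing to rank
    # L_rank1: highest-scored positive vs. highest-scored negative
    p1 = max(pos, key=lambda i: scores[i].item())
    n1 = max(neg, key=lambda i: scores[i].item())
    rank1 = torch.clamp(scores[n1] - scores[p1] + margin, min=0.0)
    # L_rank2: randomly sampled positive and negative
    p2, n2 = random.choice(pos), random.choice(neg)
    rank2 = torch.clamp(scores[n2] - scores[p2] + margin, min=0.0)
    return rank1 + rank2 + lam * gen_loss            # Eq. 5
```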
Retrieval Results on QA
Table 1 summarizes the retrieval performance on NQ and TriviaQA.

Methods | NQ @5 | NQ @20 | NQ @100 | TriviaQA @5 | TriviaQA @20 | TriviaQA @100
BM25 | 43.6 | 62.9 | 78.1 | 67.7 | 77.3 | 83.9
DPR (Karpukhin et al. 2020) | 68.3 | 80.1 | 86.1 | 72.7 | 80.2 | 84.8
GAR (Mao et al. 2021) | 59.3 | 73.9 | 85.0 | 73.1 | 80.4 | 85.7
DSI-BART (Tay et al. 2022) | 28.3 | 47.3 | 65.5 | - | - | -
SEAL-LM (Bevilacqua et al. 2022) | 40.5 | 60.2 | 73.1 | 39.6 | 57.5 | 80.1
SEAL-LM+FM (Bevilacqua et al. 2022) | 43.9 | 65.8 | 81.1 | 38.4 | 56.6 | 80.1
SEAL (Bevilacqua et al. 2022) | 61.3 | 76.2 | 86.3 | 66.8 | 77.6 | 84.6
MINDER (Li et al. 2023b) | 65.8 | 78.3 | 86.7 | 68.4 | 78.1 | 84.8
LTRGR | 68.8† | 80.3† | 87.1† | 70.2† | 79.1† | 85.1†
% improve | 4.56% | 2.55% | 0.46% | 2.63% | 1.28% | 0.35%

Table 1: Retrieval performance on NQ and TriviaQA. We use hits@5, @20, and @100 to evaluate the retrieval performance. Inapplicable results are marked by "-". † denotes the best result in generative retrieval. % improve represents the relative improvement achieved by LTRGR over the previously best generative retrieval method.

By analyzing the results, we discovered the following findings: (1) Among the generative retrieval methods, we found that SEAL and MINDER, which use semantic identifiers, outperform DSI, which relies on numeric identifiers. This is because numeric identifiers lack semantic information, and DSI requires the model to memorize the mapping from passages to their numeric IDs. As a result, DSI struggles with datasets like NQ and TriviaQA, which contain over 20 million passages. MINDER surpasses SEAL by using multiview identifiers to represent a passage more comprehensively. Despite MINDER's superiority, LTRGR still outperforms it. Specifically, LTRGR improves hits@5 by 3.0 and 1.8 on NQ and TriviaQA, respectively. LTRGR is based on MINDER and only requires an additional learning-to-rank phase, which verifies the effectiveness of learning to rank in generative retrieval. (2) Regarding the NQ dataset, LTRGR outperforms the classical DPR and achieves the best performance across all metrics, including hits@5, @20, and @100. This is particularly noteworthy as it marks the first time that generative retrieval has surpassed DPR in all metrics under the full corpus set setting. Turning to TriviaQA, our results show that LTRGR outperforms DPR in hits@100, but falls behind in hits@5 and hits@20. The reason for this is that MINDER, upon which LTRGR is based, performs significantly worse than DPR on TriviaQA. It is worth noting that generative retrieval methods rely on identifiers and cannot "see" the content of the passage, which may explain the performance gap between MINDER and DPR on TriviaQA. Additionally, generative retrieval methods suffer from error accumulation during autoregressive generation.

Retrieval Results on Web Search
To further investigate generative retrieval, we conducted experiments on the MSMARCO dataset and present our findings in Table 2. It is worth noting that we labeled the model sizes to ensure a fair comparison, as larger model parameters typically result in better performance.

Methods | Model Size | R@5 | R@20 | R@100 | M@10
BM25 | - | 28.6 | 47.5 | 66.2 | 18.4
SEAL (Bevilacqua et al. 2022) | BART-Large | 19.8 | 35.3 | 57.2 | 12.7
MINDER (Li et al. 2023b) | BART-Large | 29.5 | 53.5 | 78.7 | 18.6
NCI (Wang et al. 2022) | T5-Base | - | - | - | 9.1
DSI (scaling up) (Pradeep et al. 2023) | T5-Base | - | - | - | 17.3
DSI (scaling up) (Pradeep et al. 2023) | T5-Large | - | - | - | 19.8
LTRGR | BART-Large | 40.2 | 64.5 | 85.2 | 25.5
% improve | - | 36.3% | 20.6% | 8.26% | 28.8%

Table 2: Retrieval performance on the MSMARCO dataset. R and M denote Recall and MRR, respectively. "-" means the result was not reported in the published work. % improve represents the relative improvement achieved by LTRGR over the previously best generative retrieval method.

Our analysis of the results in Table 2 revealed several key findings. Firstly, we observed that generative retrieval methods perform worse in the search scenario compared to the QA datasets. Specifically, SEAL, NCI, and DSI underperformed BM25, while MINDER and DSI (T5-Large) only slightly outperformed BM25. This is likely due to the fact that the passages in MSMARCO are sourced from the web, and are therefore of lower quality and typically lack important metadata such as titles. Secondly, we found that LTRGR achieved the best performance and outperformed all baselines significantly. LTRGR surpassed the second-best approach, DSI (scaling up), by 5.7 points in terms of MRR@10, despite DSI using the larger T5-Large backbone compared to BART-Large. Finally, we observed that the learning-to-rank paradigm significantly improves existing generative retrieval methods in the search scenario. Specifically, LTRGR improved MINDER by 10.7 points and 6.9 points in terms of Recall@5 and MRR@10, respectively.
These results provide strong evidence of the effectiveness of LTRGR, which only requires an additional training step on MINDER.

Ablation Study
The LTRGR model is trained by leveraging the MINDER model and minimizing the loss function defined in Eq. 5. This loss function consists of two margin-based losses and one generation loss. To shed light on the role of the learning-to-rank objective and the impact of the margin-based losses, we conducted experiments where we removed one or more terms from the loss function. Specifically, we investigated the following scenarios:
• "w/o generation loss": We removed the generation loss term ($\mathcal{L}_{gen}$) from the loss function, which means that we trained the autoregressive model solely based on the rank loss.
• "w/o rank loss": We removed both margin-based losses ($\mathcal{L}_{rank1}$ and $\mathcal{L}_{rank2}$) from the loss function, which means that we trained the autoregressive model solely based on the generation loss, following a common generative retrieval approach.
• "w/o rank loss 1" and "w/o rank loss 2": We removed one of the margin-based losses ($\mathcal{L}_{rank1}$ or $\mathcal{L}_{rank2}$) from the loss function, respectively.
Our experiments aimed to answer the following questions: Does the performance improvement of the LTRGR model come from the learning-to-rank objective or from continuous training? Is it necessary to have two margin-based losses? What happens if we train the model only with the rank loss? We present the results of our ablation study in Table 3.

Methods | @5 | @20 | @100
w/o learning-to-rank | 65.8 | 78.3 | 86.7
w/ rank loss 1 | 56.1 | 69.4 | 78.7
w/o generation loss | 63.9 | 76.1 | 84.4
w/o rank loss | 65.8 | 78.6 | 86.5
w/o rank loss 1 | 68.2 | 80.8 | 87.0
w/o rank loss 2 | 67.9 | 79.8 | 86.7
LTRGR | 68.8 | 80.3 | 87.1

Table 3: Ablation study of LTRGR with different losses in the learning-to-rank training phase on Natural Questions. "w/o learning-to-rank" refers to the basic generative retrieval model, MINDER, without the learning-to-rank training.

The results provide the following insights: (1) Removing the rank loss and training the model solely based on the generation loss does not significantly affect the performance. This observation is reasonable since it is equivalent to increasing the training steps of a generative retrieval approach. This result confirms that the learning-to-rank objective is the primary source of performance improvement and validates the effectiveness of our proposed method. (2) Removing either $\mathcal{L}_{rank1}$ or $\mathcal{L}_{rank2}$ leads to a drop in the performance of LTRGR. On the one hand, having two rank losses allows the model to leverage a larger number of passages and benefits the rank learning. On the other hand, the two rank losses adopt different sample mining strategies, ensuring the diversity of the passages in the loss. (3) Removing the generation loss is the only variant underperforming the original MINDER model. During our experiments, we observed that the model tends to fall into local minima and assign smaller scores to all passages. This finding suggests the necessity of the generation loss in the learning-to-rank phase. (4) Overall, the current loss function is the best choice for the learning-to-rank phase. We also explore a list-wise rank loss later in this section.

In-depth Analysis
Generalization of LTRGR. Our LTRGR builds on the generative retrieval model MINDER and continues to train it using the loss function described in Eq. 5.
A natural question arises: can LTRGR be generalized to other generative retrieval models? To answer this question, we replaced MINDER with SEAL as the basic model and performed the same learning-to-rank training. The results, presented in Table 4, show that the proposed LTRGR framework can also improve the performance of SEAL. Specifically, the hits@5, @20, and @100 metrics improved by 2.4, 1.9, and 0.1 points, respectively. Interestingly, we observed that the improvement on hits@5 was larger than that on hits@100, which may be attributed to the optimization of the top ranking using $\mathcal{L}_{rank1}$.

Methods | @5 | @20 | @100
SEAL | 61.3 | 76.2 | 86.3
SEAL-LTR | 63.7 | 78.1 | 86.4

Table 4: Retrieval performance of SEAL and SEAL-LTR on NQ. SEAL-LTR represents applying our proposed LTRGR framework to the SEAL model.

List-wise loss. To facilitate generative retrieval learning to rank, we adopt a margin-based loss as the rank loss. By doing so, LTRGR effectively connects generative retrieval with the learning-to-rank paradigm, allowing various types of rank loss to be applied. To examine the impact of different rank losses, we substitute the original margin-based loss with a list-wise loss known as infoNCE, which is formulated as follows:

$\mathcal{L}_{rank} = -\log \frac{e^{s(q, p_p)}}{e^{s(q, p_p)} + \sum_{p_n} e^{s(q, p_n)}}$.  (6)

We randomly selected 19 negative passages from the passage rank list $P$ and present the results in Table 5.

Rank loss | @5 | @20 | @100
Margin loss | 68.8 | 80.3 | 87.1
List-wise loss | 65.4 | 78.5 | 86.3

Table 5: Performance comparison of LTRGR with the margin-based loss and the list-wise loss on Natural Questions.

It was observed that LTRGR with the infoNCE loss performed worse than the model with the margin-based loss. There are two potential reasons: firstly, we only trained the model for one epoch due to the increased training cost, which may have resulted in insufficient training; secondly, the passage scores were not normalized, making them difficult to optimize. The results also indicate that more suitable list-wise learning methods should be developed for generative retrieval.
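For completeness, the list-wise variant of Eq. 6 can be written in a few lines; this is our sketch, with the positive passage placed at index 0. Because the scores are unnormalized sums of language model scores, this objective is harder to optimize than the margin formulation, matching the observation above.

```python
import torch
import torch.nn.functional as F

def listwise_infonce(score_pos, score_negs):
    # Eq. 6: -log( e^{s(q,p_p)} / (e^{s(q,p_p)} + sum_n e^{s(q,p_n)}) )
    # score_pos: scalar tensor; score_negs: (n,) tensor of negative scores
    logits = torch.cat([score_pos.view(1), score_negs]).unsqueeze(0)
    target = torch.zeros(1, dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, target)
```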
Inference speed. LTRGR simply adds an extra training step to existing generative models, without affecting inference speed. The speed of inference is determined by the underlying generative retrieval model and the beam size. We conducted tests on LTRGR using a beam size of 15 on one V100 GPU with 32 GB of memory. On the NQ test set, LTRGR based on MINDER took approximately 135 minutes to complete the inference process, while LTRGR based on SEAL took only 115 minutes. Notably, SEAL's speed is comparable to that of the typical dense retriever, DPR, as reported in the work (Bevilacqua et al. 2022).

Figure 2: The retrieval performance of LTRGR on the NQ test set, shown in (a) and (b) against the margin values and the balance weight λ, respectively.

Margin analysis. To assess the impact of margin values on retrieval performance, we manually set margin values ranging from 100 to 500 in Eq. 4. The results are summarized in Figure 2(a). Our findings indicate that LTRGR with a margin of 100 performs worse than the other variants, suggesting that a sufficiently large margin value is necessary. As the margin value increases from 200 to 500, performance improves slightly but not significantly. While a larger margin can help the model better differentiate between positive and negative passages, it can also make the learning objective hard to reach.

λ analysis. In the loss function described by Eq. 5, we use a weight λ to balance the contribution of the generation loss $\mathcal{L}_{gen}$ and the rank loss $\mathcal{L}_{rank}$. To determine the optimal weight values, we conducted a tuning experiment with different λ values, and the results are summarized in Figure 2(b). Our analysis yielded the following insights: 1) Setting the weight to 0 leads to a significant performance gap, which confirms the importance of the generation loss, as discussed in the ablation study. 2) Varying the weight value from 500 to 2000 has little effect on the performance in terms of hits@100, but the performance gradually decreases for hits@5 and hits@20 as the weight of the generation loss increases. This suggests that a higher weight of the generation loss can interfere with the function of the rank loss, which typically affects the top-ranking results such as hits@5 and hits@20.

Effectiveness Analysis of Learning to Rank
To better illustrate how LTRGR works and what causes the performance improvement, we performed a quantitative analysis and a qualitative analysis (case study).

Figure 4: The distribution of the number of retrieved positive passages plotted against the ranking position on the MSMARCO dataset. The labels "Before LTR" and "After LTR" represent the generative model without and with learning-to-rank training, respectively.

Quantitative analysis. We plotted the distribution of positive passages against their ranking positions in Figure 4. We used generative retrieval models before and after the learning-to-rank training to retrieve the top 100 passages from the MSMARCO dataset. We then counted the number of positive passages in each rank position in the retrieval list. By analyzing the results, we found that the performance improvement after the learning-to-rank training mainly comes from the top positions. LTRGR seems to push the positive passages to top-rank positions in the passage rank list. This vividly reflects the function of the rank loss $\mathcal{L}_{rank}$, which brings a better passage rank order to the list.

Case Study.
To qualitatively illustrate the efficacy of the LTRGR framework, we analyzed the prediction results on MSMARCO in Figure 3. We observe that the number of correctly predicted identifiers increases after the learning-to-rank training phase. Besides, for the same predicted identifier, such as "what is prime rate in Canada" in the case, its corresponding score is also increased after the learning-to-rank training. This clearly illustrates the effectiveness of the proposed learning-to-rank framework in generative retrieval, which enhances the autoregressive model to predict more correct identifiers with higher corresponding scores.

Figure 3: Case study on the MSMARCO dataset of the generative retrieval before and after learning to rank. The correctly predicted identifiers that belong to the target passage are colored in purple.

Conclusion
In this study, we introduce LTRGR, a novel framework that enhances current generative retrieval systems by enabling them to learn to rank passages. LTRGR requires only an additional training step via a passage rank loss and does not impose any additional burden on the inference stage. Importantly, LTRGR bridges the generative retrieval paradigm and the classical learning-to-rank paradigm, providing ample opportunities for further research in this field. Our experiments demonstrate that LTRGR outperforms other generative retrieval methods on three commonly used datasets. Moving forward, we anticipate that further research that deeply integrates these two paradigms will continue to advance generative retrieval in this direction.

Acknowledgments
The work described in this paper was supported by Research Grants Council of Hong Kong (PolyU/5210919, PolyU/15207821, and PolyU/15207122), National Natural Science Foundation of China (62076212) and PolyU internal grants (ZVQ0).

References
Bevilacqua, M.; Ottaviano, G.; Lewis, P.; Yih, W.-t.; Riedel, S.; and Petroni, F. 2022. Autoregressive search engines: Generating substrings as document identifiers. arXiv preprint arXiv:2204.10628.
Burges, C.; Shaked, T.; Renshaw, E.; Lazier, A.; Deeds, M.; Hamilton, N.; and Hullender, G. 2005. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning, 89–96.
Cao, Z.; Qin, T.; Liu, T.-Y.; Tsai, M.-F.; and Li, H. 2007. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the 24th International Conference on Machine Learning, 129–136.
Chen, D.; Fisch, A.; Weston, J.; and Bordes, A. 2017. Reading Wikipedia to Answer Open-Domain Questions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 1870–1879.
Cossock, D.; and Zhang, T. 2006. Subset ranking using regression. In Learning Theory: 19th Annual Conference on Learning Theory, COLT 2006, Pittsburgh, PA, USA, June 22-25, 2006, Proceedings 19, 605–619. Springer.
Crammer, K.; and Singer, Y. 2001. Pranking with ranking. Advances in Neural Information Processing Systems, 14.
De Cao, N.; Izacard, G.; Riedel, S.; and Petroni, F. 2020. Autoregressive Entity Retrieval. In International Conference on Learning Representations.
Ferragina, P.; and Manzini, G. 2000. Opportunistic data structures with applications. In Proceedings 41st Annual Symposium on Foundations of Computer Science, 390–398.
Freund, Y.; Iyer, R.; Schapire, R. E.; and Singer, Y. 2003. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4(Nov): 933–969.
Joshi, M.; Choi, E.; Weld, D. S.; and Zettlemoyer, L. 2017. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension.
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1601–1611.
Karpukhin, V.; Oguz, B.; Min, S.; Lewis, P.; Wu, L.; Edunov, S.; Chen, D.; and Yih, W.-t. 2020. Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the International Conference on Empirical Methods in Natural Language Processing, 6769–6781. ACL.
Kwiatkowski, T.; Palomaki, J.; Redfield, O.; Collins, M.; Parikh, A.; Alberti, C.; Epstein, D.; Polosukhin, I.; Devlin, J.; Lee, K.; et al. 2019. Natural Questions: A Benchmark for Question Answering Research. Transactions of the Association for Computational Linguistics, 7: 452–466.
Lee, K.; Chang, M.-W.; and Toutanova, K. 2019. Latent Retrieval for Weakly Supervised Open Domain Question Answering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 6086–6096. ACL.
Li, H. 2011. A short introduction to learning to rank. IEICE Transactions on Information and Systems, 94(10): 1854–1862.
Li, P.; Wu, Q.; and Burges, C. 2007. Mcrank: Learning to rank using multiple classification and gradient boosting. Advances in Neural Information Processing Systems, 20.
Li, Y.; Yang, N.; Wang, L.; Wei, F.; and Li, W. 2023a. Generative retrieval for conversational question answering. Information Processing & Management, 60(5): 103475.
Li, Y.; Yang, N.; Wang, L.; Wei, F.; and Li, W. 2023b. Multiview Identifiers Enhanced Generative Retrieval. arXiv preprint arXiv:2305.16675.
Mao, Y.; He, P.; Liu, X.; Shen, Y.; Gao, J.; Han, J.; and Chen, W. 2021. Generation-Augmented Retrieval for Open-Domain Question Answering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 4089–4100. ACL.
Nguyen, T.; Rosenberg, M.; Song, X.; Gao, J.; Tiwary, S.; Majumder, R.; and Deng, L. 2016. MS MARCO: A human generated machine reading comprehension dataset. In CoCo@NIPS.
Nogueira, R.; and Cho, K. 2019. Passage Re-ranking with BERT. arXiv preprint arXiv:1901.04085.
Pradeep, R.; Hui, K.; Gupta, J.; Lelkes, A. D.; Zhuang, H.; Lin, J.; Metzler, D.; and Tran, V. Q. 2023. How Does Generative Retrieval Scale to Millions of Passages? arXiv preprint arXiv:2305.11841.
Ren, R.; Zhao, W. X.; Liu, J.; Wu, H.; Wen, J.-R.; and Wang, H. 2023. TOME: A Two-stage Approach for Model-based Retrieval. arXiv preprint arXiv:2305.11161.
Tay, Y.; Tran, V. Q.; Dehghani, M.; Ni, J.; Bahri, D.; Mehta, H.; Qin, Z.; Hui, K.; Zhao, Z.; Gupta, J.; et al. 2022. Transformer memory as a differentiable search index. arXiv preprint arXiv:2202.06991.
Wang, Y.; Hou, Y.; Wang, H.; Miao, Z.; Wu, S.; Chen, Q.; Xia, Y.; Chi, C.; Zhao, G.; Liu, Z.; et al. 2022. A neural corpus indexer for document retrieval. Advances in Neural Information Processing Systems, 35: 25600–25614.
Xia, F.; Liu, T.-Y.; Wang, J.; Zhang, W.; and Li, H. 2008. Listwise approach to learning to rank: theory and algorithm. In Proceedings of the 25th International Conference on Machine Learning, 1192–1199.
Orthogonal Dictionary Guided Shape Completion Network for Point Cloud

Pingping Cai, Deja Scott, Xiaoguang Li, Song Wang
University of South Carolina, USA
{pcai,ds17,xl22}@email.sc.edu, [email protected]

Abstract
Point cloud shape completion, which aims to reconstruct the missing regions of incomplete point clouds with plausible shapes, is an ill-posed and challenging task that benefits many downstream 3D applications. Prior approaches achieve this goal by employing a two-stage completion framework: generating a coarse yet complete seed point cloud through an encoder-decoder network, followed by refinement and upsampling. However, the encoded features suffer from information loss of the missing portion, leading to an inability of the decoder to reconstruct seed points with detailed geometric clues. To tackle this issue, we propose a novel Orthogonal Dictionary Guided Shape Completion Network (ODGNet). The proposed ODGNet consists of a Seed Generation U-Net, which leverages multi-level feature extraction and concatenation to significantly enhance the representation capability of seed points, and Orthogonal Dictionaries that can learn shape priors from training samples and thus compensate for the information loss of the missing portions during inference. Our design is simple but to the point; extensive experimental results indicate that the proposed method can reconstruct point clouds with more details and outperform previous state-of-the-art counterparts. The implementation code is available at https://github.com/corecai163/ODGNet.

Introduction
Point cloud is an efficient data structure for representing 3D objects in the form of a set of point coordinates. Despite its advantages, raw point clouds collected by existing 3D sensors often suffer from sparsity and incompleteness (Geiger et al. 2013), which significantly hinders their usability in downstream applications like autonomous driving (Zeng et al. 2018; Li et al. 2021), object detection (Zhou and Tuzel 2018; Shi and Rajkumar 2020), and segmentation (Zhang et al. 2023; Zhao et al. 2022). Therefore, inferring and reconstructing the missing regions of the incomplete point cloud is an inevitable and essential task in 3D computer vision. However, this point cloud completion task is extremely challenging. The successful reconstruction of correct shapes in the missing portions relies on a combination of high-level semantic understanding of the target object and low-level spatial and geometric relationships of nearby points.

Figure 1: The point cloud completion results from different methods. We see that the previous method, SnowFlake (Xiang et al. 2021), cannot reconstruct the detailed shape for the missing portion, while our proposed method can infer a plausible shape.

Moreover, this completion task is regarded as an ill-posed inverse problem. In other words, a single incomplete input can correspond to multiple plausible outputs, further complicating the inference of possible geometric details for the missing portion. Early traditional methods for solving this ill-posed problem relied on shape priors or hand-crafted geometric regularities (Kazhdan and Hoppe 2013; Lozes, Elmoataz, and Lézoray 2014; Hu, Fu, and Guo 2019; Pauly et al. 2008). However, these approaches have been overshadowed by deep learning-based methods.
Previous state-of-the-art (SOTA) deep learning solutions follow the two-stage completion framework (Wenxiao Zhang 2020; Yan et al. 2022; Xiang et al. 2021, 2022; Pan et al. 2021; Yu et al. 2021; Zhou et al. 2022; Tang et al. 2022; Wang et al. 2022; Yu et al. 2023), which first generates coarse but complete seed point clouds via an encoder-decoder network, and then employs an upsampling network to upsample and refine them. However, the encoded features derived from incomplete inputs represent only partial information and lack detailed geometric features for the missing parts. As a result, the seed points generated by the decoder may possess limited representation capability, which can potentially bottleneck the subsequent upsampling performance. Simply increasing the complexity of the upsampling network, as done by many previous works (e.g., SnowFlakeNet), might bring only limited benefit to the final performance, as illustrated in Figure 1, if the seed point clouds fail to adequately represent the underlying point-cloud shape.

Thus, in this paper, we present a "simple but straightforward" network, ODGNet, that mitigates the bottleneck observed in previous techniques and significantly improves point cloud completion performance. Specifically, the proposed ODGNet comprises two key components: the Seed Generation U-Net and the Dictionary Guidance Module. The Seed Generation U-Net effectively enhances the representation capability of generated seed points through multi-level feature extraction, concatenation, and the utilization of the local seed feature, i.e., a shape feature vector that captures the local geometry around each seed point. In parallel, the Dictionary Guidance Module plays a vital role by learning orthogonal shape priors from complete point clouds during supervised training and facilitating the recovery of better shapes during inference. Our key insight to mitigate shape information loss is the introduction of learnable shape dictionaries, enabling us to learn shape priors in the feature space. Furthermore, to ensure that the shape dictionary captures distinguishable prior features effectively, we introduce additional orthogonal constraints on it. Lastly, we employ Upsample Transformers (Zhou et al. 2022) to upsample the seed points to the target resolution, further refining the completion results.

To verify the effectiveness of the proposed method, we evaluate it on three standard datasets: PCN (Yuan et al. 2018), ShapeNet-55/34 (Yu et al. 2021), and KITTI (Geiger et al. 2013). Experimental results show that the proposed method can recover detailed and plausible shapes for the missing portions, achieving promising results and outperforming previous SOTA counterparts. Our primary contributions can be summarized as follows:
1. We present a pioneering approach by introducing learnable shape priors into a deep learning architecture, effectively addressing the ill-posed completion task. This is achieved through the Dictionary Guidance Module, which compensates for missing geometric details.
2. We design a simple yet straightforward shape completion network built upon the Seed Generation U-Net and the Dictionary Guidance Module to improve the representation ability of the seed points and the upsampling performance.
3. We conduct comprehensive experiments on three datasets, and the results confirm the effectiveness of the proposed algorithm by outperforming previous SOTA counterparts.
Related Work
Based on the network architecture of previous point cloud completion methods, we can classify them into two categories: voxelization-based and point-based methods.

Voxelization-Based Method
Voxelization-based methods attempt to migrate solutions from 2D completion tasks to 3D point clouds via voxelization and 3D convolutions (Dai, Qi, and Nießner 2017; Wu et al. 2015; Han et al. 2017; Xie et al. 2020). To begin, Wu et al. (2015) introduce the 3D occupancy grid, which models each voxel as a probabilistic distribution of binary variables to represent 3D shapes, and use Convolutional Deep Belief Networks to hallucinate the missing regions. However, the resolution of the 3D voxel grid is limited because of the high computational cost, making it challenging to reveal fine local geometric details. To improve the representation capability of the 3D occupancy grid, 3D-EPN (Dai, Qi, and Nießner 2017) encodes implicit distance field functions into the 3D voxels and leverages high-level semantic features from a classification network to guide the shape completion process. In addition, GRNet (Xie et al. 2020) proposed a novel gridding process to improve the representation ability of 3D grids. Although voxelization-based methods can take advantage of 3D convolution to regularize unordered point clouds, they suffer from extensive computational costs or information loss during voxelization.

Point-Based Method
Recently, with advancements in network architectures designed for point clouds (Qi et al. 2017a,b; Zhao et al. 2021), point-based methods have evolved into mainstream solutions for point cloud completion tasks and have achieved promising progress (Tchapmi et al. 2019; Pan 2020; Xie et al. 2020; Yuan et al. 2018; Pan et al. 2021; Xiang et al. 2021; Wang, Ang, and Lee 2022; Yu et al. 2021; Liu et al. 2020; Wen et al. 2021, 2022; Wenxiao Zhang 2020; Yan et al. 2022). For example, TopNet (Tchapmi et al. 2019) introduced a one-stage framework by modeling the point cloud generation process as the growth of a rooted tree, where one parent feature is split to generate multiple child features. The generated point features, however, lack accurate shape information of the missing parts and cannot be constrained explicitly. It was then surpassed by the two-stage framework (Yan et al. 2022; Xiang et al. 2021, 2022; Pan et al. 2021; Yu et al. 2021; Zhou et al. 2022). The two-stage completion framework can achieve better performance due to its ability to impose more constraints on the coarse-to-fine point cloud generation process. PCN (Yuan et al. 2018) is one of the pioneering works for the two-stage point completion framework, wherein the first stage uses PointNet (Qi et al. 2017a) layers to extract a global feature vector and MLPs to produce a coarse point cloud. The second stage uses a folding-based upsampling block (MLPs) to generate a dense and complete point cloud. However, the simple MLPs cannot fully exploit and preserve intricate geometric shapes, which limits the overall performance of PCN. Thus, SnowFlake (Xiang et al. 2021, 2022) introduces a novel snowflake point deconvolution block to upsample the points in the feature space and achieves promising performance. Comparatively, FBNet (Yan et al. 2022) and SeedFormer (Zhou et al. 2022) also focus on the upsampling stage by introducing the Feedback-Aware Completion block and Upsample Transformers, respectively, to refine and upsample the low-quality point cloud.
However, these two-stage shape completion methods overlook the importance of the seed generation stage, which limits their upsampling performance.

Proposed Method

Overview
Given an incomplete and sparse point cloud $P \in \mathbb{R}^{N_p \times 3}$ as input, our objective is to infer its missing shapes and produce a complete and dense point cloud $O \in \mathbb{R}^{N_o \times 3}$.

Figure 2: The overall architecture of the proposed network. a) The architecture of the Seed Generation U-Net. b) The detailed architecture of the Dictionary Guidance Module.

Following the two-stage point completion framework, we first design a Seed Generation U-Net to generate the coarse but complete seed points $S \in \mathbb{R}^{N_s \times 3}$ and then upsample them to the target resolution using Upsample Transformers (Zhou et al. 2022). Figure 2 shows the overall architecture design of the proposed ODGNet.

Seed Generation U-Net
Our primary focus is to enhance the representation capabilities of the seed point cloud. Given that downsampling operations in the encoder can lead to the loss of fine details, making it challenging to recover them during decoding, we draw inspiration from well-established 2D image processing techniques, such as U-Net (Ronneberger, Fischer, and Brox 2015). Specifically, we adopt a similar approach by extracting multi-level features to preserve fine details at various resolutions and concatenating them with the decoder's output for better seed generation. Moreover, instead of merely generating the coordinates of seed points, we introduce the concept of seed features, which are feature vectors representing rich local geometric details surrounding each seed point, to improve the representation capabilities. Figure 2a illustrates the overall architecture design of the Seed Generation U-Net.

Encoder
The primary objective of the Seed Generation U-Net's encoder is to extract multi-level shape features from an incomplete point cloud $P$. To achieve this, we leverage the set abstraction module proposed in (Qi et al. 2017b), which facilitates gradual sub-sampling of points and extraction of local shape features at various resolutions. In particular, for each level, we take the input point coordinates $P_i$ and the corresponding point features $F_i$, and model the output coordinates $P_{i+1}$ and features $F_{i+1}$ as the composition of two functions, expressed as follows:

$P_{i+1}, F_{i+1} = \mathrm{PT}(\mathrm{SA}(P_i, F_i))$,  (1)

where PT refers to the point transformer described in (Zhao et al. 2021), while SA represents the set abstraction module. Additionally, it is worth noting that $F_0 = P_0$ for the initial input. By stacking multiple set abstractions and point transformers together, we can extract point features and point coordinates at multiple levels, including the global shape feature, which represents the general shape information of the incomplete input.
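As a concrete reading of Eq. 1, the sketch below implements one set abstraction level in PyTorch, with farthest point sampling and kNN grouping in the style of PointNet++ (Qi et al. 2017b); the channel sizes and grouping details are our assumptions, and the point transformer of Zhao et al. (2021) that follows each SA step is omitted for brevity.

```python
import torch
import torch.nn as nn

def farthest_point_sample(xyz, m):
    # xyz: (N, 3) -> indices of m farthest-point-sampled centers
    N = xyz.shape[0]
    idx = torch.zeros(m, dtype=torch.long)
    dist = torch.full((N,), float("inf"))
    far = torch.randint(N, (1,)).item()
    for i in range(m):
        idx[i] = far
        dist = torch.minimum(dist, ((xyz - xyz[far]) ** 2).sum(-1))
        far = int(dist.argmax())
    return idx

class SetAbstraction(nn.Module):
    # SA of Eq. 1: sub-sample centers, group k nearest neighbors, and
    # pool a shared MLP over each local patch.
    def __init__(self, in_dim, out_dim, n_out, k=16):
        super().__init__()
        self.n_out, self.k = n_out, k
        self.mlp = nn.Sequential(nn.Linear(in_dim + 3, out_dim),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(out_dim, out_dim))

    def forward(self, xyz, feats):
        # xyz: (N, 3) point coordinates, feats: (N, C_in) point features
        centers = farthest_point_sample(xyz, self.n_out)
        knn = torch.cdist(xyz[centers], xyz).topk(self.k, largest=False).indices
        local = torch.cat([feats[knn], xyz[knn] - xyz[centers].unsqueeze(1)], -1)
        # Returns the sub-sampled coordinates and pooled local features,
        # which a point transformer layer would then refine.
        return xyz[centers], self.mlp(local).max(dim=1).values
```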
Decoder
The decoder module aims to generate the complete seed point cloud. Drawing inspiration from TopNet (Tchapmi et al. 2019), we adopt a similar approach to generate seed points in the feature space. This is accomplished by progressively splitting the input global shape code into multiple child features. Instead of employing multi-branch MLPs as done in TopNet, we utilize 1D deconvolution layers to generate these child features. Each child feature captures the local shape of a missing portion, and by stacking multiple 1D deconvolution layers with different kernel sizes and strides, we can effectively produce varying numbers of child features. However, it is important to note that these child features only represent coarse shape information and may suffer from information loss. To overcome this limitation, we introduce the Dictionary Guidance Module, which plays a pivotal role in reconstructing detailed geometry and generating refined features from these coarse child features. Subsequently, the refined features are concatenated with the multi-level features extracted from the encoder, resulting in complete shape features, as illustrated in Figure 2a. Finally, we employ an MLP layer to regress the refined point coordinates from the refined point features, completing the seed point cloud generation process.
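A minimal sketch of this splitting step, assuming the feature sizes shown in Figure 2a (a 1x512 global code expanded into 128 child features of width 256): the kernel sizes, strides, and the small coordinate head are illustrative choices, and in ODGNet the coordinates are regressed only after dictionary guidance and feature concatenation.

```python
import torch
import torch.nn as nn

class SeedSplitter(nn.Module):
    # Split the global shape code into child features with stacked 1D
    # deconvolutions (1 -> 8 -> 128 child features).
    def __init__(self):
        super().__init__()
        self.split = nn.Sequential(
            nn.ConvTranspose1d(512, 256, kernel_size=8, stride=8),
            nn.ReLU(inplace=True),
            nn.ConvTranspose1d(256, 256, kernel_size=16, stride=16),
        )
        # Illustrative coordinate head; in ODGNet this MLP is applied to
        # the refined features rather than directly to the coarse ones.
        self.to_xyz = nn.Sequential(nn.Linear(256, 64),
                                    nn.ReLU(inplace=True),
                                    nn.Linear(64, 3))

    def forward(self, global_feat):
        # global_feat: (B, 512) global shape code from the encoder
        child = self.split(global_feat.unsqueeze(-1))    # (B, 256, 128)
        child = child.transpose(1, 2)                    # (B, 128, 256) coarse features
        return child, self.to_xyz(child)                 # features and coarse seed xyz
```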
Dictionary Guidance Module
Recall that another flaw in the seed generation process is the missing shape information from the incomplete input, which makes the point cloud completion task ill-posed and nontrivial. Presumably, without additional knowledge and guidance, the network can only generate ambiguous shapes for these missing portions. To tackle this challenging problem, we introduce prior knowledge into the reconstruction process. The idea of our solution is that we can learn common shape features from the ground truth point clouds, e.g., features of airplanes in the training set, during supervised training. Subsequently, during inference, the learned common features can be treated as strong priors to guide the shape completion, e.g., for unseen airplanes in the testing set. However, implementing this idea and seamlessly integrating it into a deep neural network is another challenge. Drawing inspiration from the Detection Transformer (DETR) (Carion et al. 2020), which leverages learnable queries (feature vectors) to benefit object detection and bounding box generation, we take advantage of such learnable feature vectors. We build a learnable dictionary that can learn shape priors automatically from training samples. Figure 2b shows the network architecture of the Dictionary Guidance Module. It contains a learnable dictionary and a Refine Unit to integrate shape priors into the coarse input features, ensuring smooth guidance of the shape completion process.

Refine Unit
With the input coarse point features $F \in \mathbb{R}^{N \times C}$, our primary objective is to leverage additional shape information from the learnable dictionary $D \in \mathbb{R}^{N_d \times C}$ and generate the refined point features $F' \in \mathbb{R}^{N \times C}$. The implementation of our feature Refine Unit is designed to be straightforward and intuitive. Specifically, for the coarse features $F$, we first find their similar feature vectors in the learnable dictionary $D$. Then, we aggregate these similar feature vectors and integrate them with the coarse features, effectively compensating for any missing details. For the first step, to calculate the similarity score $\mathrm{Sim} \in \mathbb{R}^{N \times N_d}$ between the two feature tensors, we borrow the solution from the cross-attention mechanism:

$Q = \phi(F); \quad K = \psi(D); \quad \mathrm{Sim} = \sigma\left(\frac{QK^T}{\sqrt{d_k}}\right)$,  (2)

where $\phi$ and $\psi$ are linear layers, $\sigma$ is the Softmax function, and $d_k$ is the dimension of $K$. Then we aggregate the related feature vectors in the dictionary using the predicted similarity scores, and the refined features can be obtained by:

$F' = 0.5 \cdot (\mathrm{MatMul}(\mathrm{Sim}, D) + F)$,  (3)

where MatMul is matrix multiplication and 0.5 is the coefficient to balance the two components.

Orthogonal Constraint
Furthermore, as previously mentioned, to guarantee the representation ability of the learnable dictionary, we want each prior feature in the dictionary to be distinguishable from the others. To accomplish this, we introduce Orthogonal Constraints on each learnable dictionary $D \in \mathbb{R}^{N_d \times C}$ so that each prior feature is orthogonal to the others. Mathematically, this can be defined as follows:

$\hat{D} = \mathrm{Normalize}(D); \quad \mathcal{L}_{oth} = \|\hat{D}\hat{D}^T - I\|_2^2$,  (4)

where $I \in \mathbb{R}^{N_d \times N_d}$ is the identity matrix, $N_d$ is the number of learnable vectors in the dictionary, and $N_d \leq C$.

Loss Function
Similarly to prior two-stage completion pipelines, we use the Chamfer Distance (CD) as a loss function to explicitly guide the seed generation and upsampling processes. In particular, the CD loss is defined as follows:

$CD = \frac{1}{N_1}\sum_{o \in O}\min_{g \in GT}\|o - g\|_2^2 + \frac{1}{N_2}\sum_{g \in GT}\min_{o \in O}\|o - g\|_2^2$,  (5)

where $O$ is the predicted complete point cloud with $N_1$ points and $GT$ is the ground truth point cloud with $N_2$ points. Note that there are two variants of CD, which we denote as CD-L2 and CD-L1. Specifically, CD-L2 is equal to the CD above, while CD-L1 takes the square root of the squared L2 norm (i.e., uses the un-squared nearest-neighbor distances) and is divided by 2. To sum up, the total loss function used in training is defined as follows:

$\mathcal{L} = CD_{seed} + \lambda CD_{upsample} + \beta \sum \mathcal{L}_{oth}$,  (6)

where $CD_{upsample}$ is the coarse-to-fine upsampling loss for the Upsample Transformers and $\sum \mathcal{L}_{oth}$ is the sum of the Orthogonal Constraints over all learnable dictionaries. $\lambda$ and $\beta$ are hyperparameters to balance the different terms and are set to 1 for all experiments.
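The module of Eqs. 2-4 is compact enough to sketch in full; the code below is written from the equations rather than the released implementation, and the projection widths are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DictionaryGuidance(nn.Module):
    # Minimal sketch of the Dictionary Guidance module (Eqs. 2-4).
    def __init__(self, dim, num_priors):
        super().__init__()
        assert num_priors <= dim          # the paper requires N_d <= C
        self.dictionary = nn.Parameter(torch.randn(num_priors, dim))  # D
        self.phi = nn.Linear(dim, dim)    # query projection (phi)
        self.psi = nn.Linear(dim, dim)    # key projection (psi)

    def forward(self, feats):
        # feats: (N, C) coarse point features F
        q, k = self.phi(feats), self.psi(self.dictionary)
        sim = F.softmax(q @ k.t() / k.shape[-1] ** 0.5, dim=-1)   # Eq. 2
        return 0.5 * (sim @ self.dictionary + feats)              # Eq. 3

    def orthogonal_loss(self):
        d = F.normalize(self.dictionary, dim=-1)                  # D_hat
        eye = torch.eye(d.shape[0], device=d.device)
        return ((d @ d.t() - eye) ** 2).sum()                     # Eq. 4
```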
Experiments

Dataset and Evaluation Metric
PCN: The PCN dataset was first introduced by Yuan et al. (2018) and contains pairs of partial and complete point clouds from 30,974 models of 8 categories collected from ShapeNet (Chang et al. 2015). To maintain consistency with previous methods (Yuan et al. 2018; Xie et al. 2020; Xiang et al. 2021), we adopt the same train/test splitting strategy, comprising 28,974 training samples, 800 validation samples, and 1,200 testing samples. Additionally, to account for the varying number of points in the incomplete point clouds, we follow prior works by resampling them to a standardized size of 2,048 points.

ShapeNet-55/34: The ShapeNet-55/34 datasets, introduced in PoinTr (Yu et al. 2021), are also derived from ShapeNet (Chang et al. 2015). ShapeNet-55 consists of 55 categories and comprises 41,952 training shapes and 10,518 testing shapes. On the other hand, ShapeNet-34 contains 46,765 shapes from 34 categories for training, and the testing set consists of 5,705 shapes, divided into two parts: 3,400 shapes from 34 seen categories and 2,305 shapes from 21 unseen classes. Following previous works, we evaluate the models on point cloud data with different missing-point ratios of 25%, 50%, and 75%, representing three difficulty levels of completion tasks: simple (S), moderate (M), and hard (H), respectively.

KITTI: Since the previous two datasets are synthetic data generated from CAD models or meshes, which might differ from real scanned point clouds, we also include the KITTI dataset (Geiger et al. 2013). Essentially, it is collected from an autonomous driving platform and is a challenging real-world computer vision benchmark. We follow the previous method by extracting a sequence of Velodyne scans from the KITTI dataset and only focusing on points within the object bounding boxes labeled as cars. In total, it has 2,483 partial point clouds and no ground truth.

Evaluation Metrics: To quantitatively evaluate the performance of different algorithms, we use three commonly adopted metrics: CD-L1, CD-L2, and F1-Score@1%. For the CD metrics, a smaller value is better, while for the F1 score, a larger value is better. For the KITTI dataset, we use Fidelity and Minimal Matching Distance (MMD), since there is no ground truth. Specifically, Fidelity measures the average distance from each point in the input to its nearest neighbor in the output, and MMD measures how much the output resembles a typical car by calculating the Chamfer Distance between the output and the car point cloud from ShapeNet that is closest to the output point cloud.

Evaluation on PCN Dataset
We evaluate the performance of the proposed network on the PCN dataset and compare it with previous methods. As the required output resolution for the PCN dataset is 16,384 points, we set the upsampling ratios of the upsampling module to {1, 4, 4}. Besides, we set the size of the dictionaries to be equal to their input coarse feature dimension. To train the network from scratch, we set the total number of epochs to 400 with a batch size of 32 and use the Adam optimizer with an initial learning rate of 0.0004, gradually decayed by a factor of 0.8 every 20 epochs. The training is carried out on two Nvidia V100 32G GPUs.
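For reference, the Chamfer terms of Eq. 5 and the schedule just described can be sketched as follows; the tensors are per-sample (N, 3) point clouds for brevity, and the model and data loader in the commented loop are placeholders.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

def chamfer_l2(pred, gt):
    # Eq. 5: symmetric squared-L2 Chamfer distance (CD-L2).
    d = torch.cdist(pred, gt)                     # (N1, N2) pairwise distances
    return (d.min(1).values ** 2).mean() + (d.min(0).values ** 2).mean()

def chamfer_l1(pred, gt):
    # CD-L1: un-squared nearest-neighbor distances, halved.
    d = torch.cdist(pred, gt)
    return 0.5 * (d.min(1).values.mean() + d.min(0).values.mean())

# Training schedule described above (model/loader are placeholders):
# optimizer = Adam(model.parameters(), lr=4e-4)
# scheduler = StepLR(optimizer, step_size=20, gamma=0.8)
# for epoch in range(400):
#     for partial, gt in loader:                  # per-sample tensors
#         seeds, pred = model(partial)
#         loss = (chamfer_l2(seeds, gt) + chamfer_l2(pred, gt)     # lambda = 1
#                 + sum(m.orthogonal_loss() for m in model.modules()
#                       if isinstance(m, DictionaryGuidance)))     # beta = 1
#         optimizer.zero_grad(); loss.backward(); optimizer.step()
#     scheduler.step()
```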
Figure 3 provides a visual comparison of PCN completion results. The figure makes evident that our algorithm excels at preserving the shape details of the missing parts while minimizing noise points, whereas other algorithms often generate ambiguous shapes accompanied by a considerable number of outliers.

Figure 3: Completion results of various methods on the PCN dataset: (a) partial input, (b) GR-Net, (c) PMP, (d) SnowFlake, (e) ours, (f) ground truth. Notably, our method reconstructs missing details, e.g., the rearview mirror, better than the others. Please zoom in for more details.

Evaluation on ShapeNet-55/34 Dataset
To showcase the generalization capability of our proposed method, we perform additional experiments on the ShapeNet-55/34 datasets. As these datasets require an output resolution of 8,192 points, we adjust the upsampling ratios of the upsampling module to {1, 2, 4}. We use the same optimization settings as on the PCN dataset to train the network from scratch, except that the learning rate is halved every 50 epochs. Tables 2 and 3 compare our method with previous algorithms. Our method achieves an average CD-L2 of 0.83×10^-3 on ShapeNet-55, on par with the strongest prior result and clearly ahead of the remaining counterparts, including its direct counterpart SeedFormer. Even on the challenging ShapeNet-34 seen and ShapeNet-21 unseen splits, our method remains highly competitive with previous counterparts. Note that, limited by space, detailed results can be found in the Supplementary.

Method       CD-S   CD-M   CD-H   CD-Avg  F-Score
FoldingNet   2.67   2.66   4.05   3.12    0.082
PCN          1.94   1.96   4.08   2.66    0.133
TopNet       2.26   2.16   4.30   2.91    0.126
PFNet        3.83   3.87   7.97   5.22    0.339
GRNet        1.35   1.71   2.85   1.97    0.238
SnowFlake    0.70   1.06   1.96   1.24    0.398
PoinTr       0.58   0.88   1.79   1.09    0.464
ProxyFormer  0.49   0.75   1.55   0.93    0.483
AdaPoinTr    0.49   0.69   1.24   0.81    0.503
SeedFormer   0.50   0.77   1.49   0.92    0.472
Ours         0.47   0.70   1.32   0.83    0.437

Table 2: The quantitative results of different methods on the ShapeNet-55 benchmark dataset (CD-L2 ×10^-3).

             Seen ShapeNet-34                        Unseen ShapeNet-21
Method       CD-S  CD-M  CD-H  CD-Avg  F-Score      CD-S  CD-M  CD-H   CD-Avg  F-Score
FoldingNet   1.86  1.81  3.38  2.35    0.139        2.76  2.74  5.36   3.62    0.095
PCN          1.87  1.81  2.97  2.22    0.154        3.17  3.08  5.29   3.85    0.101
TopNet       1.77  1.61  3.54  2.31    0.171        2.62  2.43  5.44   3.50    0.121
PFNet        3.16  3.19  7.71  4.68    0.347        5.29  5.87  13.33  8.16    0.322
GRNet        1.26  1.39  2.57  1.74    0.251        1.85  2.25  4.87   2.99    0.216
PoinTr       0.76  1.05  1.88  1.23    0.421        1.04  1.67  3.44   2.05    0.384
SnowFlake    0.60  0.86  1.50  0.99    0.422        0.88  1.46  2.92   1.75    0.388
ProxyFormer  0.44  0.67  1.33  0.81    0.466        0.60  1.13  2.54   1.42    0.415
AdaPoinTr    0.48  0.63  1.07  0.73    0.469        0.61  0.96  2.11   1.23    0.416
SeedFormer   0.48  0.70  1.30  0.83    0.452        0.61  1.07  2.35   1.34    0.402
Ours         0.44  0.64  1.14  0.75    0.451        0.59  1.01  2.26   1.29    0.415

Table 3: Shape completion results on the seen ShapeNet-34 test set and the unseen ShapeNet-21 test set (CD-L2 ×10^-3).
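As a quick sanity check on the upsampling configurations above: the output resolution is simply the seed count multiplied by the per-stage ratios. The seed count of 1,024 below is an assumption for illustration, not a number stated in this section.

```python
def output_resolution(num_seeds: int, ratios: list) -> int:
    """Points produced by a coarse-to-fine upsampler with the given per-stage ratios."""
    n = num_seeds
    for r in ratios:
        n *= r
    return n

assert output_resolution(1024, [1, 4, 4]) == 16384  # PCN target resolution
assert output_resolution(1024, [1, 2, 4]) == 8192   # ShapeNet-55/34 target resolution
```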
Evaluation on KITTI Dataset
Finally, we examine the robustness of the proposed algorithm on the KITTI dataset. As KITTI contains only real LiDAR scans, there are no ground-truth point clouds for training. Instead, we train our model on the PCN car dataset and test it on KITTI, and we correspondingly use Fidelity and MMD to measure performance. Please note that, in the absence of ground truth, these metrics are only proxies for the quality of the generated point clouds. Table 4 reports the quantitative completion results and Figure 4 shows visual examples. We observe that the previous method PoinTr (Yu et al. 2021) tends to generate outlier points, while our method generates cleaner point clouds.

Method       Fidelity (×10^-3)  MMD (×10^-3)
PCN          2.235              1.366
TopNet       5.354              0.636
GR-Net       0.816              0.568
PoinTr       0.000              0.526
ProxyFormer  0.000              0.508
AdaPoinTr    0.237              0.392
SeedFormer   0.151              0.516
Ours         1.280              0.349

Table 4: The evaluation results on the KITTI dataset.

Figure 4: Visual comparison of different methods on the KITTI dataset (input, PoinTr, ours). Our results are cleaner than those of PoinTr (Yu et al. 2021).

Ablation Study
Effectiveness of the Seed Generation U-Net
To verify that the proposed seed generation method brings clear and significant performance improvements to the entire point cloud completion system, our first ablation study examines whether the proposed ODGNet generates better seed points than other seed generation methods. We integrate the proposed Seed Generation U-Net into various upsampling methods; Table 5 shows the improvements. Remarkably, pairing our Seed Generation U-Net with the Upsample Transformer yields a significant and direct improvement over SeedFormer's seed generation method (Zhou et al. 2022): CD-L1 decreases from 6.74×10^-3 to 6.50×10^-3, a relative enhancement of 3.7%. Similar observations hold for the remaining rows of the table. This ablation study provides compelling evidence that the seed points generated by our method preserve more intricate shape information, which in turn benefits the upsampling modules and contributes to the overall performance improvement.

Seed Generation  Upsampling  CD-L1 (×10^-3)
FoldingNet       Folding     14.31
Ours             Folding     7.59
SnowFlake        PSCU        7.04
Ours             PSCU        6.80
SeedFormer       UpTrans     6.74
Ours             UpTrans     6.50

Table 5: Ablation study on different seed generation and upsampling methods on the PCN dataset. PSCU denotes the Parametric Surface Constrained Upsampler (Cai et al. 2023); UpTrans denotes the Upsample Transformer of (Zhou et al. 2022). Visualizations of the generated seeds can be found in the Supplementary.

Dictionary Guidance Module
Furthermore, we investigate the importance of the Dictionary Guidance module, which compensates for missing detailed shape information. To this end, we remove the Dictionary Guidance modules and substitute them with MLPs of a similar parameter count (a sketch of this substitution is given below). As shown in Table 6, the best performance is attained when both the Dictionary Guidance module and the orthogonal constraints are applied. In particular, removing the Dictionary Guidance module increases CD-L1 from 6.50×10^-3 to 6.62×10^-3, a substantial gap of 0.12×10^-3, which strongly validates the effectiveness of the proposed module in enhancing the completion system's overall performance.

Dictionary Guidance     W/O   With  With
Orthogonal Constraints  -     W/O   With
CD-L1 (×10^-3)          6.62  6.55  6.50

Table 6: Ablation study on the Dictionary Guidance module on the PCN dataset.
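As a hedged illustration only: the parameter-matched MLP substitute used in the Table 6 ablation might look like the following. The layer sizes are our assumptions, chosen merely to keep the parameter count roughly comparable to the refine unit; the paper does not specify them.

```python
import torch.nn as nn

class MLPSubstitute(nn.Module):
    """Hypothetical drop-in replacement for the Dictionary Guidance module
    in the Table 6 ablation; same (N, C) -> (N, C) interface, but no dictionary."""
    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, feat_dim),
        )

    def forward(self, feats):
        return self.net(feats)
```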
Analysis of the Learnable Dictionary
The previous ablation study illustrated the importance of the Dictionary Guidance module. A natural remaining question is what the shape vectors in the proposed dictionary actually represent. To gain insight into the learnable dictionary, we combine the proposed seed generation backbone with a Folding-based upsampler (Yang et al. 2018). The Folding-based upsampler leverages seed features and predefined 2D grids to generate 3D points, which enables visualization of the shape features. After training this network on the PCN dataset, at inference time we feed the shape vectors from the dictionary, together with predefined 16×16 2D grids, into the Folding-based upsampler to render the shapes of the learned vectors. The results are illustrated in Figure 5a. Since these shape vectors learn high-level shape features from the training samples, they are not designed to represent real object parts such as desk corners or car wheels. Instead, we observe that each shape vector represents a distinct primitive, such as a line, plane, or curve, carrying a strong geometric meaning; these primitives can be regarded as priors and fundamental building blocks for reconstructing missing components at test time.

Figure 5: (a) Visualization of reconstructed shapes from vectors in the learned dictionary. (b) The density of the index of the maximum similarity scores in the learned dictionaries for three example classes.

Furthermore, since the learned dictionary contains shape priors extracted from the training samples, different categories of point clouds, e.g., airplanes and tables, should intuitively draw on different shape priors to compensate for missing details, as they have distinct geometries. To verify this, we record the index of the maximum similarity score in the Refine Unit and plot the density distributions. Figure 5b shows that the distributions of three categories in the PCN dataset exhibit distinct patterns, indicating that our method automatically selects the best combinations of priors to reconstruct more details of the missing portion and thereby achieve better performance.

Conclusion
In this paper, we propose ODGNet, a simple but effective point cloud completion network that aims to mitigate the bottlenecks of the two-stage framework, with particular focus on the first stage. It incorporates newly designed learnable shape dictionaries to recover fine-detailed shape information for the missing portions, together with multi-level feature extraction and concatenation to improve the representation ability of the seed points. The experimental results show that our algorithm efficiently reconstructs the missing portion with rich details and outperforms previous state-of-the-art counterparts.

Acknowledgements
We sincerely thank the Senior Program Committee members and reviewers for their comments and contributions to the community. This work was supported, in part, by NEH PR284350-22. The GPU used in this work was provided by the NSF MRI-2018966.

References
Cai, P.; Wu, Z.; Wu, X.; and Wang, S. 2023. Parametric Surface Constrained Upsampler Network for Point Cloud. In AAAI Conference on Artificial Intelligence.
Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In European Conference on Computer Vision (ECCV).
Chang, A. X.; Funkhouser, T.; Guibas, L.; Hanrahan, P.; Huang, Q.; Li, Z.; Savarese, S.; Savva, M.; Song, S.; Su, H.; Xiao, J.; Yi, L.; and Yu, F. 2015. ShapeNet: An Information-Rich 3D Model Repository. arXiv:1512.03012.
Dai, A.; Qi, C. R.; and Nießner, M. 2017. Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Geiger, A.; Lenz, P.; Stiller, C.; and Urtasun, R. 2013. Vision meets Robotics: The KITTI Dataset. International Journal of Robotics Research (IJRR), 32(11): 1231–1237.
Groueix, T.; Fisher, M.; Kim, V. G.; Russell, B.; and Aubry, M. 2018. AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Han, X.; Li, Z.; Huang, H.; Kalogerakis, E.; and Yu, Y. 2017. High-resolution shape completion using deep neural networks for global structure and local geometry inference. In IEEE International Conference on Computer Vision (ICCV).
Hu, W.; Fu, Z.; and Guo, Z. 2019. Local frequency interpretation and non-local self-similarity on graph for point cloud inpainting. IEEE Transactions on Image Processing, 28(8).
Kazhdan, M.; and Hoppe, H. 2013. Screened Poisson surface reconstruction. ACM Transactions on Graphics (ToG), 32(3).
Li, S.; Gao, P.; Tan, X.; and Wei, M. 2023. ProxyFormer: Proxy Alignment Assisted Point Cloud Completion With Missing Part Sensitive Transformer. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 9466–9475.
Li, Y.; Ma, L.; Zhong, Z.; Liu, F.; Chapman, M. A.; Cao, D.; and Li, J. 2021. Deep Learning for LiDAR Point Clouds in Autonomous Driving: A Review. IEEE Transactions on Neural Networks and Learning Systems, 32(8): 3412–3432.
Liu, M.; Sheng, L.; Yang, S.; Shao, J.; and Hu, S.-M. 2020. Morphing and sampling network for dense point cloud completion. In AAAI Conference on Artificial Intelligence.
Lozes, F.; Elmoataz, A.; and Lézoray, O. 2014. Partial difference operators on weighted graphs for image processing on surfaces and point clouds. IEEE Transactions on Image Processing, 23(9).
Pan, L. 2020. ECG: Edge-aware point cloud completion with graph convolution. IEEE Robotics and Automation Letters, 5(3).
Pan, L.; Chen, X.; Cai, Z.; Zhang, J.; Zhao, H.; Yi, S.; and Liu, Z. 2021. Variational Relational Point Completion Network. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Pauly, M.; Mitra, N. J.; Wallner, J.; Pottmann, H.; and Guibas, L. J. 2008. Discovering structural regularity in 3D geometry. In ACM SIGGRAPH. ACM.
Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017a. PointNet: Deep learning on point sets for 3D classification and segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017b. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems.
Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI).
Shi, W.; and Rajkumar, R. 2020. Point-GNN: Graph neural network for 3D object detection in a point cloud. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Tang, J.; Gong, Z.; Yi, R.; Xie, Y.; and Ma, L. 2022. LAKe-Net: Topology-Aware Point Cloud Completion by Localizing Aligned Keypoints. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1726–1735.
Tchapmi, L. P.; Kosaraju, V.; Rezatofighi, H.; Reid, I.; and Savarese, S. 2019. TopNet: Structural Point Cloud Decoder. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Wang, X.; Ang, M. H.; and Lee, G. 2022. Cascaded Refinement Network for Point Cloud Completion with Self-supervision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11): 8139–8150.
Wang, Y.; Tan, D. J.; Navab, N.; and Tombari, F. 2022. Learning Local Displacements for Point Cloud Completion. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1568–1577.
Wen, X.; Xiang, P.; Han, Z.; Cao, Y.-P.; Wan, P.; Zheng, W.; and Liu, Y.-S. 2021. PMP-Net: Point cloud completion by learning multi-step point moving paths. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Wen, X.; Xiang, P.; Han, Z.; Cao, Y.-P.; Wan, P.; Zheng, W.; and Liu, Y.-S. 2022. PMP-Net++: Point cloud completion by transformer-enhanced multi-step point moving paths. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1): 852–867.
Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; and Xiao, J. 2015. 3D ShapeNets: A deep representation for volumetric shapes. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Xiang, P.; Wen, X.; Liu, Y.-S.; Cao, Y.-P.; Wan, P.; Zheng, W.; and Han, Z. 2021. SnowflakeNet: Point Cloud Completion by Snowflake Point Deconvolution with Skip-Transformer. In IEEE International Conference on Computer Vision (ICCV).
Xiang, P.; Wen, X.; Liu, Y.-S.; Cao, Y.-P.; Wan, P.; Zheng, W.; and Han, Z. 2022. Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–18.
Xie, H.; Yao, H.; Zhou, S.; Mao, J.; Zhang, S.; and Sun, W. 2020. GRNet: Gridding Residual Network for Dense Point Cloud Completion. In European Conference on Computer Vision (ECCV).
Yan, X.; Yan, H.; Wang, J.; Du, H.; Wu, Z.; Xie, D.; Pu, S.; and Lu, L. 2022. FBNet: Feedback Network for Point Cloud Completion. In European Conference on Computer Vision (ECCV).
Yang, Y.; Feng, C.; Shen, Y.; and Tian, D. 2018. FoldingNet: Point cloud auto-encoder via deep grid deformation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Yu, X.; Rao, Y.; Wang, Z.; Liu, Z.; Lu, J.; and Zhou, J. 2021. PoinTr: Diverse point cloud completion with geometry-aware transformers. In IEEE International Conference on Computer Vision (ICCV), 12498–12507.
Yu, X.; Rao, Y.; Wang, Z.; Lu, J.; and Zhou, J. 2023. AdaPoinTr: Diverse Point Cloud Completion with Adaptive Geometry-Aware Transformers. arXiv:2301.04545.
Yuan, W.; Khot, T.; Held, D.; Mertz, C.; and Hebert, M. 2018. PCN: Point Completion Network. In International Conference on 3D Vision (3DV).
Zeng, Y.; Hu, Y.; Liu, S.; Ye, J.; Han, Y.; Li, X.; and Sun, N. 2018. RT3D: Real-Time 3-D Vehicle Detection in LiDAR Point Cloud for Autonomous Driving. IEEE Robotics and Automation Letters, 3(4): 3434–3440.
Zhang, C.; Wu, Z.; Wu, X.; Zhao, Z.; and Wang, S. 2023. Few-Shot 3D Point Cloud Semantic Segmentation via Stratified Class-Specific Attention Based Transformer Network. In AAAI Conference on Artificial Intelligence.
Zhang, W.; Yan, Q.; and Xiao, C. 2020. Detail Preserved Point Cloud Completion via Separated Feature Aggregation. In European Conference on Computer Vision (ECCV).
Zhao, H.; Jiang, L.; Jia, J.; Torr, P. H.; and Koltun, V. 2021. Point Transformer. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Zhao, Z.; Wu, Z.; Wu, X.; Zhang, C.; and Wang, S. 2022. Crossmodal few-shot 3D point cloud semantic segmentation. In Proceedings of the 30th ACM International Conference on Multimedia, 4760–4768.
Zhou, H.; Cao, Y.; Chu, W.; Zhu, J.; Lu, T.; Tai, Y.; and Wang, C. 2022. SeedFormer: Patch Seeds based Point Cloud Completion with Upsample Transformer. In European Conference on Computer Vision (ECCV).
Zhou, Y.; and Tuzel, O. 2018. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
2024
97
18,817
Urban Region Embedding via Multi-View Contrastive Prediction Zechen Li1, Weiming Huang2, Kai Zhao3, Min Yang1*, Yongshun Gong1, Meng Chen1,4 * 1 School of Software, Shandong University 2 School of Computer Science and Engineering, Nanyang Technological University 3 Robinson College of Business, Georgia State University 4 Key Laboratory of Urban Land Resources Monitoring and Simulation, Ministry of Natural Resources [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]
*Corresponding authors.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Abstract
Recently, learning urban region representations from multi-modal data (information views) has become increasingly popular, for a deep understanding of the distributions of various socioeconomic features in cities. However, previous methods usually blend multi-view information in a posterior stage, falling short of learning coherent and consistent representations across different views. In this paper, we form a new pipeline to learn consistent representations across varying views and propose the multi-view Contrastive Prediction model for urban Region embedding (ReCP), which leverages multiple information views from point-of-interest (POI) and human mobility data. Specifically, ReCP comprises two major modules: an intra-view learning module utilizing contrastive learning and feature reconstruction to capture the unique information of each single view, and an inter-view learning module that perceives the consistency between the two views using a contrastive prediction learning scheme. We conduct thorough experiments on two downstream tasks, land use clustering and region popularity prediction, to assess the proposed model. The experimental results demonstrate that our model significantly outperforms state-of-the-art baseline methods in urban region representation learning.
Introduction
A deep understanding of the spatial distribution of various socioeconomic factors in cities, such as land use or population distribution, is important for urban planning and management. In recent years, an increasingly popular trend in the urban computing community has been to partition a city into numerous regions and utilize various urban sensory data to learn latent representations of the regions, which can subsequently be used in various urban sensing tasks, e.g., land usage clustering, house price prediction, and population density inference (Liu et al. 2021; Li et al. 2022; Liu et al. 2023; Huang et al. 2023; Xu et al. 2023b; Li et al. 2023). This trend can also be attributed to the prosperity of mobile sensing technologies, which has led to the rapid accumulation of urban sensing data such as human trajectories and points-of-interest (POIs) (Zheng et al. 2020, 2021; Chen, Yu, and Liu 2018; Zhang, Zhao, and Chen 2022; Xu et al. 2023a; Zhang et al. 2023).

Figure 1: Illustration of (a) the multi-view fusion paradigm and (b) our proposed consistency learning paradigm for region embedding. In the right panel, the solid and dotted rectangles denote the region representations Za and Zm from the attribute and mobility views, respectively. The mutual information I(Za, Zm) (chartreuse area) quantifies the amount of information shared by Za and Zm; the conditional entropy H(Za|Zm) (grey area) quantifies the amount of information in Za conditioned on Zm. To learn consistent region representations across different views, we maximize I(Za, Zm) and minimize H(Za|Zm) and H(Zm|Za).
Such various urban data provide more opportunities for tackling the problem of region representation learning. Many previous studies have attempted to learn region representations from human mobility data. For instance, Wang and Li (2017) construct flow graphs and spatial graphs using taxi flow data and propose a graph embedding method to learn region representations. Yao et al. (2018) extract human mobility patterns from taxi trajectories and model the co-occurrence of origin-destination regions to learn region representations. These methods rely on single-view data, which offers a limited perspective of regions and fails to provide a comprehensive representation. Further, recent studies (Zhang et al. 2021; Luo, Chung, and Chen 2022; Zhang, Long, and Cong 2022; Zhou et al. 2023) propose learning region representations by integrating data in multiple modalities, thus forming multiple information views. In this context, the technical focus of recent region embedding studies has shifted towards the fusion of multiple information views, where they usually follow the same pipeline: separate single-view representation followed by multi-view fusion. This pipeline is demonstrated in Figure 1(a): it (1) separately models each information view (usually with a graph structure) and learns multiple single-view representations for each region, and (2) leverages certain fusion techniques (e.g., attention mechanisms) to blend the multiple representations and yield the final multi-view region representation. The previous multi-view region embedding methods have been effective in certain analyses, but they come with a notable limitation: they neglect the information consistency across different views when generating the final region representation. Intuitively, the information carried by multiple views of a region is highly correlated, and thus their representations should be consistent. For example, an entertainment region could contain multiple bars and restaurants (the region attribute view based on POIs) as well as a large number of nighttime mobility flows (the human mobility view). Both views reflect the intrinsic characteristics of this region, i.e., its entertainment function. If we manage to leverage such correlation, it can serve as a constraint during the process of learning representations for each view and enable knowledge to transfer from one view to the other. Ultimately, the multi-view representations become highly consistent and naturally fused. Following these ideas, we present a new pipeline, a consistency learning paradigm, for multi-view region embedding from an information theory perspective (Tsai et al. 2021; Lin et al. 2021).
In this pipeline, the multi-view representations are naturally fused by exchanging information between views while learning view-specific region representations, rather than by treating fusion as a posterior process. The new pipeline is shown in Figure 1(b). Given two view-specific region representations Za and Zm (from the region attribute view and the human mobility view, respectively), we maximize the mutual information I(Za, Zm) to increase the amount of shared information (consistency) between the region representations of the two views. We also minimize the conditional entropies H(Za|Zm) and H(Zm|Za) to diminish the inconsistent information across the two views and further improve consistency. Based on the consistency learning paradigm, we propose the multi-view Contrastive Prediction model for urban Region embedding (ReCP), which effectively enhances the consistency of region representations across different views. ReCP consists of two major components: intra-view learning and inter-view learning. In the intra-view learning component, to learn view-specific region representations, we compare each region with other dissimilar ones to embed the region into a latent space via contrastive learning; meanwhile, we utilize autoencoders to capture view-specific region features, which helps keep the model from falling into a trivial solution. In the inter-view learning component, to learn the cross-view consistency of region representations, we design inter-view contrastive learning that maximizes I(Za, Zm) and dual prediction between views that minimizes H(Za|Zm) and H(Zm|Za). To summarize, our contributions are as follows:
• We form a new pipeline following a consistency learning paradigm to study the urban region embedding problem by exploring the consistency across different views, using both human mobility and POI data. Different from existing multi-view region embedding methods that adopt attention mechanisms to fuse representations of different views, we propose to learn consistent multi-view region representations by increasing the amount of shared information across views from an information entropy perspective.
• We design inter-view contrastive learning and dual prediction processes to diminish the inconsistent information across views and learn an informative and consistent region representation between views, achieved by maximizing the mutual information among views and minimizing the conditional entropy among them.
• We conduct extensive experiments on real-world datasets to evaluate our model. The results demonstrate that the proposed ReCP outperforms existing methods on two downstream tasks by a clear margin. Data and source code are available at https://github.com/lizc-sdu/ReCP.
Problem Formulation
Definition 1 (Urban Region): A city can be partitioned into n disjoint urban regions, denoted as R = {r_1, r_2, ..., r_n}.
Definition 2 (Region Attributes): In this study, region attributes are defined as the inherent geographic features of regions. Specifically, we consider Point-of-Interest (POI) categories as region attributes, following Zhang, Long, and Cong (2022) and Fu et al. (2019). The region attributes are represented as a set A = {A_1, A_2, ..., A_n}, where A_i ∈ R^F and F is the total number of POI categories. Each dimension of A_i is the number of POIs of a specific category in region r_i.
Definition 3 (Human Mobility): For a region r_i, we define its outflow feature S_i^{j,t} as the number of trips made by all individuals originating from region r_i and destined for region r_j during a specific time interval t. Consequently, based on the mobility data of all regions in R, we generate a collection of outflow features S = {S_1, S_2, ..., S_n}, where S_i ∈ R^M. Here, M is the product of the number of regions n and the number of time intervals N_t within a day (for instance, 24). Similarly, by considering r_i as the destination region and the other regions r_j as source regions, we obtain an inflow feature vector D_i, and finally a collection D = {D_1, D_2, ..., D_n} of inflow features for all regions.
Problem 1 (Region Representation Learning): Given the attribute features A, outflow features S, and inflow features D of n regions, our objective is to learn a collection of low-dimensional embeddings E = {E_1, E_2, ..., E_n} that serve as the latent representation of each region.

Figure 2: The framework of ReCP.

Methodology
The framework of ReCP is illustrated in Figure 2 and includes two major components. 1) Intra-view learning: for both the region attribute and human mobility views, it captures the representative features of each region via intra-view contrastive learning to learn view-specific representations; additionally, feature reconstruction within each view recovers the original region features, which helps avoid a trivial solution. 2) Inter-view learning: within the same region, it integrates representations from different views through two learning objectives: inter-view contrastive learning enhances the consistency across views, and dual prediction further diminishes the inconsistent information between views.
Intra-view Learning
Initially, we learn view-specific region representations based on the region attribute features A and the mobility features S and D. Within each view, we learn the latent representation of each region by intra-view contrastive learning, i.e., we compare each region with others to highlight its distinctive features. Additionally, we design a within-view reconstruction loss to avoid the trivial solution.
Intra-view Contrastive Learning
To learn region representations within each view, we design an intra-view contrastive learning module that compares each region with the others. For a given region r_i, we have three types of region features: the attribute feature A_i, the outflow feature S_i, and the inflow feature D_i. For simplicity, let X_i^v denote the raw feature of the v-th view. For a target region r_i, we define its positive set as P_i^v = {X_1^v, X_2^v, ..., X_K^v}, where X_1^v, ..., X_K^v are positive samples obtained through the data augmentation function following Zhang, Long, and Cong (2022), and K is the number of positive samples.
The negative set is defined as N_i^v = {X_t^v | t ≠ i}, which contains the features of all regions except r_i. We then map the raw features of regions into a latent representation:

    Z_i^v = E^{(v)}(X_i^v)    (1)

where E^{(v)} denotes the encoder for the v-th view; in practice, we implement it as a fully connected neural network. As a result, we obtain three types of region representations, Z_i^a, Z_i^s, and Z_i^d. Further, we compute the region representation of the human mobility view as the average Z_i^m = (Z_i^s + Z_i^d) / 2. To maximize the similarity of positive pairs while minimizing the similarity of negative pairs, the contrastive learning loss for the v-th view is defined as:

    \mathcal{L}^v_{cl} = \sum_{r_i \in R} \left[ -\log \sum_{k=1}^{K} \exp\!\left( \frac{Z_i^v \cdot Z_k^v}{\tau} \right) + \log\!\left( \sum_{k=1}^{K} \exp\!\left( \frac{Z_i^v \cdot Z_k^v}{\tau} \right) + \sum_{t=1}^{|N_i^v|} \exp\!\left( \frac{Z_i^v \cdot Z_t^v}{\tau} \right) \right) \right]    (2)

where \tau is the temperature parameter and R is the set of regions. The intra-view contrastive learning loss across all views is then:

    \mathcal{L}^{intra}_{cl} = \mu \mathcal{L}^a_{cl} + \mathcal{L}^m_{cl}    (3)

where \mu is the parameter controlling the balance between the attribute view and the mobility view.
Intra-view Reconstruction
Given the feature X_i^v of region r_i in the v-th view, we further optimize the latent region representations via an autoencoder with the reconstruction loss:

    \mathcal{L}^v_{rec} = \sum_{r_i \in R} \left\| X_i^v - D^{(v)}\!\left( E^{(v)}(X_i^v) \right) \right\|_2^2    (4)

where E^{(v)} is the encoder from Equation (1) and D^{(v)} is the decoder for the v-th view that reconstructs the region features. Specifically, we implement D^{(v)} as a fully connected network sharing the same number of layers and hidden sizes as E^{(v)}. Note that the autoencoder structure helps avoid the trivial solution. The total reconstruction loss across all views is:

    \mathcal{L}^{intra}_{rec} = \mu \mathcal{L}^a_{rec} + \mathcal{L}^m_{rec}    (5)

where \mu is the same weight parameter as in Equation (3). So far, we obtain two view-specific region representations, Z_i^a and Z_i^m, from the region attribute and human mobility views.
Inter-view Learning
Different views of a region provide valuable and often complementary information for describing the region. To learn consistent and informative representations across views, we employ inter-view contrastive learning to improve collaboration and information exchange between the views, achieved by maximizing the mutual information among them. Additionally, dual prediction between the two views is leveraged to reduce the impact of inconsistent information by minimizing the conditional entropy across them.
Inter-view Contrastive Learning
In the latent embedding space, we conduct contrastive learning to learn consistent representations shared across different views, as recent contrastive learning studies (He et al. 2020; Lin et al. 2021) have shown that consistency can be learned by maximizing the mutual information of different views. Formally, given the two representations Z_i^a and Z_i^m of region r_i, we maximize their mutual information:

    \mathcal{L}^{inter}_{cl} = - \sum_{r_i \in R} \left[ I(Z_i^a, Z_i^m) + \alpha \left( H(Z_i^a) + H(Z_i^m) \right) \right]    (6)

where I(·) denotes mutual information, H(·) denotes information entropy, and the parameter \alpha controls the balance between the two. Note that maximizing H(Z_i^a) and H(Z_i^m) also helps prevent the trivial solution in which all regions share the same representation.
Based on the definition of mutual information, I(·) is given by:

    I(Z_i^a, Z_i^m) = P(Z_i^a, Z_i^m) \log \frac{P(Z_i^a, Z_i^m)}{P(Z_i^a)\, P(Z_i^m)}    (7)

where P(Z_i^a, Z_i^m) is the joint probability distribution of Z_i^a and Z_i^m. To represent these distributions, we apply a softmax to the region representations Z_i^a ∈ R^d and Z_i^m ∈ R^d, where d is the dimension of the region representations:

    B_i^a = \mathrm{softmax}(Z_i^a), \quad B_i^m = \mathrm{softmax}(Z_i^m)    (8)

where B_i^a ∈ R^d and B_i^m ∈ R^d can be interpreted as probability distributions. Considering the entire set R of n regions, we define the matrix M ∈ R^{d×d} as the joint probability distribution of Z^a and Z^m:

    M = \frac{1}{n} \sum_{i=1}^{n} B_i^a (B_i^m)^T    (9)

Let M_{rr'} denote the element at the r-th row and r'-th column of M, and let M_r and M_{r'} denote the sums of the elements in the r-th row and r'-th column, respectively; M_{rr'} is then a joint probability, while M_r and M_{r'} are marginal probabilities. The mutual information I(Z^a, Z^m) is computed as:

    I(Z^a, Z^m) = \sum_{r=1}^{d} \sum_{r'=1}^{d} M_{rr'} \log \frac{M_{rr'}}{M_r \cdot M_{r'}}    (10)

The information entropy H(Z_i^v) is defined as:

    H(Z_i^v) = -P(Z_i^v) \log P(Z_i^v)    (11)

where v ∈ {a, m}. Following the definition of M above, it can be computed as:

    H(Z^a) = -\sum_{r=1}^{d} M_r \log M_r, \quad H(Z^m) = -\sum_{r'=1}^{d} M_{r'} \log M_{r'}    (12)

Combining Equations (6), (10), and (12), the inter-view contrastive learning loss is formulated as:

    \mathcal{L}^{inter}_{cl} = - \sum_{r=1}^{d} \sum_{r'=1}^{d} M_{rr'} \ln \frac{M_{rr'}}{M_r^{\alpha+1} \cdot M_{r'}^{\alpha+1}}    (13)

where \alpha is the weight parameter defined in Equation (6).
Inter-view Dual Prediction
To further diminish the inconsistency across views, we predict each view-specific region representation from the other by minimizing the conditional entropy. Formally, given the region representations Z^a and Z^m, we minimize the conditional entropy H(Z^p|Z^q), where (p, q) = (a, m) or (m, a). On the one hand, Z^q contains nearly all the information required to represent the p-th view if Z^q can perfectly predict Z^p for any (Z^p, Z^q) ~ P_{Z^p,Z^q}. On the other hand, Z^q sheds the inconsistent information of the q-th view if Z^p can perfectly predict Z^q under the constraint that I(Z^p, Z^q) is maximized. Mathematically, H(Z^p|Z^q) is defined as:

    H(Z^p | Z^q) = -\mathbb{E}_{P_{Z^p, Z^q}} \left[ \log P(Z^p | Z^q) \right]    (14)

To minimize H(Z^p|Z^q), a common approach is to assume a variational distribution Q(Z^p|Z^q). Specifically, we propose to maximize \mathbb{E}_{P_{Z^p,Z^q}}[\log Q(Z^p|Z^q)], which serves as a lower bound of \mathbb{E}_{P_{Z^p,Z^q}}[\log P(Z^p|Z^q)]. Q(·|·) can be any distribution, such as a Gaussian or Laplacian; in this work we simply adopt the Gaussian distribution N(Z^p | F^{(q)}(Z^q), \sigma I), where F^{(q)}(·) is a parameterized function mapping Z^q to Z^p and \sigma I denotes the variance matrix. Ignoring the constants derived from the Gaussian distribution, maximizing \mathbb{E}_{P_{Z^p,Z^q}}[\log Q(Z^p|Z^q)] is equivalent to minimizing:

    \mathbb{E}_{P_{Z^p, Z^q}} \left\| Z^p - F^{(q)}(Z^q) \right\|_2^2    (15)

The dual prediction loss can then be formulated as:

    \mathcal{L}^{inter}_{dp} = \sum_{r_i \in R} \left\| Z_i^m - F^{(a)}(Z_i^a) \right\|_2^2 + \left\| Z_i^a - F^{(m)}(Z_i^m) \right\|_2^2

Here, F^{(a)} and F^{(m)} are each implemented as fully connected networks, with each layer followed by a batch normalization layer and a ReLU layer. Note that this loss may lead to model collapse without the intra-view reconstruction loss (Equation (4)), i.e., Z_i^a and Z_i^m from different views could degenerate to the same constant.
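Putting the pieces together, here is a minimal PyTorch-style sketch of the three ReCP losses: the intra-view contrastive loss of Eq. (2), the inter-view contrastive loss of Eq. (13), and the dual prediction loss. All names are ours, `f_a` and `f_m` stand for the predictors F^(a) and F^(m), and the input shapes are assumptions; this is an illustrative sketch, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def intra_view_contrastive(z, pos, neg, tau=0.5):
    """Eq. (2) for one view: z (n, d), pos (n, K, d) positives, neg (n, T, d) negatives."""
    pos_sim = torch.exp((pos @ z.unsqueeze(-1)).squeeze(-1) / tau).sum(dim=1)  # sum_k exp(z.z_k / tau)
    neg_sim = torch.exp((neg @ z.unsqueeze(-1)).squeeze(-1) / tau).sum(dim=1)  # sum_t exp(z.z_t / tau)
    return (-torch.log(pos_sim) + torch.log(pos_sim + neg_sim)).sum()

def inter_view_contrastive(za, zm, alpha=9.0, eps=1e-9):
    """Eq. (13): mutual-information-based consistency; za, zm of shape (n, d)."""
    ba, bm = F.softmax(za, dim=1), F.softmax(zm, dim=1)  # Eq. (8)
    m = (ba.t() @ bm / za.shape[0]).clamp(min=eps)       # Eq. (9): joint distribution, (d, d)
    mr = m.sum(dim=1, keepdim=True)                      # row marginals M_r
    mc = m.sum(dim=0, keepdim=True)                      # column marginals M_r'
    return -(m * (m.log() - (alpha + 1) * mr.log() - (alpha + 1) * mc.log())).sum()

def dual_prediction(za, zm, f_a, f_m):
    """Dual prediction loss: each view predicts the other through F^(a) and F^(m)."""
    return ((zm - f_a(za)) ** 2).sum() + ((za - f_m(zm)) ** 2).sum()
```

The inter-view objective of Eq. (16) is then the sum of `dual_prediction` and `inter_view_contrastive`, and the intra-view terms enter the total loss of Eq. (17) with weights λ1 and λ2.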
Finally, the inter-view learning loss is defined as:

    \mathcal{L}^{inter} = \mathcal{L}^{inter}_{dp} + \mathcal{L}^{inter}_{cl}    (16)

Model Training
The final objective function is defined as:

    \mathcal{L} = \mathcal{L}^{inter} + \lambda_1 \mathcal{L}^{intra}_{cl} + \lambda_2 \mathcal{L}^{intra}_{rec}    (17)

where \lambda_1 and \lambda_2 control the weights of the different losses. After learning the latent representations Z^a and Z^m, we simply concatenate them as the final multi-view region representation, i.e., E_i = Z_i^a || Z_i^m.
Experiments
Experimental Settings
Datasets. We collect a diverse set of real-world data from NYC Open Data (https://opendata.cityofnewyork.us) and use the Manhattan borough as the study area. We partition Manhattan into 270 regions based on the city boundaries designed by the US Census Bureau (https://www.census.gov/data.html). As human mobility data, we employ the complete taxi trip records from February 2014 as our training data. We utilize the NYC check-in and POI data provided by Yang et al. (2014) for model training and the popularity prediction task. The datasets are described in Table 1. Based on these data, we construct the region features A, S, and D for model training.

Dataset        Description
Regions        270 regions divided by streets in Manhattan
Taxi trips     10M taxi trips during February 2014
POI data       10K POIs with 244 categories
Check-in data  100K check-in records

Table 1: Data description (K = 10^3, M = 10^6).

Model Parameters. In our experiments, the dimension of the region representations is set to 96. In the intra-view reconstruction module, we set the number of layers at 3 and the hidden size at 128 for the encoder E^(v) and decoder D^(v); in the intra-view contrastive learning module, following the settings in Zhang, Long, and Cong (2022), we set the numbers of positive samples for the region attribute and human mobility data at 3 and 4, and the parameter \mu controlling the balance between the views at 0.0001. In the inter-view dual prediction module, we set the number of layers at 3 and the hidden size at 96 for F^(a) and F^(m); in the inter-view contrastive learning module, we set the parameter \alpha at 9. We set the hyper-parameters \lambda_1 and \lambda_2 in the final objective at 1. Note that the optimal model parameters are selected via grid search with a small but adaptive step size. To optimize the model, we adopt Adam with an initial learning rate of 0.01 and a linear decay.
Baselines. We compare ReCP with several state-of-the-art region embedding methods:
• HDGE (Wang and Li 2017) constructs flow graphs and spatial graphs from taxi flow data and learns region representations with graph embedding methods.
• ZE-Mob (Yao et al. 2018) models co-occurrence patterns between regions from mobility data to learn region representations.
• MV-PN (Fu et al. 2019) models both inter-region and intra-region information to construct multi-view POI-POI networks within each region.
• CGAL (Zhang et al. 2019) extends MV-PN and incorporates the spatial structure and spatial autocorrelation among regions to learn region representations.
• MVURE (Zhang et al. 2021) learns region representations by cross-view information sharing and multi-view fusion with human mobility and region attributes.
• MGFN (Wu et al. 2022) designs multi-level cross-attention mechanisms to extract region representations from multiple mobility patterns.
• ReMVC (Zhang, Long, and Cong 2022) learns region representations through both intra-view and inter-view contrastive learning modules.
• HREP (Zhou et al. 2023) constructs heterogeneous graphs and uses relation-aware graph embedding to learn region representations.
Land Usage Clustering
We use the district division by the community boards (Berg 2007) as ground truth and divide the Manhattan borough into 29 districts, following the settings in Zhang, Long, and Cong (2022). We cluster regions into groups by k-means clustering (k = 29), using the region representations as inputs; regions with the same land usage type are expected to be assigned to the same cluster. The experimental results are evaluated using three metrics: Normalized Mutual Information (NMI), Adjusted Rand Index (ARI), and F-measure, following Yao et al. (2018) and Zhang et al. (2021). We assess all methods on the same dataset and conduct 10 runs, reporting the mean value with the standard deviation in Table 2.

           Land Usage Clustering                  Region Popularity Prediction
Method     NMI         ARI         F-measure      MAE            RMSE           R2
HDGE       0.469±0.01  0.095±0.01  0.117±0.01     334.43±10.17   474.94±9.49    0.079±0.04
ZE-Mob     0.437±0.02  0.071±0.01  0.097±0.01     282.42±13.71   418.02±12.69   0.286±0.04
MV-PN      0.407±0.01  0.036±0.01  0.070±0.01     291.17±16.54   435.23±16.52   0.226±0.06
CGAL       0.414±0.08  0.059±0.06  0.091±0.06     351.10±51.20   486.96±52.58   0.021±0.20
MVURE      0.735±0.01  0.400±0.02  0.415±0.02     236.25±7.86    347.01±11.70   0.508±0.03
MGFN       0.748±0.01  0.424±0.03  0.437±0.03     240.37±11.99   354.24±17.14   0.487±0.05
ReMVC      0.761*±0.02 0.455*±0.04 0.462*±0.04    283.02±18.03   406.25±18.00   0.325±0.06
HREP       0.757±0.01  0.448±0.03  0.457±0.03     217.52*±10.98  318.41*±14.54  0.585*±0.04
ReCP       0.780±0.01  0.483±0.01  0.499±0.02     195.16±18.70   291.19±20.04   0.652±0.05
Improvements  2.50%    6.15%       8.01%          10.28%         8.55%          11.45%

Table 2: Performance comparison on the two downstream tasks; the improvements of ReCP are computed against the best baseline in each column, marked with an asterisk.

From the results, we observe that:
• HDGE and ZE-Mob exhibit relatively inferior performance, as they merely model co-occurrence patterns from human mobility data. MGFN performs better than both, since it designs a deep model based on cross-attention mechanisms to capture complex mobility patterns from spatial-temporal human mobility data.
• The methods that model multi-view information generally achieve satisfactory results, validating the importance of effectively integrating multi-view information for region embedding. Specifically, MV-PN and CGAL perform poorly because they simply combine region representations from the two views and lack deep interaction between them; MVURE and HREP design attention-based mechanisms to fuse the multi-view information and consequently perform better; ReMVC adopts contrastive learning to model intra-view and inter-view information and also obtains good results.
• The proposed ReCP outperforms all baselines, as it explores the consistency across different views in region embedding. Compared with ReMVC, ReCP achieves average improvements of 2.50%, 6.15%, and 8.01% in NMI, ARI, and F-measure, respectively. Moreover, a paired t-test indicates that the improvement of ReCP over the baselines is statistically significant, with a p-value below 0.01.
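The clustering evaluation above is straightforward to reproduce. Below is a hedged sketch with scikit-learn; variable names and the exact F-measure variant are our assumptions (pairwise F-measure implementations differ across papers).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def evaluate_clustering(embeddings: np.ndarray, district_labels: np.ndarray, k: int = 29):
    """Cluster region embeddings with k-means and score against community-board districts."""
    pred = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings)
    nmi = normalized_mutual_info_score(district_labels, pred)
    ari = adjusted_rand_score(district_labels, pred)
    return nmi, ari, pred
```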
Region Popularity Prediction
Another commonly compared downstream task for evaluating region representations is popularity prediction, where we aggregate the check-in counts within each region as the ground-truth popularity, following Yang et al. (2014) and Zhang, Long, and Cong (2022). We take the region representations as input and train a Lasso regression model. The evaluation results, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Coefficient of Determination (R2), are obtained by 5-fold cross-validation and reported in Table 2. From the results, we observe that the multi-view fusion methods, including MVURE and HREP, achieve decent performance, which further validates the necessity of integrating multi-view information in region embedding. ReCP performs best among all methods; e.g., compared to HREP, it achieves average improvements of 10.28%, 8.55%, and 11.45% in MAE, RMSE, and R2. These results indicate that the new pipeline following the consistency learning paradigm is an effective way to learn better region representations.
Ablation Study and Parameter Analysis
Ablation study. We design four variants to explore how each module of ReCP affects model performance: ReCP w/o CL removes the intra-view contrastive learning loss, ReCP w/o Rec removes the intra-view reconstruction loss and only uses the encoder to extract features, ReCP w/o IV removes the inter-view learning module and directly concatenates region representations from the two views without the consistency constraint, and ReCP w/o DP removes the inter-view dual prediction loss. From the results in Figure 3, we observe that:
1) ReCP w/o CL achieves the lowest performance on both tasks, indicating that the intra-view contrastive learning loss is a crucial component for learning view-specific region features.
2) ReCP w/o Rec performs worse than ReCP, supporting the claim that the intra-view reconstruction loss helps prevent the model from converging to a trivial solution.
3) ReCP improves over ReCP w/o IV by 29.84% in ARI and 4.00% in R2, suggesting that the proposed inter-view learning module effectively leverages multi-view information and highlighting the importance of consistency learning across views.
4) ReCP w/o DP outperforms ReCP w/o IV but underperforms ReCP, indicating that both the inter-view contrastive loss (maximizing the mutual information between views) and the inter-view dual prediction loss (minimizing the conditional entropy between them) are important for learning multi-view region representations.

Figure 3: Performance comparison of the model variants on (a) land usage clustering (NMI, ARI, F-measure) and (b) region popularity prediction (MAE, RMSE, R2).

Parameter sensitivity. The parameters \lambda_1 and \lambda_2 govern the weighting of the different losses. We vary their values within the range {0.01, 0.1, 1, 10, 100} to assess their impact on model performance. As depicted in Figure 4, ReCP achieves satisfactory performance when both \lambda_1 and \lambda_2 are set to 1.

Figure 4: Parameter analysis on both downstream tasks.
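For the popularity task, the protocol described above amounts to a few lines with scikit-learn. A minimal sketch follows; the regularization strength is a hypothetical choice, as it is not reported here.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_validate

def evaluate_popularity(embeddings: np.ndarray, checkin_counts: np.ndarray):
    """5-fold cross-validated Lasso regression from region embeddings to popularity."""
    model = Lasso(alpha=1.0)  # alpha is an assumption
    scores = cross_validate(
        model, embeddings, checkin_counts, cv=5,
        scoring=("neg_mean_absolute_error", "neg_root_mean_squared_error", "r2"),
    )
    mae = -scores["test_neg_mean_absolute_error"].mean()
    rmse = -scores["test_neg_root_mean_squared_error"].mean()
    r2 = scores["test_r2"].mean()
    return mae, rmse, r2
```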
Related Work
Traditional methods for region embedding typically utilize human mobility data to analyze the transition patterns between urban regions. These methods are often based on the word2vec framework and learn latent representations of regions (Wang and Li 2017; Yao et al. 2018). In a similar vein, Wu et al. (2022) incorporate mobility graphs with spatio-temporal similarity as mobility patterns and propose multi-level cross-attention mechanisms to extract comprehensive region representations from these patterns. Additionally, some studies focus on leveraging the inherent attributes of regions. For instance, Zhang et al. (2019) construct multiple spatial graphs to represent the geographic structure of regions; by transforming the region embedding problem into a graph embedding problem, they primarily capture the spatial structure within regions and the spatial autocorrelation between regions. Another approach, proposed by Wang, Li, and Rajagopal (2020), mines street views and textual information of POIs within regions to learn representations. Moreover, several studies learn region representations by incorporating both the attribute features within regions and the mobility patterns between regions. Fu et al. (2019) propose an autoencoder framework that effectively captures inter-region correlations and intra-region structural information during region embedding. Zhang et al. (2021) model multi-view region correlations from human mobility data and inherent region attributes, and employ a graph attention mechanism to acquire region representations from each view of the established correlations. Zhou et al. (2023) learn relation-specific region representations from the various relation types in a heterogeneous graph constructed from human mobility, POI data, and the geographic neighbors of regions, and devise an attention-based fusion technique to integrate the shared information among the relation types. Additionally, Zhang, Long, and Cong (2022) introduce a multi-view region embedding model based on contrastive learning, which incorporates an intra-view contrastive learning module to discern distinct representations and an inter-view contrastive learning module to facilitate the transfer of knowledge across multiple views.
Conclusion
In this paper, we form a new pipeline based on the consistency learning paradigm for multi-view region embedding. Under this paradigm, we propose the multi-view Contrastive Prediction model for urban Region embedding (ReCP), which explores the consistency across two views by leveraging both POI and human mobility data.
The ReCP model consists of two modules: an intra-view learning module that utilizes contrastive learning and feature reconstruction to learn region representations specific to each view, and an inter-view learning module that uses a contrastive prediction learning scheme to enhance the consistency between the two views. To evaluate the effectiveness of the proposed model, we conduct comprehensive experiments on two downstream tasks: land use clustering and region popularity prediction. The experimental results demonstrate that the proposed ReCP model outperforms state-of-the-art embedding methods, showing that retaining consistency across views is pivotal for effective region embedding.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grants No. 61906107 and 62202270, the Young Scholars Program of Shandong University, the Taishan Scholar Project of Shandong Province (tsqn202306066), and the Open Fund of the Key Laboratory of Urban Land Resources Monitoring and Simulation, Ministry of Natural Resources. W.H. was supported by the Knut and Alice Wallenberg Foundation.
References
Berg, B. F. 2007. New York City Politics: Governing Gotham. Rutgers University Press.
Chen, M.; Yu, X.; and Liu, Y. 2018. PCNN: Deep convolutional networks for short-term traffic congestion prediction. IEEE Transactions on Intelligent Transportation Systems, 19(11): 3550–3559.
Fu, Y.; Wang, P.; Du, J.; Wu, L.; and Li, X. 2019. Efficient region embedding with multi-view spatial networks: A perspective of locality-constrained spatial autocorrelations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 906–913.
He, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9729–9738.
Huang, W.; Zhang, D.; Mai, G.; Guo, X.; and Cui, L. 2023. Learning urban region representations with POIs and hierarchical graph infomax. ISPRS Journal of Photogrammetry and Remote Sensing, 196: 134–145.
Li, T.; Xin, S.; Xi, Y.; Tarkoma, S.; Hui, P.; and Li, Y. 2022. Predicting multi-level socioeconomic indicators from structural urban imagery. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 3282–3291.
Li, Y.; Huang, W.; Cong, G.; Wang, H.; and Wang, Z. 2023. Urban Region Representation Learning with OpenStreetMap Building Footprints. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1363–1373.
Lin, Y.; Gou, Y.; Liu, Z.; Li, B.; Lv, J.; and Peng, X. 2021. Completer: Incomplete multi-view clustering via contrastive prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11174–11183.
Liu, C.; Yang, Y.; Yao, Z.; Xu, Y.; Chen, W.; Yue, L.; and Wu, H. 2021. Discovering urban functions of high-definition zoning with continuous human traces. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 1048–1057.
Liu, Y.; Zhang, X.; Ding, J.; Xi, Y.; and Li, Y. 2023. Knowledge-infused contrastive learning for urban imagery-based socioeconomic prediction. In Proceedings of the ACM Web Conference 2023, 4150–4160.
Luo, Y.; Chung, F.-l.; and Chen, K. 2022. Urban region profiling via multi-graph representation learning. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 4294–4298.
Tsai, Y.-H.; Wu, Y.; Salakhutdinov, R.; and Morency, L.-P. 2021. Self-supervised learning from a multi-view perspective. In Proceedings of the International Conference on Learning Representations, 2021.
Wang, H.; and Li, Z. 2017. Region representation learning via mobility flow. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 237–246.
Wang, Z.; Li, H.; and Rajagopal, R. 2020. Urban2vec: Incorporating street view imagery and POIs for multi-modal urban neighborhood embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 1013–1020.
Wu, S.; Yan, X.; Fan, X.; Pan, S.; Zhu, S.; Zheng, C.; Cheng, M.; and Wang, C. 2022. Multi-graph fusion networks for urban region embedding. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, 2312–2318.
Xu, R.; Chen, M.; Gong, Y.; Liu, Y.; Yu, X.; and Nie, L. 2023a. TME: Tree-guided multi-task embedding learning towards semantic venue annotation. ACM Transactions on Information Systems, 41(4).
Xu, R.; Huang, W.; Zhao, J.; Chen, M.; and Nie, L. 2023b. A spatial and adversarial representation learning approach for land use classification with POIs. ACM Transactions on Intelligent Systems and Technology, 14(6): 1–25.
Yang, D.; Zhang, D.; Zheng, V. W.; and Yu, Z. 2014. Modeling user activity preference by leveraging user spatial temporal characteristics in LBSNs. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 45(1): 129–142.
Yao, Z.; Fu, Y.; Liu, B.; Hu, W.; and Xiong, H. 2018. Representing urban functions through zone embedding with human mobility patterns. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence.
Zhang, C.; Zhao, K.; and Chen, M. 2022. Beyond the limits of predictability in human mobility prediction: context-transition predictability. IEEE Transactions on Knowledge and Data Engineering, 35(5): 4514–4526.
Zhang, D.; Xu, R.; Huang, W.; Zhao, K.; and Chen, M. 2023. Towards an integrated view of semantic annotation for POIs with spatial and textual information. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 2441–2449.
Zhang, L.; Long, C.; and Cong, G. 2022. Region embedding with intra and inter-view contrastive learning. IEEE Transactions on Knowledge and Data Engineering.
Zhang, M.; Li, T.; Li, Y.; and Hui, P. 2021. Multi-view joint graph representation learning for urban region embedding. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 4431–4437.
Zhang, Y.; Fu, Y.; Wang, P.; Li, X.; and Zheng, Y. 2019. Unifying inter-region autocorrelation and intra-region structures for spatial embedding via collective adversarial learning. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1700–1708.
Zheng, B.; Bi, L.; Cao, J.; Chai, H.; Fang, J.; Chen, L.; Gao, Y.; Zhou, X.; and Jensen, C. S. 2021. SpeakNav: Voice-based route description language understanding for template-driven path search. Proceedings of the VLDB Endowment, 14(12): 3056–3068.
Zheng, B.; Huang, C.; Jensen, C. S.; Chen, L.; Hung, N. Q. V.; Liu, G.; Li, G.; and Zheng, K. 2020. Online trichromatic pickup and delivery scheduling in spatial crowdsourcing.
In 2020 IEEE 36th International Conference on Data Engineering, 973–984. IEEE. Zhou, S.; He, D.; Chen, L.; Shang, S.; and Han, P. 2023. Heterogeneous region embedding with prompt learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 4981–4989. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8732
2024
970
18,818
Hawkes-Enhanced Spatial-Temporal Hypergraph Contrastive Learning Based on Criminal Correlations
Ke Liang1, Sihang Zhou2*, Meng Liu1, Yue Liu1, Wenxuan Tu1, Yi Zhang1, Liming Fang3, Zhe Liu4, Xinwang Liu1*
1School of Computer, National University of Defense Technology, Changsha, China
2School of Intelligence Science and Technology, National University of Defense Technology, Changsha, China
3Nanjing University of Aeronautics and Astronautics, Nanjing, China
4Zhejiang Lab, Hangzhou, China

Abstract
Crime prediction is a crucial yet challenging task within urban computing, which benefits public safety and resource optimization. Over the years, various models have been proposed, and spatial-temporal hypergraph learning models have recently shown outstanding performance. However, three correlations underlying crime are ignored by previous models, which hinders their performance. Specifically, there are two spatial correlations and one temporal correlation, i.e., (1) different types of crimes co-occur (type spatial correlation), (2) the closer a neighborhood is to a crime center, the more dangerous it is (neighbor spatial correlation), and (3) the closer two timestamps are, the more relevant their events are (Hawkes temporal correlation). To this end, we propose the Hawkes-enhanced Spatial-Temporal Hypergraph Contrastive Learning framework (HCL), which mines the aforementioned correlations via two specific strategies. Concretely, contrastive learning strategies are designed for the two spatial correlations, and Hawkes process modeling is adopted for the temporal correlation. Extensive experiments demonstrate the promising capacities of HCL from four aspects, i.e., superiority, transferability, effectiveness, and sensitivity.

*Corresponding author
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction
Crime, as a crucial social problem, is seen worldwide. If poorly managed, criminal activity leads to severe adverse effects on urban safety (Wortley and Townsley 2016). Thus, governments have put forward higher requirements for effective crime prevention and response. With the development of artificial intelligence (AI) technologies (Liu et al. 2023d,c; Chen et al. 2022a,b; Li et al. 2023; Tian et al. 2020; Peng et al. 2023; Luo et al. 2023; Liu et al. 2023b,a; Liu and Liu 2021; Liu, Wu, and Liu 2022) and the availability of crime data (Wang et al. 2016; Huang et al. 2019), various crime prediction models have emerged. Such models can assist governments in allocating police resources rationally based on the spatial-temporal characteristics of different crimes and further contribute to the construction of future smart cities (Su, Li, and Fu 2011; Angelidou 2014).
Early attempts at crime prediction leverage convolutional neural networks (CNNs) to model region-wise correlations and temporal properties (Feng et al. 2020; Zhang, Zheng, and Qi 2017). Later on, recurrent neural networks (RNNs) were adopted to capture temporal information (Huang et al. 2018; Wu et al. 2020a). After that, graph neural networks (GNNs) showed promising capacities for modeling spatial dependencies among different locations. For example, DCRNN (Li et al. 2017) and STGCN (Yu, Yin, and Zhu 2017) leverage spectral graph-based message passing for information aggregation. Besides, GMAN (Zheng et al. 2020) and ST-MetaNet (Pan et al. 2019)
adopt the graph attention network (GAT) for spatial feature aggregation between different region blocks. More recently, hypergraph models, i.e., ST-HSL (Li et al. 2022) and ST-SHN (Xia et al. 2022c), have shown large advantages on crime prediction, relying on more accurate and sufficient spatial-temporal modeling. However, they are still at an early stage. Specifically, they fail to leverage three correlations within crime data, including two spatial correlations and one temporal correlation, which hinders their capacity.
First, we introduce the two spatial correlations, i.e., type spatial correlations and neighbor spatial correlations, as follows. (1) Type Spatial Correlation. Different types of crimes may co-occur in the same region block, as indicated in Fig. 1 (b). For example, robbery usually occurs along with assault. There are two main reasons for this: (i) criminals often commit various crimes simultaneously due to poor moral character and lack of legal awareness, and (ii) regions with poor law and order provide fertile soil for all kinds of crimes. (2) Neighbor Spatial Correlation. When a crime occurs in a block, the government will usually warn people to be careful when passing through its neighboring areas. This reveals a typical characteristic of crime data: the farther from the crime center, the safer the region is; the closer, the more dangerous. Previous hypergraph crime prediction models ignore the spatial information underlying the aforementioned correlations. Motivated by the promising ability of contrastive learning (CL) techniques to mine information underlying the data itself, especially for graph data (Thakoor et al. 2021; Lee, Lee, and Park 2022; Liang et al. 2023a), specific CL strategies are designed in this work to leverage the spatial correlations for better prediction performance.

Figure 1: Characteristics of crime data. (a) is an example of the used crime data. (b) describes the proportion of co-occurrence of different crimes in the same block. (c) presents the probability of the same crime occurring in both a block and its 1-hop neighbors. (d) reflects the dynamics of different crimes along the timeline (statistics from www.nyc.gov/site/nypd/stats/crime-statistics/historical.page).

Besides the spatial correlations, temporal characteristics also play an essential role in crime prediction. Previous spatial-temporal hypergraph models (Li et al. 2022; Xia et al. 2022c) only use temporal relation encoding for dynamic information mining. Unlike these models, we design a specific Hawkes-enhanced spatial-temporal hypergraph model for feature encoding. Specifically, Hawkes process modeling (Hawkes 1971), a classic yet effective technique, is adopted to enhance the dynamic interactions underlying the typical temporal correlation, i.e., the closer two moments are, the more relevant their events are, which is named the Hawkes temporal correlation in this paper. Building upon the above ideas, we propose Hawkes-enhanced Spatial-Temporal Hypergraph Contrastive Learning (HCL), which mines the aforementioned correlations via two specific strategies, i.e., contrastive learning and Hawkes process modeling for the spatial and temporal correlations, respectively. Concretely, the spatial criminal correlation extractor is designed to capture type and neighbor spatial correlations, further contributing to constructing contrastive pairs.
Then, an alignment self-supervised loss, i.e., the MSE loss (Ermolov et al. 2021), is adopted for learning and optimization. Besides, the Hawkes process is utilized for dynamic interaction modeling within a specific historical scope. To evaluate HCL, experiments are carried out from four aspects, i.e., superiority, transferability, effectiveness, and sensitivity. Our contributions are in three aspects:
• Problem. To the best of our knowledge, we are the first to point out that three criminal correlations, i.e., type and neighbor spatial correlations and the Hawkes temporal correlation, should be exploited to benefit crime prediction.
• Algorithm. We propose a Hawkes-enhanced spatial-temporal hypergraph Contrastive Learning framework (HCL), which mines spatial and temporal correlations via contrastive learning and Hawkes process modeling.
• Evaluation. We conduct experiments on two typical datasets, i.e., NYC and CHI, against seventeen state-of-the-art crime prediction models. The results demonstrate the promising capacities of HCL from four aspects, i.e., superiority, transferability, effectiveness, and sensitivity.

Related Work
GNN-based Crime Prediction Models Spatial-temporal scenarios are important in real-world applications, such as urban computing, which benefits public safety and resource optimization. Various models (Yu et al. 2023a,b,c; Shao et al. 2023) have been developed for them. Multiple GNN-based models have been proposed for crime prediction due to their promising capacities to model spatial dependencies among different locations (Wang et al. 2020; Shao et al. 2022; Ji et al. 2023b; Tang et al. 2023; Tang, Xia, and Huang 2023). For example, DCRNN (Li et al. 2017) and STGCN (Yu, Yin, and Zhu 2017) leverage spectral graph-based message passing for information aggregation. Then, GMAN (Zheng et al. 2020) and ST-MetaNet (Pan et al. 2019) adopt the graph attention network (GAT) for feature aggregation between correlated regions. More recently, more works have been developed based on hypergraph learning models (Wu et al. 2024b; Wu, Yan, and Ng 2023; Wu et al. 2024a). Meanwhile, spatial-temporal hypergraph learning models, i.e., ST-HSL (Li et al. 2022) and ST-SHN (Xia et al. 2022c), have shown large advantages on crime prediction, relying on more accurate and sufficient spatial-temporal modeling. However, they are still at an early stage: they fail to leverage the three correlations within crime data, including two spatial correlations and one temporal correlation, which hinders their capacity.
Graph Contrastive Learning Graph neural networks show promising capacities on graph-structured data (Yang et al. 2023c; Lee, Lee, and Park 2022; Zhao, Zhang, and Wang 2022). Contrastive learning (CL) is a simple yet effective way to enhance the representation capability of graph neural networks (GNNs) (Liu et al. 2022a; Ji et al. 2023a; Xia et al. 2022a,b; Liu et al. 2022b; Liang et al. 2023b,c; Yang et al. 2022, 2023a,b). There are usually two essential elements in CL, i.e., correlated views and a contrastive loss. CL methods usually mine information within the data itself by optimizing a specific contrastive loss to reach an agreement between correlated views (Li et al. 2022), i.e., pulling together the same samples across correlated views (positive samples) and pushing away the others (negative samples). GCL techniques are widely applied to various applications, such as recommendation (Wei et al. 2023, 2022; Wei, Xia, and Huang 2023; Xia et al. 2023).
More recently, researchers have come to believe that CL methods should pursue the precision of contrastive pairs rather than overemphasizing the completeness of positive and negative samples. Thus, many negative-free CL methods (Thakoor et al. 2021; Lee, Lee, and Park 2022; Liang et al. 2023a) have been proposed, which also achieve promising performance. When it comes to CL-based spatial-temporal data prediction, few works exist. In particular, there are only two typical works for crime prediction: ST-HSL (Li et al. 2022) performs cross-view CL between local and global patterns, and AutoST (Zhang et al. 2023) proposes an automated graph-level CL model to address data noise and distribution diversity issues. Compared to them, HCL is the first negative-free spatial-temporal hypergraph CL framework based on the three aforementioned criminal correlations.

Figure 2: Framework of HCL. HCL contains three main procedures, i.e., (a) Hawkes-enhanced hypergraph encoding, (b) spatial criminal correlation extraction, and (c) correlation-based contrastive learning. The notations that appear are summarized in Tab. 1.

Method
Preliminary
In this section, we introduce preliminaries, i.e., the problem formulation and notation summary, to better understand our method. In urban computing, the urban space is generally divided into different geographical region blocks with grid-based map segmentation (Xia et al. 2022c). Following these settings, we assume that there are |B| urban region blocks, |T| timestamps, and |C| crime categories in the data. Based on this, we define the initial crime feature as $X \in \mathbb{R}^{|C| \times |B| \times |T|}$, which describes the occurrence of crimes. With this crime tensor as input, we aim to obtain the prediction $X^{T+1} \in \mathbb{R}^{|C| \times |B|}$ of future crime occurrences. The notations are summarized in Tab. 1.

Hawkes-enhanced Hypergraph Encoding
This procedure is mainly designed to compensate for the information underlying the Hawkes temporal correlation. We leverage Hawkes process modeling to enhance the dynamic interactions between different representations. Specifically, given the initial criminal feature $X \in \mathbb{R}^{|C| \times |B| \times |T|}$, the Hawkes-enhanced representation H is generated in two substeps, i.e., primitive hypergraph encoding and Hawkes process enhancement.
Primitive Hypergraph Encoding As elaborated before, hypergraph learning paradigms can achieve better performance on crime prediction thanks to more accurate spatial-temporal modeling and more expressive representations. To inherit these good attributes, HCL is also developed on top of hypergraph learning, and the adopted hypergraph backbone $g_b(\cdot)$ generates the primitive spatial-temporal representation $H_p$ by Eq. 1:

$H_p = g_b(X), \quad (1)$

where different hypergraph learning encoders can be selected as candidate backbones, e.g., ST-HSL (Li et al. 2022) and ST-SHN (Xia et al. 2022c).
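Since the backbone is treated as a black box in Eq. 1, the following PyTorch stub sketches the interface it is assumed to expose; this is a placeholder under stated assumptions (an illustrative feature dimension D, a linear lift in place of real hypergraph message passing), not the actual ST-HSL or ST-SHN architecture.

```python
import torch
import torch.nn as nn

class HypergraphBackboneStub(nn.Module):
    """Placeholder for a hypergraph backbone g_b(.) such as ST-HSL or ST-SHN.

    Maps the crime tensor X in R^{|C| x |B| x |T|} to primitive
    spatial-temporal representations H_p in R^{|C| x |B| x |T| x D} (Eq. 1).
    """

    def __init__(self, dim: int = 64):
        super().__init__()
        # A real backbone performs hypergraph message passing; a linear
        # lift of the scalar counts stands in for it here.
        self.proj = nn.Linear(1, dim)

    def forward(self, X: torch.Tensor) -> torch.Tensor:
        return self.proj(X.unsqueeze(-1))  # (C, B, T) -> (C, B, T, D)
```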
Hawkes Process Enhancement Hawkes process modeling (Hawkes 1971), a simple yet effective technique, is adopted to enhance the dynamic interactions underlying the typical temporal correlation (the Hawkes temporal correlation), i.e., the closer two moments are, the more relevant their events are. Motivated by this, the Hawkes process enhancement procedure $f_t(\cdot)$ aims to enhance the modeling of the temporal information underlying this correlation. As shown in Eq. 2, the primitive representation $H_p$ is taken as input, and the Hawkes-enhanced representation H is generated by

$H = f_t(H_p), \quad (2)$

where the representations H and $H_p$ are composed of sequences of representations at different timestamps, i.e., $H = \{H^{t_j} \mid j \in [0, T)\}$ and $H_p = \{H_p^{t_j} \mid j \in [0, T)\}$.
Specifically, Hawkes process modeling is performed at the representation level (on the primitive representation $H_p$) to enhance the dynamic interactions between different features via Hawkes weights (Zuo et al. 2018) within a predefined historical scope along the timeline. Eq. 3 shows the enhancement procedure at timestamp $t_j$:

$H^{t_j} = H_p^{t_j} + \delta \cdot \sum_{i=1}^{s} w_{j-i,j} \cdot H_p^{t_{j-i}}, \quad (3)$

where $\delta$ is the trade-off hyperparameter for the Hawkes enhancement, s denotes the predefined size of the historical scope, and $w_{j-i,j}$ represents the Hawkes weight between timestamps $t_{j-i}$ and $t_j$, which is calculated by

$w_{j-i,j} = \exp\left(-\frac{t_j - t_{j-i} + 1}{t_j - t_{j-s} + 1}\right). \quad (4)$

Table 1: Notation summary
C, B, T — sets of crime categories, region blocks, timestamps
$c_i$, $b_k$, $t_j$ — i-th crime category, k-th region block, j-th timestamp
|·| — quantity of elements in a set
X — initial crime feature
$g_b(\cdot)$ — adopted hypergraph learning backbone
$f_t(\cdot)$ — Hawkes process enhancement procedure
$H_p$, H — primitive and Hawkes-enhanced representations
$H_p^{t_j}$, $H^{t_j}$ — primitive and Hawkes-enhanced representations at $t_j$
w — calculated Hawkes weight
s — historical scope for Hawkes enhancement
$\delta$ — trade-off hyperparameter for Hawkes enhancement
$T_{b_k,t_j}$ — category set of occurred crimes in $b_k$ at $t_j$
$T^*_{b_k,t_j}$ — correlated category set of the crime in $b_k$ at $t_j$
$N_{c_i,b_k,t_j}$ — anchor block with $c_i$ in $b_k$ at $t_j$
$N^*_{c_i,b_k,t_j}$ — correlated neighbor spatial set with $c_i$ in $b_k$ at $t_j$
$C^t$, $C^n$ — type, neighbor spatial correlations
$S_{tc}$, $S_{nc}$ — type, neighbor spatial correlation sets
$p_c(\cdot)$ — correlation-based contrastive pair constructor
$P_{tc}$, $P_{nc}$ — sets of type, neighbor correlation-based contrastive pairs
$\beta_{c_i}$ — calculated category coefficient
$\lambda_{tc}$, $\lambda_{nc}$ — trade-off hyperparameters of type, neighbor correlations
$L_{tc}$, $L_{nc}$ — contrastive losses of type, neighbor spatial correlations
L, $L_{task}$ — overall loss and original task loss
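To make Eqs. 3–4 concrete, here is a minimal PyTorch sketch; it assumes unit-spaced timestamps (so $t_j - t_{j-i} = i$ and the weight reduces to $\exp(-(i+1)/(s+1))$) and simply truncates the history at the sequence start, a boundary choice the paper does not specify.

```python
import math
import torch

def hawkes_enhance(H_p: torch.Tensor, s: int = 3, delta: float = 0.01) -> torch.Tensor:
    """Eqs. (3)-(4): H^{t_j} = H_p^{t_j} + delta * sum_{i=1..s} w_{j-i,j} * H_p^{t_{j-i}}.

    H_p has the timestamp axis first, e.g., shape (T, C, B, D).
    With unit-spaced timestamps, w_{j-i,j} = exp(-(i + 1) / (s + 1)).
    """
    T = H_p.shape[0]
    H = H_p.clone()
    for j in range(T):
        for i in range(1, min(s, j) + 1):
            w = math.exp(-(i + 1) / (s + 1))  # closer timestamps weigh more
            H[j] += delta * w * H_p[j - i]
    return H
```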
Spatial Criminal Correlation Extraction
Inspired by previous works (Liang et al. 2023a; Thakoor et al. 2021), we mine the information underlying the two aforementioned spatial correlations, i.e., type spatial correlations and neighbor spatial correlations, via contrastive learning techniques. To prepare for this, we first formally define the two spatial correlations in Def. 1 and Def. 2 as tuples consisting of the target samples and their correlated samples for contrastive learning. Both correlations are extracted in this procedure and further contribute to the construction of contrastive pairs in the next section.
Type Spatial Correlation Extraction Type spatial correlation describes the co-occurrence of different types of crimes in the same region block. Thus, the feature representations of different crimes affect each other. In particular, the crimes that have occurred influence the probability of occurrence of the others. Besides, the interactions among crime types also differ across region blocks. For example, consider two situations: (a) three types of crimes co-occur in one block, and (b) only one type of crime occurs; the remaining types of crime are more likely to happen in (a) than in (b). To mine this information, we first traverse all region blocks and then extract the type spatial correlations $C^t$ satisfying Def. 1. All extracted $C^t$ constitute the correlation set $S_{tc}$ (see an example in Fig. 1).

Definition 1 (Type Spatial Correlation) Given crime data $X \in \mathbb{R}^{|C| \times |B| \times |T|}$, the type spatial correlation in the k-th block at the j-th timestamp is denoted as $C^t_{b_k,t_j} = (T_{b_k,t_j}, T^*_{b_k,t_j})$, iff the correlation exists, where

$T_{b_k,t_j} = \{c_i \mid X[c_i][b_k][t_j] > 0\}, \quad T^*_{b_k,t_j} = \{c_i \mid X[c_i][b_k][t_j] = 0\}, \quad (5)$

with $c_i \in C$, $b_k \in B$, and $t_j \in T$.

Neighbor Spatial Correlation Extraction Neighbor spatial correlation describes the characteristic that the closer a neighborhood area is to the crime center, the more dangerous it is. Thus, a crime in a center block affects the occurrence of crimes in its neighboring blocks. However, different types of crimes may have different impacts on their neighbors; Fig. 1 (c) indicates that Larceny is more influenced by this correlation than other crimes. To mine this information, we first locate the blocks where crimes exist and traverse their neighbors. Then, we extract the neighbor spatial correlations $C^n$ satisfying Def. 2 and build the correlation set $S_{nc}$ (see an example in Fig. 2).

Definition 2 (Neighbor Spatial Correlation) Given crime data $X \in \mathbb{R}^{|C| \times |B| \times |T|}$, the neighbor spatial correlation of the i-th crime category in the k-th block at the j-th timestamp is denoted as $C^n_{c_i,b_k,t_j} = (N_{c_i,b_k,t_j}, N^*_{c_i,b_k,t_j})$, iff the correlation exists, where

$N_{c_i,b_k,t_j} = b_k \text{ if } X[c_i][b_k][t_j] > 0, \quad N^*_{c_i,b_k,t_j} = \{\varphi(N_{c_i,b_k,t_j}) \mid X[c_i][\varphi(N_{c_i,b_k,t_j})][t_j] = 0\}, \quad (6)$

with $c_i \in C$, $b_k \in B$, $t_j \in T$, and $\varphi(\cdot)$ denoting the operation for locating the neighboring blocks of a specific block, $\varphi(N) = [N \pm 1, N \pm |\text{cols.}|]$, e.g., $|\text{cols.}| = 4$ in Fig. 2.
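The following NumPy sketch makes Defs. 1–2 concrete; the row-major grid flattening and the simplified boundary handling of $\varphi(\cdot)$ (row wrap-around at grid edges is ignored) are illustrative assumptions.

```python
import numpy as np

def type_correlations(X: np.ndarray):
    """Def. 1: for each (block, timestamp), split categories into occurred (T)
    and absent (T*) sets; a correlation exists only if both are non-empty."""
    C, B, T = X.shape
    corr = []
    for b in range(B):
        for t in range(T):
            occurred = np.flatnonzero(X[:, b, t] > 0)
            absent = np.flatnonzero(X[:, b, t] == 0)
            if occurred.size and absent.size:
                corr.append(((b, t), occurred, absent))
    return corr

def phi(block: int, cols: int, num_blocks: int):
    """Def. 2's phi(.): the 4-neighborhood of a block on a row-major grid."""
    cand = [block - 1, block + 1, block - cols, block + cols]
    return [n for n in cand if 0 <= n < num_blocks]
```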
Correlation-based Contrastive Learning
After extracting the spatial criminal correlations, we leverage the contrastive learning paradigm to mine their hidden semantics, which endows the selected hypergraph learning backbone with better expressive and discriminative ability. We use a self-supervised alignment loss, i.e., the MSE loss, to pull together the representations of the elements within each correlation tuple in the latent space, owing to the positive effects they have on each other, as described in the previous sections.
Contrastive Pair Construction The contrastive pairs are constructed based on the previously extracted type and neighbor spatial correlations. For either category, we first encode the elements of the correlation tuples, i.e., $T_{b_k,t_j}$, $T^*_{b_k,t_j}$, $N_{c_i,b_k,t_j}$, and $N^*_{c_i,b_k,t_j}$, and then construct the contrastive pairs. Specifically, we use average pooling to generate the embeddings from the Hawkes-enhanced representation $H \in \mathbb{R}^{|C| \times |B| \times |T| \times D}$ by Eq. 7:

$H_{T_{b_k,t_j}} = \frac{1}{|T_{b_k,t_j}|} \sum_{c_i \in T_{b_k,t_j}} H[c_i][b_k][t_j]$,
$H_{T^*_{b_k,t_j}} = \frac{1}{|T^*_{b_k,t_j}|} \sum_{c_i \in T^*_{b_k,t_j}} H[c_i][b_k][t_j]$,
$H_{N_{c_i,b_k,t_j}} = H[c_i][b_k][t_j]$,
$H_{N^*_{c_i,b_k,t_j}} = \frac{1}{|N^*_{c_i,b_k,t_j}|} \sum_{b \in N^*_{c_i,b_k,t_j}} H[c_i][b][t_j]$.  (7)

Afterward, we construct the two corresponding contrastive pair sets, $P_{tc}$ and $P_{nc}$, from the generated correlation representations following Eq. 8:

$P_{tc} = \{(H_{T_{b_k,t_j}}, H_{T^*_{b_k,t_j}}) \mid C^t_{b_k,t_j} \in S_{tc}\}$, $P_{nc} = \{(H_{N_{c_i,b_k,t_j}}, H_{N^*_{c_i,b_k,t_j}}) \mid C^n_{c_i,b_k,t_j} \in S_{nc}\}$.  (8)

Correlation-based Training Objective As shown in Eq. 9, the overall training objective is composed of two parts, i.e., the task loss and the correlation-based contrastive loss. HCL is optimized by minimizing the overall loss L:

$L = L_{task} + \lambda_{tc} \cdot L_{tc} + \lambda_{nc} \cdot L_{nc}$,  (9)

where the last two terms form the contrastive loss. Compared to some other contrastive learning frameworks, HCL acts more like a plug-and-play auxiliary module, which should be coupled with the task losses, i.e., a regression loss for predicting the quantity of different crimes and a classification loss for binary crime occurrence prediction (Xia et al. 2022c; Li et al. 2022). The correlation-based contrastive loss contains two parts, both relying on the self-supervised alignment (MSE) loss, which pulls together the features within each correlation during training (see Eq. 10):

$L_{tc} = \frac{1}{|P_{tc}|} \sum_{C^t_{b_k,t_j}} \mathrm{MSE}(H_{T_{b_k,t_j}}, H_{T^*_{b_k,t_j}})$, $L_{nc} = \frac{1}{|P_{nc}|} \sum_{C^n_{c_i,b_k,t_j}} \beta_{c_i} \cdot \mathrm{MSE}(H_{N_{c_i,b_k,t_j}}, H_{N^*_{c_i,b_k,t_j}})$.  (10)

Here, $\beta_{c_i}$ is the category coefficient corresponding to the probability that the same type of crime occurs in the 1-hop neighbors when a crime occurs, which is calculated during spatial criminal correlation extraction (see Fig. 1 (c) for an example). The term $\mathrm{MSE}(H_a, H_a^+)$ is calculated as

$\mathrm{MSE}(H_a, H_a^+) = \left\| \frac{H_a}{\|H_a\|_2} - \frac{H_a^+}{\|H_a^+\|_2} \right\|_2^2 = 2 - 2 \cdot \frac{\langle H_a, H_a^+ \rangle}{\|H_a\|_2 \cdot \|H_a^+\|_2}$,  (11)

where $\|\cdot\|_2$ denotes the L2 norm and $\langle \cdot, \cdot \rangle$ denotes the inner product. In this manner, the correlated samples in each contrastive pair are pulled together in the latent space, improving the discriminative capability of the network. Algorithm 1 presents the pseudo-code of HCL.
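Since Algorithm 1 is not reproduced here, the following PyTorch sketch shows the alignment loss of Eq. 11 and the way Eqs. 9–10 couple it with the task loss; the batched pair layout and the per-pair weights beta are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def alignment_mse(h: torch.Tensor, h_pos: torch.Tensor) -> torch.Tensor:
    """Eq. (11): squared distance between L2-normalized embeddings;
    per pair this equals 2 - 2 * cosine(h, h_pos)."""
    diff = F.normalize(h, dim=-1) - F.normalize(h_pos, dim=-1)
    return (diff ** 2).sum(dim=-1)

def overall_loss(task_loss: torch.Tensor,
                 h_tc: torch.Tensor, h_tc_pos: torch.Tensor,
                 h_nc: torch.Tensor, h_nc_pos: torch.Tensor,
                 beta: torch.Tensor,
                 lam_tc: float = 0.15, lam_nc: float = 0.15) -> torch.Tensor:
    """Eqs. (9)-(10): task loss plus the two correlation-based terms.
    h_* stack one embedding per extracted correlation; beta weights Eq. (10)."""
    l_tc = alignment_mse(h_tc, h_tc_pos).mean()
    l_nc = (beta * alignment_mse(h_nc, h_nc_pos)).mean()
    return task_loss + lam_tc * l_tc + lam_nc * l_nc
```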
Complexity Analysis The overall complexity is $O(|B| \times |C| \times (|B| \times D + |T|))$. Considering the complexity of the adopted hypergraph models, e.g., $O(|B|^2 \times |C|^2 \times D)$ in ST-HSL (Li et al. 2022), there is no redundant complexity, and only a small proportion of extra complexity is added by HCL, leading to model efficiency comparable to previous models.

Experiment
This section presents the evaluation and analysis of HCL. Concretely, we first introduce the experiment setup and then conduct different experiments for evaluation. We put forward the following four questions in this section:
• Q1: Superiority. Does HCL outperform existing state-of-the-art crime prediction models?
• Q2: Transferability. Can the proposed strategies be transferred to different hypergraph backbones?
• Q3: Effectiveness. Are the proposed strategies effective in leveraging the criminal correlations?
• Q4: Sensitivity. How does the performance of HCL fluctuate with different hyperparameters?

Experiment Setup
Dataset Two real-world crime datasets from New York City (NYC) and Chicago (CHI) are used; their statistics are reported in Tab. 2. The cities are split into disjoint regions with 3 km × 3 km spatial units. As in previous works, we use one day as the time interval to map crime records into a series. The training and test sets are split with a ratio of 7:1, and the crimes of the last month in the training set are used for validation.

Table 2: Two benchmark datasets for crime prediction.
New York City-Crimes (Jan 2014 to Dec 2015): Burglary 31,799 cases; Robbery 33,453; Assault 40,429; Larceny 85,899.
Chicago-Crime (Jan 2016 to Dec 2017): Theft 124,630 cases; Battery 99,389; Damage 59,886; Assault 37,972.

Implementation Detail All experiments are conducted with a single NVIDIA 3090 Ti GPU (24 GB) and an Intel Core i9-9900K CPU. For a fair comparison, the parameter settings in HCL for the selected hypergraph backbones are the same as in their original papers, i.e., ST-HSL (Li et al. 2022) for quantity prediction of different crimes and ST-SHN (Xia et al. 2022c) for occurrence prediction. We find the best combination of hyperparameters in a grid-search manner. Concretely, the hyperparameters $\lambda_{tc}$ and $\lambda_{nc}$ are both searched in {0.05, 0.1, 0.15, 0.2, 0.25}, the size of the historical scope s is searched in {1, 3, 5, 7, 9}, and the temporal influence weight $\delta$ is searched in {0.001, 0.01, 0.05, 0.1}.
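A sketch of the described grid search follows, assuming a caller-provided `evaluate` function that trains and validates HCL for one hyperparameter setting and returns a validation MAE (lower is better); the function name and interface are hypothetical.

```python
from itertools import product

LAMBDA_GRID = [0.05, 0.1, 0.15, 0.2, 0.25]   # for both lambda_tc and lambda_nc
SCOPE_GRID = [1, 3, 5, 7, 9]                  # historical scope s
DELTA_GRID = [0.001, 0.01, 0.05, 0.1]         # Hawkes trade-off delta

def grid_search(evaluate):
    best_score, best_cfg = float("inf"), None
    for lam_tc, lam_nc, s, delta in product(LAMBDA_GRID, LAMBDA_GRID,
                                            SCOPE_GRID, DELTA_GRID):
        score = evaluate(lam_tc=lam_tc, lam_nc=lam_nc, s=s, delta=delta)
        if score < best_score:
            best_score = score
            best_cfg = dict(lam_tc=lam_tc, lam_nc=lam_nc, s=s, delta=delta)
    return best_score, best_cfg
```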
Table 3: Performance (MAE / MAPE) comparison of quantity prediction of different crimes on the NYC and CHI datasets. Each row lists MAE/MAPE for NYC (Burglary, Larceny, Robbery, Assault), followed by CHI (Theft, Battery, Assault, Damage).
SVM (2011): 1.1604/0.7653, 1.4979/0.6417, 1.1278/0.6733, 1.1928/0.6964; 1.7711/0.5629, 1.3493/0.6027, 1.0879/0.6560, 1.1313/0.5721
ARIMA (2012): 0.8999/0.6305, 1.3015/0.6268, 0.9558/0.5969, 0.9983/0.6198; 1.5965/0.5720, 1.3212/0.5792, 0.8691/0.6044, 1.0430/0.6134
ST-ResNet (2017): 0.8680/0.5603, 1.1082/0.5329, 0.8717/0.5209, 0.9645/0.5749; 1.3931/0.5488, 1.1519/0.5719, 0.7679/0.4633, 0.9064/0.5018
DCRNN (2018): 0.8176/0.5324, 1.0732/0.5492, 0.9189/0.5532, 0.9692/0.5955; 1.3699/0.5770, 1.1583/0.5528, 0.7639/0.4600, 0.8764/0.4756
STGCN (2018): 0.8366/0.5404, 1.0629/0.5295, 0.9035/0.5441, 0.9375/0.5757; 1.3628/0.5359, 1.1512/0.5761, 0.7963/0.4810, 0.9068/0.4959
DeepCrime (2018): 0.8227/0.5508, 1.0618/0.5351, 0.8841/0.5537, 0.9222/0.5677; 1.3391/0.5430, 1.1290/0.5389, 0.7737/0.4616, 0.9096/0.4960
GWN (2019): 0.7993/0.5235, 1.0493/0.5405, 0.8681/0.5351, 0.8866/0.5646; 1.3211/0.5502, 1.1331/0.5503, 0.7493/0.4580, 0.8584/0.4850
STDN (2019): 0.8831/0.5768, 1.1442/0.5889, 0.9230/0.5649, 0.9498/0.5661; 1.5303/0.6287, 1.2076/0.5791, 0.8052/0.4820, 0.9169/0.4869
ST-MetaNet (2019): 0.8285/0.5369, 1.0697/0.5627, 0.9214/0.5766, 0.9323/0.5702; 1.3369/0.5369, 1.1762/0.5748, 0.7904/0.4753, 0.8907/0.4756
STtrans (2020): 0.8617/0.5592, 1.0896/0.5478, 0.8839/0.5651, 0.9363/0.5679; 1.3404/0.5356, 1.1466/0.5684, 0.7671/0.4499, 0.8987/0.4842
GMAN (2020): 0.8652/0.5633, 1.0503/0.5340, 0.9234/0.5671, 0.9338/0.5803; 1.3235/0.5307, 1.1442/0.5560, 0.7852/0.4714, 0.8823/0.4838
AGCRN (2020): 0.8260/0.5397, 1.0499/0.5404, 0.9013/0.5383, 0.9063/0.5519; 1.3281/0.5304, 1.1432/0.5697, 0.7669/0.4612, 0.8712/0.4859
MTGNN (2020): 0.8329/0.5439, 1.0473/0.5330, 0.8759/0.5457, 0.9090/0.5714; 1.3054/0.5378, 1.1307/0.5597, 0.7571/0.4572, 0.8667/0.4859
ST-SHN (2021): 0.8012/0.5198, 1.0431/0.5291, 0.8717/0.5362, 0.9169/0.5682; 1.3292/0.5310, 1.1348/0.5544, 0.7758/0.4574, 0.8741/0.4747
DMSTGCN (2021): 0.8376/0.5485, 1.0410/0.5464, 0.8597/0.5403, 0.9036/0.5601; 1.3292/0.5291, 1.1297/0.5552, 0.8058/0.4759, 0.8698/0.4877
ST-HSL (2022): 0.5413/0.3431, 0.9131/0.4347, 0.6397/0.3689, 0.6672/0.3802; 1.2917/0.4887, 1.0895/0.4821, 0.6417/0.3802, 0.8246/0.4424
HCL (Ours): 0.5228/0.3146, 0.9017/0.4142, 0.6116/0.3391, 0.6487/0.3662; 1.2506/0.4502, 1.0601/0.4621, 0.6284/0.3712, 0.8006/0.4269

Evaluation Metric Following previous works (Li et al. 2022; Huang et al. 2018), (1) Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) (Huang et al. 2018) are used for quantity prediction of different crimes (the lower, the better), and (2) Macro-F1 and Micro-F1 (Geng et al. 2019) are used for occurrence prediction (the higher, the better). The mean results of five runs are reported.
Compared Baseline We compare HCL with seventeen state-of-the-art baselines divided into five groups: (1) conventional machine learning models, i.e., ARIMA (Pan, Demiryurek, and Shahabi 2012) and SVM (Chang and Lin 2011); (2) CNN-based models, i.e., ST-ResNet (Zhang, Zheng, and Qi 2017) and UrbanFM (Liang et al. 2019); (3) RNN-based models, i.e., STDN (Yao et al. 2019), DeepCrime (Huang et al. 2018), and STtrans (Wu et al. 2020a); (4) GNN-based models, i.e., DCRNN (Li et al. 2017), STGCN (Yu, Yin, and Zhu 2017), GWN (Wu et al. 2019), AGCRN (Bai et al. 2020), MTGNN (Wu et al. 2020b), GMAN (Zheng et al. 2020), ST-MetaNet (Pan et al. 2019), and DMSTGCN (Han et al. 2021); (5) hypergraph learning-based models, i.e., ST-SHN (Xia et al. 2022c) and ST-HSL (Li et al. 2022). We rerun ST-SHN and ST-HSL, as they are selected as backbones; the other results are copied from the original papers. More details on each baseline can be found in its reference.

Main Performance (RQ1 & RQ2)
Tab. 3 and Tab. 4 show the main performance of HCL on the two crime prediction tasks. Based on them, Q1 and Q2, about superiority and transferability, can be answered.
Superiority Analysis (RQ1) HCL outperforms the other models on all metrics. Specifically, for crime quantity prediction, HCL boosts the best MAE and MAPE performance on average by 2.96% and 6.20% on NYC, and improves MAE and MAPE by 2.72% and 4.74% on CHI. In particular, HCL shows great superiority in Burglary prediction on NYC, improving MAE and MAPE by 3.42% and 8.31%; similarly, the MAE and MAPE of Theft prediction on CHI are improved by 3.18% and 7.88%. Meanwhile, for crime occurrence prediction, our method also achieves the best performance: the Micro-F1 and Macro-F1 values are raised by 3.1% and 2.9% over the previous best on NYC. Based on the above, we find that HCL performs better on NYC than on CHI.

Table 4: Performance (Micro-F1 / Macro-F1) comparison of crime occurrence prediction on the NYC and CHI datasets. Each row lists NYC followed by CHI.
SVM (2011): 0.4982/0.5049; 0.6089/0.6068
ARIMA (2012): 0.4591/0.4629; 0.4591/0.4629
ST-ResNet (2017): 0.5461/0.5497; 0.6268/0.6339
DCRNN (2018): 0.5715/0.5773; 0.6517/0.6507
STGCN (2018): 0.5722/0.5771; 0.6752/0.6755
DeepCrime (2018): 0.5717/0.5820; 0.6653/0.6717
STDN (2019): 0.5316/0.5355; 0.6279/0.6261
UrbanFM (2019): 0.5631/0.5695; 0.6464/0.6420
ST-MetaNet (2019): 0.5301/0.5316; 0.6748/0.6765
STtrans (2020): 0.5767/0.5792; 0.6501/0.6498
GMAN (2020): 0.5517/0.5570; 0.6723/0.6759
ST-SHN (2021): 0.6111/0.6126; 0.6868/0.6901
HCL (Ours): 0.6301/0.6304; 0.6955/0.6973

Transferability Analysis (RQ2) HCL with two different backbones, i.e., ST-HSL and ST-SHN, is evaluated on the two subtasks, i.e., quantity prediction of crimes and crime occurrence prediction, respectively. Tracking all the records shows the superiority and effectiveness of HCL when adopting different backbones.
Specifically, our HCL yields average improvements of 2.84% MAE and 5.47% MAPE over the first backbone, and boosts F1 performance by 2.08% over the second backbone. As shown in the above analysis, HCL outperforms previous typical crime prediction baselines, which shows its superiority. Moreover, HCL also achieves promising performance when applied to different crime prediction tasks and integrated with different hypergraph backbones, demonstrating its strong transferability and generalizability.

Figure 3: Ablation study, where "w.o. HE" denotes removing the Hawkes enhancement, and "w.o. CLT" and "w.o. CLN" denote removing the contrastive learning techniques for type and neighbor spatial correlations, respectively.

Ablation Study (RQ3)
We conduct an ablation study to answer Q3, i.e., the effectiveness of the three leveraged correlations: type spatial correlations, neighbor spatial correlations, and Hawkes temporal correlations. Fig. 3 shows the performance of the ablation studies. Concretely, the results show that the information mined from all three correlations benefits the prediction performance. In general, the two spatial correlations are more effective than the temporal correlation in HCL. Besides, the type spatial correlations are more useful than the neighbor spatial correlations in most situations, which is reasonable since type spatial correlations are a more common characteristic. Moreover, the correlations differ slightly in how effective they are for different crime categories: the spatial correlations are more meaningful for Burglary and Theft prediction than for Assault prediction. These performance variances are mainly due to the characteristic differences between crimes; Theft and Burglary are more similar, thus leading to similar performance with different correlations. In conclusion, both spatial correlations, i.e., type and neighbor spatial correlations, and the temporal correlation, i.e., the Hawkes temporal correlation, are effectively mined by the designed strategies, which improves the models' capacity for crime prediction.

Hyperparameter Analysis (RQ4)
We investigate the influence of the four extra introduced parameters, i.e., the type spatial correlation trade-off weight $\lambda_{tc}$, the neighbor spatial correlation trade-off weight $\lambda_{nc}$, the Hawkes enhancement trade-off weight $\delta$, and the historical scope size s, to answer Q4. We report the MAE performance on two crime prediction tasks, i.e., Robbery prediction on NYC and Damage prediction on CHI.

Figure 4: Hyperparameter analysis. The x-axes represent the range of hyperparameters; the two y-axes in each subfigure represent MAE, where the left axis is for "Robbery" prediction and the right one is for "Damage" prediction.

The performances of different combinations of hyperparameters are presented in Fig. 4, from which we make the following three observations. (1) In terms of absolute value fluctuation, the crime prediction performance is insensitive to different values within a specific range, e.g., [0.1, 0.15] of $\lambda_{tc}$ for Robbery prediction on NYC and [0.2, 0.25] of $\lambda_{tc}$ for Damage prediction on CHI. (2) The trends of performance fluctuation for each parameter are similar across datasets but may have different best values.
For example, HCL achieves the best performance for Robbery prediction on NYC when s = 3, but for Damage prediction on CHI when s = 5. (3) The influence of the Hawkes enhancement weight is more stable than that of the other two correlations; when $\delta$ is around 0.01, HCL usually reaches the best performance. In summary, the hyperparameter discussion reveals how the performance of HCL fluctuates with different hyperparameters. Based on the results, we set $\lambda_{tc}$ = 0.15, $\lambda_{nc}$ = 0.15, $\delta$ = 0.01, and s = 3 for crime prediction on NYC, and $\lambda_{tc}$ = 0.2, $\lambda_{nc}$ = 0.2, $\delta$ = 0.01, and s = 5 for crime prediction on CHI.

Conclusion
In this paper, we first point out three previously omitted crime correlations, i.e., type spatial correlations, neighbor spatial correlations, and Hawkes temporal correlations, which benefit crime prediction. We then design the Hawkes-enhanced Spatial-Temporal Hypergraph Contrastive Learning framework (HCL), which adopts contrastive learning techniques and Hawkes process modeling to mine the information underlying the spatial and temporal correlations, respectively. In the future, we plan to develop hyperparameter-efficient models by reducing the hyperparameters more adaptively and optimizing the extra time consumption brought by HCL.

Acknowledgments
This work is supported by the National Key R&D Program of China (No. 2021YFB3100700) and the National Natural Science Foundation of China (No. 62325604, 62276271).

References
Angelidou, M. 2014. Smart city policies: A spatial approach. Cities, 41: S3–S11.
Bai, L.; Yao, L.; Li, C.; Wang, X.; and Wang, C. 2020. Adaptive graph convolutional recurrent network for traffic forecasting. NeurIPS, 33: 17804–17815.
Chang, C.-C.; and Lin, C.-J. 2011. LIBSVM: A library for support vector machines. TIST, 2(3): 1–27.
Chen, M.; Liu, T.; Wang, C.; Huang, D.; and Lai, J. 2022a. Adaptively-weighted Integral Space for Fast Multiview Clustering. In MM.
Chen, M.; Wang, C.; Huang, D.; Lai, J.; and Yu, P. S. 2022b. Efficient Orthogonal Multi-view Subspace Clustering. In SIGKDD.
Ermolov, A.; Siarohin, A.; Sangineto, E.; and Sebe, N. 2021. Whitening for self-supervised representation learning. In ICML, 3015–3024. PMLR.
Feng, J.; Lin, Z.; Xia, T.; Sun, F.; Guo, D.; and Li, Y. 2020. A Sequential Convolution Network for Population Flow Prediction with Explicitly Correlation Modelling. In IJCAI, 1331–1337.
Geng, X.; Li, Y.; Wang, L.; Zhang, L.; Yang, Q.; Ye, J.; and Liu, Y. 2019. Spatiotemporal multi-graph convolution network for ride-hailing demand forecasting. In AAAI, volume 33, 3656–3663.
Han, L.; Du, B.; Sun, L.; Fu, Y.; Lv, Y.; and Xiong, H. 2021. Dynamic and multi-faceted spatio-temporal deep learning for traffic speed forecasting. In SIGKDD, 547–555.
Hawkes, A. G. 1971. Point spectra of some mutually exciting point processes. Journal of the Royal Statistical Society: Series B (Methodological), 33(3): 438–443.
Huang, C.; Zhang, C.; Zhao, J.; Wu, X.; Yin, D.; and Chawla, N. 2019. MiST: A multiview and multimodal spatial-temporal learning framework for citywide abnormal event forecasting. In The World Wide Web Conference, 717–728.
Huang, C.; Zhang, J.; Zheng, Y.; and Chawla, N. V. 2018. DeepCrime: Attentive hierarchical recurrent networks for crime prediction. In CIKM, 1423–1432.
Ji, C.; Li, J.; Peng, H.; Wu, J.; Fu, X.; Sun, Q.; and Yu, P. S. 2023a. Unbiased and Efficient Self-Supervised Incremental Contrastive Learning. In WSDM, 922–930.
Ji, C.; Zhao, T.; Sun, Q.; Fu, X.; and Li, J. 2023b.
Higher-Order Memory Guided Temporal Random Walk for Dynamic Heterogeneous Network Embedding. Pattern Recognition, 109766.
Lee, N.; Lee, J.; and Park, C. 2022. Augmentation-free self-supervised learning on graphs. In AAAI, 7372–7380.
Li, L.; Zhang, J.; Wang, S.; Liu, X.; Li, K.; and Li, K. 2023. Multi-View Bipartite Graph Clustering With Coupled Noisy Feature Filter. T-KDE, 35(12): 1–13.
Li, Y.; Yu, R.; Shahabi, C.; and Liu, Y. 2017. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. arXiv preprint arXiv:1707.01926.
Li, Z.; Huang, C.; Xia, L.; Xu, Y.; and Pei, J. 2022. Spatial-temporal hypergraph self-supervised learning for crime prediction. In ICDE.
Liang, K.; Liu, Y.; Zhou, S.; Tu, W.; Wen, Y.; Yang, X.; Dong, X.; and Liu, X. 2023a. Knowledge Graph Contrastive Learning Based on Relation-Symmetrical Structure. T-KDE.
Liang, K.; Meng, L.; Liu, M.; Liu, Y.; Tu, W.; Wang, S.; Zhou, S.; and Liu, X. 2023b. Learn from relational correlations and periodic events for temporal knowledge graph reasoning. In SIGIR, 1559–1568.
Liang, K.; Zhou, S.; Liu, Y.; Meng, L.; Liu, M.; and Liu, X. 2023c. Structure Guided Multi-modal Pre-trained Transformer for Knowledge Graph Reasoning. arXiv preprint arXiv:2307.03591.
Liang, Y.; Ouyang, K.; Jing, L.; Ruan, S.; Liu, Y.; Zhang, J.; Rosenblum, D. S.; and Zheng, Y. 2019. UrbanFM: Inferring fine-grained urban flows. In SIGKDD, 3132–3142.
Liu, M.; and Liu, Y. 2021. Inductive representation learning in temporal networks via mining neighborhood and community influences. In SIGIR, 2202–2206.
Liu, M.; Wu, J.; and Liu, Y. 2022. Embedding global and local influences for dynamic graphs. In CIKM, 4249–4253.
Liu, Y.; Liang, K.; Xia, J.; Yang, X.; Zhou, S.; Liu, M.; Liu, X.; and Li, S. Z. 2023a. Reinforcement Graph Clustering with Unknown Cluster Number. In MM, 3528–3537.
Liu, Y.; Liang, K.; Xia, J.; Zhou, S.; Yang, X.; Liu, X.; and Li, S. Z. 2023b. Dink-Net: Neural Clustering on Large Graphs. In ICML.
Liu, Y.; Tu, W.; Zhou, S.; Liu, X.; Song, L.; Yang, X.; and Zhu, E. 2022a. Deep Graph Clustering via Dual Correlation Reduction. In AAAI.
Liu, Y.; Xia, J.; Zhou, S.; Wang, S.; Guo, X.; Yang, X.; Liang, K.; Tu, W.; Li, S. Z.; and Liu, X. 2022b. A Survey of Deep Graph Clustering: Taxonomy, Challenge, and Application. arXiv preprint arXiv:2211.12875.
Liu, Y.; Zhang, R.; Guo, J.; de Rijke, M.; Chen, W.; Fan, Y.; and Cheng, X. 2023c. Black-Box Adversarial Attacks against Dense Retrieval Models: A Multi-View Contrastive Learning Method. CIKM.
Liu, Y.; Zhang, R.; Guo, J.; de Rijke, M.; Chen, W.; Fan, Y.; and Cheng, X. 2023d. Topic-Oriented Adversarial Attacks against Black-Box Neural Ranking Models. SIGIR.
Luo, X.; Tian, Z.; Zhang, T.; Yu, B.; Tang, Y. Y.; and Jia, J. 2023. PFENet++: Boosting Few-Shot Semantic Segmentation With the Noise-Filtered Context-Aware Prior Mask. T-PAMI.
Pan, B.; Demiryurek, U.; and Shahabi, C. 2012. Utilizing real-world transportation data for accurate traffic prediction. In ICDM, 595–604. IEEE.
Pan, Z.; Liang, Y.; Wang, W.; Yu, Y.; Zheng, Y.; and Zhang, J. 2019. Urban traffic prediction from spatio-temporal data using deep meta learning. In SIGKDD, 1720–1730.
Peng, B.; Tian, Z.; Wu, X.; Wang, C.; Liu, S.; Su, J.; and Jia, J. 2023. Hierarchical Dense Correlation Distillation for Few-Shot Segmentation.
Shao, Z.; Wang, F.; Xu, Y.; Wei, W.; Yu, C.; Zhang, Z.; Yao, D.; Jin, G.; Cao, X.; Cong, G.; et al. 2023. Exploring Progress in Multivariate Time Series Forecasting: Comprehensive Benchmarking and Heterogeneity Analysis. arXiv preprint arXiv:2310.06119.
Shao, Z.; Zhang, Z.; Wei, W.; Wang, F.; Xu, Y.; Cao, X.; and Jensen, C. S. 2022. Decoupled dynamic spatial-temporal graph neural network for traffic forecasting. arXiv preprint arXiv:2206.09112.
Su, K.; Li, J.; and Fu, H. 2011. Smart city and the applications. In ICECC, 1028–1031. IEEE.
Tang, J.; Xia, L.; Hu, J.; and Huang, C. 2023. Spatio-Temporal Meta Contrastive Learning. In CIKM.
Tang, J.; Xia, L.; and Huang, C. 2023. Explainable Spatio-Temporal Graph Neural Networks. In CIKM, 2432–2441.
Thakoor, S.; Tallec, C.; Azar, M. G.; Azabou, M.; Dyer, E. L.; Munos, R.; Veličković, P.; and Valko, M. 2021. Large-scale representation learning on graphs via bootstrapping. arXiv preprint arXiv:2102.06514.
Tian, Z.; Zhao, H.; Shu, M.; Yang, Z.; Li, R.; and Jia, J. 2020. Prior Guided Feature Enrichment Network for Few-Shot Segmentation. T-PAMI.
Wang, H.; Kifer, D.; Graif, C.; and Li, Z. 2016. Crime rate inference with big data. In SIGKDD.
Wang, X.; Ma, Y.; Wang, Y.; Jin, W.; Wang, X.; Tang, J.; Jia, C.; and Yu, J. 2020. Traffic flow prediction via spatial temporal graph neural network. In WWW, 1082–1092.
Wei, W.; Huang, C.; Xia, L.; Xu, Y.; Zhao, J.; and Yin, D. 2022. Contrastive meta learning with behavior multiplicity for recommendation. In WSDM, 1120–1128.
Wei, W.; Huang, C.; Xia, L.; and Zhang, C. 2023. Multi-Modal Self-Supervised Learning for Recommendation. In WWW, 790–800.
Wei, W.; Xia, L.; and Huang, C. 2023. Multi-Relational Contrastive Learning for Recommendation. In RecSys, 338–349.
Wortley, R.; and Townsley, M. 2016. Environmental criminology and crime analysis. Taylor & Francis.
Wu, H.; Li, N.; Zhang, J.; Chen, S.; Ng, M. K.; and Long, J. 2024a. Collaborative contrastive learning for hypergraph node classification. Pattern Recognition, 146: 109995.
Wu, H.; Yan, Y.; and Ng, M. K.-P. 2023. Hypergraph collaborative network on vertices and hyperedges. T-PAMI.
Wu, H.; Yip, A.; Long, J.; Zhang, J.; and Ng, M. K. 2024b. Simplicial Complex Neural Networks. T-PAMI, 46(1): 561–575.
Wu, X.; Huang, C.; Zhang, C.; and Chawla, N. V. 2020a. Hierarchically structured transformer networks for fine-grained spatial event forecasting. In WWW, 2320–2330.
Wu, Z.; Pan, S.; Long, G.; Jiang, J.; Chang, X.; and Zhang, C. 2020b. Connecting the dots: Multivariate time series forecasting with graph neural networks. In SIGKDD, 753–763.
Wu, Z.; Pan, S.; Long, G.; Jiang, J.; and Zhang, C. 2019. Graph WaveNet for deep spatial-temporal graph modeling. arXiv preprint arXiv:1906.00121.
Xia, J.; Wu, L.; Chen, J.; Hu, B.; and Li, S. Z. 2022a. SimGRACE: A simple framework for graph contrastive learning without data augmentation. In WWW, 1070–1079.
Xia, J.; Wu, L.; Wang, G.; and Li, S. Z. 2022b. ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning. In ICML. PMLR.
Xia, L.; Huang, C.; Huang, C.; Lin, K.; Yu, T.; and Kao, B. 2023. Automated Self-Supervised Learning for Recommendation. In Proceedings of the ACM Web Conference 2023, 992–1002.
Xia, L.; Huang, C.; Xu, Y.; Dai, P.; Bo, L.; Zhang, X.; and Chen, T. 2022c. Spatial-temporal sequential hypergraph network for crime prediction with dynamic multiplex relation learning. arXiv preprint arXiv:2201.02435.
Yang, Y.; Guan, Z.; Li, J.; Zhao, W.; Cui, J.; and Wang, Q. 2023a. Interpretable and Efficient Heterogeneous Graph Convolutional Network. IEEE TKDE, 35(2): 1637–1650.
Yang, Y.; Guan, Z.; Wang, Z.; Zhao, W.; Xu, C.; Lu, W.; and Huang, J. 2022. Self-supervised Heterogeneous Graph Pre-training Based on Structural Clustering. In NeurIPS, volume 35, 16962–16974.
Yang, Y.; Guan, Z.; Zhao, W.; Lu, W.; and Zong, B. 2023b. Graph Substructure Assembling Network with Soft Sequence and Context Attention. IEEE TKDE, 35(5): 4894–4907.
Yang, Y.; Yang, J.; Bao, R.; Zhan, D.; Zhu, H.; Gao, X.; Xiong, H.; and Yang, J. 2023c. Corporate Relative Valuation Using Heterogeneous Multi-Modal Graph Neural Network. T-KDE.
Yao, H.; Tang, X.; Wei, H.; Zheng, G.; and Li, Z. 2019. Revisiting spatial-temporal similarity: A deep learning framework for traffic prediction. In AAAI, volume 33, 5668–5675.
Yu, B.; Yin, H.; and Zhu, Z. 2017. Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting. arXiv preprint arXiv:1709.04875.
Yu, C.; Wang, F.; Shao, Z.; Sun, T.; Wu, L.; and Xu, Y. 2023a. DSformer: A double sampling transformer for multivariate time series long-term prediction. In CIKM.
Yu, C.; Yan, G.; Yu, C.; and Mi, X. 2023b. Attention mechanism is useful in spatio-temporal wind speed prediction: Evidence from China. Applied Soft Computing, 148: 110864.
Yu, C.; Yan, G.; Yu, C.; Yu, Z.; and Xiwei, M. 2023c. A multi-factor driven spatiotemporal wind power prediction model based on ensemble deep graph attention reinforcement learning networks. Energy, 263: 126034.
Zhang, J.; Zheng, Y.; and Qi, D. 2017. Deep spatio-temporal residual networks for citywide crowd flows prediction. In AAAI, volume 31.
Zhang, Q.; Huang, C.; Xia, L.; Wang, Z.; Li, Z.; and Yiu, S. 2023. Automated Spatio-Temporal Graph Contrastive Learning. In WWW, 295–305.
Zhao, T.; Zhang, X.; and Wang, S. 2022. Exploring edge disentanglement for node classification. In WWW, 1028–1036.
Zheng, C.; Fan, X.; Wang, C.; and Qi, J. 2020. GMAN: A graph multi-attention network for traffic prediction. In AAAI, volume 34, 1234–1241.
Zuo, Y.; Liu, G.; Lin, H.; Guo, J.; Hu, X.; and Wu, J. 2018. Embedding temporal network via neighborhood formation. In SIGKDD, 2857–2866.
2024
971
18,819
A Comprehensive Augmentation Framework for Anomaly Detection
Jiang Lin, Yaping Yan*
School of Computer Science and Engineering, Southeast University, Nanjing 210096, China
Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China
{220215663, yan}@seu.edu.cn

Abstract
Data augmentation methods are commonly integrated into the training of anomaly detection models. Previous approaches have primarily focused on replicating real-world anomalies or enhancing diversity, without considering that the standard of anomaly varies across classes, potentially leading to a biased training distribution. This paper analyzes the crucial traits of simulated anomalies that contribute to the training of reconstructive networks and condenses them into several methods, thus creating a comprehensive framework in which appropriate combinations are selectively applied. Furthermore, we integrate this framework with a reconstruction-based approach and concurrently propose a split training strategy that alleviates the overfitting issue while avoiding interference with the reconstruction process. Evaluations conducted on the MVTec anomaly detection dataset demonstrate that our method outperforms the previous state-of-the-art approach, particularly on object classes. We also generate a simulated dataset comprising anomalies with diverse characteristics, and experimental results demonstrate that our approach exhibits promising potential for generalizing effectively to various unseen anomalies encountered in real-world scenarios.

*Corresponding author
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction
Surface anomaly detection is an important task in quality inspection and automation. It aims to find outliers based on the normal samples provided. Many recent methods take a reconstructive approach that models the distribution of normal samples and then discriminates the anomalous ones. Training the reconstruction process often requires anomalous samples; since real anomalies are difficult to collect, data augmentations are therefore often used to produce simulated ones. In practice, the characteristics of the simulated anomalies significantly impact the quality of the reconstruction results.
The core idea of existing data augmentation methods is to randomly replace a region of a normal image with other values, thus creating an anomaly. Two natural questions raised in this process are how to select the target region and what to use as the anomaly source. In previous methods, the choice of target region includes rectangular areas, scars (thin rectangular areas) (Li et al. 2021), randomized masks (Zavrtanik, Kristan, and Skočaj 2021), and masks obtained by thresholding differences (Schlüter et al. 2022). As for the anomaly source, simple CutOut (replacing with zeros) (DeVries and Taylor 2017), random noise, external texture sources (Zavrtanik, Kristan, and Skočaj 2021), and in-distribution sampling (Li et al. 2021) are widely adopted. As varied as these choices are, previous solutions create data augmentation methods from two common perspectives: they either focus on creating anomalies with high variety or on approximating realistic appearances. In this paper, we support the use of diverse shapes and randomly distributed locations in anomaly generation.
However, we oppose the empirical belief that mimicking or approximating the distribution of real anomalies leads to optimal solutions. The anomaly detection problem is known for the unpredictability of anomalies, which means we should hold no presumptions about the appearance of anomalies in the real world. The hypothesized anomaly distribution that previous methods try to approach is a biased distribution extrapolated from past observations of the test set or from real-world experience. Mimicking a biased distribution leads to biased solutions, which can provide false comfort on a test set that is heavily shaped by human experience during its creation. The results are therefore not representative enough, and the performance of these methods might not be as promising in the real world. On the other hand, other methods seek to create anomalies with high variety, but the current way of executing this is unsatisfying, as a randomized source does not produce anomalies of truly high variety. Simply using randomly selected anomaly sources, whether internal or external, only achieves variety in values, not in the type of anomalies. These previous attempts are empirical, and further analysis is required to produce anomalies with different intrinsic natures that cover the possible situations from different angles.
From a reconstructive perspective, anomalies can be divided into transparent and opaque ones, because they must be treated differently by the network. For transparent ones, the original normal areas are covered by anomalies but still visible, so the goal is to retrieve them. For opaque ones, the original regions are completely gone, so the goal is to reconstruct them based on information from their surrounding areas. However, the premise of this division may not hold, as normality varies within a certain range in reality. Anomalies that lie close to the normal distribution are less likely to be well reconstructed, so we separately propose a method to create near-distribution anomalies, tightening the decision boundary of normality. Additionally, we include the rotation anomaly, which differs from the previous ones as it originates entirely from manual definitions in certain object classes. Since providing samples with the same orientation during training is sufficient to learn that rotation is not allowed, and it is difficult to compute the anomaly mask created by the rotation operation, we apply this operation in reverse to classes that allow rotation, to emphasize its irrelevance in creating an anomaly.
The anomaly simulation process aims for diversity, but using all augmentations indiscriminately may result in suboptimal outcomes, since the same augmentation may not be considered anomalous for different classes. Unlike previous automated methods, we believe that manual intervention is necessary to determine the appropriate combination of anomaly simulation methods, since the human definition of anomalies varies per class. Therefore, we additionally propose a simple yet effective selection strategy that disables an augmentation if it is considered irrelevant to creating anomalies. We follow the previous work (Zavrtanik, Kristan, and Skočaj 2021) to build a reconstructive framework to evaluate our anomaly simulation method.
A key ingredient of the previous method for tackling overfitting is to apply rotation augmentation to normal samples before composing anomalies. However, this interferes with our new augmentation framework, since rotation may itself introduce anomalies in certain classes. We remove the arbitrary use of rotation augmentation and propose a split training strategy to improve generalization. Specifically, we split the training data in half and use different samples for reconstruction and localization, thus preparing the localization process for the reconstruction quality drop encountered in practice. In experiments, our method demonstrates state-of-the-art performance on benchmarks, and we show how different anomaly augmentations affect the final reconstruction quality on a simulated dataset. In ablations, the effects of rotation augmentation and the splitting strategy are investigated further. Our main contributions are listed as follows:
• A comprehensive anomaly simulation framework that selectively applies different augmentations.
• A near-distribution anomaly augmentation method.
• A split training strategy that alleviates the overfitting issue in two-stage frameworks (a minimal sketch follows this list).
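As referenced above, here is a minimal sketch of the split training strategy; it assumes the reconstructive and discriminative networks are then trained on the two disjoint halves, and the names and dataset size are illustrative.

```python
import numpy as np

def split_halves(num_samples: int, seed: int = 0):
    """Split the normal training set into two disjoint halves: one trains the
    reconstructive network, the other the discriminative (localization) one,
    so the latter is exposed to realistically imperfect reconstructions."""
    idx = np.random.default_rng(seed).permutation(num_samples)
    return idx[: num_samples // 2], idx[num_samples // 2:]

recon_idx, disc_idx = split_halves(200)  # e.g., 200 normal training images
```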
2022) proposes a method that seamlessly blends scaled patches of various sizes from separate images by integrating Poisson image editing, creating anomalies that are visually close to real-world anomalies. The common assumption behind these methods is that the simulated anomaly should be as close as possible to real-world samples. We agree that reconstruction results will be better if similar anomalies have been seen during training. However, we think that any presumption about real-world anomalies is highly biased, as anomalies do not follow fixed patterns. Instead of trying to mimic an imagined real-world anomaly distribution, which does not exist, this work aims to construct a framework that generates anomalies of different traits, so that the reconstruction ability can later generalize to similar anomalies at inference. Method This paper presents a comprehensive anomaly simulation framework that aims to generate diverse anomalies while accounting for the distinct standards of normality associated with each class. The final behavior of the reconstructive network is highly related to the anomaly samples it receives during training. Since different classes do not share the same standard of normality, selectively applying different anomaly simulation methods is optimal, as the same augmentation may not be considered anomalous for all classes. A split training strategy is also proposed to alleviate the intermediate inconsistency in two-stage frameworks. Anomaly Simulation Framework Our framework consists of a set of anomaly simulation methods. Since anomalies are known to be unpredictable in both shape and appearance, we follow the previous method (Zavrtanik, Kristan, and Skočaj 2021) to generate masks $M_u$ with uncertain shapes. As for appearance, we believe that relying solely on arbitrary sources is insufficient to establish a credibly diverse training distribution. In this paper, we focus on the distinctions in the characteristics of anomalies and classify them into two categories, namely Transparent and Opaque. The intuition behind this is that the reconstructive network performs different actions for them: restoration and reconstruction, respectively. The original image $I$ is covered with a randomly sampled source $N$ (Cimpoi et al. 2014). The transparent augmentation $I_t$ is defined as:

$$I_t = \bar{M}_u \odot I + (1-\beta)(M_u \odot I) + \beta (M_u \odot N), \quad (1)$$

where $\beta$ is the opacity parameter that controls the transparency, and $\bar{M}_u$ is the inverse of $M_u$. The opaque augmentation is generated similarly, except that $\beta$ is fixed to one. The opaque augmentation $I_o$ is defined as:

$$I_o = \bar{M}_u \odot I + M_u \odot N. \quad (2)$$
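As a concrete illustration of Eqs. (1) and (2), the following NumPy sketch composes a transparent or opaque anomaly given a precomputed binary mask (e.g., a thresholded Perlin-noise map as in DRAEM). The function name and the sampled opacity range are our own illustrative assumptions, not the authors' released code:

```python
import numpy as np

def compose_anomaly(image, source, mask, beta=None, rng=None):
    """Compose a simulated anomaly on `image` ((H, W, C) float array in [0, 1]).

    `mask` is a binary (H, W) array with uncertain shapes. If `beta` is None,
    an opacity is sampled and a *transparent* anomaly is created (Eq. 1);
    with beta=1.0 the composition degenerates to the *opaque* case (Eq. 2).
    """
    rng = rng or np.random.default_rng()
    m = mask[..., None].astype(image.dtype)      # M_u, broadcast over channels
    m_inv = 1.0 - m                              # inverse mask \bar{M}_u
    if beta is None:
        beta = rng.uniform(0.2, 0.8)             # illustrative opacity range
    # Eq. (1): untouched background + faded original + blended anomaly source.
    augmented = m_inv * image + (1.0 - beta) * (m * image) + beta * (m * source)
    return augmented, mask                       # mask doubles as ground truth
```

Setting `beta=1.0` reproduces Eq. (2) exactly, which is why a single routine can serve both categories.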
If normality were fixed, all anomalies could be classified into the two categories above. However, the standard of normality varies within a certain range for one class, and it differs across classes. The global position remains relatively fixed in certain classes, thereby rendering rotations anomalous. Other classes perceive alterations in relative position as anomalies, encompassing bends and other changes in relative position. In yet other classes (primarily some irregular textures), neither of these changes contributes to the occurrence of anomalies. To tighten the decision boundary of normality, we additionally include rotations and introduce a method called Near-distribution Anomaly Augmentation (NDAA). The aforementioned augmentation methods differ from previous methods, and they should be judiciously employed during training to establish an optimal training distribution. As illustrated in Fig. 1, NDAA should only be employed when changes in relative position would generate anomalies within that particular class. If the augmented image does not contain anomalies, the augmentation method should be excluded.

Figure 1: This figure shows how to select the appropriate augmentations (Rotation, Transparent, Opaque, NDAA; optional or stochastic per class) for training.

Figure 2: This figure illustrates the process of constructing a near-distribution anomaly: measure difference, mask filtering, and anomaly composition.

Near-distribution Anomaly Augmentation The purpose of this method is to enhance the ability to discern anomalies that are distributed close to the normal distribution. The anomaly created should exhibit spatial proximity to its surroundings, manifesting in various forms such as bending or distortion, while not being limited to these manifestations. As illustrated in Fig. 2, we first select a rectangular area and distort it according to a sine curve. Then we take the absolute difference between the original image and the distorted image and use a threshold to filter their difference into a primitive mask $M_o$. This process creates many scattered dots distributed in the background. Therefore, we first apply block reduction and then resize the reduced image back to the original size before thresholding it. This operation connects visually near dots into contiguous regions and produces mask $M_l$, but it also enlarges the original mask area. We compute a pixel-wise product between $M_o$ and $M_l$, thus preserving the dense areas and filtering out the noise. The final masked regions in the original image are then replaced with the distorted area to obtain the final augmented image. This simulation method is crucial for improving the reconstruction in many classes, because normal samples often vary in detail. The reconstructive network can confuse these variations with subtle anomalies, leading to non-ideal results. The anomalies created by NDAA are similar to their surrounding area, thus pushing the network to discriminate the difference between them.
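The NDAA pipeline described above is procedural enough to sketch directly. The following Python rendering uses scikit-image for block reduction and resizing; the specific amplitude, period, thresholds, and block size are illustrative guesses, as the paper does not state these constants:

```python
import numpy as np
from skimage.measure import block_reduce
from skimage.transform import resize

def ndaa(image, y0, y1, x0, x1, amp=4.0, period=40.0, thresh=0.05, block=8):
    """Create a near-distribution anomaly by sine-distorting a rectangle of
    `image` ((H, W) or (H, W, C) float array), then filtering the difference mask."""
    distorted = image.copy()
    for x in range(x0, x1):                      # vertical sine-shift per column
        shift = int(amp * np.sin(2 * np.pi * x / period))
        distorted[y0:y1, x] = np.roll(image[y0:y1, x], shift, axis=0)
    diff = np.abs(image - distorted)
    if diff.ndim == 3:
        diff = diff.mean(axis=-1)
    m_o = (diff > thresh).astype(float)          # primitive mask M_o (scattered dots)
    # Block-reduce then resize back: connects nearby dots into regions (M_l).
    reduced = block_reduce(m_o, (block, block), np.mean)
    m_l = (resize(reduced, m_o.shape, order=1) > thresh).astype(float)
    mask = m_o * m_l                             # keep dense areas, drop isolated noise
    m = mask if image.ndim == 2 else mask[..., None]
    return m * distorted + (1.0 - m) * image, mask
```

The pixel-wise product at the end is the key design choice: it keeps only the distortion's dense core while discarding the spurious background dots that thresholding alone would admit.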
Reconstructive Framework The seminal previous work (Zavrtanik, Kristan, and Skočaj 2021) provides a simple yet powerful solution for reconstructive methods. It comprises a reconstructive network and a discriminative network, where the former is responsible for transforming anomalous inputs into normal images, while the latter learns an appropriate distance function to quantify the level of anomaly based on the disparity between the input and the reconstructed image. In that work, rotation augmentation was applied stochastically to normal samples before they entered the reconstructive network. We believe the purpose of this operation is to address issues caused by the limited number of training samples, such as overfitting. However, it affects the anomaly generation process and can itself introduce anomalies in certain cases, which raises concerns about its impact on reconstruction. If we remove this operation, generalization problems arise immediately: the reconstructive network can perfectly reconstruct the samples it has seen during training, but it produces blurry results for test samples. This inconsistency affects the downstream discriminative network, because the learned distance measurement is no longer accurate given the difference in reconstruction quality between training and testing. Considering this circumstance, we propose a direct solution that restores downstream performance while removing the global rotation augmentation. The discriminative network is trained with high-quality reconstructions during training, whereas the reconstructive network fails to generalize satisfactorily and produces blurry reconstructions for test samples. Therefore, we address this issue in reverse, by exposing the discriminative network during training to the same reconstruction quality it will experience at inference. As shown in Fig. 3, we partition the data into two disjoint subsets based on the parity of their indices. Given a set of normal samples $I = \{I_i : i \in \{1, 2, \ldots, N\}\}$, we split it into $I_X = \{I_i : i \in \{1, 2, \ldots, N\},\ i \bmod 2 = 1\}$ and $I_Y = \{I_i : i \in \{1, 2, \ldots, N\},\ i \bmod 2 = 0\}$. Then, $I_X$ is used to train the reconstructive network, but it does not participate in training the discriminative network. $I_Y$ is passed through the reconstructive network without computing the reconstruction loss, and the reconstruction results are provided to the discriminative network to train it. Following previous works, we use the L2 loss as the reconstruction objective and additionally apply the SSIM loss (Wang et al. 2004) to stress the interdependence within the reconstructed image. Besides, the Focal loss (Lin et al. 2017) is used as the localization objective to focus learning on hard examples. The full objective is formulated as follows:

$$\mathcal{L}(I_X, I_Y, M_{gt}) = \mathcal{L}_{SSIM}(I_X, R(I_X)) + \mathcal{L}_2(I_X - R(I_X)) + \mathcal{L}_{focal}(S(I_Y \oplus R(I_Y)), M_{gt}), \quad (3)$$

where $\oplus$ is channel-wise concatenation and $M_{gt}$ is the ground-truth mask. $R$ and $S$ represent the reconstructive network and the discriminative network, respectively. $\mathcal{L}_{SSIM}$ is a patch-based SSIM loss, and $\mathcal{L}_{focal}$ refers to the Focal loss. In this way, the reconstruction quality remains consistent across the samples encountered by the discriminative network during both training and testing, leading to stable performance in the downstream task. Different ratios of the two portions were experimented with, and empirical evidence suggests that assigning equal sample sizes by parity produces more favorable outcomes.
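A minimal PyTorch sketch of one split-training step under Eq. (3) is given below; the network modules, loss helpers, and batch layout are assumptions for illustration rather than the authors' implementation:

```python
import torch
import torch.nn.functional as F

def train_step(recon_net, disc_net, batch, opt, focal_loss, ssim_loss):
    """One split-training step. `batch` holds augmented images, clean targets,
    and ground-truth masks, already divided by index parity into the X
    (reconstruction) and Y (localization) halves, mirroring Eq. (3)."""
    aug_x, clean_x = batch["aug_x"], batch["clean_x"]
    aug_y, mask_y = batch["aug_y"], batch["mask_y"]

    rec_x = recon_net(aug_x)                          # I_X trains reconstruction only
    loss_rec = F.mse_loss(rec_x, clean_x) + ssim_loss(rec_x, clean_x)

    with torch.no_grad():                             # I_Y: no reconstruction gradient
        rec_y = recon_net(aug_y)
    seg = disc_net(torch.cat([aug_y, rec_y], dim=1))  # channel-wise concatenation
    loss_loc = focal_loss(seg, mask_y)

    opt.zero_grad()
    (loss_rec + loss_loc).backward()
    opt.step()
    return loss_rec.item(), loss_loc.item()
```

The `no_grad` pass is what exposes the discriminative network to imperfect, inference-like reconstructions of samples the reconstructive network never fit.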
Experiments The performance of our method is compared with previous methods, and empirical evaluations demonstrate the quality improvements brought by our anomaly simulation method in reconstruction results. We additionally assess the generalization performance by cross-comparing the performance of each method on a simulated dataset. Benchmarks In this section, we provide benchmarks comparing our performance with previous methods (DRAEM (Zavrtanik, Kristan, and Skočaj 2021), NSA (Schlüter et al. 2022), and PatchCore (Roth et al. 2022)). Dataset Experiments in this paper are conducted on the MVTec AD (Bergmann et al. 2019) anomaly detection dataset. The MVTec dataset contains 15 classes, including 5 texture classes and 10 object classes. It provides a training set with only normal images and a test set comprising various anomalies, along with pixel-level annotations that enable benchmarks for anomaly localization. Experimental settings All images are resized to 256 × 256 before entering the network. The training settings and model choices mostly follow the previous work (Zavrtanik, Kristan, and Skočaj 2021) to make a fair comparison, as this paper mainly focuses on improving performance through more comprehensive training data. We randomly split the training data in half and used the halves to train each network separately. The data collection process can store similar samples in nearby positions, so it is worth noting that the data is separated by index parity instead of into upper and lower halves. Also, there is no indiscriminate use of image rotation (on anomaly-free images as a data augmentation method, not to simulate anomalies) to alleviate the overfitting issue. Metrics The norm in benchmarking anomaly detection methods is to report the AUROC score. Since the anomaly detection problem is severely imbalanced and AUROC may produce a less representative score (Škvára, Pevný, and Šmídl 2023), we additionally report the AP score to provide more representative benchmarks. Experimental results The results of image-level anomaly detection on MVTec are presented in Table 1. Our method demonstrates comparable performance on this task and achieves the highest scores in seven classes. The average performance of our method surpasses that of DRAEM and NSA, achieving an AUROC and AP of 98.3% and 99.3% respectively, which is slightly lower by 0.3% and 0.2% compared to PatchCore. In Table 2, our method surpasses the state-of-the-art by 2.5% in terms of AP and delivers a slightly better result in AUROC on the anomaly localization task.

Figure 3: This figure shows the structure of the reconstructive framework (reconstruction with L2+SSIM losses, channel-wise concatenation, and focal-loss localization) and the concept of the split-by-parity training strategy. The green samples and the orange samples are from the two different portions of the given training distribution.

Table 1: Anomaly detection results (AUROC/AP).

| Class | DRAEM | NSA | PatchCore | Ours |
|---|---|---|---|---|
| capsule | 95.5 / 99.1 | 93.7 / 98.7 | 97.1 / 99.2 | 95.5 / 99.1 |
| bottle | 96.7 / 98.4 | 97.6 / 99.0 | 100 / 100 | 96.5 / 98.2 |
| carpet | 94.7 / 98.4 | 85.5 / 94.5 | 97.9 / 99.4 | 99.0 / 99.7 |
| leather | 100 / 100 | 100 / 100 | 100 / 100 | 100 / 100 |
| pill | 97.2 / 99.5 | 98.4 / 99.7 | 93.8 / 98.8 | 98.7 / 99.8 |
| transistor | 90.8 / 89.2 | 94.2 / 93.1 | 100 / 100 | 100 / 99.9 |
| tile | 100 / 100 | 100 / 100 | 98.7 / 99.6 | 100 / 100 |
| cable | 92.9 / 96.0 | 94.6 / 94.0 | 98.5 / 99.0 | 92.5 / 95.8 |
| zipper | 100 / 100 | 100 / 100 | 99.6 / 99.9 | 100 / 100 |
| toothbrush | 100 / 100 | 100 / 100 | 100 / 100 | 100 / 100 |
| metal nut | 99.7 / 99.9 | 95.5 / 99.6 | 99.7 / 99.9 | 98.7 / 99.7 |
| hazelnut | 100 / 100 | 95.4 / 97.3 | 100 / 100 | 98.7 / 99.2 |
| screw | 97.3 / 96.7 | 88.6 / 96.3 | 97.2 / 98.9 | 95.7 / 98.6 |
| grid | 100 / 100 | 99.5 / 99.8 | 97.0 / 99.0 | 99.5 / 99.8 |
| wood | 99.6 / 99.9 | 96.6 / 98.9 | 99.4 / 99.8 | 100 / 100 |
| avg | 97.6 / 98.5 | 96.2 / 98.0 | 98.6 / 99.5 | 98.3 / 99.3 |

Further inspection shows that the performance increase is mainly due to improvements in reconstruction quality. Despite achieving a better score in seven classes, our method performs less optimally in other classes. After investigating the test set, we believe this may be attributed to the limited number of anomaly categories and inaccurate labels. For example, the capsule in Fig. 4 is squeezed, resulting in a thinner middle section compared to the right portion both above and below; however, only the missing region at the top is identified as an anomaly.
Table 2: Anomaly localization results (AUROC/AP).

| Class | DRAEM | NSA | PatchCore | Ours |
|---|---|---|---|---|
| capsule | 94.3 / 49.4 | 97.6 / 55.5 | 98.7 / 45.5 | 94.6 / 41.9 |
| bottle | 99.1 / 86.5 | 98.3 / 82.0 | 97.9 / 76.5 | 98.7 / 86.2 |
| carpet | 95.5 / 53.5 | 90.5 / 36.2 | 98.6 / 59.4 | 99.3 / 82.4 |
| leather | 95.6 / 75.3 | 99.5 / 59.0 | 98.8 / 41.5 | 99.1 / 74.7 |
| pill | 97.6 / 48.5 | 98.1 / 71.0 | 97.3 / 74.9 | 96.9 / 41.8 |
| transistor | 90.9 / 50.7 | 84.8 / 49.5 | 96.6 / 69.4 | 93.1 / 56.9 |
| tile | 99.2 / 92.3 | 99.3 / 93.2 | 94.7 / 50.7 | 99.6 / 96.8 |
| cable | 94.7 / 52.4 | 87.2 / 29.5 | 97.9 / 64.9 | 97.9 / 72.4 |
| zipper | 98.8 / 81.5 | 94.2 / 67.8 | 97.9 / 52.8 | 99.0 / 66.5 |
| toothbrush | 98.1 / 44.7 | 92.9 / 40.5 | 98.6 / 56.6 | 98.3 / 42.3 |
| metal nut | 99.5 / 96.3 | 98.3 / 93.5 | 98.4 / 90.3 | 99.1 / 93.5 |
| hazelnut | 99.7 / 92.9 | 97.6 / 55.2 | 98.4 / 56.9 | 99.7 / 92.5 |
| screw | 97.6 / 58.2 | 96.1 / 42.3 | 99.0 / 35.9 | 99.4 / 68.2 |
| grid | 99.7 / 65.7 | 99.1 / 51.2 | 97.9 / 32.1 | 99.5 / 64.9 |
| wood | 96.4 / 77.7 | 91.1 / 55.6 | 93.0 / 46.6 | 96.7 / 82.3 |
| avg | 97.3 / 68.4 | 95.0 / 58.8 | 97.6 / 56.9 | 98.0 / 70.9 |

Given that, we believe the standard benchmark alone is insufficient, and we provide closer inspections of the reconstruction quality. A simulated dataset is also constructed to evaluate our method from another perspective. Comparison on Reconstruction Quality The core idea of this work is to improve the reconstruction quality by providing more comprehensive simulated anomalies, thus benefiting the downstream detection and localization tasks. More fully reconstructed anomalous areas allow the discriminative network to produce more accurate results. We inspect and provide qualitative results comparing the reconstruction quality on hard samples between our method and the previous method (Zavrtanik, Kristan, and Skočaj 2021) while using an identical reconstructive network. Qualitative results are provided in Fig. 4. The first column displays an anomaly that arises from the distortion of normal samples, wherein the metal structure at the top is tilted. The metal structure in its normal form can vary within a certain range, making it hard to distinguish an anomaly from a normal variant. The previous approach encounters difficulties in detecting such anomalies, which we attribute to its training on simplistic simulated anomalies that do not encompass scenarios with in-distribution anomaly sources. The proposed method is trained using a wider range of anomaly categories, and a selected combination of these categories is tailored to match the characteristics of specific classes. As a result, our method exhibits enhanced capability in accurately detecting anomalies that are closely distributed among normal samples. The second column shows a different situation, where the bent metal strip also creates a missing part on the left. The previous method is capable of effectively removing redundant components, but it fails to sufficiently recover missing components. At a local level, the missing part leaves a background that appears the same as other normal regions, making it hard to determine whether it should be recovered. The proposed method effectively eliminates the deformed metal strip and precisely restores it to its original position, showcasing its superior capability in accurately modeling global normality. The utilization of a more comprehensive simulated training dataset gives our model a higher likelihood of achieving ideal reconstructions, owing to its prior exposure to anomalies with similar characteristics during training.
The effectiveness of our proposed anomaly simulation framework in enhancing reconstruction quality is thus demonstrated, using a minimal structure comprising solely a reconstructive network and the corresponding anomaly simulation method. It could therefore presumably be integrated into other reconstruction-based methods that utilize simulated anomalies, further improving their reconstruction quality and subsequently increasing downstream performance. Generalization on Simulated Anomalies In this study, we posit that the performance of the reconstructive network on unseen anomalies can be enhanced through training with simulated anomalies exhibiting similar characteristics. We argue that previous methods, which aim to replicate the distribution of actual anomalies, introduce an inductive bias into the network. The empirical conclusion about the distribution of real anomalies is based on prior experience with the test set. Additionally, new datasets for testing are also created based on these observations. In essence, there could be significant overlap between the modeled real-world distribution and the test set, since both are generated under human assumptions about real-world anomalies. Besides, the test sets in current datasets contain anomalies of only limited types, which is concerning since the evaluation results could presumably favor certain methods. Given this, the test set might not be representative enough to reflect performance in real-world scenarios truthfully.

Figure 4: This figure compares the reconstruction quality between DRAEM (second row) and ours (third row).

Table 3: Anomaly localization results (AUROC/AP) on the simulated dataset.

| Class | DRAEM | NSA | PatchCore | Ours |
|---|---|---|---|---|
| Cutpaste | 86.8 / 58.1 | 82.2 / 30.2 | 93.5 / 52.8 | 89.3 / 60.4 |
| NDAA | 96.4 / 67.2 | 91.7 / 31.6 | 95.0 / 30.7 | 96.9 / 72.3 |
| NSA | 97.9 / 82.6 | 98.4 / 92.7 | 94.7 / 66.4 | 96.6 / 82.8 |
| Opaque | 100 / 99.5 | 87.0 / 49.2 | 94.0 / 45.4 | 99.9 / 98.8 |
| Transparent | 100 / 99.9 | 86.5 / 43.2 | 95.5 / 48.1 | 100 / 99.3 |
| Avg | 96.2 / 81.4 | 89.1 / 49.3 | 94.5 / 48.6 | 96.5 / 82.7 |

However, it is currently infeasible for researchers to create a dataset with a truly representative test set, considering the massive cost and rarity of anomalies. Therefore, based on the categories of anomalies defined previously, we propose to utilize these synthetic methods to create a simulated dataset for further evaluation of the models. Although we cannot guarantee that methods performing well on the simulated dataset will generalize well in real-world scenarios, it is certain that a method performing poorly under simulated scenarios will find it harder to claim good generalization performance in the real world. Specifically, the simulated dataset is constructed by applying the proposed anomaly simulation methods to the anomaly-free images from the test set of MVTec AD (Bergmann et al. 2019). Since our method is trained with these anomalies, we cannot eliminate the possibility that our method performs better because the anomalies are generated with the same method.
Table 4: Anomaly localization results (AUROC/AP) of the ablation study. From left to right, the listed methods are DRAEM without rotation augmentation (D.NoRot), DRAEM using our simulation method (D.Our), our architecture using the simulation method of DRAEM (Our.D), and our original method (Ours).

| Class | D.NoRot | D.Our | Our.D | Ours |
|---|---|---|---|---|
| capsule | 93.9 / 47.3 | 93.0 / 37.2 | 89.5 / 51.8 | 94.6 / 41.9 |
| bottle | 98.3 / 86.5 | 98.7 / 85.9 | 98.9 / 87.3 | 98.7 / 86.2 |
| carpet | 92.9 / 28.0 | 98.8 / 72.2 | 93.6 / 50.0 | 99.3 / 82.4 |
| leather | 98.0 / 69.8 | 98.1 / 65.7 | 99.1 / 74.7 | 99.1 / 74.7 |
| pill | 94.6 / 50.0 | 96.1 / 33.1 | 97.1 / 41.4 | 96.9 / 41.8 |
| transistor | 82.6 / 39.5 | 85.4 / 33.0 | 87.9 / 46.7 | 93.1 / 56.9 |
| tile | 97.6 / 86.5 | 99.4 / 96.6 | 99.6 / 96.8 | 99.6 / 96.8 |
| cable | 92.6 / 52.1 | 92.9 / 51.4 | 94.8 / 59.0 | 97.9 / 72.4 |
| zipper | 95.2 / 47.5 | 81.6 / 14.4 | 97.1 / 72.5 | 99.0 / 66.5 |
| toothbrush | 98.4 / 58.5 | 96.0 / 26.1 | 97.7 / 52.0 | 98.3 / 42.3 |
| metal nut | 96.0 / 85.1 | 97.9 / 81.4 | 99.3 / 94.6 | 99.1 / 93.5 |
| hazelnut | 99.3 / 82.6 | 99.7 / 95.0 | 98.7 / 78.6 | 99.7 / 92.5 |
| screw | 98.7 / 41.2 | 99.4 / 66.5 | 98.8 / 65.6 | 99.4 / 68.2 |
| grid | 99.5 / 63.5 | 99.6 / 66.0 | 99.5 / 55.8 | 99.5 / 64.9 |
| wood | 84.7 / 42.8 | 95.6 / 68.2 | 96.7 / 76.5 | 96.7 / 82.3 |
| avg | 94.8 / 58.7 | 95.4 / 59.5 | 96.6 / 66.9 | 98.0 / 70.9 |

Therefore, the anomaly simulation methods proposed by previous works are additionally included in the simulated dataset for a fair comparison. Each category contains simulated anomalies and their corresponding normal samples, and we use the same models trained in the previous section to conduct the evaluations. By introducing other simulation methods, we show how our method reacts to anomalies it has not seen in training, verifying that the anomaly simulation method is comprehensive and that its categories are inclusive enough to let the network generalize to anomalies of unseen appearance. The results of the anomaly localization task on the simulated data are presented in Table 3, and our method demonstrates superior performance on average. Our method not only performs well on the simulated anomalies proposed in this paper but also achieves competitive results in the two categories generated by methods proposed in previous studies. Besides, we observe that each model exhibits higher performance within the category it used during training. These findings validate our hypothesis that models exhibit superior performance when dealing with anomalies belonging to the same category as those encountered during training. Ablation Study In this work, the indiscriminate use of rotation augmentation is removed since it contradicts the core design philosophy of our anomaly simulation framework. We report ablations for the removal of rotation augmentation and for the choice of the anomaly simulation framework. Rotation augmentation Rotation augmentation and the split training strategy are both solutions proposed to enable better generalization. We believe the indiscriminate use of rotation augmentation in the anomaly detection task is undesirable, as it can alter the characteristics of simulated data and introduce anomalies into the inputs, significantly impacting reconstruction quality in certain classes. However, as demonstrated in Table 4, removing it results in severe overfitting and a significant decrease in performance. Further inspection reveals that a lot of noise emerges in the anomaly-free areas.
The reconstructive model overfits and produces perfect reconstructions in training while producing less satisfying reconstructions at inference, resulting in a quality gap. The discriminative network fails to adapt to this gap and starts classifying normal areas with less accurate reconstructions as anomalies, because it was only exposed to perfectly reconstructed samples during training. Therefore, we develop the split training strategy as an alternative to rotation augmentation that does not interfere with the characteristics of the simulated data. Anomaly simulation methods To validate the effectiveness of the proposed anomaly simulation framework, we cross-benchmark the results of using different architectures and data simulation methods and report the results in Table 4. We observe that directly combining the DRAEM architecture with our simulation methods does not yield satisfying results. The first cause is that this combination is trained without both the rotation augmentation and the split training strategy, which leads to overfitting. Although NDAA helps the reconstructive network better model the variation range of normal samples, we think that, empirically, it does increase the risk of overfitting; the lack of methods to mitigate this makes the issue even more significant in this particular context. If we instead use our new architecture with the anomaly simulation method of DRAEM, the results are only slightly different from the original DRAEM model, which is expected. The split training strategy becomes less necessary, as the rotation augmentation in the anomaly simulation method of DRAEM effectively addresses the overfitting concerns, so an integrated training scheme is potentially more stable in this case. Conclusion This paper introduces a comprehensive anomaly simulation framework, comprising four distinct anomaly simulation methods and a selective strategy for determining the appropriate combination of simulated anomalies. A reconstructive framework trained under a split training policy is developed to incorporate the anomaly simulation framework while utilizing its strength to serve the anomaly detection task. In experiments, the proposed method achieves a new state-of-the-art on the MVTec anomaly detection dataset, with an AUROC of 98.0% and an AP of 70.9% on the anomaly localization task. Further experiments demonstrate that the leading cause of the performance improvements is the better reconstruction quality brought by a more comprehensive anomaly simulation framework. To enhance the representativeness of the results, a simulated anomaly dataset containing anomalies of various kinds is created, and the benchmarks further show that our method has more potential to excel against various unknown anomalies in the real world. Acknowledgments We thank anomalib (Akcay et al. 2022) for the code support. This work was supported by the National Natural Science Foundation of China under grant 62201142. References
Akcay, S.; Ameln, D.; Vaidya, A.; Lakshmanan, B.; Ahuja, N.; and Genc, U. 2022. Anomalib: A Deep Learning Library for Anomaly Detection. arXiv:2202.08341.
Bergmann, P.; Fauser, M.; Sattlegger, D.; and Steger, C. 2019. MVTec AD–A comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9592–9600.
Bergmann, P.; Löwe, S.; Fauser, M.; Sattlegger, D.; and Steger, C. 2018. Improving unsupervised defect segmentation by applying structural similarity to autoencoders. arXiv preprint arXiv:1807.02011.
Cimpoi, M.; Maji, S.; Kokkinos, I.; Mohamed, S.; and Vedaldi, A. 2014. Describing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3606–3613.
DeVries, T.; and Taylor, G. W. 2017. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552.
Li, C.-L.; Sohn, K.; Yoon, J.; and Pfister, T. 2021. Cutpaste: Self-supervised learning for anomaly detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9664–9674.
Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; and Dollár, P. 2017. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, 2980–2988.
Perlin, K. 1985. An image synthesizer. ACM SIGGRAPH Computer Graphics, 19(3): 287–296.
Ristea, N.-C.; Madan, N.; Ionescu, R. T.; Nasrollahi, K.; Khan, F. S.; Moeslund, T. B.; and Shah, M. 2022. Self-supervised predictive convolutional attentive block for anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13576–13586.
Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, 234–241. Springer.
Roth, K.; Pemula, L.; Zepeda, J.; Schölkopf, B.; Brox, T.; and Gehler, P. 2022. Towards total recall in industrial anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14318–14328.
Schlüter, H. M.; Tan, J.; Hou, B.; and Kainz, B. 2022. Natural synthetic anomalies for self-supervised anomaly detection and localization. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXI, 474–489. Springer.
Škvára, V.; Pevný, T.; and Šmídl, V. 2023. Is AUC the best measure for practical comparison of anomaly detectors? arXiv preprint arXiv:2305.04754.
Wang, Z.; Bovik, A. C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4): 600–612.
Wang, Z.; Simoncelli, E. P.; and Bovik, A. C. 2003. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, volume 2, 1398–1402. IEEE.
Yun, S.; Han, D.; Oh, S. J.; Chun, S.; Choe, J.; and Yoo, Y. 2019. CutMix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 6023–6032.
Zavrtanik, V.; Kristan, M.; and Skočaj, D. 2021. DRAEM–a discriminatively trained reconstruction embedding for surface anomaly detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8330–8339.
Zhong, Z.; Zheng, L.; Kang, G.; Li, S.; and Yang, Y. 2020. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 13001–13008.
Temporally and Distributionally Robust Optimization for Cold-Start Recommendation Xinyu Lin1, Wenjie Wang1*, Jujia Zhao1, Yongqi Li2, Fuli Feng3, Tat-Seng Chua1 1National University of Singapore 2The Hong Kong Polytechnic University 3MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China {xylin1028, wenjiewang96, zhao.jujia.0913, liyongqi0, fulifeng93}@gmail.com, [email protected]

*Corresponding author. This work is supported by the National Key Research and Development Program of China (2022YFB3104701), the National Natural Science Foundation of China (62272437), and Huawei International Pte Ltd. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract Collaborative Filtering (CF) recommender models highly depend on user-item interactions to learn CF representations, thus falling short of recommending cold-start items. To address this issue, prior studies mainly introduce item features (e.g., thumbnails) for cold-start item recommendation. They learn a feature extractor on warm-start items to align feature representations with interactions, and then leverage the feature extractor to extract the feature representations of cold-start items for interaction prediction. Unfortunately, the features of cold-start items, especially the popular ones, tend to diverge from those of warm-start ones due to temporal feature shifts, preventing the feature extractor from accurately learning feature representations of cold-start items. To alleviate the impact of temporal feature shifts, we consider using Distributionally Robust Optimization (DRO) to enhance the generalization ability of the feature extractor. Nonetheless, existing DRO methods face an inconsistency issue: the worst-case warm-start items emphasized during DRO training might not align well with the cold-start item distribution. To capture the temporal feature shifts and combat this inconsistency issue, we propose a novel temporal DRO with new optimization objectives, namely, 1) to integrate a worst-case factor to improve the worst-case performance, and 2) to devise a shifting factor to capture the shifting trend of item features and enhance the optimization of the potentially popular groups in cold-start items. Substantial experiments on three real-world datasets validate the superiority of our temporal DRO in enhancing the generalization ability of cold-start recommender models. Introduction Recommender systems are widely deployed to filter the overloaded multimedia information on the web for meeting users' personalized information needs (He et al. 2017). Technically speaking, Collaborative Filtering (CF) is the most representative method (Koren, Bell, and Volinsky 2009). In essence, CF methods learn the CF representations of users and items from historical interactions and utilize the learned CF representations to predict the users' future interactions. As content production capabilities continue to advance, recommender systems face the challenge of accommodating an increasing influx of new items (a.k.a. cold-start items[1]). For example, 500 hours of video are uploaded to YouTube every minute[2]. Since new items lack historical interactions and thereby have no CF representations, traditional CF methods fail to effectively recommend these cold items to users, disrupting the ecological balance of recommender systems on the item side. In light of this, it is essential to improve cold-start item recommendation.

[1] For simplicity, cold-start items and warm-start items are referred to as cold and warm items, respectively.
[2] https://www.statista.com/.
Prior literature has integrated item features, such as categories and thumbnails of micro-videos, for cold-start item recommendation (Shalaby et al. 2022; Zhao et al. 2022). These methods essentially learn a feature extractor that encodes warm items (i.e., items in the training set) into feature representations and utilizes the feature representations to fit the user-item interactions during training. For inference on cold items, given the lack of CF counterparts, only feature representations from the feature extractor are used to estimate user preference. The key to this paradigm lies in devising training strategies to align feature representations and user-item interactions, which mainly fall into two research lines. 1) Robust training-based methods (Volkovs, Yu, and Poutanen 2017; Du et al. 2020) use both feature representations and CF representations to predict interactions while CF representations are randomly corrupted to strengthen the alignment. 2) Auxiliary loss-based methods (Zhu et al. 2020) pay attention to minimizing the distance between the feature representations and the CF representations learned from interactions via an auxiliary loss, e.g., a contrastive loss (Wei et al. 2021) or a GAN loss (Chen et al. 2022). Despite their success, existing methods suffer from a critical issue: item features temporally shift from warm to cold items (Wang et al. 2023b).

Figure 1: (a) An example of item category feature shifts. (b) T-SNE visualization of visual features of item thumbnails in three time periods on a Micro-video dataset. The stars represent the average item features in each time period. (c) An example of the shifting trend of three item groups over time. (d) Illustration of the inconsistency issue of DRO.

As illustrated in Figure 1(a), the category features of newly-uploaded items are shifting over time due to various environmental factors, such as a pandemic outbreak. Empirical evidence from a real-world Micro-video dataset further substantiates this phenomenon. In Figure 1(b), we divide the micro-videos into three time periods according to the upload time and visualize the micro-video features, where a star represents the average item features in each time period. The moving stars across time periods validate that item features are gradually shifting over time. Since the feature extractor is typically trained on warm items using Empirical Risk Minimization (ERM) (Vapnik 1991), it easily overfits the majority group of warm items. Unfortunately, the majority group of cold items could deviate from that of warm items, as depicted in Figure 1(a) and (b). Such temporal feature shifts hinder the feature extractor's ability to accurately extract feature representations for cold items, thus degrading the performance of cold-start item recommendation. To tackle this issue, we consider learning a feature extractor with robust generalization ability to enhance interaction prediction on temporally shifted cold items. To strengthen the generalization ability, Distributionally Robust Optimization (DRO) is a promising approach[3].

[3] Other potential solutions are discussed in Section .
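Before turning to DRO, the scoring paradigm described above can be made concrete with a minimal sketch. The class name, layer sizes, and the inner-product scorer are our own illustrative choices, not the paper's released implementation:

```python
import torch
import torch.nn as nn

class ColdStartScorer(nn.Module):
    """Sketch of the feature-alignment paradigm: a warm item's score combines
    its CF embedding and a feature representation extracted from content; a
    cold item has no CF embedding, so only the feature representation is used."""
    def __init__(self, n_users, n_items, feat_dim, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)   # CF representations (warm only)
        self.extractor = nn.Sequential(              # feature extractor over content
            nn.Linear(feat_dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, users, items, item_feats, cold=False):
        u = self.user_emb(users)
        v = self.extractor(item_feats)               # feature representation
        if not cold:
            v = v + self.item_emb(items)             # add CF representation for warm items
        return (u * v).sum(-1)                       # inner-product preference score
```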
In general, DRO aims to enhance the worst-case performance over a pre-defined uncertainty set, i.e., potential shifted distributions (Duchi and Namkoong 2018). However, directly applying DRO in cold-start recommendation suffers from an inconsistency issue: DRO will overemphasize the minority groups[4] in warm items at the expense of other groups' performance (Oren et al. 2019). Because minority groups in warm items may not remain popular in subsequent cold items, the overemphasis on the minority group of warm items might compromise the performance of the popular groups in cold items. For example, in Figure 1(c), c1, c2, and c3 denote three item groups, where c3 is the minority group in the warm items that traditional DRO pays special attention to. However, c2 is gradually becoming popular, dominating the cold items. The inconsistency between the excessive emphasis on c3 and the shifting trend towards c2 prevents DRO from alleviating the impact of temporal feature shifts (see Figure 1(d)).

[4] Minority groups usually yield worse performance in recommendation (Wen et al. 2022). In DRO, the training distribution is assumed to be a mixture of subgroups, and the uncertainty set is defined on mixtures of these subgroups (cf. Section ).

To address this inconsistency issue and strengthen the generalization ability of the feature extractor under temporal feature shifts, we put forth two objectives for DRO training: 1) enhancing the worst-case optimization on the minority group of warm items, thereby raising the lower bound of performance; and 2) capturing the shifting trend of item features and emphasizing the optimization of the groups likely to become popular. To this end, we propose a Temporal DRO (TDRO), which considers the temporal shifting trend of item features for cold-start recommendation. In particular, we consider two factors for the training of TDRO: 1) a worst-case factor to guarantee worst-case performance, where we divide the warm items into groups by the similarity of item features and prioritize the improvement of the item groups with large training loss; and 2) a shifting factor to capture the shifting trend of item features, which utilizes a gradient-based strategy to emphasize the optimization towards the gradually popular item groups across time periods. We instantiate TDRO on two state-of-the-art (SOTA) cold-start recommender methods and conduct extensive experiments on three real-world datasets. The empirical results under multiple settings (e.g., cold-start and warm-start recommendation, and recommendation with differing degrees of temporal feature shifts) validate the superiority of TDRO in enhancing the generalization ability of cold-start models. We release our codes and datasets at https://github.com/Linxyhaha/TDRO/. The contributions of this work are summarized as follows. • We emphasize the vulnerability of ERM and underscore the necessity of adjusting the learning objective to strengthen the generalization ability of cold-start models under temporal feature shifts. • We propose a novel TDRO objective for cold-start recommendation, which extends conventional DRO to avoid overemphasizing the minority groups and to capture the temporal shifting trend of item features. • We conduct extensive experiments on three datasets, demonstrating the effectiveness of temporal DRO in attaining robust prediction under temporal feature shifts. Related Work • Cold-start Recommendation.
Traditional CF methods typically rely on CF representations learned from historical interactions (Wang et al. 2022; Li et al. 2019; Sun et al. 2022). However, the influx of cold items hinders traditional CF methods from providing appropriate recommendations due to the lack of historical interactions (Zhao et al. 2022; Rajapakse and Leith 2022; Raziperchikolaei, Liang, and Chung 2021; Pulis and Bajada 2021; Du et al. 2022a; Huan et al. 2022; Zhu et al. 2021; Sun et al. 2021; Wang et al. 2021; Chu et al. 2023). To remedy this, existing methods align feature representations with interactions (Meng et al. 2020; Guo et al. 2017), falling into two research lines. 1) Robust training-based methods utilize both feature and CF representations for prediction while the CF representations are randomly corrupted (Volkovs, Yu, and Poutanen 2017). 2) Auxiliary loss-based methods introduce different auxiliary losses to minimize the distance between the feature and CF representations (Wei et al. 2021; Chen et al. 2022). However, previous methods suffer from temporal feature shifts from warm to cold items. To solve this issue, a concurrent study (Wang et al. 2023b) explores equivariant learning over minority groups of warm items. Differently, we leverage the shifting trend and emphasize the optimization of the potentially popular item groups. • Distributionally Robust Optimization. DRO aims to achieve uniform performance against distribution shifts (He et al. 2022) by optimizing the worst-case performance over a pre-defined uncertainty set (Rahimian and Mehrotra 2019; Michel, Hashimoto, and Neubig 2022). The most representative line of work is discrepancy-based DRO, which defines the uncertainty set as a ball surrounding the training distribution under different discrepancy metrics (Duchi and Namkoong 2018; Staib and Jegelka 2019; Liu et al. 2022). Since discrepancy-based DRO suffers from an over-pessimism issue (Oren et al. 2019; Sagawa et al. 2020; Duchi, Hashimoto, and Namkoong 2023), another line of research falls into Group-DRO (Zhou et al. 2021; Goel et al. 2021). It defines the uncertainty set as a set of mixtures of subgroups in the training set, encouraging DRO to focus on meaningful distribution shifts (Oren et al. 2019; Wen et al. 2022). Some prior work (Zhou et al. 2023) explores DRO to alleviate long-tail user and item issues for warm-start recommendation, e.g., S-DRO (Wen et al. 2022) and PDRO (Zhao et al. 2023). However, directly applying DRO to cold-start recommendation may cause the inconsistency issue. In this work, we leverage a temporal DRO that focuses on mitigating temporal item feature shifts for cold-start recommendation. Preliminary Cold-start Recommendation. To address the cold-start item issue, existing methods leverage item features (e.g., categories and visual features) to predict user-item interactions. Specifically, given the users U, warm items $\mathcal{I}_w$ with features $\{s_i \mid i \in \mathcal{I}_w\}$, and user-item interactions $\mathcal{D} = \{(u, i, y_{ui}) \mid u \in \mathcal{U}, i \in \mathcal{I}_w\}$ with $y_{ui} \in \{0, 1\}$ indicating whether user u likes item i ($y_{ui} = 1$) or not ($y_{ui} = 0$), the cold-start recommender model aims to learn a feature extractor, an interaction predictor, and the CF representations of users and items for aligning feature representations with user-item interactions. The learnable parameters of the cold-start recommender model, denoted as θ, are optimized via Empirical Risk Minimization (ERM).
Formally, we have

$$\theta^*_{\mathrm{ERM}} := \arg\min_{\theta\in\Theta} \mathbb{E}_{(u,i,y_{ui})\in\mathcal{D}}\big[\mathcal{L}(\theta;(u,i,y_{ui},s_i))\big], \quad (1)$$

where L(·) is the loss function of the cold-start recommender model, tailored to each cold-start method to regulate the alignment. Nevertheless, such a learning paradigm merely minimizes the expected loss under the same distribution as the training data (Rahimian and Mehrotra 2019). The feature extractor could under-represent the minority groups (Wen et al. 2022), which, however, might be popular among cold items, leading to vulnerability to shifted cold item features. Distributionally Robust Optimization. To alleviate temporal feature shifts, DRO[5] is an effective solution that can achieve consistently high performance across various distribution shifts (Zhou et al. 2021; Duchi and Namkoong 2018; Oren et al. 2019; Sagawa et al. 2020; Hu et al. 2018). In detail, DRO assumes the training distribution to be a mixture of K pre-defined groups $\{P_i \mid i = 1, \ldots, K\}$. Then, it optimizes the worst-case performance over the K subgroups to control the performance lower bound. Formally,

$$\theta^*_{\mathrm{DRO}} := \arg\min_{\theta\in\Theta}\Big\{\max_{j\in[K]} \mathbb{E}_{(u,i,y_{ui})\sim P_j}\big[\mathcal{L}(\theta;(u,i,y_{ui},s_i))\big]\Big\}. \quad (2)$$

A practical solution to Eq. (2) is interleaved step-wise optimization (Piratla, Netrapalli, and Sarawagi 2022; Sagawa et al. 2020). Specifically, at each update step t, DRO first selects the group with the worst empirical performance:

$$j^* = \arg\max_{j\in\{1,\ldots,K\}} \mathbb{E}_{(u,i,y_{ui})\sim P_j}\big[\mathcal{L}(\theta;(u,i,y_{ui},s_i))\big] \approx \arg\min_{j\in\{1,\ldots,K\}} -\bar{\mathcal{L}}_j, \quad (3)$$

where $\bar{\mathcal{L}}_j = \frac{1}{N_j}\sum_{(u,i,y_{ui})\sim\tilde{P}_j}\mathcal{L}_j(\theta;(u,i,y_{ui},s_i))$, $\tilde{P}_j$ is the empirical distribution of group j in dataset D, and $N_j$ is the number of samples in group j. Subsequently, the model parameters θ are updated based on the selected group, i.e., $\theta_{t+1} = \theta_t - \eta\nabla_\theta\bar{\mathcal{L}}_{j^*}(\theta_t)$, where η is the learning rate. Despite the success of DRO in various domains (e.g., image classification (Zhai et al. 2021; Sagawa et al. 2020) and natural language modeling (Oren et al. 2019; Michel, Hashimoto, and Neubig 2022)), directly applying DRO in cold-start recommendation faces an inconsistency issue: DRO is likely to overemphasize the minority group in warm items at the expense of other groups' performance (Wen et al. 2022). Besides, the majority and minority item groups may change due to temporal feature shifts, thereby hurting cold item performance (cf. Section ).

[5] We adopt Group-DRO to avoid the over-pessimism issue.
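The step-wise optimization of Eqs. (2)-(3) can be summarized in a short sketch. Here `model(batch)` is assumed to return per-sample losses, and the batch containers are our own scaffolding rather than the authors' code:

```python
import torch

def group_dro_step(model, group_batches, opt):
    """One interleaved Group-DRO update (Eq. 3): pick the group with the worst
    empirical loss, then descend on that group's loss. `group_batches` maps a
    group id to a batch of (user, item, label, feature) tensors."""
    with torch.no_grad():
        losses = {j: model(b).mean().item() for j, b in group_batches.items()}
    j_star = max(losses, key=losses.get)       # worst-case group selection
    opt.zero_grad()
    model(group_batches[j_star]).mean().backward()
    opt.step()
    return j_star
```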
Temporally DRO To alleviate the impact of temporal feature shifts for cold-start recommendation, we propose two new objectives for DRO training: 1) enhancing the worst-case optimization on minority groups to raise the lower bound of performance, and 2) capturing the temporal shifting trend of item features and emphasizing the optimization of groups that are likely to become popular.

Figure 2: Illustration of the shifting factor with three groups and three time periods (i.e., i ∈ {1, 2, 3} and e ∈ {1, 2, 3}). (a) depicts the three steps of obtaining the weighted period gradient in each time period. Then, by summing up the weighted period gradients, we obtain the shifting trend shown in (b). Finally, the shifting factor for each group is obtained by calculating the similarity between the group gradient and the shifting trend, as presented in (c).

Group Selection It is noted that group selection plays a critical role in DRO (Eq. (3)) for strengthening the model's robustness (Piratla, Netrapalli, and Sarawagi 2022). As such, we propose a novel TDRO, which introduces two factors into group selection: 1) a worst-case factor to focus more on minority groups with larger losses and give them priority in group selection, and 2) a shifting factor to emphasize the potentially popular groups in cold items by leveraging the temporal shifting trend. Besides, the shifting factor alleviates the overemphasis on one particular worst-case group. Shifting Trend-guided Group Selection. In detail, we first split the warm items into K groups via K-means clustering based on their item features (e.g., visual features of thumbnails). We then split the chronologically sorted interactions into E time periods, $e \in \{1, \ldots, E\}$. We denote the average loss of group i in time period e as $\mathcal{L}^e_i(\cdot)$. At each update step t, we consider both the worst-case factor and the shifting factor to select the group $j^*$ for optimization, which is formulated as

$$j^* = \arg\min_{j\in\{1,\ldots,K\}} \underbrace{-(1-\lambda)\bar{\mathcal{L}}_j(\theta_t)}_{\text{worst-case factor}} + \underbrace{\lambda\sum_{e=1}^{E}\sum_{i=1}^{K}\beta_e\,\mathcal{L}^e_i\big(\theta_t-\eta\nabla_\theta\bar{\mathcal{L}}_j(\theta_t)\big)}_{\text{shifting factor}}, \quad (4)$$

where λ is the hyper-parameter balancing the strength of the two factors. The worst-case factor calculates the loss value of each group, $\bar{\mathcal{L}}_j(\theta_t)$, for group selection: a group with a larger loss has a smaller $-\bar{\mathcal{L}}_j(\theta_t)$ and is thus more likely to be selected. Besides, the shifting factor consists of two perspectives: • To alleviate the overemphasis on one particular worst-case group, the shifting factor selects the optimization group so as to improve the performance on all groups. Specifically, $\theta_t - \eta\nabla_\theta\bar{\mathcal{L}}_j(\theta_t)$ are the updated parameters if we choose group j for optimization. Thereafter, the loss of each group i in time period e after the parameter update is $\mathcal{L}^e_i(\theta_t - \eta\nabla_\theta\bar{\mathcal{L}}_j(\theta_t))$, and the performance improvement over all groups across all periods is measured by $\sum_{e=1}^{E}\sum_{i=1}^{K}\mathcal{L}^e_i(\theta_t - \eta\nabla_\theta\bar{\mathcal{L}}_j(\theta_t))$.
• To emphasize the potentially popular groups in cold items, the shifting factor upweights the later time periods closer to the test phase. In detail, we use $\beta_e$ to re-weight the performance improvements over all groups for each time period e. We define $\beta_e = \exp(p \cdot e)$, so that a later period e has a higher weight, where p > 0 is a hyper-parameter controlling the steepness: a smaller p encourages time periods to be uniformly important, while a larger p upweights the time periods closer to the test phase. However, directly applying Eq. (4) for group selection incurs extensive resource costs, as we would need to consider all possible cases of the updated parameters. Fortunately, we can approximate Eq. (4) by a gradient-based formulation via a first-order Taylor expansion:

$$j^* = \arg\max_{j\in\{1,\ldots,K\}} \underbrace{(1-\lambda)\bar{\mathcal{L}}_j(\theta_t)}_{\text{worst-case factor}} + \underbrace{\lambda\Big\langle g_j,\ \sum_{e=1}^{E}\sum_{i=1}^{K}\beta_e\,g^e_i\Big\rangle}_{\text{shifting factor}}, \quad (5)$$

where $g_j = \nabla_\theta\bar{\mathcal{L}}_j(\theta)$ denotes the gradient of the average loss of group j, and $g^e_i = \nabla_\theta\mathcal{L}^e_i(\theta)$ denotes the gradient of group i's average loss in time period e. ⟨·, ·⟩ denotes the inner product. Since $\sum_{e=1}^{E}\sum_{i=1}^{K}\beta_e g^e_i$ is a constant vector (referred to as the shifting trend) for any group j, we avoid the cumbersome computations in Eq. (4) and obtain efficient group selection. Interpretation of Shifting Factor. For an intuitive understanding of the gradient-based shifting factor, we visualize a toy example in Figure 2, where we set K = 3 and E = 3. • Factor decomposition. As shown in Figure 2(a), we have three decomposed group gradients, $g^e_{i\in\{1,2,3\}}$, for each time period e. We then obtain the period gradient $\sum_{i=1}^{K} g^e_i$ of time period e by summing up the decomposed group gradients. Since a gradient indicates an optimization direction, the sum of the gradients within each time period, i.e., the period gradient, represents the optimal updating direction in each temporally shifted distribution. Subsequently, by multiplying the period importance $\beta_e$ with each period gradient and summing up the weighted period gradients, we obtain the shifting trend $\sum_{e=1}^{E}\sum_{i=1}^{K}\beta_e g^e_i$, which reflects the optimization direction towards potentially popular groups (Figure 2(b)). • Factor interpretation. Finally, the shifting factor is obtained by calculating the inner product of the shifting trend and the group gradient $g_j$ (see Figure 2(c)). Since the shifting trend is a constant vector for all groups, the shifting factor essentially measures the similarity between each group gradient and the shifting trend, i.e., the optimization direction emphasizing the potentially popular item groups. As for model optimization at each step, we first select the optimal group $j^*$ via Eq. (5), and then update the parameters θ by gradient descent: $\theta_{t+1} = \theta_t - \eta\nabla_\theta\bar{\mathcal{L}}_{j^*}(\theta_t)$.
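A sketch of the gradient-based selection in Eq. (5) follows. The flat-gradient helper, the batch containers, and the dictionary layout are illustrative assumptions; only the scoring rule itself comes from the paper:

```python
import torch

def select_group(model, group_batches, period_batches, betas, lam):
    """Select j* via Eq. (5). `group_batches[j]` is a batch for group j;
    `period_batches[e][i]` is a batch for group i in period e; `model(batch)`
    is assumed to return per-sample losses."""
    def flat_grad(loss):
        params = [p for p in model.parameters() if p.requires_grad]
        grads = torch.autograd.grad(loss, params, allow_unused=True)
        return torch.cat([g.reshape(-1) for g in grads if g is not None])

    trend = None                                    # shifting trend: sum_e sum_i beta_e * g_i^e
    for e, per_group in period_batches.items():
        for batch in per_group.values():
            g = flat_grad(model(batch).mean())
            trend = betas[e] * g if trend is None else trend + betas[e] * g

    scores = {}
    for j, batch in group_batches.items():
        loss = model(batch).mean()
        scores[j] = (1 - lam) * loss.item() + lam * torch.dot(flat_grad(loss), trend).item()
    return max(scores, key=scores.get)              # j* of Eq. (5)
```

Because the trend is computed once per step, the cost of scoring all K groups reduces to K loss/gradient evaluations plus K inner products, which is the efficiency gain the Taylor approximation buys.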
Gradient Smoothing Despite the success of step-wise optimization in many applications (Sagawa et al. 2020), directly employing this strategy in recommender systems suffers from training instability (Wen et al. 2022). As such, we follow previous work (Piratla, Netrapalli, and Sarawagi 2022; Wen et al. 2022) and incorporate gradient smoothing into optimization from two aspects: group importance smoothing and loss consistency enhancement. • Group importance smoothing. We assign a weight vector w over groups and regulate the weight dynamics by $\eta_w$. Formally,

$$w^{t+1} = \arg\max_{w} \sum_i w_i\Big[(1-\lambda)\bar{\mathcal{L}}_i(\theta) + \lambda\Big\langle g_i, \sum_{e=1}^{E}\sum_{j=1}^{K}\beta_e g^e_j\Big\rangle\Big] - \frac{1}{\eta_w}\,\mathrm{KL}(w, w^t), \quad (6)$$

where $w_i$ is the i-th entry of w, $\eta_w$ is the weight step size, and $\mathrm{KL}(p, q) = \sum_i p_i \log\frac{p_i}{q_i}$ is the KL-divergence between p and q. By applying the KKT conditions, we obtain the closed-form solution of Eq. (6):

$$w^{t+1}_i = \frac{w^t_i \exp\big(\eta_w\big[(1-\lambda)\bar{\mathcal{L}}_i(\theta_t) + \lambda\langle g_i, \sum_{e=1}^{E}\sum_{j=1}^{K}\beta_e g^e_j\rangle\big]\big)}{\sum_s w^t_s \exp\big(\eta_w\big[(1-\lambda)\bar{\mathcal{L}}_s(\theta_t) + \lambda\langle g_s, \sum_{e=1}^{E}\sum_{j=1}^{K}\beta_e g^e_j\rangle\big]\big)}. \quad (7)$$

Thereafter, the model parameters θ are updated through

$$\theta_{t+1} = \theta_t - \eta\sum_i w^{t+1}_i \nabla\bar{\mathcal{L}}_i(\theta_t). \quad (8)$$

• Loss consistency enhancement. To alleviate the training instability caused by the aggravated data sparsity after group and time-period division, we follow (Wen et al. 2022) and keep streaming estimates of the empirical loss: $\bar{\mathcal{L}}^t_j \leftarrow (1-\mu)\bar{\mathcal{L}}^{t-1}_j + \mu\bar{\mathcal{L}}^t_j$, where µ is the hyper-parameter controlling the streaming step size; a smaller µ leads to more conservative training.

Algorithm 1: Training Procedure of TDRO
Input: Number of groups K, number of time periods E, initial model parameters $\theta_0$, initial group weight $w = (\frac{1}{K}, \ldots, \frac{1}{K})$, initial group losses $\bar{\mathcal{L}}^0_{i\in[K]}$, item features $\{s_i \mid i \in \mathcal{I}_w\}$, interactions D, shifting factor strength λ, period importance $\beta_{e\in[E]}$, weight step size $\eta_w$, streaming step size µ, and learning rate η.
1: while not converged do
2:   for all i ∈ {1, ..., K} do
3:     Calculate $\bar{\mathcal{L}}^t_i(\theta_t)$ via the cold-start loss function.
4:     $\bar{\mathcal{L}}^t_i(\theta_t) \leftarrow (1-\mu)\bar{\mathcal{L}}^{t-1}_i(\theta_{t-1}) + \mu\bar{\mathcal{L}}^t_i(\theta_t)$
5:   for all i ∈ {1, ..., K} do
6:     $w^{t+1}_i \leftarrow w^t_i \exp(\eta_w[(1-\lambda)\bar{\mathcal{L}}^t_i(\theta_t) + \lambda\langle\nabla\bar{\mathcal{L}}^t_i(\theta_t), \sum_{e=1}^{E}\sum_{j=1}^{K}\beta_e\nabla\bar{\mathcal{L}}^{e,t}_j(\theta_t)\rangle])$
7:   $w^{t+1}_i \leftarrow w^{t+1}_i / \|w^{t+1}\|_1,\ \forall i \in \{1, \ldots, K\}$ ▷ Normalize
8:   $\theta_{t+1} \leftarrow \theta_t - \eta\sum_{i\in[K]} w^{t+1}_i \nabla\bar{\mathcal{L}}^t_i(\theta_t)$ ▷ Update
Output: Optimized model parameters θ.

• Instantiation. To instantiate TDRO on cold-start recommender models, we first calculate the group weights w via Eq. (7), where L(θ) can be substituted by any form of loss function from the backend cold-start model. The model parameters are then optimized by weighted gradient descent via Eq. (8). Training details of TDRO are presented in Algorithm 1.
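The closed-form update of Eq. (7), the weighted descent of Eq. (8), and the streaming-loss smoothing can be combined into one step, roughly mirroring lines 2-8 of Algorithm 1. This is a sketch under our own interface assumptions (precomputed shift scores, loss tensors with intact graphs), not the released TDRO code:

```python
import torch

def tdro_update(model_losses, shift_scores, w, streaming, opt,
                lam=0.5, eta_w=0.1, mu=0.9):
    """One smoothed TDRO step. `model_losses[i]`: group i's mean loss tensor;
    `shift_scores[i]`: precomputed inner product <g_i, shifting trend>;
    `w`: current group weights on the simplex; `streaming`: running loss
    estimates carried across steps. Hyperparameter defaults are illustrative."""
    K = len(model_losses)
    score = torch.empty(K)
    for i in range(K):
        streaming[i] = (1 - mu) * streaming[i] + mu * model_losses[i].item()
        score[i] = (1 - lam) * streaming[i] + lam * shift_scores[i]
    w = w * torch.exp(eta_w * score)               # Eq. (7), numerator
    w = w / w.sum()                                # Eq. (7), normalization
    opt.zero_grad()
    sum(w[i].item() * model_losses[i] for i in range(K)).backward()
    opt.step()                                     # Eq. (8): weighted descent
    return w, streaming
```

Compared with hard group selection, the softmax-style reweighting spreads the update over all groups, which is what stabilizes training on sparse recommendation data.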
Experiments
We conduct extensive experiments on three real-world datasets to answer the following research questions:
• RQ1: How does our proposed TDRO perform compared to the baselines under temporal feature shifts?
• RQ2: How do the different components of TDRO (i.e., the two factors for group selection) affect the performance?
• RQ3: How does TDRO perform under different strengths of temporal feature shifts, and how does TDRO mitigate the impact of the shifts?

Experimental Settings
Datasets. We conducted experiments on three real-world datasets across different domains: 1) Amazon (He and McAuley 2016) is a representative clothing dataset with rich visual features from clothing images. 2) Micro-video is a real-world industry dataset collected from a popular micro-video platform, with rich visual and textual features from thumbnails and textual descriptions. 3) Kwai (https://www.kwai.com/) is a benchmark recommendation dataset provided with rich visual features. For the Amazon and Micro-video datasets, we split the interactions into training, validation, and testing sets chronologically at a ratio of 8:1:1 according to the timestamps. For the Kwai dataset, due to the lack of global timestamps, we instead follow previous work (Wei et al. 2021) and split the interactions randomly. In addition, we divide the items in the validation and testing sets into warm and cold sets, where items that do not appear in the training set are regarded as cold items and the rest as warm items.
Evaluation. We adopt the full-ranking protocol (Wei et al. 2021) for evaluation. We consider three different settings: full ranking over 1) all items, 2) warm items only, and 3) cold items only, denoted respectively as the "all", "warm", and "cold" settings. The widely used Recall@20 and NDCG@20 are employed as evaluation metrics.
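For reference, the per-user computation of these two metrics can be sketched as follows; this is a generic implementation of Recall@K and NDCG@K under full ranking, not code released with the paper.

```python
import numpy as np

def recall_ndcg_at_k(ranked_items, relevant_items, k=20):
    """Recall@K and NDCG@K for a single user.

    ranked_items:   item ids sorted by predicted score (descending).
    relevant_items: set of ground-truth items for this user.
    """
    hits = [1.0 if item in relevant_items else 0.0 for item in ranked_items[:k]]
    recall = sum(hits) / max(len(relevant_items), 1)

    dcg = sum(h / np.log2(pos + 2) for pos, h in enumerate(hits))
    ideal = min(len(relevant_items), k)
    idcg = sum(1.0 / np.log2(pos + 2) for pos in range(ideal))
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return recall, ndcg
```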
Baselines. We compare TDRO with competitive cold-start recommender models, including 1) robust training-based methods: DUIF (Geng et al. 2015), DropoutNet (Volkovs, Yu, and Poutanen 2017), M2TRec (Shalaby et al. 2022), and MTPR (Du et al. 2020); and 2) auxiliary loss-based methods: Heater (Zhu et al. 2020), CB2CF (Barkan et al. 2019), CCFCRec (Zhou, Zhang, and Yang 2023), CLCRec (Wei et al. 2021), and GAR (Chen et al. 2022). Additionally, we consider 3) potential methods to overcome temporal feature shifts: S-DRO (Wen et al. 2022) and invariant learning frameworks (Du et al. 2022b; Pan et al. 2023).

Table 1: Overall performance comparison between the baselines and the two SOTA models enhanced by TDRO on three datasets (each cell reads All / Warm / Cold). The bold results in the original highlight the better performance between the backbone models with and without TDRO. * implies that the improvements over the backbone models are statistically significant (p-value < 0.01) under one-sample t-tests.

Recall@20:
Model | Amazon (All / Warm / Cold) | Micro-video (All / Warm / Cold) | Kwai (All / Warm / Cold)
DUIF | 0.0042 / 0.0048 / 0.0129 | 0.0318 / 0.0537 / 0.0771 | 0.0208 / 0.0248 / 0.0158
DropoutNet | 0.0050 / 0.0110 / 0.0050 | 0.0187 / 0.0494 / 0.0222 | 0.0099 / 0.0118 / 0.0066
M2TRec | 0.0065 / 0.0058 / 0.0068 | 0.0131 / 0.0056 / 0.0298 | 0.0317 / 0.0320 / 0.0009
MTPR | 0.0057 / 0.0116 / 0.0082 | 0.0303 / 0.0723 / 0.0542 | 0.0464 / 0.0550 / 0.0049
Heater | 0.0065 / 0.0136 / 0.0040 | 0.0469 / 0.1153 / 0.0868 | 0.0452 / 0.0536 / 0.0087
CB2CF | 0.0078 / 0.0170 / 0.0074 | 0.0496 / 0.0961 / 0.0928 | 0.0624 / 0.0737 / 0.0064
CCFCRec | 0.0071 / 0.0175 / 0.0117 | 0.0435 / 0.0750 / 0.0699 | 0.0098 / 0.0141 / 0.0129
InvRL | 0.0120 / 0.0183 / 0.0150 | 0.0578 / 0.0899 / 0.0754 | 0.0588 / 0.0701 / 0.0191
CLCRec | 0.0106 / 0.0200 / 0.0135 | 0.0583 / 0.1135 / 0.0623 | 0.0743 / 0.0884 / 0.0160
CLCRec + S-DRO | 0.0121 / 0.0237 / 0.0144 | 0.0656 / 0.1173 / 0.0719 | 0.0661 / 0.0787 / 0.0172
CLCRec + TDRO | 0.0130* / 0.0237* / 0.0166* | 0.0703* / 0.1180* / 0.0761* | 0.0841* / 0.1016* / 0.0186*
GAR | 0.0079 / 0.0200 / 0.0124 | 0.0644 / 0.0962 / 0.0840 | 0.0588 / 0.0706 / 0.0051
GAR + S-DRO | 0.0078 / 0.0189 / 0.0132 | 0.0626 / 0.0894 / 0.0874 | 0.0579 / 0.0690 / 0.0050
GAR + TDRO | 0.0087* / 0.0236* / 0.0150* | 0.0711* / 0.1104* / 0.0947* | 0.0598* / 0.0719* / 0.0052

NDCG@20:
Model | Amazon (All / Warm / Cold) | Micro-video (All / Warm / Cold) | Kwai (All / Warm / Cold)
DUIF | 0.0020 / 0.0023 / 0.0058 | 0.0204 / 0.0295 / 0.0511 | 0.0158 / 0.0181 / 0.0070
DropoutNet | 0.0021 / 0.0043 / 0.0021 | 0.0117 / 0.0286 / 0.0121 | 0.0054 / 0.0061 / 0.0030
M2TRec | 0.0032 / 0.0029 / 0.0030 | 0.0075 / 0.0036 / 0.0211 | 0.0247 / 0.0248 / 0.0004
MTPR | 0.0029 / 0.0056 / 0.0030 | 0.0175 / 0.0389 / 0.0362 | 0.0324 / 0.0369 / 0.0021
Heater | 0.0037 / 0.0075 / 0.0015 | 0.0290 / 0.0653 / 0.0484 | 0.0276 / 0.0312 / 0.0030
CB2CF | 0.0037 / 0.0076 / 0.0031 | 0.0254 / 0.0490 / 0.0636 | 0.0446 / 0.0504 / 0.0026
CCFCRec | 0.0032 / 0.0074 / 0.0050 | 0.0321 / 0.0410 / 0.0464 | 0.0068 / 0.0092 / 0.0058
InvRL | 0.0056 / 0.0079 / 0.0072 | 0.0355 / 0.0493 / 0.0503 | 0.0390 / 0.0444 / 0.0088
CLCRec | 0.0054 / 0.0093 / 0.0061 | 0.0417 / 0.0728 / 0.0444 | 0.0536 / 0.0610 / 0.0071
CLCRec + S-DRO | 0.0060 / 0.0107 / 0.0071 | 0.0451 / 0.0747 / 0.0480 | 0.0472 / 0.0536 / 0.0076
CLCRec + TDRO | 0.0066* / 0.0112* / 0.0077* | 0.0507* / 0.0794* / 0.0511* | 0.0597* / 0.0719* / 0.0081*
GAR | 0.0041 / 0.0088 / 0.0060 | 0.0375 / 0.0496 / 0.0625 | 0.0421 / 0.0485 / 0.0021
GAR + S-DRO | 0.0033 / 0.0089 / 0.0052 | 0.0385 / 0.0474 / 0.0532 | 0.0423 / 0.0481 / 0.0021
GAR + TDRO | 0.0041 / 0.0110* / 0.0066* | 0.0419* / 0.0571* / 0.0638* | 0.0431* / 0.0495* / 0.0024*

Overall Performance (RQ1)
The overall performance of the baselines and the two SOTA cold-start methods equipped with S-DRO and TDRO is reported in Table 1, from which we observe the following:
• Auxiliary loss-based methods typically outperform the robust training-based ones. The reason is that robust training-based methods directly utilize feature representations to fit interactions, which inevitably introduces noise, whereas auxiliary loss-based methods decouple the CF and feature representation spaces, protecting the CF representations from feature noise.
• CLCRec consistently yields impressive performance across the three datasets. This is attributed to its contrastive loss, which maximizes the mutual information between feature and CF representations. Besides, by introducing adversarial constraints that align the distributions of CF and feature representations, GAR exhibits competitive performance despite its instability.
• In most cases, S-DRO improves the performance on cold items compared to the backbone model. The stable improvements are attributed to the tail performance guarantee over potential shifted distributions, which may partially cover the shifted cold item distribution. In addition, our proposed TDRO consistently outperforms S-DRO and the backbone model on the all and cold performance by a large margin, which justifies the effectiveness of TDRO. Moreover, capturing the shifting patterns is also helpful for achieving steady improvements on warm items, reflecting the superiority of TDRO in alleviating the temporal feature shift issue.

In-depth Analysis
Ablation Study (RQ2). To study the effectiveness of the worst-case and shifting factors, we implement TDRO without (w/o) each factor separately. From Table 2, we find that: 1) the performance declines if either the worst-case factor or the shifting factor is removed, which verifies the effectiveness of both the optimization over the worst-case group and the improvement of all groups along the shifting trend; and 2) removing either factor still outperforms CLCRec (the "all" setting), indicating that either the performance lower-bound guarantee or the use of shifting trends improves generalization ability.

Table 2: Ablation study of the worst-case factor and the shifting factor w.r.t. Recall@20 (each cell reads All / Warm / Cold). The best results are highlighted in bold in the original.
Method | Amazon (All / Warm / Cold) | Micro-video (All / Warm / Cold) | Kwai (All / Warm / Cold)
CLCRec | 0.0106 / 0.0200 / 0.0135 | 0.0583 / 0.1135 / 0.0623 | 0.0743 / 0.0884 / 0.0160
w/o Worst-case Factor | 0.0121 / 0.0219 / 0.0157 | 0.0648 / 0.1138 / 0.0687 | 0.0790 / 0.0997 / 0.0145
w/o Shifting Factor | 0.0126 / 0.0228 / 0.0160 | 0.0643 / 0.1145 / 0.0622 | 0.0797 / 0.0986 / 0.0165
TDRO | 0.0130 / 0.0237 / 0.0166 | 0.0703 / 0.1180 / 0.0761 | 0.0814 / 0.1016 / 0.0186

User Group Evaluation (RQ3). We further inspect how TDRO performs under different strengths of temporal feature shifts by evaluating it on different user groups. Specifically, we calculate the Euclidean distance between the average item features of the training set and those of the testing set for each user. Next, we rank the users according to this distance and split them into three groups (denoted as Group 1, Group 2, and Group 3) based on the ranking.
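A sketch of this grouping procedure follows; note that the paper does not state the group split proportions, so the equal-size three-way split here is our assumption.

```python
import numpy as np

def split_users_by_shift(train_feats, test_feats, n_groups=3):
    """Group users by the strength of their temporal feature shift.

    train_feats / test_feats: dict mapping user id -> average feature
    vector of the user's interacted items in the training / testing set.
    Returns n_groups lists of user ids, from weakest to strongest shift.
    """
    users = sorted(set(train_feats) & set(test_feats))
    dist = {u: np.linalg.norm(train_feats[u] - test_feats[u]) for u in users}
    ranked = sorted(users, key=lambda u: dist[u])
    return [list(g) for g in np.array_split(ranked, n_groups)]
```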
The results w.r.t. Recall@20 are given in Table 3. Although the performance of both CLCRec and TDRO declines gradually as the shifts become more significant, TDRO consistently outperforms CLCRec in each group, validating the effectiveness of TDRO in enhancing the generalization ability under temporal feature shifts.

Table 3: Recall@20 over user groups with different strengths of temporal feature shifts under the "all" setting.
Method | Amazon G1 | Amazon G2 | Amazon G3 | Micro-video G1 | Micro-video G2 | Micro-video G3
Distance | 48 | 62 | 123 | 13 | 19 | 39
CLCRec | 0.0218 | 0.0075 | 0.0024 | 0.1131 | 0.0503 | 0.0116
TDRO | 0.0254 | 0.0110 | 0.0027 | 0.1321 | 0.0598 | 0.0139

Item Group Analysis (RQ3). We analyze the generalization ability enhancement of TDRO on Amazon w.r.t. item groups. In detail, we calculate the item popularity (i.e., interaction proportion) in the testing set and divide the items into four subgroups based on the popularity scores. We then conduct an evaluation on each item subgroup to see whether TDRO 1) guarantees the worst-case group performance and 2) enhances the performance of the group containing the top 25% popular items. As shown in Table 4, the boosted performance on the worst-case group and on popular items partially explains the superior performance of TDRO.

Table 4: Recall@20 of the item group with the worst performance and the item group of the top 25% popular items.
Method | All: Worst-case | All: Popular | Cold: Worst-case | Cold: Popular
CLCRec | 0.0166 | 0.0168 | 0.0088 | 0.0088
TDRO | 0.0173 | 0.0195 | 0.0123 | 0.0125

Effect of shifting trend strength. We inspect the effect of the shifting factor by varying λ from 0.1 to 0.9. As shown in Figure 3, stronger incorporation of the shifting trend tends to yield better performance on cold items, indicating the importance of shifting patterns for robustness enhancement. However, the all and warm performance declines if the shifting factor is weighted too heavily, probably because the minority groups of warm items are then overlooked.

Figure 3: Effect of the strength of the shifting factor λ on Amazon: (a) w.r.t. Recall@20 and (b) w.r.t. NDCG@20, under the all, cold, and warm settings.

Conclusion and Future Work
In this work, we revealed the critical issue of temporal item feature shifts in cold-start recommendation. To overcome this issue, we proposed a novel temporal DRO learning framework called TDRO, which 1) considers the worst-case performance for a performance lower-bound guarantee, and 2) leverages the shifting trend of item features to enhance the performance of popular groups among subsequent cold items. Empirical results on three real-world datasets validated the effectiveness of TDRO in achieving robust prediction under temporal item feature shifts. This work highlights temporal feature shifts in cold-start recommendation, leaving many promising directions to be explored in the future. One is to consider adaptive environment importance for more fine-grained modeling of the shifting trend. Moreover, it is worthwhile to explore more effective group division strategies beyond pre-defined ones. It is also promising to leverage LLMs for cold-start recommendation (Wang et al. 2023a; Bao et al. 2023b,a).

References
Bao, K.; Zhang, J.; Wang, W.; Zhang, Y.; Yang, Z.; Luo, Y.; Feng, F.; He, X.; and Tian, Q. 2023a. A bi-step grounding paradigm for large language models in recommendation systems.
Bao, K.; Zhang, J.; Zhang, Y.; Wenjie, W.; Feng, F.; and He, X. 2023b. Large Language Models for Recommendation: Progresses and Future Directions. In SIGIR-AP, 306–309.
Barkan, O.; Koenigstein, N.; Yogev, E.; and Katz, O. 2019. CB2CF: a neural multiview content-to-collaborative filtering model for completely cold item recommendations. In RecSys, 228–236. ACM.
Chen, H.; Wang, Z.; Huang, F.; Huang, X.; Xu, Y.; Lin, Y.; He, P.; and Li, Z. 2022. Generative adversarial framework for cold-start item recommendation. In SIGIR, 2565–2571. ACM.
Chu, Z.; Wang, H.; Xiao, Y.; Long, B.; and Wu, L. 2023. Meta policy learning for cold-start conversational recommendation. In WSDM, 222–230. ACM.
Du, J.; Ye, Z.; Yao, L.; Guo, B.; and Yu, Z. 2022a. Socially-aware dual contrastive learning for cold-start recommendation. In SIGIR, 1927–1932. ACM.
Du, X.; Wang, X.; He, X.; Li, Z.; Tang, J.; and Chua, T.-S. 2020. How to learn item representation for cold-start multimedia recommendation? In MM, 3469–3477. ACM.
Du, X.; Wu, Z.; Feng, F.; He, X.; and Tang, J. 2022b. Invariant representation learning for multimedia recommendation. In MM, 619–628. ACM.
Duchi, J.; Hashimoto, T.; and Namkoong, H. 2023. Distributionally robust losses for latent covariate mixtures. Operations Research, 71(2): 649–664.
Duchi, J.; and Namkoong, H. 2018. Learning models with uniform performance via distributionally robust optimization. arXiv:1810.08750.
Geng, X.; Zhang, H.; Bian, J.; and Chua, T.-S. 2015. Learning image and user features for recommendation in social networks. In ICCV, 4274–4282. IEEE.
Goel, K.; Gu, A.; Li, Y.; and Ré, C. 2021. Model patching: closing the subgroup performance gap with data augmentation. In ICLR.
Guo, C.; Lu, H.; Shi, S.; Hao, B.; Liu, B.; Zhang, M.; Liu, Y.; and Ma, S. 2017. How integration helps on cold-start recommendations. In RecSys Challenge, 1–6. ACM.
He, R.; and McAuley, J. 2016. Ups and downs: modeling the visual evolution of fashion trends with one-class collaborative filtering. In WWW, 507–517. ACM.
He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; and Chua, T.-S. 2017. Neural collaborative filtering. In WWW, 173–182. ACM.
He, Y.; Wang, Z.; Cui, P.; Zou, H.; Zhang, Y.; Cui, Q.; and Jiang, Y. 2022. CausPref: Causal Preference Learning for Out-of-Distribution Recommendation. In WWW, 410–421. ACM.
Hu, W.; Niu, G.; Sato, I.; and Sugiyama, M. 2018. Does distributionally robust supervised learning give robust classifiers? In ICML, 2029–2037. PMLR.
Huan, Z.; Zhang, G.; Zhang, X.; Zhou, J.; Wu, Q.; Gu, L.; Gu, J.; He, Y.; Zhu, Y.; and Mo, L. 2022. An industrial framework for cold-start recommendation in zero-shot scenarios. In SIGIR, 3403–3407. ACM.
Koren, Y.; Bell, R.; and Volinsky, C. 2009. Matrix factorization techniques for recommender systems. Computer, 42(8): 30–37.
Li, Y.; Liu, M.; Yin, J.; Cui, C.; Xu, X.-S.; and Nie, L. 2019. Routing micro-videos via a temporal graph-guided recommendation system. In MM, 1464–1472.
Liu, J.; Wu, J.; Li, B.; and Cui, P. 2022. Distributionally robust optimization with data geometry. In NeurIPS, 33689–33701. Curran Associates, Inc.
Meng, Y.; Yan, X.; Liu, W.; Wu, H.; and Cheng, J. 2020. Wasserstein collaborative filtering for item cold-start recommendation. In UMAP, 318–322. ACM.
Michel, P.; Hashimoto, T.; and Neubig, G. 2022. Distributionally robust models with parametric likelihood ratios. In ICLR.
Oren, Y.; Sagawa, S.; Hashimoto, T. B.; and Liang, P. 2019. Distributionally robust language modeling. arXiv:1909.02060.
Pan, H.; Chen, J.; Feng, F.; Shi, W.; Wu, J.; and He, X. 2023. Discriminative-invariant representation learning for unbiased recommendation. In IJCAI, 2270–2278.
Piratla, V.; Netrapalli, P.; and Sarawagi, S. 2022. Focus on the common good: group distributional robustness follows. In ICLR.
Pulis, M.; and Bajada, J. 2021. Siamese neural networks for content-based cold-start music recommendation. In RecSys, 719–723. ACM.
Rahimian, H.; and Mehrotra, S. 2019. Distributionally robust optimization: A review. arXiv:1908.05659.
Rajapakse, D. C.; and Leith, D. 2022. Fast and accurate user cold-start learning using monte carlo tree search. In RecSys, 350–359. ACM.
Raziperchikolaei, R.; Liang, G.; and Chung, Y.-j. 2021. Shared neural item representations for completely cold start problem. In RecSys, 422–431. ACM.
Sagawa, S.; Koh, P. W.; Hashimoto, T. B.; and Liang, P. 2020. Distributionally robust neural networks for group shifts: on the importance of regularization for worst-case generalization. In ICLR.
Shalaby, W.; Oh, S.; Afsharinejad, A.; Kumar, S.; and Cui, X. 2022. M2TRec: metadata-aware multi-task transformer for large-scale and cold-start free session-based recommendations. In RecSys, 573–578. ACM.
Staib, M.; and Jegelka, S. 2019. Distributionally robust optimization and generalization in kernel methods. In NeurIPS, 9131–9141. Curran Associates, Inc.
Sun, T.; Wang, W.; Jing, L.; Cui, Y.; Song, X.; and Nie, L. 2022. Counterfactual reasoning for out-of-distribution multimodal sentiment analysis. In MM, 15–23.
Sun, X.; Shi, T.; Gao, X.; Kang, Y.; and Chen, G. 2021. FORM: follow the online regularized meta-leader for cold-start recommendation. In SIGIR, 1177–1186. ACM.
Vapnik, V. 1991. Principles of risk minimization for learning theory. In NeurIPS, 831–838. Curran Associates, Inc.
Volkovs, M.; Yu, G.; and Poutanen, T. 2017. DropoutNet: addressing cold start in recommender systems. In NeurIPS, 4957–4966. Curran Associates, Inc.
Wang, S.; Zhang, K.; Wu, L.; Ma, H.; Hong, R.; and Wang, M. 2021. Privileged graph distillation for cold start recommendation. In SIGIR, 1187–1196. ACM.
Wang, W.; Lin, X.; Feng, F.; He, X.; and Chua, T.-S. 2023a. Generative recommendation: Towards next-generation recommender paradigm.
Wang, W.; Lin, X.; Feng, F.; He, X.; Lin, M.; and Chua, T.-S. 2022. Causal Representation Learning for Out-of-Distribution Recommendation. In WWW, 3562–3571. ACM.
Wang, W.; Lin, X.; Wang, L.; Feng, F.; Wei, Y.; and Chua, T.-S. 2023b. Equivariant Learning for Out-of-Distribution Cold-start Recommendation. In MM, 903–914.
Wei, Y.; Wang, X.; Li, Q.; Nie, L.; Li, Y.; Li, X.; and Chua, T.-S. 2021. Contrastive learning for cold-start recommendation. In MM, 5382–5390. ACM.
Wen, H.; Yi, X.; Yao, T.; Tang, J.; Hong, L.; and Chi, E. H. 2022. Distributionally-robust recommendations for improving worst-case user experience. In WWW, 3606–3610. ACM.
Zhai, R.; Dan, C.; Kolter, Z.; and Ravikumar, P. 2021. DORO: distributional and outlier robust optimization. In ICML, 12345–12355. PMLR.
Zhao, J.; Wang, W.; Lin, X.; Qu, L.; Zhang, J.; and Chua, T.-S. 2023. Popularity-aware Distributionally Robust Optimization for Recommendation System. In CIKM, 4967–4973.
Zhao, X.; Ren, Y.; Du, Y.; Zhang, S.; and Wang, N. 2022. Improving Item Cold-start Recommendation via Model-agnostic Conditional Variational Autoencoder. In SIGIR, 2595–2600. ACM.
Zhou, C.; Ma, X.; Michel, P.; and Neubig, G. 2021. Examining and combating spurious features under distribution shift. In ICML, 12857–12867. PMLR.
Zhou, R.; Wu, X.; Qiu, Z.; Zheng, Y.; and Chen, X. 2023. Distributionally Robust Sequential Recommendation. In SIGIR, 279–288.
Zhou, Z.; Zhang, L.; and Yang, N. 2023. Contrastive collaborative filtering for cold-start item recommendation. In WWW, 928–937. ACM.
Zhu, Z.; Kim, J.; Nguyen, T.; Fenton, A.; and Caverlee, J. 2021. Fairness among new items in cold start recommender systems. In SIGIR, 767–776. ACM.
Zhu, Z.; Sefati, S.; Saadatpanah, P.; and Caverlee, J. 2020. Recommendation for new users and new items via randomized training and mixture-of-experts transformation. In SIGIR, 1121–1130. ACM.
2024
973
18,821
Towards Continual Knowledge Graph Embedding via Incremental Distillation
Jiajun Liu1*, Wenjun Ke1,2*†, Peng Wang1,2†, Ziyu Shang1, Jinhua Gao3, Guozheng Li1, Ke Ji1, Yanhe Liu1
1School of Computer Science and Engineering, Southeast University
2Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China
3Institute of Computing Technology, Chinese Academy of Sciences
{jiajliu, kewenjun, pwang, ziyus1999, liguozheng, keji, liuyanhe}@seu.edu.cn, [email protected]

Abstract
Traditional knowledge graph embedding (KGE) methods typically require preserving the entire knowledge graph (KG), incurring significant training costs when new knowledge emerges. To address this issue, the continual knowledge graph embedding (CKGE) task has been proposed to train the KGE model by learning emerging knowledge efficiently while effectively preserving old knowledge. However, the explicit graph structure of KGs, which is critical for this goal, has been heavily ignored by existing CKGE methods. On the one hand, existing methods usually learn new triples in a random order, destroying the inner structure of new KGs. On the other hand, old triples are preserved with equal priority, failing to alleviate catastrophic forgetting effectively. In this paper, we propose a competitive method for CKGE based on incremental distillation (IncDE), which makes full use of the explicit graph structure in KGs. First, to optimize the learning order, we introduce a hierarchical strategy, ranking new triples for layer-by-layer learning. By employing the inter- and intra-hierarchical orders together, new triples are grouped into layers based on their graph structure features. Second, to preserve old knowledge effectively, we devise a novel incremental distillation mechanism, which facilitates the seamless transfer of entity representations from the previous layer to the next one, promoting old knowledge preservation. Finally, we adopt a two-stage training paradigm to avoid over-corruption of old knowledge by under-trained new knowledge. Experimental results demonstrate the superiority of IncDE over state-of-the-art baselines. Notably, the incremental distillation mechanism contributes improvements of 0.2%-6.5% in the mean reciprocal rank (MRR) score. Further exploratory experiments validate the effectiveness of IncDE in proficiently learning new knowledge while preserving old knowledge across all time steps.

*These authors contributed equally. †Corresponding authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction
Knowledge graph embedding (KGE) (Bordes et al. 2013; Wang et al. 2017; Rossi et al. 2021) aims to embed entities and relations from knowledge graphs (KGs) (Dong et al. 2014) into continuous vectors in a low-dimensional space, which is crucial for various knowledge-driven tasks, such as question answering (Bordes, Weston, and Usunier 2014), semantic search (Noy et al. 2019), and relation extraction (Li et al. 2022).

Figure 1: Illustration of a growing KG. Two specific learning orders should be considered: entities closer to the old KG should be prioritized (a is prioritized over b); entities with heavier influence on the new triples (e.g., connecting with more relations) should be prioritized (a is prioritized over c).
Traditional KGE models (Bordes et al. 2013; Trouillon et al. 2016; Sun et al. 2019; Liu et al. 2020) focus only on obtaining embeddings of entities and relations in static KGs. However, real-world KGs constantly evolve, with new knowledge, such as new triples, entities, and relations, continually emerging. For example, during the evolution of DBpedia (Bizer et al. 2009) from 2016 to 2018, about 1 million new entities, 2,000 new relations, and 20 million new triples emerged (DBpedia 2021). Traditionally, when a KG evolves, KGE models need to be retrained on the entire KG, which is a non-trivial process with huge training costs. In domains such as biomedicine and finance, it is essential to update KGE models to support medical assistance and informed market decision-making as KGs evolve rapidly, especially with substantial new knowledge. To this end, the continual KGE (CKGE) task has been proposed to alleviate this problem by using only the emerging knowledge for learning (Song and Park 2018; Daruna et al. 2021). In comparison with traditional KGE, the key to CKGE lies in learning emerging knowledge well while preserving old knowledge effectively. As shown in Figure 1, new entities and relations (i.e., the new entities a, b, and c) should be learned to adapt to the new KG. Meanwhile, knowledge in the old KG (such as the old entity d) should be preserved. Generally, existing CKGE methods can be categorized into three families: dynamic architecture-based, replay-based, and regularization-based methods. Dynamic architecture-based methods (Rusu et al. 2016; Lomonaco and Maltoni 2017) preserve all old parameters and learn the emerging knowledge through new architectures. However, retaining all old parameters hinders the adaptation of old knowledge to the new knowledge. Replay-based methods (Lopez-Paz and Ranzato 2017; Wang et al. 2019; Kou et al. 2020) replay KG subgraphs to remember old knowledge, but recalling only a portion of the subgraphs destroys the overall old graph structure. Regularization-based methods (Zenke, Poole, and Ganguli 2017; Kirkpatrick et al. 2017; Cui et al. 2023) aim to preserve old knowledge by adding regularization terms. However, only adding regularization terms to the old parameters makes it infeasible to capture new knowledge well. Despite achieving promising effectiveness, current CKGE methods still perform poorly because the explicit graph structure of KGs is heavily ignored, even though previous research has emphasized the crucial role of the graph structure in addressing graph-related continual learning tasks (Zhou and Cao 2021; Liang et al. 2022; Febrinanto et al. 2023). Specifically, existing CKGE methods suffer from two main drawbacks. (1) First, regarding emerging knowledge, current CKGE methods use a random-order learning strategy, neglecting the differing significance of triples in a KG. Previous studies have demonstrated that the learning order of entities and relations can significantly affect continual learning on graphs (Wei et al. 2022). Since knowledge in KGs is organized in a graph structure, a randomized learning order can undermine the inherent semantics conveyed by KGs. Hence, it is essential to consider the priority of new entities and relations for effective learning and propagation.
Figure 1 illustrates an example where entity a should be learned before entity b, since the representation of b is propagated through a from the old KG. (2) Second, regarding old knowledge, current CKGE methods treat memorization at an equal level across entities, leading to inefficient handling of catastrophic forgetting (Kirkpatrick et al. 2017). Existing studies have demonstrated that preserving knowledge by regularization or distillation from important nodes in the topology is critical for continual graph learning (Liu, Yang, and Wang 2021). Therefore, old entities with more essential graph structure features should receive higher preservation priority. In Figure 1, entity a, which connects to more entities, should be prioritized for preservation at time i + 1 compared to entity c.
In this paper, we propose IncDE, a novel method for the CKGE task that leverages incremental distillation. IncDE aims to enhance the capability of learning emerging knowledge while efficiently preserving old knowledge. First, we employ hierarchical ordering to determine the optimal learning sequence of new triples. This involves dividing the triples into layers and ranking them through inter-hierarchical and intra-hierarchical orders; the ordered emerging knowledge is then learned layer by layer. Second, we introduce a novel incremental distillation mechanism to preserve old knowledge effectively while taking the graph structure into account. This mechanism incorporates the explicit graph structure and employs a layer-by-layer paradigm to distill the entity representations. Finally, we use a two-stage training strategy to improve the preservation of old knowledge: in the first stage, we fix the representations of old entities and relations; in the second stage, we train the representations of all entities and relations, protecting the old KG from disruption by under-trained emerging knowledge.
To evaluate the effectiveness of IncDE, we construct three new datasets with varying scales of new KGs. Extensive experiments are conducted on both existing and new datasets. The results demonstrate that IncDE outperforms all strong baselines. Furthermore, ablation experiments reveal that incremental distillation provides a significant performance enhancement, and further exploratory experiments verify the ability of IncDE to effectively learn emerging knowledge while efficiently preserving old knowledge. To sum up, the contributions of this paper are three-fold:
• We propose a novel continual knowledge graph embedding framework, IncDE, which learns and preserves knowledge effectively using the explicit graph structure.
• We propose hierarchical ordering to obtain an adequate learning order for better learning of emerging knowledge. Moreover, we propose incremental distillation and a two-stage training strategy to preserve old knowledge well.
• We construct three new datasets based on different scales of knowledge growth. Experiments demonstrate that IncDE outperforms strong baselines. Notably, incremental distillation improves MRR by 0.2%-6.5%.

Related Work
Different from traditional KGE (Bordes et al. 2013; Trouillon et al. 2016; Kazemi and Poole 2018; Pan and Wang 2021; Shang et al. 2023), CKGE (Song and Park 2018; Daruna et al. 2021) allows KGE models to learn emerging knowledge while remembering old knowledge. Existing CKGE methods can be divided into three categories. (1) Dynamic architecture-based methods (Rusu et al. 2016; Lomonaco and Maltoni 2017) dynamically adapt new neural resources to change architectural properties in response to new information while preserving old parameters.
(2) Memory replay-based methods (Lopez-Paz and Ranzato 2017; Wang et al. 2019; Kou et al. 2020) retain the learned knowledge by replaying it. (3) Regularization-based methods (Zenke, Poole, and Ganguli 2017; Kirkpatrick et al. 2017; Cui et al. 2023) alleviate catastrophic forgetting by imposing constraints on the updates of neural weights. However, these methods overlook the importance of learning new knowledge in an appropriate order for graph data. Moreover, they ignore how to preserve appropriate old knowledge for better integration of new and old knowledge. Several datasets for CKGE (Hamaguchi et al. 2017; Kou et al. 2020; Daruna et al. 2021; Cui et al. 2023) have been constructed. However, most of them restrict new triples to contain at least one old entity, neglecting triples without old entities. In the evolution of real-world KGs like Wikipedia (Bizer et al. 2009) and Yago (Suchanek, Kasneci, and Weikum 2007), numerous new triples emerge without any old entities.

Figure 2: An overview of our proposed IncDE framework. (a) Sort each triple in order of importance. (b) Further stratify the intra-layer triples.

Preliminary and Problem Statement
Growing Knowledge Graph. A knowledge graph (KG) G = (E, R, T) contains a collection of entities E, relations R, and triples T. A triple is denoted as (h, r, t) ∈ T, where h, r, and t represent the head entity, the relation, and the tail entity, respectively. When a KG grows with emerging knowledge at time i, it is denoted as G_i = (E_i, R_i, T_i), where E_i, R_i, and T_i are the collections of entities, relations, and triples in G_i. Moreover, we denote ΔT_i = T_i − T_{i−1}, ΔE_i = E_i − E_{i−1}, and ΔR_i = R_i − R_{i−1} as the new triples, entities, and relations, respectively.
Continual Knowledge Graph Embedding. Given a KG G, knowledge graph embedding (KGE) aims to embed entities and relations into a low-dimensional vector space. Given a head entity h ∈ E, a relation r ∈ R, and a tail entity t ∈ E, their embeddings are denoted as h ∈ R^d, r ∈ R^d, and t ∈ R^d, where d is the embedding size. A typical KGE model contains embedding layers and a scoring function: the embedding layers generate vector representations for entities and relations, while the scoring function assigns a score to each triple during training. Given a growing KG G_i at time i, continual knowledge graph embedding (CKGE) aims to update the embeddings of old entities E_{i−1} and relations R_{i−1} while obtaining the embeddings of new entities ΔE_i and relations ΔR_i. Finally, the embeddings of all entities E_i and relations R_i are obtained.
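The bookkeeping above can be made concrete with a small sketch, assuming each KG snapshot is stored as a set of (head, relation, tail) tuples; the data layout is our assumption.

```python
def kg_delta(old_triples, new_triples):
    """Compute the emerging knowledge between snapshots G_{i-1} and G_i.

    Returns (Delta T_i, Delta E_i, Delta R_i): the new triples and the
    entities/relations that appear only in the new triples.
    """
    delta_t = new_triples - old_triples
    old_entities = {e for h, r, t in old_triples for e in (h, t)}
    old_relations = {r for h, r, t in old_triples}
    delta_e = {e for h, r, t in delta_t for e in (h, t)} - old_entities
    delta_r = {r for h, r, t in delta_t} - old_relations
    return delta_t, delta_e, delta_r
```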
Methodology
Framework Overview
The framework of IncDE is depicted in Figure 2. Initially, when emerging knowledge appears at time i, IncDE performs hierarchical ordering on the new triples ΔT_i. Specifically, inter-hierarchical ordering is employed to divide ΔT_i into multiple layers using breadth-first search (BFS) expansion from the old graph G_{i−1}. Subsequently, intra-hierarchical ordering is applied within each layer to further sort and divide the triples. Then, the grouped ΔT_i is trained layer by layer, with the embeddings of E_{i−1} and R_{i−1} inherited from the KGE model at the previous time i−1. During training, incremental distillation is introduced: if an entity in layer j has appeared in a previous layer, its representation is distilled from the layer closest to the current one. Additionally, a two-stage training strategy is adopted. In the first stage, only the representations of new entities ΔE_i and relations ΔR_i are trained; in the second stage, all entities E_i and relations R_i are trained. Finally, the embeddings of E_i and R_i at time i are obtained.

Hierarchical Ordering
To enhance the learning of the graph structure for emerging knowledge, we first order the triples ΔT_i at time i in an inter-hierarchical way and an intra-hierarchical way, based on the importance of entities and relations, as shown in Figure 2. The ordering can be pre-computed to reduce training time. Then, we learn the new triples ΔT_i layer by layer and in order. The specific ordering strategies are as follows.
Inter-Hierarchical Ordering. For inter-hierarchical ordering, we split all new triples ΔT_i into multiple layers l_1, l_2, ..., l_n at time i. Since the representations of new entities ΔE_i are propagated from the representations of the old entities E_{i−1} and old relations R_{i−1}, we split the new triples ΔT_i based on the distance between the new entities ΔE_i and the old graph G_{i−1}. We use the breadth-first search (BFS) algorithm to progressively partition ΔT_i from G_{i−1}. First, we take the old graph as l_0. Then, we take all the new triples that contain old entities as the next layer, l_1. Next, we treat the new entities in l_1 as seen old entities. We repeat these two steps until no triples can be added to a new layer. Finally, we take all remaining triples as the final layer. In this way, we initially divide all the new triples ΔT_i into multiple layers.
Intra-Hierarchical Ordering. The importance of triples in the graph structure is also critical to the order in which entities E_i and relations R_i are learned or updated at time i. So, for the triples of each layer, we further order them based on the importance of entities and relations in the graph structure, as shown in Figure 2 (a). To measure the importance of entities E_i in the new triples ΔT_i, we first calculate the node centrality of an entity e ∈ E_i as
$$f_{nc}(e) = \frac{f_{neighbor}(e)}{N-1}, \tag{1}$$
where f_neighbor(e) denotes the number of neighbors of e, and N denotes the number of entities in the new triples ΔT_i at time i. Then, to measure the importance of relations R_i in the triples of each layer, we compute the betweenness centrality of a relation r ∈ R_i as
$$f_{bc}(r) = \sum_{s,t \in E_i,\, s \neq t} \frac{\sigma(s,t\,|\,r)}{\sigma(s,t)}, \tag{2}$$
where σ(s, t) is the number of shortest paths between s and t in the new triples ΔT_i, and σ(s, t | r) is the number of those paths passing through relation r. Note that we only compute f_nc and f_bc on the emerging KG, avoiding excessive computation over the full graph.
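Before these signals are combined into a per-triple score, the inter-hierarchical split and the node centrality of Eq. (1) can be sketched as follows; the set-based triple representation is our assumption, and relation betweenness (Eq. 2) is omitted since it requires shortest-path counting (e.g., with networkx).

```python
def bfs_layers(old_entities, new_triples):
    """Inter-hierarchical ordering: split new triples into layers by
    BFS-style expansion from the old graph."""
    layers, remaining, seen = [], set(new_triples), set(old_entities)
    while True:
        layer = {tr for tr in remaining if tr[0] in seen or tr[2] in seen}
        if not layer:
            break
        layers.append(layer)
        remaining -= layer
        seen |= {e for h, r, t in layer for e in (h, t)}
    if remaining:                  # triples never reached from the old graph
        layers.append(remaining)   # become the final layer
    return layers

def node_centrality(new_triples):
    """Eq. (1): neighbor count normalized by N - 1 over the new triples."""
    neighbors = {}
    for h, r, t in new_triples:
        neighbors.setdefault(h, set()).add(t)
        neighbors.setdefault(t, set()).add(h)
    n = len(neighbors)
    return {e: len(nb) / max(n - 1, 1) for e, nb in neighbors.items()}
```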
To obtain the importance of a triple (h, r, t) in each layer, we compute the node centrality of the head entity h, the node centrality of the tail entity t, and the betweenness centrality of the relation r. Considering the overall significance of entities and relations within the graph structure, we adopt f_nc and f_bc together. The final importance of each triple is calculated as
$$IT_{(h,r,t)} = \max\big(f_{nc}(h),\, f_{nc}(t)\big) + f_{bc}(r). \tag{3}$$
We sort the triples of each layer according to their IT values. Intra-hierarchical ordering guarantees that triples important to the graph structure are prioritized within each layer, which in turn enables more effective learning of the structure of the new graph. Moreover, intra-hierarchical ordering helps to further split the intra-layer triples, as shown in Figure 2 (b). Since the number of triples in each layer is determined by the size of the new graph, a layer could be too large to learn. To prevent this, we set the maximum number of triples in each layer to M; if the number of triples in a layer exceeds M, the layer is split into several layers of at most M triples during intra-hierarchical ordering.
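A sketch of this intra-layer sorting and splitting is shown below; the dictionary inputs f_nc and f_bc are assumed to hold the centralities of Eqs. (1) and (2).

```python
def order_and_split_layer(layer, f_nc, f_bc, max_size):
    """Intra-hierarchical ordering: sort a layer's triples by Eq. (3)
    and cap each resulting layer at max_size (M in the paper) triples."""
    def importance(triple):
        h, r, t = triple
        return max(f_nc.get(h, 0.0), f_nc.get(t, 0.0)) + f_bc.get(r, 0.0)

    ranked = sorted(layer, key=importance, reverse=True)
    return [ranked[i:i + max_size] for i in range(0, len(ranked), max_size)]
```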
Distillation and Training
After hierarchical ordering, we train the new triples ΔT_i layer by layer at time i. We take TransE (Bordes et al. 2013) as the base KGE model. When training the j-th layer (j > 0), the loss for the original TransE model is
$$L_{ckge} = \sum_{(h,r,t)\in l_j}\max\big(0,\ f(h,r,t) - f(h',r,t') + \gamma\big), \tag{4}$$
where (h′, r, t′) is a negative sample of (h, r, t) ∈ l_j and f(h, r, t) = ∥h + r − t∥_{L1/L2} is the score function of TransE. We inherit the embeddings of old entities E_{i−1} and relations R_{i−1} from the KGE model at time i−1 and randomly initialize the embeddings of new entities ΔE_i and relations ΔR_i. During training, we use incremental distillation to preserve the old knowledge. Furthermore, we propose a two-stage training strategy to prevent the embeddings of old entities and relations from being overly corrupted at the start of training.
Incremental Distillation. To alleviate catastrophic forgetting of the entities learned in previous layers, inspired by knowledge distillation for KGE models (Wang et al. 2021; Zhu et al. 2022; Liu et al. 2023), we distill the entity representations in the current layer against the entities that have appeared in previous layers, as shown in Figure 2. Specifically, if entity e in the j-th (j > 0) layer has appeared in a previous layer, we distill it with the representation of e from the nearest such layer. The distillation loss for entity e_k (k ∈ [1, |E_i|]) is
$$L^{k}_{distill} = \begin{cases}\frac{1}{2}\,(\mathbf{e}'_k - \mathbf{e}_k)^2, & |\mathbf{e}'_k - \mathbf{e}_k| \le 1\\ |\mathbf{e}'_k - \mathbf{e}_k| - \frac{1}{2}, & |\mathbf{e}'_k - \mathbf{e}_k| > 1\end{cases} \tag{5}$$
where e_k denotes the representation of entity e_k in layer j and e′_k denotes the representation of e_k from the nearest previous layer. By distilling entities that have appeared in previous layers, we remember old knowledge efficiently. However, different entities should retain their past representations to different degrees: entities with higher importance in the graph structure should be preserved to a greater extent during distillation. Besides the node centrality f_nc, and analogous to the betweenness centrality of a relation, we define the betweenness centrality f_bc(e) of an entity e at time i as
$$f_{bc}(e) = \sum_{s,t\in E_i,\, s\neq t}\frac{\sigma(s,t\,|\,e)}{\sigma(s,t)}. \tag{6}$$
We combine f_bc(e) and f_nc(e) to evaluate the importance of an entity e. Concretely, when training the j-th layer, for each new entity e_k appearing at time i, we compute f_bc(e_k) and f_nc(e_k) to get the preliminary weight λ_k as
$$\lambda_k = \lambda_0 \cdot \big(f_{bc}(e_k) + f_{nc}(e_k)\big), \tag{7}$$
where λ_0 is 1 for new entities that have already appeared in previous layers and 0 for those that have not. At the same time, we learn a matrix W ∈ R^{1×|E_i|} to dynamically adjust the distillation weights of different entities. The dynamic distillation weights are
$$[\lambda'_1, \lambda'_2, \dots, \lambda'_{|E_i|}] = [\lambda_1, \lambda_2, \dots, \lambda_{|E_i|}] \circ W, \tag{8}$$
where ∘ denotes the Hadamard product. The final distillation loss for each layer j at time i is
$$L_{distill} = \sum_{k=1}^{|E_i|} \lambda'_k \cdot L^{k}_{distill}. \tag{9}$$
When training the j-th layer, the final loss function is
$$L_{final} = L_{ckge} + L_{distill}. \tag{10}$$
After layer-by-layer training on the new triples ΔT_i, the representations of all entities E_i and relations R_i are obtained.
Two-Stage Training. When the new triples ΔT_i are incorporated into the existing graph G_{i−1} at time i, the embeddings of old entities and relations that are absent from ΔT_i remain unchanged, whereas those that appear in ΔT_i are updated. Therefore, at the initial stage of each time i, part of the representations of entities E_{i−1} and relations R_{i−1} in the old graph G_{i−1} would be corrupted by the new entities ΔE_i and relations ΔR_i that are not yet fully trained. To solve this problem, IncDE uses a two-stage training strategy to better preserve the knowledge in the old graph, as shown in Figure 2. In the first training stage, IncDE freezes the embeddings of all old entities E_{i−1} and relations R_{i−1} and trains only the embeddings of new entities ΔE_i and relations ΔR_i. In the second training stage, IncDE trains the embeddings of all entities E_i and relations R_i in the new graph. With the two-stage training strategy, IncDE prevents the structure of the old graph from being disrupted by new triples in the early training phase, while the representations of entities and relations in the old and new graphs can better adapt to each other during training.
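Putting Eqs. (4)-(10) together, one training term per layer can be sketched as follows; the model interface (score, negative_sample, entity_embedding) and the dictionaries prev_embs and weights are our own assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def layer_loss(model, layer_triples, prev_embs, weights, margin=1.0):
    """L_final = L_ckge + L_distill for one layer (Eq. 10).

    prev_embs: entity id -> embedding from the nearest previous layer.
    weights:   entity id -> dynamic distillation weight lambda'_k.
    """
    # Eq. (4): TransE margin loss over the layer (lower score = better).
    pos = model.score(layer_triples)
    neg = model.score(model.negative_sample(layer_triples))
    l_ckge = torch.clamp(pos - neg + margin, min=0).sum()

    # Eqs. (5) and (9): weighted Huber (smooth-L1) distillation.
    l_distill = torch.zeros(())
    for ent, old_emb in prev_embs.items():
        cur_emb = model.entity_embedding(ent)
        huber = F.smooth_l1_loss(cur_emb, old_emb, reduction="sum")
        l_distill = l_distill + weights[ent] * huber
    return l_ckge + l_distill
```

Under the two-stage strategy, this same loss would first be minimized with the old embeddings frozen (e.g., by masking their gradients), and then with all embeddings trainable.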
Experiments
Experimental Setup
Datasets. We use seven datasets for CKGE, including four public datasets (Cui et al. 2023): ENTITY, RELATION, FACT, and HYBRID, as well as three new datasets constructed by us: GraphEqual, GraphHigher, and GraphLower. In ENTITY, RELATION, and FACT, the number of entities, relations, and triples, respectively, increases uniformly at each time step. In HYBRID, the sum of entities, relations, and triples increases uniformly over time. However, these datasets constrain knowledge growth by requiring new triples to include at least one existing entity. To address this limitation, we relax this constraint in our three new datasets. In GraphEqual, the number of triples increases by the same increment at each time step. In GraphHigher and GraphLower, the increments of triples become higher and lower over time, respectively. Detailed statistics for all datasets are presented in Table 1. The number of time steps is set to 5. The train, valid, and test sets are allocated 3:1:1 at each time step. The datasets are available at https://github.com/seukgcode/IncDE.

Table 1: The statistics of datasets. NE, NR, and NT denote the number of cumulative entities, cumulative relations, and current triples at each time i (each cell reads NE / NR / NT).
Dataset | Time 1 | Time 2 | Time 3 | Time 4 | Time 5
ENTITY | 2,909 / 233 / 46,388 | 5,817 / 236 / 72,111 | 8,275 / 236 / 73,785 | 11,633 / 237 / 70,506 | 14,541 / 237 / 47,326
RELATION | 11,560 / 48 / 98,819 | 13,343 / 96 / 93,535 | 13,754 / 143 / 66,136 | 14,387 / 190 / 30,032 | 14,541 / 237 / 21,594
FACT | 10,513 / 237 / 62,024 | 12,779 / 237 / 62,023 | 13,586 / 237 / 62,023 | 13,894 / 237 / 62,023 | 14,541 / 237 / 62,023
HYBRID | 8,628 / 86 / 57,561 | 10,040 / 102 / 20,873 | 12,779 / 151 / 88,017 | 14,393 / 209 / 103,339 | 14,541 / 237 / 40,326
GraphEqual | 2,908 / 226 / 57,636 | 5,816 / 235 / 62,023 | 8,724 / 237 / 62,023 | 11,632 / 237 / 62,023 | 14,541 / 237 / 66,411
GraphHigher | 900 / 197 / 10,000 | 1,838 / 221 / 20,000 | 3,714 / 234 / 40,000 | 7,467 / 237 / 80,000 | 14,541 / 237 / 160,116
GraphLower | 7,505 / 237 / 160,000 | 11,258 / 237 / 80,000 | 13,134 / 237 / 40,000 | 14,072 / 237 / 20,000 | 14,541 / 237 / 10,116

Baselines. We select two kinds of baseline models: non-continual learning methods and continual learning-based methods. First, we select a non-continual learning method, Fine-tune (Cui et al. 2023), which fine-tunes on the new triples at each time step. Then, we select three kinds of continual learning-based methods: dynamic architecture-based, memory replay-based, and regularization-based baselines. Specifically, the dynamic architecture-based methods are PNN (Rusu et al. 2016) and CWR (Lomonaco and Maltoni 2017); the memory replay-based methods are GEM (Lopez-Paz and Ranzato 2017), EMR (Wang et al. 2019), and DiCGRL (Kou et al. 2020); and the regularization-based methods are SI (Zenke, Poole, and Ganguli 2017), EWC (Kirkpatrick et al. 2017), and LKGE (Cui et al. 2023).
Metrics. We evaluate model performance on the link prediction task. In particular, we replace the head or tail entity of each triple in the test set with all other entities, and then compute and rank the scores of the resulting triples. We report MRR, Hits@1, and Hits@10; higher values indicate better performance. At time i, we use the mean of the metrics over all test sets at times [1, i] as the final metric. The main results are obtained from the model generated at the last time step.
Settings. All experiments are implemented on an NVIDIA RTX 3090Ti GPU with PyTorch (Paszke et al. 2019). In all experiments, we set TransE (Bordes et al. 2013) as the base KGE model and the maximum time step to 5. The embedding size for entities and relations is 200. We tune the batch size in [512, 1024, 2048]. We choose Adam as the optimizer and select the learning rate from [1e-5, 1e-4, 1e-3]. We set the maximum number of triples in each layer, M, in [512, 1024, 2048]. To ensure fairness, all experimental results are averages over 5 runs.
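The evaluation protocol can be sketched as a raw (unfiltered) full-ranking loop; `model.score_batch` is an assumed interface that scores triples with lower values being more plausible, as in TransE.

```python
import torch

def link_prediction_metrics(model, test_triples, num_entities, ks=(1, 10)):
    """MRR and Hits@k via full-entity replacement ranking (a sketch)."""
    ranks = []
    for h, r, t in test_triples:
        cands = torch.arange(num_entities)
        for scores, true_id in (
            (model.score_batch(h, r, cands), t),   # corrupt the tail
            (model.score_batch(cands, r, t), h),   # corrupt the head
        ):
            # 1-based rank of the true entity among all candidates.
            ranks.append(1 + int((scores < scores[true_id]).sum()))
    ranks = torch.tensor(ranks, dtype=torch.float)
    mrr = (1.0 / ranks).mean().item()
    hits = {k: (ranks <= k).float().mean().item() for k in ks}
    return mrr, hits
```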
Results
Main Results. The results of the main experiments on the seven datasets are reported in Table 2 and Table 3.

Table 2: Main experimental results on ENTITY, RELATION, FACT, HYBRID, and GraphEqual (each cell reads MRR / H@1 / H@10). In the original typesetting, bold indicates the best result and underline the second best.
Method | ENTITY | RELATION | FACT | HYBRID | GraphEqual
Fine-tune | 0.165 / 0.085 / 0.321 | 0.093 / 0.039 / 0.195 | 0.172 / 0.090 / 0.339 | 0.135 / 0.069 / 0.262 | 0.183 / 0.096 / 0.358
PNN | 0.229 / 0.130 / 0.425 | 0.167 / 0.096 / 0.305 | 0.157 / 0.084 / 0.290 | 0.185 / 0.101 / 0.349 | 0.212 / 0.118 / 0.405
CWR | 0.088 / 0.028 / 0.202 | 0.021 / 0.010 / 0.043 | 0.083 / 0.030 / 0.192 | 0.037 / 0.015 / 0.077 | 0.122 / 0.041 / 0.277
GEM | 0.165 / 0.085 / 0.321 | 0.093 / 0.040 / 0.196 | 0.175 / 0.092 / 0.345 | 0.136 / 0.070 / 0.263 | 0.189 / 0.099 / 0.372
EMR | 0.171 / 0.090 / 0.330 | 0.111 / 0.052 / 0.225 | 0.171 / 0.090 / 0.337 | 0.141 / 0.073 / 0.267 | 0.185 / 0.099 / 0.359
DiCGRL | 0.107 / 0.057 / 0.211 | 0.133 / 0.079 / 0.241 | 0.162 / 0.084 / 0.320 | 0.149 / 0.083 / 0.277 | 0.104 / 0.040 / 0.226
SI | 0.154 / 0.072 / 0.311 | 0.113 / 0.055 / 0.224 | 0.172 / 0.088 / 0.343 | 0.111 / 0.049 / 0.229 | 0.179 / 0.092 / 0.353
EWC | 0.229 / 0.130 / 0.423 | 0.165 / 0.093 / 0.306 | 0.201 / 0.113 / 0.382 | 0.186 / 0.102 / 0.350 | 0.207 / 0.113 / 0.400
LKGE | 0.234 / 0.136 / 0.425 | 0.192 / 0.106 / 0.366 | 0.210 / 0.122 / 0.387 | 0.207 / 0.121 / 0.379 | 0.214 / 0.118 / 0.407
IncDE | 0.253 / 0.151 / 0.448 | 0.199 / 0.111 / 0.370 | 0.216 / 0.128 / 0.391 | 0.224 / 0.131 / 0.401 | 0.234 / 0.134 / 0.432

Table 3: Main experimental results on GraphHigher and GraphLower (MRR / H@1 / H@10).
Method | GraphHigher | GraphLower
Fine-tune | 0.198 / 0.108 / 0.375 | 0.185 / 0.098 / 0.363
PNN | 0.186 / 0.097 / 0.364 | 0.213 / 0.119 / 0.407
CWR | 0.189 / 0.096 / 0.374 | 0.032 / 0.005 / 0.080
GEM | 0.197 / 0.109 / 0.372 | 0.170 / 0.084 / 0.346
EMR | 0.202 / 0.113 / 0.379 | 0.188 / 0.101 / 0.362
DiCGRL | 0.116 / 0.041 / 0.242 | 0.102 / 0.039 / 0.222
SI | 0.190 / 0.099 / 0.371 | 0.186 / 0.099 / 0.366
EWC | 0.198 / 0.106 / 0.385 | 0.210 / 0.116 / 0.405
LKGE | 0.207 / 0.120 / 0.382 | 0.210 / 0.116 / 0.403
IncDE | 0.227 / 0.132 / 0.412 | 0.228 / 0.129 / 0.426

Firstly, it is worth noting that IncDE exhibits a considerable improvement over Fine-tune: 2.9%-10.6% in MRR, 2.4%-7.2% in Hits@1, and 3.7%-17.5% in Hits@10. These results suggest that direct fine-tuning leads to catastrophic forgetting. Secondly, IncDE outperforms all CKGE baselines. Notably, IncDE achieves improvements of 1.5%-19.6%, 1.0%-12.4%, and 1.9%-34.6% in MRR, Hits@1, and Hits@10, respectively, compared to dynamic architecture-based approaches (PNN and CWR). Compared to replay-based baselines (GEM, EMR, and DiCGRL), IncDE improves by 2.5%-14.6%, 1.9%-9.4%, and 3.3%-23.7% in MRR, Hits@1, and Hits@10. Moreover, IncDE obtains 0.6%-11.3%, 0.5%-8.2%, and 0.4%-17.2% improvements in MRR, Hits@1, and Hits@10 compared to regularization-based methods (SI, EWC, and LKGE). These results demonstrate the superior performance of IncDE on growing KGs. Thirdly, IncDE exhibits distinct improvements across different types of datasets. On datasets with equal growth of knowledge (ENTITY, FACT, RELATION, HYBRID, and GraphEqual), IncDE achieves an average improvement of 1.4% in MRR over the state-of-the-art methods. On datasets with unequal growth of knowledge (GraphHigher and GraphLower), IncDE demonstrates an improvement of 1.8%-2.0% in MRR over the best methods, suggesting that IncDE is particularly well-suited to scenarios involving unequal knowledge growth. Notably, on GraphHigher, a dataset closer to real scenarios where a substantial amount of new knowledge emerges, IncDE shows the most apparent advantage over the strongest baselines, by 2.0% in MRR. This indicates that IncDE performs well when a substantial amount of new knowledge is emerging. We thus verify the scalability of IncDE on datasets with varying sizes (GraphHigher, GraphLower, and GraphEqual, whose per-step triples go from 10K up to 160K, from 160K down to 10K, and remain around 62K, respectively).
In particular, we observe that IncDE improves by only 0.6%-0.7% in MRR on RELATION and FACT compared to the best baseline results, improvements that are less significant than on the other datasets. This can be attributed to the limited growth of new entities in these two datasets, indicating that IncDE is most advantageous in situations where the number of entities varies significantly. In real life, the number of relation types between entities remains relatively stable, while entities appear in large numbers; this is exactly the setting to which IncDE adapts well, handling a multitude of entities and their corresponding relations effectively.
Ablation Experiments. We investigate the effects of hierarchical ordering, incremental distillation, and the two-stage training strategy, as reported in Table 4 and Table 5.

Table 4: Ablation experimental results on ENTITY, RELATION, FACT, HYBRID, and GraphEqual (MRR / H@1 / H@10). HO is the hierarchical ordering, ID is the incremental distillation, and TS is the two-stage training strategy. Without HO, the new KG is learned in randomized order.
Method | ENTITY | RELATION | FACT | HYBRID | GraphEqual
IncDE w/o HO | 0.248 / 0.148 / 0.441 | 0.186 / 0.105 / 0.344 | 0.197 / 0.119 / 0.347 | 0.210 / 0.122 / 0.380 | 0.230 / 0.131 / 0.426
IncDE w/o ID | 0.188 / 0.099 / 0.354 | 0.134 / 0.070 / 0.254 | 0.167 / 0.090 / 0.321 | 0.185 / 0.105 / 0.340 | 0.199 / 0.107 / 0.383
IncDE w/o TS | 0.250 / 0.149 / 0.444 | 0.186 / 0.099 / 0.354 | 0.213 / 0.126 / 0.389 | 0.220 / 0.127 / 0.397 | 0.231 / 0.132 / 0.430
IncDE | 0.253 / 0.151 / 0.448 | 0.199 / 0.111 / 0.370 | 0.216 / 0.128 / 0.391 | 0.224 / 0.131 / 0.401 | 0.234 / 0.134 / 0.432

Table 5: Ablation experimental results on GraphHigher and GraphLower (MRR / H@1 / H@10).
Method | GraphHigher | GraphLower
IncDE w/o HO | 0.221 / 0.129 / 0.405 | 0.224 / 0.126 / 0.424
IncDE w/o ID | 0.225 / 0.131 / 0.410 | 0.196 / 0.105 / 0.377
IncDE w/o TS | 0.225 / 0.130 / 0.408 | 0.225 / 0.128 / 0.423
IncDE | 0.227 / 0.132 / 0.412 | 0.228 / 0.129 / 0.426

Firstly, when we remove incremental distillation, model performance drops significantly: 0.2%-6.5% in MRR, 0.1%-5.2% in Hits@1, and 0.2%-11.6% in Hits@10. These findings highlight the crucial role of incremental distillation in preserving the structure of the old graph while learning the representation of the new graph. Secondly, model performance declines slightly when we remove the hierarchical ordering or the two-stage training strategy: MRR decreases by 0.2%-1.8%, Hits@1 by 0.1%-1.8%, and Hits@10 by 0.2%-4.4%. The results show that hierarchical ordering and two-stage training both contribute to the performance of IncDE.
Performance of IncDE at Each Time Step. Figure 3 shows how well IncDE remembers old knowledge at different times. First, we observe that on several test sets (D1, D2, D3, D4 of ENTITY; D3, D4 of HYBRID), the performance of IncDE decreases only slightly, by 0.2%-3.1%, with increasing time.
Figure 3: Effectiveness of IncDE at each time on ENTITY, HYBRID, and GraphLower. Different colors represent the performance of models generated at different times; Di denotes the test set at time i.

In particular, the performance of IncDE does not undergo significant degradation on several test sets, such as D1 of HYBRID (Time 2 to Time 4) and D2 of GraphLower (Time 2 to Time 5). This means that IncDE remembers old knowledge well on most datasets. Second, on a few test sets, the performance of IncDE unexpectedly improves as training continues. Specifically, the performance of IncDE gradually increases by 0.6% in MRR on D3 of GraphLower. This demonstrates that IncDE learns emerging knowledge well and even enhances old knowledge with emerging knowledge.
Effect of Learning and Memorizing. To verify that IncDE can learn emerging knowledge well and remember old knowledge efficiently, we study the performance of IncDE and Fine-tune at each time step on the new KG and on old KGs, respectively, as shown in Figure 4. To assess the performance on old KGs, we calculate the mean MRR across all past time steps. Firstly, we observe that IncDE outperforms Fine-tune on the new KG, with an MRR higher by 0.5%-5.5%, indicating that IncDE effectively learns emerging knowledge. Secondly, IncDE is 3.8%-11.2% higher than Fine-tune on old KGs in MRR. These findings demonstrate that IncDE mitigates catastrophic forgetting and retains old knowledge more efficiently.

Figure 4: Effectiveness of learning emerging knowledge and memorizing old knowledge (MRR over time on GraphEqual and GraphLower, for IncDE and Fine-tune on the new KG and on old KGs).

Effect of Maximum Layer Sizes. To investigate the effect of the maximum layer size M in incremental distillation, we study the performance of IncDE models at the last time step with different M, as shown in Figure 5. First, we find that model performance on all datasets rises with M in the range [128, 1024]. This indicates that, in general, the higher M, the more influential incremental distillation becomes. Second, we observe a significant performance drop on some datasets when M reaches 2048. This implies that too large an M leads to too few layers and limits the performance of incremental distillation. Empirically, M = 1024 is the best size on most datasets, which further proves that it is necessary to limit the number of triples learned in each layer.

Figure 5: Results of MRR and Hits@10 with different max sizes of layers on all datasets.
Table 6: Results of the case study. We use the model generated at time 5 and randomly select a query appearing in ENTITY at time 1 for prediction; the correct answer is Computer Science.
Query: (Arizona State University, major field of study, ?)
Method | Top 3 Candidates
EWC | Medicine, Electrical engineering, Computer Science
PNN | Medicine, Electrical engineering, Computer Science
LKGE | English Literature, Computer Science, Political Science
IncDE | Computer Science, University of Tehran, Medicine
w/o HO | Computer Science, Medicine, University of Tehran
w/o ID | Political Science, English Literature, Theatre
w/o TS | Computer Science, Medicine, University of Tehran

Case Study. To further explore the capacity of IncDE to preserve old knowledge, we conduct a case study, as shown in Table 6. When predicting the major field of study of Arizona State University, IncDE ranks the correct answer, Computer Science, in the first position, outperforming strong baselines such as EWC, PNN, and LKGE, which rank it second or third. This indicates that while other methods forget past knowledge to some degree, IncDE remembers old knowledge at each time step accurately. Moreover, when incremental distillation (ID) is removed, IncDE fails to place the correct answer within the top three positions, demonstrating that the performance of IncDE on old knowledge declines significantly without incremental distillation. Conversely, after removing hierarchical ordering (HO) or the two-stage training strategy (TS), IncDE still ranks the correct answer first. This observation strongly supports that incremental distillation is the component that gives IncDE its crucial advantage over alternative strong baselines in preserving old knowledge.

Discussion
Novelty of IncDE. The novelty of IncDE can be summarized in two aspects. (1) Efficient knowledge-preserving distillation. Although IncDE utilizes distillation, it differs from previous KGE distillation methods (Wang et al. 2021; Zhu et al. 2022; Liu et al. 2023). For one thing, whereas other KGE distillation methods mainly distill the final output distribution, incremental distillation (ID) distills the intermediate hidden states. This preserves essential features of old knowledge, making it adaptable to various downstream tasks. For another, ID transfers knowledge from the model itself, which mitigates error propagation compared to transferring knowledge from other models. (2) Explicit graph-aware mechanism. Compared to other CKGE baselines, IncDE stands out by incorporating the graph structure into continual learning. This explicit graph-aware mechanism allows IncDE to leverage the inherent semantics encoded in the graph, enabling it to determine an effective learning order and to balance the preservation of old knowledge.
Three Components of IncDE. The three components of IncDE, hierarchical ordering (HO), incremental distillation (ID), and two-stage training (TS), are inherently interdependent and should be used together, for two reasons. (1) Design principle.
The fundamental motivation of IncDE lies in effectively learning emerging knowledge while simultaneously preserving old knowledge. This objective is accomplished by all three components. On the one hand, HO divides new triples into layers, optimizing the process of learning emerging knowledge. On the other hand, ID and TS distill and preserve the representations of entities, ensuring the effective preservation of old knowledge. (2) Interdependence. The three components are intrinsically interrelated and should be employed together. First, HO generates the partition of new triples that is subsequently fed into ID. Second, by employing TS, ID prevents old entities from being disrupted in the early training stages.

Significance of Incremental Distillation Even though the three proposed components of IncDE (incremental distillation (ID), hierarchical ordering (HO), and two-stage training (TS)) are all effective for the CKGE task, ID serves as the central module among them. Theoretically, the primary challenge in continual learning is the catastrophic forgetting that occurs when learning step by step, and this also applies to the CKGE task. To tackle this challenge, ID introduces the explicit graph structure to distill entity representations, effectively preserving old knowledge layer by layer throughout training. In contrast, HO focuses on learning new knowledge well, and TS can only alleviate catastrophic forgetting in the early stages of training. Therefore, ID plays the most important role among all components in the CKGE task. In experiments, we observe that ID yields significant improvements (4.1% in MRR on average) compared to HO (0.9% in MRR on average) and TS (0.5% in MRR on average), as shown in Table 4 and Table 5. Such results further verify that ID is the pivotal component compared with HO and TS. The three components interact with each other and work together to complete the CKGE task.

Conclusion This paper proposes a novel continual knowledge graph embedding method, IncDE, which incorporates the graph structure of KGs to learn emerging knowledge while remembering old knowledge. Firstly, we perform hierarchical ordering on the triples in the new knowledge graph to obtain an optimal learning sequence. Secondly, we propose incremental distillation to preserve old knowledge while training the new triples layer by layer. Moreover, we optimize the training process with a two-stage training strategy. In the future, we will consider how to handle the situation where old knowledge is deleted as knowledge graphs evolve. It is also worth addressing the integration of cross-domain and heterogeneous data into expanding knowledge graphs.

Acknowledgments We thank the anonymous reviewers for their insightful comments. This work was supported by the National Science Foundation of China (Grant No. 62376057) and the Start-up Research Fund of Southeast University (RF1028623234). All opinions are those of the authors and do not reflect the view of the sponsors.

References
Bizer, C.; Lehmann, J.; Kobilarov, G.; Auer, S.; Becker, C.; Cyganiak, R.; and Hellmann, S. 2009. DBpedia - A crystallization point for the Web of Data. Journal of Web Semantics, 7(3): 154–165.
Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; and Yakhnenko, O. 2013. Translating embeddings for modeling multi-relational data. In NIPS.
Bordes, A.; Weston, J.; and Usunier, N. 2014.
Open question answering with weakly supervised embedding models. In ECML-PKDD.
Cui, Y.; Wang, Y.; Sun, Z.; Liu, W.; Jiang, Y.; Han, K.; and Hu, W. 2023. Lifelong embedding learning and transfer for growing knowledge graphs. In AAAI.
Daruna, A.; Gupta, M.; Sridharan, M.; and Chernova, S. 2021. Continual learning of knowledge graph embeddings. IEEE Robotics and Automation Letters, 6(2): 1128–1135.
DBpedia. 2021. DBpedia - A community-driven knowledge graph. https://wiki.dbpedia.org/. Accessed: 2023-08-01.
Dong, X.; Gabrilovich, E.; Heitz, G.; Horn, W.; Lao, N.; Murphy, K.; Strohmann, T.; Sun, S.; and Zhang, W. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In SIGKDD.
Febrinanto, F. G.; Xia, F.; Moore, K.; Thapa, C.; and Aggarwal, C. 2023. Graph lifelong learning: A survey. IEEE Computational Intelligence Magazine, 18(1): 32–51.
Hamaguchi, T.; Oiwa, H.; Shimbo, M.; and Matsumoto, Y. 2017. Knowledge transfer for out-of-knowledge-base entities: a graph neural network approach. In IJCAI.
Kazemi, S. M.; and Poole, D. 2018. SimplE embedding for link prediction in knowledge graphs. In NeurIPS.
Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A. A.; Milan, K.; Quan, J.; Ramalho, T.; Grabska-Barwinska, A.; et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13): 3521–3526.
Kou, X.; Lin, Y.; Liu, S.; Li, P.; Zhou, J.; and Zhang, Y. 2020. Disentangle-based Continual Graph Representation Learning. In EMNLP.
Li, G.; Chen, X.; Wang, P.; Xie, J.; and Luo, Q. 2022. FastRE: Towards fast relation extraction with convolutional encoder and improved cascade binary tagging framework. In IJCAI.
Liang, K.; Meng, L.; Liu, M.; Liu, Y.; Tu, W.; Wang, S.; Zhou, S.; Liu, X.; and Sun, F. 2022. A Survey of Knowledge Graph Reasoning on Graph Types: Static, Dynamic, and Multimodal.
Liu, H.; Yang, Y.; and Wang, X. 2021. Overcoming catastrophic forgetting in graph neural networks. In AAAI.
Liu, J.; Wang, P.; Shang, Z.; and Wu, C. 2023. IterDE: An Iterative Knowledge Distillation Framework for Knowledge Graph Embeddings. In AAAI.
Liu, Y.; Wang, P.; Li, Y.; Shao, Y.; and Xu, Z. 2020. AprilE: Attention with pseudo residual connection for knowledge graph embedding. In COLING.
Lomonaco, V.; and Maltoni, D. 2017. CORe50: A new dataset and benchmark for continuous object recognition. In CoRL.
Lopez-Paz, D.; and Ranzato, M. 2017. Gradient episodic memory for continual learning. In NeurIPS.
Noy, N.; Gao, Y.; Jain, A.; Narayanan, A.; Patterson, A.; and Taylor, J. 2019. Industry-scale Knowledge Graphs: Lessons and Challenges: Five diverse technology companies show how it's done. Queue, 17(2): 48–75.
Pan, Z.; and Wang, P. 2021. Hyperbolic hierarchy-aware knowledge graph embedding for link prediction. In EMNLP.
Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; et al. 2019. PyTorch: An imperative style, high-performance deep learning library. In NeurIPS.
Rossi, A.; Barbosa, D.; Firmani, D.; Matinata, A.; and Merialdo, P. 2021. Knowledge graph embedding for link prediction: A comparative analysis. ACM Transactions on Knowledge Discovery from Data (TKDD), 15(2): 1–49.
Rusu, A. A.; Rabinowitz, N. C.; Desjardins, G.; Soyer, H.; Kirkpatrick, J.; Kavukcuoglu, K.; Pascanu, R.; and Hadsell, R. 2016. Progressive neural networks. arXiv preprint arXiv:1606.04671.
Shang, Z.; Wang, P.; Liu, Y.; Liu, J.; and Ke, W. 2023.
ASKRL: An Aligned-Spatial Knowledge Representation Learning Framework for Open-World Knowledge Graph. In ISWC.
Song, H.-J.; and Park, S.-B. 2018. Enriching translation-based knowledge graph embeddings through continual learning. IEEE Access, 6: 60489–60497.
Suchanek, F. M.; Kasneci, G.; and Weikum, G. 2007. Yago: A core of semantic knowledge. In WWW.
Sun, Z.; Deng, Z.-H.; Nie, J.-Y.; and Tang, J. 2019. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space. In ICLR.
Trouillon, T.; Welbl, J.; Riedel, S.; Gaussier, É.; and Bouchard, G. 2016. Complex embeddings for simple link prediction. In ICML.
Wang, H.; Xiong, W.; Yu, M.; Guo, X.; Chang, S.; and Wang, W. Y. 2019. Sentence Embedding Alignment for Lifelong Relation Extraction. In NAACL.
Wang, K.; Liu, Y.; Ma, Q.; and Sheng, Q. Z. 2021. MulDE: Multi-teacher knowledge distillation for low-dimensional knowledge graph embeddings. In WWW.
Wang, Q.; Mao, Z.; Wang, B.; and Guo, L. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29(12): 2724–2743.
Wei, D.; Gu, Y.; Song, Y.; Song, Z.; Li, F.; and Yu, G. 2022. IncreGNN: Incremental Graph Neural Network Learning by Considering Node and Parameter Importance. In DASFAA.
Zenke, F.; Poole, B.; and Ganguli, S. 2017. Continual learning through synaptic intelligence. In ICML.
Zhou, F.; and Cao, C. 2021. Overcoming catastrophic forgetting in graph neural networks with experience replay. In AAAI.
Zhu, Y.; Zhang, W.; Chen, M.; Chen, H.; Cheng, X.; Zhang, W.; and Chen, H. 2022. DualDE: Dually distilling knowledge graph embedding for faster and cheaper reasoning. In WSDM.
Graph Disentangled Contrastive Learning with Personalized Transfer for Cross-Domain Recommendation

Jing Liu, Lele Sun, Weizhi Nie, Peiguang Jing, Yuting Su*
School of Electrical and Information Engineering, Tianjin University, China
{jliu_tju, sunlele, weizhinie, pgjing, ytsu}@tju.edu.cn

*Corresponding Author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract Cross-Domain Recommendation (CDR) has been proven to effectively alleviate the data sparsity problem in Recommender Systems (RS). Recent CDR methods often disentangle user features into domain-invariant and domain-specific features for efficient cross-domain knowledge transfer. Despite showing robust performance, existing disentangled CDR approaches leave three crucial aspects unexplored: i) the significance nuances of interaction behaviors are ignored when generating disentangled features; ii) user features are disentangled irrespective of the individual items to be recommended; iii) general knowledge transfer overlooks the user's personality when interacting with diverse items. To this end, we propose a Graph Disentangled Contrastive framework for CDR (GDCCDR) with personalized transfer by meta-networks. An adaptive parameter-free filter is proposed to gauge the significance of diverse interactions, thereby facilitating more refined disentangled representations. In light of the success of Contrastive Learning (CL) in RS, we propose two CL-based constraints for item-aware disentanglement. Proximate CL ensures the coherence of domain-invariant features between domains, while eliminatory CL strives to disentangle features within each domain using the mutual information between users and items. Finally, for domain-invariant features, we adopt meta-networks to achieve personalized transfer. Experimental results on four real-world datasets demonstrate the superiority of GDCCDR over state-of-the-art methods.

Introduction Recommender systems (RS) find wide-ranging applications on consumer platforms such as Kuaishou and Amazon, primarily due to their effectiveness in capturing personalized user preferences. However, the limited user-item interactions in certain scenarios (i.e., the data sparsity issue) make it difficult to build precise interest models. To tackle this, Cross-Domain Recommendation (CDR) seeks to transfer valuable knowledge from a source domain to improve performance on the target domain. The existing CDR methods can be roughly divided into two branches, which we call blended methods and disentangled methods. Blended approaches employ diverse transfer layers to combine the representations learned within their respective domains (see Fig. 1 (a)). For instance, CoNet (Hu, Zhang, and Yang 2018) utilizes cross-connection networks to transfer information. To avoid the negative impact caused by transferring domain-specific features, Disentangled Cross-Domain Recommendation (DCDR) has gained traction. The typical paradigm of DCDR is illustrated in Fig. 1 (b). MADD (Zhang et al. 2023b) introduces an orthogonal loss to differentiate between domain-invariant and domain-specific user features within each domain. DisenCDR (Cao et al. 2022) employs variational inference for disentanglement, relying on the Kullback-Leibler (KL) divergence between user features. However, we argue that the existing DCDR methods lead to sub-optimal feature disentanglement for three reasons.
Firstly, it is overlooked that each interaction carries an individual underlying intent, implying that diverse interactions play varying roles in generating disentangled features. For example, when transferring knowledge from clothes to books, purchasing a cotton skirt may enhance domain-specific features more than domain-invariant features, as cotton material is of little relevance to book recommendation. Neglecting the distinctiveness of interactions hinders the model from capturing finer-grained disentangled representations. Secondly, the existing orthogonal losses or KL divergence used for feature disentanglement only manipulate user features, regardless of items. The domain-specific and domain-invariant user features are constrained to stay away from each other (Fig. 1 (b)), whereas there is no guarantee on their correlation to individual items. Yet, modern RSs collectively consider both user and item features for practical recommendation, which implies the inefficiency of existing feature disentanglement methods. Thirdly, even with disentangled user representations in hand, effective transfer of domain-invariant features remains a formidable challenge. The diversity of user personalities highlights the need for personalized cross-domain transfer, which is currently untouched in existing DCDR methods that simply adopt weighted fusion or concatenation (Zhang et al. 2023a).

[Figure 1: Comparison of existing CDR methods.]

In this paper, we propose to address the above-mentioned limitations through Graph Disentanglement and Contrastive learning with meta-networks for CDR. Specifically, to capture interaction nuances and refine disentangled features, we design an adaptive parameter-free filter in graph convolution. This filter gauges interaction significance based on user-item similarity when generating disentangled features (see the dashed arrow in the middle of Fig. 1 (c)). Leveraging the effectiveness of contrastive learning (CL) in aligning positive pairs and distinguishing negative pairs, we design two distinct forms of CL for user feature disentanglement. Proximate CL enhances the consistency of domain-invariant features between domains, while eliminatory CL disentangles features using the mutual information (MI) between users and items within each domain (see the red arrows at the top of Fig. 1 (c)). Finally, meta-networks are adopted to facilitate personalized transfer of domain-invariant features. Our main contributions are summarized as follows:
• We propose a novel disentangled CDR model named GDCCDR. To the best of our knowledge, we are the first to introduce disentangled graph updates to CDR.
• We formulate two contrastive learning-based constraints to enhance disentanglement: one focuses on domain-invariant features, while the other targets domain-specific features by leveraging mutual information.
• We adopt meta-networks to facilitate personalized transfer of domain-invariant features.
• We conduct extensive experiments on four real-world CDR datasets to evaluate our proposed GDCCDR.

Related Work Existing CDR methods can be roughly divided into two branches depending on the way knowledge is transferred across domains, which we call blended CDR and disentangled CDR.

Blended Cross-Domain Recommendation Blended CDR methods mainly transfer and blend all information across different domains. CoNet (Hu, Zhang, and Yang 2018) establishes a cross-connection network between two domains to achieve knowledge transfer.
DDTCDR (Li and Tuzhilin 2020) proposes latent orthogonal mapping functions for shared users. PPGN (Zhao, Li, and Fu 2019) enhances transfer using multiple stacked GNN layers for robust representations, while BITGCF (Liu et al. 2020) designs a feature fusion module inside the GNN for better knowledge transfer. The indiscriminate feature mixture brings the risk of negative transfer (i.e., transferring domain-specific user features). Consequently, several disentanglement-based CDR approaches have arisen.

Disentangled Cross-Domain Recommendation Disentangled representations of latent user intents from implicit feedback have received attention in recommender systems. In CDR, ATLRec (Li et al. 2020) uses MLPs to extract domain-invariant and domain-specific features, employing a GRL-based domain discriminator to align domain-invariant user features across domains. MADD (Zhang et al. 2023b) disentangles features within domains by adding orthogonal constraints on top of ATLRec. DisenCDR (Cao et al. 2022) introduces variational inference to widen the gap between user features within domains using KL divergence. DCCDR (Zhang et al. 2023a), the latest method, employs two parallel GNNs for disentanglement. While existing methods mainly focus on user features, our approach uniquely includes the mutual information of user and item features for disentanglement.

Contrastive Learning (CL) in CDR Contrastive learning, a potent self-supervised technique, has been used to tackle data sparsity in RS. CCDR (Xie et al. 2022) is the first to introduce CL into CDR, aiming to achieve consistency across the domain representations of the same user. DR-MTCDR (Guo et al. 2023) utilizes CL to ensure the consistency of augmented views. UniCDR (Cao et al. 2023) applies CL to user features before and after masking. These CL-based CDR methods pull all user features across domains close together, including domain-specific information that should remain distinct, which is an apparent flaw. DCCDR (Zhang et al. 2023a) tackles this issue by considering only domain-invariant features. Unlike these methods, our model employs two forms of CL, for invariant and specific features respectively, to achieve the desired disentanglement.

Methodology

Problem Definition and Notations In this work, we focus on the CDR scenario with users shared between two domains. Fig. 2 overviews the proposed GDCCDR model.

[Figure 2: Overview of our GDCCDR model. Best viewed in color.]

The user features are disentangled into domain-invariant and domain-specific representations with adaptive graph disentanglement and contrastive learning. This facilitates personalized transfer of domain-invariant features, thereby enhancing performance in both domains. The two domains are denoted as $D^A$ and $D^B$. $U$ represents the common set of users, and $V^A$ and $V^B$ represent the sets of items. Additionally, we represent the two interaction matrices as $\mathbf{R}^A \in \mathbb{R}^{|U| \times |V^A|}$ and $\mathbf{R}^B \in \mathbb{R}^{|U| \times |V^B|}$, where the entry $R_{ij} = 1$ indicates that user $i$ has interacted with item $j$, and $R_{ij} = 0$ otherwise.

Disentangled Embedding Initialization. To model the intricate user-item relationships, we embed them into a $d$-dimensional vector space. We initially parameterize user and item ID embeddings into independent embedding matrices: uppercase $\mathbf{U}^A_0, \mathbf{U}^B_0 \in \mathbb{R}^{|U| \times d}$ for users, and $\mathbf{V}^A_0 \in \mathbb{R}^{|V^A| \times d}$, $\mathbf{V}^B_0 \in \mathbb{R}^{|V^B| \times d}$ for items. Lowercase $\mathbf{u}_i$ and $\mathbf{v}_j$ denote the embeddings of individual user $i$ and item $j$.
To guarantee the independence of the user domain-invariant (I) and domain-specific (S) representations, distinct projections are used to map $\mathbf{U}^*_0$ into separate vector spaces:

$$\mathbf{U}^{*,I}_0 = \mathbf{U}^*_0 \odot \sigma(\mathbf{U}^*_0 \mathbf{W}^*_I + \mathbf{b}^*_I), \qquad \mathbf{U}^{*,S}_0 = \mathbf{U}^*_0 \odot \sigma(\mathbf{U}^*_0 \mathbf{W}^*_S + \mathbf{b}^*_S), \tag{1}$$

where $*$ denotes the chosen domain ($D^A$ or $D^B$), and $\mathbf{U}^{*,I}_0$ and $\mathbf{U}^{*,S}_0$ are the initial user domain-invariant and domain-specific embedding matrices. $\sigma(\cdot)$ and $\odot$ denote the sigmoid activation function and element-wise multiplication. $\mathbf{W}^*_I, \mathbf{W}^*_S \in \mathbb{R}^{d \times d}$ and $\mathbf{b}^*_I, \mathbf{b}^*_S \in \mathbb{R}^{d \times 1}$ are the learnable projection and bias parameters. Our main focus is transferring user-invariant features, so there is no need to slice or project item embeddings.

Adaptive Graph Disentanglement

Message Propagation. Graph neural networks (GNNs) are widely recognized for their broad applicability, as in DisenKGAT (Wu et al. 2021a) and KCRL (Nie et al. 2023), and have become the dominant solution for recommender systems, e.g., SGL (Wu et al. 2021b) and SimGCL (Yu et al. 2022). Drawing on these, we devise our model using graph-based message passing. Without loss of generality, we describe the domain-invariant modeling, with message propagation as follows:

$$\tilde{\mathbf{U}}^{*,I}_{l+1} = \bar{\mathbf{R}}^* \cdot \mathbf{V}^{*,I}_l, \qquad \tilde{\mathbf{V}}^{*,I}_{l+1} = (\bar{\mathbf{R}}^*)^\top \cdot \mathbf{U}^{*,I}_l, \tag{2}$$

where $\mathbf{U}^{*,I}_l$ and $\mathbf{V}^{*,I}_l$ are the domain-invariant embeddings for users and items in the $l$-th GNN layer, and $\mathbf{U}^{*,I}_0$, $\mathbf{V}^{*,I}_0$ are the initial projection embeddings. Note that $\mathbf{V}^*_0 = \mathbf{V}^{*,I}_0 = \mathbf{V}^{*,S}_0$. $\bar{\mathbf{R}}^* \in \mathbb{R}^{|U| \times |V^*|}$ denotes the normalized adjacency matrix derived from $\mathbf{R}^*$, calculated as $\bar{\mathbf{R}}^* = (\mathbf{D}^*_{(i)})^{-1/2} \cdot \mathbf{R}^* \cdot (\mathbf{D}^*_{(j)})^{-1/2}$, where $\mathbf{D}^*_{(i)}$ and $\mathbf{D}^*_{(j)}$ are diagonal degree matrices.

Adaptive Parameter-Free Filter. Unlike GNN-based DCDR methods such as DCCDR, we argue that interactions contribute differently to generating disentangled features. Considering the clothing and book domains, if user $i$ purchases clothing item $j$ due to a domain-specific factor, such as cotton material (which is less relevant to the book domain), we can deduce that, in the clothing domain, this purchase behavior likely contributes more to the domain-specific features than to the domain-invariant features. With this insight, we design an Adaptive Parameter-Free Filter (APFF) that evaluates each interaction's contribution. It gauges the similarity between the embeddings of $\mathbf{u}_i$ and $\mathbf{v}_j$ based on the domain-invariant and domain-specific representations, without involving additional parameters. The adaptive filter for each interaction during graph disentanglement is computed as follows:

$$\mathcal{F}^{*,c}_l(i, j) = \sigma\big(s(\mathbf{u}^{*,c}_{i,l}, \mathbf{v}^{*,c}_{j,l})\big), \quad c \in \{I, S\}, \tag{3}$$

where $s(\cdot, \cdot)$ measures similarity; here we simplify it to a dot product. A higher weight $\mathcal{F}(i, j)$ signifies that the model assigns greater importance to the interaction's role in generating the domain-specific or domain-invariant feature. Once the adaptive filter weights for all interactions are obtained, we can adaptively update the graph for a given feature channel by element-wise multiplication of the original normalized adjacency matrix $\bar{\mathbf{R}}^*$ with $\mathcal{F}^{*,c}_l \in \mathbb{R}^{|U| \times |V^*|}$:

$$\bar{\mathbf{G}}^{*,I}_l = \bar{\mathbf{R}}^* \odot \mathcal{F}^{*,I}_l, \qquad \bar{\mathbf{G}}^{*,S}_l = \bar{\mathbf{R}}^* \odot \mathcal{F}^{*,S}_l. \tag{4}$$
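To make Eqs. (2)-(4) concrete, below is a minimal dense-tensor sketch of one propagation step with the adaptive parameter-free filter. This is our own illustration, not the authors' code; it assumes the normalized adjacency is stored densely, whereas a real implementation would likely use sparse tensors.

```python
import torch

def apff_step(R_norm, U, V):
    """One graph-disentanglement step for a single feature channel c
    (sketch of Eqs. 2-4); R_norm is the normalized |U| x |V| adjacency."""
    # Eq. 2: plain message passing over the normalized bipartite graph.
    U_msg = R_norm @ V
    V_msg = R_norm.T @ U
    # Eq. 3: parameter-free filter, sigmoid of user-item dot products;
    # multiplying by R_norm below keeps weights only on observed edges.
    F = torch.sigmoid(U @ V.T)          # |U| x |V| significance weights
    # Eq. 4: adaptively re-weighted graph for this channel.
    G = R_norm * F
    # Augmented messages over the re-weighted graph; these are combined
    # with the plain messages via the residual weight alpha described next.
    U_aug = G @ V
    V_aug = G.T @ U
    return U_msg, V_msg, U_aug, V_aug
```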
After enhancing the adaptive graph, we combine it with the message propagation scheme (Eq. 2) to obtain augmented representations:

$$\ddot{\mathbf{U}}^{*,c}_{l+1} = \bar{\mathbf{G}}^{*,c}_l \cdot \mathbf{V}^{*,c}_l, \qquad \ddot{\mathbf{V}}^{*,c}_{l+1} = (\bar{\mathbf{G}}^{*,c}_l)^\top \cdot \mathbf{U}^{*,c}_l. \tag{5}$$

Afterwards, residual connections are employed during the aggregation phase. The final embeddings of each layer are:

$$\mathbf{U}^{*,c}_{l+1} = \tilde{\mathbf{U}}^{*,c}_{l+1} + \alpha \cdot \ddot{\mathbf{U}}^{*,c}_{l+1}, \qquad \mathbf{V}^{*,c}_{l+1} = \tilde{\mathbf{V}}^{*,c}_{l+1} + \alpha \cdot \ddot{\mathbf{V}}^{*,c}_{l+1}, \tag{6}$$

where $\alpha$ is a hyper-parameter regulating the weight assigned to the adaptive graph disentanglement.

Information Aggregation. To capture valuable information from higher-order neighbors, we further stack all embeddings from the $L$ graph layers to obtain the final embeddings:

$$\mathbf{U}^{*,c} = \frac{1}{L+1}\sum_{l=0}^{L} \mathbf{U}^{*,c}_l, \qquad \mathbf{V}^{*,c} = \frac{1}{L+1}\sum_{l=0}^{L} \mathbf{V}^{*,c}_l. \tag{7}$$

Hence, the final item embeddings are represented as:

$$\mathbf{V}^* = f(\mathbf{V}^{*,I}, \mathbf{V}^{*,S}), \tag{8}$$

where $f(\cdot)$ is a feature fusion function. Here we utilize the mean operation; other functions can also be adopted.

Personalized Transfer across Domains Given effective domain-invariant features, existing methods apply weighted fusion (Zhu et al. 2020) or concatenation (Zhang et al. 2023a) to transfer the domain-invariant user features. However, real-world scenarios demonstrate that the impact of these features varies from user to user. For instance, some users may prioritize domain-invariant attributes like price and quality, while others prioritize domain-specific factors like brands. This diversity highlights the need to personalize the cross-domain transfer based on individual user preferences, which has not been explored so far. To this end, we adopt meta-networks to generate personalized transfer matrices for users and items.

Meta Knowledge. Initially, we extract the meta-knowledge for personalized transfer from the representations after the GNN. The user side involves both intra- and inter-domain details, while the item side connects items and users. For example, for the transfer from $D^B$ to $D^A$:

$$\mathbf{H}^A_U = \mathbf{U}^{A,I} \,\|\, \mathbf{U}^{B,I} \,\|\, \mathbf{U}^{A,S} \,\|\, \sum_{j \in N_i} \mathbf{v}^A_j, \qquad \mathbf{H}^A_V = \mathbf{V}^A \,\|\, \sum_{i \in N_j} (\mathbf{u}^{A,I}_i + \mathbf{u}^{A,S}_i), \tag{9}$$

where $\|$ denotes concatenation, and $N_i$ and $N_j$ are the neighbor sets of nodes $i$ and $j$. $\mathbf{H}^A_U \in \mathbb{R}^{|U| \times 4d}$ and $\mathbf{H}^A_V \in \mathbb{R}^{|V^A| \times 2d}$ denote the user and item meta-knowledge, encoding the contextual information essential for tailored transfers.

Meta Network. Inspired by (Chen et al. 2023a; Xia et al. 2021), we employ a low-rank transformation to extract parameterized transfer matrices. As an example, reconsider the transfer from $D^B$ to $D^A$:

$$\tilde{\mathbf{W}}^A_U = F^{A,U}_{mlp}(\mathbf{H}^A_U), \qquad \tilde{\mathbf{W}}^A_V = F^{A,V}_{mlp}(\mathbf{H}^A_V), \tag{10}$$

where $F^{A,U}_{mlp}$ and $F^{A,V}_{mlp}$ are personalized transfer matrix extractors with two tanh-activated fully-connected layers. By restricting the transformation rank to $k < d$, the personalized transfer matrices $\tilde{\mathbf{W}}^A_U \in \mathbb{R}^{|U| \times d \times k}$ and $\tilde{\mathbf{W}}^A_V \in \mathbb{R}^{|V^A| \times k \times d}$ reduce the number of trainable parameters. The final cross-domain transfer features for the interaction between user $i$ and item $j$ are:

$$\mathbf{u}^{A,T}_{i,j} = \tilde{\mathbf{w}}^A_{u_i} \tilde{\mathbf{w}}^A_{v_j} \mathbf{u}^{B,I}_i + \mathbf{u}^{B,I}_i. \tag{11}$$

Subsequently, we integrate it with the original domain-invariant features in $D^A$ through weighted fusion, creating the ultimate domain-invariant user features in $D^A$:

$$\mathbf{u}^{A,F}_{i,j} = \mathbf{u}^{A,I}_i + \beta \cdot \mathbf{u}^{A,T}_{i,j}, \tag{12}$$

where $\beta$ is the hyper-parameter controlling the weight of the personalized transfer features for each interaction.
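A sketch of the low-rank meta-network transfer of Eqs. (10)-(11) follows. The dimensions (4d user meta-knowledge, 2d item meta-knowledge, rank k) follow the paper, but the module and layer names are ours, each two-layer extractor is collapsed to a single tanh-activated layer for brevity, and a row-vector convention is used.

```python
import torch
import torch.nn as nn

class PersonalizedTransfer(nn.Module):
    """Low-rank meta-network transfer from domain B to domain A (sketch)."""
    def __init__(self, d, k=10):
        super().__init__()
        self.user_meta = nn.Sequential(nn.Linear(4 * d, d * k), nn.Tanh())
        self.item_meta = nn.Sequential(nn.Linear(2 * d, k * d), nn.Tanh())
        self.d, self.k = d, k

    def forward(self, H_u, H_v, u_inv_src):
        # Eq. 10: personalized low-rank transfer matrices per interaction.
        W_u = self.user_meta(H_u).view(-1, self.d, self.k)   # B x d x k
        W_v = self.item_meta(H_v).view(-1, self.k, self.d)   # B x k x d
        # Eq. 11: transform the source-domain invariant features and add
        # a residual connection back to the untransformed features.
        u = u_inv_src.unsqueeze(1)                           # B x 1 x d
        u_t = torch.bmm(torch.bmm(u, W_u), W_v).squeeze(1)   # B x d
        return u_t + u_inv_src
```

Factoring the d x d transformation into a d x k and a k x d matrix is what keeps the per-interaction parameterization tractable when k is much smaller than d.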
Contrastive Learning for Disentanglement Another limitation of current DCDR methods lies in how to achieve thorough disentanglement of domain-invariant and domain-specific features. In view of the great success of Contrastive Learning (CL) in self-supervised learning on paired data, we introduce two novel forms of CL for feature disentanglement in the CDR task.

Proximate CL. InfoNCE-based (Gutmann and Hyvärinen 2010) contrastive learning enhances the consistency of the representations of different views of the same node (or entity), aligning with the view that domain-invariant features should be close across domains. We treat the domain-invariant features transferred across domains for the same user as positive pairs, and those from different users as negative pairs (item subscripts are omitted for simplicity):

$$\mathcal{L}^*_{pcl} = \sum_{i \in U} -\log \frac{\exp\big(\phi(\mathbf{u}^{*,I}_i, \mathbf{u}^{*,T}_i)/\tau_p\big)}{\sum_{i' \in U} \exp\big(\phi(\mathbf{u}^{*,I}_i, \mathbf{u}^{*,T}_{i'})/\tau_p\big)}, \tag{13}$$

where $\phi(\cdot, \cdot)$ measures representation similarity, here the cosine similarity function, and $\tau_p$ is the temperature coefficient.

Eliminatory CL. Ideally, user feature disentanglement implies the thorough removal of domain-invariant information from domain-specific features. In other words, it should be infeasible to recommend an item based on the other domain's specific features. However, as mentioned above, conventional approaches such as orthogonal or irrelevance losses put all the effort into user features while overlooking the crucial items, thus achieving only partial disentanglement. In contrast, we propose eliminatory CL based on the mutual information between users and items for efficient disentanglement. Specifically, in $D^A$, the mutual information of its own domain-specific features with items in $D^A$ should surpass that of $D^B$'s domain-specific features with the same items:

$$\mathcal{L}^A_{ecl} = \sum_{(i,j) \in R^{A+}} -\log \frac{\exp\big(s(\mathbf{u}^{A,S}_i, \mathbf{v}^A_j)\big)}{\exp\big(s(\mathbf{u}^{A,S}_i, \mathbf{v}^A_j)\big) + \sum_{l=0}^{L} \exp\big(s(\mathbf{u}^{B,S}_{i,l}, \mathbf{v}^A_j)\big)}, \tag{14}$$

where $R^{A+}$ is the set of observed interactions in $D^A$, $s(\cdot, \cdot)$ is the dot product used to measure MI, and $L$ denotes the number of GNN layers. This formula enforces that the domain-specific score of $D^A$ is higher than the domain-specific score of $D^B$ for items in $D^A$.

Optimization Objectives Following recent works (Zhao et al. 2022; Liu et al. 2022), we adopt Bayesian Personalized Ranking (BPR), a pairwise loss. Each training sample includes a positive observed item $j^+$ and a negative unobserved item $j^-$ for user $i$. BPR promotes higher scores ($\hat{y}^*_{i,j} = \mathbf{u}^{*,F}_i \mathbf{v}^*_j + \mathbf{u}^{*,S}_i \mathbf{v}^*_j$) for $j^+$ than for $j^-$:

$$\mathcal{L}^*_{bpr} = -\sum_{(i, j^+, j^-) \in O^*} \ln \sigma(\hat{y}^*_{i,j^+} - \hat{y}^*_{i,j^-}). \tag{15}$$

Finally, we combine the recommendation loss and the self-supervised losses to derive the ultimate joint loss:

$$\mathcal{L}^* = \mathcal{L}^*_{bpr} + \lambda_p \cdot \mathcal{L}^*_{pcl} + \lambda_e \cdot \mathcal{L}^*_{ecl} + \lambda_l \cdot \|\Theta^*\|^2_F, \tag{16}$$

where $\lambda_p$, $\lambda_e$ and $\lambda_l$ control the weights of $\mathcal{L}_{pcl}$, $\mathcal{L}_{ecl}$ and the $L_2$ regularization term, respectively.
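Both training signals reduce to a few lines of code. The sketch below shows the proximate InfoNCE term of Eq. (13) with in-batch negatives and the BPR objective of Eq. (15); the function names are ours and this is an illustration rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def proximate_cl(u_inv, u_trans, tau_p=0.05):
    """Eq. 13: the same user's transferred invariant features form the
    positive pair; other users in the batch serve as negatives."""
    u_inv = F.normalize(u_inv, dim=-1)       # cosine similarity realized
    u_trans = F.normalize(u_trans, dim=-1)   # as normalized dot products
    logits = u_inv @ u_trans.T / tau_p
    labels = torch.arange(u_inv.size(0), device=u_inv.device)
    return F.cross_entropy(logits, labels)

def bpr_loss(pos_scores, neg_scores):
    """Eq. 15: pairwise BPR over (positive, negative) item scores."""
    return -F.logsigmoid(pos_scores - neg_scores).mean()
```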
Experiments

We evaluate our GDCCDR on four real-world datasets and analyze the following key research questions (RQs):
• RQ1: How does GDCCDR perform in comparison with representative and state-of-the-art methods?
• RQ2: Do the designed key components of our model contribute to the performance improvement?
• RQ3: How does our model perform at various levels of sparsity in the user interaction data?
• RQ4: How does the performance of our method vary with different hyper-parameter settings?
• RQ5: Does our model achieve the desired disentanglement?

Experimental Settings

Datasets We evaluate GDCCDR on the Amazon dataset (http://jmcauley.ucsd.edu/data/amazon/index_2014.html), specifically Sport&Phone, Sport&Cloth, Elec&Phone, and Elec&Cloth. To ensure equitable comparisons, we preprocess the original dataset following BITGCF. Additionally, we follow DisenCDR in filtering out cold-start items from the test set, i.e., items without records in the training set. Comprehensive dataset statistics are shown in Table 1.

Dataset pair   Domain   Users    Items    Ratings   Density
Sport&Phone    Sport    4,998    20,837   54,256    0.052%
               Phone             13,666   46,448    0.068%
Sport&Cloth    Sport    9,928    30,761   100,903   0.033%
               Cloth             38,943   95,300    0.025%
Elec&Phone     Elec     3,325    38,717   118,127   0.092%
               Phone             17,725   52,983    0.090%
Elec&Cloth     Elec     15,761   51,399   224,641   0.028%
               Cloth             48,777   133,590   0.017%

Table 1: Statistics of the four Amazon CDR datasets.

Evaluation Protocols and Metrics We follow the leave-one-out strategy to evaluate our model, wherein we sample one positive item (interacted) and 99 negative items (non-interacted) for each user and predict 100 candidate scores for ranking (Xue et al. 2017). We use two widely adopted metrics to evaluate all methods: Hit Ratio (HR) and Normalized Discounted Cumulative Gain (NDCG). To ensure reliability, each experiment is repeated five times, and the average top-10 ranking results are reported.

Baselines To verify the effectiveness and superiority of our model, we compare GDCCDR with the following state-of-the-art single-domain and cross-domain baselines. (i) SDR methods: BPR (Rendle et al. 2012) is a classical method based on MF, optimized with a pairwise ranking loss. NCF (He et al. 2017) combines the linearity of MF and the non-linearity of MLPs to learn representations. LightGCN (He et al. 2020) is a prominent method that simplifies the message passing rule of GNNs to generate representations. DCCF (Ren et al. 2023) is the leading intent disentanglement method in SDR. (ii) CDR methods: CoNet (Hu, Zhang, and Yang 2018) transfers knowledge through a cross-connection network connecting two domains. DDTCDR (Li and Tuzhilin 2020) learns a latent orthogonal mapping function to transfer user preferences across domains. DML (Li and Tuzhilin 2023) builds upon dual metric learning to enhance DDTCDR. BITGCF (Liu et al. 2020) incorporates a feature transfer layer that facilitates feature fusion across domains within the graph convolution module. DisenCDR (Cao et al. 2022) is a recent CDR model utilizing a variational inference framework to disentangle user representations, with a feature fusion module to generate domain-shared features. MADD (Zhang et al. 2023b) utilizes MLPs to extract both domain-invariant and domain-specific features from pre-trained features. ETL (Chen et al. 2023b) employs equivalent transformations to capture overlapping and domain-specific attributes, improving performance across domains. DCCDR (Zhang et al. 2023a) incorporates two parallel graph convolution modules for disentanglement, constrained with contrastive learning.

Implementation Details In our PyTorch implementation of GDCCDR, we utilize the Adam optimizer (Kingma and Ba 2015) and the Xavier initializer. The embedding dimension (d) is set to 128 for all methods, with a fixed learning rate of 0.001, a batch size of 1024, and a dropout rate of 0.5. The low rank (k) is 10, the proximate temperature (τp) is 0.05, and the L2 regularization coefficient (λl) is selected from {0.05, 0.005, 0.0005}. The final embeddings of GNN-based methods are obtained through mean pooling. For point-wise losses, we use four negative samples per positive sample.
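For convenience, these reported hyper-parameters can be collected in one place (a convenience sketch; the key names are our own, the values are those stated above):

```python
# Reported GDCCDR hyper-parameters (key names are our own convention).
config = {
    "embed_dim": 128,
    "learning_rate": 1e-3,
    "batch_size": 1024,
    "dropout": 0.5,
    "low_rank_k": 10,
    "tau_p": 0.05,                         # proximate CL temperature
    "l2_reg_grid": [0.05, 0.005, 0.0005],  # lambda_l search values
    "neg_per_pos": 4,                      # for point-wise losses
}
```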
Experimental Results and Analysis

Performance Comparisons (RQ1). Table 2 shows the HR@10 and NDCG@10 results for the compared methods across the four datasets.

Method     Sport&Phone              Sport&Cloth              Elec&Phone               Elec&Cloth
           Sport        Phone       Sport        Cloth       Elec         Phone       Elec         Cloth
           HR    NDCG   HR    NDCG  HR    NDCG   HR    NDCG  HR    NDCG   HR    NDCG  HR    NDCG   HR    NDCG
BPR        35.20 23.93  42.42 28.31 29.64 18.73  24.39 14.46 52.42 34.02  52.59 36.29 45.42 29.11  22.18 13.11
NCF        43.01 26.22  52.28 31.42 40.47 23.85  39.84 24.40 51.33 32.65  54.09 33.05 51.97 33.62  37.78 22.80
LightGCN   47.71 33.02  57.10 38.88 48.78 31.98  42.96 27.35 59.62 39.52  62.55 43.67 55.26 36.56  40.61 25.64
DCCF       48.98 30.99  57.76 36.07 47.50 28.99  44.47 27.57 56.36 36.80  60.26 38.05 54.51 36.07  42.73 26.07
CoNet      46.63 29.19  54.63 33.75 43.07 25.40  41.60 26.05 56.41 36.04  59.45 37.47 53.88 35.34  42.25 24.97
DDTCDR     45.52 27.32  56.59 34.67 43.07 24.99  43.47 25.56 56.91 36.28  59.21 36.91 54.54 35.16  42.48 24.84
DML        45.85 28.44  56.97 35.00 42.92 25.00  42.84 25.31 57.43 36.83  59.85 37.01 55.39 35.66  42.46 24.82
MADD       45.64 26.95  53.59 31.92 43.28 25.28  43.53 25.86 54.20 34.01  56.79 33.87 54.44 35.25  42.45 24.87
ETL        49.14 31.28  58.70 36.84 47.82 29.90  46.20 28.62 61.23 40.37  62.76 41.72 57.67 38.00  44.33 26.79
DisenCDR   48.81 31.34  58.76 37.55 46.10 27.68  45.06 27.23 60.00 38.52  61.66 40.96 56.76 36.92  44.62 26.95
DCCDR      49.15 33.80  58.15 40.01 51.40 34.03  46.03 30.08 61.51 41.28  63.82 44.62 57.18 38.21  41.81 26.39
BITGCF     52.57 35.87  58.40 39.39 54.58 36.46  51.80 34.01 62.98 42.65  65.03 44.93 58.78 39.17  46.03 28.56
GDCCDR     56.73 37.96  64.71 43.59 59.67 40.76  54.73 37.58 64.58 43.85  68.73 47.74 60.40 40.16  49.83 31.01
p-value    6.3e-4 1.4e-3 1.9e-5 2.8e-5 6.4e-5 4.4e-6 4.8e-4 2.7e-4 2.3e-3 4.3e-3 3.0e-3 2.3e-4 4.3e-4 2.2e-4 8.1e-5 2.0e-3

Table 2: Performance comparison (%) of different methods on the four datasets in terms of HR@10 and NDCG@10. The p-value is calculated between our proposed model and the runner-up results.

These experiments yield some intriguing findings: (1) The GNN-based methods LightGCN and DCCF exhibit significant performance improvements over BPR and NCF, indicating that incorporating higher-order neighborhood information enables more effective learning of user and item representations. (2) CDR methods generally outperform SDR methods, suggesting that transferring useful information from other domains effectively alleviates the data sparsity problem. (3) DisenCDR and ETL demonstrate satisfactory performance, implying that incorporating variational inference into CDR can lead to more robust user and item representations. (4) DCCDR and DisenCDR outperform many CDR methods, highlighting the importance of disentangling user features and transferring only domain-invariant features. (5) BITGCF emerges as the best-performing baseline, showcasing the effectiveness of cross-domain knowledge transfer during graph convolution as a powerful transfer strategy. (6) Compared to all state-of-the-art methods, our method consistently achieves the highest performance on all four datasets. This indicates that our method excels at efficient disentanglement and personalized transfer of domain-invariant features, resulting in superior recommendation performance.
Ablation Studies (RQ2). In this section, we conduct ablation studies to verify the essential components of GDCCDR. Specifically, w/o-ecl removes the eliminatory contrastive learning on domain-specific features; w/o-pcl disables the proximate contrastive learning that aligns domain-invariant features across domains; w/o-meta replaces the meta-network with average pooling, losing the personalized transfer of knowledge; and w/o-apff drops the adaptive parameter-free filter, which prevents the adaptive graph update. Due to space limitations, we report results on two dataset pairs in Table 3.

Variant    Sport&Phone               Sport&Cloth
           Sport        Phone        Sport        Cloth
           HR    NG     HR    NG     HR    NG     HR    NG
GDCCDR     56.73 37.96  64.71 43.59  59.67 40.76  54.73 37.58
w/o-ecl    54.59 36.90  62.69 42.73  58.56 40.25  52.72 36.42
w/o-pcl    55.89 37.51  64.07 43.04  54.88 35.69  51.40 32.52
w/o-apff   55.91 36.98  63.72 41.80  58.51 39.85  53.48 36.70
w/o-meta   55.47 37.21  63.79 42.69  56.13 36.82  52.63 33.94

Table 3: Ablation study on the key components of GDCCDR (NG = NDCG).

GDCCDR outperforms w/o-ecl significantly, indicating that excluding domain-invariant information from domain-specific features through user and item mutual information achieves better disentanglement. w/o-pcl exhibits inferior performance compared to GDCCDR, demonstrating the importance of utilizing contrastive learning to align domain-invariant features across domains. w/o-apff shows sub-optimal performance, emphasizing the necessity of recognizing each interaction's contribution and employing a graph update strategy during graph disentanglement. The performance degradation of w/o-meta validates the need for personalized cross-domain knowledge transfer. In summary, each of the key modules in GDCCDR has a role to play.

Data Sparsity (RQ3). To assess the robustness of GDCCDR against data sparsity compared with other methods, we partition users into distinct groups based on their number of interactions in the training set. Additionally, to further substantiate the superiority of disentanglement via mutual information, we impose the irrelevance constraints of (Wang et al. 2020) on the user domain-invariant and domain-specific features instead of the eliminatory contrastive loss, naming this variant V-IR. Moreover, we introduce V-ND as a baseline aligned with conventional CDR methods that use a single user representation without disentanglement.

[Figure 3: Performance comparison (%) w.r.t. data sparsity over different user groups on the Elec&Cloth dataset.]

From the results in Fig. 3, we derive two fundamental observations: (i) our model surpasses BITGCF and LightGCN for both inactive and active users by leveraging contrastive learning for thorough disentanglement and incorporating meta-networks for efficient personalized knowledge transfer; (ii) the performance gain of our model over V-IR, particularly for inactive users, indicates that leveraging mutual information between users and items for disentanglement is more effective than relying solely on user features.
Figure 5: Comparison (%) of the predictive ability of the disentangled representations on the Elec&Phone dataset. tor β. The experimental results on Fig. 4 show that (1) Despite varying hyper-parameter impacts on different datasets, all performances initially show an upward trend, further proving the effectiveness of individual modules. (2) The performance changes on λe and λp support the feasibility of using contrastive learning for disentanglement. Proximate CL enforces consistency among domain-invariant features, while eliminatory CL eliminates domain-invariant information from domain-specific features. (3) Optimal GNN layers aggregate higher-order neighbor information, but excessive layers cause over-smoothing issue, degrading recommendations. (4) An appropriate α-value enhances the model’s ability to explore interaction effects on various features, yielding better disentangled representations. However, an excessively large α can cause the model to overly emphasize interaction variability, resulting in performance decline. (5) Once performance reaches the optimum, further increasing β doesn’t significantly reduce model performance, demonstrating the robustness of our model for personalized transfer. Disentanglement Comparisons (RQ5). Feature disentanglement lies at the core of our paper. For robust disentanglement, domain-specific features must be devoid of domain-invariant information aiding dual-domain prediction, while maximizing the domain-invariant features for transfer. To evaluate our model’s disentanglement ability, we compared it with state-of-the-art DCDR methods in Fig. 5. V-rand employs randomly initialized user features for prediction. V-spe utilizes cross-domain domainspecific features (lower values indicating decreased domaininvariant information within these features). V-inv relies solely on domain-invariant features. Our findings show that (1) Among the compared DCDR methods, only our V-spe variant is lower than V-rand, suggesting that our domainspecific features contain minimal information for predicting other domains. (2) Our V-inv variant ranks the highest among all methods, showcasing the maximization of our Figure 6: Case study of significance nuances of interactions. The user’s primary concerns about the item are in red. domain-invariant features. These findings affirm the comprehensive disentanglement achieved by our model. Case Study. We now explore the interpretability of significance nuances of interaction behaviors on the Elec&Phone dataset. We randomly select two users u1078 and u1581. From Fig. 6, it is observed that our model gives higher significance (i.e., the adaptive filter in Eq. 3 ) in terms of domainspecific feature generation for interactions between u1078 and i3482, i2234. It also gives higher significance in terms of domain-invariant feature generation for interactions between domains u1581 and i7740, i11074. Comments show u1078’s bias towards intra-domain features like image quality, sound size, and u1581’s bias towards inter-domain features like product quality and price, which demonstrates that our model effectively uncovers user intents from their historical behaviors. Conclusion In this paper, we propose a novel disentangled method for cross-domain recommendation named GDCCDR to achieve thorough feature disentanglement and personalized transfer. Adaptive parameter-free filters are introduced to control each interaction’s significance on disentangled feature generation. 
Distinct from conventional disentanglement approaches that only manipulate user features regardless of items, two novel contrastive learning-based (CL) constraints are designed for item-aware disentanglement. Proximate CL ensures the consistency of domain-invariant features across domains, while eliminatory CL disentangles features within each domain through the mutual information between users and items. Additionally, meta-networks are employed for personalized transfer of domain-invariant features. Finally, comprehensive experiments on four real-world datasets demonstrate the superior performance of GDCCDR compared to state-of-the-art methods.

Acknowledgments This work was supported in part by the National Key Research and Development Program of China (2021YFF0901603) and in part by the National Natural Science Foundation of China (62371330, 62371333, 62272337).

References
Cao, J.; Li, S.; Yu, B.; Guo, X.; Liu, T.; and Wang, B. 2023. Towards Universal Cross-Domain Recommendation. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, 78–86.
Cao, J.; Lin, X.; Cong, X.; Ya, J.; Liu, T.; and Wang, B. 2022. DisenCDR: Learning disentangled representations for cross-domain recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 267–277.
Chen, M.; Huang, C.; Xia, L.; Wei, W.; Xu, Y.; and Luo, R. 2023a. Heterogeneous graph contrastive learning for recommendation. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, 544–552.
Chen, X.; Zhang, Y.; Tsang, I. W.; Pan, Y.; and Su, J. 2023b. Toward Equivalent Transformation of User Preferences in Cross Domain Recommendation. ACM Transactions on Information Systems, 41(1): 1–31.
Guo, X.; Li, S.; Guo, N.; Cao, J.; Liu, X.; Ma, Q.; Gan, R.; and Zhao, Y. 2023. Disentangled Representations Learning for Multi-Target Cross-Domain Recommendation. ACM Transactions on Information Systems, 41(4).
Gutmann, M. U.; and Hyvärinen, A. 2010. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In International Conference on Artificial Intelligence and Statistics.
He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; and Wang, M. 2020. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 639–648.
He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; and Chua, T.-S. 2017. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, 173–182.
Hu, G.; Zhang, Y.; and Yang, Q. 2018. CoNet: Collaborative cross networks for cross-domain recommendation. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, 667–676.
Kingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization. In 3rd International Conference on Learning Representations, 1–15.
Li, P.; and Tuzhilin, A. 2020. DDTCDR: Deep dual transfer cross domain recommendation. In Proceedings of the 13th International Conference on Web Search and Data Mining, 331–339.
Li, P.; and Tuzhilin, A. 2023. Dual Metric Learning for Effective and Efficient Cross-Domain Recommendations. IEEE Transactions on Knowledge and Data Engineering, 35(1): 321–334.
Li, Y.; Xu, J.; Zhao, P.; Fang, J.; Chen, W.; and Zhao, L. 2020.
ATLRec: An attentional adversarial transfer learning network for cross-domain recommendation. Journal of Computer Science and Technology, 35(4): 794–808.
Liu, F.; Chen, H.; Cheng, Z.; Liu, A.; Nie, L.; and Kankanhalli, M. 2022. Disentangled Multimodal Representation Learning for Recommendation. IEEE Transactions on Multimedia, 1–11.
Liu, M.; Li, J.; Li, G.; and Pan, P. 2020. Cross domain recommendation via bi-directional transfer graph collaborative filtering networks. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management, 885–894.
Nie, W.; Wen, X.; Liu, J.; Chen, J.; Wu, J.; Jin, G.; Lu, J.; and Liu, A.-A. 2023. Knowledge-Enhanced Causal Reinforcement Learning Model for Interactive Recommendation. IEEE Transactions on Multimedia, 1–14.
Ren, X.; Xia, L.; Zhao, J.; Yin, D.; and Huang, C. 2023. Disentangled Contrastive Collaborative Filtering. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval.
Rendle, S.; Freudenthaler, C.; Gantner, Z.; and Schmidt-Thieme, L. 2012. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence.
Wang, X.; Jin, H.; Zhang, A.; He, X.; Xu, T.; and Chua, T.-S. 2020. Disentangled Graph Collaborative Filtering. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 1001–1010.
Wu, J.; Shi, W.; Cao, X.; Chen, J.; Lei, W.; Zhang, F.; Wu, W.; and He, X. 2021a. DisenKGAT: Knowledge graph embedding with disentangled graph attention network. In Proceedings of the 30th ACM International Conference on Information and Knowledge Management, 2140–2149.
Wu, J.; Wang, X.; Feng, F.; He, X.; Chen, L.; Lian, J.; and Xie, X. 2021b. Self-supervised graph learning for recommendation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 726–735.
Xia, L.; Xu, Y.; Huang, C.; Dai, P.; and Bo, L. 2021. Graph Meta Network for Multi-Behavior Recommendation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 757–766.
Xie, R.; Liu, Q.; Wang, L.; Liu, S.; Zhang, B.; and Lin, L. 2022. Contrastive cross-domain recommendation in matching. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 4226–4236.
Xue, H.-J.; Dai, X.-Y.; Zhang, J.; Huang, S.; and Chen, J. 2017. Deep Matrix Factorization Models for Recommender Systems. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, 3203–3209.
Yu, J.; Yin, H.; Xia, X.; Chen, T.; Cui, L.; and Nguyen, Q. V. H. 2022. Are graph augmentations necessary? Simple graph contrastive learning for recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1294–1303.
Zhang, R.; Zang, T.; Zhu, Y.; Wang, C.; Wang, K.; and Yu, J. 2023a. Disentangled Contrastive Learning for Cross-Domain Recommendation. In The 28th International Conference on Database Systems for Advanced Applications, 163–178.
Zhang, X.; Li, J.; Su, H.; Zhu, L.; and Shen, H. T. 2023b. Multi-level Attention-based Domain Disentanglement for BCDR. ACM Transactions on Information Systems, 41(4): 1–24.
Zhao, C.; Li, C.; and Fu, C. 2019. Cross-domain recommendation via preference propagation graphnet.
In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 2165–2168.
Zhao, S.; Wei, W.; Zou, D.; and Mao, X. 2022. Multi-view intent disentangle graph networks for bundle recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, 4379–4387.
Zhu, F.; Wang, Y.; Chen, C.; Liu, G.; and Zheng, X. 2020. A Graphical and Attentional Framework for Dual-Target Cross-Domain Recommendation. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 3001–3008.
Multimodal Event Causality Reasoning with Scene Graph Enhanced Interaction Network

Jintao Liu, Kaiwen Wei*, Chenglong Liu
University of Chinese Academy of Sciences
{liujintao201, weikaiwen19, liuchenglong20}@mails.ucas.ac.cn

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract Multimodal event causality reasoning aims to recognize causal relations based on given events and accompanying image pairs, requiring the model to have a comprehensive grasp of visual and textual information. However, existing studies fail to effectively model the relations of the objects within an image and to capture the object interactions across the image pair, resulting in an insufficient understanding of visual information. To address these issues, we propose a Scene Graph Enhanced Interaction Network (SEIN) in this paper, which leverages the interactions of the generated scene graphs for multimodal event causality reasoning. Specifically, the proposed method adopts a graph convolutional network to model the objects and their relations derived from the scene graph structure, empowering the model to adequately exploit the rich structural and semantic information in the image. To capture the object interactions between the two images, we design an optimal transport-based alignment strategy to match the objects across the images, which helps the model recognize changes in visual information and facilitates causality reasoning. In addition, we introduce a cross-modal fusion module to combine textual and visual features for causality prediction. Experimental results show that the proposed SEIN outperforms state-of-the-art methods on the Vis-Causal dataset.

Introduction Understanding causality from multimodal daily events is a challenging task that has attracted increasing attention from the community. Taking Fig. 1 as an example, the reasoning model should be able to identify the causal relation between the two events "A girl throws a plate in the air" and "A dog jumps to catch the plate", based on their descriptions and associated image pairs. This task has extensive applications in the text and vision domains, including visual commonsense reasoning (Hildebrandt et al. 2020), dense video captioning (Iashin and Rahtu 2020), and machine reading comprehension (Rajani et al. 2019).

[Figure 1: An example of employing a scene graph for multimodal event causality reasoning, which provides rich structural and semantic information for visual understanding of the image.]

Recently, many studies have concentrated on this task. Zhang et al. (2021) extract causal relations from time-consecutive images by incorporating event descriptions and visual context representations. Chadha and Jain (2021) utilize both videos and natural language captions to infer visual-semantic commonsense knowledge with causal rationalization. Afterward, Ma and Tong (2022) combine visual perception and linguistic commonsense to enhance daily event causality reasoning and exploit object features to refine visual perception.
Despite the promising advancements achieved by existing studies, they still tend to overlook two critical concerns. (1) The relations between objects in the image. Previous works mainly focus on global features or object features of the image, ignoring the significance of modeling the relations between objects. We contend that the relations between paired objects are crucial for understanding the structural and semantic information of the image. As shown in Fig. 1, for the first image, if the model could discern that a girl is holding a plate and a dog is near the girl, it would better comprehend the visual semantic information conveyed by the image. Recently, scene graph generation (SGG), which aims to express the objects and the relations between objects in an image, has gradually been applied to various vision-based tasks. As a result, adopting SGG to recognize the objects and their relations can foster a more structured understanding of the image. Nevertheless, how to effectively model the objects and their relations remains to be studied. (2) The object interactions across the image pair. Due to the lack of object interactions, the model struggles to recognize the variation in visual information between the images. Intuitively, humans can identify the association between two images by observing changes in the objects and their relations across the image pair. Inspired by this, it is desirable for our model to capture the interactions of objects across the images. For example, in Fig. 1, through object alignment and interaction between the images, the model could understand the change in visual information from "a girl is holding a plate" in the first image to "a dog is holding a plate" in the second image, thus facilitating the identification of event causality. However, capturing such interactions is challenging, and directly combining the objects in two images might lead to inconsistencies and introduce noise.

To address the above issues, we propose a novel Scene Graph Enhanced Interaction Network (SEIN) in this paper, which leverages the interactions of the generated scene graphs for multimodal event causality reasoning. Concretely, we first construct a scene graph for each image to obtain a sufficiently structured understanding, where the nodes represent objects or relations and the edges represent the connections between them. Then we employ a Graph Convolutional Network (GCN) to model the objects and their relations and obtain context-aware node embeddings. To capture the object interactions across the two images, we propose an optimal transport-based alignment to match the objects in the images, which recognizes the changes in visual information and enhances the reasoning capability. We then combine the object features from the two images according to the transportation cost matrix. Besides, we adopt a cross-modal fusion module based on the multi-head attention mechanism to integrate textual and visual features for causality prediction.

The main contributions of this paper can be summarized as follows:
• This paper proposes the SEIN framework, the first work to our knowledge that exploits the interactions of scene graphs to recognize visual information changes for multimodal event causality reasoning.
• We adopt a GCN architecture to model the objects and their relations in the image, and design an optimal transport-based alignment strategy to capture the object interactions across the image pair.
• Experimental results demonstrate that the proposed SEIN achieves state-of-the-art performance on the Vis-Causal dataset. Further analyses indicate the effectiveness and generalization ability of SEIN.
• Experimental results demonstrate that the proposed SEIN achieves state-of-the-art performance on the Vis-Causal dataset. Further analyses indicate the effectiveness and generalization ability of SEIN.

Related Work
Multimodal Event Causality Reasoning
Previous approaches for event causality identification primarily focus on the textual modality (Cao et al. 2021; Wei et al. 2021; Liu et al. 2023b). They seek to leverage external knowledge (Liu, Chen, and Zhao 2020; Cao et al. 2021; Wei et al. 2022) or prompt-tuning techniques (Shen et al. 2022; Liu et al. 2023a; Wang et al. 2022b) to identify event causalities. Although these methods have achieved some success, their reliance on a single modality limits their applicability in real-world scenarios. Recently, multimodal event causality reasoning has garnered increasing attention. Zhang et al. (2021) first extract event causalities from time-consecutive images and natural language descriptions using event and visual context representations. Chadha and Jain (2021) utilize both videos and natural language captions to infer visual-semantic commonsense knowledge with causal rationalization. After that, Ma and Tong (2022) leverage visual perception and linguistic commonsense for this task and exploit object features to refine visual perception. However, these methods consider neither the rich structural and semantic information in the scene graph nor the interactions of objects between images, making it challenging to adequately exploit the visual information.

Scene Graph Generation
Scene Graph Generation (SGG) has gained substantial interest in the field of computer vision since it was proposed by Xu et al. (2017). The purpose of SGG is to recognize objects and the relations between paired objects within an image, and then construct a graph where the objects serve as nodes and the relations between them serve as either nodes (Zareian, Karaman, and Chang 2020) or edges (Li, Zhang, and He 2022). It can provide valuable structural and semantic information for a range of downstream tasks, such as image retrieval (Yoon et al. 2021), visual commonsense reasoning (Wang et al. 2022c), and multimodal information extraction (Wang et al. 2022a). Various approaches have emerged to generate scene graphs in different ways (Wang et al. 2019; Lu et al. 2021), and some studies extend SGG from images to videos (Ji et al. 2020; Cong et al. 2021). Nevertheless, since multimodal event causality reasoning focuses on identifying causal relations between two events, it is not feasible to directly introduce SGG into this task.

Optimal Transport
Optimal Transport (OT) mainly studies how to achieve the optimal allocation of resources between two probability distributions, and it has a wide range of applications in areas such as self-supervised learning (Wu et al. 2022), domain adaptation (Xu et al. 2022), and label assignment (Wei et al. 2023). The fundamental idea behind OT is to determine the most efficient way to transform one distribution into another, taking into account the costs associated with the transportation plan. An influential work in OT is the fast solver proposed by Cuturi (2013), which adopted Sinkhorn's matrix scaling algorithm with an entropic regularization term to solve the OT problem orders of magnitude faster than previous transport solvers. Building upon this, Xie et al. (2019) optimized the Sinkhorn algorithm and proposed the IPOT solver, which leverages an inexact proximal point method with the proximal operator approximately evaluated at each iteration.
Li et al. (2022) seek to capture event argument structures with event graph alignment. In this paper, we employ optimal transport to enhance the global alignment and semantic interactions of the scene graphs.

Figure 2: The overview of the proposed SEIN framework.

Task Formulation
The goal of multimodal event causality reasoning is to recognize causality between two given events, each consisting of an image cropped from a video and a corresponding natural language description. Following Zhang et al. (2021), this task is formally defined as follows. (1) The input of the model is two time-consecutive images and candidate event sets. The images $\mathcal{I}$ are cropped from a daily life video at equal time intervals. Each image pair consists of two images $I_1, I_2 \in \mathcal{I}$ in temporal order (i.e., $I_1$ appears before $I_2$, and $I_1$ and $I_2$ correspond to the cause and effect images, respectively). The event set associated with $I_1$ is denoted as $\mathcal{E}_1$, and the event set encompassing all images sampled from the video is denoted as $\mathcal{E}_v$. (2) Given an image pair $I_1, I_2$ and the event set $\mathcal{E}_1$, for each $E_1 \in \mathcal{E}_1$ the objective is to find all events $E_2 \in \mathcal{E}_v$ such that $E_1$ causes $E_2$. The output of the model is a causality score indicating the probability that $E_1$ leads to the occurrence of $E_2$ for each $E_2 \in \mathcal{E}_v$.

Methodology
The overview of the proposed method is illustrated in Fig. 2. For the text modality, we concatenate the two event descriptions and encode them into hidden representations with a BERT architecture. For the visual modality, we first construct a scene graph for each image and employ a GCN architecture to model the objects and the relations between paired objects. To capture the object interactions across the two images, we adopt optimal transport-based alignment to match the objects from the scene graphs. Then we combine the text and image representations with a multi-head attention mechanism and integrate the object features based on the cost matrix. Finally, in order to obtain the overall prediction results, we introduce an adaptive prediction strategy to fuse the outputs of the textual and multimodal classifiers. In this section, we first introduce the acquisition of text and visual representations. Then we introduce the optimal transport-based alignment strategy. Subsequently, we present the cross-modal fusion module. Finally, we elucidate the model training and prediction processes.

Textual Representation
To acquire a textual comprehension of the events, we first concatenate the two event descriptions with a [SEP] token and add a [CLS] token at the beginning. Then we adopt the pre-trained BERT (Devlin et al. 2019) architecture to encode the sequence into hidden states:

$$H_s = \mathrm{BERT}([\mathrm{CLS}]\ E_1\ [\mathrm{SEP}]\ E_2\ [\mathrm{SEP}]) \tag{1}$$

where $H_s \in \mathbb{R}^{n \times d}$ and $d$ is the dimension of the hidden states. Note that we pre-train BERT with event pairs from the ATOMIC (Sap et al. 2019) knowledge base to improve the reasoning ability of the model before fine-tuning.
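As an illustration of Eq. (1), the following minimal sketch encodes an event pair with the HuggingFace Transformers library. The checkpoint name and example descriptions are illustrative assumptions on our part, and the additional ATOMIC pre-training step mentioned above is omitted.

```python
import torch
from transformers import BertModel, BertTokenizer

# Illustrative checkpoint; the paper reports BERT-BASE-UNCASED.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

event1 = "A girl throws a plate in the air"
event2 = "A dog jumps to catch the plate"

# Passing the descriptions as a text pair yields the
# [CLS] E1 [SEP] E2 [SEP] layout assumed by Eq. (1).
inputs = tokenizer(event1, event2, return_tensors="pt")
with torch.no_grad():
    outputs = bert(**inputs)

H_s = outputs.last_hidden_state  # (1, n, d) token-level hidden states
h_c = H_s[:, 0]                  # [CLS] vector, later used as h_c in Eq. (8)
```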
Scene Graph
In this work, we adopt the technique of SGG to extract objects and the relations between paired objects, which enables the model to grasp a higher-level visual understanding of the image. Specifically, we first leverage the object detector Faster R-CNN (Ren et al. 2015), pre-trained on Visual Genome (Krishna et al. 2017), to detect a set of objects for each image. Then the public Scene Graph Diagnosis toolkit (Tang et al. 2020) is utilized to recognize the relations between each pair of objects. Formally, for an image, the object set is denoted as $O = \{o_i\}$ and the relation set is denoted as $R = \{r_{ij}\}$, where $r_{ij}$ indicates the relation between objects $o_i$ and $o_j$. We define the scene graph as $G = (N, E)$, where $N = O \cup R$ denotes the set of nodes containing objects and relations, and $E$ denotes the set of directed edges. It is worth noting that when $r_{ij}$ exists, we add two directed edges $o_i \rightarrow r_{ij}$ and $r_{ij} \rightarrow o_j$ to $E$ during the construction of the edge set.

Visual Representation
For the embedding layer of the graph, we design three types of feature representations for each object and relation: (1) visual features, (2) position features, and (3) category features. Specifically, the visual features of objects $f^v_{o_i} \in \mathbb{R}^{d_v}$ are obtained from the region of interest (ROI) features of the object detector, and the visual features of relations $f^v_{r_{ij}} \in \mathbb{R}^{d_v}$ are the relation representations before the final prediction layer of the SGG model. The position features of objects $f^p_{o_i} \in \mathbb{R}^{d_p}$ and relations $f^p_{r_{ij}} \in \mathbb{R}^{d_p}$ are converted from bounding box coordinates and union box coordinates, respectively. Besides, the category features of objects $f^c_{o_i} \in \mathbb{R}^{d_c}$ and relations $f^c_{r_{ij}} \in \mathbb{R}^{d_c}$ are obtained from pre-trained GloVe word embeddings (Pennington, Socher, and Manning 2014) corresponding to the category labels of the objects and relations. We then fuse the three types of features with a linear layer followed by a ReLU activation function:

$$f_{o_i} = \mathrm{ReLU}\big(W^v_{o_i} f^v_{o_i} + W^p_{o_i} f^p_{o_i} + W^c_{o_i} f^c_{o_i}\big), \qquad f_{r_{ij}} = \mathrm{ReLU}\big(W^v_{r_{ij}} f^v_{r_{ij}} + W^p_{r_{ij}} f^p_{r_{ij}} + W^c_{r_{ij}} f^c_{r_{ij}}\big) \tag{2}$$

where $W^v \in \mathbb{R}^{d_v \times d}$, $W^p \in \mathbb{R}^{d_p \times d}$, and $W^c \in \mathbb{R}^{d_c \times d}$ are trainable parameters. The fused object and relation features $f_{o_i} \in \mathbb{R}^d$ and $f_{r_{ij}} \in \mathbb{R}^d$ are employed to initialize the node embeddings of the graph. After that, we adopt Graph Convolutional Networks (Kipf and Welling 2017) to aggregate neighborhood information and obtain context-aware representations for the objects. Each node in the $l$-th GCN layer is updated according to the representations of its neighbor nodes:

$$F^l_o = F^{l-1}_o + \mathrm{ReLU}\big(A_{ro} F^{l-1}_r W^l_r\big), \qquad F^l_r = F^{l-1}_r + \mathrm{ReLU}\big(A_{or} F^{l-1}_o W^l_o\big) \tag{3}$$

where $F_o = [f_{o_i}] \in \mathbb{R}^{N_o \times d}$ and $F_r = [f_{r_{ij}}] \in \mathbb{R}^{N_r \times d}$, $N_o$ and $N_r$ are the numbers of objects and relations respectively, $A_{ro} \in \mathbb{R}^{N_o \times N_r}$ and $A_{or} \in \mathbb{R}^{N_r \times N_o}$ are the normalized adjacency matrices from relations to objects and from objects to relations, and $W^l \in \mathbb{R}^{d \times d}$ are the trainable parameters of the $l$-th GCN layer. Finally, we obtain the object representations $H_o = F^L_o \in \mathbb{R}^{N_o \times d}$ from the output of the $L$-th GCN layer. It should be noted that we use the above approach to get visual representations of the objects for each image in the pair, denoted as $H^1_o = [h^1_{o_i}] \in \mathbb{R}^{N^1_o \times d}$ and $H^2_o = [h^2_{o_j}] \in \mathbb{R}^{N^2_o \times d}$, respectively.
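The bipartite update in Eq. (3) can be sketched in PyTorch as follows; the class and variable names are ours, and the normalized adjacency matrices are assumed to be precomputed from the scene graph's edge set. This is a sketch of the described update rule, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SceneGraphGCNLayer(nn.Module):
    """One layer of the bipartite GCN update in Eq. (3).

    Object nodes aggregate from relation nodes via A_ro, and relation
    nodes aggregate from object nodes via A_or; both updates use a
    residual connection and take the previous layer's features.
    """
    def __init__(self, d: int):
        super().__init__()
        self.w_r = nn.Linear(d, d, bias=False)  # W_r^l for relation features
        self.w_o = nn.Linear(d, d, bias=False)  # W_o^l for object features

    def forward(self, f_o, f_r, a_ro, a_or):
        # f_o: (N_o, d) object features; f_r: (N_r, d) relation features
        # a_ro: (N_o, N_r), a_or: (N_r, N_o) normalized adjacencies
        f_o_next = f_o + F.relu(a_ro @ self.w_r(f_r))
        f_r_next = f_r + F.relu(a_or @ self.w_o(f_o))
        return f_o_next, f_r_next

# Stacking L = 2 such layers (the paper's setting) yields H_o = F_o^L.
```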
Optimal Transport-Based Alignment
Since the same object has similar representations in different images, we adopt optimal transport to achieve global alignment and interaction between the object features in the two time-consecutive images, which is beneficial for recognizing changes in visual information. This work seeks the minimal OT distance between $H^1_o$ and $H^2_o$, defined as:

$$\mathrm{OTA}(H^1_o, H^2_o) = \min_{T}\ \langle T, C \rangle \tag{4}$$

where $\langle T, C \rangle = \mathrm{Tr}(T^\top C)$ denotes the Frobenius inner product, $T \in \mathbb{R}^{N^1_o \times N^2_o}$ denotes the transportation plan, and $C$ represents the cost matrix between $H^1_o$ and $H^2_o$. In our implementation, we use the cosine distance between two objects to compute the cost matrix:

$$C_{ij} = 1 - \frac{h^1_{o_i} \cdot \big(h^2_{o_j}\big)^{\!\top}}{\lVert h^1_{o_i} \rVert_2\, \lVert h^2_{o_j} \rVert_2} \tag{5}$$

To solve Eq. (4), we employ the IPOT method (Xie et al. 2019) to calculate the approximated $T$.

Cross-Modal Fusion
After obtaining the textual and visual representations, a cross-modal fusion module is designed to effectively fuse the two modalities. We first adopt a multi-head attention mechanism (Vaswani et al. 2017) to capture interactions between the textual and visual modalities, which can be formulated as:

$$\mathrm{Head}_i = \mathrm{softmax}\!\left(\frac{[Q W^Q_i]\,[K W^K_i]^\top}{\sqrt{d/h}}\right)[V W^V_i], \qquad \mathrm{MHA}(Q, K, V) = [\mathrm{Head}_1 \oplus \cdots \oplus \mathrm{Head}_h]\, W_a \tag{6}$$

where $h$ represents the number of heads, $\oplus$ denotes the concatenation operation, and $\{W^Q_i, W^K_i, W^V_i\} \in \mathbb{R}^{d \times d/h}$ are trainable parameters. We take the object representations of each image as the query and the textual representations as the key and value, obtaining:

$$\breve{H}^1_o = \mathrm{MHA}(H^1_o, H_s, H_s), \qquad \breve{H}^2_o = \mathrm{MHA}(H^2_o, H_s, H_s) \tag{7}$$

Empirically, object pairs exhibiting smaller cosine distances tend to indicate strong object matching or semantic relevance; such pairs therefore hold greater significance for mining causal clues. Based on this observation, we select the top-$K$ object pairs with the lowest cosine distance in the cost matrix $C$ and concatenate their features as the fused representations $H_e = [h^k_e] \in \mathbb{R}^{K \times 2d}$, where $h^k_e = [\breve{h}^1_{o_i}; \breve{h}^2_{o_j}] \in \mathbb{R}^{1 \times 2d}$. Afterward, we aggregate these features under the guidance of textual information. We take the representation of the [CLS] token $h_c$ as the overall textual representation, concatenate $h_c$ and $h^i_e$, and feed them into a fully-connected layer to compute an attention score. Finally, we sum the fused representations weighted by the attention scores to obtain the multimodal representation:

$$\alpha_i = \mathrm{softmax}\big(W_e [h_c; h^i_e]\big), \qquad H_m = \sum_{i=1}^{K} \alpha_i \cdot h^i_e \tag{8}$$

Algorithm 1: The Training Process of SEIN
Input: Training set $D = \{(E^i_1, I^i_1), (E^i_2, I^i_2)\}_{i=1}^{N}$, where $E_1$ and $I_1$ represent the text and image of the first event, and $E_2$ and $I_2$ those of the second.
Training:
1: for each batch $D_b \in D$ do
2:   for each event pair $\in D_b$ do
3:     Get $H_s$ by Eq. (1);
4:     Construct the scene graph $G$ for each image;
5:     Get $H^1_o$ and $H^2_o$ by Eq. (2) and Eq. (3);
6:     Perform OT-based alignment $\mathrm{OTA}(H^1_o, H^2_o)$ by Eq. (4);
7:     Get $\breve{H}^1_o$ and $\breve{H}^2_o$ by Eq. (7);
8:     Fuse $\breve{H}^1_o$ and $\breve{H}^2_o$ into $H_e$ according to $C$ in Eq. (5);
9:     Get $H_m$ by Eq. (8);
10:    Compute the classification losses $\mathcal{L}_t$ and $\mathcal{L}_m$;
11:    Compute the object alignment loss $\mathcal{L}_a$;
12:  end for
13:  Compute the batch loss $\mathcal{L} = \lambda_1 \mathcal{L}_t + \lambda_2 \mathcal{L}_m + \lambda_3 \mathcal{L}_a$;
14:  Update the model parameters by stochastic gradient descent;
15: end for
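Step 6 of Algorithm 1 relies on the IPOT solver mentioned above. The following is a simplified PyTorch sketch of that iteration (Xie et al. 2019), assuming uniform marginals over the two object sets and one inner Sinkhorn step per outer step; the function name, hyperparameters, and iteration counts are illustrative assumptions rather than the authors' exact settings.

```python
import torch

def ipot(C: torch.Tensor, beta: float = 0.5, n_iter: int = 50) -> torch.Tensor:
    """Approximate the transport plan T of Eq. (4) for a cost matrix C.

    C: (n, m) cosine-distance cost matrix from Eq. (5), where n and m
    are the numbers of detected objects in the two images.
    """
    n, m = C.shape
    sigma = torch.full((m, 1), 1.0 / m)   # right scaling vector
    T = torch.ones(n, m) / (n * m)        # transport plan, uniform init
    A = torch.exp(-C / beta)              # proximal kernel
    eps = 1e-8                            # numerical safety
    for _ in range(n_iter):
        Q = A * T                                    # elementwise proximal step
        delta = 1.0 / (n * (Q @ sigma) + eps)        # (n, 1) left scaling
        sigma = 1.0 / (m * (Q.t() @ delta) + eps)    # (m, 1) right scaling
        T = delta * Q * sigma.t()                    # diag(delta) Q diag(sigma)
    return T
```

The resulting plan and the cost matrix $C$ can then be used both for the alignment loss of Eq. (10) and for the top-$K$ pair selection in the cross-modal fusion module.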
Model Training and Prediction
The textual and multimodal representations are fed into fully-connected layers to obtain the predicted causal scores $\hat{y}_t$ and $\hat{y}_m$, respectively. We adopt the binary cross-entropy loss as the training objective for textual classification:

$$\mathcal{L}_t = -\frac{1}{N} \sum_{i=1}^{N} \Big[ y^i_t \log \hat{y}^i_t + \big(1 - y^i_t\big) \log\big(1 - \hat{y}^i_t\big) \Big] \tag{9}$$

Similarly, we use the cross-entropy loss to compute the multimodal classification loss $\mathcal{L}_m$. We also regard the distance between the two scene graphs as a training objective:

$$\mathcal{L}_a = \frac{1}{N} \sum_{i=1}^{N} \mathrm{OTA}\big(H^1_o, H^2_o\big) \tag{10}$$

The overall training loss can be calculated as:

$$\mathcal{L} = \lambda_1 \mathcal{L}_t + \lambda_2 \mathcal{L}_m + \lambda_3 \mathcal{L}_a \tag{11}$$

The training process of SEIN is summarized in Algorithm 1. In the prediction stage, due to the different contributions of textual and multimodal classification to the outcome (Ma and Tong 2022), we leverage an adaptive prediction strategy to calculate the causal score. The confidence score is defined as $s = \max(\hat{y}, 1 - \hat{y})$ to measure the significance of each modality, and the final predicted causal score is:

$$\hat{y}_f = \frac{(1 + \beta^2)\, \hat{y}_t \cdot \hat{y}_m}{\beta^2 \hat{y}_t + \hat{y}_m} \tag{12}$$

where $\beta$ is an adaptive weight factor defined as:

$$\beta = \begin{cases} e^{\sqrt{s_m - s_t}}, & s_m > s_t \\ e^{-\sqrt{s_t - s_m}}, & s_m < s_t \end{cases} \tag{13}$$

Split  #Video  #Image Pair  #Event
Train  800     1609         82731
Valid  100     208          10608
Test   100     191          9053
Table 1: Statistics of the Vis-Causal dataset.

Experiments
Experimental Settings
Dataset. We conduct experiments to evaluate our model on the Vis-Causal dataset (Zhang et al. 2021), which is widely used for multimodal daily event causality reasoning. The images in the dataset are collected from YouTube videos, which cover most categories of daily life, i.e., Sports, Socializing, Household, Personal Care, and Eating. Based on the images, the goal is to find the event in the candidate set that has a causal relation with the given event. The statistics of the dataset are listed in Table 1.

Evaluation Metrics. In line with previous works (Zhang et al. 2021; Ma and Tong 2022), we employ Recall@K (R@K) as the evaluation metric. R@K reflects the ratio of the correct outcomes among the top-K plausible scores to the total number of ground-truth causality events. This paper uses R@1, R@5, and R@10 to evaluate model performance.

Implementation Details. All experiments are conducted on an NVIDIA Tesla V100 GPU with the PyTorch framework. We adopt the pre-trained BERT-BASE-UNCASED architecture from HuggingFace's Transformers library as the textual encoder. We use Faster R-CNN (Ren et al. 2015) pre-trained on Visual Genome to detect objects and leverage the public Scene Graph Diagnosis toolkit (Tang et al. 2020) to identify the relations between each pair of objects. The hyper-parameters $\lambda_1$, $\lambda_2$, and $\lambda_3$ are set to 0.5, 0.3, and 0.1, respectively. The number of paired objects $K$ is set to 10. The number of GCN layers $L$ is set to 2. The model is trained for 25 epochs with a learning rate of 5e-5 and a batch size of 16. The dimension of the hidden representations $d$ is set to 768. We utilize an early stopping strategy and the Adam optimizer to update the model parameters.

Compared Methods
In this work, we compare the proposed SEIN with the following baselines: (1) Random, which randomly selects an event from the candidate event set as the prediction result. (2) BERT (Devlin et al. 2019), a method that leverages the BERT model to encode the textual modality for causality reasoning without considering the visual modality. (3) VCC (Zhang et al. 2021), which uses event descriptions and visual context representations to extract causal clues from time-consecutive images. (4) iReason (Chadha and Jain 2021), which seeks to infer visual-semantic commonsense knowledge using both videos and natural language captions with causal rationalization.
(5) OARNet (Ma and Tong 2022), which combines visual perception and linguistic commonsense for daily event causality reasoning and exploits object representations to refine visual perception.

Method       Metrics  Sports  Socializing  Household  Care   Eating  Overall
Random       R@1       0.67    3.64         1.69       0.00   9.09    2.13
             R@5      14.19   16.36        15.25      11.11  27.27   15.25
             R@10     28.38   38.18        27.12      33.33  27.27   30.14
BERT         R@1      12.16    7.27         3.39       0.00  18.18    9.22
             R@5      29.05   32.73        37.29      55.56  54.55   33.33
             R@10     62.84   67.27        49.15      55.56  72.73   60.99
VCC          R@1       8.78    7.27         6.78      11.11  27.27    8.87
             R@5      37.16   36.36        28.81      33.33  45.45   34.75
             R@10     64.86   58.18        62.71      55.56  72.73   63.12
iReason      R@1       9.27    8.09         7.91      12.72  28.89    9.21
             R@5      38.71   36.36        29.92      34.73  45.75   35.87
             R@10     65.12   58.52        62.71      55.56  72.73   63.51
OARNet       R@1      20.95   14.55        11.86      11.11   9.09   17.38
             R@5      56.76   49.09        37.29      33.33  45.45   50.00
             R@10     75.68   74.55        59.32      55.56  72.73   71.28
SEIN (Ours)  R@1      19.59   16.36        16.95      11.11  27.27   18.09
             R@5      58.78   56.36        38.98      33.33  54.55   53.19
             R@10     77.03   78.18        61.02      77.78  63.64   73.40
Table 2: Overall performance compared to the state-of-the-art methods on the test set. The best results are denoted in bold.

Method   R@1    R@5    R@10
w/o SG   16.35  51.22  71.63
w/o OTA  17.21  51.87  71.92
w/o CMF  17.62  52.11  72.24
w/o APS  17.12  52.09  71.88
SEIN     18.09  53.19  73.40
Table 3: Experimental results of the ablation study. The best results are denoted in bold.

Main Results
The main experimental results of our method and the baselines are reported in Table 2. We can observe that: (1) The proposed method achieves the best performance in terms of R@1, R@5, and R@10 compared to the baseline methods, which suggests the effectiveness of the SEIN framework in addressing this task. Besides, SEIN consistently exhibits excellent performance improvements across different daily life categories. (2) SEIN demonstrates a significant performance gain over the BERT baseline, which indicates that incorporating the visual modality can provide valuable information for multimodal event causality reasoning and help rectify certain non-commonsense errors. (3) Compared to VCC and iReason, our method performs far better on R@1, R@5, and R@10. The reason behind this improvement may be that VCC and iReason regard the object context representations as features instead of exploiting rich visual features, while our method makes full use of visual and textual features to recognize event causalities more effectively. (4) Our method surpasses OARNet by a substantial margin on R@1, R@5, and R@10. We attribute this to the fact that OARNet primarily uses the co-occurrence of the objects in the two images as visual features to identify the causal relation, disregarding the relations between objects in the image and the changes in the visual information of the objects. In contrast, SEIN can leverage the structural and semantic information of the image and capture the object interactions across the image pair, thus achieving better performance.

Analysis and Discussion
Ablation Study. To verify the contribution of each component, we conduct ablation studies by comparing SEIN with the proposed variant methods. As illustrated in Table 3, we can find that: (1) After removing the scene graph (w/o SG), the model performance drops significantly.
The performance gap indicates the importance of the scene graph in modeling the objects and the relations between paired objects, which provides valuable structural and semantic knowledge for each image and facilitates event causality reasoning. (2) After removing the optimal transport-based alignment (w/o OTA), the model performance becomes worse. This illustrates that the optimal transport-based alignment strategy can capture the interactions of objects across the image pair, which is beneficial for recognizing the changes in visual information and mining implicit causal clues. (3) After removing the cross-modal fusion module (w/o CMF), the model also suffers from performance decay. This result demonstrates that the multi-head attention mechanism is effective for capturing cross-modal interactions and fusing the textual and visual modalities, enabling enhanced reasoning and prediction. (4) After removing the adaptive prediction strategy (w/o APS), which means the causality score is predicted by an average operation, the model performance decreases. This performance gap illustrates that the adaptive prediction strategy can balance the influence of textual and multimodal reasoning on causality prediction, especially in the case of single-outcome prediction errors.

Figure 3: Visualization of a typical instance, with events "A girl is watching two dogs." and "The two dogs are going to play.": (a) the input events and object detection results; (b) the transportation cost matrix.

Figure 4: Experimental results under different numbers of GCN layers.

Effect of the Number of GCN Layers. To investigate the effect of GCN layers, we conduct experiments with the number of GCN layers ranging from 1 to 5. The model performance on R@1, R@5, and R@10 is plotted in Fig. 4. The observations drawn from the results are as follows: (1) SEIN produces the best performance on R@1, R@5, and R@10 when using two layers. Therefore, we argue that adopting two GCN layers is most effective for modeling the objects and the relations between paired objects to obtain a sufficient understanding of the image. (2) The model performance drops rapidly when the number of GCN layers becomes too large. This illustrates that increasing the number of GCN layers beyond a certain point does not contribute to improving the performance of multimodal event causality reasoning.

Figure 5: Experimental results of using different pre-trained models.

Generalization. To examine the generalization ability of the SEIN framework, we leverage different pre-trained models to encode the text and image modalities for comparison. The results are presented in Fig. 5. The following observations can be made: (1) SEIN consistently yields the best performance among the different methods, indicating the effectiveness and generalization ability of the proposed method. This also suggests that leveraging the structural information of the image and the interactions of objects across the image pair enables the model to uncover implicit causal clues, thus boosting reasoning performance. (2) The methods that incorporate visual information generally perform better than BERT, which indicates that the inclusion of global visual features can enhance the model's understanding of multimodal daily events.

Visualization. We present the visualization of a typical instance to demonstrate the object interactions across the image pair.
As shown in Fig. 3(a), we adopt the pre-trained Faster R-CNN (Ren et al. 2015) to obtain object information from each image. After training the SEIN framework, the cost matrix produced by the optimal transport-based alignment strategy is illustrated in Fig. 3(b). We can find that objects of the same kind exhibit relatively lower transportation costs. Additionally, the cost associated with the same object is lower compared to different objects across the image pair. This indicates that using the cost matrix to guide the combination of object features is reasonable and effective.

Conclusion
In this paper, we propose the SEIN framework to tackle the multimodal event causality reasoning task. The proposed method exploits a GCN to model the objects and relations from the scene graph structure, allowing for a sufficient visual understanding of the image. Then, an optimal transport-based alignment approach is designed to capture changes in visual information between the image pair and facilitate causality reasoning. Besides, SEIN adopts a cross-modal fusion module to combine textual and visual features, and introduces an adaptive prediction strategy for better inference. Experimental results illustrate that SEIN achieves state-of-the-art performance on the Vis-Causal dataset.

References
Cao, P.; Zuo, X.; Chen, Y.; Liu, K.; Zhao, J.; Chen, Y.; and Peng, W. 2021. Knowledge-Enriched Event Causality Identification via Latent Structure Induction Networks. In Zong, C.; Xia, F.; Li, W.; and Navigli, R., eds., Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, 4862–4872. Association for Computational Linguistics.
Chadha, A.; and Jain, V. 2021. iReason: Multimodal Commonsense Reasoning using Videos and Natural Language with Interpretability. CoRR, abs/2107.10300.
Cong, Y.; Liao, W.; Ackermann, H.; Rosenhahn, B.; and Yang, M. Y. 2021. Spatial-Temporal Transformer for Dynamic Scene Graph Generation. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, 16352–16362. IEEE.
Cuturi, M. 2013. Sinkhorn Distances: Lightspeed Computation of Optimal Transport. In Burges, C. J. C.; Bottou, L.; Ghahramani, Z.; and Weinberger, K. Q., eds., Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, 2292–2300.
Devlin, J.; Chang, M.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Burstein, J.; Doran, C.; and Solorio, T., eds., Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), 4171–4186. Association for Computational Linguistics.
Hildebrandt, M.; Li, H.; Koner, R.; Tresp, V.; and Günnemann, S. 2020. Scene Graph Reasoning for Visual Question Answering. CoRR, abs/2007.01072.
Iashin, V.; and Rahtu, E. 2020. Multi-modal Dense Video Captioning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR Workshops 2020, Seattle, WA, USA, June 14-19, 2020, 4117–4126. Computer Vision Foundation / IEEE.
Ji, J.; Krishna, R.; Fei-Fei, L.; and Niebles, J. C. 2020. Action Genome: Actions As Compositions of Spatio-Temporal Scene Graphs. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, 10233–10244. Computer Vision Foundation / IEEE.
Kipf, T. N.; and Welling, M. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Krishna, R.; Zhu, Y.; Groth, O.; Johnson, J.; Hata, K.; Kravitz, J.; Chen, S.; Kalantidis, Y.; Li, L.; Shamma, D. A.; Bernstein, M. S.; and Fei-Fei, L. 2017. Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations. Int. J. Comput. Vis., 123(1): 32–73.
Li, M.; Xu, R.; Wang, S.; Zhou, L.; Lin, X.; Zhu, C.; Zeng, M.; Ji, H.; and Chang, S. 2022. CLIP-Event: Connecting Text and Images with Event Structures. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, 16399–16408. IEEE.
Li, R.; Zhang, S.; and He, X. 2022. SGTR: End-to-end Scene Graph Generation with Transformer. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, 19464–19474. IEEE.
Liu, J.; Chen, Y.; and Zhao, J. 2020. Knowledge Enhanced Event Causality Identification with Mention Masking Generalizations. In Bessiere, C., ed., Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, 3608–3614. ijcai.org.
Liu, J.; Zhang, Z.; Guo, Z.; Jin, L.; Li, X.; Wei, K.; and Sun, X. 2023a. KEPT: Knowledge Enhanced Prompt Tuning for event causality identification. Knowl. Based Syst., 259: 110064.
Liu, J.; Zhang, Z.; Wei, K.; Guo, Z.; Sun, X.; Jin, L.; and Li, X. 2023b. Event Causality Extraction via Implicit Cause-Effect Interactions. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 6792–6804.
Lu, Y.; Rai, H.; Chang, J.; Knyazev, B.; Yu, G. W.; Shekhar, S.; Taylor, G. W.; and Volkovs, M. 2021. Context-aware Scene Graph Generation with Seq2Seq Transformers. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, 15911–15921. IEEE.
Ma, B.; and Tong, C. 2022. Joint Visual Perception and Linguistic Commonsense for Daily Events Causality Reasoning. In IEEE International Conference on Multimedia and Expo, ICME 2022, Taipei, Taiwan, July 18-22, 2022, 1–6. IEEE.
Pennington, J.; Socher, R.; and Manning, C. D. 2014. Glove: Global Vectors for Word Representation. In Moschitti, A.; Pang, B.; and Daelemans, W., eds., Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, 1532–1543. ACL.
Rajani, N. F.; McCann, B.; Xiong, C.; and Socher, R. 2019. Explain Yourself! Leveraging Language Models for Commonsense Reasoning. In Korhonen, A.; Traum, D. R.; and Màrquez, L., eds., Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, 4932–4942. Association for Computational Linguistics.
Ren, S.; He, K.; Girshick, R. B.; and Sun, J. 2015. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
In Cortes, C.; Lawrence, N. D.; Lee, D. D.; Sugiyama, M.; and Garnett, R., eds., Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, 91–99.
Sap, M.; Bras, R. L.; Allaway, E.; Bhagavatula, C.; Lourie, N.; Rashkin, H.; Roof, B.; Smith, N. A.; and Choi, Y. 2019. ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, 3027–3035. AAAI Press.
Shen, S.; Zhou, H.; Wu, T.; and Qi, G. 2022. Event Causality Identification via Derivative Prompt Joint Learning. In Calzolari, N.; Huang, C.; Kim, H.; Pustejovsky, J.; Wanner, L.; Choi, K.; Ryu, P.; Chen, H.; Donatelli, L.; Ji, H.; Kurohashi, S.; Paggio, P.; Xue, N.; Kim, S.; Hahm, Y.; He, Z.; Lee, T. K.; Santus, E.; Bond, F.; and Na, S., eds., Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, 2288–2299. International Committee on Computational Linguistics.
Tang, K.; Niu, Y.; Huang, J.; Shi, J.; and Zhang, H. 2020. Unbiased Scene Graph Generation From Biased Training. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, 3713–3722. Computer Vision Foundation / IEEE.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is All you Need. In Guyon, I.; von Luxburg, U.; Bengio, S.; Wallach, H. M.; Fergus, R.; Vishwanathan, S. V. N.; and Garnett, R., eds., Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, 5998–6008.
Wang, J.; Yang, Y.; Liu, K.; Zhu, Z.; and Liu, X. 2022a. M3S: Scene graph driven multi-granularity multi-task learning for multi-modal NER. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31: 111–120.
Wang, S.; Wei, K.; Zhang, H.; Li, Y.; and Wu, W. 2022b. Let Me Check the Examples: Enhancing Demonstration Learning via Explicit Imitation. arXiv preprint arXiv:2209.00455.
Wang, W.; Wang, R.; Shan, S.; and Chen, X. 2019. Exploring Context and Visual Pattern of Relationship for Scene Graph Generation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, 8188–8197. Computer Vision Foundation / IEEE.
Wang, Z.; You, H.; Li, L. H.; Zareian, A.; Park, S.; Liang, Y.; Chang, K.-W.; and Chang, S.-F. 2022c. SGEITL: Scene graph enhanced image-text learning for visual commonsense reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 5914–5922.
Wei, K.; Sun, X.; Zhang, Z.; Jin, L.; Zhang, J.; Lv, J.; and Guo, Z. 2022. Implicit Event Argument Extraction With Argument-Argument Relational Knowledge. IEEE Transactions on Knowledge and Data Engineering.
Wei, K.; Sun, X.; Zhang, Z.; Zhang, J.; Zhi, G.; and Jin, L. 2021. Trigger is not sufficient: Exploiting frame-aware knowledge for implicit event argument extraction.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 4672–4682.
Wei, K.; Yang, Y.; Jin, L.; Sun, X.; Zhang, Z.; Zhang, J.; Li, X.; Zhang, L.; Liu, J.; and Zhi, G. 2023. Guide the Many-to-One Assignment: Open Information Extraction via IoU-aware Optimal Transport. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 4971–4984.
Wu, B.; Cheng, R.; Zhang, P.; Gao, T.; Gonzalez, J. E.; and Vajda, P. 2022. Data Efficient Language-Supervised Zero-Shot Recognition with Optimal Transport Distillation. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Xie, Y.; Wang, X.; Wang, R.; and Zha, H. 2019. A Fast Proximal Point Method for Computing Exact Wasserstein Distance. In Globerson, A.; and Silva, R., eds., Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI 2019, Tel Aviv, Israel, July 22-25, 2019, volume 115 of Proceedings of Machine Learning Research, 433–453. AUAI Press.
Xu, B.; Zeng, Z.; Lian, C.; and Ding, Z. 2022. Few-Shot Domain Adaptation via Mixup Optimal Transport. IEEE Trans. Image Process., 31: 2518–2528.
Xu, D.; Zhu, Y.; Choy, C. B.; and Fei-Fei, L. 2017. Scene Graph Generation by Iterative Message Passing. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, 3097–3106. IEEE Computer Society.
Yoon, S.; Kang, W.; Jeon, S.; Lee, S.; Han, C.; Park, J.; and Kim, E. 2021. Image-to-Image Retrieval by Learning Similarity between Scene Graphs. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, 10718–10726. AAAI Press.
Zareian, A.; Karaman, S.; and Chang, S. 2020. Weakly Supervised Visual Semantic Parsing. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, 3733–3742. Computer Vision Foundation / IEEE.
Zhang, H.; Huo, Y.; Zhao, X.; Song, Y.; and Roth, D. 2021. Learning Contextual Causality Between Daily Events From Time-Consecutive Images. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2021, virtual, June 19-25, 2021, 1752–1755. Computer Vision Foundation / IEEE.
AT4CTR: Auxiliary Match Tasks for Enhancing Click-Through Rate Prediction
Qi Liu1*, Xuyang Hou2, Defu Lian1†, Zhe Wang2, Haoran Jin1, Jia Cheng2, Jun Lei2
1 University of Science and Technology of China
2 Meituan
{qiliu67,HaoranJin}@mail.ustc.edu.cn, {liandefu}@ustc.edu.cn
{houxuyang,wangzhe65,jia.cheng.sh,leijun}@meituan.com

*This work was done when the author Qi Liu was an intern at Meituan.
†Corresponding author
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Click-through rate (CTR) prediction is a vital task in industrial recommendation systems. Most existing methods focus on the network architecture design of the CTR model for better accuracy and suffer from the data sparsity problem. Especially in industrial recommendation systems, the widely applied negative sample down-sampling technique, adopted due to resource limitations, worsens the problem and results in a decline in performance. In this paper, we propose Auxiliary Match Tasks for enhancing Click-Through Rate (AT4CTR) prediction accuracy by alleviating the data sparsity problem. Specifically, we design two match tasks inspired by collaborative filtering to enhance the relevance modeling between user and item. As the "click" action is a strong signal that directly indicates the user's preference towards the item, the first match task aims at pulling closer the representations of the user and the item for positive samples. Since the user's past click behaviors can also be treated as a representation of the user, we apply next item prediction as the second match task. For both match tasks, we choose InfoNCE as the loss function. The two match tasks can provide meaningful training signals to speed up the model's convergence and alleviate data sparsity. We conduct extensive experiments on one public dataset and one large-scale industrial recommendation dataset. The results demonstrate the effectiveness of the proposed auxiliary match tasks. AT4CTR has been deployed in a real industrial advertising system and has gained remarkable revenue.

Introduction
Click-through rate (CTR) prediction is crucial in industrial web applications, e.g., recommendation systems and online advertising. It estimates the probabilities of the user clicking on items and displays the top-ranked items to the user. In online advertising, the platform can only charge the advertiser once the ad is clicked by the user. Thus, accurate CTR estimation can maintain the user's satisfaction and maximize the revenue for both the platform and the advertiser.

Existing advances in CTR prediction mainly focus on network architecture and have gained huge success. Traditional methods, like Logistic Regression (Richardson, Dominowska, and Ragno 2007) and Factorization Machine (FM) (Rendle 2010), can only capture low-order feature interactions. Recently, deep learning has been exploited for CTR prediction. Methods such as Wide&Deep (Cheng et al. 2016), DeepFM (Guo et al. 2017), and DCN (Wang et al. 2017) focus on capturing high-order feature interactions through neural networks. On the other side, DIN (Zhou et al. 2018b), DIEN (Zhou et al. 2019), and DBPMaN (Dong et al. 2023) extract user interest from user behavior sequences such as clicks or conversions. These methods all improve the performance of CTR prediction by a large margin.
The success of network architecture design has led researchers to ignore another important problem, data sparsity, which means that positive samples take up only a small part of the total samples. Especially in industrial scenarios, negative sample down-sampling, which abandons each negative sample according to a Bernoulli distribution with a certain probability, is widely used to reduce the computing and storage cost of training the CTR model. However, drastically abandoning negative samples worsens the data sparsity issue and degrades performance, as there always exist abandoned hard negative samples that are important for updating the CTR model. A few works devote effort to solving the data sparsity issue. DeepMCP (Ouyang et al. 2019) applies a matching subnet to strengthen the relevance between the user and the item, and a correlation subnet to improve the item representation, but it introduces no extra training signals. DMR (Lyu et al. 2020) designs an auxiliary network to predict the last behavior based on the previous behaviors. The auxiliary losses used in DeepMCP and DMR are both based on negative sampling (Mikolov et al. 2013), which is an approximation of the full softmax; this signal is not strong enough and is unfriendly to implement, since the item vocabulary has a large magnitude and changes over time. CL4CTR (Wang et al. 2023) exploits an auxiliary network to perform self-supervised contrastive learning but triples the training cost.

To alleviate the data sparsity problem, we propose two novel auxiliary match tasks that provide more helpful training signals. As shown in Figure 1, AT4CTR contains two auxiliary tasks. The "click" action demonstrates the user's strong preference for the clicked item. Intuitively, the representations of the user-side and item-side features of positive samples should be highly relevant. The main binary classification task cannot fully concentrate on user-item relevance learning because it needs to deal with the context and other interaction features simultaneously (Lin et al. 2023). Thus, we propose the first auxiliary match task, named User-Item Match (UIM). Inspired by the success of contrastive learning in CV&NLP, we take InfoNCE (Oord, Li, and Vinyals 2018) as the loss of the UIM task. It treats the user and the item of each positive sample as a positive pair and pulls their representations closer, while taking the other samples in the same batch to synthesize negative samples whose user and item representations are pushed apart. The UIM task provides explicit signals for modeling the relevance between the user and the item, which relieves the stress on the main task and speeds up convergence. The behavior sequence consists of the user's clicked items in chronological order, which contains the causality of the user's decisions and can represent the user to some extent. We therefore design the second auxiliary match task, named Next Item Prediction (NIP). Specifically, we aggregate the previous behaviors through self-attention (Vaswani et al. 2017) as the user's representation and predict the next item, which can also be regarded as a micro UIM task. We exploit InfoNCE as an approximation of the full softmax due to the large magnitude of items. The NIP task can accelerate the convergence of the behavior sequence modeling module. We choose InfoNCE because of its inherent ability to mine hard negative samples (Wang and Liu 2021).
This ability can release the power of the synthetic negative samples from the in-batch negative construction, and it benefits industrial scenarios with negative sample down-sampling even more by supplementing additional hard negative signals. In summary, the main contributions of this paper are:
• We reveal the neglected problem of data sparsity, which is serious in industrial scenarios with negative sample down-sampling, and propose two auxiliary match tasks to provide extra meaningful training signals.
• We propose AT4CTR, which contains two auxiliary match tasks to strengthen the relevance between the user and the item with the help of the InfoNCE loss.
• We conduct offline experiments on both public and industry datasets to verify the effectiveness of the proposed AT4CTR, which also achieves remarkable improvement in an online A/B test.

Related Work
Deep CTR Prediction
Recent CTR research based on deep learning can be mainly divided into two directions: feature interaction and user behavior sequence modeling. The feature interaction methods hold that the interaction between different features is important for CTR modeling. Early FM-based methods only model second-order pairwise interactions using factorized parameters, which limits the performance of CTR modeling. Many works explore how to capture high-order and informative feature interactions efficiently. Wide&Deep (WDL) (Cheng et al. 2016) exploits a Deep Neural Network (DNN) to capture high-order feature interactions implicitly. DeepFM (Guo et al. 2017) combines DNN and FM, and xDeepFM (Lian et al. 2018) further proposes the Compressed Interaction Network to model high-order feature interactions explicitly. DCN (Wang et al. 2017) and DCNV2 (Wang et al. 2021) apply cross-vector/matrix networks to achieve informative feature interactions automatically.

User behavior sequence modeling is another important part of CTR modeling. It focuses on extracting the user's interest from the behavior sequence, which is composed of the items the user interacted with in chronological order. DIN (Zhou et al. 2018b) first applies the attention mechanism to mine user interest by activating items related to the target item and gains huge performance improvement. Based on DIN, DIEN (Zhou et al. 2019) utilizes a two-layer GRU to capture the dynamic change of the user's interest. Several works (Pi et al. 2020; Chang et al. 2023; Lin et al. 2022) propose to extract long-term interest from the user's ultra-long behavior sequence, using approximate nearest neighbor search algorithms to reduce latency. Some works (Guo et al. 2019; Zhou et al. 2018a) introduce multiple types of behavior sequences to obtain fine-grained user interest. DBPMaN (Dong et al. 2023) proposes a new perspective for behavior sequence modeling: it introduces the concept of the behavior path to understand the psychological procedure behind the user's decisions. However, all the above methods devote too much effort to the design of the network structure and ignore the data sparsity problem.

Contrastive Learning for Recommendation
Contrastive learning is a self-supervised learning algorithm that aims to obtain invariant representations by optimizing the goal of mutual information maximization, and it has gained huge success in CV&NLP (Gao, Yao, and Chen 2021; Chen et al. 2020). In contrastive learning, the key is to construct a positive pair for each sample through reasonable data augmentation methods.
The InfoNCE loss brings the representations of positive pairs closer and pushes away the representations of negative samples. Recently, some works have introduced contrastive learning into recommendation systems. In the sequential recommendation task (Zhou et al. 2021; Xie et al. 2022; Zhang et al. 2023), an augmented version of the user's behavior sequence, produced by inserting, masking, shuffling, etc., is treated as the positive pair. The additional contrastive learning task enhances the representation learning ability of the recommendation model and thus gains performance improvement. In the CTR prediction task, contrastive learning has not been well explored. MISS (Guo et al. 2022) focuses on sequence-based CTR tasks and applies interest-level contrastive learning to enhance behavior sequence modeling. CL4CTR (Wang et al. 2023) improves the quality of feature representations by designing three self-supervised tasks: contrastive learning, a feature alignment constraint, and a field uniformity constraint. However, these three self-supervised tasks are unrealistic for industrial CTR model training because they need to regularize the huge embedding table and at least triple the training overhead. AT4CTR also exploits contrastive learning but enhances performance efficiently with little extra training cost.

Auxiliary Match Tasks for CTR
Overview
CTR modeling is a binary classification machine learning task based on sparse features. The sparse features mainly contain information about the user, the item, and the context. The user-side information contains the user's profile (e.g., age, gender, city) and the user's behavior sequence. The item-side information consists of the item id, category id, etc. The context information is composed of position, time, etc. The CTR model needs to estimate the probability of the user clicking on the item under the given context. We represent each instance by $\{x, y\}$, where $x = [x_{UP}, x_{US}, x_I, x_C]$ and $y \in \{0, 1\}$ indicates click or not. $UP$, $US$, $I$, and $C$ represent the feature sets of the user profile, user behavior sequence, item, and context, respectively. The CTR task can be formulated as Eq. (1):

$$P(y = 1 \mid x) = F(x) \tag{1}$$

where $F$ is the CTR model. AT4CTR takes $x$ as input and transforms it into dense vectors through the embedding layer. As shown in Figure 1, the UIM task takes $x_{UP}$, $x_{US}$, and $x_I$ as input. It applies an independent self-attention (Vaswani et al. 2017) network to aggregate the behavior sequence. The user representation is composed of the aggregated behavior sequence and the user profile. Then the InfoNCE loss strengthens the relevance between the user and the item of positive samples. The NIP task exploits another self-attention network to perform causal behavior sequence aggregation based on $x_{US}$, and the InfoNCE loss is applied to supervise this process. The two auxiliary match tasks add only a little training cost and do not increase the parameters or latency during inference.

Embedding Layer
The embedding layer transforms the high-dimensional sparse vector $x$ into low-dimensional dense representations. Specifically, each feature field is assigned an embedding matrix $E = [e_1; e_2; \ldots; e_K] \in \mathbb{R}^{K \times d}$, where $K$ represents the cardinality of the feature field and $d$ denotes the embedding size. If the index value of the feature is $i$, then $e_i$ serves as its embedding.
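The per-field lookup described above can be sketched as follows. The paper's implementation is in TensorFlow; this PyTorch version and its field cardinalities are purely illustrative.

```python
import torch
import torch.nn as nn

class FieldEmbedding(nn.Module):
    """One embedding table per feature field, as described above."""
    def __init__(self, field_cardinalities, d):
        super().__init__()
        # One (K x d) table E per field; row e_i embeds index value i.
        self.tables = nn.ModuleList(
            nn.Embedding(K, d) for K in field_cardinalities
        )

    def forward(self, x):
        # x: (batch, num_fields) integer feature indices, one column per field
        return torch.stack(
            [table(x[:, i]) for i, table in enumerate(self.tables)], dim=1
        )  # (batch, num_fields, d)

# Toy usage with made-up cardinalities; the paper sets d = 16 on the
# industry dataset and d = 18 on Taobao.
emb = FieldEmbedding(field_cardinalities=[1000, 500, 20], d=16)
dense = emb(torch.tensor([[3, 7, 1], [42, 0, 5]]))  # -> (2, 3, 16)
```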
UIM: User-Item Match Task
The UIM task, which applies the InfoNCE loss, aims at pulling closer the representations of the user and the item in each positive sample. To avoid representation collapse, we combine the user/item of the positive samples with the items/users of the other samples in the same batch as the negative terms of the InfoNCE loss; the user and item representations of these synthetic negative samples are pushed apart. Specifically, the user-side representation contains the embedding of the user profile and the aggregated behavior sequence. There are multiple types of features in $x_{UP}$; we concatenate the embeddings of all these features to get the user profile representation $e_{UP}$. For the behavior sequence, we first apply self-attention (Vaswani et al. 2017) to refine the behavior sequence representation, as it can capture the relatedness between the clicked items, as in Eq. (2):

$$\mathrm{SA}(X) = \mathrm{Softmax}\!\left(\frac{X W^Q (X W^K)^\top}{\sqrt{d}}\right) X W^V \tag{2}$$

where $X$ is the user's behavior sequence $x_{US}$, and $W^Q, W^K, W^V \in \mathbb{R}^{d \times d}$ are the weight matrices that generate the query, key, and value, respectively. After that, we perform mean pooling to get the representation of the user's interest $e_{US}$, as in Eq. (3):

$$e_{US} = \mathrm{mean\_pool}\big(\mathrm{SA}(x_{US})\big) \tag{3}$$

We then concatenate the user profile representation and the user's interest as the user's representation $e_U$:

$$e_U = \mathrm{concat}(e_{UP}, e_{US}) \tag{4}$$

For the item-side representation, we concatenate the embeddings of the features in $x_I$ to get the item representation $e_I$. The number of features differs between the user side and the item side, so we apply two separate Multi-Layer Perceptrons (a.k.a. projection heads) (Chen et al. 2020) to align their representations, following Eq. (5):

$$r_U = \mathrm{MLP}_U(e_U), \qquad r_I = \mathrm{MLP}_I(e_I) \tag{5}$$

where $\mathrm{MLP}_U$ and $\mathrm{MLP}_I$ are both two-layer MLPs with ReLU activations, and $r_U$ and $r_I$ are the aligned representations. Then we apply the InfoNCE loss over all positive samples based on the aligned representations, as in Eq. (6):

$$\mathcal{L}_{ui} = -\frac{1}{n^+} \sum_{k=1}^{n^+} \log \frac{\exp\!\big(\mathrm{sim}(r_U^+, r_I^+)/\tau_1\big)}{\sum_{j=1}^{n} \exp\!\big(\mathrm{sim}(r_U^+, r_I^j)/\tau_1\big)} \tag{6}$$

where $\mathrm{sim}(\cdot)$ denotes the cosine similarity, $\tau_1$ is the temperature hyperparameter, $n$ is the batch size, and $n^+$ is the number of positive samples in the batch. The InfoNCE loss should be symmetrical, so we also compute $\mathcal{L}_{iu}$, as in Eq. (7):

$$\mathcal{L}_{iu} = -\frac{1}{n^+} \sum_{k=1}^{n^+} \log \frac{\exp\!\big(\mathrm{sim}(r_I^+, r_U^+)/\tau_1\big)}{\sum_{j=1}^{n} \exp\!\big(\mathrm{sim}(r_I^+, r_U^j)/\tau_1\big)} \tag{7}$$

Combining the two losses, we obtain the UIM auxiliary loss as $\mathcal{L}_{UIM} = \mathcal{L}_{ui} + \mathcal{L}_{iu}$.

NIP: Next Item Prediction Task
The user behavior sequence, which consists of interacted items, implicitly contains the causal psychological clues of the user. One obvious intuition is that the past behaviors of the user affect the current behavior. We design the second auxiliary match task, which performs next item prediction by taking InfoNCE as an approximation of the full softmax. From another perspective, we can also treat the past behaviors as the user's representation and the next behavior as the positive target item, which is consistent with the mechanism of the UIM.

Figure 1: The overall framework of AT4CTR. SA represents self-attention and CSA indicates causal self-attention. UBM means user behavior sequence modeling. UIM and NIP are the two proposed auxiliary match tasks.
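Before detailing NIP, the following is a minimal PyTorch sketch of the symmetric UIM loss in Eqs. (6)-(7) with in-batch negatives; the function and variable names are ours, and the paper's actual implementation is in TensorFlow.

```python
import torch
import torch.nn.functional as F

def uim_loss(r_u: torch.Tensor, r_i: torch.Tensor, y: torch.Tensor,
             tau: float = 0.07) -> torch.Tensor:
    """Symmetric UIM loss of Eqs. (6)-(7).

    r_u, r_i: (n, d) aligned user/item representations from Eq. (5).
    y: (n,) binary click labels; the batch is assumed to contain at
    least one positive sample.
    """
    r_u = F.normalize(r_u, dim=-1)
    r_i = F.normalize(r_i, dim=-1)
    logits = (r_u @ r_i.t()) / tau     # (n, n) cosine similarities over tau
    targets = torch.arange(len(y))     # matching pairs lie on the diagonal
    pos = y.bool()
    # Eq. (6): anchor each positive user against all n in-batch items.
    l_ui = F.cross_entropy(logits[pos], targets[pos])
    # Eq. (7): anchor each positive item against all n in-batch users.
    l_iu = F.cross_entropy(logits.t()[pos], targets[pos])
    return l_ui + l_iu
```

The NIP loss introduced next (Eqs. 9-10) reuses this in-batch InfoNCE construction, anchored at sequence positions instead of whole samples.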
Specifically, we take auto-regressive causal self-attention (Vaswani et al. 2017) as the encoder of the behavior sequence in NIP, as in Eq. (8):

$$[r_1, r_2, \ldots, r_{m-1}] = \mathrm{CausalSA}(x_{US}) \tag{8}$$

where $r_t$, $t \in \{1, 2, \ldots, m-1\}$, is the aggregated representation of the oldest $t$ behaviors. We treat the embedding $e_I$ of the next item as the representation $r_I$ of the next item. To calculate the InfoNCE loss, we still apply in-batch negative construction: items at the same position in the behavior sequences of the other samples in the same batch are regarded as negative samples. We calculate the InfoNCE loss of the next item prediction task as in Eq. (9):

$$\mathcal{L}_{pi} = -\frac{1}{nm} \sum_{i=1}^{n} \sum_{k=1}^{m-1} \log \frac{\exp\!\big(\mathrm{sim}(r_i^k, e_i^{k+1})/\tau_2\big)}{\sum_{j=1}^{n} \exp\!\big(\mathrm{sim}(r_i^k, e_j^{k+1})/\tau_2\big)} \tag{9}$$

where $n$ is the batch size, $m$ is the length of the behavior sequence, $\tau_2$ is the temperature hyperparameter, $r_i^k$ is the aggregated representation of the first $k$ behaviors, and $e_i^{k+1}$ is the embedding of the $(k+1)$-th item. We also have the symmetrical InfoNCE loss $\mathcal{L}_{ip}$, as in Eq. (10):

$$\mathcal{L}_{ip} = -\frac{1}{nm} \sum_{i=1}^{n} \sum_{k=1}^{m-1} \log \frac{\exp\!\big(\mathrm{sim}(e_i^{k+1}, r_i^k)/\tau_2\big)}{\sum_{j=1}^{n} \exp\!\big(\mathrm{sim}(e_i^{k+1}, r_j^k)/\tau_2\big)} \tag{10}$$

Adding both losses together, we get the loss of the auxiliary NIP task as $\mathcal{L}_{NIP} = \mathcal{L}_{pi} + \mathcal{L}_{ip}$.

Multi-task Training
We use the widely applied negative log-likelihood loss as the main loss of CTR prediction, as in Eq. (11):

$$\mathcal{L}_{main} = -y \log F(x) - (1 - y) \log\big(1 - F(x)\big) \tag{11}$$

We add the main loss and the losses of the two auxiliary tasks together to supervise model training. The total loss is given by Eq. (12):

$$\mathcal{L}_{total} = \mathcal{L}_{main} + \lambda_{UIM}\, \mathcal{L}_{UIM} + \lambda_{NIP}\, \mathcal{L}_{NIP} \tag{12}$$

where $\lambda_{UIM}$ and $\lambda_{NIP}$ are weight coefficients.

Experiment Settings
Datasets
We conduct extensive experiments on both an industry dataset and a public dataset. Statistics of the two datasets are shown in Table 1.

Taobao Dataset The Taobao dataset (Zhu et al. 2018) is widely used in CTR research. It consists of a set of user behaviors from Taobao's industry recommendation system. The dataset contains about 1 million users whose behaviors include clicking, purchasing, adding items to the shopping cart, etc. The click behaviors of each user are taken and sorted by timestamp to construct the behavior sequence. We filter out users who have fewer than 10 behaviors. The split standard is the same as that of CAN (Bian et al. 2022).

Datasets  #Users   #Items     #Fields  #Instances
Taobao    987,991  4,161,138  7        100,095,182
Industry  40M      417K       168      6.6B
Table 1: Statistics of datasets.

Industry Dataset We collect traffic logs from the search advertising system of the location-based-service e-commerce platform of Meituan. The last ten months' samples are used for training and the samples of the following day for testing. For the training set, we perform negative sample down-sampling with ratio 0.1. As the testing set is only used to evaluate the offline metrics, we do not perform negative sample down-sampling on it. Following (He et al. 2014), we re-calibrate the model for online serving.

Baselines
We compare AT4CTR with three types of CTR modeling methods. The first type is feature interaction methods, including FM (Rendle 2010), WideDeep (Cheng et al. 2016), and DeepFM (Guo et al. 2017), which focus on second- and high-order feature interactions. The second type is behavior sequence modeling methods. DIN (Zhou et al. 2018b) uses the attention mechanism to extract the user's candidate-aware interest.
Experiment Settings

Datasets. We conduct extensive experiments on both an industry dataset and a public dataset. Statistics of the two datasets are shown in Table 1.

Taobao Dataset. The Taobao dataset (Zhu et al. 2018) is widely used in CTR research. It consists of user behaviors from Taobao's industry recommendation system. The dataset contains about 1 million users whose behaviors include clicking, purchasing, adding items to the shopping cart, etc. The click behaviors of each user are taken and sorted by timestamp to construct the behavior sequence. We filter out users who have fewer than 10 behaviors. The split standard is the same as in CAN (Bian et al. 2022).

Table 1: Statistics of datasets.
Dataset    #Users    #Items     #Fields  #Instances
Taobao     987,991   4,161,138  7        100,095,182
Industry   40M       417K       168      6.6B

Industry Dataset. We collect traffic logs from the search advertising system of Meituan's location-based-service e-commerce platform. The last ten months of samples are used for training and the samples of the following day for testing. For the training set, we perform negative-sample down-sampling with ratio 0.1. As the testing set is only used for offline evaluation, we do not down-sample it. Following (He et al. 2014), we re-calibrate the model for online serving.

Baselines. We compare AT4CTR with three types of CTR modeling methods. The first type comprises feature interaction methods, including FM (Rendle 2010), WideDeep (Cheng et al. 2016), and DeepFM (Guo et al. 2017), which focus on second- and high-order feature interactions. The second type is behavior sequence modeling. DIN (Zhou et al. 2018b) uses the attention mechanism to extract the user's candidate-aware interest. DIEN (Zhou et al. 2019) extends DIN with an interest extractor layer and an interest evolution layer. DBPMaN (Dong et al. 2023) exploits the behavior path to capture the psychological procedure behind user decisions. The third type is the auxiliary task method. DeepMCP (Ouyang et al. 2019) uses a matching subnet to capture the user-item relation and a correlation subnet to explore item-item correlation. DMR (Lyu et al. 2020) proposes an auxiliary matching loss to measure the correspondence between the user preference and the target item in the behavior sequence. CL4CTR (Wang et al. 2023) exploits an auxiliary network to perform contrastive learning, aiming to enhance the embedding representation.

Evaluation Metric. Two widely used metrics, AUC and Logloss, are chosen. AUC (Area Under the ROC Curve) measures the comprehensive ranking ability of the CTR model over all samples in the testing set. Logloss measures the accuracy of the estimated probability against the ground-truth label. An improvement of AUC or Logloss at the 0.001 level is significant in a mature recommendation system (Guo et al. 2017), as it translates into a large increase in revenue.

Implementation Details. We implement AT4CTR in TensorFlow. For the industrial dataset, the embedding size is 16 and the learning rate is 5e-4; we train on 8 80G A100 GPUs with a per-card batch size of 1500. For the Taobao dataset, we set the embedding size to 18 and the learning rate to 1e-3, and train on a single 80G A100 with batch size 1024. We set τ_1 to 0.07 and τ_2 to 0.1, and use Adam as the optimizer for both datasets. We run all experiments five times and report the average result. For DeepMCP and CL4CTR, we use DBPMaN for behavior sequence modeling.

Table 2: Performance comparison of baselines on the two datasets. The best result is in boldface and the second best is underlined. * indicates that the difference to the best baseline is statistically significant at the 0.01 level.
           Industry            Taobao
Method     AUC      Logloss    AUC      Logloss
FM         0.7177   0.1955     0.8025   0.2723
WideDeep   0.7335   0.1923     0.8733   0.2233
DeepFM     0.7327   0.1924     0.8690   0.2263
DIN        0.7420   0.1906     0.9402   0.1544
DIEN       0.7424   0.1905     0.9479   0.1430
DBPMaN     0.7426   0.1905     0.9509   0.1382
DeepMCP    0.7426   0.1906     0.9514   0.1376
DMR        0.7421   0.1906     0.9405   0.1540
CL4CTR     0.7428   0.1904     0.9508   0.1384
AT4CTR     0.7441*  0.1902*    0.9535*  0.1345*
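As a quick reference for the evaluation protocol above, here is a minimal sketch of computing the two metrics with scikit-learn; scikit-learn is our illustrative choice here, not the paper's stated tooling.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, log_loss

y_true = np.array([1, 0, 0, 1, 0])            # ground-truth click labels
y_pred = np.array([0.8, 0.3, 0.2, 0.6, 0.4])  # model pCTR estimates

auc = roc_auc_score(y_true, y_pred)   # ranking quality over the test set
nll = log_loss(y_true, y_pred)        # "Logloss": negative log-likelihood
print(f"AUC={auc:.4f}, Logloss={nll:.4f}")
```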
Experiment Results

Performance Comparison. Table 2 shows the results of all methods. AT4CTR obtains the best performance on both the Industry and Taobao datasets, demonstrating its effectiveness. Several insightful findings emerge from the results. (1) The proposed AT4CTR beats all baselines on both metrics and both datasets. Compared with feature interaction and behavior sequence modeling methods, AT4CTR forces the neural network to capture the relevance between user and item through the two proposed auxiliary tasks. This demonstrates that auxiliary tasks grounded in the intrinsic character of the data can alleviate the data sparsity issue and improve model training. (2) FM performs significantly worse than WideDeep and the other deep models, which reveals the importance of non-linear transformations and high-order feature interactions. (3) Behavior sequence modeling methods outperform feature interaction methods significantly, which verifies the necessity of extracting user interest. DIN extracts user interest by considering the relevance of the behavior sequence with respect to the target item but ignores the sequential character of user behavior. DIEN applies a two-layer GRU structure and an auxiliary binary classification task to capture the evolution of user interest, and thus performs better than DIN. DBPMaN exploits the behavior path to understand the psychological procedure behind user decisions and achieves the best performance among behavior sequence modeling methods. (4) Auxiliary task methods also benefit performance. DeepMCP uses a matching subnet to strengthen the user-item correlation through a binary classification task and uses the skip-gram algorithm to model item correlations. However, it achieves only a slight improvement; we believe the reason is that its auxiliary binary classification task is homogeneous with the main CTR task and thus cannot provide extra signals. For DMR, the auxiliary task is to predict the last behavior from the previous behaviors, using negative sampling to approximate the multi-class classification; it provides a weak training signal and yields little gain. CL4CTR shows almost no improvement on either dataset.

Table 3: Effect of each auxiliary task.
                       Industry            Taobao
Method                 AUC      Logloss    AUC      Logloss
DBPMaN                 0.7426   0.1905     0.9509   0.1382
+UIM                   0.7438   0.1903     0.9529   0.1360
DBPMaN (+UIM, +NIP)    0.7441   0.1902     0.9535   0.1345

Ablation Study. We investigate how the UIM and NIP auxiliary tasks influence AT4CTR; results are shown in Table 3. The ablation experiments are conducted on top of DBPMaN: we first integrate UIM with DBPMaN, and then additionally combine NIP. From Table 3, both auxiliary tasks are beneficial on both datasets, indicating their ability to alleviate the data sparsity problem. UIM works by directly capturing the fine-grained relevance between the user and the item, which is not easy for the main task, as it must provide compromised gradients for all features. NIP models the correlation of items in the behavior sequence. However, the information in each behavior of the sequence, which contains only the item id, category id, etc., is limited due to storage cost. This explains why the benefit of NIP is smaller than that of UIM, which draws on a much richer set of features.

The Generalization Ability of AT4CTR. Since we focus on designing auxiliary tasks for CTR prediction, our approach is orthogonal to most existing CTR prediction models. We therefore report results of combining AT4CTR with different CTR models. For the feature interaction methods WideDeep and DeepFM, we only combine UIM with them; for the behavior sequence modeling methods, we integrate both UIM and NIP. The results in Table 4 show that AT4CTR boosts the performance of various CTR models, demonstrating its generalization ability. We have two findings. First, the feature interaction methods gain large improvements when combined with AT4CTR. The absence of a behavior sequence places unexpected stress on the user-profile features for user representation. The original binary click/non-click signals together with the feature interaction components (e.g., FM, DNN) are not enough to provide sufficient signals for representation and relevance learning.
The UIM gives an explicit signal in the InfoNCE loss that pulls together the user and item representations of positive samples and pushes apart those of the synthetic negative samples. The results demonstrate that UIM improves the embedding representations of the user and item features; the improved embeddings in turn ease the learning of the feature interaction components (e.g., FM, DNN). Second, when both auxiliary tasks are combined with behavior sequence modeling methods, performance is also enhanced. Since extracting user interest through the attention network between the target item and the behavior sequence is similar in effect to the two auxiliary tasks (both constrain the user and item representations according to given signals), the improvement of AT4CTR here is smaller than for the feature interaction methods. All results indicate that the main binary classification task alone cannot fully unleash the potential of the CTR model, whereas the proposed auxiliary tasks consistently help training.

Table 4: Results of combining AT4CTR with different CTR models; the subscript AT denotes the combination.
              Industry            Taobao
Method        AUC      Logloss    AUC      Logloss
WideDeep      0.7335   0.1923     0.8733   0.2233
WideDeep_AT   0.7389   0.1912     0.8781   0.2189
DeepFM        0.7327   0.1924     0.8690   0.2263
DeepFM_AT     0.7375   0.1916     0.8802   0.2174
DIN           0.7420   0.1906     0.9402   0.1544
DIN_AT        0.7433   0.1904     0.9430   0.1509
DIEN          0.7423   0.1905     0.9479   0.1430
DIEN_AT       0.7434   0.1903     0.9506   0.1394
DBPMaN        0.7426   0.1905     0.9509   0.1382
DBPMaN_AT     0.7441   0.1902     0.9536   0.1344

Figure 2: Results under different negative down-sampling ratios on the Industry dataset. NSR means negative sampling ratio.

The Influence of the Negative Down-sampling Ratio. We have claimed that the applied InfoNCE loss provides plenty of synthetic negative samples to make up for the performance lost to negative-sample down-sampling. In this subsection, we perform ablation studies on the influence of different negative sampling ratios on AT4CTR; the results are presented in Figure 2. We have the following findings. (1) AT4CTR enhances performance under all negative sampling ratios. (2) Under a negative sampling ratio of 0.1, AT4CTR achieves the same performance as the setting without negative down-sampling. Moreover, the relative improvement brought by AT4CTR decreases as the negative sampling ratio increases. This is consistent with our hypothesis: as the ratio increases, more real negative samples are available for training and the effect of synthetic negatives weakens. (3) We find the counterintuitive result that performance peaks at a negative sampling ratio of 0.3 rather than 1.0; we leave this for future research.

Figure 3: Results for various loss weights on the Industry dataset.
Figure 4: Results for various loss weights on the Taobao dataset.

Hyperparameters. We study the effect of the two auxiliary tasks' loss weights on both datasets; Figures 3 and 4 show the results. For the loss weight of the UIM task, the improvement first increases as the weight grows, then decreases when it grows further. NIP is sensitive to its loss-weight hyperparameter on both datasets.
The NIP loss weight must be kept at a small order of magnitude; otherwise the gains fade away or performance even declines. Overall, properly tuned loss weights provide useful signals that accelerate the model's convergence.

Case Study

We study whether AT4CTR strengthens the relevance between the user and the item of positive samples. We randomly select 10,000 positive samples from the Taobao test set, extract the embeddings of the user-side and item-side features, and compute the cosine similarity between the concatenated user and item embeddings. We analyze DBPMaN, DBPMaN_UIM, and DBPMaN_AT4CTR. The results in Figure 5 demonstrate that the two auxiliary match tasks enhance the user-item relevance. This reveals the deficiency of the main CTR task in learning user-item relevance under data sparsity; AT4CTR makes up for this deficiency and improves performance.

Figure 5: Relevance between user and item.

Resource Cost

We collect statistics on storage cost and model training time on the industry dataset. From Table 5, both increase rapidly as the negative sampling ratio grows. A large negative sampling ratio hampers the daily updating of the CTR model, since the CTR model is only one component of the industry search advertising system; negative down-sampling is therefore sometimes unavoidable. In contrast, AT4CTR adds only a little extra training time and no storage cost. With a negative sampling ratio of 0.1, AT4CTR matches the performance of training without down-sampling while saving roughly 6x in storage and training time. For the platform, saving costs also increases revenue.

Table 5: Resource cost under different negative sampling ratios. T means terabyte and h means hour.
Ratio         0.1    0.3    0.5    0.7    1.0
Storage (T)   39.0   88.3   137.0  181.6  230.2
Time (h)      6.1    13.9   21.3   29.1   39.5
Time_AT (h)   6.4    14.6   22.5   31.0

Online Results

We conduct an A/B test in the industry online search advertising system to measure the benefit of AT4CTR over the online baseline DBPMaN. AT4CTR was allocated 10% of serving traffic for one month. It achieves a 1.27% relative increase in Revenue Per Search and a 7.21% relative increase in Return on Investment.

Conclusion

In this paper, we propose AT4CTR to enhance CTR model performance. AT4CTR, which contains the UIM and NIP auxiliary tasks, aims to alleviate the data sparsity problem by providing extra training signals. We conduct offline and online experiments that verify the effectiveness of AT4CTR, and we perform ablation studies and visualizations that confirm the correctness of its components.

Acknowledgements

The work was supported by Meituan. Defu Lian was supported by grants from the National Natural Science Foundation of China (No. 62022077 and 61976198).

References

Bian, W.; Wu, K.; Ren, L.; Pi, Q.; Zhang, Y.; Xiao, C.; Sheng, X.-R.; Zhu, Y.-N.; Chan, Z.; Mou, N.; et al. 2022. CAN: feature co-action network for click-through rate prediction. In Proceedings of the fifteenth ACM international conference on web search and data mining, 57–65.
Chang, J.; Zhang, C.; Fu, Z.; Zang, X.; Guan, L.; Lu, J.; Hui, Y.; Leng, D.; Niu, Y.; Song, Y.; et al. 2023. TWIN: TWo-stage Interest Network for Lifelong User Behavior Modeling in CTR Prediction at Kuaishou. arXiv preprint arXiv:2302.02352.
Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020. A simple framework for contrastive learning of visual representations. In International conference on machine learning, 1597–1607. PMLR.
Cheng, H.-T.; Koc, L.; Harmsen, J.; Shaked, T.; Chandra, T.; Aradhye, H.; Anderson, G.; Corrado, G.; Chai, W.; Ispir, M.; et al. 2016. Wide & deep learning for recommender systems. In Proceedings of the 1st workshop on deep learning for recommender systems, 7–10.
Dong, J.; Yu, Y.; Zhang, Y.; Lv, Y.; Wang, S.; Jin, B.; Wang, Y.; Wang, X.; and Wang, D. 2023. A Deep Behavior Path Matching Network for Click-Through Rate Prediction. arXiv preprint arXiv:2302.00302.
Gao, T.; Yao, X.; and Chen, D. 2021. SimCSE: Simple contrastive learning of sentence embeddings. arXiv preprint arXiv:2104.08821.
Guo, H.; Tang, R.; Ye, Y.; Li, Z.; and He, X. 2017. DeepFM: a factorization-machine based neural network for CTR prediction. arXiv preprint arXiv:1703.04247.
Guo, L.; Hua, L.; Jia, R.; Zhao, B.; Wang, X.; and Cui, B. 2019. Buying or browsing?: Predicting real-time purchasing intent using attention-based deep network with multiple behavior. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, 1984–1992.
Guo, W.; Zhang, C.; He, Z.; Qin, J.; Guo, H.; Chen, B.; Tang, R.; He, X.; and Zhang, R. 2022. MISS: Multi-interest self-supervised learning framework for click-through rate prediction. In 2022 IEEE 38th international conference on data engineering (ICDE), 727–740. IEEE.
He, X.; Pan, J.; Jin, O.; Xu, T.; Liu, B.; Xu, T.; Shi, Y.; Atallah, A.; Herbrich, R.; Bowers, S.; et al. 2014. Practical lessons from predicting clicks on ads at Facebook. In Proceedings of the eighth international workshop on data mining for online advertising, 1–9.
Lian, J.; Zhou, X.; Zhang, F.; Chen, Z.; Xie, X.; and Sun, G. 2018. xDeepFM: Combining explicit and implicit feature interactions for recommender systems. In Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, 1754–1763.
Lin, J.; Qu, Y.; Guo, W.; Dai, X.; Tang, R.; Yu, Y.; and Zhang, W. 2023. MAP: A Model-agnostic Pretraining Framework for Click-through Rate Prediction. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1384–1395.
Lin, Q.; Zhou, W.-J.; Wang, Y.; Da, Q.; Chen, Q.-G.; and Wang, B. 2022. Sparse Attentive Memory Network for Click-through Rate Prediction with Long Sequences. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 3312–3321.
Lyu, Z.; Dong, Y.; Huo, C.; and Ren, W. 2020. Deep match to rank model for personalized click-through rate prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 156–163.
Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G. S.; and Dean, J. 2013. Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems, 26.
Oord, A. v. d.; Li, Y.; and Vinyals, O. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
Ouyang, W.; Zhang, X.; Ren, S.; Qi, C.; Liu, Z.; and Du, Y. 2019. Representation learning-assisted click-through rate prediction. arXiv preprint arXiv:1906.04365.
Pi, Q.; Zhou, G.; Zhang, Y.; Wang, Z.; Ren, L.; Fan, Y.; Zhu, X.; and Gai, K. 2020. Search-based user interest modeling with lifelong sequential behavior data for click-through rate prediction. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2685–2692.
Rendle, S. 2010. Factorization machines. In 2010 IEEE International conference on data mining, 995–1000. IEEE.
Richardson, M.; Dominowska, E.; and Ragno, R. 2007. Predicting clicks: estimating the click-through rate for new ads. In Proceedings of the 16th international conference on World Wide Web, 521–530.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Wang, F.; and Liu, H. 2021. Understanding the behaviour of contrastive loss. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2495–2504.
Wang, F.; Wang, Y.; Li, D.; Gu, H.; Lu, T.; Zhang, P.; and Gu, N. 2023. CL4CTR: A Contrastive Learning Framework for CTR Prediction. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, 805–813.
Wang, R.; Fu, B.; Fu, G.; and Wang, M. 2017. Deep & cross network for ad click predictions. In Proceedings of the ADKDD'17, 1–7.
Wang, R.; Shivanna, R.; Cheng, D.; Jain, S.; Lin, D.; Hong, L.; and Chi, E. 2021. DCN V2: Improved deep & cross network and practical lessons for web-scale learning to rank systems. In Proceedings of the web conference 2021, 1785–1797.
Xie, X.; Sun, F.; Liu, Z.; Wu, S.; Gao, J.; Zhang, J.; Ding, B.; and Cui, B. 2022. Contrastive learning for sequential recommendation. In 2022 IEEE 38th international conference on data engineering (ICDE), 1259–1273. IEEE.
Zhang, Z.; Liu, Q.; Jiang, H.; Wang, F.; Zhuang, Y.; Wu, L.; Gao, W.; and Chen, E. 2023. FairLISA: Fair User Modeling with Limited Sensitive Attributes Information. In Thirty-seventh Conference on Neural Information Processing Systems.
Zhou, C.; Bai, J.; Song, J.; Liu, X.; Zhao, Z.; Chen, X.; and Gao, J. 2018a. ATRank: An attention-based user behavior modeling framework for recommendation. In Proceedings of the AAAI conference on artificial intelligence, volume 32.
Zhou, C.; Ma, J.; Zhang, J.; Zhou, J.; and Yang, H. 2021. Contrastive learning for debiased candidate generation in large-scale recommender systems. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 3985–3995.
Zhou, G.; Mou, N.; Fan, Y.; Pi, Q.; Bian, W.; Zhou, C.; Zhu, X.; and Gai, K. 2019. Deep interest evolution network for click-through rate prediction. In Proceedings of the AAAI conference on artificial intelligence, volume 33, 5941–5948.
Zhou, G.; Zhu, X.; Song, C.; Fan, Y.; Zhu, H.; Ma, X.; Yan, Y.; Jin, J.; Li, H.; and Gai, K. 2018b. Deep interest network for click-through rate prediction. In Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, 1059–1068.
Zhu, H.; Li, X.; Zhang, P.; Li, G.; He, J.; Li, H.; and Gai, K. 2018. Learning tree-based deep model for recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1079–1088.
Online Conversion Rate Prediction via Multi-Interval Screening and Synthesizing under Delayed Feedback

Qiming Liu1,2, Xiang Ao1,2,3*, Yuyao Guo1,2, Qing He1,2*
1 Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
2 University of Chinese Academy of Sciences, CAS, Beijing 100049, China
3 Institute of Intelligent Computing Technology, Suzhou, CAS
{liuqiming21s, aoxiang, guoyuyao21s, heqing}@ict.ac.cn

*Correspondence to Xiang Ao and Qing He.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Due to the widespread adoption of the cost-per-action (CPA) display strategy, which demands real-time conversion rate (CVR) prediction, delayed feedback is becoming one of the major challenges in online advertising. As the true labels of a significant quantity of samples are only available after long delays, the observed training data are usually biased, harming model performance. Recent studies show that integrating models with varying waiting windows for observing true labels is beneficial, but the aggregation framework remains far from reaching a consensus. In this work, we propose the Multi-Interval Screening and Synthesizing model (MISS for short) for online CVR prediction. We first design a multi-interval screening model with various output heads to produce accurate and distinctive estimates. Then a light-weight synthesizing model with an assembled training pipeline is applied to thoroughly exploit the knowledge of and relationships among heads, yielding reliable predictions. Extensive experiments on two real-world advertising datasets validate the effectiveness of our model.

Introduction

In the advertising market, advertisers purchase ads via real-time bidding platforms with various payment options such as cost-per-click (CPC) and cost-per-action (CPA) (Guo et al. 2023; Hojjat et al. 2017; Zhang, Yuan, and Wang 2014; Liu et al. 2023). Cost-per-action, which enables advertisers to bid on pre-defined conversions, e.g., purchase or registration, has become the primary objective due to its strong connection to the final return and its resistance to notorious frauds (Chapelle, Manavoglu, and Rosales 2014; Goldfarb and Tucker 2011). Therefore, precise estimation of the conversion rate (CVR) is a critical demand for all advertising platforms. In particular, online systems require a streaming serving paradigm that continuously predicts and learns from the latest data (He et al. 2016; Guo et al. 2022). As a consequence, the delayed feedback problem is becoming one of the imperative challenges.

Figure 1: Conversion distribution (bar) and accumulated conversion distribution (line) of the Criteo dataset.

Concretely, after an ad is clicked, it takes a delay ranging from several seconds to a few days to receive the corresponding conversion. For example, Figure 1 exhibits the feedback proportion with respect to the delay time on the Criteo dataset. Only about 60 percent of conversions happen in the first day after clicks, while the longest delay between click and conversion can reach 30 days. Online models that wait 30 days to train would suffer severely from data staleness. On the other hand, a shorter wait time introduces fake negatives, samples temporarily observed as negative that may convert later, which harms the label accuracy of training. In previous studies, various strategies have been put forward to address the delayed feedback issue.
Early strategies estimate the anticipated conversion delay via jointly trained models for accurate CVR prediction (Chapelle 2014; Yoshikawa and Imai 2018), but they require large amounts of offline data and are not suited for streaming deployment. In online learning, custom approaches choose a proper wait interval, known as the waiting window, to wait and observe samples (Yang et al. 2021), achieving a trade-off between data freshness and label accuracy. Several methods use no waiting window or one short window and rely on importance sampling (Bottou et al. 2013) to adjust sample weights, making fake negatives less influential. Despite their success, these methods still suffer from highly biased training data with Missing-Not-At-Random problems, resulting in sub-optimal performance (Chen et al. 2022; Gao and Yang 2022).

Delayed feedback endows samples with an extra dimension of waiting time; simply fixing one waiting interval and screening out a single data slice is inadequate. Recently, merging models with different waiting windows has become an alternative solution to the delayed feedback issue (Gao and Yang 2022; Hou et al. 2021; Li et al. 2021). Multi-task approaches categorize conversions into various bins and model them separately (Gao and Yang 2022; Wang et al. 2020). Several methods predict CVR under various waiting windows to help update the final estimate via conditional probability (Gu et al. 2021; Hou et al. 2021). Waiting windows stand for unique observation views that balance label accuracy and data freshness; models based on different waiting windows therefore give distinctive predictions and are naturally suitable for ensemble methods. (Li et al. 2021) comes up with an objective-oriented way to aggregate models, but the aggregation framework and an effective fusion strategy remain far from reaching a consensus.

In this paper, we propose an approach named MISS (short for the Multi-Interval Screening and Synthesizing model) for online CVR prediction. MISS provides a universal framework for measuring CVR from unique views and aggregating them effectively. First, a multi-head screening model is devised to estimate CVR under various waiting windows, with a different optimization task for each head. Additionally, a global weighting method is used to increase the accuracy of heads while preserving their individuality. The relationships among the heads are then investigated using a light-weight synthesizing model with normalization. The synthesizing model produces dynamic weights to aggregate predictions and is trained on an assembled pipeline comprising the most recent positive samples and real negatives. In this way, our model simultaneously replicates the ideal unbiased distribution and maintains data freshness. Experiments on two widely used benchmarks show that MISS significantly outperforms existing methods.

The main contributions are summarized as follows:
• We underline the significance of incorporating models with various waiting windows to screen out abundant information in the online CVR task, which offers unique perspectives and forms accurate predictions.
• We propose MISS, which offers a general aggregation framework that exploits the information behind various waiting windows. MISS also decreases the bias of integrated models while maintaining their uniqueness.
• We conduct extensive experiments on two real-world CVR prediction datasets and demonstrate the state-of-the-art performance of our method. Notably, MISS significantly outperforms other multi-task approaches.

Related Work

Delayed Feedback Models. Conversion modeling has been thoroughly studied in the literature for its high value in online advertising (Badanidiyuru et al. 2021; Choi et al. 2020; Lee et al. 2012). Under the hypothesis of an exponential delay distribution, Chapelle (2014) presented DFM, which contains two jointly trained models estimating the CVR and the delay time, respectively. Since an exponential distribution is not always realistic, DFM was later evolved into a non-parametric delayed feedback model, NoDeF (Yoshikawa and Imai 2018), free of assumptions about parametric distributions. Lately, studies such as GDFM (Yang and Zhan 2022) focus on specific scenarios with user behaviors and use such auxiliary sequence information to assist training (Su et al. 2021). Multi-task models (Gao and Yang 2022; Hou et al. 2021; Huangfu et al. 2022; Wang et al. 2020) can make use of conversions by categorizing them into different bins based on their delay time. (Li et al. 2021) developed a multi-head framework, FTP, to model conversions under various delay settings and aggregate head outputs by imitating an ideal CVR model. Existing multi-task models employ head outputs directly as predictions or auxiliary knowledge, whereas our MISS utilizes a synthesizing method to further explore the data pipeline characteristics of each head.

Due to their capacity to infer the real data distribution from observed biased data, unbiased estimation models have gained prominence in online CVR prediction (Gu et al. 2021; Ktena et al. 2019). Existing methods create a unique data pipeline with a well-designed importance weighting (Bottou et al. 2013) formula that regulates the weight of each sample, yielding an unbiased estimate of conversions (Yasui et al. 2020). To name a couple, the FNW method (Ktena et al. 2019) labels all new samples as negative and duplicates delayed positive samples with corrected labels when conversions arrive. To minimize fake negatives, ES-DFM (Yang et al. 2021) establishes a waiting window and only duplicates positive samples that do not receive a conversion within the waiting period. Recently, (Chen et al. 2022) proposed DEFUSE, which further separates observed samples into four groups with different importance weights. While the majority of unbiased approaches concentrate on sampling-strategy designs and add auxiliary models to precisely determine every sample weight, MISS instead applies a simple, low-cost way to reduce global bias.

Preliminary

Data Pipeline. In this work, we focus on the online CVR prediction problem. At time τ, the ground-truth dataset can be formulated as:

\hat{\mathcal{D}}_\tau = \{(c_i, v_i, x_i, \hat{y}_i)\}_{i=1}^{N_\tau}    (1)

where a sample contains the timestamp c_i at which it is clicked, the timestamp v_i at which a conversion happens, the feature x_i, and the ground-truth label \hat{y}_i \in \{0, 1\} indicating whether a conversion eventually takes place. \hat{\mathcal{D}}_\tau contains all samples clicked before time τ. Note that a sample that never converts is assigned v_i ≡ ∞, and only conversions that happen within the maximum attribution time d_max are regarded as valid. Thus we can define the label \hat{y}_i as:

\hat{y}_i = 1 if v_i \le c_i + d_{max}, and \hat{y}_i = 0 if v_i > c_i + d_{max}.    (2)
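The following is a minimal sketch of the attribution rule of Eq. (2) and of the relabeling caused by a finite waiting window d < d_max (formalized as Eq. (4) below); the function names are illustrative, not from the paper.

```python
import math

INF = math.inf  # v = inf means the sample never converts

def true_label(c, v, d_max):
    """Ground-truth label of Eq. (2): converted within the attribution window."""
    return 1 if v <= c + d_max else 0

def observed_label(c, v, d):
    """Label observed after waiting only d < d_max (Eq. (4) below): conversions
    arriving later than c + d are temporarily mislabeled as negative ("fake negatives")."""
    return 1 if v <= c + d else 0

# Example: clicked at t=0, converts at t=5; attribution window 30, waiting window 1.
assert true_label(0, 5, d_max=30) == 1
assert observed_label(0, 5, d=1) == 0  # a fake negative until duplication corrects it
```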
However, due to the delayed feedback, the full ground-truth dataset cannot be obtained directly. Models would have to wait up to the maximum attribution time d_max after a click to see a sample's real label, and such data staleness is intolerable in online services. Most online models therefore choose a shorter waiting window d < d_max to determine the label of each sample. The training dataset they observe is:

\mathcal{D}_{\tau,d} = \{(c_i, v_i, x_i, y_i)\}_{i=1}^{N_\tau}\big|_{c_i < \tau - d} = \{(c_i, v_i, x_i, y_i)\}_{i=1}^{N_{\tau-d}}    (3)

\mathcal{D}_{\tau,d} limits the training data to the subset logged before τ − d, ensuring that the label y of every sample is available at time τ:

y_i = 1 if v_i \le c_i + d, and y_i = 0 if v_i > c_i + d.    (4)

The observed real positive samples and real negative samples at time τ are:

\mathcal{P}_\tau = \hat{\mathcal{D}}_\tau \big|_{v_i \le \tau}    (5)
\mathcal{N}_\tau = (\hat{\mathcal{D}}_\tau \setminus \mathcal{P}_\tau) \big|_{c_i \le \tau - d_{max}}    (6)

Note that it takes the full maximum attribution time d_max to identify real negative samples. Some real positive samples in \mathcal{P}_\tau with a delay longer than d can be observed by time τ but were already falsely labeled as negative in \mathcal{D}_{\tau,d}. To narrow the gap between the training dataset \mathcal{D}_{\tau,d} and the ground-truth dataset \hat{\mathcal{D}}_\tau, duplication mechanisms re-ingest those observed delayed positive samples with corrected labels into the training pipeline. The adjusted training dataset is defined as:

\mathcal{D}^+_{\tau,d} = \mathcal{P}_\tau \big|_{c_i + d < v_i} \cup \mathcal{D}_{\tau,d}    (7)

Importance Sampling. The goal of the online CVR prediction problem is to continuously learn a function f with parameters θ that minimizes the ideal loss:

L_{ideal} = \sum_{(x_i, \hat{y}_i) \in \hat{\mathcal{D}}_\tau} \ell(\hat{y}_i, f_\theta(x_i)) = \mathbb{E}_{(x,y) \sim \hat{p}(x,y)}\, \ell(y, f_\theta(x))    (8)

where ℓ denotes the binary cross-entropy loss and \hat{p} is the ideal data distribution of \hat{\mathcal{D}}_\tau. Compared with the ideal dataset, the adjusted training dataset \mathcal{D}^+_{\tau,d} lacks freshness and contains fake negatives. Models trained on such a biased data distribution can nevertheless approximate the ground-truth distribution via importance sampling (Bottou et al. 2013). We define q as the data distribution of the training dataset \mathcal{D}^+_{\tau,d}. Following previous work (Ktena et al. 2019; Yang et al. 2021), we assume \hat{p}(x) \approx q(x) and derive the ideal loss as:

L_{ideal} = \mathbb{E}_{(x,y) \sim \hat{p}(x,y)}\, \ell(y, f_\theta(x))
          = \int \hat{p}(x)\, dx \int \hat{p}(y|x)\, \ell(y, f_\theta(x))\, dy
          \approx \int q(x)\, dx \int q(y|x)\, \frac{\hat{p}(y|x)}{q(y|x)}\, \ell(y, f_\theta(x))\, dy
          \approx \mathbb{E}_{(x,y) \sim q(x,y)}\, \frac{\hat{p}(y|x)}{q(y|x)}\, \ell(y, f_\theta(x))
          \approx \sum_{(x_i, y_i) \in \mathcal{D}^+_{\tau,d}} w(x_i, y_i)\, \ell(y_i, f_\theta(x_i))    (9)

By controlling the weight term w(x, y) in Eq. (9), the bias from training on \mathcal{D}^+_{\tau,d} can be reduced. Existing approaches use the output of their CVR model plus extra models to calculate an accurate weight w(x, y) for each sample. In contrast, we present a light-weight technique that lessens the bias of heads globally while maintaining their distinction.
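As a bridge into the methodology, here is a minimal sketch of the importance-weighted binary cross-entropy of Eq. (9); how the per-sample weights are chosen is exactly what distinguishes the methods above, and the names here are illustrative.

```python
import tensorflow as tf

def weighted_bce(y_true, y_pred, weights):
    """Importance-weighted loss of Eq. (9).

    y_true:  [n] observed labels from the adjusted pipeline D+_{tau,d} (float).
    y_pred:  [n] model outputs f_theta(x) in (0, 1).
    weights: [n] per-sample importance weights w(x, y) (method-specific).
    """
    # Element-wise binary cross-entropy, one value per sample.
    bce = tf.keras.losses.binary_crossentropy(y_true[:, None], y_pred[:, None])
    return tf.reduce_sum(weights * bce)
```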
Methodology

In this section, we present our approach MISS for the delayed feedback issue. First, we introduce the multi-interval screening model, which consists of shared neural network layers and a predefined number of output heads trained on data pipelines with various waiting windows. Then, adopting an assembled training pipeline, we demonstrate the synthesizing aggregation strategy that thoroughly exploits the knowledge of the heads. Lastly, a low-cost method globally enhances the weights of real positive samples, lowering the prediction bias caused by delayed feedback. Figure 2 illustrates the design of MISS.

Figure 2: An illustration of MISS, including the multi-interval screening model, the assembled-pipeline synthesizing model, and their distinct training data pipelines. The screening model has shared bottom layers and multiple heads trained on various pipelines. Their predictions are concatenated as the input of the synthesizing model, which aggregates the final estimate. DP stands for delayed positive samples.

Multi-Interval Screening Modeling

Recall that the delayed feedback problem changes sample labels over time and makes it difficult to learn the real distribution of online data. In recent years, it has been observed that integrating models with multiple waiting windows helps address CVR prediction under delayed feedback (Hou et al. 2021; Li et al. 2021). While shorter waiting windows let models capture the most recent information, models with longer waiting windows are more likely to see accurate labels. For a training dataset \mathcal{D}^+_{\tau,d}, the length of d reflects the trade-off between label accuracy and data freshness, both of which matter for performance. We design the multi-interval screening model to balance these needs. The model consists of shared bottom layers, including an embedding layer and hidden layers, and numerous output heads that independently predict the probability of conversion. Concretely, we allocate different waiting windows d_1, d_2, ..., d_N to the output heads h_1, h_2, ..., h_N on top of the model, assuming d_max ≥ d_1 > d_2 > ... > d_N > 0. Each head h_i is trained on its own data pipeline \mathcal{D}^+_{\tau,d_i}. The loss function of the multi-interval screening model is:

L_{heads} = \sum_{1 \le i \le N} \sum_{(x_j, y_j) \in \mathcal{D}^+_{\tau,d_i}} \ell(y_j, h_i(s(x_j))) = \sum_{1 \le i \le N} \sum_{(x_j, y_j) \in \mathcal{D}^+_{\tau,d_i}} \ell(y_j, y_{h_i})    (10)

where s denotes the shared layers. The training gradients from each pipeline update only the parameters of the corresponding output head and the shared layers. We train heads on the adjusted dataset with the duplication technique rather than the naive dataset \mathcal{D}_{\tau,d_i}, reducing the discrepancy between training data and real data. Additionally, real positives repeatedly update the weights of the shared layers, reducing the impact of fake negatives. Heads with adjusted training data produce more robust predictions. This is one of the major differences between our method and previous studies.
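Below is a minimal Keras sketch of the shared-bottom, multi-head screening model of Eq. (10). Layer sizes and names are illustrative (the paper's base model is a (128, 128) DNN with Leaky ReLU and embedding inputs, simplified here), and routing each pipeline's batches to its own head is left to the training loop.

```python
import tensorflow as tf

def build_screening_model(input_dim, n_heads=5):
    """Shared bottom s(x) with N sigmoid heads h_1..h_N (Eq. 10)."""
    x = tf.keras.Input(shape=(input_dim,))
    shared = tf.keras.layers.Dense(128, activation='relu')(x)
    shared = tf.keras.layers.Dense(128, activation='relu')(shared)
    # One head per waiting window d_i; head i is trained only on D+_{tau,d_i}.
    heads = [tf.keras.layers.Dense(1, activation='sigmoid', name=f'head_{i}')(shared)
             for i in range(n_heads)]
    return tf.keras.Model(inputs=x, outputs=heads)

model = build_screening_model(input_dim=64)
# A batch from pipeline i would update head_i and the shared layers only,
# e.g., by masking out the losses of the other heads for that batch.
```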
Assembled Pipeline Aggregation

The multi-head architecture and adjusted data improve the accuracy of each prediction, while the multiple waiting windows still ensure that the heads learn varied knowledge. Each head represents a distinct trade-off between introducing fake negatives and sacrificing data freshness, and thus contributes differently to the final prediction. For instance, if the data distribution did not vary over time, the head with the longest waiting window would give the best prediction because it adds the fewest fake negatives. Otherwise, only the heads with short waiting windows could detect a trend in time and update their layers when many rapid conversions arrive at once. This example also indicates the importance of analyzing the relationships among predictions, e.g., the maximum value, since a large value predicted by a head with a short waiting window may signal the arrival of instant conversions. The diverse properties of the heads make them naturally suitable for ensemble methods such as bagging and stacking (Breiman 2004; Wolpert 1992), which combine the outputs of several models to produce more reliable estimates.

Here, we utilize a light-weight model to generate dynamic weights for each head and perform aggregation. After the multi-interval screening model predicts, we concatenate the head outputs as the input x_{pred} of the synthesizing model. To compare predictions from different heads and provide more information, we also apply normalization to generate the extra input x_{norm}:

x_{pred} = [y_{h_1}, y_{h_2}, \ldots, y_{h_N}]_{concat}    (11)
x_{norm} = [x_{pred}]_{norm}    (12)
x = [x_{pred}, x_{norm}]_{concat}    (13)

Instead of intermediate results from the embedding or hidden layers, we directly use the head predictions as input, which decreases model complexity without losing valuable information; we validate the contributions of head-prediction and intermediate-result inputs in the ablation study. We apply dense layers of small size, followed by a softmax activation, to generate a set of dynamic weights w = [w_1, w_2, ..., w_N], and produce the final estimate y_s:

y_s = \sum_{i=1}^{N} w_i \cdot y_{h_i}    (14)

The synthesizing model requires a reliable training pipeline. Previous methods choose \mathcal{D}_{\tau,d_{max}}, guaranteeing label accuracy at the cost of training on old samples from timestamp τ − d_max. Our method instead builds an assembled data pipeline M_τ comprising the latest positive samples in \mathcal{P}_\tau and the real negatives in \mathcal{N}_\tau. The negative samples in \mathcal{N}_\tau are the same as in \mathcal{D}_{\tau,d_{max}}, as identifying them requires the full maximum attribution time d_max. Positive samples, by contrast, are confirmed as soon as their conversions arrive. We therefore substitute the latest converted samples from \mathcal{P}_\tau for the old positive samples from \mathcal{D}_{\tau,d_{max}}: a positive sample with delay time d clicked at τ − d_max is replaced by a positive with the same delay time but clicked at τ − d and converted at τ. Intuitively, data freshness improves due to the ingestion of new samples. To further illustrate the effect of updating positive samples, we compute on Criteo the accumulated Kullback-Leibler divergence between the conversion distribution of ideal positive samples from \hat{\mathcal{D}}_\tau and the distributions of old and latest positives from \mathcal{D}_{\tau,d_{max}} and \mathcal{P}_\tau, respectively. The results are shown in Figure 3: the latest positive samples from \mathcal{P}_\tau have a distribution closer to the ideal one, leading to better performance of the synthesizing model.

Figure 3: Accumulated KL divergence of positive samples in \mathcal{D}_{\tau,d_{max}} and M_\tau.

Global Positive Weighting

The synthesizing model above produces robust predictions through its dynamic weighting aggregation strategy, but there is still a risk of underestimating CVR. Note that each head h_i of the multi-interval screening model is trained on a pipeline \mathcal{D}^+_{\tau,d_i}; if d_i < d_max, extra fake negatives decrease the global prediction values of the heads, and the weighted ensemble of heads that underestimate CVR still suffers from the influence of fake negatives. To tackle this, we use the importance sampling approach (Bottou et al. 2013) to globally amplify the weights of all positive samples. Following Eq. (9), the weight of a positive sample from the pipeline of head h_i is:

w(x, y, d) = \frac{\hat{p}(y=1|x)}{q(y=1|x)}    (15)

where d is the delay time of the positive sample. Since \mathcal{D}^+_{\tau,d_i} inserts duplicated positives so as to cover all positives, we can normalize \hat{p}(y=1) to obtain q(y=1|x):

q(y=1|x) = \frac{\hat{p}(y=1|x)}{1 + \hat{p}(y=1|x)\, \hat{p}(d > d_i \mid y=1, x)}    (16)

So the positive weights can be reformulated as:

w(x, y, d) = 1 + \hat{p}(y=1|x)\, \hat{p}(d > d_i \mid y=1, x) = 1 + \alpha\, \frac{\sum_{(x_j, y_j, d_j) \in \mathcal{P}_\tau} \mathbb{1}(d_j > d)}{|\mathcal{P}_\tau|}    (17)

Recall that positive samples from \mathcal{P}_\tau form a delay distribution relatively close to the ideal one, so we count the number of delayed positives in \mathcal{P}_\tau to approximate \hat{p}(d > d_i | y=1, x). Moreover, as the ideal CVR prediction \hat{p}(y=1|x) is difficult to calculate, we replace it with a predefined hyperparameter α ∈ [0, 1] reflecting the global CVR. A higher α leads to a larger amplification of the positive-sample weights. For simplicity and convenience, we use the same α for every head in our experiments.
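A minimal sketch of the weight in Eq. (17), assuming the delays of recently observed positives in P_tau are kept in an array; alpha plays the role described above, and the names are illustrative.

```python
import numpy as np

def positive_weight(d, observed_delays, alpha):
    """Global positive weight of Eq. (17).

    d:               delay of the positive sample being weighted.
    observed_delays: delays of the real positives in P_tau (empirical proxy
                     for p(delay > d | y=1)).
    alpha:           predefined hyperparameter standing in for p(y=1|x).
    """
    frac_longer = np.mean(np.asarray(observed_delays) > d)
    return 1.0 + alpha * frac_longer

# Example: with alpha=0.3 and half the observed positives delayed longer than d,
# the sample's loss weight becomes 1 + 0.3 * 0.5 = 1.15.
print(positive_weight(d=2.0, observed_delays=[0.5, 1.0, 3.0, 5.0], alpha=0.3))
```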
A short discussion. There are several differences between earlier studies and MISS in terms of weighted training. Previous methods focus on obtaining precise per-sample weights to enhance model performance: they train auxiliary models to help compute weight terms such as \hat{p}(d > d_i | y=1, x) for each individual sample. Setting aside the extra model cost, predicting those terms is as difficult as the CVR prediction itself. Contrarily, MISS only aims to lessen the global bias, relying on the subsequent synthesizing model to increase accuracy; no extra weight-calculation models are used. Besides, the static setting of weights (e.g., positive samples from the same batch share the same weight) avoids excessively affecting the ranking ability of the heads, maintaining their distinction for subsequent aggregation.

Experiments

In this section, we first provide an overview of the design and implementation of the experiments, and then validate our proposed model on two representative public advertising datasets, responding to the following research questions:
• RQ1: How does MISS perform on conversion rate prediction tasks compared to state-of-the-art models?
• RQ2: How do the adjusted datasets and the synthesizing model affect the performance of MISS, respectively?
• RQ3: How does the global positive weighting help decrease the global bias?

Datasets

The statistics of the two datasets are given in Table 1. We follow the original maximal attribution window setting of each dataset in our experiments.

Criteo Conversion Logs. Criteo¹ is a widely used dataset for the CVR prediction task (Chen et al. 2022). It contains 60 days of data with a 30-day attribution window d_max.

Tencent Advertising Algorithm Competition 2017. The Tencent dataset² includes 9 days of data with a 5-day attribution window d_max. The dataset contains 22 million samples.

Experimental Settings

Online Simulation. The online streaming process requires models to keep predicting the CVR of new samples and then training on them. Following previous work (Chen et al. 2022; Yang et al. 2021), we separate each dataset into a pretraining part and a streaming part. Methods may use all data from the former for pretraining as needed; models are then evaluated and updated hour by hour on the streaming part. The online training data contain only information available at the current timestamp. We adopt three distinct evaluation metrics: the area under the ROC curve (AUC), the negative log-likelihood (NLL), and the area under the precision-recall curve (PR-AUC).

Compared Baselines.
Oracle: An ideal model trained on dataset \hat{\mathcal{D}}_\tau with ground-truth labels, representing the upper bound.
Pretrain: A model trained only on the pretraining data, representing the lower bound.
Vanilla: A model trained with a fine-tuned waiting window.
FNW (Ktena et al. 2019): A model trained with a duplication mechanism and the fake-negative weighted loss.
FNC (Ktena et al. 2019): A model trained with a duplication mechanism and fake-negative calibration.
ES-DFM (Yang et al. 2021): A model trained with a duplication mechanism using the ES-DFM loss.
DEFUSE (Chen et al. 2022): A model trained with a duplication mechanism using the DEFUSE loss.
MTDFM (Huangfu et al. 2022): A two-task model trained with a duplication mechanism.
FTP (Li et al. 2021): A model trained with a multi-task learning mechanism and an aggregation strategy.

We choose DEFUSE instead of Bi-DEFUSE (Chen et al. 2022) because the former achieved much better results on Criteo in the original paper. We also compare in detail with existing multi-task approaches such as FTP.

Parameter Settings. We implement MISS in TensorFlow and the source code will be available on GitHub³. A DNN with fixed hidden sizes (128, 128) is used as the base model for all methods; each hidden layer is followed by the Leaky ReLU activation function (Maas et al. 2013). The synthesizing model of MISS has only one hidden layer of size [32]. L2 regularization is set to 10^-6 on the Criteo dataset and 10^-7 on the Tencent dataset. The models are updated with the Adam optimizer (Kingma and Ba 2014). For a fair comparison, we grid-search the best learning rate among {0.0001, 0.0005, 0.001}, and tune the waiting windows of previous models following their original papers. MISS and FTP use the same waiting windows: [1D, 7D, 14D, 21D, 30D] on Criteo and [1H, 6H, 24H, 48H, 120H] on Tencent.

¹ https://labs.criteo.com/2013/12/conversion-logs-dataset/
² https://algo.qq.com/?lang=en
³ https://github.com/NealWalker/MISS

Table 1: Statistics of the Criteo and Tencent datasets.
Dataset   #Features  #Conversions  #Samples     Avg. CVR  Log period  Attribution period
Criteo    17         3,619,793     15,898,863   0.2277    60 days     30 days
Tencent   19         624,411       22,601,402   0.0276    9 days      5 days

Table 2: Performance comparisons of the proposed model with baseline models on the AUC, NLL, and PR-AUC metrics. Pretrain and Oracle correspond to 0% and 100%, respectively, with their absolute performance in parentheses. The best value in each column is in boldface and the second best is underlined. * indicates statistically significant improvement over the best baseline (t-test at p-value 0.05).
                             Criteo                              Tencent
Method                       AUC           NLL           PR-AUC        AUC           NLL           PR-AUC
Pretrain                     0.0% (0.833)  0.0% (0.407)  0.0% (0.628)  0.0% (0.775)  0.0% (0.108)  0.0% (0.089)
Vanilla                      27.5%         33.9%         26.5%         70.0%         60.3%         75.1%
FNW (Ktena et al. 2019)      46.0%         49.2%         20.1%         74.3%         70.7%         61.1%
FNC (Ktena et al. 2019)      44.9%         22.8%         7.4%          73.0%         67.2%         55.7%
ES-DFM (Yang et al. 2021)    64.6%         74.0%         66.4%         70.2%         65.5%         56.6%
DEFUSE (Chen et al. 2022)    66.9%         -19.3%        66.8%         70.0%         -70.7%        58.4%
MTDFM (Huangfu et al. 2022)  65.8%         56.8%         65.4%         75.6%         68.4%         70.8%
FTP (Li et al. 2021)         59.5%         50.8%         54.8%         79.9%         72.4%         72.8%
MISS                         83.7%*        83.9%*        78.1%*        86.0%*        82.8%*        88.2%*
Oracle                       100% (0.851)  100% (0.382)  100% (0.656)  100% (0.818)  100% (0.102)  100% (0.111)
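Table 2 reports relative improvements over the Pretrain-to-Oracle gap (the reporting protocol is described in the next subsection). A minimal sketch of that normalization, with a hypothetical absolute AUC value for illustration:

```python
def relative_improvement(value, pretrain, oracle):
    """Percentage of the Pretrain-to-Oracle gap recovered, as reported in Table 2."""
    return 100.0 * (value - pretrain) / (oracle - pretrain)

# Hypothetical example using the Criteo AUC anchors from Table 2
# (Pretrain 0.833, Oracle 0.851): an absolute AUC of 0.848 maps to ~83.3%.
print(f"{relative_improvement(0.848, 0.833, 0.851):.1f}%")
```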
Main Experiments (RQ1)

We execute 5 random runs on Criteo and Tencent to illustrate the overall performance of MISS and the baselines; the averaged results with significance tests are given in Table 2. Following (Gu et al. 2021; Yang et al. 2021; Yang and Zhan 2022), we report the relative improvement over the performance gap between the Pretrain model and the Oracle model. We have the following findings from the comparison:
• Our MISS method performs significantly better than the baselines on both datasets. In particular, MISS outperforms the strongest baselines w.r.t. relative AUC by 16.8% and 6.1% on the Criteo and Tencent datasets, respectively; similar improvements are observed on the NLL and PR-AUC metrics. Our strategy yields remarkable improvements because it applies a multi-head design to learn distinctive distributions and provides a new, robust synthesizing model to aggregate exploitable information. Besides, compared with ES-DFM and DEFUSE, which require a whole extra auxiliary model with a heavy embedding layer, the few extra parameters introduced by MISS are entirely affordable.
• Vanilla can perform better than expected if its waiting window is relatively long, e.g., 10 days, suggesting the value of maintaining a long waiting window to observe data, which MISS covers. By introducing importance sampling or calibration, FNW and FNC reach better performance; this advantage is enlarged in ES-DFM by auxiliary models that calculate importance weights. MTDFM applies an extra head for prediction calibration, achieving better AUC. DEFUSE proposes a hidden model to precisely determine importance weights and reaches the best AUC and PR-AUC among baselines on Criteo. However, the sophisticated terms in its calculation cause a relatively high NLL.
• The Tencent dataset represents real industrial scenarios with massive data but scant feedback (an average conversion rate of 2.76%). Its low CVR and long-tail distribution weaken the strengths of weighting, leading to mediocre results for importance sampling methods. FTP and MISS, on the other hand, have lengthy waiting windows to model the distribution without adding many fake negatives; they do not rely on importance sampling and are resistant to scenarios with different CVRs and distributions. Lastly, while the physical meaning of FTP's heads is the CVR restricted to various delay-time windows, the meaning in MISS is the ground-truth CVR obtained through various sampling and weighting strategies, with the strategies determined by the delay time. As a result, FTP heads trained on samples from shorter waiting windows inevitably underestimate the CVR, because these heads observe only part of the real positive samples. The synthesizing model, with its specially designed pipeline and reliable heads, helps MISS produce predictions close to the ground-truth CVR and outperform FTP.

Table 3: Ablation study of MISS on Criteo.
Method   AUC (↑)  NLL (↓)  PR-AUC (↑)
MISS     0.8477   0.3856   0.6495
MISS_O   0.8451   0.3944   0.6435
MISS_L   0.8390   0.3950   0.6382
MISS_A   0.8469   0.3860   0.6489
MISS_R   0.8459   0.3886   0.6430
MISS_H   0.8477   0.3857   0.6496

Ablation Study (RQ2)

In this section, we conduct ablation studies to validate the effects of the adjusted datasets and the synthesizing model. We evaluate the following formulations of MISS on Criteo:
MISS_O: MISS without the duplication mechanism.
MISS_L: MISS without the synthesizing model, outputting the last head's predictions.
MISS_A: MISS without the synthesizing model, outputting the average of head predictions.
MISS_R: MISS with a synthesizing model trained on \mathcal{D}_{\tau,d_{max}} instead.
MISS_H: MISS taking both head predictions and intermediate results as inputs for the synthesizing model.

The results are shown in Table 3. Removing the duplication mechanism leads to a significant performance drop, indicating the importance of adjusting the training data of heads when they are directly used for prediction. MISS_A outperforms MISS_L by a large margin even with its naive aggregation, and this advantage is further developed by the dynamic aggregation strategy of the proposed model. MISS_R obtains mediocre PR-AUC and NLL: a synthesizing model trained on \mathcal{D}_{\tau,d_{max}} gives dominant weight to the head with the longest waiting window because of the similarity of their training data, ignoring the other heads. In contrast, our synthesizing model with an assembled pipeline comprehensively exploits the value of every head. Finally, we include the intermediate results from the hidden layer as additional inputs for the synthesizing model (MISS_H); the similar results suggest that the head predictions alone are sufficient and the extra information is unnecessary.

Figure 4: MISS trained with various α on Criteo.

Global Positive Weighting (RQ3)

In global positive weighting, a hyperparameter α controls the degree of positive weighting. Here we evaluate how the value of α influences the bias of our predictions. We use the NLL metric to illustrate the bias because of its sensitivity to the absolute value of predictions; results for the other metrics are omitted as they show similar trends. According to Figure 4, the NLL of MISS keeps decreasing as α increases, suggesting predictions of higher accuracy and confidence. Notably, α replaces the ideal CVR prediction value in Eq. (17), yet a value higher than the global average CVR can actually reach better performance. One explanation is that, since we do not decrease the weights of negatives, excessive weights for positive samples can achieve a similar effect.

Conclusion

In this paper, we concentrated on the online CVR prediction task and proposed the MISS approach to deal with delayed feedback. We underline the value of integrating observations from various waiting windows and design a general framework that synthesizes predictions by investigating their relationships on assembled unbiased data. MISS also decreases model bias via a universal weighting strategy with an assembled training pipeline. Experiments on two real-world datasets demonstrate the significance of our method.

Acknowledgements

The research work is supported by National Key R&D Plan No. 2022YFC3303302, the National Natural Science Foundation of China under Grant No. 61976204, and the CAAI Huawei MindSpore Open Fund. Xiang Ao is also supported by the Project of Youth Innovation Promotion Association CAS and the Beijing Nova Program.

References

Badanidiyuru, A.; Evdokimov, A.; Krishnan, V.; Li, P.; Vonnegut, W.; and Wang, J. 2021. Handling many conversions per click in modeling delayed feedback. arXiv preprint arXiv:2101.02284.
Bottou, L.; Peters, J.; Quiñonero-Candela, J.; Charles, D. X.; Chickering, D. M.; Portugaly, E.; Ray, D.; Simard, P.; and Snelson, E. 2013. Counterfactual Reasoning and Learning Systems: The Example of Computational Advertising. Journal of Machine Learning Research, 14(11).
Breiman, L. 2004. Bagging predictors. Machine Learning, 24: 123–140.
Chapelle, O. 2014. Modeling delayed feedback in display advertising. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, 1097–1105.
Chapelle, O.; Manavoglu, E.; and Rosales, R. 2014. Simple and Scalable Response Prediction for Display Advertising. ACM Transactions on Intelligent Systems and Technology (TIST), 5: 1–34.
Chen, Y.; Jin, J.; Zhao, H.; Wang, P.; Liu, G.; Xu, J.; and Zheng, B. 2022. Asymptotically Unbiased Estimation for Delayed Feedback Modeling via Label Correction. In Proceedings of the ACM Web Conference 2022, 369–379.
Choi, Y.; Kwon, M.; Park, Y.; Oh, J.; and Kim, S. 2020. Delayed Feedback Model with Negative Binomial Regression for Multiple Conversions.
Gao, H.; and Yang, Y. 2022. Multi-Head Online Learning for Delayed Feedback Modeling. arXiv preprint arXiv:2205.12406.
Goldfarb, A.; and Tucker, C. 2011. Online display advertising: Targeting and obtrusiveness. Marketing Science, 30(3): 389–404.
Gu, S.; Sheng, X.-R.; Fan, Y.; Zhou, G.; and Zhu, X. 2021. Real Negatives Matter: Continuous Training with Real Negatives for Delayed Feedback Modeling. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2890–2898.
Guo, Y.; Ao, X.; Liu, Q.; and He, Q. 2023. Leveraging Post-Click User Behaviors for Calibrated Conversion Rate Prediction Under Delayed Feedback in Online Advertising. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management.
Guo, Y.; Li, H.; Ao, X.; Lu, M.; Liu, D.; Xiao, L.; Jiang, J.; and He, Q. 2022. Calibrated Conversion Rate Prediction via Knowledge Distillation under Delayed Feedback in Online Advertising. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 3983–3987.
He, X.; Zhang, H.; Kan, M.-Y.; and Chua, T.-S. 2016. Fast matrix factorization for online recommendation with implicit feedback. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, 549–558.
Hojjat, S. A.; Turner, J. G.; Cetintas, S.; and Yang, J. 2017. A Unified Framework for the Scheduling of Guaranteed Targeted Display Advertising Under Reach and Frequency Requirements. Oper. Res., 65: 289–313.
Hou, Y.; Zhao, G.; Liu, C.; Zu, Z.; and Zhu, X. 2021. Conversion Prediction with Delayed Feedback: A Multi-task Learning Approach. In 2021 IEEE International Conference on Data Mining (ICDM), 191–199.
Huangfu, Z.; Zhang, G.-D.; Wu, Z.; Wu, Q.; Zhang, Z.; Gu, L.; Zhou, J.; and Gu, J. 2022. A Multi-Task Learning Approach for Delayed Feedback Modeling. In Companion Proceedings of the Web Conference 2022, WWW '22, 116–120.
Kingma, D.; and Ba, J. 2014. Adam: A Method for Stochastic Optimization. International Conference on Learning Representations.
Ktena, S. I.; Tejani, A.; Theis, L.; Myana, P. K.; Dilipkumar, D.; Huszár, F.; Yoo, S.; and Shi, W. 2019. Addressing delayed feedback for continuous training with neural networks in CTR prediction. In Proceedings of the 13th ACM conference on recommender systems, 187–195.
Lee, K.-c.; Orten, B.; Dasdan, A.; and Li, W. 2012. Estimating conversion rate in display advertising from past performance data. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, 768–776.
Li, H.; Pan, F.; Ao, X.; Yang, Z.; Lu, M.; Pan, J.; Liu, D.; Xiao, L.; and He, Q. 2021. Follow the Prophet: Accurate Online Conversion Rate Prediction in the Face of Delayed Feedback. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval.
Liu, Q.; Li, H.; Ao, X.; Guo, Y.; Dong, Z.; Zhang, R.; Chen, Q.; Tong, J.; and He, Q. 2023. Online Conversion Rate Prediction via Neural Satellite Networks in Delayed Feedback Advertising. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1406–1415. Maas, A. L.; Hannun, A. Y.; Ng, A. Y.; et al. 2013. Rectifier nonlinearities improve neural network acoustic models. In Proc. icml, volume 30, 3. Citeseer. Su, Y.; Zhang, L.; Dai, Q.; Zhang, B.; Yan, J.; Wang, D.; Bao, Y.; Xu, S.; He, Y.; and Yan, W. 2021. An attentionbased model for conversion rate prediction with delayed feedback via post-click calibration. In Proceedings of the 29th IJCAI, 3522–3528. Wang, Y.; Zhang, J.; Da, Q.; and Zeng, A. 2020. Delayed feedback modeling for the entire space conversion rate prediction. arXiv preprint arXiv:2011.11826. Wolpert, D. H. 1992. Stacked generalization. Neural Networks, 5: 241–259. Yang, J.-Q.; Li, X.; Han, S.; Zhuang, T.; Zhan, D.-C.; Zeng, X.; and Tong, B. 2021. Capturing delayed feedback in conversion rate prediction via elapsed-time sampling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 4582–4589. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8803 Yang, J.-Q.; and Zhan, D.-C. 2022. Generalized Delayed Feedback Model with Post-Click Information in Recommender Systems. In NeurIPS 2022. Yasui, S.; Morishita, G.; Komei, F.; and Shibata, M. 2020. A feedback shift correction in predicting conversion rates under delayed feedback. In Proceedings of The Web Conference 2020, 2740–2746. Yoshikawa, Y.; and Imai, Y. 2018. A nonparametric delayed feedback model for conversion rate prediction. arXiv preprint arXiv:1802.00255. Zhang, W.; Yuan, S.; and Wang, J. 2014. Optimal real-time bidding for display advertising. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, 1077–1086. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8804
2024
978
18,826
KG-TREAT: Pre-training for Treatment Effect Estimation by Synergizing Patient Data with Knowledge Graphs
Ruoqi Liu1, Lingfei Wu2, Ping Zhang1
1The Ohio State University 2Anytime.AI
[email protected], [email protected], [email protected]
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Treatment effect estimation (TEE) is the task of determining the impact of various treatments on patient outcomes. Current TEE methods fall short due to reliance on limited labeled data and challenges posed by sparse and high-dimensional observational patient data. To address these challenges, we introduce a novel pre-training and fine-tuning framework, KG-TREAT, which synergizes large-scale observational patient data with biomedical knowledge graphs (KGs) to enhance TEE. Unlike previous approaches, KG-TREAT constructs dual-focus KGs and integrates a deep bi-level attention synergy method for in-depth information fusion, enabling distinct encoding of treatment-covariate and outcome-covariate relationships. KG-TREAT also incorporates two pre-training tasks to ensure a thorough grounding and contextualization of patient data and KGs. Evaluation on four downstream TEE tasks shows KG-TREAT's superiority over existing methods, with an average improvement of 7% in Area under the ROC Curve (AUC) and 9% in Influence Function-based Precision of Estimating Heterogeneous Effects (IF-PEHE). The effectiveness of our estimated treatment effects is further affirmed by alignment with established randomized clinical trial findings.

Introduction
Treatment effect estimation (TEE), which identifies the causal effects of treatment options on patient outcomes given observational covariates, is a pivotal task in healthcare (Glass et al. 2013). Yet, existing TEE methods (Shalit, Johansson, and Sontag 2017; Shi, Blei, and Veitch 2019; Zhang et al. 2022) are limited in both generalizability and accuracy due to their dependence on small, task-specific datasets that might not fully encompass the complex relationships among covariates, treatments, and outcomes. To address this, one might consider deploying foundation models (Devlin et al. 2019; Brown et al. 2020; Bommasani et al. 2021) trained on large datasets to improve generalizability. However, the application of foundation models to TEE is not straightforward. Medical data, often characterized by high-dimensional and sparse medical concepts, continue to pose challenges to these models (Huang, Altosaar, and Ranganath 2019; Rasmy et al. 2021). Even with large-scale datasets, developing a domain-specific understanding of these medical concepts and identifying potential confounders to reduce estimation bias remains difficult. The sheer volume of data does not necessarily equate to rich, specific instances that the model needs to learn effectively. Therefore, we turn to biomedical knowledge graphs (KGs) - structured representations of diverse medical concepts and their relations. By synergizing potentially sparse patient data with domain-specific knowledge from KGs, we can derive meaningful insights and identify key confounders for adjustment in TEE.

Despite its potential, synergizing patient data with KGs poses several challenges. Firstly, utilization of the entire KG can introduce noise that is irrelevant to patient data. While recent works have suggested constructing personalized knowledge graphs (PKGs) by retrieving relevant medical information for each patient from the entire KG to mitigate noise
(Ye et al. 2021; Xu et al. 2023; Yang et al. 2023), these methods fail to distinguish among various types of medical codes, such as treatments, covariates, and outcomes. This issue can lead to bias and spurious correlations in TEE. Secondly, the primary application scenario of existing works is clinical risk prediction (Choi et al. 2017; Ma et al. 2018a), neglecting the encoding of vital causal relationships among covariates, treatments, and outcomes unique to TEE. Additionally, existing methods (Ye et al. 2021; Xu et al. 2023) often incorporate patient data and KGs only at the final prediction stage, leading to a superficial combination and inefficient information utilization.

To address these challenges, we propose a novel pre-training and fine-tuning framework for TEE, named KG-TREAT, by synergizing patient data and KGs. Firstly, we address the bias issue by constructing dual-focus PKGs: one focusing on the relationship between treatment and covariates (treatment-covariate PKG) and the other focusing on the relationship between outcome and covariates (outcome-covariate PKG). These PKGs explicitly capture and represent the key relationships and dependencies among treatments, outcomes, and covariates, thereby mitigating the risk of spurious correlations. Secondly, we present a deep bi-level attention synergy method, namely DIVE. The first level of attention applies a treatment (outcome) attention mechanism to patient data and specific treatment (outcome) information from the PKGs, explicitly encoding the complex relationships among covariates, treatments, and outcomes. The second level employs a multi-layer co-attention mechanism to patient data and its corresponding PKGs, ensuring deep information synergy between these modalities.

Figure 1: A detailed illustration of KG-TREAT. (a) Dual-focus PKGs are constructed by extracting relevant treatment-covariate and outcome-covariate information for each individual patient from the KG. (b) The model is pre-trained by synergizing patient data with corresponding PKGs through the proposed deep bi-level attention synergy method. Two pre-training tasks are unified to learn contextualized representations. (c) The pre-trained model is fine-tuned on downstream data for TEE.

KG-TREAT is first pre-trained by combining two self-supervised tasks: masked code prediction and link prediction. These tasks ensure that the patient data and KGs are fully grounded and contextualized to each other. The model is then fine-tuned on downstream data for TEE. Our contributions include:
• We propose KG-TREAT, a novel pre-training and fine-tuning framework that integrates observational patient data with KGs for TEE.
This includes the construction of dual-focus PKGs and the introduction of a deep bi-level attention synergy method, DIVE.
• We compile a large dataset of 3M patient records from MarketScan Research Databases (https://www.merative.com/real-world-evidence) and a KG (300K nodes, 1M edges) from the Unified Medical Language System (UMLS) (Bodenreider 2004) for pre-training, and 4 TEE fine-tuning datasets for assessing treatment effectiveness in coronary artery disease (CAD).
• Comprehensive experiments demonstrate the superior performance of KG-TREAT over existing TEE methods. It shows an average improvement of 7% in AUC for outcome prediction and a 9% improvement in IF-PEHE for TEE compared to the best baseline across four tasks.
• A case study shows that the estimated treatment effects align with findings from established randomized controlled trials (RCTs), further demonstrating the effectiveness of our approach in real-world use.

Preliminary
Patient Data. A patient record is a collection of multiple visits, denoted as $x = \{x_1, \dots, x_T\}$. Each visit is characterized by a series of medications $m_1, \dots, m_{|M|} \in M$ (with $|M|$ total medication codes) and diagnosis codes $d_1, \dots, d_{|D|} \in D$ (with $|D|$ total diagnosis codes). A patient's demographics include age and gender, encoded as categorical and binary values respectively, and are denoted as $c \in C$. We create a comprehensive medical vocabulary $W = \{M, D, C\}$ that includes all these patient attributes. We denote the pre-training data as $X$ and the downstream data as $Z$, where $X \cap Z = \emptyset$.

Personalized Knowledge Graph. A biomedical KG, which contains extensive relationships among various medical codes (e.g., medications and diagnoses), can be represented as a multi-relational graph $G = (V, E)$, where $V$ is the set of entity nodes and $E \subseteq V \times R \times V$ is the set of edges. These edges connect nodes in $V$ via triplets, and $R$ is the set of relation types. A triplet, denoted as $(h, r, t)$ with $h, t \in V$ and $r \in R$, represents a relationship within the KG. As an entire KG can be large and contain noise, a personalized KG (PKG) $g = (v, e)$ is considered by extracting a subgraph from the KG, where $v \subseteq V$ and $e \subseteq E$.

Treatment Effect Estimation. Given a patient's visit sequence $x = \{x_1, \dots, x_T\}$, demographics $c$, binary treatment $a \in \{0, 1\}$ (where 1 indicates treated and 0 indicates control status), disease outcome $y \in \{0, 1\}$ (where 1 indicates its presence and 0 its absence), and PKGs $g$, we aim to estimate the treatment effect as $E[Y(1) - Y(0) \mid X = x, C = c, G = g]$, where $Y(A)$ is the potential outcome if the patient receives treatment $A$ (Rubin 2005). We make three standard assumptions in our TEE analysis: consistency, positivity, and ignorability (see details in Appendix A). These assumptions ensure that the treatment effects estimated as $E[Y \mid A = 1, X = x, C = c, G = g] - E[Y \mid A = 0, X = x, C = c, G = g]$ are identifiable.

Method
In this section, we introduce our model (Fig. 1), which includes three main modules: 1) dual-focus PKG construction, 2) pre-training on patient data & PKGs, and 3) fine-tuning for TEE. Algorithm 1 shows the model training procedure.

Data Encoding
As the raw patient data and KGs cannot be used directly for modeling, we need to encode the patient data into dense embeddings and construct dual-focus PKGs given the relevant information from individual patients.

Patient Data Encoding. Compared to natural text, patient data presents unique challenges owing to its irregular temporality (i.e., variability in the time intervals between patient visits) and hierarchical structure (i.e., a patient record includes multiple visits and each visit includes different types of medical codes). To address these, we propose a comprehensive embedding approach that extends the original BERT (Devlin et al. 2019) embedding by incorporating both the code type and temporal information. For every input medical code, the patient embedding $e$ is obtained as:

$e = w_{code} + t_{type} + v_{visit} + p_{physical}$ (1)

where $w_{code} \in \mathbb{R}^{d_{emb}}$ is the medical code embedding and $t_{type} \in \mathbb{R}^{d_{emb}}$ is the type embedding. Our input data includes three types: demographics, medication, and diagnosis. The visit time embedding $v_{visit} \in \mathbb{R}^{d_{emb}}$ encodes the actual time of a visit. The physical time embedding $p_{physical} \in \mathbb{R}^{d_{emb}}$ encodes the physical time measured over a fixed time interval. The code embedding, time embeddings, and type embedding are integrated as the input to the patient sequence encoder.
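To make Eq. (1) concrete, below is a minimal sketch of this additive embedding in PyTorch; the vocabulary sizes, the bucketing of visit and physical time into discrete indices, and the module name are illustrative assumptions rather than the authors' released implementation.

```python
import torch.nn as nn

class PatientCodeEmbedding(nn.Module):
    """Additive embedding of Eq. (1): code + type + visit time + physical time.

    A minimal sketch; vocabulary sizes and time bucketing are assumptions.
    """
    def __init__(self, n_codes, n_types=3, n_visits=512, n_time_buckets=512, d_emb=768):
        super().__init__()
        self.w_code = nn.Embedding(n_codes, d_emb)             # medical code embedding
        self.t_type = nn.Embedding(n_types, d_emb)             # demographics / medication / diagnosis
        self.v_visit = nn.Embedding(n_visits, d_emb)           # ordinal visit index
        self.p_physical = nn.Embedding(n_time_buckets, d_emb)  # discretized physical time

    def forward(self, code_ids, type_ids, visit_ids, time_bucket_ids):
        # All inputs: LongTensor of shape (batch, seq_len); output: (batch, seq_len, d_emb).
        return (self.w_code(code_ids) + self.t_type(type_ids)
                + self.v_visit(visit_ids) + self.p_physical(time_bucket_ids))
```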
Dual-focus PKG Construction. To facilitate a nuanced understanding of patient data and obtain personalized estimation for TEE, we propose to construct dual-focus PKGs that capture the diverse medical contexts of the treatment-covariate and outcome-covariate relationships, respectively. We first map the medical codes from a patient's record to their corresponding medical concepts in the KG, resulting in an initial set of graph nodes $v'$. Then we augment the graph with treatment and outcome data. The treatment-covariate PKG includes the mapped treatment concepts added to $v'$ as $v'^{(a)}$, while the outcome-covariate PKG incorporates the mapped outcome concepts added to $v'$ as $v'^{(y)}$. To leverage implicit contextual knowledge, we include $k$-hop bridge nodes in the final graph node sets $v^{(a)}$ and $v^{(y)}$. A $k$-hop bridge node denotes an entity node positioned on a $k$-hop path between any pair of linked entities in the node set $v'^{(a)}$ or $v'^{(y)}$. Finally, we establish a link between any pair of entities in $v^{(a)}$ and $v^{(y)}$ if there exists an edge between them. This procedure results in the dual-focus PKGs $g_a = (v^{(a)}, e^{(a)})$ and $g_y = (v^{(y)}, e^{(y)})$.
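As a rough illustration of the bridge-node retrieval described above, the sketch below extracts a PKG from a KG given a patient's mapped concepts using networkx. Treating the KG as an undirected graph, the function name, and the default hop limit of k = 2 (matching the paper's reported setting) are assumptions of this sketch.

```python
import itertools
import networkx as nx

def build_pkg(kg: nx.Graph, seed_nodes: set, k: int = 2) -> nx.Graph:
    """Retrieve a personalized KG: seed concepts plus k-hop bridge nodes.

    A bridge node lies on a path of length <= k between two seed nodes.
    Sketch only; assumes every seed node exists in kg, and omits the
    paper's additional cap of 200 nodes per PKG.
    """
    nodes = set(seed_nodes)
    for u, v in itertools.combinations(seed_nodes, 2):
        for path in nx.all_simple_paths(kg, u, v, cutoff=k):
            nodes.update(path)  # interior nodes become bridge nodes
    # Keep every KG edge whose endpoints both survive in the node set.
    return kg.subgraph(nodes).copy()
```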
Pre-training KG-TREAT
The patient embeddings and PKGs are first encoded and then synergized through the proposed deep bi-level attention synergy method. KG-TREAT is pre-trained by unifying two tasks: masked code prediction and link prediction.

Patient Sequence Encoder. Patient visit sequences are encoded using an N-stacked Transformer (Vaswani et al. 2017). Each Transformer encoder has a multi-head self-attention layer followed by a fully-connected feed-forward layer. The patient sequence representations are computed as:

$\tilde{h}_{CLS}^{(l+1)}, \tilde{h}_1^{(l+1)}, \dots, \tilde{h}_T^{(l+1)} = f_{seq}(h_{CLS}^{(l)}, h_1^{(l)}, \dots, h_T^{(l)})$ (2)

where $l = 1, \dots, N$ denotes the Transformer layer and the representations in layer $l = 0$ are initialized with the patient embedding $e$. The term $h_{CLS}$ is the encoding of a special code that is appended to the patient sequence and acts as the pooling point for prediction. More details of the Transformer architecture are provided in Appendix B.

Algorithm 1: KG-TREAT Pre-training and Fine-tuning
Input: pre-training data $X$, KG $G$, downstream data $Z$
Output: pre-trained model $f_{\theta^*}$, treatment effects $\delta$
1: Obtain the patient data encoding $e$ by Eq. (1);
2: Extract the dual-focus PKGs $g^{(a)}, g^{(y)}$ from the entire KG;
3: Obtain patient representations $\{\tilde{h}_{CLS}, \tilde{h}_1, \dots, \tilde{h}_T\}$ by Eq. (2);
4: Obtain PKG representations $\{\tilde{v}_{CLS}, \tilde{v}_1, \dots, \tilde{v}_T\}$ by Eq. (3);
5: Synergize the patient and PKG representations by Eq. (8);
6: Pre-train the model by unifying MCP (Eq. (10)) and LP (Eq. (11));
7: Initialize the model with parameters $\theta^*$ from pre-training;
8: Obtain patient representations $\{h_{CLS}, h_1, \dots, h_T\}$ and PKG representations $\{v_{CLS}, v_1, \dots, v_T\}$ by Eqs. (2), (3);
9: Fine-tune the model and estimate effects by Eqs. (14), (15).

Graph Encoder. We utilize graph neural networks (GNNs) to encode the PKGs. We initialize node embeddings following existing work (Feng et al. 2020): we transform KG triplets into textual data, feed these sentences into a pre-trained language model, BioLinkBERT (Yasunaga, Leskovec, and Liang 2022), to obtain sentence embeddings, and compute node embeddings by pooling all token outputs of the entity nodes. We encode the treatment-covariate PKG $g^{(a)}$ and outcome-covariate PKG $g^{(y)}$ as follows:

$\tilde{v}_{CLS}^{(a)(l+1)}, \dots, \tilde{v}_I^{(a)(l+1)} = f_{gnn}(v_{CLS}^{(a)(l)}, \dots, v_I^{(a)(l)}), \quad \tilde{v}_{CLS}^{(y)(l+1)}, \dots, \tilde{v}_J^{(y)(l+1)} = f_{gnn}(v_{CLS}^{(y)(l)}, \dots, v_J^{(y)(l)})$ (3)

where $l = 1, \dots, M$ denotes the GNN encoder layer, and $v_{CLS}$ is the encoding of a special node added to the PKG (with edges connecting to all other nodes) to serve as the pooling point for prediction. The node representations are updated via iterative message passing between neighbors as:

$v_i^{(a)(l+1)} = f_v\big(\textstyle\sum_{s \in N_i \cup \{i\}} \alpha_{s,i} m_{si}\big) + v_i^{(a)(l)}$ (4)

where $N_i$ denotes the neighbors of entity node $i$, $\alpha_{s,i}$ is the attention weight for scaling the message passing, and $f_v$ is a multi-layer perceptron with batch normalization. The message $m_{si}$ from $s$ to $i$ is computed as:

$m_{si} = f_m(v_s^{(a)(l)}, r_{si})$ (5)

where $r_{si}$ is the relation embedding and $f_m$ is a linear transformation. The attention weight $\alpha_{s,i}$, which controls the impact of each neighbor on the current node, is computed as:

$q_s = f_q(v_s^{(a)(l)}), \quad k_i = f_k(v_i^{(a)(l)}, r_{si}), \quad \alpha_{s,i} = \mathrm{Softmax}(q_s k_i^{\top} / \sqrt{d})$ (6)

where $f_q$ and $f_k$ are linear transformations. The outcome-covariate PKG can be encoded similarly.
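A minimal sketch of the message-passing step in Eqs. (4)-(6) follows; the single attention head, the softmax normalization over each node's incoming edges, and the module names are assumptions of this illustration rather than the authors' exact architecture.

```python
import math
import torch
import torch.nn as nn

class PKGAttentionLayer(nn.Module):
    """One GNN layer following Eqs. (4)-(6): attention-weighted relational
    messages, summed per node, passed through an MLP, with a residual add.
    Single-head sketch; names and shapes are illustrative assumptions.
    """
    def __init__(self, d):
        super().__init__()
        self.d = d
        self.f_m = nn.Linear(2 * d, d)   # message from (node, relation), Eq. (5)
        self.f_q = nn.Linear(d, d)       # query transform, Eq. (6)
        self.f_k = nn.Linear(2 * d, d)   # key transform on (node, relation), Eq. (6)
        self.f_v = nn.Sequential(nn.Linear(d, d), nn.BatchNorm1d(d), nn.ReLU())  # Eq. (4)

    def forward(self, v, edge_index, rel_emb):
        # v: (n_nodes, d); edge_index: (2, n_edges) with rows (source s, target i),
        # assumed to already include self-loops; rel_emb: (n_edges, d).
        s, i = edge_index
        msg = self.f_m(torch.cat([v[s], rel_emb], dim=-1))                 # Eq. (5)
        q, k = self.f_q(v[s]), self.f_k(torch.cat([v[i], rel_emb], dim=-1))
        score = (q * k).sum(-1) / math.sqrt(self.d)                        # Eq. (6), pre-softmax
        # Softmax over the incoming edges of each target node i (a global
        # constant shift keeps exp() stable and cancels in the ratio).
        w = (score - score.max()).exp()
        denom = torch.zeros(v.size(0), device=v.device).index_add_(0, i, w)
        alpha = w / denom[i].clamp_min(1e-12)
        agg = torch.zeros_like(v).index_add_(0, i, alpha.unsqueeze(-1) * msg)
        return self.f_v(agg) + v                                           # Eq. (4), residual
```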
Deep Bi-level Attention Synergy. To address the challenges of shallow synergy and inefficient information utilization in existing work (Ye et al. 2021), we propose a deep bi-level attention synergy method, DIVE.

The first level of attention handles the complex treatment-covariate and outcome-covariate relationships for bias adjustment and accurate estimation. Given the patient sequence representation $\tilde{h}_t^{(l)}$ and treatment node representation $\tilde{v}_a^{(a)(l)}$, we compute the treatment attention weight $\alpha_{a,h,t}$ and the treatment-related attention pooling of the patient sequence representation $\hat{h}^{(l)}$ as follows:

$\alpha_{a,h,t} = \mathrm{Softmax}(\tilde{v}_a^{(a)(l)} \tilde{h}_t^{(l)\top} / \sqrt{d}), \quad \hat{h}^{(l)} = \sum_{t=1}^{T} \alpha_{a,h,t} \tilde{h}_t^{(l)}$ (7)

The second level of attention enables deep synergy of patient data with KGs. We apply multi-head co-attention (Murahari et al. 2020) across the patient sequence and graph representations in multiple hidden layers. Concretely, we obtain synergized patient sequence representations by transforming $\tilde{h}^{(l)}$ to queries and $\tilde{v}^{(a)(l)}$ to keys and values. These synergized representations are then concatenated with the treatment-related patient representations derived from Eq. (7) and passed through a multi-layer perceptron $f_c$ to yield the bi-level attention synergized patient sequence representations $h^{(l)}$. This process is formally denoted as:

$h^{(l)\prime} = \mathrm{MHCA}_{h,v^{(a)}}(\tilde{h}^{(l)}, \tilde{v}^{(a)(l)}, \tilde{v}^{(a)(l)}), \quad h^{(l)} = f_c[h^{(l)\prime}; \hat{h}^{(l)}]$ (8)

where $\mathrm{MHCA}_{Q,K}(Q, K, K)$ is the multi-head co-attention applied to $Q$ and $K$, with $Q$ as the query and $K$ as both key and value. We compute the synergized node representations $v^{(a)(l)}$ by transforming $\tilde{v}^{(a)(l)}$ to queries and $\tilde{h}^{(l)}$ to keys and values:

$v^{(a)} = \mathrm{MHCA}_{v^{(a)},h}(\tilde{v}^{(a)(l)}, \tilde{h}^{(l)}, \tilde{h}^{(l)})$ (9)

The synergized outcome node representations can be obtained similarly. In essence, DIVE introduces a bi-level attention synergy system: one level that adjusts bias by handling relationships among covariates, treatments, and outcomes, and a second level that focuses on deep synergy. These levels work together to facilitate efficient information utilization and overcome the limitations of shallow synergy methods.
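To illustrate Eqs. (7)-(8), here is a hedged sketch of one bi-level synergy step for the patient-sequence side, using PyTorch's built-in multi-head attention as the co-attention; the single treatment-node query and the concatenation-then-MLP fusion follow the equations, while layer sizes and names are assumptions.

```python
import math
import torch
import torch.nn as nn

class BiLevelSynergy(nn.Module):
    """Sketch of DIVE for patient representations, Eqs. (7)-(8)."""
    def __init__(self, d, n_heads=8):
        super().__init__()
        self.d = d
        self.mhca = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.f_c = nn.Linear(2 * d, d)

    def forward(self, h, v_graph, v_treat):
        # h: (B, T, d) patient sequence; v_graph: (B, I, d) PKG nodes;
        # v_treat: (B, d) treatment node representation.
        # Level 1, Eq. (7): treatment-attention pooling over the sequence.
        alpha = torch.softmax(
            torch.einsum('bd,btd->bt', v_treat, h) / math.sqrt(self.d), dim=-1)
        h_hat = torch.einsum('bt,btd->bd', alpha, h)           # (B, d)
        # Level 2, Eq. (8): co-attention with h as query, PKG nodes as key/value.
        h_prime, _ = self.mhca(h, v_graph, v_graph)            # (B, T, d)
        h_hat = h_hat.unsqueeze(1).expand_as(h_prime)          # broadcast pooled vector
        return self.f_c(torch.cat([h_prime, h_hat], dim=-1))   # fused (B, T, d)
```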
Pre-training Tasks. The goal of pre-training is to encourage a thorough grounding and contextualization of patient data and KGs. To approach this, two self-supervised pre-training tasks are adopted: masked code prediction (MCP) and KG link prediction (LP).

MCP predicts the masked medical code at position $j \in J$ using the representation $h_j$. The loss of MCP, $\mathcal{L}_{MCP}(\theta_m)$, with optimization parameters $\theta_m$, is formulated as:

$\mathcal{L}_{MCP}(\theta_m) = -\sum_{j \in J} \log P(w_j \mid h_j)$ (10)

where $P(w_j \mid h_j)$ is the softmax probability of the masked code over all codes in the vocabulary. By using the synergized representations, the patient data are enhanced with external knowledge to predict the masked codes.

LP is widely used in KG representation learning and aims to distinguish existing (positive) triplets from corrupted (negative) triplets using the representations of entities and relations. Formally, given the representations of a triplet obtained from the graph encoder as $(v_h, r, v_t)$, the pre-training loss for LP is optimized over parameters $\theta_l$ as:

$\mathcal{L}_{LP}(\theta_l) = \sum_{(h,r,t) \in S} \Big( -\sigma(d(v_h, r, v_t)) + \sum_{(h',r,t') \in S'} \sigma(d(v_{h'}, r, v_{t'})) \Big)$ (11)

where $(h', r, t') \in S'_{(h,r,t)}$ are corrupted triplets in which either the head or the tail entity is replaced by a random entity (but not both simultaneously), $\sigma$ denotes the logarithmic sigmoid function, and $d$ is a score function such as TransE (Bordes et al. 2013) or DistMult (Yang et al. 2015). The final pre-training loss is optimized over parameters $\theta = \{\theta_m, \theta_l\}$ by integrating MCP and LP as $\mathcal{L}(\theta) = \mathcal{L}_{MCP}(\theta_m) + \mathcal{L}_{LP}(\theta_l)$.

Fine-tuning KG-TREAT for TEE
After pre-training, we fine-tune the model on downstream data for TEE. To mitigate confounding bias, the model is fine-tuned to simultaneously predict the treatment and outcome using shared representations. This strategy discourages reliance on unrelated features and prioritizes confounders for predictions (Shi, Blei, and Veitch 2019). To elaborate, given downstream patient data and the corresponding PKGs, we obtain representations for the patients, the treatment-covariate PKG, and the outcome-covariate PKG. We then predict the treatment using a combination of $h_{CLS}$, $v^{(a)}_{CLS}$, and an attention-based pooling $v^{(a)}_{POOL}$ with query $h_{CLS}$ as the input to a prediction head $f_{\phi_a}$. The loss of treatment prediction is computed as:

$\hat{a} = f_{\phi_a} \circ f_{\theta^*}([h_{CLS}, v^{(a)}_{CLS}, v^{(a)}_{POOL}]), \quad \mathcal{L}_T(\theta^*, \phi_a) = \mathrm{BCE}(\hat{a}, a)$ (12)

where $\theta^*$ are the optimized parameters of the pre-trained model and BCE denotes the binary cross-entropy loss. Similarly, we predict the outcome by combining $h_{CLS}$, $v^{(y)}_{CLS}$, and an attention-based pooling $v^{(y)}_{POOL}$ with query $h_{CLS}$ as the input to a prediction head $f_{\phi_y}$. We employ separate heads for the treated and control potential outcomes, computing the loss of outcome prediction as:

$\hat{y} = f_{\phi_y} \circ f_{\theta^*}([h_{CLS}, v^{(y)}_{CLS}, v^{(y)}_{POOL}]), \quad \mathcal{L}_O(\theta^*, \phi_y) = \mathrm{BCE}(\hat{y}, y)$ (13)

We jointly optimize both treatment prediction and outcome prediction, computing the final loss as:

$\mathcal{L}_{TEE}(\theta^*, \phi) = \mathcal{L}_O(\theta^*, \phi_y) + \beta \mathcal{L}_T(\theta^*, \phi_a)$ (14)

where $\beta$ is a hyper-parameter that controls the influence of treatment prediction. Note that only the observational, or factual, outcomes are used to compute the outcome prediction loss, as counterfactual outcomes are unavailable. After fine-tuning the model, we infer the treatment effect $\hat{\delta}$ as the difference between the two predicted potential outcomes under the treated and control treatments:

$\hat{\delta} = \hat{y}_{a=1} - \hat{y}_{a=0}$ (15)
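The following is a minimal sketch of the joint fine-tuning objective in Eqs. (12)-(15), with a shared encoder output and separate outcome heads for the control and treated arms; the pooled-input interface and head sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TEEHeads(nn.Module):
    """Treatment head plus two potential-outcome heads, Eqs. (12)-(15)."""
    def __init__(self, d):
        super().__init__()
        self.f_a = nn.Linear(3 * d, 1)    # treatment head on [h_CLS; v_CLS; v_POOL]
        self.f_y = nn.ModuleList([nn.Linear(3 * d, 1) for _ in range(2)])  # control / treated

    def forward(self, z_a, z_y, a, y, beta=1.0):
        # z_a, z_y: (B, 3d) pooled treatment-/outcome-side representations.
        a_logit = self.f_a(z_a).squeeze(-1)
        y_logits = torch.stack([head(z_y).squeeze(-1) for head in self.f_y], dim=-1)  # (B, 2)
        # Factual outcome loss only: pick each sample's head by its observed treatment.
        y_fact = y_logits.gather(1, a.long().unsqueeze(1)).squeeze(1)
        loss = (F.binary_cross_entropy_with_logits(y_fact, y.float())
                + beta * F.binary_cross_entropy_with_logits(a_logit, a.float()))
        # Eq. (15): effect = difference of the two predicted potential outcomes.
        effect = torch.sigmoid(y_logits[:, 1]) - torch.sigmoid(y_logits[:, 0])
        return loss, effect
```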
Experimental Setup
Pre-training Data. We extract patient data from the MarketScan Commercial Claims and Encounters (CCAE) database (https://www.merative.com/real-world-evidence) for those diagnosed with coronary artery disease (CAD), producing a dataset of 2,955,399 patient records and 116,661 unique medical codes. We use UMLS (https://www.nlm.nih.gov/research/umls/index.html) as external knowledge. We unify the medical codes in the patient data and the concepts in the UMLS through standard vocabularies (nlm.nih.gov/research/umls/sourcereleasedocs/index.html) and extract all relevant relationships from UMLS, resulting in a KG with around 300K nodes and 1M edges.

Downstream Tasks. Our goal is to evaluate the effects of two treatments on reducing stroke and myocardial infarction risk for CAD patients, given the patient's covariates and corresponding PKGs. As ground-truth treatment effects are not available in observational data and RCTs are the gold standard for TEE, we specifically create 4 downstream datasets based on CAD-related RCTs. More details of the datasets are provided in Appendix Table A3 and Fig. A2.

Baselines. We compare KG-TREAT with state-of-the-art methods, all trained solely on downstream data.
• TARNet (Shalit, Johansson, and Sontag 2017) predicts the potential outcomes based on balanced hidden representations between the treated and control groups.
• DragonNet (Shi, Blei, and Veitch 2019) jointly predicts treatment and outcome via a three-head neural network: one head for treatment prediction and two for outcomes.
• DR-CFR (Hassanpour and Greiner 2020) predicts the counterfactual outcome by learning disentangled representations such that the covariates can be disentangled into three components: contributing only to treatment selection, contributing only to outcome prediction, and contributing to both.
• TNet (Curth and van der Schaar 2021a) is a deep neural network version of the T-learner (Künzel et al. 2019) (i.e., it decomposes TEE into two or more sub-regression/classification problems).
• SNet (Curth and van der Schaar 2021a) learns disentangled representations and assumes that the covariates can be disentangled into five components by considering the two potential outcomes separately.
• FlexTENet (Curth and van der Schaar 2021b) assumes an inductive bias for the shared structure of the two potential outcomes and adaptively learns what to share between the potential outcome functions.
• TransTEE (Zhang et al. 2022) is a Transformer-based TEE model, which encodes the covariates and treatments via a Transformer and cross-attention.

Additionally, we consider several variants of the proposed model for comparison:
• w/o DIVE: replacing the proposed deep bi-level attention synergy method DIVE with a simple concatenation of patient and graph representations only at the final prediction.
• w/o KGs: pre-trained solely on patient data without KGs.
• w/o pre-train: directly trained on the downstream patient data and KGs using the same model architecture as KG-TREAT.
• w/o pre-train & KGs: directly trained on the downstream patient data with the patient sequence encoder only.

Note that we do not compare with existing pre-training models for clinical risk prediction (Li et al. 2020; Rasmy et al. 2021), as these models are not directly applicable to our context. Clinical risk prediction models focus on forecasting the likelihood of a disease based on variable correlations. TEE instead quantifies the causal impact of a treatment on an outcome, predicting all potential outcomes as treatment effects, an entirely different objective from risk prediction.

Metrics. We assess factual prediction performance using standard classification metrics: Area under the ROC Curve (AUC) and Area under the Precision-Recall Curve (AUPR). We evaluate counterfactual prediction performance using the influence function-based precision of estimating heterogeneous effects (IF-PEHE) (Alaa and van der Schaar 2019), which measures the mean squared error between estimated treatment effects and approximated true treatment effects. Additional details of this metric are in Appendix C.

Implementation Details. The patient sequence encoder uses the BERT-base architecture (Devlin et al. 2019). The PKGs are retrieved with 2-hop bridge nodes, with a maximum node limit of 200. The downstream data is randomly split into training, validation, and test sets with percentages of 90%, 5%, and 5%, respectively. All results are reported on the test sets. More implementation details, including parameter tuning and setup, are provided in Appendix C. Code: https://github.com/ruoqi-liu/KG-TREAT.

Results
Quantitative Analysis
We quantitatively compare KG-TREAT with state-of-the-art methods in terms of factual outcome prediction and TEE. Table 1 presents the results on the 4 downstream datasets. Our key findings include:
• KG-TREAT significantly outperforms the best baseline method, demonstrating an average improvement of 7% in AUC, 12% in AUPR, and 9% in IF-PEHE. This validates the effectiveness of our pre-training approach, which synergizes patient data with KGs.
• The variant of KG-TREAT without the deep bi-level attention synergy method, w/o DIVE, shows a drop in performance. This highlights the effectiveness of DIVE in modeling the relationships among covariates, treatments, and outcomes, and in synergizing patient data with KGs.
• Pre-training has a more significant impact on model performance than KGs, as indicated by the greater performance decline in the w/o pre-train scenario compared to the w/o KGs scenario; w/o pre-train & KGs yields the worst performance among all the model variants.
Rivaroxaban vs. Aspirin / Valsartan vs. Ramipril:
Method | AUC ↑ / AUPR ↑ / IF-PEHE ↓ | AUC ↑ / AUPR ↑ / IF-PEHE ↓
TARNet | 0.746 / 0.404 / 0.277 | 0.743 / 0.333 / 0.289
DragonNet | 0.761 / 0.424 / 0.261 | 0.742 / 0.337 / 0.271
DR-CFR | 0.764 / 0.426 / 0.259 | 0.747 / 0.341 / 0.276
TNet | 0.726 / 0.401 / 0.321 | 0.730 / 0.322 / 0.299
SNet | 0.761 / 0.430 / 0.254 | 0.747 / 0.354 / 0.268
FlexTENet | 0.729 / 0.403 / 0.301 | 0.739 / 0.328 / 0.285
TransTEE | 0.751 / 0.411 / 0.272 | 0.753 / 0.379 / 0.264
KG-TREAT | 0.828 / 0.556 / 0.171 | 0.858 / 0.526 / 0.149
w/o DIVE | 0.811 / 0.522 / 0.202 | 0.829 / 0.495 / 0.160
w/o KGs | 0.805 / 0.518 / 0.219 | 0.813 / 0.481 / 0.165
w/o pre-train | 0.786 / 0.488 / 0.231 | 0.778 / 0.373 / 0.189
w/o pre-train & KGs | 0.769 / 0.470 / 0.239 | 0.749 / 0.371 / 0.198

Ticagrelor vs. Aspirin / Apixaban vs. Warfarin:
Method | AUC ↑ / AUPR ↑ / IF-PEHE ↓ | AUC ↑ / AUPR ↑ / IF-PEHE ↓
TARNet | 0.755 / 0.460 / 0.282 | 0.760 / 0.515 / 0.325
DragonNet | 0.762 / 0.464 / 0.289 | 0.766 / 0.534 / 0.309
DR-CFR | 0.764 / 0.460 / 0.278 | 0.769 / 0.526 / 0.287
TNet | 0.741 / 0.433 / 0.311 | 0.753 / 0.509 / 0.330
SNet | 0.765 / 0.465 / 0.265 | 0.769 / 0.527 / 0.273
FlexTENet | 0.750 / 0.452 / 0.313 | 0.756 / 0.512 / 0.324
TransTEE | 0.770 / 0.471 / 0.255 | 0.803 / 0.534 / 0.267
KG-TREAT | 0.851 / 0.609 / 0.160 | 0.843 / 0.639 / 0.191
w/o DIVE | 0.839 / 0.571 / 0.176 | 0.830 / 0.611 / 0.210
w/o KGs | 0.830 / 0.552 / 0.181 | 0.829 / 0.605 / 0.217
w/o pre-train | 0.815 / 0.524 / 0.200 | 0.807 / 0.569 / 0.241
w/o pre-train & KGs | 0.808 / 0.497 / 0.213 | 0.801 / 0.550 / 0.247

Table 1: Comparison with state-of-the-art methods on the 4 downstream datasets. DIVE is our proposed deep bi-level attention method for synergizing patient data with KGs. The results are averaged over 20 random runs.

Target vs. Compared | Estimated Effect | P value | Model Conclusion | RCT Conclusion
Rivaroxaban vs. Aspirin | [-0.010, 0.009] | 0.952 | No significant difference | No significant difference (Anand et al. 2018)
Valsartan vs. Ramipril | [-0.015, 0.007] | 0.564 | No significant difference | No significant difference (Pfeffer et al. 2021)
Ticagrelor vs. Aspirin | [-0.006, 0.021] | 0.436 | No significant difference | No significant difference (Sandner et al. 2020)
Apixaban vs. Warfarin | [-0.006, -0.001] | 0.001 | A. is more effective than W. | A. is more effective than W. (Granger et al. 2011)

Table 2: Comparison of the estimated treatment effects with the corresponding ground-truth RCTs. The estimated effects are shown as 95% confidence intervals (CI) under 20 bootstrap runs. The RCT conclusions are obtained from published articles.

Qualitative Analysis
Besides the quantitative analysis, we demonstrate the model's ability to facilitate randomized controlled trials (RCTs) by providing an accurate estimation of treatment effects and generating consistent conclusions. Additionally, we show that KGs help improve performance by identifying a more comprehensive set of confounders for adjustment.

Validation with RCT Conclusions. We compare the estimated treatment effects to the corresponding RCT results. First, we compute the average treatment effect as the mean of the differences between the treated and control outcomes (Hernán 2004). Then, we test the significance of the estimated effects against zero using a t-test with significance level α = 0.05 to generate the model conclusion. Table 2 shows that the model-generated conclusions align with their corresponding RCT conclusions, suggesting that KG-TREAT can serve as an effective computational tool to emulate RCTs using large-scale patient data and KGs. A thorough comparison of our method with all the baseline methods is shown in Appendix Table A7.
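As a hedged illustration of this conclusion-generation procedure, the snippet below averages per-patient effect estimates and tests them against zero with a one-sample t-test via scipy; the function and variable names are illustrative, and the paper's bootstrap confidence intervals are not reproduced here.

```python
import numpy as np
from scipy import stats

def rct_style_conclusion(effects: np.ndarray, alpha: float = 0.05) -> str:
    """Average treatment effect plus a one-sample t-test against zero."""
    ate = effects.mean()
    _, p_value = stats.ttest_1samp(effects, popmean=0.0)
    if p_value >= alpha:
        return f"ATE={ate:.4f}, p={p_value:.3f}: no significant difference"
    # Sign convention is an assumption; here a lower outcome risk is better,
    # so a negative ATE favors the target treatment.
    winner = "target treatment" if ate < 0 else "comparator"
    return f"ATE={ate:.4f}, p={p_value:.3f}: {winner} is more effective"
```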
Attention Visualization. We use case studies to illustrate how our model identifies potential confounders from patient data and KGs for adjusting bias and accurate estimation. We visualize the model attention weights of each PKG in Fig. 2 and observe that KG-TREAT successfully identifies key confounders as medical codes with high attention weights from the PKGs. For example, common medical codes such as "Carvedilol", "Hypertensive disorder", and "Atorvastatin" are identified as potential confounders and are also mentioned in related literature (Stolk et al. 2017; Yusuf et al. 2004). Notably, with the help of external knowledge, our model can recover potential confounders that are not observed in the patient data. For example, "Hypertensive disorder" is a potentially missing confounding factor added to the PKG through 2-hop bridge node searching. This result indicates that relying solely on the patient data may fail to recognize a more comprehensive set of confounders for adjustment.

Figure 2: Visualization of the graph attention weights for (a) the treatment-covariate PKG and (b) the outcome-covariate PKG. The patient is from the "Apixaban vs. Warfarin" dataset. Higher attention weights are denoted as thicker and darker edges in the graph. The extra 2-hop bridge nodes are in gray to distinguish them from the initial set of nodes in yellow.

Method | Rivaroxaban vs. Aspirin (AUC ↑ / IF-PEHE ↓) | Valsartan vs. Ramipril (AUC ↑ / IF-PEHE ↓) | Ticagrelor vs. Aspirin (AUC ↑ / IF-PEHE ↓) | Apixaban vs. Warfarin (AUC ↑ / IF-PEHE ↓)
Pre-train task: MCP only | 0.815 / 0.204 | 0.840 / 0.160 | 0.840 / 0.169 | 0.831 / 0.200
Pre-train task: LP only | 0.791 / 0.225 | 0.813 / 0.174 | 0.816 / 0.186 | 0.812 / 0.232
Score function: RotatE | 0.825 / 0.176 | 0.851 / 0.152 | 0.848 / 0.164 | 0.838 / 0.198
Score function: TransE | 0.823 / 0.177 | 0.851 / 0.155 | 0.846 / 0.166 | 0.832 / 0.203
KG-TREAT | 0.828 / 0.171 | 0.858 / 0.149 | 0.851 / 0.160 | 0.843 / 0.191

Table 3: Ablation study results of using different pre-training tasks and score functions in link prediction. KG-TREAT adopts both MCP and LP as pre-training tasks and DistMult as the score function.

Ablation Studies
In the above experiments, we find that pre-training plays a critical role in model performance. Therefore, we focus on important model choices in the pre-training tasks. An additional ablation study of the influence of downstream data size on model performance is provided in Appendix Fig. A3.

Pre-training Tasks. We analyze the impact of the pre-training tasks by excluding the LP and MCP tasks in turn. As shown in Table 3, unifying both MCP and LP tasks (KG-TREAT) yields the best performance on the 4 downstream datasets. The MCP task plays a crucial role in pre-training: its exclusion leads to larger performance drops (3% in AUC and 4% in IF-PEHE) than excluding the LP task (1% in AUC and IF-PEHE). This demonstrates the importance of integrating both tasks to jointly learn from both data modalities.

Link Prediction Head Choice. We evaluate various score functions for the LP task. As shown in Table 3, DistMult, which is adopted in our model, offers better performance than the other score functions. However, the differences among the functions are not significant, indicating that a pre-training model on large-scale data is not particularly sensitive to the choice of scoring function.
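For reference, these score functions have simple closed forms; below is a small sketch of the TransE and DistMult scores used as d in Eq. (11) (RotatE, which uses complex-valued rotations, is omitted). The tensor shapes and the negative-distance convention for TransE are assumptions of this illustration.

```python
import torch

def transe_score(h, r, t):
    # Higher is better: negative L2 distance between h + r and t.
    return -torch.norm(h + r - t, p=2, dim=-1)

def distmult_score(h, r, t):
    # Bilinear score with a diagonal relation matrix: <h, diag(r), t>.
    return (h * r * t).sum(dim=-1)
```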
Related Work
Deep Learning for TEE. Deep learning has been extensively used for TEE and has achieved better performance than classical linear methods due to its flexibility in modeling non-linearity (Shalit, Johansson, and Sontag 2017; Shi, Blei, and Veitch 2019; Hassanpour and Greiner 2020; Curth and van der Schaar 2021a,b). For instance, TARNet (Shalit, Johansson, and Sontag 2017) employs shared representations to simultaneously predict the two potential outcomes, encouraging similarity between the treated and control distributions. SNet (Curth and van der Schaar 2021a) learns disentangled representations for flexible information sharing between treatment prediction and outcome prediction. Recently, Transformer-based models have been introduced as backbones for TEE to help handle various data modalities (e.g., graph, text, etc.) (Zhang et al. 2022; Guo et al. 2021). However, these existing methods are mainly trained on small-scale, task-specific labeled data. This may limit model performance due to insufficient learning of the complex relationships among covariates, treatments, and outcomes.

Knowledge Integration in Healthcare. Various works in healthcare incorporate biomedical KGs to enrich patient data (Choi et al. 2017; Ma et al. 2018a,b). Recent works have shown that personalized KGs (PKGs), constructed by retrieving the relevant medical features of individual patients from a KG, can encode personalized information and mitigate the noise of the entire KG (Ye et al. 2021; Xu et al. 2023; Yang et al. 2023). However, these approaches often fail to distinguish among different types of medical codes and to encode relationships among covariates, treatments, and outcomes. This can lead to confounding bias and spurious correlations in TEE. Additionally, existing methods often integrate patient data and KGs only at the final prediction, resulting in a shallow combination and inefficient information utilization.

Pre-training in Healthcare. Foundation models have been successfully applied to various domains, including healthcare and patient data (Huang, Altosaar, and Ranganath 2019; Li et al. 2020; Rasmy et al. 2021). These methods typically convert a patient's medical records into a sequence of tokens for pre-training, followed by fine-tuning for healthcare-related downstream tasks such as clinical risk prediction. However, learning deep, domain-specific representations of complex medical features from sparse patient data can be challenging, even with large-scale data. Furthermore, existing methods often fail to handle the complex relationships among covariates, treatments, and outcomes, potentially leading to biased estimation. To the best of our knowledge, ours is the first pre-training model that synergizes both observational patient data and KGs.

Conclusion
In this paper, we propose KG-TREAT, a pre-training and fine-tuning framework for TEE that synergizes patient data with KGs. We construct dual-focus personalized KGs that incorporate key relationships among covariates, treatments, and outcomes to address potential bias in TEE. We propose a novel synergy method (DIVE) to achieve deep information exchange between patient data and PKGs and to encourage complex relationship encoding for TEE.
We jointly pre-train the model via two self-supervised tasks and fine-tune it on downstream TEE datasets. Thorough experiments on real-world patient data show the effectiveness of KG-TREAT compared to state-of-the-art methods. We further demonstrate that the estimated treatment effects are consistent with the corresponding published RCTs.

Acknowledgments
This work was funded in part by the National Institute of General Medical Sciences (NIGMS) of NIH under award number R01GM141279.

References
Alaa, A. M.; and van der Schaar, M. 2019. Validating Causal Inference Models via Influence Functions. In Chaudhuri, K.; and Salakhutdinov, R., eds., Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, 191–201.
Anand, S. S.; Bosch, J.; Eikelboom, J. W.; Connolly, S. J.; Diaz, R.; Widimsky, P.; Aboyans, V.; Alings, M.; Kakkar, A. K.; Keltai, K.; et al. 2018. Rivaroxaban with or without aspirin in patients with stable peripheral or carotid artery disease: an international, randomised, double-blind, placebo-controlled trial. The Lancet, 391(10117): 219–229.
Bodenreider, O. 2004. The unified medical language system (UMLS): integrating biomedical terminology. Nucleic Acids Research, 32(suppl 1): D267–D270.
Bommasani, R.; Hudson, D. A.; Adeli, E.; Altman, R.; Arora, S.; von Arx, S.; Bernstein, M. S.; Bohg, J.; Bosselut, A.; Brunskill, E.; et al. 2021. On the opportunities and risks of foundation models. ArXiv preprint, abs/2108.07258.
Bordes, A.; Usunier, N.; García-Durán, A.; Weston, J.; and Yakhnenko, O. 2013. Translating Embeddings for Modeling Multi-relational Data. In Burges, C. J. C.; Bottou, L.; Ghahramani, Z.; and Weinberger, K. Q., eds., Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, 2787–2795.
Brown, T. B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D. M.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; and Amodei, D. 2020. Language Models are Few-Shot Learners. In Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.; and Lin, H., eds., Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Chen, T.; and Guestrin, C. 2016. XGBoost: A Scalable Tree Boosting System. In Krishnapuram, B.; Shah, M.; Smola, A. J.; Aggarwal, C. C.; Shen, D.; and Rastogi, R., eds., Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, 785–794.
Choi, E.; Bahadori, M. T.; Song, L.; Stewart, W. F.; and Sun, J. 2017. GRAM: Graph-based Attention Model for Healthcare Representation Learning. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, August 13-17, 2017, 787–795.
Curth, A.; and van der Schaar, M. 2021a. Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory to Learning Algorithms.
In Banerjee, A.; and Fukumizu, K., eds., The 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021, April 13-15, 2021, Virtual Event, volume 130 of Proceedings of Machine Learning Research, 1810–1818.
Curth, A.; and van der Schaar, M. 2021b. On Inductive Biases for Heterogeneous Treatment Effect Estimation. In Ranzato, M.; Beygelzimer, A.; Dauphin, Y. N.; Liang, P.; and Vaughan, J. W., eds., Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, 15883–15894.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171–4186.
Feng, Y.; Chen, X.; Lin, B. Y.; Wang, P.; Yan, J.; and Ren, X. 2020. Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1295–1309.
Glass, T. A.; Goodman, S. N.; Hernán, M. A.; and Samet, J. M. 2013. Causal inference in public health. Annual Review of Public Health, 34: 61–75.
Granger, C. B.; Alexander, J. H.; McMurray, J. J.; Lopes, R. D.; Hylek, E. M.; Hanna, M.; Al-Khalidi, H. R.; Ansell, J.; Atar, D.; Avezum, A.; et al. 2011. Apixaban versus warfarin in patients with atrial fibrillation. New England Journal of Medicine, 365(11): 981–992.
Guo, Z.; Zheng, S.; Liu, Z.; Yan, K.; and Zhu, Z. 2021. CETransformer: Casual Effect Estimation via Transformer Based Representation Learning. In Chinese Conference on Pattern Recognition and Computer Vision (PRCV), 524–535. Springer.
Hassanpour, N.; and Greiner, R. 2020. Learning Disentangled Representations for CounterFactual Regression. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.
Hernán, M. A. 2004. A definition of causal effect for epidemiological research. Journal of Epidemiology & Community Health, 58(4): 265–271.
Huang, K.; Altosaar, J.; and Ranganath, R. 2019. ClinicalBERT: Modeling clinical notes and predicting hospital readmission. ArXiv preprint, abs/1904.05342.
Künzel, S. R.; Sekhon, J. S.; Bickel, P. J.; and Yu, B. 2019. Metalearners for estimating heterogeneous treatment effects using machine learning. Proceedings of the National Academy of Sciences, 116(10): 4156–4165.
Li, Y.; Rao, S.; Solares, J. R. A.; Hassaine, A.; Ramakrishnan, R.; Canoy, D.; Zhu, Y.; Rahimi, K.; and Salimi-Khorshidi, G. 2020. BEHRT: transformer for electronic health records. Scientific Reports, 10(1): 7155.
Ma, F.; Gao, J.; Suo, Q.; You, Q.; Zhou, J.; and Zhang, A. 2018a. Risk Prediction on Electronic Health Records with Prior Medical Knowledge. In Guo, Y.; and Farooq, F., eds., Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018, London, UK, August 19-23, 2018, 1910–1919.
Ma, F.; You, Q.; Xiao, H.; Chitta, R.; Zhou, J.; and Gao, J. 2018b. KAME: Knowledge-based Attention Model for Diagnosis Prediction in Healthcare. In Cuzzocrea, A.; Allan, J.; Paton, N. W.; Srivastava, D.; Agrawal, R.; Broder, A. Z.; Zaki, M. J.; Candan, K.
S.; Labrinidis, A.; Schuster, A.; and Wang, H., eds., Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM 2018, Torino, Italy, October 22-26, 2018, 743–752.
Murahari, V.; Batra, D.; Parikh, D.; and Das, A. 2020. Large-scale pretraining for visual dialog: A simple state-of-the-art baseline. In European Conference on Computer Vision, 336–352. Springer.
Pfeffer, M. A.; Claggett, B.; Lewis, E. F.; Granger, C. B.; Køber, L.; Maggioni, A. P.; Mann, D. L.; McMurray, J. J.; Rouleau, J.-L.; Solomon, S. D.; et al. 2021. Angiotensin receptor–neprilysin inhibition in acute myocardial infarction. New England Journal of Medicine, 385(20): 1845–1855.
Rasmy, L.; Xiang, Y.; Xie, Z.; Tao, C.; and Zhi, D. 2021. Med-BERT: pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction. NPJ Digital Medicine, 4(1): 1–13.
Rubin, D. B. 2005. Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association, 100(469): 322–331.
Sandner, S. E.; Schunkert, H.; Kastrati, A.; Wiedemann, D.; Misfeld, M.; Boening, A.; Tebbe, U.; Nowak, B.; Stritzke, J.; Laufer, G.; et al. 2020. Ticagrelor monotherapy versus aspirin in patients undergoing multiple arterial or single arterial coronary artery bypass grafting: insights from the TiCAB trial. European Journal of Cardio-Thoracic Surgery, 57(4): 732–739.
Shalit, U.; Johansson, F. D.; and Sontag, D. A. 2017. Estimating individual treatment effect: generalization bounds and algorithms. In Precup, D.; and Teh, Y. W., eds., Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, 3076–3085.
Shi, C.; Blei, D. M.; and Veitch, V. 2019. Adapting Neural Networks for the Estimation of Treatment Effects. In Wallach, H. M.; Larochelle, H.; Beygelzimer, A.; d'Alché-Buc, F.; Fox, E. B.; and Garnett, R., eds., Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, 2503–2513.
Stolk, L. M.; de Vries, F.; Ebbelaar, C.; de Boer, A.; Schalekamp, T.; Souverein, P.; ten Cate-Hoek, A.; and Burden, A. M. 2017. Risk of myocardial infarction in patients with atrial fibrillation using vitamin K antagonists, aspirin or direct acting oral anticoagulants. British Journal of Clinical Pharmacology, 83(8): 1835–1843.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is All you Need. In Guyon, I.; von Luxburg, U.; Bengio, S.; Wallach, H. M.; Fergus, R.; Vishwanathan, S. V. N.; and Garnett, R., eds., Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, 5998–6008.
Xu, Y.; Chu, X.; Yang, K.; Wang, Z.; Zou, P.; Ding, H.; Zhao, J.; Wang, Y.; and Xie, B. 2023. SeqCare: Sequential Training with External Medical Knowledge Graph for Diagnosis Prediction in Healthcare Data. In Proceedings of the ACM Web Conference 2023, 2819–2830.
Yang, B.; Yih, W.; He, X.; Gao, J.; and Deng, L. 2015. Embedding Entities and Relations for Learning and Inference in Knowledge Bases.
In Bengio, Y.; and LeCun, Y., eds., 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Yang, K.; Xu, Y.; Zou, P.; Ding, H.; Zhao, J.; Wang, Y.; and Xie, B. 2023. KerPrint: Local-Global Knowledge Graph Enhanced Diagnosis Prediction for Retrospective and Prospective Interpretations. In Proceedings of the AAAI Conference on Artificial Intelligence, 5357–5365.
Yasunaga, M.; Leskovec, J.; and Liang, P. 2022. LinkBERT: Pretraining Language Models with Document Links. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 8003–8016.
Ye, M.; Cui, S.; Wang, Y.; Luo, J.; Xiao, C.; and Ma, F. 2021. MedPath: Augmenting health risk prediction via medical knowledge paths. In Proceedings of the Web Conference 2021, 1397–1409.
Yusuf, S.; Hawken, S.; Ôunpuu, S.; Dans, T.; Avezum, A.; Lanas, F.; McQueen, M.; Budaj, A.; Pais, P.; Varigos, J.; et al. 2004. Effect of potentially modifiable risk factors associated with myocardial infarction in 52 countries (the INTERHEART study): case-control study. The Lancet, 364(9438): 937–952.
Zhang, Y.-F.; Zhang, H.; Lipton, Z. C.; Li, L. E.; and Xing, E. P. 2022. Can Transformers be Strong Treatment Effect Estimators? ArXiv preprint, abs/2202.01336.
2024
979
18,827
Spherical Pseudo-Cylindrical Representation for Omnidirectional Image Super-resolution
Qing Cai1, Mu Li2, Dongwei Ren3, Jun Lyu4, Haiyong Zheng1∗, Junyu Dong1∗, Yee-Hong Yang5
1 Faculty of Computer Science and Technology, Ocean University of China
2 School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen
3 School of Computer Science and Technology, Harbin Institute of Technology
4 School of Nursing, The Hong Kong Polytechnic University
5 Department of Computing Science, University of Alberta
[email protected], {limuhit,rendongweihit}@gmail.com, [email protected], {zhenghaiyong,dongjunyu}@ouc.edu.cn, [email protected]
∗Corresponding authors
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Omnidirectional images have attracted significant attention in recent years due to the rapid development of virtual reality technologies. Equirectangular projection (ERP), a naive form to store and transfer omnidirectional images, however, is challenging for existing two-dimensional (2D) image super-resolution (SR) methods due to its inhomogeneously distributed sampling density and distortion across latitude. In this paper, we make one of the first attempts to design a spherical pseudo-cylindrical representation, which not only allows pixels at different latitudes to adaptively adopt the best distinct sampling density but is also model-agnostic to most off-the-shelf SR methods, enhancing their performance. Specifically, we start by upsampling each latitude of the input ERP image and design a computationally tractable optimization algorithm to adaptively obtain a (sub-)optimal sampling density for each latitude of the ERP image. Addressing the distortion of ERP, we introduce a new viewport-based training loss based on the original 3D sphere format of the omnidirectional image, which inherently lacks distortion. Finally, we present a simple yet effective recursive progressive omnidirectional SR network to showcase the feasibility of our idea. The experimental results on public datasets demonstrate the effectiveness of the proposed method as well as its consistently superior performance over most state-of-the-art methods, both quantitatively and qualitatively.

Introduction
Omnidirectional images, also referred to as 360° images, provide a 360° × 180° field-of-view (FoV) and enable an excellent immersive experience. In recent years, they have garnered significant attention in many real-world applications, including robotics (Su and Grauman 2021; Scaramuzza 2007), computer vision (Khasanova and Frossard 2017; Ozcinar, Rana, and Smolic 2019), virtual reality (VR) and augmented reality (AR) (Su and Grauman 2019; Deng et al. 2021), and gaming (Tateno, Navab, and Tombari 2018).

Figure 1: (a) Illustration of the inhomogeneously distributed sampling density and distortion issues of an ERP image. (b) Concept of the proposed spherical pseudo-cylindrical representation to address the above issues.

In general, the original omnidirectional image format, i.e., the 3D sphere, must be transformed into a 2D planar representation to facilitate storage and transmission. Equirectangular projection (ERP) is the most popular projection form, in which the longitude and the latitude of the original spherical image are mapped to the horizontal and vertical grid coordinates, respectively. As a result, the distributed sampling density in the ERP is inhomogeneous and distorted across latitude (as shown in Fig. 1(a)), making it unfriendly to subsequent visual communication applications.
Additionally, when considering the trade-off between resolution and ease of storage and transmission, omnidirectional images usually have low resolutions (Elbamby et al. 2018; Deng et al. 2021). Image super-resolution (SR) is a technique that aims to recover a high-resolution (HR) image from its degraded low-resolution (LR) version with algorithms alone, without the need for any hardware device. It plays an important and fundamental role in many computer vision tasks (Haris, Shakhnarovich, and Ukita 2018; Zhang et al. 2021; Xia et al. 2022; Cao et al. 2016; Cai et al. 2023; Lyu et al. 2023). Due to their superior feature representation capabilities, convolutional neural networks (CNNs) have achieved remarkable success in SR, and many architectures have been presented so far, for example, residual learning (Kim, Lee, and Lee 2016; Nie et al. 2020), dense connections (Zhang et al. 2020; Song et al. 2020), U-Net-like architectures with skip connections (Hu et al. 2019; Prajapati et al. 2021), dilated convolutions (Yang et al. 2017; Zhang et al. 2017), generative models (Ledig et al. 2017; Li et al. 2022), and other kinds of CNNs (Tai et al. 2017; Lu et al. 2021; Gao et al. 2022). Very recently, Transformer-based SR methods (Liang et al. 2021; Chu et al. 2021) have been proposed to fully utilize the advantages of Transformers in establishing long-range dependencies. However, directly applying these methods to omnidirectional images yields unsatisfactory performance, as they do not consider the inhomogeneously distributed sampling density and distortion across latitude in the ERP (Deng et al. 2021).

To address the above issues, two kinds of methods have been proposed. The first utilizes priors of the sphere-to-plane mapping, such as LAU-Net (Deng et al. 2021), which not only mitigates the issues of ERP but is also model-agnostic to most existing methods. However, the ability of this method is limited by the intrinsic characteristics of ERP (Yoon et al. 2022). Additionally, it uses loss functions designed for 2D planar image SR, which significantly impacts its performance for omnidirectional image SR by not considering the sampling issues of ERP. The second designs spherical convolutions for omnidirectional images, as seen in SphereSR (Yoon et al. 2022), where a new kernel weight is proposed to adapt to the inhomogeneously distributed sampling density and distortion across latitude in ERP. While achieving impressive performance in omnidirectional image super-resolution, this method suffers from high computational costs due to the repeated switching between spherical and 2D planar coordinates. Additionally, it limits the amortization cost of convolution, as intermediate representations at different latitudes cannot be shared across 2D planar images since they are projected to different planes. From the above discussions, one question arises immediately: is there a simple yet effective method that can address the issues of ERP by fully utilizing the advantages of the above two methods while avoiding their weaknesses, and that can also be directly applied to most off-the-shelf SR network architectures? Motivated by the above questions, as shown in Fig. 1(b),
this paper first introduces a novel latitude adaptive pseudo-cylindrical representation (LAPR) by designing a computationally tractable optimization algorithm to adaptively optimize the sampling density for each latitude of the ERP image. Then, a new viewport-based loss is proposed based on the original 3D sphere format of the omnidirectional image, transferring the final recovered HR ERP image back to the original 3D sphere format to avoid the distortion of ERP. Finally, we design an end-to-end recursive network based on CNN and Transformer components to demonstrate the feasibility of the proposed idea. We conduct extensive comparisons with recently proposed state-of-the-art (SOTA) methods on benchmark omnidirectional datasets. The experimental results demonstrate that our method achieves SOTA performance. Briefly, the contributions of this paper include:
• We propose a novel LAPR for omnidirectional images by designing a computationally tractable optimization algorithm to adaptively obtain the optimal configuration of an ERP image. This approach not only addresses the sampling issues of ERP images but can also be easily applied to existing SR methods, directly improving their performance on omnidirectional images.
• We propose a new viewport-based training loss for omnidirectional image SR, which successfully avoids the distortion of ERP images, as it is defined on the original 3D sphere format of the omnidirectional image.
• We design a simple yet effective recursive omnidirectional backbone, which not only achieves SOTA performance but is also much more memory-efficient by recursively unfolding its CNN and Transformer components.

Related Work
2D-SR Methods: SRCNN (Dong et al. 2014), the pioneering work in applying CNNs to single image SR, uses only a three-layer CNN to represent the mapping between low-resolution (LR) and high-resolution (HR) images. Building on SRCNN, many deeper and wider CNN-based SR methods have been proposed. For example, by introducing residual learning into a deeper network, Kim et al. propose VDSR (Kim, Lee, and Lee 2016). Lim et al. propose EDSR (Lim et al. 2017) by removing unnecessary modules from conventional residual networks. Guo et al. propose DRN (Guo et al. 2020) by learning an additional dual regression mapping to estimate the down-sampling kernel. Later, attention mechanisms were introduced into SR to guide the CNN to selectively focus on more informative features. For example, Niu et al. propose HAN (Niu et al. 2020) by integrating a layer attention module and a channel-spatial attention module into the residual blocks. Mei et al. design a novel non-local sparse attention with a dynamic sparse attention pattern and propose NLSN (Mei, Fan, and Zhou 2021). Zhang et al. design a highly efficient long-range attention block by simply cascading two shift-convs with a group-wise multi-scale self-attention module and propose ELAN (Zhang et al. 2022). Recently, inspired by the significant success of Transformers in natural language processing owing to their advantages in modeling long-range context (Vaswani et al. 2017), they have also been introduced into SR (Chen et al. 2021; Liang et al. 2021; Chu et al. 2021), such as SwinIR (Liang et al. 2021) and Swin2SR (Chu et al. 2021).
However, without taking into account the inhomogeneously distributed sampling density and distortion across latitude in the ERP, all of these existing 2D-SR methods yield unsatisfactory performance for omnidirectional image SR (Deng et al. 2021; Yoon et al. 2022).
360°-SR Methods: Since the sampling issue and distortion in omnidirectional images are caused by the transformation between the original spherical image and the 2D planar image, researchers have tried to address these issues from two directions. On the one hand, some researchers address them by fully utilizing priors of the sphere-to-plane mapping. For example, Ozcinar et al. define a novel training loss by introducing the weighted-to-spherically-uniform structural similarity to tackle the distortion issue of ERP images and propose 360-SS (Ozcinar, Rana, and Smolic 2019). Deng et al. propose LAU-Net (Deng et al. 2021) by designing a latitude adaptive upscaling network, which dynamically upscales different latitude bands with varying upscaling factors, using a smaller factor for areas near the poles and a larger factor for areas around the equator of an ERP image. Although such a method can mitigate the sampling issues of ERP, it cannot fully resolve them, because it operates on the transformed format, i.e., ERP, and is thus limited by the intrinsic characteristics of ERP. Even worse, it still uses loss functions designed for 2D planar image SR, which seriously affects its performance for omnidirectional image SR since the sampling issues of ERP are not considered. On the other hand, other researchers focus on the original spherical image and propose spherical CNNs. For example, Coors et al. propose SphereNet (Coors, Condurache, and Geiger 2018) by designing CNN filters based on their spatial locations on the sphere to address the distortion issue of the ERP image. Yoon et al. apply convolution to a spherical structure constructed from the subdivision of the icosahedron and propose SphereSR (Yoon et al. 2022). While this method achieves superior results, it cannot be directly applied to existing 2D-SR architectures, because intermediate representations extracted by the spherical CNN at different latitudes cannot be shared across 2D planar images. Additionally, it suffers from high computational costs due to the repeated switching between spherical and 2D planar coordinates.

Method
Figure 2: The overall framework of the proposed method mainly consists of three parts: latitude adaptive pseudo-cylindrical representation, viewport-based training loss, and recursive omnidirectional network.
To address the issues discussed above, this paper proposes a new method by designing a novel representation with optimized hyperparameter settings for the sphere-to-plane mapping and by defining a novel viewport-based training loss. The overall framework is shown in Fig. 2. Specifically, we first propose a latitude adaptive pseudo-cylindrical representation (LAPR) based on the sinusoidal projection, since it satisfies all three major projection requirements: equal-area, conformal, and equidistant¹. However, directly applying the sinusoidal projection for image SR achieves only a relatively small improvement over ERP, as it downsamples each row of the input low-resolution (LR) ERP image and loses information that is important for SR.
Thus, we propose our LAPR by first upsampling each row of the input LR ERP image and then designing a computationally tractable optimization algorithm to adaptively obtain a (sub)optimal configuration of the latitude representation for ERP. Then, we propose our novel viewport-based loss, relying on the original 3D sphere format of the omnidirectional image; this effectively mitigates the distortion of ERP by defining the loss on the 3D sphere rather than on the ERP. Finally, we design a simple yet effective recursive progressive backbone to demonstrate the feasibility of the proposed idea. Additionally, we discuss the significant differences between our method and the two kinds of 360°-SR methods.

LAPR
For an omnidirectional image $x \in \mathbb{R}^{H\times W}$ represented in ERP with height $H$ and width $W$, its plane-to-sphere coordinate conversion can be computed by:
$$\theta_i = \Big(0.5 - \frac{i+0.5}{H}\Big)\times\pi,\quad 0 \le i < H, \tag{1}$$
$$\varphi_j = \Big(\frac{j+0.5}{W} - 0.5\Big)\times 2\pi,\quad 0 \le j < W, \tag{2}$$
where $\theta$ and $\varphi$ denote the latitude and the longitude, respectively. We also define our representation in a 2D image domain $\Omega$ parameterized by the row widths $\{W^{new}_i\}_{i=0}^{H-1}$, where $W^{new}_i = \mu_i\cdot W$ denotes the width of the $i$-th row and $\mu_i$ the magnification of each row. To avoid the information loss caused by downsampling, we define our representation by upsampling each row of the original LR image. Therefore, $\mu_i$ is defined as a positive integer greater than 1, and bicubic interpolation is adopted as the upsampling filter if necessary. Resampling an image by interpolation may introduce other forms of distortion; generally speaking, sampling-induced distortion (such as aliasing) can be mitigated by the subsequent convolution operations, which is validated by our results, where no such distortion is observed. By varying $W^{new}_i$, our representation can achieve precise control over the sampling density of each row, and the beginning point $B_i$ of each row is defined by:
$$B_i = \big\lfloor \big(\max_i(W^{new}_i) - W^{new}_i\big)/2 \big\rfloor, \tag{3}$$
where $\lfloor\cdot\rfloor$ denotes the floor function.

¹ Equal-area, conformal, and equidistant map projections preserve relative areas, local angles, and great-circle distances between points, respectively, on the sphere.

We call this data structure the parametric pseudo-cylindrical representation because it can generalize several pseudo-cylindrical map projections by specifying different magnifications $\mu_i$ for different rows. For example, the map projection becomes the standard ERP when $\mu_i = 1$, and the sinusoidal projection when $\mu_i = \cos(\theta_i)$ and Eq. 2 is replaced by:
$$\varphi_j = \Big(\frac{j - B_i + 0.5}{W^{new}_i} - 0.5\Big)\times 2\pi, \tag{4}$$
for $j \in \{B_i, \ldots, B_i + W^{new}_i - 1\}$ as the longitude mapping. By selecting different combinations of magnification $\mu_i$ for each row and the plane-to-sphere mapping, our LAPR not only includes a broad class of pseudo-cylindrical map projections as special cases but also opens the door to other novel representations that may be more suitable for omnidirectional image SR. Unless stated otherwise, in the remainder of the paper, we use Eq. 1 and Eq. 4 as the plane-to-sphere coordinate conversion. Generally, different omnidirectional images possess different optimal pseudo-cylindrical representations for SR. To obtain the best parameters, we propose a computationally tractable optimization algorithm.
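Before turning to that algorithm, the row geometry defined by Eqs. 1-4 is mechanical enough to make concrete in code. The following NumPy sketch is our own minimal illustration, not the authors' implementation: the function name, the rounding of row widths, and the per-row longitude loop are illustrative assumptions.

```python
import numpy as np

def lapr_geometry(H, W, mu):
    """Per-row geometry of the parametric pseudo-cylindrical
    representation (Eqs. 1-4). mu holds the per-row magnifications:
    mu_i = 1 recovers ERP, mu_i = cos(theta_i) the sinusoidal projection
    (the LAPR itself restricts mu_i to be at least 1)."""
    i = np.arange(H)
    theta = (0.5 - (i + 0.5) / H) * np.pi             # Eq. 1: latitude of row i
    W_new = np.rint(np.asarray(mu) * W).astype(int)   # width of row i
    B = (W_new.max() - W_new) // 2                    # Eq. 3: beginning point of row i
    # Eq. 4: longitude of pixel j in row i, for j in [B_i, B_i + W_new_i);
    # shifting j by B_i reduces Eq. 4 to a uniform sampling of [-pi, pi).
    phi = [((np.arange(W_new[r]) + 0.5) / W_new[r] - 0.5) * 2 * np.pi
           for r in range(H)]
    return theta, W_new, B, phi

# Example: the sinusoidal special case on a small grid.
H, W = 8, 16
mu_sin = np.cos((0.5 - (np.arange(H) + 0.5) / H) * np.pi)
theta, W_new, B, phi = lapr_geometry(H, W, mu_sin)
```

The optimization algorithm that selects the row widths is described next.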
Specifically, we start by reducing the pseudo-cylindrical representation to a tiled representation (Yu, Lakshman, and Girod 2015) to further simplify it, and thereby obtain our LAPR. In this LAPR, neighboring rows with the same magnification $\mu_i$ can be viewed as a tile, and the tiled representation $z$ of the original input $x$ can be defined as $\{z_t\}_{t=0}^{T-1}$, where $z_t \in \mathbb{R}^{H_t\times W^{new}_t}$ denotes the $t$-th tile and $T = H/H_t$ denotes the total number of tiles. Then, we formulate the optimization problem of the pseudo-cylindrical representation parameters as:
$$\min_{\{W^{new}_t\}} \frac{1}{|\Psi|}\sum_{x\in\Psi} F\big(x,\, I_{SR}(x),\, \{W^{new}_t\}\big)$$
$$\text{s.t.}\quad W^{new}_t \in \{W^{new}_0, \ldots, W^{new}_{L-1}\},\ 0 \le t < T,$$
$$W^{new}_t = W^{new}_{T-1-t},\ 0 \le t \le T_{half},$$
$$W^{new}_t \le W^{new}_{t'},\ \text{for } t \le t' \text{ and } 0 \le t, t' \le T_{half}, \tag{5}$$
where $x$ denotes the given image; $F(\cdot)$ denotes a quantitative measure of SR performance; $I_{SR}(\cdot)$ denotes an existing pre-trained public 2D-SR method; and $W^{new}_t = (t+1)\lfloor W/L\rfloor$, where $L$ is the number of quantization levels of the width $W$ of the ERP image, set such that $L \ll W$ to reduce the search space of possible widths. As shown in Fig. 1(b), we make the proposed LAPR symmetrical along the equator to double the search speed, and thus $T_{half} = \lfloor (T-1)/2\rfloor - 1$. Finally, the tractable optimization algorithm is obtained (please see the supplementary material for details).
Fig. 3 shows a comparison between the representative 2D-SR model EDSR (Lim et al. 2017) with and without the proposed LAPR. The results show that EDSR with the proposed LAPR outperforms EDSR without it. This not only demonstrates the effectiveness of the proposed LAPR but also shows that the LAPR is model-agnostic and can be applied to most off-the-shelf SR methods.
Figure 3: Comparison between existing methods with and without using the proposed LAPR.

Viewport-based Training Loss
As discussed in the introduction, almost all existing training losses for omnidirectional image SR networks are designed for 2D planar image SR, which severely limits their performance for omnidirectional image SR since they do not consider the distortion across the latitude of ERP images. When a human views an omnidirectional image with a head-mounted display, the ERP image is first transformed into a 3D sphere using the plane-to-sphere coordinates defined in Eq. 1 and Eq. 2. The visual content is then rendered as a viewport (as shown in Fig. 1(a)), depending on the viewer's head position and the field-of-view (FoV) of the head-mounted display (Zhou et al. 2021). Inspired by this observation, we define our training loss on the viewports of the omnidirectional image, reflecting how an omnidirectional image is actually viewed (Sui et al. 2021; Fang et al. 2022). Specifically, we first adopt rectilinear projections (Ye, Alshina, and Boyce 2017) to map the recovered HR image in ERP format back to the 3D sphere format, and then sample 14 viewports uniformly distributed over the sphere for each omnidirectional image², which together cover all spherical content. Each viewport is an $H_v \times W_v$ rectangle, where $H_v = \lceil H/3\rceil$ and $W_v = \lceil W/4\rceil$, with a FoV of $\pi/3 \times \pi/2$. Given a training dataset $\{I^{i,j}_{HRv}, I^{i,j}_{GTv}\}_{i=1,j=1}^{N,14}$, consisting of $N$ recovered images (14 viewports each) and the corresponding ground-truth images (14 viewports each), our viewport-based loss function is defined as:
$$\mathcal{L}^{VB}_{MAE}(\Theta) = \frac{1}{14}\,\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{14} \big\|I^{i,j}_{HRv} - I^{i,j}_{GTv}\big\|_1, \tag{6}$$
where $\Theta$ denotes the parameter set of the proposed network.

² The centers of the 14 viewports correspond to (0, -π/2), (0, 0), (0, π/2), (0, π), (-π/4, -π/2), (-π/4, 0), (-π/4, π/2), (-π/4, π), (π/4, -π/2), (π/4, 0), (π/4, π/2), (π/4, π), (π/2, 0), and (-π/2, 0), respectively.
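To make the loss concrete, the sketch below renders viewports from the ERP image with grid sampling. It is a minimal illustration under our own assumptions: the 14 rectilinear sampling grids are taken as precomputed inputs (their construction from the viewport centers in footnote 2 is omitted), and a per-pixel mean absolute error stands in for the L1 norm of Eq. 6 up to a constant.

```python
import torch
import torch.nn.functional as F

def viewport_mae_loss(sr_erp, gt_erp, grids):
    """Viewport-based MAE loss (Eq. 6), as a sketch.

    sr_erp, gt_erp : (B, C, H, W) recovered and ground-truth ERP images.
    grids          : (14, Hv, Wv, 2) precomputed rectilinear-projection
                     sampling grids in [-1, 1], one per viewport center.
    """
    loss = 0.0
    for g in grids:  # render one viewport at a time
        g = g.unsqueeze(0).expand(sr_erp.size(0), -1, -1, -1)
        sr_v = F.grid_sample(sr_erp, g, align_corners=False)  # SR viewport
        gt_v = F.grid_sample(gt_erp, g, align_corners=False)  # GT viewport
        loss = loss + (sr_v - gt_v).abs().mean()
    return loss / grids.size(0)  # average over the 14 viewports
```

Because `grid_sample` is differentiable, gradients flow from the distortion-free viewports back to the ERP-domain network output.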
Recursive Omnidirectional Backbone
As shown in Fig. 2, our network is a progressive architecture designed by gradually unfolding a recursive block (RB) constructed from residual Swin Transformer blocks (RSTB) (Liang et al. 2021), convolution layers, and ReLU layers. This progressive structure aims to recover the high-resolution (HR) omnidirectional image progressively from its low-resolution (LR) input. Specifically, we first represent the input LR omnidirectional image using the proposed LAPR to address the sampling issue of the ERP image. It is then input to the designed recursive network to extract deep features. Finally, the deep features are transformed back to the ERP form and input into the upscale module and the reconstruction module to output the final HR omnidirectional image.
Following previous works (Zhang et al. 2018; Deng et al. 2021; Cai et al. 2022), we use one convolution layer to extract the shallow feature (SF) $F_0$ from the LR omnidirectional image represented by our LAPR, $I^{LAPR}_{LR}$:
$$F_0 = f_{SF}(I^{LAPR}_{LR}), \tag{7}$$
where $f_{SF}$ is the convolution operation. Then, the extracted shallow feature is input to the proposed recursive network to further extract deep features (DF) $F_{DF}$:
$$F_{DF} = f_{RB}\big(f_{IN}(F_0, F_{s-1})\big), \tag{8}$$
where $f_{IN}$ is a convolution operation and $f_{RB}$ is the operation of the recursive block, which consists of $S$ residual Swin Transformer blocks (RSTB) (Liang et al. 2021), four convolution layers, and a ReLU layer. Motivated by previous work (Ren et al. 2019), we implement the above progressive hybrid architecture by recursively unfolding it $R$ times, as shown in Fig. 2. Considering that the parameters of our network mainly come from the recursive block, we apply it recursively instead of directly stacking copies of it, reducing the model size. Finally, the recovered HR omnidirectional image $I_{HR}$ is obtained by mapping the deep features $F_{DF}$ back to the ERP format and inputting them into an upscale module and a reconstruction module:
$$I_{HR} = f_{Rec}\big(f_{UP}(F_{DF})\big), \tag{9}$$
where $f_{UP}$ and $f_{Rec}$ denote the operations of the upscale module and the reconstruction module, respectively.
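A minimal sketch of this recursive unfolding is given below. It is our own illustration: a plain convolutional block stands in for the RSTBs, all channel counts are chosen arbitrarily, and the mapping back to ERP plus the upscale/reconstruction modules of Eq. 9 are omitted.

```python
import torch
import torch.nn as nn

class RecursiveBackbone(nn.Module):
    """Sketch of the recursive unfolding of Eqs. 7-8."""
    def __init__(self, ch=64, R=4):
        super().__init__()
        self.sf = nn.Conv2d(3, ch, 3, padding=1)           # f_SF, Eq. 7
        self.f_in = nn.Conv2d(2 * ch, ch, 3, padding=1)    # f_IN: fuse F0 and F_{s-1}
        self.rb = nn.Sequential(                            # f_RB (placeholder for RSTBs)
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.R = R

    def forward(self, x_lapr):
        f0 = self.sf(x_lapr)
        f = f0
        for _ in range(self.R):                             # unfold R times, Eq. 8
            f = self.rb(self.f_in(torch.cat([f0, f], 1)))   # weights shared each pass
        return f                                            # F_DF; Eq. 9 is omitted
```

The key design point is that `self.rb` is applied R times with the same weights, so depth grows without a corresponding growth in parameters.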
Discussion
Below, we discuss the significant differences between our method and the two kinds of 360°-SR methods discussed in the second paragraph of the related work section.
Difference to the method based on priors of sphere-to-plane mapping: (i) That method only uses the 2D planar representation to mitigate the sampling issues of ERP, while we design our representation by integrating the ERP format and the sphere format into one model. Our approach fully utilizes the advantages of the ERP format, making it model-agnostic to existing 2D-SR models, and also leverages the advantages of the sphere format to avoid the sampling issues of ERP. (ii) That method still uses loss functions designed for 2D planar image SR, which seriously affects its performance for omnidirectional image SR because they do not consider the sampling issues of ERP. In contrast, we design a novel loss for omnidirectional images based on the viewports of the 3D sphere.
Difference to the method based on designing spherical convolution: (i) That method needs to continuously switch between spherical and 2D planar coordinates to enable the proposed spherical convolution, resulting in high computational costs; in contrast, our method requires only one switch. (ii) That method is difficult to combine with existing 2D-SR architectures, as intermediate representations obtained through spherical convolution at varying latitudes cannot be effectively shared across 2D planar images; in contrast, our method is model-agnostic and compatible with most off-the-shelf SR models.

Experiments
Experiment Settings
Datasets: Following previous methods (Yoon et al. 2022), we choose ODI-SR (Deng et al. 2021) as our training dataset, which contains 1200 training images, 100 validation images, and 100 testing images. We use ODI-SR and SUN 360 Panorama (Xiao et al. 2012) as our test datasets.
Evaluation Metrics: To quantitatively compare the recovered HR results of the proposed model with those of the SOTA models, we use weighted-to-spherically-uniform PSNR (WS-PSNR) (Sun, Lu, and Yu 2017) and weighted-to-spherically-uniform SSIM (WS-SSIM) (Zhou et al. 2018), two widely used metrics for quantitatively evaluating recovered omnidirectional images.
Implementation Details: Following previous works (Deng et al. 2021; Yoon et al. 2022), we train our model for scales ×8 and ×16, and all degraded datasets are obtained using bicubic interpolation. To avoid boundary artifacts between neighboring tiles, following previous work (Deng et al. 2021), an extra $H_t/8$ overlap is added between neighboring tiles, where $H_t$ denotes the height of each tile. The proposed model is trained with the ADAM optimizer (Kingma and Ba 2014) with a fixed initial learning rate of $10^{-4}$. The whole pipeline is implemented in PyTorch on 4 RTX 3090 GPUs, each with 24 GB of memory (please see the supplementary material for more details).

Comparisons with State-of-the-art Methods
To validate the effectiveness and superior performance of the proposed method, we compare it with 10 SOTA methods, including 7 2D-SR methods (EDSR (Lim et al. 2017), HAN (Niu et al. 2020), DRN (Guo et al. 2020), NLSN (Mei, Fan, and Zhou 2021), SwinIR (Liang et al. 2021), ELAN (Zhang et al. 2022), and Swin2SR (Conde et al. 2023)) and 3 360°-SR methods (360-SS (Ozcinar, Rana, and Smolic 2019), LAU-Net (Deng et al. 2021), and SphereSR (Yoon et al. 2022)). Note that, for a fair comparison, all 2D-SR baselines are retrained on the ODI-SR dataset using their open-source codes with the same patch size as our method, and are dubbed 2D-SR-Re.
Quantitative Comparison: Table 1 reports the quantitative comparisons between our method and the 10 SOTA SISR methods on two benchmark datasets for scale factors ×8 and ×16. Our method achieves the best results on all benchmarks for all scaling factors and surpasses all competitors in terms of WS-PSNR and WS-SSIM. In particular, it improves the WS-PSNR value by 0.32 dB and 0.34 dB on the ODI-SR dataset for scale factors ×8 and ×16, respectively, compared with the second-best method.
| Type | SR Method | ODI-SR ×8 | ODI-SR ×16 | SUN 360 ×8 | SUN 360 ×16 |
|---|---|---|---|---|---|
| 2D-SR-Re | EDSR | 23.97 / 0.6483 | 22.24 / 0.6090 | 23.79 / 0.6472 | 21.83 / 0.5974 |
| | DRN | 24.32 / 0.6571 | 22.52 / 0.6212 | 24.25 / 0.6602 | 22.11 / 0.6092 |
| | HAN | 24.32 / 0.6620 | 22.53 / 0.6265 | 24.25 / 0.6681 | 22.12 / 0.6105 |
| | NLSN | 24.33 / 0.6684 | 22.53 / 0.6285 | 24.26 / 0.6709 | 22.14 / 0.6182 |
| | SwinIR | 24.34 / 0.6721 | 22.54 / 0.6288 | 24.27 / 0.6734 | 22.15 / 0.6273 |
| | ELAN | 24.35 / 0.6756 | 22.56 / 0.6390 | 24.28 / 0.6788 | 22.16 / 0.6355 |
| | Swin2SR | 24.37 / 0.6770 | 22.58 / 0.6395 | 24.29 / 0.6822 | 22.18 / 0.6380 |
| 360°-SR | 360-SS | 24.14 / 0.6539 | 22.35 / 0.6102 | 24.19 / 0.6536 | 22.10 / 0.5947 |
| | LAU-Net | 24.36 / 0.6602 | 22.52 / 0.6284 | 24.24 / 0.6708 | 22.05 / 0.6058 |
| | SphereSR | 24.37 / 0.6777 | 22.51 / 0.6370 | 24.17 / 0.6820 | 21.95 / 0.6342 |
| | Ours | 24.72 / 0.6886 | 22.90 / 0.6480 | 24.53 / 0.6855 | 22.37 / 0.6475 |

Table 1: Quantitative comparisons with state-of-the-art SR methods on two benchmark datasets for scale factors ×8 and ×16. Each cell reports WS-PSNR / WS-SSIM; in the original layout, bold and underlined fonts mark the best and second-best results.

Figure 4: Visual comparisons with state-of-the-art SISR methods for 8× SR on the ODI-SR and SUN 360 datasets. The colors red and blue represent the best and second-best methods. Best viewed on screen.

Qualitative Comparison: In Fig. 4, we visually illustrate zoomed-in comparison results with the SOTA methods on several images from the test datasets. The proposed method consistently obtains sharper results, recovering more high-frequency textures and details, while most competing models suffer from unpleasant blurring artifacts. This validates the effectiveness and efficiency of the proposed method.
Further Comparison: Table 2 shows the FLOPs, the number of parameters, the running time, and the WS-PSNR values of our method and the SOTA methods on the ODI-SR dataset for a scale factor of 16. Our method achieves the best performance with competitive efficiency and computational cost.

| Model | FLOPs | Params | Time | WS-PSNR |
|---|---|---|---|---|
| SwinIR | 900 G | 11.5 M | 0.982 s | 22.54 |
| ELAN | 845 G | 8.9 M | 0.715 s | 22.56 |
| Swin2SR | 952 G | 12.3 M | 1.041 s | 22.58 |
| 360-SS | 15 G | 1.6 M | 0.025 s | 22.35 |
| LAU-Net | 685 G | 9.4 M | 0.443 s | 22.52 |
| SphereSR | 587 G | 8.7 M | 0.401 s | 22.51 |
| Ours | 372 G | 7.8 M | 0.312 s | 22.90 |

Table 2: Computational complexity comparison on the ODI-SR dataset for scale factor ×16.

Ablation Study
LAPR: As shown in Table 3, to evaluate the effectiveness of the proposed LAPR, we compare our method using different input representations: the original ERP, the sinusoidal projection, and the proposed LAPR. The highest WS-PSNR and WS-SSIM values are obtained with the proposed LAPR.

| Representation | ERP | Sinusoidal | Our LAPR |
|---|---|---|---|
| WS-PSNR | 24.48 | 24.56 | 24.72 |
| WS-SSIM | 0.6688 | 0.6791 | 0.6886 |

Table 3: Comparison of different input representations on the ODI-SR dataset for scale factor ×8.

To validate that the LAPR is model-agnostic to existing 2D-SR methods, another comparison experiment is conducted between the top three 2D-SR models, SwinIR (Liang et al. 2021), ELAN (Zhang et al. 2022), and Swin2SR (Conde et al. 2023), with and without the proposed LAPR, as shown in Table 4. All three methods perform better with the proposed LAPR than without it.
This not only validates the effectiveness of the proposed LAPR but also shows that it is model-agnostic to most off-the-shelf SR methods and can improve their performance.

| 2D Model | SwinIR | ELAN | Swin2SR |
|---|---|---|---|
| w/o LAPR | 24.34 | 24.35 | 24.37 |
| w/ LAPR | 24.48 | 24.50 | 24.52 |

Table 4: Comparison between existing 2D-SR methods with and without the proposed LAPR on the ODI-SR dataset for scale factor ×8.

Viewport-Based Loss: To investigate the effectiveness of the proposed viewport-based loss, we compare the proposed method trained with the widely used $\mathcal{L}_{MAE}$ loss designed for 2D planar image SR networks against the proposed viewport-based loss $\mathcal{L}^{VB}_{MAE}$. The corresponding values are shown in Table 5. Our viewport-based loss achieves better performance, demonstrating its effectiveness.

| Loss function | $\mathcal{L}_{MAE}$ | $\mathcal{L}^{VB}_{MAE}$ |
|---|---|---|
| WS-PSNR | 22.80 | 22.90 |
| WS-SSIM | 0.6310 | 0.6480 |

Table 5: Influence of different training losses on the ODI-SR dataset for scale factor ×16.

Recursive Network: To validate that the superior performance of our method mainly comes from the proposed LAPR and viewport-based loss rather than from the Transformer-based backbone, we train another version of our method by replacing the RSTBs shown in Fig. 2 with 32 residual blocks similar to NLSA (Mei, Fan, and Zhou 2021), dubbed Ours-C (note that, for a fair comparison, all parameter settings of the residual blocks are the same as those in NLSA). As shown in Table 6, Ours-C still achieves superior performance compared to previous SOTA methods, indirectly validating the effectiveness of our LAPR and viewport-based loss.

| Model | Params (M) | WS-PSNR | WS-SSIM |
|---|---|---|---|
| SwinIR | 11.5 | 24.34 | 0.6721 |
| ELAN | 8.9 | 24.35 | 0.6756 |
| Swin2SR | 12.3 | 24.37 | 0.6770 |
| 360-SS | 1.6 | 24.14 | 0.6539 |
| LAU-Net | 9.4 | 24.36 | 0.6602 |
| SphereSR | 8.7 | 24.37 | 0.6777 |
| Ours-C | 7.5 | 24.62 | 0.6779 |

Table 6: Comparison on the ODI-SR dataset for scale ×8.

Conclusion
In this paper, we present a novel method for accurate omnidirectional image super-resolution that effectively addresses the sampling issues and distortion across the latitude of ERP images. Specifically, we introduce a latitude adaptive pseudo-cylindrical representation for omnidirectional images, which allows pixels at different latitudes to adaptively adopt the best distinct sampling density; this is achieved by employing the proposed computationally tractable optimization algorithm to search for the optimal width of each tile. Additionally, we propose a viewport-based loss, which reflects how humans view omnidirectional images, to mitigate the distortion of ERP. Finally, a recursive progressive backbone is designed to demonstrate the feasibility of our idea. Quantitative and qualitative evaluations on different benchmark datasets demonstrate the effectiveness of the proposed method and showcase its superior performance over most SOTA methods.
Acknowledgments
This work was supported in part by the National Science Foundation of China (No. 62102338, 62102339, 62171421, 62371413); in part by the Natural Science Foundation of Shandong Province (No. ZR2020QF031); in part by the China Postdoctoral Science Foundation (No. 2023M733342); in part by the Qingdao Postdoctoral Innovation Project (No. QDBSH20230101001); in part by the TaiShan Scholars Youth Expert Program of Shandong Province (No. tsqn202306096); in part by the National Key R&D Program of China (No. 2022ZD0117201); and in part by the University of Alberta and the Natural Sciences and Engineering Research Council of Canada.

References
Cai, Q.; Li, J.; Li, H.; Yang, Y.-H.; Wu, F.; and Zhang, D. 2022. TDPN: Texture and Detail-Preserving Network for Single Image Super-Resolution. IEEE TIP, 31: 2375–2389.
Cai, Q.; Qian, Y.; Li, J.; Lyu, J.; Yang, Y.-H.; Wu, F.; and Zhang, D. 2023. HIPA: Hierarchical Patch Transformer for Single Image Super Resolution. IEEE TIP, 32: 3226–3237.
Cao, L.; Ji, R.; Wang, C.; and Li, J. 2016. Towards domain adaptive vehicle detection in satellite image by supervised super-resolution transfer. In AAAI, volume 30, 1138–1144.
Chen, H.; Wang, Y.; Guo, T.; Xu, C.; Deng, Y.; Liu, Z.; Ma, S.; Xu, C.; Xu, C.; and Gao, W. 2021. Pre-trained image processing transformer. In CVPR, 12299–12310.
Chu, X.; Tian, Z.; Wang, Y.; Zhang, B.; Ren, H.; Wei, X.; Xia, H.; and Shen, C. 2021. Twins: Revisiting the design of spatial attention in vision transformers. In NIPS, volume 34, 9355–9366.
Conde, M. V.; Choi, U.-J.; Burchi, M.; and Timofte, R. 2023. Swin2SR: SwinV2 transformer for compressed image super-resolution and restoration. In ECCVW, 669–687. Springer.
Coors, B.; Condurache, A. P.; and Geiger, A. 2018. SphereNet: Learning spherical representations for detection and classification in omnidirectional images. In ECCV, 518–533.
Deng, X.; Wang, H.; Xu, M.; Guo, Y.; Song, Y.; and Yang, L. 2021. LAU-Net: Latitude adaptive upscaling network for omnidirectional image super-resolution. In CVPR, 9189–9198.
Dong, C.; Loy, C. C.; He, K.; and Tang, X. 2014. Learning a deep convolutional network for image super-resolution. In ECCV, 184–199.
Elbamby, M. S.; Perfecto, C.; Bennis, M.; and Doppler, K. 2018. Toward low-latency and ultra-reliable virtual reality. IEEE Network, 32(2): 78–84.
Fang, Y.; Huang, L.; Yan, J.; Liu, X.; and Liu, Y. 2022. Perceptual quality assessment of omnidirectional images. In AAAI, volume 36, 580–588.
Gao, G.; Li, W.; Li, J.; Wu, F.; Lu, H.; and Yu, Y. 2022. Feature distillation interaction weighting network for lightweight image super-resolution. In AAAI, volume 36, 661–669.
Guo, Y.; Chen, J.; Wang, J.; Chen, Q.; Cao, J.; Deng, Z.; Xu, Y.; and Tan, M. 2020. Closed-loop matters: Dual regression networks for single image super-resolution. In CVPR, 5407–5416.
Haris, M.; Shakhnarovich, G.; and Ukita, N. 2018. Deep back-projection networks for super-resolution. In CVPR, 1664–1673.
Hu, X.; Naiel, M. A.; Wong, A.; Lamm, M.; and Fieguth, P. 2019. RUNet: A robust UNet architecture for image super-resolution. In CVPRW, 505–507.
Khasanova, R.; and Frossard, P. 2017. Graph-based classification of omnidirectional images. In ICCVW, 869–878.
Kim, J.; Lee, J. K.; and Lee, K. M. 2016. Accurate image super-resolution using very deep convolutional networks. In CVPR, 1646–1654.
Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. 2017. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, 4681–4690.
Li, W.; Zhou, K.; Qi, L.; Lu, L.; and Lu, J. 2022. Best-Buddy GANs for highly detailed image super-resolution. In AAAI, volume 36, 1412–1420.
Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; and Timofte, R. 2021. SwinIR: Image restoration using swin transformer. In ICCVW, 1833–1844.
Lim, B.; Son, S.; Kim, H.; Nah, S.; and Mu Lee, K. 2017. Enhanced deep residual networks for single image super-resolution. In CVPRW, 136–144.
Lu, L.; Li, W.; Tao, X.; Lu, J.; and Jia, J. 2021. MASA-SR: Matching acceleration and spatial adaptation for reference-based image super-resolution. In CVPR, 6368–6377.
Lyu, J.; Li, G.; Wang, C.; Cai, Q.; Dou, Q.; Zhang, D.; and Qin, J. 2023. Multicontrast MRI Super-Resolution via Transformer-Empowered Multiscale Contextual Matching and Aggregation. IEEE TNNLS.
Mei, Y.; Fan, Y.; and Zhou, Y. 2021. Image super-resolution with non-local sparse attention. In CVPR, 3517–3526.
Nie, S.; Ma, C.; Chen, D.; Yin, S.; Wang, H.; Jiao, L.; and Liu, F. 2020. A Dual Residual Network with Channel Attention for Image Restoration. In ECCV, 352–363.
Niu, B.; Wen, W.; Ren, W.; Zhang, X.; Yang, L.; Wang, S.; Zhang, K.; Cao, X.; and Shen, H. 2020. Single image super-resolution via a holistic attention network. In ECCV, 191–207.
Ozcinar, C.; Rana, A.; and Smolic, A. 2019. Super-resolution of omnidirectional images using adversarial learning. In IEEE International Workshop on Multimedia Signal Processing, 1–6.
Prajapati, K.; Chudasama, V.; Patel, H.; Sarvaiya, A.; Upla, K. P.; Raja, K.; Ramachandra, R.; and Busch, C. 2021. Channel Split Convolutional Neural Network (ChaSNet) for Thermal Image Super-Resolution. In CVPR, 4368–4377.
Ren, D.; Zuo, W.; Hu, Q.; Zhu, P.; and Meng, D. 2019. Progressive image deraining networks: A better and simpler baseline. In CVPR, 3937–3946.
Scaramuzza, D. 2007. Omnidirectional vision: from calibration to robot motion estimation. Ph.D. thesis, ETH Zurich.
Song, D.; Xu, C.; Jia, X.; Chen, Y.; Xu, C.; and Wang, Y. 2020. Efficient residual dense block search for image super-resolution. In AAAI, volume 34, 12007–12014.
Su, Y.-C.; and Grauman, K. 2019. Kernel transformer networks for compact spherical convolution. In CVPR, 9442–9451.
Su, Y.-C.; and Grauman, K. 2021. Learning Spherical Convolution for 360° Recognition. IEEE TPAMI.
Sui, X.; Ma, K.; Yao, Y.; and Fang, Y. 2021. Perceptual quality assessment of omnidirectional images as moving camera videos. IEEE TVCG, 28(8): 3022–3034.
Sun, Y.; Lu, A.; and Yu, L. 2017. Weighted-to-spherically-uniform quality evaluation for omnidirectional video. IEEE Signal Processing Letters, 24(9): 1408–1412.
Tai, Y.; Yang, J.; Liu, X.; and Xu, C. 2017. MemNet: A persistent memory network for image restoration. In ICCV, 4539–4547.
Tateno, K.; Navab, N.; and Tombari, F. 2018. Distortion-aware convolutional filters for dense prediction in panoramic images. In ECCV, 707–722.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In NIPS.
Xia, B.; Hang, Y.; Tian, Y.; Yang, W.; Liao, Q.; and Zhou, J. 2022. Efficient Non-Local Contrastive Attention for Image Super-Resolution. In AAAI.
Xiao, J.; Ehinger, K. A.; Oliva, A.; and Torralba, A. 2012. Recognizing scene viewpoint using panoramic place representation. In CVPR, 2695–2702.
Yang, W.; Tan, R. T.; Feng, J.; Liu, J.; Guo, Z.; and Yan, S. 2017. Deep joint rain detection and removal from a single image. In CVPR, 1357–1366.
Ye, Y.; Alshina, E.; and Boyce, J. 2017. JVET-G1003: Algorithm description of projection format conversion and video quality metrics in 360Lib version 4. Joint Video Exploration Team.
Yoon, Y.; Chung, I.; Wang, L.; and Yoon, K.-J. 2022. SphereSR: 360° Image Super-Resolution with Arbitrary Projection via Continuous Spherical Image Representation. In CVPR, 5677–5686.
Yu, M.; Lakshman, H.; and Girod, B. 2015. Content adaptive representations of omnidirectional videos for cinematic virtual reality. In Proceedings of the International Workshop on Immersive Media Experiences, 1–6.
Zhang, K.; Zuo, W.; Gu, S.; and Zhang, L. 2017. Learning deep CNN denoiser prior for image restoration. In CVPR, 3929–3938.
Zhang, X.; Zeng, H.; Guo, S.; and Zhang, L. 2022. Efficient Long-Range Attention Network for Image Super-resolution. In ECCV.
Zhang, Y.; Li, K.; Li, K.; and Fu, Y. 2021. MR image super-resolution with squeeze and excitation reasoning attention network. In CVPR, 13425–13434.
Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; and Fu, Y. 2018. Image super-resolution using very deep residual channel attention networks. In ECCV, 286–301.
Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; and Fu, Y. 2020. Residual dense network for image restoration. IEEE TPAMI, 43(7): 2480–2495.
Zhou, Y.; Sun, Y.; Li, L.; Gu, K.; and Fang, Y. 2021. Omnidirectional image quality assessment by distortion discrimination assisted multi-stream network. IEEE TCSVT, 32(4): 1767–1777.
Zhou, Y.; Yu, M.; Ma, H.; Shao, H.; and Jiang, G. 2018. Weighted-to-spherically-uniform SSIM objective quality evaluation for panoramic video. In IEEE International Conference on Signal Processing, 54–57.
Learning Accurate and Bidirectional Transformation via Dynamic Embedding Transportation for Cross-Domain Recommendation
Weiming Liu1, Chaochao Chen1*, Xinting Liao1, Mengling Hu1, Yanchao Tan2, Fan Wang1, Xiaolin Zheng1, Yew-Soon Ong3
1College of Computer Science and Technology, Zhejiang University, China
2College of Computer and Data Science, Fuzhou University, Fuzhou, China
3School of Computer Science and Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore
{21831010, zjuccc, xintingliao, humengling}@zju.edu.cn, [email protected], {fanwang97, xlzheng}@zju.edu.cn, [email protected]

Abstract
With the rapid development of Internet and Web techniques, Cross-Domain Recommendation (CDR) models have been widely explored for resolving the data-sparsity and cold-start problems. Meanwhile, most CDR models must utilize explicit domain-shareable information (e.g., overlapped users or items) for knowledge transfer across domains. However, this assumption may not always hold, since users and items are often non-overlapped in real practice, and the performance of many previous works is severely impaired when such domain-shareable information is not available. To address these issues, we propose the Joint Preference Exploration and Dynamic Embedding Transportation model (JPEDET), a novel framework for solving the CDR problem when users and items are non-overlapped. JPEDET includes two main modules, i.e., a joint preference exploration module and a dynamic embedding transportation module. The joint preference exploration module fuses rating and review information to model user preferences. The dynamic embedding transportation module shares knowledge via neural ordinary equations for dual transformation across domains. Moreover, we innovatively propose a dynamic transport flow equipped with linear interpolation guidance on the barycentric Wasserstein path for achieving accurate and bidirectional transformation. Our empirical study on Amazon datasets demonstrates that JPEDET outperforms the state-of-the-art models under the CDR setting.

Introduction
Cross-Domain Recommendation (CDR) has been widely investigated since it is an effective approach for tackling data-sparsity and cold-start issues in recommender systems (Zang et al. 2022; Lu et al. 2015; Zhu et al. 2021a). Leveraging useful knowledge (e.g., user-item ratings and reviews) across domains can enhance model performance. Meanwhile, most current CDR models (Man et al. 2017) assume that users or items are overlapped across domains for knowledge sharing. However, explicit domain-shareable information (e.g., overlapped users or items) might be difficult to obtain, and thus the contributions of these CDR models might be insignificant (Li, Yang, and Xue 2009; Moreno et al. 2012; Gao et al. 2013; Choi et al. 2022). How to obtain strong recommendation results without explicit domain-shareable information has become an urgent problem. In this paper, we focus on the dual-target non-overlapped CDR problem. That is, we aim to provide source (or target) users with target (or source) items according to their preference characteristics. We further assume that both source and target users are non-overlapped.

*Chaochao Chen is the corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Meanwhile, each domain has multiple types of information, e.g., user-item ratings and reviews, which are commonly available in real practice (Yi et al. 2018; Wang, Ounis, and Macdonald 2021; Chen et al. 2019; Dong et al. 2020). This problem is rather challenging since (1) there are no explicit transferring bridges (e.g., overlapped users) for dual knowledge sharing, and (2) embedding discrepancies across domains strongly hinder model performance. Previous CDR models cannot resolve these challenges well, resulting in limited performance. On the one hand, most CDR models rely on overlapped users or items to develop reliable representations via embedding mapping and alignment mechanisms (Zang et al. 2022). However, their performance can be severely degraded when explicit domain-shareable information (e.g., overlapped users) is not available. What is worse, different domains with diverse kinds of items are often heterogeneous, which introduces domain bias (Guerraoui et al. 2017; Li et al. 2021). Although the commonly used Maximum Mean Discrepancy (MMD) is easy to implement for domain adaptation without overlapped users, it fails to provide accurate matching results in complicated latent embedding spaces (Korba et al. 2021). Meanwhile, previous papers have pointed out that adversarial training with domain discriminators can be unstable under some circumstances (Shu et al. 2018), which degrades model performance. On the other hand, recent CDR models (Yu et al. 2020; Li et al. 2023) still mainly focus on unidirectional mapping from rich to sparse domains. However, unidirectional mapping cannot satisfy the dual-target recommendation task, which limits their potential. Thus, how to fully exploit an accurate and bidirectional domain alignment method for recommendation still needs more investigation.
To address the aforementioned issues, in this paper, we propose the Joint Preference Exploration and Dynamic Embedding Transportation model (JPEDET) for solving the dual-target non-overlapped cross-domain recommendation problem. We devise two modules in JPEDET, i.e., a joint preference exploration module and a dynamic embedding transportation module, for fusing users' rating and review information and sharing general knowledge across domains. The joint preference exploration module fuses rating and review information via dual autoencoder frameworks to exploit user general preferences. The dynamic embedding transportation module shares general but implicit preferences among non-overlapped users across domains. To fulfill this task, we innovatively propose a dynamic transport flow with matching regularization and moving correction stages. Specifically, the dynamic transport flow provides an accurate and bidirectional transformation across domains via linear interpolation guidance on the barycentric Wasserstein path, yielding monotone and straight trajectories with a constant moving speed. Utilizing these two modules, we can enhance recommendation performance in both domains by realizing the embedding mapping and transportation of non-overlapped users across domains. We summarize our contributions: (1) We propose a novel framework consisting of a joint preference exploration module and a dynamic embedding transportation module, i.e., JPEDET, for solving the dual-target non-overlapped cross-domain recommendation problem.
(2) We are the first to propose accurate and bidirectional transformation via dynamic embedding transportation, enhancing dual transfer of user embeddings across domains with theoretical guarantees. (3) Extensive empirical studies demonstrate that JPEDET significantly improves over the state-of-the-art models.

Related Work
Cross Domain Recommendation. Cross-Domain Recommendation (CDR) mainly involves a source and a target domain for solving the data-sparsity and cold-start problems (Zhao et al. 2021; Khan et al. 2017; Sun et al. 2022). Existing CDR approaches fall into two main types, i.e., overlapped-based methods and non-overlapped-based methods. Overlapped-based methods assume that users or items are overlapped across domains and regard them as the bridge for knowledge sharing. Most overlapped-based methods (Man et al. 2017; Zhu et al. 2022; Kang et al. 2019) utilize linear or non-linear mapping functions on the overlapped users to transfer useful information. Some also adopt cross-connection components (Hu, Zhang, and Yang 2018) or orthogonal transformation units (Chen et al. 2023; Li and Tuzhilin 2020, 2021) to enhance model performance. Non-overlapped-based methods investigate the more general and challenging case where users and items are non-overlapped. Most of them utilize other auxiliary information (e.g., reviews) (Yu et al. 2020; Choi et al. 2022) to enhance model performance with MMD (Long et al. 2015) or domain adversarial training strategies (Ganin et al. 2016; Zhang et al. 2021). Nonetheless, MMD fails to obtain reliable estimates when domains are biased, while adversarial learning has proven too unstable to provide promising results (Korba et al. 2021; Shu et al. 2018). How to provide more accurate predictions in the non-overlapped scenario still needs more exploration.
Dynamic Flow and Optimal Transport. Dynamic flow aims to provide an accurate and invertible transformation between different probability distributions. Discrete normalizing flows (Rezende and Mohamed 2015; Tabak and Turner 2013) were first proposed for density estimation with log-probability calculation. Neural ordinary equations were further adopted into dynamic flow (Richter-Powell et al. 2022), which makes the dual forward and backward processes simpler. To make the continuous trajectory simpler for faster convergence, researchers have utilized dynamic optimal transport techniques to achieve straight and smooth results (Onken et al. 2021; Finlay et al. 2020; Huang and Yeh 2021; Tong et al. 2020; Yang and Karniadakis 2020; Huguet et al. 2022; Liu et al. 2023; Tong et al. 2023). However, these models require either heavy computation of gradients and matrix traces, or non-trivial estimation of the domain distributions, which is not practical for real applications.

Methodology
First, we describe notations. We assume there are two domains, i.e., a source domain $a$ and a target domain $b$. Each domain has $N^x_U$ users and $N^x_V$ items, where $x \in \{a, b\}$. $r^x \in \mathbb{R}^{N^x_U \times N^x_V}$ is the observed rating matrix in the $x$-th domain. For the $i$-th user and $j$-th item in the $x$-th domain, the data consist of tuples $(u^x_i, v^x_j, r^x_{ij}, h^x_{ij})$, where $r^x_{ij}$ and $h^x_{ij}$ denote the rating and review information, respectively. Meanwhile, the source and target users/items are non-overlapped.
We aim to provide dual-transfer cross-domain recommendation for non-overlapped users, i.e., recommending items in domain $b$ to users in domain $a$ who have no rating interactions in domain $b$, and vice versa. This task is general and challenging since (1) ratings and reviews are diverse and heterogeneous sources for user modeling, and (2) no explicit domain-shareable information, e.g., overlapped users/items, is available to serve as the bridge for knowledge sharing.
We then give an overview of the JPEDET framework. JPEDET has two main modules, i.e., a joint preference exploration module and a dynamic embedding transportation module. The joint preference exploration module aims to exploit user general preference embeddings from the rating and review information. The dynamic embedding transportation module transforms users bidirectionally across domains. To achieve this goal, we propose a dynamic transport flow that combines neural ordinary equations with optimal transport techniques. Specifically, the dynamic transport flow includes a matching regularization stage and a moving correction stage to learn an accurate and bidirectional domain adaptation. The model framework is shown in Fig. 2, and we introduce JPEDET in detail below.

Figure 1: Toy examples of dynamic embedding transportation from domain a to domain b with two-dimensional data: (a) initial user embeddings; (b) optimizing via continuous normalizing flow; (c) matching regularization stage of DTF; (d) after training of DTF.
Thus we tend to learn the neural networks Gx R(·), Gx H(·) for modeling the relationship between user general and specific (e.g., rating and review) preference embeddings. Then we propose the preference exploration loss Lx R as: Lx R = 1 N N X i=1 h ||rx i −erx i ||2 2 + ||Hx i −e Hx i ||2 2 i + 1 N N X i=1  ||zx R,i −Gx R(ux i )||2 2 + ||zx H,i −Gx H(ux i )||2 2  . where N denotes the batchsize. The first and second terms in Lx R represent the reconstruction loss on ratings and reviews. 𝑮𝑹 𝒂 𝑮𝑯 𝒂 𝑮𝑹 𝒃 𝑮𝑯 𝒃 Dynamic Embedding Transportation Source Domain Target Domain concat 𝑾𝒂 𝒛𝑹,𝒊 𝒂 𝒛𝑯,𝒊 𝒂 𝒖𝒊 𝒂 𝒛𝑹,𝒋 𝒃 𝒛𝑯,𝒋 𝒃 𝑾𝒃 concat 𝒖𝒋 𝒃 Rating View Review View 𝑬𝑹 𝒂 𝑫𝑹 𝒂 𝑬𝑯 𝒂 𝑫𝑯 𝒂 𝑬𝑹 𝒃 𝑫𝑹 𝒃 Rating View Review View 𝑬𝑯 𝒃 𝑫𝑯 𝒃 Figure 2: The framework of JPEDET The others denote the regression loss among the user general and rating/review preference embeddings. We adopt average weights among these terms following (Xin et al. 2022). Lx R will be applied in both source and target domains to explore user general preference ua and ub. Dynamic Embedding Transportation Module. After we obtain the user general preference, we should consider how to provide proper recommendation results on the j-th item in domain b to the i-th user in domain a, and vice versa. That is, one can directly adopt Db R(Gb R(ua)) or Da R(Ga R(ub)) to make the cross-domain predictions. However, we cannot obtain satisfactory results since different domains always exist the domain discrepancy and it will lead to poor model performance (Li et al. 2021; Yu et al. 2020). It has been shown in Fig. 1(a) where red and green dots denote the preference of source and target users, respectively. The red and green dots are separated which indicates the existence of the domain discrepancy among source and target users. Since we cannot obtain explicit bridge (e.g., overlapped users) for knowledge sharing, we should learn an accurate and bidirectional embedding transportation module across source and target domains to reduce the discrepancy for solving dual-target non-overlapped CDR problem. To fulfill this task, we propose Dynamic Transport Flow The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8817 (DTF) in the dynamic embedding transport module. DTF is equipped with neural ordinary equations (Chen et al. 2018; Grathwohl and et al 2018) which further includes two stages, i.e., matching regularization stage and moving correction stage. DTF first adopts discrete optimal transport mechanism among the user embeddings in the matching regularization stage, then utilizes moving correction stage to obtain a more smooth trajectory. Neural Ordinary Equations of DTF. To start with, we first introduce the neural ordinary equations of DTF which includes the forward and backward process. Specifically, the forward process denotes the procedure that transports the source user embeddings ua to target domain. We utilize a neural ordinary equation with learnable parameters f(·, θ) to fulfill the task as ua i,t+∆t = ua i,t + f(ua i,t, t; θ) · ∆t and ua i,T = F(ua i,0) = ODE.forward(ua i,0) where ua i,t denotes the transformed source user embedding at the t-th iteration and ∆t denotes the step size. ua i,T denotes the final transformed user embedding from source to target domains at the T-th iteration. Note that ua i,0 equals to the initial i-th user embeddings ua i and ua i,T denotes the final transformed user embeddings where T denotes the total number of iterations. 
We regard the whole forward moving process, i.e., F(·), as the transferring bridge, which can be shown in Fig. 1(b) for a more intuitive reflection. Specifically, the red and pink dots represent the ua i,0 user embeddings and ua i,T transformed user embeddings respectively. The purple line denotes the moving trajectory from ua i,0 to ua i,T . Likewise, we can not only transport user embeddings from domain a to b, but also make reverse transport from domain b to a using backward process as ub j,t−∆t = ub j,t −f(ub j,t, t; θ) · ∆t and ub j,T = F−1(ub j,0) = ODE.backward(ub j,0) where ub j,T denotes the j-th user’s final transformed embeddings from domain b to domain a. It is obvious that utilizing the neural ordinary equations can satisfy the bidirectional conditions. Matching Regularization Stage of DTF. Although we have obtained the final transformed user embeddings ua ·,T and ub ·,T via neural ordinary equations, they still have bias and discrepancy among the origin user embeddings ua and ub, respectively. Therefore, we aim to match the final transformed user embeddings ua ·,T and ub ·,T with original user embeddings ub ·,0 and ua ·,0 respectively in the matching stage of DTF. To tackle this issue, previous methods (Chen et al. 2018; Onken et al. 2021; Richter-Powell and et al 2022; Finlay et al. 2020) always adopted continuous normalizing flow which can be formulated as below: log P(ux i,T ) = log P(ux i,0) − Z T 0 tr( ∂f(ux i,t, t; θ) ∂ux i,t )dt, (1) where P(·) denotes the probability distribution. However, it is difficult to obtain the probability distribution on domain a or b, since these distributions are empirically observed but unknown. Although one can adopt some non-parametric methods (e.g., Kernel Density Estimation) to estimate the probability distribution, it is sensitive to find a suitable hyper-parameters to obtain accurate results. As the example in Fig. 1(b), the pink dots (transformed source user embeddings) and green dots (target user embeddings) are still not well aligned, when they are directly optimized via continuous normalized flow and kernel density estimation. What is worse, they are easier to obtain arbitrary mapping across domains while degrading the model performance (Korotin and et al 2019; Onken et al. 2021). To overcome such obstacles, we further utilize the optimal transport techniques for dynamic domain adaptation with Theorem 1 on optimizing the moving trajectories (Seguy et al. 2018; Makkuva and et al 2020; Huang and et al 2020; Mikami and Thieullen 2008). Theorem 1. Given the probability densities of µa and µb in the source and target domains respectively, the dynamic optimal transport problem can be formulated as follows (Finlay et al. 2020; Onken et al. 2021; Tong and et al 2020): min (ρ,f) Z T 0 Z Rd 1 2 ∥f(ux, t)∥2 2 · ρ(ux, t)dux dt, s.t. dρ(ux, t) dt + ∇· [ρ(ux, t) · f(ux, t)] = 0, ρ(·, 0) = µa, ρ(·, T) = µb, where ρ(·, t) denotes the probability densities of the transformed user embeddings at the t-th step. f(ux, t) has the optimal solution as f(ux, t) = −∇λ(ux, t) where λ(ux, t) denotes the potential function. Optimizing dynamic optimal transportation problem is equivalent to minimize the following two loss functions, i.e., continuity constraint loss ℓM and path-length constraint loss ℓS as follows: ℓM = Z T 0 Z Rd dλ(ux, t) dt −∥∇λ(ux, t)∥2 2 duxdt, ℓS = Z T 0 Z Rd 1 2 ∥f(ux, t)∥2 2dux dt. 
Matching Regularization Stage of DTF. Although we have obtained the final transformed user embeddings $u^a_{\cdot,T}$ and $u^b_{\cdot,T}$ via neural ordinary equations, they still exhibit bias and discrepancy with respect to the original user embeddings of the opposite domains. Therefore, in the matching stage of DTF, we aim to match the final transformed user embeddings $u^a_{\cdot,T}$ and $u^b_{\cdot,T}$ with the original user embeddings $u^b_{\cdot,0}$ and $u^a_{\cdot,0}$, respectively. To tackle this issue, previous methods (Chen et al. 2018; Onken et al. 2021; Richter-Powell et al. 2022; Finlay et al. 2020) adopted the continuous normalizing flow formulation:
$$\log P(u^x_{i,T}) = \log P(u^x_{i,0}) - \int_0^T \mathrm{tr}\Big(\frac{\partial f(u^x_{i,t}, t; \theta)}{\partial u^x_{i,t}}\Big)\,dt, \tag{1}$$
where $P(\cdot)$ denotes the probability distribution. However, it is difficult to obtain the probability distributions of domains $a$ and $b$, since these distributions are observed only empirically. Although one can adopt non-parametric methods (e.g., kernel density estimation) to estimate them, the results are sensitive to the choice of hyper-parameters. As the example in Fig. 1(b) shows, the pink dots (transformed source user embeddings) and green dots (target user embeddings) are still not well aligned when directly optimized via continuous normalizing flow with kernel density estimation. What is worse, such approaches easily produce arbitrary mappings across domains, degrading model performance (Korotin et al. 2019; Onken et al. 2021). To overcome these obstacles, we utilize optimal transport techniques for dynamic domain adaptation, with Theorem 1 governing the optimization of the moving trajectories (Seguy et al. 2018; Makkuva et al. 2020; Huang et al. 2020; Mikami and Thieullen 2008).
Theorem 1. Given the probability densities $\mu_a$ and $\mu_b$ of the source and target domains, respectively, the dynamic optimal transport problem can be formulated as follows (Finlay et al. 2020; Onken et al. 2021; Tong et al. 2020):
$$\min_{(\rho, f)} \int_0^T\!\!\int_{\mathbb{R}^d} \frac{1}{2}\,\|f(u^x, t)\|_2^2\;\rho(u^x, t)\,du^x\,dt,$$
$$\text{s.t.}\quad \frac{d\rho(u^x, t)}{dt} + \nabla\cdot\big[\rho(u^x, t)\,f(u^x, t)\big] = 0,\qquad \rho(\cdot, 0) = \mu_a,\quad \rho(\cdot, T) = \mu_b,$$
where $\rho(\cdot, t)$ denotes the probability density of the transformed user embeddings at the $t$-th step. The optimal solution satisfies $f(u^x, t) = -\nabla\lambda(u^x, t)$, where $\lambda(u^x, t)$ denotes the potential function. Optimizing the dynamic optimal transport problem is equivalent to minimizing the following two losses, i.e., the continuity constraint loss $\ell_M$ and the path-length constraint loss $\ell_S$:
$$\ell_M = \int_0^T\!\!\int_{\mathbb{R}^d} \Big[\frac{d\lambda(u^x, t)}{dt} - \|\nabla\lambda(u^x, t)\|_2^2\Big]\,du^x\,dt,\qquad \ell_S = \int_0^T\!\!\int_{\mathbb{R}^d} \frac{1}{2}\,\|f(u^x, t)\|_2^2\,du^x\,dt. \tag{2}$$
Based on Theorem 1, one should first consider the path-length constraint by figuring out the optimal mapping between the source and target domains to determine the moving directions. However, previous methods (Onken et al. 2021; Yang and Karniadakis 2020; Zhang, Weinan, and Wang 2018) involve complex gradient and trace computations during optimization. Meanwhile, Discrete Optimal Transport (DOT) with an entropy regularization term (Courty et al. 2017; Flamary et al. 2016) efficiently provides a cyclically monotone mapping with accurate matching results (Makkuva et al. 2020; Paty, d'Aspremont, and Cuturi 2020; Villani et al. 2009). Therefore, we adopt DOT on $u^a$ and $u^b$ as follows:
$$\min_{\pi\in\Gamma} J = \sum_{i,j=1}^{N}\Big[\pi_{i,j}\,\|u^a_i - u^b_j\|_2^2 + \epsilon\,\pi_{i,j}\big(\log(\pi_{i,j}) - 1\big)\Big], \tag{3}$$
where $\Gamma = \{\pi \mid \sum_{i=1}^N \pi_{i,j} = 1/N,\ \sum_{j=1}^N \pi_{i,j} = 1/N\}$ denotes the marginal constraints on $\pi$; $\pi_{i,j}$ denotes the coupling between $u^a_i$ and $u^b_j$; $\epsilon$ is a balancing hyper-parameter; and $\sum_{i,j=1}^N \pi_{i,j}(\log(\pi_{i,j}) - 1)$ is the entropy regularization term. We can adopt the Sinkhorn algorithm (Cuturi 2013) to solve for $\pi$ iteratively with time complexity $O(N^2)$. Note that the matching solution based on discrete optimal transport is monotone. We depict the optimal matching between users in domain $a$ (red dots) and users in domain $b$ (green dots) with blue lines in Fig. 1(b).
Moving Correction Stage of DTF. After obtaining the optimal coupling matrix $\pi_{i,j}$, we have the moving direction for each user. Meanwhile, we wish these embeddings to travel in straight lines to the other domain, since that is ideally the optimal trajectory for a convex cost, as illustrated in Theorem 1. However, the moving trajectory, depicted as the purple lines between the original and transformed user embeddings in Fig. 1(b), varies rapidly and introduces truncation error. This brings noise and leads to inaccurate transportation plans across domains, limiting model performance (Finlay et al. 2020; Onken et al. 2021; Tong et al. 2020). The main reason is that there always exist many possible transport solutions across domains; the model may pick an arbitrary one among all possible solutions, making the trajectory fluctuating and poorly conditioned (Korotin et al. 2019). Previous methods (Finlay et al. 2020; Yang and Karniadakis 2020; Onken et al. 2021) mainly used kinetic (energy) and potential regularization to enforce straight trajectories. However, these constraints only guide the model implicitly and require heavy computation. How to provide a simple but efficient way to guide the model toward straight trajectories explicitly is still challenging. To alleviate these issues, we propose the moving correction stage of DTF, which yields a robust and smooth moving trajectory. To this end, we first propose the Barycentric Wasserstein Path to represent an ideal straight trajectory, and then propose Linear Interpolation Guidance to constrain the moving trajectory.
Barycentric Wasserstein Path. To start with, we introduce the newly proposed barycentric Wasserstein path, which is the basis of the moving correction stage. We first apply the matching regularization stage of DTF to obtain the mapping solution across domains. Then we compute the barycentric mapping embeddings $\tilde{u}^a_i$ and $\tilde{u}^b_j$ across domains, respectively.
Moving Correction Stage of DTF. After obtaining the optimal coupling matrix $\pi_{i,j}$, we have the moving direction for each user. Ideally, these sample embeddings should travel to the other domain in straight lines, since a straight line is the optimal trajectory for a convex cost, as Theorem 1 illustrates. In practice, however, the moving trajectory, depicted as the purple lines between the original and transformed user embeddings, varies rapidly and introduces truncation error, as shown in Fig. 1(b). This brings noise, leads to inaccurate transport plans across domains, and limits model performance (Finlay et al. 2020; Onken et al. 2021; Tong and et al 2020). The main reason is that there always exist many possible transport solutions across domains; the model may find an arbitrary one among them, making the trajectory fluctuating and poorly conditioned (Korotin and et al 2019). Previous methods (Finlay et al. 2020; Yang and Karniadakis 2020; Onken et al. 2021) mainly used kinetic (energy) and potential regularization to enforce straight trajectories, but these constraints only guide the model implicitly and require heavy computation. Providing a simple yet efficient way to guide the model toward straight trajectories explicitly remains challenging. To alleviate these issues, we propose the moving correction stage of DTF, which yields a robust and smooth moving trajectory. We first propose the Barycentric Wasserstein Path to represent an ideal straight trajectory, and then propose Linear Interpolation Guidance to further constrain the moving trajectory.

Barycentric Wasserstein Path. We first introduce the barycentric Wasserstein path, the basis of the moving correction stage. We apply the matching regularization stage of DTF to obtain the mapping solution across domains, and then compute the barycentric mapping embeddings $\tilde{u}^a_i$ and $\tilde{u}^b_j$. Specifically, the barycentric mapping projects the user embeddings $u^x$ in domain x to the other domain via discrete optimal transport, computed as $\tilde{u}^a_i = N \pi_{i*} u^b_*$ and $\tilde{u}^b_j = N \pi^\top_{*j} u^a_*$, respectively. We then define the line-segment vector between the original user embedding $u^a_i$ and its barycentric mapping $\tilde{u}^a_i$ as the barycentric Wasserstein path $W^a_i$ in the source domain. Likewise, we obtain the barycentric Wasserstein path $W^b_j$ in the target domain:
$$W^a_i := \tilde{u}^a_i - u^a_i, \qquad W^b_j := \tilde{u}^b_j - u^b_j. \tag{4}$$
$W^a_i$ and $W^b_j$ can be viewed as ideal straight trajectories transforming the user embeddings across domains, and we use them to correct our coarse trajectory during training.

Linear Interpolation Guidance. We now introduce the proposed Linear Interpolation Guidance strategy on the barycentric Wasserstein path for optimizing the trajectory.

Theorem 2. Given the probability densities $\mu_a$ and $\mu_b$ in the source and target domains respectively, data samples should move at a constant speed to achieve the optimal solution.

Based on Theorem 2, we not only enforce the transformed user embeddings to move along a straight and smooth trajectory, but also let them move at a constant speed. Suppose the source and target domains have probability densities $\mu_a$ and $\mu_b$; McCann and Moser proposed a simple but efficient interpolation $\alpha_\eta = (1-\eta)\mu_b + \eta\mu_a$ for mass transportation, where $\eta \in [0, 1]$ and $\alpha_\eta$ denotes the interpolant (Dacorogna and Moser 1990; McCann 1997; Lei and Gu 2021; Rozen et al. 2021; Moser 1965; Gu and Yau 2020). This interpolation also gives geodesics in Wasserstein space with lower transportation cost (Lei and Gu 2021; Liu and et al 2023). Based on these observations, we uniformly divide the barycentric Wasserstein paths $W^a_i$ and $W^b_j$ into T segments:
$$\gamma^a_{i,t} = u^a_i + (t/T) \cdot W^a_i, \qquad \gamma^b_{j,t} = u^b_j + (t/T) \cdot W^b_j, \tag{5}$$
where $\gamma^a_{i,t}$ and $\gamma^b_{j,t}$ denote the linear interpolation points at the $t$-th time step on $W^a_i$ and $W^b_j$, respectively. We then simultaneously reduce the distance between the transformed user embeddings $(u^a_{i,t}, u^b_{j,t})$ and the linear interpolation points $(\gamma^a_{i,t}, \gamma^b_{j,t})$ via the interpolation guidance loss:
$$\min L_G = \sum_{i,j=1}^{N} \sum_{t=1}^{T} \big(\|\gamma^a_{i,t} - u^a_{i,t}\|_2^2 + \|\gamma^b_{j,t} - u^b_{j,t}\|_2^2\big). \tag{6}$$
The linear interpolation on the barycentric Wasserstein path is easy to compute and provides explicit guidance for training straight moving trajectories with constant speed. That is, the interpolation guidance loss satisfies both the continuity constraint loss $\ell_M$ and the path-length constraint loss $\ell_S$ from Theorem 1 and Theorem 2.
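To tie Eqs. (4)-(6) together, the following PyTorch-style sketch computes the source-side half of the guidance loss; the target side is symmetric. `traj_a` is assumed to be the forward Euler trajectory from the earlier integration sketch, and all names here are ours.

```python
import torch

def barycentric_target(pi: torch.Tensor, ub: torch.Tensor) -> torch.Tensor:
    """Barycentric mapping of source users into domain b: N * sum_j pi_ij u^b_j."""
    N = pi.shape[0]
    return N * (pi @ ub)

def interpolation_guidance_loss(traj_a, ua, pi, ub):
    """Source-side half of Eq. (6): pull each Euler state u^a_{i,t} toward the
    linear interpolation point gamma^a_{i,t} on the barycentric Wasserstein path."""
    T = len(traj_a) - 1
    W_a = barycentric_target(pi, ub) - ua      # Eq. (4)
    loss = 0.0
    for t in range(1, T + 1):
        gamma_t = ua + (t / T) * W_a           # Eq. (5): equally spaced points
        loss = loss + ((gamma_t - traj_a[t]) ** 2).sum()
    return loss
```

Because the targets γ are equally spaced along a straight segment, matching them step by step enforces both straightness and constant speed, which is how the loss realizes Theorems 1 and 2 explicitly.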
Model Summary. We first minimize the preference exploration loss $L^x_R$ to pretrain the model in both the source and target domains and obtain user preference embeddings. We then minimize the interpolation guidance loss $L_G$ to learn the dynamic transport flow. After training, we can make accurate, bidirectional cross-domain predictions via $D^b_R(G^b_R(F(u^a)))$ or $D^a_R(G^a_R(F^{-1}(u^b)))$, respectively.

Empirical Study

Datasets and Tasks. We conduct extensive experiments on the widely used real-world Amazon datasets (Ni, Li, and McAuley 2019). The Amazon data cover five domains, i.e., Movies (Movies and TV), Books (Books), CD (CDs and Vinyl), Phone (Cell Phones and Accessories), and Elec (Electronics), which are commonly used in cross-domain recommendation (Liu et al. 2022; Yu et al. 2020; Fu et al. 2019; Zhao et al. 2020). Specifically, we conduct four tasks: (T1) Book ↔ CD, (T2) Book ↔ Movie, (T3) Movie ↔ CD, and (T4) Phone ↔ Elec. We filter out users with fewer than 5 interactions in each domain, following (Chen et al. 2023; Zhu et al. 2021b; Yuan, Yao, and Benatallah 2019). We keep the original user-item ratings and set unobserved or unclicked items to 0. To establish the non-overlapped datasets, we first identify the overlapped users between the source and target domains.

| Model | Book->CD | CD->Book | Book->Movie | Movie->Book | Movie->CD | CD->Movie | Phone->Elec | Elec->Phone |
| NeuMF | 4.720 / 4.216 | 3.697 / 3.044 | 3.525 / 3.227 | 3.683 / 3.251 | 4.862 / 4.311 | 4.102 / 3.531 | 4.977 / 4.013 | 4.469 / 3.540 |
| DeepCoNN | 4.275 / 4.080 | 3.101 / 2.469 | 2.902 / 2.734 | 2.856 / 2.743 | 4.453 / 4.022 | 3.264 / 2.519 | 4.382 / 3.500 | 3.974 / 3.192 |
| VCM | 4.123 / 3.969 | 2.758 / 2.202 | 2.591 / 2.435 | 2.660 / 2.353 | 4.312 / 3.904 | 3.048 / 2.282 | 4.174 / 3.395 | 3.623 / 3.017 |
| NARRE | 4.044 / 3.871 | 2.632 / 2.152 | 2.474 / 2.360 | 2.376 / 2.133 | 4.198 / 3.871 | 2.850 / 2.226 | 4.034 / 3.162 | 3.523 / 2.807 |
| UBERT | 3.811 / 3.723 | 2.140 / 1.964 | 2.152 / 1.935 | 2.093 / 1.854 | 3.666 / 3.579 | 2.284 / 1.895 | 3.772 / 2.943 | 3.153 / 2.501 |
| ESCOFILT | 3.756 / 3.694 | 2.115 / 1.863 | 2.080 / 1.859 | 2.101 / 1.792 | 3.710 / 3.608 | 2.326 / 1.953 | 3.741 / 2.893 | 3.130 / 2.465 |
| RC-DFM | 3.919 / 3.825 | 2.303 / 1.936 | 2.314 / 2.001 | 2.087 / 1.812 | 3.786 / 3.632 | 2.431 / 2.012 | 3.945 / 2.920 | 3.397 / 2.539 |
| GA-DTCDR | 3.762 / 3.704 | 2.146 / 1.881 | 2.159 / 1.803 | 2.032 / 1.768 | 3.754 / 3.615 | 2.243 / 1.920 | 3.713 / 2.851 | 2.962 / 2.278 |
| Rec-DAN | 3.502 / 3.267 | 1.968 / 1.725 | 2.086 / 1.874 | 1.922 / 1.703 | 3.521 / 3.349 | 2.005 / 1.866 | 3.620 / 2.658 | 2.742 / 1.973 |
| DDTCDR | 3.431 / 3.303 | 2.094 / 1.755 | 2.047 / 1.792 | 1.823 / 1.576 | 3.253 / 3.115 | 1.964 / 1.801 | 3.456 / 2.327 | 2.633 / 1.842 |
| TDAR | 2.918 / 2.483 | 1.835 / 1.583 | 1.903 / 1.665 | 1.731 / 1.459 | 3.042 / 2.856 | 1.909 / 1.774 | 3.195 / 2.032 | 2.521 / 1.734 |
| DisAlign | 2.342 / 2.139 | 1.907 / 1.628 | 1.793 / 1.505 | 1.664 / 1.430 | 2.751 / 2.352 | 1.653 / 1.302 | 3.242 / 2.119 | 2.407 / 1.686 |
| CATN | 2.287 / 1.950 | 1.714 / 1.514 | 1.704 / 1.426 | 1.580 / 1.362 | 2.153 / 1.917 | 1.718 / 1.484 | 3.002 / 1.745 | 2.390 / 1.556 |
| CFAA | 1.902 / 1.498 | 1.751 / 1.541 | 1.631 / 1.349 | 1.548 / 1.211 | 1.832 / 1.551 | 1.696 / 1.265 | 2.813 / 1.297 | 2.104 / 1.303 |
| SRTrans | 1.765 / 1.245 | 1.493 / 1.380 | 1.552 / 1.204 | 1.463 / 1.147 | 1.692 / 1.274 | 1.500 / 1.163 | 2.762 / 1.204 | 1.961 / 1.187 |
| SER | 1.456 / 1.031 | 1.420 / 1.047 | 1.382 / 1.050 | 1.465 / 1.087 | 1.435 / 1.040 | 1.396 / 1.059 | 2.798 / 1.173 | 1.975 / 1.148 |
| JPEDET-B | 3.685 / 3.539 | 2.023 / 1.810 | 2.264 / 1.713 | 1.982 / 1.770 | 3.684 / 3.538 | 2.045 / 1.832 | 3.616 / 2.700 | 2.205 / 1.531 |
| JPEDET-M | 1.714 / 1.352 | 1.585 / 1.396 | 1.521 / 1.299 | 1.606 / 1.187 | 1.626 / 1.203 | 1.680 / 1.296 | 2.869 / 1.258 | 2.047 / 1.361 |
| JPEDET-A | 1.596 / 1.168 | 1.551 / 1.261 | 1.469 / 1.188 | 1.524 / 1.125 | 1.507 / 1.163 | 1.400 / 1.180 | 2.954 / 1.385 | 2.121 / 1.423 |
| JPEDET-M1 | 1.388 / 0.973 | 1.357 / 0.976 | 1.358 / 0.968 | 1.403 / 1.035 | 1.383 / 0.945 | 1.353 / 0.961 | 2.641 / 1.109 | 1.867 / 1.103 |
| JPEDET-M2 | 1.374 / 0.965 | 1.362 / 0.980 | 1.362 / 0.975 | 1.408 / 1.038 | 1.362 / 0.936 | 1.347 / 0.958 | 2.607 / 1.096 | 1.875 / 1.110 |
| JPEDET-M3 | 1.310 / 0.918 | 1.313 / 0.931 | 1.310 / 0.927 | 1.297 / 0.947 | 1.320 / 0.926 | 1.277 / 0.940 | 2.581 / 1.072 | 1.782 / 1.034 |
| JPEDET | 1.267 / 0.879 | 1.273 / 0.919 | 1.206 / 0.893 | 1.264 / 0.923 | 1.203 / 0.854 | 1.235 / 0.917 | 2.556 / 1.064 | 1.769 / 1.023 |

Table 1: Experimental results on Amazon datasets with different tasks (each cell reports RMSE / MAE).
Then we randomly select users to appear in the source domain and the others in the target domain, so that the users of the two domains are non-overlapping, following the setting of (Wang, Niepert, and Li 2019). During training, we only use (a) source-user/source-item rating and review information and (b) target-user/target-item rating and review information.

Experiment Settings. We set the batch size to N = 128 for the source and target domains during training. The latent dimension of the user rating/review/general preference embeddings is set to d = 128. We set the step size ∆t = 0.01 and the total number of iterations T = 30 in the moving stage of the dynamic transport flow, and the balancing hyper-parameter ϵ = 0.1 in the matching stage for computing the discrete optimal transport. For all experiments, we run five random trials and report the average results. We use the Adam optimizer with a learning rate of 0.01. Following previous papers (Fu et al. 2019; Zhu et al. 2022; Zhao et al. 2020), we adopt the commonly used Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) as evaluation metrics.

Baselines. We compare JPEDET with the following models. (1) Single-domain models: NeuMF (He et al. 2017), DeepCoNN (Zheng and et al 2017), VCM (Cui et al. 2018), NARRE (Chen et al. 2018), UBERT (Qiu et al. 2021), ESCOFILT (Pugoy and Kao 2021). (2) Cross-domain models: RC-DFM (Fu et al. 2019), GA-DTCDR (Zhu et al. 2021b), Rec-DAN (Wang, Niepert, and Li 2019), DDTCDR (Li and Tuzhilin 2020), TDAR (Yu et al. 2020), DisAlign (Liu et al. 2021), CATN (Zhao et al. 2020), CFAA (Liu et al. 2022), SER (Choi et al. 2022), SRTrans (Li et al. 2023).

Recommendation Performance. The comparison results are shown in Table 1, from which we find that: (1) Single-domain recommendation models equipped with ratings and reviews (e.g., DeepCoNN and NARRE) obtain better results than models that only use ratings (e.g., NeuMF). However, they cannot provide satisfactory results since they do not reduce domain bias and discrepancy. (2) Conventional cross-domain recommendation models (e.g., RC-DFM and GA-DTCDR) obtain better results than most single-domain models, but they mainly rely on domain-shareable information for knowledge transfer, which limits their performance when users are non-overlapped. (3) Some recent cross-domain recommendation models (e.g., Rec-DAN, TDAR, and SER) adopt adversarial learning to reduce domain discrepancy when users and items are non-overlapped. Nonetheless, adversarial learning with a domain discriminator is unstable and hard to train in practice (Shu et al. 2018), so they cannot achieve better results. (4) JPEDET achieves better results than the runner-up model (e.g., SER), with improvements from 7.4% to 17.9%, which shows that joint preference exploration and dynamic embedding transportation can unlock the model's potential.

Ablation. To study how each module of JPEDET contributes to the final performance, we compare JPEDET with several variants: JPEDET-B, JPEDET-M, JPEDET-A, JPEDET-M1, JPEDET-M2, and JPEDET-M3. JPEDET-B only adopts the joint preference exploration module during training and directly applies $D^b_R(G^b_R(u^a))$ or $D^a_R(G^a_R(u^b))$ for testing.
JPEDET-M and JPEDET-A replace the dynamic embedding transportation module with MMD and domain-adversarial training, respectively. JPEDET-M1, JPEDET-M2, and JPEDET-M3 replace the dynamic embedding transportation module with OT-Flow (Onken et al. 2021), TPR (Huang and Yeh 2021), and Rectified Flow (Liu and et al 2023), respectively. The comparison results are shown in Table 1, from which we observe that: (1) JPEDET-B cannot reduce biases and discrepancy and performs poorly, which indicates the significance of embedding transformation across domains. (2) JPEDET-M and JPEDET-A both achieve better results than JPEDET-B. However, MMD only provides ambiguous matching and thus cannot reduce domain discrepancy well, while JPEDET-A is difficult to train with a domain discriminator in practice, limiting its performance. (3) Although JPEDET-M1 and JPEDET-M2 both achieve better results than JPEDET-B, these methods cannot adapt well to scenarios where the probability distributions of the source and target domains are hard to estimate, leading to limited performance. (4) JPEDET-M3 achieves results competitive with JPEDET, but it involves multiple iterations of re-training; when the source and target distributions are rather complex, it can be difficult for JPEDET-M3 to fulfill the rectification. (5) Overall, the ablation study shows that our proposed JPEDET is effective in solving the cross-domain recommendation problem.

[Figure 3: The model extension and effect of hyper-parameters. (a) The RMSE results; (b) The MAE results; (c) Effects of ϵ on RMSE; (d) Effects of ϵ on MAE.]

[Figure 4: The t-SNE visualization of DTF in JPEDET. (a) Book->CD; (b) CD->Book.]

Model Extension. We further analyze a general extension of JPEDET to the scenario where some source and target users are overlapped. Specifically, we randomly choose 5% of the users to be overlapped across domains. The cross-domain user-item rating and review information of the remaining non-overlapped users is removed during the training phase and used for evaluation in the testing phase. We then add a new alignment loss for these overlapped users, $\min L_N = \sum_{i \in OU} \|u^a_{i,T} - u^b_{i,0}\|_2^2 + \|u^b_{i,T} - u^a_{i,0}\|_2^2$, and the total loss for domain adaptation becomes $\min [L_G + L_N]$. We conduct this experiment on Amazon Book ↔ Amazon Movie and report the RMSE and MAE in Fig. 3(a)-(b). We observe that utilizing overlapped users as domain-shareable information further boosts model performance. Moreover, JPEDET still achieves the best performance against the other baseline models, indicating that JPEDET can also be used when users are overlapped.
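A short sketch of the extra alignment term in the overlapped-user extension; the tensor names and index handling are our assumptions.

```python
import torch

def alignment_loss(ua_T, ub_0, ub_T, ua_0, overlapped):
    """L_N: a transported overlapped user should land on that same user's
    original embedding in the other domain, in both directions."""
    i = overlapped  # indices of the 5% users shared across domains
    return ((ua_T[i] - ub_0[i]) ** 2).sum() + ((ub_T[i] - ua_0[i]) ** 2).sum()

# Total objective in the extension: minimize L_G + L_N.
```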
Effect of Hyper-parameters. Finally, we empirically study the effect of the hyper-parameter ϵ on JPEDET. We vary ϵ ∈ {0.01, 0.1, 1, 10, 100, 1000} in the DTF of the dynamic embedding transportation module on Movie ↔ CD and Book ↔ Movie, and report the results in Fig. 3(c)-(d). We observe that JPEDET is not sensitive to ϵ, especially for ϵ ∈ {0.01, 0.1, 1}. Smaller ϵ leads to relatively sparse and robust solutions for π; when ϵ becomes larger (e.g., ϵ ∈ {100, 1000}), the coupling matrix π becomes dense and provides less accurate matching results. Therefore, we set ϵ = 0.1 for DTF in JPEDET.

Visualization. To provide more comprehensive insight into JPEDET, we use t-SNE to visualize the original and transformed user embeddings on Movie ↔ CD, as shown in Fig. 4(a)-(b). We observe that DTF provides accurate and bidirectional transformation across domains that reduces the discrepancy, showing the efficacy of DTF in JPEDET.

Conclusion and Future Work

In this paper, we propose the Joint Preference Exploration and Dynamic Embedding Transportation model (JPEDET), consisting of a joint preference exploration module and a dynamic embedding transportation module. The dynamic embedding transportation module provides an accurate and invertible embedding transformation between the source and target domains. We propose a simple but efficient approach, namely Dynamic Transport Flow (DTF), with a matching regularization stage and a moving correction stage, and adopt the barycentric Wasserstein path with linear interpolation guidance to obtain straight moving trajectories. Experiments show the superior performance of JPEDET on several tasks.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (No. 62172362), Leading Expert of "Ten Thousands Talent Program" of Zhejiang Province, China (No. 2021R52001), and the Distributed Smart Value Chain programme funded under the Singapore RIE2025 Manufacturing, Trade and Connectivity (MTC) Industry Alignment Fund-Pre-Positioning (Award No: M23L4a0001).

References

Chen, R. T.; Rubanova, Y.; Bettencourt, J.; and Duvenaud, D. K. 2018. Neural ordinary differential equations. NeurIPS.
Chen, X.; Zhang, Y.; Tsang, I. W.; Pan, Y.; and Su, J. 2023. Toward Equivalent Transformation of User Preferences in Cross Domain Recommendation. TOIS, 41(1): 1–31.
Chen, Z.; Wang, X.; Xie, X.; Wu, T.; Bu, G.; Wang, Y.; and Chen, E. 2019. Co-attentive multi-task learning for explainable recommendation. In IJCAI, 2137–2143.
Choi, Y.; Choi, J.; Ko, T.; Byun, H.; and Kim, C.-K. 2022. Review-Based Domain Disentanglement without Duplicate Users or Contexts for Cross-Domain Recommendation. In CIKM.
Courty, N.; and et al. 2017. Joint distribution optimal transportation for domain adaptation. NeurIPS.
Cui, K.; Chen, X.; Yao, J.; and Zhang, Y. 2018. Variational collaborative learning for user probabilistic representation. WWW.
Cuturi, M. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. NeurIPS.
Dacorogna, B.; and Moser, J. 1990. On a partial differential equation involving the Jacobian determinant. In Annales de l'Institut Henri Poincaré C, Analyse non linéaire.
Dong, X.; Ni, J.; Cheng, W.; Chen, Z.; Zong, B.; Song, D.; Liu, Y.; Chen, H.; and De Melo, G. 2020. Asymmetrical hierarchical networks with attentive interactions for interpretable review-based recommendation. In AAAI.
Finlay, C.; Jacobsen, J.-H.; Nurbekyan, L.; and Oberman, A. 2020. How to train your neural ODE: the world of Jacobian and kinetic regularization. In ICML, 3154–3164. PMLR.
Flamary, R.; Courty, N.; Tuia, D.; and Rakotomamonjy, A. 2016. Optimal transport for domain adaptation. IEEE Trans. Pattern Anal. Mach. Intell.
Fu, W.; Peng, Z.; Wang, S.; Xu, Y.; and Li, J. 2019. Deeply fusing reviews and contents for cold start users in cross-domain recommendation systems. In AAAI.
Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; and Lempitsky, V. 2016. Domain-adversarial training of neural networks. JMLR, 17(1).
Gao, S.; Luo, H.; Chen, D.; Li, S.; Gallinari, P.; and Guo, J. 2013. Cross-domain recommendation via cluster-level latent factor model. In PKDD, 161–176. Springer.
Grathwohl, W.; and et al. 2018. FFJORD: Free-Form Continuous Dynamics for Scalable Reversible Generative Models. In ICLR.
Gu, X.; and Yau, S.-T. 2020. Computational Conformal Geometry. International Press and Higher Education Press.
Guerraoui, R.; Kermarrec, A.-M.; Lin, T.; and Patra, R. 2017. Heterogeneous recommendations: what you might like to read after watching interstellar. VLDB, 10(10): 1070–1081.
He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; and Chua, T.-S. 2017. Neural collaborative filtering. In WWW, 173–182.
Hu, G.; Zhang, Y.; and Yang, Q. 2018. CoNet: Collaborative cross networks for cross-domain recommendation. In CIKM.
Huang, C.-W.; and et al. 2020. Convex Potential Flows: Universal Probability Distributions with Optimal Transport and Convex Optimization. In ICLR.
Huang, H.-H.; and Yeh, M.-Y. 2021. Accelerating continuous normalizing flow with trajectory polynomial regularization. In AAAI, volume 35.
Huguet, G.; and et al. 2022. Manifold interpolating optimal-transport flows for trajectory inference. NeurIPS.
Kang, S.; Hwang, J.; Lee, D.; and Yu, H. 2019. Semi-supervised learning for cross-domain recommendation to cold-start users. In CIKM, 1563–1572.
Khan, M. M.; and et al. 2017. Cross domain recommender systems: a systematic literature review. CSUR, 50(3): 1–34.
Korba, A.; Aubin-Frankowski, P.-C.; Majewski, S.; and Ablin, P. 2021. Kernel Stein discrepancy descent. In ICML, 5719–5730. PMLR.
Korotin, A.; and et al. 2019. Wasserstein-2 Generative Networks. In ICLR.
Lei, N.; and Gu, X. 2021. Computational Conformal Geometry. International Press and Higher Education Press.
Li, B.; Yang, Q.; and Xue, X. 2009. Can movies and books collaborate? Cross-domain collaborative filtering for sparsity reduction. In IJCAI.
Li, P.; and Tuzhilin, A. 2020. DDTCDR: Deep dual transfer cross domain recommendation. In WSDM, 331–339.
Li, P.; and Tuzhilin, A. 2021. Dual metric learning for effective and efficient cross-domain recommendations. TKDE.
Li, S.; Yao, L.; Mu, S.; Zhao, W. X.; Li, Y.; Guo, T.; Ding, B.; and Wen, J.-R. 2021. Debiasing learning based cross-domain recommendation. In KDD, 3190–3199.
Li, Z.; Amagata, D.; Zhang, Y.; Hara, T.; Haruta, S.; Yonekawa, K.; and Kurokawa, M. 2023. Semantic Relation Transfer for Non-overlapped Cross-domain Recommendations. In PAKDD, 271–283. Springer.
Liu, W.; Su, J.; Chen, C.; and Zheng, X. 2021. Leveraging distribution alignment via Stein path for cross-domain cold-start recommendation. NeurIPS, 34: 19223–19234.
Liu, W.; Zheng, X.; Hu, M.; and Chen, C. 2022. Collaborative Filtering with Attribution Alignment for Review-based Non-overlapped Cross Domain Recommendation. In WWW.
Liu, X.; and et al. 2023. Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow. ICLR.
Long, M.; and et al. 2015. Learning transferable features with deep adaptation networks. In ICML.
Lu, J.; Wu, D.; Mao, M.; Wang, W.; and Zhang, G. 2015. Recommender system application developments: a survey. Decision Support Systems, 74: 12–32.
Makkuva, A.; and et al. 2020. Optimal transport mapping via input convex neural networks. In ICML.
Man, T.; Shen, H.; Jin, X.; and Cheng, X. 2017. Cross-domain recommendation: An embedding and mapping approach. In IJCAI, volume 17, 2464–2470.
McCann, R. J. 1997. A convexity principle for interacting gases. Advances in Mathematics, 128(1): 153–179.
Mikami, T.; and Thieullen, M. 2008. Optimal transportation problem by stochastic optimal control. SIAM Journal on Control and Optimization.
Moreno, O.; Shapira, B.; Rokach, L.; and Shani, G. 2012. Talmud: transfer learning for multiple domains. In CIKM.
Moser, J. 1965. On the volume elements on a manifold. Transactions of the American Mathematical Society.
Ni, J.; Li, J.; and McAuley, J. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In EMNLP, 188–197.
Onken, D.; Fung, S. W.; Li, X.; and Ruthotto, L. 2021. OT-Flow: Fast and accurate continuous normalizing flows via optimal transport. In AAAI, volume 35, 9223–9232.
Paty, F.-P.; d'Aspremont, A.; and Cuturi, M. 2020. Regularity as regularization: Smooth and strongly convex Brenier potentials in optimal transport. In International Conference on Artificial Intelligence and Statistics. PMLR.
Pugoy, R. A.; and Kao, H.-Y. 2021. Unsupervised extractive summarization-based representations for accurate and explainable collaborative filtering. In ACL, 2981–2990.
Qiu, Z.; Wu, X.; Gao, J.; and Fan, W. 2021. U-BERT: Pre-training user representations for improved recommendation. In AAAI, volume 35, 4320–4327.
Reimers, N.; and Gurevych, I. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In EMNLP, 3982–3992.
Rezende, D.; and Mohamed, S. 2015. Variational inference with normalizing flows. In ICML. PMLR.
Richter-Powell, J.; and et al. 2022. Neural Conservation Laws: A Divergence-Free Perspective. In NeurIPS.
Rozen, N.; Grover, A.; Nickel, M.; and Lipman, Y. 2021. Moser flow: Divergence-based generative modeling on manifolds. NeurIPS, 34: 17669–17680.
Seguy, V.; Damodaran, B. B.; Flamary, R.; Courty, N.; Rolet, A.; and Blondel, M. 2018. Large Scale Optimal Transport and Mapping Estimation. In ICLR.
Shu, R.; Bui, H. H.; Narui, H.; and Ermon, S. 2018. A DIRT-T approach to unsupervised domain adaptation. ICLR.
Sun, Z.; Fang, H.; Yang, J.; Qu, X.; Liu, H.; Yu, D.; Ong, Y.-S.; and Zhang, J. 2022. DaisyRec 2.0: Benchmarking Recommendation for Rigorous Evaluation. TPAMI.
Tabak, E. G.; and Turner, C. V. 2013. A family of nonparametric density estimation algorithms. Communications on Pure and Applied Mathematics, 66(2).
Tong, A.; and et al. 2020. TrajectoryNet: A dynamic optimal transport network for modeling cellular dynamics. In ICML.
Tong, A.; and et al. 2023. Improving and generalizing flow-based generative models with minibatch optimal transport. In ICML Workshop.
Villani, C.; et al. 2009. Optimal transport: old and new, volume 338. Springer.
Wang, C.; Niepert, M.; and Li, H. 2019. RecSys-DAN: discriminative adversarial networks for cross-domain recommender systems. TNNLS, 31(8): 2731–2740.
Wang, X.; Ounis, I.; and Macdonald, C. 2021. Leveraging review properties for effective recommendation. In WWW.
Xin, D.; Ghorbani, B.; Gilmer, J.; Garg, A.; and Firat, O. 2022. Do Current Multi-Task Optimization Methods in Deep Learning Even Help? NeurIPS, 35: 13597–13609.
Yang, L.; and Karniadakis, G. E. 2020. Potential flow generator with L2 optimal transport regularity for generative models. TNNLS, 33(2): 528–538.
Yi, J.; and et al. 2018. Rating prediction in review-based recommendations via adversarial auto-encoder. In WI.
Yu, W.; Lin, X.; Ge, J.; Ou, W.; and Qin, Z. 2020. Semi-supervised collaborative filtering by text-enhanced domain adaptation. In KDD, 2136–2144.
Yuan, F.; Yao, L.; and Benatallah, B. 2019. DARec: deep domain adaptation for cross-domain recommendation via transferring rating patterns. In IJCAI, 4227–4233.
Zang, T.; Zhu, Y.; Liu, H.; Zhang, R.; and Yu, J. 2022. A survey on cross-domain recommendation: taxonomies, methods, and future directions. TOIS, 41(2): 1–39.
Zhang, C.; Liu, Y.; and Fu, H. 2019. AE2-Nets: Autoencoder in autoencoder networks. In CVPR, 2577–2585.
Zhang, L.; Weinan, E.; and Wang, L. 2018. Monge-Ampère Flow for Generative Modeling.
Zhang, Y.; Liu, Y.; Han, P.; Miao, C.; Cui, L.; Li, B.; and Tang, H. 2021. Learning personalized itemset mapping for cross-domain recommendation. In IJCAI, 2561–2567.
Zhao, C.; Li, C.; Xiao, R.; Deng, H.; and Sun, A. 2020. CATN: Cross-domain recommendation for cold-start users via aspect transfer network. In SIGIR, 229–238.
Zhao, W. X.; and et al. 2021. RecBole: Towards a unified, comprehensive and efficient framework for recommendation algorithms. In CIKM, 4653–4664.
Zheng, L.; and et al. 2017. Joint deep modeling of users and items using reviews for recommendation. In WSDM.
Zhu, F.; Wang, Y.; Chen, C.; Zhou, J.; Li, L.; and Liu, G. 2021a. Cross-domain recommendation: challenges, progress, and prospects. In IJCAI, 4721–4728.
Zhu, F.; Wang, Y.; Zhou, J.; Chen, C.; Li, L.; and Liu, G. 2021b. A unified framework for cross-domain and cross-system recommendations. TKDE.
Zhu, Y.; Tang, Z.; Liu, Y.; Zhuang, F.; Xie, R.; Zhang, X.; Lin, L.; and He, Q. 2022. Personalized transfer of user preferences for cross-domain recommendation. In WSDM, 1507–1515.
Knowledge Graph Error Detection with Contrastive Confidence Adaption

Xiangyu Liu1, Yang Liu1, Wei Hu1,2,*
1 State Key Laboratory for Novel Software Technology, Nanjing University, China
2 National Institute of Healthcare Data Science, Nanjing University, China
*Corresponding author
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Knowledge graphs (KGs) often contain various errors. Previous works on detecting errors in KGs mainly rely on triplet embedding from graph structure. We conduct an empirical study and find that these works struggle to discriminate noise from semantically-similar correct triplets. In this paper, we propose a KG error detection model, CCA, which integrates both textual and graph structural information from triplet reconstruction for better distinguishing semantics. We design interactive contrastive learning to capture the differences between textual and structural patterns. Furthermore, we construct realistic datasets with semantically-similar noise and adversarial noise. Experimental results demonstrate that CCA outperforms state-of-the-art baselines, especially in detecting semantically-similar noise and adversarial noise.

Introduction

A knowledge graph (KG) is composed of triplets in the form of (head entity, relation, tail entity), and finds extensive application in downstream tasks such as question answering (Saxena, Tripathi, and Talukdar 2020) and recommender systems (Guo et al. 2022). Existing KGs such as NELL (Carlson et al. 2010) and Knowledge Vault (Dong et al. 2014) continuously extract triplets automatically, which inevitably introduces noise. Detecting these errors holds the potential to improve the quality of KGs.

Existing works on KG error detection can be classified into embedding-based and path-based models. The former (Bordes et al. 2013; Yang et al. 2015; Trouillon et al. 2016) learns confidence scores based on the representations of entities and relations; the latter (Lin et al. 2015; Jia et al. 2019) uses paths between entities to evaluate the confidence of triplets. Different from link prediction (Chen et al. 2021) or triplet classification (Yao, Mao, and Luo 2019), error detection focuses on finding erroneous triplets across the whole KG without supervision, aiming to capture the variance of triplets and give an accurate estimate of their confidence.

| Models | Noise types | FB15K-237 K=1% | FB15K-237 K=5% | WN18RR K=1% | WN18RR K=5% |
| CAGED | Random | 0.945 | 0.758 | 0.795 | 0.486 |
| CAGED | Similar | 0.633 | 0.367 | 0.657 | 0.384 |
| TransE | Random | 0.904 | 0.726 | 0.630 | 0.434 |
| TransE | Similar | 0.611 | 0.303 | 0.503 | 0.361 |

Table 1: Results of our empirical study. Error triplets are divided into the "random" and "similar" groups, based on the methods of replacing entities. We show the top-K precision of two typical models on FB15K-237 and WN18RR.

Current KG error detection models face a significant challenge: noise patterns are unavailable, and accurately labeled noise samples for robust supervision are difficult to acquire. Negative sampling by replacing entities is widely used in previous works, especially the embedding-based models. However, real-world scenarios often introduce confusing noise semantically related to correct samples. Consider two real error triplets in the FB15K-237 dataset (Toutanova et al. 2015): (George Lopez, profession, Disc jockey) and (Majel Barrett, profession, Writer).
In the former, the correct tail entity should be 'Comedian'; in the latter, it should be 'Actress'. Notably, 'Disc jockey' and 'Writer' both denote professions, mirroring common human errors. This form of noise is harder to differentiate and aligns closely with human error tendencies.

We conduct a further empirical study to explore the performance of existing works on two specific types of noise: random noise and semantically-similar noise. Random noise is generated by randomly replacing the head or tail entity of a correct triplet, as in previous works (Dong et al. 2023). Semantically-similar noise replaces an entity with one that co-occurs with the triplet's relation, so the replacement is semantically related to the relation. We test two typical models, CAGED (Zhang et al. 2022) and TransE (Bordes et al. 2013), on the FB15K-237 (Toutanova et al. 2015) and WN18RR (Dettmers et al. 2018) datasets to verify whether existing works can deal with more realistic noise. As shown in Table 1, we add 5% of the two types of noise to the datasets and report the precision of the top-1% and top-5% detected triplets. Although CAGED and TransE perform well on random noise, their effectiveness drops sharply on semantically-similar noise. This is because existing models predominantly rely on graph structure, ignoring the rich textual information of KGs. The incompleteness of KGs leads to the lack of important information that could distinguish semantically-similar noise, which exhibits greater semantic relevance and shares a similar graph structure with correct triplets.

[Figure 1: An overview of the proposed model CCA. (a) BERT and Transformer-based graph encoders extract textual and graph structural information, respectively. (b) The reconstruction module classifies error triplets by reconstructing head and tail entities in textual and structural embedding. (c) Interactive contrastive learning aligns the projection of textual and structural embeddings and recognizes errors by inter-model difference. (d) The knowledge fusion module takes pseudo labels generated from aggregated results as triplet confidence, which is further injected into the training process.]

Furthermore, we consider the prevalent scenarios of KG error detection to build adversarial noise. Noise is inevitably introduced during automatic KG construction and completion, and an effective error detection model should be able to pinpoint triplets that have been inaccurately completed or constructed. We filter out the erroneous outputs of a construction model to constitute adversarial noise.
Subsequent experiments also show that textual information plays a key role in detecting the noise generated by completion models, which rely on graph structure. Existing works thus need a potent method to discern more realistic noise and to exploit the full potential of the textual information within KGs for error detection. To achieve this goal, we leverage a pre-trained language model (PLM) as the textual encoder in our model, referring to KG-BERT (Yao, Mao, and Luo 2019), PKGC (Lv et al. 2022), MLMLM (Clouatre et al. 2021), and CoLE (Liu et al. 2022), which perform well in KG embedding and link prediction. A PLM is trained on a large open-domain corpus and can supplement triplets with rich information.

We propose a novel KG error detection model, CCA. It leverages the reconstruction of triplets to comprehend noise patterns from both the textual and the graph structural perspective. We also design interactive contrastive learning to align the latent representations of textual and structural information, which facilitates noise identification based on disparities between these two forms of information. CCA combines the reconstruction and contrastive learning outputs and generates pseudo-labels that represent triplet confidence. This adaptive confidence guides model training by alleviating noise interference and transferring knowledge between reconstruction and contrastive learning. With the utilization of textual information, CCA not only outperforms the state-of-the-art methods on random noise but also performs more prominently on semantically-similar noise and adversarial noise, validating its effectiveness in complex real-world scenarios. In summary, this paper makes the following contributions:

• We propose an end-to-end KG error detection model, which fully leverages both textual and structural information by reconstructing triplets and alleviates the interference of noise. It transfers knowledge between reconstruction and contrastive learning.

• We design interactive contrastive learning to align the latent representations of textual and structural information. We use different negative sampling policies to mine anomalous features on the alignment of latent spaces.

• We construct two kinds of noise, semantically-similar noise and adversarial noise, to evaluate the performance of our model in more realistic scenarios. Experiments show that CCA not only surpasses the state-of-the-art competitors on random noise but also achieves more significant results on the datasets with semantically-similar noise and adversarial noise.

Datasets and source code are available at https://github.com/nju-websoft/CCA.

Related Work

KG Error Detection

The embedding-based models, e.g., TransE (Bordes et al. 2013), DistMult (Yang et al. 2015), and ComplEx
Some works exploit path information in KGs as a reference. PTransE (Lin et al. 2015) integrates relation path information into triplet embedding learning. KGTtm (Jia et al. 2019) leverages path information for error detection. Several works recognize that relying solely on graph structure may not be sufficient for robust KG error detection. Integrating additional information can offer more semantic insights into triplets or provide more accurate supervision signals, thereby enhancing model capabilities. Defacto (Lehmann et al. 2012) uses the information from relevant web pages to verify triplets, while the categorical information of entities and relations is also useful to assess the trustworthiness of triplets (Paulheim and Bizer 2014). CrossVal (Wang, Ma, and Gao 2020) introduces humancurated knowledge repositories to assess triplet confidence in the target KG through the correlation of triplets in two KGs. Crowdsourcing and active learning models, such as KAEL (Dong et al. 2023), show promising results. CKRL (Xie et al. 2018) and PGE (Cheng et al. 2022) also explore confidence-aware methods. Their confidence constraints are based solely on the score function of the TransE’s loss. The existing models mainly focus on utilizing graph structure while under-utilizing valuable textual information. By considering this, our proposed model CCA seeks to overcome these limitations by jointly leveraging textual and graph structural information, aiming to achieve more accurate and comprehensive KG error detection. PLMs for KGs While KG error detection using PLMs is relatively scarce, the use of PLMs has demonstrated success in various tasks like KG completion. KG-BERT (Yao, Mao, and Luo 2019) and PKGC (Lv et al. 2022) achieve good results by directly fine-tuning PLMs by the triplet classification tasks using carefully constructed input templates. MMLLM (Clouatre et al. 2021) effectively employs the [MASK] token in prompts to enhance link prediction. To fuse language models with structural models, (Nadkarni et al. 2021; Wang et al. 2021; Chen et al. 2023) leverage the text enhancement method to represent entities by combining textual and structural features. CoLE (Liu et al. 2022) explores the use of the two types of information to construct models independently and uses knowledge distillation to complement and improve each other’s performance. The Proposed Model In this section, we describe the proposed model CCA in detail. The framework is shown in Figure 1. The goal of P T t = [CLS] [SEP]r 1 Nh [SEP]r 2 Nr [SEP]r 3 [MASK] [SEP]r 4 Dh P T h = [CLS] [SEP]r 1 [MASK] [SEP]r 2 Nr [SEP]r 3 Nt [SEP]r 4 Dt P S t = [CLS] [h] [SEP] [r] [SEP] [MASK] [SEP] [h′] [r′]... P S h = [CLS] [MASK] [SEP] [r] [SEP] [t] [SEP] [t′] [r′]... Table 2: Inputs used in the textual and structural encoders the proposed model is to better leverage both textual and graph structural information at the same time. Given a triplet (h, r, t), we first construct its input sequence by its subgraph and the descriptions of the entities and the relation. Then, BERT and a Transformer-based encoder are used to extract textual features and graph structural features, respectively. Reconstruction loss is employed to construct the head or tail entity in the triplet and evaluate the triplet’s trustworthiness. Interactive contrastive learning utilizes the projection of textual and structural representations to discover anomalous features in the two alignment spaces. 
Feature Extraction

Text encoder. PLMs like BERT (Kenton and Toutanova 2019) have been widely used to extract textual features from KGs. For each entity e, following previous work (Liu et al. 2022), we leverage its description $D_e$ and human-readable literal name $N_e$ to learn the textual representation. We first add a token for each new entity to the PLM's vocabulary. To better leverage the description of the entity, a pre-training prompt is constructed to initialize the new token embedding e:
$$e = \mathrm{BERT}(\text{"The description of [MASK] is } D_e\text{"}), \tag{1}$$
where [MASK] denotes the missing entity and its output embedding is used to initialize the token embedding e.

After pre-training on the descriptions, we construct two prompts by masking the head and tail entities, respectively, to better leverage textual information. Given a triplet (h, r, t), we use the prompts presented in Table 2 as input sequences. In $P^T_h$, the [MASK] token replaces the original head entity, and in $P^T_t$ the tail entity t is masked, so that h and t can be reconstructed by the encoder. [CLS] and [SEP] are special tokens used to separate the parts of a prompt, and $[SEP]^i_r$ is the i-th adjustable soft prompt token for the relation r, acting as a more expressive separator. We provide the textual description $D_t$ of the tail entity in $P^T_h$, and $D_h$ of the head entity in $P^T_t$. If the tail entity t is anomalous, it is hard to reconstruct from the textual description of h in $P^T_t$, so the reconstruction loss can evaluate the error possibility of a triplet from the textual view.

Structure encoder. To make full use of the graph structure of the KG, we build each entity's subgraph by random sampling and encode it with a Transformer. Given a triplet (h, r, t), we create two input sequences $P^S_h$ and $P^S_t$ by masking h and t, respectively. For $P^S_h$, let $\mathrm{InNeighbor}(h) = \{(h', r') \mid (h', r', h) \in \Gamma\}$ and $\mathrm{OutNeighbor}(h) = \{(t', r') \mid (h, r', t') \in \Gamma\}$ denote the two neighbor sets of entity h, where Γ is the triplet set. We randomly select some neighbors and separate them by the [SEP] token, as shown in Table 2; $P^S_t$ is constructed in the same way. The structure encoder can learn the differences between t and the subgraph of h, so errors can be distinguished by reconstructing entities from the subgraphs.

Reconstruction Classifier

Given a triplet (h, r, t), we obtain the output representation $e^T_h$ of the masked head entity from the text encoder:
$$e^T_h = \mathrm{BERT}(P^T_h), \tag{2}$$
where $P^T_h$ is the input sequence constructed in Table 2. We then compute prediction logits against all candidate entities and obtain the cross-entropy loss $L^T_h$ with the corresponding label:
$$L^T_h = \mathrm{CE}\big(\mathrm{softmax}\big(\langle \mathrm{MLP}(e^T_h),\, E^T \rangle\big),\, y_h\big), \tag{3}$$
where $E^T$ is the matrix of all entity embeddings and $y_h$ is the corresponding label. $e^T_h$ is transformed by a multi-layer perceptron and used to calculate the prediction logits against all embeddings in $E^T$; CE(·) denotes the cross-entropy loss. The tail entity is also reconstructed, and we calculate $e^T_t$ and $L^T_t$ in the same way.
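A minimal Hugging Face sketch of Eqs. (2)-(3) may clarify the reconstruction classifier. It omits the vocabulary extension for entity tokens and the soft [SEP]_r prompt tokens described above; `entity_emb` (playing the role of E^T), `mlp`, and the label tensor are assumptions set up elsewhere.

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

def masked_entity_loss(prompt: str, entity_emb: torch.Tensor,
                       label: torch.Tensor, mlp: torch.nn.Module) -> torch.Tensor:
    """Encode a prompt with one [MASK], read out the mask representation (Eq. 2),
    score it against all candidate entity embeddings, and take cross-entropy (Eq. 3)."""
    enc = tokenizer(prompt, return_tensors="pt")
    hidden = bert(**enc).last_hidden_state                          # (1, L, 768)
    mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    e_h = hidden[0, mask_pos]                                       # e^T_h in the paper
    logits = mlp(e_h) @ entity_emb.T                                # scores over all entities
    return torch.nn.functional.cross_entropy(logits.unsqueeze(0), label)
```

The same mask readout applied to the Transformer structure encoder would yield L^S_h and L^S_t.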
Next, we introduce adaptive confidence into the reconstruction loss and obtain the final text reconstruction loss:
$$L_{text} = \sum_{(h,r,t)\in\Gamma} c(h, r, t)\,\big(L^T_h + L^T_t\big), \tag{4}$$
where the triplet confidence c(h, r, t) is computed shortly, in the Knowledge Fusion section. We use the same procedure on the Transformer encoder to get the masked-entity losses $L^S_h$ and $L^S_t$ and the overall structural reconstruction loss $L_{struct}$, analogous to Eqs. (2), (3), and (4). We add $L_{text}$ and $L_{struct}$ and jointly train the final reconstruction loss:
$$L_{reconstruct} = \alpha L_{text} + (1 - \alpha) L_{struct}, \tag{5}$$
where α is a hyperparameter that balances the training process. Finally, we use the cross-entropy losses to compute the scores of the triplet (h, r, t):
$$\mathrm{score}_{text} = L^T_h + L^T_t, \tag{6}$$
$$\mathrm{score}_{struct} = L^S_h + L^S_t, \tag{7}$$
where score_text and score_struct are the confidence scores from text and structure reconstruction, respectively.

Interactive Contrastive Learning

We use interactive contrastive learning (Yang et al. 2022) to learn the differences between textual and structural information. For a triplet (h, r, t), $v^h_i$ and $v^t_i$ denote the token embeddings generated from the PLM by $P^T_h$ and $P^T_t$, respectively, while $u^h_i$ and $u^t_i$ denote the corresponding embeddings from the structure encoder. $v^h_i$ contains the textual information of t and the reconstruction of h from the PLM; $u^t_i$ contains the structural information of h and the reconstruction of t from the Transformer. Intuitively, $v^h_i$ and $u^t_i$ should be aligned, so that the prediction of h matches its structural information and the prediction of t matches its textual information. Therefore, we use $(v^h_i, u^t_i)$ as one anchor pair and $(v^t_i, u^h_i)$ as the other for contrastive learning.

Negative sampling. In interactive contrastive learning, we maximize the agreement between an anchor pair, e.g., $v^h_i$ from the text encoder and its structural counterpart $u^t_i$, for a triplet (h, r, t). We use two negative sampling policies to support stable training. First, to align the latent spaces of the two encoders, other samples from the structure encoder, such as $u^t_j$, should keep a distance from the anchor $v^h_i$; we thus randomly take another sample from the structure encoder as one type of negative. Second, to improve the sensitivity and robustness of the model against noise, we construct error samples as negatives based on the anchor. We randomly replace the head (or tail) entity of the anchor triplet (h, r, t) to construct a typical error sample (h, r, t') (or (h', r, t)) and generate its representations $m^h_{ik}$ and $n^t_{ik}$ from the two encoders, respectively. Since (h, r, t') is noise generated by disturbance, $m^h_{ik}$ and $n^t_{ik}$ should not be aligned. To reduce the training cost, we directly use the entity representations obtained by the encoders within each training batch to perturb the original embeddings. For a triplet (h, r, t), the similarity of its negative samples is
$$\mathrm{neg\_sim}(h, r, t) = \sum_{j \neq i} \exp\big(\mathrm{sim}(v^h_i, u^t_j)\big) + \sum_{x=1}^{X} \exp\big(\mathrm{sim}(m^h_{ix}, n^t_{ix})\big), \tag{8}$$
where $v^h_i$ denotes the masked-head triplet embedding of the i-th sample and $u^t_j$ denotes the masked-tail triplet embedding of the j-th sample. sim(·) denotes the cosine similarity between two embeddings. $m^h_{ix}$ and $n^t_{ix}$ are the randomly replaced embeddings derived from $v^h_i$ and $u^t_i$, and we perform the replacement X = 4 times.
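A batched PyTorch sketch of Eq. (8); the shapes and names are our assumptions, with `m_h` and `n_t` holding the X = 4 corrupted-triplet embeddings per anchor.

```python
import torch
import torch.nn.functional as F

def neg_sim(v_h: torch.Tensor, u_t: torch.Tensor,
            m_h: torch.Tensor, n_t: torch.Tensor) -> torch.Tensor:
    """Eq. (8): in-batch cross-encoder negatives plus corrupted-triplet negatives.

    v_h, u_t: (B, d) anchors from the text and structure encoders.
    m_h, n_t: (B, X, d) embeddings of the randomly corrupted triplets.
    """
    B = v_h.shape[0]
    sim = F.cosine_similarity(v_h.unsqueeze(1), u_t.unsqueeze(0), dim=-1)  # (B, B)
    mask = ~torch.eye(B, dtype=torch.bool, device=sim.device)
    in_batch = (torch.exp(sim) * mask).sum(dim=1)   # sum over j != i of exp(sim(v^h_i, u^t_j))
    corrupted = torch.exp(F.cosine_similarity(m_h, n_t, dim=-1)).sum(dim=1)
    return in_batch + corrupted                     # (B,) per-anchor negative mass
```

This quantity is exactly the denominator term of the InfoNCE loss introduced next.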
Adaptive contrastive learning. We employ the InfoNCE loss (van den Oord, Li, and Vinyals 2018) to train interactive contrastive learning. For the anchor pair $(v^h_i, u^t_i)$, the loss is
$$L^1_{ICL}(h, r, t) = -\log \frac{\exp\big(\mathrm{sim}(v^h_i, u^t_i)\big)}{\exp\big(\mathrm{sim}(v^h_i, u^t_i)\big) + \mathrm{neg\_sim}(h, r, t)}, \tag{9}$$
where neg_sim(h, r, t) is the similarity of the negative samples. Choosing $(v^t_i, u^h_i)$ as the anchor pair yields $L^2_{ICL}(h, r, t)$ in the same way. The final contrastive training loss is therefore
$$L_{contr} = \sum_{(h,r,t)\in\Gamma} c(h, r, t)\,\big(L^1_{ICL}(h, r, t) + L^2_{ICL}(h, r, t)\big), \tag{10}$$
where c(h, r, t) is the adaptive confidence of (h, r, t). The contrastive score of (h, r, t) is
$$\mathrm{score}_{contrastive} = \mathrm{sim}(v^h_i, u^t_i). \tag{11}$$

Knowledge Fusion

Since score_text and score_struct are both logits from triplet reconstruction, we directly add them to obtain the reconstruction score:
$$\mathrm{score}_{reconstruct} = \mathrm{score}_{text} + \lambda\, \mathrm{score}_{struct}, \tag{12}$$
where λ is a hyperparameter that balances the two scores. The scores from reconstruction and contrastive learning have quite different distributions, so we combine them by ranking. Ranking the error scores score_reconstruct and score_contrastive gives the error ranks $R_1(h, r, t)$ and $R_2(h, r, t)$ of each triplet, respectively. The final score is
$$\mathrm{score}(h, r, t) = \frac{1}{\lceil R_1(h, r, t)/\gamma \rceil} + \frac{\beta}{\lceil R_2(h, r, t)/\gamma \rceil}, \tag{13}$$
where γ controls the number of triplets sharing the same score and β balances the two terms. We generate a pseudo label from the final score. Let $Z = \mathrm{normalize}([z_1, \ldots, z_n])$ be the pseudo label set, where $z_i \sim \mathcal{N}(\mu, \rho)$ and Z is sorted in ascending order; the adaptive confidence of (h, r, t) is
$$c(h, r, t) = z_{R(h,r,t)}, \tag{14}$$
where R(h, r, t) is the rank of score(h, r, t). We use c(h, r, t) as the final result. Since it integrates textual and structural reconstruction with contrastive learning, c(h, r, t) directly constrains the training process via Eqs. (4) and (10), transferring knowledge across components.
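A NumPy sketch of the rank-based fusion of Eqs. (12)-(14). Several details here are our assumptions: both input score arrays are taken to be oriented so that larger means more anomalous (the contrastive similarity would be negated first), and normalize() is taken to be a simple max normalization, which the paper leaves unspecified.

```python
import numpy as np

def adaptive_confidence(s_recon, s_contr, gamma=100, beta=1.0, mu=1.0, rho=0.1):
    """Eqs. (12)-(14): fuse two anomaly-score arrays by rank, then map the fused
    ranking to sorted Gaussian pseudo labels used as triplet confidence."""
    n = len(s_recon)
    # Error ranks: rank 1 = most anomalous (inputs assumed: larger = more anomalous).
    r1 = (-s_recon).argsort().argsort() + 1
    r2 = (-s_contr).argsort().argsort() + 1
    fused = 1.0 / np.ceil(r1 / gamma) + beta / np.ceil(r2 / gamma)   # Eq. (13)
    z = np.sort(np.random.normal(mu, rho, size=n))   # ascending pseudo labels
    z = z / z.max()                                  # assumed normalization
    # Most anomalous triplets (largest fused score) receive the smallest confidence.
    order = (-fused).argsort().argsort()
    return z[order]                                  # c(h, r, t) = z_{R(h,r,t)}, Eq. (14)
```

The returned vector is the c(h, r, t) that reweights Eqs. (4) and (10) in the next round of training.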
Experiments and Results

Dataset Construction

We conduct our experiments on FB15K-237 (Toutanova et al. 2015) and WN18RR (Dettmers et al. 2018) and add 5% noise to each dataset. The statistics of the modified datasets are shown in Table 3.

| Datasets | Correct triplets | Wrong triplets | Avg. deg. |
| FB15K-237 | 310,116 | 16,321 | 44.89 |
| WN18RR | 93,003 | 4,894 | 4.78 |

Table 3: Statistics of the modified datasets

We construct three types of noise to better evaluate the performance of our model:

• Random noise, drawn from a uniform distribution that does not depend on the data and is not predictable. For a correct triplet (h, r, t), random noise is constructed by randomly replacing one of the entities or the relation.

• Semantically-similar noise, which is more realistic because errors in real-world scenarios primarily stem from semantic confusion. Given a triplet (h, r, t), we form a candidate set $S_t$ by selecting other tail entities linked to r. For each entity $e \in S_t$, we employ BERT to encode its description, acquiring the semantic embedding e of e. We compute the following distribution over semantic similarities and use it as the sampling probability for replacing the original t:
$$P(t) = \mathrm{softmax}\big(t \cdot e_1,\ t \cdot e_2,\ \ldots,\ t \cdot e_i\big), \tag{15}$$
where t denotes the embedding of the original tail entity t, $e_i$ denotes the embedding of the i-th candidate entity, and · denotes the dot product.

• Adversarial noise, generated adversarially from KG construction models. Since error detection models are frequently employed to discern errors in automatically constructed KGs, we use TransE (Bordes et al. 2013) for adversarial noise generation and evaluate the ability of models to recognize errors introduced during KG construction. Given a dataset D, we randomly divide it into training and testing sets, $D_{train}$ and $D_{test}$. After training on $D_{train}$, we randomly select one entity within the top-10 prediction results of each triplet in $D_{test}$ to construct its error triplet, and iteratively repeat this process until we obtain an adequate amount of noise.

Settings

We use ranking measures for evaluation: all triplets in a KG are ranked by their confidence scores in ascending order, and triplets with lower confidence scores are more likely to be noisy. Following CAGED (Zhang et al. 2022), we use precision@top-K and recall@top-K to assess performance, where K denotes a ratio (e.g., 5%). All experiments are conducted on two Intel Xeon Gold 6326 CPUs, 512 GB RAM, and one NVIDIA RTX A6000 GPU. We use the BERT-base model from Hugging Face as the PLM, implement our model in PyTorch, and employ the AdamW optimizer with a cosine decay scheduler and linear warm-up. Grid search is used for hyperparameter tuning. The results of the KG embedding models are obtained from µKG (Luo, Sun, and Hu 2022), a recent open-source library for KG embedding.

Baseline Models

We compare our model with eight baselines, including five structural models and three textual models.

• Structural models. We choose three typical embedding models, TransE (Bordes et al. 2013), DistMult (Yang et al. 2015), and ComplEx (Trouillon et al. 2016), and two state-of-the-art structural error detection models, KGTtm (Jia et al. 2019) and CAGED (Zhang et al. 2022). Both KGTtm and CAGED utilize graph structural information based on the TransE score function.

• Textual models. We choose one textual classification model, KG-BERT (Yao, Mao, and Luo 2019), and two recent models, StAR (Wang et al. 2021) and CSProm-KG (Chen et al. 2023), which leverage both textual and structural information. KG-BERT takes the entity and relation descriptions of a triplet as input and computes scores by binary classification; StAR and CSProm-KG use representations obtained from PLMs to aid structural models. CCA also falls into this category.
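Before turning to the results, a small sketch of the ranking-based evaluation protocol from the Settings paragraph above; the variable names are ours.

```python
import numpy as np

def precision_recall_at_k(confidence: np.ndarray, is_noise: np.ndarray,
                          ratio: float = 0.05):
    """Sort all triplets by confidence ascending and check how many of the
    lowest-confidence top-K are actual noise."""
    k = int(len(confidence) * ratio)
    flagged = np.argsort(confidence)[:k]    # the K triplets judged least trustworthy
    hits = is_noise[flagged].sum()
    return hits / k, hits / is_noise.sum()  # precision@top-K, recall@top-K
```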
Precision@top-K:

| Model type | Models | FB15K-237 K=1% | K=2% | K=3% | K=4% | K=5% | WN18RR K=1% | K=2% | K=3% | K=4% | K=5% |
| Struct | TransE | 0.946 | 0.774 | 0.606 | 0.498 | 0.423 | 0.690 | 0.576 | 0.501 | 0.437 | 0.400 |
| Struct | DistMult | 0.764 | 0.630 | 0.530 | 0.463 | 0.410 | 0.687 | 0.633 | 0.526 | 0.438 | 0.374 |
| Struct | ComplEx | 0.802 | 0.633 | 0.521 | 0.446 | 0.393 | 0.774 | 0.699 | 0.550 | 0.449 | 0.384 |
| Struct | KGTtm | 0.857 | 0.687 | 0.631 | 0.467 | 0.437 | 0.789 | 0.644 | 0.541 | 0.473 | 0.417 |
| Struct | CAGED | 0.863 | 0.666 | 0.602 | 0.543 | 0.467 | 0.753 | 0.620 | 0.536 | 0.470 | 0.421 |
| Text | KG-BERT | 0.966 | 0.799 | 0.660 | 0.584 | 0.498 | 0.973 | 0.968 | 0.938 | 0.829 | 0.710 |
| Text | StAR | 0.970 | 0.835 | 0.681 | 0.571 | 0.490 | 0.971 | 0.918 | 0.842 | 0.739 | 0.647 |
| Text | CSProm-KG | 0.961 | 0.798 | 0.689 | 0.574 | 0.509 | 0.977 | 0.927 | 0.869 | 0.773 | 0.680 |
| Text | CCA (ours) | 0.969 | 0.812 | 0.707 | 0.599 | 0.534 | 0.986 | 0.959 | 0.920 | 0.834 | 0.733 |

Recall@top-K:

| Model type | Models | FB15K-237 K=1% | K=2% | K=3% | K=4% | K=5% | WN18RR K=1% | K=2% | K=3% | K=4% | K=5% |
| Struct | TransE | 0.189 | 0.310 | 0.364 | 0.399 | 0.423 | 0.138 | 0.231 | 0.300 | 0.350 | 0.400 |
| Struct | DistMult | 0.153 | 0.252 | 0.318 | 0.371 | 0.410 | 0.137 | 0.253 | 0.316 | 0.350 | 0.374 |
| Struct | ComplEx | 0.161 | 0.254 | 0.313 | 0.357 | 0.393 | 0.155 | 0.279 | 0.330 | 0.359 | 0.384 |
| Struct | KGTtm | 0.171 | 0.275 | 0.378 | 0.374 | 0.437 | 0.158 | 0.257 | 0.324 | 0.378 | 0.417 |
| Struct | CAGED | 0.173 | 0.266 | 0.362 | 0.435 | 0.467 | 0.150 | 0.248 | 0.321 | 0.376 | 0.421 |
| Text | KG-BERT | 0.193 | 0.319 | 0.396 | 0.467 | 0.498 | 0.195 | 0.387 | 0.563 | 0.663 | 0.710 |
| Text | StAR | 0.194 | 0.334 | 0.409 | 0.457 | 0.490 | 0.194 | 0.367 | 0.505 | 0.591 | 0.647 |
| Text | CSProm-KG | 0.192 | 0.319 | 0.413 | 0.459 | 0.509 | 0.195 | 0.371 | 0.521 | 0.618 | 0.680 |
| Text | CCA (ours) | 0.194 | 0.325 | 0.424 | 0.479 | 0.534 | 0.197 | 0.384 | 0.552 | 0.667 | 0.733 |

Table 4: Results of precision and recall at top-K on FB15K-237 and WN18RR

| Dataset | Variant | K=1% | K=3% | K=5% |
| FB15K-237 | CCA (full) | .969 / .194 | .707 / .424 | .535 / .535 |
| FB15K-237 | – adapt conf. | .951 / .190 | .664 / .398 | .496 / .496 |
| FB15K-237 | – inter contr. | .961 / .192 | .679 / .407 | .509 / .509 |
| FB15K-237 | – struct recon. | .797 / .159 | .582 / .349 | .475 / .475 |
| FB15K-237 | – text recon. | .778 / .156 | .543 / .325 | .414 / .414 |
| WN18RR | CCA (full) | .986 / .197 | .920 / .552 | .733 / .733 |
| WN18RR | – adapt conf. | .971 / .194 | .858 / .515 | .632 / .632 |
| WN18RR | – inter contr. | .979 / .196 | .911 / .546 | .722 / .722 |
| WN18RR | – struct recon. | .974 / .195 | .914 / .549 | .726 / .726 |
| WN18RR | – text recon. | .577 / .115 | .513 / .308 | .423 / .423 |

Table 5: Ablation results of precision and recall at top-K (each cell reports precision / recall)

Main Results

Table 4 presents the comparison results of our model and the eight baseline models on FB15K-237 and WN18RR, where we add 5% noise containing the three noise types in equal quantities. Overall, our model outperforms all eight baselines on both datasets. We make three observations:

First, compared with the KG embedding models, the error detection models generally perform better. The KG embedding models assume that all triplets are correct; they learn representations of entities and relations without alleviating the disturbance from noise, making it difficult to discriminate erroneous triplets. With adaptive confidence, CCA effectively improves performance.

Second, benefiting from PLMs, the textual models outperform the structural models on both datasets. With the descriptions of entities and relations, they can learn noise patterns from the textual view, since PLMs capture factual knowledge from large open-domain corpora.
CCA combines both textual and structural information and uses a Transformer encoder to learn structural information from scratch, which is more effective than the other textual models.

Third, the performance gap between CCA and KG-BERT on WN18RR is less significant than that on FB15K-237. We attribute this to WN18RR being much sparser than FB15K-237: as shown in Table 3, the average entity degree in WN18RR is far lower than that of FB15K-237, indicating less structural information. Thus, our model gains less from incorporating graph structure on sparse KGs.

| Dataset | Models | Random | Similar | Adversarial |
| FB15K-237 | TransE | 0.726 | 0.304 | 0.125 |
| FB15K-237 | ComplEx | 0.734 | 0.306 | 0.150 |
| FB15K-237 | DistMult | 0.662 | 0.328 | 0.125 |
| FB15K-237 | KGTtm | 0.730 | 0.318 | 0.128 |
| FB15K-237 | CAGED | 0.758 | 0.331 | 0.126 |
| FB15K-237 | KG-BERT | 0.674 | 0.336 | 0.199 |
| FB15K-237 | StAR | 0.728 | 0.371 | 0.164 |
| FB15K-237 | CSProm-KG | 0.732 | 0.419 | 0.211 |
| FB15K-237 | CCA (ours) | 0.768 | 0.453 | 0.240 |
| WN18RR | TransE | 0.434 | 0.351 | 0.314 |
| WN18RR | ComplEx | 0.384 | 0.303 | 0.366 |
| WN18RR | DistMult | 0.316 | 0.338 | 0.285 |
| WN18RR | KGTtm | 0.448 | 0.391 | 0.359 |
| WN18RR | CAGED | 0.486 | 0.388 | 0.373 |
| WN18RR | KG-BERT | 0.806 | 0.633 | 0.599 |
| WN18RR | StAR | 0.794 | 0.610 | 0.527 |
| WN18RR | CSProm-KG | 0.791 | 0.634 | 0.536 |
| WN18RR | CCA (ours) | 0.807 | 0.657 | 0.545 |

Table 6: Precision at top-5% on three different error types

| Case | Triplet | CCA | KG-BERT | CAGED |
| Case 1, Noise | (Razzie Award for Worst Actor, award winner, Richard D. Zanuck) | 2.9% | 7.6% | 3.4% |
| Case 1, Truth | (Razzie Award for Worst Actor, award winner, Marlon Wayans) | | | |
| Case 2, Noise | (Tony Award for Best Choreography, ceremony, 54th Academy Awards) | 3.2% | 10.6% | 11.6% |
| Case 2, Truth | (Academy Award for Best Sound Mixing, ceremony, 54th Academy Awards) | | | |
| Case 3, Noise | (Lauren Tom, place of birth, Buenos Aires) | 2.2% | 3.2% | 35.1% |
| Case 3, Truth | (Sebastian Krys, place of birth, Buenos Aires) | | | |
| Case 4, Noise | (Viola Davis, award nominee, Carrie-Anne Moss) | 1.0% | 10.9% | 4.3% |
| Case 4, Truth | (Viola Davis, award nominee, Meryl Streep) | | | |

Table 7: Case study on FB15K-237

Ablation Study

We conduct an ablation study to assess the impact of each component of CCA, using four variants that remove confidence adaption, interactive contrastive learning, structure reconstruction, or text reconstruction, and report their performance under the same settings. Table 5 shows that all components contribute on both datasets, with text reconstruction contributing the most. On FB15K-237, text reconstruction improves precision from 0.414 to 0.535. This shows that applying PLMs to KG error detection is a promising direction that deserves further research. The improvement is even more obvious on WN18RR, where the structure-based models perform poorly because WN18RR is sparser than FB15K-237. Removing adaptive confidence affects the two datasets differently: on WN18RR, adaptive confidence contributes more than on FB15K-237, as the performance gap between text and structure reconstruction is larger there. Without adaptive confidence, structure reconstruction can hardly benefit from the knowledge of the PLM; it then provides less accurate scores and disturbs the final results. Adaptive confidence is therefore more effective at improving overall performance when the components have a larger performance gap.

Performance on Different Error Types

To investigate the robustness of all models to different noise types, we add 5% of each noise type to the datasets separately. Note that precision@top-5% and recall@top-5% have the same value in this setting. Table 6 presents precision@top-5% on FB15K-237 and WN18RR. On FB15K-237, CCA outperforms all baselines.
For random noise, the structure-based models generally outperform KG-BERT, as this type of noise can be well distinguished by graph structure alone. For semantically-similar noise, all baseline models perform closely. As for adversarial noise, the textual models outperform the structural ones, because this type of noise consists of entities that are wrongly predicted as correct entities by TransE, which is a structural model. Our CCA takes advantage of the knowledge from PLMs and graph structure at the same time, so it outperforms all baselines, especially for semantically-similar noise. On WN18RR, CCA is comparable to KG-BERT, which is different from the observations on FB15K-237. The models with PLMs largely outperform the structural models on all three types of noise, mainly due to that WN18RR is sparser than FB15K-237. For adversarial noise, our model underperforms KG-BERT. Given that the structure-based models do not work well on WN18RR, Transformer in CCA provides less reliable knowledge for the PLM, hindering it from distinguishing the noise correctly. Case Study To explore how textual information and graph structure act on error detection, we perform a case study on CCA, KGBERT, and CAGED. Table 7 shows the position of the error triplet in a ranking, where a smaller percentage is better. In the first two cases, CCA leverages both textual and graph structural information, and outperforms KG-BERT and CAGED, which solely use one type of information. For example, in Case 1, the description shows that Richard D. Zanuck is an American film producer, and the graph structural information records films that he has produced. Evidence from text and graph structure are complementary to each other. In Case 3, CAGED is not effective compared with CCA and KG-BERT, which is caused by the lack of graph structural information. The degrees of entities “Lauren Tom” and “The Venture Bros” are 15 and 17, respectively, which are much smaller than the average degree of 44.89 in FB15K-237. In Case 4, entities in the noise and correct triplets all have abundant graph structural information, so CAGED can achieve a better effect. Conclusion In this paper, we propose a novel KG error detection model. It encodes textual and graph structural information to find noise patterns. To alleviate the disturbance of noise and integrate the knowledge from the two encoders, we design a confidence adaption model to aggregate the results and constrain the training process. To learn the noise patterns between textual and structural information, we leverage interactive contrastive learning to align latent spaces. We construct semantically-similar noise and adversarial noise for evaluation. Experiments show that our model achieves good results on semantically-similar noise and adversarial noise. Acknowledgments This work is supported by the National Natural Science Foundation of China (No. 62272219). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8830 References Bordes, A.; Usunier, N.; Garc´ıa-Dur´an, A.; Weston, J.; and Yakhnenko, O. 2013. Translating embeddings for modeling multi-relational data. In NIPS, 2787–2795. Carlson, A.; Betteridge, J.; Kisiel, B.; Settles, B.; Hruschka, E.; and Mitchell, T. 2010. Toward an architecture for neverending language learning. In AAAI, 1306–1313. Chen, C.; Wang, Y.; Sun, A.; Li, B.; and Lam, K.-Y. 2023. Dipping PLMs sauce: Bridging structure and text for effective knowledge graph completion via conditional soft prompting. In Findings of ACL, 11489–11503. 
Chen, S.; Liu, X.; Gao, J.; Jiao, J.; Zhang, R.; and Ji, Y. 2021. HittER: Hierarchical transformers for knowledge graph embeddings. In EMNLP, 10395–10407. Cheng, K.; Li, X.; Xu, Y. E.; Dong, X. L.; and Sun, Y. 2022. PGE: Robust product graph embedding learning for error detection. Proc. VLDB Endow., 15(6): 1288–1296. Clouatre, L.; Trempe, P.; Zouaq, A.; and Chandar, S. 2021. MLMLM: Link prediction with mean likelihood masked language model. In Findings of ACL, 4321–4331. Dettmers, T.; Minervini, P.; Stenetorp, P.; and Riedel, S. 2018. Convolutional 2D knowledge graph embeddings. In AAAI, 1811–1818. Dong, J.; Zhang, Q.; Huang, X.; Tan, Q.; Zha, D.; and Zihao, Z. 2023. Active ensemble learning for knowledge graph error detection. In WSDM, 877–885. Dong, X.; Gabrilovich, E.; Heitz, G.; Horn, W.; Lao, N.; Murphy, K.; Strohmann, T.; Sun, S.; and Zhang, W. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In KDD, 601–610. Guo, Q.; Zhuang, F.; Qin, C.; Zhu, H.; Xie, X.; Xiong, H.; and He, Q. 2022. A survey on knowledge graph-based recommender systems. IEEE Trans. Knowl. Data Eng., 34(8): 3549–3568. Jia, S.; Xiang, Y.; Chen, X.; and Wang, K. 2019. Triple trustworthiness measurement for knowledge graph. In WWW, 2865–2871. Kenton, J. D. M.-W. C.; and Toutanova, L. K. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 4171–4186. Lehmann, J.; Gerber, D.; Morsey, M.; and Ngomo, A.-C. N. 2012. DeFacto - Deep fact validation. In ISWC, 312–327. Lin, Y.; Liu, Z.; Luan, H.; Sun, M.; Rao, S.; and Liu, S. 2015. Modeling relation paths for representation learning of knowledge bases. In EMNLP, 705–714. Liu, Y.; Sun, Z.; Li, G.; and Hu, W. 2022. I know what you do not know: Knowledge graph embedding via co-distillation learning. In CIKM, 1329–1338. Luo, X.; Sun, Z.; and Hu, W. 2022. µKG: A library for multi-source knowledge graph embeddings and applications. In ISWC, 610–627. Lv, X.; Lin, Y.; Cao, Y.; Hou, L.; Li, J.; Liu, Z.; Li, P.; and Zhou, J. 2022. Do pre-trained models benefit knowledge graph completion? A reliable evaluation and a reasonable approach. In Findings of ACL, 3570–3581. Nadkarni, R.; Wadden, D.; Beltagy, I.; Smith, N.; Hajishirzi, H.; and Hope, T. 2021. Scientific language models for biomedical knowledge base completion: An empirical study. In AKBC. Paulheim, H.; and Bizer, C. 2014. Improving the quality of linked data using statistical distributions. Int. J. Semant. Web Inf. Syst., 10(2): 63–86. Saxena, A.; Tripathi, A.; and Talukdar, P. P. 2020. Improving multi-hop question answering over knowledge graphs using knowledge base embeddings. In ACL, 4498–4507. Toutanova, K.; Chen, D.; Pantel, P.; Poon, H.; Choudhury, P.; and Gamon, M. 2015. Representing text for joint embedding of text and knowledge bases. In EMNLP, 1499–1509. Trouillon, T.; Welbl, J.; Riedel, S.; Gaussier, É.; and Bouchard, G. 2016. Complex embeddings for simple link prediction. In ICML, 2071–2080. van den Oord, A.; Li, Y.; and Vinyals, O. 2018. Representation learning with contrastive predictive coding. arXiv, 1807.03748. Wang, B.; Shen, T.; Long, G.; Zhou, T.; Wang, Y.; and Chang, Y. 2021. Structure-augmented text representation learning for efficient knowledge graph completion. In WWW, 1737–1748. Wang, Y.; Ma, F.; and Gao, J. 2020. Efficient knowledge graph validation via cross-graph representation learning. In CIKM, 1595–1604. Xie, R.; Liu, Z.; Lin, F.; and Lin, L. 2018. Does William Shakespeare really write Hamlet?
Knowledge representation learning with confidence. In AAAI, 4954–4961. Yang, B.; Yih, W.-t.; He, X.; Gao, J.; and Deng, L. 2015. Embedding entities and relations for learning and inference in knowledge bases. In ICLR. Yang, C.; An, Z.; Cai, L.; and Xu, Y. 2022. Mutual contrastive learning for visual representation learning. In AAAI, 3045–3053. Yao, L.; Mao, C.; and Luo, Y. 2019. KG-BERT: BERT for knowledge graph completion. arXiv, 1909.03193. Zhang, Q.; Dong, J.; Duan, K.; Huang, X.; Liu, Y.; and Xu, L. 2022. Contrastive knowledge graph error detection. In CIKM, 2590–2599. Zhao, Y.; and Liu, J. 2019. SCEF: A support-confidence-aware embedding framework for knowledge graph refinement. arXiv, 1902.06377.
2024
981
18,830
Perturbation-Invariant Adversarial Training for Neural Ranking Models: Improving the Effectiveness-Robustness Trade-Off Yu-An Liu1,2, Ruqing Zhang1,2, Mingkun Zhang1,2, Wei Chen1,2, Maarten de Rijke3, Jiafeng Guo1,2, Xueqi Cheng1,2 1CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China 2University of Chinese Academy of Sciences, Beijing, China 3University of Amsterdam, Amsterdam, The Netherlands {liuyuan21b, zhangruqing, zhangmingkun20z, chenwei2022, guojiafeng, cxq}@ict.ac.cn, [email protected] Abstract Neural ranking models (NRMs) have shown great success in information retrieval (IR). But their predictions can easily be manipulated using adversarial examples, which are crafted by adding imperceptible perturbations to legitimate documents. This vulnerability raises significant concerns about their reliability and hinders the widespread deployment of NRMs. By incorporating adversarial examples into training data, adversarial training has become the de facto defense approach to adversarial attacks against NRMs. However, this defense mechanism is subject to a trade-off between effectiveness and adversarial robustness. In this study, we establish theoretical guarantees regarding the effectiveness-robustness trade-off in NRMs. We decompose the robust ranking error into two components, i.e., a natural ranking error for effectiveness evaluation and a boundary ranking error for assessing adversarial robustness. Then, we define the perturbation invariance of a ranking model and prove it to be a differentiable upper bound on the boundary ranking error for attainable computation. Informed by our theoretical analysis, we design a novel perturbation-invariant adversarial training (PIAT) method for ranking models to achieve a better effectiveness-robustness trade-off. We design a regularized surrogate loss, in which one term encourages the effectiveness to be maximized while the regularization term encourages the output to be smooth, so as to improve adversarial robustness. Experimental results on several ranking models demonstrate the superiority of PIAT compared to existing adversarial defenses. Introduction Ranking is a fundamental problem in information retrieval (IR). With advances in deep learning (LeCun, Bengio, and Hinton 2015), neural ranking models (NRMs) (Guo et al. 2020) have achieved remarkable effectiveness. We have also witnessed substantial uptake of NRMs in practice (Lin, Nogueira, and Yates 2022). Recently, it has been demonstrated that NRMs are vulnerable to adversarial examples that are capable of inducing misbehavior with human-imperceptible perturbations (Wu et al. 2023; Liu et al. 2022; Chen et al. 2023). So far, little attention has been devoted to combating this issue. A representative and successful method for attacking NRMs is the word substitution ranking attack (WSRA), which promotes a target document in rankings by replacing important words with synonyms (Wu et al. 2023). Given the prevalence of black-hat search engine optimization (SEO) (Gyöngyi and Garcia-Molina 2005), enhancing the adversarial robustness of NRMs against such attacks is vital for their use in real-world scenarios. Among adversarial defense mechanisms proposed to improve model robustness (Jia and Liang 2017; Raghunathan, Steinhardt, and Liang 2018; Madry et al.
2018), adversarial training remains the top-performer (Shafahi et al. 2019; Zhu et al. 2019). During adversarial training adversarial examples are fed to a model. However, this causes an undesirable reduction in effectiveness on natural (clean) samples, giving rise to a trade-off dilemma between effectiveness and robustness (Tsipras et al. 2019). This is because effectiveness concerns the overall performance under normal conditions, while adversarial robustness centers on performance under malicious behavior. Several refinements have been suggested for vanilla adversarial training, to mitigate the aforementioned trade-off in text and image classification (Zhang et al. 2019; Wang et al. 2021). However, clear differences exist between classification and ranking scenarios concerning the trade-off, given that the former relies on a single sample, whereas the latter involves a ranked list. So far, the ranking task has not benefited from these advances in bridging the gap between effectiveness and robustness. This naturally raises the first question: What is the trade-off between effectiveness and robustness for ranking problems? We contribute a theoretical characterization of this question by decomposing the robust ranking error, i.e., the prediction error for adversarial examples, into two terms: (i) a natural ranking error, which focuses on the natural effectiveness of the ranked list predicted by the ranking model on clean data, and (ii) a boundary ranking error, which indicates the ranking model’s adversarial robustness against adversarial examples, measuring the proximity of input features to the decision boundary. We then introduce the perturbation invariance of a ranking model, which says that any adversarial perturbation to candidate documents does not alter the resulting document ranking. We prove that the perturbation invariance is a differentiable upper bound on the boundary The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8832 ranking error, which is sufficiently tight. Differences in measurements of these two errors, which express distinct optimization objectives, showcase the trade-off between effectiveness and robustness for ranking problems. Next to the effectiveness-robustness trade-off, the second issue we address is: How to design a defense mechanism against adversarial examples while maintaining competitive effectiveness for NRMs guided by our theoretical characterization? We introduce a novel perturbation-invariant adversarial training method (PIAT) to achieve this goal. The key idea is to capture the trade-off between natural and boundary ranking error by optimizing a regularized surrogate loss composed of two terms: (i) a natural ranking loss, which encourages the optimization of the natural ranking error by minimizing the “difference” between the predicted ranked list and the ground-truth based on supervised data, and (ii) an adversarial ranking loss, as the regularization term, which encourages the optimization of boundary ranking error by minimizing the “difference” between the predicted ranked list on natural candidates and on attacked candidates using semi-supervised learning. We propose three ways to implement the regularization term to ensure perturbation invariance. By combining supervised and semi-supervised training, we effectively leverage information from large-scale volumes of unlabeled documents to improve the effectiveness-robustness trade-off for NRMs. 
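In code, the proposed objective has a simple two-term shape. The sketch below is a minimal PyTorch rendering for a single query; the MSE smoothness penalty is a placeholder stand-in, not the paper's choice, and the concrete instantiations of both terms (Eq. 9 for the natural term; Eqs. 12, 15, and 17 for the regularizer) are given in later sections.

import torch
import torch.nn.functional as F

def piat_loss(scores_clean, scores_adv, pos_idx, lam=0.5):
    """L = lam * L_nat + (1 - lam) * L_adv for a single query.

    scores_clean : (N_d,) relevance scores f(q, d) on the clean list D
    scores_adv   : (N_d,) scores on the perturbed list D_adv
    pos_idx      : index of the labeled relevant document in D
    """
    # L_nat: push the relevant document above the rest (supervised).
    l_nat = F.cross_entropy(scores_clean.unsqueeze(0),
                            torch.tensor([pos_idx]))
    # L_adv: placeholder smoothness penalty keeping predictions on D_adv
    # close to those on D (semi-supervised: no relevance labels needed).
    l_adv = F.mse_loss(scores_adv, scores_clean)
    return lam * l_nat + (1 - lam) * l_adv

scores = torch.randn(100, requires_grad=True)
piat_loss(scores, scores + 0.1 * torch.randn(100), pos_idx=3).backward()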
Extensive experiments conducted on the widely-used MS MARCO passage ranking dataset show that PIAT offers superior defense against WSRA while maintaining effectiveness as compared to several empirical defense methods, including data augmentation and vanilla adversarial training. Ablations and visualizations are provided for more insights. Preliminaries Our work focuses on adversarial robustness to word substitution ranking attacks for NRMs. We review this type of attack in this section. Attacks in web search. The web, as a canonical example of a competitive search setting, involves document authors who have incentives to optimize their content for better rankings in search results (Kurland and Tennenholtz 2022). This practice is commonly known as search engine optimization (SEO), which aims at improving the visibility and ranking of a web page in retrieved results when specific queries are entered by users (Gy¨ongyi and Garcia-Molina 2005). This can lead to a decrease in the overall quality of search results, as many irrelevant or low-quality documents may end up being ranked higher than they deserve, while more valuable and accurate content may get pushed down in the results. Word substitution ranking attack. Recently, there has been much research on adversarial attacks against NRMs to simulate real-world ranking competitions. A representative study is the word substitution ranking attack (WSRA) (Wu et al. 2023), which demonstrates promising results in terms of the attack success rate. Given a ranking model, WSRA aims to promote a target document in rankings by replacing important words in its text with synonyms in a semanticspreserving way. Our research concentrates on WSRA attacks and develops a corresponding defense strategy. Typically, in ad-hoc retrieval, given a query q and a set of document candidates D = {d1, d2, . . . , dNd}, a neural ranking model f predicts the relevance score f (q, di) of each query-document pair for ranking the whole candidate set. For example, f outputs the ranked list [dNd, dNd−1, . . . , d1] if it determines f (q, dNd) > f (q, dNd−1) > · · · > f (q, d1). The rank position of document di with respect to query q predicted by f is πf (q, di). And we use πy (q, di) to represent the ground-truth rank position of di with respect to q. Given a target document d = (w1, w2, . . . , wM) ∈D, the WSRA task constructs an adversarial example d′ = (w′ 1, w′ 2, . . . , w′ M) by replacing at most ϵ · M (ϵ ≤1) words in d with any of their synonyms w′ m. We denote a candidate set of adversarial examples (neighborhood) of d as B(d, ϵ), i.e., B(d, ϵ) :=  d′ : d′ −d 0 /∥d∥≤ϵ , (1) where ∥d∥represents the number of words in document d, d′ −d 0 := PM m=1 I {w′ m ̸= wm} is the Hamming distance, with I{·} the indicator function. Ideally, the goal of the attacker is to find d′ ∈B(d, ϵ) such that f(q, d′) > f(q, d) and d′ has the same semantic meaning as d. Theoretical Analysis: The Trade-Off Between Effectiveness and Robustness Tsipras et al. (2019) have shown that the goals of standard performance and adversarial robustness may be at odds. There can be an inherent trade-off between effectiveness and robustness. Drawing inspiration from the definitions of natural and robust accuracy in (Zhang et al. 2019), we characterize the trade-off in ranking by breaking down the robust ranking error into the sum of the natural ranking error and boundary ranking error. We also provide a differentiable upper bound on the boundary ranking error, to inform the design of the defense method. 
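Operationally, the attack budget of Eq. 1 reduces to a Hamming-distance check over equal-length token sequences. A small sketch follows; tokenization and the synonym constraint are elided, and the function name is ours.

def in_neighborhood(d, d_prime, eps):
    """Return True iff d' lies in B(d, eps) per Eq. 1: WSRA only
    substitutes words, so the two documents must have equal length and
    differ in at most eps * |d| positions."""
    if len(d) != len(d_prime):
        return False
    hamming = sum(w != w2 for w, w2 in zip(d, d_prime))
    return hamming / len(d) <= eps

d = "the new therapy shows a clear benefit".split()
d_adv = "the new treatment shows a clear gain".split()
print(in_neighborhood(d, d_adv, eps=0.3))   # 2/7 substitutions -> True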
Natural Ranking Error So far, much effort in the field of NRMs has been dedicated to improving the ranking effectiveness, which is about the average performance under normal conditions. Definition 1 (Natural ranking error) Formally, the natural error associated with the effectiveness of a ranking model f on natural (clean) examples is denoted as, Rnat(f) := Edi∼DI {πf(q, di) ̸= πy(q, di)}, (2) where I{·} is the indicator function that is 1 if an event happens and 0 otherwise. For simplicity, we consider the 0 −1 loss in our theoretical analysis to evaluate the natural error. Boundary Ranking Error Here, we first define the decision boundary of a ranking model, and then introduce the boundary ranking error corresponding to the adversarial robustness of ranking models. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8833 Definition 2 (Ranking decision boundary) For a ranking model f, we define the ranking decision boundary as the predicted rank position πf(q, di) being higher or lower than it truly deserves. Note that for the topmost and the bottommost ranks, we exclusively consider the situations where the predicted rank is one position lower and higher, respectively. Considering practical attacks aimed at ranking improvement, we denote πn(q, di) = πy(q, di) −1 as the neighborhood rank of (q, di). Recall that low values of rank positions attest to high ranking. In this way, the ranking decision boundary can be formulated as: DB(f) := {di ∼D : πf(q, di) = πn(q, di)} . (3) We use B(di, ϵ) to represent a neighborhood of di under the WSRA attack. Then, for a ranking model f, we denote the neighborhood of the decision boundary of f as: B(DB(f), ϵ) := {di ∼D : ∃d′ i ∈B(di, ϵ) such that [πf(q, di)−πn(q, di)] · [πf(q, d′ i)−πn(q, di)]≤0}. (4) This implies that di and d′ i are located on different sides of the decision boundary concerning the query q. Therefore, a successful adversarial attack could move the target document to the wrong side of the decision boundary, leading to weak robustness of NRMs. The above analysis elucidates why a ranking model with high effectiveness might still manifest considerable adversarial vulnerability. This discrepancy arises from the distinction between optimizing based on natural ranking error and acquiring a robust decision boundary for NRMs. Based on experimental findings due to Wu et al. (2023), we can tell: (i) decision boundaries learned based on natural ranking errors enable NRMs to achieve high effectiveness on clean documents, and (ii) such boundaries are susceptible to being breached by adversarial examples, resulting in vulnerabilities to easy attacks. Existing attack methods take advantage of this boundary vulnerability to deceive the NRM. As such, training robust NRMs requires a defining boundary ranking error to tackle this vulnerability effectively. Definition 3 (Boundary ranking error) We introduce the boundary ranking error to assess the existence of adversarial examples near the ranking decision boundary of f, i.e., Rbdy(f) := Edi∼DI {di ∈B(DB(f), ϵ) , πf(q, di) = πy(q, di)} . (5) Optimizing the boundary ranking error poses a challenge, mainly due to the large volume of unlabeled documents in the datasets and the unavailability of ground-truth rankings. To address this obstacle, we present a solution in the form of an upper bound on the boundary ranking error. Theorem 1 (Upper bound of boundary ranking error) According to Eq. 
4 and 5, for a ranking model f : q × D → R and ranking mechanism r : R × R →{±1, 0}, we have: Rbdy(f)≤Edi∼D max d′ i∈B(di,ϵ)I  πf(q, di) ̸= πf(q, d′ i) .(6) Theorem 1 states the boundary ranking error can be upperbounded by the expectation that any adversarial example maintaining its original ranking positions. This emphasizes the perturbation invariance of a robust ranking model, that is, any perturbation to the inputted candidate documents does not change the output ranking. Consequently, restraining the boundary ranking error is attainable by maximizing the outputted perturbation invariance of ranking models. Nonetheless, if an upper bound is too loose, it may lead to the inadequacy of effectively optimizing the error. Hence, we further prove the upper bound in Theorem 1 is tight enough. The tightness ensures the reduction of the boundary ranking error through the optimization of perturbation invariance. The proof of Theorem 1 and its tightness are provided at https://github.com/ict-bigdatalab/PIAT. Trade-Off Between Two Ranking Errors Based on the definitions of natural error and boundary error for a ranking model, we present the robust ranking error for adversarial examples. Definition 4 (Robust ranking error) To train a robust ranking model, the robust ranking error Rrob(f) under the WSRA scenario, can be decomposed as follows, Rrob(f) = Rnat(f) + Rbdy(f), (7) where Rnat(f) corresponds to naturally wrongly ranked documents; and Rbdy(f) corresponds to correctly ranked samples but close to the ϵ-extension of the ranking decision boundary. Consequently, these samples are susceptible to successful boundary-crossing attacks (i.e., ranked higher or lower) by introducing human-imperceptible perturbations. Algorithmic Design: PerturbationInvariant Adversarial Training Inspired by our theoretical analysis, we present a new defense method for NRMs, named perturbation-invariant adversarial training (PIAT), to strike a balance between effectiveness and adversarial robustness. Motivation Theorem 1 and Definition 4 emphasize the importance of simultaneously optimizing the natural ranking error and the boundary ranking error, when training a robust ranking model while preserving effectiveness. We introduce a refinement to adversarial training, called PIAT, tailored specifically for ranking problems. This involves the incorporation of a regularized surrogate loss aimed at optimizing the robust ranking error, comprising two essential terms, i.e., L = λLnat + (1 −λ)Ladv, (8) where the first term, i.e., the natural ranking loss Lnat, encourages the natural ranking error to be optimized, by minimizing the “difference” between the predicted and groundtruth ranked list. We achieve this by leveraging a traditional pair-wise loss, which is supervised using the labeled querydocument pairs. The regularization term, i.e., the adversarial ranking loss Ladv, encourages the boundary ranking error to be optimized. We propose a perturbation-invariant ranking loss to minimize the “difference” between the prediction of The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8834 a clean document set and that of an attacked document set, which is semi-supervised by the NRM’s outputs. The λ is a trade-off parameter that controls the balance between effectiveness and robustness during training. Natural Ranking Loss The standard training of NRMs primarily emphasizes the model’s effectiveness on the labeled dataset (Dai et al. 2018a; Dai and Callan 2019). 
In line with existing research, we adopt a pairwise loss as the natural ranking loss, i.e., Lnat = −1 |Nq| |Nq| X i=1 log ef(qi,d+) ef(qi,d+) + P¯n j=1 ef(qi,d− j ) , (9) where Nq is the number of training queries, d+ is the relevant document and d−is the irrelevant document. We use the negative examples returned by the retrieval stage as hard negative examples and also incorporate random negative examples for the same purpose (Nogueira and Cho 2019). Adversarial Ranking Loss To enhance adversarial robustness, we first use the WSRA attack to generate adversarial examples. Subsequently, we utilize augmented adversarial examples to optimize the proposed perturbation-invariant ranking loss. Adversarial examples. To execute a WSRA attack in a decision-based black-box setting, Wu et al. (2023) introduce a pseudo-relevance based adversarial ranking attack method to generate adversarial examples. Following this work, for each query q, given a candidate document set D, we conduct the attack against a portion of the documents evenly to derive the adversarial examples Dadv. Each adversarial example dadv in Dadv is selected from the neighborhood of the original document d, based on the most threatening attack effect, i.e., dadv = arg max d′∈B(d,ϵ) f(q, d′) −f(q, d)  . (10) Thus, we obtain adversarial examples Dadv for each query q, which will be used in the following loss. Perturbation-invariant ranking loss. As the regularization term in Eq. 8, the adversarial ranking loss encourages the model’s output to be smooth, effectively constraining the sample instances within adjacent ranking decision boundaries of the model. This is achieved by minimizing the ranking order variance between the prediction of natural documents D and that of adversarial examples Dadv. We design the perturbation-invariant ranking loss between D and Dadv as the adversarial ranking loss, i.e., Ladv = −1 |Nq| |Nq| X i=1 ψ (f (qi, D) , f (qi, Dadv)) , (11) where f(q, D) is the predicted ranked list by a ranking model f over D; ψ(·) is a differential metric to evaluate the difference in the resulting document rankings between D and Dadv. Here, Dadv comprises Nadv perturbed documents and Nd −Nadv benign documents. We consider three ways to compute the difference ψ(·) in the ranked results obtained using D and Dadv. (1) KL divergence. To promote smoothness between D and Dadv during optimization, our objective is to minimize the KL divergence between the similarity distributions of the ranking model f. As a result, the computation of Ladv in Eq. 11 using the KL divergence, is as follows: LKL adv = KL(P(S | Q, D; f) ∥P(S | Q, Dadv; f)) = 1 Nq Nq X i=1 Nd X j=1 P(si | qi, dj ∼D; f) · log P(si | qi, dj ∼D; f) P(si | qi, d′ j ∼Dadv; f), (12) where P(si | qi, dj ∼D; f) = exp(f (qi, dj)) P dk∼D exp(f (qi, dk)), P(si | qi, d′ j ∼Dadv; f) = exp f qi, d′ j  P d′ k∼Dadv exp f qi, d′ k . Let us consider a scenario where only one document ranked at the bottom within D is perturbed and moves to the top1 position, while the other documents are shifted down one position each. In this case, the distribution of the entire permutation would not undergo significant disordering. However, even though the overall re-ordering might be limited, the situation could have implications for practical search engines. Therefore, using KL divergence as a metric may not impose a sufficiently severe penalty for this attack result. Next, we present alternatives to tackle this issue. 
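Before turning to those alternatives, the KL variant of Eq. 12 translates directly into code. The per-query sketch below (PyTorch; helper names are ours) omits the averaging over the N_q training queries and the attack inner loop of Eq. 10.

import torch
import torch.nn.functional as F

def kl_adv_loss(scores_clean, scores_adv):
    # Eq. 12 for a single query: KL between the softmax similarity
    # distributions of f over the clean list D and the attacked list D_adv.
    p_clean = F.softmax(scores_clean, dim=0)
    log_p_adv = F.log_softmax(scores_adv, dim=0)
    # F.kl_div(input, target) sums target * (log target - input),
    # i.e. KL(P(. | D) || P(. | D_adv)), matching Eq. 12's direction.
    return F.kl_div(log_p_adv, p_clean, reduction="sum")

scores = torch.randn(100)
print(kl_adv_loss(scores, scores))          # identical rankings -> 0
print(kl_adv_loss(scores, scores.roll(7)))  # perturbed scores -> > 0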
We introduce a listwise loss to model the output ranking both before and after perturbation. By concentrating on the ranked list, our approach strives to prevent the perturbed document from excessively rising to the top position, thereby preserving a natural and gradual change in rankings. (2) Listwise function – ListNet. ListNet (Cao et al. 2007) devises a listwise loss to assess the dissimilarity between the predicted ranked list and the ground-truth permutation, given by the following expression: LListNet (f; q, D, Y) = KL (P(πf | φ(f(q, D))) ∥P(πY)) , (13) where πf is the permutation predicted by f, πY is the ground-truth permutation, and φ is a transformation function (an increasing and strictly positive function, e.g., linear, exponential or sigmoid). The probability of a permutation given the score list (Cao et al. 2007), is computed as follows, P(πf | φ(f(q, D))) = Nd Y j=1 φ fπ(j)(q, D)  PNd k=j φ fπ(k)(q, D) , (14) where fπ(i)(q, D) denotes the similarity score predicted by f of the document, which is ranked at the i-th position with respect to the query q. We define Ladv based on ListNet as, LListNet adv = KL P(πf(q,D) | f(q, Dadv))∥P(πf(q,D) | f(q, D))  , (15) where πf(q,D) is the permutation computed by the ranking model f on data pair (q, D). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8835 (3) Listwise function – ListMLE. ListMLE (Xia et al. 2008) addresses the computational complexity of ListNet by optimizing the negative log-likelihood of the ground-truth permutation πY, i.e., LListMLE (f; q, D, Y) = −log P(πY | f(q, D)). (16) Inspired by ListMLE, we design our adversarial loss Ladv to compute the negative log-likelihood of benign document permutation, i.e., LListMLE adv = −log P(πf(q,D) | f(q, Dadv)), (17) where πf(q,D) represents the document permutation generated by the ranking model f for the list of documents that have not been attacked. This enables us to effectively align the ranked list after perturbations with the benign ranked list, thereby achieving adversarial robustness. Experiments We present our experimental setup and results in this section. Experimental Setup Dataset and target ranking models. We conduct experiments on the MS MARCO Passage Ranking dataset, which is a large-scale benchmark dataset for Web passage retrieval, with about 8.84 million passages (Nguyen et al. 2016). The relevant documents to user queries are obtained using Bing, thereby simulating real-world web search scenarios. We choose several typical ranking models that achieve promising effectiveness, including traditional probabilistic models, e.g., BM25 (Robertson and Walker 1994), interaction-focused NRMs, e.g., ConvKNRM (Dai et al. 2018b), and pre-trained models, e.g., BERT (Devlin et al. 2019) and PROP (Ma et al. 2021a), for adversarial attack. Evaluation metrics. (i) CleanMRR@k evaluates Mean Reciprocal Rank (MRR) performance on the clean dataset (Ma et al. 2021b; Yan et al. 2021). (ii) RobustMRR@k evaluates the MRR performance on the attacked dataset by WSRA. (iii) Attack success rate (ASR) (%) evaluates the percentage of the after-attack documents that are ranked higher than original documents (Wu et al. 2023). (iv) Location square deviation (LSD) (%) evaluates the consistency between the original and perturbed ranked list for a query, by calculating the average deviation between the document positions in the two lists (Sun, Li, and Zhao 2022). The effectiveness of a ranking model is better with a higher CleanMRR. 
The robustness of a ranking model is better with a higher RobustMRR and a lower ASR and LSD. Baselines. (i) Standard training (ST): We directly optimize the ranking model via the natural ranking loss (Eq. 9) without defense mechanisms. (ii) Data augmentation (DA): We augment each document in the collection with 2 new documents by uniformly replacing synonyms, and then use the normal hinge loss for training following (Wu et al. 2022). The number of replacement words equals the number of words perturbed by the WSRA attack. (iii) Adversarial training (AT): We follow the vanilla AT method (Goodfellow, Shlens, and Szegedy 2015) to directly include the adversarial examples during training. (iv) CertDR is a certified defense method for NRMs (Wu et al. 2022), which achieves certified top-K robustness against WSRA attacks. Implementation details. We implement target ranking models following previous work (Dai et al. 2018b; Devlin et al. 2019; Ma et al. 2021a; Liu et al. 2023b). First-stage retrieval is performed using the Anserini toolkit (Yang, Fang, and Lin 2018) with BM25, to obtain top 100 candidate passages. The ranked list is obtained by using the well-trained ranking model to re-rank the above initial candidate set. We randomly sample 1000 Dev queries as target queries to attack their ranked lists for evaluation. For each sampled query, we randomly sample 1 document from 9 ranges in the ranked list following (Wu et al. 2023), i.e., [11, 20], ..., [91, 100], respectively. We attack these 9 target documents to achieve their corresponding adversarial examples using WSRA. Finally, we evaluate the defense performance of ranking models using the attacked list with 9 adversarial examples and its query as an input. For BM25, we attack it using adversarial examples generated by the attack method in (Wu et al. 2023) designed for attacking BERT. For adversarial training, considering the time overhead, we sample 0.1 million (1/10 of the total) training queries to generate adversarial examples. For each training query, we randomly sample 10 documents from its initial candidate set to construct adversarial examples using WSRA. Note the sampled documents are not ground-truth ones. We set the maximum number of word substitutions to 20, and other hyperparameters are consistent with Wu et al. (2023). The regularization hyperparameter λ is set to 0.5. We train the NRMs with a batch size of 100, maximum sequence length of 256, and learning rate of 1e-5. By training the ranking model with different adversarial ranking losses, i.e., LKL adv, LListNet adv , and LListMLE adv , we obtain three types of PIAT as PIATKL, PIATListNet, and PIATListMLE, respectively. Experimental Results Defense comparison. Table 1 presents a comparison of the trade-off performance among four ranking models with different defenses. Observations on the defense baselines are: (i) Effectiveness and adversarial robustness of PROP is generally better than BERT, which in turn is stronger than ConvKNRM. This indicates that well-designed model architectures and pre-training objectives encourage a ranking model to achieve better trade-off performance. (ii) After being attacked, the ranking performance of the ST method without defense mechanisms, decreases significantly with a high ASR and LSD. Hence, it is imperative not only to focus on the effectiveness of existing NRMs when deploying them in real-world scenarios. (iii) CertDR ensures consistent ranking performance between clean and adversarial data. 
This could be attributed to CertDR’s ability to guarantee the stability of the Top-K of the ranked list by certifying the Top-K robustness. (iv) DA and AT enhance the model’s ranking performance on adversarial data, but this improvement comes at the cost of reduced performance on clean data. The finding is consistent with prior research in natural language processing and machine learning (Zhang et al. 2019; Rade and Moosavi-Dezfooli 2021; Bao, Wang, and Zhao 2021). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8836 Model Method CleanMRR@10 CleanMRR@100 RobustMRR@10 RobustMRR@100 ASR↓ LSD↓ BM25 0.1874 0.1985 0.1624 0.1736 56.4 15.3 ConvKNRM ST 0.2461 0.2592 0.1692 0.1741 95.1 33.2 DA 0.2298 0.2378 0.1786 0.1829 71.2 23.6 CertDR 0.1816 0.1935 0.1592 0.1632 65.3 19.3 AT 0.2316 0.2410 0.1896 0.1953 61.3 17.6 PIATKL 0.2498 0.2603 0.2008∗ 0.2073∗ 51.1∗ 12.6 PIATListNet 0.2513∗ 0.2621∗ 0.2035∗ 0.2009∗ 48.3∗ 11.5∗ PIATListMLE 0.2534∗ 0.2645∗ 0.2018∗ 0.2091∗ 49.2∗ 11.9 BERT ST 0.3831 0.3923 0.3225 0.3286 92.1 32.3 DA 0.3705 0.3810 0.3315 0.3423 63.2 18.9 CertDR 0.3202 0.3311 0.3026 0.3140 56.9 16.3 AT 0.3743 0.3865 0.3451 0.3508 55.1 15.6 PIATKL 0.3860 0.3948 0.3686∗ 0.3761∗ 41.2∗ 9.4 PIATListNet 0.3892 0.3981 0.3728∗ 0.3802∗ 36.1∗ 7.2∗ PIATListMLE 0.3910∗ 0.4002∗ 0.3705∗ 0.3785∗ 38.3∗ 7.9 PROP ST 0.3902 0.4061 0.3352 0.3478 90.3 30.9 DA 0.3783 0.3930 0.3418 0.3538 60.4 16.8 CertDR 0.3351 0.3489 0.3199 0.3220 52.8 13.4 AT 0.3819 0.4002 0.3532 0.3611 51.2 12.8 PIATKL 0.3943 0.4063 0.3749∗ 0.3853∗ 39.4∗ 8.2 PIATListNet 0.3971 0.4121 0.3794∗ 0.3890∗ 35.0∗ 6.2∗ PIATListMLE 0.3992∗ 0.4148∗ 0.3767∗ 0.3864∗ 37.8∗ 7.8 Table 1: Trade-off performance of different ranking models under PIAT and defense baselines; For CertDR, the ASR is evaluated under conditional success rate (Wu et al. 2022); ∗indicates significant improvements over the best baseline (p ≤0.05). When we look at PIAT, we find that: (i) In general, three types of PIAT exhibit superior effectiveness and adversarial robustness than baselines. This suggests that a combination of proposed supervised and semi-supervised training enables the effective utilization of information from extensive unlabeled documents to enhance trade-off performance. (ii) PIAT outperforms the baselines in terms of LSD, indicating increased resistance to perturbations across the entire ranked list. This highlights the efficacy of the perturbation-invariant ranking loss in facilitating NRMs to learn more robust ranking decision boundaries. (iii) PIATKL demonstrates comparatively lower effectiveness in comparison to the other two PIAT types, likely due to the fact that the KL divergence of the relevant scores serves as a soft constraint, rendering a relatively mild supervisory signal. (iv) PIATListMLE achieves a slightly inferior performance compared to PIATListNet. The reason might be that ListMLE compromises precision by converting list-wise differences into an estimate of the probability distribution. Nevertheless, ListMLE exhibits higher training efficiency. Effectiveness vs. robustness trade-off. λ is an important hyperparameter in our proposed method, since it plays a crucial role in determining the balance between effectiveness and robustness. Figure 1 shows a comparison of the effectiveness vs. robustness trade-off between PIAT and empirical defense baselines. CertDR is excluded from the comparison due to its inferior performance compared to empirical defenses in terms of both effectiveness and robustness, as indicated in Table 1. 
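For reference, the two attack-oriented metrics reported in Table 1 can be computed directly from pre- and post-attack rank positions. The sketch follows the verbal definitions given earlier; the exact LSD normalization in (Sun, Li, and Zhao 2022) may differ.

import numpy as np

def attack_success_rate(rank_before, rank_after):
    # ASR: percentage of attacked documents ranked higher than before
    # (a lower rank position means a higher placement in the list).
    before, after = np.asarray(rank_before), np.asarray(rank_after)
    return 100.0 * np.mean(after < before)

def location_square_deviation(rank_before, rank_after):
    # LSD-style consistency: mean squared displacement of documents
    # between the original and perturbed ranked lists.
    d = np.asarray(rank_after) - np.asarray(rank_before)
    return float(np.mean(d ** 2))

print(attack_success_rate([50, 80, 95], [12, 85, 40]))  # 2 of 3 promoted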
We conduct comparisons by examining CleanMRR@10 (for effectiveness) against RobustMRR@10 and ASR (for robustness) respectively, thereby visualizing the effectiveness-robustness trade-off.
[Figure 1: The sensitivity of trade-off parameter λ exhibited by different types of PIAT, compared with empirical defense methods. From left to right, we increase the trade-off parameter λ of PIAT from 0.2 to 0.8 with the step of 0.15. Two panels plot RobustMRR@10 (%) and ASR (%) against CleanMRR@10 (%) for ST, DA, AT, PIATKL, PIATListNet, and PIATListMLE.]
We show the results of the BERT model; similar findings were obtained for other ranking models. Both DA and AT enhance robustness, but at the expense of effectiveness. This suggests they may not adequately consider the balanced relationship between effectiveness and robustness. When we look at the different types of PIAT we find that they achieve a heightened trade-off between effectiveness and robustness. This indicates that proper modeling and optimization of the boundary ranking error can guide NRMs to bolster robustness, while maintaining or even improving effectiveness. Furthermore, we note that with an excessively large λ, the model's robustness considerably decreases, while effectiveness exhibits marginal growth. Conversely, an excessively small λ shows a notable decline in effectiveness, while robustness experiences only a minor improvement. These findings emphasize the necessity of prioritizing the balance of effectiveness and robustness when training NRMs.
Visual analysis. We train the BERT model using AT and PIATListMLE, respectively, on the MS MARCO passage ranking dataset, with inputs being query-document concatenations. The hidden states of [CLS] in BERT's final layer are utilized as query-document pair representations and visualized using t-SNE (Van der Maaten and Hinton 2008) to observe semantic space distributions. As a t-SNE example, we generate plots by sampling a query (QID=262232) and selecting its top 100 candidate documents, including 9 adversarial examples.
[Figure 2 (panels: AT, PIAT): t-SNE plot of query-document representations for AT and PIAT. For clean examples, a darker blue circle represents a higher relevance score. Triangle and cross denote the original document and its corresponding adversarial example. Star denotes the ground-truth query-document pair.]
Results in Figure 2 show that: (i) For AT, the distribution of adversarial examples in the latent space is relatively disordered. By examining data at the same relevance level (color depth), we find that the decision boundaries exhibit a certain degree of chaos. Some adversarial examples have managed to move away from their original positions and closer to the ground-truth. AT lacks clear distinctions in terms of modeling effectiveness and robustness, relying solely on pair-wise loss for simultaneous optimization. (ii) For PIAT, the ranking decision boundary not only distinguishes between data points of varying relevance levels, but also effectively constrains the adversarial examples to stay close to their original examples. This result emphasizes the fact that by using perturbation-invariant loss, tailored through analysis of boundary ranking errors, PIAT achieves remarkable adversarial robustness while maintaining effectiveness compared to the traditional AT. Similar observations were obtained with other ranking models.
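The visualization pipeline just described takes only a few lines with off-the-shelf tooling. In the sketch below, the checkpoint name is a stand-in for the fine-tuned ranker and the candidate passages are placeholders.

import torch
from sklearn.manifold import TSNE
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def cls_embedding(query, doc):
    # [CLS] hidden state of the final layer for one query-document pair.
    inputs = tok(query, doc, truncation=True, max_length=256,
                 return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).last_hidden_state[0, 0]

query = "what is adversarial training"
docs = [f"candidate passage {i}" for i in range(100)]   # top-100 list
emb = torch.stack([cls_embedding(query, d) for d in docs]).numpy()
xy = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(emb)
# Scatter `xy`, coloring points by relevance and marking the adversarial
# examples against their originals, to reproduce a Figure-2-style plot.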
Related Work Neural ranking models. The emergence of deep learning has led to the popularity of NRMs (Onal et al. 2018; Guo et al. 2020), showcasing their superiority over traditional ranking models. There have been efforts to leverage pre-trained models for ranking tasks (Fan et al. 2022), further enhancing the effectiveness of NRMs. Additionally, studies have explored training NRMs using data augmentation techniques, such as hard negative mining (Xiong et al. 2021; Zhan et al. 2021), achieving new state-of-the-art performance. Despite these effectiveness improvements, these studies often overlook the adversarial robustness of NRMs. Defense methods. Adversarial attacks aim to discover human-imperceptible perturbations that can deceive neural networks (Szegedy et al. 2014). In IR, there is growing interest in robustness (Liu et al. 2023a) and adversarial attacks. Wu et al. (2023) introduced the WSRA method of attacking black-box NRMs using word substitution. This study revealed the serious vulnerability of NRMs to synonym substitution perturbations. As a result, subsequent explorations of attack against NRMs have emerged (Liu et al. 2023c, 2022; Chen et al. 2023), inspired by this pioneering work. In response to adversarial attacks, research has proposed various defense strategies to enhance adversarial robustness. These can be generally classified into certified defenses and empirical defenses. Certified defenses aim for theoretical robustness against specific adversarial perturbations (Raghunathan, Steinhardt, and Liang 2018). For instance, Wu et al. (2022) introduced a certified defense method that ensures the top-K robustness of NRMs via randomized smoothing. However, due to their theoretical nature, these methods often face limitations in practical applications and may not fully meet the desired performance requirements. Empirical defenses aim to enhance the empirical robustness of models against known adversarial attacks, and this approach has been extensively explored in image classification (Madry et al. 2018; Wang et al. 2019) and text classification (Ye, Gong, and Liu 2020; Jia et al. 2019). Among these methods, adversarial training emerges as one of the most effective defenses. Adversarial training on adversarial examples remains empirically robust (Cui et al. 2021). However, the use of adversarial training as a defensive mechanism is often limited to simple classification scenarios, and its application in NRMs remains largely unexplored. Therefore, we propose an adversarial training method tailored for NRMs to improve the trade-off between effectiveness and robustness. Conclusion To the best of our knowledge, our study is the first study on the trade-off between effectiveness and adversarial robustness for neural retrieval models. Our theoretical analysis motivated the development of perturbation-invariant adversarial training, incorporating a new regularized surrogate loss. Experimental results have showcased the superior performance of our method in terms of effectiveness and robustness. Broader impact and limitations. We aim for our initial exploration to serve as a benchmark for adversarial robustness and to inspire the IR community to further enhance the effectiveness-robustness trade-off. As to the limitations of our work, we currently only consider the popular attack of WSRA, and constructing adversarial training examples could be time-consuming. 
In future work, we will investigate the design of adversarial training methods to defend against other or unseen attacks, and create training examples with reduced time overhead. Besides, we will consider more benchmark datasets to simulate different retrieval scenarios. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8838 Acknowledgements This work was funded by the National Natural Science Foundation of China (NSFC) under Grants No. 62006218 and 61902381, the Youth Innovation Promotion Association CAS under Grants No. 2021100, the project under Grants No. 2023YFA1011602, JCKY2022130C039 and 2021QY1701, the CAS Project for Young Scientists in Basic Research under Grant No. YSBR-034, the Innovation Project of ICT CAS under Grants No. E261090, and the Lenovo-CAS Joint Lab Youth Scientist Project. This work was also (partially) funded by the Hybrid Intelligence Center, a 10-year program funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organisation for Scientific Research, https://hybridintelligence-centre.nl, and project LESSEN with project number NWA.1389.20.183 of the research program NWA ORC 2020/21, which is (partly) financed by the Dutch Research Council (NWO). All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors. References Bao, R.; Wang, J.; and Zhao, H. 2021. Defending Pretrained Language Models from Adversarial Word Substitution Without Performance Sacrifice. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 3248–3258. Cao, Z.; Qin, T.; Liu, T.-Y.; Tsai, M.-F.; and Li, H. 2007. Learning to Rank: From Pairwise Approach to Listwise Approach. In Proceedings of the 24th International Conference on Machine learning, 129–136. Chen, X.; He, B.; Ye, Z.; Sun, L.; and Sun, Y. 2023. Towards Imperceptible Document Manipulations against Neural Ranking Models. In Findings of the Association for Computational Linguistics: ACL 2023, 6648–6664. Cui, J.; Liu, S.; Wang, L.; and Jia, J. 2021. Learnable Boundary Guided Adversarial Training. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 15721–15730. Dai, Z.; and Callan, J. 2019. Deeper Text Understanding for IR with Contextual Neural Language Modeling. In Proceedings of the 42nd international ACM SIGIR Conference on Research and Development in Information Retrieval, 985– 988. Dai, Z.; Xiong, C.; Callan, J.; and Liu, Z. 2018a. Convolutional Neural Networks for Soft-matching N-grams in Adhoc Search. In Proceedings of the eleventh ACM international Conference on Web Search and Data Mining, 126– 134. Dai, Z.; Xiong, C.; Callan, J.; and Liu, Z. 2018b. Convolutional Neural Networks for Soft-Matching N-Grams in Adhoc Search. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North. Fan, Y.; Xie, X.; Cai, Y.; Chen, J.; Ma, X.; Li, X.; Zhang, R.; and Guo, J. 2022. Pre-training Methods in Information Retrieval. Foundations and Trends in Information Retrieval, 16(3): 178–317. Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2015. Explaining and Harnessing Adversarial Examples. In International Conference on Learning Representations. Guo, J.; Fan, Y.; Pang, L.; Yang, L.; Ai, Q.; Zamani, H.; Wu, C.; Croft, W. B.; and Cheng, X. 2020. 
2024
982
18,831
Full Bayesian Significance Testing for Neural Networks

Zehua Liu1, Zimeng Li1, Jingyuan Wang1,2,3*, Yue He4
1School of Computer Science and Engineering, Beihang University, Beijing, China
2School of Economics and Management, Beihang University, Beijing, China
3Key Laboratory of Data Intelligence and Management (Beihang University), Ministry of Industry and Information Technology, Beijing, China
4Department of Computer Science and Technology, Tsinghua University, Beijing, China
*Corresponding author ([email protected])
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
[1] https://online.stat.psu.edu/stat462/node/249/

Abstract

Significance testing aims to determine whether a proposition about the population distribution is true or not given observations. However, traditional significance testing often needs to derive the distribution of the testing statistic, and so fails to deal with complex nonlinear relationships. In this paper, we propose to conduct Full Bayesian Significance Testing for neural networks, called nFBST, to overcome the limitation of traditional approaches in relationship characterization. A Bayesian neural network is utilized to fit the nonlinear and multi-dimensional relationships with small errors and to avoid hard theoretical derivation by computing the evidence value. Besides, nFBST can test not only global significance but also local and instance-wise significance, which previous testing methods do not focus on. Moreover, nFBST is a general framework that can be extended based on the measures selected, such as Grad-nFBST, LRP-nFBST, DeepLIFT-nFBST, and LIME-nFBST. A range of experiments on both simulated and real data are conducted to show the advantages of our method.

Introduction

Significance testing aims to determine whether a proposition about the population distribution[1] is true or false given observations, and is widely used in many scientific fields, such as the social sciences (Orlitzky 2012; Ortega and Navarrete 2017) and medical research (Matthews et al. 1990; Rutledge and Loh 2004). For example, it is often used to evaluate the efficacy of new treatments or drugs. First, clinical trials are performed to compare the response of patients treated with a new therapy against a control group. Then, significance testing is used as an analytical tool to determine whether the observed improvement in the treatment group is significant, which provides evidence that the new therapy is effective.

To attain a proper testing result, the gold standard is to recover the true data generation model f0 behind the population distribution, then justify the proposition according to f0 directly. For this purpose, a number of significance testing approaches have been proposed under different assumptions about f0 (Gozalo 1993; Lavergne and Vuong 1996; Racine 1997). However, simple assumptions can hardly fit the real situation, while complex assumptions make it hard to derive the theoretical distribution of the testing statistic. Recently, (Horel and Giesecke 2020) provided a provable solution for significance testing in nonlinear cases, but it suffers from computational difficulties and only addresses a limited function space. There is still a gap to be closed for significance testing under general correlations. Moreover, existing significance testing methods only focus on global propositions. However, some propositions that are invalid globally can still hold for, and contribute to, a certain sub-population.
For example, clinical trials may show that a drug is effective in treating cancer overall, yet ineffective for some individuals. For a better justification in nonlinear cases, a significance testing approach should verify the correctness of a proposition on the population distribution and on sub-population distributions respectively. To deal with the complicated real data found in wide applications (He et al. 2020; Wang et al. 2018, 2020a, 2023), we introduce deep neural networks into significance testing to capture nonlinear correlations. To overcome the barrier of computing statistics under complex fitting functions, we solve the significance testing problem from the Bayesian perspective (Kass and Raftery 1995), and propose a novel approach that conducts Full Bayesian Significance Testing for neural networks, abbreviated as nFBST (neural FBST). Given the testing statistics, nFBST can test the correctness of a proposition for both population-level and sub-population-level problems by comparing the posterior probabilities of the proposition and its opposite. In addition, nFBST is a general framework that can be extended based on different testing statistics, such as Grad-nFBST, LRP-nFBST, DeepLIFT-nFBST, LIME-nFBST, and so on. A range of experiments on both simulated and real data are conducted to show the advantages of our method.

The main contributions can be summarized as follows:
• We are the first to introduce deep neural networks into significance testing. Our approach replaces complicated theoretical derivation by fitting distributions in a Bayesian way, and the neural network serves as a good estimator of f0 without assuming specific forms.
• We design a complete procedure using Full Bayesian Significance Testing for Neural Networks, namely nFBST. It is a general framework that can be extended based on different implementations and different testing statistics, such as Grad-nFBST, LRP-nFBST, DeepLIFT-nFBST, LIME-nFBST, and so on.
• Our proposed nFBST can solve both local and global significance testing problems, while previous methods only focus on the latter. Under non-linear assumptions, global significance may be inconsistent with local or instance-wise significance.
• We conduct extensive experiments to verify the advantage of our method in producing better testing results.

Theoretical Method

Classical Frequentist Significance Test

We denote $f_0 : \mathcal{X} \subset \mathbb{R}^d \to \mathbb{R}$ as the underlying and unknown conditional mean function, namely $E[Y|X=x]$, of the population $(X, Y)$. Then, we consider the data generation process of $(X, Y)$ as follows:

$y = f_0(X) + \epsilon$, (1)

where $\epsilon$ is a random error such that $E[\epsilon|X] = E[\epsilon] = 0$. Significance testing first defines a testing statistic $\eta$, then proposes two contradictory propositions (or hypotheses) $H_0$ and $H_1$, which represent the null hypothesis and the alternative hypothesis respectively. Classical significance testing is regarded as a procedure for measuring the consistency of data with the null hypothesis by the calculation of a p-value (tail area under the null hypothesis) (De Bragança Pereira and Stern 1999). The process is as follows:
• First, we make assumptions about the population distribution of $f_0$ and denote it as $f_0(\beta)$, whose parameters are $\beta$. Then we derive the theoretical distribution of $\eta(\beta)$ under the assumptions.
• Second, based on the observed data $D$, we fit an optimal estimator $\hat{f}(\hat{\beta})$ as an approximation function of $f_0$, whose parameters are $\hat{\beta}$.
• Third, we calculate $\eta(\hat{\beta})$ and then a p-value to determine whether the distribution of $\eta(\beta)$ is reasonable under the null hypothesis, using the sample information $\eta(\hat{\beta})$.

The problem of testing the significance of a feature is:

$H_0 : \eta(\beta) = 0 \quad H_1 : \eta(\beta) \neq 0$, (2)

where $\eta(\beta)$ is a measure of feature importance. For example, we may assume $f_0$ satisfies a linear relationship:

$y = \beta_0 + \beta_1 x_1 + \cdots + \beta_d x_d + \epsilon$. (3)

Whether the coefficient of a feature $x_j$ is equal to zero determines its significance, that is, $\eta(\beta) = \beta_j$ (a concrete sketch follows the list of defects below). However, there are two main defects in classical significance testing.
• First, the effectiveness of classical significance testing rests on reasonable assumptions about $f_0$. However, it is difficult to find such precise assumptions when the data distribution is actually complicated.
• Second, some models, such as deep learning, excel at accurately fitting complex data distributions. However, the more complex the assumption about $f_0$, the harder it is to derive the theoretical distribution of $\eta(\beta)$; it may even be intractable.
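To make the classical procedure of Eqs. (2)-(3) concrete, here is a minimal sketch (not from the paper; the simulated data and variable names are illustrative) that fits an ordinary least squares model with statsmodels and reads off the per-coefficient t-statistics and p-values:

```python
# Classical t-test for linear models (Eq. (3)): fit OLS, then test
# H0: beta_j = 0 via the coefficient t-statistics and p-values.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
X = rng.uniform(-1.0, 1.0, size=(n, 3))
beta = np.array([2.0, -1.0, 0.0])           # the third feature is truly insignificant
y = X @ beta + rng.normal(0.0, 1.0, size=n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.tvalues)   # beta_hat_j / se(beta_hat_j) for each coefficient
print(model.pvalues)   # a large p-value is expected for the third feature
```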
Full Bayesian Significance Test

To solve these problems, we adopt Full Bayesian Significance Testing (FBST) (De Bragança Pereira and Stern 1999; de B. Pereira, Stern, and Wechsler 2008). FBST is a statistical methodology that allows for the testing of precise hypotheses in a Bayesian framework. Here, "full" means that one only needs the posterior distribution to test, without specific assumptions on $f_0$. In contrast to classical significance testing, which uses a p-value to reject or fail to reject the null hypothesis, FBST provides a measure of evidence in favor of or against the null hypothesis, taking into account prior information and the strength of observations. Let $P(H)$ be the prior of the hypothesis $H$ and $P(D|H)$ be the likelihood function of $H$ given observations $D$. The posterior probability distributions for the null and alternative hypotheses are then calculated using Bayes' theorem as

$P(H|D) = \frac{P(D|H) \times P(H)}{P(D)} \propto P(D|H) \times P(H)$. (4)

This is consistent with the process by which people adjust their assessments in response to observed data. The evidence in favor of the null hypothesis is quantified by the Bayes Factor. Its value reflects which proposition is more likely under the observed data. If it is greater than 1, we believe it provides evidence in favor of $H_0$; the evidence is moderate if it is greater than 3 and strong if it is greater than 10 (Jeffreys 1998). On the contrary, it provides evidence against $H_0$ if it is smaller than 1, moderate evidence if less than 1/3, and strong evidence if less than 0.1. From the above analysis, we can conclude that FBST does not need to assume a specific distributional form for $f_0$, but calculates $P(D|H_0)$ and $P(D|H_1)$ instead. In other words, the current goal is to obtain a good estimator to fit $P(D|H)$.

Approximate the Distribution of Testing Statistics

According to the universal approximation theorem, neural networks of appropriate size can approximate an extensive class of functions to a desired degree of accuracy (Hornik, Stinchcombe, and White 1989). In this paper, we propose to use Bayesian neural networks to fit the likelihood $P(D|H)$. As a technique that combines Bayesian theory and neural networks, Bayesian neural networks can fit complex relationships and produce a probability distribution over model parameters $\theta$ that expresses our beliefs regarding how likely the different parameter values are.
Given a dataset $D = \{(X^{(1)}, y^{(1)}), \ldots, (X^{(n)}, y^{(n)})\}$, nFBST first uses a Bayesian neural network, whose parameters are $\theta$, to fit $D$. Before training, a prior distribution $\pi(\theta)$ is assigned to the model parameters $\theta$ as an initial belief according to experience. This belief is gradually adjusted to fit the data $D$ using the Bayesian rule. The final belief is presented as the posterior distribution

$P(\theta|D) = \frac{P(D|\theta)\,\pi(\theta)}{P(D)} = \frac{\pi(\theta) \prod_{i=1}^{n} P(y^{(i)}|X^{(i)}, \theta)}{\int_{\Theta} \pi(\theta) \prod_{i=1}^{n} P(y^{(i)}|X^{(i)}, \theta)\, d\theta}$, (5)

where $\Theta$ is the parameter space. Given a new case $X$, the prediction made by the Bayesian neural network is the weighted average of an ensemble:

$P(y|X, D) = \int_{\Theta} P(y|X, \theta)\, P(\theta|D)\, d\theta$. (6)

Then, based on the posterior distribution of $\theta$, we obtain the posterior distribution of the testing statistic $\eta(\theta)$ and denote $p(\eta(\theta)|D)$ as its probability density. The testing problem is formulated as:

$H_0 : \eta(\theta) = 0 \quad H_1 : \eta(\theta) \neq 0$ (7)

We denote the whole space of $\eta(\theta)$ as $\Psi$ such that $\eta(\theta) \in \Psi$. Then, we define the region whose density is greater than $p(\eta(\theta) = 0|D)$ according to the following formula:

$\Psi_0 = \{\eta(\theta) : p(\eta(\theta)|D) > p(\eta(\theta) = 0|D)\}$, (8)

where $p(\eta(\theta) = 0|D)$ is the maximum of the posterior density under the null hypothesis $H_0$ (Figure 1).

[Figure 1: Bayesian evidence calculated based on the distribution of η(θ).]

The Bayes Factor is not the only way to calculate evidence, and its result is influenced by the prior distribution. In our method, we adopt the more flexible Bayesian evidence for the null hypothesis provided by (De Bragança Pereira and Stern 1999):

$Ev(H_0) = 1 - \int_{\Psi_0} p(\eta(\theta)|D)\, d\eta(\theta) = 1 - \int_{\Psi} \mathbb{1}[\eta(\theta) \in \Psi_0]\, p(\eta(\theta)|D)\, d\eta(\theta)$. (9)

Using the Monte Carlo method, the above formula can be further simplified to

$Ev(H_0) \approx 1 - \frac{1}{m} \sum_{i=1}^{m} \mathbb{1}[\eta_i(\theta) \in \Psi_0] = 1 - \frac{1}{m} \sum_{i=1}^{m} \mathbb{1}[p(\eta_i(\theta)|D) > p(0|D)]$, (10)

where $\eta_i(\theta)$ is sampled $m$ times from the posterior probability density of $\eta(\theta)$. The result of Eq. (10) is called the Bayesian evidence, whose value lies between 0 and 1. The closer the Bayesian evidence is to 1, the more likely we are to accept $H_0$; the closer it is to 0, the more likely we are to reject $H_0$. Moreover, we have mathematically proven that, under certain constraints, as the sample size approaches infinity, $Ev(H_0)$ for insignificant features converges to 1. The detailed proof is provided in the Appendix.
Implementation Approach

Calculate the Distribution of Testing Statistics

So far, we have clarified the entire process of FBST, but two implementation details remain to be elaborated. First, to perform nFBST on the testing problem of Eq. (7), we need to calculate the posterior distribution of $\theta$. Second, after obtaining the distribution of $\theta$, we need to calculate the distribution of the testing statistic $\eta(\theta)$.

In practice, it is intractable to solve the integral in Eq. (5). A popular approach, known as Variational Inference (VI), entails approximating the real but intractable posterior distribution with a tractable distribution called the variational distribution (Blei, Kucukelbir, and McAuliffe 2017; Jaakkola and Jordan 2000). Therefore, Eq. (5) can be efficiently approximated. Formally, the variational family $Q = \{q_\vartheta : \vartheta \in \Gamma\}$ is a predefined family of tractable distributions on the model parameter space $\Theta$, where $\vartheta$ is the parameter of the variational distribution and $\Gamma$ is the range of $\vartheta$. The optimal variational distribution $q_{\vartheta^*}$ is chosen from $Q$ such that

$\vartheta^* = \arg\min_{\vartheta \in \Gamma} KL(q_\vartheta(\theta) \,\|\, P(\theta|D))$. (11)

The KL divergence describes the "distance" between two distributions. We set diagonal Gaussian distributions as the prior and variational families of the parameter $\theta$. This assumption is common in many works (Blundell et al. 2015; Kendall and Gal 2017). Under this assumption, applying the Bayesian rule of Eq. (5), Eq. (11) can be further simplified as

$\vartheta^* = \arg\min_{\vartheta \in \Gamma} -\mathbb{E}_{q_\vartheta(\theta)}[\log P(D|\theta)] + KL(q_\vartheta(\theta) \,\|\, \pi(\theta)) + \log P(D)$. (12)

The derivation is shown in the Appendix. The first term is related to the data (such as MSE for a regression task); the second term depends only on the parameters $\theta$, like a regularization term; and the third term is a constant. In the end, we finish approximating the posterior distribution of parameters $P(\theta|D)$ with the variational distribution $q_{\vartheta^*}(\theta)$.

We adopt Kernel Density Estimation (KDE) (Scott 1979; Parzen 1962), a common non-parametric method, to estimate the posterior probability density of $\eta(\theta)$. The process is as follows:
• First, draw samples of the parameters $\theta$ with size $m$ from the approximate posterior distribution $q_{\vartheta^*}(\theta)$ randomly, that is, $\{\theta_1, \ldots, \theta_m\}$.
• Second, calculate $\{\eta(\theta_1), \ldots, \eta(\theta_m)\}$ to obtain samples from the posterior of $\eta(\theta)$.
• Third, estimate $p(\eta(\theta)|D)$ using KDE:

$p(\eta(\theta)|D) = \frac{1}{mh} \sum_{i=1}^{m} K\!\left(\frac{\eta(\theta) - \eta(\theta_i)}{h}\right)$, (13)

where $K$ is the kernel function and $h$ is the window width (also known as the bandwidth). The most commonly used kernel is the Gaussian kernel. Finally, by calculating the Bayesian evidence in Eq. (10), we finish the entire process of nFBST.

nFBST is a general and flexible framework that can be easily extended based on different implementations, including VI and KDE. In the case of VI, we have derived the detailed training procedure in the Appendix. The approximation between the variational and posterior distributions can be gauged through the prediction error. As for KDE, its robust theoretical underpinning guarantees convergence and consistency, as evidenced by the work of (Parzen 1962). Consequently, the margin of error in our approach remains within a reasonable range.
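The following is a minimal sketch of this pipeline under stated assumptions: `eta_samples` stands for the values $\{\eta(\theta_1), \ldots, \eta(\theta_m)\}$ already computed from draws of the variational posterior, SciPy's Gaussian KDE plays the role of Eq. (13), and the returned value is the Monte Carlo estimate of $Ev(H_0)$ in Eq. (10).

```python
# Sampling + KDE + Bayesian evidence (Eqs. (10) and (13)), a minimal sketch.
import numpy as np
from scipy.stats import gaussian_kde

def bayesian_evidence(eta_samples: np.ndarray) -> float:
    kde = gaussian_kde(eta_samples)        # Gaussian-kernel density, Eq. (13)
    p_null = kde(0.0)[0]                   # p(eta(theta) = 0 | D)
    p_samples = kde(eta_samples)           # p(eta_i(theta) | D)
    # Ev(H0) = 1 - (1/m) * sum_i 1[ p(eta_i | D) > p(0 | D) ]   (Eq. (10))
    return 1.0 - float(np.mean(p_samples > p_null))

# Sanity check on synthetic eta samples: a posterior far from zero yields
# evidence near 0 (reject H0); a posterior centered at zero yields ~1.
rng = np.random.default_rng(0)
print(bayesian_evidence(rng.normal(3.0, 0.5, 2000)))   # ~0.0
print(bayesian_evidence(rng.normal(0.0, 0.5, 2000)))   # ~1.0
```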
Design of Testing Statistics

To test the significance of a feature $x_j$, we first need a reasonable measure as the testing statistic to represent the relationship between $x_j$ and $y$. nFBST is a flexible framework that can be applied to global, local, and instance-wise significance testing problems. The weighted average of the squared partial derivative, with weights defined by a positive measure $\mu$, is adopted as the testing statistic in (Horel and Giesecke 2020):

$\eta(\theta) = \int_{\mathcal{X}} \left(\frac{\partial f_0(x)}{\partial x_j}\right)^2 d\mu(x)$. (14)

This reflects the global significance over the whole data, whose value does not change when the data distribution is fixed. In non-linear contexts, however, the significance of a feature on a sub-population distribution is dynamic and varies with its range. Consider a simple case $f_0(X) = \mathrm{ReLU}(x_0)$ where $x_0 \sim N(0, 1)$. It can be calculated that $\eta(\theta) = 1/2$, a constant that does not depend on the specific values of $x_0$. If we define $\mathcal{X}_i \subseteq \mathcal{X}$, the local significance testing statistic is dynamic as $\mathcal{X}_i$ varies:

$\eta(\theta, \mathcal{X}_i) = \int_{\mathcal{X}_i} \left(\frac{\partial f_0(X)}{\partial x_j}\right)^2 d\mu(X)$. (15)

In this example, if we define $\mathcal{X}_1 = \{X : x_0 < 0\}$ and $\mathcal{X}_2 = \{X : x_0 \geq 0\}$, we obtain $\eta(\theta, \mathcal{X}_1) = 0$ and $\eta(\theta, \mathcal{X}_2) = 1/2$. It is clear that, under the partial-derivative setting, $x_0$ is insignificant when its value is less than zero but significant when its value is greater than zero. Further, $\mathcal{X}_i$ can contain a single data point, giving the instance-wise significance testing statistic

$\eta(\theta, x_j) = \frac{\partial f_0(X)}{\partial x_j}$. (16)

nFBST is a general framework and supports significance testing based on various feature importance measures. In our implementation, we select LRP (Binder et al. 2016), LIME (Ribeiro, Singh, and Guestrin 2016), and DeepLIFT (Shrikumar, Greenside, and Kundaje 2017) as testing statistics, and the corresponding methods are called LRP-nFBST, LIME-nFBST, and DeepLIFT-nFBST respectively. Eq. (16) uses the gradient as the testing statistic, and we name that variant Grad-nFBST.

For global significance, Eq. (14) struggles to capture enough information when the distribution of the testing statistic is complex. Therefore, we propose a Quantile-based Global Significance, namely Q-GS. First, we sort all Bayesian evidence values of instance-wise significance in descending order. Then, we set a threshold λ and select the quantile of the sorted evidence such that the percentage of evidence above it reaches the threshold λ.

Experiments

Toy Example

We consider the following data generation process:

$y = 8 + x_0^2 + x_1 x_2 + \cos(x_3) + \exp(x_4 x_5) + 0.1\, x_6 + 0 \cdot x_7 + \epsilon$, (17)

where $X = [x_0, x_1, \ldots, x_7] \sim U(-1, 1)^8$ and $\epsilon \sim N(0, 1)$. The variable $x_7$ has no influence on $y$. Our goal is to differentiate $x_7$ from the other features, that is, to determine $x_7$ as insignificant but the others as significant.
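A minimal sketch that reproduces the data generation process of Eq. (17) (the seed and sample size are illustrative):

```python
# Toy data generator for Eq. (17); x_7 enters with a zero coefficient
# and is therefore the only insignificant feature.
import numpy as np

def generate_toy_data(n: int, seed: int = 0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n, 8))        # X ~ U(-1, 1)^8
    eps = rng.normal(0.0, 1.0, size=n)             # eps ~ N(0, 1)
    y = (8.0 + X[:, 0] ** 2 + X[:, 1] * X[:, 2]
         + np.cos(X[:, 3]) + np.exp(X[:, 4] * X[:, 5])
         + 0.1 * X[:, 6] + 0.0 * X[:, 7] + eps)
    return X, y

X, y = generate_toy_data(10_000)
```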
We compare the following three classical testing methods as baselines:
• Bootstrap (Efron 1979). It obtains the distribution of the testing statistic from samples repeatedly drawn from the original data and simulates the mean and variance of the population to perform a Z-test.
• Likelihood ratio test (Fisher 1922). By training an unconstrained model incorporating all variables and a nested model with restricted variables, and comparing their likelihoods, we obtain the standard asymptotic chi-square distribution if the unconstrained model is assumed correctly.
• t-test for linear models (Student 1908). It divides the estimated coefficient by its standard error and tests whether the result follows the t-distribution.

Feature | p-value: Bootstrap | p-value: likelihood ratio test | p-value: t-test | Q-GS (λ = 0.5): Grad-nFBST | Q-GS: DeepLIFT-nFBST | Q-GS: LRP-nFBST
x0 | <0.001 | 0.696 | 0.442 | <0.001 | <0.001 | <0.001
x1 | <0.001 | 0.063 | 0.592 | 0.021 | <0.001 | 0.02
x2 | <0.001 | 0.087 | 0.598 | 0.029 | <0.001 | 0.027
x3 | 0.027 | 0.813 | 0.838 | 0.011 | 0.043 | 0.01
x4 | <0.001 | 0.209 | 0.604 | 0.005 | 0.007 | 0.003
x5 | <0.001 | 0.361 | 0.559 | 0.006 | 0.01 | 0.007
x6 | <0.001 | 0.318 | <0.001 | 0.278 | 0.203 | 0.273
x7 | 0.383 | 0.049 | 0.922 | 0.637 | 0.637 | 0.617

Table 1: Global significance testing results of different algorithms on the toy example; the maximum values per column are the x7 row entries for the working methods.

From Table 1 we have the following observations:
• First, among the three classical testing methods, only Bootstrap accurately identifies the global significance of all features. Usually, we set a significance level α and compare the p-value with it; if the p-value is smaller than the significance level, we reject the null hypothesis and accept the alternative hypothesis. That is, the smaller the p-value, the more confidently we can determine that a feature is significant. If we set α = 0.05, the p-values of Bootstrap satisfy that only that of x7 is greater than α while the others are not. However, the likelihood ratio test and the t-test can hardly distinguish correctly. This is probably because their assumptions about f0 are too strong and lead to errors.
• Second, all nFBST methods based on different testing statistics perform well. Here, we set λ = 0.5 to obtain Q-GS. It means that only when more than half of the instance-wise testing results reflect insignificance do we determine the feature as globally insignificant. The smaller the Q-GS, the less the evidence for H0, and the more we tend to reject that the feature is insignificant. The results show that all nFBST methods provide strong evidence for H0 on x7 but little evidence for the other features.
• Third, instance-wise significance can provide more insights than global significance. For the likelihood ratio test and t-test methods, the p-value for x3 is high, even the highest. We plot scatters of the evidence for x3 obtained by nFBST and plot histograms under different x3 intervals. As shown in Figure 2, the evidence of Grad-nFBST is more concentrated around one when the value of x3 is close to zero. This is consistent with Eq. (17), as ∂f0(X)/∂x3 = −sin(x3). We conclude that global significance is more coarse-grained than instance-wise significance, due to averaging different situations together.

[Figure 2: Bayesian evidence of x3 obtained by Grad-nFBST under different intervals on the toy example. Panels: (a) scatter of evidence; (b)-(e) histograms for 0 ≤ |x3| < 0.25, 0.25 ≤ |x3| < 0.5, 0.5 ≤ |x3| < 0.75, and 0.75 ≤ |x3| < 1.]

Simulation Experiments

In this section, we conduct experiments on three simulation datasets for analysis. Compared to three classical testing methods, we find nFBST can perfectly distinguish global significance while the others cannot. Compared to five feature importance methods, we find nFBST improves the ability to test instance-wise significance.

Data Generation Process. We consider the input features $X = [x_0, x_1, \ldots, x_{99}] \sim U(-1, 1)^{100}$ and the data generation process

$y = f_0(X) + \epsilon$, (18)

where $\epsilon \sim N(0, 0.01)$ and $f_0$ is a neural network function whose weights and biases are initialized randomly. Only the last fifty features are insignificant, which is enforced by setting the corresponding weights to zero. Our goal is to select these fifty insignificant features accurately. According to the above generation process, we generate two sets of 10,000 independent samples, namely Dataset 1 and Dataset 2. The difference between them is that the structure of $f_0$ for Dataset 1 is three hidden layers of 20 nodes, but three hidden layers of 16 nodes for Dataset 2. In our experiments, we only adopt the structure with three hidden layers of 20 nodes as the trained model, to simulate conditions where it has the same or a different structure from $f_0$. Then, we reduce the data size to one-tenth of Dataset 2, that is, 1,000 independent samples, namely Dataset 3, to simulate the small-data scenario.
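A hedged sketch of such a generator follows: f0 is a randomly initialized MLP, and the last 50 input features are made insignificant by zeroing the corresponding first-layer weights. The layer sizes follow the paper (three hidden layers of 20 nodes for Dataset 1), but the activation function and initialization scheme are our assumptions.

```python
# Simulation data per Eq. (18): random MLP f0 with the last 50 input
# features disconnected (zeroed first-layer weight rows).
import numpy as np

def make_simulation_dataset(n: int, hidden: int = 20, seed: int = 0):
    rng = np.random.default_rng(seed)
    sizes = [100, hidden, hidden, hidden, 1]
    Ws = [rng.normal(size=(a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
    bs = [rng.normal(size=b) for b in sizes[1:]]
    Ws[0][50:, :] = 0.0                       # last 50 features have no effect

    def f0(X):
        h = X
        for W, b in zip(Ws[:-1], bs[:-1]):
            h = np.tanh(h @ W + b)            # activation choice is assumed
        return (h @ Ws[-1] + bs[-1]).squeeze(-1)

    X = rng.uniform(-1.0, 1.0, size=(n, 100))
    y = f0(X) + rng.normal(0.0, 0.1, size=n)  # eps ~ N(0, 0.01), i.e. std 0.1
    return X, y
```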
Test the Global Significance. For the global significance of each feature, there are two possible testing results: insignificant or significant. Therefore, we can treat the testing task as a binary classification problem, where significant represents the positive class and insignificant the negative class. TP means identifying significant features correctly; TN means identifying insignificant features correctly; FN means misidentifying a feature that should be significant as insignificant; FP means misidentifying a feature that should be insignificant as significant. Precision = TP/(TP+FP) reflects the accuracy of correctly testing significant features. Recall = TP/(TP+FN) reflects the completeness of correctly identifying significant features. The F1-score combines precision and recall as their harmonic mean. Table 2 shows these metrics on the three simulation datasets compared to the three classical testing methods. As with the settings of the "Toy Example", we set the significance level for the classical testing methods to 0.05 and λ = 0.5 to obtain Q-GS.

First, Bootstrap tends to determine a feature as significant, thus resulting in high recall but poor precision. On the contrary, the likelihood ratio test tends to determine a feature as insignificant, thus resulting in poor recall but fine precision. Compared comprehensively, the t-test outperforms the other two methods with the highest F1-score. Second, all nFBST methods based on different testing statistics perform perfectly, with an F1-score of 1. Compared to the classical testing methods, the improvement is largely due to the flexible hypothesis of nFBST, as neural networks can fit more complex cases of f0.

Dataset | Metric | Bootstrap | Likelihood ratio test | t-test | Grad-nFBST | DeepLIFT-nFBST | LRP-nFBST | LIME-nFBST
Dataset 1 | Precision | 0.54 | 0.91 | 0.96 | 1 | 1 | 1 | 1
Dataset 1 | Recall | 0.98 | 0.80 | 0.86 | 1 | 1 | 1 | 1
Dataset 1 | F1-score | 0.70 | 0.85 | 0.91 | 1 | 1 | 1 | 1
Dataset 2 | Precision | 0.55 | 0.94 | 0.91 | 1 | 1 | 1 | 1
Dataset 2 | Recall | 0.98 | 0.66 | 0.86 | 1 | 1 | 1 | 1
Dataset 2 | F1-score | 0.71 | 0.78 | 0.89 | 1 | 1 | 1 | 1
Dataset 3 | Precision | 0.51 | 0.87 | 0.90 | 0.82 | 0.80 | 0.83 | 0.85
Dataset 3 | Recall | 1 | 0.54 | 0.54 | 0.84 | 0.90 | 0.86 | 0.80
Dataset 3 | F1-score | 0.68 | 0.67 | 0.68 | 0.83 | 0.85 | 0.84 | 0.82

Table 2: Precision, Recall and F1-score for global significance on the different datasets.

Test the Instance-wise Significance. For the instance-wise significance of each feature, there are also two possible testing results: insignificant or significant. Specifically, there are 10,000 instances of 100 features for Dataset 1 and Dataset 2. As the last fifty features are insignificant by construction, our focus is mainly on the first fifty features, because the instance-wise significance of these features varies with their values. First, we calculate the gradients of f0 on each instance and adjust different precision thresholds, namely eps, to label the instance-wise significance: if the gradient is less than eps, we label it insignificant, otherwise significant. Then, we evaluate the performance by ROC and AUC (a sketch of this protocol follows below). Most existing testing methods do not distinguish global from instance-wise significance and only focus on the former. Therefore, we select feature importance analysis methods as baselines. They assign a feature importance score to a prediction, which reflects the significance of the feature learned from the model.
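The sketch below shows this evaluation protocol under our stated assumptions: ground-truth labels come from thresholding the true gradients of f0 at eps, and a method's per-instance importance scores are evaluated with ROC AUC. For nFBST, a natural score is 1 − Ev(H0), since the evidence supports insignificance.

```python
# Instance-wise evaluation: gradient-thresholded labels + ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def instance_wise_auc(true_grads, scores, eps):
    """true_grads, scores: arrays of shape (n_instances,) for one feature.
    Larger scores should indicate more significant instances."""
    labels = (np.abs(true_grads) >= eps).astype(int)   # 1 = significant
    return roc_auc_score(labels, scores)
```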
[Figure 3: Average AUC of all features for instance-wise significance before and after nFBST under different eps.]

From Figure 3, we have the following observations:
• First, nFBST based on diverse measures consistently surpasses the primary feature importance methods across various eps settings. Through the comparison of Grad and Grad-nFBST, DeepLIFT and DeepLIFT-nFBST, LRP and LRP-nFBST, and LIME and LIME-nFBST, we infer that the integration of nFBST enhances the capacity to discern instance-wise significant and insignificant features. Moreover, it can be concluded that our approach remains unaffected by the precision of the ground truth. The distinct AUC for each feature under varying eps values is presented in the Appendix.
• Second, LIME and LIME-nFBST perform worse than the other methods. That is because LIME is a perturbation-based method that constructs a local linear model based on data collected by perturbing near sample points; its performance is limited by sampling efficiency.

Real World Experiments

In this section, we analyze the performance of nFBST in real-world scenarios through UCI and image datasets. On energy efficiency, we focus on analyzing feature x8 and find that the instance-wise testing results are consistent with its physical ground truth. On MNIST, by comparing different feature importance methods, we find nFBST recognizes the object information in the image more prominently.

The energy efficiency dataset comprises 768 samples and 8 features. It aims to predict the dependent target y (HL, heating load), which determines the specifications of the heating equipment needed to maintain comfortable indoor air conditions. Descriptions of the features and target are given in the Appendix. From Figure 4, we find that the testing results are more concentrated around one when x8 equals zero, while the others are not. This indicates that the instance-wise significance of x8 differs under different values: it is insignificant when its value is zero. The research in (Tsanas and Xifara 2012) confirms our findings. There are six possible values for x8 (0, 1, 2, 3, 4, 5) in total. When x8 equals zero, it means there are no glazing areas, which is why x8 does not matter in that situation. In conclusion, nFBST can effectively discover instance-wise significance in real-world data.

[Figure 4: Histograms of gradient distributions for the six values of x8 (panels (a)-(f) for x8 = 0, ..., 5) on the energy efficiency dataset.]

The testing problem for MNIST is defined as testing each pixel of a digit image and distinguishing significant from insignificant pixels for the target. It involves a weakly supervised semantic segmentation task in computer vision. Ideally, the pixels related to the target class should be assigned higher scores than the background pixels. Feature importance analysis methods generate a saliency map based on feature importance scores. For nFBST, the Bayesian evidence represents the evidence supporting H0, which is pixel-wise insignificance in this task. Figure 5 shows that the object pixels are more prominent after using nFBST. Besides, Grad-nFBST identifies a messy area because the primary gradient method recognizes objects poorly; when we multiply it with the input, the performance improves, and this problem does not exist in the other methods (LRP and DeepLIFT).

[Figure 5: Visualization of scores calculated by different methods for the target class.]

Related Works

In recent years, there has been an increasing amount of literature on the interpretability of deep learning (Wang et al. 2019a,b, 2020b, 2021, 2022; Wang, Wu, and Zhao 2021; Wang, Feng, and Wu 2019; Cong et al. 2021; Ji et al. 2020, 2022), one branch of which is feature importance analysis. The first group propagates an importance score from the output neuron backward to the input. Most of them are gradient-based, including the saliency map (Simonyan, Vedaldi, and Zisserman 2013), deconvolution (Zeiler and Fergus 2014), guided backpropagation (Springenberg et al. 2014), and integrated gradients (Sundararajan, Taly, and Yan 2017). The first three methods have different strategies for calculating the gradient when passing through the ReLU layer, but they cannot show negative contributions and face discontinuity. The integrated gradients method increases the computational cost by computing the integral. Another approach is Layer-wise Relevance Propagation, proposed by (Binder et al. 2016).
(Kindermans et al. 2016) shows that the LRP rules for ReLU networks are equivalent, within a scaling factor, to gradient × input under some conditions. Moreover, DeepLIFT (Shrikumar, Greenside, and Kundaje 2017) and SHAP (Lundberg and Lee 2017) do not compute gradients but are also based on back-propagation. In contrast, the second group of methods makes perturbations to individual inputs or neurons (Zeiler and Fergus 2014). A typical approach is LIME (Ribeiro, Singh, and Guestrin 2016), where data are collected by perturbing near sample points to construct a local linear model. However, it is computationally expensive and requires a large number of samples to obtain reliable results. The above methods aim to explain the predictions of a model locally at a specific instance, while others aim to understand how the model works globally. The partial dependence plot (PDP) shows the marginal impact of one or two features on the model prediction (Friedman 2001; Greenwell, Boehmke, and McCarthy 2018). (Datta, Sen, and Zick 2016) measures the impact by calculating the difference in the quantity of interest when the data is generated according to the true distribution and a deliberately designed hypothetical distribution. SP-LIME extends LIME to the global setting by selecting typical points (Ribeiro, Singh, and Guestrin 2016).

There is also prior work treating the significance of variables. One line regards neural networks as parametric formulations (Olden and Jackson 2002; White 1989a,b; Vuong 1989) but restricts the model structure; the testing statistic is not necessarily identifiable due to the non-identifiability of neural networks. The other line regards neural networks as nonparametric models (Gozalo 1993; Lavergne and Vuong 1996; Yatchew 1992; Fan and Li 1996; Lavergne and Vuong 2000; Racine 1997). However, most of them study the kernel regression setting and can be computationally challenging because of Bootstrap. The latest related research restricts the model structure to a single hidden layer and only tests global significance (Horel and Giesecke 2020).

Conclusion

In this paper, we propose to conduct Full Bayesian Significance Testing for neural networks, called nFBST. It is a general framework that can be extended based on different measures. To the best of our knowledge, we are the first to introduce significance testing into deep neural networks. What is more, it offers a new perspective for exploring the knowledge hidden behind the underlying relationship between features and targets in a rigorous way, rather than explaining the estimated relationship, which contains estimation errors due to the randomness of the data generation process. Extensive experiments on simulation and real-world datasets confirm the advantages of our proposed approach.

Acknowledgments

Prof. Wang's work was supported by the National Natural Science Foundation of China (No. 72171013, 72222022, 72242101), the Fundamental Research Funds for the Central Universities (YWF-23-L-829) and the DiDi Gaia Collaborative Research Funds. Dr. He's work was supported by the China National Postdoctoral Program for Innovative Talents (BX20230195).

References

Binder, A.; Montavon, G.; Lapuschkin, S.; Müller, K.-R.; and Samek, W. 2016. Layer-wise relevance propagation for neural networks with local renormalization layers. In International Conference on Artificial Neural Networks, 63–71. Springer.
Blei, D. M.; Kucukelbir, A.; and McAuliffe, J. D. 2017. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518): 859–877.
Blundell, C.; Cornebise, J.; Kavukcuoglu, K.; and Wierstra, D. 2015. Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424.
Cong, L. W.; Tang, K.; Wang, J.; and Zhang, Y. 2021. AlphaPortfolio: Direct construction through deep reinforcement learning and interpretable AI. Available at SSRN 3554486.
Datta, A.; Sen, S.; and Zick, Y. 2016. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In 2016 IEEE Symposium on Security and Privacy (SP), 598–617. IEEE.
de B. Pereira, C. A.; Stern, J. M.; and Wechsler, S. 2008. Can a significance test be genuinely Bayesian? Bayesian Analysis, 3(1): 79–100.
De Bragança Pereira, C. A.; and Stern, J. M. 1999. Evidence and credibility: Full Bayesian significance test for precise hypotheses. Entropy, 1(4): 99–110.
Efron, B. 1979. Bootstrap methods: Another look at the jackknife. The Annals of Statistics, 7(1): 1–26.
Fan, Y.; and Li, Q. 1996. Consistent model specification tests: omitted variables and semiparametric functional forms. Econometrica: Journal of the Econometric Society, 865–890.
Fisher, R. A. 1922. On the interpretation of χ2 from contingency tables, and the calculation of P. Journal of the Royal Statistical Society, 85(1): 87–94.
Friedman, J. H. 2001. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 1189–1232.
Gozalo, P. L. 1993. A consistent model specification test for nonparametric estimation of regression function models. Econometric Theory, 9(3): 451–477.
Greenwell, B. M.; Boehmke, B. C.; and McCarthy, A. J. 2018. A simple and effective model-based variable importance measure. arXiv preprint arXiv:1805.04755.
He, Y.; Cui, P.; Ma, J.; Zou, H.; Wang, X.; Yang, H.; and Yu, P. S. 2020. Learning stable graphs from multiple environments with selection bias. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2194–2202.
Horel, E.; and Giesecke, K. 2020. Significance tests for neural networks. Journal of Machine Learning Research, 21(227): 1–29.
Hornik, K.; Stinchcombe, M.; and White, H. 1989. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5): 359–366.
Jaakkola, T. S.; and Jordan, M. I. 2000. Bayesian parameter estimation via variational methods. Statistics and Computing, 10(1): 25–37.
Jeffreys, H. 1998. The Theory of Probability. OUP Oxford.
Ji, J.; Wang, J.; Jiang, Z.; Jiang, J.; and Zhang, H. 2022. STDEN: Towards physics-guided neural networks for traffic flow prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 4048–4056.
Ji, J.; Wang, J.; Jiang, Z.; Ma, J.; and Zhang, H. 2020. Interpretable spatiotemporal deep learning model for traffic flow prediction based on potential energy fields. In 2020 IEEE International Conference on Data Mining (ICDM), 1076–1081. IEEE.
Kass, R. E.; and Raftery, A. E. 1995. Bayes factors. Journal of the American Statistical Association, 90(430): 773–795.
Kendall, A.; and Gal, Y. 2017. What uncertainties do we need in Bayesian deep learning for computer vision? Advances in Neural Information Processing Systems, 30.
Kindermans, P.-J.; Schütt, K.; Müller, K.-R.; and Dähne, S. 2016. Investigating the influence of noise and distractors on the interpretation of neural networks. arXiv preprint arXiv:1611.07270.
Lavergne, P.; and Vuong, Q. 2000. Nonparametric significance testing. Econometric Theory, 16(4): 576–601.
Lavergne, P.; and Vuong, Q. H. 1996. Nonparametric selection of regressors: The nonnested case. Econometrica: Journal of the Econometric Society, 207–219.
Lundberg, S. M.; and Lee, S.-I. 2017. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30.
Matthews, J.; Altman, D. G.; Campbell, M.; and Royston, P. 1990. Analysis of serial measurements in medical research. British Medical Journal, 300(6719): 230–235.
Olden, J. D.; and Jackson, D. A. 2002. Illuminating the "black box": a randomization approach for understanding variable contributions in artificial neural networks. Ecological Modelling, 154(1-2): 135–150.
Orlitzky, M. 2012. How can significance tests be deinstitutionalized? Organizational Research Methods, 15(2): 199–228.
Ortega, A.; and Navarrete, G. 2017. Bayesian hypothesis testing: an alternative to null hypothesis significance testing (NHST) in psychology and social sciences. In Bayesian Inference. IntechOpen.
Parzen, E. 1962. On estimation of a probability density function and mode. The Annals of Mathematical Statistics, 33(3): 1065–1076.
Racine, J. 1997. Consistent significance testing for nonparametric regression. Journal of Business & Economic Statistics, 15(3): 369–378.
Ribeiro, M. T.; Singh, S.; and Guestrin, C. 2016. "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144.
Rutledge, T.; and Loh, C. 2004. Effect sizes and statistical testing in the determination of clinical significance in behavioral medicine research. Annals of Behavioral Medicine, 27(2): 138–145.
Scott, D. W. 1979. On optimal and data-based histograms. Biometrika, 66(3): 605–610.
Shrikumar, A.; Greenside, P.; and Kundaje, A. 2017. Learning important features through propagating activation differences. In International Conference on Machine Learning, 3145–3153. PMLR.
Simonyan, K.; Vedaldi, A.; and Zisserman, A. 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.
Springenberg, J. T.; Dosovitskiy, A.; Brox, T.; and Riedmiller, M. 2014. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806.
Student. 1908. The probable error of a mean. Biometrika, 6(1): 1–25.
Sundararajan, M.; Taly, A.; and Yan, Q. 2017. Axiomatic attribution for deep networks. In International Conference on Machine Learning, 3319–3328. PMLR.
Tsanas, A.; and Xifara, A. 2012. Accurate quantitative estimation of energy performance of residential buildings using statistical machine learning tools. Energy and Buildings, 49: 560–567.
Vuong, Q. H. 1989. Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica: Journal of the Econometric Society, 307–333.
Wang, J.; Feng, K.; and Wu, J. 2019. SVM-based deep stacking networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 5273–5280.
Wang, J.; Ji, J.; Jiang, Z.; and Sun, L. 2022. Traffic flow prediction based on spatiotemporal potential energy fields. IEEE Transactions on Knowledge and Data Engineering.
Wang, J.; Peng, Z.; Wang, X.; Li, C.; and Wu, J. 2020a. Deep fuzzy cognitive maps for interpretable multivariate time series prediction. IEEE Transactions on Fuzzy Systems, 29(9): 2647–2660.
Wang, J.; Wang, Z.; Li, J.; and Wu, J. 2018. Multilevel wavelet decomposition network for interpretable time series analysis. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2437–2446.
Wang, J.; Wu, N.; Lu, X.; Zhao, W. X.; and Feng, K. 2021. Deep trajectory recovery with fine-grained calibration using Kalman filter. IEEE Transactions on Knowledge and Data Engineering, 33(3): 921–934.
Wang, J.; Wu, N.; and Zhao, W. X. 2021. Personalized route recommendation with neural network enhanced search algorithm. IEEE Transactions on Knowledge and Data Engineering, 34(12): 5910–5924.
Wang, J.; Wu, N.; Zhao, W. X.; Peng, F.; and Lin, X. 2019a. Empowering A* search algorithms with neural networks for personalized route recommendation. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 539–547.
Wang, J.; Wu, Y.; Li, M.; Lin, X.; Wu, J.; and Li, C. 2020b. Interpretability is a kind of safety: An interpreter-based ensemble for adversary defense. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 15–24.
Wang, J.; Yang, C.; Jiang, X.; and Wu, J. 2023. WHEN: A Wavelet-DTW hybrid attention network for heterogeneous time series analysis. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2361–2373.
Wang, J.; Zhang, Y.; Tang, K.; Wu, J.; and Xiong, Z. 2019b. AlphaStock: A buying-winners-and-selling-losers investment strategy using interpretable deep reinforcement attention networks. In Proceedings of the 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '19, 1900–1908. New York, NY, USA: Association for Computing Machinery. ISBN 9781450362016.
White, H. 1989a. Learning in artificial neural networks: A statistical perspective. Neural Computation, 1(4): 425–464.
White, H. 1989b. Some asymptotic results for learning in single hidden-layer feedforward network models. Journal of the American Statistical Association, 84(408): 1003–1013.
Yatchew, A. J. 1992. Nonparametric regression tests based on least squares. Econometric Theory, 8(4): 435–451.
Zeiler, M. D.; and Fergus, R. 2014. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, 818–833. Springer.
2024
983
18,832
KGDM: A Diffusion Model to Capture Multiple Relation Semantics for Knowledge Graph Embedding

Xiao Long1, Liansheng Zhuang1*, Aodi Li1, Jiuchang Wei1, Houqiang Li1, Shafei Wang2
1University of Science and Technology of China, Hefei 230026, China
2Peng Cheng Laboratory, Shenzhen 518000, China
[email protected]; [email protected]
*Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Knowledge graph embedding (KGE) is an efficient and scalable method for knowledge graph completion. However, most existing KGE methods suffer from the challenge of multiple relation semantics, which often degrades their performance. This is because most KGE methods learn fixed continuous vectors for entities (relations) and make deterministic entity predictions to complete the knowledge graph, which hardly captures multiple relation semantics. To tackle this issue, previous works try to learn complex probabilistic embeddings instead of fixed embeddings, but suffer from heavy computational complexity. In contrast, this paper proposes a simple yet efficient framework, namely the Knowledge Graph Diffusion Model (KGDM), to capture multiple relation semantics in prediction. Its key idea is to cast the problem of entity prediction into conditional entity generation. Specifically, KGDM estimates the probabilistic distribution of target entities in prediction through Denoising Diffusion Probabilistic Models (DDPM). To bridge the gap between continuous diffusion models and discrete KGs, two learnable embedding functions are defined to map entities and relations to continuous vectors. To consider connectivity patterns of KGs, a Conditional Entity Denoiser model is introduced to generate target entities conditioned on given entities and relations. For effective inference, an adaptive scaling strategy standardizes predictions across conditions. Extensive experiments demonstrate that KGDM significantly outperforms existing state-of-the-art methods on three benchmark datasets.

Introduction

Knowledge graphs (KGs) are a type of multi-relational graph that stores factual knowledge about the real world. Benefiting from their efficiency in storing and representing factual knowledge, KGs are essential for many applications such as question answering (Hao et al. 2017), information retrieval (Xiong, Power, and Callan 2017), recommender systems (Zhang et al. 2016), and natural language processing (Yang, Yang, and Cohen 2017). Usually, a KG consists of enormous factual triplets, where each triplet (h, r, t) includes a head entity h, a relation r, and a tail entity t. Due to the complexity of the real world, KGs are often incomplete, which restricts their applications in downstream tasks. Therefore, knowledge graph completion (KGC) is proposed to complete missing facts by inferring from existing ones. Generally, the KGC task amounts to entity prediction.

Knowledge graph embedding (KGE) is a promising approach to predicting missing facts. It learns the embeddings of entities and relations of KGs in a low-dimensional vector space, where the embeddings are required to preserve the semantic meaning and relational structure. Despite the success of KGE models, most of them make deterministic entity predictions in the vector space. So, they can only capture specific semantics of relations and lack the capability to deal with relations that carry multiple semantics. Due to the ambiguity of knowledge in KGs, a relation often has multiple semantics revealed by the entity pairs in one-to-many and many-to-many forms.
For example, the relation LocationContains has multiple latent semantics: city-related, as in (US, LocationContains, New York); college-related, as in (US, LocationContains, Yale University); and company-related, as in (US, LocationContains, Google). As shown in Table 1, the existence of multiple relation semantics is quite common in knowledge graphs and significantly degrades the performance of KGE methods. For example, RotatE (Sun et al. 2019) and SEA (Gregucci et al. 2023) have impressive performance on the 1-1 and N-1 relation types, but they perform very poorly on the 1-N and N-N types. Additionally, although TransH (Wang et al. 2014) and TransG (Xiao et al. 2015) perform well on the FB15k dataset in tackling the issue of multiple relation semantics, they still exhibit poor performance on the 1-N and N-N types of the more comprehensive FB15k-237 dataset (which excludes inverse relations and has a higher entity-relation ratio (Dettmers et al. 2018)).

Relation type (proportion) | TransH | TransG | RotatE | SEA
1-1 (1.57%) | 0.492 | 0.489 | 0.487 | 0.493
1-N (18.60%) | 0.077 | 0.078 | 0.081 | 0.087
N-1 (4.60%) | 0.449 | 0.458 | 0.467 | 0.470
N-N (75.23%) | 0.219 | 0.228 | 0.237 | 0.242
All types (100%) | 0.287 | 0.301 | 0.338 | 0.360

Table 1: MRR scores on FB15k-237 by relation type.

To address the above issue, some methods focus on learning probabilistic embeddings, where the predictions represent a specific distribution rather than a deterministic value.
To address this gap, we conduct diffusion directly in a vector space and define two learnable embedding functions: EMBe, and EMBr that map entities and relations to vectors. Our contributions can be summarized as follows: • We propose a simple yet efficient framework namely KGDM to capture the multiple relation semantics in prediction for KGE methods. It directly learns the probabilistic distribution of target entities through Denoising Diffusion Probabilistic Models (DDPM) and casts the entity prediction task into the conditional fact generation task. To the best of our knowledge, KGDM is the first attempt to explore the potential of diffusion models in knowledge graph reasoning. • A Conditional Entity Denoiser module is proposed to generate a target entity conditioned on given entities and relations. Furthermore, two embedding functions for entities and relations are defined to bridge the gap between continuous diffusion models and discrete KGs. • Extensive experiments on four benchmark datasets demonstrate that KGDM achieves superior performances in KGC tasks and significantly outperforms all types of state-ofthe-art methods on three datasets. In FB15k-237, KGDM achieves up to a 25% relative improvement over the stateof-the-art methods. Related Work KG embedding methods: KG embedding, which aims to encode entities and relations into a continuous vector space. The general intuition of these methods is to model and infer the connectivity patterns (i.e., symmetry/antisymmetry, inversion, and composition) in knowledge graphs according to the observed knowledge facts. Most KGE methods (Bordes et al. 2013; Sun et al. 2019; Yang et al. 2014; Cao et al. 2022) focus on defining a relation-dependent scoring function fr(h, t) in the general or design space to model these patterns. For example, TransE (Bordes et al. 2013), which represents relations as translations, can model the inversion and composition patterns. RotatE (Sun et al. 2019), which represents entities as points in a complex space and relations as rotations, can model relation patterns including symmetry/antisymmetry, inversion, and composition. Meanwhile, some other works consider how to model the multiple relation semantics and diversity of target entities to improve the performance of embeddings. Translation-based model (Wang et al. 2014) models a relation as a hyperplane to tackle entity pairs in one-to-many form. However, they can not deal with the issue of multiple relation semantics. Density-based models (Xiao et al. 2015) and (He et al. 2015) also use Gaussian distributions to represent entities while drawing a mixture of Gaussian distributions for relations to represent the multiple relation semantics. However, these density-based embedding methods often make complex designs on embeddings, which often need to calculate the inverse of the large matrix and solve a linear system, which requires huge memory and is very time-consuming. Diffusion Model: The diffusion model uses diffusion processes to model the generation and defines the sampling of data as the process of gradually denoising it from a complete Gaussian noise. The forward process gradually adds Gaussian noise to the data from a predefined noise schedule until the time step T. In recent years, the class of diffusion-based (or score-based) deep generative models has demonstrated its outstanding performance in modeling high-dimensional multi-modal distributions (Ho, Jain, and Abbeel 2020; SohlDickstein et al. 
2015), and the capability of generating highquality and diverse samples (Dhariwal and Nichol 2021; Rombach et al. 2022) on several benchmark generation tasks in the field of computer vision (Dhariwal and Nichol 2021). Additionally, to handle discrete data, past works have studied text diffusion models on discrete state spaces, which defines a corruption process on discrete data (Austin et al. 2021). And some works (Li et al. 2022; Gong et al. 2022) focus on continuous diffusion models for text generation. The reverse process uses a neural backbone often implemented as a U-Net (Dhariwal and Nichol 2021; Ho, Jain, and Abbeel 2020) or transformer (Li et al. 2022; Gong et al. 2022) to parameterize the conditional distribution p(xt−1|xt). However, knowledge graphs are often stored in the form of triplets (h, r, t), which is different from images or text. They have shorter lengths and less obvious long-range dependencies. So, in this work, we propose an MLP-based Conditional Entity Denoiser (CEDenoiser) for KG data to learn the reverse diffusion process. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8851 Methodology Problem Setup Let G = (V, E) be an instance of a knowledge graph, where V is the set of nodes and E is the set of edges. Each edge e has a relation type r ∈R. We aim to predict the missing entities in G, i.e., given an incomplete triplet (h, r, ?), we aim to predict the missing tail entity t. Noticing that the problem (?, r, t) is the same, this paper only discusses (h, r, ?). The traditional KGE methods carry out entity prediction through a ranking procedure. Specifically, to predict the tail entity of an incomplete fact triple (h, r, ?), it makes the prediction ˆt by relation-dependent score function fr(h) in vector space. Then ranking the distance between ˆt and each entity t in KGs to get the answer. While KGDM uses the Conditional Entity Denoiser to generate the predicted tail entity Xt conditioned on the given head entity Xh and relation Xr in vector space. And then, it does the same ranking process to get the final answer. The following sections will provide a detailed introduction to the architecture of KGDM and its training objectives. The KGDM Architecture Figure 1 illustrates the architecture of the KGDM. From a high-level perspective, KGDM can be divided into two stages: forward process and reverse process with conditional denoising. Specifically, KGDM learns to model the Markov transition from Gaussian distribution to the distribution of the target entities in vector space. In the inference stage, it uses the known entity and relation to guide the reverse process to generate target entities in vector space. Next, we will provide a detailed introduction to these two processes. Forward Process. To apply a continuous diffusion model to discrete entities. We define two learnable embedding functions: EMBe and EMBr, which both are linear layers. They maps each entity and relation to vectors Xe ∈Re, Xr ∈Rr. Then, for the training triplet (h, r, t), given the ground-truth tail entity embedding Xt and its condition embedding Xh and Xr. Let q(Xt|Xh, Xr) be the unknown target entities’ distribution in vector space. First, KGDM defines the forward diffusion process q(Xt Tt|Xt Tt−1) which maps the embedding of tail entity to pure noise by gradually adding Gaussian noise at each time step Tt = i until at diffusion step Tt = T. 
Reverse Process with Conditional Denoising. In the second stage, KGDM defines the conditional reverse diffusion process $p(X^t_{T_t-1} \mid X^t_{T_t}, X^h, X^r)$, which performs iterative denoising from pure Gaussian noise to generate target entities in vector space, conditioned on the known entity embedding $X^h$ and relation embedding $X^r$:
$$p_\theta(X^t_{T_t-1} \mid X^t_{T_t}, X^h, X^r) = \mathcal{N}\left(X^t_{T_t-1};\ \mu_\theta(X^t_{T_t}, T_t, X^h, X^r),\ \sigma_{T_t}^2 I\right), \quad (2)$$
where $\sigma_{T_t}$ is the constant variance following (Ho, Jain, and Abbeel 2020), $\mu_\theta$ is the mean of the Gaussian distribution computed by a neural network, and $\theta$ denotes the parameters of the denoising model. As shown in (Ho, Jain, and Abbeel 2020), we can reparameterize the mean so that the neural network instead learns the noise added at time step $T_t$:
$$\mu_\theta(X^t_{T_t}, T_t, X^h, X^r) = \frac{1}{\sqrt{\alpha_{T_t}}}\left(X^t_{T_t} - \frac{\beta_{T_t}}{\sqrt{1-\bar{\alpha}_{T_t}}}\, \epsilon_\theta(X^t_{T_t}, T_t, X^h, X^r)\right), \quad (3)$$
where $T_t$ is the time step, $\{\beta_{T_t}\}_{T_t=1}^{T}$ are the forward process variances, $\alpha_{T_t} = 1 - \beta_{T_t}$, and $\bar{\alpha}_{T_t} = \prod_{s=1}^{T_t} \alpha_s$. Here, $\epsilon_\theta(X^t_{T_t}, T_t, X^h, X^r)$ is the designed CEDenoiser, which predicts the added noise conditioned on the known condition embeddings at time step $T_t$.

Conditional Entity Denoiser
Currently, the backbones of most existing denoising models are designed for image or text data, whereas knowledge graphs are often stored in the form of triplets (h, r, t), which have short lengths and no apparent long-range dependencies. So, instead of using transformers, we propose a simple yet efficient MLP-based Conditional Entity Denoiser (CEDenoiser); we conduct ablation studies in the following sections to demonstrate that transformers are not suitable here. The architecture of CEDenoiser is illustrated in Figure 1(b) and can be described formally as:
$$X^c = \mathrm{ScoringModule}(X^h, X^r), \quad (4)$$
$$E = \mathrm{CEDenoiserBlock}(X^{t'}, X_{T_t}, X^c), \quad (5)$$
$$\epsilon = \mathrm{LinearLayer}(\mathrm{LN}(E)), \quad (6)$$
where $X^h$ and $X^r$ are the embeddings of entity h and relation r, $X^c$ is the final condition embedding computed by the Scoring Module, and $X^{t'}$ is the noised tail embedding. $X_{T_t}$ denotes the timestep embedding at step $T_t$, $E$ is the intermediate feature computed by the CEDenoiser block, and $\epsilon$ is the noise predicted by the CEDenoiser. Next, we introduce the Scoring Module and the CEDenoiser block.
Scoring Module. To generate the target tail entity from noise in vector space, the CEDenoiser uses the known embeddings $(X^h, X^r)$ to guide the generative process in DDPM. After the vectorization of conditions, most previous works (Li et al. 2022; Ho and Salimans 2022) simply concatenate the different conditional embeddings and use them as the final control condition. However, for triplets in KGs, the entities and relations that serve as conditions usually carry rich patterns and are not independent of each other. Inspired by existing KGE methods for modeling the patterns among entities and relations, we define the relation-dependent score functions in Eq. (7) and Eq. (8), following (Bordes et al. 2013; Sun et al. 2019), so that the Scoring Module can better capture the patterns among conditions and utilize them to guide the generation process:
$$\mathrm{ScoringModule}(X^h, X^r) = X^h + X^r, \quad (7)$$
$$\mathrm{ScoringModule}(X^h, X^r) = X^h \circ X^r, \quad (8)$$
where $\circ$ denotes the Hadamard product. Note that different score functions model different patterns among entities and relations. We therefore use Eq. (7) on FB15k-237, which contains a large number of composition patterns, and Eq. (8) on WN18RR, Kinship, and UMLS, which contain many symmetry patterns. In the following experiments, we conduct ablation studies to demonstrate that the performance of simply concatenating conditional embeddings is far inferior to that of using score functions.
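To make the two conditioning choices concrete, a minimal sketch of the Scoring Module is given below, contrasting the additive form of Eq. (7) with the Hadamard form of Eq. (8); the class interface is an illustrative assumption.

```python
import torch
import torch.nn as nn

class ScoringModule(nn.Module):
    """Fuses head-entity and relation embeddings into one condition vector.
    mode='add' corresponds to Eq. (7); mode='hadamard' to Eq. (8)."""
    def __init__(self, mode: str = "add"):
        super().__init__()
        assert mode in ("add", "hadamard")
        self.mode = mode

    def forward(self, x_h: torch.Tensor, x_r: torch.Tensor) -> torch.Tensor:
        if self.mode == "add":    # composition-heavy graphs, e.g., FB15k-237
            return x_h + x_r
        return x_h * x_r          # symmetry-heavy graphs, e.g., WN18RR
```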
Figure 1: (a) Architecture of KGDM. It consists of a forward diffusion process and a reverse process modeled by a Conditional Entity Denoiser. (b) Architecture of the Conditional Entity Denoiser (CEDenoiser).

CEDenoiser Block. Inspired by the success of the transformer encoder (Vaswani et al. 2017) in the graph data domain (Hu et al. 2020), the CEDenoiser block adopts a similar architecture. It consists of alternating MLP layers; layer normalization (LN) is applied before every layer, and we employ residual connections around each of the sub-layers (Wang et al. 2019). However, since triplets (h, r, t) have a simple form with a short length and no apparent long-range dependencies, the CEDenoiser employs simple MLP layers rather than multi-headed self-attention layers. Furthermore, to make full use of the conditional embeddings to guide generation, we regress dimension-wise scaling parameters $\alpha$ that are applied immediately prior to any residual connections (Peebles and Xie 2022) within the sub-layers, as shown in Figure 1(b).

Training and Inference
Negative sampling has proven quite effective for learning both knowledge graph embeddings (Trouillon et al. 2016) and word embeddings (Mikolov et al. 2013). To train the model, we therefore use a loss function similar to the negative sampling loss (Mikolov et al. 2013):
$$\mathcal{L} = -\log \sigma\!\left(\gamma - d_1(X^t, \mathrm{Denoise}(X^t))\right) - \sum_{i=1}^{n} \frac{1}{k} \log \sigma\!\left(d_1(X^t, \mathrm{Denoise}(X^{t'_i})) - \gamma\right), \quad (9)$$
where $\gamma$ is a fixed margin, $\sigma$ is the sigmoid function, and $d_1$ is the L1 distance. The predicted noise and the final denoised result can be converted to each other (Ho, Jain, and Abbeel 2020) by
$$\mathrm{Denoise}(X^t) = \frac{1}{\sqrt{\bar{\alpha}_{T_t}}} X^t_{T_t} - \sqrt{\frac{1}{\bar{\alpha}_{T_t}} - 1}\ \epsilon_\theta(X^t_{T_t}, T_t, X^h, X^r).$$
Here, $(h, r, t'_i)$ is the i-th negative triplet for tail entity prediction on the positive sample (h, r, t); for head prediction, we replace the corresponding $h'_i$. During the inference stage, when predicting (h, r, ?), KGDM uses the trained CEDenoiser and the corresponding trained conditions (the known embeddings of entity $X^h$ and relation $X^r$) to perform iterative denoising from pure Gaussian noise to the target entity $X^t$ in vector space. The training and inference procedures of KGDM are summarized in Algorithm 1 and Algorithm 2, respectively.

Algorithm 1: Training Stage
  Input: positive triplet (h, r, t) and negative triplets (h, r, t');
  Parameters: EMBe, EMBr, CEDenoiser $\epsilon_\theta$;
  repeat
    Compute $X^h, X^r, X^t, X^{t'}$ with EMBe and EMBr;
    Sample $\epsilon \sim \mathcal{N}(0, I)$ and $T_t \sim \mathrm{Uniform}(\{1, \dots, T\})$;
    $X^t_{T_t} = \sqrt{\bar{\alpha}_{T_t}}\, X^t + \sqrt{1 - \bar{\alpha}_{T_t}}\, \epsilon$;
    $\mathrm{Denoise}(X^t) = \frac{1}{\sqrt{\bar{\alpha}_{T_t}}} X^t_{T_t} - \sqrt{\frac{1}{\bar{\alpha}_{T_t}} - 1}\ \epsilon_\theta(X^t_{T_t}, T_t, X^h, X^r)$;
    Take a gradient descent step on the loss $\mathcal{L}$ of Eq. (9);
  until converged
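Tying Algorithm 1 together, a hedged sketch of one training step is given below. It reuses q_sample from the earlier snippet; the denoiser interface eps_model(x_t, t, cond), the margin value, and a uniform 1/k weighting over the negatives are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def denoise(eps_model, x0, t, cond, alpha_bar):
    """Noise x0 to step t, predict the noise, and recover x0_hat;
    this is the Denoise(.) operator appearing in Eq. (9)."""
    x_t, _ = q_sample(x0, t, alpha_bar)
    ab = alpha_bar[t].unsqueeze(-1)
    eps_hat = eps_model(x_t, t, cond)          # CEDenoiser forward pass
    return x_t / ab.sqrt() - (1.0 / ab - 1.0).sqrt() * eps_hat

def kgdm_loss(eps_model, x_pos, x_negs, cond, alpha_bar, gamma=6.0):
    """Negative-sampling loss of Eq. (9); x_negs is a list of (B, d) tensors."""
    B = x_pos.shape[0]
    t = torch.randint(0, alpha_bar.numel(), (B,), device=x_pos.device)
    d_pos = (x_pos - denoise(eps_model, x_pos, t, cond, alpha_bar)).abs().sum(-1)
    loss = -F.logsigmoid(gamma - d_pos)
    for x_neg in x_negs:                       # k negative tail embeddings
        d_neg = (x_pos - denoise(eps_model, x_neg, t, cond, alpha_bar)).abs().sum(-1)
        loss = loss - F.logsigmoid(d_neg - gamma) / len(x_negs)
    return loss.mean()
```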
Algorithm 2: Inference Stage
  Input: incomplete triplet (h, r, ?);
  Output: predicted target entity embedding $X^t_0$;
  $X^t_T \sim \mathcal{N}(0, I)$; $X^h \leftarrow \mathrm{EMB}_e(h)$, $X^r \leftarrow \mathrm{EMB}_r(r)$;
  for $T_t = T, \dots, 1$ do
    $z \sim \mathcal{N}(0, I)$ if $T_t > 1$, else $z = 0$;
    $X^t_{T_t-1} = \frac{1}{\sqrt{\alpha_{T_t}}}\left(X^t_{T_t} - \frac{1-\alpha_{T_t}}{\sqrt{1-\bar{\alpha}_{T_t}}}\, \epsilon_\theta(X^t_{T_t}, T_t, X^h, X^r)\right) + \sigma_{T_t} z$;
  end
  return $X^t_0$

Experiment
Experiment Setup
Datasets: We select four typical KGC datasets for evaluation: FB15k-237 (Toutanova and Chen 2015), WN18RR (Dettmers et al. 2018), Kinship, and UMLS. For Kinship and UMLS, we use the dataset division in (Qu et al. 2020). Statistics of the datasets can be found in the appendix.
Baselines: We compare with four types of KGC methods, following (Cui and Chen 2022). Knowledge graph embedding methods: TransE (Bordes et al. 2013), DualE (Cao et al. 2021), DistMult (Yang et al. 2014), ComplEx (Trouillon et al. 2016), ComplEx-N3 (Lacroix, Usunier, and Obozinski 2018), KG2GM (Feng et al. 2021), KG2E (He et al. 2015), TuckER (Balažević, Allen, and Hospedales 2019), ConvE (Dettmers et al. 2018), RotatE (Sun et al. 2019), HAKE (Zhang et al. 2020), ATTH (Chami et al. 2020), SEA (Gregucci et al. 2023), and AnKGE-HAKE (Yao et al. 2023). Path-based methods: RNNLogic (Qu et al. 2020), NeuralLP (Yang, Yang, and Cohen 2017), DRUM (Sadeghian et al. 2019), PathRank (Lee et al. 2013), MINERVA (Das et al. 2017), and M-Walk (Shen et al. 2018). Graph neural network methods: NBFNet (Zhu et al. 2021), COMPGCN (Vashishth et al. 2019), and RGCN (Schlichtkrull et al. 2018). Instance-based learning methods: IBLE and CIBLE (Cui and Chen 2022).
Evaluation Protocols: For each test triplet (h, r, t), we construct two queries, (h, r, ?) and (?, r, t), with answers t and h. Mean Rank (MR), Mean Reciprocal Rank (MRR), and H@N are reported under the filtered setting (Sun et al. 2019), in line with previous research.
Implementation details: Our model is trained on two Nvidia Quadro RTX 8000 GPUs. We describe the hyper-parameters, architectures, and further experimental details in the appendix. The source code of this paper can be obtained from https://github.com/key2long/KGDM.
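To make the evaluation protocol concrete, the following sketch computes filtered ranks and the MR/MRR/H@N summary; in KGDM, the scores for a query could be, for example, negative L1 distances between the generated embedding $X^t_0$ and every entity embedding. The function names are illustrative.

```python
import torch

def filtered_rank(scores, answer, known_true):
    """Rank of the gold entity under the filtered setting: every other
    entity known to be a correct answer is masked out before ranking."""
    s = scores.clone()
    others = [e for e in known_true if e != answer]
    if others:
        s[torch.tensor(others)] = float("-inf")
    return int((s > s[answer]).sum().item()) + 1

def summarize(ranks, ks=(1, 3, 10)):
    """Aggregate a list of ranks into MR, MRR, and H@N percentages."""
    r = torch.tensor(ranks, dtype=torch.float)
    out = {"MR": r.mean().item(), "MRR": (1.0 / r).mean().item()}
    out.update({f"H@{k}": (r <= k).float().mean().item() for k in ks})
    return out
```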
Main Results
The main results are presented in Table 2 and Table 3. We categorize the existing KGC methods into two main groups: non-embedding methods, listed in the upper section of each table, and embedding methods, listed in the lower section. Our observations based on the results are as follows. First, compared to embedding methods, KGDM shows remarkable improvement across all metrics on the four datasets. Specifically, it achieves a 15.8% (43.6% relative), 1.6% (3.2% relative), 13.8% (21.1% relative), and 11.8% (14.9% relative) increase in MRR scores over the best embedding models on the FB15k-237, WN18RR, Kinship, and UMLS datasets, respectively. Second, compared to non-embedding methods, KGDM achieves better results on all metrics on the FB15k-237, Kinship, and UMLS datasets. On WN18RR, KGDM retains its superiority over the other non-embedding methods except for NBFNet. Specifically, KGDM achieves a 10.5% (25.3% relative), 6.1% (8.3% relative), and 6.7% (8.0% relative) increase in MRR scores over the best non-embedding models on the FB15k-237, Kinship, and UMLS datasets, respectively. In conclusion, these results illustrate that by modeling the multiple relation semantics and the distribution of target entities in prediction, KGDM can greatly improve the performance of embedding methods and explore their potential in KGC tasks. Furthermore, we notice that the performance improvement of KGDM on WN18RR is not significant. We attribute this to the higher entity-to-relation ratio (the number of entities divided by the number of relations) of WN18RR (40943/11) compared to the other three datasets: FB15k-237 (14541/237), UMLS (135/46), and Kinship (104/25). The larger the entity-to-relation ratio, the more difficult it is to model the distribution of target entities.
Next, we break down the performance of KGDM by the categories of relations (Wang et al. 2014) in FB15k-237. Table 4 shows the prediction MRR scores for each category (the results for tail-prediction mode can be found in the appendix). It is observed that KGDM shows a greater relative improvement on the 1-N and N-N types. Specifically, it achieves a 5.6% (11.2% relative), 14.8% (56.9% relative), 12.8% (21.3% relative), and 19.0% (40.9% relative) increase in MRR scores over the best embedding models on the 1-1, 1-N, N-1, and N-N types. These results also illustrate that the multiple relation semantics in KGs often degrade the performance of KGE methods; KGDM can effectively alleviate this issue and performs better on the 1-N and N-N types.

Ablation Study
Scoring module of the CEDenoiser. As mentioned above, to better leverage conditions to guide the generative process in conditional DDPM, we design a Scoring Module to process the embeddings of conditions rather than simply concatenating them. In this subsection, we analyze the necessity of the Scoring Module on the FB15k-237 dataset, comparing results with the Scoring Module against directly concatenating the conditions without it. The contributions of the Scoring Module are summarized in Table 5; more details on the ablation study are provided in the appendix. It can be observed that without the Scoring Module, the performance of KGDM drops by about 15% on average, illustrating that the entities and relations in KGs are not independent of each other: simply concatenating their embeddings cannot capture the patterns in the triplets. By defining relation-dependent score functions, the Scoring Module can better utilize the conditional information in the triplets to guide the generation, so the CEDenoiser can generate higher-quality answers that align better with the probabilistic distribution of target entities.
MLP-based architecture of the CEDenoiser. Furthermore, we conduct an ablation study to show that an MLP-based architecture is more suitable for KGs in conditional DDPM. We replace the MLP layers in the CEDenoiser block with two transformer layers.
The results in Table 6 show that the performance of KGDM decreases by nearly 35% when using transformer layers in the CEDenoiser block, indicating that the MLP layers are more suitable for modeling the simple-form triplets in knowledge graphs.

Model | FB15k-237: MRR / H@1 / H@3 / H@10 | WN18RR: MRR / H@1 / H@3 / H@10
Non-embedding methods
PathRank | 0.087 / 7.4 / 9.2 / 11.2 | 0.189 / 17.1 / 20.0 / 22.5
M-Walk | 0.232 / 16.5 / 24.3 / - | 0.437 / 41.4 / 44.5 / -
NeuralLP | 0.237 / 17.3 / 25.9 / 36.1 | 0.381 / 36.8 / 38.6 / 40.8
DRUM | 0.238 / 17.4 / 26.1 / 36.4 | 0.382 / 36.9 / 38.8 / 41.0
RNNLogic | 0.344 / 25.2 / 38.0 / 53.0 | 0.483 / 44.6 / 49.7 / 55.8
RGCN | 0.273 / 18.2 / 30.3 / 45.6 | 0.402 / 34.5 / 43.7 / 49.4
COMPGCN | 0.355 / 26.4 / 39.0 / 53.5 | 0.479 / 44.3 / 49.4 / 54.6
NBFNet | 0.415 / 32.1 / 45.4 / 59.9 | 0.551 / 49.7 / 57.3 / 66.6
IBLE | 0.284 / 20.0 / 31.0 / 45.2 | 0.418 / 40.6 / 42.2 / 44.3
CIBLE | 0.341 / 24.5 / 37.7 / 53.7 | 0.490 / 44.6 / 50.7 / 57.5
Embedding methods
TransE | 0.294 / - / - / 46.5 | 0.226 / - / - / 50.1
KG2E | 0.108 / 4.2 / 11.5 / 24.9 | 0.054 / 0.70 / 7.70 / 13.3
ConvE | 0.325 / 23.7 / 35.6 / 50.1 | 0.430 / 40.0 / 44.0 / 52.0
RotatE | 0.338 / 24.1 / 37.5 / 53.3 | 0.476 / 42.8 / 49.2 / 57.1
HAKE | 0.349 / 25.2 / 38.5 / 54.5 | 0.496 / 45.2 / 51.3 / 58.0
ATTH | 0.348 / 25.2 / 38.4 / 54.0 | 0.486 / 44.3 / 49.9 / 57.3
DualE | 0.365 / 26.8 / 40.0 / 55.9 | 0.492 / 44.4 / 51.3 / 58.4
KG2GM | 0.168 / 9.0 / 18.2 / 32.5 | 0.087 / 0.46 / 18.2 / 29.6
SEA | 0.360 / 26.4 / 39.8 / 54.9 | 0.500 / 45.4 / 51.8 / 59.1
AnKGE-HAKE | 0.385 / 28.8 / 42.8 / 57.2 | 0.500 / 45.4 / 51.5 / 58.7
KGDM (Ours) | 0.520 / 42.3 / 56.6 / 70.8 | 0.516 / 45.7 / 51.9 / 59.3

Table 2: Entity prediction results on FB15k-237 and WN18RR. The best results are in bold and the second best results are underlined.

Model | Kinship: MRR / H@1 / H@3 / H@10 | UMLS: MRR / H@1 / H@3 / H@10
Non-embedding methods
PathRank | 0.369 / 27.2 / 41.6 / 67.3 | 0.197 / 14.8 / 21.4 / 25.2
NeuralLP | 0.302 / 16.7 / 33.9 / 59.6 | 0.483 / 33.2 / 56.3 / 77.5
MINERVA | 0.401 / 23.5 / 46.7 / 76.6 | 0.564 / 42.6 / 65.8 / 81.4
DRUM | 0.334 / 18.3 / 37.8 / 67.5 | 0.548 / 35.8 / 69.9 / 85.4
RNNLogic | 0.722 / 59.8 / 81.4 / 94.9 | 0.842 / 77.2 / 89.1 / 96.5
NBFNet | 0.606 / 43.5 / 72.5 / 93.7 | 0.778 / 68.8 / 84.0 / 93.8
IBLE | 0.615 / 45.9 / 71.7 / 92.8 | 0.816 / 71.7 / 90.0 / 96.1
CIBLE | 0.728 / 60.3 / 82.0 / 95.6 | 0.831 / 74.9 / 89.7 / 97.0
Embedding methods
DistMult | 0.241 / 15.5 / 26.3 / 41.9 | 0.430 / 39.0 / 44.0 / 49.0
ComplEx | 0.247 / 15.8 / 27.5 / 42.8 | 0.440 / 41.0 / 46.0 / 51.0
ComplEx-N3 | 0.605 / 43.7 / 71.0 / 92.1 | 0.791 / 68.9 / 87.3 / 95.7
TuckER | 0.603 / 46.2 / 69.8 / 86.3 | 0.732 / 62.5 / 81.2 / 90.9
RotatE | 0.651 / 50.4 / 75.5 / 93.2 | 0.744 / 63.6 / 82.2 / 93.9
KGDM (Ours) | 0.789 / 68.7 / 87.0 / 97.2 | 0.909 / 87.2 / 93.7 / 97.3

Table 3: Entity prediction results on Kinship and UMLS. The best results are in bold and the second best results are underlined.

Model | 1-1 | 1-N | N-1 | N-N (Head Pred)
TransE | 0.498 | 0.079 | 0.455 | 0.224
TransG | 0.489 | 0.078 | 0.458 | 0.228
RotatE | 0.487 | 0.081 | 0.467 | 0.234
WGCN | 0.422 | 0.093 | 0.454 | 0.261
COMPGCN | 0.457 | 0.112 | 0.471 | 0.275
KGDM | 0.554 | 0.260 | 0.599 | 0.465

Table 4: MRR scores by relation category in FB15k-237.

Model | FB15k-237: MRR / H@1 | Kinship: MRR / H@1
KGDM w/o Scoring Module | 0.449 / 35.0 | 0.635 / 47.9
KGDM w/ Scoring Module | 0.520 / 42.3 | 0.789 / 68.7

Table 5: Ablation on the Scoring Module of the CEDenoiser on FB15k-237 and Kinship.

Hyper-parameters. Next, we conduct ablation experiments on hyper-parameters, including the number of CEDenoiser blocks and the hidden size of the MLP layer in the CEDenoiser block. Figure 2 reports the results on FB15k-237 in terms of MRR scores. In Figure 2(a), we test the performance of different numbers of CEDenoiser blocks. It suggests that a shallow network with a proper number of blocks (3 in this case) is essential for KGDM.
Because the inputs of the CEDenoiser are relatively simple, a very deep network may suffer from overfitting and cause performance drops. Figure 2(b) studies the influence of the hidden size of the MLP layer in the CEDenoiser block. We observe that increasing the hidden size helps to improve the performance of the model, but a size that is too large (e.g., 2400) harms performance.

Figure 2: Hyper-parameter analysis on FB15k-237 (MRR): (a) number of CEDenoiser blocks; (b) hidden size.

Model | FB15k-237: MRR / H@1 | Kinship: MRR / H@1
KGDM w/ transformer-based | 0.334 / 24.0 | 0.661 / 52.2
KGDM w/ MLP-based | 0.520 / 42.3 | 0.789 / 68.7

Table 6: Ablation on the MLP-based architecture of the CEDenoiser on FB15k-237 and Kinship.

Case Study
Finally, we explore how KGDM performs better on triplets of the 1-N category, using an example from FB15k-237. The triplet to be predicted is (road running, /olympics/olympic_sport/country, ?). For this query, there are six countries as answers in the test dataset. We visualize the true entity embeddings and the prediction embeddings computed by KGDM in Figure 3(a), and the prediction computed by a trained TransE in Figure 3(b). From this visualization, we observe that the embedding predicted by TransE is a deterministic "point" in the vector space, computed as $f_r^{\mathrm{TransE}}(t) = h + r$. In effect, the embedding predicted by TransE can be seen as the "average" of the candidate answers, which neglects the diversity of target entities, so this "average result" is usually far away from the true entities. In contrast, because KGDM models the distribution of target entities in prediction, the entities generated from the conditioning entities and relations are not unique, and most of them lie closer to the true entities.

Figure 3: t-SNE visualization of the predictions for (road running, /olympics/olympic_sport/country, ?) computed by (a) KGDM and (b) TransE.

Conclusion
To better solve KGC tasks by considering the multiple relation semantics in KGs, this paper proposes a novel conditional diffusion model, the Knowledge Graph Diffusion Model (KGDM), which models the multiple relation semantics in prediction for KGE methods. Different from existing methods that learn probabilistic embeddings, KGDM casts the entity prediction task as a conditional entity generation task and models the probabilistic distribution of target entities. Extensive experiments demonstrate that KGDM significantly outperforms existing state-of-the-art methods on benchmark datasets for KGC tasks.

Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grants No. U20B2070, No. 61976199, No. 61836011, and No. 72293573.

References
Austin, J.; Johnson, D. D.; Ho, J.; Tarlow, D.; and van den Berg, R. 2021. Structured denoising diffusion models in discrete state-spaces. Advances in Neural Information Processing Systems, 34: 17981–17993.
Balažević, I.; Allen, C.; and Hospedales, T. M. 2019. TuckER: Tensor factorization for knowledge graph completion. arXiv preprint arXiv:1901.09590.
Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; and Yakhnenko, O. 2013. Translating embeddings for modeling multi-relational data. Advances in Neural Information Processing Systems, 26.
Cao, Z.; Xu, Q.; Yang, Z.; Cao, X.; and Huang, Q. 2021. Dual quaternion knowledge graph embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 6894–6902.
Cao, Z.; Xu, Q.; Yang, Z.; Cao, X.; and Huang, Q. 2022. Geometry interaction knowledge graph embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 5521–5529.
Chami, I.; Wolf, A.; Juan, D.-C.; Sala, F.; Ravi, S.; and Ré, C. 2020. Low-dimensional hyperbolic knowledge graph embeddings. arXiv preprint arXiv:2005.00545.
Cui, W.; and Chen, X. 2022. Instance-based Learning for Knowledge Base Completion. Advances in Neural Information Processing Systems, 35: 30744–30755.
Das, R.; Dhuliawala, S.; Zaheer, M.; Vilnis, L.; Durugkar, I.; Krishnamurthy, A.; Smola, A.; and McCallum, A. 2017. Go for a walk and arrive at the answer: Reasoning over paths in knowledge bases using reinforcement learning. arXiv preprint arXiv:1711.05851.
Dettmers, T.; Minervini, P.; Stenetorp, P.; and Riedel, S. 2018. Convolutional 2D knowledge graph embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
Dhariwal, P.; and Nichol, A. 2021. Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems, 34: 8780–8794.
Feng, W.; Zha, D.; Guo, X.; Dong, Y.; and He, Y. 2021. Representing Knowledge Graphs with Gaussian Mixture Embedding. In Knowledge Science, Engineering and Management: 14th International Conference, KSEM 2021, Tokyo, Japan, August 14–16, 2021, Proceedings, Part I, 166–178. Springer.
Gong, S.; Li, M.; Feng, J.; Wu, Z.; and Kong, L. 2022. DiffuSeq: Sequence to sequence text generation with diffusion models. arXiv preprint arXiv:2210.08933.
Gregucci, C.; Nayyeri, M.; Hernández, D.; and Staab, S. 2023. Link prediction with attention applied on multiple knowledge graph embedding models. In Proceedings of the ACM Web Conference 2023, 2600–2610.
Hao, Y.; Zhang, Y.; Liu, K.; He, S.; Liu, Z.; Wu, H.; and Zhao, J. 2017. An end-to-end model for question answering over knowledge base with cross-attention combining global knowledge. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 221–231.
He, S.; Liu, K.; Ji, G.; and Zhao, J. 2015. Learning to represent knowledge graphs with Gaussian embedding. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, 623–632.
Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33: 6840–6851.
Ho, J.; and Salimans, T. 2022. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598.
Hu, Z.; Dong, Y.; Wang, K.; and Sun, Y. 2020. Heterogeneous graph transformer. In Proceedings of The Web Conference 2020, 2704–2710.
Lacroix, T.; Usunier, N.; and Obozinski, G. 2018. Canonical tensor decomposition for knowledge base completion. In International Conference on Machine Learning, 2863–2872. PMLR.
Lee, S.; Park, S.; Kahng, M.; and Lee, S.-g. 2013. PathRank: Ranking nodes on a heterogeneous graph for flexible hybrid recommender systems. Expert Systems with Applications, 40(2): 684–697.
Li, X.; Thickstun, J.; Gulrajani, I.; Liang, P. S.; and Hashimoto, T. B. 2022. Diffusion-LM improves controllable text generation. Advances in Neural Information Processing Systems, 35: 4328–4343.
Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G. S.; and Dean, J. 2013. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 26.
Peebles, W.; and Xie, S. 2022. Scalable Diffusion Models with Transformers. arXiv preprint arXiv:2212.09748.
Qu, M.; Chen, J.; Xhonneux, L.-P.; Bengio, Y.; and Tang, J. 2020. RNNLogic: Learning logic rules for reasoning on knowledge graphs. arXiv preprint arXiv:2010.04029.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10684–10695.
Sadeghian, A.; Armandpour, M.; Ding, P.; and Wang, D. Z. 2019. DRUM: End-to-end differentiable rule mining on knowledge graphs. Advances in Neural Information Processing Systems, 32.
Schlichtkrull, M.; Kipf, T. N.; Bloem, P.; Van Den Berg, R.; Titov, I.; and Welling, M. 2018. Modeling relational data with graph convolutional networks. In The Semantic Web: 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3–7, 2018, Proceedings 15, 593–607. Springer.
Shen, Y.; Chen, J.; Huang, P.-S.; Guo, Y.; and Gao, J. 2018. M-Walk: Learning to walk over graphs using Monte Carlo tree search. Advances in Neural Information Processing Systems, 31.
Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; and Ganguli, S. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, 2256–2265. PMLR.
Sun, Z.; Deng, Z.-H.; Nie, J.-Y.; and Tang, J. 2019. RotatE: Knowledge graph embedding by relational rotation in complex space. arXiv preprint arXiv:1902.10197.
Toutanova, K.; and Chen, D. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, 57–66.
Trouillon, T.; Welbl, J.; Riedel, S.; Gaussier, É.; and Bouchard, G. 2016. Complex embeddings for simple link prediction. In International Conference on Machine Learning, 2071–2080. PMLR.
Vashishth, S.; Sanyal, S.; Nitin, V.; and Talukdar, P. 2019. Composition-based multi-relational graph convolutional networks. arXiv preprint arXiv:1911.03082.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
Wang, Q.; Li, B.; Xiao, T.; Zhu, J.; Li, C.; Wong, D. F.; and Chao, L. S. 2019. Learning deep transformer models for machine translation. arXiv preprint arXiv:1906.01787.
Wang, Z.; Zhang, J.; Feng, J.; and Chen, Z. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 28.
Xiao, H.; Huang, M.; Hao, Y.; and Zhu, X. 2015. TransG: A generative mixture model for knowledge graph embedding. arXiv preprint arXiv:1509.05488.
Xiong, C.; Power, R.; and Callan, J. 2017. Explicit semantic ranking for academic search via knowledge graph embedding. In Proceedings of the 26th International Conference on World Wide Web, 1271–1279.
Yang, B.; Yih, W.-t.; He, X.; Gao, J.; and Deng, L. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575.
Yang, F.; Yang, Z.; and Cohen, W. W. 2017. Differentiable learning of logical rules for knowledge base reasoning. Advances in Neural Information Processing Systems, 30.
Yao, Z.; Zhang, W.; Chen, M.; Huang, Y.; Yang, Y.; and Chen, H. 2023. Analogical inference enhanced knowledge graph embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 4801–4808.
Zhang, F.; Yuan, N. J.; Lian, D.; Xie, X.; and Ma, W.-Y. 2016. Collaborative knowledge base embedding for recommender systems. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 353–362.
Zhang, Z.; Cai, J.; Zhang, Y.; and Wang, J. 2020. Learning hierarchy-aware knowledge graph embeddings for link prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 3065–3072.
Zhu, Z.; Zhang, Z.; Xhonneux, L.-P.; and Tang, J. 2021. Neural Bellman-Ford networks: A general graph neural network framework for link prediction. Advances in Neural Information Processing Systems, 34: 29476–29490.
Deep Hierarchical Video Compression
Ming Lu1,2, Zhihao Duan3, Fengqing Zhu3, and Zhan Ma1*
1School of Electronic Science and Engineering, Nanjing University
2Interdisciplinary Research Center for Future Intelligent Chips (Chip-X), Nanjing University
3Elmore Family School of Electrical and Computer Engineering, Purdue University
[email protected], [email protected], [email protected], [email protected]
*Corresponding Author

Abstract
Recently, probabilistic predictive coding, which directly models the conditional distribution of latent features across successive frames for temporal redundancy removal, has yielded promising results. Existing methods using a single-scale Variational AutoEncoder (VAE) must devise complex networks for conditional probability estimation in latent space, neglecting the multiscale characteristics of video frames. Instead, this work proposes hierarchical probabilistic predictive coding, for which hierarchical VAEs are carefully designed to characterize multiscale latent features as a family of flexible priors and posteriors to predict the probabilities of future frames. Under such a hierarchical structure, lightweight networks are sufficient for prediction. The proposed method outperforms representative learned video compression models on common testing videos and demonstrates computational friendliness with a much smaller memory footprint and faster encoding/decoding. Extensive experiments on adaptation to temporal patterns also indicate the better generalization of our hierarchical predictive mechanism. Furthermore, our solution is the first to enable progressive decoding, which is favored in networked video applications with packet loss.

Introduction
Deep learning breathes fresh life into the visual signal (e.g., image and video) compression community that has been dominated by handcrafted codecs for decades (Wallace 1991; Marcellin et al. 2000; Wiegand et al. 2003; Sullivan et al. 2012; Bross et al. 2021). Instead of manually designing and optimizing individual modules such as transforms, mode selection, and quantization in traditional codecs, data-driven approaches adopt end-to-end learning of neural networks (Ballé, Laparra, and Simoncelli 2016; Theis et al. 2017). Despite their conceptual simplicity, learned image compression methods have achieved superior rate-distortion performance, surpassing the latest VVC (Versatile Video Coding (Bross et al. 2021)) intra codec (He et al. 2022; Lu et al. 2022). For videos, however, learning-based methods are still not free from the shackles of the traditional hybrid framework.

Figure 1: Interframe coding using (a) hybrid motion & residual coding, (b) single-scale probabilistic predictive coding, and (c) hierarchical probabilistic predictive coding (Ours).

Most existing methods follow the two-stage pipeline shown in Fig. 1a: code motion flows first, and then the residual between the current and the motion-warped frame, either in an explicit (Lu et al. 2019) or conditional (Li, Li, and Lu 2021) manner. This framework is usually cumbersome in design (for example, separate models for intraframe coding, inter residual coding, motion coding, and motion estimation are required); thus, extensive hyperparameter tuning is necessary.
Furthermore, inaccurate motion-induced warping error inevitably propagates across temporal frames, gradually degrading the quality of reconstructed frames over time. As a promising solution to the problems mentioned earlier, (latent-space) probabilistic predictive coding attempts to reduce temporal redundancy by conditionally predicting future frames in a one-shot manner. Intuitively, if the current frame can be well predicted from the past frames, motion (e.g., flow) estimation and compensation can be completely exempted, and the aforementioned error propagation can also be eliminated accordingly. Recently, Mentzer et al. (Mentzer et al. 2022) proposed a probabilistic predictive video coding framework named Video Compression Transformer (VCT). Under the VAE-based image compression framework, VCT models the latent features of the current frame conditioned on the previous-frame latent features using a transformer-based temporal entropy model. Though VCT outperforms many previous video coding methods, its conditional prediction of single-scale latent features at 1/16 resolution of the original frame (Fig. 1b) fundamentally constrains its characterization capacity and ignores the multiscale characteristics of video frames.
This paper proposes a hierarchical probabilistic predictive coding method, termed DHVC, in which the conditional probabilities of multiscale latent features of future frames are effectively modeled using deliberately designed, powerful hierarchical VAEs. The latent distribution at a certain scale in the current frame is predicted by the prior features from previous scales in the same frame and the corresponding scale of the previous frames. Doing so gives us a powerful and efficient modeling ability to characterize arbitrary feature distributions. For instance, Mentzer et al. (Mentzer et al. 2022) relied on a complicated prediction in a block-level autoregressive manner, which is inefficient. Instead, we perform a multi-stage conditional probability prediction, showing better performance while requiring less complexity. Under extensive evaluations using commonly used video sequences, our method outperforms well-known learned models using hybrid motion and residual coding as well as the previous state-of-the-art method using latent probabilistic predictive coding. Extensive studies on the adaptation to various temporal patterns also reveal the generalization of our hierarchical predictive mechanism. In addition, our method supports temporal progressive decoding, being the first learned progressive video coding method to the best of our knowledge. Therefore, it can handle, to some extent, packet losses induced by poor network connections. Our contributions can be summarized as follows:
• We propose a hierarchical probabilistic prediction model for video coding. Our model employs a collection of multiscale latent variables to represent the coarse-to-fine nature of video frames scale-wisely.
• We propose the spatial-temporal prediction and in-loop decoding fusion modules, which enable better performance, lower memory consumption, and faster encoding/decoding than the previous best probabilistic predictive coding-based method (Mentzer et al. 2022).
• Experiments demonstrate that our method generalizes better to various temporal patterns. Our model is also the first to support the functionality of progressive decoding.
Related Work
We briefly review end-to-end learned video coding methods, including classical hybrid motion and residual coding and the recently emerged probabilistic predictive coding. We also theoretically explain the hierarchical VAE formalism, as it provides the basis for our method.

Learned Video Coding
Data compression and variational autoencoders (VAEs): Let x denote data (e.g., an image or video) with an unknown distribution. Traditional image/video coding belongs to transform coding, where one wants to find an encoder $f_e$, a decoder $f_d$, and an entropy model for the transform coefficients such that the rate-distortion cost is minimized:
$$\min\ H(f_e(x)) + \lambda \cdot d(x, f_d(f_e(x))). \quad (1)$$
Here, the first term is the (cross-)entropy of the compressed coefficients approximating the rate, d is a distortion function, and $\lambda$ is the Lagrange multiplier that balances the rate-distortion tradeoff. As studied in (Ballé et al. 2018; Duan et al. 2023), transform coding can be equivalently considered as data distribution modeling using variational autoencoders (VAEs). Specifically, VAEs assume a model of the data:
$$p(x, z) = p(x \mid z) \cdot p(z), \quad (2)$$
where z denotes latent variables such as transform coefficients. In a VAE, a prior p(z) describes the distribution of the latent variable, a decoder p(x | z) maps latent-space elements to the original data-space signal, and an approximate posterior q(z | x) (i.e., the encoder) encodes data into the latent space. Letting $\hat{x} \sim p(x \mid z)$ denote the reconstruction, the objective can be written as (Yang and Mandt 2022; Duan et al. 2023)
$$\min\ D_{KL}(q(z \mid x)\ \|\ p(z)) + \lambda \cdot d(x, \hat{x}), \quad (3)$$
and if the posterior q(z | x) is deterministic and discrete (e.g., when quantization is applied to z), this VAE objective equals the rate-distortion optimization in Eq. (1). Such a connection has inspired many subsequent works to apply VAE-based probabilistic methods to compression tasks, such as (Yang, Bamler, and Mandt 2020b; Agustsson and Theis 2020; Yang, Bamler, and Mandt 2020a; Theis and Ahmed 2022; Ryder et al. 2022; Chen et al. 2022). Learned video coding methods can generally be categorized into two groups: hybrid motion & residual coding and probabilistic predictive coding.
Hybrid Motion & Residual Coding refers to the classical coding framework with motion and residual processing. Lu et al. (Lu et al. 2019) first proposed to use two similar VAE-based networks to code the motion and residuals, respectively, which was then enhanced with better motion alignment in (Lu et al. 2020; Liu et al. 2020a). Then, Hu et al. (Hu, Lu, and Xu 2021) migrated the motion alignment to the feature domain and achieved better compression performance. Recently, by converting residual coding to conditional coding of aligned features, Li et al. (Li, Li, and Lu 2021) took learned video coding to a new level of performance. Subsequently, by further integrating multi-scale aligned feature fusion, post-processing, and bitrate allocation, learned video coding algorithms achieved unprecedented compression efficiency, surpassing the latest VVC (Li, Li, and Lu 2022).
Probabilistic Predictive Coding is an emerging video coding method. Liu et al. (Liu et al. 2020b) relied on stacked convolutions for latent distribution prediction, while VCT (Mentzer et al. 2022) adopted a Transformer for the same purpose. Both works perform temporally conditional distribution prediction only using single-scale latent variables (i.e., at 1/16 of the original resolution), which greatly constrains the accuracy of probability estimation and leads to sub-optimal predictive performance. Therefore, in this paper, we propose a hierarchical probabilistic predictive coding method, which substantially improves the accuracy and efficiency of temporal prediction by characterizing multiscale latent features for conditional probability estimation in a coarse-to-fine approach.
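To connect Eq. (3) back to practical rate-distortion training, a minimal sketch with a deterministic discrete posterior is shown below; the straight-through rounding and the encoder/decoder/prior stubs are our assumptions, not the exact modules of any cited codec.

```python
import math
import torch
import torch.nn.functional as F

def rd_loss(x, encoder, decoder, prior_logp, lam):
    """Eq. (3) with a deterministic discrete posterior: the KL term reduces
    to the cross-entropy (in bits) of the quantized latent under the prior."""
    z = encoder(x)
    z_hat = z + (torch.round(z) - z).detach()      # straight-through rounding
    bits = -prior_logp(z_hat).sum() / math.log(2.0)
    x_hat = decoder(z_hat)
    return bits / x.numel() + lam * F.mse_loss(x_hat, x)
```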
Hierarchical VAEs
To improve the flexibility and expressiveness of the single-scale VAE, hierarchical VAEs (Kingma et al. 2016; Child 2020; Vahdat and Kautz 2020) employ multiscale latent variables, denoted by $Z = \{z_1, ..., z_L\}$. Accordingly, the latent priors can be factorized as
$$p(Z) = \prod_{l=1}^{L} p(z_l \mid Z_{<l}), \quad (4)$$
where L is the total number of hierarchical scales, and $Z_{<l}$ denotes $\{z_1, ..., z_{l-1}\}$. Typically, $z_1$ has the smallest dimension, while $z_L$ has the largest. Such a dimensional refinement from a lower scale to a higher one improves the flexibility of VAEs and effectively captures the coarse-to-fine characteristics of images. Among popular hierarchical VAE architectures, the ResNet VAE (Kingma et al. 2016) provides the most promising performance in terms of image modeling. Different from the Hyperprior VAE used in (Ballé et al. 2018), each latent variable $z_l$ of the ResNet VAE is conditioned on all $Z_{<l}$, and its encoding is bi-directional, depending on both x and $Z_{<l}$. This might explain the fact that the ResNet VAE can be scaled up to more than 70 layers (Child 2020). The loss function for training ResNet VAEs can be extended from Eq. (3) for supervising multiscale latents:
$$\min\ \sum_{l=1}^{L} D_{KL}(q_l\ \|\ p_l) + \lambda \cdot d(x, \hat{x}), \quad (5)$$
where $q_l$ and $p_l$ are shorthand notations for the posterior and prior of the l-th scale latent variable, i.e.,
$$q_l = q(z_l \mid x, Z_{<l}), \quad \text{and} \quad p_l = p(z_l \mid Z_{<l}). \quad (6)$$
Our proposed method is developed based on the ResNet VAE structure by further introducing temporal priors, in addition to the deliberate design for practical compression.

Preliminary: Predictive Video Coding
Suppose a video sequence $X = \{x_1, ..., x_T\}$ contains T frames for encoding. As a convention for predictive coding methodologies (Liu et al. 2020b; Mentzer et al. 2022), an analysis encoder followed by quantization is first applied to transform each input frame $x_t$ into a discrete latent representation $z_t$ with reduced resolution. A symmetrical decoder is then used to recover the reconstruction $\hat{x}_t$ from $z_t$. Given a probability mass function (PMF) $p(z_t)$ that estimates the true distribution of symbols in $z_t$, we can obtain the bits required for transmission by entropy coding. The main idea of probabilistic predictive coding is to parameterize $p(z_t)$ as a conditional distribution
$$p(z_t) = p(z_t \mid Z_{<t}), \quad (7)$$
where $Z_{<t}$ is a set of latent features preceding time t. By exploiting the temporal redundancy across frames using an efficient prediction network, one can obtain more accurate probability estimation for the current frame to reduce cross-entropy and thus maximize coding efficiency. To this end, VCT (Mentzer et al. 2022) introduces a Transformer model in a block-level autoregressive manner to model $p(z^i_t \mid z^{<i}_t, Z_{<t})$, jointly appreciating spatial and temporal correlation. Here, $z^i_t$ corresponds to a pixel at position i within a predefined block in the latent space of the current frame, and the $z^{<i}_t$ are autoregressive neighbors previously processed. Although decent compression performance is achieved, VCT only performs conditional probability prediction using single-scale latent variables, ignoring the multiscale characteristics in both spatial and temporal domains. Therefore, the prediction is sub-optimal, and the complexity of the prediction network is usually unaffordable.
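A minimal sketch of how the factorized rate of Eq. (4), extended with temporal conditioning, would be accumulated over scales is given below; the per-scale prior callables and context handling are illustrative assumptions.

```python
import math

def frame_bits(latents, priors, temporal_ctx):
    """Rate under the factorization of Eq. (4), extended temporally:
    bits = sum_l -log2 p(z_l | Z_{<l}, Z^l_{<t})."""
    bits, spatial_ctx = 0.0, []
    for l, z in enumerate(latents):               # coarse -> fine
        logp = priors[l](z, spatial_ctx, temporal_ctx[l])
        bits -= float(logp.sum()) / math.log(2.0)
        spatial_ctx.append(z)                     # z_l conditions finer scales
    return bits
```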
Proposed Method
Network Architecture Overview
Figure 2a depicts the overall framework of our method, which consists of a bottom-up path and a top-down path. Given an input frame $x_t$, the bottom-up path produces a set of features $R_t = \{r^1_t, ..., r^L_t\}$ at respective 1/64, 1/32, 1/16, and 1/8 resolutions of the original input, through downsampling and feature aggregation/embedding using residual blocks (ResBlocks). Different from (Chen et al. 2022), the $r^l_t$ at each scale depends on both the feature extracted from $x_t$ and the decoded feature from the lower scale, which turns our method into conditional residual coding to minimize bitrate consumption. These residual features $R_t$ are subsequently sent to the top-down path for hierarchical probabilistic modeling. The top-down path starts from two learnable constant biases and then encodes a sequence of latent variables $Z_t = \{z^1_t, ..., z^L_t\}$ (in the Latent Blocks) to produce the respective prior feature $f^l_t$ and reconstructive feature $d^l_t$ scale by scale. In the end, $\hat{x}_t$ is reconstructed by passing the last reconstructive representation $d^L_t$ through multiple upsampling and ResBlock layers. The down-sampling operations (↓) are implemented by strided convolution, and the up-sampling operations (↑) by a 1 × 1 convolution followed by pixel shuffle. ConvNeXt (Liu et al. 2022) units are adopted in the ResBlocks. More details can be found in the supplementary materials.

Predictive Coding Modules
We now detail the architecture of the Latent Block (Fig. 2b), which is critical to the effectiveness of our approach. As in ResNet VAEs, each Latent Block adds "information", carried by the latent variable $z^l_t$, into the top-down path features. We substantially extend it by introducing (1) a Spatial-Temporal Prediction module for predictive coding and (2) an In-loop Decoding Fusion module to improve coding performance, which are described below one by one.
Spatial-Temporal Prediction (Fig. 2c): To predict $z^l_t$ at the l-th scale, we combine the same-scale temporal priors $Z^l_{<t}$ with the spatial prior $f^{l-1}_t$ from previous scales to produce the prior distribution parameters. We begin with the Temporal Fusion by passing the temporal priors $Z^l_{<t}$ through stacked ResBlocks, with skip connections at each level. Then, the spatial prior feature $f^{l-1}_t$ is concatenated with the fused temporal information for the subsequent Conditional Generation to get the contextual feature $c^l_t$ and the prior distribution parameters, i.e., the mean $\hat{\mu}^l_t$ and scale $\hat{\sigma}^l_t$.
Figure 2: (a) Overall Architecture, (b) Latent Block, and (c) Spatial-Temporal Prediction Module of our proposed DHVC. ↓ and ↑ are the respective downscaling and upscaling operations. "C" represents concatenation, "A" represents addition, and "Q" represents quantization. The convolutional layer "Conv" is used for feature re-dimensioning.

In-loop Decoding Fusion (on the right of Fig. 2b): Two distinct features are generated during the decoding process: the prior feature $f^l_t$, utilized as the spatial prior for the subsequent scale, and the reconstructive feature $d^l_t$, used for the eventual reconstruction. In our implementation, we concatenate the previously decoded feature $d^{l-1}_t$ and the contextual feature $c^l_t$, along with $f^l_t$, to generate the fused result $d^l_t$. This design represents a notable departure from the original ResNet VAE framework, which employs a single top-down path feature for both the prior and reconstruction purposes. In our method, $f^l_t$ solely handles conditional distribution modeling, whereas $d^l_t$ is responsible for reconstruction. By leveraging the dependable contextual feature $c^l_t$, we achieve a desirable decoded $d^l_t$ while effectively conserving bitrate. Note that our framework requires only a single model for intra- and inter-frame coding. For intra coding, the temporal priors $Z^l_{<t}$ at each scale are set using learnable constant biases, while for inter coding we use the two previous latents $z^l_{t-1}$ and $z^l_{t-2}$ to make up $Z^l_{<t}$.

Probabilistic Model and Loss Function
With these neural network modules, our framework effectively extends hierarchical VAEs to predictive video coding. To support practical lossy compression using feasible entropy coding algorithms, we follow previous works (Ballé et al. 2018; Duan et al. 2023) and apply quantization-aware training using uniform posteriors. Specifically, we adopt a hybrid quantization strategy at training time to simulate the quantization error: additive uniform noise is applied for rate estimation, while the straight-through rounding operation is used for reconstruction. We use uniform quantization at test time. For the prior, we use a Gaussian distribution convolved with a uniform distribution, which is flexible enough to match the posterior.
Posteriors: The (approximate) posterior for the l-th latent variable, $z^l_t$, is defined to be a uniform distribution:
$$q(z^l_t \mid x_t, Z^{<l}_t) = U(\mu^l_t - \tfrac{1}{2},\ \mu^l_t + \tfrac{1}{2}), \quad (8)$$
where $\mu^l_t$ is the output of the posterior branch in the latent block (see Fig. 2b), obtained by merging the embedded feature $r^l_t$ and the prior feature $f^{l-1}_t$ from the previous level. The discrete $z^l_t$ depends on the frame $x_t$ as well as the previous-level latent variables $Z^{<l}_t$.
Priors: Our framework extends the ResNet VAE to predictive video coding by conditioning the prior distributions for $z^l_t$ on the temporal latent variables $Z^l_{<t}$. At each timestep, considering L levels of latent variables $Z_t = \{z^1_t, ..., z^L_t\}$, the latent conditional distribution can be factorized as
$$p(Z_t \mid Z_{<t}) = \prod_{l=1}^{L} p(z^l_t \mid Z^{<l}_t, Z^l_{<t}), \quad (9)$$
Then, the prior distribution for each $z^l_t$ is defined as a Gaussian convolved with a uniform distribution:
$$p(z^l_t \mid Z^{<l}_t, Z^l_{<t}) = \mathcal{N}(\hat{\mu}^l_t, (\hat{\sigma}^l_t)^2) * U(-\tfrac{1}{2}, \tfrac{1}{2}), \quad (10)$$
where $\mathcal{N}(\hat{\mu}^l_t, (\hat{\sigma}^l_t)^2)$ denotes the Gaussian probability density function. The mean $\hat{\mu}^l_t$ and scale $\hat{\sigma}^l_t$ are predicted by the prior branch in the latent block; note that they depend both on the latent variables from previous time steps $Z^l_{<t}$ at that level and on the latent variables of the previous levels at the current timestep $Z^{<l}_t$.
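The posterior of Eq. (8) and the prior of Eq. (10) translate into the familiar noise-based rate estimate used for quantization-aware training: the probability of the noisy latent under a Gaussian convolved with a unit-width uniform is evaluated through the Gaussian CDF. A sketch follows; tensor shapes and the clamping constant are assumptions.

```python
import torch
from torch.distributions import Normal

def latent_rate(mu_post, mu_prior, sigma_prior):
    """Quantization-aware rate for one scale: z ~ U(mu - 1/2, mu + 1/2)
    (Eq. (8)) scored under N(mu_hat, sigma_hat^2) * U(-1/2, 1/2) (Eq. (10)),
    i.e., p(z) = Phi(z + 1/2) - Phi(z - 1/2)."""
    z = mu_post + torch.empty_like(mu_post).uniform_(-0.5, 0.5)  # noisy proxy
    gauss = Normal(mu_prior, sigma_prior)
    p = gauss.cdf(z + 0.5) - gauss.cdf(z - 0.5)
    bits = -torch.log2(p.clamp_min(1e-9)).sum()
    return z, bits
```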
Figure 3: Compression efficiency comparison using rate-distortion (R-D) curves (PSNR vs. bpp) on (a) UVG, (b) MCL-JCV, (c) HEVC Class B, (d) HEVC Class C, (e) HEVC Class D, and (f) HEVC Class E, for DHVC (Ours), VCT, DCVC, RLVC, MLVC, DVC-Pro, x265, and HM-16.26.

Training Objective: Typically, hybrid motion and residual coding methods require multi-stage or simultaneous optimization of the optical flow, motion coding, and residual coding networks during the training phase. In contrast, training our model is as easy as optimizing a lossy image codec. The loss function extends Eq. (5) with the inclusion of temporal dependency:
$$\mathcal{L} = \min\ \sum_{l=1}^{L} -\log_2 p(z^l_t \mid Z^{<l}_t, Z^l_{<t}) + \lambda \cdot d(x_t, \hat{x}_t), \quad (11)$$
The first term consists of the rate for all latent variables. The second term corresponds to the reconstruction distortion, commonly chosen to be the Mean Squared Error (MSE) or MS-SSIM (Wang, Simoncelli, and Bovik 2003) loss for videos. The multiplier $\lambda$, which trades off rate and distortion, is pre-determined and fixed throughout training. At test time, $\lambda$ is the same for both intra and inter frames, and the actual bitrates are determined by the accuracy of the conditional probabilistic modeling.
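Putting the pieces together, a per-frame training step implementing Eq. (11) might look as follows, reusing latent_rate from the previous snippet; the model interface and the MSE distortion choice are illustrative assumptions.

```python
import torch.nn.functional as F

def dhvc_loss(frame, model, temporal_ctx, lam):
    """Eq. (11): sum of per-scale rates plus lambda-weighted distortion."""
    post_means, prior_params, recon = model(frame, temporal_ctx)
    total_bits = 0.0
    for mu_q, (mu_p, sigma_p) in zip(post_means, prior_params):
        _, bits = latent_rate(mu_q, mu_p, sigma_p)   # per-scale rate
        total_bits = total_bits + bits
    bpp = total_bits / (frame.shape[-2] * frame.shape[-1])
    return bpp + lam * F.mse_loss(recon, frame)
```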
Experimental Results
Implementation Settings
Datasets: We use the popular Vimeo-90K (Xue et al. 2019) dataset to train our model, which consists of 64,612 video samples. Training batches comprise sequential frames that are randomly cropped to 256 × 256. Commonly used test datasets, i.e., UVG (Mercat, Viitanen, and Vanne 2020), MCL-JCV (Wang et al. 2016), and HEVC Classes B, C, D, and E (Bossen et al. 2013), are used for evaluation. They cover various scene variations, with resolutions from 416 × 240 to 1920 × 1080. Test sequences in YUV420 format are pre-processed following the suggestions in (Sheng et al. 2022) to generate RGB frames as the input of learned models. The first 96 frames of each video are used for evaluation, and the group-of-pictures (GOP) size is set to 32. These settings are the same as in other learned video coding methods.
Training details: We progressively train our model for fast convergence. First, the model is trained to encode a single frame independently for 2M iterations by setting the temporal prior at each scale level as a learnable bias. Then, we train the aforementioned model for 500K steps using three successive frames, with temporal priors hierarchically generated from previously decoded frames. In the end, another 100K steps are applied to fine-tune the model using five successive frames, by which it better captures long-term temporal dependence (Liu et al. 2020a). We set $\lambda$ from {256, 512, 1024, 2048} and {4, 8, 16, 32} for the MSE- and MS-SSIM-optimized models, respectively, to cover wide rate ranges. Adam (Kingma and Ba 2014) is the optimizer, with the learning rate at $10^{-4}$. Our model is trained using two Nvidia RTX 3090 GPUs, and the batch size is fixed at eight.

Evaluation
All the evaluation experiments are performed under the low-delay configuration. We choose x265 (https://www.videolan.org/developers/x265.html) and HM-16.26 (https://hevc.hhi.fraunhofer.de) as the benchmarks of traditional video codecs; both use the default configuration. The detailed codec settings can be found in the supplementary materials. For learned video coding methods, we compare with representative algorithms using the hybrid motion & residual coding method, i.e., DVC-Pro (Lu et al. 2020), MLVC (Lin et al. 2020), RLVC (Yang et al. 2020), and DCVC (Li, Li, and Lu 2021), as well as the best-performing probabilistic predictive coding model, VCT (Mentzer et al. 2022).
Thus, the downsampling-induced information loss is more critical, potentially leading to inaccurate correlation characterization and subsequent compression. Such a hypothesis is well justified when comparing the performance between the VCT and our method. VCT shows a great performance drop for those low-resolution datasets, even worse than the earliest learned method DVC-Pro in middle and high bitrate ranges. This is because VCT conducts the conditional probability estimation upon single-scale latent features at 1/16 resolution of the original input. Instead, our method performs conditional probability estimation using multi-scale latent variables at respective 1/64, 1/32, 1/16, and 1/8 resolutions. Such a hierarchical mechanism greatly improves performance by thoroughly exploiting the coarseto-fine natural characteristics of video frames. Due to the space limitation, results for learned models 0.05 0.10 0.15 0.20 Bpp 33 34 35 36 37 38 PSNR UVG Baseline Baseline + TP Baseline + TP + DF Baseline + TP + DF + LT (Ours) Figure 4: Performance contribution of modular components. trained using MS-SSIM loss can be found in the supplemental materials, which show a clear advantage of our method over both the traditional and learned video codecs. Complexity: Evaluation results with 1080p videos are listed in Table 1. Except for the model size, our method shows clear advantages for other metrics, reporting the least requirements of respective kMACs per pixel, peak memory consumption, encoding, and decoding time. This also suggests that the model size is not closely related to the computational complexity of running codecs in practice. The sizeable parameters used in our method are mainly attributed to using basic ConvNeXt units to form the ResBlocks and Latent Blocks (see Fig. 2a). Our DHVC shows a clear reduction in kMACs per pixel and peak memory occupation, owing to the use of simple probabilistic prediction modules instead of complicated Transformer-based prediction network in VCT or complicated motion and residual coding modules in DCVC. For encoding and decoding time, as the DCVC applies the pixel-wise spatial autoregressive model for entropy coding, it takes about 17.86 and 40.64 seconds, which is unacceptable for practical codecs. VCT, instead, uses a simplified 4×4 block-level spatial autoregressive model, offering faster encoding (decoding) to DCVC. Our method completely removes the use of a spatial autoregressive model through the proposed hierarchical processing pipeline, which further reduces the encoding and decoding time to respective 0.25 and 0.21 seconds, i.e., 55×/221× (6×/7×) faster encoding/decoding than the DCVC and VCT respectively. Deep Dive We perform ablation studies to understand the capacity of our proposed DHVC better. Modular Contribution: We further examine the contribution of each module in our proposed DHVC in Fig. 4. “Baseline” denotes the model disabling both the temporal prediction and in-loop decoding fusion in latent blocks, with only the spatial prior from previous scales for probabilistic modeling. “Baseline + TP” indicates the temporal probabilistic prediction is integrated to reduce the temporal redundancy. Apparently, the performance with the support of temporal information improves significantly upon the base model. Furthermore, with the help of in-loop decoding fusion module, dubbed by “Baseline + TP +DF” in the figure, an averaged 1 dB PSNR increase is obtained. 
Adaptation Capacity to Temporal Patterns is critical for the model's generalization when encoding different video contents. Following (Mentzer et al. 2022) exactly, we generate videos using three different temporal patterns: pixel shifting, blurring, and fading. The R-D curves are plotted in Fig. 5.

Figure 5: Impact of temporal pattern on compression using synthetic data: (a) pixel shifting with values x = 0, 10, 20; (b) Gaussian blurring with sigma x·t at frame order t; and (c) fading by linear transition between two unrelated scenes using alpha blending. All evaluations start from x = 0 (i.e., videos consisting of still images), denoted by the solid lines.

Our method outperforms VCT on all synthetic datasets, demonstrating the powerful modeling capability of the hierarchical probabilistic predictive mechanism: no matter which temporal pattern is used or how fast the scene changes, our method remains consistently applicable. However, both VCT and our DHVC behave worse than DCVC on the pixel-shifted videos. This is mainly because DCVC utilizes a motion alignment module that explicitly encodes motion data; for such regular object displacement, motion estimation achieves high prediction accuracy, giving it an obvious advantage over our latent-space probabilistic prediction. How to add hierarchical motion alignment to our approach is a topic worth exploring in the future.

Progressive Decoding Capability is enabled in the proposed DHVC, a feature seldom supported by existing methods. Specifically, once we obtain the lowest-scale (lowest-resolution) features of the current frame, we already have the coarsest frame reconstruction after decoding, shown as "Level 1" in Fig. 6. As additional compressed latent features are transmitted to the decoder side, we can clearly observe the improvement of the reconstruction results, with PSNR increasing as more scale levels arrive.

Figure 6: Progressive decoding visualized over five scale levels, from "Level 1" through "Level 1-5" (recovered per-level bpp/PSNR values: 0.0005/18.13 dB, 0.005/25.83 dB, 0.013/28.61 dB, 0.054/35.20 dB, 0.126/40.80 dB). Blue bars and red lines represent the bits per pixel (bpp) and PSNR values for each frame, respectively.

When only partial scales are received, we notice PSNR degradation (red curves) within a GOP. This is due to error propagation, since the temporal references can only provide partial priors for decoding. Instead, once we receive the all-scale latent features, the PSNR metric is stable across frames and GOPs (see "Level 1-5"). At the same time, gradual PSNR degradation within a GOP still prevails in existing video codecs using hybrid motion & residual coding.
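The following pseudo-Python sketches how such scale-wise progressive decoding could be driven. `decode_scale` and `reconstruct` are hypothetical decoder methods invented for illustration; the paper does not expose its actual interfaces here.

```python
def progressive_decode(scale_chunks, decoder, max_scales=None):
    """Reconstruct a frame from however many scale chunks have arrived.

    scale_chunks are bitstream segments ordered coarse-to-fine; stopping
    early (or dropping tail chunks under congestion) yields a coarser
    but still decodable preview of the frame.
    """
    received = scale_chunks[:max_scales] if max_scales else scale_chunks
    state = None
    for scale_id, chunk in enumerate(received):
        state = decoder.decode_scale(chunk, scale_id, state)  # refine latents
    return decoder.reconstruct(state)

# A fast, low-bitrate preview from only the two coarsest scales:
# preview = progressive_decode(chunks, decoder, max_scales=2)
```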
Progressive decoding quickly provides relatively coarse reconstructions by encoding and transmitting only partial features. For the 1080p video exemplified in Fig. 6, two scales of compressed latent features already present a clear preview of the content, by which our model provides a fast, low-bitrate preview in video streaming applications. This also offers a broader insight: in networked applications with packet loss, we can still decode the content from whatever partial packets arrive, and over congested connections we can proactively drop the latent packets corresponding to higher scales.

Conclusion
This paper proposes DHVC, a novel hierarchical probabilistic predictive coding framework for learning-based video compression. DHVC provides superior compression efficiency to popular and representative learned video codecs across a great variety of video samples. More importantly, DHVC offers the fastest encoding and decoding with the least running memory, which not only reveals the best balance between coding performance and complexity but also shows encouraging potential for the practical application of learned video codecs. Our future work will focus on exploring efficient prior representations and optimization mechanisms to further improve compression efficiency.

Acknowledgements
This work is partially supported by the National Key Research and Development Project of China (No. 2022YFF0902402) and the Natural Science Foundation of China (No. U20A20184).

References
Agustsson, E.; and Theis, L. 2020. Universally Quantized Neural Compression. Advances in Neural Information Processing Systems, 33: 12367–12376.
Ballé, J.; Laparra, V.; and Simoncelli, E. P. 2016. End-to-end optimized image compression. arXiv preprint arXiv:1611.01704.
Ballé, J.; Minnen, D.; Singh, S.; Hwang, S. J.; and Johnston, N. 2018. Variational image compression with a scale hyperprior. arXiv preprint arXiv:1802.01436.
Bossen, F.; et al. 2013. Common test conditions and software reference configurations. JCTVC-L1100, 12(7): 1.
Bross, B.; Wang, Y.-K.; Ye, Y.; Liu, S.; Chen, J.; Sullivan, G. J.; and Ohm, J.-R. 2021. Overview of the versatile video coding (VVC) standard and its applications. IEEE Transactions on Circuits and Systems for Video Technology, 31(10): 3736–3764.
Chen, Z.; Gu, S.; Lu, G.; and Xu, D. 2022. Exploiting intra-slice and inter-slice redundancy for learning-based lossless volumetric image compression. IEEE Transactions on Image Processing, 31: 1697–1707.
Child, R. 2020. Very deep VAEs generalize autoregressive models and can outperform them on images. arXiv preprint arXiv:2011.10650.
Duan, Z.; Lu, M.; Ma, Z.; and Zhu, F. 2023. Lossy Image Compression with Quantized Hierarchical VAEs. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 198–207.
He, D.; Yang, Z.; Peng, W.; Ma, R.; Qin, H.; and Wang, Y. 2022. ELIC: Efficient learned image compression with unevenly grouped space-channel contextual adaptive coding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5718–5727.
Hu, Z.; Lu, G.; and Xu, D. 2021. FVC: A new framework towards deep video compression in feature space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1502–1511. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Kingma, D. P.; Salimans, T.; Jozefowicz, R.; Chen, X.; Sutskever, I.; and Welling, M. 2016. Improved variational inference with inverse autoregressive flow. Advances in neural information processing systems, 29. Li, J.; Li, B.; and Lu, Y. 2021. Deep contextual video compression. Advances in Neural Information Processing Systems, 34: 18114–18125. Li, J.; Li, B.; and Lu, Y. 2022. Hybrid spatial-temporal entropy modelling for neural video compression. In Proceedings of the 30th ACM International Conference on Multimedia, 1503–1511. Lin, J.; Liu, D.; Li, H.; and Wu, F. 2020. M-LVC: Multiple frames prediction for learned video compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3546–3554. Liu, H.; Lu, M.; Ma, Z.; Wang, F.; Xie, Z.; Cao, X.; and Wang, Y. 2020a. Neural video coding using multiscale motion compensation and spatiotemporal context model. IEEE Transactions on Circuits and Systems for Video Technology, 31(8): 3182–3196. Liu, J.; Wang, S.; Ma, W.-C.; Shah, M.; Hu, R.; Dhawan, P.; and Urtasun, R. 2020b. Conditional entropy coding for efficient video compression. In European Conference on Computer Vision, 453–468. Springer. Liu, Z.; Mao, H.; Wu, C.-Y.; Feichtenhofer, C.; Darrell, T.; and Xie, S. 2022. A convnet for the 2020s. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 11976–11986. Lu, G.; Ouyang, W.; Xu, D.; Zhang, X.; Cai, C.; and Gao, Z. 2019. Dvc: An end-to-end deep video compression framework. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11006–11015. Lu, G.; Zhang, X.; Ouyang, W.; Chen, L.; Gao, Z.; and Xu, D. 2020. An end-to-end learning framework for video compression. IEEE transactions on pattern analysis and machine intelligence, 43(10): 3292–3308. Lu, M.; Guo, P.; Shi, H.; Cao, C.; and Ma, Z. 2022. Transformer-based Image Compression. In 2022 Data Compression Conference (DCC), 469–469. Marcellin, M. W.; Gormish, M. J.; Bilgin, A.; and Boliek, M. P. 2000. An overview of JPEG-2000. In Proceedings DCC 2000. Data Compression Conference, 523–541. IEEE. Mentzer, F.; Toderici, G.; Minnen, D.; Hwang, S.-J.; Caelles, S.; Lucic, M.; and Agustsson, E. 2022. Vct: A video compression transformer. arXiv preprint arXiv:2206.07307. Mercat, A.; Viitanen, M.; and Vanne, J. 2020. UVG dataset: 50/120fps 4K sequences for video codec analysis and development. In Proceedings of the 11th ACM Multimedia Systems Conference, 297–302. Ryder, T.; Zhang, C.; Kang, N.; and Zhang, S. 2022. Split Hierarchical Variational Compression. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition), 386–395. Sheng, X.; Li, J.; Li, B.; Li, L.; Liu, D.; and Lu, Y. 2022. Temporal Context Mining for Learned Video Compression. IEEE Transactions on Multimedia, 1–12. Sullivan, G. J.; Ohm, J.-R.; Han, W.-J.; and Wiegand, T. 2012. Overview of the high efficiency video coding (HEVC) standard. IEEE Transactions on circuits and systems for video technology, 22(12): 1649–1668. Theis, L.; and Ahmed, N. Y. 2022. Algorithms for the Communication of Samples. Proceedings of the International Conference on Machine Learning, 162: 21308–21328. 
Theis, L.; Shi, W.; Cunningham, A.; and Huszár, F. 2017. Lossy Image Compression with Compressive Autoencoders. International Conference on Learning Representations.
Vahdat, A.; and Kautz, J. 2020. NVAE: A deep hierarchical variational autoencoder. Advances in Neural Information Processing Systems, 33: 19667–19679.
Wallace, G. K. 1991. The JPEG still picture compression standard. Communications of the ACM, 34(4): 30–44.
Wang, H.; Gan, W.; Hu, S.; Lin, J. Y.; Jin, L.; Song, L.; Wang, P.; Katsavounidis, I.; Aaron, A.; and Kuo, C.-C. J. 2016. MCL-JCV: a JND-based H.264/AVC video quality assessment dataset. In 2016 IEEE International Conference on Image Processing (ICIP), 1509–1513. IEEE.
Wang, Z.; Simoncelli, E. P.; and Bovik, A. C. 2003. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, volume 2, 1398–1402. IEEE.
Wiegand, T.; Sullivan, G. J.; Bjontegaard, G.; and Luthra, A. 2003. Overview of the H.264/AVC video coding standard. IEEE Transactions on Circuits and Systems for Video Technology, 13(7): 560–576.
Xue, T.; Chen, B.; Wu, J.; Wei, D.; and Freeman, W. T. 2019. Video enhancement with task-oriented flow. International Journal of Computer Vision, 127: 1106–1125.
Yang, R.; Mentzer, F.; Van Gool, L.; and Timofte, R. 2020. Learning for video compression with recurrent auto-encoder and recurrent probability model. IEEE Journal of Selected Topics in Signal Processing, 15(2): 388–401.
Yang, Y.; Bamler, R.; and Mandt, S. 2020a. Improving inference for neural image compression. Advances in Neural Information Processing Systems, 33: 573–584.
Yang, Y.; Bamler, R.; and Mandt, S. 2020b. Variational Bayesian Quantization. Proceedings of the International Conference on Machine Learning, 119: 10670–10680.
Yang, Y.; and Mandt, S. 2022. Towards Empirical Sandwich Bounds on the Rate-Distortion Function. International Conference on Learning Representations.
2024
985
18,834
Spectral-Based Graph Neural Networks for Complementary Item Recommendation
Haitong Luo1, 2, Xuying Meng1, 5, Suhang Wang3, Hanyun Cao1, 2, Weiyao Zhang1, Yequan Wang4, Yujun Zhang1, 2, 6*
1Institute of Computing Technology, Chinese Academy of Sciences, 2University of Chinese Academy of Sciences, 3Pennsylvania State University, 4BAAI, 5Purple Mountain Laboratories, 6Nanjing Institute of InforSuperBahn
{luohaitong21s, nrcyujun}@ict.ac.cn
*Corresponding Author
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Modeling complementary relationships greatly helps recommender systems to accurately and promptly recommend the subsequent items when one item is purchased. Unlike traditional similar relationships, items with complementary relationships may be purchased successively (such as iPhone and AirPods Pro), and they not only share relevance but also exhibit dissimilarity. Since the two attributes are opposites, modeling complementary relationships is challenging. Previous attempts to exploit these relationships have either ignored or oversimplified the dissimilarity attribute, resulting in ineffective modeling and an inability to balance the two attributes. Since Graph Neural Networks (GNNs) can capture the relevance and dissimilarity between nodes in the spectral domain, we can leverage spectral-based GNNs to effectively understand and model complementary relationships. In this study, we present a novel approach called Spectral-based Complementary Graph Neural Networks (SComGNN) that utilizes the spectral properties of complementary item graphs. We make the first observation that complementary relationships consist of low-frequency and mid-frequency components, corresponding to the relevance and dissimilarity attributes, respectively. Based on this spectral observation, we design spectral graph convolutional networks with low-pass and mid-pass filters to capture the low-frequency and mid-frequency components. Additionally, we propose a two-stage attention mechanism to adaptively integrate and balance the two attributes. Experimental results on four e-commerce datasets demonstrate the effectiveness of our model, with SComGNN significantly outperforming existing baseline models.

Introduction
Complementary item recommendation (Liu et al. 2020; Hao et al. 2020; Bibas, Shalom, and Jannach 2023) aims to suggest related items to users after they make a purchase in order to stimulate further purchases. To ensure the success of an e-commerce platform, it is crucial to model the complementary relationships between items. Complementary relationships involve items that are relevant yet dissimilar, as they are purchased together but serve different functions (e.g., iPhone and AirPods Pro). These properties make complementary relationships more challenging to model than traditional similarity relationships (also known as substitutable relationships). In this study, we focus on complementary item recommendation, i.e., given a query item, the goal is to recommend relevant yet dissimilar items to satisfy users' needs and encourage joint purchases.

Figure 1: Item relationships in recommender systems (for a query item such as the iPhone 14, candidate items may be complementary, e.g., AirPods Pro; substitutable, e.g., Galaxy S23; or unrelated, e.g., a basketball; relevance stems from shared traits such as being digital products of the Apple brand, while dissimilarity spans categories, appearances, functions, etc.).

The core attributes of complementary relationships are relevance and dissimilarity.
As shown in Figure 1, iPhone and AirPods Pro are relevant as digital products under the Apple brand, while their dissimilarity lies in their being different products with different functions and appearances. When recommending complementary products to users, it is crucial to understand and balance these two characteristics: overemphasizing relevance may lead to substitutable item recommendations, while overemphasizing dissimilarity may lead to recommendations of unrelated items.

Hence, researchers have devoted many efforts to complementary item recommendation. Some works (McAuley, Pandey, and Leskovec 2015; Wang et al. 2018; Cen et al. 2019; Liu et al. 2020; Chen et al. 2023) tentatively decouple complementary relations from general item relationships and focus on them; however, they ignore the dissimilarity attribute and consider only relevance. To further model the dissimilarity attribute, recent works (Hao et al. 2020; Bibas, Shalom, and Jannach 2023) model dissimilarity with category mapping networks that account for category diversity. These works still simplify complementary relationships, since dissimilarity is not limited to categories. Without a deep understanding of these two attributes, existing works fail to model the essence of complementary relationships, which also leads to an inability to explore the trade-off between the two properties.

Recent advances show that GNNs can capture the relevance and dissimilarity of nodes in the spectral domain (Wu et al. 2022; Tang et al. 2022), which provides a promising direction for modeling complementary relationships by capturing relevance and dissimilarity simultaneously. Thus, in this work, we model complementary relationships with spectral-based GNNs. However, we face two challenges: (1) the lack of a deep understanding of complementary relationships from a spectral perspective, since existing spectral-based GNNs do not explore or adapt to the spectral properties of complementary relationships, leaving a gap between their spectral properties and the two attributes; and (2) the trade-off between the relevance and dissimilarity attributes, since the two attributes are opposites and overemphasizing either one can lead to inaccurate complementary item recommendations, making it crucial to strike a balance between them.

To address these challenges, we first analyze complementary relationships from a spectral perspective on graphs and observe that the spectrum of the complementary item graph is mainly composed of low-frequency and mid-frequency components, corresponding to the relevance and dissimilarity characteristics, respectively. Based on this observation, we design low-pass and mid-pass graph convolutional networks to decouple and extract the corresponding low-frequency relevance and mid-frequency dissimilarity components. To balance the two attributes, we propose a two-stage attention mechanism that adaptively integrates and balances them. Our contributions are summarized as follows:
• We conduct the first study of the spectral properties of complementary relationships based on GNNs, associating the low-frequency and mid-frequency components with relevance and dissimilarity, respectively.
• We design a novel model with spectral-based GNNs and a two-stage attention mechanism to decouple, extract, and adaptively balance the low-frequency relevance and mid-frequency dissimilarity.
• We demonstrate the effectiveness of our proposed framework on four publicly available datasets, on which it outperforms the state-of-the-art approaches by a clear margin.

Related Work
In this section, we introduce related work on graph neural networks and complementary item recommendation.

Graph Neural Networks
GNNs (Wu et al. 2020) have shown great ability in modeling graph-structured data. Generally, GNNs fall into two main forms, i.e., spatial-based and spectral-based ones. Spatial-based GNNs (Hamilton, Ying, and Leskovec 2017; Veličković et al. 2017; Gao, Wang, and Ji 2018; Zhu et al. 2023) operate in the spatial domain, where graph convolution is defined in terms of the neighborhood structure of each node. Spectral-based GNNs (Bruna et al. 2013; Defferrard, Bresson, and Vandergheynst 2016; Kipf and Welling 2016; Balcilar et al. 2020; Wu et al. 2022) operate in the spectral domain, where the graph convolution filter is defined in terms of the eigenvectors of the graph Laplacian matrix. Since GCN only utilizes low-frequency information (Balcilar et al. 2020), recent studies (Balcilar et al. 2020; Wu et al. 2022) attempt to design filter functions that incorporate all bands of the graph signal, thereby broadening the available frequency bandwidth.

Complementary Item Recommendations
To maximize profit and provide convenience to users, modeling item relationships is a crucial task in recommender systems. However, existing works (Wang, Sarwar, and Sundaresan 2011; Yao and Harper 2018; Meng et al. 2018) often oversimplify item relationships as merely being "related", disregarding the fact that these relationships can be further categorized as substitutable or complementary. Complementary relationships involve items that are relevant yet dissimilar, which makes them harder to model than substitutable relationships, where the items are largely similar. One straightforward method to tackle this is frequent pattern mining and association rules (Han et al. 2007). Recently, deep learning methods have been applied to recommend complementary items. Some studies (McAuley, Pandey, and Leskovec 2015; Wang et al. 2018; Cen et al. 2019; Liu et al. 2020; Wu, Zhou, and Zhou 2022; Chen et al. 2023) decouple complementary relationships from general item relationships and focus on them specifically to provide more precise recommendations; however, they tend to ignore the dissimilarity attribute. In response, some works try to consider dissimilarity. For example, P-Companion (Hao et al. 2020) and ALCIR (Bibas, Shalom, and Jannach 2023) propose category mapping networks to recommend complementary items while including category diversity. Since dissimilarity relates not only to categories but also to other features such as appearance and price, these works still oversimplify dissimilarity and lack a deep understanding of complementary relationships. Furthermore, their inability to accurately model and decouple the two attributes prevents them from striking a balance between relevance and dissimilarity.

Problem Statement and Motivation
We first provide preliminaries and the definition of our graph-based complementary item recommendation problem.
Notations and Problem Definition
Let G = {V, X, E} denote the complementary item graph, where V = {v_1, ..., v_N} is the set of nodes and each node is an item, and E = {e_{ij}} is the set of undirected edges. The feature matrix X ∈ R^{N×d} consists of the d-dimensional features of the N nodes. Let A ∈ R^{N×N} denote the adjacency matrix, with A_{ij} = 1 if v_i and v_j are complementary and A_{ij} = 0 otherwise. Let D ∈ R^{N×N} be the diagonal degree matrix with D_{ii} = \sum_j A_{ij}. The normalized graph Laplacian matrix is L = I - D^{-1/2} A D^{-1/2}, where I is the identity matrix. With these notations, we formally define the problem of graph-based complementary item recommendation as:
Problem 1. For a complementary item graph G = {V, X, E}, where nodes denote items and edges denote complementary relationships, we aim to predict the probability of an edge e_{i,j} for two given items v_i and v_j, and accordingly find complementary items.

Dataset | Appliances | Grocery | Toys | Home
# items | 804 | 38548 | 24638 | 75514
# edges | 8290 | 642884 | 614730 | 776766
S_high | 0.3408 | 0.4034 | 0.3150 | 0.4169

Table 1: Statistics and S_high of the four datasets.

Observations on Real-world Datasets
To observe the spectral properties of complementary item graphs, we first introduce two metrics (Tang et al. 2022), i.e., the spectrum and the high-frequency area. (1) The spectrum visualizes the frequency distribution in the spectral domain; it is plotted with the eigenvalues λ on the x-axis and the spectral energy on the y-axis. The eigenvalues λ = {λ_1, λ_2, ..., λ_N} and the corresponding eigenvectors U = (u_1, u_2, ..., u_N) are obtained by decomposing the normalized Laplacian matrix L, where the eigenvalues λ also denote the frequencies of the graph. The spectral energy is \hat{x}_k^2 / \sum_{i=1}^{N} \hat{x}_i^2, based on the graph Fourier transform \hat{x} = (\hat{x}_1, \hat{x}_2, ..., \hat{x}_N)^T = U^T x, where x = (x_1, x_2, ..., x_N)^T ∈ R^N denotes one dimension of the features of the N nodes. Since λ ranges from 0 to 2, we regard λ close to 2 as high frequencies, λ close to 0 as low frequencies, and λ close to 1 as medium frequencies. Because eigenvalue decomposition is computationally expensive, spectrum plots can only be drawn for small-scale graph datasets. (2) The high-frequency area S_high denotes the area of the high-frequency region in the spectrum; it measures the area between the accumulated spectral-energy curve (the solid lines in Figure 2) and the horizontal line y = 1 (the dashed lines in Figure 2), so S_high lies within [0, 2]. Previous work (Tang et al. 2022) shows that S_high can be obtained as
S_high = \frac{\sum_{k=1}^{N} \lambda_k \hat{x}_k^2}{\sum_{k=1}^{N} \hat{x}_k^2} = \frac{x^T L x}{x^T x},
which requires no eigenvalue decomposition and is therefore feasible for large-scale datasets. The larger S_high is, the more mid- and high-frequency components the graph contains.

Figure 2: Spectral energy distribution of the Appliances dataset (two randomly selected feature dimensions; eigenvalues λ from 0 to 2 on the x-axis, spectral energy on the y-axis).

Based on the two metrics, we conduct our analysis on four real-world datasets obtained from Amazon (He and McAuley 2016), i.e., "Appliances", "Grocery and Gourmet Food" (abbreviated as Grocery), "Toys and Games" (Toys), and "Home and Kitchen" (Home). Details of the datasets can be found in the experiment section. "Appliances" is a small-scale dataset, while the others are large-scale datasets. In each dataset, nodes represent items and edges represent complementary relationships.
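As a minimal sketch of how these two metrics can be computed, assuming a dense NumPy adjacency matrix A and node features X (hypothetical variable names; the eigendecomposition route is only practical for small graphs such as Appliances):

```python
import numpy as np

def normalized_laplacian(A):
    # L = I - D^{-1/2} A D^{-1/2}; isolated nodes are guarded with 0.
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def spectral_energy(A, x):
    """Frequencies and normalized energies for one feature dimension x."""
    lam, U = np.linalg.eigh(normalized_laplacian(A))  # small graphs only
    x_hat = U.T @ x                                   # graph Fourier transform
    return lam, x_hat ** 2 / np.sum(x_hat ** 2)

def high_frequency_area(A, X):
    """S_high = x^T L x / x^T x, averaged over all feature dimensions."""
    L = normalized_laplacian(A)
    num = np.einsum('nd,nm,md->d', X, L, X)  # x^T L x per dimension
    den = np.einsum('nd,nd->d', X, X)        # x^T x  per dimension
    return float(np.mean(num / den))
```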
We draw the spectrums for the small-scale dataset and compute the high-frequency area S_high for all datasets. Since both the spectrum and S_high are computed from a single feature dimension, we randomly select two dimensions to plot the spectrum in Figure 2 and average over all feature dimensions to obtain the S_high values in Table 1. In Figure 2, the histogram shows the spectral energy distribution and the solid curve shows the accumulated spectral energy distribution.

From Figure 2 and Table 1, we can conclude that the complementary item graph is mainly composed of low-frequency and mid-frequency components in the spectral domain. In detail, (1) from Figure 2, the spectrum shows that λ with low and medium values carries larger spectral energy, which means the complementary relationship is composed of low-frequency and mid-frequency components; and (2) from Table 1, the high-frequency areas S_high of all the datasets fall between 0 and 1, indicating that, similar to the Appliances dataset, the spectra of the other three datasets are also mainly composed of low-frequency and mid-frequency parts. Additionally, as S_high is below 0.5, the low-frequency component is greater than the mid-frequency component. Since the more similar a node is to its neighbors in the spatial domain, the lower the corresponding frequency component in the spectral domain (Wu et al. 2022; Bo et al. 2021), we can regard the low-frequency component as the relevance attribute and the mid-frequency component as the dissimilarity attribute. To further verify this, we conduct a case study in the experiment section. In this way, we bridge the gap between the properties of the complementary relationship and its spectral characteristics.

Methodology
Based on our observations, we propose a novel framework for complementary item prediction that models the two attributes in the spectral domain. As illustrated in Figure 3, the framework consists of three key modules: spectral-based GCN filters, a two-stage attention mechanism, and contrastive learning optimization. (1) To model the low-frequency relevance and mid-frequency dissimilarity, we decouple and extract them using specialized GCN filters. (2) Integrating these attributes is challenging, since manually determining their importance is difficult; we therefore introduce a two-stage attention mechanism that adaptively integrates them, using pairwise attention to determine the significance of relevance and dissimilarity within item pairs, followed by self-attention that integrates the attributes independently. (3) Finally, we optimize our model with contrastive learning. In the following sections, we provide details of each module.

Figure 3: The overall framework of our proposed model SComGNN: low-pass and mid-pass GCN filters produce embeddings of the query and candidate items, a pairwise attention stage followed by self-attention integrates them, and the model is trained with contrastive learning.

Spectral-based GCN Filters
To model the low- and mid-frequency components of the complementary item graph, we decouple the low-frequency relevance and mid-frequency dissimilarity using specialized GCN filters. We first introduce the unified form of spatial-based and spectral-based GCNs.
Based on it, we then design spectral-based low-pass and mid-pass filters and turn them into spatial forms for implementation.

Unified Form of Spatial-based and Spectral-based GCNs. GCNs can be explored from both spatial- and spectral-domain perspectives. Although the two approaches start from different domains, they are interchangeable (Balcilar et al. 2020). GCN propagation can be formulated as
H^{l+1} = \sigma\left(\sum_{k=1}^{K} C_k H^l W_k^l\right), (1)
where \sigma is the activation function, K is the number of filters, H^l denotes the node representations at layer l, and W_k^l is a learnable weight matrix of filter k at layer l. Here C_k is the graph convolution kernel in the spatial domain, which can be expressed in the spectral domain as
C_k = U \, \mathrm{diag}(F_k(\lambda)) \, U^T, (2)
where U and \lambda are the eigenvectors and eigenvalues of the normalized graph Laplacian matrix L, and \mathrm{diag}(\cdot) denotes the diagonal matrix with the specified elements. F_k(\lambda) is the graph convolutional filter in the spectral domain, a function of \lambda. Eq. (2) can also be rewritten as
F_k(\lambda) = \mathrm{diag}^{-1}(U^T C_k U). (3)
The key to spatial-based GNNs is the design of C_k, while the key to spectral-based GNNs is the design of F_k(\lambda); with Eqs. (2) and (3), the two convolutional kernels can be converted into each other. Since spatial-based GNNs are generally easier to understand and implement than spectral-based ones, Eq. (2) inspires us to design the spectral convolutional kernel F_k(\lambda) first and then convert it into a spatial form for implementation, as existing works do (Kipf and Welling 2016; Wu et al. 2022).

Spectral-based Filters. Existing spectral-based GNNs (Bruna et al. 2013; Defferrard, Bresson, and Vandergheynst 2016; Kipf and Welling 2016) design different F_k(\lambda) to obtain different GNN models. Since complementary item graphs are mainly composed of low- and mid-frequency components in the spectral domain, our goal is to design a low-pass and a mid-pass GCN filter that extract the low- and mid-frequency components, respectively, and filter out the rest. As convolution in the spatial domain equals multiplication in the spectral domain, the larger the amplitude, the more of the corresponding frequency component is retained. To avoid complex calculations, we design the spectral convolution kernel of the low-pass filter as a linearly decreasing function of \lambda:
F_{low}(\lambda) = 1 - \lambda/2. (4)
With Eq. (2), we can turn it into a spatial form:
C_{low} = (\tilde{A} + I)/2, (5)
where \tilde{A} = D^{-1/2} A D^{-1/2} denotes the normalized adjacency matrix. Linear functions can implement high-pass or low-pass filters in increasing or decreasing form, but they cannot implement a mid-pass filter, which should retain the mid-frequency component while filtering out the others. Therefore, we use a quadratic function of \lambda to realize the spectral convolution kernel of the mid-pass filter:
F_{mid}(\lambda) = -(\lambda - 1)^2 + 1. (6)
With Eq. (2), it can also be formulated in the spatial domain:
C_{mid} = I - \tilde{A}^2. (7)

Figure 4: Spectrums of the low-pass and mid-pass GCN filters (magnitudes of F_{low}(\lambda) and F_{mid}(\lambda) over \lambda ∈ [0, 2]).
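A compact PyTorch sketch of the two kernels in their spatial forms, Eqs. (5) and (7), together with one propagation layer in the style of Eqs. (8)-(9) below; the dense matrices and the shared weight are simplifications for brevity, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def norm_adj(A):
    # A~ = D^{-1/2} A D^{-1/2}; degrees clamped to avoid division by zero.
    d_inv_sqrt = A.sum(1).clamp(min=1).pow(-0.5)
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

class SpectralFilters(torch.nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = torch.nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, A, H_low, H_mid):
        A_t = norm_adj(A)
        I = torch.eye(A.size(0), device=A.device)
        C_low = (A_t + I) / 2         # low-pass kernel, Eq. (5)
        C_mid = I - A_t @ A_t         # mid-pass kernel, Eq. (7)
        return (F.relu(C_low @ self.W(H_low)),   # low-frequency branch
                F.relu(C_mid @ self.W(H_mid)))   # mid-frequency branch
```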
To better understand the low-pass and mid-pass filters, we examine both the spectral and spatial domains. For the spectral domain, we plot the spectral convolutional kernels F_{low}(\lambda) and F_{mid}(\lambda) (i.e., the spectrums) in Figure 4: the low-pass filter retains the low-frequency part and filters out the others, while the mid-pass filter retains the mid-frequency part and filters out the others. For the spatial domain, by Eqs. (5) and (7), our low-pass filter aggregates a node's own information with its neighborhood information, while the mid-pass filter computes the difference between a node's own information and its two-hop neighborhood information. We can therefore conclude that the low-pass filter extracts the relevance between nodes and their neighbors, while the mid-pass filter extracts the dissimilarity between nodes and their neighbors.

With the low-pass and mid-pass GCN filters, we can extract the low- and mid-frequency components of the complementary item graph, which correspond to the relevance and dissimilarity attributes, respectively. We formulate this as
H_{mid}^l = \mathrm{ReLU}(C_{mid} H_{mid}^{l-1} W^{l-1}), (8)
H_{low}^l = \mathrm{ReLU}(C_{low} H_{low}^{l-1} W^{l-1}), (9)
where H_{low}^l and H_{mid}^l are the low- and mid-frequency node representation matrices at layer l and W^{l-1} is the weight matrix. Note that H_{low}^0 = H_{mid}^0 = X.

Two-stage Attention Mechanism
Since items with complementary relationships are relevant yet dissimilar, it is difficult to determine manually which attribute is more crucial when predicting complementary relationships; even for the same item, the significance of the two attributes may differ across item pairs. To tackle this, we propose a two-stage attention mechanism that merges the two attributes adaptively, composed of pairwise attention within item pairs and self-attention on each item independently.

Pairwise Attention Mechanism. We first adopt a pairwise attention mechanism to integrate the low- and mid-frequency components of items in pairs. For example, given an item pair (v_i, v_j), item v_j determines the proportions of low-frequency relevance and mid-frequency dissimilarity of item v_i, and vice versa. With the low-pass and mid-pass GCN filters, we obtain the low- and mid-frequency item representation matrices H_{low}^L and H_{mid}^L, where L is the depth of the propagation layers. For item v_i, we denote its embeddings by h_{i_f}^L, where f ∈ {low, mid}. With these notations, we first introduce how the low-frequency component of item v_j selects and integrates the two embeddings of item v_i:
z_{i_{low}} = \sum_f \alpha_{j_{low}, i_f} h_{i_f}^L, \quad \alpha_{j_{low}, i_f} = \frac{\exp({h_{j_{low}}^L}^T h_{i_f}^L)}{\sum_f \exp({h_{j_{low}}^L}^T h_{i_f}^L)}, (10)
where h_{j_{low}}^L denotes the low-frequency embedding of item v_j, \alpha_{j_{low}, i_f} denotes the proportions of the low- and mid-frequency embeddings of item v_i, and z_{i_{low}} denotes the integrated representation of item v_i determined by the low-frequency component of item v_j. Likewise, the two embeddings of item v_i can be selected and integrated by the mid-frequency component of item v_j:
z_{i_{mid}} = \sum_f \alpha_{j_{mid}, i_f} h_{i_f}^L, \quad \alpha_{j_{mid}, i_f} = \frac{\exp({h_{j_{mid}}^L}^T h_{i_f}^L)}{\sum_f \exp({h_{j_{mid}}^L}^T h_{i_f}^L)}, (11)
where h_{j_{mid}}^L denotes the mid-frequency embedding of item v_j, \alpha_{j_{mid}, i_f} denotes the proportions of the two embeddings of item v_i, and z_{i_{mid}} denotes the integrated representation of item v_i determined by the mid-frequency component of item v_j. Through pairwise attention with item v_j, we obtain the integrated embeddings z_{i_{low}} and z_{i_{mid}} of item v_i; here the subscripts low and mid no longer denote the frequency components of item v_i itself, but rather which frequency component of item v_j they were integrated by. For item v_j in the pair (v_i, v_j), the same pairwise attention step yields z_{j_{low}} and z_{j_{mid}}.
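The pairwise attention step of Eqs. (10)-(11) reduces to a softmax over just two candidates, the low- and mid-frequency embeddings of v_i, scored against each frequency embedding of the partner v_j. A minimal sketch, with hypothetical variable names:

```python
import torch

def pairwise_attention(h_i_low, h_i_mid, h_j_low, h_j_mid):
    """All inputs are (d,) embeddings; returns z_i_low, z_i_mid."""
    H_i = torch.stack([h_i_low, h_i_mid])          # (2, d) candidate embeddings
    z = []
    for query in (h_j_low, h_j_mid):               # Eq. (10) then Eq. (11)
        alpha = torch.softmax(H_i @ query, dim=0)  # weights over {low, mid}
        z.append(alpha @ H_i)                      # weighted mixture, (d,)
    return z[0], z[1]
```

The self-attention step that follows has the same structure, with z_{i_{low}} and z_{i_{mid}} themselves acting as the queries.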
Self-attention Mechanism. After the pairwise attention step, the low- and mid-frequency components of items v_i and v_j have been selected and integrated by each other. Next, we use a self-attention mechanism to further integrate the low- and mid-frequency components adaptively by themselves. Similar to the pairwise attention step above, the two embeddings of item v_i can be selected and integrated by z_{i_{low}}:
\tilde{z}_{i_{low}} = \sum_f \beta_{i_{low}, i_f} z_{i_f}, \quad \beta_{i_{low}, i_f} = \frac{\exp(z_{i_{low}}^T z_{i_f})}{\sum_f \exp(z_{i_{low}}^T z_{i_f})}, (12)
where \beta_{i_{low}, i_f} denotes the proportions of the two embeddings of item v_i and \tilde{z}_{i_{low}} denotes the further-integrated representation of item v_i by z_{i_{low}} after the self-attention step. Likewise, the two embeddings of v_i can be selected and integrated by z_{i_{mid}}:
\tilde{z}_{i_{mid}} = \sum_f \beta_{i_{mid}, i_f} z_{i_f}, \quad \beta_{i_{mid}, i_f} = \frac{\exp(z_{i_{mid}}^T z_{i_f})}{\sum_f \exp(z_{i_{mid}}^T z_{i_f})}, (13)
where \beta_{i_{mid}, i_f} denotes the proportions of the two embeddings of v_i and \tilde{z}_{i_{mid}} denotes the further-integrated representation of item v_i determined by z_{i_{mid}}.
Through the self-attention mechanism, we obtain the further-integrated embeddings \tilde{z}_{i_{low}} and \tilde{z}_{i_{mid}} of item v_i. Finally, we concatenate the two embeddings and project them into a low-dimensional representation:
\hat{z}_i = [\tilde{z}_{i_{low}} \oplus \tilde{z}_{i_{mid}}] W, (14)
where W ∈ R^{2d'×d'} and d' is the embedding size. For item v_j, the same steps yield \hat{z}_j.

Contrastive Learning Optimization
We treat graph-based complementary item recommendation as a link prediction problem and follow the principle of contrastive learning (He et al. 2020) to construct positive and negative samples for each item, encouraging the model to pull complementary items together and push non-complementary ones apart. For each item, the positive samples are its complementary items and the negative samples are randomly drawn from nodes that have no links to it. The loss function is formally defined as
\mathcal{L} = -\sum_{e_{i,+} \in E} \log \frac{\exp(\hat{z}_i^T \hat{z}_+ / \tau)}{\sum_{j=0}^{M} \exp(\hat{z}_i^T \hat{z}_j / \tau)}, (15)
where \hat{z}_+ is the positive sample, M is the number of negative items, and \tau is a temperature hyperparameter. Note that \hat{z}_i is not fixed; it changes as the item pair changes. In the inference phase, we use the representations generated by the trained model to predict whether two items are complementary, with the prediction score computed as
s_{i,j} = \mathrm{Sigmoid}(\hat{z}_i^T \hat{z}_j). (16)
The algorithm and a detailed time-complexity analysis can be found in the supplementary files; the time complexities of the spectral-based GCN filters and the two-stage attention mechanism are O(3|E|) and O(8|d'|), respectively.
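For reference, Eq. (15) is an InfoNCE-style objective and can be sketched with one positive and M sampled negatives per item. The batch layout and the default τ below are illustrative assumptions rather than values from the paper:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z_i, z_pos, z_negs, tau=0.1):
    """z_i, z_pos: (B, d); z_negs: (B, M, d); tau: assumed temperature."""
    pos = (z_i * z_pos).sum(-1, keepdim=True)          # (B, 1) positive scores
    neg = torch.einsum('bd,bmd->bm', z_i, z_negs)      # (B, M) negative scores
    logits = torch.cat([pos, neg], dim=1) / tau        # positive at index 0
    labels = torch.zeros(z_i.size(0), dtype=torch.long,
                         device=z_i.device)            # target: the positive
    return F.cross_entropy(logits, labels)             # equals Eq. (15) per batch
```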
Experiments
In this section, we carry out comprehensive experiments to demonstrate the effectiveness of our method.

Experimental Setup
Datasets. Following (Liu et al. 2020; Hao et al. 2020; Bibas, Shalom, and Jannach 2023), we use publicly available benchmark datasets from Amazon. We regard "also-bought" relations as complementary relationships, and our task is link prediction on the complementary item graphs. We select four datasets, "Appliances", "Grocery", "Toys", and "Home", and use the categories and price of each item as features. For categories, we choose BERT (Vaswani et al. 2017) as the pre-trained model to obtain category embeddings; for price, we discretize the continuous values into bins using equal-depth binning. Similar to previous work (Liu et al. 2020), for each item we randomly sample one edge for the test data and one for the validation data, and use the remaining edges as training data. Dataset statistics are shown in Table 1.
Baselines and Implementation. The baseline models fall into two groups: traditional GNNs and complementary item recommendation models. The first group includes GIN (Xu et al. 2018) and GraphSage (Hamilton, Ying, and Leskovec 2017). The second group includes Popularity (Bibas, Shalom, and Jannach 2023), DCF (Galron et al. 2018), P-Companion (Hao et al. 2020), and ALCIR (Bibas, Shalom, and Jannach 2023). We exclude DecGCN (Liu et al. 2020) and EMRIGCN (Chen et al. 2023) from the comparison since they incorporate substitutable relationships, which are beyond the scope of this paper given our focus on complementary item recommendation. In our implementation, we set the embedding size and the number of layers of both GCN filters to 16 and 1, respectively. We evaluate performance using two metrics, Hit Rate (HR@K, with K set to 5 and 10) and NDCG (Normalized Discounted Cumulative Gain). Detailed descriptions of the baselines and implementation can be found in the supplementary files, and our code is available at https://github.com/luohaitong/SComGNN.

Overall Performance
We present our experimental results in Table 2, covering our model and the baselines on the four datasets, where boldfaced and underlined values represent the best and second-best performance, respectively.

Method | Appliances | Toys | Grocery | Home
GIN | 0.4347 / 0.6226 / 0.4279 | 0.5242 / 0.7408 / 0.4866 | 0.4344 / 0.6107 / 0.4425 | 0.4843 / 0.6552 / 0.4681
GraphSage | 0.4402 / 0.6574 / 0.4215 | 0.5313 / 0.7514 / 0.4863 | 0.6255 / 0.8000 / 0.5359 | 0.7272 / 0.8417 / 0.6061
Popularity | 0.2040 / 0.3208 / 0.2906 | 0.1809 / 0.2816 / 0.2914 | 0.2556 / 0.3690 / 0.3392 | 0.2263 / 0.3505 / 0.3223
DCF | 0.3630 / 0.5366 / 0.3817 | 0.4876 / 0.6714 / 0.4661 | 0.5991 / 0.7574 / 0.5326 | 0.6846 / 0.7798 / 0.6015
P-Companion | 0.3545 / 0.5414 / 0.3759 | 0.4098 / 0.6017 / 0.3923 | 0.4943 / 0.6774 / 0.4152 | 0.5847 / 0.7220 / 0.5145
ALCIR | 0.3754 / 0.5394 / 0.3792 | 0.3930 / 0.5959 / 0.3994 | 0.5067 / 0.6892 / 0.4614 | 0.5411 / 0.6885 / 0.4826
SComGNN (Ours) | 0.4919 / 0.7127 / 0.4377 | 0.6561 / 0.8589 / 0.5501 | 0.7207 / 0.8565 / 0.5959 | 0.7943 / 0.8789 / 0.6610

Table 2: Performance comparison on the four datasets (HR@5 / HR@10 / NDCG per dataset).

Based on the results, we make the following observations. First, SComGNN outperforms all other models on all datasets, making it a state-of-the-art model for complementary item recommendation. Specifically, on HR@10, SComGNN outperforms the best baseline by 7.8%, 14.3%, 13.3%, and 4.4% on the four datasets, respectively, and it achieves similar improvements on the other two metrics (HR@5 and NDCG).
The results demonstrate the importance of leveraging both low-frequency relevance and mid-frequency dissimilarity to enhance performance. Additionally, we observe that traditional graph-based models can also perform well, even without modifications tailored to complementary relationships; in fact, some of them outperform non-graph-based complementary item recommendation models, highlighting the powerful capability of GNNs in modeling relationships. Compared to GIN, GraphSage's strong performance stems from its neighbor-sampling operation, which reduces its reliance on low-frequency components.

Ablation Study
We carry out ablation experiments to investigate the contributions of three key modules, i.e., the low-pass GCN filter, the mid-pass GCN filter, and the two-stage attention mechanism. SComGNN w/o l is the variant without the low-pass GCN filter and the two-stage attention mechanism, so the model only obtains the mid-frequency representation. SComGNN w/o m is the variant without the mid-pass GCN filter and the two-stage attention mechanism, so the model only obtains the low-frequency representation. SComGNN w/o a is the variant without the two-stage attention mechanism, in which the mid- and low-frequency representations are simply concatenated. The results are shown in Table 3.

Method | Appliances | Toys | Grocery | Home
Ours | 0.4919 / 0.7127 / 0.4377 | 0.6561 / 0.8589 / 0.5501 | 0.7207 / 0.8565 / 0.5959 | 0.7943 / 0.8789 / 0.6610
w/o l | 0.1071 / 0.1855 / 0.2457 | 0.0724 / 0.1365 / 0.2236 | 0.1272 / 0.2194 / 0.2560 | 0.0957 / 0.1745 / 0.2375
w/o m | 0.4364 / 0.6420 / 0.4235 | 0.5194 / 0.7186 / 0.4907 | 0.6113 / 0.7839 / 0.5389 | 0.7349 / 0.8387 / 0.6239
w/o a | 0.4644 / 0.6766 / 0.4195 | 0.5889 / 0.8228 / 0.5149 | 0.6194 / 0.8034 / 0.5330 | 0.7532 / 0.8626 / 0.6223

Table 3: The ablation study performance (HR@5 / HR@10 / NDCG per dataset).

From Table 3, we observe that (1) SComGNN achieves the best performance among the four models, indicating the collective importance of all three modules; (2) compared to SComGNN w/o l, SComGNN w/o m achieves better performance, verifying our observation that the low-frequency component outweighs the mid-frequency component; and (3) in some cases, SComGNN w/o a even performs worse than SComGNN w/o m, indicating that although the mid-frequency representation is valuable, it needs to be integrated more carefully.

Hyperparameter Analysis
We investigate the impact of two hyperparameters on the performance of our model, i.e., the depth of the low-pass and mid-pass GCN layers and the embedding size. Due to similar trends and page limitations, we only present the NDCG results in Figure 5; complete results can be found in the supplementary files.

Figure 5: Hyperparameter sensitivity evaluation (NDCG vs. the number of network layers, 1 to 4, and the embedding size, 4 to 64, on the four datasets).

We draw the following conclusions. First, for the depth of the GCN layers, performance may decrease as the depth increases; because our model aggregates structural information from different perspectives, a one-layer network already performs well. Second, for the embedding size, 16 is the most appropriate value: increasing it to 32 or 64 does not necessarily improve performance while increasing model complexity and training time, and reducing it to 8 or 4 causes a significant performance drop, indicating the importance of learning rich and expressive feature representations.

Case Study
To assess the impact of the low- and mid-frequency components in a production environment, we compare the performance of three models: SComGNN w/o m, SComGNN w/o l, and SComGNN. Figure 6 shows the top-3 complementary items recommended for "Instant Coffee". In the recommendations based solely on the low-frequency components, all items are coffee, strongly similar to the query item. Conversely, with only the mid-frequency components, the items show low correlation to the query item. However, with both the low-frequency and mid-frequency components, a diverse range of items, including coffee, sugar, and cocoa, is recommended.
The inclusion of sugar as a flavoring for coffee, and the presence of cocoa alongside coffee as distinct but related beverages, illustrate our ability to capture both relevance and dissimilarity. This outcome highlights the crucial roles played by the low-frequency component in representing relevance and the mid-frequency component in representing dissimilarity, both of which are essential for complementary relationships.

Figure 6: Examples of complementary recommendation results with different frequency components for the query item "Instant Coffee" (top-3 lists drawn from single-serve capsules & pods, sugars, cocoa, lollipops, and diced tomatoes).

Conclusion
In this paper, we bridge the gap between spectral properties and the attributes of complementary relationships. Our analysis reveals that complementary item graphs consist primarily of low-frequency and mid-frequency components in the spectral domain, representing the relevance and dissimilarity attributes. We propose GCN filters to extract the two components and employ a two-stage attention mechanism for adaptive integration. Experiments on four publicly available datasets demonstrate the effectiveness of our theoretical analysis and the proposed method. In the future, more effective GCN filters and integration approaches for the two frequency components deserve exploration.

Acknowledgements
This work is supported in whole or in part by the National Science Foundation of China (61972381, 62106249, 62372429), the Project on Cyber Security and Informatization of the Chinese Academy of Sciences (CAS-WX2022SF0401), and the Pilot for Major Scientific Research Facility of Jiangsu Province of China (No. BM2021800).

References
Balcilar, M.; Renton, G.; Héroux, P.; Gauzere, B.; Adam, S.; and Honeine, P. 2020. Bridging the gap between spectral and spatial domains in graph neural networks. arXiv preprint arXiv:2003.11702.
Bibas, K.; Shalom, O. S.; and Jannach, D. 2023. Semi-supervised Adversarial Learning for Complementary Item Recommendation. arXiv preprint arXiv:2303.05812.
Bo, D.; et al. 2021. Beyond low-frequency information in graph convolutional networks. In AAAI, 3950–3957.
Bruna, J.; Zaremba, W.; Szlam, A.; and LeCun, Y. 2013. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203.
Cen, Y.; Zou, X.; Zhang, J.; Yang, H.; Zhou, J.; and Tang, J. 2019. Representation learning for attributed multiplex heterogeneous network. In KDD, 1358–1368.
Chen, H.; He, J.; Xu, W.; Feng, T.; Liu, M.; Song, T.; Yao, R.; and Qiao, Y. 2023. Enhanced Multi-Relationships Integration Graph Convolutional Network for Inferring Substitutable and Complementary Items. In AAAI, volume 37, 4157–4165.
Defferrard, M.; Bresson, X.; and Vandergheynst, P. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. NeurIPS, 29.
Galron, D. A.; Brovman, Y. M.; Chung, J.; Wieja, M.; and Wang, P. 2018. Deep item-based collaborative filtering for sparse implicit feedback. arXiv preprint arXiv:1812.10546.
Gao, H.; Wang, Z.; and Ji, S. 2018. Large-scale learnable graph convolutional networks. In KDD, 1416–1424.
Hamilton, W.; Ying, Z.; and Leskovec, J. 2017. Inductive representation learning on large graphs. NeurIPS, 30.
Han, J.; Cheng, H.; Xin, D.; and Yan, X. 2007. Frequent pattern mining: current status and future directions. Data Mining and Knowledge Discovery, 15(1): 55–86.
Hao, J.; Zhao, T.; Li, J.; Dong, X. L.; Faloutsos, C.; Sun, Y.; and Wang, W. 2020. P-Companion: A principled framework for diversified complementary product recommendation. In CIKM, 2517–2524.
He, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. 2020. Momentum contrast for unsupervised visual representation learning. In CVPR, 9729–9738.
He, R.; and McAuley, J. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In WWW, 507–517.
Kipf, T. N.; and Welling, M. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
Liu, Y.; Gu, Y.; Ding, Z.; Gao, J.; Guo, Z.; Bao, Y.; and Yan, W. 2020. Decoupled graph convolution network for inferring substitutable and complementary items. In CIKM, 2621–2628.
McAuley, J.; Pandey, R.; and Leskovec, J. 2015. Inferring networks of substitutable and complementary products. In KDD, 785–794.
Meng, X.; Wang, S.; Shu, K.; Li, J.; Chen, B.; Liu, H.; and Zhang, Y. 2018. Personalized Privacy-Preserving Social Recommendation. In AAAI, 3796–3803.
Tang, J.; Li, J.; Gao, Z.; and Li, J. 2022. Rethinking graph neural networks for anomaly detection. In ICML, 21076–21089.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. NeurIPS, 30.
Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.
Wang, J.; Sarwar, B.; and Sundaresan, N. 2011. Utilizing related products for post-purchase recommendation in e-commerce. In Proceedings of the Fifth ACM Conference on Recommender Systems, 329–332.
Wang, Z.; Jiang, Z.; Ren, Z.; Tang, J.; and Yin, D. 2018. A path-constrained framework for discriminating substitutable and complementary products in e-commerce. In WSDM, 619–627.
Wu, L.; Zhou, Y.; and Zhou, D. 2022. Towards high-order complementary recommendation via logical reasoning network. In ICDM, 1227–1232.
Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; and Philip, S. Y. 2020. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1): 4–24.
Wu, Z.; Pan, S.; Long, G.; Jiang, J.; and Zhang, C. 2022. Beyond low-pass filtering: Graph convolutional networks with automatic filtering. IEEE Transactions on Knowledge and Data Engineering.
Xu, K.; Hu, W.; Leskovec, J.; and Jegelka, S. 2018. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826.
Yao, Y.; and Harper, F. M. 2018. Judging similarity: a user-centric study of related item recommendations. In RecSys, 288–296.
Zhu, H.; Tang, X.; Zhao, T.; and Wang, S. 2023. You Need to Look Globally: Discovering Representative Topology Structures to Enhance Graph Neural Network. In PAKDD, 40–52.
2024
986
18,835
Enhancing Cognitive Diagnosis Using Un-interacted Exercises: A Collaboration-Aware Mixed Sampling Approach Haiping Ma1, Changqian Wang1, Hengshu Zhu2, Shangshang Yang3*, Xiaoming Zhang1, Xingyi Zhang4* 1Department of Information Materials and Intelligent Sensing Laboratory of Anhui Province, Institutes of Physical Science and Information Technology, Anhui University, China 2Career Science Lab, BOSS Zhipin, China 3 School of Artificial Intelligence, Anhui University, China 4School of Computer Science and Technology, Anhui University, China [email protected], [email protected], {changqian.wang.dl, zhuhengshu, yangshang0308, xyzhanghust}@gmail.com Abstract Cognitive diagnosis is a crucial task in computer-aided education, aimed at evaluating students’ proficiency levels across various knowledge concepts through exercises. Current models, however, primarily rely on students’ answered exercises, neglecting the complex and rich information contained in uninteracted exercises. While recent research has attempted to leverage the data within un-interacted exercises linked to interacted knowledge concepts, aiming to address the long-tail issue, these studies fail to fully explore the informative, uninteracted exercises related to broader knowledge concepts. This oversight results in diminished performance when these models are applied to comprehensive datasets. In response to this gap, we present the Collaborative-aware Mixed Exercise Sampling (CMES) framework, which can effectively exploit the information present in un-interacted exercises linked to un-interacted knowledge concepts. Specifically, we introduce a novel universal sampling module where the training samples comprise not merely raw data slices, but enhanced samples generated by combining weight-enhanced attention mixture techniques. Given the necessity of real response labels in cognitive diagnosis, we also propose a ranking-based pseudo feedback module to regulate students’ responses on generated exercises. The versatility of the CMES framework bolsters existing models and improves their adaptability. Finally, we demonstrate the effectiveness and interpretability of our framework through comprehensive experiments on realworld datasets. Introduction Amid the rapid advancement of computer-aided education, cognitive diagnosis has garnered increasing attention (Lord 2012; Yang et al. 2023c; Qin et al. 2023). As a crucial task in intelligent education, cognitive diagnosis aims at evaluating students’ proficiency levels across various knowledge concepts through exercises. As illustrated in Figure 1, existing cognitive diagnosis studies are based on the historical response logs between students and exercises, as well as the associations between exercises and knowledge concepts *Corresponding Authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. for modeling. They believe exercise interactions provide the greatest diagnostic value for students, while overlooking the information contained in un-interacted exercises. In practice, each student’s interaction with exercises represents a mere fraction of the complete exercise bank, with the uninteracted exercises containing intricate and extensive information. In this paper, we attempt to leverage these un-interacted exercise information. One challenge with leveraging such un-interacted exercise information is the absence of students’ potential response labels. Recent work in EIRS (YAO et al. 
2023) makes the assumption that students will perform comparably on exercises related to the same knowledge concepts. EIRS aims to mitigate the long-tail problem (where insufficient interaction data results in skewed distributions) through similarity-oriented sampling of exercises associated with previously interacted concepts. Because the attained sample exercises convey information analogous to the interacted ones, the acquired knowledge is constrained, and the method cannot realize optimal performance on full datasets. Consequently, determining how to extract additional informative un-interacted exercises constitutes another challenge.

Within the domain of recommender systems, informative negative samples are frequently utilized to train the system and enhance recommendation performance (Rendle and Freudenthaler 2014). Accordingly, substantial research has investigated techniques for sampling informative negative instances (Rendle et al. 2012; Wang et al. 2020b; Liu and Wang 2023). However, owing to the distinctive nature of cognitive diagnosis models, which encompass intricate interrelationships among students, exercises, and knowledge concepts, sampling approaches from recommender systems are not directly transferable to cognitive diagnosis.

To address the aforementioned challenges, we propose a general framework, namely Collaborative-aware Mixed Exercise Sampling (CMES), for cognitive diagnosis models. CMES extracts more informative exercises from the pool of un-interacted ones and obtains students' potential response labels. Specifically, to improve the quality and efficiency of sampling, we preclude sampling exercises affiliated with students' interacted concepts, as well as exercises with potentially similar information to interacted ones.

Figure 1: Illustration of cognitive diagnosis. Mainstream cognitive diagnosis models derive diagnosis results from students' response logs. e1 and e2 are the exercises that Bob has interacted with, while the remaining exercises are those that he has not interacted with.

We cluster students based on their response capabilities and collaborations. During sampling, we draw from the interacted exercise sets of students in other clusters. Since interacted exercises carry strong diagnostic cues while un-interacted ones retain potentially heterogeneous information, we use mixing techniques to inject interacted exercise information into the sampled exercises, obtaining more informative mixed samples. Finally, we design a ranking-based pseudo feedback module to predict potential response situations for the sampled exercises, which is combined with the cognitive diagnosis task for joint learning. Our main contributions are summarized as follows:
• To fully leverage the latent information in un-interacted exercises for student diagnosis, we propose a generic sampling framework, CMES, for enhancing cognitive diagnosis tasks.
• We specially design a learning-based pseudo feedback module that defines a learning-to-rank task to assist the training of the cognitive diagnosis task.
• We conduct extensive experiments on real-world datasets to validate the effectiveness and scalability of our approach.

Related Work

In this section, we review related work on cognitive diagnosis models and sampling strategies.
Cognitive Diagnosis

Cognitive diagnosis, a fundamental and critical task in education, aims to infer students' mastery of knowledge concepts. The early models IRT (Lord 1980) and DINA (De La Torre 2009) are two classic cognitive diagnosis models. Unlike IRT, which hypothesizes unidimensional independence and adopts continuous latent variables to evaluate examinees' potential abilities, DINA is based on the attribute independence assumption and uses 0/1 binary vectors to represent students' mastery of each attribute. MIRT (Reckase 2009), as an extension of IRT, discards unidimensionality and proposes that student proficiency is multidimensional, thus utilizing multiple latent traits to characterize students more comprehensively. NCD (Wang et al. 2020a) first introduces neural networks into cognitive diagnosis so as to capture sophisticated student-exercise relationships. Afterwards, more neural-network-based approaches (Yang et al. 2023d,a; Ma et al. 2022; Yang et al. 2023b) are proposed: ECD (Zhou et al. 2021) incorporates contextual features to facilitate more precise diagnosis of students' cognitive status, and RCD (Gao et al. 2021) explores student-exercise-concept associations via graphs and conducts more delicate modeling of the interactions. Recent work on ICD (Qi et al. 2023) further investigates the intrinsic correlations among knowledge concepts and the quantitative relationships between exercises and concepts. Despite this remarkable progress, existing methods exclusively take advantage of interacted responses while overlooking the un-interacted yet more informative exercises.

Sampling Strategy

Sampling strategies are extensively utilized in recommender systems, where sampling informative non-interacted instances close to positive samples helps models better learn the boundary between positive and negative samples. Conventional recommender systems often adopt random negative sampling (RNS) (Rendle et al. 2012) and static popularity-based negative sampling (PNS) (Caselles-Dupré, Lesaint, and Royo-Letelier 2018; Chen et al. 2017), through which the attained negative samples are typically of low quality and fail to train models effectively. Dynamic negative sampling (DNS) (Zhang et al. 2013) is an adaptive approach that scores each sample and uses high-scored ones as negative samples for model training. Currently, GAN-based negative sampling (Wang et al. 2017; Ding et al. 2019; Guo et al. 2020) prevails in recommender systems; despite these explorations, existing GAN-based sampling strategies often suffer from poor interpretability and inferior performance due to training instability. A graph-data-augmentation-based negative sampling method (Huang et al. 2021) augments positive samples with negative-sample information to confuse the recommender and enhance its ability to distinguish the boundary. Because student-exercise interactions rely not only on answering records but also on the associations between exercises and concepts, transplanting negative sampling strategies from recommender systems into cognitive diagnosis is challenging. Although previous work EIRS (YAO et al. 2023) has introduced sampling strategies into cognitive diagnosis, it essentially performs similarity-based sampling, where the attained samples carry information comparable to interacted ones and fail to provide extra diagnostic value.
Inspired by the high-quality negative samples achieved in recommender systems, we propose a novel sampling strategy to obtain informative samples.

Problem Statement

For cognitive diagnosis, we define three entity groups: the student set $S = \{s_1, s_2, \dots, s_N\}$ of size $N$; the exercise set $E = \{e_1, e_2, \dots, e_M\}$ of size $M$; and the knowledge concept set $K = \{k_1, k_2, \dots, k_C\}$ of size $C$. The exercise-concept relationship is defined by the matrix $Q \in \mathbb{R}^{M \times C}$, where $Q_{i,j} = 1$ if exercise $e_i$ involves concept $k_j$, else $Q_{i,j} = 0$. We also define interaction logs as triplets $(s_i, e_j, r_{ij}) \in R$, where $e_j$ is called an interacted exercise of student $s_i$, $r_{ij} = 1$ if student $s_i$ correctly answered exercise $e_j$, else $r_{ij} = 0$, and $R$ is the interaction set. The un-interacted exercise set of student $s_i$ is defined by $U_i = E \setminus E_i$, where $E_i$ is the interacted exercise set of student $s_i$. The knowledge concepts associated with the exercises interacted by student $s_i$ are called interacted knowledge concepts $K_i$, where $K_i \subset K$.

PROBLEM DEFINITION. Given the student, exercise, and knowledge concept entities, students' exercising response logs, the un-interacted exercise set $U_i$ of each student, and the exercise-knowledge relational matrix, our goal is to leverage the un-interacted exercises to enhance the performance of cognitive diagnosis. A toy instantiation of this notation is sketched below.
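To make the notation concrete, the following minimal Python sketch (toy sizes and logs are invented for illustration; `split_exercises` is a hypothetical helper, not part of CMES) builds the $Q$ matrix and response triplets and derives $E_i$, $U_i$, and $K_i$:

```python
import numpy as np

# Toy instantiation of the notation above (sizes and logs are illustrative).
N, M, C = 3, 5, 4                      # students, exercises, knowledge concepts
Q = np.zeros((M, C), dtype=int)        # Q[m, c] = 1 iff exercise e_m involves concept k_c
Q[0, 1] = Q[1, 2] = Q[2, 1] = Q[3, 0] = Q[4, 3] = 1

# Interaction logs R as (student, exercise, response) triplets.
R = [(0, 0, 1), (0, 2, 0), (1, 3, 1)]

def split_exercises(si):
    """Return (E_i, U_i, K_i): interacted exercises, un-interacted exercises,
    and interacted knowledge concepts of student s_i."""
    E_i = {e for (s, e, _) in R if s == si}
    U_i = set(range(M)) - E_i
    K_i = {c for e in E_i for c in np.flatnonzero(Q[e])}
    return E_i, U_i, K_i

print(split_exercises(0))   # e.g. ({0, 2}, {1, 3, 4}, {1})
```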
The Proposed CMES Framework

In this section, we first briefly introduce the proposed framework, then elaborate on each module, and finally discuss how to train the cognitive diagnosis model with the proposed CMES framework.

Overview. The core idea of this paper is to enhance cognitive diagnosis through sample augmentation using un-interacted exercises. To this end, as shown in Figure 2, our CMES framework comprises three key components: the sample augmentation module, the pseudo feedback module, and the extensible diagnosis module. The first two modules sample and blend informative exercises from the pool of un-interacted exercises for individual students, while simultaneously evaluating potential feedback labels for the mixed exercises. More precisely, within the sample augmentation module, we group students based on their response capabilities to mitigate interference from exercises with limited information, and then sample and mix exercise information from other clusters. Once we acquire informative samples, the pseudo feedback module leverages interacted information to deduce students' feedback, subsequently generating pseudo response labels for each mixed sample. The final module (i.e., the cognitive diagnosis module) employs the mixed exercises with pseudo labels, together with the interaction records, to deduce students' cognitive levels. Notably, our framework exhibits remarkable extensibility, seamlessly integrating supplementary data into existing methods and thereby enhancing their performance.

Figure 2: The overview architecture of CMES: (a) the Sample Augmentation Module, which consists of Collaboration-aware Un-interacted Exercise Sampling and Attention-based Sample Augmentation; (b) the Pseudo Feedback Module; (c) Model Training via the Cognitive Diagnosis Task.

Sample Augmentation Module

To thoroughly augment the information encompassed within the samples for each student, we first sample more informative exercises for each student from the un-interacted exercises by clustering students. Subsequently, the sampled exercises are combined with the interacted exercises to generate novel samples, facilitated by attention mechanisms.

Collaboration-aware Un-interacted Exercise Sampling. This step selects a specific number of exercises for each student $s_i$ from the un-interacted exercise set $U_i$. We posit the existence of two types of exercises within $U_i$ that offer limited supplementary information for improving the accuracy of students' proficiency diagnosis. The first type encompasses exercises related to the knowledge concepts in $K_i$ that student $s_i$ has already interacted with: $s_i$'s proficiency on the concepts in $K_i$ can be discerned directly from the interaction records. The second type consists of exercises accomplished by students exhibiting similar proficiency levels, drawing on the notion of collaboration, since such information is already partially captured by prevailing cognitive diagnosis models. Thus, we structure the sampling process as follows. Initially, we partition students into $W$ groups based on their performance on exercises and the exercise-concept relational matrix $Q$. Subsequently, for a student $s_i$ with an interaction set $R_i$ of size $t$, we give preference to exercises that are commonly completed by peers within the remaining $W-1$ clusters, as these exercises have garnered more feedback. In other words, for student $s_i$, we draw a sample of $2n$ exercises, forming the candidate set $U^{cand}_i = \{u_1, u_2, \dots, u_{2n}\} \subseteq U_i$, which intentionally excludes exercises linked to the knowledge concepts in $K_i$; the value of $n$ serves as a hyperparameter. A sketch of this sampling step follows.
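The following Python sketch illustrates one way to implement the collaboration-aware sampling described above. The cluster assignment itself (e.g., k-means over students' per-concept correct rates) is assumed to be given, and `sample_candidates` together with its popularity weighting is an illustrative choice, not the paper's exact procedure:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def sample_candidates(si, clusters, E, K, Q, n):
    """Draw 2n candidate exercises for student s_i from exercises interacted
    by peers in *other* clusters, excluding any exercise linked to s_i's
    interacted concepts K[si]. Exercises with more feedback are favored."""
    peers = [u for u in E if clusters[u] != clusters[si]]
    counts = Counter(e for u in peers for e in E[u])          # feedback volume
    banned = {e for e in counts if Q[e, list(K[si])].any()}   # touches K_i
    pool = [e for e in counts if e not in banned and e not in E[si]]
    probs = np.array([counts[e] for e in pool], dtype=float)
    probs /= probs.sum()
    k = min(2 * n, len(pool))
    return set(rng.choice(pool, size=k, replace=False, p=probs))
```

Here `clusters`, `E`, and `K` are dictionaries mapping each student to a cluster id, an interacted exercise set, and an interacted concept set, respectively.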
Attention-based Sample Augmentation. Using the $2n$ sampled exercises $U^{cand}_i$ for student $s_i$, we combine them with the interacted exercises $E_i$ to create a newly generated sample set, thereby augmenting our samples. More precisely, for each interacted exercise $e_j \in E_i$, we randomly select $n$ exercises (denoted $U^E_{i,j}$) from the set $U^{cand}_i$. Subsequently, we combine these $n+1$ exercises, leveraging an attention mechanism to produce $n+1$ new samples. Consequently, for a student $s_i$ whose interacted exercise set $E_i$ contains $t$ exercises, we generate a total of $t \times (n+1)$ new samples. Interacted exercises consistently provide substantial information, whereas sampled un-interacted exercises offer diverse and informative insights; this mixing operation balances the informativeness and diversity of samples, thereby enhancing the robustness and precision of student $s_i$'s diagnosis.

As we apply the mixture to the vector representations of exercise instances, we initiate the embedding process of exercises by performing a matrix multiplication. Specifically, the one-hot vector $x_{e_j}$ of each exercise $e_j$, along with $x_{u_m}$ for each exercise in $U^E_{i,j}$, is multiplied by a trainable matrix $E \in \mathbb{R}^{M \times d}$ to attain the initialized embedding representations $e^E_j, e^E_{u_m} \in \mathbb{R}^{1 \times d}$, where $M$ is the number of exercises and $d$ is the embedding size:

$$e^E_j = x_{e_j} \times E, \qquad e^E_{u_m} = x_{u_m} \times E. \tag{1}$$

Cognitive diagnosis models commonly utilize $e^E_j$ as the feature vector of exercise $e_j$. The dimensions of $e^E_j$ correspond to the number of knowledge concepts, with each dimension representing an exercise attribute concerning the relevant concept. In this study, for a deeper understanding of exercises related to un-interacted knowledge concepts, it is essential to amplify the weights of correlated knowledge concepts in the information mixing process. As such, we propose to construct a weight matrix $Q' \in \mathbb{R}^{M \times C}$ based on the $Q$ matrix, where $M$ and $C$ represent the number of exercises and knowledge concepts respectively:

$$Q'_{m,c} = \begin{cases} \alpha, & \text{if } Q_{m,c} = 0 \\ \beta, & \text{if } Q_{m,c} = 1 \end{cases} \tag{2}$$

where $Q_{m,c} = 1$ denotes that exercise $e_m$ is affiliated with knowledge concept $k_c$, and $\alpha$ and $\beta$ are hyperparameters subject to $\alpha < \beta$. Then, we multiply the knowledge concept weight vector $Q'_j$ with the initialized embedding vector $e^E_j$ of exercise $e_j$ (and analogously for each exercise in $U^E_{i,j}$, stacking the results) to obtain the weight-enhanced exercise embedding $e^{E\prime}_j \in \mathbb{R}^{(n+1) \times d}$:

$$e^{E\prime}_j = e^E_j \cdot Q'_j. \tag{3}$$

Based on the embedding representations of exercises, for each interacted exercise $e_j \in E_i$ and the $n$ randomly selected exercises $U^E_{i,j}$, we employ a self-attention network to mix them and obtain $n+1$ new embedding vectors. The embedding vectors of the generated samples incorporate the learned target embedding as well as the information from one another. Here we adopt Scaled Dot-Product Attention to capture the information among the sampled instances and interacted exercises:

$$Q, K, V = e^{E\prime}_j \times W^Q,\; e^{E\prime}_j \times W^K,\; e^{E\prime}_j \times W^V, \qquad A_j = \mathrm{softmax}\!\left(\frac{Q \times K^{\top}}{\sqrt{d}}\right) V, \tag{4}$$

where $W^Q, W^K, W^V \in \mathbb{R}^{d \times d}$ are three trainable matrices. $A_j \in \mathbb{R}^{(n+1) \times d}$ is the result computed by the attention module, representing a weighted vector that captures information from the other exercises. $A_j$ is then taken as the $j$-th item (i.e., $U^{E\prime}_{i,j}$) of the generated sample set $U^{E\prime}_i = \{U^{E\prime}_{i,1}, U^{E\prime}_{i,2}, \dots, U^{E\prime}_{i,t}\}$, which encompasses $t \times (n+1)$ samples for student $s_i$.
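As an illustration of Eqs. (1)-(4), the following PyTorch sketch implements the weight-enhanced self-attention mixing for one interacted exercise and its $n$ sampled companions (the class name, the choice $d = C$, and the $\alpha$/$\beta$ defaults are our illustrative assumptions, not the authors' released code):

```python
import torch
import torch.nn as nn

class WeightEnhancedMixer(nn.Module):
    """Illustrative sketch of Eqs. (1)-(4): concept-weight-enhanced
    self-attention mixing one interacted exercise with n sampled ones."""
    def __init__(self, num_items, d, alpha=0.5, beta=1.5):  # alpha < beta
        super().__init__()
        self.emb = nn.Embedding(num_items, d)                # trainable matrix E
        self.Wq = nn.Linear(d, d, bias=False)
        self.Wk = nn.Linear(d, d, bias=False)
        self.Wv = nn.Linear(d, d, bias=False)
        self.alpha, self.beta = alpha, beta

    def forward(self, item_ids, q_rows):
        # item_ids: (n+1,) ids of [e_j] + U^E_{i,j}; q_rows: (n+1, d) binary
        # exercise-concept rows (here d == C, as the paper assumes).
        w = self.alpha + (self.beta - self.alpha) * q_rows.float()  # Eq. (2)
        e = self.emb(item_ids) * w                                  # Eqs. (1), (3)
        q, k, v = self.Wq(e), self.Wk(e), self.Wv(e)
        attn = torch.softmax(q @ k.T / e.size(-1) ** 0.5, dim=-1)
        return attn @ v                                             # Eq. (4): A_j
```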
Pseudo Feedback Module

The samples generated by the sample augmentation module lack genuine response labels, which are necessary for cognitive diagnosis models. Therefore, within this module, we introduce a learning-to-rank task (Cao et al. 2007) to deduce the corresponding pseudo response label for these generated samples, relying on the following assumption.

Assumption. We assume that the probability of student $s_i$ correctly answering the interacted sample $e^E_j$ is greater than or equal to that of answering each generated sample when $r_{ij} = 1$; otherwise, when $r_{ij} = 0$, the ranking relationship is reversed. Formally:

$$P(r_{im}) \le P(r_{ij}) \le 1, \quad r_{ij} = 1; \qquad P(r_{im}) \ge P(r_{ij}) \ge 0, \quad r_{ij} = 0, \tag{5}$$

where $r_{ij}$ is the real response label of student $s_i$ on exercise $e_j$ ($r_{ij} = 1$ if $s_i$ correctly answered $e_j$, otherwise $r_{ij} = 0$); $P(r_{im})$ and $P(r_{ij})$ denote the probabilities of student $s_i$ correctly answering the exercises $e^{E\prime}_{u'_m} \in U^{E\prime}_{i,j}$ and $e^E_j$, respectively. Based on this assumption, we define a learning objective that maximizes the following function:

$$\prod_{e^{E\prime}_{u'_m} \in U^{E\prime}_{i,j}} \Big[ r_{ij} \cdot P_i\big(e^E_j > e^{E\prime}_{u'_m}\big) + (1 - r_{ij}) \cdot P_i\big(e^{E\prime}_{u'_m} > e^E_j\big) \Big], \tag{6}$$

where $P_i(a > b)$ represents the probability that student $s_i$ answers exercise $a$ correctly with higher likelihood than exercise $b$. We employ the BPR (Bayesian Personalized Ranking) (Rendle et al. 2012) loss function to simplify the learning of this objective:

$$\mathcal{L}_{Feedback} = -\sum_{e^{E\prime}_{u'_m} \in U^{E\prime}_i} \Big[ r_{ij} \ln \sigma\big(y'_{ij} - y'_{im}\big) + (1 - r_{ij}) \ln \sigma\big(y'_{im} - y'_{ij}\big) \Big], \tag{7}$$

where $\sigma(\cdot)$ is the sigmoid function, and $y'_{ij}$ and $y'_{im}$ are obtained by the diagnosis function (denoted $f_1$) of the cognitive diagnosis model. Then, for each un-interacted exercise $e_{u'_m}$, we map $y'_{im}$ into $\hat{y}_{im} \in \{0, 1\}$ as the pseudo feedback response label of student $s_i$ on exercise $e_{u'_m}$, where 1 indicates that $s_i$ may correctly answer $e_{u'_m}$, while 0 indicates a possible wrong answer.

Cognitive Diagnosis Module

Learning Model with CMES Framework. Our framework is applicable to any prevailing cognitive diagnosis model. The Sample Augmentation Module and the Pseudo Feedback Module provide the cognitive diagnosis model with personalized pairs of sampled exercises $U^{E\prime}_i$ and their corresponding pseudo feedback response labels for each student $s_i$.

Training. By predicting students' proficiency levels, we derive the ultimate mastery status of each student. In addition to the generated sample set $U^{E\prime}_i$, we also leverage the interaction set $R$ for diagnostic purposes. The loss function is a composite of two components. First, we employ the cross-entropy loss commonly used in conventional CD models (Yang et al. 2021, 2022) on the data from the interaction set $R$:

$$\mathcal{L}_{inter} = -\sum_{(s_i, e_j, r_{ij}) \in R} \big[ r_{ij} \log y_{ij} + (1 - r_{ij}) \log(1 - y_{ij}) \big], \tag{8}$$

where $y_{ij}$ represents the proficiency prediction for student $s_i$ on exercise $e_j$ attained through the diagnosis function of the cognitive diagnosis model (denoted $f_2$). Then, we design a loss function for the mixed exercises:

$$\mathcal{L}_{un\text{-}inter} = -\sum_{s_i \in S} \frac{1}{|U^{E\prime}_i|} \sum_{e^{E\prime}_{u'_m} \in U^{E\prime}_i} \big[ \hat{y}_{im} \log y_{im} + (1 - \hat{y}_{im}) \log(1 - y_{im}) \big], \tag{9}$$

where $y_{im}$ represents the proficiency prediction for student $s_i$ on the mixed sample $e^{E\prime}_{u'_m}$ attained through $f_2$, and $\hat{y}_{im}$ is the pseudo feedback label of student $s_i$ on $e^{E\prime}_{u'_m}$. We optimize the cognitive diagnosis module using

$$\mathcal{L}_{CD} = \mathcal{L}_{inter} + \mathcal{L}_{un\text{-}inter}, \tag{10}$$

and the entire framework using

$$\mathcal{L}_{CMES} = \mathcal{L}_{CD}(\Theta_1) + \alpha \cdot \mathcal{L}_{Feedback}(\Theta_2), \tag{11}$$

where $\alpha$ is a balancing hyperparameter that weighs the two loss functions, and $\Theta_1$ and $\Theta_2$ represent the training parameters of the Cognitive Diagnosis Module and the Pseudo Feedback Module, respectively. It is worth noting that the diagnosis functions $f_1$ and $f_2$ in the Pseudo Feedback Module and the Cognitive Diagnosis Module apply the same cognitive diagnosis model but with different parameters.
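The joint training of Eqs. (7)-(11) can be summarized in a few lines. The sketch below assumes pre-aligned 1-D tensors and an illustrative $\alpha$; it is not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def cmes_loss(y_inter, r_inter, y_mix, y_hat_mix, s_obs, s_mix, r_pair, alpha=0.1):
    """Illustrative sketch of Eqs. (7)-(11); all arguments are 1-D tensors.
    y_inter/r_inter: f2 predictions and 0/1 labels on interacted pairs (Eq. 8)
    y_mix/y_hat_mix: f2 predictions and pseudo labels on mixed samples (Eq. 9)
    s_obs/s_mix/r_pair: f1 scores y'_ij, y'_im and the label r_ij of the
                        interacted exercise each mixed sample stems from (Eq. 7)
    """
    l_inter = F.binary_cross_entropy(y_inter, r_inter)               # Eq. (8)
    l_unint = F.binary_cross_entropy(y_mix, y_hat_mix)               # Eq. (9)
    diff = s_obs - s_mix
    l_fb = -(r_pair * F.logsigmoid(diff)
             + (1 - r_pair) * F.logsigmoid(-diff)).sum()             # Eq. (7)
    return (l_inter + l_unint) + alpha * l_fb                        # Eqs. (10)-(11)
```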
Experiments

As the key contribution of this work is to extend existing cognitive diagnosis models (CDMs) to adaptively utilize un-interacted data, we compare the original CDMs and our optimized CDMs with the CMES¹ framework (denoted Original-CDMs and CMES-CDMs, respectively) on real-world datasets to address the following research questions:
• RQ1: Can CMES-CDMs outperform Original-CDMs in terms of performance?
• RQ2: How does our sample augmentation strategy outperform random sampling?
• RQ3: Is the performance of CMES sensitive to the setting of the sampling number?
• RQ4: Is the performance of CMES sensitive to the setting of the student cluster number?
• RQ5: How does CMES perform on different ratios of the training set?

¹https://github.com/WangCQ206/IntelligentEducation/tree/main/CMES

Experimental Settings

Datasets Description. We conduct experiments on two real-world datasets, ASSISTments (Feng, Heffernan, and Koedinger 2009) and Math, which both provide student-exercise interaction records and the exercise-knowledge concept relational matrix. ASSISTments is a publicly available dataset collected from the online tutoring system ASSISTments. Math is a proprietary dataset assembled by a renowned e-learning platform, comprising mathematics practice and examination records of elementary and secondary school students. For both datasets, we filter out students with fewer than 15 response logs to ensure sufficient data for model learning. After processing, the statistics of the two datasets are shown in Table 1. We apply a 70% : 10% : 20% training/validation/test split to each student's response logs in both datasets.

Statistics               ASSISTments   MATH
# Students               4,163         1,967
# Exercises              17,746        1,686
# Knowledge concepts     123           61
# Response logs          278,868       118,348
# Avg logs per student   67            60

Table 1: The statistics of the datasets.

Evaluation Metrics. Since students' true knowledge mastery is unavailable, the mainstream approach in the literature is to evaluate CDMs indirectly by using the obtained knowledge mastery vector to predict students' exercising performance. Three well-known metrics, i.e., Root Mean Square Error (RMSE) (Tian et al. 2022), Prediction Accuracy (ACC) (Tian et al. 2021), and Area Under the ROC Curve (AUC) (Bradley 1997), are chosen to evaluate predictive performance.

(a) ASSISTments
Model  ACC (Original / CMES)   RMSE (Original / CMES)   AUC (Original / CMES)
IRT    68.89% / 70.59%         0.4684 / 0.4547          70.45% / 74.40%
MIRT   70.79% / 72.36%         0.4634 / 0.4368          73.93% / 75.55%
NCD    72.27% / 72.89%         0.4335 / 0.4283          75.22% / 76.23%
CDGK   72.08% / 73.01%         0.4356 / 0.4306          74.83% / 75.51%
ECD    72.47% / 72.80%         0.4334 / 0.4287          74.97% / 76.25%
RCD    72.99% / 73.06%         0.4243 / 0.4237          76.40% / 76.51%

(b) MATH
Model  ACC (Original / CMES)   RMSE (Original / CMES)   AUC (Original / CMES)
IRT    70.88% / 72.75%         0.4505 / 0.4460          71.62% / 76.37%
MIRT   72.99% / 74.60%         0.4284 / 0.4097          75.31% / 78.04%
NCD    74.13% / 74.94%         0.4102 / 0.4053          77.14% / 78.81%
CDGK   73.68% / 74.63%         0.4121 / 0.4068          77.00% / 78.13%
ECD    74.16% / 74.83%         0.4101 / 0.4077          77.18% / 78.30%
RCD    74.86% / 75.16%         0.4063 / 0.4055          78.34% / 78.64%

Table 2: Experimental results on student performance prediction. Our CMES-CDMs significantly outperform the Original-CDMs with p < 0.01.

Cognitive Diagnosis Models. To validate the effectiveness of the CMES framework, we conduct comparison experiments based on six representative CDMs, namely IRT (Lord 1980), MIRT (Reckase 2009), NCD (Wang et al. 2020a), CDGK (Wang et al. 2021), ECD (Zhou et al. 2021), and RCD (Gao et al. 2021).

Parameter Settings. We initialize all network parameters with Xavier (Glorot and Bengio 2010) initialization and use the Adam (Kingma and Ba 2014) optimizer with a fixed batch size of 256 during training. For the multi-dimensional models (i.e., MIRT, NCD, CDGK, ECD, and RCD), we set the dimensions of the latent features for both students and exercises equal to the number of knowledge concepts, i.e., 123 for ASSISTments and 61 for MATH. Based on parameter tuning, we set n to 20 for ASSISTments and 5 for Math, and W to 50 and 20 for ASSISTments and Math, respectively. Finally, experimental results for all models are obtained through standard 5-fold cross-validation. The hyperparameters of the comparison approaches are tuned on the validation set according to the original papers. All models are implemented in PyTorch, and all experiments are conducted on Linux servers with Tesla V100 GPUs.
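As a concrete illustration of these settings, a minimal PyTorch setup could look as follows (the placeholder model and helper names are ours, not the released CMES code):

```python
import torch
import torch.nn as nn

def xavier_init(m):
    # Xavier initialization of all weight matrices, as in Parameter Settings.
    if isinstance(m, (nn.Linear, nn.Embedding)):
        nn.init.xavier_uniform_(m.weight)

NUM_CONCEPTS = 123                       # 123 for ASSISTments, 61 for MATH
model = nn.Linear(NUM_CONCEPTS, 1)       # placeholder for any of the six CDMs
model.apply(xavier_init)
optimizer = torch.optim.Adam(model.parameters())
BATCH_SIZE = 256                         # fixed batch size used in the paper
```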
Performance Comparison (RQ1)

We compare six pairs of Original-CDMs and CMES-CDMs in terms of RMSE, ACC, and AUC. The experimental results are exhibited in Table 2. For each pair, CMES-CDM outperforms Original-CDM on all evaluation metrics across all datasets. Even for RCD, which models the intrinsic correlations among knowledge concepts, CMES still improves efficacy. These observations verify that, by excavating and leveraging the information within un-interacted exercises, the proposed CMES framework can be combined with prevailing CDMs and boost their diagnosis performance.

Figure 3: The comparison results between our sampling strategy CMES and the random sampling strategy (RSS).

Effectiveness of the Sampling Strategy (RQ2)

To validate the effectiveness of the sample augmentation strategy, we compare it with a random sampling strategy (RSS). RSS randomly samples exercises for each student $s_i$ from $U_i$; these sampled exercises are used directly as extra training samples without the information mixture process, and are then fed into the pseudo feedback module to assess the potential labels. Figure 3 exhibits the comparison among NCD with our sampling strategy (CMES-NCD), NCD with the random sampling strategy (RSS-NCD), and the original NCD (original-NCD). CMES-NCD markedly surpasses RSS-NCD and original-NCD on all metrics for both datasets, while the random sampling strategy actually deteriorates model performance. The information gathered from randomly sampled exercises may lack diversity, providing little diagnostic value; such redundant exercises can even mislead the cognitive diagnosis model and hinder it from accurately diagnosing students' cognitive states. In contrast, the proposed CMES performs collaboration-aware sampling and mixes exercise information, enriching the diagnostic signal and enabling the model to infer students' cognitive states more comprehensively.

Figure 4: Impact of sampling number.

Sensitivity Analysis of Sampling Number (RQ3)

We choose two representative CDMs (i.e., MIRT and NCD) combined with our CMES framework to investigate the performance change when varying the per-student sampling number (i.e., the parameter n) over {5, 10, 20, 30, 40}. As shown in Figure 4, on ASSISTments the optimal performance of CMES-MIRT and CMES-NCD is achieved at n = 20, while their peak performance is reached at n = 5 on Math. This may be attributed to the exercise pool sizes of the two datasets: as shown in Table 1, the number of exercises in ASSISTments is much larger than that of Math. The performance of CMES-MIRT and CMES-NCD starts to decrease when n > 20 and n > 5 on ASSISTments and MATH, respectively. The degradation is more significant for MIRT, because its simple student-exercise interaction function cannot capture fine-grained exercise information; excessive exercises confuse the diagnosis model and lead to negative optimization.

Sensitivity Analysis of Student Cluster (RQ4)

We further use NCD combined with our CMES to probe the impact of the number of clusters W, searching W over {0, 50, 100, 150, 200} and {0, 20, 50, 80, 100} for ASSISTments and MATH, respectively. As depicted in Figure 5, the optimal values of W are 50 and 20 for ASSISTments and MATH.
From this observation, on the one hand, the optimal setting of W seems to be related to the student population size: as shown in Table 1, the number of students in ASSISTments is larger than that in Math. On the other hand, an inappropriate setting of the student cluster number W results in significant performance degradation, which shows that the performance of CMES is sensitive to this setting and answers RQ4.

Figure 5: Impact of student cluster number.

Figure 6: Training with different ratios of the training set.

Case Study (RQ5)

We select 20% of the ASSISTments dataset as the test set and use 80%, 70%, 60%, and 50% of the full dataset from the remaining data to train the model, respectively. As depicted in Figure 6, CMES-NCD trained on different training-set sizes consistently demonstrates excellent performance. CMES-NCD trained on 60% of the data is on par with original-NCD trained on 80%. Additionally, the enhancement attained by CMES-NCD is most pronounced when trained on 50% of the data, where it surpasses original-NCD trained on 70% of the data and approaches its performance when trained on 80%. These observations validate that our CMES framework can mitigate data scarcity by extracting more information from un-interacted exercises.

Conclusion

In this work, we attempted to explore informative un-interacted exercises related to broader knowledge concepts, with the aim of providing a more comprehensive diagnostic assessment of students. We proposed a generic framework, CMES (Collaborative-aware Mixed Exercise Sampling), that enables sampling of rich information from un-interacted exercises and facilitates the evaluation of potential true labels. Experimental results on real-world datasets demonstrate the effectiveness of the sampling strategy and the scalability of our framework. We intend to further investigate sampling strategies tailored to the characteristics of cognitive diagnosis models.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (No. 62107001, No. U21A20512, and No. 62302010), in part by the Anhui Provincial Natural Science Foundation (No. 2108085QF272), in part by the University Synergy Innovation Program of Anhui Province (No. GXXT-2021-004), in part by the Key Research and Development Project of Qinghai Province (No. 2023-GXC13), and in part by the China Postdoctoral Science Foundation (No. 2023M740015).

References

Bradley, A. P. 1997. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognition, 30(7): 1145–1159.
Cao, Z.; Qin, T.; Liu, T.-Y.; Tsai, M.-F.; and Li, H. 2007. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the 24th International Conference on Machine Learning, 129–136.
Caselles-Dupré, H.; Lesaint, F.; and Royo-Letelier, J. 2018. Word2vec applied to recommendation: Hyperparameters matter. In Proceedings of the 12th ACM Conference on Recommender Systems, 352–356.
Chen, T.; Sun, Y.; Shi, Y.; and Hong, L. 2017. On sampling strategies for neural network-based collaborative filtering. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 767–776.
De La Torre, J. 2009. DINA model and parameter estimation: A didactic. Journal of Educational and Behavioral Statistics, 34(1): 115–130.
Ding, J.; Quan, Y.; He, X.; Li, Y.; and Jin, D. 2019.
Reinforced negative sampling for recommendation with exposure data. In IJCAI, 2230–2236. Macao.
Feng, M.; Heffernan, N.; and Koedinger, K. 2009. Addressing the assessment challenge with an online system that tutors as it assesses. User Modeling and User-Adapted Interaction, 19(3): 243–266.
Gao, W.; Liu, Q.; Huang, Z.; Yin, Y.; Bi, H.; Wang, M.-C.; Ma, J.; Wang, S.; and Su, Y. 2021. RCD: Relation map driven cognitive diagnosis for intelligent education systems. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 501–510.
Glorot, X.; and Bengio, Y. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256. JMLR Workshop and Conference Proceedings.
Guo, G.; Zhou, H.; Chen, B.; Liu, Z.; Xu, X.; Chen, X.; Dong, Z.; and He, X. 2020. IPGAN: Generating informative item pairs by adversarial sampling. IEEE Transactions on Neural Networks and Learning Systems, 33(2): 694–706.
Huang, T.; Dong, Y.; Ding, M.; Yang, Z.; Feng, W.; Wang, X.; and Tang, J. 2021. MixGCF: An improved training method for graph neural network-based recommender systems. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 665–674.
Kingma, D.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Liu, B.; and Wang, B. 2023. Bayesian negative sampling for recommendation. In 2023 IEEE 39th International Conference on Data Engineering (ICDE), 749–761. IEEE.
Lord, F. M. 1980. Applications of Item Response Theory to Practical Testing Problems. Lawrence Erlbaum Associates.
Lord, F. M. 2012. Applications of Item Response Theory to Practical Testing Problems. Routledge.
Ma, H.; Zhu, J.; Yang, S.; Liu, Q.; Zhang, H.; Zhang, X.; Cao, Y.; and Zhao, X. 2022. A prerequisite attention model for knowledge proficiency diagnosis of students. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 4304–4308.
Qi, T.; Ren, M.; Guo, L.; Li, X.; Li, J.; and Zhang, L. 2023. ICD: A new interpretable cognitive diagnosis model for intelligent tutor systems. Expert Systems with Applications, 215: 119309.
Qin, C.; Zhang, L.; Zha, R.; Shen, D.; Zhang, Q.; Sun, Y.; Zhu, C.; Zhu, H.; and Xiong, H. 2023. A comprehensive survey of artificial intelligence techniques for talent analytics. arXiv preprint arXiv:2307.03195.
Reckase, M. D. 2009. Multidimensional Item Response Theory. Springer New York.
Rendle, S.; and Freudenthaler, C. 2014. Improving pairwise learning for item recommendation from implicit feedback. In Proceedings of the 7th ACM International Conference on Web Search and Data Mining, 273–282.
Rendle, S.; Freudenthaler, C.; Gantner, Z.; and Schmidt-Thieme, L. 2012. BPR: Bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1205.2618.
Tian, Y.; Pan, J.; Yang, S.; Zhang, X.; He, S.; and Jin, Y. 2022. Imperceptible and sparse adversarial attacks via a dual-population-based constrained evolutionary algorithm. IEEE Transactions on Artificial Intelligence, 4(2): 268–281.
Tian, Y.; Peng, S.; Yang, S.; Zhang, X.; Tan, K. C.; and Jin, Y. 2021. Action command encoding for surrogate-assisted neural architecture search. IEEE Transactions on Cognitive and Developmental Systems, 14(3): 1129–1142.
Wang, F.; Liu, Q.; Chen, E.; Huang, Z.; Chen, Y.; Yin, Y.; Huang, Z.; and Wang, S. 2020a. Neural cognitive diagnosis for intelligent education systems.
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 6153–6161.
Wang, J.; Yu, L.; Zhang, W.; Gong, Y.; and Zhang, D. 2017. IRGAN: A minimax game for unifying generative and discriminative information retrieval models. ACM SIGIR Forum.
Wang, X.; Huang, C.; Cai, J.; and Chen, L. 2021. Using knowledge concept aggregation towards accurate cognitive diagnosis. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2010–2019.
Wang, X.; Xu, Y.; He, X.; Cao, Y.; Wang, M.; and Chua, T.-S. 2020b. Reinforced negative sampling over knowledge graph for recommendation. In Proceedings of The Web Conference 2020, 99–109.
Yang, S.; Ma, H.; Zhen, C.; Tian, Y.; Zhang, L.; Jin, Y.; and Zhang, X. 2023a. Designing novel cognitive diagnosis models via evolutionary multi-objective neural architecture search. arXiv preprint arXiv:2307.04429.
Yang, S.; Tian, Y.; He, C.; Zhang, X.; Tan, K. C.; and Jin, Y. 2021. A gradient-guided evolutionary approach to training deep neural networks. IEEE Transactions on Neural Networks and Learning Systems, 33(9): 4861–4875.
Yang, S.; Tian, Y.; Xiang, X.; Peng, S.; and Zhang, X. 2022. Accelerating evolutionary neural architecture search via multifidelity evaluation. IEEE Transactions on Cognitive and Developmental Systems, 14(4): 1778–1792.
Yang, S.; Wei, H.; Ma, H.; Tian, Y.; Zhang, X.; Cao, Y.; and Jin, Y. 2023b. Cognitive diagnosis-based personalized exercise group assembly via a multi-objective evolutionary algorithm. IEEE Transactions on Emerging Topics in Computational Intelligence.
Yang, S.; Yu, X.; Tian, Y.; Yan, X.; Ma, H.; and Zhang, X. 2023c. Evolutionary neural architecture search for Transformer in knowledge tracing. arXiv preprint arXiv:2310.01180.
Yang, S.; Zhen, C.; Tian, Y.; Ma, H.; Liu, Y.; Zhang, P.; and Zhang, X. 2023d. Evolutionary multi-objective neural architecture search for generalized cognitive diagnosis models. In 2023 5th International Conference on Data-driven Optimization of Complex Systems (DOCS), 1–10. IEEE.
YAO, F.; Huang, Z.; Hou, M.; Tong, S.; Liu, Q.; Chen, E.; Sha, J.; and Wang, S. 2023. Exploiting non-interactive exercises in cognitive diagnosis. In IJCAI 2023.
Zhang, W.; Chen, T.; Wang, J.; and Yu, Y. 2013. Optimizing top-N collaborative filtering via dynamic negative item sampling. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, 785–788.
Zhou, Y.; Liu, Q.; Wu, J.; Wang, F.; Huang, Z.; Tong, W.; Xiong, H.; Chen, E.; and Ma, J. 2021. Modeling context-aware features for cognitive diagnosis in student learning. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2420–2428.
Plug-In Diffusion Model for Sequential Recommendation

Haokai Ma1, Ruobing Xie3, Lei Meng2,1*, Xin Chen3, Xu Zhang3, Leyu Lin3, Zhanhui Kang3
1School of Software, Shandong University, China
2Shandong Research Institute of Industrial Technology, China
3Tencent, China
[email protected], [email protected], [email protected], {andrewxchen, xuonezhang, goshawklin, kegokang}@tencent.com

*Corresponding Author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Pioneering efforts have verified the effectiveness of diffusion models in exploring the informative uncertainty for recommendation. Considering the difference between recommendation and image synthesis tasks, existing methods have undertaken tailored refinements to the diffusion and reverse processes. However, these approaches typically use the highest-score item in the corpus for user interest prediction, ignoring the user's generalized preference contained within other items and thereby remaining constrained by the data sparsity issue. To address this issue, this paper presents a novel Plug-In Diffusion Model for Recommendation (PDRec) framework, which employs the diffusion model as a flexible plugin to jointly take full advantage of the diffusion-generated user preferences on all items. Specifically, PDRec first infers the user's dynamic preferences on all items via a time-interval diffusion model and proposes a Historical Behavior Reweighting (HBR) mechanism to identify high-quality behaviors and suppress noisy ones. In addition to the observed items, PDRec proposes a Diffusion-based Positive Augmentation (DPA) strategy to leverage the top-ranked unobserved items as potential positive samples, bringing in informative and diverse soft signals to alleviate data sparsity. To alleviate the false negative sampling issue, PDRec employs Noise-free Negative Sampling (NNS) to select stable negative samples, ensuring effective model optimization. Extensive experiments and analyses on four datasets have verified the superiority of the proposed PDRec over the state-of-the-art baselines and showcased the universality of PDRec as a flexible plugin for commonly used sequential encoders in different recommendation scenarios. The code is available at https://github.com/hulkima/PDRec.

Introduction

Personalized recommendation aims to capture user preference from massive user behaviors and predict the appropriate items the user will be interested in (Meng et al. 2020; Ma et al. 2021, 2023a). Sequential recommendation (SR) is an effective method for inferring dynamic interests from the user's historical behavior sequences (Zhang et al. 2022; Li et al. 2022; Chen et al. 2022). However, most users in the real world only interact with a limited number of items within the overall item corpus, consequently leading to the data sparsity problem (Xia et al. 2021; Chen et al. 2023a,b).

Figure 1: Illustration of the difference between the pioneering DiffRec and PDRec, where each rectangle denotes the user's diffusion-based preference on the corresponding item: (a) DiffRec directly uses the highest-score item, while (b) the proposed PDRec fully utilizes all diffusion outputs and serves SR models as a plugin.
Diffusion model (DM), benefiting from its characteristics of diverse representation and informative uncertainty, has achieved state-of-the-art results in the fields of image synthesis (Ho, Jain, and Abbeel 2020), semantic segmentation (Brempong et al. 2022), and time series imputation (Lopez Alcaraz and Strodthoff 2023). This demonstrates the dominance of DM as a novel generative paradigm across multiple generation tasks. Looking back at real-world recommender systems, a recommender could be regarded as a generator of the complete user-item interaction matrix based on extremely sparse supervised signals (Moon et al. 2023). This prompts an intuitive question: can we take full advantage of DM's potent generalization capability to generate user preferences on both observed and unobserved items, thereby addressing the sparsity issue in recommendation?

Recently, CODIGEM (Walker et al. 2022) and DiffRec (Wang et al. 2023) have introduced DM into recommendation, generating users' preferences based on their historical behaviors and yielding promising results. However, these pioneering studies still grapple with two challenges. (1) How to fully utilize the generalized user preferences from DM? As shown in Fig. 1(a), these methods merely utilize the highest-scored item as the final prediction, overlooking users' preferences towards other items in the corpus and struggling with the data sparsity issue, even though these preferences encapsulate substantial informative and generalized knowledge from the inference process of DM. (2) How to incorporate the diffusion-based knowledge into a universal framework that can smoothly cooperate with different SR models? Existing DM-based methods are primarily proposed for collaborative filtering (CF), failing to fully integrate the time-aware sequential behavioral information. This leads to huge gaps between them and real-world recommenders, limiting their practical feasibility and universality (as indicated in Table 1, even T-DiffRec exhibits notable disparities from traditional SR models).
To address these issues, we propose a novel Plug-In Diffusion Model for Recommendation (PDRec) framework, which leverages the diffusion model as a flexible plugin and makes full use of the diffusion-generated user preferences on all items. A conceptual overview of PDRec is illustrated in Fig. 1(b). Specifically, we first present a time-interval diffusion model on the basis of T-DiffRec to facilitate more precise generation of dynamic user preferences for both observed and unobserved items. These diffusion-based preferences on all items (i.e., preferences on observed items, top-ranked unobserved items, and low-scored unobserved items) jointly guide an effective and stable optimization direction: (a) we devise a Historical Behavior Reweighting (HBR) method for users' historical behaviors, which identifies high-quality behaviors and reduces noisy interactions via the preferences generated by the diffusion model; (b) we propose a Diffusion-based Positive Augmentation (DPA) approach to convert unobserved items with top-ranked diffusion-based preferences into potential positive labels via self-distillation, bringing in additional high-quality and diverse positive signals to alleviate data sparsity; (c) to alleviate the potential false negative sampling issue, we design a Noise-free Negative Sampling (NNS) strategy, which selects safer negative samples from the low-scored unobserved items provided by diffusion during training.

The advantages of PDRec include: (1) HBR facilitates the discovery of more informative supervised signals from the global diffusion perspective, which better guides model optimization; (2) DPA and NNS introduce additional knowledge on unobserved items that alleviates data sparsity issues; (3) PDRec is effective, universal, and easy to deploy, and can be conveniently applied to different datasets, base models, and recommendation tasks. Extensive experiments on four real-world datasets with three base SR models demonstrate that our proposed PDRec achieves significant and consistent improvements across various datasets and tasks, including SR and cross-domain SR. Furthermore, we conduct comprehensive ablation studies and universality analyses to validate the effectiveness and universality of all components in PDRec. The main contributions of this paper are summarized as follows:
• We propose a model-agnostic Plug-In Diffusion Model for Recommendation, which fully leverages the diffusion-based preferences on all items to improve base recommenders. To the best of our knowledge, we are the first to integrate the diffusion model as a plugin for different types of recommendation models and downstream tasks.
• The proposed HBR, DPA, and NNS are effective, model-agnostic, and easy-to-deploy plug-in strategies, which exploit informative diffusion-generated preferences on all items to alleviate data sparsity.
• Our PDRec achieves significant and consistent improvements on different datasets, base SR models, and tasks. Its detachable components are well-received in practice.

Related Work

Sequential recommendation. Sequential Recommendation (SR) is one of the representative methods for capturing users' dynamic, temporally aware preference evolution patterns by modeling the sequential dependencies of their historical behaviors, thereby recommending the next item a user may be interested in (Li et al. 2022). In recent years, Convolutional Neural Networks (CNN) (Xu et al. 2019; Tang and Wang 2018), Recurrent Neural Networks (RNN) (Li et al. 2017; Hidasi et al. 2016), and Transformers (Sun et al. 2019; Kang and McAuley 2018) have been introduced into SR to capture users' preference dependencies. GRU4Rec (Hidasi et al. 2016) employs the Gated Recurrent Unit (GRU) as the sequential encoder to learn users' long-term dependencies. SASRec (Kang and McAuley 2018), one of the most widely used methods in SR, introduces Transformers for historical behavior interaction modeling. CL4SRec (Xie et al. 2022) is a strong SR model that proposes three sequence-based augmentations to construct contrastive learning (CL) tasks in SR. Nevertheless, existing methods primarily focus on modeling users' long- and short-term behaviors with various neural architectures, disregarding the potential impact of time-interval-sensitive knowledge and of the recommender's generalization capability across the entire corpus on the modeling of user preferences.

Diffusion models in recommendation. As a prominent deep generative method, Diffusion Models (DM) are inspired by non-equilibrium statistical physics and have demonstrated exceptional performance in super resolution (Ho et al. 2022; Shi et al. 2022), semantic segmentation (Brempong et al. 2022), and time series imputation (Tashiro et al. 2021).
In recommendation, however, relevant studies remain notably scarce. CODIGEM (Walker et al. 2022) leverages DM to generate robust collaborative signals and latent representations by modeling intricate and non-linear patterns. DiffRec (Wang et al. 2023) reduces the noise added in the generative process to retain globally analogous yet personalized collaborative information in a denoising manner. DiffuRec (Li, Sun, and Li 2023) and DiffRec* (Du et al. 2023) corrupt the item representations into a Gaussian distribution and reverse them based on the historical behaviors, injecting uncertainty into item representation construction. However, the former two DM works rely excessively on the top-ranked data derived from diffusion, which not only incurs computational cost and homogeneous recommendation results but also disregards the comprehensive user historical behaviors; the latter two share the same modeling pipeline as SR methods and can thus serve as base SR models within PDRec. The proposed PDRec differs from these works: (a) instead of directly training a diffusion model, we leverage a pre-trained DM to diminish the time complexity of model training; (b) we achieve a dual enhancement of recommendation diversity and preference modeling through denoising, knowledge distillation, and negative sampling on sequence encoders with the diffusion-based preference; (c) PDRec is model- and task-agnostic, enabling its application across different sequence encoders and recommendation scenarios.

Figure 2: The illustration of our enhanced time-interval diffusion recommendation model (TI-DiffRec), comprising the time interval reweighting step, the diffusion process $q(x'_t \mid x'_{t-1})$, and the reverse process $p_\theta(x'_{t-1} \mid x'_t)$.

Time-Interval Diffusion Model

In this section, we present the Time-Interval Diffusion Recommendation model (TI-DiffRec). Following the classical DM methods (Ho, Jain, and Abbeel 2020; Nichol and Dhariwal 2021), the pioneering methods (Walker et al. 2022; Wang et al. 2023) typically leverage the original interaction matrix for diffusion, which is challenging to apply in SR. Despite incorporating the temporal order of user interactions, T-DiffRec (Wang et al. 2023) overlooks the time interval between consecutive behaviors, potentially leading to preference drift. As illustrated in Fig. 2, we introduce an additional time interval reweighting step alongside the diffusion and reverse processes to tackle this challenge.

• Time Interval Reweighting. To incorporate the time interval information into diffusion, we first generate the time-interval-aware input $x'_0$ from the behavior sequence $S_u = \{i^1_u, i^2_u, \dots, i^p_u\}$ and the corresponding timestamps $T_u = \{t^1_u, t^2_u, \dots, t^p_u\}$ of user $u$. Specifically, we compute the time-interval weight of each behavior $i^j_u$ as

$$w^j_u = w_{min} + \frac{t^j_u - t^1_u}{t^p_u - t^1_u}\,(w_{max} - w_{min}),$$

where $w_{min}$ and $w_{max}$ denote predefined lower and upper bounds. We then define $x'_0 = [x_1, x_2, \dots, x_{|I|}]$ as the initial state for diffusion, where $x_{i^j_u} = w^k_u$ or $0$ indicates whether $u$ has interacted with $i^j_u$ or not ($k$ denotes the index of $i^j_u$ within $S_u$).

• Diffusion Process. Generally, the diffusion process gradually injects uncertainty noise into the original data until a fully disordered state is reached.
The significant difference between DM and other latent variable models is that the transition kernel $q(x'_t \mid x'_{t-1})$ used in DM obtains the latent variables $x'_t$ through a Markov chain. Specifically, we employ a Gaussian perturbation as the transition kernel

$$q(x'_t \mid x'_{t-1}) := \mathcal{N}\big(x'_t;\; \sqrt{1-\beta_t}\,x'_{t-1},\; \beta_t I\big),$$

where the variance $\beta_t \in (0,1)$ controls the Gaussian noise scale added at step $t$. Note that typical DM methods (Ho, Jain, and Abbeel 2020) fix these variances to constants, reflecting the notable property of DM that $q$ has no learnable parameters. Thus, with the notation $\alpha_t := 1-\beta_t$, $\bar{\alpha}_t := \prod_{t'=1}^{t} \alpha_{t'}$ and $t \in [1, 2, \dots, T]$, we can directly generate

$$q(x'_t \mid x'_0) := \mathcal{N}\big(x'_t;\; \sqrt{\bar{\alpha}_t}\,x'_0,\; (1-\bar{\alpha}_t) I\big).$$

If $T \to +\infty$, $x'_T$ asymptotically converges to the standard Gaussian distribution. That is, given $x'_0$, we can easily obtain $x'_t = \sqrt{\bar{\alpha}_t}\,x'_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon$ by sampling the Gaussian vector $\epsilon \sim \mathcal{N}(0, I)$ via reparameterization (Kingma and Welling 2013).

• Reverse Process. The reverse process aims to recover the user's interactions step by step from the standard Gaussian distribution through denoising transitions. Precisely, given the Gaussian vector $x'_T$, we gradually remove the noise and recover the original interactions with the learnable transition kernel $p_\theta(x'_{t-1} \mid x'_t)$ in the reverse direction. The reverse transition phase is defined as

$$p_\theta(x'_{t-1} \mid x'_t) := \mathcal{N}\big(x'_{t-1};\; \mu_\theta(x'_t, t),\; \Sigma_\theta(x'_t, t)\big), \tag{1}$$

where $\mu_\theta(x'_t, t)$ and $\Sigma_\theta(x'_t, t)$ are parameterized by deep neural networks (DNN) and $\theta$ denotes the model parameters. Such an iterative reverse process allows us to model complex interaction generation procedures for recommendation.
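To make the TI-DiffRec pipeline concrete, here is a minimal PyTorch sketch of the time-interval reweighting and the closed-form forward diffusion (the helper names, the linear $\beta$ schedule, and the bound defaults are illustrative assumptions, not the released code):

```python
import torch

def time_interval_input(seq, ts, num_items, w_min=0.1, w_max=1.0):
    """Build the time-interval-aware state x'_0 from a behavior sequence
    `seq` and its timestamps `ts`; w_min/w_max are illustrative bounds."""
    x0 = torch.zeros(num_items)
    t1, tp = ts[0], ts[-1]
    for item, t in zip(seq, ts):
        x0[item] = w_min + (t - t1) / max(tp - t1, 1) * (w_max - w_min)
    return x0

betas = torch.linspace(1e-4, 0.02, steps=100)     # illustrative noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)     # \bar{alpha}_t

def q_sample(x0, t):
    """Closed-form forward diffusion: x'_t = sqrt(a) x'_0 + sqrt(1-a) eps."""
    eps = torch.randn_like(x0)
    a = alpha_bar[t]
    return a.sqrt() * x0 + (1.0 - a).sqrt() * eps
```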
Plug-In Diffusion Model

Task Formulation and Overall Framework

SR aims to improve next-item recommendation performance via the user's historical sequential behaviors. To this end, given the behavior sequence $S_u = \{i^1_u, i^2_u, \dots, i^p_u\}$ of user $u \in U$, where $i^j \in I$ is the $j$-th behavior of $u$ and $p$ denotes the historical behavior length, PDRec tries to recommend the target item $i^{p+1}_u$ that this user will interact with.

In this section, we describe the proposed model-agnostic Plug-In Diffusion Model for Recommendation (PDRec) framework, which leverages the diffusion model as a flexible plugin to accurately model dynamic user preferences in SR. The overall structure of PDRec is illustrated in Fig. 3. Specifically, PDRec first explicitly incorporates the user behavior timestamps in diffusion to capture the dynamics of actual sequential patterns and generate the user's preferences on all items. Next, PDRec proposes a Historical Behavior Reweighting (HBR) strategy to identify indispensable supervised signals using the diffusion-based preferences on observed items. Additionally, PDRec designs a Diffusion-based Positive Augmentation (DPA) method to alleviate the data sparsity problem, which conducts self-distillation to dynamically incorporate probable interactions from the unobserved items as augmented soft samples into training. Finally, PDRec employs Noise-free Negative Sampling (NNS) to select stable negative samples, with the aim of mitigating the potential false negative problem. It is noteworthy that PDRec is task-agnostic, allowing the framework to be easily migrated to cross-domain sequential recommendation (CDSR) tasks; the related analysis is presented in Table 3.

Figure 3: The overall structure of the proposed PDRec, where TI-DiffRec serves as a plugin that drives HBR, DPA, and NNS for the sequential recommender.

Historical Behavior Reweighting

The basic task of sequential preference modeling is to accurately and comprehensively leverage the historical behavior sequences. Intuitively, different items within the user's sequence should hold varying degrees of importance for next-item prediction. Therefore, the critical aspect of historical preference modeling lies in adaptively and finely differentiating all observed items, integrating the distinct importance of various items into the training phase. To this end, we propose a Historical Behavior Reweighting (HBR) strategy that reweights the supervised signals in training for behavior sequence denoising. Specifically, given the pre-trained TI-DiffRec model and the complete interaction state $x_0$ of user $u$ in the training set, we first obtain the time-interval-aware state $x'_0$. Following the inference process in DiffRec (Wang et al. 2023), we regard $x'_0$, which is naturally noisy yet retains personalized information, as the noised state $\hat{x}'_T$. We then apply the reverse denoising $\hat{x}'_T \to \dots \to \hat{x}'_1 \to \hat{x}'_0$ to generate the diffusion-based preferences $\hat{x}'_0 \in \mathbb{R}^{|I|}$ of $u$ on all items $I$. To reweight the supervised signals, HBR focuses exclusively on the observed preferences $o_u \in \mathbb{R}^{|I^+_u|}$ and their corresponding rankings $r_u \in \mathbb{R}^{|I^+_u|}$ extracted from $\hat{x}'_0$. The reweighting vector for the supervised signals of user $u$ is formulated as

$$\hat{w}_u = (1-\omega_r)\cdot\omega_s\cdot\frac{o_u - \min o_u}{\max o_u - \min o_u} + \omega_r\cdot\frac{1 + \max r_u - r_u}{\max r_u}, \tag{2}$$

where $\omega_s = \mathrm{len}(S_u) / \mathrm{sum}\big(\frac{o_u - \min o_u}{\max o_u - \min o_u}\big)$, and $\omega_r$ and $(1-\omega_r)$ denote the weights of the observed ranking and preference terms, respectively. This mixture ensures that the recommender's optimization direction remains reasonably aligned with the observations prior to reweighting. Finally, PDRec generates the final reweighting vector $w_u = \omega_f \cdot \min\big(\max(c_w, \hat{w}_u), \max(\hat{w}_u)\big)$ by truncating and rescaling $\hat{w}_u$ via the truncation value $c_w$ and the rescaling weight $\omega_f$, preventing certain signals from dominating the optimization process. Note that PDRec only runs the diffusion inference process once before model training, without introducing excessive computational cost. With HBR, we can not only focus directly on the time-interval-aware preferences related to the user's behaviors, but also leverage DM's informative uncertainty to downweight dispensable items and highlight the indispensable actions in the user's behavior sequence.
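A compact sketch of the HBR reweighting in Eq. (2) might look as follows (the function name and default hyperparameters are illustrative, and the truncation mirrors the described clip-and-rescale step rather than reproducing the released code):

```python
import torch

def hbr_weights(o_u, len_su, w_r=0.2, w_f=1.0, c_w=0.1):
    """Sketch of Eq. (2): reweight supervised signals from the diffusion-based
    preferences o_u on the observed items of one user."""
    score = (o_u - o_u.min()) / (o_u.max() - o_u.min() + 1e-8)      # normalized preference
    w_s = len_su / score.sum()                                      # rescale to sequence length
    rank = torch.argsort(torch.argsort(o_u, descending=True)) + 1   # 1 = highest preference
    rank_term = (1 + rank.max() - rank).float() / rank.max()
    w_hat = (1 - w_r) * w_s * score + w_r * rank_term
    return w_f * w_hat.clamp(min=c_w, max=w_hat.max().item())       # truncate & rescale
```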
Diffusion-based Positive Augmentation

Inspired by the promising performance of TI-DiffRec in SR and its inherently strong generalization, PDRec assumes that the user's diffusion-based preferences $\hat{x}'_0$ on unobserved items encompass specific samples that user $u$ is potentially interested in but has never seen before. This observation is especially prominent within the top-ranked range of the user's diffusion-based preference. To transfer the generalized knowledge from the pre-trained TI-DiffRec to the sequential recommender, PDRec designs a Diffusion-based Positive Augmentation (DPA) method to distill the essential diffusion-based information by regarding the unobserved items with high preferences as potential positive samples during training. To do so, PDRec first takes the top-ranked $m$ items to form the potential soft samples $t_u$ based on the diffusion-based unobserved preferences $u_u = \hat{x}'_0 \setminus o_u$, where $\hat{x}'_0$ and $o_u$ denote the diffusion-based preferences on the corpus and the supervised signals, respectively. Following the assumption that "the last behavior in a user's behavioral sequence reflects his/her overall interests", PDRec calculates the matching scores $m_u = [(h_u)^\top t_1, (h_u)^\top t_2, \cdots, (h_u)^\top t_m]$ between the user's last-behavior representation $h_u$ obtained by the sequential encoder and the item embedding matrix $T_u = [t_1, t_2, \cdots, t_m]$ of the potential soft samples $t_u$. After re-ranking the matching scores $m_u$, PDRec extracts the top-ranked $n$ items as the soft positive augmentations $s_u$. The optimization approach for $s_u$ is given in the following section.

Noise-free Negative Sampling

Existing recommendation algorithms generally require both positive and negative examples to model users' personalized preferences, and ideally expect explicit interactions from the dataset. However, explicit feedback is not always available in real-world scenarios, and users' ubiquitous implicit interactions may not necessarily reflect their real interests. Conventional recommenders typically employ uniform probability for negative sampling, which fails to consider the dynamic shifts in user preferences, potentially leading to the false negative problem. Inspired by the exploration of negative sampling strategies in recommendation (Shi et al. 2023; Ma et al. 2023b), PDRec introduces the Noise-free Negative Sampling (NNS) strategy to prioritize the unobserved samples with low-scored diffusion-based preferences and select safe negative samples that steer HBR and DPA toward a stable optimization direction. In contrast to DPA, which utilizes the unobserved items with high preferences as soft positive augmentations, NNS creatively regards the items with low preferences as additional negative samples in training. Precisely, given the diffusion-based unobserved preferences $u_u$, PDRec sorts them, selects low-scored items from the unobserved corpus $I^-_u = I \setminus I^+_u$, and assigns higher sampling probabilities to these items. The sampling probability of NNS is defined as:

$$P^{NNS}(j \mid I^-_u) = \begin{cases} \dfrac{1}{(1-\omega_m)\,l_u}, & j \in K_u[\omega_m l_u : l_u] \\[4pt] 0, & \text{otherwise} \end{cases} \tag{3}$$

where $K_u$ is the re-ranked item list of $I^-_u$ obtained by sorting the diffused unobserved preferences $u_u$, $l_u = |K_u|$ denotes the number of unobserved items, and $\omega_m$ denotes the initial proportion of the negative sampling. Note that the bigger $\omega_m$ is, the more stable the drawn samples will be.

Optimization Objectives

We calculate the predicted probability $\hat{y} = (h_u)^\top v_{q+1}$ with the sequence representation $h_u$ of user $u$ and the item embedding $v_{q+1}$. Then we formulate the Binary Cross-Entropy loss $L_R$ and the self-distillation loss $L_D$ in DPA as follows:

$$L_R = -\sum_{(u,i)\in R} \big[w_u \cdot y_{u,i}\log \hat{y}_{u,i} + (1-y_{u,i})\log(1-\hat{y}_{u,i})\big] \tag{4}$$

$$L_D = -\sum_{(u,i)\in R^+} \big[y_{u,i}\log \hat{y}_{u,i}\big] \tag{5}$$

where $R$ denotes the training set, which contains the supervised signals, the random negative samples, and the safe negative items sampled within NNS; $R^+$ denotes the soft positive augmentations $s_u$ in DPA; $w_u$ is the final reweight vector in HBR; $y_{u,i} = 1/0$ denotes the positive and sampled negative pairs, respectively; and $\hat{y}_{u,i}$ denotes the predicted probability of $(u, i)$. To optimize in conjunction with the self-distillation augmentation, the objective function $L$ is a linear combination of $L_R$ and $L_D$ with the loss weight $\omega_d$ of $L_D$:

$$L = L_R + \omega_d L_D \tag{6}$$
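The DPA candidate selection and the NNS sampler of Eq. (3) can be sketched jointly as below. Variable names (`x0_hat` for $\hat{x}'_0$, `h_u` for the last-behavior representation) and the masking details are our own assumptions, not the released code.

```python
import torch

def dpa_soft_positives(x0_hat, observed, h_u, item_emb, m: int, n: int):
    """DPA sketch: pick n soft positives from the top-m unobserved
    diffusion-based preferences, double-checked by the SR encoder."""
    scores = x0_hat.clone()
    scores[observed] = float("-inf")                # keep only unobserved preferences u_u
    top_m = torch.topk(scores, m).indices           # coarse candidates t_u
    match = item_emb[top_m] @ h_u                   # matching scores m_u = (h_u)^T t_k
    return top_m[torch.topk(match, n).indices]      # fine-grained soft positives s_u

def nns_sample(x0_hat, observed, w_m: float, num: int):
    """NNS sketch of Eq. (3): uniform sampling over the low-preference
    slice K_u[w_m * l_u : l_u] of the re-ranked unobserved items."""
    scores = x0_hat.clone()
    scores[observed] = float("-inf")                # observed items sink to the end
    order = torch.argsort(scores, descending=True)  # K_u, followed by observed items
    l_u = x0_hat.numel() - observed.numel()
    safe = order[int(w_m * l_u): l_u]               # "safe" low-scored negatives
    return safe[torch.randint(0, safe.numel(), (num,))]
```

The sampled items then enter the training set $R$ as additional negatives, while the soft positives form $R^+$ in Eq. (5), so the full objective remains $L = L_R + \omega_d L_D$.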
Table 1: Results between backbones and PDRec on four datasets. All improvements are significant (p<0.05 with paired t-tests).

Dataset | Metric | T-DiffRec | TI-DiffRec | GRU4Rec | +PDRec | Improv. | SASRec | +PDRec | Improv. | CL4SRec | +PDRec | Improv.
Toy | N@1 | 0.1033 | 0.1058 | 0.0878 | 0.0899 | 2.39% | 0.1095 | 0.1247 | 13.88% | 0.1125 | 0.1254 | 11.47%
Toy | N@5 | 0.1564 | 0.1618 | 0.1515 | 0.1617 | 6.73% | 0.1779 | 0.2023 | 13.72% | 0.1802 | 0.2041 | 13.26%
Toy | N@10 | 0.1758 | 0.1823 | 0.1755 | 0.1879 | 7.07% | 0.2020 | 0.2286 | 13.17% | 0.2046 | 0.2305 | 12.66%
Toy | HR@5 | 0.2055 | 0.2151 | 0.2128 | 0.2300 | 8.08% | 0.2423 | 0.2752 | 13.58% | 0.2438 | 0.2776 | 13.86%
Toy | HR@10 | 0.2657 | 0.2787 | 0.2874 | 0.3112 | 8.28% | 0.3169 | 0.3568 | 12.59% | 0.3195 | 0.3595 | 12.52%
Toy | AUC | 0.5911 | 0.5968 | 0.5670 | 0.5909 | 4.22% | 0.5771 | 0.6060 | 5.01% | 0.5805 | 0.6068 | 4.53%
Game | N@1 | 0.1611 | 0.1746 | 0.1667 | 0.1808 | 8.46% | 0.2111 | 0.2191 | 3.79% | 0.2106 | 0.2180 | 3.51%
Game | N@5 | 0.2567 | 0.2723 | 0.2818 | 0.2996 | 6.32% | 0.3310 | 0.3382 | 2.18% | 0.3294 | 0.3368 | 2.25%
Game | N@10 | 0.2895 | 0.3040 | 0.3199 | 0.3380 | 5.66% | 0.3682 | 0.3753 | 1.93% | 0.3682 | 0.3750 | 1.85%
Game | HR@5 | 0.3451 | 0.3618 | 0.3893 | 0.4091 | 5.09% | 0.4409 | 0.4475 | 1.50% | 0.4385 | 0.4456 | 1.62%
Game | HR@10 | 0.4469 | 0.4600 | 0.5071 | 0.5282 | 4.16% | 0.5559 | 0.5626 | 1.21% | 0.5584 | 0.5638 | 0.97%
Game | AUC | 0.7217 | 0.7234 | 0.7601 | 0.7786 | 2.43% | 0.7865 | 0.7908 | 0.61% | 0.7857 | 0.7905 | 0.61%
Book | N@1 | 0.3194 | 0.3275 | 0.3072 | 0.3359 | 9.34% | 0.3594 | 0.3656 | 1.73% | 0.3554 | 0.3621 | 1.89%
Book | N@5 | 0.4398 | 0.4491 | 0.4433 | 0.4757 | 7.31% | 0.4948 | 0.5063 | 2.32% | 0.4942 | 0.5047 | 2.12%
Book | N@10 | 0.4671 | 0.4776 | 0.4765 | 0.5091 | 6.84% | 0.5272 | 0.5393 | 2.30% | 0.5276 | 0.5376 | 1.90%
Book | HR@5 | 0.5459 | 0.5557 | 0.5643 | 0.6004 | 6.40% | 0.6148 | 0.6306 | 2.57% | 0.6166 | 0.6304 | 2.24%
Book | HR@10 | 0.6300 | 0.6435 | 0.6667 | 0.7033 | 5.49% | 0.7150 | 0.7323 | 2.42% | 0.7197 | 0.7317 | 1.67%
Book | AUC | 0.8160 | 0.8202 | 0.8541 | 0.8728 | 2.19% | 0.8790 | 0.8898 | 1.23% | 0.8820 | 0.8895 | 0.85%
Music | N@1 | 0.3401 | 0.3494 | 0.3299 | 0.3540 | 7.31% | 0.3753 | 0.3826 | 1.95% | 0.3689 | 0.3755 | 1.79%
Music | N@5 | 0.4709 | 0.4773 | 0.4725 | 0.5000 | 5.82% | 0.5170 | 0.5283 | 2.19% | 0.5096 | 0.5211 | 2.26%
Music | N@10 | 0.4987 | 0.5049 | 0.5069 | 0.5348 | 5.50% | 0.5503 | 0.5620 | 2.13% | 0.5435 | 0.5558 | 2.26%
Music | HR@5 | 0.5852 | 0.5886 | 0.5987 | 0.6287 | 5.01% | 0.6421 | 0.6573 | 2.37% | 0.6353 | 0.6504 | 2.38%
Music | HR@10 | 0.6706 | 0.6738 | 0.7048 | 0.7361 | 4.44% | 0.7447 | 0.7612 | 2.22% | 0.7400 | 0.7573 | 2.34%
Music | AUC | 0.8329 | 0.8318 | 0.8768 | 0.8908 | 1.60% | 0.8962 | 0.9040 | 0.87% | 0.8939 | 0.9026 | 0.97%

Experiments

In this section, we conduct extensive experiments and analyses to answer the following four research questions: (RQ1) How does PDRec perform against the state-of-the-art SR baselines? (RQ2) How do different components of PDRec benefit its performance? (RQ3) Is PDRec still effective with other base SR models? (RQ4) Could PDRec be further adopted in other tasks such as cross-domain sequential recommendation?

Experimental Settings

Dataset. We conduct extensive experiments on four real-world datasets. We select "Toys and Games" and "Video Games" to form the Toy and Game datasets from Amazon (Lin et al. 2022). From Douban, we pick "Books" and "Musics" to form the Book and Music datasets (Wu et al. 2023).

Table 2: Statistics of four SR datasets.

Dataset | Toy | Game | Book | Music
Users | 7,996 | 7,996 | 12,170 | 12,170
Items | 37,868 | 11,735 | 33,697 | 30,707
Records | 114,487 | 82,871 | 514,015 | 558,352
Density | 0.0378% | 0.0883% | 0.1253% | 0.1494%
Baselines. We implement PDRec on three representative SR models: GRU4Rec (Hidasi et al. 2016), SASRec (Kang and McAuley 2018) and CL4SRec (Xie et al. 2022), and compare it with T-DiffRec (Wang et al. 2023) to validate its effectiveness and universality. Note that T-DiffRec (Wang et al. 2023) is one of the SOTA DM-based recommenders, capturing the temporal patterns in user interactions.

Parameter settings. For fair comparisons, we set the learning rate and the maximum sequence length to 5e-3 and 200, respectively. According to the natural distribution of behaviors, we set $\omega_m$ to 0.5 for the relatively sparse Amazon datasets and 0.8 for the denser Douban datasets. Similarly, we define the number of coarse-grained sorted items $m$, the number of fine-grained re-sorted items $n$, and the loss weight $\omega_d$ of $L_D$ as 50, 5 and 0.3 for Amazon. For Douban, these parameters are configured as 100, 1, and 0.01, respectively. Due to the variations in TI-DiffRec's confidence range, PDRec exhibits minor discrepancies in the HBR parameters across datasets: the ranking weight $\omega_r$, the truncate value $c_w$ and the rescale weight $\omega_f$ are set to 0.1, 3 and 2 for Toy; 0.1, 5 and 4 for Game; 0.3, 3 and 4 for Book; and 0.1, 5 and 2 for Music. Each experiment is conducted five times with random seeds, and we report the average results.

[Figure 4: Results of the ablation study of PDRec (SASRec) on four datasets (N@10 and HR@10 on Amazon Toy, Amazon Game, Douban Book and Douban Music), comparing TI-DiffRec, SASRec, SASRec+HBR, SASRec+HBR+NNS, PDRec (w/o TI-DiffRec) and PDRec (SASRec). Generally, all components are effective.]

Performance Comparison on SR (RQ1)

We conduct experiments on four public datasets, adopting three typical evaluation metrics, NDCG@k (N@k), Hit Rate@k (HR@k), and AUC, with k = 1, 5, 10. Following (Kang and McAuley 2018), we randomly sample 99 negative items for each positive instance in testing. Table 1 shows the overall performance comparison results; the best results for the same backbone are in boldface. It reveals the following observations: (1) In general, PDRec significantly outperforms all baselines on the four datasets, with a significance level of p<0.05 and an average error range ≤0.004. This indirectly confirms that (a) denoising the observed interactions is able to guide the recommender toward an accurate and unbiased optimization direction, and (b) handling the positive and negative aspects of unobserved interactions effectively leverages the informative yet user-imperceptible knowledge from the diffusion model, expanding user interests while stabilizing the training process. (2) Comparing the improvements across datasets, we discover that PDRec benefits the relatively sparse Toy and Game datasets more, while still obtaining promising performance on the denser datasets. Furthermore, we also observe that PDRec, implemented with diverse backbones, consistently exhibits significant improvements over the respective backbones. This may be attributed to the precise utilization of diffusion models: PDRec can assist in highlighting the actual long- and short-term sequential dependencies.
As a task-agnostic framework, we further extend PDRec to the field of CDSR to analyze its feasibility in other recommendation scenarios and answer RQ4. (3) Simultaneously, we notice the significant improvement of the proposed TI-DiffRec relative to T-DiffRec (Wang et al. 2023), underscoring the necessity of time-interval knowledge in SR. Nevertheless, the performance of these DM-based algorithms remains inferior to existing SOTA sequential recommendation algorithms. In conjunction with the notable improvement of PDRec over these SR methods, the effectiveness of the proposed PDRec is firmly established. It can smartly combine the sequential modeling capability of (future advanced) SR models and the potent generalization ability of diffusion models on the corpus, thus precisely accomplishing sequential recommendation tasks.

Ablation Study (RQ2)

In this section, we conduct ablation studies to explore the effectiveness of different components in PDRec. We compare PDRec (SASRec) with different ablation versions of PDRec to verify the benefits of TI-DiffRec, HBR, DPA and NNS, respectively. Note that PDRec (SASRec) equals SASRec+HBR+NNS+DPA. From Fig. 4 we observe that: (1) With HBR, SASRec+HBR achieves consistent improvement over SASRec. This mainly stems from the fact that the diffusion-based preferences generated by the powerful TI-DiffRec can effectively denoise the historical behaviors via reweighting. It enables the recommender to emphasize the indispensable supervised signals while disregarding noisy interactions, thereby enhancing training efficiency. (2) Comparing SASRec+HBR+NNS to SASRec+HBR, we find that NNS yields performance gains across most datasets. It demonstrates that the "safe" negative items judged by previous diffusion models can help alleviate the inherent false negative problems in model training. (3) PDRec further improves the performance of SASRec+HBR+NNS. DPA emphasizes the top-ranked preferences determined by the diffusion model for unobserved items, thereby inferring the user's more diverse potential preferences. By double-checking these high-quality positive augmentation candidates via self-distillation, DPA brings in additional positive signals in a more flexible way to fight against data sparsity. (4) PDRec achieves significant improvement compared to PDRec without TI-DiffRec (i.e., replacing TI-DiffRec with another SASRec). It highlights the necessity of employing diffusion models. Owing to the problem formulation, DM preserves visibility into all items in the corpus. In conjunction with its powerful generalization ability, DM can offer informative knowledge relative to sequential models (i.e., SASRec), particularly for sparse user-item interaction matrices. Nevertheless, compared to the original DiffRec, PDRec is more effective and practical.

Universality Analysis of PDRec (RQ3)

PDRec is a model-agnostic framework. To verify this, we employ each ablation variant of PDRec over GRU4Rec (Hidasi et al. 2016) and CL4SRec (Xie et al. 2022) on the Toy and Game datasets. Fig. 5 illustrates the results.
We can find that:

[Figure 5: Results of PDRec on GRU4Rec/CL4SRec and their ablation versions (base model, +HBR, +HBR+NNS, full PDRec, and TI-DiffRec) on the Toy and Game datasets, in terms of N@10 and HR@10.]

(1) PDRec achieves significant improvements over different base models (GRU4Rec and CL4SRec) across diverse datasets. This demonstrates the universality of PDRec on different sequential encoders. Furthermore, it indirectly underscores the potential of PDRec to leverage possible future advancements in SR, thereby extending the lifespan of the proposed DM-utilization framework. (2) Progressive improvements are discernible among the distinct versions of PDRec, with PDRec outperforming all its variants. It demonstrates that the proposed components are effective and universal for different base sequential encoders and datasets, further reconfirming the universality of PDRec.

Results of Cross-domain SR (RQ4)

PDRec can also benefit positive transfer in CDSR. We follow typical CDSR settings (Ma et al. 2023c; Zheng et al. 2022) and employ PDRec with SASRec (M), where (M) indicates directly mixing both source and target domains' behaviors in chronological order, on the Toy→Game and Game→Toy settings. We also implement T-DiffRec (M) and TI-DiffRec (M) in the mixed (M) setting. From Table 3, we have the following observations:

Table 3: Ablation versions of PDRec on two CDSR datasets. All improvements are significant compared to baselines.

Setting | Algorithm | N@1 | N@5 | N@10 | N@20 | N@50 | HR@5 | HR@10 | HR@20 | HR@50 | AUC
Game→Toy | T-DiffRec (M) | 0.0981 | 0.1520 | 0.1727 | 0.1934 | 0.2375 | 0.2029 | 0.2673 | 0.3494 | 0.5780 | 0.5924
Game→Toy | TI-DiffRec (M) | 0.1053 | 0.1598 | 0.1806 | 0.2008 | 0.2407 | 0.2111 | 0.2759 | 0.3562 | 0.5623 | 0.5932
Game→Toy | SASRec (M) | 0.1267 | 0.2019 | 0.2261 | 0.2490 | 0.2785 | 0.2722 | 0.3472 | 0.4380 | 0.5873 | 0.5951
Game→Toy | +HBR | 0.1283 | 0.2061 | 0.2311 | 0.2533 | 0.2835 | 0.2785 | 0.3558 | 0.4438 | 0.5972 | 0.6092
Game→Toy | +HBR+NNS | 0.1264 | 0.2068 | 0.2323 | 0.2542 | 0.2844 | 0.2815 | 0.3606 | 0.4480 | 0.6013 | 0.6123
Game→Toy | +HBR+NNS+DPA | 0.1302 | 0.2093 | 0.2348 | 0.2574 | 0.2873 | 0.2826 | 0.3616 | 0.4515 | 0.6026 | 0.6106
Toy→Game | T-DiffRec (M) | 0.1674 | 0.2643 | 0.2977 | 0.3247 | 0.3597 | 0.3548 | 0.4584 | 0.5655 | 0.7428 | 0.7232
Toy→Game | TI-DiffRec (M) | 0.1709 | 0.2757 | 0.3096 | 0.3378 | 0.3721 | 0.3723 | 0.4773 | 0.5887 | 0.7622 | 0.7407
Toy→Game | SASRec (M) | 0.2273 | 0.3532 | 0.3905 | 0.4190 | 0.4467 | 0.4674 | 0.5826 | 0.6955 | 0.8342 | 0.8007
Toy→Game | +HBR | 0.2332 | 0.3597 | 0.3963 | 0.4250 | 0.4547 | 0.4741 | 0.5872 | 0.7006 | 0.8501 | 0.8145
Toy→Game | +HBR+NNS | 0.2352 | 0.3601 | 0.3975 | 0.4257 | 0.4557 | 0.4733 | 0.5890 | 0.7002 | 0.8517 | 0.8138
Toy→Game | +HBR+NNS+DPA | 0.2363 | 0.3623 | 0.3992 | 0.4275 | 0.4572 | 0.4761 | 0.5904 | 0.7022 | 0.8520 | 0.8153

(1) PDRec outperforms all diffusion-based models in CDSR, which implies that PDRec can be applied to other tasks such as cross-domain scenarios. Its HBR provides an intuitive but effective way to filter negative transfer in cross-domain recommendation (i.e., mixing all domains' behaviors chronologically and conducting reweighting via diffusion), which could be further explored in the future. (2) PDRec outperforms all of its ablation versions in most CDSR settings, with each component contributing incremental improvements.
It reconfirms the effectiveness and universality of HBR, NNS, and DPA with the diffusion model. (3) It is impressive that PDRec exhibits notable improvements across various metrics compared to the original T-DiffRec/TI-DiffRec on the mixed behavior sequence (up to 38.3%). It reiterates our main contribution of taking full advantage of the outputs of the diffusion model as a plugin in SR.

Conclusion

In this paper, we propose an effective and model-agnostic Plug-In Diffusion Model for Recommendation (PDRec) framework. Instead of focusing on the highest-scored item, PDRec fully leverages the diffusion-based preferences on all items. PDRec employs a historical behavior reweighting method to identify the indispensable behaviors, and extracts knowledge from the unobserved items via both diffusion-based positive augmentation and noise-free negative sampling. The extensive experiments and analyses on four datasets, three base models and two recommendation tasks demonstrate the effectiveness and universality of PDRec. In the future, we will continue to explore tailored hard negative sampling strategies in PDRec and attempt to adapt PDRec as a flexible and detachable plugin in diverse recommendation scenarios.

Acknowledgments

This work is supported in part by the TaiShan Scholars Program (Grant no. tsqn202211289), the National Natural Science Foundation of China (Grant no. 62006141), the Excellent Youth Scholars Program of Shandong Province (Grant no. 2022HWYQ-048), the Oversea Innovation Team Project of the "20 Regulations for New Universities" funding program of Jinan (Grant no. 2021GXRC073) and the Young Elite Scientists Sponsorship Program by CAST (2023QNRC001). ChatGPT and Grammarly were utilized to improve grammar and correct spelling.

References

Brempong, E. A.; Kornblith, S.; Chen, T.; Parmar, N.; Minderer, M.; and Norouzi, M. 2022. Denoising pretraining for semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Chen, G.; Zhang, X.; Su, Y.; Lai, Y.; Xiang, J.; Zhang, J.; and Zheng, Y. 2023a. Win-Win: A Privacy-Preserving Federated Framework for Dual-Target Cross-Domain Recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI).
Chen, H.; He, J.; Xu, W.; Feng, T.; Liu, M.; Song, T.; Yao, R.; and Qiao, Y. 2023b. Enhanced Multi-Relationships Integration Graph Convolutional Network for Inferring Substitutable and Complementary Items. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI).
Chen, Y.; Liu, Z.; Li, J.; McAuley, J.; and Xiong, C. 2022. Intent contrastive learning for sequential recommendation. In Proceedings of the ACM Web Conference (WWW).
Du, H.; Yuan, H.; Huang, Z.; Zhao, P.; and Zhou, X. 2023. Sequential Recommendation with Diffusion Models.
Hidasi, B.; Karatzoglou, A.; Baltrunas, L.; and Tikk, D. 2016. Session-based recommendations with recurrent neural networks. In Proceedings of the International Conference on Learning Representations (ICLR).
Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. In Proceedings of Advances in Neural Information Processing Systems (NeurIPS).
Ho, J.; Saharia, C.; Chan, W.; Fleet, D. J.; Norouzi, M.; and Salimans, T. 2022. Cascaded diffusion models for high fidelity image generation. The Journal of Machine Learning Research (JMLR).
Kang, W.-C.; and McAuley, J. 2018. Self-attentive sequential recommendation.
In Proceedings of the International Conference on Data Mining (ICDM).
Kingma, D. P.; and Welling, M. 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114.
Li, J.; Ren, P.; Chen, Z.; Ren, Z.; Lian, T.; and Ma, J. 2017. Neural attentive session-based recommendation. In Proceedings of the ACM International Conference on Information and Knowledge Management (CIKM).
Li, M.; Zhao, X.; Lyu, C.; Zhao, M.; Wu, R.; and Guo, R. 2022. MLP4Rec: A Pure MLP Architecture for Sequential Recommendations.
Li, Z.; Sun, A.; and Li, C. 2023. DiffuRec: A Diffusion Model for Sequential Recommendation. arXiv preprint arXiv:2304.00686.
Lin, G.; Gao, C.; Li, Y.; Zheng, Y.; Li, Z.; Jin, D.; and Li, Y. 2022. Dual Contrastive Network for Sequential Recommendation with User and Item-Centric Perspectives. arXiv preprint arXiv:2209.08446.
Lopez Alcaraz, J. M.; and Strodthoff, N. 2023. Diffusion-based time series imputation and forecasting with structured state space models. Transactions on Machine Learning Research (TMLR).
Ma, H.; Li, X.; Meng, L.; and Meng, X. 2021. Comparative study of adversarial training methods for cold-start recommendation. In Proceedings of ADVM.
Ma, H.; Qi, Z.; Dong, X.; Li, X.; Zheng, Y.; Meng, L.; and Meng, X. 2023a. Cross-Modal Content Inference and Feature Enrichment for Cold-Start Recommendation. In Proceedings of IJCNN.
Ma, H.; Xie, R.; Meng, L.; Chen, X.; Zhang, X.; Lin, L.; and Zhou, J. 2023b. Exploring False Hard Negative Sample in Cross-Domain Recommendation. In Proceedings of the ACM Conference on Recommender Systems (RecSys).
Ma, H.; Xie, R.; Meng, L.; Chen, X.; Zhang, X.; Lin, L.; and Zhou, J. 2023c. Triple Sequence Learning for Cross-domain Recommendation. ACM Transactions on Information Systems (TOIS).
Meng, L.; Feng, F.; He, X.; Gao, X.; and Chua, T.-S. 2020. Heterogeneous fusion of semantic and collaborative information for visually-aware food recommendation. In Proceedings of MM.
Moon, J.; Jeong, Y.; Chae, D.-K.; Choi, J.; Shim, H.; and Lee, J. 2023. CoMix: Collaborative filtering with mixup for implicit datasets. Information Sciences.
Nichol, A. Q.; and Dhariwal, P. 2021. Improved denoising diffusion probabilistic models. In Proceedings of the International Conference on Machine Learning (ICML). PMLR.
Shi, W.; Chen, J.; Feng, F.; Zhang, J.; Wu, J.; Gao, C.; and He, X. 2023. On the Theories Behind Hard Negative Sampling for Recommendation. In Proceedings of the ACM Web Conference (WWW).
Shi, Y.; De Bortoli, V.; Deligiannidis, G.; and Doucet, A. 2022. Conditional simulation using diffusion Schrödinger bridges. In Proceedings of Uncertainty in Artificial Intelligence (UAI).
Sun, F.; Liu, J.; Wu, J.; Pei, C.; Lin, X.; Ou, W.; and Jiang, P. 2019. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the ACM International Conference on Information and Knowledge Management (CIKM).
Tang, J.; and Wang, K. 2018. Personalized top-n sequential recommendation via convolutional sequence embedding. In Proceedings of the ACM International Conference on Web Search and Data Mining (WSDM).
Tashiro, Y.; Song, J.; Song, Y.; and Ermon, S. 2021. CSDI: Conditional score-based diffusion models for probabilistic time series imputation. In Proceedings of Advances in Neural Information Processing Systems (NeurIPS).
Walker, J.; Zhong, T.; Zhang, F.; Gao, Q.; and Zhou, F. 2022. Recommendation via collaborative diffusion generative model.
In Proceedings of the International Conference on Knowledge Science, Engineering and Management, 593–605. Springer.
Wang, W.; Xu, Y.; Feng, F.; Lin, X.; He, X.; and Chua, T.-S. 2023. Diffusion Recommender Model. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR).
Wu, B.; He, X.; Wu, L.; Zhang, X.; and Ye, Y. 2023. Graph-augmented co-attention model for socio-sequential recommendation. IEEE Transactions on Systems, Man, and Cybernetics: Systems.
Xia, L.; Huang, C.; Xu, Y.; Dai, P.; Zhang, X.; Yang, H.; Pei, J.; and Bo, L. 2021. Knowledge-enhanced hierarchical graph transformer network for multi-behavior recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI).
Xie, X.; Sun, F.; Liu, Z.; Wu, S.; Gao, J.; Zhang, J.; Ding, B.; and Cui, B. 2022. Contrastive learning for sequential recommendation. In Proceedings of the IEEE International Conference on Data Engineering (ICDE).
Xu, C.; Zhao, P.; Liu, Y.; Xu, J.; Sheng, V. S.; Cui, Z.; Zhou, X.; and Xiong, H. 2019. Recurrent convolutional neural network for sequential recommendation. In Proceedings of the International World Wide Web Conference (WWW).
Zhang, M.; Wu, S.; Yu, X.; Liu, Q.; and Wang, L. 2022. Dynamic graph neural networks for sequential recommendation. IEEE Transactions on Knowledge and Data Engineering (TKDE).
Zheng, X.; Su, J.; Liu, W.; and Chen, C. 2022. DDGHM: Dual Dynamic Graph with Hybrid Metric Training for Cross-Domain Sequential Recommendation. In Proceedings of the ACM International Conference on Multimedia (ACM MM).
Tail-STEAK: Improve Friend Recommendation for Tail Users via Self-Training Enhanced Knowledge Distillation

Yijun Ma1, Chaozhuo Li2, Xiao Zhou1*
1 Gaoling School of Artificial Intelligence, Renmin University of China
2 Beijing University of Posts and Telecommunications
mayj [email protected], [email protected], [email protected]

*Corresponding author. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Graph neural networks (GNNs) are commonly employed in collaborative friend recommendation systems. Nevertheless, recent studies reveal a notable performance gap for users with limited connections, commonly known as tail users, in contrast to their counterparts with abundant connections (head users). Uniformly treating head and tail users poses two challenges for tail user preference learning: (C1) Label Sparsity, as tail users typically possess limited labels; and (C2) Neighborhood Sparsity, where tail users exhibit sparse observable friendships, leading to distinct preference distributions and performance degradation compared to head users. In response to these challenges, we introduce Tail-STEAK, an innovative framework that combines self-training with enhanced knowledge distillation for tail user representation learning. To address (C1), we present Tail-STEAKbase, a two-stage self-training framework. In the first stage, only head users and their accurate connections are utilized for training, while pseudo links are generated for tail users in the second stage. To tackle (C2), we propose two data augmentation-based self-knowledge distillation pretext tasks. These tasks are seamlessly integrated into different stages of Tail-STEAKbase, culminating in the comprehensive Tail-STEAK framework. Extensive experiments, conducted on state-of-the-art GNN-based friend recommendation models, substantiate the efficacy of Tail-STEAK in significantly improving tail user performance. Our code and data are publicly available at https://github.com/antman9914/Tail-STEAK.

[Figure 1: Empirical study of degree-related bias in friend recommendation. (a) Degree distribution; (b) degree-related evaluation results.]

Introduction

Friend recommender systems play a crucial role in various real-world applications, facilitating the discovery of potential social relationships and enhancing user engagement. The cornerstone of friend recommendation lies in learning effective user representations (Zhou et al. 2019). Recently, inspired by the development of Graph Neural Networks (GNNs), higher-order collaborative signals in social networks have been exploited for user representation learning and have achieved significant improvement (Sankar et al. 2021). Despite their success, GNNs usually need qualified and abundant structural connections to learn effective user representations (Liu, Nguyen, and Fang 2021; Zheng et al. 2022), which high-degree users, or head users, can provide. However, most real-world social networks follow a power-law node degree distribution (Adamic et al. 2001), where the majority of users are tail users with few links, as shown in Figure 1(a). As a result, due to limited observable interactions, the preference of tail users is hard to learn, leading to inferior performance in downstream recommendation tasks. As empirically demonstrated in Figure 1(b), for friend recommendation on the Deezer and Last.FM social networks based on two state-of-the-art GNN-based models, Simple-HGN (Lv et al. 2021) and LightGCN (He et al.
2020), the degree-specific predictive accuracy is approximately proportional to node degree. We denote this phenomenon as degree-related bias. Regrettably, contemporary recommendation algorithms often treat head users and tail users uniformly, resulting in the under-representation of tail users. This bottleneck is deemed unacceptable in real-world networks. Therefore, this paper is dedicated to enhancing tail user preference learning for friend recommendation with limited structural information. We contend that mitigating degree-related bias in friend recommendation introduces two challenges: (C1) Label Sparsity, where the scarcity of labels for tail users complicates preference learning, leading to an imbalance between head and tail users; and (C2) Neighborhood Sparsity, as the sparse interactions of tail users create a preference distribution distinct from that of head users, posing challenges in accurate anticipation and potentially resulting in a preference gap. Related works mainly focus on (C2), attempting to transfer accurate structural knowledge of head nodes to tail nodes to alleviate neighborhood sparsity (Liu et al. 2020; Liu, Nguyen, and Fang 2021; Zheng et al. 2022; Hao et al. 2021), or leveraging side information to enrich relational data for inactive users (Zheng et al. 2021; Wang et al. 2019a; Yan et al. 2023). Although they effectively enhance the performance of inactive tail users, they not only ignore the more fundamental challenge (C1), but also need external assistance to solve (C2), and are usually overly complex.

To tackle the aforementioned challenges, we propose Tail-STEAK, a novel Tail user oriented Self-Training EnhAnced Knowledge distillation paradigm for alleviating degree-related bias in GNN-based friend recommendation. To address (C1), we overhaul the training paradigm, introducing a fundamental two-stage self-training approach named Tail-STEAKbase. Initially, only head users and their well-qualified interactions are employed for model training in the first stage, leveraging their abundant and relatively accurate structural knowledge. Subsequently, in the second stage, we iteratively conduct top-K pseudo link predictions for tail users from a randomly sampled user set using the model derived from the previous iteration. This model is further refined using both the full training set and the pseudo links. For (C2), we propose two data augmentation-based self-knowledge distillation pretext tasks. These tasks aim to implicitly familiarize the model with both head and tail user preference distributions, thereby mitigating the preference gap. Conducted separately for head and tail users, these tasks are integrated into the corresponding stages of Tail-STEAKbase, forming the complete Tail-STEAK framework. To achieve data augmentation, we introduce synthesized tail users generated from original head users through aggressive link dropout and ID embedding disturbance in both stages. Additionally, we impute predicted pseudo links into the tail users' neighborhood and generate synthesized head users in the second stage. All synthesized users are then integrated into the training set for the respective stage. Diverging from mainstream reconstruction-based knowledge distillation methods (Ji et al. 2021), we employ self-discrimination-based distillation through Mutual Information (MI, denoted as MI throughout the paper) maximization between the head view and tail view of the same user.
It is essential to highlight that our proposed training paradigm is entirely model-agnostic and does not rely on additional customized modules or external data. We implement our approach on two cutting-edge GNN-based friend recommendation models, conducting comprehensive experiments across benchmark social networks. The empirical results showcase a substantial enhancement in predictive accuracy for tail users, while maintaining competitive overall performance. Furthermore, our proposed method is versatile and applicable to general recommendation tasks and various other link prediction scenarios. In summary, our contributions are highlighted as follows:

• We introduce Tail-STEAKbase, a foundational two-stage self-training paradigm designed for GNN-based friend recommendation, offering qualified pseudo labels for tail users to effectively address the label sparsity challenge.
• We devise distinct data augmentation strategies for head and tail users, synthesizing tail users through both embedding and structural space augmentation.
• We introduce two self-discrimination-based self-knowledge distillation tasks, seamlessly integrated into Tail-STEAKbase, yielding the comprehensive Tail-STEAK framework.
• Empirical experiments conducted on two GNN-based friend recommendation models across two benchmark social networks substantiate the superiority of our method in tail user preference learning, while maintaining competitiveness in head user learning.

Related Work

Degree Bias in GNN-based Recommendations

Although GNNs have become the mainstream solution for graph-related tasks and graph-based recommendation (Wang et al. 2019b; He et al. 2020; Wang et al. 2020; Zhao et al. 2023), there are some recent works revealing that GNNs are likely to suffer performance degradation on tail nodes, which raises a degree-related fairness concern. These works mostly focus on the sparse neighborhood of tail nodes, and attempt to transfer head structural knowledge to them. For instance, DEMO-Net (Wu, He, and Xu 2019) and SL-DSGCN (Tang et al. 2020) assign interrelated degree-specific RNN-based parameters to input nodes with different degrees; A la carte (Khodak et al. 2018) and Nonce2vec (Herbelot and Baroni 2017) propose to conduct two-stage embedding refinement for robust tail node embeddings; Meta-tail2vec (Liu et al. 2020) further utilizes a meta-learning based two-stage embedding refinement framework for locality-aware tail node embeddings; Tail-GNN (Liu, Nguyen, and Fang 2021) and Cold Brew (Zheng et al. 2022) both propose to directly impute the weak neighborhood of tail nodes. Tail-GNN utilizes transferable neighborhood translation to predict the missing neighborhood, while Cold Brew leverages self-attention based virtual neighborhood discovery. Differently, RawlsGCN (Kang et al. 2022) proposes a gradient modulation method to achieve degree-level Rawlsian gradient fairness, and GRADE (Wang et al. 2022) proposes a graph contrastive learning method to enhance the inherent community effect of networks via data augmentation. Degree-related bias in graph-based recommendation is also known as the cold-start problem, which is usually alleviated by introducing side information and constructing informative heterogeneous graphs, such as the profiles of users and items (Zheng et al. 2021; Zhang et al. 2023), knowledge graphs (Wang et al. 2019a) and social networks (Liu et al. 2021). There is also a recent work (Hao et al.
2021) attempting to pre-train GNN-based recommendation models with a reconstruction-based pretext task. Despite their success, these methods are either not specifically designed for improving tail user embeddings, or need additional modules or data, which makes them overly complex. More importantly, they fail to pay attention to the intuitive but critical challenge (C1).

Self-Knowledge Distillation

Self-knowledge distillation is a kind of knowledge distillation that has drawn growing attention in computer vision. Related works generally train a student network without an auxiliary teacher network, and they can be divided into two groups. The first group utilizes auxiliary networks. For example, BYOT (Zhang et al. 2019) introduces a set of auxiliary weak classifiers to perform classification based on the feature maps of intermediate layers, and FRSKD (Ji et al. 2021) proposes an auxiliary self-teacher network to enable refined knowledge transfer. The second group utilizes data augmentation. DDGSD (Xu and Liu 2019) induces consistent predictions by feeding differently augmented samples into the encoder; CSKD (Yun et al. 2020) leverages different instances of the same class as positive pairs for class-level regularization, while SLA (Lee, Hwang, and Shin 2020) proposes to augment data labels by combining a self-supervision task with the original downstream task. Tail-STEAK is inspired by the data augmentation based branch, and we adopt the framework proposed in (Xu and Liu 2019). Most data augmentation based methods tend to make the intermediate feature maps or predicted logits of different views similar. We also conduct distillation based on the outputs of GNN encoders. However, different from existing works, Tail-STEAK maximizes the MI of embeddings from different views instead of minimizing their Euclidean distance, in order to avoid the reconstruction constraint.

Preliminaries

In this section, we present the problem formulation of alleviating degree-related bias. Consider an undirected social network denoted as $G = \{V, E\}$, where $V = \{v_1, v_2, \ldots, v_C\}$ and $E \subseteq V \times V$ represent the user set and the observed link set, respectively. Let $X \in \mathbb{R}^{C \times \delta}$ and $A \in \mathbb{R}^{C \times C}$ denote the trainable ID embedding matrix and the adjacency matrix, where $X_{v:} \in \mathbb{R}^{\delta}$ is the ID embedding of user $v$, and $A_{v:}$ is the adjacency vector originating from $v$; $A_{uv} = A_{vu} = 1$ iff $(u, v) \in E$. Let $N_v$ denote the neighboring node set of node $v \in V$, and $|N_v|$ the degree of user $v$. We denote $D \in \mathbb{R}^{C \times C}$ as the diagonal degree matrix, where $D_{vv} = |N_v|$. Given a degree threshold $T$, we can define the head node set and tail node set as $V_{head} = \{v : |N_v| > T\}$ and $V_{tail} = \{v : |N_v| \le T\}$, respectively. It is obvious that $V = V_{tail} \cup V_{head}$ and $V_{tail} \cap V_{head} = \emptyset$. We further define $C_{head}$ and $C_{tail}$ as the numbers of head and tail nodes, where $C_{head} + C_{tail} = C$. $T$ is chosen based on the degree distribution of the given network; in this work it is set as the median of the degree distribution. The formal problem definition is presented as follows:

Problem. Given a multi-layer GNN-based user encoder $f(X, A)$, our objective is to find a mapping $f : V \to \mathbb{R}^{\delta}$ that projects each node $v \in V$ into a $\delta$-dimensional space, while obtaining more effective tail user embeddings $\{f(X_{v:}, A_{v:}) : v \in V_{tail}\}$.

Methodology

In this section, we start with the introduction of our proposed two-stage self-training paradigm Tail-STEAKbase to solve (C1), along with the pseudo label prediction strategy.
Next, to solve (C2), we introduce the proposed data augmentation strategy and self-knowledge distillation pretext tasks, and present the full Tail-STEAK framework. An illustration of the overall framework is presented in Figure 2.

[Figure 2: Overview of the proposed Tail-STEAK framework.]

Basic Self-Training Paradigm

Most existing methods for degree-related bias mitigation fail to solve the fundamental label sparsity issue. To address (C1), inspired by the widespread application of self-training (Tang et al. 2020; Liu et al. 2022), we propose a basic self-training paradigm denoted as Tail-STEAKbase to provide more qualified pseudo links (i.e., labels) for tail users. Self-training is generally a two-stage procedure, where the model is first trained with the available labelled data, and then iteratively trained with both labelled data and pseudo-labelled data generated from unlabelled data. Tail users have few links, and directly using the whole $E$ to train the model in the first stage would be harmful to model performance. Therefore, we first train the model only with the interactions of head users to learn more accurate user preference knowledge, and then add the interactions of tail users in the second stage. As for iterative pseudo link prediction, in each iteration, given the tail users in the original training set, we first randomly sample $U$ users that are not connected with these tail users from the whole graph, and then select the most relevant top-K users based on model prediction, which are regarded as highly potential neighbors. The user subset sampling is designed for memory-efficient training and diversified gradient provision. We simply set $K = T$ so that head and tail users have similar amounts of labels. The weak links between a target user and its potential neighbors are regarded as pseudo links. The pseudo links are expected to be less noisy, because the model should have learned an accurate preference distribution from head users in the previous stage and is able to automatically filter out noisy labels during iterative optimization. The predicted pseudo links will be used for both training and data augmentation, and will not participate in the message propagation process of the original samples.
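A minimal sketch of this pseudo-link prediction step is given below; `model.score(u, cands)` is an assumed interface returning friendship scores for candidate users, and the dense adjacency tensor is used only for brevity. It is not the released Tail-STEAK code.

```python
import torch

@torch.no_grad()
def predict_pseudo_links(model, tail_users, num_users: int,
                         U: int, K: int, adj: torch.Tensor) -> dict:
    """Iterative pseudo link prediction (second stage): for each tail user,
    score U randomly sampled non-neighbors with the model from the previous
    iteration and keep the top-K (K = T) as pseudo links."""
    pseudo = {}
    for u in tail_users.tolist():
        cands = torch.randint(0, num_users, (U,))           # random user subset
        cands = cands[(adj[u, cands] == 0) & (cands != u)]  # drop existing links and self
        scores = model.score(u, cands)                      # assumed scoring interface
        k = min(K, cands.numel())
        pseudo[u] = cands[torch.topk(scores, k).indices]    # highly potential neighbors
    return pseudo
```

As stated above, the predicted links only serve as extra training signals and augmentation inputs; they are excluded from message propagation over the original graph.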
Self-Knowledge Distillation

Although pseudo links can explicitly provide more supervision signals for tail users, the neighborhood sparsity issue (i.e., (C2)) can still limit model performance due to the incomplete observable preference distribution. Existing solutions tend to transfer relatively complete head preference knowledge to tail users via a variety of customized modules, which often makes them overly complex (Liu et al. 2020; Zheng et al. 2022; Liu, Nguyen, and Fang 2021). In this work, we propose to leverage data augmentation based self-knowledge distillation to extract effective tail user embeddings via learned head knowledge. These operations are free of additional parameters and fully model-agnostic. We first propose two data augmentation methods in both structural and embedding space, for head and tail users respectively, and then introduce the MI maximization based self-knowledge distillation pretext tasks.

Data augmentation. Synthesized data generation is the critical component of data augmentation based self-knowledge distillation (Xu and Liu 2019). In order to mitigate the preference gap between head and tail users, it is natural to consider corrupting the informative neighborhood and ID embedding of head users to simulate tail users, and meanwhile imputing the neighborhood of tail users to predict their head view. Therefore, we propose to conduct aggressive link dropout and ID embedding disturbance on head users to generate synthesized tail users, and to impute the predicted pseudo links of tail users into their neighborhood to generate synthesized head users, respectively. We first define two independent data augmentation operators, $\Gamma_{head}$ and $\Gamma_{tail}$, for synthesized tail and head user generation:

$$\Gamma_{head}(X_{v:}, A_{v:}, \gamma) = (\tilde{X}^{tail}_{v:}, \tilde{A}^{tail}_{v:}),\quad v \in V_{head} \tag{1}$$

$$\Gamma_{tail}(X_{v:}, A_{v:}, M^p(v)) = (X_{v:}, \tilde{A}^{head}_{v:}),\quad v \in V_{tail} \tag{2}$$

$\Gamma_{head}$ conducts data augmentation in both structural and embedding space. For the structural space, denoting by $\gamma$ the maximum number of preserved neighbors, we randomly select only a few neighbors of each head node to keep, in order to simulate the sparse neighborhood of tail users. Formally, given $v \in V_{head}$, node degree $|N_v|$ and the number of preserved neighbors $z = \mathrm{rand}(0, \gamma)$, the neighbor preservation probability is $p^s_v = z / |N_v|$. Then, we can sample a random mask $M^s(v) \in \{0, 1\}^C$ for $v$'s adjacency vector, where $M^s_i(v) \sim B(p^s_v)$ if $A_{vi} = 1$ and $M^s_i(v) = 0$ otherwise. The final adjacency vector of $v$ can be denoted as:

$$\tilde{A}^{tail}_{v:} = A_{v:} \circ M^s(v) \tag{3}$$

where $\circ$ is the element-wise product. Note that the head link dropout is only conducted in the first hop; the higher-order neighborhood is not affected. For the embedding space, considering that the learned tail user ID embeddings are always noisy due to label sparsity, we add random noise $M^e(v)$ sampled from the standard Gaussian distribution to the original input user embedding $X_{v:}$ to simulate the noisy tail user embedding. The embedding masking operation is only conducted on the center user:

$$\tilde{X}^{tail}_{v:} = X_{v:} + M^e(v),\quad M^e(v) \sim \mathcal{N}(0, I) \tag{4}$$

As for $\Gamma_{tail}$, we simply impute the neighborhood of tail users with the predicted pseudo links to generate synthesized head users. Formally, for each $v \in V_{tail}$, given the predicted pseudo adjacency vector $M^p(v)$, the imputed adjacency vector of user $v$ can be denoted as:

$$\tilde{A}^{head}_{v:} = A_{v:} \lor M^p(v) \tag{5}$$

where $\lor$ is the element-wise union. Note that the imputation is only conducted in the first-hop neighborhood. We denote the synthesized tail/head user generated from user $v$ as $v^{tail}$/$v^{head}$, respectively.
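The two operators can be sketched per user as below. This is an illustrative reading of Eqs. (1)-(5), using neighbor lists instead of full adjacency vectors as a simplifying assumption.

```python
import torch

def gamma_head(x_v: torch.Tensor, nbrs: torch.Tensor, gamma: int):
    """Gamma_head sketch: synthesize a tail view of a head user via
    aggressive first-hop link dropout (Eq. 3) plus Gaussian disturbance
    of the center user's ID embedding (Eq. 4)."""
    z = torch.randint(0, gamma + 1, (1,)).item()             # preserved-neighbor budget z
    p = z / max(nbrs.numel(), 1)                             # keep probability p^s_v = z / |N_v|
    keep = torch.bernoulli(torch.full((nbrs.numel(),), p)).bool()
    x_tail = x_v + torch.randn_like(x_v)                     # M^e(v) ~ N(0, I)
    return x_tail, nbrs[keep]                                # disturbed embedding, sparse neighborhood

def gamma_tail(nbrs: torch.Tensor, pseudo_nbrs: torch.Tensor) -> torch.Tensor:
    """Gamma_tail sketch: impute predicted pseudo links into a tail user's
    first-hop neighborhood (Eq. 5, element-wise union)."""
    return torch.unique(torch.cat([nbrs, pseudo_nbrs]))
```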
Pretext tasks. We formulate the output of the final layer of the GNN-based user encoder as $H = f(X, A)$, where $h_v$ is $v$'s user embedding. Then, the output embeddings of synthesized tail and head users can be denoted as $H^{tail}$ and $H^{head}$, respectively, defined as:

$$H^{tail} = f(\tilde{X}^{tail}_{v:}, \tilde{A}^{tail}_{v:}),\qquad H^{head} = f(X_{v:}, \tilde{A}^{head}_{v:}) \tag{6}$$

where $h^{tail}_v$ and $h^{head}_v$ are the embeddings of $v^{tail}$ and $v^{head}$. Generally, self-knowledge distillation is reconstruction-oriented (Ji et al. 2021), where the outputs of different input views are expected to be identical. In this work, we instead adopt MI maximization and the popular InfoNCE contrastive loss for distillation to avoid the strict reconstruction constraint. To adapt to the pair-wise objective, the embeddings of the original user and the correspondingly generated synthesized users are regarded as positive pairs, i.e., $S^{tail} = \{(h_v, h^{head}_v) : v \in V_{tail}\}$ and $S^{head} = \{(h_v, h^{tail}_v) : v \in V_{head}\}$. On the other hand, the embeddings of the synthesized users of other users within the same batch are regarded as negative samples. Using MI-based distillation makes the model pay more attention to the distributional consistency between synthesized and corresponding source embeddings. Two distillation-based pretext tasks are defined on head and tail users respectively, and their pairwise objective functions can be formulated as $L_{dp}$ and $L_{im}$:

$$\phi(h_i, h_j) = \exp\big(s(h_i, h_j)/\tau\big) \tag{7}$$

$$l_{dp}(v, v^{tail}) = -\log \frac{\phi(h_v, h^{tail}_v)}{\sum_{(u, u^{tail}) \in S^{head}} \phi(h_v, h^{tail}_u)} \tag{8}$$

$$l_{im}(v, v^{head}) = -\log \frac{\phi(h_v, h^{head}_v)}{\sum_{(u, u^{head}) \in S^{tail}} \phi(h_v, h^{head}_u)} \tag{9}$$

$$L_{dp}(v) = l_{dp}(v, v^{tail}) + l_{dp}(v^{tail}, v) \tag{10}$$

$$L_{im}(v) = l_{im}(v, v^{head}) + l_{im}(v^{head}, v) \tag{11}$$

where $s$ is the cosine similarity function and $\tau$ is the temperature hyperparameter. Note that although our proposed self-knowledge distillation based pretext tasks are similar to graph contrastive learning (GCL) based methods, such as GRACE (Zhu et al. 2020) and SGL (Wu et al. 2021), they are designed for different purposes. GCL methods are devoted to exploiting the unlabeled data space to alleviate data sparsity and improve overall performance. In contrast, our distillation-based method is designed for alleviating neighborhood sparsity, where the synthesized users are leveraged to conduct self-knowledge distillation, such that the derived model can comprehend both head and tail user preference distributions.

Overall Framework

We first define the BPR loss (Rendle et al. 2009) $L_t$ for the friend recommendation task, formulated as:

$$L_t(v, u) = -\frac{1}{C_n} \sum_{u_n \in V_n} \log\big(\sigma(g(v, u) - g(v, u_n))\big) \tag{12}$$

where $u_n$ is a negative sample, $V_n$ is the negative user set, and $C_n$ is the size of $V_n$; $g$ is the function for friendship prediction, defined here as a two-layer MLP. Based on Tail-STEAKbase and our proposed self-knowledge distillation mechanism, given a link batch $E_B$, the objective function of the first stage can be formulated as $L_1$ in Eq. 15:

$$L^{head}_1(v, u) = L_t(v, u) + L_t(v^{tail}, u) + L_{dp}(v) \tag{13}$$

$$L^{head}_1 = \frac{1}{|E_B|} \sum_{(v,u) \in E_B} \mathbb{1}_{head}(v)\, L^{head}_1(v, u) \tag{14}$$

$$L_1 = L^{head}_1 + \lambda \|\Omega\|^2 \tag{15}$$

where $|E_B|$ denotes the batch size, $\Omega$ denotes all the trainable parameters in encoder $f$, $\mathbb{1}_{head}$ is an indicator function which returns 1 if the input user $v \in V_{head}$ and 0 otherwise, and $\lambda$ is a hyperparameter controlling the strength of L2 regularization. In the second stage, a new training set is constructed based on both observed and generated links. Both distillation pretext tasks are adopted in this stage. The corresponding objective function can be formulated as $L_2$ in Eq. 18, where $\mathbb{1}_{tail}$ is another indicator function that returns 1 if the input user $v \in V_{tail}$ and 0 otherwise:

$$L^{tail}_2(v, u) = L_t(v, u) + L_t(v^{head}, u) + L_{im}(v) \tag{16}$$

$$L^{tail}_2 = \frac{1}{|E_B|} \sum_{(v,u) \in E_B} \mathbb{1}_{tail}(v)\, L^{tail}_2(v, u) \tag{17}$$

$$L_2 = L^{tail}_2 + L_1 \tag{18}$$

Note that although Tail-STEAK is two-stage, we make modifications solely to the input data and objective function, and no additional modules are integrated into the base model, which keeps Tail-STEAK an end-to-end framework.
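For concreteness, the symmetric InfoNCE distillation of Eqs. (7)-(11) can be sketched as follows; the BPR term $L_t$ of Eq. (12) would then be added per Eqs. (13)-(18). The batch construction here is our own simplification, not the released code.

```python
import torch
import torch.nn.functional as F

def distill_loss(h: torch.Tensor, h_aug: torch.Tensor, tau: float) -> torch.Tensor:
    """Symmetric InfoNCE over (original, synthesized) user pairs.

    h:     (B, d) embeddings of original users (all head, or all tail)
    h_aug: (B, d) embeddings of their synthesized counterparts
    Other synthesized users in the batch act as negatives, and both
    directions l(v, v') + l(v', v) of Eqs. (10)/(11) are summed.
    """
    h = F.normalize(h, dim=-1)                        # cosine similarity s(., .)
    h_aug = F.normalize(h_aug, dim=-1)
    logits = (h @ h_aug.t()) / tau                    # log phi(h_i, h_j)
    labels = torch.arange(h.size(0), device=h.device) # positives on the diagonal
    return F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)
```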
Experiments

Experimental Setup

Dataset. We conducted experiments on two public benchmark social networks, Deezer (Benedek and Rik 2020) and Last.FM¹. Both datasets are friendship networks collected from different services at different times, where nodes and edges represent users and mutual friendships respectively; node features are not available. The train/val/test split ratio is 70%/10%/20% for all datasets. For each friendship to predict, we randomly sample 19 and 99 negative samples for training and testing, respectively. Relevant statistics are presented in Table 1.

¹ http://lfs.aminer.cn/lab-datasets/multi-sns/lastfm.tar.gz

Table 1: Dataset statistics.

Dataset | # Nodes | # Edges | Median |N_v| | # Tail
Deezer | 28,281 | 92,752 | 4 | 15,814
Last.FM | 136,409 | 1,685,524 | 8 | 45,389

Base GNN Models. To evaluate the flexibility of our method, we adopt LightGCN (He et al. 2020) and Simple-HGN (Lv et al. 2021) (denoted as SHGN) as base GNNs. LightGCN is one of the most popular models for recommendation, while SHGN is a state-of-the-art GNN for heterogeneous graph learning. We remove the edge type embeddings and adapt SHGN to the friend recommendation task.

Baseline Methods. Besides the base GNN models, we select four categories of baselines designed for mitigating degree-related bias or the data sparsity issue to comprehensively evaluate Tail-STEAK. (1) Graph contrastive learning methods: DGI (Veličković et al. 2019) maximizes MI between node views from the original and corrupted graphs; GRACE (Zhu et al. 2020) augments graphs by link dropout and node feature masking, and correlates the generated views via self-discrimination; MvGRL (Hassani and Khasahmadi 2020) introduces graph diffusion into graph contrastive learning. These methods aim to exploit the unlabeled data space and alleviate data sparsity, but they do not specifically focus on tail node improvement. (2) Adaptive embedding refinement models: LFT (Zhu and Caverlee 2022) first learns a common prior model with all available labels, which is then fine-tuned on nodes with different degrees.
Similarly, MoE (Masoudnia and Ebrahimpour 2014) trains several expert encoders for nodes with different degrees, and then derives the best expert combinations via a degree-aware gating network. Note that MoE is not applicable to LightGCN, as there is no encoder in LightGCN. (3) Weak neighborhood imputation models: Tail-GNN (Liu, Nguyen, and Fang 2021) attempts to directly impute the weak neighborhood of tail nodes via transferable neighborhood translation. (4) Self-supervised learning methods for recommendation: SGL (Wu et al. 2021) and SimGCL (Yu et al. 2022) perform augmentation over the graph structure and user embeddings via random dropout, respectively. NCL (Lin et al. 2022) proposes heuristic-based strategies to construct different views based on structural and semantic neighborhoods. These methods are designed specifically for recommendation, in order to alleviate the data sparsity issue. We also adopt SSNet (Song et al. 2022) for comparison, which was recently proposed to alleviate the scale distortion issue in friend recommendation. We do not consider meta-tail2vec (Liu et al. 2020) and Cold Brew (Zheng et al. 2022) as baselines, because embedding reconstruction is required in both methods, which is not suitable for recommendation. GRADE (Wang et al. 2022) is also excluded due to its massive memory requirement. For our method, we evaluate two versions of Tail-STEAK: Tail-STEAKno-mask removes the feature-space operation in the first stage, while Tail-STEAKfull is the full version.

Evaluation Metrics. Following previous works on friend recommendation, we adopt two commonly used metrics for evaluation: MRR (Mean Reciprocal Rank) and NDCG@K (Normalized Discounted Cumulative Gain). Both MRR and NDCG reflect ranking quality. We set K = 10 for NDCG@K.

Implementation Details. We implement Tail-STEAK via PyTorch-Geometric (Fey and Lenssen 2019). We adopt a 2-layer GNN architecture, and the ID embedding dimension δ is fixed at 64. γ in $\Gamma_{head}$ and U for pseudo link generation are tuned from {1, 2, ..., 8} and {200, 500, 1000, 2000}, respectively. Adam (Kingma and Ba 2014) is adopted for optimization with learning rate 0.001, and λ = 0.0001. Please refer to our GitHub repo for more details.
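Under this protocol, each test friendship is ranked against its 99 sampled negatives, so both metrics reduce to a function of the positive's rank. The sketch below is our own illustration of the evaluation, not the released code.

```python
import numpy as np

def mrr_ndcg_at_k(pos_rank: np.ndarray, k: int = 10):
    """pos_rank: 1-indexed rank of the true friend among
    1 positive + 99 sampled negatives, one entry per test edge."""
    mrr = (1.0 / pos_rank).mean()
    hit = pos_rank <= k
    # With a single relevant item, IDCG = 1, so NDCG@k = 1 / log2(rank + 1) on hits
    ndcg = np.where(hit, 1.0 / np.log2(pos_rank + 1.0), 0.0).mean()
    return mrr, ndcg
```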
(3) Among all contrastive learning based methods, only GRACE improve both head and tail user performance compared with base model. DGI and MvGRL mainly improve head user performance, while the effect of SGL, NCL and SimGCL depends on specific dataset. (4) LFT and MoE are less beneficial and even harmful for both head and tail user performance. We believe the reason is that model adaptation based on few labels of tail users may cause over-fitting. However, for LightGCN, LFT can significantly improve tail user performance. (5) Both Tail-GNN and SSNet can effectively improve both head and tail user performance of LightGCN in certain datasets. However, they are much less effective and even harmful for SHGN. (6) By comparing TailSTEAKno-mask and Tail-STEAKfull, we can find that the ID embedding disturbance operation has significant impact on the model performance, which is beneficial for SHGN and harmful for LightGCN. We believe the reason is that random noise in feature space is helpful for the learning of GNNs with non-linear transformations like SHGN. Ablation Study To empirically discuss the impact of each proposed component, we conduct two branches of ablation experiments on Deezer. Specifically, in the first branch, we first remove terms Ltail 2 (w/o 2nd tail-based KD) and Lhead 1 + Ltail 2 (w/o 2nd KD) in L2 respectively, and then remove the whole second stage (w/o 2nd stage). In the second branch, we discuss several noteworthy alternatives of Tail-STEAK. We first re-train GNN models only based on the second stage (w/o sep stage), and then use all available users for pseudo link prediction (w/o user subset sample). We also replace the MI maximization based distillation loss with traditional reconstruction-based loss (w/o MI loss), and further replace the distillation strategy with pure data augmentation operation (w/o KD), where the synthesized users are only used as training samples. Note that we have shown the impact of ID embedding disturbance (w/o ID disturb) in previous section, so we will skip relevant discussion here. We report the degree-related evaluation results in Table 3. Based on the evaluation results, we can find that our proposed components have different impact on different base model. For the first branch: (1) For SHGN, head user based self-knowledge distillation significantly improves both head and tail user performance, while tail user based distillation can be harmful for both user groups. Predicting pseudo links can bring limited improvement for SHGN-based friend recommenders, and the pseudo labels may harm the performance without the guidance of head user based distillation. (2) For LightGCN, head user based distillation has little impact on tail user performance, which is opposite for tail user based distillation. Pseudo link prediction is also critical for improving tail user performance for LightGCN-based recommender. For the second branch: (1) Separating TailSTEAK into two stages and pre-train a qualified model in the first stage is necessary for SHGN, while it is less helpful for LightGCN. (2) Introducing diversified supervision signals via randomly sampled potential friends is helpful for SHGN-based friend recommender optimization, which can also be harmful for LightGCN-based recommenders. (3) MI maximization based distillation loss is superior than reconstruction-based distillation loss for both base models. 
(4) Removing all the knowledge distillation based objective terms leads to significant performance degradation for SHGN, while having little impact on LightGCN. The reason why the proposed components have different impacts lies in the significant difference between the base model architectures. There is no actual encoder in LightGCN, which only has an ID embedding layer and iteratively performs message propagation among adjacent users. In contrast, SHGN has trainable encoders with non-linear transformations.

Conclusion

In this work, we studied the problem of degree-related bias in graph-based friend recommendation. We identify two major challenges in this problem: (C1) label sparsity; (C2) neighborhood sparsity. To tackle these challenges, we propose Tail-STEAK, a novel model-agnostic self-training enhanced knowledge distillation framework free of additional parameters. Tail-STEAK is developed on top of a two-stage self-training paradigm named Tail-STEAKbase to address (C1), where only head nodes and their qualified connections are used for model training in the first stage, followed by predicting pseudo links for tail users in the second stage. To address (C2), two data augmentation-based self-knowledge distillation pretext tasks are further incorporated into Tail-STEAK, which conduct data augmentation in both the feature and structural space to distill the rich knowledge of head users into tail users, helping the model comprehend both head and tail user preference distributions. Comprehensive experiments on two GNN-based friend recommendation models and benchmark datasets demonstrate that Tail-STEAK significantly improves tail user performance while maintaining competitive head user performance.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (NSFC Grant No. 62106274) and the Fundamental Research Funds for the Central Universities, Renmin University of China (No. 22XNKJ24). We also wish to acknowledge the support provided by the Intelligent Social Governance Platform, Major Innovation & Planning Interdisciplinary Platform for the "Double First-Class" Initiative.

References

Adamic, L. A.; Lukose, R. M.; Puniyani, A. R.; and Huberman, B. A. 2001. Search in power-law networks. Phys. Rev. E, 64: 046135.
Benedek, R.; and Rik, S. 2020. Characteristic Functions on Graphs: Birds of a Feather, from Statistical Descriptors to Parametric Models. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management, CIKM ’20, 1325–1334.
Fey, M.; and Lenssen, J. E. 2019. Fast Graph Representation Learning with PyTorch Geometric. In Proceedings of the 7th International Conference on Learning Representations (RLGM Workshop), ICLR ’19.
Hao, B.; Zhang, J.; Yin, H.; Li, C.; and Chen, H. 2021. Pre-Training Graph Neural Networks for Cold-Start Users and Items Representation. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, WSDM ’21, 265–273.
Hassani, K.; and Khasahmadi, A. H. 2020. Contrastive Multi-View Representation Learning on Graphs. In Proceedings of the 37th International Conference on Machine Learning, ICML ’20.
He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; and Wang, M. 2020. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’20, 639–648.
Herbelot, A.; and Baroni, M. 2017.
High-risk learning: acquiring new word vectors from tiny data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP ’17, 304–309.
Ji, M.; Shin, S.; Hwang, S.; Park, G.; and Moon, I.-C. 2021. Refine myself by teaching myself: Feature refinement via self-knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR ’21, 10664–10673.
Kang, J.; Zhu, Y.; Xia, Y.; Luo, J.; and Tong, H. 2022. RawlsGCN: Towards Rawlsian Difference Principle on Graph Convolutional Network. In Proceedings of the ACM Web Conference 2022, WWW ’22, 1214–1225.
Khodak, M.; Saunshi, N.; Liang, Y.; Ma, T.; Stewart, B.; and Arora, S. 2018. A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL ’18, 12–22.
Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Lee, H.; Hwang, S. J.; and Shin, J. 2020. Self-supervised Label Augmentation via Input Transformations. In Proceedings of the 37th International Conference on Machine Learning, ICML ’20, 5714–5724.
Lin, Z.; Tian, C.; Hou, Y.; and Zhao, W. X. 2022. Improving Graph Collaborative Filtering with Neighborhood-Enriched Contrastive Learning. In Proceedings of the ACM Web Conference 2022, WWW ’22, 2320–2329.
Liu, H.; Hu, B.; Wang, X.; Shi, C.; Zhang, Z.; and Zhou, J. 2022. Confidence May Cheat: Self-Training on Graph Neural Networks under Distribution Shift. In Proceedings of the ACM Web Conference 2022, WWW ’22, 1248–1258.
Liu, Z.; Nguyen, T.-K.; and Fang, Y. 2021. Tail-GNN: Tail-Node Graph Neural Networks. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD ’21, 1109–1119.
Liu, Z.; Shen, Y.; Cheng, X.; Li, Q.; Wei, J.; Zhang, Z.; Wang, D.; Zeng, X.; Gu, J.; and Zhou, J. 2021. Learning Representations of Inactive Users: A Cross Domain Approach with Graph Neural Networks. In Proceedings of the 30th ACM International Conference on Information and Knowledge Management, CIKM ’21, 3278–3282.
Liu, Z.; Zhang, W.; Fang, Y.; Zhang, X.; and Hoi, S. C. 2020. Towards Locality-Aware Meta-Learning of Tail Node Embeddings on Networks. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management, CIKM ’20, 975–984.
Lv, Q.; Ding, M.; Liu, Q.; Chen, Y.; Feng, W.; He, S.; Zhou, C.; Jiang, J.; Dong, Y.; and Tang, J. 2021. Are We Really Making Much Progress? Revisiting, Benchmarking and Refining Heterogeneous Graph Neural Networks. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD ’21, 1150–1160.
Masoudnia, S.; and Ebrahimpour, R. 2014. Mixture of experts: a literature survey. The Artificial Intelligence Review, 42(2): 275.
Rendle, S.; Freudenthaler, C.; Gantner, Z.; and Schmidt-Thieme, L. 2009. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, 452–461.
Sankar, A.; Liu, Y.; Yu, J.; and Shah, N. 2021. Graph Neural Networks for Friend Ranking in Large-Scale Social Platforms. In Proceedings of the Web Conference 2021, WWW ’21, 2535–2546.
Song, X.; Lian, J.; Huang, H.; Wu, M.; Jin, H.; and Xie, X. 2022. Friend Recommendations with Self-Rescaling Graph Neural Networks. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD ’22, 3909–3919.
Tang, X.; Yao, H.; Sun, Y.; Wang, Y.; Tang, J.; Aggarwal, C.; Mitra, P.; and Wang, S. 2020. Investigating and Mitigating Degree-Related Biases in Graph Convolutional Networks. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management, CIKM ’20, 1435–1444.
Veličković, P.; Fedus, W.; Hamilton, W. L.; Liò, P.; Bengio, Y.; and Hjelm, R. D. 2019. Deep Graph Infomax. In Proceedings of the 7th International Conference on Learning Representations, ICLR ’19.
Wang, R.; Wang, X.; Shi, C.; and Song, L. 2022. Uncovering the Structural Fairness in Graph Contrastive Learning. In Proceedings of the 36th Advances in Neural Information Processing Systems, NeurIPS ’22.
Wang, X.; He, X.; Cao, Y.; Liu, M.; and Chua, T.-S. 2019a. KGAT: Knowledge Graph Attention Network for Recommendation. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’19, 950–958.
Wang, X.; He, X.; Wang, M.; Feng, F.; and Chua, T.-S. 2019b. Neural Graph Collaborative Filtering. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’19, 165–174.
Wang, X.; Jin, H.; Zhang, A.; He, X.; Xu, T.; and Chua, T.-S. 2020. Disentangled Graph Collaborative Filtering. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’20, 1001–1010.
Wu, J.; He, J.; and Xu, J. 2019. DEMO-Net: Degree-Specific Graph Neural Networks for Node and Graph Classification. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19, 406–415.
Wu, J.; Wang, X.; Feng, F.; He, X.; Chen, L.; Lian, J.; and Xie, X. 2021. Self-Supervised Graph Learning for Recommendation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’21, 726–735.
Xu, T.-B.; and Liu, C.-L. 2019. Data-Distortion Guided Self-Distillation for Deep Neural Networks. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI ’19.
Yan, H.; Li, C.; Long, R.; Yan, C.; Zhao, J.; Zhuang, W.; Yin, J.; Zhang, P.; Han, W.; Sun, H.; et al. 2023. A Comprehensive Study on Text-attributed Graphs: Benchmarking and Rethinking. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
Yu, J.; Yin, H.; Xia, X.; Chen, T.; Cui, L.; and Nguyen, Q. V. H. 2022. Are Graph Augmentations Necessary? Simple Graph Contrastive Learning for Recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’22, 1294–1303.
Yun, S.; Park, J.; Lee, K.; and Shin, J. 2020. Regularizing class-wise predictions via self-knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13876–13885.
Zhang, L.; Song, J.; Gao, A.; Chen, J.; Bao, C.; and Ma, K. 2019. Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), ICCV ’19, 3712–3721.
Zhang, P.; Guo, J.; Li, C.; Xie, Y.; Kim, J. B.; Zhang, Y.; Xie, X.; Wang, H.; and Kim, S. 2023.
Efficiently leveraging multi-level user intent for session-based recommendation via atten-mixer network. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, 168–176.
Zhao, J.; Qu, M.; Li, C.; Yan, H.; Liu, Q.; Li, R.; Xie, X.; and Tang, J. 2023. Learning on large-scale text-attributed graphs via variational inference. In Proceedings of the 11th International Conference on Learning Representations, ICLR ’23.
Zheng, J.; Ma, Q.; Gu, H.; and Zheng, Z. 2021. Multi-View Denoising Graph Auto-Encoders on Heterogeneous Information Networks for Cold-Start Recommendation. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD ’21, 2338–2348.
Zheng, W.; Huang, E. W.; Rao, N.; Katariya, S.; Wang, Z.; and Subbian, K. 2022. Cold Brew: Distilling Graph Node Representations with Incomplete or Missing Neighborhoods. In Proceedings of the 10th International Conference on Learning Representations, ICLR ’22.
Zhou, X.; Liu, D.; Lian, J.; and Xie, X. 2019. Collaborative metric learning with memory network for multi-relational recommender systems. arXiv preprint arXiv:1906.09882.
Zhu, Y.; Xu, Y.; Yu, F.; Liu, Q.; Wu, S.; and Wang, L. 2020. Deep Graph Contrastive Representation Learning. In Proceedings of the 33rd Advances in Neural Information Processing Systems, NeurIPS ’20.
Zhu, Z.; and Caverlee, J. 2022. Fighting Mainstream Bias in Recommender Systems via Local Fine Tuning. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, WSDM ’22, 1497–1506.
Disentangled Diffusion-Based 3D Human Pose Estimation with Hierarchical Spatial and Temporal Denoiser
Qingyuan Cai1*, Xuecai Hu1*, Saihui Hou1,2†, Li Yao1, Yongzhen Huang1,2†
1School of Artificial Intelligence, Beijing Normal University 2WATRIX.AI
[email protected], {huxc1208, housaihui, yaoli, huangyongzhen}@bnu.edu.cn
*These authors contributed equally. †Corresponding author
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Recently, diffusion-based methods for monocular 3D human pose estimation have achieved state-of-the-art (SOTA) performance by directly regressing the 3D joint coordinates from the 2D pose sequence. Although some methods decompose the task into bone length and bone direction prediction based on the human anatomical skeleton to explicitly incorporate more human body prior constraints, the performance of these methods is significantly lower than that of the SOTA diffusion-based methods. This can be attributed to the tree structure of the human skeleton. Direct application of the disentangled method could amplify the accumulation of hierarchical errors, propagating through each hierarchy. Meanwhile, the hierarchical information has not been fully explored by previous methods. To address these problems, a Disentangled Diffusion-based 3D human Pose Estimation method with Hierarchical Spatial and Temporal Denoiser is proposed, termed DDHPose. In our approach: (1) We disentangle the 3D pose and diffuse the bone length and bone direction during the forward process of the diffusion model to effectively model the human pose prior. A disentanglement loss is proposed to supervise diffusion model learning. (2) For the reverse process, we propose the Hierarchical Spatial and Temporal Denoiser (HSTDenoiser) to improve the hierarchical modeling of each joint. Our HSTDenoiser comprises two components: the Hierarchical-Related Spatial Transformer (HRST) and the Hierarchical-Related Temporal Transformer (HRTT). HRST exploits joint spatial information and the influence of the parent joint on each joint for spatial modeling, while HRTT utilizes information from both the joint and its hierarchically adjacent joints to explore the hierarchical temporal correlations among joints. Extensive experiments on the Human3.6M and MPI-INF-3DHP datasets show that our method outperforms the SOTA disentangle-based, non-disentangle based, and probabilistic approaches by 10.0%, 2.0%, and 1.3%, respectively.

Figure 1: Left: The hierarchy defined in our method and the forward kinematic structure (drawn with brown dashed lines) based on the Human3.6M dataset. Right: The MPJPE comparison on hierarchy 1-5 joints among Anatomy3D (Chen et al. 2021), MixSTE (Zhang et al. 2022a), D3DP (Shan et al. 2023), and our method.

Introduction
3D Human Pose Estimation (HPE) has crucial applications in virtual reality (Hagbi et al. 2010), human motion recognition (Zhang et al. 2022b), and human-computer interaction (Kisacanin, Pavlovic, and Huang 2005). The goal is to regress the 3D joint locations of a human in 3D space from an input 2D pose sequence. Most methods first derive predictions of 2D joints using estimators such as HRNet (Wang et al. 2020), CPN (Chen et al. 2018), OpenPose (Cao et al. 2017), and AlphaPose (Fang et al. 2017), and then perform 2D-to-3D lifting to obtain the final estimation results. Recently, monocular 3D human pose estimation has experienced significant advancements.
Many methods have been proposed to alleviate the depth ambiguity. (Pavllo et al. 2019) considers this issue by exploring temporal information with a convolutional network, while transformer-based methods (Zheng et al. 2021; Zhang et al. 2022a) make use of spatial-temporal information to compensate for the information loss in the 2D-to-3D mapping process. Learning or introducing a human pose prior is another way to mitigate the depth ambiguity. (Shan et al. 2023; Ci et al. 2023; Gong et al. 2023) introduce the original pose distribution prior in the training phase, and model 2D-to-3D lifting as a process of denoising from the pose distribution with uncertain noise. Moreover, some disentangle-based methods like (Xu et al. 2020; Chen et al. 2021; Wang et al. 2022) explicitly predict the bone length and bone direction, subsequently composing 3D joint locations based on the forward kinematics of the human skeleton. Such methods employ explicit pose constraints, integrating a symmetry loss, joint angle limits (Xu et al. 2020), and the consistency of bone length in videos (Chen et al. 2021). However, two problems exist in these methods: (1) Despite the advantages of disentangle-based techniques in incorporating human pose priors, they come with the drawback of amplified error accumulation, resulting in decreased performance. Meanwhile, diffusion-based 3D HPE methods (Shan et al. 2023; Ci et al. 2023; Gong et al. 2023) directly add noise to the original 3D pose, which is not conducive to learning the explicit human pose prior. What if we disentangle the diffusion model by adding noise to bone length and direction separately? Such a disentangle-based model can separately focus on the temporal consistency of bone length and joint angle variations, better enabling the diffusion model to learn the human pose prior. (2) Although transformer-based methods have the ability to explore spatial-temporal context information, these models generally lack attention to the fine-grained hierarchical information among joints. As shown in the left side of Figure 1, we group joints into six hierarchies based on the kinematic tree depth of the human body (a small sketch of this grouping is given below). The experimental results in the right side of Figure 1 show a rising hierarchical accumulation error as the hierarchy increases from 1 to 5. To solve the problems mentioned above, (1) we introduce the disentangled method in the forward process of the diffusion model instead of decomposing the 3D HPE task into bone length and bone direction prediction tasks, which simplifies learning the human pose prior. (2) For better modeling of the hierarchical relations among joints, we propose the HSTDenoiser, which contains two modules: the Hierarchical-Related Spatial Transformer (HRST) and the Hierarchical-Related Temporal Transformer (HRTT). In HRST, since the spatial information of a joint is influenced by its parent joint, we supply the joint's attention with information from its parent joint. Besides, in HRTT, we apply cross-attention between the joint and its adjacent joints to learn their temporal interrelationships. HRST and HRTT make the joints pay more attention to their hierarchically related joints, which consequently improves performance on higher-hierarchy joints and contributes to overall better performance.
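To illustrate the hierarchy grouping sketched above, the snippet below assigns each joint to a hierarchy by its depth in the kinematic tree. The 17-joint parent array is a common Human3.6M-style skeleton that we assume for illustration; the exact tree is defined in the left portion of Figure 1.

# Group joints into hierarchies by their depth in the kinematic tree.
# parents[i] is the parent of joint i; -1 marks the root (pelvis).
# This parent array is an illustrative assumption, not from the paper.
parents = [-1, 0, 1, 2, 0, 4, 5, 0, 7, 8, 9, 8, 11, 12, 8, 14, 15]

def hierarchy_of(j, parents):
    depth = 0
    while parents[j] != -1:   # walk up to the root, counting edges
        j = parents[j]
        depth += 1
    return depth

levels = [hierarchy_of(j, parents) for j in range(len(parents))]
print(levels)  # joints sharing a depth share one hierarchical embedding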
In conclusion, our contributions can be summarized as follows:
• We propose the first Disentangled Diffusion-based 3D human Pose Estimation method with Hierarchical Spatial and Temporal Denoiser (DDHPose), which introduces hierarchical information in two ways.
• We present the Disentangle Strategy for the forward diffusion process based on hierarchical information to better model the explicit pose prior. Additionally, we incorporate a disentanglement loss to guide the model's training.
• The HSTDenoiser is introduced, comprising the Hierarchical-Related Spatial Transformer (HRST) and the Hierarchical-Related Temporal Transformer (HRTT). This denoiser strengthens the relations among hierarchical joints by enhancing the attention weight of adjacent joints in the reverse diffusion process.
• Our method outperforms disentangle-based, non-disentangle based, and probabilistic methods on 3D HPE benchmarks. The qualitative results show that our method performs better on the higher-hierarchy joints.

Related Work
3D Human Pose Estimation
3D HPE can be divided into two categories: one directly regresses the 3D human pose from raw RGB images, and the other first detects the 2D human pose from raw RGB images using a 2D human pose estimation method like HRNet (Wang et al. 2020), CPN (Chen et al. 2018), OpenPose (Cao et al. 2017), or AlphaPose (Fang et al. 2017), and then performs 2D-to-3D lifting to get the final estimation results. (Tekin et al. 2016; Pavlakos et al. 2017; Sun et al. 2018) directly use convolutional neural networks to regress the 3D pose from a feature volume. Building on the accuracy improvements of 2D human pose estimation, (Pavllo et al. 2019) uses a fully convolutional model based on dilated temporal convolutions to estimate 3D poses and achieves better results. (Zheng et al. 2021; Zhang et al. 2022a; Zhao et al. 2023) demonstrate that 3D poses in video can be effectively estimated with spatial-temporal transformer architectures. Due to the superior performance of two-stage methods, we also employ a two-stage approach for 3D human pose estimation in this paper. While these models are capable of exploring spatial-temporal context information, they often fail to incorporate fine-grained hierarchical information. This leads to a higher hierarchical accumulation error from hierarchy 1 to hierarchy 5 in the right portion of Figure 1. Therefore, we apply HRST and HRTT in our method, providing more hierarchical features for better modeling.

Diffusion Model
The diffusion model belongs to a class of generative models with outstanding performance in image generation (Batzolis et al. 2021; Nichol et al. 2021; Ho et al. 2022), image super-resolution (Saharia et al. 2022), semantic segmentation (Baranchuk et al. 2021), multi-modal tasks (Fan et al. 2023), and so on. The diffusion model was first introduced by (Sohl-Dickstein et al. 2015), which defines two stages: the forward process and the reverse process. The forward process refers to the gradual addition of Gaussian noise to the data until it becomes random noise, while the reverse process denoises the noisy data to recover the true samples. The follow-up works DDPM (Ho, Jain, and Abbeel 2020) and DDIM (Song, Meng, and Ermon 2020) simplify and accelerate previous diffusion models, laying a solid foundation in this area. Recent explorations (Choi, Shim, and Kim 2022; Holmquist and Wandt 2022; Ci et al. 2023; Shan et al. 2023) try to apply the diffusion model to 3D human pose estimation.
Note that (Gong et al. 2023) also uses a diffusion model for 3D HPE, but they additionally introduce the heatmap distribution of the 2D pose and the depth distribution to initialize the 3D pose distribution, yielding a GMM-based forward diffusion process, so that they achieve better performance than the other diffusion-based 3D HPE models. However, these approaches directly add t-step noise in the forward process to the original 3D pose, which is not conducive to learning the explicit human pose prior. Additionally, the methods that disentangle the 3D joint location into bone length and bone direction prediction (Xu et al. 2020; Chen et al. 2021; Wang et al. 2022) suffer from a higher accumulation of errors. We introduce the disentanglement strategy in the forward process of the diffusion model, integrating the explicit human body prior into the diffusion model, and propose the first disentangle-based diffusion model for 3D HPE. As a result, we achieve outstanding results on 3D HPE benchmarks.

Figure 2: (a): The overview of DDHPose's training pipeline. (b): The architecture of our HSTDenoiser, which contains HRST and HRTT. HSP embedding and TP embedding are used in the spatial-temporal transformer for better modeling of the hierarchical relations of spatial position information and temporal position information. f is the feature extracted by the spatial-temporal transformer and fc is the child joint feature separated from f. The input consists of N frames for both the 2D pose and the 3D pose. For better clarity, only three frames of input are illustrated here as an example.

Method
The overview of our proposed DDHPose is in Figure 2(a). In our framework, we decompose the 3D joint location into bone length and bone direction, adding noise in the forward process. After the forward process, the noisy bone length, noisy bone direction, and 2D pose are fed to the HSTDenoiser, which contains HRST and HRTT, to reverse the 3D pose from the noisy input. Further details are introduced in the following sections.

3D Pose Disentanglement Strategy
We first introduce the motivation for using the disentanglement strategy in our paper. The original non-disentangle diffusion-based methods directly take the 3D joint sequence as input without any skeletal structural prior. Modeling the dependencies among each joint pair tends to be challenging due to their complex and dense relations, which makes the optimization task more difficult. In our approach, by contrast, the disentangle-based method first decomposes the ground truth 3D pose $y_0 \in \mathbb{R}^{N \times J \times 3}$ into bone length $l_0 \in \mathbb{R}^{N \times (J-1) \times 1}$ and bone direction $d_0 \in \mathbb{R}^{N \times (J-1) \times 3}$, where N is the frame length of the input sequences and J is the number of joints. This operation divides the dense and high-dimensional problem into multiple sparse and low-dimensional sub-problems, making the gradient-based optimization easier. Besides, the disentangled representation with bone length and direction makes it easier to add structural constraints, such as the temporal consistency of bone length. The addition of a bone length loss as a constraint enhances output certainty and shows its effectiveness in the experiments. For the i-th bone, the ground truth length $l_0^i$ and direction $d_0^i$ are defined as:
$$l_0^i = \left\| y_0^{c_i} - y_0^{p_i} \right\|_2, \quad d_0^i = \frac{y_0^{c_i} - y_0^{p_i}}{\left\| y_0^{c_i} - y_0^{p_i} \right\|_2} \quad (1)$$
where $c_i$ and $p_i$ are the child joint and parent joint at the downstream and upstream ends of the i-th bone, respectively, according to the forward kinematic structure defined in the left portion of Figure 1.
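As a concrete reading of Eq. (1), the following minimal sketch decomposes a 3D pose sequence into bone lengths and unit bone directions; the child/parent index arrays stand in for the kinematic structure of Figure 1 and are our own assumption.

import numpy as np

def disentangle(y0, children, parents, eps=1e-8):
    """y0: (N, J, 3) pose sequence -> l0: (N, J-1, 1), d0: (N, J-1, 3)."""
    bone = y0[:, children] - y0[:, parents]            # child minus parent
    l0 = np.linalg.norm(bone, axis=-1, keepdims=True)  # Eq. (1), length
    d0 = bone / (l0 + eps)                             # Eq. (1), direction
    return l0, d0

N, J = 4, 17
y0 = np.random.randn(N, J, 3)
children = np.arange(1, J)  # one bone per non-root joint (assumed indexing)
parents = np.array([0, 1, 2, 0, 4, 5, 0, 7, 8, 9, 8, 11, 12, 8, 14, 15])
l0, d0 = disentangle(y0, children, parents)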
The disentangled bone length and bone direction are both processed through the forward and reverse processes.

The Forward Process
The forward process is an approximate posterior that follows a Markov chain and gradually adds Gaussian noise $\mathcal{N}(0, I)$ to the original data $x_0$. Following DDPM (Ho, Jain, and Abbeel 2020), the forward process can be defined as:
$$q(x_t \mid x_0) := \mathcal{N}\left(x_t; \sqrt{\bar{\alpha}_t}\, x_0, (1-\bar{\alpha}_t) I\right) \quad (2)$$
where $\bar{\alpha}_t := \prod_{s=0}^{t} \alpha_s$ and $\alpha_s := 1 - \beta_s$; $\beta_s$ is a noise schedule, for which we adopt the cosine schedule proposed by (Song and Ermon 2020), and it always increases as the sampling step increases. During the training stage in Figure 2(a), once we obtain the disentangled bone length $l_0$ and bone direction $d_0$, we apply the forward process of Eq. (2) to each separately to obtain the noisy bone length $l_t$ and bone direction $d_t$ by adding t-step Gaussian noise:
$$l_t = \sqrt{\bar{\alpha}_t}\, l_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon, \quad d_t = \sqrt{\bar{\alpha}_t}\, d_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon \quad (3)$$
where $\epsilon$ is the random Gaussian noise sampled at step t.

The Reverse Process
In the training stage, under the condition of a 2D pose sequence $x \in \mathbb{R}^{N \times J \times 2}$, the contaminated bone length $l_t$ and direction $d_t$ from the forward process are concatenated. This combined input is then processed through the HSTDenoiser and a regression head, resulting in the denoised 3D joint locations $\tilde{y}_0$. We then apply our disentanglement strategy to decompose the bone length $\tilde{l}_0$ and bone direction $\tilde{d}_0$ for disentanglement supervision during training. At the inference stage, inspired by D3DP (Shan et al. 2023), we simultaneously sample H hypotheses from the Gaussian distribution as the initial noisy bone lengths and directions. They are then denoised through the trained denoiser, resulting in the denoised bone length $\tilde{l}_{0:H,0}$ and bone direction $\tilde{d}_{0:H,0}$, which are employed to generate the noisy samples $\tilde{l}_{0:H,t'}$ and $\tilde{d}_{0:H,t'}$ for the next iteration at step t′ via DDIM (Song, Meng, and Ermon 2020):
$$\tilde{l}_{0:H,t'} = \sqrt{\bar{\alpha}_{t'}}\, \tilde{l}_{0:H,0} + \sqrt{1-\bar{\alpha}_{t'}-\sigma_t^2} \cdot \epsilon_t^l + \sigma_t \epsilon$$
$$\tilde{d}_{0:H,t'} = \sqrt{\bar{\alpha}_{t'}}\, \tilde{d}_{0:H,0} + \sqrt{1-\bar{\alpha}_{t'}-\sigma_t^2} \cdot \epsilon_t^d + \sigma_t \epsilon \quad (4)$$
where $\epsilon_t^l = \frac{\tilde{l}_{0:H,t} - \sqrt{\bar{\alpha}_t}\, \tilde{l}_{0:H,0}}{\sqrt{1-\bar{\alpha}_t}}$ and $\epsilon_t^d = \frac{\tilde{d}_{0:H,t} - \sqrt{\bar{\alpha}_t}\, \tilde{d}_{0:H,0}}{\sqrt{1-\bar{\alpha}_t}}$ are the noise at step t, and $\sigma_t = \sqrt{(1-\bar{\alpha}_{t'})/(1-\bar{\alpha}_t)} \cdot \sqrt{1-\bar{\alpha}_t/\bar{\alpha}_{t'}}$ controls the stochasticity of the diffusion process. We can control the hypothesis number H and the number of iterations W in the whole process. Appropriately increasing H and W can refine the final prediction of the bone length and bone direction and improve the MPJPE and P-MPJPE performance in our experiments.
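For intuition, here is a minimal sketch of the forward noising in Eqs. (2)-(3) applied to the disentangled quantities. The alpha-bar curve uses one common cosine formulation; its exact shape is our assumption, since the text only names the schedule.

import numpy as np

T, s = 1000, 0.008
steps = np.arange(T + 1)
alpha_bar = np.cos((steps / T + s) / (1 + s) * np.pi / 2) ** 2
alpha_bar = alpha_bar / alpha_bar[0]   # normalize so alpha_bar[0] = 1

def q_sample(x0, t, rng):
    """Eq. (3): x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
l0 = np.abs(rng.standard_normal((4, 16, 1)))   # toy bone lengths
d0 = rng.standard_normal((4, 16, 3))           # toy bone directions
t = 250
lt, dt = q_sample(l0, t, rng), q_sample(d0, t, rng)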
Hierarchical Spatial and Temporal Denoiser
In both the training and inference phases, the noisy bone length and bone direction are fed into our HSTDenoiser to reconstruct the original data. HSTDenoiser, which consists of HRST and HRTT, is used to explore hierarchical information, specifically the relations among a joint, its parent joint, and its child joint. The main architecture is shown in Figure 2(b). We utilize a linear layer to enhance the input feature and use the spatial-temporal transformer block of MixSTE (Zhang et al. 2022a) to extract joint features. We also introduce the hierarchical spatial position embedding HSP for better spatial position modeling and the temporal embedding TP for better temporal position modeling. The HSP embedding contains not only the spatial position information of each joint but also the joint hierarchy information. Inspired by (Li et al. 2023), we split the joints into six hierarchies according to the joint's depth in the tree-like structure of the human body to build the hierarchical embedding, as shown in the left portion of Figure 1. This means that joints in the same hierarchy share the same embedding. Based on the hierarchical embedding, the hierarchical-related information can be well learned by our model. After one layer of spatio-temporal transformer modeling, we apply HRST and HRTT, introduced in the following sections, to model the spatio-temporal correlations of joints through d alternating loops.

Figure 3: The main components of our HSTDenoiser. (a): Hierarchical-Related Spatial Transformer (HRST). (b): Hierarchical-Related Temporal Transformer (HRTT).

Transformer
The transformer module we use in our approach follows (Vaswani et al. 2017). The basic idea of query, key, and value is that the query is matched against the key, and the degree of matching determines how much attention is selectively paid to the value. The attention mechanism can be formulated as:
$$\mathrm{Attention} = \mathrm{Softmax}(A) V, \quad A = \frac{Q K^{T}}{\sqrt{d_m}} \quad (5)$$
where $Q, K, V \in \mathbb{R}^{Z \times d_m}$ are generated from the input feature, Z is the number of tokens, and $d_m$ is the feature dimension. $A \in \mathbb{R}^{Z \times Z}$ denotes the attention weight matrix. The input of our transformer module is the noisy bone length and direction generated by the forward diffusion process. For better denoising, the 2D pose sequence is added as the condition and concatenated with the noisy 3D data as the whole input.

HRST
In HRST, we enhance the modeling of each joint's spatial information with its parent joint feature. Based on the forward kinematic structure, we define all the hierarchical-related joint triplets in the human body as {Jp, J, Jc}, where Jp, J, Jc are the sets of jp, j, jc. In each hierarchical-related joint triplet {jp, j, jc}, jp is the parent joint of joint j and jc is the child joint of joint j according to the forward kinematic structure. Because our method decomposes the location of a joint into bone length and direction, we believe the position of a joint is determined by the position of its parent joint combined with the bone length and direction, so the attention of the parent joint significantly affects the attention of the joint. Therefore, in HRST, we augment the parent joint's influence on each joint at the spatial level, as illustrated in Algorithm 1.

Algorithm 1: Hierarchical-Related Spatial Transformer
Input: Q, K, V generated from the joint feature $f \in \mathbb{R}^{N \times J \times C}$
Parameter: hierarchical-related joint triplets {Jp, J, Jc}
Output: hierarchical-related spatial attention map
1: $A = Q K^{T} / \sqrt{d_m}$
2: for jp, j, jc in Jp, J, Jc do
3:   A[j][jc] += A[jp][j]
4:   A[jc][j] += A[jp][j]
5:   A[j][jc] /= 2.0, A[jc][j] /= 2.0
6: end for
7: return A

After Algorithm 1, we obtain the weight matrix A and then compute the attention using Eq. (5); a minimal sketch of this procedure follows.
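The sketch below mirrors Algorithm 1: the spatial attention logits are modified so that each joint-child link also carries the parent-to-joint attention before the softmax. The example triplets are placeholders for the real kinematic triplets.

import numpy as np

def hrst_attention(Q, K, V, triplets, dm):
    """Apply Algorithm 1 to the (J, J) logits, then attend as in Eq. (5)."""
    A = Q @ K.T / np.sqrt(dm)
    for jp, j, jc in triplets:   # lines 2-6: average in parent attention
        A[j, jc] = (A[j, jc] + A[jp, j]) / 2.0
        A[jc, j] = (A[jc, j] + A[jp, j]) / 2.0
    A = np.exp(A - A.max(axis=-1, keepdims=True))   # row-wise softmax
    A = A / A.sum(axis=-1, keepdims=True)
    return A @ V

J, dm = 17, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((J, dm)) for _ in range(3))
triplets = [(0, 7, 8), (7, 8, 9)]   # hypothetical (parent, joint, child)
out = hrst_attention(Q, K, V, triplets, dm)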
HRTT
We propose HRTT to further introduce the interrelationship between each joint and its hierarchically adjacent joints in the temporal dimension. When exploring joint temporal information, we believe that, due to the tree-like structure of the human skeleton, there exists a strong temporal correlation among a joint, its parent joint, and its child joints. Because HRST has already enhanced the relation between a joint and its parent joint, the temporal relation between a joint and its child joints is the main focus of HRTT. Specifically, we primarily adopt a cross-attention mechanism to capture the relationship between the current joint and its child joints. According to the kinematic chain structure of human joints, we compute the average of a joint's features with those of its child joints as the separated child joint feature $f_c$. We use a residual architecture to build the attention weight matrix. In detail, we use the joint temporal feature f after HRST to generate Q, K, V, formulating the self-attention weight matrix $A_s = \frac{Q K^{T}}{\sqrt{d_m}}$. Concurrently, in the residual branch, we use the separated child joint feature $f_c$ to generate $K_c$, forming a cross-attention weight matrix with Q, defined as $A_c = \frac{Q K_c^{T}}{\sqrt{d_m}}$. Therefore, $A_c$ contains the cross-attention between a joint and its child joints in the temporal dimension. The shape of the temporal attention maps $A_s$ and $A_c$ is N × N, where N is the frame length of the input sequences. $A_s$ and $A_c$ together dictate the temporal attention focused on V, as shown in Eq. (6). Since $A_s$ contains the enhanced weight of both the joint and its parent joint feature, $A_c$ in the residual branch augments the relation between a joint and its child joints. Hence, HRTT effectively captures the relationships between hierarchically adjacent joints.
$$\mathrm{Attention}_{HRTT} = \mathrm{Softmax}(A_s + A_c) V \quad (6)$$

Loss Function
3D Disentanglement Loss. The 3D disentanglement loss is utilized to aid the model in learning the explicit priors during the forward diffusion process. Given the 3D ground truth pose sequence $y_0$ and the predicted 3D pose sequence $\tilde{y}_0$, we decompose $y_0$ into bone length $l_0$ and bone direction $d_0$. Similarly, we obtain the disentangled bone length prediction $\tilde{l}_0$ and bone direction prediction $\tilde{d}_0$. For the i-th bone, the lengths $l_0^i$, $\tilde{l}_0^i$ and directions $d_0^i$, $\tilde{d}_0^i$ are defined as:
$$l_0^i = \left\| y_0^{c_i} - y_0^{p_i} \right\|_2, \quad \tilde{l}_0^i = \left\| \tilde{y}_0^{c_i} - \tilde{y}_0^{p_i} \right\|_2$$
$$d_0^i = \frac{y_0^{c_i} - y_0^{p_i}}{\left\| y_0^{c_i} - y_0^{p_i} \right\|_2}, \quad \tilde{d}_0^i = \frac{\tilde{y}_0^{c_i} - \tilde{y}_0^{p_i}}{\left\| \tilde{y}_0^{c_i} - \tilde{y}_0^{p_i} \right\|_2} \quad (7)$$
where $c_i$ and $p_i$ are the child joint and parent joint of the i-th bone. The disentanglement loss used in our training stage is then defined as:
$$\mathcal{L}_l = \left\| \tilde{l}_0 - l_0 \right\|_2, \quad \mathcal{L}_d = \left\| \tilde{d}_0 - d_0 \right\|_2, \quad \mathcal{L}_{dis} = \mathcal{L}_l + \mathcal{L}_d \quad (8)$$

3D Pose Loss. During the model's training process, we also use a 3D pose loss to constrain the denoised 3D pose regressed by the model:
$$\mathcal{L}_{pos} = \left\| \tilde{y}_0 - y_0 \right\|_2 \quad (9)$$
Combining the 3D disentanglement loss and the 3D pose loss, the overall loss used to supervise our model is:
$$\mathcal{L} = \mathcal{L}_{dis} + \mathcal{L}_{pos} \quad (10)$$

Experiments
Dataset. Human3.6M (Ionescu et al. 2014) is widely used in the 3D HPE task. It contains 3.6 million 3D human poses and corresponding images with 11 professional actors, collected in 17 scenarios. Following previous work (Pavllo et al. 2019; Zheng et al. 2021; Zhang et al. 2022a), we use subjects S1, S5, S6, S7, and S8 for training and S9 and S11 for testing. MPI-INF-3DHP (Mehta et al. 2017) records 8 actors, composed of 4 males and 4 females, each undertaking 8 different sets of activities. We use the eight activities performed by the eight actors to train our model, while the test dataset covers seven different activities.

Metrics. We use the mean per joint position error (MPJPE) and the Procrustes mean per joint position error (P-MPJPE) for evaluation. MPJPE measures the Euclidean distance between the ground truth and the predicted 3D positions of each joint, while P-MPJPE applies a Procrustes analysis that scales, translates, and rotates the predicted pose to best align it with the ground truth, providing a fairer comparison. Following D3DP (Shan et al. 2023), we use J-AGG based MPJPE and P-MPJPE to evaluate our results; a minimal sketch of both metrics is given below.
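For reference, here is a minimal sketch of the two metrics for a single pose; P-MPJPE performs a similarity Procrustes alignment (scale, rotation, translation) before measuring the error. This is a generic implementation we supply for illustration, not the paper's code.

import numpy as np

def mpjpe(pred, gt):
    """Mean per joint position error; pred, gt: (J, 3) in millimetres."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def p_mpjpe(pred, gt):
    """Align pred to gt with a similarity transform, then compute MPJPE."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    X, Y = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(X.T @ Y)   # SVD of the cross-covariance
    if np.linalg.det(U @ Vt) < 0:       # avoid a reflection
        U[:, -1] *= -1
        S[-1] *= -1
    R = U @ Vt                           # optimal rotation
    scale = S.sum() / (X ** 2).sum()     # optimal scale
    return mpjpe(scale * X @ R + mu_g, gt)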
Deterministic methods: Disentangle-based model (Protocol #1: MPJPE)
Method | Dir. | Disc. | Eat | Greet | Phone | Photo | Pose | Pur. | Sit | SitD. | Smoke | Wait | WalkD. | Walk | WalkT. | Avg.
DKA (Xu et al. 2020) (N=9) | 37.4 | 43.5 | 42.7 | 42.7 | 46.6 | 59.7 | 41.3 | 45.1 | 52.7 | 60.2 | 45.8 | 43.1 | 47.7 | 33.7 | 37.1 | 45.6
Anatomy3D (Chen et al. 2021) (N=243) | 41.4 | 43.5 | 40.1 | 42.9 | 46.6 | 51.9 | 41.7 | 42.3 | 53.9 | 60.2 | 45.4 | 41.7 | 46.0 | 31.5 | 32.7 | 44.1
Virtual Bones (Wang et al. 2022) (N=243) | 42.4 | 43.5 | 41.0 | 43.5 | 46.7 | 54.6 | 42.5 | 42.1 | 54.9 | 60.5 | 45.7 | 42.1 | 46.5 | 31.7 | 33.7 | 44.8
Ours (N=243, H=1, W=1) | 37.3 | 40.0 | 35.2 | 37.7 | 41.1 | 46.7 | 38.4 | 38.4 | 52.2 | 53.3 | 41.4 | 38.9 | 38.8 | 27.6 | 27.7 | 39.7

Deterministic methods: Non-Disentangle based model (Protocol #1: MPJPE)
Method | Dir. | Disc. | Eat | Greet | Phone | Photo | Pose | Pur. | Sit | SitD. | Smoke | Wait | WalkD. | Walk | WalkT. | Avg.
VideoPose3D (Pavllo et al. 2019) (N=243) | 45.2 | 46.7 | 43.3 | 45.6 | 48.1 | 55.1 | 44.6 | 44.3 | 57.3 | 65.8 | 47.1 | 44.0 | 49.0 | 32.8 | 33.9 | 46.8
PoseFormer (Zheng et al. 2021) (N=81) | 41.5 | 44.8 | 39.8 | 42.5 | 46.5 | 51.6 | 42.1 | 42.0 | 53.3 | 60.7 | 45.5 | 43.3 | 46.1 | 31.8 | 32.2 | 44.3
P-STMO (Shan et al. 2022) (N=243) | 38.9 | 42.7 | 40.4 | 41.1 | 45.6 | 49.7 | 40.9 | 39.9 | 55.5 | 59.4 | 44.9 | 42.2 | 42.7 | 29.4 | 29.4 | 42.8
MixSTE (Zhang et al. 2022a) (N=243) | 37.6 | 40.9 | 37.3 | 39.7 | 42.3 | 49.9 | 40.1 | 39.8 | 51.7 | 55.0 | 42.1 | 39.8 | 41.0 | 27.9 | 27.9 | 40.9
PoseFormerV2 (Zhao et al. 2023) (N=243) | 41.3 | 45.5 | 41.5 | 44.0 | 46.7 | 53.8 | 42.6 | 42.6 | 55.2 | 64.6 | 45.7 | 42.9 | 45.8 | 32.3 | 32.9 | 45.2
STCFormer (Tang et al. 2023) (N=243) | 38.4 | 41.2 | 36.8 | 38.0 | 42.7 | 50.5 | 38.7 | 38.2 | 52.5 | 56.8 | 41.8 | 38.4 | 40.2 | 26.2 | 27.7 | 40.5
Ours (N=243, H=1, W=1) | 37.3 | 40.0 | 35.2 | 37.7 | 41.1 | 46.7 | 38.4 | 38.4 | 52.2 | 53.3 | 41.4 | 38.9 | 38.8 | 27.6 | 27.7 | 39.7

Probabilistic methods (Protocol #1: MPJPE)
Method | Dir. | Disc. | Eat | Greet | Phone | Photo | Pose | Pur. | Sit | SitD. | Smoke | Wait | WalkD. | Walk | WalkT. | Avg.
MHFormer (Li et al. 2022) (N=351, H=3) | 39.2 | 43.1 | 40.1 | 40.9 | 44.9 | 51.2 | 40.6 | 41.3 | 53.5 | 60.3 | 43.7 | 41.1 | 43.8 | 29.8 | 30.6 | 43.0
GFPose (Ci et al. 2023) (H=10) | 39.9 | 44.6 | 40.2 | 41.3 | 46.7 | 53.6 | 41.9 | 40.4 | 52.1 | 67.1 | 45.7 | 42.9 | 46.1 | 36.5 | 38.0 | 45.1
D3DP (Shan et al. 2023) (N=243, ∗) | 37.3 | 39.5 | 35.6 | 37.8 | 41.3 | 48.2 | 39.1 | 37.6 | 49.9 | 52.8 | 41.2 | 39.2 | 39.4 | 27.2 | 27.1 | 39.5
Ours (N=243, H=5, W=1) | 37.2 | 39.9 | 35.1 | 37.6 | 41.0 | 46.5 | 38.3 | 38.3 | 52.1 | 53.1 | 41.3 | 38.8 | 38.7 | 27.5 | 27.6 | 39.5
Ours (N=243, H=20, W=10) | 36.4 | 39.5 | 34.9 | 37.6 | 40.1 | 45.9 | 37.8 | 37.8 | 51.5 | 52.2 | 40.8 | 38.3 | 38.3 | 27.0 | 27.0 | 39.0

Table 1: Results on Human3.6M in millimeters under MPJPE. N, H, W: the number of input frames, hypotheses, and iterations used in the inference stage. In this table, we compare with the deterministic and probabilistic methods. The best results are highlighted in bold. (∗) For clarity, H=20, W=10 is omitted.

Method | MPJPE | P-MPJPE
DiffPose (Gong et al. 2023) (N=243, ♭) | 36.9 | 28.7
DiffPose♯ (Gong et al. 2023) (N=243, ♭) | 40.1 | 31.1
Ours (N=243, H=5, W=50) | 39.2 | 31.1

Table 2: Comparison with DiffPose (Gong et al. 2023) on Human3.6M. (♯) Stand-Diff implemented in DiffPose. (♭) For clarity, H=5, W=50 is omitted.

Comparison with State-of-the-art Methods
Results on Human3.6M. The results of our method on Human3.6M are presented in Table 1. We first compare our method with the SOTA deterministic 3D human pose estimation methods. Based on whether the regression of the 3D pose locations is decomposed into the regression of bone length and bone direction, we divide the methods into disentangle-based methods and non-disentangle based methods. For disentangle-based methods, we can see from the table that our method achieves the best MPJPE of 39.7mm, surpassing Anatomy3D (Chen et al. 2021) by 4.4mm (10.0%) in MPJPE.
For non-disentangle based methods, we improve over STCFormer (Tang et al. 2023) by 0.8mm (2.0%) under MPJPE. We then compare our method with probabilistic methods: our method reaches the SOTA MPJPE of 39.0mm, outperforming D3DP (Shan et al. 2023) by 0.5mm (1.3%). As for DiffPose (Gong et al. 2023), we compare with it separately in Table 2. Note that DiffPose additionally introduces the heatmaps derived from an off-the-shelf 2D pose detector and depth distributions to initialize the pose distribution. The probabilistic methods in Table 1 only use the 2D pose sequences. Thus, it might not be fair to directly compare with DiffPose. But according to DiffPose, the implementation of Stand-Diff only uses 2D pose sequences by reversing the 3D pose from a standard Gaussian noise, and it achieves a larger MPJPE error than our DDHPose under the same setting (40.1mm vs 39.2mm). The results demonstrate that our method can notably boost performance by 0.9mm through the Disentangle Strategy and the utilization of hierarchical relations.

Results on MPI-INF-3DHP. We also evaluate our method on the MPI-INF-3DHP dataset under the PCK, AUC, and MPJPE metrics. As shown in Table 3, our approach outperforms the SOTA method by 0.8 in PCK, 0.3 in AUC, and 1.0mm in MPJPE under the single-hypothesis condition.

Method | PCK↑ | AUC↑ | MPJPE↓
Anatomy3D (Chen et al. 2021) (N=243) | 87.8 | 53.8 | 79.1
PoseFormer (Zheng et al. 2021) (N=9) | 88.6 | 56.4 | 77.1
P-STMO (Shan et al. 2022) (N=81) | 97.9 | 75.8 | 32.2
MixSTE (Zhang et al. 2022a) (N=243) | 96.9 | 75.8 | 35.4
D3DP (Shan et al. 2023) (N=243, ♮) | 97.7 | 77.8 | 30.2
Ours (N=243, ♮) | 98.5 | 78.1 | 29.2

Table 3: Results on MPI-INF-3DHP under PCK, AUC, and MPJPE using the ground truth 2D pose as input. The best results are highlighted in bold. (♮) For clarity, H=1, W=1 is omitted.

Disentangled Input | Disentangled Output | MPJPE | P-MPJPE
✗ | ✗ | 40.23 | 31.56
✗ | ✓ | 41.72 | 33.06
✓ | ✗ | 39.65 | 31.24
✓ | ✓ | 40.48 | 32.08

Table 4: The impact of the disentanglement strategy. The setting with disentangled input and without disentangled output achieves the best result, highlighted in bold.

Ablation Study
In order to evaluate each design in our method, we conduct ablation experiments on the Human3.6M dataset using 2D pose sequences extracted by CPN.

Impact of Disentanglement Strategy. In this section, we separately compare the effects of the Disentangle Input and Disentangle Output strategies. With the Disentangle Input strategy, our method divides the dense and high-dimensional optimization problem into two low-dimensional sub-problems, simplifying the learning of the human pose prior. As shown in the left portion of Figure 4, employing the Disentangle Input strategy results in faster convergence and lower training 3D pose loss compared to not using it in the initial training epochs. This leads to improved quantitative results (39.65mm vs 40.23mm), as highlighted in Table 4. For Disentangle Output, the denoiser in the reverse process directly regresses bone length and direction, generating the 3D pose using $C = C_p + l \cdot d$, where C and $C_p$ are the joint and parent joint coordinates, and l, d represent the predicted bone length and direction. This equation indicates that a joint's coordinate depends not only on its own bone properties but also on all parent joints along the bone chain. As illustrated in the right portion of Figure 4, hierarchy 1 exhibits lower errors in the Disentangled Output setting, while higher hierarchical levels accumulate more errors than without Disentangled Output.
Quantitative results in Table 4 show that employing the Disentangle Output strategy increases the MPJPE from 40.23mm to 41.72mm.

Figure 4: Left: Training Loss Comparison (w/o Disentangle Output). Right: Hierarchical Error Comparison (w/o Disentangle Input).

Effect of Each Module. As shown in Table 5, our method can be divided into three modules: hierarchical embedding, HRST, and HRTT. In our experiment, we sequentially add the modules to the baseline, which uses none of the three modules, to verify the effectiveness of each module. For simplicity, we set both H and W to 1. The results show that hierarchical embedding provides a slight improvement over the baseline. Adding HRST improves the MPJPE from 40.08mm to 39.68mm and the P-MPJPE from 31.83mm to 31.55mm. Further integrating HRTT refines the MPJPE from 39.68mm to 39.65mm and the P-MPJPE from 31.55mm to 31.24mm. The results suggest that information from the parent joint influences the regression of the joint itself, which assists the model in learning the joint's spatial information, and that the shared information between parent and child joints aids the model in inferring the temporal features of the joint.

baseline | Hierarchical embedding | HRST | HRTT | MPJPE | P-MPJPE
✓ | | | | 40.10 | 31.94
✓ | ✓ | | | 40.08 | 31.83
✓ | ✓ | ✓ | | 39.68 | 31.55
✓ | ✓ | ✓ | ✓ | 39.65 | 31.24

Table 5: Effect of each module in our experiments on the Human3.6M dataset. The best results are highlighted in bold.

Effect of Loss Function. We employ the 3D pose loss to constrain the denoised 3D pose regressed by our model and utilize the 3D disentanglement loss to aid the model in learning the explicit human body prior during the forward diffusion process. The contribution of each loss term is shown in Table 6. The results show that using the 3D disentanglement loss is essential for a better result: with it, MPJPE and P-MPJPE improve by 0.22mm and 0.70mm, respectively.

Loss | MPJPE | P-MPJPE
3D pos loss | 39.87 | 31.94
3D pos loss + 3D dis loss | 39.65 | 31.24

Table 6: Ablation study for the loss functions proposed in our method. The best results are highlighted in bold.

Conclusion
In this paper, we propose DDHPose, a diffusion-based 3D HPE method that introduces hierarchical information in two ways: (1) We propose the Disentangle Strategy for the forward diffusion process, which decomposes the 3D pose into bone length and direction based on the hierarchical information. This simplifies learning the human pose prior, reduces the optimization dimension, and speeds up gradient descent. (2) We propose the HSTDenoiser to strengthen the relations among hierarchical joints by enhancing the attention weight of adjacent joints for each joint in the reverse diffusion process. Extensive results on Human3.6M and MPI-INF-3DHP reveal that our method surpasses the disentangle-based methods, non-disentangle based methods, and probabilistic approaches on 3D HPE benchmarks.

Acknowledgments
This work is jointly supported by the National Natural Science Foundation of China (62276025, 62206022), the Beijing Municipal Science & Technology Commission (Z231100007423015), and the Shenzhen Technology Plan Program (KQTD20170331093217368).

References
Baranchuk, D.; Rubachev, I.; Voynov, A.; Khrulkov, V.; and Babenko, A. 2021. Label-efficient semantic segmentation with diffusion models. arXiv preprint arXiv:2112.03126.
Batzolis, G.; Stanczuk, J.; Schönlieb, C.-B.; and Etmann, C. 2021. Conditional image generation with score-based diffusion models. arXiv preprint arXiv:2111.13606.
Cao, Z.; Simon, T.; Wei, S.-E.; and Sheikh, Y. 2017. Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. In CVPR.
Chen, T.; Fang, C.; Shen, X.; Zhu, Y.; Chen, Z.; and Luo, J. 2021. Anatomy-aware 3d human pose estimation with bone-based pose decomposition. IEEE Transactions on Circuits and Systems for Video Technology, 32(1): 198–209.
Chen, Y.; Wang, Z.; Peng, Y.; Zhang, Z.; Yu, G.; and Sun, J. 2018. Cascaded pyramid network for multi-person pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7103–7112.
Choi, J.; Shim, D.; and Kim, H. J. 2022. Diffupose: Monocular 3d human pose estimation via denoising diffusion probabilistic model. arXiv preprint arXiv:2212.02796.
Ci, H.; Wu, M.; Zhu, W.; Ma, X.; Dong, H.; Zhong, F.; and Wang, Y. 2023. Gfpose: Learning 3d human pose prior with gradient fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4800–4810.
Fan, W.-C.; Chen, Y.-C.; Chen, D.; Cheng, Y.; Yuan, L.; and Wang, Y.-C. F. 2023. Frido: Feature pyramid diffusion for complex scene image synthesis. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 579–587.
Fang, H.-S.; Xie, S.; Tai, Y.-W.; and Lu, C. 2017. RMPE: Regional Multi-person Pose Estimation. In ICCV.
Gong, J.; Foo, L. G.; Fan, Z.; Ke, Q.; Rahmani, H.; and Liu, J. 2023. Diffpose: Toward more reliable 3d pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13041–13051.
Hagbi, N.; Bergig, O.; El-Sana, J.; and Billinghurst, M. 2010. Shape recognition and pose estimation for mobile augmented reality. IEEE Transactions on Visualization and Computer Graphics, 17(10): 1369–1379.
Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33: 6840–6851.
Ho, J.; Saharia, C.; Chan, W.; Fleet, D. J.; Norouzi, M.; and Salimans, T. 2022. Cascaded diffusion models for high fidelity image generation. The Journal of Machine Learning Research, 23(1): 2249–2281.
Holmquist, K.; and Wandt, B. 2022. Diffpose: Multi-hypothesis human pose estimation using diffusion models. arXiv preprint arXiv:2211.16487.
Ionescu, C.; Papava, D.; Olaru, V.; and Sminchisescu, C. 2014. Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(7): 1325–1339.
Kisacanin, B.; Pavlovic, V.; and Huang, T. S. 2005. Real-time vision for human-computer interaction. Springer Science & Business Media.
Li, H.; Shi, B.; Dai, W.; Zheng, H.; Wang, B.; Sun, Y.; Guo, M.; Li, C.; Zou, J.; and Xiong, H. 2023. Pose-Oriented Transformer with Uncertainty-Guided Refinement for 2D-to-3D Human Pose Estimation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 1296–1304.
Li, W.; Liu, H.; Tang, H.; Wang, P.; and Van Gool, L. 2022. Mhformer: Multi-hypothesis transformer for 3d human pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13147–13156.
Mehta, D.; Rhodin, H.; Casas, D.; Fua, P.; Sotnychenko, O.; Xu, W.; and Theobalt, C. 2017. Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision. In 3D Vision (3DV), 2017 Fifth International Conference on. IEEE.
Nichol, A.; Dhariwal, P.; Ramesh, A.; Shyam, P.; Mishkin, P.; McGrew, B.; Sutskever, I.; and Chen, M. 2021. Glide: Towards photorealistic image generation and editing with text-guided diffusion models.
arXiv preprint arXiv:2112.10741.
Pavlakos, G.; Zhou, X.; Derpanis, K. G.; and Daniilidis, K. 2017. Coarse-to-fine volumetric prediction for single-image 3D human pose. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7025–7034.
Pavllo, D.; Feichtenhofer, C.; Grangier, D.; and Auli, M. 2019. 3D human pose estimation in video with temporal convolutions and semi-supervised training. In Conference on Computer Vision and Pattern Recognition (CVPR).
Saharia, C.; Ho, J.; Chan, W.; Salimans, T.; Fleet, D. J.; and Norouzi, M. 2022. Image super-resolution via iterative refinement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4): 4713–4726.
Shan, W.; Liu, Z.; Zhang, X.; Wang, S.; Ma, S.; and Gao, W. 2022. P-stmo: Pre-trained spatial temporal many-to-one model for 3d human pose estimation. In European Conference on Computer Vision, 461–478. Springer.
Shan, W.; Liu, Z.; Zhang, X.; Wang, Z.; Han, K.; Wang, S.; Ma, S.; and Gao, W. 2023. Diffusion-Based 3D Human Pose Estimation with Multi-Hypothesis Aggregation. arXiv preprint arXiv:2303.11579.
Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; and Ganguli, S. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, 2256–2265. PMLR.
Song, J.; Meng, C.; and Ermon, S. 2020. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502.
Song, Y.; and Ermon, S. 2020. Improved techniques for training score-based generative models. Advances in Neural Information Processing Systems, 33: 12438–12448.
Sun, X.; Xiao, B.; Wei, F.; Liang, S.; and Wei, Y. 2018. Integral human pose regression. In Proceedings of the European Conference on Computer Vision (ECCV), 529–545.
Tang, Z.; Qiu, Z.; Hao, Y.; Hong, R.; and Yao, T. 2023. 3D Human Pose Estimation With Spatio-Temporal Criss-Cross Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4790–4799.
Tekin, B.; Rozantsev, A.; Lepetit, V.; and Fua, P. 2016. Direct prediction of 3d body poses from motion compensated sequences. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 991–1000.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.; Kaiser, L.; and Polosukhin, I. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems.
Wang, G.; Zeng, H.; Wang, Z.; Liu, Z.; and Wang, H. 2022. Motion Projection Consistency Based 3D Human Pose Estimation with Virtual Bones from Monocular Videos. IEEE Transactions on Cognitive and Developmental Systems.
Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al. 2020. Deep high-resolution representation learning for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(10): 3349–3364.
Xu, J.; Yu, Z.; Ni, B.; Yang, J.; Yang, X.; and Zhang, W. 2020. Deep kinematics analysis for monocular 3d human pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 899–908.
Zhang, J.; Tu, Z.; Yang, J.; Chen, Y.; and Yuan, J. 2022a. MixSTE: Seq2seq Mixed Spatio-Temporal Encoder for 3D Human Pose Estimation in Video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 13232–13242.
Zhang, J.; Ye, G.; Tu, Z.; Qin, Y.; Qin, Q.; Zhang, J.; and Liu, J. 2022b.
A spatial attentive and temporal dilated (SATD) GCN for skeleton-based action recognition. CAAI Transactions on Intelligence Technology, 7(1): 46–55.
Zhao, Q.; Zheng, C.; Liu, M.; Wang, P.; and Chen, C. 2023. PoseFormerV2: Exploring Frequency Domain for Efficient and Robust 3D Human Pose Estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8877–8886.
Zheng, C.; Zhu, S.; Mendieta, M.; Yang, T.; Chen, C.; and Ding, Z. 2021. 3d human pose estimation with spatial and temporal transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 11656–11665.
Graph Contrastive Invariant Learning from the Causal Perspective
Yanhu Mo1, Xiao Wang2*, Shaohua Fan3,4, Chuan Shi1*
1Beijing University of Posts and Telecommunications 2Beihang University 3Tsinghua University 4Key Laboratory of Big Data Artificial Intelligence in Transportation, Ministry of Education (Beijing Jiaotong University)
{moyanhu, shichuan}@bupt.edu.cn, xiao [email protected], [email protected]
*Corresponding Author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Graph contrastive learning (GCL), learning the node representation by contrasting two augmented graphs in a self-supervised way, has attracted considerable attention. GCL is usually believed to learn the invariant representation. However, does this understanding always hold in practice? In this paper, we first study GCL from the perspective of causality. By analyzing GCL with the structural causal model (SCM), we discover that traditional GCL may not well learn the invariant representations due to the non-causal information contained in the graph. How can we fix it and encourage the current GCL to learn better invariant representations? The SCM offers two requirements and motivates us to propose a novel GCL method. Particularly, we introduce the spectral graph augmentation to simulate the intervention upon non-causal factors. Then we design the invariance objective and independence objective to better capture the causal factors. Specifically, (i) the invariance objective encourages the encoder to capture the invariant information contained in causal variables, and (ii) the independence objective aims to reduce the influence of confounders on the causal variables. Experimental results demonstrate the effectiveness of our approach on node classification tasks.

1 Introduction
Graph Neural Networks (GNNs) learn node representations by aggregating information from neighborhoods; they have received a great deal of attention and achieved competitive performance on various tasks over the past few years (Kipf and Welling 2017; Velickovic et al. 2018; Hamilton, Ying, and Leskovec 2017). Despite this great success, most GNNs are trained with node labels, while it is well known that manual annotations are expensive and hard to collect; therefore, self-supervised learning (SSL) has gained popularity due to its competitive performance and label-free setting (Chen et al. 2020; He et al. 2020). Graph Contrastive Learning (GCL), as one of the most successful strategies for self-supervised representation learning on graphs, has shown state-of-the-art performance on many downstream tasks (Velickovic et al. 2019; Zhu et al. 2020; Qiu et al. 2020). The typical GCL method mainly includes three parts: graph augmentation, encoding architecture, and contrastive loss. Most existing GCL methods learn the representations by comparing the augmentations. First, GCL generates two augmented graphs from the original graph based on some augmentation strategy, e.g., dropping edges (Velickovic et al. 2019). Then the two augmented graphs are fed into the encoding architecture (e.g., GCN (Kipf and Welling 2017)) to learn the node representations. Finally, a contrastive loss (e.g., InfoNCE (Zhu et al. 2020)) is used to train the GCL model by making the representations of positive pairs in the two augmented graphs similar and the representations of negative pairs dissimilar.
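To make the typical pipeline concrete, the following is a minimal sketch of a GRACE-style InfoNCE loss: for each node, its embeddings in the two views form the positive pair, and all other nodes in both views serve as negatives. Shapes and names are our own illustration.

import torch
import torch.nn.functional as F

def infonce(z1, z2, tau=0.5):
    """z1, z2: (num_nodes, dim) embeddings of the two augmented graphs."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    between = torch.exp(z1 @ z2.T / tau)   # cross-view similarities
    within = torch.exp(z1 @ z1.T / tau)    # intra-view similarities
    pos = between.diag()                   # positive pairs (same node)
    denom = between.sum(1) + within.sum(1) - within.diag()
    return -torch.log(pos / denom).mean()

z1, z2 = torch.randn(8, 16), torch.randn(8, 16)
loss = (infonce(z1, z2) + infonce(z2, z1)) / 2   # symmetrized objective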
It is believed that GCL is able to learn invariant representations by contrasting the positive and negative pairs (Zhu et al. 2021; Liu et al. 2022a). The learned invariant representations will be beneficial for downstream tasks. Generally, the question we want to ask is: does GCL always possess this invariant representation ability in practice? When will it fail, and how can this ability be enhanced? A well-informed answer can provide a deeper understanding of the learning mechanism of GCL, identify the weaknesses of GCL, and motivate more powerful GCL methods.

The invariant representation usually captures the essential information, which can also be considered a kind of causal variable in a graph (Arjovsky et al. 2019; Wu et al. 2022; Fan et al. 2023a). This natural and intuitive connection inspires us to explore GCL from the causal perspective. We start with a causal analysis of GCL (more details are in Section 3) based on the structural causal model (SCM). The SCM indicates that if a graph contains both causal and non-causal factors, GCL is able to learn the invariant causal factors only when the causal factors are the same, and the non-causal factors differ, across the original and the two augmented graphs. However, as the graph structure is changed by graph augmentation strategies, e.g., random augmentation, it is very hard to guarantee these conditions, which ultimately weakens the invariant representation learning ability.

Once the weakness is identified, a natural question is how we can fix it and improve the invariant learning ability. Again, the SCM offers the following two requirements for GCL. The first concerns the augmentation mechanism: a better graph augmentation strategy should take the causal and non-causal factors into account. Without distinguishing the two factors, the representations obtained by GCL methods may contain both causal and non-causal information, which might weaken the performance on downstream tasks. The second concerns the learning mechanism: given causal and non-causal factors, how can we ensure that GCL learns the causal variables? This is extremely challenging because there is usually no prior knowledge of labels, etc. Moreover, even different causal variables may be statistically dependent due to the backdoor path caused by the confounders.

In this paper, we propose a novel graph contrastive learning method from the standpoint of causality, called GCIL (Graph Contrastive Invariant Learning; code available at https://github.com/BUPT-GAMMA/GCIL). Specifically, we first elaborate on the structural causal model (SCM) to describe the graph generation process. According to the SCM, we generate two views from the perspective of the graph spectrum, which simulates perturbing the non-causal factors while keeping the causal contents unchanged. We assume each dimension in the representations follows a Gaussian distribution, and then propose an invariance objective to guarantee that the representations generated from the two views maintain the same mean and standard deviation for each dimension. Thereby we can extract the causal information from the original graph. Finally, we utilize an independence module to push the causal factors to be mutually independent, so that none of them influences the others through a backdoor path.
Our contributions are summarized as follows:
• We study graph contrastive learning from the perspective of causality, and point out that existing methods may not learn the invariant representations due to the non-causal information contained in the graph.
• Based on the theory of causality, we propose a novel graph contrastive learning method that aims to learn an invariant representation by extracting the causal information contained in the original graph.
• We validate the effectiveness of GCIL compared with state-of-the-art methods on four datasets, and our method outperforms both semi-supervised and self-supervised baselines.

Figure 1: SCM of the graph generation process. The dashed circle and solid circle represent unobserved and observed variables, respectively.

2 Related Work
Graph Neural Networks. Graph Neural Networks (GNNs) have recently shown outstanding performance in various tasks. For example, GCN (Kipf and Welling 2017) averages the information of one-hop neighbors. GAT (Velickovic et al. 2018) assigns different weights to different neighbors. GraphSAGE (Hamilton, Ying, and Leskovec 2017) aggregates a subset of neighbors using various pooling methods. Bruna et al. (2014) use a Fourier basis to decompose graph signals; ChebNet (Defferrard, Bresson, and Vandergheynst 2016) improves efficiency with a Chebyshev expansion of the graph Laplacian. Recent surveys (Wu et al. 2020; Zhou et al. 2020) provide a more complete review.

Graph Contrastive Learning. Self-supervised representation learning has attracted considerable attention in computer vision (Oord, Li, and Vinyals 2018; Chen et al. 2020; He et al. 2020). Motivated by local-global mutual information maximization, DGI (Velickovic et al. 2019) contrasts local node embeddings with a global summary vector. MVGRL (Hassani and Khasahmadi 2020) employs diffusion or distance matrices to handle contrastive tasks. GRACE (Zhu et al. 2020), GraphCL (You et al. 2020), and GCA (Zhu et al. 2021) obtain two views by perturbing the original graph and then learn representations using the InfoNCE loss. ProGCL (Xia et al. 2022) uses an EM algorithm to sample more appropriate hard negatives for learning node embeddings. CCA-SSG (Zhang et al. 2021) optimizes a feature-level objective rather than discriminating positive and negative samples. Recent advances in graph contrastive learning have been summarized in several surveys (Liu et al. 2022b; Xie et al. 2022).

Causality in Graph Neural Networks. Causality studies the relationships between variables (Pearl, Glymour, and Jewell 2016; Pearl 2009) and has shown many benefits in deep learning. With the help of causality, many methods have achieved great success in various computer vision tasks (Zhang et al. 2020; Mitrovic et al. 2020). There is also some research on graphs. For example, DIR-GNN (Wu et al. 2022) conducts an intervention on the non-causal part when generating representations to discover the rationale of the graph. DisC (Fan et al. 2022a) disentangles the graph into causal and bias subgraphs, thus alleviating the bias in the datasets. RGCL (Li et al. 2022) brings an invariance perspective to self-supervised learning and proposes a method to preserve stable semantic information. Fan et al. (2022b) explore agnostic label selection bias in GNNs. CIGA (Chen et al. 2022) guarantees OOD generalization across distribution shifts by capturing the invariance of graphs. GraphNOTEARS (Fan et al.
2023b) studies the associated mechanism for generating node features in dynamic graph data. Different from them, we study the self-supervised node classification task from the standpoint of causality and propose a new graph contrastive learning loss based on the theory of causality.

3 Causal Analysis on GCL
Notations and Framework
Notations. Let $G = (V, E)$ denote a graph, where $V$ is the set of nodes with $|V| = N$ and $E$ is the set of edges. Each graph $G$ has an adjacency matrix $A \in \{0, 1\}^{N \times N}$, where $A_{ij}$ denotes the relation between two nodes, i.e., $A_{ij} = 1$ iff there is an edge between $v_i$ and $v_j$. Graph $G$ is often assigned a node feature matrix $X = [x_1, x_2, \ldots, x_N]^\top \in \mathbb{R}^{N \times F}$, where $x_i$ is the feature vector of node $v_i$. The goal of GCL is to learn an optimal GNN encoder that acquires node representations without requiring any label information.

Graph Contrastive Learning (GCL). Given graph $G$, the pipeline of GCL is to generate two graph views $V_A, V_B$ with graph augmentation generators $\tau_A, \tau_B$ as follows:
$$V_A = \tau_A(G, X), \quad V_B = \tau_B(G, X). \quad (1)$$
Then, the two graph views are fed into a shared GNN encoder $g(\cdot)$ to generate the node representations $Z^A, Z^B$, respectively:
$$Z^A = g(V_A), \quad Z^B = g(V_B), \quad (2)$$
where $Z^A = [z^A_1, z^A_2, \ldots, z^A_N]^\top \in \mathbb{R}^{N \times d}$ and $Z^B = [z^B_1, z^B_2, \ldots, z^B_N]^\top \in \mathbb{R}^{N \times d}$, and $d$ denotes the dimension of the representations. We employ a contrastive loss (Velickovic et al. 2019; Chen et al. 2020) to optimize the encoder $g(\cdot)$, pushing it to construct the invariant representations. Once the encoder $g(\cdot)$ is well trained, we finally obtain the node representations $Z = g(G, X)$.

Causal Interpretation
Before analyzing graph contrastive learning from a causal view, we first elaborate on the structural causal model (SCM) to describe the graph generation process, based on the following assumptions: (1) The original graph $G$ can be decomposed into a set of causal variables $C$ and a set of non-causal variables $S$. (2) Only $C$ causally influences both the input graph $G$ and downstream tasks, and $S$ does not provide any information about the downstream tasks. (3) There is no causal relationship between $S$ and $C$, i.e., the generation process of $C$ is independent of $S$. Based on these assumptions, the SCM for the node classification task can be depicted as in Figure 1, where $Y$ represents the node labels. A dashed circle in the SCM denotes an unobserved latent variable and a solid circle an observed variable. Please note that the causal variables in $C$ may also be statistically dependent, so when we estimate the causal effect of one causal variable (marked by gray nodes), the other causal variables are considered confounders (marked by red nodes). The relations between variables are described as follows:
• $C \to G \leftarrow S$. The observed node data is generated by two unobserved latent variables: the causal variables $C$ and the non-causal variables $S$.
• $C \to Y$. This link means the causal part $C$ contains all the information necessary for the node classification task.
• $G ⇠⇢ Y$. This dashed link indicates a statistical dependence between $G$ and $Y$. Since the raw graph contains both kinds of information, a statistical dependence between $G$ and $Y$ arises if we do not distinguish the latent variables $C$ and $S$.
• Common causes between variables are called confounders. In the SCM, a confounder provides a backdoor path to the causal variables, contributing to the correlation between different causal variables.

Now we analyze GCL based on the above SCM.
In GCL, the augmented graph $V_A$ will contain causal variables $C_A$ and non-causal variables $S_A$; likewise, $V_B$ will contain $C_B$ and $S_B$. Ideally, GCL is able to capture the invariant information $C$ well only when $C_A = C_B = C$ and $S_A \neq S_B$. However, take the widely used graph augmentation strategy of random augmentation (Zhu et al. 2020) as an example: because random augmentation does not distinguish $C$ and $S$, it is very hard to guarantee that $C_A = C_B = C$ and $S_A \neq S_B$ still hold after augmentation. As a result, the learned final representation $Z$ will contain both $C$ and $S$, and the predicted label $Y'$ will be produced as follows:
$$h(Z) = h(C, S) = Y', \quad (3)$$
where $h$ denotes a classification head. Apparently, because the representation contains non-causal information, the prediction might change when the non-causal part shifts, i.e.:
$$h(C, S) \neq h(C, S'). \quad (4)$$
This is unreasonable, because the two representations contain the same causal information, yet their predictions differ. Therefore, when generating augmentations we should perturb the non-causal factors $S$ while keeping the causal factors $C$ unchanged, which can be regarded as conducting an intervention on $S$, and meanwhile ensure the consistency of causal information between different augmentations. Thus, the model should satisfy the following equation:
$$P^{do(S=s_i)}(Y \mid C) = P^{do(S=s_j)}(Y \mid C), \quad (5)$$
where $do(S = s)$ represents the intervention on the non-causal factors $S$. This formulation encourages the model to extract only the information contained in the causal factors $C$ and discard trivial information. Moreover, as shown in Figure 1, the causal variables contained in $C$ might be influenced by confounders (i.e., $c_1 \leftarrow \text{Confounder} \rightarrow c_2$, where $c_1, c_2$ denote two causal variables). The confounders lead to correlation between variables, indicating that different causal variables will encode similar information. To obtain a more informative representation, it is essential to eliminate the influence of the confounders, which amounts to making the causal variables mutually independent, i.e.,
$$\forall c_i, c_j \in C, \quad c_i \perp\!\!\!\perp c_j, \quad (6)$$
where $c_i$ and $c_j$ denote two different causal variables. This formulation implies that different variables in $C$ are mutually independent, effectively mitigating the influence of confounders.

4 The Proposed Model: GCIL
In this section, we illustrate our proposed graph contrastive learning algorithm inspired by causality. The overall framework is shown in Figure 2. We initially generate two views through spectral graph augmentation and random augmentation, which can be seen as causal interventions. The two views are subsequently fed into a shared GNN encoder to generate node representations. Finally, we propose two dimension-level contrastive objectives: the invariance objective pushes the encoder to capture the invariant information contained in causal variables, while the independence objective encourages the different dimensions of the representations to be independent. Thus, we can obtain the invariant representations from the well-trained model.

Figure 2: Overview of the GCIL framework. Given an original graph G, we first generate two views by spectral and random augmentation. The two views are subsequently fed into a shared GNN encoder to generate representations. At last, we optimize the invariance objective and the independence objective to make the model learn the invariant representations.
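Before detailing each component, the data flow of Figure 2 can be summarized by a short training-step sketch. Everything here is a schematic under simplifying assumptions: TinyGCN and the two stub objectives passed in at the bottom are hypothetical placeholders for the encoder and for the invariance/independence objectives derived in the following subsections.

```python
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    """Placeholder shared encoder: one propagation step plus a linear map."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
    def forward(self, adj, x):
        return torch.relu(adj @ self.lin(x))

def gcil_step(encoder, opt, views, feats, inv_loss, indep_loss,
              alpha=1.0, gamma=1e-3):
    """One update of the Figure-2 pipeline: encode two augmented views with
    shared weights, then combine the invariance and independence objectives."""
    adj_a, adj_b = views                     # spectral / random augmentations
    za, zb = encoder(adj_a, feats), encoder(adj_b, feats)
    loss = alpha * inv_loss(za, zb) + gamma * (indep_loss(za) + indep_loss(zb))
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)

# toy usage with stub objectives (the real ones follow Eqs. 10-14)
N, D = 6, 4
enc = TinyGCN(D, 8)
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
x, adj = torch.randn(N, D), torch.eye(N)
inv = lambda a, b: ((a - b) ** 2).mean()
indep = lambda z: (torch.cov(z.t()) ** 2).sum() - (torch.cov(z.t()).diagonal() ** 2).sum()
print(gcil_step(enc, opt, (adj, adj), x, inv, indep))
```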
Causal Intervention
Based on the above analysis, we need to generate graph augmentations that meet the following conditions: perturbing the non-causal information while maintaining the causal information. This is very challenging, however, because graph G does not provide any prior knowledge about C and S. Following (Liu et al. 2022a), perturbing the structure of the initial graph changes the strength of frequencies in the graph spectrum. A general graph augmentation should satisfy the following: in two contrasted augmentations, the difference in high-frequency amplitudes should be greater than the difference in low-frequency amplitudes. In other words, the lowest-frequency information can be approximately regarded as an invariant pattern between the two views. We consider the low-frequency information in the graph as causal content and the high-frequency information as non-causal content. Therefore, we conduct the intervention on S by disrupting the high-frequency information while leaving the low-frequency information unchanged. In particular, given an original adjacency matrix $A$, our goal is to obtain an adjacency matrix $A'$ by intervention, i.e., $A' = A + \Delta_{A^+} - \Delta_{A^-}$, where $\Delta_{A^+}$ and $\Delta_{A^-}$ indicate which edges are added and deleted, respectively. $\Delta_{A^+}$ is obtained by maximizing the following objective function:
$$J = \langle \Theta L, \Delta_{A^+} \rangle^2 + \epsilon H(\Delta_{A^+}) + \langle f, \Delta_{A^+}\mathbf{1}_n - a \rangle + \langle g, \Delta_{A^+}^\top \mathbf{1}_n - b \rangle, \quad (7)$$
where $\Theta$ is a parameter updated in training and $L$ is the Laplacian matrix of $G$; $\epsilon$ is a weight parameter, $f$ and $g$ are Lagrange multipliers, and $a, b$ are the node degree distributions. $H(\cdot)$ represents the entropy regularization, i.e., $H(P) = -\sum_{i,j} P_{i,j}(\log(P_{i,j}) - 1)$. The calculation of $\Delta_{A^-}$ follows (Liu et al. 2022a). We then corrupt the two graphs with random data augmentation as follows:
$$V_A = \tau_A(A, X), \quad V_B = \tau_B(A', X), \quad \tau_A, \tau_B \in T, \quad (8)$$
where $T$ represents the whole augmentation function space.

Invariance Objective
Now, based on the analysis in Section 3, we require that the relationship between $V_A$ and $V_B$ in Eq. 8 satisfies Eq. 5, so that the model can learn invariant representations. However, GCL does not provide any information about the labels $Y$. To achieve the goal of Eq. 5, the equation can be reformulated as follows:
$$CE(C, S = s_i) = CE(C, S = s_j), \quad (9)$$
where $CE$ represents the causal effect of the variables. That is, we need to capture the consistency between the node representations $Z^A, Z^B \in \mathbb{R}^{N \times d}$ obtained by Eq. 2. We assume that each dimension in the representation follows a Gaussian distribution, and we propose an invariance objective that encourages the representations to remain unchanged dimension-wise, i.e., by enforcing the consistency of the mean and standard deviation of each dimension of the representations. Formally, the learning objective can be formulated as follows:
$$\min_g \sum_i \| Z^A_i - Z^B_i \|_2^2, \quad \text{s.t. } \mathrm{Std}(Z^A_i) = \mathrm{Std}(Z^B_i) = \lambda, \quad (10)$$
where $Z^A_i, Z^B_i$ denote the $i$-th dimension of the two embedding matrices, respectively, and Std represents the standard deviation. The first term encourages the means of the two embedding matrices to be equal in the same dimension, and the second term pushes the standard deviation close to $\lambda$, where $\lambda$ is a hyper-parameter.

Independence Objective
According to the SCM, different causal variables might be correlated due to the confounders, leading to less informative representations generated by the model. To mitigate this issue, we propose an independence objective to satisfy Eq. 6.
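Before elaborating on the independence objective, the invariance objective of Eq. 10 just introduced can be sketched directly. The following minimal PyTorch fragment is an assumption-laden sketch, not the released GCIL code: it relaxes the standard-deviation constraint into a penalty, whereas the paper's own relaxation via normalization appears under "Optimization Objective" below.

```python
import torch

def invariance_loss(za: torch.Tensor, zb: torch.Tensor, lam: float = 1.0):
    """Penalty form of Eq. (10): align each dimension of the two view
    embeddings and push every dimension's standard deviation toward lam."""
    mean_term = ((za - zb) ** 2).sum(dim=0).sum()   # sum_i ||Z_i^A - Z_i^B||_2^2
    std_term = (((za.std(dim=0) - lam) ** 2).sum()
                + ((zb.std(dim=0) - lam) ** 2).sum())  # relaxed constraint
    return mean_term + std_term

za, zb = torch.randn(100, 16), torch.randn(100, 16)
print(invariance_loss(za, zb))
```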
The independence objective seeks to force the different causal variables to be mutually independent, thereby eliminating the correlation between causal variables. Specifically, we use the Hilbert-Schmidt Independence Criterion (HSIC) to measure the independence between factors; an HSIC value of 0 indicates that the two variables are independent. Minimizing the following quantity encourages different dimensions of the representation $Z$ to be independent:
$$\sum_{i \neq j} \mathrm{HSIC}(Z_i, Z_j) = \sum_{i \neq j} \frac{1}{(N-1)^2} \mathrm{Tr}(K_i H K_j H), \quad (11)$$
where $Z_i$ is the $i$-th dimension of the embedding matrix, $H$ represents the centering matrix $I - \frac{1}{N}\mathbf{1}\mathbf{1}^\top$, and $K_i, K_j$ are the kernel matrices of $Z_i, Z_j$, respectively. Here, a kernel matrix $K_i \in \mathbb{R}^{N \times N}$ collects the kernel function values between the samples of the $i$-th variable: $K_{i}{}_{a,b} = \kappa(Z_{i,a}, Z_{i,b})$, where $\kappa$ denotes the kernel function, $K_{i}{}_{a,b}$ is the entry in the $a$-th row and $b$-th column of $K_i$, and $Z_{i,a}$ denotes the value in the $i$-th column and $a$-th row of the representation matrix $Z$.

Using a complex kernel (e.g., a Gaussian kernel) in HSIC to measure independence between dimensions can result in high space complexity, making it challenging to implement in scenarios with large sample sizes and dimensions. Inspired by Lemma 1 in (Mialon, Balestriero, and LeCun 2022), minimizing the HSIC of different dimensions can instead be cast as minimizing the sum of the squared off-diagonal elements of the covariance matrix. The argument is as follows. Let the kernel function be $\kappa(Z_{i,a}, Z_{i,b}) = g(Z_{i,a}) g(Z_{i,b})^\top$, where $g: \mathbb{R} \to \mathbb{R}^L$ is an element-wise projector. We denote the mapping of such projectors on $Z$ as $Q = g(Z) = [g(Z_1), \ldots, g(Z_d)] \in \mathbb{R}^{N \times dL}$. According to Lemma 1 in Mialon, Balestriero, and LeCun (2022), we have:
$$\mathrm{HSIC}(Z_i, Z_j) = \frac{1}{(N-1)^2} \mathrm{Tr}\big(g(Z_i) g(Z_i)^\top H g(Z_j) g(Z_j)^\top H\big) = \frac{1}{(N-1)^2} \big\| g(Z_i)^\top H g(Z_j) \big\|_F^2 = \| \mathrm{Cov}(g(Z_i), g(Z_j)) \|_F^2 = \| \mathrm{Cov}(Q)_{(i-1)L:iL,\,(j-1)L:jL} \|_F^2, \quad (12)$$
where Tr denotes the trace of a matrix and Cov the covariance of two variables. We let $g(x) = x$, i.e., a linear kernel; in this case $Q = Z$, and thus
$$\sum_{i \neq j} \mathrm{HSIC}(Z_i, Z_j) = \sum_{i \neq j} \mathrm{Cov}(Q)^2_{i,j} = \sum_{i \neq j} \mathrm{Cov}(Z)^2_{i,j}. \quad (13)$$
The calculation of the HSIC values of different dimensions is thus converted to the calculation of a covariance, and minimizing Eq. 13 encourages independence between different dimensions.

Optimization Objective
We further normalize the embedding matrix dimension-wise, and $\tilde{Z}$ represents the node embeddings after normalization. Note that $\|\tilde{Z}_i\|_2 = 1$; thus $\min_g \sum_i \|Z^A_i - Z^B_i\|_2^2$ can be replaced by maximizing the inner product of $\tilde{Z}^A_i$ and $\tilde{Z}^B_i$, i.e., $\max_g \sum_i \tilde{Z}^A_i \cdot \tilde{Z}^B_i$, where $\cdot$ denotes the inner product. Let $s_i$ denote the standard deviation of the $i$-th dimension before normalization; minimizing $\sqrt{\|s_i - \lambda\|_2^2}$ pushes the standard deviation close to $\lambda$. The overall optimization objective of our proposed GCIL is summarized as follows:
$$\mathcal{L} = -\alpha \sum_i \tilde{Z}^A_i \cdot \tilde{Z}^B_i + \beta \sum_i \Big( \sqrt{\|s^A_i - \lambda\|_2^2} + \sqrt{\|s^B_i - \lambda\|_2^2} \Big) + \gamma \sum_{i \neq j} \Big( \mathrm{Cov}(\tilde{Z}^A)^2_{i,j} + \mathrm{Cov}(\tilde{Z}^B)^2_{i,j} \Big), \quad (14)$$
where $\alpha$, $\beta$, and $\gamma$ are hyper-parameters controlling the importance of each term in the loss, and $\lambda$ represents the desired standard deviation of the dimensions.

Table 1: Statistics of benchmark datasets
Dataset | #Nodes | #Edges | #Classes | #Features
Cora | 2,708 | 10,556 | 7 | 1,433
Citeseer | 3,327 | 9,228 | 6 | 3,703
Pubmed | 19,717 | 88,651 | 3 | 500
Wiki-CS | 11,701 | 432,246 | 10 | 300
Flickr | 7,575 | 479,476 | 9 | 12,047

5 Experiments
Experimental Setup
Dataset.
To evaluate our method, we consider five commonly used node classification benchmark datasets from previous works (Velickovic et al. 2019; Mernyei and Cangea 2020; Liu et al. 2022a): Cora, Citeseer, Pubmed, Wiki-CS, and Flickr. The statistics of these datasets are summarized in Table 1. Among them, the three citation datasets contain sparse one-hot features, while Wiki-CS has dense numerical features. We adopt the public splits for Cora, Citeseer, Pubmed, and Flickr, where the training set contains 20 nodes per class, 500 nodes are used for validation, and 1,000 for testing. For the Wiki-CS dataset, we evaluate the models on the public splits provided in (Mernyei and Cangea 2020). More details of the datasets are presented in Appendix A.

Baseline. We choose two kinds of methods as benchmarks: semi-supervised methods and self-supervised methods. (1) Semi-supervised methods: GCN (Kipf and Welling 2017) and GAT (Velickovic et al. 2018). (2) Self-supervised methods: DGI (Velickovic et al. 2019), MVGRL (Hassani and Khasahmadi 2020), GRACE (Zhu et al. 2020), GCA (Zhu et al. 2021), GraphCL (You et al. 2020), COSTA (Zhang et al. 2022), CCA-SSG (Zhang et al. 2021), and ProGCL (Xia et al. 2022).

Parameter Settings. We set the embedding dimension to 512 for all datasets, with a learning rate of 0.0002 for the Flickr dataset and 0.001 for all other datasets. The weight decay is 0.0001 for all datasets. Additional details of the parameter settings are presented in Appendix C.

Table 2: Quantitative results on node classification; both the mean accuracy and the standard deviation are shown. The best performance among self-supervised methods is bolded. '–' indicates out-of-memory on a 24 GB GPU.
Method | Cora Ma-F1 | Cora Mi-F1 | Citeseer Ma-F1 | Citeseer Mi-F1 | PubMed Ma-F1 | PubMed Mi-F1 | Wiki-CS Ma-F1 | Wiki-CS Mi-F1 | Flickr Ma-F1 | Flickr Mi-F1
GCN | 80.6±0.7 | 81.5±0.6 | 68.1±0.5 | 70.9±0.5 | 78.5±0.3 | 78.9±0.3 | 73.2±0.8 | 77.5±0.4 | 48.9±1.6 | 50.2±1.2
GAT | 81.3±0.3 | 82.3±0.2 | 67.5±0.2 | 72.0±0.9 | 77.4±0.2 | 77.8±0.2 | 75.5±0.4 | 78.3±0.4 | 35.0±0.8 | 37.1±0.3
DGI | 80.4±0.7 | 82.0±0.5 | 67.7±0.9 | 71.7±0.8 | 76.8±0.9 | 76.7±0.9 | 70.6±0.1 | 75.6±0.1 | 31.2±1.6 | 33.0±1.6
MVGRL | 81.5±0.5 | 82.8±0.4 | 66.8±0.7 | 72.5±0.5 | 79.8±0.4 | 79.7±0.3 | 74.9±0.1 | 78.1±0.1 | 31.2±2.9 | 33.4±3.0
GRACE | 79.2±1.0 | 80.0±1.0 | 65.1±1.2 | 68.7±1.1 | 80.0±0.7 | 79.9±0.7 | 74.8±0.2 | 78.2±0.1 | 35.7±1.3 | 37.3±1.0
GCA | 79.9±1.1 | 81.1±1.0 | 62.8±1.3 | 65.9±1.0 | 80.8±0.6 | 81.4±0.6 | 74.9±0.0 | 78.3±0.0 | 41.2±0.5 | 42.2±0.6
GraphCL | 80.7±0.9 | 82.3±0.9 | 67.8±1.0 | 71.9±0.9 | 77.0±0.4 | 76.8±0.5 | – | – | 32.1±1.1 | 34.5±0.9
CCA-SSG | 82.9±0.8 | 83.6±0.9 | 67.9±1.0 | 73.1±0.7 | 80.7±0.6 | 81.0±0.6 | 73.7±0.2 | 78.2±0.1 | 37.0±1.1 | 39.3±0.9
COSTA | 81.2±0.4 | 82.4±0.7 | 62.9±3.6 | 66.4±4.3 | 79.8±0.6 | 80.4±0.9 | 72.2±5.1 | 76.2±2.3 | 42.0±2.6 | 42.8±2.5
ProGCL | 81.0±0.4 | 81.9±0.6 | 62.9±0.9 | 65.9±0.9 | 80.5±2.3 | 80.5±2.3 | 75.3±0.1 | 78.7±0.1 | 37.8±1.1 | 38.5±1.0
Ours | 83.8±0.5 | 84.4±0.7 | 69.1±0.4 | 73.7±0.5 | 81.5±0.5 | 81.6±0.7 | 75.6±0.1 | 78.6±0.1 | 40.0±0.8 | 43.0±0.4

Table 3: Ablation results on Cora and Wiki-CS
Ablation | Cora Ma-F1 | Cora Mi-F1 | Wiki-CS Ma-F1 | Wiki-CS Mi-F1
w/o indep | 57.5 | 57.8 | 34.3 | 51.4
w/o inv | 79.2 | 80.5 | 57.5 | 63.6
w/o Aug | 83.3 | 84.1 | 75.1 | 78.0
GCIL | 83.8 | 84.4 | 75.6 | 78.6

Node Classification
In this section, we evaluate the proposed GCIL on node classification. We conducted experiments on five datasets, and the results are shown in Table 2. The best results of all self-supervised learning methods are bolded in the table. Please note that GraphCL has an out-of-memory issue on the Wiki-CS dataset.
As we can see, our method GCIL achieves excellent performance on all five datasets compared to the self-supervised methods, and except on the Flickr dataset, it performs even better than the semi-supervised methods. Specifically, we achieve state-of-the-art performance on the Cora, Citeseer, and Pubmed datasets. In addition, we achieve the best Macro-F1 result on the Wiki-CS dataset and are second only to ProGCL on Micro-F1. On the Flickr dataset, the semi-supervised method GCN achieves the best performance on both metrics, while our method outperforms all self-supervised methods on the Micro-F1 metric. We empirically find that a two-layer GCN encoder yields the best accuracy on the Cora, Pubmed, Wiki-CS, and Flickr datasets, while a one-layer GCN encoder achieves the best performance on the Citeseer dataset. We note that some GCL methods (Zhu et al. 2020; Zhang et al. 2022) also utilize two layers of GCN on the Citeseer dataset. This indicates that our approach achieves strong performance with less computation than the aforementioned methods on certain datasets.

Ablation Studies
In this section, we investigate the impact of the Causal Intervention (Aug.), the Invariance Objective (Inv.), and the Independence Objective (Indep.) in GCIL. We design three variants of GCIL: (1) GCIL w/o Inv: GCIL without the invariance objective, i.e., the hyper-parameters α and β are set to zero, and γ is kept the same as in GCIL. (2) GCIL w/o Indep: GCIL without the independence objective, i.e., the hyper-parameter γ is set to zero while α and β are kept the same as in GCIL. (3) GCIL w/o Aug: the spectral-based augmentation is removed while the optimization objective is maintained. The results of the three variants on the Cora and Wiki-CS datasets are reported in Table 3. From the results, we find that the overall performance order is: GCIL > GCIL w/o Aug > GCIL w/o Indep > GCIL w/o Inv. The invariance objective has the greatest impact on performance, which illustrates that capturing the consistency between the node representations greatly encourages the model to encode task-related information. The performance of GCIL w/o Indep is worse than that of GCIL, which suggests that promoting independence between dimensions encourages the model to learn more informative node representations. Additionally, the result of GCIL w/o Aug indicates that our augmentation encourages the model to learn invariant representations better than random augmentation. Finally, GCIL obtains the best performance, which indicates the effectiveness of considering all three components together.

Hyper-parameter Sensitivity
In this section, we investigate the effect of α, β, γ, and the embedding dimension. The results for these hyper-parameters are reported in Figure 3.

Analysis of α. The parameter α controls the importance of the term that aligns the mean of the same dimension across the two representations. We fix the hyper-parameters β and γ while varying α from 0.5 to 1.4 and report the F1-Macro and F1-Micro performance of GCIL.
[Figure 3: Hyper-parameter sensitivity of GCIL with varying α, β, and γ on the Cora and Wiki-CS datasets. Panels: (a) Cora: α, (b) Wiki-CS: α, (c) Cora: β, (d) Wiki-CS: β, (e) Cora: γ, (f) Wiki-CS: γ.]

As shown in Figures 3(a) and 3(b), the performance of GCIL first increases and then decreases with the growth of α, while the fluctuation is very slight. This indicates that GCIL is not sensitive to α.

Analysis of β. The parameter β controls the importance of the term that keeps the standard deviation of the same dimension constant across the two representations. Similar to the above, we fix the hyper-parameters α and γ while varying β from 0 to 9. In Figures 3(c) and 3(d), we can see that, within a suitable range, the performance of our method benefits from the increase of β, while the performance drops slightly for large values of β.

Analysis of γ. The parameter γ controls the contribution of the independence objective. We varied γ from 1e-4 to 1e-2 for Cora and from 1e-3 to 0.1 for Wiki-CS to test its impact. Increasing the weight of the independence term improved performance, demonstrating the objective's effectiveness in encoding relevant information. However, excessively large values of γ sharply decreased performance, indicating poor optimization of the other loss terms. An appropriate γ can be chosen by observing the degree of optimization of the invariance objective.

Visualization
To better understand the correlations between the dimensions of the representation, we visualize the correlation matrix of the representation matrix Z for the Wiki-CS dataset. [Figure 4: Correlation matrix of the representations on Wiki-CS. Panels: (a) w/o indep and std, (b) w/o std, (c) GCIL.] In Figure 4, each row and column corresponds to a representation dimension, and the color indicates the Pearson correlation coefficient (Cohen et al. 2009). In Figure 4(a), we set the hyper-parameters β and γ to 0 while keeping the other hyper-parameters unchanged. We observe that strong correlations exist among different dimensions of the representation, which suggests that these dimensions may encode similar information. In Figure 4(b), we set the hyper-parameter β to 0 and find that the correlations between dimensions are reduced compared to the previous case; however, different dimensions still correlate with each other. In contrast, for GCIL, as shown in Figure 4(c), the correlation matrix demonstrates that almost all off-diagonal values converge to 0. This indicates that different dimensions contain orthogonal and distinct information, which highlights the effectiveness of our method in learning informative representations. Overall, these visualizations confirm that our proposed method successfully captures and utilizes orthogonal information in the representation, leading to improved performance in capturing meaningful features.
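A dimension-wise correlation matrix of the kind shown in Figure 4 can be computed with a few lines. The sketch below is generic analysis code, not the paper's plotting script:

```python
import torch

def dim_correlation(z: torch.Tensor) -> torch.Tensor:
    """Pearson correlation between the d dimensions of an N x d embedding
    matrix, as visualized in Figure 4."""
    zc = z - z.mean(dim=0, keepdim=True)
    zc = zc / zc.std(dim=0, keepdim=True).clamp(min=1e-12)
    return (zc.t() @ zc) / (z.size(0) - 1)

z = torch.randn(1000, 512)                       # stand-in for embeddings Z
corr = dim_correlation(z)
off_diag = corr - torch.diag(corr.diagonal())
print(off_diag.abs().mean())                     # near 0 when decorrelated
```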
6 Conclusion
In this paper, we study graph contrastive learning from the causal perspective and find that previous methods may discard the causal information contained in the original graph, which prevents the model from learning invariant representations. To learn invariant representations, we propose a novel GCL method from the causal view. We first simulate an intervention on non-causal factors with spectral graph augmentation. Then, we design the invariance objective and the independence objective to encourage the model to extract the causal information contained in the graph. Experimental results demonstrate that our proposed GCIL obtains the best performance across baselines on four node classification datasets.

Acknowledgments
This work is supported in part by the National Natural Science Foundation of China (No. U20B2045, 62192784, U22B2038, 62002029, 62172052, 62322203). This work is also supported by the Foundation of the Key Laboratory of Big Data Artificial Intelligence in Transportation (Beijing Jiaotong University), Ministry of Education (No. BATLAB202301). This project is also funded by the China Postdoctoral Science Foundation (No. 2023M741946) and the China Postdoctoral Researcher Program (No. GZB20230345).

References
Arjovsky, M.; Bottou, L.; Gulrajani, I.; and Lopez-Paz, D. 2019. Invariant risk minimization. arXiv preprint arXiv:1907.02893.
Bruna, J.; Zaremba, W.; Szlam, A.; and Lecun, Y. 2014. Spectral networks and locally connected networks on graphs. In ICLR.
Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020. A simple framework for contrastive learning of visual representations. In ICML, 1597–1607.
Chen, Y.; Zhang, Y.; Bian, Y.; Yang, H.; Kaili, M.; Xie, B.; Liu, T.; Han, B.; and Cheng, J. 2022. Learning causally invariant representations for out-of-distribution generalization on graphs. Advances in Neural Information Processing Systems, 35: 22131–22148.
Cohen, I.; Huang, Y.; Chen, J.; and Benesty, J. 2009. Pearson correlation coefficient. Noise reduction in speech processing, 1–4.
Defferrard, M.; Bresson, X.; and Vandergheynst, P. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. NeurIPS.
Fan, S.; Wang, X.; Mo, Y.; Shi, C.; and Tang, J. 2022a. Debiasing Graph Neural Networks via Learning Disentangled Causal Substructure. In NeurIPS.
Fan, S.; Wang, X.; Shi, C.; Cui, P.; and Wang, B. 2023a. Generalizing graph neural networks on out-of-distribution graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Fan, S.; Wang, X.; Shi, C.; Kuang, K.; Liu, N.; and Wang, B. 2022b. Debiased graph neural networks with agnostic label selection bias. IEEE Transactions on Neural Networks and Learning Systems.
Fan, S.; Zhang, S.; Wang, X.; and Shi, C. 2023b. Directed acyclic graph structure learning from dynamic graphs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 7512–7521.
Hamilton, W.; Ying, Z.; and Leskovec, J. 2017. Inductive representation learning on large graphs. NeurIPS.
Hassani, K.; and Khasahmadi, A. H. 2020. Contrastive multi-view representation learning on graphs. In ICML, 4116–4126.
He, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. 2020. Momentum contrast for unsupervised visual representation learning. In CVPR, 9729–9738.
Kipf, T. N.; and Welling, M. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR.
Li, S.; Wang, X.; Zhang, A.; Wu, Y.; He, X.; and Chua, T.-S. 2022. Let invariant rationale discovery inspire graph contrastive learning. In ICML, 13052–13065.
Liu, N.; Wang, X.; Bo, D.; Shi, C.; and Pei, J. 2022a. Revisiting Graph Contrastive Learning from the Perspective of Graph Spectrum. In NeurIPS.
Liu, Y.; Jin, M.; Pan, S.; Zhou, C.; Zheng, Y.; Xia, F.; and Yu, P. 2022b. Graph self-supervised learning: A survey. IEEE Transactions on Knowledge and Data Engineering.
Mernyei, P.; and Cangea, C. 2020. Wiki-CS: A Wikipedia-based benchmark for graph neural networks. arXiv preprint arXiv:2007.02901.
Mialon, G.; Balestriero, R.; and LeCun, Y. 2022. Variance covariance regularization enforces pairwise independence in self-supervised representations. arXiv preprint arXiv:2209.14905.
Mitrovic, J.; McWilliams, B.; Walker, J. C.; Buesing, L. H.; and Blundell, C. 2020. Representation Learning via Invariant Causal Mechanisms. In ICLR.
Oord, A. v. d.; Li, Y.; and Vinyals, O. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
Pearl, J. 2009. Causality. Cambridge University Press.
Pearl, J.; Glymour, M.; and Jewell, N. P. 2016. Causal inference in statistics: A primer. John Wiley & Sons.
Qiu, J.; Chen, Q.; Dong, Y.; Zhang, J.; Yang, H.; Ding, M.; Wang, K.; and Tang, J. 2020. GCC: Graph contrastive coding for graph neural network pre-training. In KDD, 1150–1160.
Velickovic, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; and Bengio, Y. 2018. Graph Attention Networks. In ICLR.
Velickovic, P.; Fedus, W.; Hamilton, W. L.; Liò, P.; Bengio, Y.; and Hjelm, R. D. 2019. Deep Graph Infomax. ICLR.
Wu, Y.; Wang, X.; Zhang, A.; He, X.; and Chua, T. 2022. Discovering Invariant Rationales for Graph Neural Networks. In ICLR.
Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; and Philip, S. Y. 2020. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 4–24.
Xia, J.; Wu, L.; Wang, G.; Chen, J.; and Li, S. Z. 2022. ProGCL: Rethinking hard negative mining in graph contrastive learning. In ICML, 24332–24346. PMLR.
Xie, Y.; Xu, Z.; Zhang, J.; Wang, Z.; and Ji, S. 2022. Self-supervised learning of graph neural networks: A unified review. IEEE Transactions on Pattern Analysis and Machine Intelligence.
You, Y.; Chen, T.; Sui, Y.; Chen, T.; Wang, Z.; and Shen, Y. 2020. Graph contrastive learning with augmentations. NeurIPS, 5812–5823.
Zhang, D.; Zhang, H.; Tang, J.; Hua, X.-S.; and Sun, Q. 2020. Causal intervention for weakly-supervised semantic segmentation. NeurIPS, 655–666.
Zhang, H.; Wu, Q.; Yan, J.; Wipf, D.; and Yu, P. S. 2021. From canonical correlation analysis to self-supervised graph neural networks. NeurIPS, 76–89.
Zhang, Y.; Zhu, H.; Song, Z.; Koniusz, P.; and King, I. 2022. COSTA: Covariance-Preserving Feature Augmentation for Graph Contrastive Learning. In KDD, 2524–2534.
Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; and Sun, M. 2020. Graph neural networks: A review of methods and applications. AI Open, 57–81.
Zhu, Y.; Xu, Y.; Yu, F.; Liu, Q.; Wu, S.; and Wang, L. 2020. Deep graph contrastive representation learning. arXiv preprint arXiv:2006.04131.
Zhu, Y.; Xu, Y.; Yu, F.; Liu, Q.; Wu, S.; and Wang, L. 2021. Graph contrastive learning with adaptive augmentation. In TheWebConf, 2069–2080.
HGE: Embedding Temporal Knowledge Graphs in a Product Space of Heterogeneous Geometric Subspaces

Jiaxin Pan1, Mojtaba Nayyeri1, Yinan Li1, Steffen Staab1,2
1University of Stuttgart, Stuttgart, Germany 2University of Southampton, Southampton, United Kingdom
[email protected], [email protected], [email protected], [email protected]
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Temporal knowledge graphs represent temporal facts (s, p, o, τ) relating a subject s and an object o via a relation label p at time τ, where τ could be a time point or time interval. Temporal knowledge graphs may exhibit static temporal patterns at distinct points in time and dynamic temporal patterns between different timestamps. In order to learn a rich set of static and dynamic temporal patterns and apply them for inference, several embedding approaches have been suggested in the literature. However, as most of them resort to a single underlying embedding space, their capability to model all kinds of temporal patterns is severely limited by having to adhere to the geometric properties of that one embedding space. We lift this limitation with an embedding approach that maps temporal facts into a product space of several heterogeneous geometric subspaces with distinct geometric properties, i.e., Complex, Dual, and Split-complex spaces. In addition, we propose a temporal-geometric attention mechanism to conveniently integrate information from the different geometric subspaces according to the captured relational and temporal information. Experimental results on standard temporal benchmark datasets favorably evaluate our approach against state-of-the-art models.

Introduction
Knowledge Graphs (KGs) (Hogan et al. 2021) model facts in real-world applications as directed edge-labeled graphs. Temporal KGs (TKGs) add timestamps to their facts in order to model the temporal validity of facts. Depending on the representational model, timestamps may represent time points or time intervals. For instance, the quadruple (Boris Johnson, IsPrimeministerOf, UK, [2019, 2022]) in a TKG represents the fact that Boris Johnson was the prime minister of the UK between 2019 and 2022.

Relations in temporal knowledge graphs may exhibit various structural temporal patterns. In the left part of Figure 1, (Charles III, marriedWith, Camilla, 2005) and (Camilla, marriedWith, Charles III, 2005) form a symmetric structure in time. In the middle part, we first have (Elizabeth Bowes-Lyon, hasChild, Elizabeth II, 1926) and then (Elizabeth II, hasChild, Charles III, 1948); the transition of the hasChild relation through Elizabeth II forms a hierarchy structure in TKGs. In the right part, Charles III visits Malta, France, Belgium, the USA, etc. at different timestamps, forming a star structure over time. Moreover, as Charles III shows, the structures in which entities are involved may evolve over time. How to preserve different relational structural patterns and how to capture evolving temporal patterns for entities is a fundamental challenge in TKGE.

[Figure 1: Unit spheres in their corresponding spaces. All points on the orange hyperplanes have the same distance to their origin. Different spaces favor different temporal patterns. Left: the unit circle represented in Complex space (top) is suitable for representing periodicities and for inference with 'periodic' logical temporal patterns, e.g., symmetry (bottom). Middle: the Minkowskian unit circle in Split-complex space (top) is suitable for representing a temporal hierarchy formed by Make statement. Right: the Galilean unit circle represented in Dual space (top) is suitable for representing temporal star patterns (bottom).]

Existing embedding approaches such as TeRo, RotateQVS, and TLT-KGE (Xu et al. 2020; Chen et al. 2022; Zhang et al. 2022) resort to a single underlying embedding space, such as the Complex or Quaternion space, to model symmetric patterns via rotations on a unit hypersphere. Other works (Chami et al. 2020; Balazevic, Allen, and Hospedales 2019; Montella, Barahona, and Heinecke 2021; Han et al. 2020) use hyperbolic space to preserve hierarchical patterns in temporal KGs. However, their capability to model all kinds of structural patterns is severely limited by having to adhere to the geometric properties of their one embedding space. Han et al. (2020) have shown the advantage of using multiple geometric subspaces (spherical, hyperbolic, etc.) in different dimensions to preserve heterogeneous structural patterns in temporal KGs. However, their approach ignores the evolution of structural patterns between entities and requires a manual selection of subspace dimensions. How to integrate suitable subsets of geometries to model different relational structural patterns, while also capturing evolutionary temporal patterns between entities, remains an open problem in these approaches.

In this paper, we address these problems by introducing a new product space covering various geometric subspaces, namely a) Complex, b) Split-complex, and c) Dual spaces, together with a temporal-relational attention mechanism and a temporal-geometric attention mechanism to model both structural and evolutionary temporal patterns. Figure 1 illustrates the spaces and some corresponding patterns. a) Consider the left part of Figure 1: in the Complex space, Euclidean unit circles are induced by circular rotations. Thus, points on the circle establish periodicities and various logical temporal patterns, e.g., relations that are symmetric in time (Xu et al. 2020). Circular rotations are modeled by circular sine and cosine functions in the Complex space. b) Consider the middle part of Figure 1: in the Split-complex space, a Minkowskian unit circle is induced through hyperbolic rotation, where points on the circle can be mapped using hyperbolic sine and cosine. Thus, the Split-complex space can capture a temporal hierarchy, e.g., children must be born after their parents. c) Consider the right part of Figure 1: in the Dual space, a Galilean unit circle is induced by the rotation that maps points on the circle using Galilean sine and cosine. Points on the induced circle (two parallel lines) are equidistant to the center, making it useful for modeling star-shaped subgraphs. The combination of these three spaces, together with their geometries and corresponding operators, allows for capturing diverse logical and structural patterns such as relational symmetry in time, temporal hierarchy patterns, and temporal star patterns. Which geometry should be preferred in a specific case, however, needs to be learned.
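The three rotation types can be made concrete with a small numerical sketch (illustrative only, not HGE code): a single parameter g, with k² = g, selects circular (g = −1), hyperbolic (g = +1), or Galilean (g = 0) rotation, and each rotation preserves the quadratic form x² − g·y² of its unit "circle".

```python
import numpy as np

def rotate(point: np.ndarray, theta: float, g: int) -> np.ndarray:
    """Apply the rotation associated with k^2 = g to a 2-D point:
    g = -1 -> circular rotation (Euclidean unit circle),
    g = +1 -> hyperbolic rotation (Minkowskian unit circle),
    g =  0 -> Galilean rotation (two parallel lines)."""
    if g == -1:
        c, s = np.cos(theta), np.sin(theta)
    elif g == 1:
        c, s = np.cosh(theta), np.sinh(theta)
    else:  # g == 0: Galilean "cosine" and "sine"
        c, s = 1.0, theta
    return np.array([[c, g * s], [s, c]]) @ point

p = np.array([1.0, 0.0])
for g, name in [(-1, "Euclidean"), (1, "Minkowskian"), (0, "Galilean")]:
    q = rotate(p, 0.7, g)
    invariant = q[0] ** 2 - g * q[1] ** 2   # preserved form x^2 - g*y^2
    print(f"{name}: rotated to {q.round(3)}, invariant = {invariant:.6f}")
```

All three cases print an invariant of 1, illustrating that each rotation keeps points on its own notion of a unit circle.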
For this purpose, we provide a temporal-geometric attention mechanism to select the preferred geometries for a given relation and time. Moreover, to deal with the evolution of patterns between entities, we propose the temporal-relational attention mechanism to balance static and time-evolving embeddings. We compare our TKGE model, heterogeneous geometric embedding (HGE), to Complex-space TKGE methods such as TComplEx (Lacroix, Obozinski, and Usunier 2020), TeRo (Xu et al. 2020), and TLT-KGE (Zhang et al. 2022) and find that our model obtains better results on link prediction tasks in TKGs. In summary, the key contributions of this paper are as follows:
• We extend state-of-the-art Temporal Knowledge Graph Embedding (TKGE) models that use Complex spaces to a new method, HGE. By utilizing multiple heterogeneous geometries, HGE embeds temporal facts in a product space of Complex, Split-complex, and Dual subspaces.
• Our theoretical analysis shows that our embedding method can capture a range of structural and logical temporal patterns by utilizing rotation operations acting on Euclidean, Minkowskian, and Galilean unit circles. These theoretical considerations are supported by experiments and ablation studies on pre-existing benchmark datasets.
• Two novel attention mechanisms, temporal-relational attention and temporal-geometric attention, allow for representing relation-changing frequencies and suitable geometries, respectively.
• Experimental results on benchmark datasets show that HGE uniformly improves several state-of-the-art TKGE models. Subsequent ablation studies verify the general benefit of the attention-based product space models over the Complex space.

Preliminaries
Definition 1 (Time Interval). Let $T$ be the set of closed intervals on the real line $\mathbb{R}$. For a time interval $\tau = [m, n] \in T$, $\tau \subseteq \mathbb{R}$, with $m, n \in \tau$, $m \leq n$, it holds that $\forall t \in \mathbb{R}: m \leq t \leq n \Rightarrow t \in \tau$.

Definition 2 (Temporal Knowledge Graph). Let $V$ be a set of vertices, $R$ a set of relation labels, $T$ the set of all time intervals, and $G \subseteq V \times R \times V \times T$; then a temporal fact $(s, p, o, \tau) \in G$ with subject $s$, object $o$, and relation label $p$ is valid during time interval $\tau$. A temporal knowledge graph $TKG = (V, R, G)$ defines a set of temporal facts. In addition, we denote $G_i$ as the $i$-th snapshot of the TKG.

We re-use Allen's interval calculus to express relations between time intervals (Allen 1983). It defines 13 possible relations between two time intervals such that these relations are exhaustive and pairwise disjoint. For example, the Allen relation Contains(τ1, τ2) holds between two time intervals $\tau_1 = [m_1, n_1]$, $\tau_2 = [m_2, n_2]$ if $m_1 < m_2 < n_2 < n_1$. Following (Singh et al. 2023), we refer to the 13 relations of Allen's interval calculus as Allen relations and to the relations in temporal knowledge graphs as KG relations. Appendix A describes the details of the 13 Allen relations.

Embedding Model in Heterogeneous Geometric Subspaces
To capture heterogeneous structural and logical patterns in a temporal KG, we propose the HGE model, which extends the Complex space adopted by existing models (Zhang et al. 2022; Lacroix, Obozinski, and Usunier 2020) to an attention-based product space. We introduce the key components of our temporal knowledge graph embedding method, HGE, in the following order: a) embedding space, b) temporal-relational attention, c) temporal-geometric attention. Figure 2 shows the structure of our proposed HGE model.
[Figure 2: An illustration of the HGE model. First, entities, relations, and timestamps in temporal knowledge graphs are represented in heterogeneous geometric subspaces: 1) Complex space, 2) Split-complex space, 3) Dual space. Based on the static relation embedding $p_s$ and the dynamic relation embedding $p_c$, temporal-relational attention learns a hybrid relation embedding $p_{s\tau}$ according to each relation's changing frequency. Temporal-geometric attention incorporates the embeddings of the geometric subspaces into a product space via $p_{s\tau}$, which decides the suitable geometry for each relation. Finally, the scoring function is computed on the embeddings learned in the product space.]

Embeddings in Geometric Subspaces
We aim to embed the elements of a temporal knowledge graph (entities, relations, and times) into a $d$-dimensional product space $\mathcal{M} = \mathcal{M}_1 \times \ldots \times \mathcal{M}_d$, where each $\mathcal{M}_i$ is a Complex, Dual, or Split-complex space, i.e., $\mathcal{M}_i \in \{\mathbb{C}, \mathbb{S}, \mathbb{D}\}$. For a given fact $(s, p, o, \tau) \in G$, we use the mappings $f_e: E \to \mathcal{M}_i$, $f_r: R \to \mathcal{M}_i$, $f_\tau: T \to \mathcal{M}_i$ to assign $d$-dimensional vectors to each element of a TKG, denoted $s_{\mathcal{M}_i}, p_{\mathcal{M}_i}, o_{\mathcal{M}_i}, \tau_{\mathcal{M}_i}$, respectively. We introduce the three fundamental parts of the product space used to develop our model, namely the Complex, Split-complex, and Dual spaces, together with their geometric interpretations. Given the quadratic equation $k^2 + g = 0$, $g \in \{-1, 1, 0\}$, we obtain three number systems based on the value of $g$:

Complex Vector Space. Complex numbers (Harkin and Harkin 2004; Helzer 2000) allow for solving $k^2 + 1 = 0$ by defining a new number $k = i$ where $i^2 = -1$. This defines the set of Complex numbers $\mathbb{C} = \{q = a + bi \mid a, b \in \mathbb{R}, i^2 = -1\}$, where $a$ is the real and $b$ the imaginary part. The multiplication of two Complex numbers $q_1 = a + bi$, $q_2 = c + di$ is defined by $q_1 * q_2 = (ac - bd) + (ad + bc)i$. The Complex space has been shown by previous works (Zhang et al. 2022; Lacroix, Obozinski, and Usunier 2020; Xu et al. 2020) to represent temporal knowledge graphs effectively. Following their work, we represent $s, p, o, \tau$ in Complex space as
$$s_{\mathbb{C}} = s_{\mathbb{C}}^a + s_{\mathbb{C}}^b i, \quad p_{\mathbb{C}} = p_{\mathbb{C}}^a + p_{\mathbb{C}}^b i, \quad o_{\mathbb{C}} = o_{\mathbb{C}}^a + o_{\mathbb{C}}^b i, \quad \tau_{\mathbb{C}} = \tau_{\mathbb{C}}^a + \tau_{\mathbb{C}}^b i, \quad (1)$$
where $s_{\{\cdot\}}, p_{\{\cdot\}}, o_{\{\cdot\}}, \tau_{\{\cdot\}} \in \mathbb{R}^d$; the superscript $a$ denotes the real part of each element and $b$ the imaginary part.

Split-complex Vector Space. To deal with the quadratic equation $k^2 - 1 = 0$, a Split-complex number (Harkin and Harkin 2004; Helzer 2000) is defined as $q = a + jb$, where $k = j$, $j^2 = 1$, $j \neq \pm 1$. Formally, the space of Split-complex numbers is defined as $\mathbb{S} = \{q = a + bj \mid a, b \in \mathbb{R}, j^2 = 1, j \neq \pm 1\}$, where $a$ and $b$ are the real and split parts, respectively. The multiplication of two Split-complex numbers $q_1 = a + bj$, $q_2 = c + dj$ is defined by $q_1 * q_2 = (ac + bd) + (ad + bc)j$. We represent $s, p, o, \tau$ in Split-complex space as
$$s_{\mathbb{S}} = s_{\mathbb{S}}^a + s_{\mathbb{S}}^b j, \quad p_{\mathbb{S}} = p_{\mathbb{S}}^a + p_{\mathbb{S}}^b j, \quad o_{\mathbb{S}} = o_{\mathbb{S}}^a + o_{\mathbb{S}}^b j, \quad \tau_{\mathbb{S}} = \tau_{\mathbb{S}}^a + \tau_{\mathbb{S}}^b j. \quad (2)$$

Dual Vector Space. Dual numbers (Angeles 1998; Helzer 2000) are similar to Complex numbers, but their imaginary unit $\epsilon$ is defined such that $\epsilon^2 = 0$, $\epsilon \neq 0$. The Dual space is then defined as $\mathbb{D} = \{q = a + b\epsilon \mid a, b \in \mathbb{R}, \epsilon^2 = 0, \epsilon \neq 0\}$, where $a$ and $b$ are the real and dual components of the dual number. The multiplication of two Dual numbers $q_1 = a + b\epsilon$, $q_2 = c + d\epsilon$ is defined by $q_1 * q_2 = (ac) + (ad + bc)\epsilon$. (A small numerical sketch of the three products follows.)
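The three multiplication rules differ only in the value of g = k², as the following tiny sketch (illustrative, not the released HGE code) makes explicit:

```python
def multiply(q1, q2, g):
    """Multiply two 'generalized complex' numbers q = (a, b) meaning a + b*k
    with k^2 = g: g = -1 complex, g = +1 split-complex, g = 0 dual."""
    a, b = q1
    c, d = q2
    return (a * c + g * b * d, a * d + b * c)

# matches the three rules stated in the text:
print(multiply((1, 2), (3, 4), -1))  # complex:       (3-8, 4+6) = (-5, 10)
print(multiply((1, 2), (3, 4),  1))  # split-complex: (3+8, 4+6) = (11, 10)
print(multiply((1, 2), (3, 4),  0))  # dual:          (3,   4+6) = (3, 10)
```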
We represent $s, p, o, \tau$ in Dual space as
$$s_{\mathbb{D}} = s_{\mathbb{D}}^a + s_{\mathbb{D}}^b \epsilon, \quad p_{\mathbb{D}} = p_{\mathbb{D}}^a + p_{\mathbb{D}}^b \epsilon, \quad o_{\mathbb{D}} = o_{\mathbb{D}}^a + o_{\mathbb{D}}^b \epsilon, \quad \tau_{\mathbb{D}} = \tau_{\mathbb{D}}^a + \tau_{\mathbb{D}}^b \epsilon. \quad (3)$$

Embeddings in Attention-based Product Space
How to fuse information from different subspaces into a product space efficiently remains challenging in knowledge graph embedding. Existing work (Han et al. 2020) assigns separate dimensions $d_i$ to each subspace $\mathcal{M}_i$, where $\sum_i d_i = d$, and calculates an individual loss per subspace, which is subsequently aggregated into a total loss. Such a stacking strategy requires the manual selection of suitable $d_i$ values for every new task and consumes substantial computational resources to reach an optimal choice of $d_i$. To capture suitable geometries from the various subspaces efficiently, we introduce an attention-based product space. Rather than stacking ad hoc vectors for each subspace, our method reuses vectors across all subspaces and aggregates the scoring vectors of the subspaces according to relational and temporal information.

Real and Imaginary Vector Sharing. Existing methods (Han et al. 2020) assign different vectors to each subspace. However, preliminary experiments in Appendix C illustrate that, although their geometric interpretations are diverse, the real and imaginary vectors of different subspaces are almost identical when trained to optimal settings with the same embedding sizes. Accordingly, we share the real and imaginary vectors between all subspaces as follows:
$$\{\cdot\}_{\mathbb{C}}^a = \{\cdot\}_{\mathbb{S}}^a = \{\cdot\}_{\mathbb{D}}^a, \quad \{\cdot\}_{\mathbb{C}}^b = \{\cdot\}_{\mathbb{S}}^b = \{\cdot\}_{\mathbb{D}}^b, \quad (4)$$
where $\{\cdot\} \in \{s, p, o, \tau\}$. With this reuse strategy, our method avoids the manual selection of subspace dimensions and saves embedding space. If not specified otherwise, we use $s = [s^a, s^b]$, $p = [p^a, p^b]$, $o = [o^a, o^b]$, and $\tau = [\tau^a, \tau^b]$ to represent embeddings in a generic geometric subspace in the following sections, for simplicity.

Temporal-relational Attention. Relations in TKGs may exhibit different frequencies of change, varying from fully static to quickly changing behavior (Lacroix, Obozinski, and Usunier 2020). For example, the relation capitalOf does not change often over time, while the relation isPresidentOf exhibits more frequent changes. Therefore, for each relation $p$, we provide two vectors $p_s, p_c \in \mathcal{M}$. The first captures the static behavior, and the second captures the dynamic behavior through multiplication with the time embedding $\tau$. We provide a temporal attention mechanism to emphasize static or dynamic behavior depending on the characteristics of the relation:
$$p_{s\tau} = \alpha_\tau (p_c * \tau) + \alpha_s p_s, \quad (\alpha_\tau, \alpha_s) = \mathrm{Softmax}(w_p (p_c * \tau), w_p p_s), \quad (5)$$
where $w_p$ is a relation-specific weight.

Scoring Vectors from Subspaces. We take all values in each subspace for entities, relations, and times, $s_i, p_{s\tau i}, o_i \in \mathcal{M}_i$, and compute $c_i = \langle s_i, p_{s\tau i}, o_i \rangle$ (similar to previous work (Xu et al. 2020; Lacroix, Obozinski, and Usunier 2019), we take the conjugate of $o_i$ to increase performance in the experiments), where $\langle \cdot, \cdot, \cdot \rangle$ is the product in Complex, Split-complex, and Dual spaces, computed as follows:
$$c_{\mathbb{C}} = \langle (s^a p_{s\tau}^a - s^b p_{s\tau}^b) + (s^a p_{s\tau}^b + s^b p_{s\tau}^a)i,\; o^a + i o^b \rangle = (s^a p_{s\tau}^a o^a - s^b p_{s\tau}^b o^a - s^a p_{s\tau}^b o^b - s^b p_{s\tau}^a o^b) + (s^a p_{s\tau}^a o^b - s^b p_{s\tau}^b o^b + s^a p_{s\tau}^b o^a + s^b p_{s\tau}^a o^a)i,$$
$$c_{\mathbb{S}} = \langle (s^a p_{s\tau}^a + s^b p_{s\tau}^b) + (s^a p_{s\tau}^b + s^b p_{s\tau}^a)j,\; o^a + j o^b \rangle = (s^a p_{s\tau}^a o^a + s^b p_{s\tau}^b o^a + s^a p_{s\tau}^b o^b + s^b p_{s\tau}^a o^b) + (s^a p_{s\tau}^a o^b + s^b p_{s\tau}^b o^b + s^a p_{s\tau}^b o^a + s^b p_{s\tau}^a o^a)j,$$
$$c_{\mathbb{D}} = \langle (s^a p_{s\tau}^a) + (s^a p_{s\tau}^b + s^b p_{s\tau}^a)\epsilon,\; o^a + \epsilon o^b \rangle = (s^a p_{s\tau}^a o^a) + (s^a p_{s\tau}^a o^b + s^a p_{s\tau}^b o^a + s^b p_{s\tau}^a o^a)\epsilon. \quad (6)$$

Temporal-geometric Attention. The scoring vectors represent the distinctive geometric information captured by each subspace. We propose a temporal-geometric attention mechanism to integrate them based on the current relational and temporal information:
$$\beta_i = \mathrm{Softmax}(p_{s\tau} c_i), \quad i \in \{\mathbb{C}, \mathbb{D}, \mathbb{S}\}. \quad (7)$$
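A compact sketch of this attention could look as follows. It is an interpretation under assumptions (the index i of Eq. 7 is treated per geometry and per dimension here, and all tensor shapes are hypothetical), not the released implementation:

```python
import torch
import torch.nn.functional as F

def geometric_attention(p_st: torch.Tensor, scores: dict) -> torch.Tensor:
    """Weight the scoring vectors of the Complex, Split-complex and Dual
    subspaces by attention derived from the time-augmented relation
    embedding p_st, then aggregate them into one scalar score."""
    keys = list(scores)                                   # e.g. ["C", "S", "D"]
    stacked = torch.stack([scores[k] for k in keys])      # (3, d) scoring vectors
    beta = F.softmax(p_st.unsqueeze(0) * stacked, dim=0)  # geometry weights
    return (beta * stacked).sum()                         # aggregated score

d = 16
p_st = torch.randn(d)                                  # relation-time embedding
scores = {k: torch.randn(d) for k in ("C", "S", "D")}  # c_i from Eq. (6)
print(geometric_attention(p_st, scores))
```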
Equation 7 emphasizes the most suitable geometry for each query via the augmented relation embedding $p_{s\tau}$. As the changing frequencies of relations are reflected by $p_{s\tau}$, HGE can model both static and dynamic logical and structural patterns in TKGs. The overall score aggregates the inner products in all subspaces:
$$S_{\mathcal{M}}(s, r, o, \tau) = \sum_{i=1}^{d} \beta_i c_i. \quad (8)$$
It is worth noting that new geometric subspaces can easily be incorporated into Equation 8, given shared real and imaginary vectors and appropriate scoring vectors.

Theoretical Analysis on Temporal Patterns
Knowledge graphs exhibit patterns. A structural pattern is a regularity in the graph, e.g., a tree as given in the middle of Figure 1, that may or may not allow for logical conclusions, but which may be hard to represent in some embedding methods. A logical pattern represents a rule that allows for concluding new facts when applied to given facts. For instance, (Charles, marriedWith, Camilla) implies (Camilla, marriedWith, Charles) because marriedWith is symmetric. Embeddings for temporal knowledge graphs must account for temporal facts including time components and express the corresponding temporal patterns. Four kinds of logical patterns — symmetric, inverse, asymmetric, and evolve — are most often considered and studied in existing TKGE models (Chen et al. 2022; Xu et al. 2020). However, their definitions either neglect time information or merely consider patterns when facts happen at the same time. We generalize and go beyond these approaches and consider static temporal patterns and dynamic temporal patterns. If a structural or a logical temporal pattern holds regardless of time information, as in traditional knowledge graphs, we call it a static temporal pattern. If a structural or a logical temporal pattern represents or draws conclusions using time information, we call it a dynamic temporal pattern. In the following, we formally define a few temporal patterns. For simplicity, we only illustrate the case where τ is a time interval; it is straightforward to extend the following definitions to the case where τ is a time point. Examples of each definition are indicated after "//".

Static Logical Temporal Patterns
Definition 3. A temporal relation $p$ is symmetric at all points in time iff $\forall s, o, \tau: (s, p, o, \tau) \to (o, p, s, \tau)$. // marriedWith
A temporal relation $p$ is anti-symmetric at all points in time iff $\forall s, o, \tau: (s, p, o, \tau) \to \neg(o, p, s, \tau)$. // locatedIn
Definition 4. A temporal relation $p_1$ is the inverse of temporal relation $p_2$ at all points in time iff $\forall s, o, \tau: (s, p_1, o, \tau) \to (o, p_2, s, \tau)$. // advises, advisedBy

Dynamic Logical Temporal Patterns
Definition 5. A temporal relation $p$ is temporal symmetric iff $\forall s, o, \tau_1: \exists \tau_2: (s, p, o, \tau_1) \to (o, p, s, \tau_2)$. // consults
A temporal relation $p$ is temporal anti-symmetric iff $\forall s, o: \exists \tau_1: (s, p, o, \tau_1) \to \forall \tau_2\, \neg(o, p, s, \tau_2)$. // arrest
Definition 6. A relation $p_1$ at time $\tau_1$ is the temporal inverse of relation $p_2$ at time $\tau_2$ iff $\forall s, o: \exists \tau_1, \tau_2: (s, p_1, o, \tau_1) \to (o, p_2, s, \tau_2)$. // invitesToVisit, Visit
Definition 7. Relation $p_1$ evolves into relation $p_2$ iff $\forall s, o: \exists \tau_1, \tau_2: \mathrm{Precedes}(\tau_1, \tau_2)\ \&\ (s, p_1, o, \tau_1) \to (s, p_2, o, \tau_2)$. // engagedWith, marriedWith
Definition 8.
Theoretical Analysis on Temporal Patterns
Knowledge graphs exhibit patterns. A structural pattern is a regularity in the graph, e.g., a tree as shown in the middle of Figure 1, that may or may not allow for logical conclusions but may be hard to represent in some embedding methods. A logical pattern represents a rule that allows new facts to be concluded from given facts. For instance, (Charles, marriedWith, Camilla) implies (Camilla, marriedWith, Charles) because marriedWith is symmetric. Embeddings for temporal knowledge graphs must account for temporal facts, including their time components, and express the corresponding temporal patterns. Four kinds of logical patterns — symmetric, inverse, asymmetric, and evolve — are most often considered and studied in existing TKGE models (Chen et al. 2022; Xu et al. 2020). However, their definitions either neglect time information or only consider patterns in which facts happen at the same time. We generalize and go beyond these approaches and consider static temporal patterns and dynamic temporal patterns. If a structural or logical temporal pattern holds regardless of time information, as in traditional knowledge graphs, we call it a static temporal pattern. If a structural or logical temporal pattern represents or draws conclusions using time information, we call it a dynamic temporal pattern. In the following, we formally define several temporal patterns. For simplicity, we only illustrate the case where τ is a time interval; the definitions extend naturally to the case where τ is a time point. Examples for each definition are given after "//".

Static Logical Temporal Patterns
Definition 3. A temporal relation p is symmetric at all points in time iff ∀s, o, τ : (s, p, o, τ) → (o, p, s, τ). // marriedWith
A temporal relation p is anti-symmetric at all points in time iff ∀s, o, τ : (s, p, o, τ) → ¬(o, p, s, τ). // locatedIn
Definition 4. A temporal relation p1 is the inverse of a temporal relation p2 at all points in time iff ∀s, o, τ : (s, p1, o, τ) → (o, p2, s, τ). // advises, advisedBy

Dynamic Logical Temporal Patterns
Definition 5. A temporal relation p is temporally symmetric iff ∀s, o, τ1 : ∃τ2 : (s, p, o, τ1) → (o, p, s, τ2). // consults
A temporal relation p is temporally anti-symmetric iff ∀s, o : ∃τ1 : (s, p, o, τ1) → ∀τ2 ¬(o, p, s, τ2). // arrest
Definition 6. A relation p1 at time τ1 is the temporal inverse of a relation p2 at time τ2 iff ∀s, o : ∃τ1, τ2 : (s, p1, o, τ1) → (o, p2, s, τ2). // invitesToVisit, Visit
Definition 7. A relation p1 evolves into a relation p2 iff ∀s, o : ∃τ1, τ2 : Precedes(τ1, τ2) ∧ (s, p1, o, τ1) → (s, p2, o, τ2). // engagedWith, marriedWith
Definition 8. A relation p is temporary in time iff ∀s, o, τ1 : (s, p, o, τ1) → ∃τ0, τ2 : Precedes(τ0, τ1) ∧ Precedes(τ1, τ2) ∧ ¬(s, p, o, τ0) ∧ ¬(s, p, o, τ2). // worksFor

Modeling Temporal Patterns. We present a theoretical analysis of our method's ability to model the temporal patterns introduced above (see Appendix F for details):
Proposition 1. HGE can model (anti-)symmetry and temporal (anti-)symmetry (Definitions 3 and 5).
Proposition 2. HGE can model inverse and temporal inverse patterns (Definitions 4 and 6).
Proposition 3. HGE can model the evolve pattern (Definition 7).
Proposition 4. HGE can model temporary relations (Definition 8).

Experiments
Experimental Settings
Dataset. To evaluate the effectiveness of the proposed attention-based product space embedding, we perform the link prediction task on four popular temporal knowledge graph benchmark datasets: ICEWS14 (Garcia-Duran, Dumančić, and Niepert 2018), ICEWS05-15 (Garcia-Duran, Dumančić, and Niepert 2018), GDELT (Trivedi et al. 2017), and Wikidata12k (Lacroix, Obozinski, and Usunier 2020). ICEWS14 and ICEWS05-15 are two subsets of the Integrated Conflict Early Warning System (ICEWS) (Lautenschlager, Shellman, and Ward 2015), containing news facts from 2014 and from 2005 to 2015, respectively. The Global Database of Events, Language, and Tone (GDELT) is a large knowledge graph that describes facts about human behavior. We adopt the same data subset as (Gao et al. 2020), covering facts from April 1, 2015 to March 31, 2016. Wikidata12k is a subset of the Wikidata dump (Erxleben et al. 2014). It represents time information τ ∈ T as time intervals [m, n], in which m or n may be empty, referring to the intervals (−∞, n] or [m, ∞). Table 5 summarizes the statistics of the four datasets.

Backbone and Baseline Models. Our proposed model, HGE, aims to generalize complex-space-based TKGE models to an attention-based product space of heterogeneous geometric subspaces. Hence, we choose several state-of-the-art complex-space-based TKGE models as HGE's backbones to validate its effectiveness. TeRo (Xu et al. 2020) defines the evolution of entity embeddings from the initial state to the current time as a rotation in complex vector space. TComplEx and TNTComplEx (Lacroix, Obozinski, and Usunier 2020) model temporal knowledge graph completion as an order-4 tensor completion problem. TLT-KGE (Zhang et al. 2022) models semantic and temporal information as different parts of complex or quaternion space, with complex or quaternion operations exchanging information between the parts. To give a comprehensive overview, we also compare our model with non-complex-space temporal knowledge graph embedding baselines: TTransE (Leblay and Chekol 2018), TA-DistMult (Garcia-Duran, Dumančić, and Niepert 2018), RotateQVS (Chen et al. 2022), BoxTE (Messner, Abboud, and Ceylan 2022), and LCGE (Niu and Li 2023). (We noticed some inconsistent inference issues in LCGE's original code; please refer to Appendix J for a detailed discussion.)

Evaluation Metrics. We adopt the link prediction task to evaluate our proposed model. Link prediction infers the missing entities of incomplete facts. During the test step, we follow the procedure of (Xu et al. 2020) to generate candidate quadruples: from a test quadruple (s, p, o, τ), we replace s with s̄ ∈ E and o with ō ∈ E to obtain the candidates (s, p, ō, τ) ∪ (s̄, p, o, τ). If τ is a time interval [m, n], we sample a time point (appearing in the dataset) uniformly at random from [m, n], as in (Lacroix, Obozinski, and Usunier 2019). When m or n is empty, we set it to the first or last time point of the dataset. All candidate quadruples are ranked by their scores using a time-aware filtering strategy (Goel et al. 2020). We evaluate our models with four metrics: Mean Reciprocal Rank (MRR), the mean of the reciprocals of the predicted ranks of correct quadruples, and Hits@(1/3/10), the percentage of ranks not higher than 1/3/10. For all metrics, higher is better.
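For reference, a minimal sketch (ours; candidate construction and the time-aware filter are omitted for brevity) of how these metrics are computed from the ranks of correct quadruples:

```python
import numpy as np

def rank_of_truth(scores, true_idx):
    """Rank (1 = best) of the correct candidate in a vector of scores."""
    order = np.argsort(-scores)                    # higher score = better
    return int(np.where(order == true_idx)[0][0]) + 1

def link_prediction_metrics(ranks):
    """MRR and Hits@{1,3,10} from the ranks of correct quadruples."""
    ranks = np.asarray(ranks, dtype=float)
    return {"MRR": float((1.0 / ranks).mean()),
            **{f"Hits@{k}": float((ranks <= k).mean()) for k in (1, 3, 10)}}
```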
To ensure a fair comparison, we set the entity and relation embedding dimension sizes as reported in the original papers. For TeRo-based models, we set the dimension d to 500 on all four benchmark datasets. For TComplEx-based, TNTComplEx-based, and TLT-KGE-based models, we set d to 1200, 1200, 1500, and 2000 on ICEWS14, ICEWS05-15, GDELT, and Wikidata12k, respectively. The number of training epochs is set to 200, and we adopt the same regularizer, loss function, and negative sampling size as reported in the original papers. (The code, training details, and appendix are provided at https://github.com/NacyNiko/HGE.)

HGE's Performance Comparison
We evaluate HGE's performance gain on the four datasets. Table 1 shows the performance of the original backbones and of the backbones equipped with HGE on the time-point datasets ICEWS14, ICEWS05-15, and GDELT. From Table 1, we make the following observations: (i) HGE provides significant and consistent improvements over the chosen backbones on all datasets, which verifies the effectiveness of the proposed HGE module. (ii) The proposed method is more effective on the dense dataset GDELT, which provides more instances for each relation-timestamp pair. We conjecture this benefits the temporal-geometric attention mechanism, in which fine-grained geometric attention is influenced by both relational and temporal information. Conversely, ICEWS05-15 is the sparsest dataset; consequently, HGE does not greatly improve the backbones on ICEWS05-15 and even decreases TeRo's performance. (iii) HGE achieves greater performance gains for TNTComplEx and TComplEx than for TLT-KGE. As TLT-KGE already provides interactions between time and relation information in complex numbers, we believe it substitutes for the temporal-relational attention mechanism to some degree. However, Table 7 in the Appendix shows that TNTComplEx+HGE reaches results comparable to TLT-KGE with only half the number of parameters, demonstrating that the proposed temporal-relational attention mechanism combines time and relation information more efficiently.

Model | ICEWS14: MRR H@1 H@3 H@10 | ICEWS05-15: MRR H@1 H@3 H@10 | GDELT: MRR H@1 H@3 H@10
TTransE | 25.5 7.4 – 60.1 | 27.1 8.4 – 61.6 | 11.5 0.0 16.0 31.8
TA-DistMult | 47.7 36.3 – 68.6 | 47.4 34.6 – 72.8 | 20.6 12.4 21.9 36.5
RotateQVS | 59.1 50.7 64.2 75.4 | 63.3 52.9 70.9 81.3 | 27.0 17.5 29.3 45.8
BoxTE (k=2) | 61.5 53.2 66.7 76.7 | 66.4 57.6 72.0 82.2 | 33.9 25.1 36.6 50.7
LCGE | 61.6 53.2 66.7 77.5 | 61.8 51.4 68.1 81.2 | – – – –
TeRo | 56.2 46.8 62.1 73.2 | 58.6 46.9 66.8 79.5 | 23.2 14.5 24.9 30.9
 +HGE | 58.6 49.5 64.5 74.9 | 57.8 45.3 66.5 80.4 | 23.4 14.7 25.2 40.5
 ΔImprove | 4.3% 5.8% 3.9% 1.4% | -1.5% -3.4% -0.1% 1.1% | 0.9% 1.4% 1.2% 31.1%
TComplEx | 61.9 54.2 66.1 76.7 | 66.5 58.3 71.6 81.1 | 34.6 25.9 37.2 51.5
 +HGE | 62.6 54.7 67.2 77.4 | 67.2 59.3 72.0 81.7 | 36.8 27.4 40.1 55.3
 ΔImprove | 1.1% 0.9% 1.7% 0.9% | 1.1% 1.7% 0.6% 0.7% | 5.2% 5.8% 7.8% 7.4%
TNTComplEx | 60.7 51.9 65.9 77.2 | 66.6 58.3 71.8 81.7 | 34.1 25.2 36.8 51.5
 +HGE | 63.0 55.1 67.5 78.0 | 68.1 60.1 72.9 82.9 | 37.1 28.3 40.0 54.1
 ΔImprove | 3.7% 6.2% 2.4% 0.6% | 2.3% 3.1% 1.5% 1.5% | 8.8% 12.3% 8.7% 5.0%
TLT-KGE | 63.0 54.9 67.8 77.7 | 68.6 60.7 73.5 83.1 | 35.6 26.7 38.5 53.2
 +HGE | 63.4 55.0 68.5 78.8 | 68.8 60.8 74.0 83.5 | 37.1 27.7 40.2 55.6
 ΔImprove | 0.6% 0.1% 1.0% 1.4% | 0.3% 0.2% 1.4% 0.5% | 4.2% 3.7% 4.4% 3.0%
Table 1: Link prediction results on ICEWS14, ICEWS05-15, and GDELT ("–": not reported). The best results among all models were marked in bold in the original table, and the best results among models with the same backbone were underlined.

Table 2 shows link prediction results on the time-interval dataset. With HGE, all metrics improve, reflecting that HGE can boost the performance of backbones on different kinds of TKGs.

Model | [a, b] | [a, ∞) | (−∞, b]
TNTComplEx | 27.4 | 37.8 | 51.7
 +HGE | 28.4 | 37.8 | 57.0
TLT-KGE | 27.0 | 36.0 | 48.0
 +HGE | 27.4 | 37.7 | 51.7
Table 2: Link prediction results (MRR) on Wikidata12k.
Ablation Study
We conduct ablation experiments on the TNTComplEx backbone to investigate the effectiveness of each component. From Table 3, we make the following observations: (i) Our proposed subspace integration strategy achieves higher performance than the stacking strategy introduced by (Han et al. 2020). We find that the individual losses of the subspaces in TNTComplEx+stack become unbalanced during training; we conjecture that the model may pay too much attention to optimizing geometric subspaces that are unsuitable for certain facts, hampering further improvement. (ii) The temporal-relational attention mechanism contributes a larger performance gain on GDELT. GDELT is dense and has more facts per relation type and timestamp than the other datasets; we conjecture this benefits the fine-grained geometric attention mechanism, in which the attention weights are influenced by both relation type and timestamp. (iii) The temporal-geometric attention mechanism is more effective on the ICEWS14 and ICEWS05-15 datasets. Compared to GDELT, they contain more relation types and thus provide a wider variety of relational structural patterns. This illustrates the importance of introducing heterogeneous geometric spaces in HGE to represent the diverse structures in temporal knowledge graphs.

Model | ICEWS14 | ICEWS05-15 | GDELT
TNTComplEx | 60.7 | 66.6 | 34.1
 +stack | 62.0 | 67.3 | 35.6
 +tra | 62.0 | 67.4 | 36.9
 +tga | 62.6 | 67.5 | 36.4
 +HGE | 63.0 | 68.1 | 37.1
Table 3: MRR performance of HGE components. +tra: only the temporal-relational attention mechanism; +tga: only the temporal-geometric attention mechanism; +stack: subspaces integrated with the stacking strategy of (Han et al. 2020).

Case Study
The intent to cooperate relation forms a temporal-star structure in TKGs, as a head entity can express this attitude toward multiple tail entities.

[Figure 3: A case study of the HGE model for the query (Barack Obama, intent to cooperate, ?, 153). Predictions per subspace: Complex space, Angela Merkel (β_C = 0.32); Split-complex space, Japan (β_S = 0.27); Dual space, Poland (β_D = 0.41). Some entities connected to France by relation r1, which form a temporal star structure, are omitted for brevity; r1 stands for the intent to cooperate relation, r2 for the consult relation, and time information is shown as ids.]

For the query (Barack Obama, intent to cooperate, ?, 153) in Figure 3, the Complex space predicts the wrong answer Angela Merkel, as it supposes that a symmetric counterpart exists for (Angela Merkel, r1, Barack Obama, 105). The Split-complex space predicts the wrong answer Japan, forming a hierarchical path between Angela Merkel, Barack Obama, and Japan. The Dual space predicts the correct answer Poland, which has been an object entity in the temporal star structure formed by France. Given that Barack Obama consulted Japan recently, HGE chooses the correct answer Poland with the help of the temporal-geometric attention mechanism.

Related Works
TKGE models incorporate time information in different ways.
TTransE (Leblay and Chekol 2018) and TA-DistMult (Garcia-Duran, Dumančić, and Niepert 2018) insert time information into their score functions as an additional element. TeRo (Xu et al. 2020) defines the temporal evolution of entity embeddings as a rotation from the initial time to the current time in complex vector space. T(NT)ComplEx (Lacroix, Obozinski, and Usunier 2019) is a semantic matching approach that models temporal knowledge graph completion as an order-4 tensor completion problem. TeLM (Xu et al. 2021) also performs fourth-order tensor factorization on temporal knowledge graphs but adds a bias component between neighboring temporal embeddings in the temporal regularizer; moreover, it adopts multivector embeddings for entities, relations, and timestamps. Inspired by TeRo (Xu et al. 2020), RotateQVS (Chen et al. 2022) embeds entities in quaternion space and represents temporal changes as rotations. BoxTE (Messner, Abboud, and Ceylan 2022) extends BoxE (Abboud et al. 2020) by including relation-specific time embeddings. TLT-KGE (Zhang et al. 2022) models semantic and temporal information as different parts of complex or quaternion space, with complex or quaternion operations exchanging information between the parts. LCGE (Niu and Li 2023) uses temporal rules to regularize entity embeddings and adopts commonsense reasoning as an extra learning task.
Most of the reviewed TKGE approaches model temporal patterns using a single geometry and do not offer multiple geometries to capture diverse temporal patterns. Several manifold-based TKGE models have been proposed (Montella, Barahona, and Heinecke 2021; Han et al. 2020). (Montella, Barahona, and Heinecke 2021) extends AttH (Chami et al. 2020) to temporal KGEs, using a hyperbolic manifold as the embedding space, but it likewise uses only a single geometry. (Han et al. 2020) embeds TKGs into a product space of several manifolds to model multiple structural patterns; however, it does not select the most suitable manifold according to the structural patterns present in a TKG but chooses it manually.

Conclusion
We presented HGE, a new temporal KGE model that utilizes multiple geometries. HGE extends state-of-the-art TKGE models from the Complex space to a product space that embeds temporal facts in Complex, Split-complex, and Dual subspaces via two temporal attention mechanisms. The temporal-relational attention mechanism captures relations with varying change frequencies, and the temporal-geometric attention mechanism fuses information from different geometries according to the captured relational and temporal information. Extensive experiments on benchmark datasets validate that our model uniformly improves several state-of-the-art complex-space-based TKGE models. In the future, we plan to include more types of heterogeneous geometric spaces.

Acknowledgments
This research was funded by the German Research Foundation (DFG) via grant agreement number STA 572/18-1 (Open Argument Mining) and the German Federal Ministry for Economic Affairs and Climate Action under Grant Agreement Number 01MK20008F (Service-Meister). We would also like to thank Daniel Hernández, Le Chen, Shutong Feng, and Yaxi Hu for their valuable advice.

References
Abboud, R.; Ceylan, I.; Lukasiewicz, T.; and Salvatori, T. 2020. BoxE: A box embedding model for knowledge base completion. Advances in Neural Information Processing Systems, 33: 9649–9661.
Allen, J. F. 1983. Maintaining knowledge about temporal intervals. Communications of the ACM, 26(11): 832–843.
Angeles, J. 1998. The application of dual algebra to kinematic analysis. In Computational Methods in Mechanical Systems, 3–32. Springer.
Balazevic, I.; Allen, C.; and Hospedales, T. 2019. Multi-relational Poincaré graph embeddings. Advances in Neural Information Processing Systems, 32.
Chami, I.; Wolf, A.; Juan, D.-C.; Sala, F.; Ravi, S.; and Ré, C. 2020. Low-Dimensional Hyperbolic Knowledge Graph Embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 6901–6914.
Chen, K.; Wang, Y.; Li, Y.; and Li, A. 2022. RotateQVS: Representing Temporal Information as Rotations in Quaternion Vector Space for Temporal Knowledge Graph Completion. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 5843–5857.
Erxleben, F.; Günther, M.; Krötzsch, M.; Mendez, J.; and Vrandečić, D. 2014. Introducing Wikidata to the linked data web. In The Semantic Web – ISWC 2014: 13th International Semantic Web Conference, Riva del Garda, Italy, October 19-23, 2014. Proceedings, Part I 13, 50–65. Springer.
Gao, C.; Sun, C.; Shan, L.; Lin, L.; and Wang, M. 2020. Rotate3D: Representing relations as rotations in three-dimensional space for knowledge graph embedding. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 385–394.
Garcia-Duran, A.; Dumančić, S.; and Niepert, M. 2018. Learning Sequence Encoders for Temporal Knowledge Graph Completion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 4816–4821.
Goel, R.; Kazemi, S. M.; Brubaker, M.; and Poupart, P. 2020. Diachronic embedding for temporal knowledge graph completion. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 3988–3995.
Han, Z.; Chen, P.; Ma, Y.; and Tresp, V. 2020. DyERNIE: Dynamic Evolution of Riemannian Manifold Embeddings for Temporal Knowledge Graph Completion. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 7301–7316.
Harkin, A. A.; and Harkin, J. B. 2004. Geometry of generalized complex numbers. Mathematics Magazine, 77(2): 118–129.
Helzer, G. 2000. Special relativity with acceleration. The American Mathematical Monthly, 107(3): 219–237.
Hogan, A.; Blomqvist, E.; Cochez, M.; de Melo, G.; Gutierrez, C.; Kirrane, S.; Labra Gayo, J. E.; Navigli, R.; Neumaier, S.; Ngonga Ngomo, A.-C.; et al. 2021. Knowledge Graphs. ACM Computing Surveys, 54(4): 1–37.
Lacroix, T.; Obozinski, G.; and Usunier, N. 2019. Tensor Decompositions for Temporal Knowledge Base Completion. In International Conference on Learning Representations.
Lacroix, T.; Obozinski, G.; and Usunier, N. 2020. Tensor Decompositions for Temporal Knowledge Base Completion.
Lautenschlager, J.; Shellman, S.; and Ward, M. 2015. ICEWS event aggregations. Harvard Dataverse, 3(595): 28.
Leblay, J.; and Chekol, M. W. 2018. Deriving validity time in knowledge graph. In Companion Proceedings of The Web Conference 2018, 1771–1776.
Messner, J.; Abboud, R.; and Ceylan, I. I. 2022. Temporal knowledge graph completion using box embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 7779–7787.
Montella, S.; Barahona, L. M. R.; and Heinecke, J. 2021. Hyperbolic Temporal Knowledge Graph Embeddings with Relational and Time Curvatures. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 3296–3308.
Niu, G.; and Li, B. 2023. Logic and Commonsense-Guided Temporal Knowledge Graph Completion. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 4569–4577.
Singh, I.; Kaur, N.; Gaur, G.; and Mausam. 2023. NeuSTIP: A Novel Neuro-Symbolic Model for Link and Time Prediction in Temporal Knowledge Graphs. arXiv:2305.11301.
Trivedi, R.; Dai, H.; Wang, Y.; and Song, L. 2017. Know-Evolve: Deep temporal reasoning for dynamic knowledge graphs. In International Conference on Machine Learning, 3462–3471. PMLR.
Xu, C.; Chen, Y.-Y.; Nayyeri, M.; and Lehmann, J. 2021. Temporal knowledge graph completion using a linear temporal regularizer and multivector embeddings. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2569–2578.
Xu, C.; Nayyeri, M.; Alkhoury, F.; Yazdi, H. S.; and Lehmann, J. 2020. TeRo: A Time-aware Knowledge Graph Embedding via Temporal Rotation. In Proceedings of the 28th International Conference on Computational Linguistics, 1583–1593.
Zhang, F.; Zhang, Z.; Ao, X.; Zhuang, F.; Xu, Y.; and He, Q. 2022. Along the Time: Timeline-traced Embedding for Temporal Knowledge Graph Completion. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2529–2538.
Cross-Domain Contrastive Learning for Time Series Clustering
Furong Peng1,2, Jiachen Luo1,2, Xuan Lu3*, Sheng Wang4, Feijiang Li1,2
1 Institute of Big Data Science and Industry, Shanxi University
2 School of Computer and Information Technology, Shanxi University
3 College of Physics and Electronic Engineering, Shanxi University
4 School of Automation, Zhengzhou University of Aeronautics
[email protected], [email protected], [email protected], [email protected], [email protected]
*Corresponding author
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Most deep learning-based time series clustering models concentrate on data representation in a process separate from clustering, so the clustering loss cannot guide feature extraction. Moreover, most methods analyze data only in the temporal domain, disregarding the potential of the frequency domain. To address these challenges, we introduce a novel end-to-end Cross-Domain Contrastive learning model for time series Clustering (CDCC). First, it integrates the clustering process and feature extraction using contrastive constraints at both the cluster and instance levels. Second, the data are encoded simultaneously in the temporal and frequency domains, leveraging contrastive learning to enhance within-domain representations. Third, cross-domain constraints are proposed to align the latent representations and category distributions across domains. With the above strategies, CDCC not only achieves end-to-end output but also effectively integrates frequency-domain information. Extensive experiments and visualization analyses on 40 time series datasets from UCR demonstrate the superior performance of the proposed model.

Introduction
Data clustering, a technique for exploring data structure, has attracted significant attention (Li et al. 2018, 2019). Unlike image or text processing, similarity measurement for time series must fully account for temporal variation, especially when the data are distorted or shifted. Various measures have been proposed, such as Dynamic Time Warping (DTW) (Wang et al. 2018), Longest Common Subsequence (LCSS) (Górecki 2014), and the Pearson correlation coefficient (Rodgers and Nicewander 1988). However, these methods have limitations for long-term series clustering in terms of robustness to abnormalities, sensitivity, or computational complexity.

[Figure 1: The framework of cross-domain contrastive learning for time series clustering. Time series are augmented and encoded by a time encoder and, after an FFT, a frequency encoder; instance-wise and cluster-wise projectors form positive and negative pairs for contrastive learning within the temporal domain, within the frequency domain, and across domains.]

Along with similarity measurement, feature extraction is also crucial in time series clustering. For example, Zerveas et al. (Zerveas et al. 2021) utilized a transformer to extract features in an unsupervised manner, achieving superior results compared to some supervised methods. Tiano et al. (Tiano, Bonifati, and Ng 2021) proposed extracting statistical features for clustering. However, the extracted features may not be beneficial for clustering tasks (Q1) if representation learning is separated from the clustering process. Ma et al. (Ma et al. 2021a) proposed an unsupervised model for clustering incomplete time series by integrating the K-means objective into an encoder-decoder network; this integration enhances the quality of clustering.
Nevertheless, these methods analyze time series only in the temporal domain and ignore frequency-domain information (Q2), which captures periodic patterns better and is more resilient to noise and outliers (Aghabozorgi, Seyed Shirkhorshidi, and Ying Wah 2015). To address these issues, a Cross-Domain Contrastive learning model for time series Clustering (CDCC) is proposed in this paper. Initially, the Fast Fourier Transform (FFT) (Brigham and Morrow 1967) is utilized to derive the frequency spectrum, and augmentation techniques are applied to both the temporal and spectral data. Then, encoding networks in both domains are used for feature extraction. Instance-level and cluster-level contrastive constraints are leveraged to achieve end-to-end clustering in the temporal and frequency domains. Through these intra-domain contrastive constraints, CDCC optimizes representations and category distributions for each domain separately. Furthermore, we employ a cross-domain contrastive constraint, at both the instance and cluster levels, to align the spectral representation with the temporal-domain representation, so as to capture waveform characteristics and periodicity in the temporal-domain representation. Finally, the clustering results are generated from the category assignments of the cluster-level contrast in the temporal domain, because most time series are labeled in this domain. In summary, the main contributions of this work are as follows:
• Proposing a cross-domain contrastive time series clustering framework that incorporates information from the temporal and frequency domains by enabling comparison of representations within and between both domains.
• Adopting cluster-level constraints within and across domains to align cluster distributions and output sample categories from the temporal domain.
• Conducting extensive experiments on 40 time series datasets, demonstrating that the proposed model achieves superior clustering performance.

Related Work
Deep Time Series Clustering
Deep clustering has demonstrated promising clustering quality via advanced data representation. These methods can be categorized into multi-step clustering and joint clustering, based on their pipelines (Alqahtani et al. 2021).

Multi-step Clustering. Multi-step clustering involves extracting time series representations or features, followed by traditional clustering algorithms such as K-means or hierarchical clustering. For instance, Chen et al. (Chen, Krishnan, and Sontag 2022) used an RNN to learn encoded representations of time series, which were then clustered using K-means. Baytas et al. (Baytas et al. 2017) employed an improved LSTM to capture long-term dependencies in patient data for clustering purposes. CNNs have also been used to convert time series into visual images, enabling shape feature extraction and time series clustering (Han et al. 2019). However, these methods are often domain-specific and lack universality. Moreover, the separation of feature extraction and clustering prevents the clustering loss from effectively guiding feature extraction (Ma et al. 2021b).

Joint Clustering. Joint clustering optimizes feature extraction and clustering simultaneously to improve their compatibility. For example, Zhang et al.
(Zhang and Sun 2023) learned representations and class labels using multivariate shapelets of various lengths, under the assumption that time series in homogeneous clusters share similar subsequences. Ma et al. (Ma et al. 2021b) proposed a self-supervised time series clustering network that optimizes feature extraction and clustering iteratively. Another approach by Ma et al. (Ma et al. 2019) aimed to minimize clustering errors by using a discriminator to align the distribution of interpolated values with true values during feature extraction. In this paper, we propose a cross-domain contrastive clustering method that belongs to the joint clustering type. It achieves end-to-end joint clustering through instance-level and cluster-level contrastive constraints. The main distinction is that we learn the representation from both the temporal and frequency domains, enhancing representation quality through cross-domain contrastive constraints. The entire process can be optimized by gradient back-propagation to obtain a superior model.

Contrastive Learning
Contrastive learning, a self-supervised learning paradigm, has become popular in natural language processing, computer vision, and recommendation systems (Zhang et al. 2021; Chen and He 2021; Xie et al. 2022). The core idea is to learn data representations or features by modeling the similarity and dissimilarity between samples. In the context of time series, contrastive learning has been explored in many ways. Franceschi et al. (Franceschi, Dieuleveut, and Jaggi 2019) proposed an unsupervised representation learning model for multivariate time series using a time-based sampling strategy and a triplet loss to ensure that similar time series have similar representations. To consider neighborhood information, Tonekaboni et al. (Tonekaboni, Eytan, and Goldenberg 2021) assumed that signals from neighboring regions should have distributions distinguishable from non-neighboring signals, and learned representations with a debiased contrastive objective. Eldele et al. (Eldele et al. 2021) introduced an unsupervised time series contrastive learning framework (TSTCC) based on temporal and contextual contrasting to capture contextual representations. Additionally, Yue et al. (Yue et al. 2022) developed a general framework for contrasting time series across instances and time scales. However, these methods focus on contrastive learning in the temporal domain and ignore the frequency domain. In this paper, we leverage the FFT, a time-frequency transform from signal processing, to obtain frequency-domain information, enabling cross-domain comparison of time series between the temporal and frequency domains to incorporate crucial information into the representation.

Cross-Domain Contrastive Clustering
In this section, we introduce the proposed CDCC method, which consists of three main components: data augmentation, encoding networks, and contrastive constraints, incorporating both the temporal and frequency domains to enhance clustering performance for time series data. The model framework is illustrated in Figure 1.

Data Augmentation
Temporal Domain Data Augmentation. Given a time series dataset X = {x_i}_{i=1}^n, where x_i denotes the i-th time series, random mixing operations are applied using the library \mathcal{T} of time-domain augmentation methods, including jittering, scaling, and permutation, to generate the augmented data \tilde{X}^t = {\tilde{x}^t_i}_{i=1}^n.
To distinguish the domains, we denote the original temporal-domain data with superscript t (e.g., x^t_i), the corresponding frequency-domain data with superscript f (e.g., x^f_i), and augmented data with a tilde (e.g., \tilde{x}^t_i, \tilde{x}^f_i). The temporal data augmentation process can be defined as:

T^a(x^t_i, \alpha),\; T^b(x^t_i, \beta),\; T^c(x^t_i, \gamma) \sim \mathcal{T}, \qquad \tilde{x}^t_i = T^j(x^t_i),\; j \in \{a, b, c\} \quad (1)

where T^a, T^b, and T^c denote the jittering, scaling, and permutation operations, and \alpha, \beta, \gamma are the corresponding jittering, scaling, and permutation rates. We set \alpha = 0.8, \beta = 1.1, and \gamma = 0.8; extensive experiments show that these parameter settings demonstrate favorable results.

Frequency Domain Data Augmentation. Most existing augmentation methods for time series enhance the data in the temporal domain, while few target the frequency domain. In this study, we introduce a frequency-domain augmentation technique inspired by the method proposed by Zhang et al. (Zhang et al. 2022). First, we employ the FFT to convert temporal data into frequency spectra, X^f = {x^f_i}_{i=1}^n, where

x^f_i = \mathrm{FFT}(x^t_i) \quad (2)

Subsequently, random mixing is applied using a library \mathcal{F} of frequency-domain augmentation methods, comprising the addition and removal of frequency components. To add frequency components, we first compute the maximum amplitude A_m of the spectrum; we then randomly select \theta frequency components with amplitudes smaller than \omega A_m and increase their amplitudes to \omega A_m, where \omega and \theta are a predefined scaling factor and perturbation rate, respectively. To remove frequency components, a masking operation randomly discards components of the spectrum with masking rate \epsilon. Note that excessive perturbation of the spectrum may cause significant changes in the temporal domain, so excessively large values of \omega, \theta, or \epsilon should be avoided; in our experiments we set \omega = 0.1, \theta = 0.1, and \epsilon = 0.1. The frequency-domain augmentation process can be summarized as:

F^a(x^f_i, \omega, \theta),\; F^b(x^f_i, \epsilon) \sim \mathcal{F}, \qquad \tilde{x}^f_i = F^j(x^f_i),\; j \in \{a, b\} \quad (3)

where F^a and F^b refer to adding and removing frequency components, respectively. The frequency-domain augmented data are denoted \tilde{X}^f = {\tilde{x}^f_i}_{i=1}^n.
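As an illustration, here is a compact sketch (ours, not the authors' implementation; the exact sampling details behind the rates α and γ, and the segment count in the permutation, are assumptions) of the temporal (Eq. 1) and frequency (Eqs. 2-3) augmentations:

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter(x, alpha=0.8):
    """T^a: additive Gaussian noise scaled by the jittering rate alpha."""
    return x + rng.normal(0.0, alpha * x.std(), size=x.shape)

def scale(x, beta=1.1):
    """T^b: amplitude scaling by the scaling rate beta."""
    return beta * x

def permute(x, n_seg=4):
    """T^c: split into segments and shuffle them (segment count is our choice)."""
    segs = np.array_split(x, n_seg)
    rng.shuffle(segs)
    return np.concatenate(segs)

def augment_time(x):
    """Eq. (1): draw one operation from the temporal library T."""
    return rng.choice([jitter, scale, permute])(x)

def augment_freq(x, omega=0.1, theta=0.1, eps=0.1):
    """Eqs. (2)-(3): perturb the FFT spectrum by adding or removing components."""
    spec = np.fft.rfft(x)                      # Eq. (2): x^f = FFT(x^t)
    amp = np.abs(spec)
    if rng.random() < 0.5:                     # F^a: lift weak components to omega * A_m
        weak = np.flatnonzero(amp < omega * amp.max())
        if weak.size:
            pick = rng.choice(weak, size=max(1, int(theta * weak.size)), replace=False)
            spec[pick] *= omega * amp.max() / np.maximum(amp[pick], 1e-12)
    else:                                      # F^b: mask components with rate eps
        spec[rng.random(spec.size) < eps] = 0.0
    return spec
```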
Encoding Network
The encoding network plays a pivotal role in our deep learning model, directly influencing its ability to capture the structure of time series data. To leverage the distinct characteristics of the temporal and frequency domains, we employ a bidirectional long short-term memory network (BiLSTM) (Kong et al. 2023) and a three-layer convolutional block as the encoders for the temporal and frequency domains, respectively. We choose BiLSTM because it considers both past and future information within the time series and effectively extracts abstract features across different time periods. Feeding X^t / \tilde{X}^t into the BiLSTM yields the corresponding temporal-domain representations H^t / \tilde{H}^t:

H^t = \mathrm{BiLSTM}(X^t), \qquad \tilde{H}^t = \mathrm{BiLSTM}(\tilde{X}^t) \quad (4)

We employ a three-layer convolutional block as the spectrum encoder. Each convolutional block consists of a convolutional layer (Cv1d), a batch normalization layer (BN1d), a rectified linear unit (relu) activation, and a one-dimensional max pooling layer (MaxPool1d). After the first convolutional block, a dropout layer follows the max pooling layer to randomly deactivate pooled results, mitigating overfitting. The three-layer convolutional block CB_3(\cdot) is defined as:

CB_3(x^f_i) = CB(CB(\mathrm{Dropout}(CB(x^f_i)))), \qquad CB(x) = \mathrm{MaxPool1d}(\mathrm{relu}(\mathrm{BN1d}(\mathrm{Cv1d}(x)))) \quad (5)

where CB(x) is a single convolutional block. Feeding X^f / \tilde{X}^f into CB_3(\cdot) yields the frequency-domain representations H^f / \tilde{H}^f:

H^f = CB_3(X^f), \qquad \tilde{H}^f = CB_3(\tilde{X}^f) \quad (6)

Contrastive Constraints
Following the optimization strategy of contrastive learning, contrastive constraints aim to maximize the similarity between the representations of an original sample and its augmentation while enhancing the discriminability of different samples' representations. We adopt InfoNCE as the contrastive loss function, as it effectively preserves the underlying data clusters (Parulekar et al. 2023). This type of constraint, acting on individual samples, is referred to as the instance-level contrastive constraint in this paper. Additionally, drawing inspiration from the contrastive clustering model (Li et al. 2021), we classify sample representations to obtain pseudo-labels and use them to construct cluster-level contrastive constraints, which facilitate the aggregation of similar samples and help the model learn more discriminative class-level representations. The instance-level and cluster-level contrastive losses are discussed below.

The instance-level loss function does not act directly on the encoding results {H^t, \tilde{H}^t, H^f, \tilde{H}^f}, for the sake of the model's robustness; instead, it operates on the transformed instance-level representations {Z^t, \tilde{Z}^t, Z^f, \tilde{Z}^f}. Specifically, let z^I_i and \tilde{z}^I_i denote the i-th rows of the original view Z^I and its augmented view \tilde{Z}^I in the temporal or frequency domain. In a training set of size n, each sample representation z^I_i is paired positively with its corresponding augmented view \tilde{z}^I_i and negatively with the other samples. The instance-level contrastive loss for z^I_i is:

\mathcal{L}_{z^I_i} = -\log \frac{\exp(s(z^I_i, \tilde{z}^I_i)/\tau^I)}{\sum_{j=1, j\neq i}^{n} \big[\exp(s(z^I_i, z^I_j)/\tau^I) + \exp(s(z^I_i, \tilde{z}^I_j)/\tau^I)\big]}, \qquad z^I_i = g^I_\phi(h_i),\; \tilde{z}^I_i = g^I_\phi(\tilde{h}_i),\; g^I_\phi(h) = \mathrm{normalize}(\mathrm{MLP}_2(h)) \quad (7)

where \mathrm{MLP}_2(\cdot) is a two-layer perceptron, \mathrm{normalize}(\cdot) is a normalization operation, and \tau^I denotes the instance-level temperature parameter. s(z_i, \tilde{z}_i) is the cosine similarity between samples z_i and \tilde{z}_i:

s(z_i, \tilde{z}_i) = \frac{z_i^{\mathsf{T}} \tilde{z}_i}{\lVert z_i \rVert \, \lVert \tilde{z}_i \rVert} \quad (8)

Considering the symmetry between the original view z^I_i and the augmented view \tilde{z}^I_i, the instance-level contrastive loss is:

\mathcal{L}_{ins}(Z^I, \tilde{Z}^I) = \frac{1}{2n} \sum_{i=1}^{n} \big(\mathcal{L}_{z^I_i} + \mathcal{L}_{\tilde{z}^I_i}\big) \quad (9)

where \mathcal{L}_{ins}(Z^I, \tilde{Z}^I) can be applied in both the temporal and frequency domains.
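For concreteness, a PyTorch-style sketch (ours, not the authors' release) of the symmetric instance-level loss of Equations 7-9, treating each sample and its augmented view as the positive pair within a batch:

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(z, z_aug, tau=0.5):
    """Symmetric instance-level InfoNCE (Eqs. 7-9) over a batch.
    z, z_aug: (n, d) projections of the original and augmented views;
    normalization is handled inside the cosine similarity."""
    n = z.size(0)
    reps = torch.cat([z, z_aug], dim=0)                                   # (2n, d)
    sim = F.cosine_similarity(reps.unsqueeze(1), reps.unsqueeze(0), dim=2) / tau
    sim.fill_diagonal_(float("-inf"))                                     # drop self-pairs
    pos = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])         # i <-> i+n
    return F.cross_entropy(sim, pos)
```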
The cluster-level constraint is likewise not applied directly to the encoder output, but to the clustering results. Let Z^C_{\cdot,i} be the i-th column of the category assignment matrix Z^C \in \mathbb{R}^{n \times k} (n is the number of samples, k the number of clusters; Z^C can be Z^{Ct} or Z^{Cf}), and similarly \tilde{Z}^C_{\cdot,i} for the augmented data. The cluster-level contrastive loss of Z^C_{\cdot,i} is:

\mathcal{L}_{Z^C_{\cdot,i}} = -\log \frac{\exp(s(Z^C_{\cdot,i}, \tilde{Z}^C_{\cdot,i})/\tau^C)}{\sum_{j=1, j\neq i}^{k} \big[\exp(s(Z^C_{\cdot,i}, \tilde{Z}^C_{\cdot,j})/\tau^C) + \exp(s(Z^C_{\cdot,i}, Z^C_{\cdot,j})/\tau^C)\big]}, \qquad z^C_i = g^C_\phi(h_i),\; \tilde{z}^C_i = g^C_\phi(\tilde{h}_i),\; g^C_\phi(h) = \mathrm{softmax}(\mathrm{MLP}_2(h)) \quad (10)

where g^C_\phi is computed with a two-layer perceptron followed by a softmax classification function, and \tau^C is the cluster-level temperature parameter. To prevent degenerate solutions during clustering, a cross-entropy constraint is introduced:

\mathcal{L}_{ce} = -\sum_{i=1}^{k} \big(P^C_i \log P^C_i + \tilde{P}^C_i \log \tilde{P}^C_i\big) \quad (11)

where P^C_i = \sum_{j=1}^{n} Z^C_{j,i}/n and \tilde{P}^C_i = \sum_{j=1}^{n} \tilde{Z}^C_{j,i}/n. Through this constraint, the occurrence of empty clusters can be prevented. Considering the symmetry of the original and augmented views, the cluster-level contrastive loss is:

\mathcal{L}_{cls}(Z^C, \tilde{Z}^C) = \frac{1}{2n} \sum_{i=1}^{k} \big(\mathcal{L}_{Z^C_{\cdot,i}} + \mathcal{L}_{\tilde{Z}^C_{\cdot,i}}\big) + \mathcal{L}_{ce} \quad (12)

Cross-Domain Contrastive Constraints
To fuse information between the temporal and frequency domains, instance-level and cluster-level contrastive constraints are first applied to the two domains separately; a third, cross-domain contrastive constraint is then established for information fusion. Applying Equations 9 and 12 to the temporal-domain representations {Z^{It}, \tilde{Z}^{It}, Z^{Ct}, \tilde{Z}^{Ct}} and the frequency-domain representations {Z^{If}, \tilde{Z}^{If}, Z^{Cf}, \tilde{Z}^{Cf}} gives the two intra-domain loss functions:

\mathcal{L}_t = \mathcal{L}_{ins}(Z^{It}, \tilde{Z}^{It}) + \mathcal{L}_{cls}(Z^{Ct}, \tilde{Z}^{Ct}) \quad (13)
\mathcal{L}_f = \mathcal{L}_{ins}(Z^{If}, \tilde{Z}^{If}) + \mathcal{L}_{cls}(Z^{Cf}, \tilde{Z}^{Cf}) \quad (14)

Given that the frequency-domain representation of any sample is derived from its temporal-domain counterpart, the representations of the same sample in different domains should exhibit analogous structural characteristics. We therefore apply instance-level and cluster-level contrastive constraints to the augmented samples across domains. Specifically, the cross-domain contrastive loss is:

\mathcal{L}_{tf} = \mathcal{L}_{ins}(\tilde{Z}^{It}, \tilde{Z}^{If}) + \mathcal{L}_{cls}(\tilde{Z}^{Ct}, \tilde{Z}^{Cf}) \quad (15)

It is essential to note that this cross-domain constraint is applied only to the augmented samples, excluding the original data. In our experiments, applying the cross-domain constraint to the original data often led to overfitting: because the human labels are assigned in the temporal domain, integrating too much frequency-domain information into the temporal domain deteriorates clustering quality. The final loss combines the three losses above:

\mathcal{L} = \lambda(\mathcal{L}_t + \mathcal{L}_f) + (1 - \lambda)\mathcal{L}_{tf} \quad (16)

where \lambda balances the intra-domain and cross-domain constraints; we set it to 0.5 in our experimental setup. Finally, we adopt the Adam optimizer to optimize the proposed framework.

Clustering
CDCC integrates representation learning and the clustering process, and the cluster-level representation Z^c serves as the basis for the category assignments Y:

Y = \arg\max\big(g^C_\phi(\mathrm{BiLSTM}(X))\big) \quad (17)

where g^C_\phi(\cdot) is the cluster-level mapping function in the temporal domain. Time series data are typically labeled based on temporal information, so we use the temporal-domain category output as the final result; if frequency-domain information were used for labeling, one could employ the frequency-domain output instead.
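Building on `instance_contrastive_loss` above, a sketch (ours) of the cluster-level loss and the combined objective. For the balance regularizer of Equation 11 we use the conventional negative-entropy form from contrastive clustering (Li et al. 2021), since minimizing it is what keeps clusters from emptying; the sign convention is our assumption:

```python
def cluster_contrastive_loss(zc, zc_aug, tau=1.0):
    """Cluster-level contrast (Eqs. 10-12) over the k assignment columns,
    plus a balance term in the spirit of Eq. 11."""
    loss = instance_contrastive_loss(zc.t(), zc_aug.t(), tau)
    p = zc.mean(dim=0).clamp_min(1e-12)          # cluster frequencies P^C
    q = zc_aug.mean(dim=0).clamp_min(1e-12)
    return loss + (p * p.log()).sum() + (q * q.log()).sum()

def cdcc_loss(zi_t, zi_ta, zi_f, zi_fa, zc_t, zc_ta, zc_f, zc_fa, lam=0.5):
    """Eq. (16): combine intra-domain (Eqs. 13-14) and cross-domain (Eq. 15) losses."""
    l_t = instance_contrastive_loss(zi_t, zi_ta) + cluster_contrastive_loss(zc_t, zc_ta)
    l_f = instance_contrastive_loss(zi_f, zi_fa) + cluster_contrastive_loss(zc_f, zc_fa)
    l_tf = instance_contrastive_loss(zi_ta, zi_fa) + cluster_contrastive_loss(zc_ta, zc_fa)
    return lam * (l_t + l_f) + (1.0 - lam) * l_tf
```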
Experiments
Experimental Setup
Dataset and Evaluation Metrics. To validate the effectiveness of the proposed model, experiments were conducted on 40 time series datasets from the UCR archive (Dau et al. 2019). The statistical information of the datasets is presented in the Appendix. The training and testing sets from UCR were merged for evaluation. Normalized Mutual Information (NMI) and Rand Index (RI) are used as metrics (Li, Qian, and Wang 2021; Aghabozorgi, Seyed Shirkhorshidi, and Ying Wah 2015). (Dataset abbreviations: DSR: DiatomSizeReduction, DPOAG: DistalPhalanxOutlineAgeGroup, DPTW: DistalPhalanxTW, EOGVS: EOGVerticalSignal, GPMVF: GunPointMaleVersusFemale, GPOVY: GunPointOldVersusYoung, Large.Kit.App: LargeKitchenAppliances, MPOAG: MiddlePhalanxOutlineAgeGroup, MPTW: MiddlePhalanxTW, PAwP: PigAirwayPressure, PAP: PigArtPressure, PPTW: ProximalPhalanxTW, SHMC2: SemgHandMovementCh2, S.C.: SyntheticControl, TS1: ToeSegmentation1, WS: WordSynonyms.)
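As a reference, a small sketch (ours) of the two evaluation metrics using scikit-learn's implementations:

```python
from sklearn.metrics import normalized_mutual_info_score, rand_score

def clustering_metrics(y_true, y_pred):
    """NMI and Rand Index (RI), the two metrics used in the evaluation."""
    return {"NMI": normalized_mutual_info_score(y_true, y_pred),
            "RI": rand_score(y_true, y_pred)}
```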
Dataset | NMI: TSTCC TST FeatTS STCN R-Clust TCGAN CDCC | RI: TSTCC TST FeatTS STCN R-Clust TCGAN CDCC
ACSF1 | 0.477 0.491 0.364 0.314 0.545 0.331 0.557 | 0.820 0.742 0.680 0.760 0.867 0.562 0.884
Adiac | 0.527 0.609 0.450 0.531 0.708 0.536 0.567 | 0.937 0.946 0.911 0.891 0.956 0.930 0.953
ArrowHead | 0.270 0.324 0.326 0.324 0.332 0.288 0.310 | 0.656 0.677 0.695 0.692 0.661 0.621 0.705
Beef | 0.295 0.283 0.277 0.173 0.270 0.290 0.378 | 0.679 0.671 0.612 0.695 0.670 0.639 0.770
Car | 0.286 0.254 0.350 0.302 0.562 0.243 0.387 | 0.701 0.687 0.738 0.717 0.792 0.679 0.754
CBF | 0.578 0.673 0.767 0.498 0.947 0.452 0.993 | 0.773 0.821 0.907 0.768 0.984 0.741 0.998
CricketX | 0.360 0.276 0.272 0.133 0.323 0.291 0.457 | 0.871 0.864 0.736 0.833 0.864 0.863 0.895
CricketY | 0.365 0.334 0.258 0.156 0.360 0.320 0.439 | 0.873 0.868 0.847 0.839 0.871 0.862 0.889
CricketZ | 0.317 0.283 0.234 0.212 0.331 0.292 0.388 | 0.870 0.865 0.797 0.849 0.862 0.864 0.885
DSR | 0.878 0.878 0.643 0.896 0.613 0.864 0.766 | 0.939 0.939 0.839 0.941 0.810 0.936 0.901
DPOAG | 0.472 0.428 0.388 0.398 0.426 0.330 0.422 | 0.752 0.743 0.713 0.743 0.742 0.724 0.741
DPTW | 0.594 0.552 0.565 0.601 0.550 0.499 0.577 | 0.905 0.901 0.804 0.888 0.808 0.786 0.883
ECG200 | 0.208 0.172 0.321 0.268 0.180 0.128 0.381 | 0.655 0.644 0.679 0.690 0.633 0.618 0.696
ECGFiveDays | 0.358 0.006 0.586 0.233 0.018 0.002 0.832 | 0.727 0.504 0.844 0.652 0.511 0.501 0.940
EOGVS | 0.252 0.349 0.175 0.221 0.132 0.219 0.319 | 0.848 0.859 0.784 0.846 0.833 0.785 0.874
FaceFour | 0.649 0.448 0.376 0.505 0.646 0.434 0.557 | 0.836 0.756 0.707 0.810 0.828 0.725 0.828
FiftyWords | 0.689 0.666 0.400 0.454 0.646 0.659 0.638 | 0.957 0.955 0.908 0.926 0.953 0.955 0.953
Fish | 0.340 0.301 0.327 0.443 0.555 0.345 0.507 | 0.793 0.789 0.810 0.842 0.858 0.784 0.863
Fungi | 1.000 0.983 0.730 0.861 1.000 0.926 0.969 | 1.000 0.995 0.918 0.960 1.000 0.972 0.992
GPMVF | 0.341 0.421 0.477 0.384 0.000 0.341 0.923 | 0.617 0.696 0.786 0.737 0.499 0.617 0.978
GPOVY | 0.565 1.000 0.705 0.979 1.000 0.343 1.000 | 0.782 1.000 0.897 0.995 1.000 0.619 1.000
HouseTwenty | 0.403 0.202 0.658 0.115 0.246 0.065 0.568 | 0.751 0.633 0.881 0.575 0.661 0.540 0.838
Large.Kit.App | 0.056 0.132 0.211 0.262 0.008 0.044 0.271 | 0.583 0.595 0.657 0.685 0.551 0.576 0.693
MPOAG | 0.399 0.389 0.379 0.393 0.400 0.395 0.403 | 0.736 0.734 0.725 0.737 0.732 0.733 0.740
MPTW | 0.430 0.427 0.449 0.445 0.412 0.405 0.432 | 0.851 0.849 0.791 0.841 0.795 0.785 0.833
OSULeaf | 0.307 0.261 0.301 0.302 0.447 0.238 0.524 | 0.767 0.764 0.675 0.761 0.806 0.747 0.841
PAwP | 0.597 0.586 0.170 0.408 0.625 0.581 0.613 | 0.968 0.964 0.217 0.908 0.967 0.965 0.969
PAP | 0.696 0.677 0.723 0.589 0.934 0.687 0.833 | 0.971 0.971 0.955 0.926 0.991 0.968 0.983
PigCVP | 0.596 0.534 0.306 0.532 0.714 0.572 0.727 | 0.960 0.952 0.693 0.931 0.972 0.959 0.975
Plane | 0.932 0.932 0.617 0.946 0.981 0.830 0.989 | 0.954 0.954 0.847 0.959 0.994 0.917 0.997
PowerCons | 0.683 0.727 0.447 0.351 0.465 0.568 0.779 | 0.889 0.894 0.766 0.717 0.725 0.837 0.930
PPTW | 0.546 0.521 0.564 0.631 0.557 0.449 0.624 | 0.803 0.797 0.796 0.880 0.772 0.742 0.883
SHMC2 | 0.266 0.247 0.250 0.120 0.250 0.245 0.264 | 0.750 0.728 0.679 0.734 0.739 0.663 0.770
ShapeletSim | 0.061 0.032 1.000 0.605 1.000 0.001 0.713 | 0.528 0.519 1.000 0.827 1.000 0.498 0.904
ShapesAll | 0.724 0.714 0.205 0.508 0.751 0.698 0.769 | 0.979 0.978 0.615 0.908 0.981 0.972 0.982
SwedishLeaf | 0.649 0.615 0.506 0.467 0.724 0.537 0.770 | 0.911 0.916 0.900 0.863 0.933 0.890 0.953
S.C. | 0.895 0.784 0.626 0.671 0.810 0.739 0.884 | 0.929 0.880 0.857 0.856 0.900 0.862 0.964
TS1 | 0.004 0.002 0.202 0.297 0.020 0.000 0.261 | 0.500 0.499 0.629 0.686 0.512 0.498 0.668
Trace | 0.558 0.800 0.591 0.804 1.000 0.500 0.750 | 0.762 0.843 0.759 0.898 1.000 0.749 0.874
WS | 0.506 0.470 0.318 0.293 0.469 0.435 0.497 | 0.904 0.901 0.882 0.875 0.899 0.894 0.905
#Best↑ | 7 2 3 4 10 0 18 | 6 1 2 2 7 0 26
AVG NMI/RI↑ | 0.478 0.470 0.438 0.441 0.524 0.403 0.601 | 0.812 0.807 0.773 0.816 0.823 0.764 0.877
AVG RANK↓ | 3.425 4.238 4.800 4.688 3.213 5.563 2.075 | 3.213 3.925 5.288 4.363 3.588 5.825 1.800
P-Value | 5E-04 7E-06 1E-07 1E-06 6E-03 6E-11 – | 1E-03 3E-06 1E-10 4E-07 6E-05 2E-11 –
Table 1: Overall performance comparison. #Best indicates the number of best results over all datasets; AVG NMI/RI indicates the average NMI or RI over all datasets; AVG RANK indicates the average rank; P-Value gives the significance tests.

Baseline Methods. Six models — one semi-supervised model, one self-supervised model, and four unsupervised representation learning models (two-stage clustering) — were chosen for performance comparison (our code is available at https://github.com/JiacLuo/CDCC):
TSTCC (Eldele et al. 2021): a contrastive learning model that introduces strong and weak augmentations; as with TST, K-means is applied for the clustering task.
TST (Zerveas et al. 2021): an unsupervised transformer-based representation learning model for time series that achieves better performance than supervised methods in regression, classification, and prediction; K-means is applied to its representations for clustering.
FeatTS (Tiano, Bonifati, and Ng 2021): a novel semi-supervised clustering method that extracts discriminative features from time series and then performs clustering.
Parameter Settings We conducted a grid search on parameters which may affect the performance based on the recommendations in the corresponding paper and experimental analysis. In CDCC, τ I = 0.5, and τ C = 1. The learning rate, the number of layers in BiLSTM, batch size and the dropout rate was searched. The experiments were conducted on a DCU Z100SM (16GB) computing card using PyTorch environment. The result of each algorithm is reported at their best parameters. Overall Performance Comparing The proposed CDCC method was compared with TSTCC, TST, FeatTS, STCN, R-Clustering, and TCGAN. As shown in Table 1, the CDCC achieved 18 best NMIs and 26 best RIs out of 40 datasets. It also achieved the highest average NMI of 0.601, the highest average RI of 0.877, and the highest average rankings of 2.075 (NMI) and 1.800 (RI), respectively. We also conduct pairwise comparisons between CDCC and other method using Wilcoxon signed-rank tests with Bonferroni correction (Demˇsar 2006). The results of the significance tests are presented in the last row (P-Value) of Table 1. At a significance level of p < 0.05, CDCC shows significant superiority over all the compared methods. Furthermore, post-hoc Nemenyi tests (Demˇsar 2006) were conducted for accessing the statistical significance. The results, as depicted in Figure 2, reveal that CDCC exhibits a significantly superior performance compared to most baselines at a significance level of p < 0.05. The methods TCGAN, FeatTS, STCN, and TST, which are aligned along the horizontal line in the NMI diagram, display similar performance without statistically significant differences. Notably, it is worth mentioning that TSTCC displays better performance than others, owning to its strong and weak augmentation. R-Clustering outperforms others by using a large number of random convolution kernels to extract features. Ablation Study We conducted ablation experiments on the frequency domain contrast, temporal domain contrast, and cross-domain contrast modules to analyze their individual contributions. Experimental results are presented in Figure 3, where all ablation operations are listed as follows: - w/o Lins tf : without instance-level cross-domain contrast. - w/o Lcls tf : without cluster-level cross-domain contrast. - w/o Lf : without frequency-domain contrast. - w/o Lf&Ltf: without both cross-domain contrast and frequency-domain contrast. Figure 3 shows that removing instance-level contrast loss (w/o Lins tf ) or cluster-level contrast loss (w/o Lcls tf ) from CDCC leads to a noticeable decrease. Likewise, excluding the frequency-domain loss (w/o Lf) results in the optimization of data representation solely through the crossdomain contrast loss. Nevertheless, the model’s clustering performance remains superior to that achieved when clusterlevel of cross domain constraints are excluded. Moreover, when the cross-domain contrast loss is further disregarded (w/o Lf&Ltf), the model’s clustering metrics exhibit a significant decline. The above results indicate the effective guidance provided by the cross-domain contrast loss (especially for cluster-level) in learning representation of time series. More details can be found in the Appendix. Parameter Analysis CDCC’s key parameters, the perturbation rate (θ), masking rate (ϵ), and balancing coefficient (λ), are analyzed to evaluate their impacts on performances using ArrowHead, CBF, Fungi, and SwedishLeaf datasets. θ and ϵ play similar roles in the model, we set them equally (θ = ϵ) for all datasets. 
Parameter Analysis
CDCC's key parameters — the perturbation rate (θ), masking rate (ϵ), and balancing coefficient (λ) — are analyzed to evaluate their impact on performance using the ArrowHead, CBF, Fungi, and SwedishLeaf datasets. Since θ and ϵ play similar roles in the model, we set them equal (θ = ϵ) for all datasets.

[Figure 4: The impact of parameter θ/ϵ on NMI and RI for ArrowHead, CBF, Fungi, and SwedishLeaf.]

Figure 4 depicts NMI and RI as functions of θ/ϵ. As θ/ϵ increases, the clustering metrics fluctuate continuously; however, smaller values of θ/ϵ generally yield better clustering performance. This is due to the sensitivity of frequency-domain information to augmentation: excessive removal or addition of frequency components can destroy useful features.

[Figure 5: The impact of parameter λ on NMI and RI for the same four datasets.]

The impact of the balancing coefficient λ on the model's performance is illustrated in Figure 5. Different datasets exhibit different sensitivities to λ. In general, optimal clustering results are achieved when λ is around 0.5, suggesting that the cross-domain contrastive constraint plays a crucial role among the model's constraints.

Visualization
Representation Visualization. We visualize the distribution of representations on the CBF and S.C. datasets using t-SNE (van der Maaten and Hinton 2008). From Figure 6, the following observations can be made:
- The original data X are dispersed, as depicted in the first column; t-SNE cannot reveal the data structure as labeled by humans.
- The frequency-domain representations (H^f) are also dispersed in the case of CBF but exhibit some clustering tendency in the case of S.C., as seen in the second column.
- The temporal-domain representations (H^t) demonstrate clear clustering structure, as illustrated in the third column.
The distribution of H^t demonstrates that CDCC can discover data structures in a manner similar to human labeling; we therefore primarily use the temporal domain to generate cluster categories.

[Figure 6: t-SNE visualization of the representations (original, frequency, temporal) for (a) CBF and (b) S.C.; samples from the same class are marked in the same color.]
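A minimal sketch (ours) of the visualization procedure, projecting learned representations to two dimensions with scikit-learn's t-SNE and coloring by class label:

```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne(H, labels, title):
    """Project representations to 2-D with t-SNE and color points by class."""
    emb = TSNE(n_components=2, random_state=0).fit_transform(H)
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=8, cmap="tab10")
    plt.title(title)
    plt.show()
```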
[Figure 7: The convergence of clustering performance (NMI and RI vs. training epoch) on CBF, CricketY, Fungi, and MPOAG.]

Convergence Analysis. The convergence of CDCC's clustering quality was analyzed on the CBF, CricketY, Fungi, and MPOAG datasets, as illustrated in Figure 7. With an increasing number of epochs, the model's clustering performance steadily improves until reaching convergence. These findings underscore the desirable convergence behavior exhibited by the proposed clustering model.

Conclusion
This work proposes CDCC, a cross-domain contrastive learning model for time series clustering. It utilizes intra-domain and cross-domain contrastive constraints to enhance the representation capability in both the temporal and frequency domains. By incorporating instance-level and cluster-level contrastive constraints, the model not only optimizes sample representations but also directly produces clustering outputs. Extensive experiments demonstrate that the overall performance of the model is superior to existing models, and ablation experiments show that incorporating frequency-domain information and cross-domain contrast effectively improves clustering performance. However, CDCC still requires further exploration for aperiodic data (e.g., image and device data), which we leave to future work.

Acknowledgments
This work was supported by the Science and Technology Innovation 2030 "New Generation of Artificial Intelligence" Major Program (No. 2021ZD0112400), the National Natural Science Foundation of China (Nos. 62276162, 62106132, 62136005, 62272286), the Fundamental Research Program of Shanxi Province (No. 202203021222016), and the Science and Technology Major Project of Shanxi (No. 202201020101006).

References
Aghabozorgi, S.; Seyed Shirkhorshidi, A.; and Ying Wah, T. 2015. Time-series clustering – A decade review. Information Systems, 53: 16–38.
Alqahtani, A.; Ali, M.; Xie, X.; and Jones, M. W. 2021. Deep time-series clustering: A review. Electronics, 10(23): 3001.
Baytas, I. M.; Xiao, C.; Zhang, X.; Wang, F.; Jain, A. K.; and Zhou, J. 2017. Patient Subtyping via Time-Aware LSTM Networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 65–74.
Brigham, E. O.; and Morrow, R. E. 1967. The fast Fourier transform. IEEE Spectrum, 4(12): 63–70.
Chen, I. Y.; Krishnan, R. G.; and Sontag, D. 2022. Clustering interval-censored time-series for disease phenotyping. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 6211–6221.
Chen, X.; and He, K. 2021. Exploring Simple Siamese Representation Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 15750–15758.
Dau, H. A.; Bagnall, A.; Kamgar, K.; Yeh, C.-C. M.; Zhu, Y.; Gharghabi, S.; Ratanamahatana, C. A.; and Keogh, E. 2019. The UCR time series archive. IEEE/CAA Journal of Automatica Sinica, 6(6): 1293–1305.
Demšar, J. 2006. Statistical Comparisons of Classifiers over Multiple Data Sets. Journal of Machine Learning Research, 7: 1–30.
Eldele, E.; Ragab, M.; Chen, Z.; Wu, M.; Kwoh, C. K.; Li, X.; and Guan, C. 2021. Time-Series Representation Learning via Temporal and Contextual Contrasting.
In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2352–2359. Franceschi, J.-Y.; Dieuleveut, A.; and Jaggi, M. 2019. Unsupervised Scalable Representation Learning for Multivariate Time Series. In Proceedings of the 33rd International Conference on Neural Information Processing Systems. Górecki, T. 2014. Using derivatives in a longest common subsequence dissimilarity measure for time series classification. Pattern Recognition Letters, 45: 99–105. Han, L.; Zheng, K.; Zhao, L.; Wang, X.; and Shen, X. 2019. Short-Term Traffic Prediction Based on DeepCluster in Large-Scale Road Networks. IEEE Transactions on Vehicular Technology, 68(12): 12301–12313. Huang, F.; and Deng, Y. 2023. TCGAN: Convolutional Generative Adversarial Network for time series classification and clustering. Neural Networks, 165: 868–883. Kong, F.; Li, J.; Jiang, B.; Wang, H.; and Song, H. 2023. Integrated Generative Model for Industrial Anomaly Detection via Bidirectional LSTM and Attention Mechanism. IEEE Transactions on Industrial Informatics, 19(1): 541–550. Li, F.; Qian, Y.; and Wang, J. 2021. GoT: a Growing Tree Model for Clustering Ensemble. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 8349–8356. Li, F.; Qian, Y.; Wang, J.; Dang, C.; and Jing, L. 2019. Clustering ensemble based on sample's stability. Artificial Intelligence, 273: 37–55. Li, F.; Qian, Y.; Wang, J.; Dang, C.; and Liu, B. 2018. Cluster's quality evaluation and selective clustering ensemble. ACM Transactions on Knowledge Discovery From Data, 12(5): 60. Li, Y.; Hu, P.; Liu, Z.; Peng, D.; Zhou, J. T.; and Peng, X. 2021. Contrastive clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 8547–8555. Ma, Q.; Chen, C.; Li, S.; and Cottrell, G. W. 2021a. Learning Representations for Incomplete Time Series Clustering. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10): 8837–8846. Ma, Q.; Li, S.; Zhuang, W.; Li, S.; Wang, J.; and Zeng, D. 2021b. Self-Supervised Time Series Clustering With Model-Based Dynamics. IEEE Transactions on Neural Networks and Learning Systems, 32(9): 3942–3955. Ma, Q.; Zheng, J.; Li, S.; and Cottrell, G. W. 2019. Learning Representations for Time Series Clustering. In Proceedings of the 33rd International Conference on Neural Information Processing Systems. Marco-Blanco, J.; and Cuevas, R. 2023. Time Series Clustering With Random Convolutional Kernels. arXiv:2305.10457. Parulekar, A.; Collins, L.; Shanmugam, K.; Mokhtari, A.; and Shakkottai, S. 2023. InfoNCE Loss Provably Learns Cluster-Preserving Representations. arXiv:2302.07920. Rodgers, J. L.; and Nicewander, W. A. 1988. Thirteen ways to look at the correlation coefficient. The American Statistician, 42: 59–66. Tiano, D.; Bonifati, A.; and Ng, R. 2021. FeatTS: Feature-Based Time Series Clustering. In Proceedings of the 2021 International Conference on Management of Data, 2784–2788. ISBN 9781450383431. Tonekaboni, S.; Eytan, D.; and Goldenberg, A. 2021. Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding. In 9th International Conference on Learning Representations. van der Maaten, L.; and Hinton, G. 2008. Visualizing Data using t-SNE. Journal of Machine Learning Research, 9(86): 2579–2605. Wang, W.; Lyu, G.; Shi, Y.; and Liang, X. 2018. Time Series Clustering Based on Dynamic Time Warping.
In 2018 IEEE 9th International Conference on Software Engineering and Service Science, 487–490. Xie, X.; Sun, F.; Liu, Z.; Wu, S.; Gao, J.; Zhang, J.; Ding, B.; and Cui, B. 2022. Contrastive Learning for Sequential Recommendation. In 2022 IEEE 38th International Conference on Data Engineering, 1259–1273. Yang, L.; and Hong, S. 2022. Unsupervised Time-Series Representation Learning with Iterative Bilinear Temporal-Spectral Fusion. In Proceedings of the 39th International Conference on Machine Learning, volume 162, 25038–25054. Yue, Z.; Wang, Y.; Duan, J.; Yang, T.; Huang, C.; Tong, Y.; and Xu, B. 2022. Ts2vec: Towards universal representation of time series. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 8980–8987. Zerveas, G.; Jayaraman, S.; Patel, D.; Bhamidipaty, A.; and Eickhoff, C. 2021. A Transformer-Based Framework for Multivariate Time Series Representation Learning. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2114–2124. Zhang, D.; Nan, F.; Wei, X.; Li, S.-W.; Zhu, H.; McKeown, K.; Nallapati, R.; Arnold, A. O.; and Xiang, B. 2021. Supporting Clustering with Contrastive Learning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 5419–5430. Zhang, N.; and Sun, S. 2023. Multiview Unsupervised Shapelet Learning for Multivariate Time Series Clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4): 4981–4996. Zhang, X.; Zhao, Z.; Tsiligkaridis, T.; and Zitnik, M. 2022. Self-Supervised Contrastive Pre-Training For Time Series via Time-Frequency Consistency. In Advances in Neural Information Processing Systems, volume 35, 3988–4003.
2024
992
18,842
Refining Latent Homophilic Structures over Heterophilic Graphs for Robust Graph Convolution Networks
Chenyang Qiu1, Guoshun Nan1*, Tianyu Xiong1, Wendi Deng1, Di Wang1, Zhiyang Teng2, Lijuan Sun1, Qimei Cui1, Xiaofeng Tao1
1Beijing University of Posts and Telecommunications, China
2Nanyang Technological University, Singapore
{cyqiu, nanguo2021, tyxiong, dengwendi, wdwdwd, sunlijuan, cuiqimei, taoxf}@bupt.edu.cn, [email protected]
Abstract
Graph convolution networks (GCNs) are extensively utilized in various graph tasks to mine knowledge from spatial data. Our study marks the pioneering attempt to quantitatively investigate the GCN robustness over omnipresent heterophilic graphs for node classification. We uncover that the predominant vulnerability is caused by the structural out-of-distribution (OOD) issue. This finding motivates us to present a novel method that aims to harden GCNs by automatically learning Latent Homophilic Structures over heterophilic graphs. We term such a methodology as LHS. To elaborate, our initial step involves learning a latent structure by employing a novel self-expressive technique based on multi-node interactions. Subsequently, the structure is refined using a pairwisely constrained dual-view contrastive learning approach. We iteratively perform the above procedure, enabling a GCN model to aggregate information in a homophilic way on heterophilic graphs. Armed with such an adaptable structure, we can properly mitigate the structural OOD threats over heterophilic graphs. Experiments on various benchmarks show the effectiveness of the proposed LHS approach for robust GCNs.
Introduction
Graph-structured spatial data, such as social networks (Qiu et al. 2022) and molecular graphs, is ubiquitous in numerous real-world applications (Li et al. 2022). Graph convolution networks (GCNs) (Kipf and Welling 2017), following a neighborhood aggregation scheme, are well-suited to handle these relational and non-Euclidean graph structures, and have been widely applied in various graph tasks, including node classification and recommender systems. Recently, there has been a surge in GCN approaches for challenging heterophilic graphs (Zhu et al. 2020), where most neighboring nodes have different labels or features. These methods can be divided into two categories: 1) Multi-hop-based approaches (Abu-El-Haija et al. 2019; Jin et al. 2021a; Wang and Derr 2021); 2) Ranking-based approaches (Liu, Wang, and Ji 2021; Yuan and Ji 2021; Yang et al. 2022). The former group learns node representations based on multi-hop aggregations, while the latter performs selective node aggregations by a sorting mechanism.
*Guoshun Nan is the corresponding author.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Figure 1: Illustration for the vulnerability of existing GPNN under various threats on Squirrel, including a poisoning attack, and another two adapted from evasion attacks. The subfigures (a), (b), and (c) depict how we generate the above three attacks, and (d) reports the significant performance degradation under each attack.
These GCN methods continue to advance the state-of-the-art performance for node classification and have enabled various downstream applications (Lin, Lan, and Li 2021; Qiu et al. 2023). Despite the significant success of the current GCN methods on heterophilic graphs, these approaches are extremely vulnerable to malicious threats that aim to distort the structure of the target graph during testing.
We conduct experiments to attack the state-of-the-art GPNN (Yang et al. 2022) method, which was trained on the popular Squirrel (Pei et al. 2020) benchmark for heterophilic graphs, using samples created by various attacks. Fig. 1 demonstrates that the accuracy of node classification can be greatly reduced under three different types of destructive attacks, including a well-known poisoning attack (Jin et al. 2020) and two attacks adapted from evasion attacks (Biggio et al. 2013; Zhang et al. 2016). Specifically, as shown in Fig. 1 (a), the poisoning attack produces adversarial structural perturbations to the edges of the graph, fooling GPNN into making incorrect predictions. The proposed two evasion-based attacks are referred to as "OOD evasion attacks" and "injected evasion attacks", respectively. Fig. 1 (b) and Fig. 1 (c) demonstrate how the sample sets for these two attacks are created. The first generates a graph with a node distribution that is vastly different from that of the target testing set, while the second manipulates the target graph by injecting more heterophilic edges. Under these three attacks, Fig. 1 (d) shows that the classification accuracy of GPNN is sharply decreased by 18.90%, 21.42%, and 29.30%, respectively.
Figure 2: Illustration of H distributions and the "right-shift". (a) H distributions of various data over Squirrel, including the crafted sample sets of three attacks. (b) Correlation between the "right-shift" of H distributions and node classification degradation. (c) H distributions of various data, including the original train and test sets of a homophilic dataset, Cora, and the crafted sample sets of three attacks.
To analyze the reasons why GCN methods are fragile on heterophilic graphs, we further depict the H distributions (Zheng et al. 2022) of the crafted data from the aforementioned three attacks, as well as the distributions of the original train and test sets of Squirrel, in Fig. 2 (a). Here H represents the node-level heterophily, which is the proportion of a node's neighbors that have a different class.1 Fig. 2 (a) demonstrates that the distributions of the three attack sample sets are all located to the right of the training set, with the sample most destructive for the GPNN method being the furthest to the right. This observation led us to investigate the correlation between the "right-shift" of the H distribution relative to the train set and the vulnerability of GCN approaches. This correlation is visualized in Fig. 2 (b): the scale of the "right-shift" is strongly proportional to the degradation of node classification performance. We refer to this phenomenon as "structural out-of-distribution (OOD)" in GCN methods for graphs of spatial data.
1 A more formal definition is given in the Preliminaries Section; higher H values indicate a node with stronger heterophily.
To investigate the underlying cause of the aforementioned structural OOD, we attacked another GPNN model that was trained on the homophilic graph Cora (Yang, Cohen, and Salakhudinov 2016) and depicted the resulting H distributions in Fig. 2 (c). Interestingly, the shifts of the three attacks relative to the training set of Cora are very small. This minor "right-shift" enables the GPNN model trained on Cora to be more robust. We attribute this to the strong homophily present in the Cora dataset and believe that more homophily will result in less "right-shift" under attacks, even for heterophilic graphs, and hence alleviate the structural OOD. In light of the above discussion, a critical question arises: "How can a GCN model automatically learn an appropriate homophilic structure over heterophilic graphs to reduce the scale of "right-shift" in H distributions?" This could help to make the model more resistant to malicious attacks on heterophilic graphs. Achieving this goal is challenging. Despite the success of many structure learning-related methods (Jin et al. 2021b,a; He et al. 2022), they also tend to strengthen the heterophily or only focus on the local relations between two nodes rather than considering the global connections. These methods still suffer from vulnerability issues under attacks (as seen in Figure 4 and Table 1), and they are hardly able to address the challenge. We address the above challenging question with a novel method called LHS. The key components of the proposed LHS are: 1) a self-expressive generator that automatically induces a latent homophilic structure over heterophilic graphs via multi-node interactions, and 2) a dual-view contrastive learner that refines the latent structure in a self-supervised manner. LHS iteratively refines this latent structure during the learning process, enabling the model to aggregate information in a homophilic way on heterophilic graphs, thereby reducing the "right-shift" and increasing robustness. It should be noted that the original graph structure is also retained during refinement via a structure bootstrapping mechanism (see the Methodology Section). Experiments on five benchmarks of heterophilic graphs show the superiority of our method. We also verify the effectiveness of our LHS on three public homophilic graphs. Additionally, the induced structure can also be applied to other graph tasks such as clustering. Our contributions are as follows:
• We quantitatively analyze the robustness of GCN methods over omnipresent heterophilic graphs for node classification, and reveal that the "right-shift" of H distributions is highly proportional to the model's vulnerability, i.e., the structural OOD. To the best of our knowledge, this is the first study in this field.
• We present LHS, a novel method that strengthens GCN against various attacks by learning latent homophilic structures on heterophilic graphs.
• We conduct extensive experiments on various spatial datasets to show the effectiveness of the proposed LHS in mitigating the structural OOD issue.
Related Work
Graph Convolution Networks
There is a line of early studies in graph convolution networks (GCNs) (Kipf and Welling 2016, 2017; Hamilton, Ying, and Leskovec 2017; Veličković et al. 2018). Recent GCN approaches over heterophilic graphs can be grouped into multi-hop-based ones (Abu-El-Haija et al. 2019; Zhu et al. 2020; Jin et al. 2021b; Wang and Derr 2021; Wang et al. 2022b),
ranking-based ones (Liu, Wang, and Ji 2021; Wang et al. 2022a; Yang et al. 2022), and ones using GCN architecture refinement (Bo et al. 2021; Yang et al. 2021; Suresh et al. 2021a; Yan et al. 2021; Luan et al. 2022; Xu et al. 2023; Li, Kim, and Wang 2023; Zheng et al. 2023). These methods have achieved remarkable success in graph node classification. However, robustness is yet to be explicitly considered on challenging heterophilic graphs.
Robust Graph Convolution Networks
Recently, we have witnessed a surge of interest in the robustness of GCNs on heterophilic graphs. These methods can be categorized into structure learning-based ones (Jin et al. 2020, 2021a; He et al. 2022; Zhu et al. 2022; Liu et al. 2023) and ones based on adversarial training (Dai et al. 2018; Zhu et al. 2019; Zhang, Zhang, and Cheng 2020; Zhang and Zitnik 2020; Suresh et al. 2021b). The most related to our work are ProGNN (Jin et al. 2020), which explores the low-rank and sparsity properties of the graph structure, and SimP-GCN (Jin et al. 2021b), which relies on a similarity preservation scheme for structure learning. Our work differs from the above methods in two aspects: 1) We focus on the structural OOD issue of GCN approaches over heterophilic graphs. To the best of our knowledge, this problem is largely ignored in previous works. 2) We iteratively refine the latent structure of heterophilic graphs by a novel self-expressive method and a dual-view contrastive learning scheme, enabling a GCN model to effectively aggregate information in a homophilic way on heterophilic graphs.
Preliminaries
We denote the graph as $G = (V, E, X)$, where $V$ is the set of $N$ nodes, $E$ is the set of edges between nodes, $X$ is the node feature matrix, and $(V, E)$ forms the original network structure $A$. We aim to generate a latent homophilic structure for robust GCN in node classification. For convenience, we give the following edge definition:
Definition 1. (Positive/Negative Edge) A positive edge indicates that the two nodes in the link have the same type, while a negative one refers to a link that connects two nodes with different types.
Node-level Heterophily: We use $H$ to represent the node-level heterophily, which is the proportion of a node's neighbors that have a different class. We refer to (Zheng et al. 2022) for a formal definition; it is a fine-grained metric to measure the edge heterophily in a graph.
Definition 2. Node-level Heterophily ($H$):
$$H(v_i) = \frac{\left|\{(v_i, v_j) \in E(v_i) \mid y(v_i) \neq y(v_j)\}\right|}{|E(v_i)|}, \quad \forall v_i, v_j \in V, \tag{1}$$
where $E(v_i)$ is the edge set of $v_i$, $y(v_i)$ is the node class of $v_i$, and $|\cdot|$ represents the number of edges. Nodes with strong heterophily have higher $H$ (closer to 1), whereas nodes with strong homophily have smaller $H$ (closer to 0). It also provides an edge distribution sampling set for quantitative analysis over heterophilic graphs.
OOD Formulation: We rigorously formulate the ego-graph edge distribution by utilizing the proposed node-level heterophily, and the formulation further enables multi-layer edge distribution analyses. The "right-shift" phenomenon found in heterophilic graphs also motivates us to propose latent homophilic structure refinement. Theoretical analysis from a spectral-domain view is given in Appendix 1 to further elaborate the rationale of the proposed LHS.2
2 Appendices are available in the preprint version.
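To make Definition 2 concrete, the sketch below computes $H(v_i)$ for every node and hence the empirical H distribution whose "right-shift" is analyzed above. The adjacency-list representation and names are our own illustrative choices, not the authors' code.

```python
import numpy as np

def node_heterophily(neighbors: dict, y: dict) -> dict:
    """Eq. 1: H(v) = fraction of v's neighbors with a different class label."""
    return {
        v: sum(y[u] != y[v] for u in nbrs) / len(nbrs)
        for v, nbrs in neighbors.items() if nbrs
    }

# The H distribution of a node set (e.g., the train set vs. an attacked test
# set) is simply the histogram of these values; a "right-shift" shows up as
# an increase in its mean:
# shift = np.mean(list(H_attack.values())) - np.mean(list(H_train.values()))
```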
Edge Distribution Formulation: Given a random node $v_i \in V$, we define $v_i$'s $k$-hop neighbors as $N_{v_i}(k)$ (where $k$ is an arbitrary positive integer), and the nodes in $N_{v_i}(k)$ form an ego-graph substructure $A_{v_i}(k)$, which consists of a local adjacency matrix represented as $A_{v_i}(k) = \{a_{v_i u} \mid u \in N_{v_i}(k)\}$. In this way, we can study the distribution of the $k$-hop substructure via $p(H \mid A_{v_i}(k)) = p(H \mid A_{v_i}(1) A_{v_i}(2) \cdots A_{v_i}(k))$. It is worth noting that the ego-graph can be seen as a Markov blanket for the centered node $v_i$, meaning that the conditional distribution $p(H \mid A_{v_i}(k))$ can be decomposed as a product of independent and identically distributed marginal distributions $p(H \mid A_{v_i}(j))$ for each $j \leq k$. We also provide more empirical observations about the "right-shift" phenomenon on heterophilic graphs, which are available in Appendix 3.
Methodology
Overview
In this section, we present the proposed LHS. Our goal is to learn an appropriate latent homophilic structure from heterophilic graphs, so as to reduce the scale of the "right-shift" in H distributions. Inspired by the analysis in the Introduction Section that more homophily of a graph can reduce the "right-shift", our latent structure tends to encourage positive edge connections by increasing the edge weights for pairs of nodes with the same type, and suppresses negative edge connections by reducing the edge weights for nodes with different types. Fig. 3 shows the architecture of LHS.
Figure 3: The architecture of the proposed LHS, which consists of three modules: structure inducer, graph encoder, and node classifier. The key ingredient, the structure inducer, involves two components, i.e., the self-expressive generator and the dual-view contrastive learner. The former learns a latent homophilic structure by multi-node interactions, and then the latter refines the structure. We iteratively perform such a procedure to learn a better homophilic structure on heterophilic graphs. The refined structure is fed to the graph encoder for representation aggregation and, finally, to the classifier for node classification.
Structure Inducer
The proposed structure inducer involves a self-expressive generator and a dual-view contrastive learner.
Self-Expressive Generator. Our proposed self-expressive generator produces a latent homophilic structure over heterophilic graphs in three steps:
Step 1: Capturing multi-node information. Given the node features $X$, we aim to capture the multi-node feature information by expressing one node feature via a linear or affine combination of other node features. Differently from the existing structure learning method with a pair-wise similarity matrix (Jin et al. 2021a), our inducer can generate a fine-grained latent structure $S^* \in \mathbb{R}^{N \times N}$ by discovering the global information in a low-dimensional subspace. Specifically, for $\forall v_i \in V$, we express it by a linear sum of the other node features $x_j$, $v_j \neq v_i$: $x_i = \sum_{v_j \in V} q_{ij} x_j$, where $q_{ij}$ is the $(i, j)$-th element of a coefficient matrix $Q$.
Step 2: Optimizing the generator loss. We use the coefficient matrix $Q$ to generate the latent structure. The optimization problem to solve $Q$ can be formulated as follows:
$$\min_Q \|Q\|_F \quad \text{s.t.} \quad X = QX,\ \mathrm{diag}(Q) = 0, \tag{2}$$
where $\|Q\|_F$ is the Frobenius matrix norm (Böttcher and Wenzel 2008) of $Q$ and $\mathrm{diag}(Q)$ denotes the diagonal entries of $Q$. Eq. 2 optimizes a block-diagonal matrix $Q$ to generate the latent structure $S^*$. Each block of $Q$ contains the nodes which belong to the same class, thus mitigating the "right-shift" phenomenon. We relax the hard constraint $X = QX$ with a soft reconstruction penalty on $(X - QX)$, as the exact reconstruction of $X$ may be impractical. The relaxed formulation is:
$$\min_Q \mathcal{L}_{SE} = \|X - QX\|_F^2 + \lambda_1 \|Q\|_F^2, \quad \text{s.t.} \quad \mathrm{diag}(Q) = 0, \tag{3}$$
where $\lambda_1$ is a weight hyperparameter of the optimization.
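For intuition, the relaxed objective in Eq. 3 admits a closed-form ridge-regression solution when the zero-diagonal constraint is handled by simply zeroing the diagonal afterwards, a common relaxation. The NumPy sketch below illustrates this; it is our simplified reading, not necessarily the paper's exact optimizer, and the variable names are ours.

```python
import numpy as np

def self_expressive_coefficients(X: np.ndarray, lam1: float = 0.1) -> np.ndarray:
    """Approximately solve min_Q ||X - QX||_F^2 + lam1 * ||Q||_F^2 (Eq. 3).

    Without the diagonal constraint, the minimizer is
    Q = X X^T (X X^T + lam1 * I)^{-1}; the constraint diag(Q) = 0 is
    enforced here by zeroing the diagonal afterwards (a relaxation).
    """
    n = X.shape[0]
    gram = X @ X.T                                    # (n, n) Gram matrix of node features
    Q = gram @ np.linalg.inv(gram + lam1 * np.eye(n))
    np.fill_diagonal(Q, 0.0)                          # relaxed diag(Q) = 0
    return Q
```

Each row of the resulting Q expresses a node as a weighted combination of the other nodes, which is the multi-node interaction used to build $S^*$ below; for large graphs, the paper's randomized-SVD machinery would replace the dense inverse.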
Step 3: Generating the latent homophilic structure. We construct the latent homophilic structure $S^*$ from $Q + Q^T$; however, this structure still contains noise and outliers. Therefore, we rely on Algorithm 1 to generate $S^*$. Specifically, the SVD decomposition in Algorithm 1 aims to filter noisy information during the structure generation. In each iteration, we refine the latent structure $S^*$. We employ the scalable randomized SVD (Halko, Martinsson, and Tropp 2011) to improve the computational efficiency for large-scale graphs. Details are available in Appendix 2.1.
Algorithm 1: The generation algorithm of $S^*$
Input: Coefficient matrix $Q$, subspace dimension $K = 4$, rank $r = Kd + 1$, where $d$ is the number of node classes.
Output: Latent structure $S^*$.
1: Initialization: $Q' = \frac{1}{2}(Q + Q^T)$.
2: Compute the rank-$r$ SVD of $Q'$ via $Q' = U \Sigma V^T$.
3: Compute $L = U \Sigma^{\frac{1}{2}}$ and normalize each row of $L$.
4: Update: $L' \leftarrow$ set the negative values in $L$ to zero.
5: Obtain $S^* = (L' + L'^T) / \|L\|_\infty$, where $s_{ij} \in [0, 1]$.
Dual-view Contrastive Learner. So far, we have obtained the latent structure $S^*$; the previous steps focused on learning $S^*$ from the node features. To refine such a structure, we further explore the enriched structural information of the graph and propose a novel dual-view contrastive learner. We take four steps for such a refinement.
Step 1: Generating the dual views of the latent structure. We denote the graph as $G = (S^*, X)$, where $S^*$ is the learnable latent homophilic structure. Based on $G$, we generate two graphs $G_1$ and $G_2$ via a corruption function (Velickovic et al. 2019) to refine the structure in a self-supervised manner. Specifically, the corruption function randomly removes a small portion of edges from $S^*$ and also randomly masks a fraction of dimensions with zeros in the node features $X$ (a minimal sketch of this corruption is given after Step 2).
Step 2: Aggregating information on the latent structure. For efficient aggregation on $S^*$, we devise a truncated threshold GCN to control the sparsity of the structure. For $S^*$, we introduce a threshold $\sigma$ to decide whether there exists a soft connection with continuous values between two nodes, and then form a new structure $S^*_\sigma$. Such a design is quite different from previous hard-coding operations (Liu et al. 2022) that only allow values of 0 or 1, and our $S^*_\sigma$ can be flexibly applied to various benchmarks. We employ the truncated threshold GCN on three graphs: $G$, $G_1$, and $G_2$.
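As promised above, here is a minimal sketch of the Step 1 corruption function: it drops a random portion of the edges of $S^*$ and zero-masks a random fraction of feature dimensions. The drop/mask rates and names are illustrative assumptions, not values reported by the paper.

```python
import numpy as np

def corrupt_view(S: np.ndarray, X: np.ndarray, edge_drop: float = 0.1,
                 feat_mask: float = 0.1, seed: int = 0):
    """Generate one corrupted view from G = (S*, X), as described in Step 1.

    Randomly removes a small portion of edges from S* and masks a fraction
    of feature dimensions with zeros.
    """
    rng = np.random.default_rng(seed)
    S_view = S.copy()
    rows, cols = np.nonzero(np.triu(S_view, k=1))     # undirected: upper triangle
    drop = rng.random(rows.size) < edge_drop          # Bernoulli edge dropping
    S_view[rows[drop], cols[drop]] = 0.0
    S_view[cols[drop], rows[drop]] = 0.0              # keep the structure symmetric
    X_view = X.copy()
    masked_dims = rng.random(X.shape[1]) < feat_mask  # mask whole feature dimensions
    X_view[:, masked_dims] = 0.0
    return S_view, X_view

# Two stochastic views for contrastive refinement:
# S1, X1 = corrupt_view(S, X, seed=1)
# S2, X2 = corrupt_view(S, X, seed=2)
```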
The proposed truncated threshold GCN on graph $G$ generates the representations as follows:
$$S^*_\sigma = \{ s^*_{ij} \mid s^*_{ij} \in S^*,\ s^*_{ij} \geq \sigma \}, \quad Z = \mathrm{GCN}(X, S^*_\sigma) = \hat{S}^* \,\mathrm{ReLU}(\hat{S}^* X W^{(0)}) W^{(1)}, \tag{4}$$
where ReLU is an activation function, $W^{(0)}$ and $W^{(1)}$ are the trainable weight matrices of the GCN, $\tilde{S}^* = S^*_\sigma + I$ with $I \in \mathbb{R}^{|V| \times |V|}$ the identity matrix, and $\tilde{D}$ is the diagonal degree matrix with $\tilde{D}_{ii} = \sum_{j \in V} \tilde{S}^*_{ij}, \forall i \in V$. We set $\hat{S}^* = \tilde{D}^{-\frac{1}{2}} \tilde{S}^* \tilde{D}^{-\frac{1}{2}}$. $Z_1$ and $Z_2$ denote the node embedding matrices for the two views $G_1$ and $G_2$; these node embeddings are generated from the proposed GCN encoder.
Step 3: Sampling the contrastive samples. For a node $v_i \in V$, let us denote the corresponding nodes in $G_1$ and $G_2$ as $G_1(v_i)$ and $G_2(v_i)$, respectively. Then we introduce the node-pair sampling rules for the contrastive learning as follows: a) a positive example is a node pair from the same node in different graph views, that is, $\forall i \in V$, the pair $(G_1(i), G_2(i))$; b) a negative example is a node pair from different nodes of the same or different graph views, that is, $\forall i \in V$ and $j \in V_{-i} = \{ j \in V \mid j \neq i \}$, both $(G_1(i), G_1(j))$ and $(G_1(i), G_2(j))$ are negative examples.
Step 4: Optimizing the contrastive loss. In addition to the above dual-view optimization, we also propose a novel pairwise constraint to optimize the original graph view, which can further improve the quality of the learned homophilic structure. Specifically, we sample node pairs from the labeled training set. Same-class node pairs are positive samples, denoted $(u, v)$, while different-class node pairs are negative samples, denoted $(u, v_n)$, where $u$, $v$, and $v_n$ belong to the training node set and $y(u) = y(v)$, $y(u) \neq y(v_n)$. Here $y(\cdot)$ is the node label. We formally propose the loss function as:
$$\mathcal{L}_{refine} = \sum_{i \in V} \left[ -\frac{\cos(z_{1i}, z_{2i})}{\tau} + \log \sum_{j \in V_{-i}} \left( e^{\cos(z_{1i}, z_{1j})/\tau} + e^{\cos(z_{1i}, z_{2j})/\tau} \right) \right] - \lambda_2 \left[ \log \sigma(z_u^{\top} z_v) - \log \sigma(-z_u^{\top} z_{v_n}) \right], \tag{5}$$
where $z_{1i}$ and $z_{2i}$ denote the embeddings of node $i$ in $Z_1$ and $Z_2$, respectively, $z_v$ denotes the embedding of node $v$ in $Z$, $\cos(\cdot)$ is the cosine similarity between two embeddings, $\tau$ is a temperature parameter, and $\lambda_2$ is a hyperparameter of the second term. The first term of Eq. 5 encourages consistent information between positive samples, while the second term penalizes inconsistent information between the dual views. The last term of Eq. 5 makes sure that same-class nodes have more similar representations.
Structure Refinement. The node embedding matrix $Z$ generated under Eq. 5 has incorporated the refined structure and node features. Finally, we feed $Z$ into the structure inducer again to iteratively refine the structure $S^*$. Equipped with both the original graph $A$ and the refined one $S^*$, we use a structure bootstrapping mechanism $S^* \leftarrow \zeta A + (1 - \zeta) S^*$ to update $S^*$ with a slow-moving fusion of $A$, where $\zeta$ is a hyperparameter to balance the information between $A$ and $S^*$. Specifically, input graphs with high heterophily lead to a smaller $\zeta$, while those with high homophily take a larger $\zeta$. By doing so, we can reduce the scale of the "right-shift" over heterophilic graphs and thus potentially mitigate the structural OOD issue under malicious attacks, as discussed around Fig. 1 in the Introduction Section.
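The thresholding of Eq. 4, the symmetric normalization $\hat{S}^* = \tilde{D}^{-1/2}\tilde{S}^*\tilde{D}^{-1/2}$, and the bootstrapping update above are mechanical to implement. A minimal NumPy sketch (our own illustrative code, with $\sigma$ and $\zeta$ as free hyperparameters):

```python
import numpy as np

def truncate_and_normalize(S: np.ndarray, sigma: float = 0.2) -> np.ndarray:
    """Apply the threshold of Eq. 4 and the symmetric GCN normalization."""
    S_sigma = np.where(S >= sigma, S, 0.0)        # keep soft weights above sigma
    S_tilde = S_sigma + np.eye(S.shape[0])        # add self-loops: S~ = S*_sigma + I
    d_inv_sqrt = 1.0 / np.sqrt(S_tilde.sum(axis=1))   # degrees D~_ii (>= 1 with self-loops)
    return S_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 S~ D^-1/2

def bootstrap_structure(A: np.ndarray, S: np.ndarray, zeta: float = 0.3) -> np.ndarray:
    """Structure bootstrapping: S* <- zeta * A + (1 - zeta) * S*."""
    return zeta * A + (1.0 - zeta) * S
```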
Graph Encoder
Our graph encoder consists of a GCN encoder and a GCN decoder, where the former encodes the masked features and the latter aims to generate the reconstructed features $\hat{X}$. We feed the masked node features $\tilde{X}$ and $S^*$ to the graph encoder. Then we use a scaled cosine error to optimize the encoder as follows:
$$\mathcal{L}_{Re} = \frac{1}{|\tilde{V}|} \sum_{v_i \in \tilde{V}} \left( 1 - \frac{x_i^{\top} \hat{x}_i}{\|x_i\| \cdot \|\hat{x}_i\|} \right)^{\gamma}, \quad \gamma \geq 1, \tag{6}$$
where $x_i$ and $\hat{x}_i$ are the feature and the reconstructed feature of node $i$, and $\gamma$ is a scale factor.
Classifier and Loss Functions
Finally, our classifier outputs the predictions. We generate classification representations via a fully-connected layer $F(\cdot)$, that is, $y^{pred} = \mathrm{softmax}(F(h_i))$. Then the loss of the classifier can be expressed as $\mathcal{L}_{Pre} = -\sum_{i=1}^{N_l} y_i \log y_i^{pred}$. We jointly train the graph encoder and the classifier with $\mathcal{L}$, which can be expressed as:
$$\mathcal{L} = \mathcal{L}_{Pre} + \beta \mathcal{L}_{Re}, \tag{7}$$
where $\beta$ is a hyperparameter of the loss weight.
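Both losses are easy to express directly. The sketch below is our own illustrative code (not the released implementation); it assumes one-hot labels Y, softmax outputs P, and row-wise node features.

```python
import numpy as np

def scaled_cosine_error(X: np.ndarray, X_hat: np.ndarray, gamma: float = 2.0) -> float:
    """Eq. 6: mean (1 - cosine similarity)^gamma over the (masked) node rows."""
    cos = np.sum(X * X_hat, axis=1) / (
        np.linalg.norm(X, axis=1) * np.linalg.norm(X_hat, axis=1) + 1e-12)
    return float(np.mean((1.0 - cos) ** gamma))

def joint_loss(Y: np.ndarray, P: np.ndarray, X: np.ndarray,
               X_hat: np.ndarray, beta: float = 0.5) -> float:
    """Eq. 7: cross-entropy over the N_l labeled nodes plus beta * L_Re."""
    ce = -np.sum(Y * np.log(P + 1e-12))  # L_Pre, summed over labeled nodes
    return ce + beta * scaled_cosine_error(X, X_hat)
```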
Experiments
Datasets, Baselines and Settings
Datasets: We experiment on nine benchmarks. For the six heterophilic spatial datasets, including Cornell, Texas, Wisconsin (Pei et al. 2020), Chameleon, and Squirrel (Rozemberczki, Allen, and Sarkar 2021), nodes are web pages and edges are hyperlinks between these pages; for Actor (Tang et al. 2009), nodes are actors and edges denote co-occurrences on the same web pages. For the three homophilic datasets, including Cora and Citeseer (Yang, Cohen, and Salakhudinov 2016), nodes refer to articles and edges are the citations between articles. Due to space limitations, we provide detailed descriptions in Appendix 4.1.
Baselines: We follow the previous works (Jin et al. 2021b; He et al. 2022) to use eleven baselines. We categorize these methods into three groups: 1) multi-hop-based approaches MixHop (Abu-El-Haija et al. 2019) and H2GCN (Zhu et al. 2020), which mix the multi-hop neighbors for aggregation; 2) ranking-based approaches NLGNN (Liu, Wang, and Ji 2021), GEOM-GCN (Pei et al. 2020), Node2Seq (Yuan and Ji 2021), and GPNN (Yang et al. 2022), which aim to search on the network structure and then perform selective aggregation; 3) structure learning approaches ProGNN (Jin et al. 2020), UGCN (Jin et al. 2021a), BM-GCN (He et al. 2022), and GREET (Liu et al. 2023), which automatically learn graph structures for aggregations. Specifically, ProGNN preserves the low-rank and sparsity characteristics of the graph structure for robust GCN. UGCN and SimP-GCN employ a similarity preservation scheme for structure learning on heterophilic graphs, and BM-GCN employs selective aggregation on the structure via a block-guided strategy. We also compare our model with a recently proposed spectral-based method, ALT-GCN (Xu et al. 2023).
Settings: We implement our method with PyTorch and PyTorch Geometric and use the Adam optimizer on all datasets with a learning rate of 0.001. We configure the number of epochs as 1000 and apply early stopping with a patience of 40. We configure the hidden size as 64 and the batch size as 256. We perform the structure learning for 2 rounds. More detailed hyperparameters are available in Appendix 4.3.
Figure 4: Comparisons of node classification under a poisoning attack on (a) Squirrel, (b) Chameleon, and (c) Actor (test accuracy (%) vs. perturbation rate (%)). We repeat three times and report the mean values.
Table 1: Robustness comparisons in terms of classification accuracy (%) under the two evasion-based attacks, OOD and Injected.

Method     Wisconsin               Chameleon               Squirrel                Actor
           OOD        Injected     OOD        Injected     OOD        Injected     OOD        Injected
H2GCN      48.24±2.1  41.53±1.7    47.20±1.5  37.21±2.3    48.27±3.1  36.34±1.6    21.33±2.3  14.17±1.4
GPNN       52.78±0.8  40.21±1.4    54.62±2.0  48.49±1.6    50.23±0.6  40.55±1.3    20.83±2.7  16.28±1.5
UGCN       72.37±2.7  44.58±2.2    57.23±3.1  40.39±2.8    52.45±3.3  41.79±3.2    23.37±1.9  15.57±0.8
SimP-GCN   73.34±2.1  61.43±2.4    61.28±1.6  54.57±2.6    54.34±1.0  54.01±2.3    28.96±1.8  24.31±2.9
BM-GCN     76.58±0.5  69.78±1.3    62.37±2.6  52.58±2.3    57.30±1.0  59.82±0.7    26.17±1.4  18.92±1.9
LHS        82.31±0.5  73.34±1.1    67.33±0.9  60.10±2.2    68.76±3.1  62.13±2.7    32.23±2.6  33.42±1.8

Main Results
Comparisons under Poisoning Attacks. We compare the robustness of our LHS with five baseline approaches under a popular poisoning attack (Jin et al. 2020) on three benchmarks including Squirrel, Chameleon, and Actor. Under various perturbation rates ranging from 0 to 25%, Figure 4 shows that our LHS consistently performs best among all baselines. For example, ours yields higher classification accuracy of up to 20 points compared to ProGNN. These results confirm the superiority of our latent structure learning scheme against poisoning attacks. The existing structure learning methods, including BM-GCN, SimP-GCN, and ProGNN, are also extremely vulnerable under large poisoning perturbation rates. Nevertheless, they are better than the other two, showing the promise of structure learning over heterophilic graphs. We also observe that the poisoning perturbations, which can significantly degrade the baselines at a large rate (i.e., 25%), have a very slight impact on our method. We attribute such gains to the latent structure that can be resistant to the structural OOD issue discussed in the Introduction Section, which will also be illustrated in the first question of the Discussion Section.
Comparisons under Evasion Attacks. We presented two evasion-based attacks (Zhang et al. 2016), i.e., "OOD evasion attack (OOD)" and "injected evasion attack (Injected)", in Fig. 1 (b) and Fig. 1 (c), to craft attack samples with destructive structural perturbations to the edges of the graph. Here we compare our method with five baselines on five heterophilic graphs, and report the results in Table 1. We chose these five baselines because they are representative of different types of GCN and have been widely used in previous studies. For two nodes with different classes, the "Injected" attacks manipulate the graph to inject a connection with a probability of 0.9. We repeat our experiments three times and report the mean and variance values in Table 1. Under the two attacks, Table 1 shows that our method consistently achieves the best among all baselines on five benchmarks.
Compared The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8935 Wisconsin Texas Cornell Chameleon Squirrel Actor Cora Citeseer MixHop 75.88±4.9 77.84±7.7 73.51±6.2 60.50±2.5 43.80±1.4 32.22±2.3 81.90±0.8 71.40±1.3 H2GCN 86.67±4.6 84.86±6.7 82.16±6.0 57.11±1.6 37.90±2.0 35.86±1.0 87.81±1.3 77.07±1.6 NLGNN 87.30±4.3 85.40±3.8 84.90±5.7 70.10±2.9 59.00±1.2 37.90±1.3 88.50±1.8 76.20±1.6 GEOM-GCN 65.10±6.5 67.84±5.8 60.00±6.5 65.81±1.6 45.49±1.3 31.94±1.0 85.65±1.7 79.41±1.7 Node2Seq 60.30±7.0 63.70±6.1 58.70±6.8 69.40±1.6 58.80±1.4 31.40±1.0 GPNN 86.86±2.6 85.23±6.4 85.14±6.0 71.27±1.8 59.11±1.3 37.08±1.4 UGCN 69.89±5.2 71.72±6.2 69.77±6.7 54.07±1.7 34.39±1.9 84.00±0.9 74.08±1.2 SimP-GCN 85.49±3.5 81.62±6.5 84.05±5.3 36.20±1.3 BM-GCN 85.13±4.6 69.58±2.9 51.41±1.1 87.99±1.2 76.13±1.9 GREET 84.90±3.3 87.00±4.2 85.10±4.9 63.60±1.2 42.30±1.3 36.60±1.2 83.81±0.9 73.08±0.8 ALT-GCN 76.40±3.9 70.90±4.3 73.90±5.1 65.80±0.9 52.40±0.8 81.20±0.5 71.40±0.4 LHS 88.32±2.3 86.32±4.5 85.96±5.1 72.31±1.6 60.27±1.2 38.87±1.0 88.71±0.7 78.53±1.5 Table 2: Comparisons of node classification without any attacks. with “OOD” attacks, “Injected” attacks are much more constructive as they significantly increase of heterophily of the testing set. Compared to the state-of-the-art structure learning method BM-GCN, our LHS achieves an 11.33 point accuracy under “OOD” attacks. Overall, these results suggest that LHS is more robust against both attacks compared to all the considered baselines. The key to these improved results is the ability of our LHS to perform global searches of the homophilic structure learned by the structure inducer. Comparisons without attacks. We have shown that our model is more robust than existing methods under various attacks. To further investigate the performance without attacks, we conduct experiments on the five heterophilic graphs and compare ours with baseline approaches. Table 2 shows that the proposed LHS performs best and we attribute this to the information aggregation in a homophilic way on heterophilic graphs. Additionally, we also achieve better or comparable classification results on three benchmarks for homophilic graphs. This suggests that improving homophily for both homophilic and heterophilic graphs benefits node classification, and this also remotely aligns with a previous work (Yan et al. 2021), showing that our method can handle both types of graphs in a unified manner. Discussion Can LHS reduce the scale of “right-shift” of H distributions? We have discussed that the “right-shift” phenomenon, i.e., the structural OOD, is the cause of performance degradation under attacks in the Introduction Section. To answer this question, we visualize how our method reduces the “right-shift” for experiments in Table 1 on Squirrel. Under the “injected evasion attack”, Figure 5 shows that our latent structure can greatly move the H distribution of the attacking sample to the left, thus reducing the “right-shift” (see the red arrow). We also observe that the second round of refinement can further move distribution to the left side, further improving the model’s robustness. However, we find that existing SimP-GCN can slightly move the distribution, visually explaining why LHS is more robust than Simp-GCN. This further confirms our hypothesis that reducing “rightshift” can harden the GCN over heterophilic graphs. 
Can the learnable homophilic structure be applied to other tasks? To answer this question, we also apply the homophilic structure learned on four graphs, including Wisconsin, Squirrel, Chameleon, and Cora, to the graph clustering task. We use a vanilla GCN (Kipf and Welling 2017) together with the proposed structure inducer of LHS to develop "GCN + Structure Inducer". Even with the vanilla GCN, Table 3 shows that our "GCN + Structure Inducer" outperforms all other baselines on heterophilic graphs. For example, ours outperforms SimP-GCN on Squirrel by 2.79 points. We attribute such a gain to our homophilic structure.
Table 3: Performance comparisons on graph clustering.

Method          Wisconsin   Squirrel   Chameleon
SimP-GCN        58.42       38.57      46.44
BM-GCN          54.92       40.26      50.17
AGC             43.71       32.98      35.78
GCN + Inducer   61.32       41.36      52.37

Conclusion
This paper studies robust graph convolution networks over heterophilic graphs. We take the first step towards quantitatively analyzing the robustness of GCN approaches over omnipresent heterophilic graphs for node classification, and reveal that the vulnerability is mainly caused by the structural out-of-distribution (OOD) issue. Based on this crucial observation, we present LHS, a novel method that aims to harden GCN against various attacks by learning latent homophilic structures on heterophilic graphs. Our LHS can iteratively refine the latent structure during the learning process, facilitating the model to aggregate information in a homophilic way on heterophilic graphs. Extensive experiments on various benchmarks show the effectiveness of our approach. We believe our structure can also benefit more graph tasks for better representation learning. Future work could focus on the development of novel adversarial training methods based on the structural OOD.
Acknowledgments
This work was partially supported by the National Key R&D Program of China (Grant No. 2022YFB2902200), the Major Projects of the National Natural Science Foundation of China (Grant No. 72293583), and the Joint Funds for Regional Innovation and Development of the National Natural Science Foundation of China (No. U21A20449).
References
Abu-El-Haija, S.; Perozzi, B.; Kapoor, A.; Alipourfard, N.; Lerman, K.; Harutyunyan, H.; Ver Steeg, G.; and Galstyan, A. 2019. Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In international conference on machine learning, 21–29. PMLR. Biggio, B.; Corona, I.; Maiorca, D.; Nelson, B.; Šrndić, N.; Laskov, P.; Giacinto, G.; and Roli, F. 2013. Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases. Springer. Bo, D.; Wang, X.; Shi, C.; and Shen, H. 2021. Beyond low-frequency information in graph convolutional networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 3950–3957. Böttcher, A.; and Wenzel, D. 2008. The Frobenius norm and the commutator. Linear algebra and its applications, 429(8-9): 1864–1885. Dai, H.; Li, H.; Tian, T.; Huang, X.; Wang, L.; Zhu, J.; and Song, L. 2018. Adversarial attack on graph structured data. In International conference on machine learning, 1115–1124. PMLR. Halko, N.; Martinsson, P.-G.; and Tropp, J. A. 2011. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM review, 53(2): 217–288. Hamilton, W. L.; Ying, Z.; and Leskovec, J. 2017. Inductive Representation Learning on Large Graphs.
In NIPS. He, D.; Liang, C.; Liu, H.; Wen, M.; Jiao, P.; and Feng, Z. 2022. Block modeling-guided graph convolutional neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 36, 4022–4029. Jin, D.; Yu, Z.; Huo, C.; Wang, R.; Wang, X.; He, D.; and Han, J. 2021a. Universal graph convolutional networks. Advances in Neural Information Processing Systems, 34: 10654–10664. Jin, W.; Derr, T.; Wang, Y.; Ma, Y.; Liu, Z.; and Tang, J. 2021b. Node similarity preserving graph convolutional networks. In Proceedings of the 14th ACM international conference on web search and data mining, 148–156. Jin, W.; Ma, Y.; Liu, X.; Tang, X.; Wang, S.; and Tang, J. 2020. Graph structure learning for robust graph neural networks. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, 66–74. Kipf, T. N.; and Welling, M. 2016. Variational graph autoencoders. arXiv preprint arXiv:1611.07308. Kipf, T. N.; and Welling, M. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations. Li, H.; Xu, W.; Qiu, C.; and Pei, J. 2022. Fast Markov clustering algorithm based on belief dynamics. IEEE Transactions on Cybernetics. Li, S.; Kim, D.; and Wang, Q. 2023. Restructuring Graph for Higher Homophily via Adaptive Spectral Clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 8622–8630. Lin, W.; Lan, H.; and Li, B. 2021. Generative Causal Explanations for Graph Neural Networks. In Proceedings of the 38th International Conference on Machine Learning. Liu, M.; Wang, Z.; and Ji, S. 2021. Non-local graph neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence. Liu, Y.; Zheng, Y.; Zhang, D.; Chen, H.; Peng, H.; and Pan, S. 2022. Towards unsupervised deep graph structure learning. In Proceedings of the ACM Web Conference 2022, 1392–1403. Liu, Y.; Zheng, Y.; Zhang, D.; Lee, V. C.; and Pan, S. 2023. Beyond smoothing: Unsupervised graph representation learning with edge heterophily discriminating. In Proceedings of the AAAI conference on artificial intelligence, volume 37, 4516–4524. Luan, S.; Hua, C.; Lu, Q.; Zhu, J.; Zhao, M.; Zhang, S.; Chang, X.-W.; and Precup, D. 2022. Is Heterophily A Real Nightmare For Graph Neural Networks on Performing Node Classification? Pei, H.; Wei, B.; Chang, K. C.-C.; Lei, Y.; and Yang, B. 2020. Geom-GCN: Geometric Graph Convolutional Networks. In International Conference on Learning Representations. Qiu, C.; Geng, Y.; Lu, J.; Chen, K.; Zhu, S.; Su, Y.; Nan, G.; Zhang, C.; Fu, J.; Cui, Q.; et al. 2023. 3D-IDS: Doubly Disentangled Dynamic Intrusion Detection. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1965–1977. Qiu, C.; Huang, Z.; Xu, W.; and Li, H. 2022. VGAER: graph neural network reconstruction based community detection. arXiv preprint arXiv:2201.04066. Rozemberczki, B.; Allen, C.; and Sarkar, R. 2021. Multiscale attributed node embedding. Journal of Complex Networks, 9(2): cnab014. Suresh, S.; Budde, V.; Neville, J.; Li, P.; and Ma, J. 2021a. Breaking the Limit of Graph Neural Networks by Improving the Assortativity of Graphs with Local Mixing Patterns. In KDD. Suresh, S.; Li, P.; Hao, C.; and Neville, J. 2021b. Adversarial graph augmentation to improve graph contrastive learning. Advances in Neural Information Processing Systems, 34: 15920–15933. Tang, J.; Sun, J.; Wang, C.; and Yang, Z. 2009. Social influence analysis in large-scale networks. 
In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, 807–816. Velickovic, P.; Fedus, W.; Hamilton, W. L.; Liò, P.; Bengio, Y.; and Hjelm, R. D. 2019. Deep graph infomax. ICLR (Poster), 2(3): 4. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; and Bengio, Y. 2018. Graph Attention Networks. In International Conference on Learning Representations. Wang, T.; Jin, D.; Wang, R.; He, D.; and Huang, Y. 2022a. Powerful Graph Convolutional Networks with Adaptive Propagation Mechanism for Homophily and Heterophily. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 4210–4218. Wang, Y.; and Derr, T. 2021. Tree decomposed graph neural network. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management. Wang, Y.; Yi, K.; Liu, X.; Wang, Y. G.; and Jin, S. 2022b. ACMP: Allen-cahn message passing with attractive and repulsive forces for graph neural networks. In The Eleventh International Conference on Learning Representations. Xu, Z.; Chen, Y.; Zhou, Q.; Wu, Y.; Pan, M.; Yang, H.; and Tong, H. 2023. Node Classification Beyond Homophily: Towards a General Solution. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2862–2873. Yan, Y.; Hashemi, M.; Swersky, K.; Yang, Y.; and Koutra, D. 2021. Two sides of the same coin: Heterophily and oversmoothing in graph convolutional neural networks. arXiv preprint arXiv:2102.06462. Yang, L.; Li, M.; Liu, L.; Niu, B.; Wang, C.; Cao, X.; and Guo, Y. 2021. Diverse Message Passing for Attribute with Heterophily. In Beygelzimer, A.; Dauphin, Y.; Liang, P.; and Vaughan, J. W., eds., Advances in Neural Information Processing Systems. Yang, T.; Wang, Y.; Yue, Z.; Yang, Y.; Tong, Y.; and Bai, J. 2022. Graph pointer neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 8832–8839. Yang, Z.; Cohen, W.; and Salakhudinov, R. 2016. Revisiting semi-supervised learning with graph embeddings. In International conference on machine learning, 40–48. PMLR. Yuan, H.; and Ji, S. 2021. Node2Seq: Towards Trainable Convolutions in Graph Neural Networks. CoRR. Zhang, F.; Chan, P. P. K.; Biggio, B.; Yeung, D. S.; and Roli, F. 2016. Adversarial Feature Selection Against Evasion Attacks. IEEE Transactions on Cybernetics, 46. Zhang, K.; Zhang, Y.; and Cheng, H. 2020. Self-supervised structure learning for crack detection based on cycle-consistent generative adversarial networks. Journal of Computing in Civil Engineering, 34(3): 04020004. Zhang, X.; and Zitnik, M. 2020. Gnnguard: Defending graph neural networks against adversarial attacks. Advances in neural information processing systems, 33: 9263–9275. Zheng, X.; Liu, Y.; Pan, S.; Zhang, M.; Jin, D.; and Yu, P. S. 2022. Graph neural networks for graphs with heterophily: A survey. arXiv preprint arXiv:2202.07082. Zheng, Y.; Zhang, H.; Lee, V.; Zheng, Y.; Wang, X.; and Pan, S. 2023. Finding the Missing-half: Graph Complementary Learning for Homophily-prone and Heterophily-prone Graphs. arXiv preprint arXiv:2306.07608. Zhu, D.; Zhang, Z.; Cui, P.; and Zhu, W. 2019. Robust graph convolutional networks against adversarial attacks. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, 1399–1407. Zhu, J.; Jin, J.; Loveland, D.; Schaub, M. T.; and Koutra, D. 2022.
How does heterophily impact the robustness of graph neural networks? Theoretical connections and practical implications. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2637–2647. Zhu, J.; Yan, Y.; Zhao, L.; Heimann, M.; Akoglu, L.; and Koutra, D. 2020. Beyond homophily in graph neural networks: Current limitations and effective designs. Advances in Neural Information Processing Systems, 33: 7793–7804.
2024
993
18,843
Link Prediction in Multilayer Networks via Cross-Network Embedding
Guojing Ren1, Xiao Ding2, Xiao-Ke Xu3, Hai-Feng Zhang2*
1Institutes of Physical Science and Information Technology, Anhui University
2Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, School of Mathematical Science, Anhui University
3Computational Communication Research Center and School of Journalism and Communication, Beijing Normal University
{rengj, xiaoding2021}@stu.ahu.edu.cn, [email protected], [email protected]
Abstract
Link prediction is a fundamental task in network analysis, with the objective of predicting missing or potential links. While existing studies have mainly concentrated on single networks, it is worth noting that numerous real-world networks exhibit interconnectedness. For example, individuals often register on various social media platforms to access diverse services, such as chatting, tweeting, blogging, and rating movies. These platforms share a subset of users and are termed multilayer networks. The interlayer links in such networks hold valuable information that provides more comprehensive insights into the network structure. To effectively exploit this complementary information and enhance link prediction in the target network, we propose a novel cross-network embedding method. This method aims to represent different networks in a shared latent space, preserving proximity within single networks as well as consistency across multilayer networks. Specifically, nodes can aggregate messages from aligned nodes in other layers. Extensive experiments conducted on real-world datasets demonstrate the superior performance of our proposed method for link prediction in multilayer networks.
Introduction
Link prediction (Kumar et al. 2020; Daud et al. 2020) is a fundamental task in network analysis that aims to predict missing or potential links in a network. It plays a crucial role in various fields, including (i) social network analysis (Kossinets and Watts 2006): suggesting new friendships; (ii) recommender systems (Vahidi Farashah et al. 2021): recommending relevant items or products to users; (iii) biological networks (Coşkun and Koyutürk 2021): predicting protein-protein interactions; and (iv) pandemic forecasting (Ma et al. 2022): predicting the spread of infectious diseases. The objective of link prediction is to infer the likelihood of a link between two nodes in a network based on the observed network structural features and, if available, node attribute features. Numerous existing approaches have been developed for link prediction, employing various techniques such as similarity indices (Newman 2001; Zhou, Lü, and Zhang 2009), maximum likelihood models (Clauset, Moore, and Newman 2008; Guimerà and Sales-Pardo 2009), matrix factorization methods (Ding, Li, and Jordan 2010; Ma, Sun, and Qin 2017), Skip-gram embedding (Grover and Leskovec 2016; Tang et al. 2015), deep learning models (Wang, Cui, and Zhu 2016), and graph neural networks (GNNs) (Kipf and Welling 2017; Hamilton, Ying, and Leskovec 2017; Veličković et al. 2018).
*Hai-Feng Zhang is the corresponding author of this paper.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Figure 1: An example of multilayer networks (layers Gs and Gt with interlayer link set S). Black lines represent intralayer links, blue lines represent interlayer links, and black dashes represent non-observed intralayer links that need to be predicted.
These methods have mainly focused on single networks. However, there might be missing links or noise in the single network, due to limitations in observation or sampling. This data insufficiency problem hinders the performance of link prediction methods, which are sensitive to network topology. Moreover, mining information from a single network provides one-sided insights, as users exhibit distinct characteristics and behavior patterns across different networks. For example, the Facebook network captures social friendships, the LinkedIn network focuses on employment relationships, the Douban network contains a common interest in movies, and the DBLP network reveals coauthorship among scholars. We cannot tell if someone is genuinely interested in the movie "Fast X" or just influenced by their friends, using only the limited information revealed in the movie rating network without knowledge of their social friendships. To address these challenges, some researchers have turned their attention to multilayer networks (Dickison, Magnani, and Rossi 2016). Interconnectedness is pervasive among real-world networks. For example, individuals often register on multiple social media platforms to access various services, such as chatting, tweeting, blogging, and rating movies. Some users display accounts from other platforms on their profiles, indicating cross-platform connections. Similarly, collaborative networks in different fields share a subset of researchers, implying connections between these fields. Additionally, knowledge graphs share named entities across domains. These interconnected networks are modeled as multilayer networks. The relationships between different networks hold valuable information that can enhance our understanding of the structure, patterns, and evolution of networks. By considering multilayer networks as complementary information, we can improve link prediction in the target network. While previous research has explored link prediction in multilayer networks (Liu et al. 2017; Cao et al. 2018; Najari et al. 2019; Luo et al. 2022), these studies assume complete knowledge of the correspondence between nodes in all layers, e.g., multiplex networks. However, in practice, obtaining complete interlayer relationships is challenging and expensive due to user privacy concerns and platform policies. Consequently, researchers typically only have access to a limited subset of interlayer links. Therefore, this paper focuses on link prediction in multilayer networks, where nodes in different layers are allowed to be partially overlapped. We propose a novel cross-network embedding model for link prediction in multilayer networks, which represents different networks in a shared latent space based on GNNs. Specifically, a random-walk-based objective function is employed to preserve proximity within each single network. To leverage interlayer relationships as complementary information, nodes aggregate messages not only from their neighbors within the same layer but also from aligned nodes in the other layer. Furthermore, consistency across multilayer networks is maintained by minimizing the distance between aligned nodes, which allows networks to be close in the embedding space. The contributions of our work are summarized as follows:
• We introduce a GNN-based model that enables nodes to simultaneously aggregate messages from both their neighbors in the same layer and aligned nodes in the other layer.
Each layer learns complementary information from its counterpart layer.

• We develop a joint objective function to train the model, which effectively preserves both the proximity within single networks and the consistency across multilayer networks.

• We extend several state-of-the-art methods designed for single networks and compare them with our proposed model in multilayer networks. Extensive experiments conducted on real-world datasets demonstrate that our proposed method outperforms the baselines for link prediction in multilayer networks, especially with few interlayer links.

Related Works

Link Prediction

Various link prediction methods have been developed, which can be categorized into several types.

Similarity-based metrics assign a similarity score to each non-observed link. Common Neighbors (CN) assumes that two individuals with more common friends are more likely to establish a friendship, and it counts the common neighbors of a given pair of nodes (Newman 2001). The Resource Allocation Index (RA) considers a resource allocation process in networks and calculates the amount of resource transported through the common neighbors of two nodes (Zhou, Lü, and Zhang 2009).

Maximum likelihood models evaluate the likelihood of each non-observed link, which may not be suitable for large-scale networks due to their complexity and time-consuming nature. The hierarchical structure model (HSM) suggests that many real-world networks are hierarchically structured, and it infers the likelihood of a hierarchical random graph to predict missing links (Clauset, Moore, and Newman 2008). The stochastic block model (SBM) distributes nodes into blocks or communities and computes the link reliability (Guimerà and Sales-Pardo 2009).

Matrix factorization methods extract latent features of each node and are considered dimensionality reduction techniques. Some authors apply Singular Value Decomposition (SVD), which retains important information based on the eigenvalues (Ding, Li, and Jordan 2010). Nonnegative matrix factorization has also been used to learn latent structural features incorporating additional node/link attribute information (Ding, Li, and Jordan 2010; Ma, Sun, and Qin 2017).

Embedding-based methods have gained significant attention. node2vec is a Skip-gram model that preserves node neighborhoods through biased random walks, balancing exploration and exploitation (Grover and Leskovec 2016). LINE preserves both first- and second-order proximity (Tang et al. 2015). SDNE is a semi-supervised deep autoencoder model that jointly preserves local and global structural features (Wang, Cui, and Zhu 2016).

GNN-based methods have achieved great success in recent years. GCN aggregates information from a node's local neighborhood (Kipf and Welling 2017, 2016). GraphSAGE generates embeddings in an inductive manner and is capable of handling large-scale graphs (Hamilton, Ying, and Leskovec 2017). GAT introduces self-attention to assign different importance to each pair of nodes (Veličković et al. 2018). Chen et al. generalized GCN to simplicial complexes by integrating interactions among multiple higher-order graph structures (Chen, Gel, and Poor 2022).

Multilayer Link Prediction

The above works address link prediction in single-layer networks with homogeneous links. However, many real-world networks are heterogeneous, developing different types of links in multiple layers.
Compared to single-layer networks, multilayer networks can express richer information and have thus drawn increasing attention recently.

Similarity-based metrics: Hristova et al. extended the Jaccard Coefficient and Adamic/Adar Index to a multilayer scenario (Hristova et al. 2016). Yao et al. proposed NSILR, which aggregates inter-layer and intra-layer similarity scores based on layer-wise correlations (Yao et al. 2017). Najari et al. developed LPIS, a synthesizing probability that combines intra-layer and inter-layer information (Najari et al. 2019). Aleta et al. generalized the Adamic-Adar Index to multiplex networks via triadic closure (Aleta et al. 2020). Luo et al. introduced the EMLP algorithm, which integrates similarity scores from all layers using evidence theory (Luo et al. 2022).

Embedding-based methods: Liu et al. proposed layer co-analysis, which modifies node2vec for multilayer networks, traversing between layers by leveraging interactions among layers (Liu et al. 2017). Cao et al. trained multiple neural networks (MNN) targeting heterogeneous feature channels and assigned the same embedding to aligned nodes (Cao et al. 2018). Zhan et al. proposed collective link fusion (CLF) to predict link probability using collective random walk with restart (Zhan, Zhang, and Yu 2019). Jiang proposed partially aligned GCNs that jointly learn embeddings incorporating interlayer information (Jiang 2021). Du et al. trained a Skip-gram embedding model, CELP, via a biased random walk based on intra-network and cross-network distributions (Du et al. 2022). Alnaimy et al. employed matrix factorization to obtain embeddings on the expanded graph (EG) (Alnaimy and Desouki 2022).

Preliminaries

In this section, we define the terminology and notation used in this paper and formulate the problem of link prediction in multilayer networks.

Definition 1. Multilayer networks: For simplicity, we consider two undirected and unweighted networks $G_s = (V_s, E_s)$ and $G_t = (V_t, E_t)$, where $V_s, V_t$ are the sets of nodes and $E_s, E_t$ are the sets of edges (intralayer links), respectively. $G_s, G_t$ can be referred to as layers, and they share some nodes belonging to the same entities, i.e., $V_s \cap V_t \neq \emptyset$. $S = \{(v_i, v_j) \mid v_i \in V_s, v_j \in V_t\}$ is the set of interlayer links between $G_s$ and $G_t$. For any $v_i \in V_s \cup V_t$, at most one interlayer link exists, i.e., $|S| \leq \min(|V_s|, |V_t|)$. A multilayer network can be defined by the triplet $(G_s, G_t, S)$, as Figure 1 shows. Note that multiplex networks are a special case of general multilayer networks, where $V_s = V_t = V$ and $|S| = |V|$.

Definition 2. Link prediction: For the target network $G = (V, E)$, $|V| \cdot (|V| - 1)/2$ is the number of all possible links, denoted as the universal set $U$. The set of non-existing links is $U - E$, and there may be some missing or potential links in $U - E$. The aim of link prediction is to find such missing or potential links. In the above multilayer network, $G \in \{G_s, G_t\}$. To evaluate the effectiveness of link prediction methods, $E$ is randomly divided into two parts $E^T$ and $E^P$, named the training set and the probe set (i.e., test set), respectively. In general, a link prediction algorithm provides a similarity score or linkage probability for each non-observed link $(x, y) \in U - E^T$.

Proposed Method

Cross-Network Embedding

GNNs can learn node representations by aggregating information from neighbors, thereby capturing the underlying connectivity patterns, which is beneficial for the link prediction task.
Multilayer networks provide valuable information to enhance link prediction. In light of this, our approach applies GNNs to multilayer networks, learning a cross-network embedding that simultaneously integrates intralayer and interlayer structural features.

General GNN Layer

Various GNNs have been developed for node representation. In general, a typical GNN layer follows the form

$$h_u^k \leftarrow \theta\Big(W_1^k \cdot h_u^{k-1} + W_2^k \cdot \sum_{v \in N_u} a_{uv} h_v^{k-1}\Big), \quad (1)$$

where $h_u^k$ represents the output vector of node $u$ in the $k$-th layer, $\forall k \in \{1, \dots, K\}$. $\theta$ denotes an activation function (e.g., ReLU). $W_1^k$ and $W_2^k$ are matrices that weight the contributions of the node itself and its neighbors, respectively, and $a_{uv}$ indicates the importance of link $(u, v)$. In the case of GCN (Kipf and Welling 2017), $a_{uv}$ corresponds to an element of the symmetrically normalized adjacency matrix, $a_{uv} = 1/\sqrt{|N_u| \cdot |N_v|}$; for GAT (Veličković et al. 2018), $a_{uv}$ represents attention coefficients; for GraphSAGE (Hamilton, Ying, and Leskovec 2017), $a_{uv} = 1/|N_u|$ when using a mean aggregator.

Tang et al. argued that intralayer links connected with small-degree nodes have the most significant impact on capturing interlayer features (Tang et al. 2022). This observation can be explained from a resource allocation perspective (Zhou, Lü, and Zhang 2009). For instance, an individual who is popular in school may have many friends, but due to time constraints (e.g., only 2.5 hours for daily social activities on average), they have less opportunity to interact with each specific friend. Conversely, an individual with few friends can devote more attention to each friend. In other words, intralayer links connected with small-degree nodes are more important for capturing both intralayer and interlayer features. Therefore, we adopt the GCN form for $a_{uv}$, which suppresses the contributions of neighbors with large degrees: nodes with larger degrees transmit fewer messages to each of their neighbors.

Cross-GNN Layer

We assume that the different layers of a multilayer network have inherent structural consistency to some extent, which is a prerequisite for link prediction in multilayer networks. Aggregating messages from other layers helps complement node information, which enhances the understanding of connectivity patterns. A cross-GNN layer integrating interlayer information is formulated as

$$h_u^k \leftarrow \theta\Big(W_1^k \cdot h_u^{k-1} + W_2^k \cdot \sum_{v \in N_u} \frac{1}{\sqrt{|N_u| \cdot |N_v|}} h_v^{k-1} + W_3^k \cdot b_{uu'} h_{u'}^{k-1}\Big), \quad (2)$$

[Figure 2: An illustration of the cross-GNN layer. Nodes 2, 3, and 4 are neighbors of node 1, denoted as $N_1$; $1'$ is the aligned node of 1. Information from node 1 itself, its neighbors, and its aligned node is aggregated with different weights $W_1$, $W_2$, $W_3$.]

where $u'$ is aligned with $u$, and $b_{uu'}$ denotes the importance of $u'$ to $u$, defined as $b_{uu'} = \sigma\big((h_u^{k-1})^T \cdot h_{u'}^{k-1}\big)$ with the sigmoid function $\sigma(x) = 1/(1 + e^{-x})$. $W_1^k, W_2^k, W_3^k$ are matrices weighting the contributions of the node itself, its neighbors, and the aligned node, respectively. An example is shown in Figure 2. Note that if $u$ is not aligned with any node in the other network (i.e., $u' = \text{None}$), then $h_{u'}^{k-1} = 0$. As these aggregator functions are applied to the multilayer network $(G_s, G_t, S)$, it is essential that $h_u^{k-1}$ and $h_{u'}^{k-1}$ have the same dimensions.
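To make Eq. (2) concrete, here is a minimal PyTorch sketch of one cross-GNN layer. It is an illustrative rendering under simplifying assumptions — a dense normalized adjacency matrix and zero rows in the aligned-feature matrix for unaligned nodes — not the authors' released implementation.

```python
import torch
import torch.nn as nn

class CrossGNNLayer(nn.Module):
    """One cross-GNN layer in the spirit of Eq. (2): a self term, a degree-
    discounted neighbor term, and a sigmoid-gated aligned-node term."""
    def __init__(self, in_dim, out_dim, last=False):
        super().__init__()
        self.W1 = nn.Linear(in_dim, out_dim, bias=False)  # node itself
        self.W2 = nn.Linear(in_dim, out_dim, bias=False)  # neighbors
        self.W3 = nn.Linear(in_dim, out_dim, bias=False)  # aligned node
        self.last = last  # Eq. (3): no activation in the final layer

    def forward(self, H, A_norm, H_aligned):
        # H:         (num_nodes, in_dim) current hidden vectors
        # A_norm:    (num_nodes, num_nodes) dense D^-1/2 A D^-1/2 (assumption)
        # H_aligned: rows hold h_{u'} for aligned nodes u, zeros otherwise (Eq. 4)
        b = torch.sigmoid((H * H_aligned).sum(-1, keepdim=True))  # b_{uu'}
        out = self.W1(H) + self.W2(A_norm @ H) + self.W3(b * H_aligned)
        return out if self.last else torch.relu(out)
```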
To keep these dimensions consistent, we use the GCN-form aggregator to capture first-order proximity and ensure matching output dimensions for both layers. Subsequently, the cross-network aggregator captures higher-order proximity and information from the aligned nodes. The input node features are denoted as $X$, which can be the adjacency matrix in attribute-free networks. The representation vectors are formulated as

$$H^1 = \theta(X W_1^1 + \tilde{A} X W_2^1),$$
$$H^k = \theta(H^{k-1} W_1^k + \tilde{A} H^{k-1} W_2^k + B^k H_*^{k-1} W_3^k). \quad (3)$$

Here, $H^k \in \mathbb{R}^{n \times c_k}$ represents the output representation vectors of the $k$-th layer, $\forall k \in \{2, \dots, K\}$. The symmetrically normalized adjacency matrix is denoted as $\tilde{A} = D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$, where $A$ is the adjacency matrix and $D$ is the degree matrix. Additionally, $B^k$ is the importance matrix of $H_*^{k-1}$ to $H^{k-1}$, defined as $B^k = \sigma\big(I (H^{k-1} \odot H_*^{k-1})^T\big)$, where $\odot$ represents the Hadamard product. The matrix $I \in \mathbb{R}^{n \times c_k}$ is filled with 1 to summarize the Hadamard product of each aligned pair $(h^{k-1}, h_*^{k-1})$ and broadcast. The aligned representation matrix $H_*^{k-1}$ has the same shape as $H^{k-1}$ and can be written as

$$H_*^{k-1}[u] = \begin{cases} h_{u'}^{k-1}, & \text{if } (u, u') \in S \\ 0, & \text{otherwise.} \end{cases} \quad (4)$$

The model defined in Eq. (3) is a $K$-layer GNN, where the first layer is a GCN-form layer and the following layers are cross-GNN layers. The model is trained for both $G_s$ and $G_t$, and the parameters in the cross-GNN layers are shared. Notably, there is no activation function in the last layer of the GNN model. The final output vectors $H^K$ are denoted as $Z$.

Objective Functions

The objective function includes an intralayer and an interlayer loss: the intralayer loss mainly retains intralayer structural features, while the interlayer loss retains interlayer structural features. The total loss is jointly optimized to better unify the two networks in the same latent space.

Intralayer Loss

Random walks are often employed in link prediction tasks. If two nodes co-occur frequently on fixed-length random walks, a link between them is more probable. Random walks thus capture higher-order proximity and provide insights into the connectivity patterns and potential links within a network. Therefore, to learn the embedding $z_i$, $\forall i \in V$, we apply a random-walk-based objective in an unsupervised setting:

$$L_r = -\log\big(\sigma(z_i^T \cdot z_j)\big) - Q \cdot \mathbb{E}_{k \sim P_n(v)} \log\big(1 - \sigma(z_i^T \cdot z_k)\big), \quad (5)$$

where $j$ is a node that co-occurs with $i$ within a window on sequences of random walks, $\sigma$ is the sigmoid function, $P_n(v)$ is the negative sampling probability distribution, and $Q$ defines the number of negative samples. Proximate nodes are encouraged to have similar embeddings, while distant nodes are kept distinct in the embedding space. The intralayer loss is the sum of the random-walk-based losses of $G_s$ and $G_t$:

$$L_{intra} = L_r^s + L_r^t. \quad (6)$$

By minimizing the intralayer loss, the intralayer structural features of both networks are preserved.

Interlayer Loss

Besides intralayer structural features, interlayer structural features are crucial to multilayer networks. We assume that the layers are consistent to some extent, so aligned nodes should be close in the latent space, i.e., they share similar embeddings. We therefore build an anchor-aware loss to minimize the distance between aligned nodes. Obviously, the number of non-interlayer links far exceeds the number of interlayer links (i.e., $|V_s| \cdot |V_t| - |S| \gg |S|$). To overcome this sample imbalance, we adopt undersampling, which selects the several nearest non-interlayer links for each interlayer link.
These non-interlayer links, termed hard negatives, are more informative and help maintain consistency between the layers. In detail, for each interlayer link $(v_i^s, v_j^t)$, we randomly sample $\eta$ non-interlayer links $(v_i^s, v_k^t)$ for $v_i^s$, where $v_k^t \in N(v_j^t)$, i.e., $v_k^t$ is chosen from the neighbors of $v_j^t$. The same selection is performed for $v_j^t$. The set of interlayer links is called $S^+$ with label 1, while the set of sampled non-interlayer links is called $S^-$ (i.e., $|S^-| = 2\eta |S^+|$) with label $-1$. The interlayer loss is defined as

$$L_{inter} = \frac{1}{|S^+|} \sum_{(v_i^s, v_j^t) \in S^+} \big(1 - \cos(z_i^s, z_j^t)\big) + \frac{1}{|S^-|} \sum_{(v_k^s, v_l^t) \in S^-} \max\big(\cos(z_k^s, z_l^t) - \epsilon, \; 0\big), \quad (7)$$

where $\epsilon$ is a margin parameter: if $\cos(z_k^s, z_l^t) > \epsilon$, the non-interlayer link $(z_k^s, z_l^t)$ is hard to distinguish from an interlayer link $(z_i^s, z_j^t)$, so $\cos(z_k^s, z_l^t)$ is encouraged to decrease; otherwise, it is not taken into account. Minimizing the interlayer loss preserves interlayer structural information.

Algorithm 1: CGNN
Input: Training multilayer networks $(G_s, G_t, S^+)$
Parameters: batch size of random walks $b_r$, batch size of interlayer links $b_s$, margin $\epsilon$, weight parameter $\alpha$, numbers of negative samples $Q$, $\eta$
Output: Embedding vectors $Z_s$ for $G_s$, $Z_t$ for $G_t$
1: Obtain node attribute vectors $X_s$ for $G_s$, $X_t$ for $G_t$
2: Initialize the set of model parameters $W$
3: while not converged do
4:   Sample a batch of positive and negative pairs from random walks
5:   Sample a batch of interlayer links from $S^+$ and non-interlayer links from $S^-$
6:   Generate embedding vectors $Z_s$, $Z_t$ by Eq. (3)
7:   Calculate the total loss by Eq. (8)
8:   Update $W$ with the Adam optimizer
9: end while
10: return $Z_s$, $Z_t$

Total Loss

The total objective function is defined as a linear combination of the intralayer and interlayer losses, so that both kinds of structural information are preserved by jointly optimizing

$$L = \alpha \cdot L_{intra} + (1 - \alpha) \cdot L_{inter}. \quad (8)$$

Here, $\alpha$ is a weight parameter that trades off the two components of the objective. We employ the Adam optimizer to update the parameters of the GNN layers. For convenience, we denote our proposed model as CGNN (Cross-network Graph Neural Networks). The pseudocode is shown in Algorithm 1.

Link Prediction

To estimate the probability of existence of each non-observed link $(x, y) \in U - E^T$, one could simply use cosine similarity. To better leverage the latent information contained in the node embeddings, we instead employ a Logistic Regression classifier to predict the linkage probability. The input feature is an edge embedding: the concatenation of the two node embeddings and their Hadamard product (Qu et al. 2016):

$$I(x, y) = \text{concat}(z_x, \; z_x \odot z_y, \; z_y). \quad (9)$$

Complexity Analysis

Notations: The depth of GNN layers $K$ is set to 2. $f$, $c$, $d$ are the dimensions of the input feature vectors, hidden vectors, and output vectors, respectively. $|E_s|$, $|E_t|$ are the numbers of edges of $G_s$, $G_t$. For random walks, the walk length is $l$, the window size is $w$, and the number of walks per node is $m$.

Datasets | Nodes# | Edges# | Anchors#
Facebook | 1043 | 4734 | 1043
Twitter | 1043 | 4860 |
Twitter | 2562 | 6967 | 2177
YouTube | 2409 | 7862 |
Table 1: Structural statistics of the datasets.

GNN Layers: For a single GNN layer, the convolutional operation has complexity $O(|E| f c)$. For the 2-layer GNN defined above, the complexity is $O(|E| c (f + d))$, where $|E| = \max(|E_s|, |E_t|)$.

Intralayer Loss: The number of fixed-length random walks from batch sampling is $b_r m (l - w)$.
Then the complexity of the intralayer loss is $O(b_r m (l - w) w d)$.

Interlayer Loss: The batch size of interlayer links $b_s$ is $|S|$, i.e., full batch. The complexity of the interlayer loss is $O(|S| d)$, where $|S| = \max(|S^+|, |S^-|)$.

Experiments

Datasets
We select two real-world datasets: (i) Facebook/Twitter (Du et al. 2022); (ii) Twitter/YouTube (Dickison, Magnani, and Rossi 2016). The detailed statistics of these datasets are shown in Table 1. More real-world datasets are shown in the Appendix.

Baselines
We compare our proposed model with the following state-of-the-art baseline methods.

Single-Layer Methods
• Common Neighbors (CN): similarity-based metric calculating the number of common neighbors of a given pair of nodes x and y (Newman 2001).
• Resource Allocation Index (RA): similarity-based metric calculating the resources sent from node x to y through their common neighbors (Zhou, Lü, and Zhang 2009).
• SVD: matrix factorization-based method applying singular value decomposition to generate low-dimensional vectors (Ding, Li, and Jordan 2010).
• node2vec (n2v): Skip-gram embedding method using a biased random walk to preserve higher-order proximity (Grover and Leskovec 2016).
• GAE: GNN-based method using a GCN encoder and a simple inner product decoder (Kipf and Welling 2016).
• GAT: GNN-based method using a GAT encoder and a simple inner product decoder (Veličković et al. 2018).

Multilayer Methods
• MAA: similarity-based metric generalizing the Adamic-Adar method (Adamic and Adar 2003) to multiplex networks via triadic relationships within a single layer and across different layers (Aleta et al. 2020).
• LPIS: predicts intralayer link probability using a Logistic Regression classifier with intralayer features and then calculates interlayer link probability with interlayer similarity; the total link probability is a combination of both (Najari et al. 2019).
• CLF: a collective link fusion model predicting both intralayer and interlayer links (Zhan, Zhang, and Yu 2019). It can be seen as random walks with restart (Tong, Faloutsos, and Pan 2006) across multilayer networks.
• EG-mini: for multilayer networks $(G_s, G_t, S)$, we construct an expanded graph $G_e = (V_e, E_e)$, where $V_e = V_s \cup V_t$, $E_e = E_s \cup E_t \cup S$ (Alnaimy and Desouki 2022), and apply matrix factorization to the adjacency matrix of the expanded graph to obtain node embeddings.
• CELP: cross-network Skip-gram embedding method employing a biased random walk strategy that balances intra-network and cross-network empirical distributions (Du et al. 2022).
• n2v-e: node2vec applied to the above expanded graph; it can be seen as a simplified version of CELP.

Table 2: AUC with different ratios of interlayer links (each cell reports the AUC on the two layers of the dataset, first / second).

Facebook ↕ Twitter — ratio of interlayer links: 0 | 30% | 60% | 90%
CN | 0.7642 / 0.7880 | 0.7699 / 0.7913 | 0.7722 / 0.7984 | 0.7863 / 0.8057
RA | 0.7649 / 0.7885 | 0.7708 / 0.7917 | 0.7725 / 0.7992 | 0.7878 / 0.8077
SVD | 0.8100 / 0.8330 | 0.8197 / 0.8384 | 0.8331 / 0.8581 | 0.8528 / 0.8827
n2v | 0.8120 / 0.8319 | 0.8304 / 0.8514 | 0.8642 / 0.8800 | 0.9313 / 0.9470
GAE | 0.8352 / 0.8488 | 0.8492 / 0.8608 | 0.8660 / 0.8851 | 0.9123 / 0.9326
GAT | 0.8328 / 0.8545 | 0.8364 / 0.8529 | 0.8640 / 0.8878 | 0.9213 / 0.9344
MAA | 0.7649 / 0.7889 | 0.7692 / 0.7920 | 0.7753 / 0.7986 | 0.7814 / 0.8064
LPIS | 0.7721 / 0.7929 | 0.7444 / 0.7759 | 0.7449 / 0.7638 | 0.8287 / 0.8306
CLF | 0.8637 / 0.8787 | 0.8875 / 0.8996 | 0.9122 / 0.9169 | 0.9365 / 0.9341
n2v-e | 0.8120 / 0.8319 | 0.8481 / 0.8543 | 0.8792 / 0.8795 | 0.9013 / 0.9073
EG-mini | 0.7977 / 0.8559 | 0.8164 / 0.8729 | 0.8499 / 0.8908 | 0.8920 / 0.9202
CELP | 0.8150 / 0.8371 | 0.8697 / 0.8464 | 0.8983 / 0.9132 | 0.9588 / 0.9684
CGNN | 0.8989 / 0.9125 | 0.9105 / 0.9164 | 0.9444 / 0.9550 | 0.9653 / 0.9740

Twitter ↕ YouTube — ratio of interlayer links: 0 | 30% | 60% | 90%
CN | 0.7381 / 0.7512 | 0.7490 / 0.7568 | 0.7882 / 0.7715 | 0.8048 / 0.7828
RA | 0.7559 / 0.7529 | 0.7667 / 0.7597 | 0.8073 / 0.7740 | 0.8227 / 0.7914
SVD | 0.6458 / 0.7103 | 0.6537 / 0.7086 | 0.6641 / 0.7220 | 0.6725 / 0.7044
n2v | 0.6276 / 0.6594 | 0.6332 / 0.6619 | 0.6687 / 0.7057 | 0.7130 / 0.7266
GAE | 0.6887 / 0.7225 | 0.6996 / 0.7403 | 0.7000 / 0.7567 | 0.7159 / 0.7782
GAT | 0.7065 / 0.7715 | 0.7260 / 0.7967 | 0.7302 / 0.7858 | 0.7582 / 0.8174
MAA | 0.7551 / 0.7531 | 0.7602 / 0.7551 | 0.7740 / 0.7646 | 0.7741 / 0.7782
LPIS | 0.7093 / 0.8975 | 0.6824 / 0.8811 | 0.6199 / 0.8508 | 0.5304 / 0.8069
CLF | 0.8194 / 0.8920 | 0.8244 / 0.8850 | 0.8235 / 0.8800 | 0.8318 / 0.8752
n2v-e | 0.6276 / 0.6594 | 0.7332 / 0.8001 | 0.7418 / 0.8033 | 0.7656 / 0.8138
EG-mini | 0.7859 / 0.7345 | 0.7942 / 0.7505 | 0.8000 / 0.7764 | 0.8104 / 0.7758
CELP | 0.8942 / 0.8737 | 0.8908 / 0.8635 | 0.8965 / 0.8735 | 0.9042 / 0.8569
CGNN | 0.9300 / 0.9095 | 0.9327 / 0.9094 | 0.9353 / 0.9125 | 0.9353 / 0.9108

For a fair comparison, we extend the single-layer methods to multilayer methods. We apply the following two strategies respectively and report the better performance of the two.

• Network Extension: this strategy assumes that different layers of a multilayer network share similar connection patterns. Accordingly, if a pair of nodes are not linked in one layer but their aligned pair is linked in the other layer, we add an edge between them to complement the present network structure (Liu et al. 2017). Formally, given a multilayer network $(G_s, G_t, S)$, the extension of $G_t = (V_t, E_t)$ is
$$E_t \leftarrow E_t \cup \{(u, v) \mid (u, v) \notin E_t, (u', v') \in E_s, (u, u'), (v, v') \in S\}.$$
• Score Extension: link prediction on each layer of a multilayer network generates different similarity scores or link probabilities, which contain valuable information from different networks. This can be seen as a simplified version of LPIS (Najari et al. 2019). Formally, given a multilayer network $(G_s, G_t, S)$, the link probability of $(x, y) \in E_t$ is
$$p_t(x, y) \leftarrow c \cdot p_t(x, y) + (1 - c) \cdot p_s(x', y'), \quad \text{s.t. } (x, x'), (y, y') \in S.$$

Experiment Settings

Parameter Setup: For SVD, the embedding dimension is 32. For node2vec and n2v-e, $p = 1$, $q = 1$, the window size is 10, the number of walks per node is 20, the walk length is 80, and the embedding dimension is 128. For GAE, the hidden dimension is 32, the embedding dimension is 16, and the learning rate is 0.01. For GAT, the first layer consists of 4 attention heads computing 8 features each (for a total of 32 features), the embedding dimension is 16, and the learning rate is 0.01. For CGNN, the hidden dimension is 256, the embedding dimension $d = 128$, $\alpha = 0.05$, $\epsilon = 0.7$, the window size is 10, walks per node is 10, the walk length is 20, the batch size $b_r = 512$, and the learning rate is 0.001.

Method | Facebook | Twitter
CGNN | 0.9653 | 0.9740
W/o cross-GNN layer | 0.8967 | 0.9106
W/o degree-discount | 0.9281 | 0.9392
W/o intralayer loss | 0.5639 | 0.5793
W/o interlayer loss | 0.8859 | 0.8965
W/o weight-sharing | 0.9629 | 0.9723
W/o negative sampling | 0.9648 | 0.9732
Table 3: Ablation study on the Facebook/Twitter dataset.

Evaluation Metric: We use AUC (Area Under the Curve) as the evaluation metric. Each layer of the multilayer network is divided into a training set and a test set, and we predict the linkage probability on each layer respectively.
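As a sanity check of this evaluation pipeline — edge embeddings from Eq. (9) fed to a Logistic Regression classifier and scored by AUC — a minimal sketch follows. The embeddings and edge samples here are random placeholders standing in for trained CGNN outputs and the actual train/probe splits, so the printed AUC is meaningless; the point is the plumbing.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def edge_features(Z, pairs):
    # Eq. (9): concat(z_x, z_x * z_y, z_y) for each candidate pair
    zx, zy = Z[pairs[:, 0]], Z[pairs[:, 1]]
    return np.concatenate([zx, zx * zy, zy], axis=1)

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 16))                 # placeholder node embeddings
train_pairs = rng.integers(0, 100, size=(500, 2))
train_labels = rng.integers(0, 2, size=500)    # 1 = edge, 0 = sampled non-edge
test_pairs = rng.integers(0, 100, size=(100, 2))
test_labels = rng.integers(0, 2, size=100)

clf = LogisticRegression(max_iter=1000)
clf.fit(edge_features(Z, train_pairs), train_labels)
proba = clf.predict_proba(edge_features(Z, test_pairs))[:, 1]
print("AUC:", roc_auc_score(test_labels, proba))
```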
Hardware: SVD, GAE, GAT, and CGNN were run on a Linux server with an NVIDIA A100-40G GPU. The code is implemented in Python with the PyTorch and PyTorch Geometric libraries. All experiments in this paper were carried out 10 times and averaged.

Experimental Results

For each dataset, we randomly select 90% of the edges as the training set and the remaining 10% as the test set. We then randomly sample negative test edges of the same size as the test set for evaluation. The proposed model is compared with the baselines, and the performance on Facebook/Twitter and Twitter/YouTube is shown in Table 2. One can see that almost all methods perform better with a higher ratio of interlayer links, which confirms that interlayer information can indeed enhance link prediction. The improvement of embedding-based methods is greater than that of similarity-based metrics, indicating that embedding-based methods are better at leveraging interlayer information. CGNN performs best, showing the effectiveness of our proposed method. Compared to CELP, CGNN performs better with fewer interlayer links. Results on more real-world datasets are shown in the Appendix.

Ablation Study

To investigate the importance of the different components of our proposed model, we conduct an ablation study on the Facebook-Twitter dataset with 90% training edges and 90% interlayer links. We compare our model CGNN with six ablated variants: (i) CGNN without the cross-GNN layer, which replaces the cross-GNN layer with a GCN-form layer (W/o cross-GNN layer); (ii) CGNN without degree-discount, where $a_{uv} = 1$ (W/o degree-discount); (iii) CGNN without the intralayer loss, where $\alpha = 0$ (W/o intralayer loss); (iv) CGNN without the interlayer loss, where $\alpha = 1$ (W/o interlayer loss); (v) CGNN without weight-sharing in the 2nd layer (W/o weight-sharing); (vi) CGNN without negative sampling in the interlayer loss, where $\epsilon = 1$ (W/o negative sampling). The results are shown in Table 3, where black marks the optimal results. We can observe that CGNN outperforms all the variants, indicating the importance of these components.

[Figure 3: Performance with different hyperparameters on Facebook and Twitter. (a) AUC vs. $\alpha$; (b) AUC vs. $\epsilon$.]

Parameter Sensitivity Study

To analyze the sensitivity of the important parameters in our proposed method, we conduct experiments on the Facebook-Twitter dataset with 90% training edges and 90% interlayer links. Figure 3(a) shows the impact of the weight parameter $\alpha$ that trades off the intralayer and interlayer losses: $\alpha = 0$ means only the interlayer loss is considered, and $\alpha = 1$ means only the intralayer loss is considered. One can observe that the performance on the Twitter network peaks at $\alpha = 0.05$, where the interlayer loss is weighted more heavily. Performance at $\alpha = 0$ is extremely low, indicating that the intralayer loss is necessary for link prediction. Figure 3(b) depicts the impact of the margin parameter $\epsilon$ that selects interlayer links that are hard to distinguish from non-interlayer links in the embedding space; $\epsilon = 1$ means no negative samples are used. One can observe that performance improves as $\epsilon$ grows toward 1 and is best at $\epsilon = 0.7$, illustrating the effectiveness of negative sampling. More experiments can be found in the Appendix.

Conclusion

In this paper, we focus on link prediction in multilayer networks. Multilayer networks provide a more comprehensive view of network analysis.
To take advantage of the valuable information in multilayer networks, we present a cross-network GNN model for link prediction in multilayer networks. More specifically, nodes are capable of aggregating messages not only from their immediate neighbors within the same layer but also from corresponding nodes in the other layer; in this way, each layer learns complementary information from its counterpart layer. For joint model training, we utilize both the intralayer loss based on random walks, which maintains proximity within single layers, and the interlayer loss, which ensures consistency across the multilayer network. Therefore, the different layers of the multilayer network are embedded into the same latent space. Several single-layer state-of-the-art methods are extended to multilayer networks for comparison. Experiments on real-world datasets indicate that our proposed model outperforms the baselines for link prediction in multilayer networks, particularly under conditions of limited interlayer links.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (61973001) and the University Synergy Innovation Program of Anhui Province (GXXT-2021-032).

References

Adamic, L. A.; and Adar, E. 2003. Friends and neighbors on the web. Social Networks, 25(3): 211–230.
Aleta, A.; Tuninetti, M.; Paolotti, D.; Moreno, Y.; and Starnini, M. 2020. Link prediction in multiplex networks via triadic closure. Physical Review Research, 2(4): 042029.
Alnaimy, M.; and Desouki, M. S. 2022. Expanded graph embedding for joint network alignment and link prediction. Journal of Big Data, 9(1): 1–15.
Cao, X.; Chen, H.; Wang, X.; Zhang, W.; and Yu, Y. 2018. Neural link prediction over aligned networks. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-18), volume 32.
Chen, Y.; Gel, Y. R.; and Poor, H. V. 2022. BScNets: Block simplicial complex neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-22), volume 36, 6333–6341.
Clauset, A.; Moore, C.; and Newman, M. E. 2008. Hierarchical structure and the prediction of missing links in networks. Nature, 453(7191): 98–101.
Coşkun, M.; and Koyutürk, M. 2021. Node similarity-based graph convolution for link prediction in biological networks. Bioinformatics, 37(23): 4501–4508.
Daud, N. N.; Ab Hamid, S. H.; Saadoon, M.; Sahran, F.; and Anuar, N. B. 2020. Applications of link prediction in social networks: A review. Journal of Network and Computer Applications, 166: 102716.
Dickison, M. E.; Magnani, M.; and Rossi, L. 2016. Multilayer social networks. Cambridge University Press.
Ding, C.; Li, T.; and Jordan, M. I. 2010. Convex and Semi-Nonnegative Matrix Factorizations. IEEE Transactions on Pattern Analysis & Machine Intelligence, 32(01): 45–55.
Du, X.; Yan, J.; Zhang, R.; and Zha, H. 2022. Cross-Network Skip-Gram Embedding for Joint Network Alignment and Link Prediction. IEEE Transactions on Knowledge & Data Engineering, 34(03): 1080–1095.
Grover, A.; and Leskovec, J. 2016. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD-16), 855–864.
Guimerà, R.; and Sales-Pardo, M. 2009. Missing and spurious interactions and the reconstruction of complex networks. Proceedings of the National Academy of Sciences, 106(52): 22073–22078.
Hamilton, W. L.; Ying, R.; and Leskovec, J. 2017. Inductive representation learning on large graphs.
In Proceedings of the 31st International Conference on Neural Information Processing Systems (NeurIPS-17), 1025–1035. Hristova, D.; Noulas, A.; Brown, C.; Musolesi, M.; and Mascolo, C. 2016. A multilayer approach to multiplexity and link prediction in online geo-social networks. EPJ Data Science, 5: 1–17. Jiang, M. 2021. Cross-Network Learning with Partially Aligned Graph Convolutional Networks. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (SIGKDD-21), 746–755. Kipf, T. N.; and Welling, M. 2016. Variational graph autoencoders. arXiv preprint arXiv:1611.07308. Kipf, T. N.; and Welling, M. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations (ICLR-17). Kossinets, G.; and Watts, D. J. 2006. Empirical analysis of an evolving social network. Science, 311(5757): 88–90. Kumar, A.; Singh, S. S.; Singh, K.; and Biswas, B. 2020. Link prediction techniques, applications, and performance: A survey. Physica A: Statistical Mechanics and its Applications, 553: 124289. Liu, W.; Chen, P.-Y.; Yeung, S.; Suzumura, T.; and Chen, L. 2017. Principled multilayer network embedding. In 2017 IEEE International Conference on Data Mining Workshops (ICDMW-17), 134–141. Luo, H.; Li, L.; Dong, H.; and Chen, X. 2022. Link prediction in multiplex networks: An evidence theory method. Knowledge-Based Systems, 257: 109932. Ma, X.; Sun, P.; and Qin, G. 2017. Nonnegative matrix factorization algorithms for link prediction in temporal networks using graph communicability. Pattern Recognition, 71: 361–374. Ma, Y.; Gerard, P.; Tian, Y.; Guo, Z.; and Chawla, N. V. 2022. Hierarchical spatio-temporal graph neural networks for pandemic forecasting. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management (CIKM-22), 1481–1490. Najari, S.; Salehi, M.; Ranjbar, V.; and Jalili, M. 2019. Link prediction in multiplex networks based on interlayer similarity. Physica A: Statistical Mechanics and its Applications, 536: 120978. Newman, M. E. 2001. Clustering and preferential attachment in growing networks. Physical Review E, 64(2): 025102. Qu, Y.; Cai, H.; Ren, K.; Zhang, W.; Yu, Y.; Wen, Y.; and Wang, J. 2016. Product-based neural networks for user response prediction. In Proceedings of the 16th IEEE International Conference on Data Mining (ICDM-16), 1149–1154. Tang, J.; Qu, M.; Wang, M.; Zhang, M.; Yan, J.; and Mei, Q. 2015. Line: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web (WWW-15), 1067–1077. Tang, R.; Jiang, S.; Chen, X.; Wang, W.; and Wang, W. 2022. Network structural perturbation against interlayer link prediction. Knowledge-Based Systems, 250: 109095. Tong, H.; Faloutsos, C.; and Pan, J.-Y. 2006. Fast random walk with restart and its applications. In Proceedings of the 6th International Conference on Data Mining (ICDM-06), 613–622. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8946 Vahidi Farashah, M.; Etebarian, A.; Azmi, R.; and Ebrahimzadeh Dastjerdi, R. 2021. A hybrid recommender system based-on link prediction for movie baskets analysis. Journal of Big Data, 8: 1–24. Veliˇckovi´c, P.; Cucurull, G.; Casanova, A.; Romero, A.; Li`o, P.; and Bengio, Y. 2018. Graph Attention Networks. In International Conference on Learning Representations (ICLR18). Wang, D.; Cui, P.; and Zhu, W. 2016. Structural deep network embedding. 
In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD-16), 1225–1234.
Yao, Y.; Zhang, R.; Yang, F.; Yuan, Y.; Sun, Q.; Qiu, Y.; and Hu, R. 2017. Link prediction via layer relevance of multiplex networks. International Journal of Modern Physics C, 28(08): 1750101.
Zhan, Q.; Zhang, J.; and Yu, P. S. 2019. Integrated anchor and social link predictions across multiple social networks. Knowledge and Information Systems, 60: 303–326.
Zhou, T.; Lü, L.; and Zhang, Y.-C. 2009. Predicting missing links via local information. The European Physical Journal B, 71: 623–630.
2024
994
18,844
Towards Diverse Perspective Learning with Selection over Multiple Temporal Poolings

Jihyeon Seong*1, Jungmin Kim*1, Jaesik Choi1,2
1Korea Advanced Institute of Science and Technology (KAIST), South Korea
2INEEJI, South Korea
{jihyeon.seong, aldirl7, jaesik.choi}@kaist.ac.kr

*These authors contributed equally.
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

In Time Series Classification (TSC), temporal pooling methods that consider sequential information have been proposed. However, we found that each temporal pooling has a distinct mechanism and can perform better or worse depending on the time series data. We term this fixed pooling mechanism a single perspective of temporal poolings. In this paper, we propose a novel temporal pooling method with diverse perspective learning: Selection over Multiple Temporal Poolings (SoM-TP). SoM-TP dynamically selects the optimal temporal pooling among multiple methods for each data sample by attention. The dynamic pooling selection is motivated by the ensemble concept of Multiple Choice Learning (MCL), which selects the best among multiple outputs. The pooling selection by SoM-TP's attention enables a non-iterative pooling ensemble within a single classifier. Additionally, we define a perspective loss and a Diverse Perspective Learning Network (DPLN). The loss works as a regularizer to reflect all the pooling perspectives from DPLN. Our perspective analysis using Layer-wise Relevance Propagation (LRP) reveals the limitation of a single perspective and ultimately demonstrates the diverse perspective learning of SoM-TP. We also show that SoM-TP outperforms CNN models based on other temporal poolings and state-of-the-art models in TSC on the extensive UCR/UEA repositories.

Introduction

Time Series Classification (TSC) is one of the most valuable tasks in data mining, and Convolutional Neural Networks (CNNs) with global pooling have shown revolutionary success in TSC (Längkvist, Karlsson, and Loutfi 2014; Ismail Fawaz et al. 2019). However, global pooling in TSC poses a significant challenge: it disregards the fundamental characteristic of time series data, the temporal information, by compressing it into a single scalar value (Lecun et al. 1998; Yu et al. 2014). To tackle this issue, temporal pooling methods were introduced, which preserve the temporal nature of the time series at the pooling level (Lee, Lee, and Yu 2021). Temporal pooling employs operations such as 'maximum' (MAX) and 'average' (AVG), categorized by segmentation type: 'no segment,' 'uniform,' and 'dynamic.' These segmentation types correspond respectively to Global-Temporal-Pooling (GTP), Static-Temporal-Pooling (STP), and Dynamic-Temporal-Pooling (DTP) (Lee, Lee, and Yu 2021). We refer to each distinct pooling mechanism as a perspective based on its segmentation type. However, we discovered that the most effective temporal pooling varies depending on the characteristics of the time series data, and there is no universally dominant pooling method for all datasets (Esling and Agon 2012). This underlines the necessity for a learnable pooling approach adaptable to each data sample's characteristics.

In this paper, we propose Selection over Multiple Temporal Poolings (SoM-TP). SoM-TP is a learnable ensemble pooling method that dynamically selects heterogeneous temporal poolings through an attention mechanism (Vaswani et al. 2017).
Aligned with our observation that a more suitable pooling exists for each data sample, a simple ensemble weakens the specialized representation power (Lee et al. 2017). Therefore, SoM-TP applies advanced ensemble learning, motivated by Multiple Choice Learning (MCL), that selects the best among the multiple pooling outputs (Guzmán-Rivera, Batra, and Kohli 2012). MCL is a selection ensemble that generates M predictions from multiple instances, computes the oracle loss for the most accurate prediction, and optimizes only the best classifier. Capitalizing on the advantage that deep networks have access to intermediate features, SoM-TP ensembles diverse pooling features in a single classifier. To achieve non-iterative optimization, SoM-TP dynamically selects the most suitable pooling method for each data sample through attention, which is optimized by the Diverse Perspective Learning Network (DPLN) and a perspective loss. DPLN is a subnetwork that utilizes all pooling outputs, and the perspective loss reflects DPLN's result to create a regularization effect. Finally, the CNN model based on SoM-TP forms fine representations through diverse pooling selection, allowing it to capture both the 'global' and 'local' features of the dataset.

Recognizing the crucial role of pooling in selecting the most representative values from encoded features in CNNs, we choose CNNs as the base model for our study. We apply our new selection-ensemble pooling to Fully Convolutional Networks (FCNs) and Residual Networks (ResNet), which show competitive performance in TSC as CNN-based models (Wang, Yan, and Oates 2017; Ismail Fawaz et al. 2019).

[Figure 1: Perspectives of Temporal Poolings — (a) GTP, (b) STP, (c) DTP with its soft-DTW alignment matrix. Depending on segmentation types, each temporal pooling generates different pooling outputs and has different perspectives.]

SoM-TP outperforms the existing temporal pooling methods and state-of-the-art models of TSC on both univariate and multivariate time series datasets from the massive UCR/UEA repositories. We also provide a detailed analysis of the diverse perspective learning results via Layer-wise Relevance Propagation (LRP) (Bach et al. 2015) and of the dynamic selection process of SoM-TP. To the best of our knowledge, this is the first novel approach to a pooling-level ensemble study in TSC. Therefore, our contributions are as follows:

• We investigate the data dependency arising from the distinct perspectives of existing temporal poolings.
• We propose SoM-TP, a new temporal pooling method that fully utilizes the diverse temporal pooling mechanisms through an MCL-inspired selection ensemble.
• We employ an attention mechanism to enable a non-iterative ensemble in a single classifier.
• We define DPLN and a perspective loss as a regularizer to promote diverse pooling selection.

Background

Different Perspectives between Temporal Poolings

Convolutional Neural Network in TSC: In TSC, CNNs outperform conventional methods, such as nearest neighbor classifiers (Yuan et al. 2019) or COTE (Bagnall et al. 2015; Lines, Taylor, and Bagnall 2016), by capturing local patterns of time series (Ismail Fawaz et al. 2019). The TSC problem is generally formulated as follows: given time series data $T = \{(X_1, y_1), \dots, (X_t, y_t)\}$, where $X \in \mathbb{R}^{d \times t}$ has length $t$ with $d$ variables and $y \in \{1, \dots, C\}$ from $C$ classes.
Then a convolution stack $\Phi$ with output channel dimension $k$ encodes features as hidden representations with temporal position information, $H = \{h_0, \dots, h_t\} \in \mathbb{R}^{k \times t}$ (Lee, Lee, and Yu 2021; Wang, Yan, and Oates 2017):

$$H = \Phi(T). \quad (1)$$

After the convolutional layers, global pooling plays a key role with two primary purposes: 1) reducing the number of parameters for computational efficiency and preventing overfitting, and 2) learning position invariance. For this purpose, pooling combines the high-dimensional feature outputs into low-dimensional representations (Gholamalinezhad and Khosravi 2020). However, global pooling loses temporal information, which has led to the development of temporal pooling methods (Lee, Lee, and Yu 2021). We investigate the different mechanisms of temporal poolings, which we refer to as perspectives.

Global Temporal Pooling: GTP pools only one representation $p_g = [p_1] \in \mathbb{R}^{k \times 1}$ over the entire time range. GTP ignores temporal information by aggregating $H$ into $p_g = h$: the global view.

$$p_g = pool_g(H) \quad (2)$$

GTP effectively captures globally dominant features, such as trends or the highest peak, but has difficulty capturing multiple points dispersed along the time axis. To address this constraint, temporal poolings based on sequential segmentation have been proposed: STP and DTP (Lee, Lee, and Yu 2021). Both use multiple local segments with a given number $n \in \mathbb{Z}^+$: the local view.

Static Temporal Pooling: STP divides the time axis equally into $n$ segments of length $\ell = t/n$, where $\bar{H} = \{h_{0:\ell}, h_{\ell:2\ell}, \dots, h_{(n-1)\ell:n\ell}\}$ and $p_s = [p_1, \dots, p_n] \in \mathbb{R}^{k \times n}$. Note that $h_\ell$ retains temporal information, but the segmentation process does not consider the temporal relationships within the time series: the uniform local view.

$$p_s = pool_s(\bar{H}) \quad (3)$$

STP works well on recursive patterns, such as a stationary process. However, forced uniform segmentation can split important consecutive patterns or create unimportant segments. This inefficiency distributes representation power to non-informative regions.

Dynamic Temporal Pooling: DTP is a learnable pooling layer optimized by soft-DTW (Cuturi and Blondel 2017) for dynamic segmentation that considers temporal relationships. Using the soft-DTW layer, $H$ is segmented into diverse time lengths $\bar{\ell} = [\ell_1, \ell_2, \dots, \ell_n]$, where $t = \sum \bar{\ell}$. Finally, the optimal pooled vectors $p_d = [p_{\ell_1}, \dots, p_{\ell_n}] \in \mathbb{R}^{k \times n}$ are extracted from each segment of $\bar{H}_{\bar{\ell}}$, where $\bar{H}_{\bar{\ell}} = \{h_{\ell_1}, h_{\ell_2}, \dots, h_{\ell_n}\}$: the dynamic local view.

$$p_d = pool_d(\bar{H}_{\bar{\ell}}) \quad (4)$$

DTP has the highest complexity, finding different optimal segmentation lengths so the pooling can fully exploit its segmentation power. However, since DTP is based on temporally aligned similarity of hidden features, with the constraint that a single time point should not be aligned with multiple consecutive segments, the segmentation can easily split informative change points that should be preserved in the time series patterns (Appendix: DTP Algorithm).

Limitation of Single Perspective: Traditional temporal pooling methods focus on only a single perspective when dealing with the hidden features $H$. A global perspective cannot effectively capture multiple classification points, while a local perspective struggles to emphasize a dominant classification point. Consequently, datasets that require the simultaneous capture of dominant and hidden local features from diverse viewpoints inevitably exhibit lower performance under a single perspective.
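For concreteness, a minimal sketch of the MAX-type GTP and STP operators on a hidden feature map $H \in \mathbb{R}^{k \times t}$ is given below. The divisibility assumption ($t$ divisible by $n$) is ours for brevity, and DTP is omitted since it additionally requires the soft-DTW segmentation.

```python
import torch

def gtp(H):
    # Global temporal pooling (Eq. 2): one vector over the full time axis
    return H.max(dim=-1, keepdim=True).values           # (k, 1)

def stp(H, n):
    # Static temporal pooling (Eq. 3): n equal-length segments;
    # assumes t is divisible by n for brevity
    k, t = H.shape
    segments = H.reshape(k, n, t // n)                  # split the time axis
    return segments.max(dim=-1).values                  # (k, n)

H = torch.randn(64, 120)       # hidden features: k=64 channels, t=120 steps
p_g, p_s = gtp(H), stp(H, n=6)
# DTP would instead pool over variable-length segments [l_1, ..., l_n]
# produced by the soft-DTW alignment (Eq. 4), omitted here.
```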
Motivated by these limitations, we propose a novel pooling approach that fully leverages diverse perspectives.

[Figure 2: SoM-TP Architecture. Diverse perspective learning based on a selection ensemble proceeds as follows. The aggregated output of all poolings, $\bar{P}$, is passed to the attention block to calculate the attention score $A$. In the attention block, a weighted pooling output $M$ is formed by multiplying $\bar{P}$ with a learnable weight vector $A_0$. After $M$ passes through the convolutional layer $\phi_0$, the attention score $A$ is drawn out as an encoded weight vector. Using the index of the highest attention score (here, index 3), the pooling for the CLS network is selected. Next, the parameters are updated as follows: 1) DPLN uses the ensembled vector $E$, whereas the CLS network uses only the selected pooling output (here, $p_s$); 2) each network predicts $y_{CLS}$ and $y_{DPL}$ respectively, and $y_{DPL}$ is used in the perspective loss to act as a regularizer; 3) with these two outputs, the model is optimized with diverse perspectives while selecting the proper pooling method for each batch.]

Multiple Choice Learning for Deep Temporal Pooling

The traditional ML-based ensemble method focuses on aggregating multiple outputs. However, the aggregation of a simple ensemble smooths the outputs due to its generalization effect (Lee et al. 2017). To overcome this limitation, MCL has been proposed as an advanced ensemble method that selects the best among multiple outputs using an oracle loss (Guzmán-Rivera, Batra, and Kohli 2012). More formally, MCL generates $M$ solutions $\hat{Y}_i = (\hat{y}_i^1, \dots, \hat{y}_i^M)$ and learns a mapping $g: X \to Y^M$ that minimizes the oracle loss $\min_m \ell(y_i, \hat{y}_i^m)$. The ensemble mapping function $g$ consists of multiple predictors, $g(x) = \{f_1(x), f_2(x), \dots, f_M(x)\}$ (Lee et al. 2016).

The effects of diverse solution sets in MCL can be summarized as addressing situations of 'ambiguous evidence' and 'bias towards the mode.' 'Ambiguous evidence' refers to situations with insufficient information to make a definitive prediction. In such cases, presenting a small set of reasonable possibilities can alleviate the over-confidence problem of deep learning (Nguyen, Yosinski, and Clune 2015), rather than striving for a single accurate answer. The other situation is 'bias towards the mode,' the model's tendency toward mode-seeking behavior to reduce the expected loss across the entire dataset. When only a single prediction exists, the model eventually learns to minimize the average error. In contrast, MCL generates multiple predictions, allowing some classifiers to cover the lower-density regions of the solution space without sacrificing performance on the high-density regions (Lee et al. 2016).

MCL faces computational challenges in deep networks due to the iterative optimization of the oracle loss, with complexity $O(N^2)$. Although sMCL partially alleviates this issue through stochastic gradient descent, the method still requires identifying the best output among multiple possibilities (Lee et al. 2016).
CMCL, another approach to addressing MCL's over-confidence problem, cannot be applied at the feature level because it optimizes at the output level (Lee et al. 2017). In summary, integrating MCL at the pooling level is not feasible due to the structural constraints imposed by the oracle loss design. To overcome this challenge, we establish a model structure that incorporates the concept of MCL into a pooling-level ensemble.

Selection over Multiple Temporal Poolings

SoM-TP Architecture and Selection Ensemble

Diverse Perspective Learning (DPL) is achieved by dynamically selecting among heterogeneous temporal poolings in a single classifier. The overall architecture, as illustrated in Figure 2, consists of four parts: 1) a common feature extractor with CNN $\Phi$; 2) a pooling block with multiple temporal pooling layers; 3) an attention block with an attention weight vector $A_0$ and a convolutional layer $\phi_0$; and 4) fully connected (FC) layers: a classification network (CLS) $f_{CLS}$ and a DPLN $f_{DPLN}$. Through these modules, SoM-TP can cover the high-probability prediction space, aligned with MCL, where multiple classifiers are trained to distinguish specific distributions (Lee et al. 2016).

DPL Attention

SoM-TP ensembles multiple temporal poolings within a single classifier. The advantage of a single classifier is that there is no need to compare prediction outputs, making the ensemble non-iterative and computationally efficient, in contrast to MCL. Through the attention mechanism (Vaswani et al. 2017), we achieve a 'comparison-free' ensemble by dynamically selecting the most suitable temporal pooling for each batch.

DPL attention is an extended attention mechanism that simultaneously considers two factors: the overall dataset and each data sample. In Figure 2, the attention weight vector $A_0$ is learned in the direction of weighting the pooling that attains the minimum loss for every batch; consequently, $A_0$ carries a weight that reflects the entire dataset. Next, a convolutional layer $\phi_0$ is used to reflect the more specific data level. As shown in Algorithm 1, the weighted pooling vector $M$, the element-wise product of $A_0$ and $\bar{P}$, is given as input to $\phi_0$. By encoding $M$ through $\phi_0$, $A$ can reflect more of each batch's characteristics rather than simply following the pooling that dominates the entire dataset. As a result, this optimized pooling selection by attention addresses the 'bias towards the mode' problem by assigning the most suitable pooling to each data batch, aligned with learning multiple experts in MCL.

Diverse Perspective Learning Network and Perspective Loss

To optimize DPL attention, we introduce DPLN and the perspective loss. DPLN is a sub-network that utilizes the weighted aggregation ensemble $E$. The main role of DPLN is regularization through the perspective loss. In contrast, the CLS network, which makes the main prediction, does not directly utilize the attention $A$. Instead, it employs the chosen pooling feature output (denoted $p_s$ in Figure 2), determined by the index with the largest score in $A$.

Perspective Loss

The perspective loss serves as a cost function that maximizes the utilization of the sub-network DPLN through network tying between the two FC networks. Ultimately, it aims to prevent the CLS network from converging to one dominant pooling and to continuously maintain the benefits of ensemble learning.
To achieve its purpose, the perspective loss is designed as the sum of the DPLN cross-entropy loss and the Kullback-Leibler (KL) divergence between $y_{CLS}$ and $y_{DPL}$. The KL divergence works similarly to CMCL's KL term, which regularizes a model against over-confidence via a uniform distribution, whereas the KL term of the perspective loss regularizes based on DPLN (Lee et al. 2017):

$$KL(y_{CLS}, y_{DPL}) = y_{DPL} \cdot \log \frac{y_{DPL}}{y_{CLS}},$$
$$L_{DPLN}(\{W_\Phi\}, \{W^{(dpln)}\}) = -\frac{1}{t} \sum_{n=1}^{t} \log P(y = y_n \mid X_n),$$
$$L_{perspective} = KL(y_{CLS}, y_{DPL}) + L_{DPLN}, \quad (5)$$

where the input time series is $\{(X_1, y_1), \dots, (X_t, y_t)\}$, $\Phi$ is the CNN with learnable parameters $W_\Phi$, $y_{CLS} \in \mathbb{R}^{1 \times c}$ comes from the CLS network $W^{(cls)}$, and $y_{DPL} \in \mathbb{R}^{1 \times c}$ from the DPLN $W^{(dpln)}$. We set the first $f_{DPLN}$ weight matrix to $W_0^{(dpln)} = [w_1^{(p_g)}, \dots, w_{2n}^{(p_s)}, \dots, w_{3n}^{(p_d)}] \in \mathbb{R}^{k \times 3n}$, where $w^{(p)} \in \mathbb{R}^k$ is the weight vector over the latent dimension $k$ of pooling $p_i$, whereas $W_0^{(cls)} = [w_1^{(c)}, \dots, w_n^{(c)}] \in \mathbb{R}^{k \times n}$ is the first $f_{CLS}$ weight matrix (Lee, Lee, and Yu 2021). Note that the results of GTP are repeated $n$ times so that each pooling receives an equal proportion of the attention weight.

Algorithm 1: SoM-TP selecting algorithm
Function Attention_Block(H):
  ▷ select the proper temporal pooling by attention $A \in \mathbb{R}^{1 \times 3n}$
  ▷ attention weight $A_0 \in \mathbb{R}^{1 \times 3n}$ is initialized as zero
  ▷ GTP, STP, DTP: $pool_g$, $pool_s$, $pool_d$
  ▷ convolutional encoding layer: $\phi_0$
  Function Pooling_Block(H):
    ▷ convolutional hidden feature: $H \in \mathbb{R}^{k \times t}$
    ▷ static segmented hidden feature: $\bar{H} = \{h_{0:\ell}, h_{\ell:2\ell}, \dots, h_{(n-1)\ell:n\ell}\}$, $\ell = t/n$
    ▷ dynamic segmented hidden feature: $\bar{H}_{\bar{\ell}} = \{h_{\ell_1}, \dots, h_{\ell_n}\}$, $\bar{\ell} = [\ell_1, \dots, \ell_n]$, $t = \sum \bar{\ell}$
    ▷ pooling outputs: $p_g, p_s, p_d = pool_g(H), pool_s(\bar{H}), pool_d(\bar{H}_{\bar{\ell}})$
    return $p_g, p_s, p_d$
  $\bar{P} = [p_g, p_s, p_d]$
  $M = A_0 \odot \bar{P}$
  $A = \phi_0(M)$, with scores $x \in A$
  $idx = \arg\max_i(y_i)$ with $y_i = \exp(x_i)/\sum_i \exp(x_i)$, or $\arg\max_j(y_j)$ with $y_j = \sum_n x(j \mid n)/n$
  return $p = \bar{P}(idx)$, $E = A \odot \bar{P}$

Therefore, the final loss function of SoM-TP is designed as

$$L_{CLS}(\{W_\Phi\}, \{W^{(cls)}\}) = -\frac{1}{t} \sum_{n=1}^{t} \log P(y = y_n \mid X_n),$$
$$L_{cost}(\{W_\Phi\}, \{W\}) = L_{CLS} + \lambda \cdot L_{perspective}, \quad (6)$$

where $\{W^{(cls)}, W^{(dpln)}, A_0, \phi_0\} \in W$ are learnable parameters. Prioritizing classification accuracy, the loss $L_{CLS}$ is computed and $L_{perspective}$ is added with decay $\lambda$. As a result, SoM-TP can address the 'ambiguous evidence' problem through DPLN and the perspective loss. In a pooling ensemble, 'ambiguous evidence' corresponds to the scenario where no single pooling is dominant. Even though SoM-TP selects only one pooling, $y_{DPL}$ in the perspective loss enables the model to consider the importance of the other poolings.

Pooling-type MAX; columns per repository are ACC, F1 macro, ROC AUC, PR AUC, Rank:

CNN | POOL | UCR (univariate) | UEA (multivariate)
FCN | GTP | 0.6992, 0.6666, 0.8662, 0.7406, 3.6 | 0.6558, 0.6213, 0.7841, 0.6854, 3.9
FCN | STP | 0.7462, 0.7133, 0.8924, 0.7889, 2.7 | 0.6801, 0.6603, 0.8001, 0.6984, 3.0
FCN | DTP (euc) | 0.7406, 0.7123, 0.8897, 0.7782, 3.1 | 0.6795, 0.6559, 0.8129, 0.7163, 3.4
FCN | DTP (cos) | 0.7335, 0.7062, 0.8879, 0.7768, 2.9 | 0.6702, 0.6314, 0.8061, 0.7022, 3.1
FCN | SoM-TP | 0.7556, 0.7241, 0.9026, 0.8000, 2.6 | 0.6920, 0.6621, 0.8105, 0.7099, 2.4
ResNet | GTP | 0.7227, 0.6952, 0.8837, 0.7654, 3.5 | 0.6423, 0.6083, 0.7798, 0.6769, 3.5
ResNet | STP | 0.7420, 0.7126, 0.8880, 0.7864, 2.9 | 0.6717, 0.6383, 0.7962, 0.6934, 2.8
ResNet | DTP (euc) | 0.7456, 0.7197, 0.8939, 0.7846, 3.0 | 0.6567, 0.6271, 0.7981, 0.6968, 3.1
ResNet | DTP (cos) | 0.7452, 0.7198, 0.8945, 0.7829, 3.0 | 0.6534, 0.6377, 0.7895, 0.6832, 3.0
ResNet | SoM-TP | 0.7773, 0.7489, 0.9182, 0.8261, 2.4 | 0.6769, 0.6387, 0.8016, 0.7033, 2.5

*This table is for pooling type MAX. Please refer to Table 7 in the Appendix for pooling type AVG.
Table 1: SoM-TP Comparison with Single-Perspective Temporal Poolings. The table presents the effectiveness of the selection ensemble of SoM-TP compared to traditional temporal poolings. The best performances where SoM-TP outperforms others are bolded, and the best performances of the other temporal poolings are underlined.

Optimization

For the attention weight $A_0$, SoM-TP performs an additional optimization: a dot-product similarity term that regularizes $A_0$, defined as

$$L_{attn} = -y_{CLS} \cdot y_{DPL}. \quad (7)$$

Due to the KL-divergence term in the perspective loss, the CLS network and DPLN can become overly similar during optimization. As additional regularization against output over-similarity, $L_{attn}$ acts in the opposite direction to the perspective loss. Note that the dot-product similarity considers both the magnitude and the direction of the two output vectors. Finally, the overall optimization process is

$$A_0 \leftarrow A_0 - \eta \cdot \partial L_{attn} / \partial A_0,$$
$$W_\Phi \leftarrow W_\Phi - \eta \cdot \partial L_{cost} / \partial W_\Phi,$$
$$W \leftarrow W - \eta \cdot \partial L_{cost} / \partial W. \quad (8)$$

As a result, even with a non-iterative optimization process, SoM-TP learns various perspectives through DPL attention, DPLN, and the perspective loss. Consequently, $\Phi$ reflects $f_{CLS}$ and $f_{DPLN}$ relative to each other while minimizing the similarity between the two network outputs.

Experiments

Experimental Settings

For extensive evaluation, 112 univariate and 22 multivariate time series datasets from the UCR/UEA repositories are used (Bagnall et al. 2018; Dau et al. 2019); they are collected from a wide range of domains and publicly available. To ensure the validity of our experiments, we exclude a few datasets from the UCR/UEA repositories due to irregular data lengths; while zero padding could resolve this, it might introduce bias in some time series models. All temporal pooling methods share the same CNN architecture. FCN and ResNet are specifically designed as feature extractors (Wang, Yan, and Oates 2017), and all temporal poolings are constructed with the same settings: BatchNorm normalization (Ioffe and Szegedy 2015), ReLU activation, and the Adam optimizer (Kingma and Ba 2015). The validation set is made from 20% of the training set for more accurate evaluation. In the case of imbalanced classes, a weighted loss is employed. The prototype number $n$ is searched greedily, taking into account the unique class count of each dataset. Specifically, we observe that selecting 4-10 segments based on the class count of each dataset enhances performance; consequently, we use an equal number of segments per dataset for all segment-based poolings (Appendix: Table 5).

Baselines

We conduct two experiments to evaluate the performance of SoM-TP. First, we compare it with the traditional temporal poolings GTP, STP, and DTP to demonstrate the effectiveness of the selection ensemble at the pooling level (Lee, Lee, and Yu 2021). Second, we compare SoM-TP with other state-of-the-art models that utilize advanced methods, including scale-invariant methods (ROCKET (Dempster, Petitjean, and Webb 2020), InceptionTime (Ismail Fawaz et al. 2020), OS-CNN (Tang et al. 2021), and DSN (Xiao et al. 2022)), sequential models (MLSTM-FCN (Karim et al. 2019) and TCN (Bai, Kolter, and Koltun 2018)), and Transformer-based models (Vanilla-Transformer (Vaswani et al. 2017), TST (Zerveas et al. 2021), and ConvTran (Foumani et al. 2024)). Models leveraging temporal information use attention or RNNs to emphasize long-term dependencies.
On the other hand, scaleinvariant learning models employ a CNN-based architecture with various kernel sizes to find the optimum through global average pooling. Experimental Evaluation Performance Analysis As shown in Table 1, SoM-TP shows superior performance for overall TSC datasets when compared to conventional temporal poolings. We calculate the average performance of the entire repository. We consider not only accuracy but also the F1 macro score, ROCAUC, and PR-AUC to consider the imbalanced class. Quantitatively, SoM-TP outperforms the existing temporal pooling methods both in univariate and multivariate time series datasets. Through these results, we can confirm that the dynamic selection ensemble of SoM-TP boosts the performance of the CNN model. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8952 0 20 40 60 80 100 Batch GTP STP DTP ArrowHead (a) GTP most selected 0 200 400 600 800 1000 Batch GTP STP DTP Chinatown (b) STP most selected 0 2 4 6 8 10 12 Batch GTP STP DTP ACSF1 (c) DTP most selected Figure 3: Dynamic Pooling Selection in SoM-TP. This figure represents the graph of dynamic selection in the FCN SoM-TP MAX on the UCR repository: ArrowHead, Chinatown, and ACSF1. Methods UCR UEA Baseline SoM-TP wins Tie Rank Baseline SoM-TP wins Tie Rank Vanilla-Transformer (Vaswani et al. 2017) 10 99 4 5.2 3 18 1 4.4 TCN (Bai, Kolter, and Koltun 2018) 28 80 5 4.0 4 17 1 4.3 TST (Zerveas et al. 2021) 32 78 3 3.8 6 15 1 3.6 ConvTran (Foumani et al. 2024) 35 71 7 3.1 8 13 1 3.1 MLSTM-FCN (Karim et al. 2019) 46 60 6 2.6 6 12 4 3.2 SoM-TP - MAX 2.5 2.5 Methods UCR UEA Acc F1-score ROC AUC PR AUC Acc F1-score ROC AUC PR AUC ROCKET (Dempster, Petitjean, and Webb 2020) 0.7718 0.7478 0.8899 0.7841 0.6785 0.6592 0.7926 0.6940 InceptionTime (Ismail Fawaz et al. 2020) 0.7713 0.7455 0.9056 0.8164 0.6612 0.6360 0.7984 0.7106 OS-CNN (Tang et al. 2021) 0.7663 0.7324 0.9005 0.8139 0.6808 0.6547 0.8118 0.7137 DSN (Xiao et al. 2022) 0.7488 0.7230 0.8838 0.7968 0.5648 0.5433 0.7575 0.6265 SoM-TP - MAX 0.7773 0.7489 0.9182 0.8261 0.6920 0.6621 0.8105 0.7099 Table 2: SoM-TP Comparison with Advanced TSC Methods. This table compares the performance of SoM-TP with advanced TSC models that leverage temporal information and those that exploit scale-invariant properties, respectively. The best performances, where SoM-TP beat others, are bolded, and the best performances among other models are underlined. Type SoM-TP Modules Rank Acc A0 ϕ0 DPLN Lattn 1 2 3 only ϕ0 ✓ 8 15 17 0.6966 only A0 ✓ 4 9 24 0.6963 DPL Attention ✓ ✓ 6 7 23 0.6974 DPLN w/o A0 ✓ ✓ 6 14 29 0.7047 DPLN ✓ ✓ ✓ 23 27 26 0.7399 SoM-TP ✓ ✓ ✓ ✓ 42 20 6 0.7503 Table 3: SoM-TP Module Ablation Study. Additionally, Table 2 compares SoM-TP with other stateof-the-art TSC models from two different approaches. In Table 2-1, ResNet SoM-TP MAX significantly outperforms other sequential models in terms of comparing models leveraging temporal information. As SoM-TP clearly outperforms all other models in accuracy metric, we demonstrate the robustness of performance by providing the number of datasets where SoM-TP achieves higher accuracy. Considering the lowest average rank of SoM-TP, we can conclude that dynamic pooling selection leverages the model to keep important temporal information in a more optimal way than other methods in the massive UCR/UEA repository. Next, Table 2-2 highlights SoM-TP’s comparable performance alongside scale-invariant methods, even with SoMTP’s significant computational efficiency. 
Since SoM-TP and scale-invariant methods have different learning approaches, it is suitable to consider various metrics. Regarding the time complexity of models, scale-invariant methods consider various receptive fields of a CNN, which results in longer training times. In contrast, SoM-TP achieves comparable performance with only one-third of the time. Finally, in Table 3, we present the results of an ablation study on the modules of SoM-TP discussed in Section 3. When each module, including A0 and ϕ0 constituting DPL Attention, DPLN, and Lattn, is removed, it decreases SoMTP’s performance. In terms of dataset robustness, we can observe through the rank results that all modules contribute to promoting SoM-TP’s Diverse Perspective Learning. Perspective Analysis with LRP In Figure 3, we can observe that SoM-TP dynamically selects pooling during inference. ArrowHead, Chaintown, and ACSF1, in order, are datasets where GTP, STP, and DTP pooling selections are most frequently chosen. The DPL attention is trained to select the optimal pooling for each batch during the training process, and during inference, it continues to choose the most suitable pooling without DPLN (Appendix. Figure 7). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8953 1 GTP (a) acc: 0.9219 3 STP (b) acc: 0.7893 2 DTP (c) acc: 0.8988 1 3 2SoM-TP (d) acc: 0.9396 1 GTP (e) acc: 0.3143 2 3 STP (f) acc: 0.6769 3 DTP (g) acc: 0.5429 1 2 3 SoM-TP (h) acc: 0.6901 1 GTP (i) acc: 0.3914 1 STP (j) acc: 0.8139 1 2 DTP (k) acc: 0.7780 1 2 SoM-TP (l) acc: 0.8810 Figure 4: Comparison of LRP Input Attribution on Single vs Diverse Perspective Learning. The figure shows LRP attribution results for FaceAll, FiftyWords, and MoteStrain datasets in the UCR repository, ordered in respective rows. Pooling choice significantly affects accuracy and attributions, reflecting different perspectives. Redder areas of time series indicate higher attribution, aligning with LRP’s conservation rule of summing to 1. Blue circles denote well-captured regions, while red circles suggest dispersed focus or inadequate capture. Given the absence of a ground truth concept for input attributes in TSC, we infer these implicitly from the presented accuracy. Model Complexity Pooling Optimization GTP O(1) O(N) STP/DTP O(L) SoM-TP O(LP) + O(Lmul) = O(L) MCL O(N 2) Table 4: Complexity Study. For qualitative analysis, we employ Layer-wise Relevance Propagation (LRP) to understand how different temporal pooling perspectives capture time series patterns. LRP attributes relevance to input features, signifying their contribution to the output. Note that the conservation rule maintains relevance sum in backward propagation, ensuring that the sum of attribution is 1. We use the LRP z+ rule for the convolutional stack Φ, and the ϵ rule for the FC layers. In Figure 4, GTP focuses on globally crucial parts (a, e, i), while STP and DTP use local views within segmented time series for a more balanced representation (b, c, f, g, j, k). However, GTP’s limitation lies in concentrating only on specific parts, neglecting other local aspects (e, i). Conversely, STP and DTP risk diluting the primary representation by reflecting all local segments (b, j) or cutting significant series due to forced segmentation (c, k). DTP often segments at change points, losing essential information (c, k). SoM-TP addresses these issues by combining global and local views of each pooling method via diverse perspective learning. 
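To make the contrast between these pooling perspectives concrete, the sketch below implements max-style GTP, STP, and DTP-like pooling over a hidden feature map H of shape (B, k, t), together with a toy version of SoM-TP's attention-based selection. This is a minimal PyTorch sketch under our own simplifications — fixed segment boundaries stand in for DTP's learnable soft-DTW segmentation, and a single linear layer stands in for the convolutional encoding layer ϕ0 — not the authors' released implementation.

```python
import torch
import torch.nn as nn

def gtp(H):
    # Global temporal pooling: one max over the whole series per channel.
    return H.max(dim=-1).values                                   # (B, k)

def stp(H, n):
    # Static temporal pooling: max over n equal-length segments (assumes n divides t).
    B, k, t = H.shape
    return H.reshape(B, k, n, t // n).max(dim=-1).values.reshape(B, k * n)

def dtp(H, bounds):
    # Dynamic temporal pooling: max over given (start, end) boundaries; in the
    # paper these boundaries come from a learnable soft-DTW alignment.
    return torch.cat([H[:, :, s:e].max(dim=-1).values for s, e in bounds], dim=-1)

class SoMTPSelect(nn.Module):
    """Toy stand-in for DPL attention: score the three pooled views, hard-select
    one per sample (the CLS path), and keep the attention-weighted views (the
    DPLN path), roughly mirroring Algorithm 1."""
    def __init__(self, k, n):
        super().__init__()
        self.n = n
        self.phi = nn.Linear(k * n, 3)         # simplified surrogate for phi_0

    def forward(self, p_g, p_s, p_d):
        p_g = p_g.repeat(1, self.n)            # repeat GTP n times, as in the paper
        P = torch.stack([p_g, p_s, p_d], 1)    # (B, 3, k*n) stacked pooled views
        A = self.phi(P.mean(dim=1)).softmax(-1)       # (B, 3) attention weights
        idx = A.argmax(dim=-1)                        # selected pooling per sample
        p = P[torch.arange(P.size(0)), idx]           # hard (non-differentiable) selection
        return p, A.unsqueeze(-1) * P, idx            # p for CLS, A * P for DPLN

# usage on a random batch: 16 channels, length 32, n = 4 segments
H = torch.randn(4, 16, 32)
sel = SoMTPSelect(k=16, n=4)
p, E, idx = sel(gtp(H), stp(H, 4), dtp(H, [(0, 8), (8, 16), (16, 24), (24, 32)]))
```

Because the hard selection itself carries no gradient, it is the DPLN path over the weighted views E that lets the non-selected poolings keep contributing to learning, which is exactly the role the perspective loss plays above.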
In Figure 4-(d), SoM-TP captures GTP’s points (d-1) and enhances multiple representations by effectively capturing local patterns (d-2, d-3). In (h), SoM-TP identifies the common important points (h-2, h-3) and complements GTP’s missed local points (h-1). Finally, in (l), SoM-TP captures GTP’s missed local points (l-1) and fully utilizes important time series (l-2) cut by STP and DTP (j-1, k-1, k-2). Complexity of SoM-TP We compare the complexity of independent temporal poolings: pooling and optimization complexity. We exclude the maximum or average operation, which is common for all pooling complexity. As shown in Table 4, for the pooling complexity, GTP has O(1) while STP and DTP have O(L) from segmenting. SoM-TP has increased complexity as O(LP +Lmul) = O(L) for computation of the attention score, where O(LP) is for a sum of the three temporal poolings’ complexity, making it O(L), and O(Lmul) for the complexity of multiplication between ¯P and A0, and between ¯P and A. As for the optimization complexity, SoM-TP and other temporal poolings have all O(N), while MCL has O(N 2) to generate and compare multiple outputs. Therefore, compared with independent pooling, SoM-TP has little degradation of complexity, while optimization is effectively achieved even with an ensemble. Conclusion This paper proposes SoM-TP, a novel temporal pooling method employing a selection ensemble to address data dependency in temporal pooling by learning diverse perspectives. Utilizing a selection ensemble inspired by MCL, SoMTP adapts to each data batch’s characteristics. Optimal pooling selection with DPL attention achieves a comparisonfree ensemble. We define DPLN and perspective loss for effective ensemble optimization. In quantitative evaluation, SoM-TP surpasses other pooling methods and state-of-theart TSC models in UCR/UEA experiments. In qualitative analysis, LRP results highlight SoM-TP’s ability to complement existing temporal pooling limitations. We re-examine the conventional role of temporal poolings, identify their limitations, and propose an efficient data-driven temporal pooling ensemble as a first attempt. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8954 Acknowledgements This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00984, Development of Artificial Intelligence Technology for Personalized Plug-and-Play Explanation and Verification of Explanation; No. 2022-0-00184, Development and Study of AI Technologies to Inexpensively Conform to Evolving Policy on Ethics; No. 2021-0-02068, Artificial Intelligence Innovation Hub; No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)). References Alsallakh, B.; Yan, D.; Kokhlikyan, N.; Miglani, V.; ReblitzRichardson, O.; and Bhattacharya, P. 2023. Mind the Pool: Convolutional Neural Networks Can Overfit Input Size. In Proceedings of the 11th International Conference on Learning Representations (ICLR’23). Bach, S.; Binder, A.; Montavon, G.; Klauschen, F.; M¨uller, K.-R.; and Samek, W. 2015. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLOS ONE, 10(7): 1–46. Bagnall, A.; Dau, H. A.; Lines, J.; Flynn, M.; Large, J.; Bostrom, A.; Southam, P.; and Keogh, E. 2018. The UEA multivariate time series classification archive, 2018. arXiv:1811.00075. Bagnall, A.; Lines, J.; Hills, J.; and Bostrom, A. 2015. 
Time-Series Classification with COTE: The Collective of Transformation-Based Ensembles. IEEE Transactions on Knowledge and Data Engineering, 27(9): 2522–2535. Bai, S.; Kolter, J. Z.; and Koltun, V. 2018. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling. arXiv:1803.01271. Bay, S. D.; Kibler, D.; Pazzani, M. J.; and Smyth, P. 2000. The UCI KDD archive of large data sets for data mining research and experimentation. ACM SIGKDD Explorations Newsletter, 2(2): 81–85. Cui, Z.; Chen, W.; and Chen, Y. 2016. Multi-scale convolutional neural networks for time series classification. arXiv preprint arXiv:1603.06995. Cuturi, M.; and Blondel, M. 2017. Soft-DTW: A Differentiable Loss Function for Time-Series. In Proceedings of the 34th International Conference on Machine Learning (ICML’17). Dau, H. A.; Bagnall, A.; Kamgar, K.; Yeh, C.-C. M.; Zhu, Y.; Gharghabi, S.; Ratanamahatana, C. A.; and Keogh, E. 2019. The UCR time series archive. IEEE/CAA Journal of Automatica Sinica, 6(6): 1293–1305. Dempster, A.; Petitjean, F.; and Webb, G. I. 2020. ROCKET: Exceptionally Fast and Accurate Time Series Classification Using Random Convolutional Kernels. Data Mining and Knowledge Discovery, 34(5): 1454–1495. Esling, P.; and Agon, C. 2012. Time-Series Data Mining. ACM Comput. Surv., 45(1). Foumani, N. M.; Tan, C. W.; Webb, G. I.; and Salehi, M. 2024. Improving position encoding of transformers for multivariate time series classification. Data Mining and Knowledge Discovery, 38(1): 22–48. Gao, Z.; Wang, Q.; Zhang, B.; Hu, Q.; and Li, P. 2021. Temporal-attentive covariance pooling networks for video recognition. In Proceedings of the 35th Advances in Neural Information Processing Systems (NIPS’21). Gholamalinezhad, H.; and Khosravi, H. 2020. Pooling Methods in Deep Neural Networks, a Review. arXiv:2009.07485. Girdhar, R.; and Ramanan, D. 2017. Attentional pooling for action recognition. In Proceedings of the 31th Advances in Neural Information Processing Systems (NIPS’17). Guzm´an-rivera, A.; Batra, D.; and Kohli, P. 2012. In Proceedings of the 26th Advances in Neural Information Processing Systems (NIPS’12). Hinton, G. E.; Sabour, S.; and Frosst, N. 2018. Matrix capsules with EM routing. In Proceedings of the 6th International Conference on Learning Representations (ICLR’18). Hou, Q.; Zhang, L.; Cheng, M.-M.; and Feng, J. 2020. Strip pooling: Rethinking spatial pooling for scene parsing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’20). Ioffe, S.; and Szegedy, C. 2015. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML’15). Ismail Fawaz, H.; Forestier, G.; Weber, J.; Idoumghar, L.; and Muller, P.-A. 2019. Deep learning for time series classification: a review. Data Mining and Knowledge Discovery, 33(4): 917–963. Ismail Fawaz, H.; Lucas, B.; Forestier, G.; Pelletier, C.; Schmidt, D. F.; Weber, J.; Webb, G. I.; Idoumghar, L.; Muller, P.-A.; and Petitjean, F. 2020. InceptionTime: Finding AlexNet for Time Series Classification. Data Mining and Knowledge Discovery, 34(6): 1936–1962. Kachuee, M.; Fazeli, S.; and Sarrafzadeh, M. 2018. ECG Heartbeat Classification: A Deep Transferable Representation. CoRR, abs/1805.00794. Karim, F.; Majumdar, S.; Darabi, H.; and Harford, S. 2019. Multivariate LSTM-FCNs for time series classification. Neural Networks, 116: 237–245. Kingma, D. P.; and Ba, J. 2015. 
Adam: A Method for Stochastic Optimization. In Proceedings of 3rd International Conference on Learning Representations (ICLR’15). Lecun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-based learning applied to document recognition. In Proceedings of the IEEE. Lee, D.; Lee, S.; and Yu, H. 2021. Learnable Dynamic Temporal Pooling for Time Series Classification. In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI’21). Lee, K.; Hwang, C.; Park, K.; and Shin, J. 2017. Confident Multiple Choice Learning. In Proceedings of the 34th International Conference on Machine Learning (ICML’17). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8955 Lee, S.; Purushwalkam, S.; Cogswell, M.; Ranjan, V.; Crandall, D.; and Batra, D. 2016. Stochastic Multiple Choice Learning for Training Diverse Deep Ensembles. In Proceedings of the 30th Advances in Neural Information Processing Systems (NIPS’16). Lines, J.; Taylor, S.; and Bagnall, A. 2016. HIVE-COTE: The Hierarchical Vote Collective of Transformation-Based Ensembles for Time Series Classification. In Proceedings of the 16th IEEE International Conference on Data Mining (ICDM’16). Liu, H.; Simonyan, K.; and Yang, Y. 2019. DARTS: Differentiable Architecture Search. In Proceedings of the 7th International Conference on Learning Representations (ICLR’19). L¨angkvist, M.; Karlsson, L.; and Loutfi, A. 2014. A review of unsupervised feature learning and deep learning for timeseries modeling. Pattern Recognition Letters, 42: 11–24. Nguyen, A.; Yosinski, J.; and Clune, J. 2015. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’15). Pigou, L.; Van Den Oord, A.; Dieleman, S.; Van Herreweghe, M.; and Dambre, J. 2018. Beyond temporal pooling: Recurrence and temporal convolutions for gesture recognition in video. International Journal of Computer Vision, 126: 430–439. Rakthanmanon, T.; Campana, B.; Mueen, A.; Batista, G.; Westover, B.; Zhu, Q.; Zakaria, J.; and Keogh, E. 2012. Searching and mining trillions of time series subsequences under dynamic time warping. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD’12). Rippel, O.; Snoek, J.; and Adams, R. P. 2015. Spectral representations for convolutional neural networks. In Proceedings of the 28th Advances in Neural Information Processing Systems (NIPS’15). Sabour, S.; Frosst, N.; and Hinton, G. E. 2017. Dynamic routing between capsules. In Proceedings of the 31st Advances in Neural Information Processing Systems (NIPS’17). Sch¨afer, P. 2015. The BOSS is concerned with time series classification in the presence of noise. Data Mining and Knowledge Discovery, 29: 1505–1530. Tan, C. W.; Dempster, A.; Bergmeir, C.; and Webb, G. I. 2022. MultiRocket: multiple pooling operators and transformations for fast and effective time series classification. Data Mining and Knowledge Discovery, 36(5): 1623–1646. Tang, W.; Long, G.; Liu, L.; Zhou, T.; Blumenstein, M.; and Jiang, J. 2021. Omni-Scale CNNs: a simple and effective kernel size configuration for time series classification. In Proceedings of the 9th International Conference on Learning Representations (ICLR’21). Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L. u.; and Polosukhin, I. 2017. Attention is All you Need. 
In Proceedings of the 31st Advances in Neural Information Processing Systems (NIPS’17). Wang, J.; Shao, Z.; Huang, X.; Lu, T.; Zhang, R.; and Lv, X. 2021. Spatial–temporal pooling for action recognition in videos. Neurocomputing, 451: 265–278. Wang, Z.; Yan, W.; and Oates, T. 2017. Time series classification from scratch with deep neural networks: A strong baseline. In Proceedings of the International Joint Conference on Neural Networks (IJCNN’17). Xiao, Q.; Wu, B.; Zhang, Y.; Liu, S.; Pechenizkiy, M.; Mocanu, E.; and Mocanu, D. C. 2022. Dynamic Sparse Network for Time Series Classification: Learning What to “See”. In Proceedings of the 36th Advances in Neural Information Processing Systems (NIPS’22). Yu, D.; Wang, H.; Chen, P.; and Wei, Z. 2014. Mixed Pooling for Convolutional Neural Networks. Rough Sets and Knowledge Technology, 364–375. Yuan, J.; Douzal-Chouakria, A.; Varasteh Yazdi, S.; and Wang, Z. 2019. A large margin time series nearest neighbour classification under locally weighted time warps. Knowledge and Information Systems, 59(1): 117–135. Zerveas, G.; Jayaraman, S.; Patel, D.; Bhamidipaty, A.; and Eickhoff, C. 2021. A Transformer-Based Framework for Multivariate Time Series Representation Learning. In Proceedings of the 27th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD’21). Zhang, X.; Gao, Y.; Lin, J.; and Lu, C.-T. 2020. Tapnet: Multivariate time series classification with attentional prototypical network. In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI’20). Zhao, P.; Luo, C.; Qiao, B.; Wang, L.; Rajmohan, S.; Lin, Q.; and Zhang, D. 2022. T-SMOTE: Temporal-oriented Synthetic Minority Oversampling Technique for Imbalanced Time Series Classification. In Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI’22). Zoph, B.; and Le, Q. 2017. Neural Architecture Search with Reinforcement Learning. In Proceedings of the 5th International Conference on Learning Representations (ICLR’17). The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8956
LAFA: Multimodal Knowledge Graph Completion with Link Aware Fusion and Aggregation Bin Shang1,2, Yinliang Zhao1,2*, Jun Liu1,2, Di Wang3* 1Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, School of Computer Science and Technology, Xi’an Jiaotong University, China 2National Engineering Lab for Big Data Analytics, Xi’an Jiaotong University, China 3School of Computer Science and Technology, Xidian University, China [email protected], {zhaoy, liukeen}@xjtu.edu.cn, [email protected] Abstract Recently, an enormous amount of research has emerged on multimodal knowledge graph completion (MKGC), which seeks to extract knowledge from multimodal data and predict the most plausible missing facts to complete a given multimodal knowledge graph (MKG). However, existing MKGC approaches largely ignore that visual information may introduce noise and lead to uncertainty when adding them to the traditional KG embeddings due to the contribution of each associated image to entity is different in diverse link scenarios. Moreover, treating each triple independently when learning entity embeddings leads to local structural and the whole graph information missing. To address these challenges, we propose a novel link aware fusion and aggregation based multimodal knowledge graph completion model named LAFA, which is composed of link aware fusion module and link aware aggregation module. The link aware fusion module alleviates noise of irrelevant visual information by calculating the importance between an entity and its associated images in different link scenarios, and fuses the visual and structural embeddings according to the importance through our proposed modality embedding fusion mechanism. The link aware aggregation module assigns neighbor structural information to a given central entity by calculating the importance between the entity and its neighbors, and aggregating the fused embeddings through linear combination according to the importance. Extensive experiments on standard datasets validate that LAFA can obtain state-of-the-art performance. Introduction Knowledge graphs (KGs) represent real-world data as fact triples (head entity, relation, tail entity), which have shown great research value and application prospect. KGs are broadly used in many downstream tasks, such as multimedia reasoning (Li, Wang, and Zhu 2020), question answering (Huang et al. 2019), objective detection (Yang et al. 2023), and recommendation system (Guo et al. 2020; Wu et al. 2022). Since existing KGs typically contain structural and visual data, multimodal knowledge graphs (MKGs) have recently attracted great attention in the fields of natural language processing and multimedia (Chen et al. 2022). Generally, multiple images associate an entity to describe *Corresponding authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. the behaviors and appearances of it. Even though the scale of many public MKGs is noticeably large, they are still confronted with incompleteness because the insufficient accumulation of multimodal corpus and the emerging entities with complicated relations. In this case, many researches on multimodal knowledge graph completion (MKGC) have been generalized to find out missing triples automatically (also called link prediction) by extracting knowledge from multimodal data (Wang et al. 2021; Chen et al. 2022; Xu et al. 2022; Shang et al. 2023b). 
Specifically, images (visual data) can be considered as supplementary information to enhance entity embddings for the MKGC task. Multimodal knowledge graph completion (MKGC) approaches complete MKGs by projecting entities and relations to latent space as well as learning the dense and low-dimensional vectors (embeddings) of them according to visual and structural information, and predict missing triples by scoring function based on the learned embeddings. Specifically, for MKGC task, one entity generally has multiple associated images, and they can improve the representation quality of the entity embedding. Therefore, it is necessary to fuse the structural properties of KG entities and various images with matched semantics in integrated embeddings. On this account, IKRL (Xie et al. 2017) firstly attempt to fuse visual information to the existing knowledge graph embedding (KGE) models to predict missing triples in MKGs. Mousselly et al. (Mousselly-Sergieh et al. 2018) propose to use Imagined, DeViSE, and simple concatenation to fuse multimodal information. TransAE (Wang et al. 2019) presents an specific auto-encoder module. Although existing studies for MKGC have shown promising improvements, these approaches are still afflicted by several noticeable limitations as follows: (1) Modality contradiction. Many existing MKGC approaches substantially ignore that visual information may lead to uncertainty and introduce noise when adding them to the traditional KG embeddings, which could bring on modality contradiction. Particularly, an entity usually has different attributes in various triples (link information), and the contribution of each associated image to this entity is disparate in diverse link scenarios. For instance in Figure 1, entity Taylor Swift has many associated images, but the contribution of them is different in the two links. Although recent works such as RSME (Wang et al. 2021) and MKGformer (Chen et al. 2022) take into acThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8957 Grammy Award Joe Alwyn Award Boyfriend 0.7 0.2 0.1 0.5 0.4 0.1 0.2 0.1 0.7 0.6 0.3 0.1 Images of head entity Images of tail entity Images of head entity Images of tail entity Taylor Swift Link 1 Link 2 Figure 1: Illustration of the contribution of each associated image to entity Taylor Swift in different links. The numbers below the image represent its importance to the entity. count the noise of images, they target independent entities rather than link information when fusing visual information. (2) Structural information missing. Most existing MKGC models attend to treat each triple independently when learning entity and relation embeddings, which result in structural information missing. Since an entity in a KG is often linked with multiple neighbor entities, which can provide rich structural information for the embedding of this entity. Therefore, treating each triple independently when learning entity embeddings leads to missing information about its neighborhood and whole structure of KGs. Motivated by the above analysis, in this paper, we propose a novel link aware fusion and aggregation based multimodal knowledge graph completion model named LAFA. LAFA is composed of two modules link aware fusion and link aware aggregation to generate entity embeddings, and a decoder for link prediction. 
In order to alleviate the problem of modality contradiction, the link aware fusion module calculates the importance between an entity and its associated images based on link information, and then fuses the visual and structural embeddings by our proposed modality embedding fusion mechanism, which performs linear combination on visual embeddings according to their importance and fuses them with structural embeddings. In order to alleviate the problem of structural information missing, the link aware aggregation module calculates the importance between a given central entity and its neighbors, then aggregates the neighbor embeddings with visual information according to the importance through linear combination to assign structural information to the central entity. Our main contributions are summarized as follows: • We propose a link aware fusion module to alleviate noise of irrelevant visual information by calculating the importance between an entity and its associated images in different link scenarios, and fusing the visual and structural embeddings according to the importance by our proposed modality embedding fusion mechanism. To the best of our knowledge, this work is the first to consider the role of images to entity based on link information. • We propose a link aware aggregation module to assign neighbor structural information to a given central entity by calculating the importance between the entity and its neighbors, and aggregating the embeddings with visual information by linear combination according to the importance. To the best of our knowledge, this work is the first to aggregate neighbor structural information of entities in MKGC task. • We conduct comprehensive experiments and extensive analysis on real-world benchmark multimodal datasets. Results and analysis illustrate that our proposed LAFA can effectively model the multimodal representations and substantially outperform the current state-of-the-art (SOTA) models under appropriate circumstances. Related Work Our work addresses multimodal knowledge graph completion task, which is relevant to multimodal data and multimodal NLP community. In this section, we briefly introduce the existing unimodal knowledge graph completion (UKGC) methods and multimodal knowledge graph completion (MKGC) approaches. Unimodal Knowledge Graph Completion TransE (Bordes et al. 2013) is the first UKGC model, which assumes the triples to satisfy the assumption that h + r = t, where h, t and r are the embeddings of the head entity, tail entity and relation, respectively. Based on TransE, there are a range of improved models such as TransH (Wang et al. 2014), TransR (Lin et al. 2015), and TransD (Ji et al. 2015). RotatE (Sun et al. 2019) encodes entities and relations into the complex space, allowing them to have more flexible representations. RESCAL (Nickel, Tresp, and Kriegel 2011) encodes entities into vectors and relations into matrices, and then designs a bilinear function to score the triples. Based on RESCAL (Nickel, Tresp, and Kriegel 2011), there are a range of improved models such as NTN (Socher et al. 2013), DistMult (Yang et al. 2015), ComplEx (Trouillon et al. 2016), and TuckER (Balaˇzevi´c, Allen, and Hospedales 2019). ConvE (Dettmers et al. 2018) first uses convolutional neural networks (CNNs) to explore the interaction between entity embeddings and relation embeddings. ConvKB (Dai Quoc Nguyen, Nguyen, and Phung 2018) simplifies ConvE. CompGCN (Vashishth et al. 2020) introduces a graph convolutionl network (GCN) based model. 
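For orientation, the two canonical scoring functions mentioned above — TransE's translational assumption h + r ≈ t and DistMult's bilinear score — fit in a few lines. This is a generic sketch with our own variable names, not code from any of the cited systems:

```python
import torch

def transe_score(h, r, t, p=1):
    # TransE: a plausible triple satisfies h + r ≈ t, so a smaller
    # distance ||h + r - t||_p means a higher (less negative) score.
    return -torch.norm(h + r - t, p=p, dim=-1)

def distmult_score(h, r, t):
    # DistMult: bilinear score <h, diag(r), t>, i.e. an elementwise product.
    return (h * r * t).sum(dim=-1)

# score a small batch of triples with embedding dimension d = 8
h, r, t = (torch.randn(4, 8) for _ in range(3))
print(transe_score(h, r, t), distmult_score(h, r, t))
```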
LTE-ConvE (Zhang et al. 2022) introduces a simple linear transformation of entity representation to enhance UKGC models. CompoundE (Ge et al. 2023) extends the distancebased scoring functions to relation-dependent compound operations. Recently, some neural network based models have been proposed such as MRGAT (Dai et al. 2022), HADC (Shang et al. 2023a), ConKGC (Shang et al. 2023c), and GreenKGC (Wang et al. 2023). Multimodal Knowledge Graph Completion Existing multimodal knowledge graph completion models focus on encoding image features in KG embeddings. IKRL (Xie et al. 2017) extend TransE (Bordes et al. 2013) to obtain visual embeddings that correspond to the KG entities The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8958 ih 1t 2t 3t 1r 2r 3r s ih ⊗ ⊗ T 1 T k 2 T k 3 T k ⊗ iq 1iγ 2 iγ 3 iγ L1 L2 L3 || || i j j     h r t ⊗ ⊗ T 1 T v 2 T v 3 T v ⊗ ij a 1 Sim 2 Sim 3 Sim 1 Sim ⊗ ⊕ ⊗ ⊗ ⊗ 2 Sim 3 Sim ⊕ ⊕ σ Link Aware Fusion Link Aware Aggregation Deco der s Q s K a W v W s W Structural and visual embedding fusing Figure 2: The overall framework of our proposed model LAFA. The lower part represents the link aware fusion module, which first calculates the attention score between an entity and its associated images according to each link by our proposed modality interaction attention mechanism, and then fuses the embeddings of the entity and images based on different links by our proposed modality embedding fusion mechanism. The upper part represents the link aware aggregation module, which calculates the attention score of each neighbor entity to the central entity, and aggregates the embeddings obtained from the link aware fusion module based on structural information. and structural information of the KG separately. Mousselly et al. (Mousselly-Sergieh et al. 2018) propose to integrate multi-modal information. TransAE (Wang et al. 2019) learns the visual and structural features jointly into unified knowledge embeddings by an auto-encoder. RSME (Wang et al. 2021) automatically encourages or filters the influence of additional visual context during the representation learning. MKGformer (Chen et al. 2022) presents M-Encoder with multi-level fusion at the last several layers of ViT and BERT to conduct image-text incorporated entity modeling. Xu et al. (Xu et al. 2022) propose a multimodal relation-enhanced negative sampling framework to figure out hard negative samples for knowledge graph completion. HRGAT (Liang et al. 2023) incorporates different modal information with graph structure. MoSE (Zhao et al. 2022) designs a modality split representation learning and ensemble inference framework. OTKGE (Cao et al. 2022) models the multi-modal fusion procedure as a transport plan moving different modal embeddings to a unified space. Although existing MKGC models have shown promising performance, they target independent entities while exploring the contribution of images on entity embeddings without considering the impact of link information on them. Furthermore, they ignore the effect of the structural information of the KG on the entity embeddings, which leads to missing information from their neighbors and the structure of KG. Methodology In this section, we will show the formal description and implementation details of our model. First, we introduce the problem formulation of MKGC task. Then we describe the details of each module in LAFA. Finally, we show the decoder and loss function. The overall framework of LAFA is shown in Figure 2. 
Specifically, LAFA follows Encoder–Decoder framework. The encoder generates entity embeddings containing multimodal and neighborhood structural information, which contains two components. 1) The link aware fusion module can find the noise of irrelevant visual information by calculating importance scores between an entity and its associated images in different link scenarios, then fuses the visual and structural embeddings based on the attention scores and link information. 2)The link aware aggregation module can find the noise of irrelevant neighbors by calculating importance scores between the central entity and its neighbors, based on which the neighbor information with fused multimodal embeddings are aggregated. The learned embeddings from the encoder are fed to the decoder to predict missing triples. The decoder can be implemented by many existing UKGC models, such as DistMult (Yang et al. 2015), ComplEx (Trouillon et al. 2016), and ConvE (Dettmers et al. 2018). Problem Formulation A knowledge graph (G) is a directed graph, which can be formulated as G = {E, R, T }, where E and R represents the set of entities (nodes) and relations (edges), respectively. T = {(h, r, t) | (h, t ∈E) , r ∈R} is triple set in G, and r ∈R is the relation between head entity h and tail entity t. Multimodal knowledge graphs contain visual information based on the above structural information, that is, each entity is associated with multiple corresponding images. Multimodal knowledge graph completion (MKGC) approaches aim to learn the multimodal fused embeddings of entities The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8959 and relations by compressing them into a continuous lowdimensional vector space, and conduct link prediction based on the embeddings to predict tail entity for triple (h, r, ?) or head entity for triple (?, r, t). Link Aware Fusion An entity hi usually has multiple images in multimodal knowledge graphs, which can be represented as Vhi = { vi,1, vi,2,..., vi,N v i }, where N v i denotes the number of images of entity hi. To extract image features, the pre-trained ViT (Dosovitskiy et al. 2021) on ImageNet-1k (Touvron et al. 2021) is adopted as the visual encoder. The images for each entity are from different aspects in various scenarios, which may mislead the entity embedding since parts of them may be irrelevant to this entity due to different link information will give entities different attributes. Therefore, we argue that the contribution of images to entity embedding needs to be based on link information. To be specific, an entity usually has different attributes in different triples, and the contribution of each associated image to this entity is also different in diverse triples. To this end, we propose a link aware fusion module, which designs a modality interaction attention mechanism to dynamically measure the contribution of images to entity embedding based on link information so as to judge which images are noise, and a modality embedding fusion mechanism to fuse visual and structural embeddings. Modality Interaction Attention Since an entity in a KG has distinct heterogeneous connections with its neighbors, the idea of modality interaction attention comes from an intuition that the contribution of images to entity embedding is different in diverse link scenarios, and whether an image is noise needs to be judged based on triple information. 
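As a concrete illustration of this visual encoding step, the sketch below extracts one embedding per image with an ImageNet-pretrained ViT from torchvision. The paper specifies only that a pre-trained ViT is used, so the weight variant, preprocessing, and helper name here are our assumptions:

```python
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

# Load an ImageNet-1k pretrained ViT and strip the classification head so the
# forward pass returns the d_v-dimensional [CLS] representation per image.
vit = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
vit.heads = nn.Identity()
vit.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def entity_image_embeddings(image_paths):
    """Stack the initial visual embeddings v_{i,k} for one entity's images."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in image_paths])
    return vit(batch)                  # (N_i^v, d_v), with d_v = 768 for ViT-B/16
```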
Given a central entity hi and its neighbor entity set N s i = {tj ∈E | (hi, rj, tj) ∈T , rj ∈R}, we firstly randomly initialize the structural embeddings of entities and relations. Then we define a visual matrix to project visual embeddings into hidden embeddings for dimensional unification and similarity matching, as follows: vi,k = vi,kW v, (1) where vi,k ∈Rdv represents the initial visual embedding of image vi,k associated entity hi from ViT, dv is the dimension of initial visual embedding, W v ∈Rdv×dh is a trainable visual matrix, and dh is the dimension of hidden embedding. Triples connected to entity hi contain different semantic and link information, based on which we calculate the importance between the images with entity hi for a given triple (hi, rj, tj) to find noisy images, as follows: ai,j =  hs i ∥rs j ∥ts j  W a, (2) bi,k = ai,jv⊤ i,k √dh , (3) αi,k = softmaxk(bi,k) = exp(bi,k) P m∈Vhi exp(bi,m), (4) where hs i ∈Rds, rs j ∈Rds and ts j ∈Rds represent initial structural embeddings of hs i, rs j and ts j respectively, ds is the dimension of initial structural embedding, W a ∈R3ds×dh is a trainable linear transformation matrix, αi,k ∈[0, 1] represents the importance of image vi,k to entity hi, ∥denotes the concatenation of embeddings. The value of αi,k indicates whether the image vi,k is noise for the entity hi. In particular, we consider image vi,k to be noise when αi,k ≤ξ, where ξ is a predefined threshold. In addition, in order to find the noisy images of tail entity tj, the importance of images in set Vtj = { vj,1, vj,2,..., vj,N v j } associated with the tail entity tj also need to be measured based on the link information, as follows: cj,k = ai,jv⊤ j,k √dh ; vj,k = vj,kW v, (5) βj,k = softmaxk(cj,k) = exp(cj,k) P m∈Vtj exp(cj,m), (6) where vj,k ∈Rdv represents visual embedding of image vj,k ∈Vtj associated with the tail entity tj, βj,k represents the importance of image vj,k to entity tj. And the image vj,k is considered to be noise when βj,k ≤ξ. Modality Embedding Fusion The modality interaction attention mechanism finds noisy images based on link information. Unlike existing MKGC approaches, we do not directly remove these noisy images, but perform linear combination on visual embeddings of them based on the importance calculated above, then the visual and structural embeddings are fused. The motivation for this is that image information will always be helpful for learning entity embeddings. For entities hi and tj, the visual information of them for triple (hi, rj, tj) can be aggregated as follows: evi,j = X k∈Vhi αi,kvi,k, evj = X k∈Vtj βj,kvj,k. (7) To facilitate the fusion of visual and structural embedding, we define a structural matrix to project structural embeddings of entities hi and tj into hidden embeddings as follows: hi = hs iW s, tj = ts jW s, (8) where W s ∈Rds×dh is a trainable structural matrix, and ds is the dimension of structural embedding. Then the new embedding of entities hi and tj containing visual and structural information for triple (hi, rj, tj) can be fused as follows: ehi,j = σ hi + evi,j  , etj = σ tj + evj  , (9) where σ(·) is sigmoid activation function. In this way, we obtain the updated embeddings of entities by fusing visual and structural embeddings according to diverse link scenarios. Moreover, in order to improve and stabilize the effectiveness of LAFA and the learning procedure, we apply multi-head attention mechanism for capturing subspace information from different parameters. 
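Before turning to the multi-head combination, a single head of Eqs. (1)-(9) can be sketched as follows. The tensor names are our own and the tail-entity branch (Eqs. (5)-(6)) is symmetric, so it is omitted; treat this as an illustrative sketch rather than the authors' implementation:

```python
import math
import torch
import torch.nn as nn

class LinkAwareFusionHead(nn.Module):
    """One head of modality interaction attention plus modality embedding fusion."""
    def __init__(self, ds, dv, dh):
        super().__init__()
        self.W_v = nn.Linear(dv, dh, bias=False)      # visual matrix, Eq. (1)
        self.W_a = nn.Linear(3 * ds, dh, bias=False)  # link projection, Eq. (2)
        self.W_s = nn.Linear(ds, dh, bias=False)      # structural matrix, Eq. (8)

    def forward(self, h_s, r_s, t_s, V_h):
        # h_s, r_s, t_s: (ds,) structural embeddings of one triple (h_i, r_j, t_j)
        # V_h: (N_v, dv) initial ViT embeddings of the head entity's images
        V = self.W_v(V_h)                                    # Eq. (1)
        a = self.W_a(torch.cat([h_s, r_s, t_s]))             # Eq. (2)
        alpha = ((V @ a) / math.sqrt(a.numel())).softmax(0)  # Eqs. (3)-(4)
        v_agg = (alpha.unsqueeze(-1) * V).sum(dim=0)         # Eq. (7)
        return torch.sigmoid(self.W_s(h_s) + v_agg), alpha   # Eq. (9)
```

Images whose weight falls below the threshold (α ≤ ξ) are the ones the paper regards as noise; the sketch returns α so a caller could apply that test.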
Specifically, P independent attention heads are applied to learn embeddings, and their outputs are combined to generate the unified represenThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8960 tation. The formula is defined as follows: bhi,j =  eh (1) i,j ∥eh (2) i,j ∥... ∥eh (P ) i,j  W P , (10) btj = h et (1) j ∥et (2) j ∥... ∥et (P ) j i W P , (11) where eh (p) i,j and et (p) j mean the embeddings generated by the p-th attention head respectively, ∥represents the concatenation of embeddings, and W P ∈RP dh×dh is a trainable linear transformation matrix. bhi,j and btj are called multimodal fusion embeddings. Link Aware Aggregation Knowledge graph is a special graph based dataset, in which a central entity is often connected with multiple tail entities. Existing MKGC models usually focus on the interaction between multimodal data but ignore the structural information of KGs. For an entity hi, we argue that aggregating the information of its neighbor entities to its embedding is help for improving representation quality. Therefore, we propose a link aware aggregation module to aggregate the neighborhood structural information to the central entity. For a central entity hi, we define a query matrix to project the initial structural embedding of hi into a query vector, and a key matrix to project the initial structural embedding of a neighbor entity into a key vector, as follows: qi = hs iQs, kj = ts jKs, (12) where Qs ∈Rds×dh and Ks ∈Rds×dh are trainable query and key matrices, tj ∈N s i is a neighbor entity, and N s i is the neighbor entity set of hi. Then the softmax normalization is conducted on the dot product of query vector qi and key vector kj to calculate the attention score between them as follows: gi,j = qik⊤ j √dh , (13) γi,j = softmaxj(gi,j) = exp(gi,j) P l∈N s i exp(gi,l), (14) where γi,j represents the importance of the neighbor entity tj to the central entity hi. And the entity tj is considered to be noise when γi,j ≤ξ. Since the link aware fusion module will calculate multiple multimodal fusion embedding bhi,j based on each triple connected to the central entity hi, we aggregate them with structural information as follows: bj = h bhi,j ∥btj i W b, ei = X j∈N s i γi,jbj, (15) where bhi,j is the multimodal fusion embedding of central entity hi calculated according to j-th triple (hi, rj, tj), btj is the multimodal fusion embedding of entity tj, W b ∈ R2dh×dh is a trainable linear transformation matrix. To enable the model can concentrate on information from various subspaces and extract richer feature information, we apply multi-head attention mechanism. Specifically, we use Q independent attention heads to learn embeddings, and then combine them to generate the final embedding h′ i of the central entity, the formula is as follows: h′ i = h e(1) i ∥e(2) i ∥... ∥e(Q) i i W Q, (16) where e(q) i denotes the embedding learned by the q-th attention head, W Q ∈RQdh×dh is a trainable linear transformation matrix. By stacking link aware aggregation, the neighborhood information of each entity can be explored. Therefore, the structural and visual information of the entire KG is aggregated and the highly multimodal contextually relevant embeddings can be generated. Decoder MKGC models usually require a basic embedding model for link prediction, which is called decoder. In this paper, we build the decoder based on ConvE (Dettmers et al. 2018). Specifically, The input of the decoder are entity and relation embeddings E ∈R|E|×dh and R ∈R|R|×dh. 
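The single-head neighbor aggregation of Eqs. (12)-(16) that produces these entity embeddings can be sketched in the same spirit (again with our own tensor names, and with the γ ≤ ξ noise test left to the caller):

```python
import math
import torch
import torch.nn as nn

class LinkAwareAggregationHead(nn.Module):
    """One head of link aware aggregation over a central entity's neighbors."""
    def __init__(self, ds, dh):
        super().__init__()
        self.Q = nn.Linear(ds, dh, bias=False)        # query matrix, Eq. (12)
        self.K = nn.Linear(ds, dh, bias=False)        # key matrix, Eq. (12)
        self.W_b = nn.Linear(2 * dh, dh, bias=False)  # combination matrix, Eq. (15)

    def forward(self, h_s, T_s, H_fused, T_fused):
        # h_s: (ds,) structural embedding of the central entity h_i
        # T_s: (N, ds) structural embeddings of its N neighbors t_j
        # H_fused: (N, dh) multimodal fusion embeddings of h_i, one per triple
        # T_fused: (N, dh) multimodal fusion embeddings of the neighbors t_j
        q, k = self.Q(h_s), self.K(T_s)                          # Eq. (12)
        gamma = ((k @ q) / math.sqrt(q.numel())).softmax(dim=0)  # Eqs. (13)-(14)
        b = self.W_b(torch.cat([H_fused, T_fused], dim=-1))      # Eq. (15)
        return (gamma.unsqueeze(-1) * b).sum(dim=0)              # aggregated e_i
```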
Entity embeddings E ∈R|E|×dh are generated by aforementioned steps and have fused multimodal and structural information of entity neighbors. |R| denotes the number of relation set, and |E| represents the number of entity set, dh is the dimension of embeddings. Then the decoder outputs the score of each triple calculated by the scoring function, which represents the probability that the triple is valid. For a triple (h, r, t), the scoring function of our model can be defined as follows: Ψ(h, r, t) = σ  vec  σ ⌢ h′∥ ⌢r  ∗ω  W ′  t′, (17) where h′ and t′ are the updated embeddings (Eq. (16)) of head entity h and tail entity t respectively, r is the embedding of relation r, σ(·) represents a non-linear function, ⌢ h′ and ⌢r denote 2D reshaping of h′ and r respectively, ω is the convolution filter, ∗denotes the convolution operation, and W ′ is a trainable transformation matrix. Then the score is activated by the sigmoid function: p(h, r, t) = sigmoid(Ψ(h, r, t)). (18) It should be noticed that LAFA can be easily adapted to various decoders such as DistMult (Yang et al. 2015) and ComplEx (Trouillon et al. 2016). Training and Optimization We use the cross-entropy loss function as the loss of the entire model, which is defined as follows: L = X (h,r,t)∈T −1 |E| |E| X s=1 (y(h, r, ts) · log(p(h, r, ts))+ (1 −y(h, r, ts)) · log(1 −p(h, r, ts))), (19) where y(h, r, ts) ∈{0, 1} is the label of the triple (h, r, ts), |E| is the total number of all candidate tail entities, T is the set of true triples. We use Adam (Kingma and Ba 2014) as optimizer, and use label smoothing (Szegedy et al. 2016), Dropout (Srivastava et al. 2014), and Batch normalization (Ioffe and Szegedy 2015) to avoid overfitting. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8961 Model FB15k-237-IMG WN18-IMG Hits@1↑ Hits@3↑ Hits@10↑ MR↓ Hits@1↑ Hits@3↑ Hits@10↑ MR↓ Unimodal approaches TransE 0.198 0.376 0.441 323 0.040 0.745 0.923 357 DistMult 0.199 0.301 0.446 512 0.335 0.876 0.940 655 ComplEx 0.194 0.297 0.450 546 0.936 0.945 0.947 ConvE 0.237 0.356 0.501 256 0.937 0.947 0.951 294 LTE-ConvE 0.245 0.377 0.535 169 0.943 0.953 0.961 189 MRGAT 0.266 0.386 0.542 159 0.932 0.946 0.971 38 GreenKGC 0.265 0.369 0.507 241 0.937 0.946 0.950 266 CompoundE 0.264 0.393 0.545 151 0.942 0.952 0.972 36 Multimodal approaches IKRL(UNION) 0.194 0.284 0.458 298 0.127 0.796 0.928 596 TransAE 0.199 0.317 0.463 431 0.323 0.835 0.934 352 RSME(ViT-B/32+Forget) 0.242 0.344 0.467 417 0.943 0.951 0.957 223 MKGformer 0.256 0.367 0.504 221 0.944 0.961 0.972 28 LAFA-DistMult 0.264 0.392 0.546 150 0.943 0.958 0.971 29 LAFA-ComplEx 0.262 0.386 0.540 157 0.945 0.962 0.973 27 LAFA-ConvE 0.269 0.398 0.551 136 0.947 0.965 0.977 25 Table 1: Link prediction results on FB15k-237-IMG and WN18-IMG datasets. The best score is in bold. Datasets Entities Relations Train triples Validation triples Test triples FB15k-237-IMG 14,541 237 272,115 17,535 20,466 WN18-IMG 40,943 18 141,442 5,000 5,000 Table 2: Statistics of the datasets. Experiments Experimental Setup Datasets We evaluate our proposed model by two publicly available multimodal datasets: 1) FB15k-237-IMG (Bordes et al. 2013; Chen et al. 2022) and WN18-IMG (Bordes et al. 2013). The details of them are summarized in Table 2. FB15k-237-IMG is a subset of the large-scale knowledge graph Freebase (Bollacker et al. 2008), which has 10 images for each entity. WN18 (Bordes et al. 2013) is a knowledge graph originally extracted from WordNet (Miller 1995). 
WN18-IMG is an extended dataset of WN18 (Bordes et al. 2013) with 10 images for each entity. Evaluation Protocol Following previous work (Dettmers et al. 2018), our model is evaluated with link prediction task: ranking all entities to predict the tail entity in query (h, r, ?) or the head entity in query (?, r, t). We adopt four evaluation metrics: the mean rank of correct entities (MR), and the proportion of correct entities ranked in top k Hits@k (k ∈{1, 3, 10}). A small MR or a big Hit@k indicates a good result. And we follow the standard evaluation protocol in the filtered setting (Bordes et al. 2013): all true triples in the KG are filtered out during evaluation, since predicting a low rank for these triples should not be penalized. Baselines We compare results with the following SOTA models: Unimodal KGC approaches TransE (Bordes et al. 2013), DistMult (Yang et al. 2015), ComplEx (Trouillon et al. 2016), ConvE (Dettmers et al. 2018), LTEConvE (Zhang et al. 2022), MRGAT (Dai et al. 2022), GreenKGC (Wang et al. 2023), and CompoundE (Ge et al. 2023). Multimodal KGC approaches IKRL (Xie et al. 2017), TransAE (Wang et al. 2019), RSME (Wang et al. 2021), and MKGformer (Chen et al. 2022). Implementation Details We define the threshold ξ = 0.1. For all MKG datasets, the best performing hyper-parameters are found by grid search on the validation set. And the candidate hyper-parameters are selected in the following ranges: batch size {128, 512, 1024}, number of epochs {500, 1000, 2000}, dropout rate {0.1, 0.2, 0.3}, learning rate {0.001, 0.002, 0.003}, embedding dimensions {100, 200, 300, 400, 500}, attention head number P and Q {1, 2, 3, 4}. The experiments are implemented using the PyTorch (Paszke et al. 2017) framework, and are performed on single NVIDIA GeForce RTX2080Ti GPU. Main Results Table 1 presents the link prediction results on FB15k237-IMG and WN18-IMG datasets. We strictly follow the experimental setting and data splitting of the previous works (Wang et al. 2021; Chen et al. 2022) and report the results in the original papers for some baselines. The results show that LAFA have the best performance compared with existing SOTA unimodal and multimodal approaches, which demonstrate that fusing the visual and structural information to entity embeddings according to link information is generally helpful for MKC tasks. Specifically, the Hits@k (k ∈{1, 3, 10}) are improved by 1%-2% on FB15k-237IMG dataset. Particularly, Hits@3 and Hits@10 are improved from 0.393 to 0.398 and 0.545 to 0.551 respectively, MR is declined from 151 to 136. Compared with SOTA MKGC method MKGformer, LAFA improves Hits@3 and Hits@10 from 0.367 to 0.398 and 0.504 to 0.551 respecThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8962 img1 img2 img3 img4 img5 09c7w0-0168cl 09c7w0-09889g 09c7w0-03f1zhf 09c7w0-04cl1 09c7w0-07pzc 0.0 0.1 0.2 0.3 (a) Entities-imgs (Head 1) img1 img2 img3 img4 img5 09c7w0-0168cl 09c7w0-09889g 09c7w0-03f1zhf 09c7w0-04cl1 09c7w0-07pzc 0.0 0.1 0.2 0.3 (b) Entities-imgs (Head 3) 0168cl 09889g 03f1zhf 04cl1 07pzc 06lvlf 0gz5hs 09c7w0 05zppz 09nqf 02h40lc 04ztj 0.0 0.1 0.2 0.3 (c) Entities-entities (Head 1) 0168cl 09889g 03f1zhf 04cl1 07pzc 06lvlf 0gz5hs 09c7w0 05zppz 09nqf 02h40lc 04ztj 0.0 0.1 0.2 0.3 (d) Entities-entities (Head 3) Figure 3: Attention matrices of images and entities on FB15k-237-IMG dataset. The darker colored blocks in individual heads represent a higher attention score. tively. 
The reason is that LAFA can find noise images and fuse multimodal embeddings by the proposed link aware fusion, and aggregate neighbor structural information by the proposed link aware aggregation. The lower part of Table 1 shows the performance of LAFA with three different decoders, i.e., DistMult (Yang et al. 2015), ComplEx (Trouillon et al. 2016), and ConvE (Dettmers et al. 2018). Obviously, the decoders do have impact on the performance of the model, but not absolutely because the difference in their link prediction results is not very large. ConvE works best as a decoder for LAFA. DistMult and ComplEx are simple and effective models, but they result in a decline in the model performance when they are used as decoders. One possible reason is that their scoring functions weaken the importance of relations. Verification of Importance To explore the potential patterns of our innovations and verify that LAFA can give different importance to images and neighbor entities, we analyze the attention matrices generated according to α (Eq. (4)) and γ (Eq. (14)). Figure 3 shows the attention matrices of images and neighbor entities to the central entity respectively. It can be found that the color distribution is not uniform on all attention matrices, which is in line with our expectation that LAFA can assign different importance score to the specific images or neighbor entities. Furthermore, the consistent distribution of regions with higher importance scores illustrates that attention heads follow the same pattern in capturing subspace semantics. Specifically, we can find from Figure 3 (a) and Figure 3 (b) that the importance of each image associated with entity 09c7w0 is different in different links, which verifies our hypothesis. From Figure 3 (c) and Figure 3 (d), we can observe that the importance of each neighbor entity to the central entity 09c7w0 is also different, the reason is that the semantics of it vary greatly in different links. To this end, the results demonstrate that the proposed LAFA can effectively assign different importance scores to images and neighboring entities of the central entity. Ablation Study We conduct the ablation studies by removing the corresponding parts to construct variants of LAFA as follows: (1) LAFA−MIA replaces the modality interaction attention Model FB15k-237-IMG Hits@1↑ Hits@3↑ Hits@10↑ MR↓ ConvE 0.237 0.356 0.501 256 LAFA−MIA 0.258 0.384 0.538 164 LAFA−LAF 0.259 0.386 0.540 161 LAFA−LAA 0.257 0.384 0.537 168 LAFA 0.269 0.398 0.551 136 Table 3: Ablation study results on FB15k-237-IMG dataset. (MIA) module with the traditional vector similarity matching when calculating the importance of the image to the entity. (2) LAFA−LAF removes link aware fusion (LAF) module, in which the visual and structural embeddings are fused only by attention mechanism without link information; (3) LAFA−LAA removes link aware aggregation (LAA) module. The ablation studies results in Table 3 indicate that our proposed MIA, LAF, and LAA are all valid, that is, removing any of them will make the model less effective. MIA assigns different importance score to images and can judge which of them is noise, LAF exploits the influence of images to entity, and LAA assigns neighbor structural information to the central entity. The experimental results prove that the proposed innovations are effective and contribute significantly to the performance of the model. 
Conclusion In this paper, we present a novel link aware fusion and aggregation multimodal knowledge graph completion model named LAFA. The link aware fusion module calculates the importance between an entity and its associated images in different link scenarios and fuses the visual and structural embeddings according to the importance through our proposed modality embedding fusion mechanism to alleviate noise of irrelevant visual information. The link aware aggregation module calculates the importance between a given central entity and its neighbors, and aggregates the embeddings of them through linear combination according to the importance to assigns neighbor structural information to this entity. Empirical experimental evaluations on well-established multimodal datasets show that LAFA can achieve the state-of-the-art performance. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8963 Acknowledgments This work was supported by the National Natural Science Foundation of China (62137002, 62192781, and 62072354), the Fundamental Research Funds for the Central Universities (QTZX23084). References Balaˇzevi´c, I.; Allen, C.; and Hospedales, T. 2019. TuckER: Tensor Factorization for Knowledge Graph Completion. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 5185–5194. Bollacker, K.; Evans, C.; Paritosh, P.; Sturge, T.; and Taylor, J. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, 1247–1250. Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; and Yakhnenko, O. 2013. Translating embeddings for modeling multi-relational data. Advances in neural information processing systems, 26. Cao, Z.; Xu, Q.; Yang, Z.; He, Y.; Cao, X.; and Huang, Q. 2022. Otkge: Multi-modal knowledge graph embeddings via optimal transport. Advances in Neural Information Processing Systems, 35: 39090–39102. Chen, X.; Zhang, N.; Li, L.; Deng, S.; Tan, C.; Xu, C.; Huang, F.; Si, L.; and Chen, H. 2022. Hybrid transformer with multi-level fusion for multimodal knowledge graph completion. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 904–915. Dai, G.; Wang, X.; Zou, X.; Liu, C.; and Cen, S. 2022. MRGAT: Multi-Relational Graph Attention Network for knowledge graph completion. Neural Networks, 154: 234–245. Dai Quoc Nguyen, T. D. N.; Nguyen, D. Q.; and Phung, D. 2018. A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network. In Proceedings of NAACL-HLT, 327–333. Dettmers, T.; Minervini, P.; Stenetorp, P.; and Riedel, S. 2018. Convolutional 2d knowledge graph embeddings. In Proceedings of the AAAI conference on artificial intelligence, volume 32. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In 9th International Conference on Learning Representations. Ge, X.; Wang, Y. C.; Wang, B.; and Kuo, C.-C. J. 2023. Compounding Geometric Operations for Knowledge Graph Completion. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 6947–6965. Guo, Q.; Zhuang, F.; Qin, C.; Zhu, H.; Xie, X.; Xiong, H.; and He, Q. 2020. 
A survey on knowledge graph-based recommender systems. IEEE Transactions on Knowledge and Data Engineering, 34(8): 3549–3568. Huang, X.; Zhang, J.; Li, D.; and Li, P. 2019. Knowledge graph embedding based question answering. In Proceedings of the twelfth ACM international conference on web search and data mining, 105–113. Ioffe, S.; and Szegedy, C. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, 448– 456. Ji, G.; He, S.; Xu, L.; Liu, K.; and Zhao, J. 2015. Knowledge graph embedding via dynamic mapping matrix. In Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers), 687–696. Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Li, G.; Wang, X.; and Zhu, W. 2020. Boosting visual question answering with context-aware knowledge aggregation. In Proceedings of the 28th ACM International Conference on Multimedia, 1227–1235. Liang, S.; Zhu, A.; Zhang, J.; and Shao, J. 2023. Hyper-node relational graph attention network for multi-modal knowledge graph completion. ACM Transactions on Multimedia Computing, Communications and Applications, 19(2): 1–21. Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; and Zhu, X. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the AAAI conference on artificial intelligence, volume 29. Miller, G. A. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11): 39–41. Mousselly-Sergieh, H.; Botschen, T.; Gurevych, I.; and Roth, S. 2018. A multimodal translation-based approach for knowledge graph representation learning. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, 225–234. Nickel, M.; Tresp, V.; and Kriegel, H.-P. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on International Conference on Machine Learning, 809–816. Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; and Lerer, A. 2017. Automatic differentiation in pytorch. In Advances in neural information processing systems, 1–12. Shang, B.; Zhao, Y.; Liu, J.; Liu, Y.; and Wang, C. 2023a. A contrastive knowledge graph embedding model with hierarchical attention and dynamic completion. Neural Computing and Applications, 35(20): 15005–15018. Shang, B.; Zhao, Y.; Liu, Y.; and Wang, C. 2023b. Attentionbased exploitation and exploration strategy for multi-hop knowledge graph reasoning. Information Sciences, 653: 119787. Shang, B.; Zhao, Y.; Wang, D.; and Liu, J. 2023c. RelationAware Multi-Positive Contrastive Knowledge Graph Completion with Embedding Dimension Scaling. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 878– 888. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8964 Socher, R.; Chen, D.; Manning, C. D.; and Ng, A. 2013. Reasoning with neural tensor networks for knowledge base completion. Advances in neural information processing systems, 26. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1): 1929–1958. Sun, Z.; Deng, Z.-H.; Nie, J.-Y.; and Tang, J. 2019. 
RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space. In International Conference on Learning Representations. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; and Wojna, Z. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2818–2826. Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; and J´egou, H. 2021. Training data-efficient image transformers & distillation through attention. In International conference on machine learning, 10347–10357. Trouillon, T.; Welbl, J.; Riedel, S.; Gaussier, ´E.; and Bouchard, G. 2016. Complex embeddings for simple link prediction. In International conference on machine learning, 2071–2080. PMLR. Vashishth, S.; Sanyal, S.; Nitin, V.; and Talukdar, P. 2020. Composition-based Multi-Relational Graph Convolutional Networks. In Proceedings of the 7th International Conference on Learning Representations, 1–16. Wang, M.; Wang, S.; Yang, H.; Zhang, Z.; Chen, X.; and Qi, G. 2021. Is visual context really helpful for knowledge graph? A representation learning perspective. In Proceedings of the 29th ACM International Conference on Multimedia, 2735–2743. Wang, Y.-C.; Ge, X.; Wang, B.; and Kuo, C.-C. J. 2023. Greenkgc: A lightweight knowledge graph completion method. Wang, Z.; Li, L.; Li, Q.; and Zeng, D. 2019. Multimodal data enhanced representation learning for knowledge graphs. In 2019 International Joint Conference on Neural Networks (IJCNN), 1–8. IEEE. Wang, Z.; Zhang, J.; Feng, J.; and Chen, Z. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI conference on artificial intelligence, volume 28. Wu, Y.; Liao, L.; Zhang, G.; Lei, W.; Zhao, G.; Qian, X.; and Chua, T.-S. 2022. State graph reasoning for multimodal conversational recommendation. IEEE Transactions on Multimedia. Xie, R.; Liu, Z.; Luan, H.; and Sun, M. 2017. Imageembodied knowledge representation learning. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, 3140–3146. Xu, D.; Xu, T.; Wu, S.; Zhou, J.; and Chen, E. 2022. Relation-enhanced Negative Sampling for Multimodal Knowledge Graph Completion. In Proceedings of the 30th ACM International Conference on Multimedia, 3857–3866. Yang, A.; Lin, S.; Yeh, C.-H.; Shu, M.; Yang, Y.; and Chang, X. 2023. Context Matters: Distilling Knowledge Graph for Enhanced Object Detection. IEEE Transactions on Multimedia. Yang, B.; Yih, S. W.-t.; He, X.; Gao, J.; and Deng, L. 2015. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of the 3rd International Conference on Learning Representations, 1–12. Zhang, Z.; Wang, J.; Ye, J.; and Wu, F. 2022. Rethinking graph convolutional networks in knowledge graph completion. In Proceedings of the ACM Web Conference 2022, 798– 807. Zhao, Y.; Cai, X.; Wu, Y.; Zhang, H.; Zhang, Y.; Zhao, G.; and Jiang, N. 2022. MoSE: Modality Split and Ensemble for Multimodal Knowledge Graph Completion. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 10527–10536. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8965
Mixed Geometry Message and Trainable Convolutional Attention Network for Knowledge Graph Completion Bin Shang1,2, Yinliang Zhao1,2*, Jun Liu1,2, Di Wang3* 1Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, School of Computer Science and Technology, Xi’an Jiaotong University, China 2National Engineering Lab for Big Data Analytics, Xi’an Jiaotong University, China 3School of Computer Science and Technology, Xidian University, China [email protected], {zhaoy, liukeen}@xjtu.edu.cn, [email protected] Abstract Knowledge graph completion (KGC) aims to study the embedding representation to solve the incompleteness of knowledge graphs (KGs). Recently, graph convolutional networks (GCNs) and graph attention networks (GATs) have been widely used in KGC tasks by capturing neighbor information of entities. However, Both GCNs and GATs based KGC models have their limitations, and the best method is to analyze the neighbors of each entity (pre-validating), while this process is prohibitively expensive. Furthermore, the representation quality of the embeddings can affect the aggregation of neighbor information (message passing). To address the above limitations, we propose a novel knowledge graph completion model with mixed geometry message and trainable convolutional attention network named MGTCA. Concretely, the mixed geometry message function generates rich neighbor message by integrating spatially information in the hyperbolic space, hypersphere space and Euclidean space jointly. To complete the autonomous switching of graph neural networks (GNNs) and eliminate the necessity of pre-validating the local structure of KGs, a trainable convolutional attention network is proposed by comprising three types of GNNs in one trainable formulation. Furthermore, a mixed geometry scoring function is proposed, which calculates scores of triples by novel prediction function and similarity function based on different geometric spaces. Extensive experiments on three standard datasets confirm the effectiveness of our innovations, and the performance of MGTCA is significantly improved compared to the state-of-the-art approaches. Introduction Knowledge graphs (KGs) represent real-world data as fact triples (head entity, relation, tail entity), which have shown great research value and application prospect. KGs are widely used in many downstream tasks, such as question answering (Kaiser, Saha Roy, and Weikum 2021), dialogue generation (Keizer et al. 2017), semantic search (Xiong, Power, and Callan 2017), and recommender systems (Wang et al. 2021b). Even though the scale of many public KGs is noticeably large such as Yago3 (Mahdisoltani, Biega, and Suchanek 2013) and Freebase (Bollacker et al. 2008), they are still confronted with incompleteness because there are *Corresponding authors. Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. many missing relations among them. Therefore, knowledge graph completion (KGC) has attracted extensive attention and attempts to automatically find out missing facts. Knowledge graph embedding (KGE) is an effective solution for KGC task, and many of them have been proposed such as (Bordes et al. 2013; Yang et al. 2015; Dettmers et al. 2018; Vashishth et al. 2020a; Li et al. 2022; Ge et al. 2023). KGE approaches aim to embed entities and relations into a lowdimensional vector space and define scoring functions to assess the plausibility of triples for link prediction. 
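To make the KGE recipe just described concrete, the following is a minimal sketch of the embed-and-score pattern, using a TransE-style distance score purely as an illustration (the dimensions and names are our placeholders, sized to FB15k-237):

```python
import torch
import torch.nn as nn

num_entities, num_relations, dim = 14541, 237, 200
entity_emb = nn.Embedding(num_entities, dim)
relation_emb = nn.Embedding(num_relations, dim)

def transe_score(h_idx, r_idx, t_idx):
    # Score a triple (h, r, t): the closer h + r is to t in L1 distance,
    # the more plausible the triple; higher scores are better here.
    h, r, t = entity_emb(h_idx), relation_emb(r_idx), entity_emb(t_idx)
    return -torch.norm(h + r - t, p=1, dim=-1)
```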
Although these methods are simple and efficient, they are significantly reliant on the pre-defined scoring function and rather challenging to encode structural information about an entity into a single vector (Dai et al. 2022). In order to capture the intrinsic graph structure of KGs, graph neural networks (GNNs) (Gilmer et al. 2017; Javaloy et al. 2023) have been used for KGC task. GNNs based KGC models learn the hidden representation of each entity by aggregating its corresponding local neighbors’ information (Dai et al. 2022; Wang et al. 2023). Recently, many studies tend to model KGs by diverse types of GNNs such as graph convolutional networks (GCNs) (Kipf and Welling 2017) based models R-GCN (Schlichtkrull et al. 2018), CompGCN (Vashishth et al. 2020b), and LTE-ConvE (Zhang et al. 2022); graph attention networks (GATs) (Veliˇckovi´c et al. 2018) based models MRGAT (Dai et al. 2022), GreenKGC (Wang et al. 2023) and Ae2KGR (Shang et al. 2023b). Although these approaches have shown promising performance, they still suffer from several evident limitations as follows: (i) Data dependence. Both GCNs and GATs based KGC models have their strengths and limitations because they are data sensitive, which results in the problem of data dependence. GCNs based KGC approaches fully summarize neighbor messages to endow the central entity with sufficient structural information, while they tend to stack redundant information when there are various neighbors. GATs based KGC approaches introduces non-uniform score to each neighbors and can reduce the stacking of redundant information, while they tend to focus on certain neighbor entities and weaken the structural information. Therefore, the local structure of each entity can influence the performance of GCNs and GATs. The best method is to analyze the neighbors of each entity before selecting GCNs or GATs (pre-validating), while this The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8966 process is prohibitively expensive. (ii) Message limitation. The message function in GNNs can be affected by representation quality of embeddings, which results in the problem of message limitation. The message functions (MFs) are used to generate neighbor information and is a crucial component for the GNNs based KGC methods (Nathani et al. 2019; Dai et al. 2022). Existing MFs are designed only in Euclidean space (zero curvature), which cannot fully capture the intrinsic structural information of KGs and may lead to insufficient neighbor message. Therefore, exploring a new message function can help aggregate local information for GNNs and improve the representation quality of entity embeddings. To address the above issues, in this paper, we propose a Mixed Geometry message and Trainable Convolutional Attention network based knowledge graph completion model named MGTCA. In order to deal with the problem of message limitation, MGTCA introduces a mixed geometry message function (MGMF), which captures spatially information in the hyperbolic space (negative curvature), hypersphere space (positive curvature) and Euclidean space (zero curvature) jointly. In addition, MGMF integrates these information into message through geometric mapping and linear transformation. In order to deal with the problem of data dependence, MGTCA presents a trainable convolutional attention network (TCAN), which comprises different types of GNNs in one trainable formulation. 
TCAN aims to eliminate the necessity of pre-validating the local structure of KGs, complete the autonomous switching of GNNs types, and learn the amount of attention required for each local structure. Furthermore, to calculate scores of triples, we propose a mixed geometry scoring function with novel prediction function and similarity function based on the three geometric spaces. Our contributions are summarized as follows: • We propose a mixed geometry message function to generate rich neighbor message by integrating spatially information in the hyperbolic space, hypersphere space and Euclidean space jointly. To the best of our knowledge, we are the first to explore to generate mixed geometric message in GNNs based KGC methods. • We propose a trainable convolutional attention network to complete the autonomous switching of GNNs types and learn the amount of attention required for each local structure by comprising different types of GNNs in one trainable formulation. To the best of our knowledge, we are the first to explore the autonomous switching of GNNs types in KGC task. • We propose a mixed geometry scoring function to calculate scores of triples by novel prediction function and similarity function based on three geometric spaces. • We conduct extensive experiments on three benchmark datasets. The results show that MGTCA achieves stateof-the-art performance compared to existing models. Related Work Non-Euclidean KGC Models Modeling KGs in non-Euclidean spaces has attracted considerable attention, which can capture the complex structures of KGs by specific geometric space and improve the representation quality of embeddings. ManifoldE (Xiao, Huang, and Zhu 2016a) expands pointwise modeling in the translation based principle to manifoldwise space (e.g., hypersphere space). MuRP (Balazevic, Allen, and Hospedales 2019) learns KG embeddings in hyperbolic space to capture the hierarchical structure in the KG. RotH (Chami et al. 2020) introduces the hyperbolic geometry on the basis of the rotation. Meng’s (Meng et al. 2019) proposes a spherical generative model and learns word and paragraph embeddings jointly. These works model the KG in only one geometric space, which can not capture the complex spatial structure of KGs. Recently, in order to make full use of the advantages of each geometric space, M2 GNN (Wang et al. 2021a) constructs a generic graph neural network framework to model multi-relational KG. HBE (Pan and Wang 2021) fine-tunes the operator and fix model in polar coordinate system to embed KGs. GIE (Cao et al. 2022) is proposed to embrace semantic matching between entities and satisfy the key of relational representation learning. Euclidean KGC Models Euclidean KGC Models capture the information of KGs and prediction missing facts in Euclidean space. Generally, existing Euclidean KGC models can be divided into four groups: (i) Translation-based models consider the relations as translation between head and tail entities and design scoring function based on distances, such as TransE (Bordes et al. 2013), TransH (Wang et al. 2014), TransR (Lin et al. 2015), TransG (Xiao, Huang, and Zhu 2016b), RotatE (Sun et al. 2019), RotatE-IAS (Yang et al. 2022), HousE (Li et al. 2022), and CompoundE (Ge et al. 2023). (ii) Semantic matching models design scoring function by similarity matching of vector or matrix, such as RESCAL (Nickel, Tresp, and Kriegel 2011), DistMult (Yang et al. 2015), ComplEx (Trouillon et al. 
2016), TuckER (Balažević, Allen, and Hospedales 2019), and HAKE (Zhang et al. 2020). (iii) Convolutional neural networks (CNNs) based models employ multi-layer CNNs to generate more expressive embeddings, such as ConvKB (Nguyen et al. 2018), ConvE (Dettmers et al. 2018), and InteractE (Vashishth et al. 2020a). (iv) Graph neural networks (GNNs) based models utilize GNNs to update the embeddings of entities and relations based on the structural information of the knowledge graph, such as R-GCN (Schlichtkrull et al. 2018), KBGAT (Nathani et al. 2019), CompGCN (Vashishth et al. 2020b), ATTH (Chami et al. 2020), HittER (Chen et al. 2021), Rot-Pro (Song, Luo, and Huang 2021), SE-GNN (Li et al. 2022), LTE-ConvE (Zhang et al. 2022), MRGAT (Dai et al. 2022), HADC (Shang et al. 2023a), ConKGC (Shang et al. 2023c), and GreenKGC (Wang et al. 2023). Although the aforementioned GNNs based models have achieved satisfactory performance, they use a single type of GNN to learn embeddings, which degrades the representation quality of the embeddings because both GCNs and GATs have limitations when aggregating neighbor information. Furthermore, their message functions operate in Euclidean space, which cannot fully capture the intrinsic structural information of KGs.

Preliminaries

Geometric Space Geometric spaces are distinguished according to the value of the curvature. Specifically, the curvature c is negative for hyperbolic space H, positive for hypersphere space S, and zero for Euclidean space E. The Poincaré ball is a popular model for describing the geometric space in mathematical language (Nickel and Kiela 2017; Chami et al. 2020; Xiao et al. 2022), which has basic mathematical operations (e.g., addition, multiplication) and provides closed-form expressions for many basic objects such as distance and angle (Ganea, Bécigneul, and Hofmann 2018). The principled generalizations of the basic operations to hypersphere space are analogous to the hyperbolic operations, except that the curvature c > 0. Therefore, we only introduce the operations of hyperbolic space here; the operations of hypersphere space can be obtained by analogy.

The hyperbolic space can be formalized as an approximated vectorial structure through the framework of gyrovector spaces (Ungar 2008). For two points x, y ∈ H^d_c in the hyperbolic space, the Möbius addition (Ganea, Bécigneul, and Hofmann 2018) is used as the vector addition in H^d_c:

$$x \oplus_c y = \frac{(1 + 2c\langle x, y\rangle + c\|y\|^2)\,x + (1 - c\|x\|^2)\,y}{1 + 2c\langle x, y\rangle + c^2\|x\|^2\|y\|^2}, \tag{1}$$

then the distance between these two points is measured along a geodesic (the shortest path between them) as follows:

$$d_c(x, y) = \frac{2}{\sqrt{c}}\,\tanh^{-1}\!\left(\sqrt{c}\,\|{-x} \oplus_c y\|\right). \tag{2}$$

The practical computations (addition, multiplication, etc.) in the hyperbolic space are often implemented using the tangent space. For x ∈ H^d_c, the associated tangent space T_x H^d_c is a d-dimensional Euclidean space. The exponential map and the logarithmic map achieve the mutual transformation between the local hyperbolic space and the tangent space of the point: the logarithmic map log^c_x transforms a point to the tangent space, and the exponential map exp^c_x transforms it back to the hyperbolic space. These two maps have particularly appealing forms when x = 0, namely, for v ∈ T_0 H^d_c \ {0} and y ∈ H^d_c \ {0}:

$$\exp_0^c(v) = \tanh(\sqrt{c}\,\|v\|)\,\frac{v}{\sqrt{c}\,\|v\|}, \tag{3}$$

$$\log_0^c(y) = \tanh^{-1}(\sqrt{c}\,\|y\|)\,\frac{y}{\sqrt{c}\,\|y\|}. \tag{4}$$

Furthermore, the multiplication in hyperbolic space can be defined by the Möbius scalar multiplication between r ∈ E and x ∈ H^d_c:

$$r \otimes_c x = \frac{1}{\sqrt{c}}\,\tanh\!\left(r\,\tanh^{-1}(\sqrt{c}\,\|x\|)\right)\frac{x}{\|x\|}. \tag{5}$$

Knowledge Graph Completion A knowledge graph G can be formulated as G = {E, R, T}, where E and R represent the sets of entities (nodes) and relations (edges), respectively. T = {(h, r, t) | h, t ∈ E, r ∈ R} is the triple set in G, and r ∈ R is the relation between entities h and t. KGC approaches first project entities h ∈ E onto an entity embedding matrix {h ∈ E | E ∈ R^{|E|×d}} and relations r ∈ R onto a relation embedding matrix {r ∈ R | R ∈ R^{|R|×d}}, where |E| and |R| represent the total numbers of entities and relations, respectively, and d is the embedding dimension. The link prediction task aims to predict the tail entity t for a query (h, r, ?) or the head entity for a query (?, r, t). Such a goal is achieved by designing and learning a scoring function Φ(h, r, t) = ξ(ϕ(h, r), t), where ϕ(h, r) is the prediction function, which predicts the tail entity embedding t′, and ξ(t′, t) is the similarity function, which measures the similarity between the predicted tail entity embedding t′ and the true tail entity embedding t. The scoring function directly affects the model performance. The goal of the optimization is to score a correct triple higher than incorrect triples.

Methodology

In this section, we give the formal description and implementation details of our proposed model MGTCA. We start by introducing the mixed geometry message function, then describe the trainable convolutional attention network, next present the mixed geometry scoring function, and finally provide the loss function. The overall framework of MGTCA is shown in Figure 1. Generally, the whole model contains L layers; the input to the l-th layer (l = 1, ..., L) consists of two embedding sets: (1) the output entity embedding matrix E^{l−1} = {e^{l−1}_1, e^{l−1}_2, ..., e^{l−1}_{|E|}} ∈ R^{|E|×d} from the (l−1)-th layer, where |E| is the number of entities and d is the dimension of the embeddings; and (2) the output relation embedding matrix R^{l−1} = {r^{l−1}_1, r^{l−1}_2, ..., r^{l−1}_{|R|}} ∈ R^{|R|×d} from the (l−1)-th layer, where |R| is the number of relations. The l-th layer then produces the corresponding new output embedding matrices (of potentially different cardinality), E^l ∈ R^{|E|×d} and R^l ∈ R^{|R|×d}. In the following, we describe the l-th layer of our model.

Mixed Geometry Message Function

Graph neural networks (GNNs) (Gilmer et al. 2017) have been widely used in knowledge graph completion tasks; they update the embeddings of entities by aggregating information from their neighbors through a message function (MF), so that the entity embeddings acquire structural information. The message function generates neighbor information and is a crucial component of GNNs based KGC methods (Nathani et al. 2019; Dai et al. 2022). Recently, many MFs for the KGC task have been proposed and have achieved satisfactory results in Euclidean space (zero curvature). However, KGs usually contain rich structural information that cannot be fully captured in Euclidean space, which leads to insufficient neighbor information being passed by the MF. To alleviate this problem, we propose a mixed geometry message function that integrates spatial information from diverse geometric spaces (hyperbolic, hypersphere, and Euclidean spaces).
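As a concrete reference, the Poincaré-ball operations of Eqs. (1)-(5) translate almost line-for-line into PyTorch. A minimal sketch, assuming c is passed as a positive curvature magnitude (the paper's sign convention marks hyperbolic curvatures as negative), with numerical clamping added by us for stability:

```python
import torch

def mobius_add(x, y, c):
    # Möbius addition in the Poincaré ball (Eq. (1)).
    xy = (x * y).sum(-1, keepdim=True)
    x2 = (x * x).sum(-1, keepdim=True)
    y2 = (y * y).sum(-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den

def hyp_distance(x, y, c):
    # Geodesic distance (Eq. (2)).
    sqrt_c = c ** 0.5
    return 2.0 / sqrt_c * torch.atanh(
        (sqrt_c * mobius_add(-x, y, c).norm(dim=-1)).clamp(max=1 - 1e-5))

def expmap0(v, c):
    # Exponential map at the origin (Eq. (3)): tangent space -> ball.
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(1e-7)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def logmap0(y, c):
    # Logarithmic map at the origin (Eq. (4)): ball -> tangent space.
    sqrt_c = c ** 0.5
    norm = y.norm(dim=-1, keepdim=True).clamp_min(1e-7)
    return torch.atanh((sqrt_c * norm).clamp(max=1 - 1e-5)) * y / (sqrt_c * norm)

def mobius_scalar_mul(r, x, c):
    # Möbius scalar multiplication (Eq. (5)).
    sqrt_c = c ** 0.5
    norm = x.norm(dim=-1, keepdim=True).clamp_min(1e-7)
    return torch.tanh(r * torch.atanh((sqrt_c * norm).clamp(max=1 - 1e-5))) * x / (sqrt_c * norm)
```

The hypersphere analogues replace tanh and tanh⁻¹ with tan and tan⁻¹; for brevity, the sketches below reuse the hyperbolic versions for both branches.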
[Figure 1: The overall framework of MGTCA. The embeddings are learned by a multi-layer trainable convolutional attention network with the mixed geometry message function, and are then fed into the mixed geometry scoring function for link prediction. H, S, and E represent the hyperbolic, hypersphere, and Euclidean spaces, respectively.]

Specifically, given a central entity h_i and its neighbor set N_i = {(r_j, t_j) | (h_i, r_j, t_j) ∈ T}, which denotes all neighbors of entity h_i, the messages in the three geometric spaces can be defined as follows:

$$E^l_j = r^{l-1}_j t^{l-1}_j, \qquad H^l_j = r^{l-1}_j \otimes_{c^l_1} \exp_0^{c^l_1}(t^{l-1}_j), \qquad S^l_j = r^{l-1}_j \otimes_{c^l_2} \exp_0^{c^l_2}(t^{l-1}_j), \tag{6}$$

where r^{l−1}_j and t^{l−1}_j denote the embeddings of relation r_j and tail entity t_j in Euclidean space in the (l−1)-th layer, c^l_1 < 0 and c^l_2 > 0 are two trainable curvatures for the hyperbolic and hypersphere spaces in the l-th layer, the operation ⊗ is the Möbius scalar multiplication of Eq. (5), and exp^c_0 is the exponential map (Eq. (3)). The messages E^l_j, H^l_j, and S^l_j in the l-th layer come from different geometric spaces; we combine them and define our mixed geometry message function as follows:

$$\phi^l(r^{l-1}_j, t^{l-1}_j) = W^l_m \left[\, E^l_j \,\|\, \log_0^{c^l_1}(H^l_j) \,\|\, \log_0^{c^l_2}(S^l_j) \,\right], \tag{7}$$

where W^l_m ∈ R^{d×3d} is a trainable transformation matrix, ∥ denotes the concatenation of embeddings, and log^c_0 is the logarithmic map (Eq. (4)). Note that the input and output of φ^l(r^{l−1}_j, t^{l−1}_j) are in Euclidean space, but the output message contains rich spatial information from the three geometric spaces. In this way, we can improve the representation quality of the embeddings without burdening vector storage.

Trainable Convolutional Attention Network

Recently, many GNNs based KGC models have been proposed. Graph convolutional networks (GCNs) (Kipf and Welling 2017) and graph attention networks (GATs) (Veličković et al. 2018) are two important and widely used types of GNNs. For a given central entity h_i, the message passing function of GCNs for the KGC task can be defined as follows:

$$\bar{h}^l_i = \sigma(\hat{h}^l_i) \quad \text{where} \quad \hat{h}^l_i = \frac{1}{|N_i|} \sum_{j \in N_i} \phi^l(r^{l-1}_j, t^{l-1}_j), \tag{8}$$

where \bar{h}^l_i represents the generated embedding of entity h_i in the l-th layer, σ(·) is an activation function, N_i denotes the neighbors of h_i, and φ^l(r^{l−1}_j, t^{l−1}_j) is our proposed mixed geometry message function (Eq. (7)). The message passing function of GATs for the KGC task is defined as follows:

$$\bar{h}^l_i = \sigma(\hat{h}^l_i) \quad \text{where} \quad \hat{h}^l_i = \sum_{j \in N_i} \alpha^l_{ij}\, \phi^l(r^{l-1}_j, t^{l-1}_j),$$
$$\alpha^l_{ij} = \frac{\exp\!\left(\psi^l(h^{l-1}_i, r^{l-1}_j, t^{l-1}_j)\right)}{\sum_{k \in N_i} \exp\!\left(\psi^l(h^{l-1}_i, r^{l-1}_k, t^{l-1}_k)\right)},$$
$$\psi^l(h^{l-1}_i, r^{l-1}_j, t^{l-1}_j) = \mathrm{LeakyReLU}\!\left(a^{l\top}\left[\,W^l_q h^{l-1}_i \,\|\, W^l_k \phi^l(r^{l-1}_j, t^{l-1}_j)\,\right]\right), \tag{9}$$

where W^l_q ∈ R^{d_h×d} and W^l_k ∈ R^{d_h×d} are trainable transformation matrices in the l-th layer, d_h is the dimension of the hidden embedding, a^l is the attention vector in the l-th layer, ∥ denotes the concatenation of two embeddings, and ψ^l(h^{l−1}_i, r^{l−1}_j, t^{l−1}_j) is the attention function in the l-th layer.
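Returning to Eqs. (6)-(7), the mixed geometry message function can be sketched as a small module, reusing expmap0, logmap0, and mobius_scalar_mul from the earlier sketch. Reading the juxtaposition in E^l_j as an elementwise product and applying the Möbius scalar multiplication elementwise to relation vectors are our interpretations of the notation:

```python
import torch
import torch.nn as nn

class MixedGeometryMessage(nn.Module):
    """Sketch of the mixed geometry message function (Eqs. (6)-(7)).
    The hypersphere branch borrows the hyperbolic formulas for brevity;
    a faithful version would use tan/atan for positive curvature."""
    def __init__(self, dim):
        super().__init__()
        self.W_m = nn.Linear(3 * dim, dim, bias=False)  # W_m^l in R^{d x 3d}
        self.c1 = nn.Parameter(torch.tensor(1.0))       # curvature magnitude, hyperbolic branch
        self.c2 = nn.Parameter(torch.tensor(1.0))       # curvature magnitude, hypersphere branch

    def forward(self, r, t):
        # r, t: (n, d) relation and tail-entity embeddings in Euclidean space.
        e_msg = r * t                                               # E_j, Euclidean message
        h_msg = mobius_scalar_mul(r, expmap0(t, self.c1), self.c1)  # H_j
        s_msg = mobius_scalar_mul(r, expmap0(t, self.c2), self.c2)  # S_j
        # Map the curved-space messages back to the tangent (Euclidean)
        # space before concatenation and mixing, as in Eq. (7).
        mixed = torch.cat([e_msg, logmap0(h_msg, self.c1), logmap0(s_msg, self.c2)], dim=-1)
        return self.W_m(mixed)
```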
GCNs based KGC models treat the neighbors of an entity equally and can fully summarize neighbor messages, endowing the central entity with sufficient structural information, but they may stack redundant information when the neighbors are diverse. GATs based KGC models assign a non-uniform score to each neighbor and can reduce the stacking of redundant information, but they tend to focus on certain neighbor entities and weaken the structural information. Based on the above observations, we conclude that both GCNs and GATs based KGC approaches are sensitive to KG structures (the properties of entity neighborhoods). The best way to deal with this problem is to analyze the neighbors of each entity before choosing an architecture (pre-validating), but this process is prohibitively expensive. Therefore, we propose a knowledge graph convolutional attention network (KGCAT), which applies a convolutional operation to the attention function of Eq. (9):

$$\psi^l(h^{l-1}_i, r^{l-1}_j, t^{l-1}_j) = \mathrm{LeakyReLU}\!\left(a^{l\top}\left[\,W^l_q \tilde{h}^l_i \,\|\, W^l_k \phi^l(r^{l-1}_j, \tilde{t}^l_j)\,\right]\right),$$
$$\tilde{h}^l_i = \frac{1}{1 + |N_i|}\left(h^{l-1}_i + \sum_{k \in N_i} \phi^l(r^{l-1}_k, t^{l-1}_k)\right), \tag{10}$$

where \tilde{h}^l_i and \tilde{t}^l_j are the convolved embeddings of the central entity h_i and the neighbor entity t_j in the l-th layer, respectively. Applying the convolution operation before the attention mechanism gives the central entity sufficient structural information while avoiding redundant information stacking. However, this method is a compromise that may mask the original advantages of GCNs and GATs. To this end, we propose a trainable convolutional attention network (TCAN), whose attention function is defined as follows:

$$\psi^l(h^{l-1}_i, r^{l-1}_j, t^{l-1}_j) = \alpha^l\,\mathrm{LeakyReLU}\!\left(a^{l\top}\left[\,W^l_q \tilde{h}^l_i \,\|\, W^l_k \phi^l(r^{l-1}_j, \tilde{t}^l_j)\,\right]\right),$$
$$\tilde{h}^l_i = \frac{h^{l-1}_i + \beta^l \sum_{k \in N_i} \phi^l(r^{l-1}_k, t^{l-1}_k)}{1 + \beta^l\,|N_i|}, \tag{11}$$

where α^l, β^l ∈ [0, 1] are two trainable coefficients in the l-th layer. According to these two coefficients, TCAN can be transformed into GCNs (α^l = 0), GATs (α^l = 1 and β^l = 0), and KGCAT (α^l = 1 and β^l = 1). TCAN eliminates the necessity of pre-validating the local structure of KGs by comprising the three types of GNNs in one trainable formulation. Furthermore, it completes the autonomous switching of GNN types and can learn the amount of attention required for each local structure. Our model contains L layers, each with a trainable convolutional attention network and a mixed geometry message function. The output of the final layer is the set of generated entity embeddings with rich structural information, which are fed into the scoring function for link prediction.

Mixed Geometry Scoring Function

For a triple (h, r, t), the scoring function Φ(h, r, t) = softmax(ξ(ϕ(h, r), t)) is composed of a prediction function t′ = ϕ(h, r) and a similarity function ξ(t′, t). In this work, we propose a novel prediction function as follows:

$$t' = \phi(h, r) = W_p\,[\,h \,\|\, r\,] + b_p, \tag{12}$$

where W_p ∈ R^{d×2d} is a trainable transformation matrix, b_p is a bias, and t′ is the predicted embedding of the tail entity t. Considering that the embeddings contain information from the hyperbolic, hypersphere, and Euclidean spaces, we propose a novel geometric similarity function as follows:

$$\xi(t', t) = d_0(t', t) + d_{c_1}\!\left(\exp_0^{c_1}(t'),\, \exp_0^{c_1}(t)\right) + d_{c_2}\!\left(\exp_0^{c_2}(t'),\, \exp_0^{c_2}(t)\right), \tag{13}$$

where d_c(·) is the distance function (Eq. (2)), and c_1 < 0 and c_2 > 0 are two trainable curvatures for the hyperbolic and hypersphere spaces.
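Eqs. (11) and (13) can be sketched as follows for a single central entity. Several details are our assumptions: σ is taken to be a sigmoid (the paper only states it is an activation function), α and β are kept in [0, 1] by clamping, and the raw messages stand in for the separately convolved neighbor embeddings:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TCANLayer(nn.Module):
    """Sketch of the trainable convolutional attention of Eq. (11).
    msg_fn plays the role of phi^l, e.g. the MixedGeometryMessage above."""
    def __init__(self, dim, hidden_dim, msg_fn):
        super().__init__()
        self.msg_fn = msg_fn
        self.W_q = nn.Linear(dim, hidden_dim, bias=False)
        self.W_k = nn.Linear(dim, hidden_dim, bias=False)
        self.a = nn.Parameter(torch.randn(2 * hidden_dim))
        self.alpha = nn.Parameter(torch.tensor(0.5))  # initialised to 0.5 as in the paper
        self.beta = nn.Parameter(torch.tensor(0.5))

    def forward(self, h, r_nbr, t_nbr):
        # h: (d,); r_nbr, t_nbr: (n, d) for the n neighbors of h.
        msgs = self.msg_fn(r_nbr, t_nbr)                   # phi^l(r_j, t_j)
        alpha = self.alpha.clamp(0.0, 1.0)
        beta = self.beta.clamp(0.0, 1.0)
        n = msgs.size(0)
        h_tilde = (h + beta * msgs.sum(dim=0)) / (1.0 + beta * n)   # convolved central entity
        pairs = torch.cat([self.W_q(h_tilde).expand(n, -1), self.W_k(msgs)], dim=-1)
        psi = alpha * F.leaky_relu(pairs @ self.a)
        att = torch.softmax(psi, dim=0)                    # alpha = 0 -> uniform weights (GCN mean)
        return torch.sigmoid((att.unsqueeze(-1) * msgs).sum(dim=0))

def mixed_similarity(t_pred, t_true, c1, c2):
    # Mixed geometry similarity of Eq. (13), reusing hyp_distance/expmap0
    # from the earlier sketch (the hypersphere term again borrows the
    # hyperbolic formulas for brevity).
    d_e = torch.norm(t_pred - t_true, dim=-1)
    d_h = hyp_distance(expmap0(t_pred, c1), expmap0(t_true, c1), c1)
    d_s = hyp_distance(expmap0(t_pred, c2), expmap0(t_true, c2), c2)
    return d_e + d_h + d_s
```

As a usage note, fixing the coefficients recovers the special cases named in the text: alpha=0 gives GCN-style mean aggregation, alpha=1 with beta=0 gives a GAT, and alpha=1 with beta=1 gives KGCAT.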
Finally, the softmax function is applied to the absolute score calculated by the similarity function to obtain the relative score of each triple.

Training and Optimization Our objective is to minimize the Bernoulli negative log-likelihood, based on which the loss function of MGTCA is defined as follows:

$$\mathcal{L} = \sum_{(h,r,t) \in \mathcal{T}} -\frac{1}{N} \sum_{i=1}^{N} \Big( y(h, r, t_i)\log(p_i) + (1 - y(h, r, t_i))\log(1 - p_i) \Big), \tag{14}$$

where y(h, r, t_i) is the label (1 or 0) of the triple (h, r, t_i), p_i = Φ(h, r, t_i) is the score calculated by the scoring function, and N denotes the number of candidates for the tail entity. We use Adam (Kingma and Ba 2015) as the optimizer, and label smoothing (Szegedy et al. 2016), Dropout (Srivastava et al. 2014), and Batch Normalization (Ioffe and Szegedy 2015) to lessen overfitting.

Experiments

Experimental Setup

Datasets We evaluate our proposed model on three standard datasets: FB15k-237 (Toutanova et al. 2015), YAGO3-10 (Dettmers et al. 2018), and WN18RR (Dettmers et al. 2018). FB15k-237 is a subset of FB15k (Bordes et al. 2013) in which the inverse relations are removed. YAGO3-10 is a subset of YAGO3 (Mahdisoltani, Biega, and Suchanek 2013) that contains entities with at least 10 relations. WN18RR is a subset of WN18 (Bordes et al. 2013) whose main relation patterns are symmetry/antisymmetry and composition. Their statistics are summarized in Table 1.

| Dataset | Entities | Relations | Train triples | Validation triples | Test triples |
|---|---|---|---|---|---|
| FB15k-237 | 14,541 | 237 | 272,115 | 17,535 | 20,466 |
| YAGO3-10 | 123,182 | 37 | 1,079,040 | 5,000 | 5,000 |
| WN18RR | 40,943 | 11 | 86,835 | 3,034 | 3,134 |

Table 1: Statistics of the datasets used in this paper.

Evaluation Metrics Following previous work (Dettmers et al. 2018), our model is evaluated on the link prediction task: ranking all entities to predict the tail entity in a query (h, r, ?) or the head entity in a query (?, r, t). We adopt four evaluation metrics: the mean reciprocal rank (MRR), i.e., the average inverse rank of the test triples, and the proportion of correct entities ranked in the top k, Hits@k (k ∈ {1, 3, 10}). We follow the standard evaluation protocol in the filtered setting: all true triples in the KG are filtered out during evaluation.

Baselines We compare our results with the following SOTA models. Euclidean approaches: TransE (Bordes et al. 2013), ConvE (Dettmers et al. 2018), RotatE (Sun et al. 2019), CompGCN (Vashishth et al. 2020b), HittER (Chen et al. 2021), LTE-ConvE (Zhang et al. 2022), RotatE-IAS (Yang et al. 2022), MRGAT (Dai et al. 2022), GreenKGC (Wang et al. 2023), and CompoundE (Ge et al. 2023). Non-Euclidean approaches: MuRP (Balazevic, Allen, and Hospedales 2019), MuRS (Wang et al. 2021a), MuRMP (Wang et al. 2021a), HBE (Pan and Wang 2021), Rot-Pro (Song, Luo, and Huang 2021), and GIE (Cao et al. 2022).
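Before turning to the results, Eq. (14) and the filtered-ranking protocol described above can be written compactly. A sketch; the function names and the particular label-smoothing form are our placeholders:

```python
import torch
import torch.nn.functional as F

def mgtca_loss(probs, labels, smoothing=0.1):
    # Bernoulli negative log-likelihood of Eq. (14). probs: (batch, N)
    # scores p_i = Phi(h, r, t_i) in (0, 1); labels: (batch, N) 0/1 targets.
    targets = labels * (1 - smoothing) + smoothing / labels.size(1)
    return F.binary_cross_entropy(probs, targets)

def ranking_metrics(filtered_ranks):
    # MRR and Hits@k from the 1-indexed filtered ranks of the true entities.
    n = len(filtered_ranks)
    mrr = sum(1.0 / r for r in filtered_ranks) / n
    hits = {k: sum(r <= k for r in filtered_ranks) / n for k in (1, 3, 10)}
    return mrr, hits
```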
The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8970 Model FB15k-237 YAGO3-10 WN18RR MRR Hits@1 Hits@3 Hits@10 MRR Hits@1 Hits@3 Hits@10 MRR Hits@1 Hits@3 Hits@10 Euclidean approaches TransE .294 .465 .226 .501 ConvE .325 .237 .356 .501 .440 .350 .490 .620 .430 .400 .440 .520 RotatE .338 .241 .375 .533 .495 .402 .550 .670 .476 .428 .492 .571 CompGCN .355 .264 .390 .535 .489 .395 .500 .582 .479 .443 .494 .546 HittER .373 .279 .409 .558 .503 .462 .516 .584 LTE-ConvE .355 .264 .389 .535 .472 .437 .485 .544 RotatE-IAS .339 .242 .374 .532 .483 .467 .502 .570 MRGAT .358 .266 .386 .542 .552 .439 .561 .698 .481 .443 .501 .568 GreenKGC .345 .265 .369 .507 .453 .361 .509 .629 .411 .367 .430 .491 CompoundE .357 .264 .393 .545 .491 .450 .508 .576 Non-Euclidean approaches MuRP .335 .243 .367 .518 .354 .249 .400 .567 .481 .440 .495 .566 MuRS .338 .249 .373 .525 .351 .244 .382 .562 .454 .432 .482 .550 MuRMP .345 .258 .385 .542 .358 .248 .389 .566 .473 .435 .485 .552 HBE .336 .239 .372 .534 .488 .448 .502 .570 Rot-Pro .344 .246 .383 .540 .542 .443 .596 .699 .457 .397 .482 .577 GIE .362 .271 .401 .552 .579 .505 .618 .709 .491 .452 .505 .575 MGTCA .393 .291 .428 .583 .586 .514 .629 .721 .511 .475 .525 .593 Table 2: Link prediction results of MRR and Hits@k on FB15k-237, YAGO3-10, and WN18RR datasets. The best score is in bold and second best score is underlined. Implementation Details We set layer number L = 5, attention head number is 3, and dimension d = 200. Coefficients αl and βl (l = 1, ..., L) are initially set to 0.5. For each dataset, the best performing hyper-parameters are found by grid search on the validation set. All experiments are performed on single NVIDIA GeForce RTX2080Ti GPU, and are implemented by the PyTorch framework. Results on Link Prediction Main Results Table 2 presents the link prediction results on FB15k-237, YAGO3-10, and WN18RR datasets. We strictly follow the experimental setting and data splitting of the previous work (Dettmers et al. 2018) and report the results in the original papers for some baselines. It is clear that our proposed model MGTCA achieves the best performance on the vast majority of datasets by comparing with existing state-of-the-art (SOTA) models. MGTCA improves the four evaluation metrics by 2%-3% compared to the SOTA results (underlined results) on the FB15k-237 dataset. Particularly, MRR and Hits@10 are improved from 0.373 to 0.393 and 0.558 to 0.583, respectively. On YAGO310 and WN18RR datasets, MGTCA yields a significant improvement for Hits@3 and Hits@10 compared with SOTA baselines. Furthermore, compared with GNNs based KGC models such as MRGAT and GreenKGC, MGTCA achieves definitive improvement on all datasets, which demonstrate that our proposed trainable convolutional attention network facilitates the exploration of local structures as well as the learning of embeddings. Finally, MGTCA outperforms existing non-Euclidean approaches, the reason is that MGTCA designs its unique mixed geometry message function and 1-1 1-N N-1 N-N MRR H10 MRR H10 MRR H10 MRR H10 TransE .217 .407 .183 .399 .254 .381 .323 .510 ConvE .195 .401 .212 .410 .271 .397 .352 .531 MRGAT .178 .395 .237 .413 .294 .432 .371 .562 MGTCA .219 .411 .246 .421 .312 .447 .382 .572 Table 3: Link prediction results of MRR and Hits@10 from different relation types on FB15k-237 dataset. scoring function. These two functions integrate the spatially information from three geometric spaces for message passing and link prediction respectively. 
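The relation-type split used in Table 3 (1-1, 1-N, N-1, N-N) is conventionally derived from per-relation statistics: the average number of tails per head (tph) and heads per tail (hpt). A sketch following the usual heuristic attributed to Bordes et al. (2013), with the customary threshold of 1.5 as our assumption:

```python
from collections import defaultdict

def classify_relations(triples, threshold=1.5):
    # triples: iterable of (head, relation, tail) identifiers.
    n_triples = defaultdict(int)
    heads = defaultdict(set)
    tails = defaultdict(set)
    for h, r, t in triples:
        n_triples[r] += 1
        heads[r].add(h)
        tails[r].add(t)
    categories = {}
    for r in n_triples:
        tph = n_triples[r] / len(heads[r])  # avg tails per head
        hpt = n_triples[r] / len(tails[r])  # avg heads per tail
        if tph < threshold and hpt < threshold:
            categories[r] = "1-1"
        elif tph >= threshold and hpt < threshold:
            categories[r] = "1-N"
        elif tph < threshold and hpt >= threshold:
            categories[r] = "N-1"
        else:
            categories[r] = "N-N"
    return categories
```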
Analysis of Relations Generally, the number of relations is much smaller than the number of entities in KGs, so the same relation is usually connected to multiple entities, which leads to multiple relation types. Following (Bordes et al. 2013), relations can be classified into four categories: oneto-one (1-1), one-to-many (1-N), many-to-one (N-1), and many-to-many (N-N). In order to verify whether MGTCA can effectively deal with the challenge brought by different relation types, we classify the relations in FB15k-237 into the four types mentioned above. The link prediction results of them is shown in Table 3. It can be found that MGTCA is more advantageous for modeling KGs with various relation types. Specifically, MGTCA can better model complex relations such as 1-N, N-1, and N-N types, the reason is that the proposed mixed geometry message function is able to capture the interactions between entities and relations. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8971 FB15k-237 YAGO3-10 WN18RR MRR Hits@1 Hits@3 Hits@10 MRR Hits@1 Hits@3 Hits@10 MRR Hits@1 Hits@3 Hits@10 w/o MGMF .377 .275 .411 .567 .568 .497 .610 .704 .496 .459 .509 .578 w/o H .380 .278 .413 .570 .570 .501 .613 .706 .499 .463 .511 .580 w/o S .381 .280 .414 .571 .571 .502 .612 .708 .501 .466 .513 .582 w/o E .380 .279 .412 .572 .571 .501 .613 .706 .501 .466 .513 .581 w/o MGSF .380 .280 .414 .569 .572 .499 .611 .706 .499 .465 .510 .581 MGTCA-GCN .381 .280 .415 .571 .574 .500 .612 .709 .501 .467 .512 .582 MGTCA-GAT .384 .284 .417 .573 .577 .503 .616 .712 .503 .470 .516 .585 MGTCA-KGCAT .387 .287 .421 .576 .580 .507 .620 .715 .506 .474 .520 .588 MGTCA .393 .291 .428 .583 .586 .514 .629 .721 .511 .475 .525 .593 Table 4: Ablation study results on three datasets. w/o MGMF represents removing mixed geometry message function (MGMF) from MGTCA. w/o H, w/o S and w/o E denote removing hyperbolic, hypersphere and Euclidean space respectively. w/o MGSF denotes removing mixed geometry scoring function (MGSF). MGTCA-M denotes that the proposed trainable convolutional attention network (TCAN) is replaced by M. 1 2 3 4 5 Layers 0.00 0.10 0.20 0.30 0.40 0.50 0.60 0.70 Value FB15k-237 0.01 0.11 0.25 0.08 0.02 0.13 0.32 0.49 0.29 0.18 Figure 2: Values of α and β over layers. Effect of α and β The two coefficients α and β are important for our proposed trainable convolutional attention network (TCAN). TCAN can be transformed into GCNs (α = 0), GATs (α = 1 and β = 0), and KGCAT (α = 1 and β = 1). The values of these two coefficients reflect the characteristics of each layer. Therefore, we observe their values in each layer on FB15k-237 dataset, and the experimental results are shown in Figure 2. It can be found that the value of α of the first layer and the last layer are close to 0, so both layers can be regarded as GCNs. The attention of the third layer plays the largest role, and this layer has been approximately transformed into KGCAT. The second and fourth layers are close to GATs. Overall, the results fully verify the advantage of MGTCA, that is, each layer of it can autonomously learn α and β to adjust its GNNs type. Ablation Study Table 4 shows the ablation study results of our proposed MGTCA on the three datasets, where we evaluate the innovations of our model to judge their contribution. 
The comparison results indicate that our proposed mixed geometry message function (MGMF), trainable convolutional attention network (TCAN) and mixed geometry scoring function (MGSF) are all valid, that is, removing any of them will make the model less effective. Specifically, we use the message function from MRGAT (Dai et al. 2022) for removing MGMF. The hyperbolic, hypersphere and Euclidean space can be removed directly from our model, and the scoring function is defined according to the rest spaces. For removing MGSF, we use the scoring function of ConvE (Dettmers et al. 2018) for link prediction. MGMF integrates the spatially information to generate rich neighbor message, TCAN comprises different types of GNNs in one trainable formulation, and MGSF designs novel prediction function and similarity function based on the three geometric spaces. These three innovations are important components of our model, and the ablation results have verify their contribution. Furthermore, removing any of geometric spaces in MGTCA leads to a decline in model performance, which demonstrate that they all contribute significantly to the message passing and link prediction. Conclusion In this paper, we propose a mixed geometry message and trainable convolutional attention network for knowledge graph completion named MGTCA. MGTCA introduces a mixed geometry message function to enrich the neighbor message by integrating the spatially information in the hyperbolic space (negative curvature), hypersphere space (positive curvature) and Euclidean space (zero curvature) jointly. To eliminate the necessity of pre-validating the local structure of KGs, complete the autonomous switching of GNNs types, and learn the amount of attention required for each local structure, MGTCA presents a trainable convolutional attention network (TCAN) by comprising different types of GNNs in one trainable formulation. Moreover, MGTCA designs a mixed geometry scoring function to calculate scores of triples by novel prediction function and similarity function based on the three geometric spaces. Empirical experimental evaluations on three well-established datasets show that MGTCA can achieve the state-of-the-art performance. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8972 Acknowledgments This work was supported by the National Natural Science Foundation of China (62137002, 62192781, and 62072354), the Fundamental Research Funds for the Central Universities (QTZX23084). References Balazevic, I.; Allen, C.; and Hospedales, T. 2019. Multirelational poincar´e graph embeddings. Advances in Neural Information Processing Systems, 32. Balaˇzevi´c, I.; Allen, C.; and Hospedales, T. 2019. TuckER: Tensor Factorization for Knowledge Graph Completion. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 5185–5194. Bollacker, K.; Evans, C.; Paritosh, P.; Sturge, T.; and Taylor, J. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, 1247–1250. Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; and Yakhnenko, O. 2013. Translating embeddings for modeling multi-relational data. Advances in neural information processing systems, 26. Cao, Z.; Xu, Q.; Yang, Z.; Cao, X.; and Huang, Q. 2022. Geometry interaction knowledge graph embeddings. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 5521–5529. Chami, I.; Wolf, A.; Juan, D.-C.; Sala, F.; Ravi, S.; and R´e, C. 2020. Low-Dimensional Hyperbolic Knowledge Graph Embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 6901–6914. Chen, S.; Liu, X.; Gao, J.; Jiao, J.; Zhang, R.; and Ji, Y. 2021. HittER: Hierarchical Transformers for Knowledge Graph Embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 10395–10407. Dai, G.; Wang, X.; Zou, X.; Liu, C.; and Cen, S. 2022. MRGAT: Multi-Relational Graph Attention Network for knowledge graph completion. Neural Networks, 154: 234–245. Dettmers, T.; Minervini, P.; Stenetorp, P.; and Riedel, S. 2018. Convolutional 2d knowledge graph embeddings. In Proceedings of the AAAI conference on artificial intelligence, volume 32. Ganea, O.; B´ecigneul, G.; and Hofmann, T. 2018. Hyperbolic neural networks. Advances in neural information processing systems, 31. Ge, X.; Wang, Y. C.; Wang, B.; and Kuo, C.-C. J. 2023. Compounding Geometric Operations for Knowledge Graph Completion. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 6947–6965. Gilmer, J.; Schoenholz, S. S.; Riley, P. F.; Vinyals, O.; and Dahl, G. E. 2017. Neural message passing for quantum chemistry. In International conference on machine learning, 1263–1272. PMLR. Ioffe, S.; and Szegedy, C. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, 448– 456. pmlr. Javaloy, A.; Martin, P. S.; Levi, A.; and Valera, I. 2023. Learnable Graph Convolutional Attention Networks. In The Eleventh International Conference on Learning Representations. Kaiser, M.; Saha Roy, R.; and Weikum, G. 2021. Reinforcement learning from reformulations in conversational question answering over knowledge graphs. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 459–469. Keizer, S.; Guhe, M.; Cuay´ahuitl, H.; Efstathiou, I.; Engelbrecht, K.-P.; Dobre, M.; Lascarides, A.; Lemon, O.; et al. 2017. Evaluating persuasion strategies and deep reinforcement learning methods for negotiation dialogue agents. 480– 484. EACL. Kingma, D. P.; and Ba, J. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, 1–15. Kipf, T. N.; and Welling, M. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations. Li, R.; Cao, Y.; Zhu, Q.; Bi, G.; Fang, F.; Liu, Y.; and Li, Q. 2022. How does knowledge graph embedding extrapolate to unseen data: a semantic evidence view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 5781–5791. Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; and Zhu, X. 2015. Learning entity and relation embeddings for knowledge graph completion. In Twenty-ninth AAAI conference on artificial intelligence. Mahdisoltani, F.; Biega, J.; and Suchanek, F. M. 2013. Yago3: A knowledge base from multilingual wikipedias. In CIDR. Meng, Y.; Huang, J.; Wang, G.; Zhang, C.; Zhuang, H.; Kaplan, L.; and Han, J. 2019. Spherical text embedding. Advances in neural information processing systems, 32. Nathani, D.; Chauhan, J.; Sharma, C.; and Kaul, M. 2019. Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 4710–4723. Nguyen, T. D.; Nguyen, D. Q.; Phung, D.; et al. 2018. A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), 327–333. Nickel, M.; and Kiela, D. 2017. Poincar´e embeddings for learning hierarchical representations. Advances in neural information processing systems, 30. Nickel, M.; Tresp, V.; and Kriegel, H.-P. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on International Conference on Machine Learning, 809–816. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8973 Pan, Z.; and Wang, P. 2021. Hyperbolic hierarchy-aware knowledge graph embedding for link prediction. In Findings of the Association for Computational Linguistics: EMNLP 2021, 2941–2948. Schlichtkrull, M.; Kipf, T. N.; Bloem, P.; Van Den Berg, R.; Titov, I.; and Welling, M. 2018. Modeling relational data with graph convolutional networks. In European semantic web conference, 593–607. Springer. Shang, B.; Zhao, Y.; Liu, J.; Liu, Y.; and Wang, C. 2023a. A contrastive knowledge graph embedding model with hierarchical attention and dynamic completion. Neural Computing and Applications, 35(20): 15005–15018. Shang, B.; Zhao, Y.; Liu, Y.; and Wang, C. 2023b. Attentionbased exploitation and exploration strategy for multi-hop knowledge graph reasoning. Information Sciences, 653: 119787. Shang, B.; Zhao, Y.; Wang, D.; and Liu, J. 2023c. RelationAware Multi-Positive Contrastive Knowledge Graph Completion with Embedding Dimension Scaling. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 878– 888. Song, T.; Luo, J.; and Huang, L. 2021. Rot-pro: Modeling transitivity by projection in knowledge graph embedding. Advances in Neural Information Processing Systems, 34: 24695–24706. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1): 1929–1958. Sun, Z.; Deng, Z.-H.; Nie, J.-Y.; and Tang, J. 2019. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space. In International Conference on Learning Representations. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; and Wojna, Z. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2818–2826. Toutanova, K.; Chen, D.; Pantel, P.; Poon, H.; Choudhury, P.; and Gamon, M. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 conference on empirical methods in natural language processing, 1499–1509. Trouillon, T.; Welbl, J.; Riedel, S.; Gaussier, ´E.; and Bouchard, G. 2016. Complex embeddings for simple link prediction. In International conference on machine learning, 2071–2080. PMLR. Ungar, A. A. 2008. Analytic hyperbolic geometry and Albert Einstein’s special theory of relativity. World Scientific. Vashishth, S.; Sanyal, S.; Nitin, V.; Agrawal, N.; and Talukdar, P. 2020a. Interacte: Improving convolution-based knowledge graph embeddings by increasing feature interactions. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 3009–3016. Vashishth, S.; Sanyal, S.; Nitin, V.; and Talukdar, P. 2020b. Composition-based Multi-Relational Graph Convolutional Networks. In International Conference on Learning Representations. Veliˇckovi´c, P.; Cucurull, G.; Casanova, A.; Romero, A.; Li`o, P.; and Bengio, Y. 2018. Graph Attention Networks. In International Conference on Learning Representations. Wang, S.; Wei, X.; Nogueira dos Santos, C. N.; Wang, Z.; Nallapati, R.; Arnold, A.; Xiang, B.; Yu, P. S.; and Cruz, I. F. 2021a. Mixed-curvature multi-relational graph neural network for knowledge graph completion. In Proceedings of the Web Conference 2021, 1761–1771. Wang, X.; Huang, T.; Wang, D.; Yuan, Y.; Liu, Z.; He, X.; and Chua, T.-S. 2021b. Learning intents behind interactions with knowledge graph for recommendation. In Proceedings of the web conference 2021, 878–887. Wang, Y.-C.; Ge, X.; Wang, B.; and Kuo, C.-C. J. 2023. Greenkgc: A lightweight knowledge graph completion method. Wang, Z.; Zhang, J.; Feng, J.; and Chen, Z. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 28. Xiao, H.; Huang, M.; and Zhu, X. 2016a. From one point to a manifold: knowledge graph embedding for precise link prediction. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, 1315–1321. Xiao, H.; Huang, M.; and Zhu, X. 2016b. TransG: A Generative Model for Knowledge Graph Embedding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2316– 2325. Xiao, H.; Liu, X.; Song, Y.; Wong, G. Y.; and See, S. 2022. Complex Hyperbolic Knowledge Graph Embeddings with Fast Fourier Transform. arXiv preprint arXiv:2211.03635. Xiong, C.; Power, R.; and Callan, J. 2017. Explicit semantic ranking for academic search via knowledge graph embedding. In Proceedings of the 26th international conference on world wide web, 1271–1279. Yang, B.; Yih, S. W.-t.; He, X.; Gao, J.; and Deng, L. 2015. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In Proceedings of the International Conference on Learning Representations (ICLR) 2015. Yang, J.; Ying, X.; Shi, Y.; Tong, X.; Wang, R.; Chen, T.; and Xing, B. 2022. Knowledge graph embedding by adaptive limit scoring loss using dynamic weighting strategy. In Findings of the Association for Computational Linguistics: ACL 2022, 1153–1163. Zhang, Z.; Cai, J.; Zhang, Y.; and Wang, J. 2020. Learning hierarchy-aware knowledge graph embeddings for link prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 3065–3072. Zhang, Z.; Wang, J.; Ye, J.; and Wu, F. 2022. Rethinking graph convolutional networks in knowledge graph completion. In Proceedings of the ACM Web Conference 2022, 798– 807. The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24) 8974
ResDiff: Combining CNN and Diffusion Model for Image Super-Resolution Shuyao Shang 1, Zhengyang Shan 1, Guangxing Liu 1, LunQian Wang 2, XingHua Wang 2, Zekai Zhang 3, Jinglin Zhang∗1 1 Shandong University 2 Linyi University 3 Qilu University of Technology [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Adapting the Diffusion Probabilistic Model (DPM) for direct image super-resolution is wasteful, given that a simple Convolutional Neural Network (CNN) can recover the main low-frequency content. Therefore, we present ResDiff, a novel Diffusion Probabilistic Model based on Residual structure for Single Image Super-Resolution (SISR). ResDiff utilizes a combination of a CNN, which restores primary lowfrequency components, and a DPM, which predicts the residual between the ground-truth image and the CNN-predicted image. In contrast to the common diffusion-based methods that directly use LR space to guide the noise towards HR space, ResDiff utilizes the CNN’s initial prediction to direct the noise towards the residual space between HR space and CNN-predicted space, which not only accelerates the generation process but also acquires superior sample quality. Additionally, a frequency-domain-based loss function for CNN is introduced to facilitate its restoration, and a frequencydomain guided diffusion is designed for DPM on behalf of predicting high-frequency details. The extensive experiments on multiple benchmark datasets demonstrate that ResDiff outperforms previous diffusion-based methods in terms of shorter model convergence time, superior generation quality, and more diverse samples. Introduction Single Image Super-Resolution (SISR) is a difficult task in computer vision, which aims to recover high-resolution (HR) images from their low-resolution (LR) counterparts. During image degradation, the high-frequency components are lost, and multiple HR images could produce the same LR image, making this task ill-posed. After Generative Adversarial Networks(GAN) (Goodfellow et al. 2014) was proposed, the main generative-model-based SISR methods are GAN-driven. However, GAN-based methods are hard to train and prone to fall into pattern collapse, causing a lack of diversity. Therefore, a superior generative model is required in the SISR task. Diffusion Probabilistic Model (DPM) has already demonstrated impressive capabilities in image synthesis (Saharia et al. 2022a,b; Rombach et al. 2022; Ramesh et al. 2022) and image restoration (Choi et al. 2021; Kawar et al. 2022; Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Overall struture of proposed ResDiff. Wang, Yu, and Zhang 2023). It has also shown promising prospects in SISR tasks (Saharia et al. 2022c; Li et al. 2022). However, current Diffusion-based methods for SISR, such as SR3(Saharia et al. 2022c), generate HR images directly from random noise, and LR images are only used as conditional input to the diffusion process (Fig.2 (a)). Consequently, the diffusion model needs to recover both the high and low-frequency contents of the image, which not only prolongs the convergence time but also inhibits the model from focusing on the fine-grained information, potentially missing texture details. Li et al.(Li et al. 
Li et al. (Li et al. 2022) took this into account but employed only bilinear interpolation for the initial prediction, which, compared to a CNN, fails to restore sufficient low-frequency content and cannot generate any high-frequency components in the initial prediction (Fig. 2 (b)). Similarly, Whang et al. (Whang et al. 2022) designed a random sampler and a deterministic predictor to tackle this problem. However, there is no information interaction between the random sampler and the deterministic predictor, so the latter does not function to its full potential (Fig. 2 (c)).

Figure 2: Comparison of different generation processes. In contrast to (a) (Saharia et al. 2022c), (b) (Li et al. 2022), and (c) (Whang et al. 2022), where only the LR space is used to guide the generation, our ResDiff (d) makes full use of the CNN prediction space and the high-frequency space to guide a faster and better generation.

Inspired by the above (Li et al. 2022; Whang et al. 2022), we propose ResDiff, a residual-structure-based diffusion model. Unlike (Li et al. 2022), ResDiff utilizes a CNN for the initial prediction. And in contrast to (Whang et al. 2022), the CNN in ResDiff is pre-trained and is thus capable of restoring the major low-frequency components and partial high-frequency components. The initial prediction of the CNN is adopted to guide the random noise towards the Res Space, i.e., the residual space between the ground-truth image and the CNN-predicted image. Compared to methods that only use the LR space as guidance, ResDiff can leverage additional information and generate richer high-frequency details (Fig. 2 (d)). Fig. 1 presents the structure of ResDiff.

The CNN used in ResDiff contains a limited number of parameters, so two additional loss functions are introduced to strengthen its recovery capabilities. To further enhance the generation quality, we design a Frequency-Domain guided Diffusion (FD-guided Diffusion), shown in Fig. 2 (d), where the high-frequency space also guides the generation process. FD-guided Diffusion consists of two novel modules. The first is a Frequency-Domain Information Splitter (FD Info Splitter) that separates high-frequency and low-frequency contents and performs adaptive denoising on the noisy image. The second is a high-frequency guided cross-attention module (HF-guided CA) that helps the diffusion model predict high-frequency details. The pseudo-code for sampling with ResDiff is given in Alg. 1.

Experiments on two face datasets (FFHQ and CelebA) and two general datasets (DIV2K and Urban100) demonstrate that ResDiff not only accelerates the model's convergence but also generates more fine-grained images. To verify the generalization of our method, more experiments on different types of datasets (Bai et al. 2022) are given in the supplementary material. Our contributions can be summarized as follows:
• Shorter Convergence Time: We have designed ResDiff, a residual-structure-based diffusion model for the SISR task that leads to an apparent improvement in convergence speed compared to other diffusion-based methods.
• Superior Generation Quality: We have introduced FD-guided Diffusion to enhance the diffusion model's concentration on high-frequency details, resulting in superior generation quality.
• More Diverse Output: Experiments have demonstrated that ResDiff achieves a lower perceptual-based evaluation value, indicating that our method is capable of producing diverse samples.
Related Works
Generative-model-based methods have achieved great success in SISR and can be classified into GAN-based (Ledig et al. 2017; Wang et al. 2018b; Mirchandani and Chordiya 2021; Wang et al. 2018a; Zhang et al. 2019), flow-based (Lugmayr et al. 2020; Liang et al. 2021), and diffusion-based (Saharia et al. 2022c; Li et al. 2022) methods.

GAN-based methods Ledig et al. (Ledig et al. 2017) proposed SRGAN, which employs a perceptual loss function to generate high-quality images. Similarly, Wang et al. (Wang et al. 2018b) introduced ESRGAN, which adopts an enhanced super-resolution GAN and a superior loss function to improve perceptual quality. GAN-based methods combine content losses with adversarial losses, allowing them to generate sharp edges and richer textures. However, they are prone to mode collapse, which decreases the diversity of the generated SR samples. Moreover, training GANs is challenging and may lead to unexpected artifacts in the generated image.

Flow-based methods Lugmayr et al. (Lugmayr et al. 2020) proposed SRFlow, a flow-based method that learns the conditional distribution of high-resolution images given their low-resolution counterparts, enabling high-quality image super-resolution with natural and diverse outputs. Flow-based methods map HR images to flow-space latents using an invertible encoder and connect the encoder and decoder with an invertible flow module, which avoids training instability but requires higher training costs and provides lower perceptual quality.

Diffusion-based methods Li et al. (Li et al. 2022) introduced SRDiff, the first diffusion-based model for SISR, demonstrating that using the diffusion model for SISR tasks is feasible and promising. Saharia et al. proposed SR3 (Saharia et al. 2022c), which adapts Denoising Diffusion Probabilistic Models (DDPM) to perform SISR tasks, yielding a competitive perceptual-based evaluation value. Diffusion-based methods utilize a diffusion process that simulates noise reduction, resulting in sharper and more detailed images. However, a high computational cost is incurred due to the multiple forward and backward passes through the entire network during training. Our proposed ResDiff, though it does not improve the training speed of a single iteration, accelerates convergence, which alleviates this issue from another perspective.

Algorithm 1: ResDiff Inference
Input: low-resolution image x_LR and pre-trained CNN
Parameter: μ_θ and Σ_θ, the same as in DDPM
Output: high-resolution image
1: x_cnn = CNN(x_LR)
2: x_T ∼ N(0, I)
3: for t = T : 1 do
4:   ϵ ∼ N(0, I) if t > 1, else ϵ = 0
5:   x_{t−1} = μ_θ(x_t, t, x_cnn) + √(Σ_θ(x_t, t, x_cnn)) · ϵ
6: end for
7: return x_0 + x_cnn
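Concretely, Alg. 1 is a standard DDPM ancestral-sampling loop run in the residual space, with the CNN prediction added back at the end. The following is a minimal PyTorch sketch of it; `cnn`, `mu_theta`, and `sigma_theta` are stand-ins for the trained networks, and their exact interfaces are an assumption rather than part of the paper.

```python
import torch

@torch.no_grad()
def resdiff_sample(x_lr, cnn, mu_theta, sigma_theta, T):
    """Sketch of Alg. 1: denoise toward the residual space, then add
    the CNN's low-frequency prediction back at the end."""
    x_cnn = cnn(x_lr)                    # step 1: initial prediction
    x_t = torch.randn_like(x_cnn)        # step 2: x_T ~ N(0, I)
    for t in range(T, 0, -1):            # steps 3-6: reverse diffusion
        eps = torch.randn_like(x_t) if t > 1 else torch.zeros_like(x_t)
        x_t = mu_theta(x_t, t, x_cnn) + torch.sqrt(sigma_theta(x_t, t, x_cnn)) * eps
    return x_t + x_cnn                   # step 7: residual + CNN prediction
```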
The Proposed ResDiff
Pre-trained CNN
To reduce additional training costs, we utilize a CNN with a reduced number of parameters to generate an initial prediction. This CNN aims to recover the primary low-frequency components and partial high-frequency components, thereby facilitating the diffusion model's restoration of the more intricate high-frequency details. To strengthen its generative capability, we take inspiration from (Deng et al. 2019; Dou, Tu, and Peng 2020) and introduce two additional loss functions (Fig. 3), namely L_FFT based on the Fast Fourier Transform (FFT) (Cooley and Tukey 1965) and L_DWT based on the Discrete Wavelet Transform (DWT) (Mallat and Hwang 1992), on top of the original loss function.

L_FFT is defined as the mean square error (MSE) between the magnitudes of the FFT coefficients of the two images:

L_FFT = E[ ‖M − M̂‖² ]  (1)

where M and M̂ denote the frequency-domain images obtained by applying the FFT to the ground-truth image and the predicted image, respectively.

Figure 3: Depiction of the three loss functions utilized in CNN pre-training. A spatial-domain loss (GT Loss) and two frequency-domain losses (FFT Loss and DWT Loss) are computed.

To enable the CNN to further recover partial high-frequency contents on top of the primary low-frequency contents, we design L_DWT. Performing a DWT on an image decomposes it into four sub-bands: low-low (LL), low-high (LH), high-low (HL), and high-high (HH). The LL sub-band contains the low-frequency content of the image, while the remaining three contain the high-frequency components of the image in the horizontal, vertical, and diagonal directions, respectively. The LL sub-band can be decomposed further in the same way to obtain multi-level high-frequency components. For L_DWT, we extract the wavelet coefficients of the high-frequency sub-bands H, V, and D, which refer to the high-frequency components in the horizontal, vertical, and diagonal directions, respectively. For both the ground-truth image and the predicted image, L_DWT computes the MSE between each pair of high-frequency sub-bands:

L_DWT = Σ_{i=1}^{L} E[ ‖Ĥ_i − H_i‖² + ‖V̂_i − V_i‖² + ‖D̂_i − D_i‖² ]  (2)

where H_i, V_i, D_i are the sub-bands of the ground-truth image at the i-th downsampling level, Ĥ_i, V̂_i, D̂_i are the sub-bands of the predicted image at the i-th downsampling level, and L is the total number of downsampling levels.

We also add a spatial-domain loss L_GT: let the ground-truth image be Y and the predicted image be Ŷ; L_GT is the MSE between them:

L_GT = E[ ‖Y − Ŷ‖² ]  (3)

The total loss function of the pre-trained CNN is thus:

L_CNN = L_GT + α L_FFT + β L_DWT  (4)

where α and β are adjustable hyperparameters. Furthermore, we design a simple CNN using residual connections (He et al. 2016) and pixel-shuffle (Shi et al. 2016), named SimpleSR, for the initial prediction (the specific structure is given in the supplementary material). Ablation studies on the proposed loss functions and SimpleSR are given in the supplementary material.
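The three terms of Eq. (4) can be sketched in PyTorch as below. The paper does not specify which wavelet is used, so the Haar DWT here (and the even-sized inputs it assumes, along with the example α, β values) is our illustrative assumption.

```python
import torch
import torch.nn.functional as F

def haar_dwt(x):
    """One-level Haar DWT of x (B, C, H, W) with H, W even; sign
    conventions for the detail sub-bands vary across libraries."""
    a = x[..., 0::2, 0::2]; b = x[..., 0::2, 1::2]
    c = x[..., 1::2, 0::2]; d = x[..., 1::2, 1::2]
    ll = (a + b + c + d) / 2              # low-frequency content
    lh = (-a - b + c + d) / 2             # horizontal details (H)
    hl = (-a + b - c + d) / 2             # vertical details (V)
    hh = (a - b - c + d) / 2              # diagonal details (D)
    return ll, lh, hl, hh

def cnn_pretrain_loss(pred, gt, alpha=0.1, beta=0.1, levels=2):
    l_gt = F.mse_loss(pred, gt)                               # Eq. (3)
    l_fft = F.mse_loss(torch.fft.fft2(pred).abs(),
                       torch.fft.fft2(gt).abs())              # Eq. (1)
    l_dwt, p, g = 0.0, pred, gt
    for _ in range(levels):                                   # Eq. (2)
        p_ll, *p_hf = haar_dwt(p)
        g_ll, *g_hf = haar_dwt(g)
        l_dwt = l_dwt + sum(F.mse_loss(ph, gh) for ph, gh in zip(p_hf, g_hf))
        p, g = p_ll, g_ll                                     # recurse on LL
    return l_gt + alpha * l_fft + beta * l_dwt                # Eq. (4)
```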
FD-guided Diffusion
After obtaining the image I predicted by the pre-trained CNN, we adapt a diffusion model to predict the residual between I and the ground truth, i.e., the high-frequency components of the ground-truth image. To this end, we propose a Frequency-Domain guided diffusion (FD-guided diffusion), as shown in Fig. 4. In contrast to SR3 (Saharia et al. 2022c), which simply concatenates the bilinearly interpolated image with the noisy image x_t at step t, we propose a Frequency-Domain Information Splitter module (FD Info Splitter): I and x_t are first fed into the FD Info Splitter, whose output is then fed into the U-Net (Ronneberger, Fischer, and Brox 2015). We follow Imagen (Saharia et al. 2022a), where a self-attention layer is added. In addition, a high-frequency guided cross-attention mechanism (HF-guided CA) is designed, which utilizes the high-frequency features obtained from the DWT at each layer to generate more fine-grained detail features.

FD Info Splitter
In the CNN's initial prediction, low-frequency components are mixed with high-frequency contents. As the diffusion model only needs to recover high-frequency details, the input low- and high-frequency features play different roles: the former mainly assist the generation of high-frequency components globally, while the latter must provide guidance for fine-grained details in each region. Therefore, we introduce the Frequency-Domain Information Splitter (FD Info Splitter), which explicitly separates high-frequency and low-frequency information for better restoration. Additionally, it effectively mitigates noise in noisy images at large time steps, resulting in better noise prediction (the detailed structure of the FD Info Splitter is shown in Fig. 4).

For the CNN-predicted image x_cnn ∈ R^{H×W×C}, we first perform a 2D FFT along the spatial dimensions to obtain the frequency-domain feature map M:

M = FFT(x_cnn) ∈ C^{H×W×C}  (5)

where FFT(·) denotes the 2D FFT. We adapt the methods proposed by (Hu, Shen, and Sun 2018; He et al. 2016) and merge them into the ResSE module (Residual Squeeze-and-Excitation module), the details of which are shown in the supplementary material. To implement adaptive high-pass filtering, a Gaussian high-pass filter is utilized whose standard deviation is obtained from M as follows:

σ = min(|ResSE(M)| + l/2, l)  (6)

where l = min(H, W). The operation applied to ResSE(M) is for numerical stability. After obtaining σ, the adaptive Gaussian high-pass filter is given directly as:

H(u, v) = 1 − e^{−D²(u,v)/(2σ²)}  (7)

where D(u, v) is the distance from the point (u, v) in the frequency domain to the center point. The Gaussian high-pass filter is then multiplied element-wise with M to obtain the adaptively high-pass-filtered feature map M′:

M′ = A_hp ⊗ M  (8)

Finally, we map M′ back to the spatial domain via the inverse FFT to obtain a feature map x_HF rich in high-frequency components:

x_HF = FFT⁻¹(M′) ∈ R^{H×W×C}  (9)

where FFT⁻¹(·) denotes the inverse 2D FFT. Meanwhile, we feed M′ into a ResSE module to acquire the attention weights learned in the frequency domain and then perform element-wise multiplication with x_cnn to obtain a feature map x_LF containing abundant low-frequency information:

x_LF = ResSE(M) ⊗ x_cnn  (10)

These two feature maps, dominated by high-frequency and low-frequency components respectively, are concatenated along the channel dimension. By explicitly separating the input's mixed high-frequency and low-frequency components, the network can utilize both differently and more efficiently. For a noisy image x_t at a large time step t, the noise components can be so large that they hinder network inference. Hence, adaptive denoising is applied to x_t to obtain the partially denoised image x′_t:

x′_t = ResSE(T) ⊗ x_t  (11)

The three feature maps x_HF, x_LF, and x′_t, along with x_cnn and x_t, are all concatenated along the channel dimension and fed into the U-Net.
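A minimal sketch of the frequency-splitting branch of Eqs. (5)-(10) follows. The ResSE module is defined only in the paper's supplementary material, so `res_se` is a stand-in assumed to map a real feature map to a scalar gate; using an fft-shifted (centred) spectrum is likewise our assumption, suggested by the centre-distance D(u, v) in Eq. (7).

```python
import torch

def gaussian_high_pass(h, w, sigma, device):
    """H(u, v) = 1 - exp(-D^2(u, v) / (2 sigma^2)), Eq. (7), where D(u, v)
    is the distance to the centre of the (shifted) frequency plane."""
    u = torch.arange(h, device=device) - h // 2
    v = torch.arange(w, device=device) - w // 2
    d2 = (u[:, None] ** 2 + v[None, :] ** 2).float()
    return 1.0 - torch.exp(-d2 / (2 * sigma ** 2))

def fd_info_splitter(x_cnn, res_se):
    """x_cnn: (B, C, H, W) CNN prediction; res_se: stand-in ResSE gate
    assumed to return a scalar in [0, 1]."""
    h, w = x_cnn.shape[-2:]
    l = min(h, w)
    m = torch.fft.fftshift(torch.fft.fft2(x_cnn), dim=(-2, -1))           # Eq. (5)
    sigma = torch.clamp(res_se(m.abs()).abs() + l / 2, max=l)             # Eq. (6)
    a_hp = gaussian_high_pass(h, w, sigma, x_cnn.device)
    m_hf = a_hp * m                                                       # Eq. (8)
    x_hf = torch.fft.ifft2(torch.fft.ifftshift(m_hf, dim=(-2, -1))).real  # Eq. (9)
    x_lf = res_se(m.abs()) * x_cnn                                        # Eq. (10)
    return torch.cat([x_hf, x_lf], dim=1)   # high- and low-frequency maps
```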
Figure 4: An overview of the model architecture in the proposed FD-guided diffusion. The pre-trained CNN prediction and the noisy image x_t from step t are fed into the FD Info Splitter, and its output is then passed on to a U-Net equipped with HF-guided cross-attention.

HF-guided CA
In the original U-Net architecture, the encoder features are directly concatenated with the features obtained by the decoder (Ronneberger, Fischer, and Brox 2015). This fusion helps the network integrate higher- and lower-layer features effectively but lacks the ability to extract high-frequency features. To tackle this issue, we introduce a High-Frequency feature guided Cross-Attention mechanism (HF-guided CA) to recover fine-grained high-frequency details. The flow of the HF-guided CA is illustrated in Fig. 4. We utilize the pre-trained CNN prediction by extracting the Ĥ_i, V̂_i, and D̂_i coefficients at the i-th level of the DWT. By summing these extracted coefficients and applying a linear projection, we obtain the feature map Q with aggregated high-frequency information:

Q = Conv_{1×1}(Ĥ_i + V̂_i + D̂_i)  (12)

Then, different linear projections of the input feature map M are constructed to obtain K and V in the cross-attention mechanism (Hou et al. 2019):

K = Conv_{1×1}(M)  (13)
V = Conv_{1×1}(M)  (14)

The output feature map M′ can then be obtained as:

M′ = Softmax(QKᵀ / √d_k) V  (15)

where d_k is the number of columns of matrix Q.
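The sketch below renders Eqs. (12)-(15) as a PyTorch module. Treating each spatial location as a token and scaling by the channel dimension are our reading of the equations; the names (`HFGuidedCA`, `to_q`, ...) are hypothetical, and the detail sub-bands are assumed to match the decoder features spatially.

```python
import torch
import torch.nn as nn

class HFGuidedCA(nn.Module):
    """Cross-attention where queries come from the summed DWT detail
    sub-bands of the CNN prediction and keys/values from the features M."""
    def __init__(self, channels):
        super().__init__()
        self.to_q = nn.Conv2d(channels, channels, kernel_size=1)  # Eq. (12)
        self.to_k = nn.Conv2d(channels, channels, kernel_size=1)  # Eq. (13)
        self.to_v = nn.Conv2d(channels, channels, kernel_size=1)  # Eq. (14)

    def forward(self, hf_sum, m):
        # hf_sum = H_i + V_i + D_i; m = input feature map, both (B, C, H, W)
        b, c, h, w = m.shape
        q = self.to_q(hf_sum).flatten(2).transpose(1, 2)   # (B, HW, C)
        k = self.to_k(m).flatten(2).transpose(1, 2)
        v = self.to_v(m).flatten(2).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)
        out = attn @ v                                      # Eq. (15)
        return out.transpose(1, 2).reshape(b, c, h, w)
```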
Experiments
Performance
To evaluate the performance of our ResDiff model, we compare it with previous diffusion-based and GAN-based methods using four datasets: two face datasets (FFHQ (Karras, Laine, and Aila 2019) and CelebA (Liu et al. 2015)) and two general datasets (DIV2K (Agustsson and Timofte 2017) and Urban100 (Huang, Singh, and Ahuja 2015)). The selected evaluation metrics include two distortion-based metrics (PSNR and SSIM (Wang et al. 2004)) and a perceptual-based metric (FID (Heusel et al. 2017)). Our ResDiff is trained solely on the provided training data to guarantee a fair comparison. The supplementary material contains detailed information about the training process, hyperparameters, and other relevant details. Since several methods did not report their performance on some of the datasets we use, their values are marked as "–" in the tables. More experiments with different types of datasets are presented in the supplementary material.

FFHQ and CelebA Results The quantitative results at 32×32 → 128×128 (4×) and 256×256 → 1024×1024 (4×) on FFHQ (Karras, Laine, and Aila 2019), and at 20×20 → 160×160 (8×) and 64×64 → 256×256 (4×) on CelebA (Liu et al. 2015), are shown in Tables 1 and 2. Our ResDiff demonstrates superior performance compared to all diffusion-based methods, as evidenced by the metrics presented in the tables, and achieves about a 50% reduction in the perceptual metric (FID) compared to the GAN-based models.

| Method | 32→128: PSNR↑ | SSIM↑ | FID↓ | 256→1024: PSNR↑ | SSIM↑ | FID↓ |
|---|---|---|---|---|---|---|
| Ground Truth | ∞ | 1.000 | 0.00 | ∞ | 1.000 | 0.00 |
| SRGAN | 17.57 | 0.688 | 156.07 | 21.49 | 0.515 | 60.67 |
| ESRGAN | 15.43 | 0.267 | 166.36 | 19.84 | 0.353 | 72.73 |
| BRGM | 24.16 | 0.70 | – | – | – | – |
| PULSE | 15.74 | 0.37 | – | – | – | – |
| SRDiff | 26.07 | 0.794 | 72.36 | 23.01 | 0.656 | 56.17 |
| SR3 | 25.37 | 0.778 | 75.29 | 22.78 | 0.647 | 60.12 |
| ResDiff | 26.73 | 0.818 | 70.54 | 23.15 | 0.668 | 53.23 |

Table 1: Quantitative comparison on the FFHQ (Karras, Laine, and Aila 2019) dataset, where the bolded values represent the best value for each evaluation metric.

| Method | 20→160: PSNR↑ | SSIM↑ | FID↓ | 64→256: PSNR↑ | SSIM↑ | FID↓ |
|---|---|---|---|---|---|---|
| Ground Truth | ∞ | 1.000 | 0.00 | ∞ | 1.000 | 0.00 |
| ESRGAN | 23.24 | 0.66 | – | – | – | – |
| PULSE | 22.74 | 0.623 | 40.33 | – | – | – |
| SRFlow | 25.28 | 0.72 | – | – | – | – |
| SRDiff | 25.32 | 0.73 | 80.98 | 26.84 | 0.792 | 39.16 |
| SR3 | 24.89 | 0.728 | 83.11 | 26.04 | 0.779 | 43.27 |
| ResDiff | 25.37 | 0.734 | 78.52 | 27.16 | 0.797 | 38.47 |

Table 2: Quantitative comparison on the CelebA (Liu et al. 2015) dataset, where the bolded values represent the best value for each evaluation metric.

DIV2K and Urban100 Results The quantitative results at 40×40 → 160×160 (4×) on DIV2K (Agustsson and Timofte 2017) and 40×40 → 160×160 (4×) on Urban100 (Huang, Singh, and Ahuja 2015) are shown in Table 3. Note that ResDiff's distortion-based metric values significantly outperform other diffusion-based methods on these general datasets, whose restoration is more difficult. Fig. 5 presents partial results of ResDiff and other diffusion-based methods.

| Method | DIV2K 4×: PSNR↑ | SSIM↑ | FID↓ | Urban100 4×: PSNR↑ | SSIM↑ | FID↓ |
|---|---|---|---|---|---|---|
| Ground Truth | ∞ | 1.000 | 0.00 | ∞ | 1.000 | 0.00 |
| SRDiff | 26.87 | 0.69 | 110.32 | 26.49 | 0.79 | 51.37 |
| SR3 | 26.17 | 0.65 | 111.45 | 25.18 | 0.62 | 61.14 |
| ResDiff | 27.94 | 0.72 | 106.71 | 27.43 | 0.82 | 42.35 |

Table 3: Quantitative comparison on the DIV2K (Agustsson and Timofte 2017) and Urban100 (Huang, Singh, and Ahuja 2015) datasets, where the bolded values represent the best value for each evaluation metric.

Figure 5: DIV2K 4× results, comparing Input, Bicubic, SR3, SRDiff, ResDiff (Ours), and Reference. Note that ResDiff provides richer details and more natural textures than other diffusion-based methods for the recovery of small objects (e.g., the clock in the first column) and difficult scenes (e.g., the bridge structure in the second column, the building in the fourth column).

Ablation Study
In this section, we perform an ablation study on FFHQ (4×) to investigate the effectiveness of each component in ResDiff, including the influence of different CNNs and the usefulness of the proposed FD Info Splitter and HF-guided CA. The results are shown in Table 4. Note that utilizing the residual structure, even with a simple bilinear interpolation for the initial prediction, significantly improves the performance. In terms of CNN selection, our proposed SimpleSR also outperforms SRCNN (Dong et al. 2014). Moreover, the addition of the FD Info Splitter and the HF-guided CA both improve the results. More detailed ablation studies are given in the supplementary material.

Conclusion and Future Work
In this paper, we propose ResDiff, a residual-structure-based diffusion model. In contrast to previous works, which only use LR images to generate HR images, ResDiff utilizes the feature-richer CNN prediction for guidance. Meanwhile, we introduce frequency-domain-based loss functions for the CNN and design a frequency-domain guided diffusion to facilitate the diffusion model in generating high-frequency information. Comprehensive experiments on different datasets demonstrate that the proposed ResDiff accelerates training convergence and provides superior image generation quality. Our ResDiff can also be adapted for other image restoration tasks, such as blind image super-resolution, deblurring, and inpainting. Although ResDiff accelerates convergence, operations such as the DWT are still time-consuming and call for optimization in future work.
In addition, as can be seen from the supplementary material, the colors show a large discrepancy when the model is under-trained, which may be caused by a lack of color features in the guided high-frequency information. Utilizing a global color feature may address this issue in future work. Moreover, our ResDiff does not outperform current state-of-the-art (SOTA) SISR methods (Chen et al. 2022; Zhang et al. 2022). This is attributed to the disparity in model parameters. Due to equipment limitations, adopting a larger U-Net model in ResDiff is left to future work. In addition, if a pre-trained SOTA model is applied to replace the CNN in ResDiff, it may be possible to establish a new SOTA. Finally, ResDiff may incorporate more DPM techniques (Rombach et al. 2022; Dhariwal and Nichol 2021; Ho and Salimans 2022) and superior network architectures (Peebles and Xie 2022; Chen et al. 2021) in the future.

| CNN | FD Info Splitter | HF-guided CA | PSNR↑ | SSIM↑ | FID↓ |
|---|---|---|---|---|---|
| SimpleSR | ✓ | ✓ | 26.73 | 0.818 | 70.54 |
| N/A | ✓ | ✓ | 25.49 | 0.781 | 74.18 |
| Bilinear | ✓ | ✓ | 25.99 | 0.792 | 74.29 |
| SRCNN | ✓ | ✓ | 26.14 | 0.809 | 72.17 |
| SimpleSR (only L_GT) | ✓ | ✓ | 26.47 | 0.812 | 71.58 |
| SimpleSR | | | 25.41 | 0.788 | 77.21 |
| SimpleSR | ✓ | | 26.09 | 0.796 | 72.42 |
| SimpleSR | | ✓ | 25.97 | 0.793 | 73.17 |

Table 4: Ablation study over different model components on the FFHQ (Karras, Laine, and Aila 2019) test sets (the components we use are placed in the first row). N/A denotes that no residual structure is used.

Acknowledgments
We gratefully thank the creators of the datasets and the server support from Shandong University and Linyi University. This work was supported in part by the National Key Research and Development Program of China under Grant 2022YFB4500602, the Key Research and Development Program of Jiangsu Province under Grant BE2021093, Distinguished Young Scholar of Shandong Province under Grant ZR2023JQ025, Taishan Scholars Program under Grant tsqn202211290, and Major Basic Research Projects of Shandong Province under Grant ZR2022ZD32.

References
Agustsson, E.; and Timofte, R. 2017. NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2017, Honolulu, HI, USA, July 21-26, 2017, 1122–1131. IEEE Computer Society.
Bai, C.; Zhang, M.; Zhang, J.; Zheng, J.; and Chen, S. 2022. LSCIDMR: Large-Scale Satellite Cloud Image Database for Meteorological Research. IEEE Transactions on Cybernetics, 52(11): 12538–12550.
Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A. L.; and Zhou, Y. 2021. TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. CoRR, abs/2102.04306.
Chen, X.; Wang, X.; Zhou, J.; and Dong, C. 2022. Activating More Pixels in Image Super-Resolution Transformer. CoRR, abs/2205.04437.
Choi, J.; Kim, S.; Jeong, Y.; Gwon, Y.; and Yoon, S. 2021. ILVR: Conditioning Method for Denoising Diffusion Probabilistic Models. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, 14347–14356. IEEE.
Cooley, J. W.; and Tukey, J. W. 1965. An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation, 19: 297–301.
Deng, X.; Yang, R.; Xu, M.; and Dragotti, P. L. 2019. Wavelet Domain Style Transfer for an Effective Perception-Distortion Tradeoff in Single Image Super-Resolution. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, 3076–3085. IEEE.
Dhariwal, P.; and Nichol, A. Q. 2021. Diffusion Models Beat GANs on Image Synthesis. In Ranzato, M.; Beygelzimer, A.; Dauphin, Y. N.; Liang, P.; and Vaughan, J. W., eds., Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, 8780–8794.
Dong, C.; Loy, C. C.; He, K.; and Tang, X. 2014. Learning a Deep Convolutional Network for Image Super-Resolution. In Fleet, D. J.; Pajdla, T.; Schiele, B.; and Tuytelaars, T., eds., Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part IV, volume 8692 of Lecture Notes in Computer Science, 184–199. Springer.
Dou, J.; Tu, Z.; and Peng, X. 2020. Single Image Super-resolution Reconstruction with Wavelet based Deep Residual Learning. In 2020 Chinese Control And Decision Conference (CCDC), 4270–4275.
Goodfellow, I. J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A. C.; and Bengio, Y. 2014. Generative Adversarial Nets. In NIPS.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, 770–778. IEEE Computer Society.
Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; and Hochreiter, S. 2017. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In Guyon, I.; von Luxburg, U.; Bengio, S.; Wallach, H. M.; Fergus, R.; Vishwanathan, S. V. N.; and Garnett, R., eds., Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, 6626–6637.
Ho, J.; and Salimans, T. 2022. Classifier-Free Diffusion Guidance. CoRR, abs/2207.12598.
Hou, R.; Chang, H.; Ma, B.; Shan, S.; and Chen, X. 2019. Cross Attention Network for Few-shot Classification. In Wallach, H. M.; Larochelle, H.; Beygelzimer, A.; d'Alché-Buc, F.; Fox, E. B.; and Garnett, R., eds., Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, 4005–4016.
Hu, J.; Shen, L.; and Sun, G. 2018. Squeeze-and-Excitation Networks. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, 7132–7141. Computer Vision Foundation / IEEE Computer Society.
Huang, J.; Singh, A.; and Ahuja, N. 2015. Single image super-resolution from transformed self-exemplars. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, 5197–5206. IEEE Computer Society.
Karras, T.; Laine, S.; and Aila, T. 2019. A Style-Based Generator Architecture for Generative Adversarial Networks. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, 4401–4410. Computer Vision Foundation / IEEE.
Kawar, B.; Elad, M.; Ermon, S.; and Song, J. 2022. Denoising Diffusion Restoration Models. In ICLR Workshop on Deep Generative Models for Highly Structured Data (ICLRW).
Ledig, C.; Theis, L.; Huszar, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A. P.; Tejani, A.; Totz, J.; Wang, Z.; and Shi, W. 2017. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, 105–114. IEEE Computer Society.
Li, H.; Yang, Y.; Chang, M.; Chen, S.; Feng, H.; Xu, Z.; Li, Q.; and Chen, Y. 2022. SRDiff: Single image super-resolution with diffusion probabilistic models. Neurocomputing, 479: 47–59.
Liang, J.; Lugmayr, A.; Zhang, K.; Danelljan, M.; Gool, L. V.; and Timofte, R. 2021. Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, 4056–4065. IEEE.
Liu, Z.; Luo, P.; Wang, X.; and Tang, X. 2015. Deep Learning Face Attributes in the Wild. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, 3730–3738. IEEE Computer Society.
Lugmayr, A.; Danelljan, M.; Gool, L. V.; and Timofte, R. 2020. SRFlow: Learning the Super-Resolution Space with Normalizing Flow. In Vedaldi, A.; Bischof, H.; Brox, T.; and Frahm, J., eds., Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part V, volume 12350 of Lecture Notes in Computer Science, 715–732. Springer.
Mallat, S.; and Hwang, W. 1992. Singularity detection and processing with wavelets. IEEE Transactions on Information Theory, 38(2): 617–643.
Mirchandani, K.; and Chordiya, K. 2021. DPSRGAN: Dilation Patch Super-Resolution Generative Adversarial Networks. In 2021 6th International Conference for Convergence in Technology (I2CT), 1–7.
Peebles, W.; and Xie, S. 2022. Scalable Diffusion Models with Transformers. CoRR, abs/2212.09748.
Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022. Hierarchical Text-Conditional Image Generation with CLIP Latents. ArXiv, abs/2204.06125.
Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022. High-Resolution Image Synthesis with Latent Diffusion Models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, 10674–10685. IEEE.
Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Navab, N.; Hornegger, J.; III, W. M. W.; and Frangi, A. F., eds., Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015 - 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III, volume 9351 of Lecture Notes in Computer Science, 234–241. Springer.
Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E.; Ghasemipour, S. K. S.; Ayan, B. K.; Mahdavi, S. S.; Lopes, R. G.; Salimans, T.; Ho, J.; Fleet, D. J.; and Norouzi, M. 2022a. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. CoRR, abs/2205.11487.
Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E. L.; Ghasemipour, S. K. S.; Ayan, B. K.; Mahdavi, S. S.; Lopes, R. G.; Salimans, T.; Ho, J.; Fleet, D. J.; and Norouzi, M. 2022b. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. ArXiv, abs/2205.11487.
Saharia, C.; Ho, J.; Chan, W.; Salimans, T.; Fleet, D. J.; and Norouzi, M. 2022c. Image Super-Resolution Via Iterative Refinement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–14.
Shi, W.; Caballero, J.; Huszar, F.; Totz, J.; Aitken, A. P.; Bishop, R.; Rueckert, D.; and Wang, Z. 2016. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, 1874–1883. IEEE Computer Society.
Wang, X.; Yu, K.; Dong, C.; and Loy, C. C. 2018a. Recovering Realistic Texture in Image Super-Resolution by Deep Spatial Feature Transform. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, 606–615. Computer Vision Foundation / IEEE Computer Society.
Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; and Loy, C. C. 2018b. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. In Leal-Taixé, L.; and Roth, S., eds., Computer Vision - ECCV 2018 Workshops - Munich, Germany, September 8-14, 2018, Proceedings, Part V, volume 11133 of Lecture Notes in Computer Science, 63–79. Springer.
Wang, Y.; Yu, J.; and Zhang, J. 2023. Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model. In The Eleventh International Conference on Learning Representations (ICLR).
Wang, Z.; Bovik, A. C.; Sheikh, H. R.; and Simoncelli, E. P. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process., 13(4): 600–612.
Whang, J.; Delbracio, M.; Talebi, H.; Saharia, C.; Dimakis, A. G.; and Milanfar, P. 2022. Deblurring via Stochastic Refinement. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, 16272–16282. IEEE.
Zhang, D.; Huang, F.; Liu, S.; Wang, X.; and Jin, Z. 2022. SwinFIR: Revisiting the SwinIR with Fast Fourier Convolution and Improved Training for Image Super-Resolution. CoRR, abs/2208.11247.
Zhang, W.; Liu, Y.; Dong, C.; and Qiao, Y. 2019. RankSRGAN: Generative Adversarial Networks With Ranker for Image Super-Resolution. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, 3096–3105. IEEE.
An Attentive Inductive Bias for Sequential Recommendation beyond the Self-Attention

Yehjin Shin*, Jeongwhan Choi*, Hyowon Wi, Noseong Park
Yonsei University, Seoul, South Korea
{yehjin.shin, jeongwhan.choi, wihyowon, noseong}@yonsei.ac.kr
*These authors contributed equally.

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract
Sequential recommendation (SR) models based on Transformers have achieved remarkable successes. The self-attention mechanism of Transformers for computer vision and natural language processing suffers from the oversmoothing problem, i.e., hidden representations becoming similar across tokens. In the SR domain, we, for the first time, show that the same problem occurs. We present pioneering investigations that reveal the low-pass filtering nature of self-attention in SR, which causes oversmoothing. To this end, we propose a novel method called Beyond Self-Attention for Sequential Recommendation (BSARec), which leverages the Fourier transform to i) inject an inductive bias by considering fine-grained sequential patterns and ii) integrate low- and high-frequency information to mitigate oversmoothing. Our discovery shows significant advancements in the SR domain and is expected to bridge the gap for existing Transformer-based SR models. We test our proposed approach through extensive experiments on 6 benchmark datasets. The experimental results demonstrate that our model outperforms 7 baseline methods in terms of recommendation performance. Our code is available at https://github.com/yehjin-shin/BSARec.

Figure 1: Illustration of high- and low-frequency signals in SR. A user u1's long-term persistent interests and tastes constitute low frequencies in the Fourier domain of the item embedding, and abrupt short-term changes in u1's interests correspond to high frequencies.

| Method | Inductive Bias | Self-Attention | High-pass Filter |
|---|---|---|---|
| SASRec | ✗ | ✓ | ✗ |
| BERT4Rec | ✗ | ✓ | ✗ |
| FMLPRec | ✓ | ✗ | ✗ |
| DuoRec | ✗ | ✓ | ✗ |
| BSARec | ✓ | ✓ | ✓ |

Table 1: Comparison of existing Transformer-based methods, which differ at three points: i) using inductive bias, ii) using self-attention, and iii) using high-pass filters.

Introduction
Recommender systems play a vital role in web applications, delivering personalized item recommendations by analyzing user-item interactions (Ying et al. 2018; Lee, Kim, and Lee 2018; He et al. 2020; Choi, Jeon, and Park 2021; Kong et al. 2022; Hong et al. 2022; Choi et al. 2023a,d; Gao et al. 2023). As users' preferences evolve over time, capturing temporal user behavior becomes essential. This is where SR steps in, attracting substantial research attention (Hidasi et al. 2016; Wu et al. 2022; Gao et al. 2023; Tang and Wang 2018; Kang and McAuley 2018; Chen et al. 2019; Schedl et al. 2018; Hansen et al. 2020; Jiang et al. 2016; Huang et al. 2018).

With the increasing popularity of sequential recommendation (SR) systems, Transformer-based models, especially those utilizing self-attention (Vaswani et al. 2017), have emerged as dominant approaches for providing accurate and personalized recommendations to users (Kang and McAuley 2018; Sun et al. 2019; Li, Wang, and McAuley 2020; Wu et al. 2020; Wu, Cai, and Wang 2020). However, despite their successes in SR, Transformer-based models possess inherent limitations that confine them to the learned self-attention matrix.
The following two key limitations need to be addressed: i) First, the models may still suffer from suboptimal performance due to the insufficient inductive bias inherent in processing sequences with self-attention (Dosovitskiy et al. 2020). While the self-attention mechanism captures long-range dependencies, it may fail to adequately consider certain fine-grained sequential patterns and may overfit the training data, leading to potentially weak generalization. As Table 1 shows, SASRec (Kang and McAuley 2018), BERT4Rec (Sun et al. 2019), and DuoRec (Qiu et al. 2022) rely on training the self-attention layer and lack inductive bias¹. ii) The second limitation pertains to the low-pass filtering nature of self-attention. By focusing on the entire range of the data, self-attention may unintentionally smooth out important and detailed patterns in the embedding, resulting in the oversmoothing problem. The oversmoothing issue poses a significant challenge in the SR domain, as it may hinder the model's ability to capture crucial temporal dynamics and provide accurate predictions. In Table 1, most Transformer-based models are limited to low-pass filters; these models do not consider high-pass filters. Note that FMLPRec (Zhou et al. 2022) attempts to learn a filter, but it tends to gravitate towards a low-pass filter (cf. Fig. 2 (b)). As shown in Fig. 1, a low-pass filter only captures the ongoing preferences of the user, i.e., an Apple fanatic, and it may be difficult to capture preferences based on new interests or trends (e.g., a snorkel mask to buy for vacation). When recommending items for the next time step (|S^{u1}|+1), it is easy to recommend items matching long-term interests, but recommending short-term interests is a challenging task.

In this paper, we address these two limitations and present Beyond Self-Attention for Sequential Recommendation (BSARec), a novel model that uses an inductive bias via the Fourier transform together with self-attention. By using the Fourier transform, BSARec gains access to the inductive bias of frequency information, enabling the capture of essential patterns and periodicity that may be overlooked by self-attention alone. This enhances the inductive bias and has the potential to improve recommendation performance. To tackle the oversmoothing issue, we introduce our own frequency rescaler to apply high-pass filters in BSARec's architecture. Our frequency rescaler can capture high-frequency behavioral patterns, such as interests driven by short-term trends, as well as low-frequency patterns, such as long-term interests, in a user's behavior (cf. Fig. 1). Additionally, our method provides a perspective on improving the performance of SR models and solving the oversmoothing problem.

To evaluate the efficacy of BSARec, we conduct extensive experiments on 6 benchmark datasets. Our experimental results demonstrate that BSARec consistently outperforms 7 baseline methods regarding recommendation performance. Additionally, we conduct a series of experiments that underscore the necessity of our approach and verify its effectiveness in mitigating the oversmoothing problem, leading to improved recommendation accuracy and enhanced generalization capabilities. The contributions of this work are as follows:
• We unveil the low-pass filtering nature of the self-attention of Transformer-based SR models, which results in the problem of oversmoothing.
• We propose a novel model, Beyond Self-Attention for Sequential Recommendation (BSARec), that leverages the Fourier transform to balance between our inductive bias¹ and self-attention. Further, we design a rescaler for high-pass filters to mitigate the oversmoothing issue.
• Extensive evaluation on 6 benchmark datasets demonstrates BSARec's outperformance over 7 baseline methods, validating its effectiveness in improving recommendation performance.

¹We mean by inductive bias a pre-determined attention structure that is not trained but injected by us when designing our model. Therefore, we call it an attentive inductive bias.

Preliminaries
Problem Formulation
The goal of SR is to predict the user's next interaction with an item given their historical interaction sequence. Given a set of users U and items V, we can sort the interacted items of each user u ∈ U chronologically into a sequence S^u = [v^u_1, v^u_2, ..., v^u_{|S^u|}], where v^u_i denotes the i-th interacted item in the sequence. The aim is to recommend a top-k list of items as potential next items in a sequence. Formally, we predict p(v^u_{|S^u|+1} = v | S^u).

Self-Attention for Sequential Recommendation
The basic idea behind the self-attention mechanism is that elements within sequences are correlated but hold varying levels of significance depending on their positions in the sequence. Self-attention uses dot products between items in the sequence to infer their correlations, defined as:

A = softmax(QK^T / √d),  (1)

where Q = E_{S^u} W_Q, K = E_{S^u} W_K, and d is the scale factor. The scaled dot-product component learns the latent correlations between items. The other components of the Transformer are utilized in SASRec, including the point-wise feed-forward network, residual connection, and layer normalization. Our method uses this self-attention matrix and adds an inductive bias to find the trade-off between the two.

Discrete vs. Graph Fourier Transform
This subsection introduces the concepts of the frequency domain and the Fourier transform, providing a foundation for the proposed method. The Discrete Fourier Transform (DFT) is a linchpin of digital signal processing (DSP), projecting a sequence of values into the frequency domain (or the Fourier domain). We use F : R^N → C^N to denote the DFT, with the Inverse DFT (IDFT) F^{-1} : C^N → R^N. Applying F to a signal is equal to multiplying it from the left by a DFT matrix. The rows of this matrix consist of the Fourier basis f_j = [e^{2πi(j−1)·0}, ..., e^{2πi(j−1)(N−1)}]^T / √N ∈ R^N, where i is the imaginary unit and j denotes the j-th row. Let the spectrum of x be x̄ = Fx. We define x̄_lfc ∈ C^c as the vector containing the c lowest elements of x̄, and x̄_hfc ∈ C^{N−c} as the vector containing the remaining elements. The low-frequency components (LFC) of the sequence signal x are defined as:

LFC[x] = [f_1, f_2, ..., f_c] x̄_lfc ∈ R^N.  (2)

Figure 2: (a) A ring graph with N nodes, and (b) visualization of the filters of the self-attentions in LastFM (normalized magnitude over frequency for SASRec, DuoRec, FMLPRec, BSARec(A_IB), and BSARec(A)).

Figure 3: Visualization of oversmoothing in LastFM: the singular values and cosine similarity of the user sequence output embedding (for SASRec, DuoRec, FMLPRec, and BSARec).
Conversely, the high-frequency components (HFC) are:

HFC[x] = [f_{c+1}, f_{c+2}, ..., f_N] x̄_hfc ∈ R^N.  (3)

Note that we use the real-valued DFT, and multiplying with the Fourier bases in Eqs. 2 and 3 amounts to the IDFT. For more details, interested readers should refer to the Appendix (Shin et al. 2023).

The Graph Fourier Transform (GFT) can be considered a generalization of the DFT to graphs. In other words, the DFT is a special case of the GFT in which a ring graph of N nodes is used (see Fig. 2 (a)) (Sandryhaila and Moura 2014). In fact, the DFT is a method to project a sequence of values onto the eigenspace of the Laplacian matrix of the ring graph (which is the same as the Fourier domain). The frequency concept can also be described with the ring graph: the number of neighboring nodes with different signs on their signals corresponds to the frequency. Therefore, low-frequency information means a series of signals over the N nodes whose signs do not change often. In the case of SR in our work, where the N nodes are N item embeddings, such low-frequency information means a long-standing interest of a user (see Fig. 1).

Motivation
In this section, we show that self-attention in the spectral domain is a low-pass filter that continuously erases high-frequency information. We visualize the spectrum of the self-attention of Transformer-based sequential models in Fig. 2 (b). The spectrum is concentrated in the low-frequency region, which reveals that self-attention is a low-pass filter. We further provide a theoretical justification for this.

Theorem 1 (Self-attention is a low-pass filter). Let A = softmax(QK^T / √d). Then A inherently acts as a low-pass filter. In other words, for all x ∈ R^N, lim_{t→∞} ‖HFC[A^t(x)]‖₂ / ‖LFC[A^t(x)]‖₂ = 0.

Theorem 1 is ensured by the Perron-Frobenius theorem (Meyer and Stewart 2023; He and Wai 2021), revealing that the attention matrix is a low-pass filter regardless of the input key and query matrices. A proof of Theorem 1 and the formal definition of a low-pass filter are provided in the Appendix (Shin et al. 2023). If the self-attention matrix is applied successively, the final output loses all feature expressiveness as the number of layers increases to infinity. Therefore, self-attention causes the oversmoothing problem, whereby Transformer-based SR models lose feature representation in deep layers (see Fig. 3). As the empirical analysis in Fig. 3 shows, as the number of layers of these models increases, the cosine similarity increases and the singular values tend to decay rapidly² (Fan et al. 2023). This inevitably causes the model to fail to capture the user's detailed preferences, and performance degradation is a natural result. Against this background, we not only alleviate oversmoothing using a high-pass filter, but also try to capture the short-term preferences in user behavior patterns through an inductive bias.

²This indicates that the largest singular value predominates while the others are much smaller, and there is a potential risk of losing embedding rank.
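The decomposition in Eqs. (2)-(3) and the statement of Theorem 1 can both be checked numerically. Below is a small sketch; the 1D real-valued FFT along the sequence axis and the random attention matrix are illustrative choices, not the paper's exact setup.

```python
import torch

def lfc_hfc(x, c):
    """Split x (N, D) into low- and high-frequency components along the
    sequence axis, keeping the c lowest frequencies (cf. Eqs. (2)-(3))."""
    x_f = torch.fft.rfft(x, dim=0)          # real-valued DFT
    low = x_f.clone(); low[c:] = 0          # keep only the c lowest bins
    n = x.size(0)
    return (torch.fft.irfft(low, n=n, dim=0),
            torch.fft.irfft(x_f - low, n=n, dim=0))

# empirical check of Theorem 1: repeated application of a softmax
# attention matrix drives the HFC/LFC energy ratio toward zero
torch.manual_seed(0)
n, d, c = 50, 16, 5
x = torch.randn(n, d)
attn = torch.softmax(torch.randn(n, n) / d ** 0.5, dim=-1)
for t in [1, 4, 16]:
    y = torch.linalg.matrix_power(attn, t) @ x
    lfc, hfc = lfc_hfc(y, c)
    print(t, (hfc.norm() / lfc.norm()).item())   # ratio shrinks as t grows
```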
Proposed Method
Here, we give an overview of BSARec, the method behind it, and its relation to previous models.

Embedding Layer
Given a user's action sequence S^u and the maximum sequence length N, the sequence is first truncated by removing the earliest items if |S^u| > N, or padded with 0s to obtain a fixed-length sequence s = (s_1, s_2, ..., s_N). With an item embedding matrix M ∈ R^{|V|×D}, we define the embedding representation E^u of the sequence, where D is the latent dimension size and E^u_i = M_{s_i}. To make our model sensitive to the positions of items, we adopt a positional embedding to inject additional positional information while maintaining the same embedding dimension as the item embedding. A trainable positional embedding P ∈ R^{N×D} is added to the sequentially ordered item embedding matrix E^u. Moreover, dropout and layer normalization are also applied:

E^u = Dropout(LayerNorm(E^u + P)).  (4)

Beyond Self-Attention Encoder
We develop item encoders by stacking beyond self-attention (BSA) blocks on top of the embedding layer. Each block consists of 3 modules (see Fig. 4): the BSA layer, the attentive inductive bias with frequency rescaler, and the feed-forward network.

Figure 4: Architecture of our proposed BSARec. We propose a BSA encoder that uses both an inductive bias with a frequency rescaler and the original self-attention.

Beyond Self-Attention Layer Let Ã^ℓ be a beyond self-attention (BSA) matrix, A^ℓ_IB be a rescaled filter matrix for the ℓ-th layer, and X^ℓ be the input to the ℓ-th layer. When ℓ = 0, we set X^0 = E^u. We use the following BSA layer:

S^ℓ = Ã^ℓ X^ℓ = α A^ℓ_IB X^ℓ + (1 − α) A^ℓ X^ℓ,  (5)

where the first term corresponds to DSP, in which the discrete Fourier transform is utilized, and α ≤ 1 is a coefficient to (de)emphasize the inductive bias. Our main design point is therefore to trade off between the verified inductive bias and the trainable self-attention. For the multi-head version used in BSARec, the multi-head self-attention (MSA) is defined as:

X̂^ℓ = MSA(X^ℓ) = [S_1, S_2, ..., S_h] W^O,  (6)

where h is the number of heads and the projection matrix W^O ∈ R^{D×D} is a learnable parameter.

Attentive Inductive Bias with Frequency Rescaler We propose a filter that injects the attentive inductive bias and, at the same time, adjusts the scale of the frequencies by dividing them into low- and high-frequency components:

A^ℓ_IB X^ℓ = LFC[X^ℓ] + β HFC[X^ℓ],  (7)

where β is a trainable parameter to scale the high-pass filter. In particular, β can be either a vector of dimension D or a scalar parameter.

Meaning of our Attentive Inductive Bias We note that the DFT is used in Eq. (7), which assumes the ring graph in Fig. 2 (a); from the perspective of self-attention, this inductive bias says that an item to purchase is influenced by its previous item. This attentive inductive bias does not need to be trained, since we know that it is universally present in SR. However, we do not stop at utilizing the inductive bias in a naïve way but extract its low- and high-frequency information to learn how to optimally mix them in Eq. (7). To be more specific, consider a ring graph of N item embeddings. LFC[·] on them extracts their common signals, which do not change greatly along the ring graph topology, whereas HFC[·] extracts locally fluctuating signals (see Fig. 1). By selectively utilizing the high-pass information, we can prevent the oversmoothing problem (see Fig. 3); relying on LFC[·] only cannot prevent it. In addition, we also learn the self-attention matrix A^ℓ in Eq. (5) and combine it with our attentive inductive bias A^ℓ_IB. By separating A^ℓ from Ã^ℓ, the self-attention mechanism focuses on capturing the non-obvious attentions in A^ℓ.
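A minimal single-head sketch of Eqs. (5)-(7) follows; the multi-head concatenation with W^O in Eq. (6), dropout, and normalization are omitted for brevity, and the class and parameter names are illustrative.

```python
import torch
import torch.nn as nn

class BSALayer(nn.Module):
    """Single-head sketch of the BSA layer, Eqs. (5)-(7)."""
    def __init__(self, dim, alpha=0.7, c=5):
        super().__init__()
        self.alpha, self.c = alpha, c                   # trade-off and cut-off
        self.beta = nn.Parameter(torch.zeros(dim))      # HFC rescaler, Eq. (7)
        self.wq = nn.Linear(dim, dim, bias=False)
        self.wk = nn.Linear(dim, dim, bias=False)

    def forward(self, x):                               # x: (N, D)
        n, d = x.shape
        # attentive inductive bias A_IB X = LFC[X] + beta * HFC[X], Eq. (7)
        x_f = torch.fft.rfft(x, dim=0)
        low = x_f.clone(); low[self.c:] = 0
        lfc = torch.fft.irfft(low, n=n, dim=0)
        hfc = torch.fft.irfft(x_f - low, n=n, dim=0)
        a_ib_x = lfc + self.beta * hfc
        # trainable self-attention A X, cf. Eq. (1)
        q, k = self.wq(x), self.wk(x)
        a_x = torch.softmax(q @ k.T / d ** 0.5, dim=-1) @ x
        return self.alpha * a_ib_x + (1 - self.alpha) * a_x   # Eq. (5)
```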
Point-wise Feed-Forward Network and Layer Outputs
The multi-head attention function is primarily based on linear projections. A point-wise feed-forward network is applied to bring nonlinearity into the self-attention block:

X̃^ℓ = GELU(X̂^ℓ W^ℓ_1 + b^ℓ_1) W^ℓ_2 + b^ℓ_2,  (8)

where W^ℓ_1, W^ℓ_2 ∈ R^{D×D} and b^ℓ_1, b^ℓ_2 ∈ R^D are learnable parameters. The dropout layer, residual connection, and layer normalization operations are applied as follows:

X^{ℓ+1} = LayerNorm(X^ℓ + X̂^ℓ + Dropout(X̃^ℓ)).  (9)

Prediction Layer and Training
In the final layer of BSARec, we calculate the user's preference score for item i based on the user's historical interactions. This score is given by:

ŷ_i = p(v^u_{|S^u|+1} = v | S^u) = e_v^T X^L_{|S^u|},  (10)

where e_v is the representation of item v from M, and X^L_{|S^u|} is the output of the L-layer blocks at step |S^u|. This dot product computes the similarity between the two vectors to give the preference score ŷ_i. The cross-entropy (CE) loss function is usually used in SR, since the next-item prediction task is treated as a classification task over the whole item set (Zhang et al. 2019; Qiu et al. 2022; Du et al. 2023). We adopt the CE loss to optimize the model parameters:

L = −log( exp(ŷ_g) / Σ_{i ∈ |V|} exp(ŷ_i) ),  (11)

where g ∈ |V| is the ground-truth item.
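Eqs. (10)-(11) amount to scoring every item against the final hidden state and applying softmax cross-entropy, as in the short sketch below (our own illustration with hypothetical toy shapes):

```python
import torch
import torch.nn.functional as F

def next_item_loss(seq_output, item_emb, targets):
    """Eqs. (10)-(11): dot-product scores against every item embedding,
    then cross-entropy over the whole item set.
    seq_output: (B, D) final hidden states X^L at step |S^u|
    item_emb:   (|V|, D) item embedding matrix M
    targets:    (B,) indices of the ground-truth next items g"""
    logits = seq_output @ item_emb.T          # \hat{y}_i for every item, Eq. (10)
    return F.cross_entropy(logits, targets)   # Eq. (11)

# hypothetical usage with toy shapes
loss = next_item_loss(torch.randn(4, 64), torch.randn(1000, 64),
                      torch.randint(0, 1000, (4,)))
```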
Relation to Previous Models
Several Transformer-based SR models can be seen as special cases of BSARec; the comparison with existing models is as follows. i) When α is 0, BSARec reduces to SASRec, because the pure self-attention is used as-is. One difference is the loss function: BSARec uses the CE loss, while SASRec uses the BCE loss. Even DuoRec, which extends SASRec with contrastive learning, can be seen as BSARec with α = 0, setting the contrastive learning aside. ii) FMLPRec uses the DFT only, without self-attention. Nevertheless, the most significant difference is that the filter matrix in FMLPRec is itself a learnable matrix. Because of this, FMLPRec's filter is inevitably learned as a low-pass filter, while BSARec uses a filter rescaler to simultaneously use a high-pass filter. iii) Similar to BSARec, FEARec separates low-frequency and high-frequency information in the frequency domain. However, FEARec lets the frequency domain be learned separately before entering its Transformer encoder, whereas BSARec adaptively uses low- and high-frequency information through a frequency rescaler in the step that injects the inductive bias. FEARec is designed with a complex model structure using contrastive learning and frequency normalization; our model shows better performance with a much simpler architecture.

| Datasets | Metric | Caser | GRU4Rec | SASRec | BERT4Rec | FMLPRec | DuoRec | FEARec | BSARec | Improv. |
|---|---|---|---|---|---|---|---|---|---|---|
| Beauty | HR@5 | 0.0125 | 0.0169 | 0.0340 | 0.0469 | 0.0346 | 0.0707 | 0.0706 | 0.0736 | 4.10% |
| | HR@10 | 0.0225 | 0.0304 | 0.0531 | 0.0705 | 0.0559 | 0.0965 | 0.0982 | 0.1008 | 2.65% |
| | HR@20 | 0.0403 | 0.0527 | 0.0823 | 0.1073 | 0.0869 | 0.1313 | 0.1352 | 0.1373 | 1.55% |
| | NDCG@5 | 0.0076 | 0.0104 | 0.0221 | 0.0311 | 0.0222 | 0.0501 | 0.0512 | 0.0523 | 2.15% |
| | NDCG@10 | 0.0108 | 0.0147 | 0.0283 | 0.0387 | 0.0291 | 0.0584 | 0.0601 | 0.0611 | 1.66% |
| | NDCG@20 | 0.0153 | 0.0203 | 0.0356 | 0.0480 | 0.0369 | 0.0671 | 0.0694 | 0.0703 | 1.30% |
| Sports | HR@5 | 0.0091 | 0.0118 | 0.0188 | 0.0275 | 0.0220 | 0.0396 | 0.0411 | 0.0426 | 3.65% |
| | HR@10 | 0.0163 | 0.0187 | 0.0298 | 0.0428 | 0.0336 | 0.0569 | 0.0589 | 0.0612 | 3.90% |
| | HR@20 | 0.0260 | 0.0303 | 0.0459 | 0.0649 | 0.0525 | 0.0791 | 0.0836 | 0.0858 | 2.63% |
| | NDCG@5 | 0.0056 | 0.0079 | 0.0124 | 0.0180 | 0.0146 | 0.0276 | 0.0286 | 0.0300 | 4.90% |
| | NDCG@10 | 0.0080 | 0.0101 | 0.0159 | 0.0229 | 0.0183 | 0.0331 | 0.0343 | 0.0360 | 4.96% |
| | NDCG@20 | 0.0104 | 0.0131 | 0.0200 | 0.0284 | 0.0231 | 0.0387 | 0.0405 | 0.0422 | 4.20% |
| Toys | HR@5 | 0.0095 | 0.0121 | 0.0440 | 0.0412 | 0.0432 | 0.0770 | 0.0783 | 0.0805 | 2.81% |
| | HR@10 | 0.0161 | 0.0211 | 0.0652 | 0.0635 | 0.0671 | 0.1034 | 0.1054 | 0.1081 | 2.56% |
| | HR@20 | 0.0268 | 0.0348 | 0.0929 | 0.0939 | 0.0974 | 0.1369 | 0.1397 | 0.1435 | 2.72% |
| | NDCG@5 | 0.0058 | 0.0077 | 0.0297 | 0.0282 | 0.0288 | 0.0568 | 0.0574 | 0.0589 | 2.61% |
| | NDCG@10 | 0.0079 | 0.0106 | 0.0366 | 0.0353 | 0.0365 | 0.0653 | 0.0661 | 0.0679 | 2.72% |
| | NDCG@20 | 0.0106 | 0.0140 | 0.0435 | 0.0430 | 0.0441 | 0.0737 | 0.0747 | 0.0768 | 2.81% |
| Yelp | HR@5 | 0.0117 | 0.0130 | 0.0149 | 0.0256 | 0.0159 | 0.0271 | 0.0262 | 0.0275 | 1.48% |
| | HR@10 | 0.0197 | 0.0221 | 0.0249 | 0.0433 | 0.0287 | 0.0442 | 0.0437 | 0.0465 | 5.20% |
| | HR@20 | 0.0337 | 0.0383 | 0.0424 | 0.0717 | 0.0490 | 0.0717 | 0.0691 | 0.0746 | 4.04% |
| | NDCG@5 | 0.0070 | 0.0080 | 0.0091 | 0.0159 | 0.0100 | 0.0170 | 0.0165 | 0.0170 | 0.00% |
| | NDCG@10 | 0.0096 | 0.0109 | 0.0123 | 0.0216 | 0.0142 | 0.0225 | 0.0221 | 0.0231 | 2.67% |
| | NDCG@20 | 0.0131 | 0.0150 | 0.0167 | 0.0287 | 0.0192 | 0.0294 | 0.0285 | 0.0302 | 2.72% |
| LastFM | HR@5 | 0.0303 | 0.0312 | 0.0413 | 0.0294 | 0.0367 | 0.0431 | 0.0431 | 0.0523 | 21.35% |
| | HR@10 | 0.0431 | 0.0404 | 0.0633 | 0.0459 | 0.0560 | 0.0624 | 0.0587 | 0.0807 | 27.49% |
| | HR@20 | 0.0642 | 0.0541 | 0.0927 | 0.0596 | 0.0826 | 0.0963 | 0.0826 | 0.1174 | 21.91% |
| | NDCG@5 | 0.0227 | 0.0217 | 0.0284 | 0.0198 | 0.0243 | 0.0300 | 0.0304 | 0.0344 | 13.16% |
| | NDCG@10 | 0.0268 | 0.0245 | 0.0355 | 0.0252 | 0.0306 | 0.0361 | 0.0354 | 0.0435 | 20.50% |
| | NDCG@20 | 0.0321 | 0.0280 | 0.0429 | 0.0286 | 0.0372 | 0.0446 | 0.0414 | 0.0526 | 17.94% |
| ML-1M | HR@5 | 0.0927 | 0.1005 | 0.1374 | 0.1512 | 0.1316 | 0.1838 | 0.1834 | 0.1944 | 5.77% |
| | HR@10 | 0.1556 | 0.1657 | 0.2137 | 0.2346 | 0.2065 | 0.2704 | 0.2705 | 0.2757 | 1.92% |
| | HR@20 | 0.2488 | 0.2664 | 0.3245 | 0.3440 | 0.3137 | 0.3738 | 0.3714 | 0.3884 | 3.91% |
| | NDCG@5 | 0.0592 | 0.0619 | 0.0873 | 0.1021 | 0.0846 | 0.1252 | 0.1236 | 0.1306 | 4.31% |
| | NDCG@10 | 0.0795 | 0.0828 | 0.1116 | 0.1289 | 0.1087 | 0.1530 | 0.1516 | 0.1568 | 2.48% |
| | NDCG@20 | 0.1028 | 0.1081 | 0.1395 | 0.1564 | 0.1356 | 0.1790 | 0.1771 | 0.1851 | 3.41% |

Table 2: Performance comparison of different methods on 6 datasets. The best results are in boldface and the second-best results are underlined. 'Improv.' indicates the relative improvement over the best baseline performance.
Experiments
Experimental Setup
Datasets We evaluate our model on 6 SR datasets whose sparsity and domains vary: i, ii, iii) Amazon Beauty, Sports, and Toys (McAuley et al. 2015), iv) Yelp, v) ML-1M (Harper and Konstan 2015), and vi) LastFM. We follow the data pre-processing procedure of Zhou et al. (2020, 2022), where all reviews and ratings are regarded as implicit feedback. Detailed dataset statistics are presented in the Appendix (Shin et al. 2023).

Baselines To verify the effectiveness of our model, we compare it with well-known SR baselines from three categories:
• RNN- or CNN-based sequential models: GRU4Rec (Hidasi et al. 2016) and Caser (Tang and Wang 2018).
• Transformer-based sequential models: SASRec (Kang and McAuley 2018), BERT4Rec (Sun et al. 2019), and FMLPRec (Zhou et al. 2022).
• Transformer-based sequential models with contrastive learning: DuoRec (Qiu et al. 2022) and FEARec (Du et al. 2023).

Implementation Details Our method is implemented in PyTorch on an NVIDIA RTX 3090 with 16 GB memory. We search the best hyperparameters for the baselines based on their recommended values. We conduct experiments under the following hyperparameters: the coefficient α is in {0.1, 0.3, 0.5, 0.7, 0.9}, and c is chosen from {1, 3, 5, 7, 9}. The number of BSA blocks L is set to 2, and the number of heads h in the Transformer is in {1, 2, 4}. The dimension D is set to 64, and the maximum sequence length N is set to 50. For training, the Adam optimizer is used with a learning rate in {5 × 10⁻⁴, 1 × 10⁻³}, and the batch size is set to 256. The best hyperparameters are listed in the Appendix (Shin et al. 2023) for reproducibility.

Metrics To measure recommendation accuracy, we use the widely adopted top-k metrics HR@k (Hit Rate) and NDCG@k (Normalized Discounted Cumulative Gain) to evaluate the recommended lists, where k is set to 5, 10, and 20. To ensure a fair and comprehensive comparison, we analyze the ranking results across the full item set without negative sampling (Krichene and Rendle 2020).
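For reference, HR@k and NDCG@k under this full-ranking protocol, with a single ground-truth item per sequence, can be computed as in the sketch below (our own implementation, not the authors'):

```python
import torch

def hr_ndcg_at_k(scores, target, k=20):
    """HR@k and NDCG@k under full ranking (no negative sampling), with a
    single ground-truth item per sequence, so the ideal DCG is 1.
    scores: (B, |V|) preference scores; target: (B,) ground-truth indices."""
    topk = scores.topk(k, dim=-1).indices            # (B, k) ranked item ids
    hits = topk == target.unsqueeze(-1)              # (B, k) boolean matches
    hit = hits.any(dim=-1).float()                   # 1 if the item is in top-k
    rank = hits.float().argmax(dim=-1)               # 0-based rank of the hit
    ndcg = hit / torch.log2(rank.float() + 2.0)      # 0 whenever hit == 0
    return hit.mean().item(), ndcg.mean().item()
```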
Ablation, Sensitivity, and Additional Studies

Ablation Studies. As ablation models, we define: i) a model with only the self-attention term, i.e., A; ii) a model with only the attentive inductive bias term, i.e., AIB in Eq. 5; and iii) a model that uses β as a single scalar parameter (a code sketch of these variants is given at the end of this subsection).

Methods     Beauty              Toys
            HR@20    NDCG@20    HR@20    NDCG@20
BSARec      0.1373   0.0703     0.1435   0.0768
Only A      0.1265   0.0657     0.1320   0.0720
Only AIB    0.1338   0.0677     0.1402   0.0744
Scalar β    0.1333   0.0685     0.1435   0.0756

Table 3: Ablation studies on Ã and β. More results on other datasets are in Appendix (Shin et al. 2023).

For Beauty and Toys, the ablation model with only AIB outperforms the one with only A (e.g., HR@20 on Beauty: 0.1338 by AIB versus 0.1265 by A). However, BSARec, which utilizes both, outperforms them all. This shows that both terms are required to achieve the best accuracy.

Sensitivity to α. Fig. 5 shows NDCG@20 and HR@20 as α varies. For Beauty, we find that BSARec prefers a larger value of α. For ML-1M, the best accuracy is achieved with α = 0.3. These results indicate that the trade-off between the self-attention matrix and the inductive bias differs across datasets.

[Figure 5: Sensitivity to α. HR@20 and NDCG@20 on (a) Beauty and (b) ML-1M. More results on other datasets are in Appendix (Shin et al. 2023).]

Sensitivity to c. Fig. 6 shows NDCG@20 and HR@20 as c varies. For Beauty, the best accuracy is achieved when c is 5. For ML-1M, performance improves as c grows larger.

[Figure 6: Sensitivity to c. HR@20 and NDCG@20 on (a) Beauty and (b) ML-1M. More results on other datasets are in Appendix (Shin et al. 2023).]

Visualization of Learned β. In Fig. 7 (a), we show the learned β at each layer for all datasets. A higher weight is learned in the first layer than in the second, which confirms that putting more weight on high-frequency information in the first layer is effective. In particular, LastFM and Beauty show higher β weights than the other datasets.

Case Study. We present a case study from our experiment. In Fig. 7 (b), we analyze one of the heavy users in LastFM. User u322 consistently listens to artists mainly from the rock genre. The other models cannot capture the sudden change in u322's interactions at the next step; only BSARec correctly recommends an artist from the pop genre as the next artist u322 will listen to. This shows that BSARec can capture high-frequency signals, i.e., abrupt changes in user preference.

[Figure 7: (a) Visualization of learned β, and (b) an example recommendation in LastFM. The y-axis represents the genre of the artist the user listened to.]
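To make the ablation variants above concrete, the sketch below paraphrases a BSA block whose output mixes a self-attention term and an attentive inductive bias term, with toggles corresponding to the "Only A" and "Only AIB" models. Since the exact form of Eq. 5 is not restated here, the convex combination weighted by α is our assumption; `inductive_bias` is a placeholder for the AIB term (e.g., the frequency rescaler sketched earlier), and causal masking is omitted for brevity.

```python
import torch


def bsa_layer(x, w_qkv, alpha, inductive_bias,
              use_attention=True, use_aib=True):
    """Rough paraphrase of a BSA block for ablation purposes.

    x:              (batch, seq_len, dim) input representations
    w_qkv:          linear map producing queries, keys, and values
    alpha:          assumed trade-off between the two terms
    inductive_bias: callable implementing the AIB term
    """
    q, k, v = w_qkv(x).chunk(3, dim=-1)

    if use_attention:
        # Scaled dot-product attention (causal masking omitted for brevity).
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5,
                             dim=-1)
        attn_out = attn @ v
    if use_aib:
        aib_out = inductive_bias(v)

    if use_attention and use_aib:
        # Assumed Eq. 5-style mix of the two terms.
        return alpha * aib_out + (1 - alpha) * attn_out
    return attn_out if use_attention else aib_out


# Toy usage with an identity stand-in for the AIB term.
lin = torch.nn.Linear(64, 3 * 64)
x = torch.randn(4, 50, 64)
out = bsa_layer(x, lin, alpha=0.7, inductive_bias=lambda v: v)
print(out.shape)  # torch.Size([4, 50, 64])
```

With use_aib=False the layer reduces to vanilla self-attention ("Only A"), and with use_attention=False it reduces to the inductive-bias term alone ("Only AIB").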
Model Complexity and Runtime Analyses

To evaluate the overhead of BSARec, we measure the number of parameters and the runtime per epoch during training.

Methods    Beauty                ML-1M
           # params   s/epoch    # params   s/epoch
BSARec     878,208    12.75      322,368    20.73
SASRec     877,824    10.41      321,984    19.37
DuoRec     877,824    19.26      321,984    32.33
FEARec     877,824    156.83     321,984    278.24

Table 4: The number of parameters and training time (runtime per epoch) on Beauty and ML-1M. More results on other datasets are in Appendix (Shin et al. 2023).

The results are shown in Table 4. Overall, BSARec increases the total parameter count only marginally, and it trains considerably faster than FEARec and DuoRec, which use contrastive learning. On ML-1M, BSARec is 7.02% slower than SASRec, but this overhead is minor relative to the performance gains.

Related Work

Sequential Recommendation. In SR, the primary objective is to recommend the next item based on the sequential patterns inherent in the user's historical interactions. Early approaches leverage Markov chains to model transitions between items: FPMC (Rendle, Freudenthaler, and Schmidt-Thieme 2010) incorporates Markov chains to capture item-item transitions, and Fossil (He and McAuley 2016) extends this approach to higher-order item transitions, improving its predictive capability. Another avenue of SR leverages convolutional neural networks for sequence modeling, as seen in Caser (Tang and Wang 2018), which treats the embedding matrix of the items in a sequence as an image and applies convolution operators to capture local item-item interactions effectively. Advances in deep neural networks have also led to the adoption of RNNs and self-attention: GRU4Rec (Hidasi et al. 2016) employs GRUs, while the success of the Transformer (Vaswani et al. 2017) has further motivated researchers to explore the potential of self-attention in SR. Notably, SASRec (Kang and McAuley 2018) and BERT4Rec (Sun et al. 2019) demonstrate the efficacy of self-attention, and SR models building on these ideas are actively studied (Qiu et al. 2022; Zhou et al. 2022; Du et al. 2023; Lin et al. 2023; Zhou et al. 2023; Yue et al. 2023; Liu et al. 2023; Jiang et al. 2023). Recently, contrastive learning has been used as an aid to improve SR performance. DuoRec (Qiu et al. 2022) uses unsupervised model-level augmentation and supervised semantic positive samples for contrastive learning. FMLPRec (Zhou et al. 2022) proposes a filter-enhanced MLP that applies a global filter to eliminate frequency-domain noise; however, the global filter tends to assign more significance to lower frequencies while undervaluing relatively higher ones. FEARec (Du et al. 2023) is a contrastive learning-based model that uses time-domain attention and autocorrelation. The AC-TSR framework (Zhou et al. 2023) calibrates unreliable attention weights generated by existing Transformer-based SR models. AdaMCT (Jiang et al. 2023), which appeared concurrently with our work, incorporates locality-induced bias into the Transformer using a local convolutional filter.

Oversmoothing and Transformers. The concept of oversmoothing was first presented by Li, Han, and Wu (2018) in graph research. Intuitively, node representations converge to a constant after repeatedly exchanging messages with neighbors as the number of graph neural network layers goes to infinity, and resolving this problem is an active research topic (Rusch et al. 2022; Choi et al. 2023b). A parallel phenomenon is observed in Transformers. Early work empirically attributes it to attention collapse or patch/token uniformity (Zhou et al. 2021; Gong et al. 2021), and Dong, Cordonnier, and Loukas (2021) reveal that the output of pure attention converges to a rank-1 matrix. There have been several attempts in computer vision to solve this problem (Wang et al. 2022; Guo et al. 2023; Choi et al. 2023c), but in SR there is only one study, which mitigates fast singular-value decay (Fan et al. 2023).
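The rank-1 collapse of pure attention mentioned above can be reproduced numerically in a few lines. The following toy demonstration, written here for illustration, repeatedly applies random row-stochastic "attention" matrices with no residuals or MLPs, matching the pure-attention setting analyzed by Dong, Cordonnier, and Loukas (2021), and tracks how quickly token representations collapse onto their mean:

```python
import torch

torch.manual_seed(0)
n, d = 50, 64
x = torch.randn(n, d)  # token representations (one row per token)

for layer in range(12):
    # A fresh random row-stochastic "attention" matrix per layer.
    attn = torch.softmax(torch.randn(n, n), dim=-1)
    x = attn @ x  # pure attention: no residual connection, no MLP
    # Token-wise spread: distance of each token from the mean token,
    # relative to the overall norm. It decaying toward 0 means the rows
    # of x become nearly identical, i.e., x approaches a rank-1
    # (oversmoothed) matrix.
    spread = ((x - x.mean(dim=0)).norm() / x.norm()).item()
    print(f"layer {layer + 1:2d}: relative spread = {spread:.4f}")
```

Because each softmax row is strictly positive and sums to one, every layer averages tokens toward a consensus, so the printed spread shrinks layer by layer.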
Conclusion

This paper delves into sequential recommendation (SR) built upon Transformers, an avenue that has garnered substantial success and popularity. Self-attention in Transformers encounters limitations stemming from an insufficient inductive bias and its low-pass filtering property, and we reveal that this low-pass filter also causes oversmoothing in SR. To address this, we introduce BSARec, which combines an attentive inductive bias with vanilla self-attention and integrates low and high frequencies to mitigate oversmoothing. By understanding and addressing the limitations of self-attention, BSARec significantly advances SR: our model surpasses 7 baseline methods across 6 datasets in recommendation performance. In future work, we aim to delve deeper into the frequency dynamics of self-attention for SR.

Acknowledgements

Noseong Park is the corresponding author. This work was supported by an IITP grant funded by the Korean government (MSIT) (No.2020-0-01361, Artificial Intelligence Graduate School Program (Yonsei University); No.2022-000113, Developing a Sustainable Collaborative Multi-modal Lifelong Learning Framework).

References

Chen, Q.; Zhao, H.; Li, W.; Huang, P.; and Ou, W. 2019. Behavior sequence transformer for e-commerce recommendation in Alibaba. In Proceedings of the 1st International Workshop on Deep Learning Practice for High-dimensional Sparse Data, 1–4.
Choi, J.; Hong, S.; Park, N.; and Cho, S.-B. 2023a. Blurring-Sharpening Process Models for Collaborative Filtering. In SIGIR.
Choi, J.; Hong, S.; Park, N.; and Cho, S.-B. 2023b. GREAD: Graph Neural Reaction-Diffusion Networks. In ICML.
Choi, J.; Jeon, J.; and Park, N. 2021. LT-OCF: Learnable-Time ODE-based Collaborative Filtering. In CIKM.
Choi, J.; Wi, H.; Kim, J.; Shin, Y.; Lee, K.; Trask, N.; and Park, N. 2023c. Graph Convolutions Enrich the Self-Attention in Transformers! arXiv preprint arXiv:2312.04234.
Choi, J.; Wi, H.; Lee, C.; Cho, S.-B.; Lee, D.; and Park, N. 2023d. RDGCL: Reaction-Diffusion Graph Contrastive Learning for Recommendation. arXiv preprint arXiv:2312.16563.
Dong, Y.; Cordonnier, J.-B.; and Loukas, A. 2021. Attention is not all you need: Pure attention loses rank doubly exponentially with depth. In ICML, 2793–2803. PMLR.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR.
Du, X.; Yuan, H.; Zhao, P.; Qu, J.; Zhuang, F.; Liu, G.; Liu, Y.; and Sheng, V. S. 2023. Frequency Enhanced Hybrid Attention Network for Sequential Recommendation.
In SIGIR, 78–88.
Fan, Z.; Liu, Z.; Peng, H.; and Yu, P. S. 2023. Addressing the Rank Degeneration in Sequential Recommendation via Singular Spectrum Smoothing. arXiv preprint arXiv:2306.11986.
Gao, C.; Zheng, Y.; Li, N.; Li, Y.; Qin, Y.; Piao, J.; Quan, Y.; Chang, J.; Jin, D.; He, X.; et al. 2023. A survey of graph neural networks for recommender systems: Challenges, methods, and directions. ACM Transactions on Recommender Systems, 1(1): 1–51.
Gong, C.; Wang, D.; Li, M.; Chandra, V.; and Liu, Q. 2021. Vision transformers with patch diversification. arXiv preprint arXiv:2104.12753.
Guo, X.; Wang, Y.; Du, T.; and Wang, Y. 2023. ContraNorm: A contrastive learning perspective on oversmoothing and beyond. In ICLR.
Hansen, C.; Hansen, C.; Maystre, L.; Mehrotra, R.; Brost, B.; Tomasi, F.; and Lalmas, M. 2020. Contextual and sequential user embeddings for large-scale music recommendation. In RecSys, 53–62.
Harper, F. M.; and Konstan, J. A. 2015. The MovieLens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4): 1–19.
He, R.; and McAuley, J. 2016. Fusing similarity models with Markov chains for sparse sequential recommendation. In ICDM, 191–200. IEEE.
He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; and Wang, M. 2020. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. In SIGIR.
He, Y.; and Wai, H.-T. 2021. Identifying first-order low-pass graph signals using Perron-Frobenius theorem. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5285–5289. IEEE.
Hidasi, B.; Karatzoglou, A.; Baltrunas, L.; and Tikk, D. 2016. Session-based recommendations with recurrent neural networks. In ICLR.
Hong, S.; Jo, M.; Kook, S.; Jung, J.; Wi, H.; Park, N.; and Cho, S.-B. 2022. TimeKit: A Time-series Forecasting-based Upgrade Kit for Collaborative Filtering. In 2022 IEEE International Conference on Big Data (Big Data), 565–574. IEEE.
Huang, X.; Qian, S.; Fang, Q.; Sang, J.; and Xu, C. 2018. CSAN: Contextual self-attention network for user sequential recommendation. In ACM MM, 447–455.
Jiang, J.; Zhang, P.; Luo, Y.; Li, C.; Kim, J. B.; Zhang, K.; Wang, S.; Xie, X.; and Kim, S. 2023. AdaMCT: Adaptive mixture of CNN-Transformer for sequential recommendation. In CIKM.
Jiang, S.; Qian, X.; Mei, T.; and Fu, Y. 2016. Personalized travel sequence recommendation on multi-source big social media. IEEE Transactions on Big Data, 2(1): 43–56.
Kang, W.-C.; and McAuley, J. 2018. Self-attentive sequential recommendation. In ICDM, 197–206. IEEE.
Kong, T.; Kim, T.; Jeon, J.; Choi, J.; Lee, Y.-C.; Park, N.; and Kim, S.-W. 2022. Linear, or Non-Linear, That is the Question! In WSDM, 517–525.
Krichene, W.; and Rendle, S. 2020. On sampled metrics for item recommendation. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1748–1757.
Lee, Y.-C.; Kim, S.-W.; and Lee, D. 2018. gOCCF: Graph-theoretic one-class collaborative filtering based on uninteresting items. In AAAI, volume 32.
Li, J.; Wang, Y.; and McAuley, J. 2020. Time interval aware self-attention for sequential recommendation. In WSDM, 322–330.
Li, Q.; Han, Z.; and Wu, X.-M. 2018. Deeper insights into graph convolutional networks for semi-supervised learning. In AAAI.
Lin, G.; Gao, C.; Zheng, Y.; Chang, J.; Niu, Y.; Song, Y.; Gai, K.; Li, Z.; Jin, D.; Li, Y.; et al. 2023. Mixed Attention Network for Cross-domain Sequential Recommendation. arXiv preprint arXiv:2311.08272.
Liu, Q.; Yan, F.; Zhao, X.; Du, Z.; Guo, H.; Tang, R.; and Tian, F. 2023. Diffusion Augmentation for Sequential Recommendation. In CIKM, 1576–1586.
McAuley, J.; Targett, C.; Shi, Q.; and Van Den Hengel, A. 2015. Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, 43–52.
Meyer, C. D.; and Stewart, I. 2023. Matrix Analysis and Applied Linear Algebra. SIAM.
Qiu, R.; Huang, Z.; Yin, H.; and Wang, Z. 2022. Contrastive learning for representation degeneration problem in sequential recommendation. In WSDM, 813–823.
Rendle, S.; Freudenthaler, C.; and Schmidt-Thieme, L. 2010. Factorizing personalized Markov chains for next-basket recommendation. In TheWebConf (former WWW), 811–820.
Rusch, T. K.; Chamberlain, B.; Rowbottom, J.; Mishra, S.; and Bronstein, M. 2022. Graph-Coupled Oscillator Networks. In ICML, volume 162, 18888–18909.
Sandryhaila, A.; and Moura, J. M. 2014. Discrete signal processing on graphs: Frequency analysis. IEEE Transactions on Signal Processing, 62(12): 3042–3054.
Schedl, M.; Zamani, H.; Chen, C.-W.; Deldjoo, Y.; and Elahi, M. 2018. Current challenges and visions in music recommender systems research. International Journal of Multimedia Information Retrieval, 7: 95–116.
Shin, Y.; Choi, J.; Wi, H.; and Park, N. 2023. An Attentive Inductive Bias for Sequential Recommendation Beyond the Self-Attention. arXiv preprint arXiv:2312.10325.
Sun, F.; Liu, J.; Wu, J.; Pei, C.; Lin, X.; Ou, W.; and Jiang, P. 2019. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In CIKM, 1441–1450.
Tang, J.; and Wang, K. 2018. Personalized top-n sequential recommendation via convolutional sequence embedding. In WSDM, 565–573.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In NeurIPS.
Wang, P.; Zheng, W.; Chen, T.; and Wang, Z. 2022. Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice. In ICLR.
Wu, J.; Cai, R.; and Wang, H. 2020. Déjà vu: A contextualized temporal attention mechanism for sequential recommendation. In TheWebConf (former WWW), 2199–2209.
Wu, L.; Li, S.; Hsieh, C.-J.; and Sharpnack, J. 2020. SSE-PT: Sequential recommendation via personalized transformer. In RecSys, 328–337.
Wu, S.; Sun, F.; Zhang, W.; Xie, X.; and Cui, B. 2022. Graph neural networks in recommender systems: a survey. ACM Computing Surveys, 55(5): 1–37.
Ying, R.; He, R.; Chen, K.; Eksombatchai, P.; Hamilton, W. L.; and Leskovec, J. 2018. Graph Convolutional Neural Networks for Web-Scale Recommender Systems. In KDD.
Yue, Z.; Wang, Y.; He, Z.; Zeng, H.; McAuley, J.; and Wang, D. 2023. Linear Recurrent Units for Sequential Recommendation. arXiv preprint arXiv:2310.02367.
Zhang, T.; Zhao, P.; Liu, Y.; Sheng, V. S.; Xu, J.; Wang, D.; Liu, G.; Zhou, X.; et al. 2019. Feature-level Deeper Self-Attention Network for Sequential Recommendation. In IJCAI, 4320–4326.
Zhou, D.; Kang, B.; Jin, X.; Yang, L.; Lian, X.; Jiang, Z.; Hou, Q.; and Feng, J. 2021. DeepViT: Towards deeper vision transformer. arXiv preprint arXiv:2103.11886.
Zhou, K.; Wang, H.; Zhao, W. X.; Zhu, Y.; Wang, S.; Zhang, F.; Wang, Z.; and Wen, J.-R. 2020. S3-Rec: Self-supervised learning for sequential recommendation with mutual information maximization. In CIKM, 1893–1902.
Zhou, K.; Yu, H.; Zhao, W. X.; and Wen, J.-R. 2022. Filter-enhanced MLP is all you need for sequential recommendation. In TheWebConf (former WWW), 2388–2399.
Zhou, P.; Ye, Q.; Xie, Y.; Gao, J.; Wang, S.; Kim, J. B.; You, C.; and Kim, S. 2023. Attention Calibration for Transformer-based Sequential Recommendation. In CIKM, 3595–3605.